
dc.contributor.advisor: Smolic, Aljosa (en)
dc.contributor.author: Gill, Ailbhe (en)
dc.date.accessioned: 2022-11-18T08:49:40Z
dc.date.available: 2022-11-18T08:49:40Z
dc.date.issued: 2022 (en)
dc.date.submitted: 2022 (en)
dc.identifier.citation: Gill, Ailbhe, Modelling Light Field Visual Attention: A Saliency Field Approach, Trinity College Dublin. School of Computer Science & Statistics, 2022 (en)
dc.identifier.other: Y (en)
dc.description: APPROVED (en)
dc.description.abstract: Light field imaging is becoming more accessible, so understanding how people perceive and interact with it will be of immense value. Visual attention has been explored extensively for traditional 2D images; we extend this line of research to light fields. Since light field cameras capture information across all spatio-angular dimensions, the captured data can be rendered by adjusting the plane of focus, changing the aperture, or shifting the viewing angle. To analyse the effect of the focus parameter on visual attention, we use a state-of-the-art Fourier Disparity Layer renderer to generate a database of focally varying light field renderings from which saliency data can be gathered. We then create a corresponding eye-fixation dataset gathered from 21 participants, the first ground-truth dataset of its kind for light field visual attention. Our analysis of the saliency maps generated from the eye-fixation data reveals that light field saliency is of a higher dimension and encompasses 2D saliency. We demonstrate that light field renderings encode additional information compared to regular images, which we exploit to build a 4D saliency field on which we can perform operations analogous to those on the light field itself, yielding saliency maps for any light field rendering. To generate the saliency field, we integrate light field refocusing algorithms with a state-of-the-art deep learning model, creating a hybrid data-driven approach to saliency prediction for light field renderings. Our first model uses a shift-and-sum refocusing algorithm; the second employs the Fourier Disparity Layer method. We evaluate the performance of our saliency prediction models against a baseline and show their efficacy both qualitatively and quantitatively across a variety of metrics.
In this work, we concentrate mainly on validating our predictions for changes in the focus cue, an operation intrinsic to light fields and a challenge for visual attention prediction. However, our models can also generate saliency maps of renderings in which the angle and aperture have been adjusted. Finally, to show the capabilities of our saliency field model as an automated ranking mechanism, we demonstrate a plausible application in optimal focal plane selection for gaze-contingent blur software. In addition, we discuss other possible uses, from AR/VR headset design to compression and quality assessment. We foresee this work stimulating further investigation into how the intrinsic properties of light fields influence visual attention. (en)
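The shift-and-sum refocusing named in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the thesis's code: the function name, the (U, V, H, W) array layout, and the use of integer-pixel wrap-around shifts are all assumptions made for brevity.

```python
import numpy as np

def shift_and_sum_refocus(lf, alpha):
    """Refocus a 4D light field by shift-and-sum (illustrative sketch).

    lf    : array of shape (U, V, H, W) holding sub-aperture views.
    alpha : slope parameter selecting the synthetic focal plane.
    """
    U, V, H, W = lf.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0  # angular centre of the array
    out = np.zeros((H, W), dtype=float)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its angular offset from
            # the centre, then accumulate; objects at the chosen depth
            # align and stay sharp, everything else blurs.
            dy = int(round(alpha * (u - uc)))
            dx = int(round(alpha * (v - vc)))
            out += np.roll(np.roll(lf[u, v], dy, axis=0), dx, axis=1)
    return out / (U * V)  # average over all sub-aperture views
```

With `alpha = 0` the views are summed without shifting, reproducing an all-in-focus average; varying `alpha` sweeps the synthetic focal plane through the scene, which is the focus cue the saliency field is evaluated against.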
dc.publisher: Trinity College Dublin. School of Computer Science & Statistics. Discipline of Computer Science (en)
dc.rights: Y (en)
dc.subject: light field, saliency, refocusing, rendering, visual attention, visual perception (en)
dc.title: Modelling Light Field Visual Attention: A Saliency Field Approach (en)
dc.type: Thesis (en)
dc.type.supercollection: thesis_dissertations (en)
dc.type.supercollection: refereed_publications (en)
dc.type.qualificationlevel: Doctoral (en)
dc.identifier.peoplefinderurl: https://tcdlocalportal.tcd.ie/pls/EnterApex/f?p=800:71:0::::P71_USERNAME:GILLA3 (en)
dc.identifier.rssinternalid: 248132 (en)
dc.rights.ecaccessrights: openAccess
dc.contributor.sponsor: SFI stipend (en)
dc.identifier.uri: http://hdl.handle.net/2262/101564

