Show simple item record

dc.contributor.advisor: Smolic, Aljosa
dc.contributor.author: Moynihan, Matthew
dc.date.accessioned: 2022-10-21T08:59:36Z
dc.date.available: 2022-10-21T08:59:36Z
dc.date.issued: 2022
dc.date.submitted: 2022
dc.identifier.citation: Moynihan, Matthew, Spatio-Temporal Processes for Volumetric Video Content Creation, Trinity College Dublin, School of Computer Science & Statistics, Computer Science, 2022
dc.identifier.other: Y
dc.description: APPROVED
dc.description.abstract: Volumetric video is an emerging media platform that has recently undergone many captivating developments. Its recent uptake in consumer media suggests that the platform is approaching maturity. That said, there remains a very large barrier to entry for content creators, as the technological requirements far exceed the means of budget-constrained creators. Furthermore, even well-resourced creators find it difficult to manage the large data footprint of volumetric video. Hence, there is strong demand from these communities for new systems that improve the quality and accessibility of the medium. Techniques that enforce spatio-temporal coherence have yielded great success with traditional 2D video, from quality improvements to reduced data compression overheads. In this dissertation we investigate how spatio-temporal processes may be applied to volumetric video content creation, with the ultimate goal of improving quality and accessibility by means of editing and compression. Specifically, we investigate this under three applications: upsampling and filtering of point cloud sequences, autonomous tracking and registration of mesh sequences, and frameworks for learnable registration of mesh sequences. Improvements to point cloud sequences allow volumetric video content pipelines to establish spatio-temporal coherence from early-stage reconstructions, propagating these qualities through to the final volumetric mesh outputs. Tracking and registration of meshes further improves the quality of volumetric video while also adding temporal redundancy that can be exploited for compression. Finally, deep learning provides faster processing times and presents a framework for more spatio-temporally aware network architectures.
Under these three applications, this dissertation seeks to answer the question: how can spatio-temporal processes be applied to improve volumetric video content creation?
dc.publisher: Trinity College Dublin. School of Computer Science & Statistics. Discipline of Computer Science
dc.rights: Y
dc.subject: Volumetric Video
dc.subject: Spatio-temporal
dc.subject: Free Viewpoint Video
dc.subject: VR
dc.subject: XR
dc.title: Spatio-Temporal Processes for Volumetric Video Content Creation
dc.type: Thesis
dc.type.supercollection: thesis_dissertations
dc.type.supercollection: refereed_publications
dc.type.qualificationlevel: Doctoral
dc.identifier.peoplefinderurl: https://tcdlocalportal.tcd.ie/pls/EnterApex/f?p=800:71:0::::P71_USERNAME:MAMOYNIH
dc.identifier.rssinternalid: 246915
dc.rights.ecaccessrights: openAccess
dc.contributor.sponsor: Science Foundation Ireland (SFI)
dc.identifier.uri: http://hdl.handle.net/2262/101373

