Show simple item record

dc.contributor.advisor: Pitie, Francois
dc.contributor.author: Wu, Hao
dc.date.accessioned: 2023-03-14T11:12:47Z
dc.date.available: 2023-03-14T11:12:47Z
dc.date.issued: 2023
dc.date.submitted: 2023
dc.identifier.citation: Wu, Hao, Automated Creation of Intra-Video Social Comments, Trinity College Dublin, School of Engineering, Electronic & Elect. Engineering, 2023
dc.identifier.other: Y
dc.description: APPROVED
dc.description.abstract: Live video comments, or "danmu", are an emerging social feature on Asian online video platforms. These time-synchronous comments are overlaid on the video playback and uniquely enrich the viewing experience, engaging hundreds of millions of users in rich community discussions. The presence of danmu comments has become a determining factor for video popularity. Videos with few danmu are unlikely to be placed at the top of a search result list or to be recommended; they therefore receive less attention from viewers, which, in turn, prevents them from receiving further comments. This is similar to the cold-start problem in recommender systems. To overcome this cold-start problem, we propose to automatically generate new danmu for less-commented videos. Most of the existing literature on automated danmu creation has so far focused on generating danmu comments at random locations in already densely commented videos. However, the real issue faced by content creators is that videos need many danmu comments to start attracting traffic. Moreover, in densely commented videos it is easier to exploit the numerous nearby comments to generate new ones. In this thesis, we study this video cold-start problem and examine how new comments can be generated automatically for less-commented videos. We first propose to combine the available information from all modalities, including video visual signals, audio soundtracks, and linguistic inputs (previous comments), into a single Deep Learning Transformer architecture. We also show that, by training our network for different scenarios of danmu comment density, ranging from the complete cold-start scenario to the scenario where the video already has many comments, our method can outperform the state of the art in all situations, even surpassing human comments in terms of fluency and relevance to the original video content.

To further tackle this cold-start challenge, rather than generating comments at random places in a video timeline, we propose to solve the problem of where to publish danmu comments, which has not yet been addressed in the literature. As danmu comments tend to aggregate at particular highlights, we propose to predict these popular locations in the video timeline by building on the same core architecture as our comment-generation network. Results show that danmu density trends can be reliably predicted from bare videos, proving that we can also predict where to publish comments in a video. Instead of separating the prediction of a danmu comment's location from the generation of its content, we recognise that both tasks, comment generation and highlight prediction, can actually be addressed within a unified framework, using the same input modalities and the same core Transformer architecture, but with two different decoders. This unified network is trained in a multi-task manner. Our results show that this multi-task approach consistently outperforms the single-task baselines. Finally, the performance of our overall system is evaluated by human evaluators, measuring not only the quality of the generated content but also the appropriateness of the recommended commenting locations. The evaluation results show that, compared to human comments, our automatically generated comments are more relevant to the source video and their timing tends to be more accurate.
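The unified multi-task framework described in the abstract, a shared multimodal encoder feeding a comment-generation decoder and a highlight-prediction decoder, can be illustrated at a very high level by a weighted combination of the two task losses. The following toy NumPy sketch is purely illustrative: the dimensions, linear heads, loss forms, and the weighting factor `alpha` are all assumptions for the example, not details taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only)
d_model, vocab, n_segments = 16, 50, 8

# Stand-in for the shared encoder output: one fused multimodal
# feature vector per video segment
fused = rng.normal(size=(n_segments, d_model))

# Two task heads sharing the encoder output
W_gen = rng.normal(size=(d_model, vocab))  # comment-generation head
W_loc = rng.normal(size=(d_model, 1))      # highlight-density head

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Generation loss: cross-entropy against toy target tokens
targets = rng.integers(0, vocab, size=n_segments)
probs = softmax(fused @ W_gen)
loss_gen = -np.log(probs[np.arange(n_segments), targets]).mean()

# Location loss: squared error against a toy danmu-density curve
density = rng.random(n_segments)
loss_loc = (((fused @ W_loc).ravel() - density) ** 2).mean()

# Multi-task objective: weighted sum of the two task losses
alpha = 0.5  # task-weighting hyperparameter (assumed)
loss = alpha * loss_gen + (1 - alpha) * loss_loc
print(float(loss) > 0.0)
```

In a real system both heads would be Transformer decoders and the combined loss would be minimised jointly by gradient descent; the sketch only shows how a single scalar objective couples the two tasks.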
dc.publisher: Trinity College Dublin. School of Engineering. Discipline of Electronic & Elect. Engineering
dc.rights: Y
dc.subject: Multi-media Analysis
dc.subject: Social-media Analysis
dc.subject: Natural Language Generation
dc.title: Automated Creation of Intra-Video Social Comments
dc.type: Thesis
dc.type.supercollection: thesis_dissertations
dc.type.supercollection: refereed_publications
dc.type.qualificationlevel: Doctoral
dc.identifier.peoplefinderurl: https://tcdlocalportal.tcd.ie/pls/EnterApex/f?p=800:71:0::::P71_USERNAME:WUH3
dc.identifier.rssinternalid: 251833
dc.rights.ecaccessrights: openAccess
dc.identifier.uri: http://hdl.handle.net/2262/102260

