dc.contributor.author | Khosrobeigi, Zohreh | |
dc.contributor.author | Koutsombogera, Maria | |
dc.contributor.author | Vogel, Carl | |
dc.contributor.editor | Atsushi Ito and Peter Baranyi | en |
dc.date.accessioned | 2024-09-20T09:58:29Z | |
dc.date.available | 2024-09-20T09:58:29Z | |
dc.date.created | September 16-18, 2024 | en |
dc.date.issued | 2024 | |
dc.date.submitted | 2024 | en |
dc.identifier.citation | Zohreh Khosrobeigi, Maria Koutsombogera and Carl Vogel, Motion Energy Alignment Analysis in Dialogue, 15th IEEE International Conference on Cognitive Infocommunications -- CogInfoCom 2024, Tokyo, Japan, September 16-18, 2024, Atsushi Ito and Peter Baranyi, 2024, 91-96 | en |
dc.identifier.other | Y | |
dc.description.abstract | We study physical alignment in conversations and
motion energy (ME) validation. ME is estimated by counting
pixel changes in video regions of interest. The ME of interlocutors
is quantified with the MEA system, which uses a frame-difference
algorithm and is widely deployed in settings designed to involve
natural dialogue. In the spirit of replication and system validation,
we report our efforts to validate motion energy analysis on
collaborative dialogues. Within successive windows on ME in video
recordings (c. 0.3 seconds; 9 frames), and over entire videos
without windowing, we correlate the simultaneous ME of paired
interlocutors. We then compare these values with those that arise
when the ME of one party is either randomized or reversed. We
observe the expected differences between actual and randomized
interlocutors, but not for reversals. We demonstrate relations
among ME values, ME correlations, and ME validation. | en |
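The abstract's pipeline (frame-difference motion energy, then windowed correlation of paired interlocutors) can be illustrated with a minimal sketch. This is not the authors' MEA implementation; the pixel-change threshold and the 30 fps assumption behind the 9-frame (c. 0.3 s) window are hypothetical choices for illustration.

```python
import numpy as np

def motion_energy(frames, threshold=10):
    """Frame-difference motion energy: for each pair of consecutive
    frames, count pixels whose intensity changed by more than a
    threshold (threshold value is a hypothetical choice)."""
    frames = np.asarray(frames, dtype=np.int16)
    diffs = np.abs(np.diff(frames, axis=0))
    return (diffs > threshold).sum(axis=(1, 2))

def windowed_correlation(me_a, me_b, window=9):
    """Pearson correlation of two ME series over successive
    non-overlapping windows (9 frames ~ 0.3 s at an assumed 30 fps).
    Windows where either series is constant are skipped."""
    me_a, me_b = np.asarray(me_a, float), np.asarray(me_b, float)
    rs = []
    for i in range(0, min(len(me_a), len(me_b)) - window + 1, window):
        a, b = me_a[i:i + window], me_b[i:i + window]
        if a.std() > 0 and b.std() > 0:
            rs.append(np.corrcoef(a, b)[0, 1])
    return rs
```

Randomizing or reversing one party's ME series before calling `windowed_correlation` gives the comparison baselines the abstract describes.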
dc.format.extent | 91-96 | en |
dc.language.iso | en | en |
dc.rights | Y | en |
dc.subject | Conversational Synchrony | en |
dc.subject | Alignment | en |
dc.subject | Windowing | en |
dc.subject | Motion Energy Validation | en |
dc.subject | MEA | en |
dc.title | Motion Energy Alignment Analysis in Dialogue | en |
dc.title.alternative | 15th IEEE International Conference on Cognitive Infocommunications -- CogInfoCom 2024 | en |
dc.type | Conference Paper | en |
dc.type.supercollection | scholarly_publications | en |
dc.type.supercollection | refereed_publications | en |
dc.identifier.peoplefinderurl | http://people.tcd.ie/vogel | |
dc.identifier.peoplefinderurl | http://people.tcd.ie/koutsomm | |
dc.identifier.peoplefinderurl | http://people.tcd.ie/khosrobz | |
dc.identifier.rssinternalid | 271060 | |
dc.rights.ecaccessrights | openAccess | |
dc.subject.TCDTheme | Creative Technologies | en |
dc.subject.TCDTheme | Digital Engagement | en |
dc.subject.TCDTheme | Digital Humanities | en |
dc.subject.TCDTheme | Telecommunications | en |
dc.subject.TCDTag | ARTIFICIAL INTELLIGENCE | en |
dc.subject.TCDTag | Communication Sciences | en |
dc.subject.TCDTag | Computational Linguistics | en |
dc.subject.TCDTag | Discourse & Dialogue | en |
dc.subject.TCDTag | Imaging and Computer Vision | en |
dc.subject.TCDTag | Multi-modal analysis of dialogue | en |
dc.subject.TCDTag | interaction analysis | en |
dc.identifier.orcid_id | 0000-0001-8928-8546 | |
dc.status.accessible | N | en |
dc.contributor.sponsor | Science Foundation Ireland (SFI) | en |
dc.contributor.sponsorGrantNumber | 18/CRT/6223 | en |
dc.identifier.uri | https://hdl.handle.net/2262/109259 | |