Towards creating a conversational memory for long-term meeting support: predicting memorable moments in multi-party conversations through eye-gaze

Authors: Tsfasman, Maria; Fenech, Kristian (Eötvös University); Tarvirdians, M. (TU Delft Interactive Intelligence); Lorincz, Andras (Eötvös University); Jonker, C.M. (TU Delft Interactive Intelligence; Universiteit Leiden); Oertel, Catharine (TU Delft Interactive Intelligence)

Date: 2022

Abstract: When working in a group, it is essential to understand each other's viewpoints to increase group cohesion and meeting productivity. This can be challenging in teams: participants might feel misunderstood and the discussion can go around in circles. To tackle this problem, previous research on group interactions has addressed topics such as dominance detection, group engagement, and group creativity. Conversational memory, however, remains a widely unexplored area in the field of multimodal analysis of group interaction. The ability to track what each participant, or the group as a whole, finds memorable from each meeting would allow a system or agent to continuously optimise its strategy to help a team meet its goals. In the present paper, we therefore investigate what participants take away from each meeting and how it is reflected in group dynamics. As a first step toward such a system, we recorded a multimodal longitudinal meeting corpus (MEMO), which comprises first-party annotations of what participants remember from a discussion and why they remember it. We investigated whether participants in group interactions encode what they remember non-verbally and whether such non-verbal multimodal features can be used to automatically predict what groups are likely to remember. We devised a coding scheme to cluster participants' memorisation reasons into higher-level constructs.
We find that low-level multimodal cues, such as gaze and speaker activity, can predict conversational memorability. We also find that non-verbal signals can indicate when a memorable moment starts and ends. We could predict four levels of conversational memorability with an average accuracy of 44%. We also showed that reasons related to participants' personal feelings and experiences are the most frequently mentioned grounds for remembering meeting segments.

Subject: conversational memory; multi-modal corpora; multi-party interaction; social signals

To reference this document use: http://resolver.tudelft.nl/uuid:668a53e9-1688-433b-9934-eb0de73dc89f
DOI: https://doi.org/10.1145/3536221.3556613
Publisher: Association for Computing Machinery (ACM)
ISBN: 9781450393904
Source: ICMI 2022 - Proceedings of the 2022 International Conference on Multimodal Interaction
Event: 24th ACM International Conference on Multimodal Interaction, ICMI 2022, 2022-11-07 → 2022-11-11, Bangalore, India
Series: ACM International Conference Proceeding Series
Part of collection: Institutional Repository
Document type: conference paper
Rights: © 2022 Maria Tsfasman, Kristian Fenech, M. Tarvirdians, Andras Lorincz, C.M. Jonker, Catharine Oertel
Files: 3536221.3556613.pdf (4.26 MB)