Exploring Personal Memories and Video Content as Context for Facial Behavior in Predictions of Video-Induced Emotions

Conference Paper (2020)
Author(s)

B.J.W. Dudzik (TU Delft - Interactive Intelligence)

D.J. Broekens (Universiteit Leiden)

Mark Neerincx (TU Delft - Interactive Intelligence)

H.S. Hung (TU Delft - Pattern Recognition and Bioinformatics)

Research Group
Interactive Intelligence
Copyright
© 2020 B.J.W. Dudzik, D.J. Broekens, M.A. Neerincx, H.S. Hung
DOI related publication
https://doi.org/10.1145/3382507.3418814
Publication Year
2020
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' – Taverne project https://www.openaccess.nl/en/you-share-we-take-care Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.
Pages (from-to)
153-162
ISBN (print)
978-1-4503-7581-8
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Empirical evidence suggests that the emotional meaning of facial behavior in isolation is often ambiguous in real-world conditions. While humans complement interpretations of others' faces with additional reasoning about context, automated approaches rarely display such context-sensitivity. Empirical findings indicate that the personal memories triggered by videos are crucial for predicting viewers' emotional responses to such videos; in some cases, even more so than the video's audiovisual content. In this article, we explore the benefits of personal memories as context for facial behavior analysis. We conduct a series of multimodal machine learning experiments combining the automatic analysis of video-viewers' faces with that of two types of context information for affective predictions: (1) self-reported free-text descriptions of triggered memories and (2) a video's audiovisual content. Our results demonstrate that both sources of context provide models with information about variation in viewers' affective responses that complements facial analysis and each other.

Files

3382507.3418814.pdf
(pdf | 1.11 Mb)
- Embargo expired in 08-04-2022
License info not available