Exploring Personal Memories and Video Content as Context for Facial Behavior in Predictions of Video-Induced Emotions
B.J.W. Dudzik (TU Delft - Interactive Intelligence)
D.J. Broekens (Universiteit Leiden)
Mark Neerincx (TU Delft - Interactive Intelligence)
H.S. Hung (TU Delft - Pattern Recognition and Bioinformatics)
Abstract
Empirical evidence suggests that the emotional meaning of facial behavior in isolation is often ambiguous in real-world conditions. While humans complement interpretations of others' faces with additional reasoning about context, automated approaches rarely display such context-sensitivity. Empirical findings indicate that the personal memories triggered by videos are crucial for predicting viewers' emotional response to such videos, in some cases even more so than the video's audiovisual content. In this article, we explore the benefits of personal memories as context for facial behavior analysis. We conduct a series of multimodal machine learning experiments combining the automatic analysis of video viewers' faces with that of two types of context information for affective predictions: (1) self-reported free-text descriptions of triggered memories and (2) a video's audiovisual content. Our results demonstrate that both sources of context provide models with information about variation in viewers' affective responses that complements facial analysis and each other.