Collecting Mementos: A Multimodal Dataset for Context-Sensitive Modeling of Affect and Memory Processing in Responses to Videos

Abstract

In this article we introduce Mementos: the first multimodal corpus for computational modeling of affect and memory processing in response to video content. It was collected online via crowdsourcing and comprises 1995 individual responses from 297 unique viewers to 42 different segments of music videos. Apart from webcam recordings of the viewers' upper-body behavior (totaling 2012 minutes) and self-reports of their emotional experience, it contains detailed descriptions of the occurrence and content of 989 personal memories triggered by the video content. Finally, the dataset includes self-report measures related to individual differences in participants' background and situation (Demographics, Personality, and Mood), thereby facilitating the exploration of important contextual factors in research using the dataset. We 1) describe the construction and contents of the corpus itself, 2) analyse the validity of its content by investigating biases and consistency with existing research on affect and memory processing, 3) review previously published work that demonstrates the usefulness of the multimodal data in the corpus for research on automated detection and prediction tasks, and 4) provide suggestions for how the dataset can be used in future research on modeling Video-Induced Emotions, Memory-Associated Affect, and Memory Evocation.
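
To make the corpus structure concrete, the sketch below outlines how a single viewer-segment response and the accompanying per-viewer context could be represented in Python. Every class and field name here is a hypothetical illustration derived only from the abstract above; the actual file layout and variable names are defined by the dataset's own documentation.

```python
from dataclasses import dataclass, field

@dataclass
class ViewerContext:
    """Per-viewer individual-difference measures (hypothetical names)."""
    demographics: dict[str, str]    # e.g. age bracket, gender, country
    personality: dict[str, float]   # trait questionnaire scores
    mood: dict[str, float]          # situational mood at participation time

@dataclass
class MementosResponse:
    """One of the 1995 viewer-segment responses (hypothetical schema)."""
    viewer_id: str                  # one of 297 unique viewers
    segment_id: str                 # one of 42 music-video segments
    webcam_path: str                # upper-body webcam recording for this response
    emotion_ratings: dict[str, float]  # self-reported emotional experience
    memories: list[str] = field(default_factory=list)  # free-text descriptions of triggered memories
```

Under this assumed schema, the 989 memory descriptions would be distributed across the `memories` lists of the individual responses, while contextual factors live in a separate `ViewerContext` keyed by `viewer_id`.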

Files

- Collecting_Mementos_A_Multimod... (.pdf, 27.9 Mb): download not available
- Collecting_Mementos_A_Multimod... (.pdf, 1.46 Mb): embargo expired 01-12-2023