Creating a Mood Database for Automated Affect Analysis

Abstract

Affect-adaptive systems depend on the ability to automatically recognize a user’s affective state. This study contributes to the creation of an affect-adaptive system that can recognize negative moods of elderly residents in care homes from a video feed and improve those moods by adapting the lighting in the room. An affective database of videos portraying different moods is required to train such a system. While many affective databases already exist, they primarily target emotions rather than mood. Therefore, we introduce a new database of annotated videos that can be used for mood recognition. To maintain control over which moods are depicted in the database, we combine mood induction with acted performance to portray the moods realistically, incorporating into the acted scripts the results of a series of interviews with caretakers in care homes. The database covers three visual modalities: body, face, and 3D Kinect data, for a total of 24 hours of recorded video material. To annotate this large amount of material in terms of the perceived mood of the person portrayed in each video, we use crowdsourcing, outsourcing the annotation task via the internet to a large number of paid annotators. A risk of crowdsourcing is unreliable annotator performance, owing to the limited control over the annotation process. We address this problem by filtering the annotations according to predefined criteria that check each annotator's task commitment and self-consistency. We validate the combination of induction and acted performance by comparing the intended mood, the mood felt by the actors, and the mood perceived by the annotators. Furthermore, we demonstrate that crowdsourcing is a promising tool for the annotation of mood.
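
As a rough illustration only, and not the criteria used in the study, a minimal Python sketch of the kind of annotation filtering described above might check task commitment (time spent per clip) and self-consistency (agreement on repeated clips). The record format, thresholds, and the assumption that some clips are repeated for each annotator are all hypothetical.

    from collections import defaultdict

    # Hypothetical record format: (annotator_id, video_id, mood_label, seconds_spent).
    # Some clips are assumed to be shown more than once to probe self-consistency.
    MIN_SECONDS_PER_CLIP = 5.0   # assumed commitment threshold
    MIN_SELF_AGREEMENT = 0.7     # assumed self-consistency threshold

    def filter_annotators(annotations, repeated_video_ids):
        """Return the annotator ids whose work passes both reliability checks."""
        by_annotator = defaultdict(list)
        for annotator_id, video_id, mood_label, seconds_spent in annotations:
            by_annotator[annotator_id].append((video_id, mood_label, seconds_spent))

        reliable = set()
        for annotator_id, records in by_annotator.items():
            # Task commitment: discard annotators who rush through the clips.
            avg_time = sum(sec for _, _, sec in records) / len(records)
            if avg_time < MIN_SECONDS_PER_CLIP:
                continue

            # Self-consistency: repeated clips should receive (mostly) the same label.
            labels_per_video = defaultdict(list)
            for video_id, mood_label, _ in records:
                if video_id in repeated_video_ids:
                    labels_per_video[video_id].append(mood_label)
            repeats = [labels for labels in labels_per_video.values() if len(labels) > 1]
            if repeats:
                agreement = sum(
                    labels.count(max(set(labels), key=labels.count)) / len(labels)
                    for labels in repeats
                ) / len(repeats)
                if agreement < MIN_SELF_AGREEMENT:
                    continue

            reliable.add(annotator_id)
        return reliable

In such a scheme, only annotations produced by the annotators returned by filter_annotators would be kept for the final mood labels; the actual filtering criteria used for the database are described in the paper itself.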