Classification of valence using facial expressions of TV-viewers

Author: Holkamp, Y.H.
Contributors: Hauff, C. (mentor); Schavemaker, J.G.M. (mentor)
Faculty: Electrical Engineering, Mathematics and Computer Science
Department: Software Technology
Programme: Web Information Systems
Date: 2014-08-26

Abstract
Emotion has been shown to have a large impact on our interactions with people and devices. In our daily lives, however, these emotions are not taken into account when we work with computers and other machines. If our devices could pick up on social cues, for instance signs of disinterest, the usability of various systems could be improved. Current software can detect specific movements in people's faces from video recordings. Through these movements, facial expressions can be linked to specific emotions, allowing this information to be incorporated into various systems. One application would be a TV that monitors its viewer and suggests alternative videos when negative emotions are shown. A commonly used system for describing these facial muscle movements is the Facial Action Coding System (FACS). Despite the widespread use of this method, little research has been conducted on using FACS measurements to classify viewer emotion over entire videos. In this thesis we evaluated whether FACS measurements can be used to classify emotional labels in real-world environments. To assess this, we conducted a wide range of experiments. We selected and reproduced an existing method that uses a public dataset of naturally occurring emotions, and we additionally developed our own, alternative method.
In a novel comparison, we evaluated the performance of both methods on three different datasets, selected to cover a range of demographics and experimental settings, from highly controlled to near-living-room conditions. We also evaluated the inclusion of the TV viewer's head orientation, which proved beneficial for two of the datasets. One of the datasets used in our work provided access to the subjects' heart rate data; based on this, we included the subject's heart rate and other derived features, and found that this improved performance when training on the history of a specific person. Finally, we performed a novel experiment in which we asked a crowd of laymen to annotate videos from each of the three datasets. This multi-dataset evaluation gave us a reference for how well humans can detect the emotion experienced by the subjects from their facial expressions, allowing a direct comparison with automatic classification methods. Overall, we found that (1) different data processing and aggregation can improve classification performance, and (2) human annotation of emotional responses offers a way to compare classification difficulty between datasets and performance between classification methods.

Subject: machine learning; svm; affective computing; facial expression; FACS; emotion; valence
To reference this document use: http://resolver.tudelft.nl/uuid:3d582a35-05c6-4c41-8c92-345456d8e057
Embargo date: 2014-08-26
Part of collection: Student theses
Document type: master thesis
Rights: (c) 2014 Holkamp, Y.H.
Files: Y Holkamp - Classificatio ... iewers.pdf (PDF, 8.05 MB)
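The pipeline summarized in the abstract — per-frame FACS action-unit (AU) measurements, aggregated into per-video features, then classified into a valence label — can be sketched as follows. This is a minimal illustration, not the thesis code: the AU names and intensity values are hypothetical, and a simple nearest-centroid rule stands in for the SVM classifiers indicated by the subject keywords.

```python
# Illustrative sketch: collapse per-frame AU intensities into per-video
# features (mean and max of each AU), then classify valence by nearest
# class centroid. All names and values here are hypothetical.
from statistics import mean

def aggregate(frames):
    """frames: list of {AU name: intensity} dicts, one per video frame.
    Returns per-video features: the mean and max of each AU over the video."""
    feats = {}
    for au in frames[0]:
        vals = [f[au] for f in frames]
        feats[f"{au}_mean"] = mean(vals)
        feats[f"{au}_max"] = max(vals)
    return feats

def distance(a, b):
    # Squared Euclidean distance between two feature dicts with equal keys.
    return sum((a[k] - b[k]) ** 2 for k in a)

def classify(train, sample):
    """train: list of (features, label) pairs; returns the label whose
    class centroid is closest to the sample's features."""
    by_label = {}
    for feats, label in train:
        by_label.setdefault(label, []).append(feats)
    centroids = {
        label: {k: mean(f[k] for f in group) for k in group[0]}
        for label, group in by_label.items()
    }
    return min(centroids, key=lambda lab: distance(centroids[lab], sample))

# Toy data: AU12 (lip-corner puller, smiling) dominates positive-valence
# videos; AU04 (brow lowerer, frowning) dominates negative-valence ones.
pos = aggregate([{"AU04": 0.1, "AU12": 0.8}, {"AU04": 0.0, "AU12": 0.9}])
neg = aggregate([{"AU04": 0.7, "AU12": 0.1}, {"AU04": 0.9, "AU12": 0.0}])
query = aggregate([{"AU04": 0.8, "AU12": 0.2}, {"AU04": 0.6, "AU12": 0.1}])
print(classify([(pos, "positive"), (neg, "negative")], query))  # → negative
```

The aggregation step mirrors finding (1) of the abstract: how per-frame measurements are processed and aggregated into a single per-video representation is itself a design choice that affects classification performance.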