Multimodal Self-Assessed Personality Estimation during Crowded Mingle Scenarios Using Wearable Devices and Cameras

Journal Article (2022)
Authors

Laura Cabrera-Quiros (TU Delft - Pattern Recognition and Bioinformatics; Instituto Tecnológico de Costa Rica)

E. Gedik (TU Delft - Pattern Recognition and Bioinformatics)

H.S. Hung (TU Delft - Pattern Recognition and Bioinformatics)

Research Group
Pattern Recognition and Bioinformatics
Copyright
© 2022 L.C. Cabrera Quiros, E. Gedik, H.S. Hung
To reference this document use:
https://doi.org/10.1109/TAFFC.2019.2930605
Publication Year
2022
Language
English
Issue number
1
Volume number
13
Pages (from-to)
46-59
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This paper focuses on the automatic classification of self-assessed personality traits from the HEXACO inventory during crowded mingle scenarios. These scenarios provide rich case studies for social behavior analysis, but they are also challenging to analyze automatically because people interact dynamically and freely in an in-the-wild, face-to-face setting. We leverage wearable sensors recording acceleration and proximity, together with video from overhead cameras. We use three behavioral modality types (movement, speech, and proximity) coming from two sensor types (wearable and camera). Unlike other works, we extract an individual's speaking status from a single body-worn triaxial accelerometer instead of audio, which scales easily to large populations. Additionally, we study the effect of different combinations of modality types on personality estimation, and how this relates to the nature of each trait. We also include an analysis of feature complementarity and an evaluation of feature importance for the classification, showing that combining complementary modality types further improves classification performance. We estimate the self-assessed personality traits both with a binary classification (the community standard) and as a regression over the trait scores. Finally, we analyze the impact of speech detection accuracy on the overall performance of the personality estimation.
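
Illustrative example (not part of the publication record): the community-standard binary setup mentioned in the abstract typically binarizes each trait score with a median split before training a classifier. The minimal Python sketch below shows that setup with synthetic, randomly generated features and scores; it is not the authors' pipeline, and all variable names and dimensions are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins: a per-participant feature matrix X (e.g.,
# statistics of accelerometer-derived movement/speaking status and
# camera-derived proximity) and continuous HEXACO trait scores y_score.
rng = np.random.default_rng(0)
X = rng.normal(size=(70, 20))      # 70 participants, 20 features (synthetic)
y_score = rng.normal(size=70)      # one HEXACO trait score (synthetic)

# Binarize the trait by a median split (the binary setup the abstract
# refers to) and evaluate a simple baseline classifier.
y_binary = (y_score > np.median(y_score)).astype(int)
clf = LogisticRegression(max_iter=1000)
auc_scores = cross_val_score(clf, X, y_binary, cv=5, scoring="roc_auc")
print(f"Mean ROC AUC over 5 folds: {auc_scores.mean():.2f}")

On synthetic data the AUC hovers around chance (0.5); the point is only the median-split-then-classify structure, which can equally be replaced by a regression over the raw trait scores, the second setup the abstract describes.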

Files

Multimodal_Self_Assessed_Perso... (pdf)
(pdf | 4.51 MB)
- Embargo expired on 08-04-2022
License info not available