Cross-modal approach for conversational well-being monitoring with multi-sensory earables

Conference Paper (2018)
Author(s)

Chulhong Min (Nokia Bell Labs)

Alessandro Montanari (Nokia Bell Labs)

Akhil Mathur (Nokia Bell Labs)

Seungchul Lee (Korea Advanced Institute of Science and Technology)

Fahim Kawsar (TU Delft - Knowledge and Intelligence Design)

Research Group
Knowledge and Intelligence Design
DOI (related publication)
https://doi.org/10.1145/3267305.3267695 (final published version)
Publication Year
2018
Language
English
Pages (from-to)
706-709
ISBN (electronic)
978-1-4503-5966-5
Event
2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2018) and 2018 ACM International Symposium on Wearable Computers (ISWC 2018), 8-12 October 2018, Singapore, Singapore

Abstract

We propose a cross-modal approach for conversational well-being monitoring with a multi-sensory earable. The approach consists of motion, audio, and BLE models running on the earable: using the IMU sensor, the microphone, and BLE scanning, these models detect speaking activity, stress and emotion, and conversation participants, respectively. We discuss the feasibility of qualifying conversations with our purpose-built cross-modal model in an energy-efficient and privacy-preserving way. Building on the cross-modal model, we develop a mobile application that qualifies ongoing conversations and provides personalised feedback on social well-being.
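
To make the cross-modal structure concrete, the following is a minimal Python sketch of how the three per-modality outputs described in the abstract (IMU-based speaking detection, microphone-based stress/emotion inference, and BLE-based participant detection) could be fused into a single conversation summary. All class and function names, the fusion rule, and the stress threshold are illustrative assumptions for exposition, not the authors' implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class ModalityOutputs:
    """Per-modality inferences for one time window (names are illustrative)."""
    speaking: bool          # IMU model: is the wearer currently speaking?
    stress: float           # audio model: 0.0 (calm) .. 1.0 (stressed)
    emotion: str            # audio model: coarse emotion label
    nearby_ids: List[str]   # BLE model: earable IDs of co-located participants


def qualify_conversation(out: ModalityOutputs) -> dict:
    """Fuse the three modality outputs into a simple conversation summary.

    The fusion rule and the 0.7 stress threshold are assumptions made for
    illustration; the paper does not specify these values.
    """
    in_conversation = out.speaking and bool(out.nearby_ids)
    return {
        "in_conversation": in_conversation,
        "group_size": 1 + len(out.nearby_ids),  # wearer plus detected peers
        "emotion": out.emotion,
        "stressed": out.stress > 0.7,
    }


# Example: a window where the wearer speaks with two nearby participants.
window = ModalityOutputs(speaking=True, stress=0.4, emotion="neutral",
                         nearby_ids=["earable-A", "earable-B"])
print(qualify_conversation(window))

In this sketch, each modality runs independently on the earable and only low-dimensional inferences (booleans, labels, IDs) are fused, which is one way such a design could stay energy-efficient and privacy-preserving: no raw audio would need to leave the device.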