Gestures In-The-Wild

Detecting Conversational Hand Gestures in Crowded Scenes Using a Multimodal Fusion of Bags of Video Trajectories and Body Worn Acceleration

Journal Article (2020)
Author(s)

Laura Cabrera Quiros (Instituto Tecnologico de Costa Rica, TU Delft - Pattern Recognition and Bioinformatics)

David Tax (TU Delft - Pattern Recognition and Bioinformatics)

H.S. Hung (TU Delft - Pattern Recognition and Bioinformatics)

Research Group
Pattern Recognition and Bioinformatics
Copyright
© 2020 L.C. Cabrera Quiros, D.M.J. Tax, H.S. Hung
DOI related publication
https://doi.org/10.1109/TMM.2019.2922122
Publication Year
2020
Language
English
Issue number
1
Volume number
22
Pages (from-to)
138-147
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This paper addresses the detection of hand gestures during free-standing conversations in crowded mingle scenarios. Unlike the scenarios addressed in previous work on gesture detection and recognition, crowded mingle scenes pose additional challenges such as cross-contamination between subjects, strong occlusions, and nonstationary backgrounds, which make them more complex to analyze with computer vision techniques alone. We propose a multimodal approach using video and wearable acceleration data recorded via smart badges hung around the neck. In the video modality, we propose to treat noisy dense trajectories as bags-of-trajectories. A given bag can contain good trajectories corresponding to the subject as well as bad trajectories due, for instance, to cross-contamination. We hypothesize, however, that for a given class it is possible to learn which trajectories are discriminative while ignoring the noisy ones. We do this using Multiple Instance Learning via Embedded Instance Selection, which also allows us to identify which instances contribute most to the classification. By fusing the decisions of the classifiers from the video and wearable acceleration modalities, we show improvements over the unimodal approaches, reaching an AUC of 0.69. We also present a static and a dynamic analysis to assess the impact of noisy data on the fused detection results, showing that moments of high occlusion in the video are compensated for by the information from the wearables. Finally, we apply our method to detect speaking status, leveraging the close relationship reported in the literature between hand gestures and speech.
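
The following is a minimal, illustrative sketch of the bag-embedding idea described in the abstract, not the authors' implementation. Each bag of trajectory descriptors is embedded by its maximum similarity to a pool of concept instances (as in Multiple Instance Learning via Embedded Instance Selection), a sparse linear classifier then selects the discriminative instances, and the resulting per-bag video score is fused with a placeholder score from the acceleration modality. All data, parameter values, and helper names (e.g. embed_bag) are hypothetical, and an L1-penalised logistic regression stands in for the 1-norm SVM commonly used with this embedding.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed_bag(bag, concepts, sigma=1.0):
    # bag: (n_instances, d) trajectory descriptors for one subject/time window
    # concepts: (m, d) instance pool gathered from all training bags
    # Embedding dimension k = max_j exp(-||x_j - c_k||^2 / sigma^2)
    d2 = ((bag[:, None, :] - concepts[None, :, :]) ** 2).sum(-1)   # (n, m)
    return np.exp(-d2 / sigma**2).max(axis=0)                      # (m,)

def embed_bags(bags, concepts, sigma=1.0):
    return np.vstack([embed_bag(b, concepts, sigma) for b in bags])

# Toy data: 40 bags of hypothetical trajectory descriptors (d = 8).
rng = np.random.default_rng(0)
bags = [rng.normal(size=(rng.integers(5, 15), 8)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)           # 1 = gesturing, 0 = not gesturing
concepts = np.vstack(bags)                     # instance pool from training bags

X = embed_bags(bags, concepts, sigma=2.0)

# Sparse linear model: non-zero weights indicate which instances (concepts)
# contribute to the decision, mirroring the instance-selection property.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, labels)
selected_instances = np.flatnonzero(clf.coef_[0])
video_scores = clf.predict_proba(X)[:, 1]

# Late (decision-level) fusion with a placeholder score from a classifier
# trained on the wearable acceleration modality.
accel_scores = rng.uniform(size=40)            # stand-in for the wearable classifier
fused_scores = 0.5 * video_scores + 0.5 * accel_scores
```

In this sketch the fusion is a simple average of the two modality scores; the key point is that when one modality is unreliable (e.g. heavy occlusion in video), the other can still carry the decision.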

Files

2922122.pdf
(pdf | 3.08 Mb)
- Embargo expired in 08-04-2022