TDMER
A Task-Driven Method for Multimodal Emotion Recognition
Qian Xu (Xidian University)
Yu Gu (Xidian University)
Chenyu Li (Xidian University)
He Zhang (Northwest University China)
Haixiang Lin (TU Delft - Mathematical Physics)
Linsong Liu (Xidian University)
Abstract
In multimodal emotion recognition, disentangled representation learning methods effectively address the inherent heterogeneity among modalities. To enable the flexible integration of enhanced disentangled features into multimodal emotional features, we propose a task-driven multimodal emotion recognition method, TDMER. Its Cross-Modal Learning module promotes adaptive cross-modal learning of features disentangled into modality-invariant and modality-specific subspaces, based on their contributions to the emotion classification probabilities. The Task-Contribution Fusion mechanism then assigns controllable weights to the enhanced features according to their task objectives, generating multimodal fusion features that improve the emotion classifier's discriminative ability. The proposed TDMER approach has been evaluated on two widely used multimodal emotion recognition benchmarks and demonstrates significant performance improvements over other state-of-the-art methods.
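To make the abstract's fusion idea concrete, the following is a minimal, hypothetical sketch of contribution-weighted fusion over disentangled features. All module names, dimensions, and the specific weighting rule (using each feature's maximum auxiliary class probability as its task contribution) are assumptions for illustration only; they are not the authors' implementation of TDMER.

    import torch
    import torch.nn as nn

    class TaskDrivenFusionSketch(nn.Module):
        def __init__(self, in_dims, hidden_dim=64, num_classes=6):
            super().__init__()
            # One projection pair per modality: a shared (modality-invariant)
            # subspace and a private (modality-specific) subspace.
            self.invariant = nn.ModuleList(nn.Linear(d, hidden_dim) for d in in_dims)
            self.specific = nn.ModuleList(nn.Linear(d, hidden_dim) for d in in_dims)
            # Auxiliary classifier scoring each disentangled feature by how
            # confidently it predicts the emotion label (its "task contribution").
            self.aux_classifier = nn.Linear(hidden_dim, num_classes)
            # Final emotion classifier over the weighted, fused representation.
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def forward(self, inputs):
            # inputs: list of per-modality feature tensors, each (batch, in_dims[i])
            feats = []
            for x, f_inv, f_spec in zip(inputs, self.invariant, self.specific):
                feats.append(torch.relu(f_inv(x)))   # modality-invariant feature
                feats.append(torch.relu(f_spec(x)))  # modality-specific feature
            feats = torch.stack(feats, dim=1)        # (batch, 2*M, hidden_dim)

            # Contribution weights: the peak class probability of each feature's
            # auxiliary prediction, normalized across features (one possible choice).
            probs = torch.softmax(self.aux_classifier(feats), dim=-1)
            contribution = probs.max(dim=-1).values            # (batch, 2*M)
            weights = torch.softmax(contribution, dim=-1)       # controllable weights

            fused = (weights.unsqueeze(-1) * feats).sum(dim=1)  # weighted fusion
            return self.classifier(fused)

    if __name__ == "__main__":
        # Example dimensions for text/audio/video features; purely illustrative.
        model = TaskDrivenFusionSketch(in_dims=[300, 74, 35])
        batch = [torch.randn(8, d) for d in (300, 74, 35)]
        print(model(batch).shape)  # torch.Size([8, 6])

In this sketch, features whose auxiliary predictions are more confident receive larger fusion weights, which is one plausible way to realize a task-driven weighting; the paper's actual Cross-Modal Learning and Task-Contribution Fusion components are not reproduced here.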
Files
File under embargo until 15-09-2025