Yu Gu
5 records found
TDMER
A Task-Driven Method for Multimodal Emotion Recognition
In multimodal emotion recognition, disentangled representation learning methods effectively address the inherent heterogeneity among modalities. To facilitate the flexible integration of enhanced disentangled features into multimodal emotional features, we propose a task-driven mu
...
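The disentanglement idea referenced above is typically realized with a shared (modality-invariant) encoder plus one private encoder per modality, whose outputs are fused for classification. The following is a minimal illustrative sketch of that generic pattern, not the paper's TDMER method; all dimensions, the random-projection "encoders", and the concatenation fusion are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(dim_in, dim_out):
    # Hypothetical linear "encoder": a fixed random projection with tanh,
    # standing in for a trained neural network.
    W = rng.standard_normal((dim_in, dim_out)) * 0.1
    return lambda x: np.tanh(x @ W)

D_TEXT, D_AUDIO, D = 300, 128, 64   # assumed feature sizes
proj_t = encoder(D_TEXT, D)          # project each modality to a common size
proj_a = encoder(D_AUDIO, D)
shared = encoder(D, D)               # modality-invariant (shared) subspace
private_t = encoder(D, D)            # text-specific subspace
private_a = encoder(D, D)            # audio-specific subspace

x_t = rng.standard_normal((1, D_TEXT))   # dummy text feature
x_a = rng.standard_normal((1, D_AUDIO))  # dummy audio feature

h_t, h_a = proj_t(x_t), proj_a(x_a)
# Fuse disentangled pieces: shared parts should align across modalities,
# private parts keep what is unique to each modality.
fused = np.concatenate(
    [shared(h_t), shared(h_a), private_t(h_t), private_a(h_a)], axis=1)
print(fused.shape)  # (1, 256)
```

In a trained system, auxiliary losses (e.g. similarity between the two shared outputs and orthogonality between shared and private parts) enforce the disentanglement; here the split is structural only.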
Scene-Speaker Emotion Aware Network
Dual Network Strategy for Conversational Emotion Recognition
Incorporating external knowledge has been shown to improve emotion understanding in dialogues by enriching contextual information, such as character motivations, psychological states, and causal relations between events. Filtering and categorizing this information can significant
...
Speech signals contain rich information, such as textual content, emotion, and speaker identity. To extract these features more efficiently, researchers are investigating joint training across multiple tasks, like Speech Emotion Recognition (SER) and Speaker Verification (SV), ai
...
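Joint training of SER and SV is commonly set up as one shared acoustic backbone feeding two task heads: emotion logits for recognition and a unit-norm speaker embedding for verification. A minimal sketch of that generic multitask layout, with all layer sizes and the linear "backbone" assumed for illustration (this is not the specific architecture of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def linear(d_in, d_out):
    # Hypothetical linear layer with fixed random weights.
    W = rng.standard_normal((d_in, d_out)) * 0.1
    return lambda x: x @ W

D_FEAT, D_HID, N_EMOTIONS, D_SPK = 80, 64, 4, 32   # assumed sizes
backbone = linear(D_FEAT, D_HID)     # shared acoustic encoder
ser_head = linear(D_HID, N_EMOTIONS) # Speech Emotion Recognition head
sv_head = linear(D_HID, D_SPK)       # Speaker Verification head

x = rng.standard_normal((2, D_FEAT))  # two dummy utterance-level features
h = np.tanh(backbone(x))              # shared representation for both tasks

emotion_logits = ser_head(h)                       # scored with cross-entropy
spk_emb = sv_head(h)
spk_emb /= np.linalg.norm(spk_emb, axis=1, keepdims=True)  # cosine scoring

print(emotion_logits.shape, spk_emb.shape)  # (2, 4) (2, 32)
```

Training would sum the two task losses on the shared backbone, which is where the efficiency gain of joint training comes from.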
Speech emotion recognition has been a prevalent research topic in recent years. Existing speech emotion recognition approaches mainly involve processing and analyzing speech signals, in order to discern the speaker’s emotions in speech. 2D Gabor filters have been used to extract
...
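A 2D Gabor filter, as mentioned above, is an oriented sinusoid under a Gaussian envelope; convolving it with a time-frequency representation (e.g. a log-mel spectrogram) extracts localized spectro-temporal patterns. A small self-contained sketch using the standard Gabor formulation; the kernel parameters, the random "spectrogram", and the naive convolution are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5, psi=0.0):
    """Real-valued 2D Gabor kernel (Gaussian envelope times cosine carrier)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(x_r**2 + (gamma * y_r)**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * x_r / lam + psi)

def convolve2d_valid(img, k):
    """Naive 'valid'-mode 2D correlation, sufficient for a demo."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Dummy "spectrogram" (freq x time); a real system would compute a log-mel
# spectrogram from the waveform first.
rng = np.random.default_rng(2)
spec = rng.standard_normal((40, 100))
feat = convolve2d_valid(spec, gabor_kernel(theta=np.pi / 4))
print(feat.shape)  # (26, 86)
```

A filter bank over several orientations `theta` and wavelengths `lam` yields a multi-channel feature map for the recognizer.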
Speech emotion recognition (SER) poses one of the major challenges in human-machine interaction. We propose a new algorithm, the Voiced Segment Selection (VSS) algorithm, which can produce an accurate segmentation of speech signals. The VSS algorithm deals with the voiced signal
...
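Segmenting the voiced portions of a signal, as the abstract describes, is classically done by labelling frames with high short-time energy and low zero-crossing rate as voiced, then merging consecutive voiced frames into segments. The sketch below implements that generic textbook heuristic as a stand-in; it is not the paper's VSS algorithm, and the frame sizes and thresholds are assumed values.

```python
import numpy as np

def voiced_segments(signal, sr, frame_ms=25, hop_ms=10,
                    energy_ratio=0.1, zcr_max=0.25):
    """Label frames voiced by short-time energy and zero-crossing rate;
    return (start, end) sample indices of voiced runs."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = 1 + max(0, (len(signal) - frame) // hop)
    energies, zcrs = [], []
    for i in range(n_frames):
        f = signal[i * hop:i * hop + frame]
        energies.append(np.mean(f**2))                        # short-time energy
        zcrs.append(np.mean(np.abs(np.diff(np.sign(f)))) / 2) # crossings/sample
    energies, zcrs = np.array(energies), np.array(zcrs)
    voiced = (energies > energy_ratio * energies.max()) & (zcrs < zcr_max)
    segs, start = [], None
    for i, v in enumerate(voiced):       # merge consecutive voiced frames
        if v and start is None:
            start = i
        elif not v and start is not None:
            segs.append((start * hop, (i - 1) * hop + frame))
            start = None
    if start is not None:
        segs.append((start * hop, (len(voiced) - 1) * hop + frame))
    return segs

# Demo: 0.3 s silence, 0.3 s of a 200 Hz tone (voiced-like), 0.3 s silence.
sr = 16000
t = np.arange(int(0.3 * sr)) / sr
sig = np.concatenate([np.zeros(int(0.3 * sr)),
                      np.sin(2 * np.pi * 200 * t),
                      np.zeros(int(0.3 * sr))])
segs = voiced_segments(sig, sr)
print(segs)  # one segment covering roughly samples 4800-9600
```

A learned or pitch-based selector can replace the thresholds, but the frame-then-merge structure is the common skeleton such segmentation methods share.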