Exploring automatic translation between affect representation schemes of music affective content
A. Rugină (TU Delft - Electrical Engineering, Mathematics and Computer Science)
C.A. Raman – Mentor (TU Delft - Pattern Recognition and Bioinformatics)
B.J.W. Dudzik – Mentor (TU Delft - Pattern Recognition and Bioinformatics)
A. Hanjalic – Graduation committee member (TU Delft - Intelligent Systems)
Abstract
Studies in Music Affective Content Analysis use varying emotion schemes to represent the states induced when listening to music, yet few studies explore translation between these representation schemes. This paper examines the feasibility of using machine learning models to translate from a dimensional scheme of Valence, Energy, and Tension to a categorical scheme with the emotions Anger, Fear, Happy, Sad, and Tender, specifically in the context of musical stimuli. Additionally, this paper considers how the close proximity of certain emotions in the dimensional space, such as Fear and Anger, negatively influences the performance of translation models. The paper reflects on past studies, presents new results indicating that translation of music affective content is feasible, and provides suggestions for future analysis.
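To make the translation task concrete, the sketch below frames it as multi-class classification from three dimensional ratings to one of the five categorical emotions. This is an illustration only: the synthetic data, rating scale, and choice of a logistic-regression baseline are assumptions for demonstration, not the authors' actual pipeline or dataset.

```python
# Sketch of dimensional-to-categorical affect translation as multi-class
# classification. Data and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

CATEGORIES = ["Anger", "Fear", "Happy", "Sad", "Tender"]

rng = np.random.default_rng(0)

# Hypothetical dimensional ratings: one row per musical excerpt, columns are
# Valence, Energy, Tension (e.g., mean listener ratings on a 1-9 scale).
X = rng.uniform(low=1.0, high=9.0, size=(500, 3))

# Hypothetical categorical labels for the same excerpts.
y = rng.choice(CATEGORIES, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Any multi-class model could stand in here; logistic regression serves
# purely as a simple baseline for the translation.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Per-category precision/recall would expose confusions between emotions
# that lie close together in the dimensional space (e.g., Fear vs. Anger).
print(classification_report(y_test, model.predict(X_test)))
```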