Exploring automatic translation between affect representation schemes of music affective content

Abstract

Studies in Music Affect Content Analysis use varying emotion schemes to represent the states induced when listening to music. However, few studies explore translation between these representation schemes. This paper explores the feasibility of using machine learning models to translate from a dimensional scheme of Valence, Energy, and Tension to a categorical scheme with the labels Anger, Fear, Happy, Sad, and Tender, specifically in the context of musical stimuli. Additionally, this paper considers how the close proximity of certain emotions in the dimensional space, such as Fear and Anger, negatively influences the performance of translation models. The paper reflects on past studies, presents new results indicating that translation of music affect content is feasible, and offers suggestions for future analysis.
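
As a point of reference for the translation task described above, the sketch below shows one possible setup: a classifier that maps a three-dimensional rating (Valence, Energy, Tension) to one of the five categorical labels. This is a minimal illustration only; the choice of model (a random forest), the rating scale, and the placeholder data are assumptions and do not reflect the paper's actual pipeline or results.

```python
# Minimal sketch (assumptions only, not the paper's method): map dimensional
# ratings of Valence, Energy, and Tension to a categorical emotion label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

LABELS = ["Anger", "Fear", "Happy", "Sad", "Tender"]

# Placeholder data: 500 excerpts with ratings on an assumed 1-9 scale and
# randomly assigned labels, standing in for annotated musical stimuli.
rng = np.random.default_rng(0)
X = rng.uniform(1, 9, size=(500, 3))        # columns: valence, energy, tension
y = rng.integers(0, len(LABELS), size=500)  # categorical emotion targets

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Fit the dimensional-to-categorical translation model and report per-label
# precision/recall, which would expose confusions between nearby emotions
# such as Fear and Anger.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=LABELS))
```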