Music is widely used in human–computer interaction (HCI) to enhance engagement, sustain attention, and support cognitive stimulation. Yet its potential for deliberate mood regulation, particularly through personalized memory recall, remains largely unexplored. Music-evoked autobiographical memories (MEAMs) are often elicited by well-known, favorite songs, yielding stronger mood effects than music without personal memory associations. However, songs can also trigger distressing memories, and a listener's existing song associations will never cover all of their positive personal memories. Since happy personal memories can enhance mood, broader retrieval methods are needed. To address this, we introduce Constructed Music-Evoked Episodic Memories (CoMEEMs), a framework that links deliberately chosen episodic memories to music. By building a personalized song-memory database, CoMEEMs enable autonomous mood regulation and communication in interactive systems, combining memory cues (such as people and places) with mood congruence to select songs with high mood-regulatory impact. In an experiment with 71 Dutch and French adults, participants described 87 positive memories and received song recommendations based on the associated people and places, with and without mood matching. Results showed that song familiarity and genre were the strongest predictors of perceived fit, while valence, arousal, tempo, and lyrics played smaller roles. Mood congruence, especially in valence, significantly influenced song relevance. Participants emphasized the need for user input on emotional states and memory context. Based on these findings, we propose design guidelines for future memory-targeted music recommendation systems.
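To make the scoring idea behind CoMEEMs concrete, the following is a minimal sketch of how a song-memory database could rank candidate songs by combining cue overlap (people, places) with valence/arousal mood congruence. All data structures, field names, and weights here are illustrative assumptions for exposition, not the system evaluated in the experiment.

```python
# Hypothetical sketch: rank candidate songs for a given memory by
# (a) overlap with the memory's cues (people, places) and
# (b) mood congruence on valence/arousal. Weights are assumptions.
from dataclasses import dataclass, field

@dataclass
class Memory:
    people: set[str]
    places: set[str]
    valence: float  # target mood valence in [0, 1]
    arousal: float  # target mood arousal in [0, 1]

@dataclass
class Song:
    title: str
    tagged_people: set[str] = field(default_factory=set)
    tagged_places: set[str] = field(default_factory=set)
    valence: float = 0.5
    arousal: float = 0.5

def fit_score(memory: Memory, song: Song,
              w_cues: float = 0.6, w_mood: float = 0.4) -> float:
    """Combine cue overlap and mood congruence into one score in [0, 1]."""
    # Fraction of the memory's people/places that the song is tagged with.
    cues = memory.people | memory.places
    tags = song.tagged_people | song.tagged_places
    cue_overlap = len(cues & tags) / len(cues) if cues else 0.0
    # Mood congruence: 1 minus the mean absolute valence/arousal distance.
    mood = 1.0 - (abs(memory.valence - song.valence)
                  + abs(memory.arousal - song.arousal)) / 2.0
    return w_cues * cue_overlap + w_mood * mood

# Usage: score two candidate songs against one positive memory.
memory = Memory(people={"Anna"}, places={"Lyon"}, valence=0.9, arousal=0.6)
songs = [
    Song("Song A", tagged_people={"Anna"}, valence=0.8, arousal=0.5),
    Song("Song B", tagged_places={"Lyon"}, valence=0.3, arousal=0.2),
]
ranked = sorted(songs, key=lambda s: fit_score(memory, s), reverse=True)
print([(s.title, round(fit_score(memory, s), 2)) for s in ranked])
```

A weighted linear combination is only one plausible design choice; the mood-matching condition in the experiment corresponds here to setting a nonzero `w_mood`, and the finding that valence congruence mattered most suggests the valence term could be weighted more heavily than arousal.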