Title: Joy, Distress, Hope, and Fear in Reinforcement Learning (Extended Abstract)
Authors: Jacobs, E.J.; Broekens, J.; Jonker, C.M.
Faculty: Electrical Engineering, Mathematics and Computer Science
Department: Intelligent Systems
Date: 2014-05-05

Abstract: In this paper we present a mapping between joy, distress, hope, and fear and reinforcement learning (RL) primitives. Joy/distress is a signal derived from the RL update signal, while hope/fear is derived from the utility of the current state. Agent-based simulation experiments replicate psychological and behavioral dynamics of emotion, including: joy and distress reactions that develop prior to hope and fear; fear extinction; habituation of joy; and task randomness that increases the intensity of joy and distress. This work distinguishes itself by assessing the dynamics of emotion in an adaptive agent framework, coupling it to the literature on habituation, development, and extinction.

Subjects: reinforcement learning; emotion dynamics; affective computing
To reference this document use: http://resolver.tudelft.nl/uuid:c50b548f-3c80-4426-bed4-cccc93bf68a5
Publisher: ACM
ISBN: 978-1-4503-2738-1
Source: AAMAS 2014: Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems, Paris, France, 5-9 May 2014
Part of collection: Institutional Repository
Document type: conference paper
Rights: (c) 2014 International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Files: Jonker_2014.pdf (PDF, 370.85 KB)
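The abstract's mapping can be sketched in code. This is a minimal illustration, not the authors' exact formulas: it assumes a tabular Q-learning agent in which joy/distress is read off the sign and magnitude of the temporal-difference (TD) update signal, and hope/fear off the utility (estimated value) of the current state. All function names and parameter values here are illustrative.

```python
def td_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update; returns the TD error (the RL update signal)."""
    td_error = r + gamma * max(Q[s_next].values()) - Q[s][a]
    Q[s][a] += alpha * td_error
    return td_error

def joy_distress(td_error):
    # A positive update signal maps to joy, a negative one to distress;
    # intensity is taken as the magnitude of the error.
    return ("joy", td_error) if td_error >= 0 else ("distress", -td_error)

def hope_fear(Q, s):
    # Hope/fear is derived from the utility of the current state:
    # positive expected value -> hope, negative -> fear.
    v = max(Q[s].values())
    return ("hope", v) if v >= 0 else ("fear", -v)

# Tiny usage example: an unexpected reward produces joy, and the
# now-positive state value produces hope on the next visit.
Q = {"s0": {"a": 0.0}, "s1": {"a": 0.0}}
error = td_update(Q, "s0", "a", r=1.0, s_next="s1")
print(joy_distress(error))   # positive TD error -> joy
print(hope_fear(Q, "s0"))    # updated value of s0 is positive -> hope
```

Under this reading, habituation of joy falls out naturally: as learning converges, TD errors shrink toward zero, so repeated rewards elicit progressively weaker joy.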