Search results (1 - 3 of 3)
Moerland, Thomas M. (author), Broekens, D.J. (author), Plaat, Aske (author), Jonker, C.M. (author)
Sequential decision making, commonly formalized as optimization of a Markov Decision Process, is a key challenge in artificial intelligence. Two successful approaches to MDP optimization are reinforcement learning and planning, which both largely have their own research communities. However, if both research fields solve the same problem,...
journal article 2022
Calli, B. (author), Caarls, W. (author), Wisse, M. (author), Jonker, P.P. (author)
Grasp synthesis for unknown objects is a challenging problem, as the algorithms are expected to cope with missing object shape information. This missing information is a function of the vision sensor viewpoint. The majority of grasp synthesis algorithms in the literature synthesize a grasp using a single image of the target object and...
journal article 2018
Jacobs, E.J. (author), Broekens, J. (author), Jonker, C.M. (author)
In this paper we present a mapping between joy, distress, hope and fear, and Reinforcement Learning (RL) primitives. Joy/distress is a signal derived from the RL update signal, while hope/fear is derived from the utility of the current state. Agent-based simulation experiments replicate psychological and behavioral dynamics of emotion...
conference paper 2014