Searched for: +
(1 - 6 of 6)
Moerland, T.M. (author), Broekens, D.J. (author), Plaat, Aske (author), Jonker, C.M. (author)
Sequential decision making, commonly formalized as Markov Decision Process (MDP) optimization, is an important challenge in artificial intelligence. Two key approaches to this problem are reinforcement learning (RL) and planning. This survey is an integration of both fields, better known as model-based reinforcement learning. Model-based RL...
review 2023
Moerland, T.M. (author), Broekens, D.J. (author), Plaat, Aske (author), Jonker, C.M. (author)
Sequential decision making, commonly formalized as optimization of a Markov Decision Process, is a key challenge in artificial intelligence. Two successful approaches to MDP optimization are reinforcement learning and planning, which both largely have their own research communities. However, if both research fields solve the same problem,...
journal article 2022
Moerland, T.M. (author), Deichler, Anna (author), Baldi, S. (author), Broekens, D.J. (author), Jonker, C.M. (author)
Planning and reinforcement learning are two key approaches to sequential decision making. Multi-step approximate real-time dynamic programming, a recently successful algorithm class of which AlphaZero [Silver et al., 2018] is an example, combines both by nesting planning within a learning loop. However, the combination of planning and learning...
book chapter 2020
Moerland, T.M. (author), Broekens, D.J. (author), Jonker, C.M. (author)
This article provides the first survey of computational models of emotion in reinforcement learning (RL) agents. The survey focuses on agent/robot emotions, and mostly ignores human user emotions. Emotions are recognized as functional in decision-making by influencing motivation and action selection. Therefore, computational emotion models are...
journal article 2018
Moerland, T.M. (author), Broekens, D.J. (author), Jonker, C.M. (author)
In this paper we study how to learn stochastic, multimodal transition dynamics in reinforcement learning (RL) tasks. We focus on evaluating transition function estimation, while we defer planning over this model to future work. Stochasticity is a fundamental property of many task environments. However, discriminative function approximators have...
conference paper 2017
Moerland, T.M. (author), Broekens, D.J. (author), Jonker, C.M. (author)
This paper studies directed exploration for reinforcement learning agents by tracking uncertainty about the value of each available action. We identify two sources of uncertainty that are relevant for exploration. The first originates from limited data (parametric uncertainty), while the second originates from the distribution of the returns...
conference paper 2017