Model-based Reinforcement Learning

A Survey

Review (2023)
Author(s)

T.M. Moerland (Universiteit Leiden, TU Delft - Interactive Intelligence)

Joost Broekens (Universiteit Leiden, TU Delft - Interactive Intelligence)

Aske Plaat (Universiteit Leiden)

Catholijn M. Jonker (Universiteit Leiden, TU Delft - Interactive Intelligence)

Research Group
Interactive Intelligence
Copyright
© 2023 T.M. Moerland, D.J. Broekens, A. Plaat, C.M. Jonker
DOI related publication
https://doi.org/10.1561/2200000086
Publication Year
2023
Language
English
Issue number
1
Volume number
16
Pages (from-to)
1-118
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Sequential decision making, commonly formalized as Markov Decision Process (MDP) optimization, is an important challenge in artificial intelligence. Two key approaches to this problem are reinforcement learning (RL) and planning. This survey covers the integration of both fields, better known as model-based reinforcement learning. Model-based RL has two main steps. First, we systematically cover approaches to dynamics model learning, including challenges such as dealing with stochasticity, uncertainty, partial observability, and temporal abstraction. Second, we present a systematic categorization of planning-learning integration, covering aspects such as where to start planning, what budgets to allocate to planning and real data collection, how to plan, and how to integrate planning into the learning and acting loop. After these two sections, we also discuss implicit model-based RL as an end-to-end alternative to explicit model learning and planning, and we cover the potential benefits of model-based RL. Along the way, the survey also draws connections to several related RL fields, such as hierarchical RL and transfer learning. Altogether, the survey presents a broad conceptual overview of the combination of planning and learning for MDP optimization.
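
To make the two steps concrete, below is a minimal, hypothetical sketch of a Dyna-style model-based RL loop in Python: real transitions both update the value function directly and train a one-step dynamics model, which is then used to generate additional simulated planning updates. The environment API (reset/step), the tabular representation, and all hyperparameters are illustrative assumptions, not the survey's own method.

    import random
    from collections import defaultdict

    def dyna_q(env, n_actions, episodes=100, alpha=0.1, gamma=0.95,
               epsilon=0.1, planning_steps=10):
        """Tabular Dyna-Q sketch: direct RL from real experience plus
        planning updates from a learned one-step dynamics model."""
        Q = defaultdict(float)   # Q[(s, a)]: action-value estimates
        model = {}               # model[(s, a)] = (r, s'): learned dynamics

        for _ in range(episodes):
            s = env.reset()      # assumed API: reset() -> state
            done = False
            while not done:
                # epsilon-greedy action selection on current value estimates
                if random.random() < epsilon:
                    a = random.randrange(n_actions)
                else:
                    a = max(range(n_actions), key=lambda a_: Q[(s, a_)])
                s2, r, done = env.step(a)   # assumed API: step(a) -> (s', r, done)

                # Step 1 -- model learning: store the observed transition
                model[(s, a)] = (r, s2)

                # direct RL: one Q-learning update from the real transition
                target = r if done else r + gamma * max(Q[(s2, a_)] for a_ in range(n_actions))
                Q[(s, a)] += alpha * (target - Q[(s, a)])

                # Step 2 -- planning: extra updates from simulated transitions
                # (terminal transitions are bootstrapped too; a simplification)
                for _ in range(planning_steps):
                    (ps, pa), (pr, ps2) = random.choice(list(model.items()))
                    ptarget = pr + gamma * max(Q[(ps2, a_)] for a_ in range(n_actions))
                    Q[(ps, pa)] += alpha * (ptarget - Q[(ps, pa)])

                s = s2
        return Q

Setting planning_steps to 0 recovers plain model-free Q-learning; raising it trades extra computation per real step for better sample efficiency, which is one of the core motivations for model-based RL discussed in the survey.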

Files

2200000086.pdf
(pdf | 2.1 MB)