A Unifying Framework for Reinforcement Learning and Planning

Journal Article (2022)
Author(s)

Thomas M. Moerland (Universiteit Leiden)

Joost Broekens (Universiteit Leiden)

Aske Plaat (Universiteit Leiden)

Catholijn M. Jonker (TU Delft - Interactive Intelligence, Universiteit Leiden)

DOI
https://doi.org/10.3389/frai.2022.908353 (final published version)
Publication Year
2022
Language
English
Journal title
Frontiers in Artificial Intelligence
Volume number
5
Article number
908353
Collections
Institutional Repository
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Sequential decision making, commonly formalized as optimization of a Markov Decision Process (MDP), is a key challenge in artificial intelligence. Two successful approaches to MDP optimization are reinforcement learning and planning, each of which has largely developed its own research community. However, since both fields solve the same problem, we may be able to disentangle the common factors in their solution approaches. Therefore, this paper presents a unifying algorithmic framework for reinforcement learning and planning (FRAP), which identifies the underlying dimensions on which MDP planning and learning algorithms have to decide. At the end of the paper, we compare a variety of well-known planning, model-free, and model-based RL algorithms along these dimensions. Altogether, the framework may help provide deeper insight into the algorithmic design space of planning and reinforcement learning.
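To make the MDP formalization mentioned in the abstract concrete, the sketch below runs value iteration on a tiny, invented two-state MDP. This example is illustrative only and is not taken from the paper; the states, actions, rewards, and discount factor are all assumptions chosen for demonstration.

```python
# Illustrative only: minimal value iteration on an invented two-state MDP.
# transitions[s][a] = list of (probability, next_state, reward) outcomes.
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(1.0, 1, 1.0)]},
    1: {"stay": [(1.0, 1, 0.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor (assumed value)

def value_iteration(transitions, gamma, tol=1e-8):
    """Iterate the Bellman optimality update until the values converge."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            # Q-value of each action: expected reward plus discounted next-state value.
            q = [sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                 for outcomes in actions.values()]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(transitions, gamma)
```

Here the update is a pure planning computation over a known model; a reinforcement-learning agent would instead estimate the same values from sampled interaction, which is one of the distinctions the paper's framework organizes.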