What model does MuZero learn?

Conference Paper (2024)
Author(s)

J. He (TU Delft - Sequential Decision Making)

Thomas M. Moerland (Universiteit Leiden)

J.A. de Vries (TU Delft - Sequential Decision Making)

F.A. Oliehoek (TU Delft - Sequential Decision Making)

Research Group
Sequential Decision Making
DOI
https://doi.org/10.3233/FAIA240666
Publication Year
2024
Language
English
Pages (from-to)
1599-1606
ISBN (electronic)
9781643685489
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Model-based reinforcement learning (MBRL) has drawn considerable interest in recent years, given its promise to improve sample efficiency. Moreover, deep learning makes it possible to learn compact and generalizable models from data. In this work, we study MuZero, a state-of-the-art deep model-based reinforcement learning algorithm that distinguishes itself from existing algorithms by learning a value-equivalent model. Despite MuZero’s success and impact in the field of MBRL, the existing literature has not thoroughly addressed why MuZero performs so well in practice. Specifically, there is a lack of in-depth investigation into the value-equivalent model learned by MuZero and its effectiveness for model-based credit assignment and policy improvement, which are vital for achieving sample efficiency in MBRL. To fill this gap, we explore two fundamental questions through empirical analysis: 1) to what extent does MuZero achieve its learning objective of a value-equivalent model, and 2) how useful are these models for policy improvement? Among other insights, we conclude that MuZero’s learned model cannot effectively generalize to evaluate unseen policies. This limitation constrains how much planning with the model can further improve the current policy.
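
For context, “value equivalence” here refers to MuZero’s training objective as introduced by Schrittwieser et al. (2020): rather than reconstructing observations, the learned dynamics model is unrolled K steps and trained only to match reward, value, and policy targets. A sketch of that objective, following the notation of the original MuZero paper:

\ell_t(\theta) = \sum_{k=0}^{K} \Big[ \ell^{r}\big(u_{t+k},\, r_t^{k}\big) + \ell^{v}\big(z_{t+k},\, v_t^{k}\big) + \ell^{p}\big(\pi_{t+k},\, p_t^{k}\big) \Big] + c\,\lVert\theta\rVert^2

where r_t^{k}, v_t^{k}, and p_t^{k} are the reward, value, and policy predicted after unrolling the learned dynamics k steps from the observation at time t, and u_{t+k}, z_{t+k}, and \pi_{t+k} are the observed reward, the bootstrapped value target, and the MCTS visit-count policy. Because the model is supervised only on quantities generated by the data-collecting policy, it is constrained only along that policy’s trajectories; the abstract’s conclusion that the model fails to generalize to unseen policies concerns exactly this restriction.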