Model-Based Reinforcement Learning with State Abstraction: A Survey

Conference Paper (2022)
Author(s)

R.A.N. Starre (TU Delft - Interactive Intelligence)

M. Loog (TU Delft - Pattern Recognition and Bioinformatics)

F.A. Oliehoek (TU Delft - Interactive Intelligence)

Research Group
Interactive Intelligence
Copyright
© 2022 R.A.N. Starre, M. Loog, F.A. Oliehoek
DOI
https://doi.org/10.1007/978-3-031-39144-6_9
Publication Year
2022
Language
English
Pages (from-to)
133–148
ISBN (print)
9783031391439
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Model-based reinforcement learning methods are promising because they can increase sample efficiency while also improving generalizability. Learning can be made more efficient still through state abstraction, which yields more compact models. Model-based reinforcement learning has therefore been combined with learning abstract models to profit from both effects. We consider a wide range of state abstractions covered in the literature, from straightforward state aggregation to deep learned representations, and sketch the challenges that arise when combining model-based reinforcement learning with abstraction. We further show how various methods deal with these challenges and point to open questions and opportunities for further research.
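To illustrate the simplest form of abstraction the abstract mentions, state aggregation, the sketch below estimates a maximum-likelihood abstract transition model and reward function from ground-state experience. This is an illustrative example, not code from the survey: the function name `learn_abstract_model` and the abstraction function `phi` (mapping each ground state to its aggregated abstract state) are assumptions introduced here.

```python
from collections import defaultdict

def learn_abstract_model(transitions, phi):
    """Estimate an abstract transition model P(z'|z,a) and reward R(z,a)
    from ground-state experience, under a state-aggregation abstraction.

    transitions: iterable of (s, a, r, s_next) ground-state tuples.
    phi: maps a ground state s to its abstract (aggregated) state z.
    """
    counts = defaultdict(lambda: defaultdict(int))  # (z, a) -> z' -> count
    reward_sum = defaultdict(float)                 # (z, a) -> total reward
    visits = defaultdict(int)                       # (z, a) -> visit count

    for s, a, r, s_next in transitions:
        z, z_next = phi(s), phi(s_next)
        counts[(z, a)][z_next] += 1
        reward_sum[(z, a)] += r
        visits[(z, a)] += 1

    # Maximum-likelihood estimates at the abstract level.
    P = {key: {z_next: n / visits[key] for z_next, n in nexts.items()}
         for key, nexts in counts.items()}
    R = {key: reward_sum[key] / visits[key] for key in visits}
    return P, R

# Hypothetical usage: aggregate ground states in pairs (phi(s) = s // 2),
# so experience from states 0 and 1 pools into abstract state 0.
transitions = [(0, 'a', 1.0, 1), (2, 'a', 0.0, 1), (1, 'a', 1.0, 3)]
P, R = learn_abstract_model(transitions, lambda s: s // 2)
```

Because counts from all ground states in an aggregate are pooled, the abstract model needs fewer samples per parameter than a ground-level model; the trade-off, discussed in the survey, is the bias introduced when aggregated states are not truly equivalent.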

Files

978_3_031_39144_6_9.pdf (pdf, 0.577 MB)
Embargo expired on 26-10-2023