Bayesian Reinforcement Learning in Factored POMDPs

Conference Paper (2019)
Author(s)

Sammie Katt (Northeastern University)

FA Oliehoek (TU Delft - Interactive Intelligence)

Christopher Amato (Northeastern University)

Research Group
Interactive Intelligence
Copyright
© 2019 Sammie Katt, F.A. Oliehoek, Christopher Amato
Publication Year
2019
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project, https://www.openaccess.nl/en/you-share-we-take-care. Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
Pages (from-to)
7-15
ISBN (print)
978-1-4503-6309-9
ISBN (electronic)
978-1-5108-9200-2
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Model-based Bayesian Reinforcement Learning (BRL) provides a principled solution to the exploration-exploitation trade-off, but such methods typically assume a fully observable environment. The few Bayesian RL methods that are applicable in partially observable domains, such as the Bayes-Adaptive POMDP (BA-POMDP), scale poorly. To address this issue, we introduce the Factored BA-POMDP model (FBA-POMDP), a framework that learns a compact model of the dynamics by exploiting the underlying structure of a POMDP. The FBA-POMDP framework casts the problem as a planning task, for which we adapt the Monte-Carlo Tree Search planning algorithm and develop a belief tracking method to approximate the joint posterior over the state and model variables. Our empirical results show that this method outperforms a number of BRL baselines, learns efficiently when the factorization is known, and can learn the factorization and the model parameters simultaneously.
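
The abstract describes two algorithmic components: a Monte-Carlo Tree Search planner adapted to the FBA-POMDP and a belief-tracking method that approximates the joint posterior over the hidden state and the dynamics model. The snippet below is a minimal sketch of the second idea only, written as a rejection-sampling particle filter over joint (state, model) particles; the toy tabular model and the names sample_step and belief_update are illustrative assumptions, not the paper's implementation.

```python
import random

def sample_step(model, state, action):
    """Sample (next_state, observation) from a toy tabular dynamics model.

    `model` maps (state, action) to a list of (next_state, observation, prob)
    outcomes; in a Bayes-Adaptive setting, a (possibly factored) model sampled
    from the posterior would play this role.
    """
    outcomes = model[(state, action)]
    r, acc = random.random(), 0.0
    for next_state, obs, p in outcomes:
        acc += p
        if r <= acc:
            return next_state, obs
    return outcomes[-1][0], outcomes[-1][1]

def belief_update(particles, action, observation, n=1000, max_tries=100_000):
    """Approximate the joint posterior over (state, model) after (action, observation).

    Each particle is a (state, model) pair; rejection sampling keeps only
    simulated steps whose observation matches the one actually received.
    """
    new_particles, tries = [], 0
    while len(new_particles) < n and tries < max_tries:
        tries += 1
        state, model = random.choice(particles)            # draw from the current belief
        next_state, obs = sample_step(model, state, action)
        if obs == observation:                             # accept only matching particles
            new_particles.append((next_state, model))
    return new_particles

# Toy usage: two states, one action, observations equal to the next state.
toy_model = {
    (0, 'a'): [(0, 0, 0.7), (1, 1, 0.3)],
    (1, 'a'): [(0, 0, 0.4), (1, 1, 0.6)],
}
belief = [(0, toy_model)] * 100
belief = belief_update(belief, 'a', observation=1, n=100)
```

In the full FBA-POMDP framework the model component of each particle is itself uncertain, so a belief update of this kind also revises the model (its factorization and counts), rather than only the state as in a standard POMDP particle filter.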

Files

P7_katt.pdf
(pdf | 1.68 MB)
Embargo expired on 08-11-2019