Learning Multimodal Transition Dynamics for Model-Based Reinforcement Learning

Conference Paper (2017)
Author(s)

Thomas M. Moerland (TU Delft - Interactive Intelligence)

Joost Broekens (TU Delft - Interactive Intelligence)

Catholijn M. Jonker (TU Delft - Interactive Intelligence)

Publication Year
2017
Language
English
Research Group
Interactive Intelligence
Pages (from-to)
1-18
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or any part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In this paper we study how to learn stochastic, multimodal transition dynamics in reinforcement learning (RL) tasks. We focus on evaluating transition function estimation, while deferring planning over this model to future work. Stochasticity is a fundamental property of many task environments, yet discriminative function approximators have difficulty estimating multimodal stochasticity. In contrast, deep generative models do capture complex high-dimensional outcome distributions. First, we discuss why, amongst such models, conditional variational inference (VI) is theoretically most appealing for model-based RL. Subsequently, we compare different VI models on their ability to learn complex stochasticity on simulated functions, as well as on a typical RL gridworld with multimodal dynamics. Results show that VI successfully predicts multimodal outcomes, but also robustly ignores multimodality for the deterministic parts of the transition dynamics. In summary, we present a robust method to learn multimodal transitions using function approximation, a key prerequisite for model-based RL in stochastic domains.
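The core technique the abstract describes, fitting a conditional variational model of the transition distribution p(s' | s, a) whose latent variable selects among outcome modes, can be sketched as below. This is an illustrative sketch, not the authors' implementation: the PyTorch framing, network sizes, latent dimension, and the standard-normal prior on z are all assumptions made here for concreteness.

```python
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    """Conditional VAE for p(s' | s, a): a latent z captures which mode
    of a multimodal next-state distribution is realised."""

    def __init__(self, state_dim, action_dim, latent_dim=4, hidden=64):
        super().__init__()
        cond_dim = state_dim + action_dim
        # Inference network q(z | s, a, s'): outputs mean and log-variance.
        self.encoder = nn.Sequential(
            nn.Linear(cond_dim + state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),
        )
        # Generative network p(s' | s, a, z): predicts the next-state mean.
        self.decoder = nn.Sequential(
            nn.Linear(cond_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )
        self.latent_dim = latent_dim

    def forward(self, s, a, s_next):
        # Encode (s, a, s') into an approximate posterior over z.
        mu, logvar = self.encoder(torch.cat([s, a, s_next], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        s_next_hat = self.decoder(torch.cat([s, a, z], -1))
        # Negative ELBO: reconstruction error + KL(q(z | s, a, s') || N(0, I)).
        recon = ((s_next_hat - s_next) ** 2).sum(-1).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon + kl

    @torch.no_grad()
    def sample(self, s, a):
        # At test time, draw z from the prior: repeated samples for the
        # same (s, a) land in different modes of p(s' | s, a).
        z = torch.randn(s.shape[0], self.latent_dim)
        return self.decoder(torch.cat([s, a, z], -1))
```

Under this framing, the behaviour the abstract reports corresponds to the decoder using z to separate modes where the dynamics are stochastic, while learning to ignore z for transitions that are deterministic.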

Files

SURL_2017_paper_6.pdf
(pdf | 0.66 MB)
License info not available