Safe Policy Improvement with Baseline Bootstrapping in Factored Environments

Conference Paper (2019)
Research Group
Algorithmics
DOI
https://doi.org/10.1609/aaai.v33i01.33014967
Publication Year
2019
Language
English
Pages (from-to)
4967-4974
ISBN (electronic)
9781577358091

Abstract

We present a novel safe reinforcement learning algorithm that exploits the factored dynamics of the environment to become less conservative. We focus on problem settings in which a policy is already running and the interaction with the environment is limited. In order to safely deploy an updated policy, it is necessary to provide a confidence level regarding its expected performance. However, algorithms for safe policy improvement might require a large number of past experiences to become confident enough to change the agent's behavior. Factored reinforcement learning, on the other hand, is known to make good use of the data provided: it can achieve better sample complexity by exploiting independence between features of the environment, but it lacks a confidence level. We study how to improve the sample efficiency of the Safe Policy Improvement with Baseline Bootstrapping (SPIBB) algorithm by exploiting the factored structure of the environment. Our main result is a theoretical bound that is linear in the number of parameters of the factored representation instead of the number of states. The empirical analysis shows that our method can improve the policy using a number of samples potentially one order of magnitude smaller than the flat algorithm requires.
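The baseline-bootstrapping idea behind SPIBB can be illustrated with a small tabular sketch: actions whose state-action counts in the batch fall below a threshold keep the baseline policy's probabilities, and only well-estimated actions receive the remaining probability mass. The code below is a minimal sketch under that assumption; the names (`pi_b_spibb_step`, `n_wedge`, `q_hat`) are illustrative and not the authors' code, and it uses flat state-action counts rather than the factored counts that constitute the paper's contribution.

```python
import numpy as np

def pi_b_spibb_step(pi_b, q_hat, counts, n_wedge):
    """One policy-improvement step constrained by baseline bootstrapping (sketch).

    pi_b:    (S, A) baseline policy probabilities
    q_hat:   (S, A) action-value estimates from the batch of past experiences
    counts:  (S, A) number of occurrences of each state-action pair in the batch
    n_wedge: count threshold below which the baseline's probabilities are kept
    """
    n_states, _ = pi_b.shape
    pi_new = np.zeros_like(pi_b)
    for s in range(n_states):
        bootstrapped = counts[s] < n_wedge           # uncertain actions
        # Keep the baseline's probability mass on uncertain actions.
        pi_new[s, bootstrapped] = pi_b[s, bootstrapped]
        free_mass = 1.0 - pi_new[s].sum()
        allowed = np.where(~bootstrapped)[0]
        if allowed.size > 0 and free_mass > 0:
            # Assign the remaining mass to the best well-estimated action.
            best = allowed[np.argmax(q_hat[s, allowed])]
            pi_new[s, best] += free_mass
        else:
            # Every action is uncertain: fall back to the baseline entirely.
            pi_new[s] = pi_b[s]
    return pi_new
```

In the factored setting studied in the paper, the counts would be gathered over the parameters of the factored representation rather than over flat states, which is what allows the threshold to be met with far fewer samples.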


Metadata only record. There are no files for this record.