Safe Policy Improvement with an Estimated Baseline Policy

Conference Paper (2020)
Author(s)

Thiago D. Simão (TU Delft - Algorithmics)

Romain Laroche (Microsoft Research (MSR))

Rémi Tachet des Combes (Microsoft Research)

Research Group
Algorithmics
Copyright
© 2020 T. D. Simão, Romain Laroche, Rémi Tachet des Combes
Publication Year
2020
Language
English
Bibliographical Note
Virtual/online event due to COVID-19
Pages (from-to)
1269–1277
ISBN (print)
9781450375184
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Previous work has shown the unreliability of existing algorithms in the batch Reinforcement Learning setting and proposed a theoretically-grounded fix, Safe Policy Improvement with Baseline Bootstrapping (SPIBB): reproduce the baseline policy on the uncertain state-action pairs in order to control the variance of the trained policy's performance. However, in many real-world applications such as dialogue systems, pharmaceutical tests or crop management, data is collected under human supervision and the baseline remains unknown. In this paper, we apply SPIBB algorithms with a baseline estimate built from the data. We formally show safe policy improvement guarantees over the true baseline even without direct access to it. Our empirical experiments on finite and continuous-state tasks support the theoretical findings: the method shows little loss of performance compared with SPIBB when the baseline policy is given and, more importantly, drastically and significantly outperforms competing algorithms, both in safe policy improvement and in average performance.
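
As a rough illustration of the mechanism the abstract describes (estimating the baseline from the batch, then constraining the improved policy to that estimate on uncertain state-action pairs), the following Python sketch assumes a tabular setting with state-action visit counts, action values estimated from the batch, and a count threshold; the names (counts, q, n_wedge, spibb_greedy_step) are illustrative assumptions and not taken from the paper's implementation.

import numpy as np


def estimate_baseline(counts):
    """Maximum-likelihood estimate of the behaviour (baseline) policy from
    state-action visit counts; unvisited states fall back to a uniform policy."""
    totals = counts.sum(axis=1, keepdims=True)
    n_actions = counts.shape[1]
    return np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / n_actions)


def spibb_greedy_step(q, counts, pi_b_hat, n_wedge):
    """One SPIBB-style policy improvement step: copy the estimated baseline on
    state-action pairs seen fewer than n_wedge times (the "bootstrapped" set),
    and give the remaining probability mass to the best sufficiently-observed
    action according to q."""
    n_states, _ = q.shape
    pi = np.zeros_like(pi_b_hat)
    for s in range(n_states):
        uncertain = counts[s] < n_wedge
        pi[s, uncertain] = pi_b_hat[s, uncertain]      # keep the baseline estimate where data is scarce
        certain = np.flatnonzero(~uncertain)
        if certain.size > 0:
            free_mass = 1.0 - pi[s, uncertain].sum()   # mass the baseline put on well-observed actions
            best = certain[np.argmax(q[s, certain])]
            pi[s, best] += free_mass                   # act greedily only where the estimate is trusted
        else:
            pi[s] = pi_b_hat[s]                        # no trusted action: reproduce the baseline estimate
    return pi

In this sketch the only difference from SPIBB with a known baseline is that pi_b_hat comes from estimate_baseline(counts) rather than being given, which is the setting the paper analyses.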

Files

3398761.3398908.pdf
(pdf | 2.49 MB)
License info not available