Transient non-stationarity and generalisation in deep reinforcement learning

Conference Paper (2021)
Author(s)

Maximilian Igl (University of Oxford)

Gregory Farquhar (University of Oxford)

Jelena Luketina (University of Oxford)

J.W. Böhmer (TU Delft - Algorithmics)

Shimon Whiteson (University of Oxford)

Research Group
Algorithmics
Copyright
© 2021 Maximilian Igl, Gregory Farquhar, Jelena Luketina, J.W. Böhmer, Shimon Whiteson
Publication Year
2021
Language
English
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Non-stationarity can arise in Reinforcement Learning (RL) even in stationary environments. For example, most RL algorithms collect new data throughout training, using a non-stationary behaviour policy. Due to the transience of this non-stationarity, it is often not explicitly addressed in deep RL and a single neural network is continually updated. However, we find evidence that neural networks exhibit a memory effect, where these transient non-stationarities can permanently impact the latent representation and adversely affect generalisation performance. Consequently, to improve generalisation of deep RL agents, we propose Iterated Relearning (ITER). ITER augments standard RL training by repeated knowledge transfer of the current policy into a freshly initialised network, which thereby experiences less non-stationarity during training. Experimentally, we show that ITER improves performance on the challenging generalisation benchmarks ProcGen and Multiroom.
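
For illustration, the sketch below shows the kind of knowledge-transfer step the abstract describes: a freshly initialised student network is trained to match the current (teacher) policy and value function on observations gathered under the behaviour policy, after which the student replaces the teacher and standard RL training resumes. This is a minimal sketch only, assuming a PyTorch actor-critic model and a KL-plus-value-regression distillation loss; the architecture, loss weights, and the names ActorCritic and distill_step are illustrative assumptions, not the paper's exact implementation.

    # Minimal sketch of an ITER-style distillation phase (illustrative assumptions,
    # not the paper's exact objectives or schedule).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class ActorCritic(nn.Module):
        """Small policy/value network over vector observations (illustrative only)."""

        def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
            self.policy_head = nn.Linear(hidden, n_actions)
            self.value_head = nn.Linear(hidden, 1)

        def forward(self, obs: torch.Tensor):
            h = self.body(obs)
            return self.policy_head(h), self.value_head(h).squeeze(-1)


    def distill_step(teacher: ActorCritic, student: ActorCritic,
                     obs: torch.Tensor, optimiser: torch.optim.Optimizer,
                     value_coef: float = 0.5) -> float:
        """One knowledge-transfer update: match the teacher's policy and value
        predictions on a batch of observations from the current behaviour policy."""
        with torch.no_grad():
            teacher_logits, teacher_values = teacher(obs)
        student_logits, student_values = student(obs)

        # KL divergence from the teacher's action distribution to the student's.
        policy_loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                               F.softmax(teacher_logits, dim=-1),
                               reduction="batchmean")
        # Regress the student's value estimates onto the teacher's.
        value_loss = F.mse_loss(student_values, teacher_values)

        loss = policy_loss + value_coef * value_loss
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        return loss.item()


    if __name__ == "__main__":
        obs_dim, n_actions = 8, 4
        teacher = ActorCritic(obs_dim, n_actions)   # current, non-stationary-trained network
        student = ActorCritic(obs_dim, n_actions)   # freshly initialised network
        optimiser = torch.optim.Adam(student.parameters(), lr=3e-4)

        for _ in range(100):                        # distillation phase
            obs = torch.randn(32, obs_dim)          # stand-in for on-policy observations
            distill_step(teacher, student, obs, optimiser)
        # Afterwards, the student would replace the teacher and RL training would continue.

The point of the transfer, as the abstract explains, is that the student only ever sees data generated by the current policy rather than the full non-stationary training history, so its latent representation is expected to suffer less from the memory effect and generalise better.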
