The Role of Diverse Replay for Generalisation in Reinforcement Learning

Preprint (2023)
Author(s)

M.R. Weltevrede (TU Delft - Algorithmics)

M.T.J. Spaan (TU Delft - Algorithmics)

Wendelin Böhmer (TU Delft - Algorithmics)

Research Group
Algorithmics
Publication Year
2023
Language
English

Abstract

In reinforcement learning (RL), the exploration strategy and replay buffer are key components of many algorithms. They regulate which environment data is collected and trained on, and have been studied extensively in the RL literature. In this paper, we investigate the impact of these components on generalisation in multi-task RL. We examine the hypothesis that collecting and training on more diverse data from the training environments improves zero-shot generalisation to new tasks. We motivate mathematically, and show empirically, that generalisation to tasks that are "reachable" during training improves as the diversity of transitions in the replay buffer increases. Furthermore, we show empirically that the same strategy also improves generalisation to similar but "unreachable" tasks, which may be due to improved generalisation of the learned latent representations.
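To make the abstract's central idea concrete, the sketch below shows one simple way a replay buffer could be biased toward diverse transitions: when full, a new transition replaces the stored entry whose state is most redundant (closest to its nearest neighbour). This is an illustrative assumption, not the authors' method; the class name, distance metric, and eviction rule are all hypothetical.

```python
import random
import numpy as np

class DiverseReplayBuffer:
    """Hypothetical replay buffer that favours diversity: when full, a new
    transition replaces the stored transition whose state is closest to its
    nearest neighbour, keeping the stored states spread out."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.states = []       # observation part of each stored transition
        self.transitions = []  # full (s, a, r, s') tuples

    def add(self, state, action, reward, next_state):
        state = np.asarray(state, dtype=float)
        if len(self.transitions) < self.capacity:
            self.states.append(state)
            self.transitions.append((state, action, reward, next_state))
            return
        # Pairwise distances between stored states; the entry with the
        # smallest nearest-neighbour distance is the most redundant one.
        X = np.stack(self.states)
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        crowded = int(d.min(axis=1).argmin())
        # Replace it only if the new state would be less redundant.
        if np.linalg.norm(X - state, axis=1).min() > d[crowded].min():
            self.states[crowded] = state
            self.transitions[crowded] = (state, action, reward, next_state)

    def sample(self, batch_size):
        return random.sample(self.transitions,
                             min(batch_size, len(self.transitions)))
```

For example, with capacity 3 and states (0, 0), (0, 0.01), (1, 1), adding (2, 2) evicts one of the two near-duplicate states near the origin, so the buffer retains a more diverse set than first-in-first-out eviction would.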

No files available. Metadata-only record.