In reinforcement learning, an agent's ability to generalize to unseen situations is pivotal to its success. This thesis introduces two novel methods that aim to enhance an agent's generalizability. Both methods rest on the idea that a more diverse replay buffer improves an agent's ability to generalize. The first uses the agent's own exploration strategies to reach interesting states; the second reaches states even further from the agent's experience by employing an additional goal-conditioned agent. Both methods improve adaptability without relying on domain-specific knowledge and show promising results.