MDP homomorphic networks

Group symmetries in reinforcement learning

Journal Article (2020)
Author(s)

Elise van der Pol (Universiteit van Amsterdam)

Daniel E. Worrall (Universiteit van Amsterdam)

Herke van Hoof (Universiteit van Amsterdam)

F.A. Oliehoek (TU Delft - Interactive Intelligence)

Max Welling (Universiteit van Amsterdam)

Research Group
Interactive Intelligence
Copyright
© 2020 Elise van der Pol, Daniel E. Worrall, Herke van Hoof, F.A. Oliehoek, Max Welling
Publication Year
2020
Language
English
Volume number
2020-December
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This paper introduces MDP homomorphic networks for deep reinforcement learning. MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP. Current approaches to deep reinforcement learning do not usually exploit knowledge about such structure. By building this prior knowledge into policy and value networks using an equivariance constraint, we can reduce the size of the solution space. We specifically focus on group-structured symmetries (invertible transformations). Additionally, we introduce an easy method for constructing equivariant network layers numerically, so the system designer need not solve the constraints by hand, as is typically done. We construct MDP homomorphic MLPs and CNNs that are equivariant under either a group of reflections or rotations. We show that such networks converge faster than unstructured baselines on CartPole, a grid world and Pong.
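The abstract mentions a numerical method for constructing equivariant layers without solving the equivariance constraint by hand. A minimal sketch of that general idea (a hypothetical illustration, not the authors' code) is to sample random weight matrices, symmetrize them over the group so they satisfy the constraint ρ_out(g) W = W ρ_in(g), and extract a basis of the equivariant weight space via SVD:

```python
import numpy as np

def equivariant_basis(reps_in, reps_out, n_samples=100, tol=1e-6, seed=0):
    """Numerically find a basis for weight matrices W satisfying
    reps_out[g] @ W == W @ reps_in[g] for every group element g.
    (Illustrative sketch; function name and parameters are assumptions.)"""
    rng = np.random.default_rng(seed)
    d_out, d_in = reps_out[0].shape[0], reps_in[0].shape[0]
    samples = []
    for _ in range(n_samples):
        W = rng.standard_normal((d_out, d_in))
        # Symmetrize: averaging over the group projects W onto the
        # equivariant subspace.
        W_bar = sum(np.linalg.inv(ro) @ W @ ri
                    for ro, ri in zip(reps_out, reps_in)) / len(reps_in)
        samples.append(W_bar.ravel())
    # Right singular vectors with non-negligible singular values span
    # the space of equivariant weight matrices.
    _, s, vt = np.linalg.svd(np.stack(samples))
    rank = int(np.sum(s > tol * s[0]))
    return vt[:rank].reshape(rank, d_out, d_in)

# Toy example: the reflection group {I, flip} acting on R^2 by
# swapping coordinates. Equivariant 2x2 matrices have the form
# [[a, b], [b, a]], a two-dimensional space.
flip = np.array([[0.0, 1.0], [1.0, 0.0]])
reps = [np.eye(2), flip]
basis = equivariant_basis(reps, reps)
```

Any linear layer built from this basis commutes with the group action by construction; the paper applies the same principle to MLP and CNN layers.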
