Teacher-Apprentices RL (TARL): Leveraging complex policy distribution through generative adversarial hypernetwork in reinforcement learning

Journal Article (2023)
Author(s)

Shi Yuan Tang (Nanyang Technological University)

Athirai A. Irissappane (University of Washington)

Frans Oliehoek (TU Delft - Interactive Intelligence)

Jie Zhang (Nanyang Technological University)

Research Group
Interactive Intelligence
Copyright
© 2023 Shi Yuan Tang, Athirai A. Irissappane, F.A. Oliehoek, Jie Zhang
DOI related publication
https://doi.org/10.1007/s10458-023-09606-9
Publication Year
2023
Language
English
Issue number
2
Volume number
37
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Typically, a Reinforcement Learning (RL) algorithm focuses on learning a single deployable policy as the end product. Depending on the initialization method and random seed, learning a single policy can converge to different local optima across runs, especially when the algorithm is sensitive to hyper-parameter tuning. Motivated by the capability of Generative Adversarial Networks (GANs) to learn complex data manifolds, the adversarial training procedure can instead be used to learn a population of well-performing policies. We extend the teacher-student methodology from the Knowledge Distillation field, typically applied to deep neural network prediction tasks, to the RL paradigm. Instead of learning a single compressed student network, an adversarially trained generative model (hypernetwork) is learned to output the network weights of a population of well-performing policy networks, representing a school of apprentices. Our proposed framework, named Teacher-Apprentices RL (TARL), is modular and can be used in conjunction with many existing RL algorithms. We illustrate the performance gain and improved robustness obtained by combining TARL with various types of RL algorithms, including the direct policy search Cross-Entropy Method, Q-learning, Actor-Critic, and policy-gradient-based methods.
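To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of the adversarial hypernetwork setup described in the abstract: a generator maps latent noise to a flat vector of policy-network weights, while a discriminator is trained to separate those samples from the weights of well-performing "teacher" policies. The network sizes, the `Hypernetwork`/`Discriminator` names, and the placeholder teacher-weight batch are all illustrative assumptions; in TARL the teacher weights would come from whichever base RL algorithm is used (CEM, Q-learning, Actor-Critic, or a policy-gradient method).

```python
# Minimal sketch of an adversarially trained hypernetwork over policy weights.
# All shapes and names are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, HIDDEN = 4, 2, 32   # e.g. a CartPole-sized policy
LATENT_DIM = 64
BATCH = 16

def policy_param_count() -> int:
    # Parameters of a 1-hidden-layer policy: obs -> hidden -> act, with biases.
    return (OBS_DIM * HIDDEN + HIDDEN) + (HIDDEN * ACT_DIM + ACT_DIM)

class Hypernetwork(nn.Module):
    """Generator: maps latent noise z to a flat policy weight vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, policy_param_count()),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores weight vectors: teacher weights vs. hypernetwork samples."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(policy_param_count(), 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
    def forward(self, w):
        return self.net(w)

def unflatten_policy(w: torch.Tensor) -> nn.Module:
    """Load one flat weight vector into a concrete, deployable policy net."""
    policy = nn.Sequential(nn.Linear(OBS_DIM, HIDDEN), nn.Tanh(),
                           nn.Linear(HIDDEN, ACT_DIM))
    offset = 0
    for p in policy.parameters():
        n = p.numel()
        p.data.copy_(w[offset:offset + n].view_as(p))
        offset += n
    return policy

G, D = Hypernetwork(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Placeholder: in TARL these would be weights of well-performing teacher
# policies produced by the base RL algorithm.
teacher_weights = torch.randn(BATCH, policy_param_count())

z = torch.randn(BATCH, LATENT_DIM)
fake = G(z)

# Discriminator step: teacher weights -> 1, generated weights -> 0.
d_loss = bce(D(teacher_weights), torch.ones(BATCH, 1)) + \
         bce(D(fake.detach()), torch.zeros(BATCH, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: make generated weights look like teacher weights.
g_loss = bce(D(fake), torch.ones(BATCH, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Any sample from G is one apprentice policy from the learned population.
apprentice = unflatten_policy(G(torch.randn(1, LATENT_DIM))[0])
```

Because the generator outputs whole weight vectors, sampling different latent codes yields a population of distinct apprentice policies rather than a single distilled student, which is what the abstract contrasts with conventional knowledge distillation.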

Files

S10458_023_09606_9.pdf
(pdf | 3.14 MB)
Embargo expired on 30-10-2023
License info not available