Beyond Local Nash Equilibria for Adversarial Networks

Conference Paper (2018)
Author(s)

FA Oliehoek (TU Delft - Interactive Intelligence)

Rahul Savani (University of Liverpool)

Jose Gallego (Universiteit van Amsterdam)

Elise van der Pol (Universiteit van Amsterdam)

Roderich Gross (University of Sheffield)

Research Group
Interactive Intelligence
Copyright
© 2018 F.A. Oliehoek, Rahul Savani, Jose Gallego, Elise van der Pol, Roderich Gross
Publication Year
2018
Bibliographical Note
Accepted author manuscript
Pages (from-to)
1-15
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Save for some special cases, current training methods for Generative Adversarial Networks (GANs) are at best guaranteed to converge to a 'local Nash equilibrium' (LNE). Such LNEs, however, can be arbitrarily far from an actual Nash equilibrium (NE), which implies that there are no guarantees on the quality of the found generator or classifier. This paper proposes to model GANs explicitly as finite games in mixed strategies, thereby ensuring that every LNE is an NE. We use the Parallel Nash Memory as a solution method, which is proven to monotonically converge to a resource-bounded Nash equilibrium. We empirically demonstrate that our method is less prone to typical GAN problems such as mode collapse and produces solutions that are less exploitable than those produced by GANs and MGANs.
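As a rough illustration of the exploitability notion referred to above (not the paper's implementation), the minimal sketch below sets up a hypothetical finite two-player zero-sum matrix game in mixed strategies and measures how much either player could gain by switching to a pure best response. The payoff matrix, the uniform strategies, and the generator/discriminator labels are illustrative assumptions only.

```python
# Minimal sketch, assuming a hypothetical finite two-player zero-sum game.
# It illustrates exploitability of mixed strategies: exploitability is 0
# exactly when the strategy profile is a Nash equilibrium of the matrix game.

import numpy as np

# A[i, j]: payoff to the row player ("generator" role) when the row player
# uses pure strategy i and the column player ("discriminator" role) uses
# pure strategy j. The values below are made up for illustration.
A = np.array([
    [ 0.0,  1.0, -1.0],
    [-1.0,  0.0,  1.0],
    [ 1.0, -1.0,  0.0],
])

def exploitability(A, x, y):
    """Total gain available to both players from deviating to a pure best
    response against the opponent's mixed strategy."""
    value = x @ A @ y
    row_gain = A @ y        # row player's payoff per pure strategy vs y
    col_gain = -(x @ A)     # column player's payoff per pure strategy vs x
    return (row_gain.max() - value) + (col_gain.max() + value)

# Uniform mixing is the equilibrium of this rock-paper-scissors-like game,
# so its exploitability is ~0; a pure strategy profile is highly exploitable.
uniform = np.ones(3) / 3
print(exploitability(A, uniform, uniform))                      # ~0.0
print(exploitability(A, np.eye(3)[0], np.eye(3)[0]))            # 2.0
```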

Files

Oliehoek18benelearn.pdf
(pdf | 1.37 MB)
License info not available