Training Generative Adversarial Networks via Stochastic Nash Games

Journal Article (2023)
Author(s)

B. Franci (TU Delft - Team Sergio Grammatico)

S. Grammatico (TU Delft - Team Bart De Schutter, TU Delft - Team Sergio Grammatico)

Research Group
Team Sergio Grammatico
Copyright
© 2023 B. Franci, S. Grammatico
DOI related publication
https://doi.org/10.1109/TNNLS.2021.3105227
More Info
Publication Year
2023
Language
English
Issue number
3
Volume number
34
Pages (from-to)
1319-1328
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Generative adversarial networks (GANs) are a class of generative models with two antagonistic neural networks: a generator and a discriminator. These two neural networks compete against each other through an adversarial process that can be modeled as a stochastic Nash equilibrium problem. Since the associated training process is challenging, it is fundamental to design reliable algorithms to compute an equilibrium. In this article, we propose a stochastic relaxed forward-backward (SRFB) algorithm for GANs, and we show convergence to an exact solution when an increasing number of data samples is available. We also show convergence of an averaged variant of the SRFB algorithm to a neighborhood of the solution when only a few samples are available. In both cases, convergence is guaranteed when the pseudogradient mapping of the game is monotone. This assumption is among the weakest known in the literature. Moreover, we apply our algorithm to the image generation problem.
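
The abstract describes the SRFB update only at a high level. As a non-authoritative sketch, the snippet below implements a generic stochastic relaxed forward-backward step on a toy monotone two-player game; the names srfb_step and noisy_pseudograd, the step size lam, the relaxation parameter delta, and the bilinear toy game are illustrative assumptions, not necessarily the paper's exact algorithm (which, among other things, also covers constrained and averaged variants).

```python
import numpy as np

def srfb_step(x, x_bar, stoch_pseudograd, lam=0.2, delta=0.5):
    """One generic stochastic relaxed forward-backward (SRFB-style) step.

    x                : current iterate (stacked generator/discriminator parameters)
    x_bar            : relaxed (averaged) iterate carried across iterations
    stoch_pseudograd : callable returning a stochastic estimate of the
                       pseudogradient mapping of the game at x
    lam, delta       : illustrative step size and relaxation parameter
    """
    # Relaxation step: convex combination of the previous relaxed iterate and x.
    x_bar_new = (1.0 - delta) * x_bar + delta * x
    # Forward step along the negative stochastic pseudogradient.
    # In a constrained setting, a projection onto the local constraint set would follow.
    x_new = x_bar_new - lam * stoch_pseudograd(x)
    return x_new, x_bar_new

# Toy usage on a bilinear two-player game (hypothetical stand-in for a GAN):
# player 1 controls u, player 2 controls v, and the pseudogradient
# F(u, v) = (v, -u) is monotone with unique equilibrium (0, 0).
rng = np.random.default_rng(0)

def noisy_pseudograd(x, sigma=0.05):
    u, v = x
    return np.array([v, -u]) + sigma * rng.standard_normal(2)

x = np.array([1.0, 1.0])
x_bar = x.copy()
for _ in range(2000):
    x, x_bar = srfb_step(x, x_bar, noisy_pseudograd)

print(x, x_bar)  # both should land in a neighborhood of the equilibrium (0, 0)
```

On this kind of bilinear game, a plain simultaneous gradient step is known to diverge; the relaxation term (the convex combination with x_bar) is what stabilizes the iteration under mere monotonicity of the pseudogradient, which is the setting the abstract emphasizes.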

Files

Training_Generative_Adversaria... (pdf | 1.88 MB)
- Embargo expired on 26-02-2022
License info not available