Quantitative evaluation of Generative Adversarial Networks and improved training techniques


Abstract

Generative adversarial networks (GANs) are a class of generative models whose goal is to learn from training data and then generate new data with similar characteristics. Despite the wide use of GANs, a quantitative method for evaluating their performance is still lacking. In the current work, we designed a series of artificial datasets, consisting of images with one or two spheres in arbitrary locations, to evaluate the performance of GANs along two dimensions: quality, which measures the visual quality of the generated images, and diversity, which measures a GAN's ability to generate samples covering the different modes of the real data distribution. We further explored the validity of an alternative evaluation metric, the Wasserstein distance, as an indicator of quality and diversity for Wasserstein GANs. In addition, we investigated two techniques that improve the performance of GANs, namely the addition of regularization terms and a 'Smooth-to-Sharp' training algorithm, and validated their efficacy on the MNIST dataset. Our proposed quantitative evaluation method can help researchers select better models and promote further improvements in GAN performance. Future work will increase the complexity of the artificial datasets, validate the improvement techniques on more complex real-world datasets, and further investigate the use of the estimated Wasserstein distance.
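
The sketch below illustrates how an artificial dataset of the kind described above might be generated; the image size, sphere radius, and the one-versus-two-sphere split are our own illustrative assumptions, not the exact settings used in this work.

# Hypothetical sketch: synthetic images containing one or two filled
# spheres (rendered as discs) at random positions.
# Image size, radius, and the 50/50 one-vs-two-sphere split are assumptions.
import numpy as np

def make_sphere_image(size=64, radius=8, n_spheres=1, rng=None):
    """Return a single-channel image with `n_spheres` discs at random centres."""
    rng = rng or np.random.default_rng()
    img = np.zeros((size, size), dtype=np.float32)
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(n_spheres):
        # Keep each centre far enough from the border that the disc fits.
        cy, cx = rng.integers(radius, size - radius, size=2)
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 1.0
    return img

def make_dataset(n_samples=10_000, seed=0):
    """Assumed split: half the samples contain one sphere, half contain two."""
    rng = np.random.default_rng(seed)
    return np.stack([
        make_sphere_image(n_spheres=1 if i % 2 == 0 else 2, rng=rng)
        for i in range(n_samples)
    ])

if __name__ == "__main__":
    data = make_dataset(n_samples=16)
    print(data.shape)  # (16, 64, 64)

Because the generative process is fully known, the fraction of generated images with well-formed spheres can serve as a quality score, and the coverage of sphere counts and positions as a diversity score.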