Iterative training with human-rated images to improve GAN-generated image aesthetics

Effects of dataset size and training length


Abstract

Generative Adversarial Networks (GANs) have brought rapid progress in generating synthetic images by mimicking structures in the training data. With the list of applications of GANs growing rapidly, they have lately become an exciting technology for designers to explore when communicating their ideas and art and creating engaging experiences for humans. Nevertheless, translating human experience to artificial intelligence and creating visually pleasing imagery is a challenging task due to the complex semantics of human perception. To address this issue, we introduce an iterative training approach in which the generated images are curated by humans and the most pleasing ones are fed back into the network for retraining. Additionally, we perform a factorial analysis to investigate how aesthetic quality and diversity are affected by the size of the training data and the training length. In experiments, we validate that this method can significantly improve the aesthetic quality of generated images regardless of dataset size and training length; however, the use of smaller datasets comes at the cost of reduced diversity and novelty in the output images. An aesthetic bias towards certain contexts can also degrade diversity and affect model evaluations. On the other hand, no significant relationship was found for training length, although this may be due to instabilities that occur during model convergence.
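
To make the iterative procedure concrete, the sketch below outlines one way such a human-in-the-loop retraining loop could be organized. This is a minimal illustration, not the authors' implementation: the function names (train_gan, generate_images, collect_human_ratings) and the parameters (number of rounds, keep fraction, samples per round) are hypothetical placeholders standing in for the real training, sampling, and rating steps.

```python
# Minimal sketch of an iterative human-in-the-loop GAN retraining loop.
# All functions below are hypothetical stubs; the real study trains a GAN
# and collects aesthetic ratings from human participants.
import random


def train_gan(dataset, epochs):
    """Placeholder: train a GAN on `dataset` and return a generator callable."""
    return lambda: f"image_{len(dataset)}_{random.random():.4f}"


def generate_images(generator, n):
    """Placeholder: sample n images from the trained generator."""
    return [generator() for _ in range(n)]


def collect_human_ratings(images):
    """Placeholder: humans rate aesthetic quality; here scores are random."""
    return {img: random.random() for img in images}


def iterative_training(initial_dataset, rounds=3, epochs=100,
                       samples_per_round=500, keep_fraction=0.2):
    dataset = list(initial_dataset)
    for _ in range(rounds):
        generator = train_gan(dataset, epochs)
        candidates = generate_images(generator, samples_per_round)
        ratings = collect_human_ratings(candidates)
        # Keep only the most aesthetically pleasing generated images ...
        ranked = sorted(candidates, key=ratings.get, reverse=True)
        curated = ranked[: int(keep_fraction * len(ranked))]
        # ... and feed them back into the training set for the next round.
        dataset.extend(curated)
    return dataset


if __name__ == "__main__":
    final_dataset = iterative_training([f"seed_{i}" for i in range(100)])
    print(f"Final training set size: {len(final_dataset)}")
```

Varying `initial_dataset` size and `epochs` in such a loop corresponds to the two factors (dataset size and training length) examined in the factorial analysis.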