Towards deeper understanding of semi-supervised learning with variational autoencoders

Abstract

Recently, deep generative models have been shown to achieve state-of-the-art performance on semi-supervised learning tasks. In particular, variational autoencoders (VAEs) have been adapted to incorporate labeled data, enabling semi-supervised learning (SSL) models built on deep neural networks. However, some of these models rely on ad-hoc loss additions for training and impose constraints on the latent space that effectively prevent the use of recent advances in improving posterior approximations. In this paper, we analyse the limitations of semi-supervised deep generative models based on VAEs and show that the assumptions made on the latent space can be dropped. We present a simplified method for semi-supervised learning that combines the discriminative and generative losses in a principled manner. Our model allows for straightforward application of normalizing flows and achieves competitive results on semi-supervised classification tasks.
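
For context, the "ad-hoc loss additions" mentioned above typically refer to a weighted classification term added to the variational bound, such as in the M2 model of Kingma et al. (2014). The sketch below is an illustration of that baseline formulation, not this paper's method: `encoder`, `decoder`, `classifier`, and the weight `alpha` are hypothetical placeholders, and inputs are assumed to be flattened binary vectors.

```python
# A minimal sketch, assuming an M2-style semi-supervised VAE objective
# in the spirit of Kingma et al. (2014). Not the paper's own method;
# all module names and the alpha weight are hypothetical placeholders.
import torch
import torch.nn.functional as F

def neg_elbo(encoder, decoder, x, y):
    """Per-sample negative ELBO for an (x, y) pair with a N(0, I) prior."""
    mu, logvar = encoder(x, y)                        # q(z | x, y)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    # A normalizing flow could transform z here; its log-det-Jacobian
    # would then enter a Monte Carlo estimate of the KL term.
    x_recon = decoder(z, y)                           # p(x | z, y)
    recon = F.binary_cross_entropy(x_recon, x, reduction="none").sum(dim=-1)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)
    return recon + kl                                 # shape: (batch,)

def m2_objective(encoder, decoder, classifier, x_lab, y_lab, x_unl, alpha=0.1):
    """Combined loss: labeled ELBO + ad-hoc alpha * cross-entropy,
    plus the unlabeled ELBO marginalised over y under q(y | x)."""
    logits_lab = classifier(x_lab)
    n_classes = logits_lab.size(-1)
    y_onehot = F.one_hot(y_lab, n_classes).float()
    loss_lab = neg_elbo(encoder, decoder, x_lab, y_onehot).mean()
    # The alpha-weighted discriminative term is the kind of ad-hoc
    # loss addition the abstract refers to.
    loss_disc = alpha * F.cross_entropy(logits_lab, y_lab)
    # Unlabeled data: marginalise the ELBO over classes under q(y | x),
    # minus the entropy of q(y | x).
    probs = classifier(x_unl).softmax(dim=-1)         # q(y | x)
    loss_unl = x_unl.new_zeros(x_unl.size(0))
    for c in range(n_classes):
        y_c = F.one_hot(torch.full((x_unl.size(0),), c, dtype=torch.long),
                        n_classes).float()
        loss_unl = loss_unl + probs[:, c] * neg_elbo(encoder, decoder, x_unl, y_c)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
    return loss_lab + loss_disc + (loss_unl - entropy).mean()
```

The comment in `neg_elbo` marks the point where a normalizing flow would transform the latent sample; the latent-space constraints the abstract criticises are what make this step awkward in existing models.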