The Effect of Different Initialization Methods on VAEs for Modeling Cancer using RNA Genome Expressions

Abstract

Variational Auto-Encoders (VAEs) are a class of machine learning models that have been applied in varying contexts, including cancer research. Earlier research has shown that weight initialization plays a crucial part in training these models, since it can improve performance. This paper therefore studies the effect of initialization methods on VAEs. The results show that with a single hidden layer, uniform and Xavier methods perform best depending on the VAE variant, with the standard VAE being the most sensitive to the choice of method. With more hidden layers, however, the uniform method performs significantly worse than methods that scale with the number of inputs to a layer, such as the PyTorch default, Xavier Normal, or Xavier Uniform. In all other models, these initialization methods converge to similar performance after enough epochs.
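The initialization methods compared above can be sketched as follows in PyTorch. This is a minimal illustration, not the paper's actual experimental code; the layer sizes, the fixed uniform range, and the helper name `init_weights` are assumptions made for the example.

```python
import torch.nn as nn

def init_weights(module, method="xavier_uniform"):
    # Hypothetical helper: apply one of the compared schemes to each Linear layer.
    if isinstance(module, nn.Linear):
        if method == "xavier_uniform":
            # Range scales with fan-in and fan-out of the layer.
            nn.init.xavier_uniform_(module.weight)
        elif method == "xavier_normal":
            # Gaussian with variance 2 / (fan_in + fan_out).
            nn.init.xavier_normal_(module.weight)
        elif method == "uniform":
            # Fixed-range uniform, independent of layer size
            # (illustrative range; the paper's exact bounds may differ).
            nn.init.uniform_(module.weight, -0.1, 0.1)
        # method == "default" leaves PyTorch's fan-in-based init untouched.
        nn.init.zeros_(module.bias)

# Illustrative encoder: input dimension stands in for an RNA expression profile.
encoder = nn.Sequential(nn.Linear(5000, 256), nn.ReLU(), nn.Linear(256, 32))
encoder.apply(lambda m: init_weights(m, method="xavier_normal"))
```

Because Xavier and the PyTorch default both shrink the initial weight scale as layer width grows, they keep activations in a stable range across many hidden layers, which is consistent with the fixed-range uniform method falling behind in deeper models.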