Benchmarking variational AutoEncoders on cancer transcriptomics data

Journal Article (2023)
Author(s)

Mostafa Eltager (TU Delft - Pattern Recognition and Bioinformatics)

Tamim R. Abdelaal (TU Delft - Pattern Recognition and Bioinformatics, Leiden University Medical Center)

Mohammed Charrout (TU Delft - Pattern Recognition and Bioinformatics)

Ahmed Mahfouz (Leiden University Medical Center, TU Delft - Pattern Recognition and Bioinformatics)

Marcel J.T. Reinders (TU Delft - Pattern Recognition and Bioinformatics, Leiden University Medical Center)

S. Makrodimitris (Erasmus MC, TU Delft - Pattern Recognition and Bioinformatics)

Research Group
Pattern Recognition and Bioinformatics
Copyright
© 2023 M.A.M.E. Eltager, T.R.M. Abdelaal, M. Charrout, A.M.E.T.A. Mahfouz, M.J.T. Reinders, S. Makrodimitris
DOI related publication
https://doi.org/10.1371/journal.pone.0292126
Publication Year
2023
Language
English
Issue number
10
Volume number
18
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Deep generative models, such as variational autoencoders (VAE), have gained increasing attention in computational biology due to their ability to capture complex data manifolds, which can subsequently be used to achieve better performance in downstream tasks, such as cancer type prediction or cancer subtyping. However, these models are difficult to train due to the large number of hyperparameters that need to be tuned. To get a better understanding of the importance of the different hyperparameters, we examined six different VAE models trained on TCGA transcriptomics data and evaluated them on the downstream tasks of cluster agreement with cancer subtypes and survival analysis. We studied the effect of the latent space dimensionality, learning rate, optimizer, initialization and activation function on the quality of subsequent downstream tasks on the TCGA samples. We found that β-TCVAE and DIP-VAE perform well on average, despite being more sensitive to hyperparameter selection. Based on these experiments, we derived recommendations for selecting the different hyperparameter settings. To ensure generalization, we tested all hyperparameter configurations on the GTEx dataset. We found a significant correlation (ρ = 0.7) between the hyperparameter effects on clustering performance in the TCGA and GTEx datasets. This highlights the robustness and generalizability of our recommendations. In addition, we examined whether the learned latent spaces capture biologically relevant information. To this end, we measured the correlation and mutual information of the different representations with various data characteristics such as gender, age, days to metastasis, immune infiltration, and mutation signatures. We found that, for all models, the latent factors generally do not correlate uniquely with a single data characteristic, nor do they capture separable information, even for models specifically designed for disentanglement.
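The evaluation pipeline described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: it uses synthetic stand-ins for a VAE latent space and subtype labels, and computes the three kinds of scores the abstract mentions: cluster agreement (adjusted Rand index), a Spearman correlation across per-configuration scores, and mutual information between a discretized latent factor and a binary covariate. All array names and sizes are assumptions for illustration.

```python
# Hedged sketch of the downstream evaluation: synthetic data stands in
# for real TCGA embeddings, subtype labels, and hyperparameter scores.
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, mutual_info_score

rng = np.random.default_rng(0)

# Stand-in for a VAE latent space: 300 samples x 8 latent factors, with
# two synthetic "cancer subtypes" shifted apart in latent space.
subtypes = rng.integers(0, 2, size=300)
latent = rng.normal(size=(300, 8)) + subtypes[:, None] * 3.0

# (1) Cluster agreement with subtypes: cluster the latent space and
# compare the cluster assignment to the subtype labels with ARI.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(latent)
ari = adjusted_rand_score(subtypes, clusters)

# (2) Spearman correlation between two vectors of per-configuration
# scores, e.g. clustering performance on TCGA vs. GTEx across the same
# hyperparameter settings (synthetic correlated scores here).
scores_a = rng.normal(size=50)
scores_b = scores_a + rng.normal(scale=0.5, size=50)
rho, _ = spearmanr(scores_a, scores_b)

# (3) Mutual information between a (quantile-discretized) latent factor
# and a binary covariate such as gender or a mutation signature.
factor = latent[:, 0]
factor_bins = np.digitize(factor, np.quantile(factor, [0.25, 0.5, 0.75]))
mi = mutual_info_score(subtypes, factor_bins)

print(f"ARI={ari:.3f}  Spearman rho={rho:.3f}  MI={mi:.3f}")
```

With the large synthetic shift between the two groups, the ARI is close to 1 and the first latent factor carries substantial mutual information with the group label; on real, entangled latent spaces these scores are far lower, which is the behavior the study quantifies.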