Voxelwise rs-fMRI representation learning

A non-linear variational approach

Abstract

Resting-state fMRI (rs-fMRI) has become an important imaging modality and is commonly used to study intrinsic brain networks. These networks can be obtained by decomposing rs-fMRI data into components using independent component analysis (ICA). Recently, these ICA components have been used as inputs to neural networks to learn complex relations between intrinsic brain networks and mental disorders or demographic variables. Instead of training a non-linear classifier on these linearly decomposed components, this work asks whether unsupervised representation learning can lead to linearly separable representations for multiple downstream tasks.
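To make the ICA step concrete, the sketch below applies spatial ICA to synthetic rs-fMRI-like data with scikit-learn's FastICA. The array sizes, component count, and random data are illustrative assumptions only, not the preprocessing or decomposition pipeline used in this thesis.

```python
# Illustrative sketch (not the thesis pipeline): spatial ICA on synthetic
# rs-fMRI-like data using scikit-learn's FastICA.
import numpy as np
from sklearn.decomposition import FastICA

n_timepoints, n_voxels, n_components = 200, 5000, 20  # assumed, toy sizes

# Synthetic stand-in for one preprocessed, voxelwise rs-fMRI run
# (rows = time points, columns = voxels).
rng = np.random.default_rng(0)
X = rng.standard_normal((n_timepoints, n_voxels))

# Spatial ICA: treat voxels as observations, so the independent sources are
# spatial maps and the mixing matrix holds their associated time courses.
ica = FastICA(n_components=n_components, whiten="unit-variance", random_state=0)
spatial_maps = ica.fit_transform(X.T)   # shape (n_voxels, n_components)
time_courses = ica.mixing_              # shape (n_timepoints, n_components)
```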

We propose to apply non-linear representation learning to voxelwise rs-fMRI data. The non-linear representations are learned with two versions of a variational autoencoder (VAE). The first is a vanilla VAE with 3D residual blocks in both its encoder and decoder. The second is based on the identifiable VAE and uses a time-dependent prior. The models are trained to reconstruct the original input data from the latent variables they infer. Three predictive models, a support vector machine (SVM), a k-nearest neighbor (k-NN) model, and a long short-term memory (LSTM) neural network, then evaluate the predictive power of these latent variables; each predictive model is applied to an age regression, a sex classification, and a schizophrenia classification task.
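The following sketch illustrates the general idea under stated assumptions: a small VAE whose encoder and decoder use 3D residual blocks, trained with a reconstruction plus KL objective, followed by a linear SVM probe on the inferred latent means. The block sizes, latent dimensionality, toy 16³ volumes, and placeholder labels are assumptions for illustration; this is not the thesis architecture or training code.

```python
# Minimal sketch (assumed architecture, not the thesis code): a vanilla VAE
# with 3D residual blocks, plus a linear SVM probe on the inferred latents.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.svm import SVC

class ResBlock3d(nn.Module):
    """Two 3x3x3 convolutions with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1)

    def forward(self, x):
        h = F.relu(self.conv1(x))
        return F.relu(x + self.conv2(h))

class VoxelVAE(nn.Module):
    """Vanilla VAE over small 3D volumes; latent_dim is an illustrative choice."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 8, 4, stride=2, padding=1), nn.ReLU(),   # 16^3 -> 8^3
            ResBlock3d(8),
            nn.Conv3d(8, 16, 4, stride=2, padding=1), nn.ReLU(),  # 8^3 -> 4^3
            ResBlock3d(16),
            nn.Flatten(),
        )
        self.to_mu = nn.Linear(16 * 4 * 4 * 4, latent_dim)
        self.to_logvar = nn.Linear(16 * 4 * 4 * 4, latent_dim)
        self.from_z = nn.Linear(latent_dim, 16 * 4 * 4 * 4)
        self.dec = nn.Sequential(
            ResBlock3d(16),
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),  # 4^3 -> 8^3
            ResBlock3d(8),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1),              # 8^3 -> 16^3
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        recon = self.dec(self.from_z(z).view(-1, 16, 4, 4, 4))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Negative ELBO: reconstruction error + KL to a standard-normal prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Toy run on random 16^3 volumes (real rs-fMRI volumes are much larger).
x = torch.randn(4, 1, 16, 16, 16)
model = VoxelVAE()
recon, mu, logvar = model(x)
vae_loss(recon, x, mu, logvar).backward()

# Downstream probe: fit a linear SVM on the detached latent means, mirroring
# the idea of checking linear separability; labels here are placeholders.
toy_labels = [0, 0, 1, 1]
SVC(kernel="linear").fit(mu.detach().numpy(), toy_labels)
```

In the same spirit, the latent means could be fed to a k-NN classifier, or the per-timepoint latents to an LSTM, for the other downstream evaluations; the identifiable-VAE variant with a time-dependent prior is not sketched here.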

We show that our method performs exceptionally well on the age regression and sex classification tasks without any supervision. These results imply that VAEs can capture variation in their latent spaces that is predictive of demographic variables. The models, however, perform poorly on the schizophrenia classification task, even when pretrained. Despite the lower performance on the schizophrenia classification task, the overall results are encouraging and pave the way for future work on voxelwise representation learning.
