Title: Manifold-Aware Regularization for Masked Autoencoders
Author: Dondera, Alin (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Jamali-Rad, H. (mentor); van Gemert, J.C. (mentor); Migut, M.A. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science
Date: 2024-06-24

Abstract: Masked Autoencoders (MAEs) represent a significant shift in self-supervised learning (SSL) because, unlike contrastive frameworks, they do not depend on augmentation techniques to generate positive (and/or negative) pairs. Their masking-and-reconstruction strategy also aligns well with SSL approaches in natural language processing. Most MAEs are built on Transformer-based architectures in which visual features are not regularized, in contrast to their convolutional neural network (CNN) based counterparts, which can potentially limit their effectiveness. To address this, we introduce a novel batch-wide, layer-wise regularization loss applied to the representations of different Transformer layers. We demonstrate that plugging in the proposed regularization loss significantly improves the performance of MAE-based baselines.

Subjects: Self-supervised learning; Manifold Learning; Regularization; Masked Autoencoders
To reference this document use: http://resolver.tudelft.nl/uuid:189bdb4f-ff47-4249-bab2-a40b63616565
Part of collection: Student theses
Document type: master thesis
Rights: © 2024 Alin Dondera
Files: MSc_Report_Alin_Dondera.pdf (PDF, 16.6 MB)
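The abstract does not specify the exact form of the batch-wide, layer-wise regularizer, only that it is applied to representations from several Transformer layers. As an illustrative sketch under that assumption, the following PyTorch snippet applies a variance-plus-decorrelation penalty (in the spirit of VICReg-style feature regularization) to each layer's batch of features and averages over layers; the function name and the specific penalty are hypothetical, not the thesis's actual method.

```python
import torch

def layerwise_batch_regularizer(layer_feats, eps=1e-4):
    """Hypothetical batch-wide, layer-wise regularizer sketch.

    layer_feats: list of [B, D] tensors, one pooled representation
    per Transformer layer for a batch of B samples.
    """
    total = 0.0
    for z in layer_feats:
        z = z - z.mean(dim=0, keepdim=True)            # center over the batch
        std = torch.sqrt(z.var(dim=0) + eps)
        var_loss = torch.relu(1.0 - std).mean()        # keep per-dim variance from collapsing
        cov = (z.T @ z) / (z.shape[0] - 1)             # D x D batch covariance
        off_diag = cov - torch.diag(torch.diag(cov))
        cov_loss = (off_diag ** 2).sum() / z.shape[1]  # decorrelate feature dimensions
        total = total + var_loss + cov_loss
    return total / len(layer_feats)
```

In an MAE training loop, such a term would typically be added to the reconstruction loss with a weighting coefficient, e.g. `loss = recon_loss + lam * layerwise_batch_regularizer(hidden_states)`.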