BatMan-CLR: Making Few-Shots Meta-learners Resilient Against Label Noise

Conference Paper (2026)
Author(s)

J.M. Galjaard (TU Delft - Data-Intensive Systems)

Robert Birke (University of Turin)

Juan F. Perez (Universidad de los Andes)

Lydia Y. Chen (TU Delft - Data-Intensive Systems, University of Neuchâtel)

Research Group
Data-Intensive Systems
DOI related publication
https://doi.org/10.1007/978-3-032-06106-5_15
Publication Year
2026
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository as part of the Taverne amendment. More information about this copyright law amendment can be found at https://www.openaccess.nl. Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.
Pages (from-to)
254-271
ISBN (print)
9783032061058
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The negative impact of label noise is well studied in classical supervised learning, yet it remains an open research question in meta-learning. Meta-learners aim to adapt to unseen tasks by learning a good initial model in meta-training and fine-tuning it to new tasks during meta-testing. In this paper, we present an extensive analysis of the impact of label noise on the performance of meta-learners, specifically gradient-based N-way K-shot learners. We show that the accuracy of Reptile, iMAML, and foMAML drops by up to 34% when meta-training is affected by label noise on three representative datasets: Omniglot, CifarFS, and MiniImageNet. To strengthen resilience against label noise, we propose two sampling techniques, namely manifold (Man) and batch manifold (BatMan) sampling, which transform the noisy supervised learners into semi-supervised learners so as to increase the utility of noisy labels. We construct N-way 2-contrastive-shot tasks through augmentation, learn the embedding via a contrastive loss in meta-training, and perform classification through zeroing on the embeddings in meta-testing. We show that our approach can effectively mitigate the impact of meta-training label noise: even with 60% wrong labels, BatMan and Man limit the meta-testing accuracy drop to 2.5, 9.4, and 1.1 percentage points with existing meta-learners across Omniglot, CifarFS, and MiniImageNet, respectively. We provide our code online: https://gitlab.ewi.tudelft.nl/dmls/publications/batman-clr-noisy-meta-learning.
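
As a rough illustration of the sampling idea described in the abstract (not the authors' implementation, which is linked above), the following minimal PyTorch sketch shows how an N-way 2-contrastive-shot task could be built by augmenting each support image twice and training the embedding with a standard NT-Xent contrastive loss; the augment callable, the function names, and the tensor shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def make_contrastive_shots(support_x, augment, n_views=2):
    # support_x: (N*K, C, H, W) support images of an N-way K-shot task.
    # augment:   a stochastic augmentation callable (hypothetical), e.g.
    #            random crop + horizontal flip. The (possibly wrong) labels
    #            are deliberately unused: positives come from augmentation,
    #            not from the noisy labels.
    return torch.stack([augment(support_x) for _ in range(n_views)])

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: (B, D) embeddings of the two augmented views of the same images.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D), unit norm
    sim = z @ z.t() / temperature                       # cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
    B = z1.size(0)
    # The positive for row i is its other view at index (i + B) mod 2B.
    targets = torch.arange(2 * B, device=z.device).roll(B)
    return F.cross_entropy(sim, targets)

Because the positive pairs are defined by augmentation rather than by the given labels, mislabeled support samples no longer pull the embedding toward wrong classes, which is one way to read the abstract's claim that the sampling turns a noisy supervised learner into a semi-supervised one.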

Files

License info not available

File under embargo until 03-04-2026