Multi-Label Gold Asymmetric Loss Correction with Single-Label Regulators

Bachelor Thesis (2021)
Author(s)

C.O. Pene (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Lydia Chen – Mentor (TU Delft - Data-Intensive Systems)

S. Ghiassi – Mentor (TU Delft - Data-Intensive Systems)

T. Younesian – Mentor (TU Delft - Data-Intensive Systems)

F.A. Kuipers – Graduation committee member (TU Delft - Embedded Systems)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2021 Cosmin Pene
Publication Year
2021
Language
English
Graduation Date
02-07-2021
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Multi-label learning is an emerging extension of multi-class classification in which an image contains multiple labels. Not only is acquiring a clean and fully labeled dataset in multi-label learning extremely expensive, but many of the actual labels are also corrupted or missing due to automated or non-expert annotation techniques. Noisy labels drastically decrease prediction performance. In this paper, we propose a novel Gold Asymmetric Loss Correction with Single-Label Regulators (GALC-SLR) that is robust against noisy labels. GALC-SLR estimates the noise confusion matrix using single-label samples, then constructs an asymmetric loss correction via the estimated confusion matrix to avoid overfitting to the noisy labels. Empirical results show that our method outperforms the state-of-the-art original asymmetric loss multi-label classifier under all corruption levels, showing a mean average precision improvement of up to 28.67% on the real-world MS-COCO dataset, yielding better generalization to unseen data and increased prediction performance.
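To make the idea of loss correction via an estimated confusion matrix concrete, the sketch below shows a minimal, illustrative forward-correction loss in PyTorch. It is not the thesis' implementation: the per-class transition tensor `T`, the asymmetric focusing parameters `gamma_pos`/`gamma_neg`, and all shapes are assumptions chosen for illustration, with `T[c, i, j]` read as the probability of observing label `j` given true label `i` for class `c`, as might be estimated from trusted single-label samples.

```python
import torch
import torch.nn as nn


class CorrectedAsymmetricLoss(nn.Module):
    """Illustrative sketch: forward noise correction combined with an
    asymmetric BCE-style loss. Per-class 2x2 transition matrices (assumed
    to be estimated from trusted single-label samples) map the model's
    clean-label probabilities to observed-label probabilities before the
    loss is computed, so the model is not pushed to fit the noise."""

    def __init__(self, transition, gamma_pos=0.0, gamma_neg=4.0, eps=1e-8):
        super().__init__()
        # transition: (C, 2, 2), transition[c, i, j] = P(observed=j | true=i)
        self.register_buffer("T", transition)
        self.gamma_pos = gamma_pos
        self.gamma_neg = gamma_neg
        self.eps = eps

    def forward(self, logits, noisy_targets):
        p = torch.sigmoid(logits)  # predicted P(true label = 1) per class
        # Probability of *observing* a positive label under the noise model
        p_obs = p * self.T[:, 1, 1] + (1 - p) * self.T[:, 0, 1]
        # Asymmetric focusing: easy negatives are down-weighted more strongly
        pos = noisy_targets * (1 - p_obs) ** self.gamma_pos \
            * torch.log(p_obs.clamp(min=self.eps))
        neg = (1 - noisy_targets) * p_obs ** self.gamma_neg \
            * torch.log((1 - p_obs).clamp(min=self.eps))
        return -(pos + neg).mean()


# Hypothetical usage: 80 classes (as in MS-COCO), batch of 16.
# Identity transition matrices correspond to assuming no label noise.
T = torch.eye(2).repeat(80, 1, 1)
criterion = CorrectedAsymmetricLoss(T)
loss = criterion(torch.randn(16, 80), torch.randint(0, 2, (16, 80)).float())
```

The design choice illustrated here is that the correction is applied to the predicted probabilities rather than to the targets, so the asymmetric weighting still acts on the (corrected) observation probabilities; the actual GALC-SLR formulation with single-label regulators may combine these ingredients differently.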
