Trusted Loss Correction for Noisy Multi-Label Learning

Journal Article (2022)
Author(s)

S. Ghiassi (TU Delft - Data-Intensive Systems)

Cosmin Octavian Pene (Student TU Delft)

Robert Birke (University of Turin)

Lydia Y. Chen (TU Delft - Data-Intensive Systems)

Research Group
Data-Intensive Systems
Publication Year
2022
Language
English
Volume number
189
Pages (from-to)
343-358
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Noisy and corrupted labels are shown to significantly undermine the performance of multi-label learning, where each image carries multiple labels. Correcting the loss via a label corruption matrix is effective in improving the robustness of single-label classification against noisy labels. However, estimating the corruption matrix for multi-label problems is challenging due to the unbalanced distribution of labels and the presence of multiple objects that may be mapped to the same labels. In this paper, we propose TLCM, a multi-label classifier robust to label noise, which corrects the loss based on a corruption matrix estimated on trusted data. To overcome the challenges of unbalanced label distribution and multi-object mapping, we use trusted single-label data as regulators to correct the multi-label corruption matrix. Empirical evaluation on real-world vision and object-detection datasets, i.e., MS-COCO, NUS-WIDE, and MIRFLICKR, shows that under medium (30%) and high (60%) corruption levels our method outperforms the state-of-the-art multi-label classifier (ASL) and the noise-resilient multi-label classifier (MPVAE) by, on average, 12.5% and 26.3% mean average precision (mAP) points, respectively.
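To make the loss-correction idea concrete, the sketch below shows a generic forward correction for multi-label learning: a per-label 2x2 corruption matrix is estimated from a small trusted set and the model's predicted clean-label probabilities are mapped through it before computing binary cross-entropy against the noisy labels. This is a minimal illustration, not the authors' TLCM implementation; the function names, tensor shapes, and the use of PyTorch are assumptions made for this example only.

```python
import torch
import torch.nn.functional as F


def estimate_corruption_matrices(clean_labels, noisy_labels, eps=1e-6):
    """Per-label 2x2 matrices T[j, a, b] ~ P(noisy label = b | clean label = a),
    counted on a trusted subset for which both clean and noisy labels are known.

    clean_labels, noisy_labels: float tensors of shape (n_trusted, n_labels)
    with entries in {0, 1}.
    """
    n_labels = clean_labels.shape[1]
    T = torch.zeros(n_labels, 2, 2)
    for j in range(n_labels):
        for a in (0, 1):
            mask = clean_labels[:, j] == a           # trusted samples with clean label a
            n_a = mask.float().sum()
            n_to_1 = noisy_labels[mask, j].float().sum()
            T[j, a, 1] = (n_to_1 + eps) / (n_a + 2 * eps)
            T[j, a, 0] = 1.0 - T[j, a, 1]
    return T


def forward_corrected_bce(logits, noisy_targets, T):
    """Forward loss correction: map the clean-label posterior p through T,
    q_j = (1 - p_j) * T[j, 0, 1] + p_j * T[j, 1, 1], then compare q with the
    (possibly noisy) targets."""
    p = torch.sigmoid(logits)                        # P(clean label = 1), shape (batch, n_labels)
    q = (1 - p) * T[:, 0, 1] + p * T[:, 1, 1]        # P(noisy label = 1) under T
    q = q.clamp(1e-7, 1 - 1e-7)
    return F.binary_cross_entropy(q, noisy_targets)


# Tiny synthetic usage example (illustrative shapes and noise rate only).
torch.manual_seed(0)
trusted_clean = (torch.rand(200, 3) > 0.7).float()
trusted_noisy = torch.where(torch.rand(200, 3) < 0.3, 1 - trusted_clean, trusted_clean)
T = estimate_corruption_matrices(trusted_clean, trusted_noisy)   # (n_labels, 2, 2)
loss = forward_corrected_bce(torch.randn(8, 3), trusted_noisy[:8], T)
```

In this generic form the corruption matrices are estimated independently per label; the paper's contribution is precisely in how trusted single-label data regulate this estimate under unbalanced label distributions and multi-object mapping.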

Files

Ghiassi23a.pdf
(pdf | 3.93 MB)
License info not available