Behind the Labels: Transparency Pitfalls in Annotation Practices for Societally Impactful ML
A deep dive into annotation transparency and consistency in the CVPR corpus
C. Scorţia (TU Delft - Electrical Engineering, Mathematics and Computer Science)
A.M. Demetriou – Mentor (TU Delft - Multimedia Computing)
Cynthia C. S. Liem – Mentor (TU Delft - Multimedia Computing)
J. Yang – Graduation committee member (TU Delft - Web Information Systems)
Abstract
This study investigates annotation and reporting practices in machine learning (ML) research, focusing on societally impactful applications presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). By structurally analyzing the 75 most-cited CVPR papers from the past 2, 5, and 15 years, we evaluate how the human annotations that form the foundation of supervised ML are documented. We introduce a 27-field annotation-reporting schema and apply it to 60 datasets, revealing that nearly 30% of relevant information is routinely omitted. Key findings include pervasive underreporting of annotator details such as training, prescreening, and inter-rater reliability (IRR) metrics. While datasets such as COCO and ImageNet are widely used, transparency about their annotation methodologies remains inconsistent. An analysis of individual fields shows that basic metadata, such as how annotators were selected and how overlapping labels were resolved, strongly predicts overall documentation quality. Our findings support previous calls for standardization and underscore the need for institutionalized reporting practices to ensure reproducibility, fairness, and trust in ML systems.
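
As a rough illustration only (not the thesis code), the kind of schema-completeness scoring summarized above could be sketched in Python as follows; the field names and toy records are hypothetical placeholders, not the actual 27-field schema or the 60 analyzed datasets.

    from collections import Counter

    # Hypothetical subset of reporting fields (the real schema has 27).
    SCHEMA_FIELDS = [
        "annotator_selection",
        "annotator_training",
        "annotator_prescreening",
        "irr_metric",
        "label_overlap_resolution",
    ]

    # Toy records: one dict per dataset, True if the field is documented.
    dataset_records = [
        {"annotator_selection": True, "annotator_training": False,
         "annotator_prescreening": False, "irr_metric": False,
         "label_overlap_resolution": True},
        {"annotator_selection": True, "annotator_training": True,
         "annotator_prescreening": False, "irr_metric": True,
         "label_overlap_resolution": True},
    ]

    def omission_rate(records, fields):
        """Fraction of (dataset, field) cells left undocumented."""
        total = len(records) * len(fields)
        missing = sum(not rec.get(field, False)
                      for rec in records for field in fields)
        return missing / total

    def per_field_omissions(records, fields):
        """Count how many datasets omit each field, to spot systematic gaps."""
        counts = Counter()
        for rec in records:
            for field in fields:
                if not rec.get(field, False):
                    counts[field] += 1
        return counts

    if __name__ == "__main__":
        print(f"Overall omission rate: {omission_rate(dataset_records, SCHEMA_FIELDS):.0%}")
        for field, n in per_field_omissions(dataset_records, SCHEMA_FIELDS).most_common():
            print(f"{field}: omitted by {n}/{len(dataset_records)} datasets")

Aggregating per-field omission counts in this way is what would let frequently missing items, such as prescreening or IRR metrics, surface as systematic gaps rather than isolated oversights.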