This study investigates annotation and reporting practices in machine learning (ML) research, focusing on societally impactful applications presented at the IEEE/CVF Computer Vision and Pattern Recognition (CVPR) conferences. By structurally analyzing the 75 most-cited CVPR papers from the past 2, 5, and 15 years, we evaluate how the human annotations that underpin supervised ML are documented. We introduce a 27-field annotation-reporting schema and apply it to 60 datasets, revealing that nearly 30% of relevant information is routinely omitted. Key findings include the pervasive underreporting of annotator details such as training, prescreening, and inter-rater reliability (IRR) metrics. While popular datasets like COCO and ImageNet are widely used, transparency about annotation methodologies remains inconsistent. A field-level analysis shows that basic metadata, such as how annotators were selected and how overlapping labels were resolved, strongly predicts overall documentation quality. Our findings support previous calls for standardization and underscore the need for institutionalized reporting practices to ensure reproducibility, fairness, and trust in ML systems.