Annotation Practices in Societally Impactful Machine Learning Applications

What are the recommender systems models actually trained on?

Abstract

Machine Learning models are nowadays infused into all aspects of our lives. One of their most common applications is recommender systems, which facilitate users' decision-making processes in various scenarios (e.g., e-commerce, social media, news, online learning). Training on large volumes of data is what ultimately enables such systems to provide meaningful recommendations, yet a lack of standardized practices has been observed in the data collection and annotation methods used for Machine Learning datasets. This research paper systematically identifies and synthesizes such processes by examining the existing literature on recommender systems. The review covers the 100 most-cited papers from the most impactful venues within the Computing and Information Technology field. Multiple facets of the employed techniques are examined, including reported human annotations and annotator diversity, label quality, and the public availability of training datasets.
Recurrent use of just a few benchmark datasets, poor documentation practices, and reproducibility issues in experiments are among the most striking findings uncovered by this study. The discussion centers on the necessity of moving beyond sole reliance on algorithmic performance metrics in favor of prioritizing data quality and fit. Finally, concerns are raised about biases and socio-psychological factors inherent in the datasets, and further exploration of addressing these early in the design of ML models is suggested.