Who Cares About Fairness

How Background Influences the Way Practitioners Consider Machine Learning Harms

Abstract

The increasing dangers of unfairness in machine learning (ML) have become a frequent subject of discussion, both in academia and in the popular media. Recent literature has focused on introducing and assessing algorithmic solutions to bias in ML; however, there is a disconnect between these solutions and practitioners' needs. By interviewing 30 ML practitioners of diverse backgrounds across 16 countries and presenting them with a simulated use case, our study investigates common fairness practices among professionals and how these are influenced by their backgrounds. The results reveal a marked disparity between academic and industry practitioners. We also identify differing practices in the fairness and data exploration stages, shaped by practitioners' educational background and level of experience. Our study further finds that demographics affect several aspects, such as willingness to accept and support legal action taken against ML discrimination. Based on these findings, we suggest several actions that can be taken to improve fairness solutions, and we highlight future directions for fairness research that could positively influence how fairness is perceived by practitioners.