Who Cares About Fairness

How Background Influences the Way Practitioners Consider Machine Learning Harms

Bachelor Thesis (2022)
Author(s)

P. Biedma Nuñez (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Agathe Balayn – Mentor (TU Delft - Web Information Systems)

Ujwal Gadiraju – Mentor (TU Delft - Web Information Systems)

J. Yang – Mentor (TU Delft - Web Information Systems)

F. Broz – Graduation committee member (TU Delft - Interactive Intelligence)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2022 Pablo Biedma Nuñez
Publication Year
2022
Language
English
Graduation Date
24-06-2022
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The increasing dangers of unfairness in machine learning (ML) are becoming a frequent subject of discussion, both in academia and in popular media. Recent literature has focused on introducing and assessing algorithmic solutions to bias in ML; however, there is a disconnect between these solutions and practitioners' needs. By interviewing 30 ML practitioners from diverse backgrounds across 16 countries and presenting them with a simulated use case, our study investigates common fairness practices among professionals and how these are influenced by their backgrounds. The results reveal a striking disparity between academic and industry practitioners. We also identify different practices in the fairness and data exploration stages, influenced by practitioners' educational background as well as their level of experience. Our study further finds that demographics affect several aspects, such as the willingness to accept and support legal action taken against ML discrimination. Based on our findings, we suggest several actions to improve fairness solutions and highlight future directions for fairness research that can positively influence how practitioners perceive fairness.
