Machine Learning (ML) algorithms have the potential to reproduce biases that already exist in society, a fact that has led to scholarly work trying to quantify algorithmic discrimination through fairness metrics. Although there is now a plethora of metrics, some of them are contradictory, so fairness becomes a problem of knowing which measurement to choose over another. Consequently, scholars have begun to argue that fairness should be discussed by placing algorithms in their social contexts. Since (1) these social aspects are related to structures of discrimination and (2) feminism aims to criticise discrimination against the marginalised, I introduce the possibility of analysing the social context of ML algorithms through a feminist lens. By doing this, I highlight social and political aspects that are equally important to consider for a faithful discussion of fairness: corporate lobbying; the lack of diverse hiring, which leads to fairness discussions that do not consider the experiences of marginalised groups; and, lastly, the broader context in which an algorithm is used. Moreover, I emphasise how the feminist ethics of care constitutes an essential framework for a conversation about actually implementable fairness solutions, since it highlights the need to listen both to the marginalised community and to the developers who might want to build fairer ML but currently cannot. Having built a bridge between the hegemonic and the feminist camps, I highlight how Northpointe’s (now Equivant’s) Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm can be considered biased against black people. Through this, I illustrate how feminist considerations bring clarity to fairness debates, whether by helping to choose a fairness metric or by showing that an algorithm is unfair by nature and should be abolished. I then use the same feminist critiques to draw attention to possible weak points of current sociotechnical solutions. For instance, the EU AI Act risks being too susceptible to corporate lobbying, leading to regulations that are not strict enough. Furthermore, the AI committee should ensure that it hires a diverse group of people in order to develop regulations that positively consider all marginalised groups. Lastly, I highlight how ethics education is essential for creating a new generation of responsible engineers. Considering this, I emphasise the urgency of making ethics courses at TU Delft (and beyond) more interdisciplinary by engaging more with critiques coming from the social sciences. This will open up possibilities for more research tackling fairness from a multitude of perspectives.
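To make the claim about contradictory metrics concrete, the minimal sketch below (not part of the original argument; all outcomes, predictions, and group labels are synthetic assumptions) computes two common group-fairness measures, demographic parity and equality of false positive rates, on the same toy predictions. It shows that one criterion can be satisfied while the other is violated, which mirrors the kind of disagreement at the heart of the COMPAS debate.

```python
# Illustrative sketch only: two common group-fairness metrics evaluated on the
# same synthetic predictions can point in opposite directions, which is the
# sense in which fairness metrics may be "contradictory".
# All numbers below are made up for demonstration purposes.

def rates(y_true, y_pred):
    """Return (positive prediction rate, false positive rate) for one group."""
    n = len(y_true)
    positive_predictions = sum(y_pred)
    false_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    fpr = false_positives / negatives if negatives else 0.0
    return positive_predictions / n, fpr

# Synthetic outcomes (1 = reoffended) and risk predictions (1 = flagged high risk)
group_a_true = [1, 1, 0, 0, 0, 0, 0, 0]
group_a_pred = [1, 1, 1, 1, 0, 0, 0, 0]
group_b_true = [1, 1, 1, 1, 0, 0, 0, 0]
group_b_pred = [1, 1, 1, 1, 0, 0, 0, 0]

ppr_a, fpr_a = rates(group_a_true, group_a_pred)
ppr_b, fpr_b = rates(group_b_true, group_b_pred)

# Demographic parity compares positive prediction rates across groups...
print(f"positive prediction rate: A={ppr_a:.2f}, B={ppr_b:.2f}")  # 0.50 vs 0.50 -> parity holds
# ...while equalised false positive rates compare who bears the error burden.
print(f"false positive rate:      A={fpr_a:.2f}, B={fpr_b:.2f}")  # 0.33 vs 0.00 -> clearly unequal
```

In this sketch, both groups are flagged high risk at the same rate, yet one group absorbs all the false positives; which of the two numbers counts as "the" fairness measurement is precisely the choice that, I argue, cannot be settled without looking at the social context.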