From Data to Decision
Investigating Bias Amplification in Decision-Making Algorithms
E. Mihalache (TU Delft - Electrical Engineering, Mathematics and Computer Science)
S.E. Carter – Mentor (TU Delft - Web Information Systems)
J. Yang – Mentor (TU Delft - Web Information Systems)
Stefan Buijsman – Graduation committee member (TU Delft - Ethics & Philosophy of Technology)
Marcus M. Specht – Graduation committee member (TU Delft - Web Information Systems)
Abstract
This research investigates how biases in datasets influence the outputs of decision-making algorithms, and specifically whether algorithms merely reflect these biases or amplify them further. Using the Adult/Census Income dataset from the UCI Machine Learning Repository, the study examines bias through three machine learning models: Logistic Regression, Decision Tree, and Random Forest. The analysis reveals that all models exhibit some degree of bias, with the extent depending on which fairness metric is applied: Demographic Parity, Disparate Impact, Equal Opportunity, or Equalized Odds. The results also show that higher accuracy does not necessarily translate into fairness. These findings emphasize the complex nature of algorithmic bias and the challenge of achieving fairness in automated decision-making systems. The research contributes to the understanding of bias amplification in algorithms and underscores the need for continued efforts to develop fairer decision-making systems across sectors.
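
To make the four group fairness metrics named in the abstract concrete, the sketch below shows one common way to compute them for a binary classifier, given true labels, predictions, and a binary sensitive attribute (e.g. sex in the Adult dataset). This is a minimal illustration under assumed conventions (demographic parity and equal opportunity as absolute differences, disparate impact as a ratio, equalized odds as the larger of the TPR and FPR gaps); the function name and the random example data are illustrative and not the thesis's actual evaluation code.

import numpy as np

def fairness_metrics(y_true, y_pred, sensitive):
    """Group fairness metrics for a binary classifier.

    y_true, y_pred : arrays of 0/1 labels and predictions
    sensitive      : array of 0/1 group membership (e.g. sex)
    """
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    g0, g1 = sensitive == 0, sensitive == 1

    # Selection rates P(y_pred = 1 | group)
    sr0, sr1 = y_pred[g0].mean(), y_pred[g1].mean()

    # True-positive and false-positive rates per group
    def rates(group_mask):
        pos = group_mask & (y_true == 1)
        neg = group_mask & (y_true == 0)
        tpr = y_pred[pos].mean() if pos.any() else np.nan
        fpr = y_pred[neg].mean() if neg.any() else np.nan
        return tpr, fpr

    tpr0, fpr0 = rates(g0)
    tpr1, fpr1 = rates(g1)

    return {
        # Demographic parity: difference in selection rates between groups
        "demographic_parity_diff": abs(sr0 - sr1),
        # Disparate impact: ratio of the lower to the higher selection rate
        "disparate_impact": min(sr0, sr1) / max(sr0, sr1),
        # Equal opportunity: difference in true-positive rates
        "equal_opportunity_diff": abs(tpr0 - tpr1),
        # Equalized odds: larger of the TPR and FPR gaps
        "equalized_odds_diff": max(abs(tpr0 - tpr1), abs(fpr0 - fpr1)),
    }

# Illustrative usage with synthetic data (not results from the thesis)
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
sensitive = rng.integers(0, 2, 1000)
print(fairness_metrics(y_true, y_pred, sensitive))

In practice, y_pred would come from each of the three trained models, and the sensitive attribute from a column of the Adult dataset, so the same model can look fair under one metric (e.g. demographic parity) yet unfair under another (e.g. equalized odds).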