From Data to Decision

Investigating Bias Amplification in Decision-Making Algorithms


Abstract

This research investigates how biases in datasets influence the outputs of decision-making algorithms, and specifically whether those biases are merely reflected or further amplified by the algorithms. Using the Adult/Census Income dataset from the UCI Machine Learning Repository, the research examines bias through the lens of three machine learning models: Logistic Regression, Decision Tree, and Random Forest. The analysis reveals that all models exhibit varying degrees of bias depending on the fairness metric applied: Demographic Parity, Disparate Impact, Equal Opportunity, and Equalized Odds. Notably, higher accuracy does not necessarily equate to greater fairness. These findings underscore the complex nature of algorithmic bias and the challenge of achieving fairness in automated decision-making systems. This research contributes to the understanding of bias amplification in algorithms and highlights the need for continued efforts to develop fairer decision-making systems across sectors.
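The four fairness metrics named in the abstract can be sketched as simple group-rate comparisons over model predictions. The following is a minimal illustration, assuming binary predictions (`y_pred`), binary ground-truth labels (`y_true`), and a binary protected attribute (`group`); the function names and arrays are illustrative, not from the study itself.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between the two groups (0 = parity)."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive rates; the 'four-fifths rule' flags values below 0.8."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return min(rate_0, rate_1) / max(rate_0, rate_1)

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true-positive rates (recall) between the two groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tprs[0] - tprs[1])

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap across both true-positive and false-positive rates."""
    gaps = []
    for label in (1, 0):  # label 1 -> TPR comparison, label 0 -> FPR comparison
        rates = [y_pred[(group == g) & (y_true == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Illustrative toy data: group 0 receives positive predictions at twice
# the rate of group 1, so every metric registers a disparity.
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))       # 0.25
print(disparate_impact_ratio(y_pred, group))              # 0.5
print(equal_opportunity_difference(y_true, y_pred, group))  # 0.5
print(equalized_odds_difference(y_true, y_pred, group))     # 0.5
```

A model can score well on one of these metrics while failing another, which is one reason the abstract reports that measured bias varies with the metric chosen.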