Fairness and Bias in Recommendation Systems

How effective are current fairness intervention methods in addressing unfairness in recommendation systems, and what trade-offs do they introduce in terms of accuracy?

Bachelor Thesis (2025)
Author(s)

J. Huang (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Masoud Mansoury – Mentor (Eindhoven University of Technology)

Masoud Mansoury – Graduation committee member (Eindhoven University of Technology)

N. Tömen – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2025
Language
English
Graduation Date
24-06-2025
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

As important tools for information filtering, recommendation systems have greatly improved the efficiency with which users access information in daily life by providing personalized suggestions. However, as people's reliance on them grows, recent studies have revealed their potential to cause social unfairness, such as gender discrimination arising from job recommendations. Such unfairness not only harms the interests of specific individuals or groups but also threatens the credibility and long-term sustainability of these systems. Building fairness-aware recommendation systems that proactively identify and mitigate unfairness is therefore crucial for delivering responsible recommendation services. This study systematically evaluates the effectiveness of current fairness intervention strategies. Specifically, pre-processing methods (data relabeling and resampling) and post-processing methods (re-ranking, calibration, and equity of attention) are implemented on two datasets, MovieLens-1M and Lastfm-NL, and comprehensively evaluated with two types of metrics: accuracy and fairness. The experimental results show that the methods improve different fairness targets to different extents, with varying degrees of accuracy loss or gain. This paper further explores the trade-off between maintaining accuracy and improving fairness across intervention methods, and proposes future directions for improving fairness-aware recommendation systems in light of the experimental results.
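
To illustrate the kind of post-processing intervention the abstract refers to, the minimal sketch below shows a greedy re-ranking step that trades predicted relevance against provider-group exposure in the top-k list. The function and parameter names (rerank, lambda_fair, target_share) and the group labels are assumptions chosen for illustration; they do not correspond to the actual code or metrics used in the thesis.

```python
# Illustrative sketch only: greedy exposure-aware re-ranking (assumed names,
# not the thesis implementation).
from collections import defaultdict

def rerank(candidates, group_of, target_share, k, lambda_fair=0.5):
    """Greedily build a top-k list, penalizing items whose provider group
    already exceeds its target exposure share in the partial ranking.

    candidates:   list of (item_id, relevance_score) pairs
    group_of:     dict item_id -> group label (e.g., 'popular' / 'niche')
    target_share: dict group label -> desired fraction of the top-k
    """
    selected, exposure = [], defaultdict(int)
    pool = dict(candidates)

    while pool and len(selected) < k:
        best_item, best_score = None, float("-inf")
        for item, rel in pool.items():
            g = group_of[item]
            # Current share of this group among items selected so far.
            share = exposure[g] / max(len(selected), 1)
            # Penalize only groups that are already over their target share.
            penalty = max(0.0, share - target_share.get(g, 0.0))
            score = rel - lambda_fair * penalty
            if score > best_score:
                best_item, best_score = item, score
        selected.append(best_item)
        exposure[group_of[best_item]] += 1
        del pool[best_item]

    return selected

# Example usage with hypothetical items and groups.
cands = [("a", 0.9), ("b", 0.85), ("c", 0.8), ("d", 0.4)]
groups = {"a": "popular", "b": "popular", "c": "popular", "d": "niche"}
print(rerank(cands, groups, {"popular": 0.75, "niche": 0.25}, k=4))
```

Raising lambda_fair shifts the list toward the exposure targets at the cost of ranking items with lower predicted relevance, which is one concrete form of the accuracy-fairness trade-off the thesis evaluates.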

Files

Research_paper_final.pdf
(pdf | 0.847 MB)
License info not available