The Effects of Debiasing Methods on the Fairness and Accuracy of Recommender Systems
F. Čajági (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Masoud Mansoury – Mentor (Eindhoven University of Technology)
Nergis Tomen – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)
Abstract
Recommender systems leverage user interactions to predict user preferences and deliver personalized recommendations. Their use has grown rapidly in recent years across online domains such as social media, e-commerce, and even job applications. However, because of how these systems collect and learn from interaction data, they are vulnerable to various biases, such as popularity bias, which raises questions about their fairness towards both users and providers. Researchers have addressed this issue with various debiasing and fairness intervention methods, but the two are often studied in separate strands of research, and the trade-off between fairness and accuracy is rarely evaluated explicitly for debiasing methods.
In this project, we replicate three state-of-the-art debiasing methods and analyze their impact on the fairness and accuracy of recommender models, focusing on the trade-off between the two and on how it can be controlled through hyper-parameters. We find that, while the impact depends heavily on the method and dataset, in many cases fairness can be improved significantly with little to no loss in accuracy, given the right hyper-parameter configuration.
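To make the trade-off discussed above concrete, the sketch below shows one way it could be quantified in practice: ranking accuracy via NDCG@k and popularity bias (a common provider-side fairness proxy) via the average popularity of recommended items. This is an illustrative example only, not the evaluation code used in the thesis; all names and data are hypothetical.

```python
# Minimal sketch of measuring the fairness-accuracy trade-off for one user.
# Accuracy: NDCG@k on held-out interactions.
# Fairness proxy: average popularity of the recommended items
# (lower average popularity suggests less popularity bias).
import numpy as np

def ndcg_at_k(recommended, relevant, k=10):
    """NDCG@k for a single user with binary relevance."""
    gains = [1.0 if item in relevant else 0.0 for item in recommended[:k]]
    dcg = sum(g / np.log2(i + 2) for i, g in enumerate(gains))
    ideal = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

def avg_rec_popularity(recommended, item_popularity, k=10):
    """Mean popularity (e.g., interaction count) of the top-k items."""
    return float(np.mean([item_popularity[i] for i in recommended[:k]]))

# Toy data: item popularity as interaction counts, one user's ranked list.
item_popularity = {0: 500, 1: 20, 2: 300, 3: 5, 4: 60}
recommended = [0, 2, 1, 4, 3]   # ranked output of some (hypothetical) model
relevant = {1, 3}               # held-out test interactions for this user

print("NDCG@5:", round(ndcg_at_k(recommended, relevant, k=5), 3))
print("Avg. popularity@5:", avg_rec_popularity(recommended, item_popularity, k=5))
```

In a study like the one summarized here, such metrics would be averaged over all users and compared across hyper-parameter settings of each debiasing method to trace out the fairness-accuracy trade-off.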