Unfairness in Recommender Systems
To what extent do content-based recommendation models suffer from unfairness, and how does this differ from collaborative filtering?
F. Angheluta (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Masoud Mansoury – Mentor (Eindhoven University of Technology)
Nergis Tomen – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)
Abstract
Fairness in recommender systems is an increasingly critical concern as these models mediate access to information, opportunities, and visibility. While collaborative filtering (CF) approaches have been extensively scrutinized for popularity bias and unfair exposure, the fairness properties of content-based recommendation (CBR) models remain underexplored. In this work, we present a comparative evaluation of CF and CBR models on the MovieLens 1M and Amazon Beauty datasets, introducing a modular, feature-fused content-based recommender (MultiFuseCB). We systematically analyze how the selection and weighting of content features, as well as the choice of embedding model, affect both recommendation accuracy and fairness, measured with metrics such as item coverage and popularity bias. Our results show that, with appropriate feature engineering, CBR models can achieve competitive accuracy while substantially improving fairness relative to CF baselines. We further demonstrate that particular features (e.g., year, genre, plot) and embedding choices can be leveraged to promote more equitable item exposure. These findings offer actionable guidance for designing fairer content-based recommenders and highlight the importance of feature selection and model tuning in achieving both accuracy and fairness.
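To make the feature-fusion idea concrete, the sketch below shows one plausible way a modular, feature-fused recommender could combine per-feature item embeddings (e.g., year, genre, plot) into a single vector and score candidates against a user profile. The function names, the weighted-concatenation scheme, and the mean-profile scoring are illustrative assumptions for this sketch, not the actual MultiFuseCB implementation described in the thesis.

```python
# Minimal sketch: weighted fusion of per-feature embeddings for
# content-based scoring. All names and design choices are assumptions.
import numpy as np

def fuse_item_embeddings(feature_embeddings, weights):
    """Fuse per-feature embeddings (e.g. {'year': ..., 'genre': ...,
    'plot': ...}) into one vector by L2-normalizing each block and
    concatenating it scaled by its feature weight."""
    blocks = []
    for name, emb in feature_embeddings.items():
        norm = np.linalg.norm(emb)
        unit = emb / norm if norm > 0 else emb
        blocks.append(weights.get(name, 1.0) * unit)
    return np.concatenate(blocks)

def recommend(user_history, item_vectors, k=10):
    """Score unseen items by cosine similarity to the mean of the
    user's consumed item vectors and return the top-k item ids."""
    profile = np.mean([item_vectors[i] for i in user_history], axis=0)
    profile /= np.linalg.norm(profile)
    scores = {}
    for item_id, vec in item_vectors.items():
        if item_id in user_history:
            continue  # only recommend items the user has not seen
        scores[item_id] = float(vec @ profile / np.linalg.norm(vec))
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Normalizing each feature block before weighting keeps any single feature's embedding scale from dominating the fused vector, which is one simple way the relative influence of features could be tuned.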
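The two fairness metrics named in the abstract have widely used definitions, sketched below under the assumption that item coverage is the fraction of the catalog appearing in at least one user's top-k list, and that popularity bias is measured as the average training-set popularity of recommended items; the thesis's exact formulations may differ.

```python
# Minimal sketch of two common fairness metrics over top-k lists;
# these are standard definitions, assumed rather than taken from the thesis.
from collections import Counter

def item_coverage(rec_lists, catalog_size):
    """Fraction of the catalog recommended to at least one user."""
    recommended = {item for recs in rec_lists for item in recs}
    return len(recommended) / catalog_size

def average_recommendation_popularity(rec_lists, interactions):
    """Mean training-set popularity (interaction count) of recommended
    items; higher values indicate stronger popularity bias."""
    pop = Counter(item for _, item in interactions)  # (user, item) pairs
    counts = [pop[item] for recs in rec_lists for item in recs]
    return sum(counts) / len(counts) if counts else 0.0
```

Under these definitions, a fairer recommender pushes item coverage up (more of the catalog gets exposure) and average recommendation popularity down (recommendations lean less heavily on already-popular items).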