Fairness in Collaborative Filtering Recommender Systems
A Comparative Analysis of Trade-offs Across Model Architectures
J. Kang (TU Delft - Electrical Engineering, Mathematics and Computer Science)
M. Mansoury – Mentor (TU Delft - Multimedia Computing)
Nergis Tomen – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)
Abstract
Recommender systems personalize content by predicting user preferences, but this often results in unequal treatment of users and items: some users receive lower-quality recommendations, while niche items remain underexposed. Although fairness-enhancing interventions exist, applying them can obscure the extent to which disparities stem from model architecture alone.
This study investigates how collaborative filtering architectures affect both accuracy and fairness. We evaluate six models, including two non-personalized baselines, across two public datasets using a unified pipeline without fairness-specific interventions.
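The abstract does not name the software behind this unified pipeline; as a minimal sketch only, assuming an off-the-shelf framework such as RecBole (whose model zoo includes NeuMF, SLIMElastic, and LightGCN), a shared evaluation setup could look as follows. The model list, dataset names, and configuration values are illustrative assumptions, not the thesis's actual settings.

```python
# Illustrative sketch (not the thesis's actual pipeline): running several
# collaborative filtering models through one identical evaluation setup
# with RecBole, without any fairness-specific intervention.
from recbole.quick_start import run_recbole

# Assumed model list; the abstract names only LightGCN, SLIMElastic, and NeuMF.
MODELS = ["Pop", "ItemKNN", "BPR", "NeuMF", "SLIMElastic", "LightGCN"]
# Hypothetical dataset names; the two public datasets are not named in the abstract.
DATASETS = ["ml-100k", "lastfm"]

config = {
    "metrics": ["Recall", "NDCG"],  # accuracy metrics shared by all models
    "topk": [10],
    "valid_metric": "NDCG@10",
}

for dataset in DATASETS:
    for model in MODELS:
        # Every model is trained and evaluated under the same configuration,
        # so differences in results reflect the architecture itself.
        result = run_recbole(model=model, dataset=dataset, config_dict=config)
        print(dataset, model, result["test_result"])
```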
Our results reveal a general trade-off: models with higher accuracy often exhibit greater fairness disparities, particularly on the user side. For example, LightGCN combines strong accuracy with relatively high item-side fairness, whereas SLIMElastic ranks high in accuracy but exacerbates unfairness. However, this trade-off is not uniform across datasets; NeuMF degrades notably on sparser data.
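The disparity measures behind these comparisons are not spelled out in the abstract; purely as an illustration, two proxies commonly used in the fairness literature are the gap in recommendation quality between more- and less-active users (user side) and the Gini coefficient of item exposure (item side). The sketch below assumes per-user NDCG scores and item exposure counts have already been computed; the numbers in the usage lines are fabricated toy values.

```python
import numpy as np

def user_side_gap(ndcg_per_user, is_active):
    """Illustrative user-side disparity: difference in mean NDCG between
    active and less-active users (a larger gap indicates less fairness)."""
    ndcg = np.asarray(ndcg_per_user, dtype=float)
    active = np.asarray(is_active, dtype=bool)
    return ndcg[active].mean() - ndcg[~active].mean()

def item_exposure_gini(exposure_counts):
    """Illustrative item-side disparity: Gini coefficient of how often each
    item appears in top-k lists (0 = equal exposure, values near 1 = exposure
    concentrated on a few items)."""
    x = np.sort(np.asarray(exposure_counts, dtype=float))
    n = x.size
    if x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    # Standard Gini formula on values sorted in ascending order.
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Toy usage with made-up numbers, purely for demonstration.
print(user_side_gap([0.42, 0.38, 0.21, 0.19], [True, True, False, False]))
print(item_exposure_gini([50, 30, 15, 4, 1]))
```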
These findings demonstrate that model architecture alone can shape fairness–accuracy trade-offs, highlighting the importance of considering dataset characteristics and model design when selecting or developing recommender systems.