Mitigating Mainstream Bias in Recommendation via Cost-sensitive Learning

Conference Paper (2023)
Author(s)

Roger Zhe Li (TU Delft - Multimedia Computing)

Julián Urbano (TU Delft - Multimedia Computing)

A. Hanjalic (TU Delft - Intelligent Systems)

Research Group
Multimedia Computing
Copyright
© 2023 Roger Zhe Li, Julián Urbano, A. Hanjalic
DOI
https://doi.org/10.1145/3578337.3605134
Publication Year
2023
Language
English
Pages (from-to)
135-142
ISBN (electronic)
9798400700736
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Mainstream bias, where some users receive poor recommendations because their preferences are uncommon or simply because they are less active, is an important aspect to consider regarding fairness in recommender systems. Existing methods to mitigate mainstream bias either do not explicitly model the importance of these non-mainstream users or, when they do, do so in a way that is not necessarily compatible with the data and recommendation model at hand. In contrast, we use recommendation utility as a more generic and implicit proxy to quantify mainstreamness, and propose a simple user-weighting approach that incorporates it into the training process while taking the cost of potential recommendation errors into account. We provide extensive experimental results showing that quantifying mainstreamness via utility is better at identifying non-mainstream users, and that these users are indeed better served when the model is trained in a cost-sensitive way. This is achieved with negligible or no loss in overall recommendation accuracy, meaning that the models learn a better balance across users. In addition, we show that research of this kind, which evaluates recommendation quality at the level of individual users, may not be reliable unless enough interactions are used when assessing model performance.
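To make the abstract's central idea concrete, the following is a minimal sketch of utility-based cost-sensitive user weighting, assuming per-user utilities (e.g., NDCG@k) come from a validation pass over an already-trained model. The function name `utility_to_weights`, the exponent `gamma`, and the example per-user losses are hypothetical illustrations of the general approach, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical per-user validation utilities (e.g., NDCG@k), one per user.
# Low utility is taken as an implicit signal that a user is non-mainstream.
val_utility = np.array([0.82, 0.75, 0.31, 0.08, 0.55])

def utility_to_weights(utility, gamma=1.0, eps=1e-8):
    """Map per-user utility to cost-sensitive training weights.

    Users with lower utility (poorly served, presumably non-mainstream)
    receive higher weights. `gamma` controls how aggressively the
    weighting favors them; weights are normalized to mean 1 so the
    overall loss scale is unchanged.
    """
    raw = (1.0 - utility) ** gamma
    return raw / (raw.mean() + eps)

weights = utility_to_weights(val_utility, gamma=2.0)

# Weighted training objective: each user's loss term is scaled by
# their weight before averaging (e.g., a mean BPR loss per user).
per_user_loss = np.array([0.40, 0.52, 0.90, 1.30, 0.61])
weighted_loss = np.mean(weights * per_user_loss)
print(weights, weighted_loss)
```

Normalizing the weights to mean 1 keeps the total loss on the same scale as the unweighted objective, so the weighting only redistributes emphasis toward poorly served users rather than changing the effective learning rate.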