Ensuring Fairness in Group Recommendations by Rank-Sensitive Balancing of Relevance

Conference Paper (2020)
Author(s)

M. Kaya (TU Delft - Web Information Systems)

Derek Bridge (University College Cork)

N. Tintarev (TU Delft - Web Information Systems)

Research Group
Web Information Systems
Copyright
© 2020 M. Kaya, Derek Bridge, N. Tintarev
DOI related publication
https://doi.org/10.1145/3383313.3412232
Publication Year
2020
Language
English
Pages (from-to)
101-110
ISBN (electronic)
978-1-4503-7583-2
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

For group recommendations, one objective is to recommend an ordered set of items, a top-N, to a group such that each individual recommendation is relevant for everyone. A common way to do this is to select items on which the group can agree, using so-called ‘aggregation strategies’. One weakness of these aggregation strategies is that they select items independently of each other. They therefore cannot guarantee properties, such as fairness, that apply to the set of recommendations as a whole. In this paper, we give a definition of fairness that ‘balances’ the relevance of the recommended items across the group members in a rank-sensitive way. Informally, an ordered set of recommended items is considered fair to a group if the relevance of the items in the top-N is balanced across the group members for each prefix of the top-N. In other words, the first item in the top-N should, as far as possible, balance the interests of all group members; the first two items taken together must do the same; so must the first three; and so on up to N. In this paper, we formalize this notion of rank-sensitive balance and provide a greedy algorithm (GFAR) for finding a top-N set of group recommendations that satisfies our definition. We compare the performance of GFAR to five approaches from the literature on two datasets, one from each of the movie and music domains. We evaluate performance for 42 different configurations (two datasets, seven different group sizes, three different group types) and for ten evaluation metrics. We find that GFAR performs significantly better than all other algorithms in around 43% of cases; in only 10% of cases is some other algorithm significantly better than GFAR. Furthermore, GFAR performs particularly well in the most difficult cases, where groups are large and interests within the group diverge. We attribute GFAR’s success both to its rank-sensitivity and to its way of balancing relevance. Current methods do not define fairness in a rank-sensitive way (although some achieve a degree of rank-sensitivity through the use of greedy algorithms), and none define balance in the way that we do.
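To make the idea of greedy, prefix-by-prefix balancing concrete, the sketch below shows a generic selection loop of the kind the abstract describes. It is an illustration only: the function name greedy_prefix_balanced_top_n and the scoring rule (appending the candidate that maximizes the worst-off member's cumulative relevance) are hypothetical stand-ins for exposition, not the paper's exact GFAR definition of balance, which is given in the full text.

```python
# Illustrative sketch: a greedy, prefix-balancing top-N selection in the spirit
# of the rank-sensitive fairness described in the abstract. The specific scoring
# rule below (maximize the minimum cumulative relevance across members at each
# prefix) is an assumption for illustration, not the paper's GFAR formula.

from typing import Dict, Hashable, List

User = Hashable
Item = Hashable


def greedy_prefix_balanced_top_n(
    relevance: Dict[Item, Dict[User, float]],
    group: List[User],
    n: int,
) -> List[Item]:
    """Greedily build a top-N so that every prefix balances relevance.

    relevance[item][user] is the predicted relevance of `item` for `user`.
    At each step, the candidate chosen is the one that maximizes the
    cumulative relevance of the currently worst-off group member.
    """
    selected: List[Item] = []
    cumulative = {u: 0.0 for u in group}  # relevance accumulated per member so far
    candidates = set(relevance)

    while candidates and len(selected) < n:
        def worst_off_after(item: Item) -> float:
            # Minimum cumulative relevance across the group if `item` were appended.
            return min(cumulative[u] + relevance[item].get(u, 0.0) for u in group)

        best = max(candidates, key=worst_off_after)
        selected.append(best)
        candidates.remove(best)
        for u in group:
            cumulative[u] += relevance[best].get(u, 0.0)

    return selected


if __name__ == "__main__":
    # Toy two-person group whose tastes diverge on items "a" and "b".
    rel = {
        "a": {"alice": 0.9, "bob": 0.1},
        "b": {"alice": 0.1, "bob": 0.9},
        "c": {"alice": 0.5, "bob": 0.5},
    }
    print(greedy_prefix_balanced_top_n(rel, ["alice", "bob"], n=3))
```

In this toy example the compromise item "c" is chosen first, because the first prefix must already balance both members' interests; items that strongly favour only one member are deferred to later prefixes.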
