As important tools for information filtering, recommendation systems have greatly improved the efficiency with which users access information in daily life by providing personalized suggestions. However, as people's reliance on them grows, recent studies have gradually revealed their potential risks of social unfairness, such as gender discrimination arising from job recommendations. Such unfairness not only harms the interests of specific individuals or groups but also threatens the credibility and long-term sustainability of these systems. Therefore, building fairness-aware recommendation systems that proactively identify and mitigate unfairness is crucial for achieving responsible recommendation services. This study systematically evaluates the effectiveness of current fairness intervention strategies. Specifically, pre-processing methods (data relabeling and resampling) and post-processing methods (re-ranking, calibration, and equity of attention) are selected and implemented on two datasets, MovieLens-1M and Lastfm-NL, and then comprehensively evaluated with respect to two types of metrics: accuracy and fairness. The experimental results show that different methods are effective at improving different fairness targets, with varying degrees of accuracy loss or gain. This paper further explores the trade-off between maintaining accuracy and improving fairness across intervention methods, and proposes directions for future improvement of fairness-aware recommendation systems in light of the experimental results.
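To make the flavor of the post-processing interventions concrete, the following is a minimal, illustrative sketch of a fairness-aware re-ranking step, not the implementation evaluated in this study. The function name `fairness_rerank`, the `min_share` parameter, and the "protected" group label are hypothetical; the sketch only assumes a greedy top-k re-ranking that reserves a minimum exposure share for a protected item group.

```python
from typing import List, Tuple

def fairness_rerank(
    candidates: List[Tuple[str, float, str]],  # (item_id, relevance score, group label)
    k: int,
    min_share: float = 0.3,
) -> List[str]:
    """Greedy re-ranking sketch: select high-relevance items while enforcing a
    minimum exposure share for the protected group within the top-k list."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    protected_needed = int(round(min_share * k))

    # Split candidates by group, preserving the relevance order within each pool.
    protected_pool, other_pool = [], []
    for item in ranked:
        (protected_pool if item[2] == "protected" else other_pool).append(item)

    result: List[Tuple[str, float, str]] = []
    for slot in range(k):
        slots_left = k - slot
        protected_in_list = sum(1 for i in result if i[2] == "protected")
        # Force a protected item only when the remaining slots would otherwise
        # make the minimum-share constraint impossible to satisfy.
        must_pick_protected = (protected_needed - protected_in_list) >= slots_left
        if must_pick_protected and protected_pool:
            result.append(protected_pool.pop(0))
        elif protected_pool and (not other_pool or protected_pool[0][1] >= other_pool[0][1]):
            result.append(protected_pool.pop(0))
        elif other_pool:
            result.append(other_pool.pop(0))
        elif protected_pool:
            result.append(protected_pool.pop(0))
    return [item_id for item_id, _, _ in result]


# Example: re-rank five candidates into a top-3 list with at least ~30% protected-group exposure.
cands = [("a", 0.9, "other"), ("b", 0.8, "other"), ("c", 0.7, "protected"),
         ("d", 0.6, "other"), ("e", 0.5, "protected")]
print(fairness_rerank(cands, k=3))  # -> ['a', 'b', 'c']
```

The sketch illustrates the accuracy-fairness trade-off discussed above: the constraint can demote a higher-relevance item (here, "d") in favor of a protected-group item, trading a small accuracy loss for improved group exposure.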