Machine learning has revolutionized recommendation systems by employing ranking models for personalized item suggestions. Despite their effectiveness, learning-to-rank (LTR) models often operate as complex systems, making it difficult to discern the factors influencing their ranking decisions. This lack of transparency raises concerns about potential errors, biases, and ethical implications. As a result, interpretable LTR models have emerged as a solution to enhance transparency and mitigate these challenges.
Currently, the state of the art in intrinsically interpretable ranking models is led by generalized additive models (GAMs). However, ranking GAMs have limitations that hinder their successful application in experimental environments, such as being computationally intensive and struggling to handle high-dimensional data. In contrast, post-hoc methods can potentially provide more scalable and efficient solutions for real-time ranking. In this study, we propose a post-hoc explanation method for learning-to-rank tasks built on interpretable GAMs. Evaluation results measured by Kendall's 𝜏 indicate that our model can effectively explain different types of black-box rankers.
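To make the setting concrete, the sketch below illustrates the general idea of post-hoc GAM distillation and Kendall's 𝜏 fidelity evaluation under stated assumptions; it is not the paper's actual method. The black-box ranker, the synthetic data, the query grouping, and the use of pyGAM's LinearGAM are all illustrative choices.

```python
# Minimal sketch (assumptions: synthetic data, GradientBoostingRegressor as a
# stand-in black-box ranker, pyGAM as the interpretable surrogate).
import numpy as np
from scipy.stats import kendalltau
from sklearn.ensemble import GradientBoostingRegressor
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                                        # item features
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=2000)   # relevance signal

# 1. "Black-box" ranker: any opaque model that scores items.
black_box = GradientBoostingRegressor().fit(X, y)
bb_scores = black_box.predict(X)

# 2. Post-hoc surrogate: fit an interpretable GAM to the black-box scores,
#    so each feature's shape function explains its contribution to the score.
gam = LinearGAM(s(0) + s(1) + s(2) + s(3) + s(4)).fit(X, bb_scores)
gam_scores = gam.predict(X)

# 3. Fidelity: per-query Kendall's tau between the black-box and surrogate
#    rankings (items are split into synthetic "queries" of 20 for illustration).
taus = []
for q in np.array_split(np.arange(len(X)), len(X) // 20):
    tau, _ = kendalltau(bb_scores[q], gam_scores[q])
    taus.append(tau)
print(f"mean per-query Kendall's tau: {np.mean(taus):.3f}")
```

A 𝜏 close to 1 would indicate that the surrogate reproduces the black-box ordering faithfully while remaining interpretable through its per-feature shape functions.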