Are We Evaluating Rigorously? Benchmarking Recommendation for Reproducible Evaluation and Fair Comparison

Conference Paper (2020)
Authors

Zhu Sun (Macquarie University)

Di Yu (Shanghai University of Finance and Economics)

H. Fang (Shanghai University of Finance and Economics)

J. Yang (TU Delft - Web Information Systems)

Xinghua Qu (Nanyang Technological University)

Jie Zhang (Nanyang Technological University)

Cong Geng (Shanghai University of Finance and Economics)

Publication Year
2020
Language
English
Research Group
Web Information Systems
Pages (from-to)
23-32
ISBN (electronic)
9781450375832
DOI:
https://doi.org/10.1145/3383313.3412489

Abstract

With the tremendous number of recommendation algorithms proposed every year, one critical issue has attracted a considerable amount of attention: there are no effective benchmarks for evaluation, which leads to two major concerns, i.e., unreproducible evaluation and unfair comparison. This paper aims to conduct rigorous (i.e., reproducible and fair) evaluation for implicit-feedback based top-N recommendation algorithms. We first systematically review 85 recommendation papers published at eight top-tier conferences (e.g., RecSys, SIGIR) to summarize important evaluation factors, such as data splitting and parameter tuning strategies. Through a holistic empirical study, the impacts of different factors on recommendation performance are then analyzed in depth. Following that, we create benchmarks with standardized procedures and provide the performance of seven well-tuned state-of-the-art algorithms across six metrics on six widely-used datasets as a reference for later studies. Additionally, we release a user-friendly Python toolkit, which differs from existing ones in addressing the broad scope of rigorous evaluation for recommendation. Overall, our work sheds light on the issues in recommendation evaluation and lays the foundation for further investigation. Our code and datasets are available at GitHub (https://github.com/AmazingDD/daisyRec).
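Two of the evaluation factors named in the abstract, data splitting and top-N ranking metrics for implicit feedback, can be illustrated with a minimal sketch. The Python snippet below is not taken from the released daisyRec toolkit; the function names, the leave-one-out-by-timestamp split, and the single-held-out-item form of HR@N and NDCG@N are illustrative assumptions about one common evaluation setup, not the paper's prescribed procedure.

```python
# Minimal sketch (not the daisyRec API): leave-one-out splitting by timestamp
# and top-N metrics (HR@N, NDCG@N) for implicit-feedback evaluation.
# All names below are illustrative assumptions.
import math
from collections import defaultdict

def leave_one_out_split(interactions):
    """Hold out each user's most recent interaction as the test item.

    interactions: list of (user, item, timestamp) tuples (implicit feedback).
    Returns (train, test) where train is a list of (user, item) pairs and
    test maps each user to the held-out item.
    """
    by_user = defaultdict(list)
    for user, item, ts in interactions:
        by_user[user].append((ts, item))
    train, test = [], {}
    for user, events in by_user.items():
        events.sort()                          # chronological order
        *history, (_, last_item) = events
        test[user] = last_item                 # most recent interaction -> test
        train.extend((user, item) for _, item in history)
    return train, test

def hr_ndcg_at_n(ranked_items, test_item, n=10):
    """Hit Ratio and NDCG for one user's top-N list with a single relevant item."""
    top_n = ranked_items[:n]
    if test_item not in top_n:
        return 0.0, 0.0
    rank = top_n.index(test_item)              # 0-based position of the hit
    return 1.0, 1.0 / math.log2(rank + 2)      # one relevant item -> IDCG = 1

# Toy usage: one user with three timestamped interactions.
data = [("u1", "i1", 1), ("u1", "i2", 2), ("u1", "i3", 3)]
train, test = leave_one_out_split(data)
print(train)                                   # [('u1', 'i1'), ('u1', 'i2')]
print(hr_ndcg_at_n(["i5", "i3", "i4"], test["u1"], n=3))  # (1.0, ~0.631)
```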

Metadata only record. There are no files for this record.