DaisyRec 2.0: Benchmarking Recommendation for Rigorous Evaluation

Journal Article (2023)
Author(s)

Zhu Sun (Institute of High Performance Computing)

Hui Fang

J. Yang (TU Delft - Web Information Systems)

Xinghua Qu (Bytedance AI Lab)

Hongyang Liu (Yanshan University)

Di Yu (Singapore Management University)

Yew Soon Ong (Nanyang Technological University)

Jie Zhang (Nanyang Technological University)

Research Group
Web Information Systems
Copyright
© 2023 Zhu Sun, Hui Fang, J. Yang, Xinghua Qu, Hongyang Liu, Di Yu, Yew Soon Ong, Jie Zhang
DOI related publication
https://doi.org/10.1109/TPAMI.2022.3231891
Publication Year
2023
Language
English
Issue number
7
Volume number
45
Pages (from-to)
8206-8226
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

A critical issue has recently loomed large in the field of recommender systems: the lack of effective benchmarks for rigorous evaluation, which leads to unreproducible evaluation and unfair comparison. We therefore conduct studies from both theoretical and experimental perspectives, aiming to benchmark recommendation for rigorous evaluation. For the theoretical study, a series of hyper-factors affecting recommendation performance throughout the whole evaluation chain are systematically summarized and analyzed via an exhaustive review of 141 papers published at eight top-tier conferences from 2017 to 2020. We then classify them into model-independent and model-dependent hyper-factors, and accordingly define and discuss different modes of rigorous evaluation in depth. For the experimental study, we release the DaisyRec 2.0 library, which integrates these hyper-factors to perform rigorous evaluation, and use it to conduct a holistic empirical study that unveils the impact of different hyper-factors on recommendation performance. Supported by the theoretical and experimental studies, we finally create benchmarks for rigorous evaluation by proposing standardized procedures and providing the performance of ten state-of-the-art methods across six evaluation metrics on six datasets as a reference for later studies. Overall, our work sheds light on the issues in recommendation evaluation, provides potential solutions for rigorous evaluation, and lays a foundation for further investigation.
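As a rough illustration of the kind of standardized top-N evaluation the benchmark advocates, the minimal sketch below computes two common ranking metrics (HR@K and NDCG@K) from ranked recommendation lists and a held-out test set. The function names, data layout, and example data are hypothetical illustrations, not DaisyRec 2.0's actual API.

```python
import math

def hit_ratio_at_k(ranked_items, relevant_items, k):
    """HR@K: 1 if any held-out item appears in the top-K list, else 0."""
    return int(any(item in relevant_items for item in ranked_items[:k]))

def ndcg_at_k(ranked_items, relevant_items, k):
    """NDCG@K with binary relevance: DCG of the top-K list divided by the ideal DCG."""
    dcg = sum(
        1.0 / math.log2(rank + 2)  # rank is 0-based, so +2 inside the log
        for rank, item in enumerate(ranked_items[:k])
        if item in relevant_items
    )
    ideal_hits = min(len(relevant_items), k)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

def evaluate(recommendations, test_set, k=10):
    """Average HR@K and NDCG@K over all test users.

    recommendations: {user: ranked list of item ids predicted by a model}
    test_set:        {user: set of held-out ground-truth item ids}
    """
    users = [u for u in test_set if u in recommendations]
    hr = sum(hit_ratio_at_k(recommendations[u], test_set[u], k) for u in users) / len(users)
    ndcg = sum(ndcg_at_k(recommendations[u], test_set[u], k) for u in users) / len(users)
    return {f"HR@{k}": hr, f"NDCG@{k}": ndcg}

# Hypothetical example: two users, each with one held-out item (leave-one-out style split).
recs = {"u1": ["i3", "i7", "i1"], "u2": ["i5", "i2", "i9"]}
test = {"u1": {"i7"}, "u2": {"i4"}}
print(evaluate(recs, test, k=3))  # {'HR@3': 0.5, 'NDCG@3': ~0.315}
```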

Files

DaisyRec_2.0_Benchmarking_Reco... (pdf | 3.74 MB)
Embargo expired on 26-06-2023
License info not available