Leveraging Large Language Models for Sequential Recommendation

Conference Paper (2023)
Author(s)

Jesse Harte (Delivery Hero, Student TU Delft)

Wouter Zorgdrager (Delivery Hero)

Panagiotis Louridas (Athens University of Economics and Business)

Asterios Katsifodimos (TU Delft - Web Information Systems)

Dietmar Jannach (University of Klagenfurt)

Marios Fragkoulis (Delivery Hero)

Research Group
Web Information Systems
Copyright
© 2023 Jesse Harte, Wouter Zorgdrager, Panos Louridas, Asterios Katsifodimos, Dietmar Jannach, Marios Fragkoulis
DOI
https://doi.org/10.1145/3604915.3610639
Publication Year
2023
Language
English
Pages (from-to)
1096-1102
ISBN (electronic)
979-8-4007-0241-9
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Sequential recommendation problems have received increasing attention in research during the past few years, leading to a large variety of algorithmic approaches. In this work, we explore how large language models (LLMs), which are currently having a disruptive effect on many AI-based applications, can be used to build or improve sequential recommendation approaches. Specifically, we devise and evaluate three approaches that leverage the power of LLMs in different ways. Our experiments on two datasets show that initializing the state-of-the-art sequential recommendation model BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20% compared to the vanilla BERT4Rec model. Furthermore, we find that a simple approach that uses LLM embeddings directly to produce recommendations can achieve competitive performance by surfacing semantically related items. We publicly share the code and data of our experiments to ensure reproducibility.
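
To illustrate the second finding, the sketch below shows how LLM embeddings alone can drive recommendations: items are embedded from their textual descriptions, a session is represented by the mean of its item embeddings, and candidates are ranked by cosine similarity. This is a minimal illustration rather than the paper's exact pipeline; the sentence-transformers backend, the all-MiniLM-L6-v2 model, the toy item titles, and mean-pooling of the session are assumptions made for the example.

# Minimal sketch: content-based next-item recommendation from LLM embeddings.
# Assumptions (not stated in the abstract): sentence-transformers as the
# embedding backend, the all-MiniLM-L6-v2 model, toy item titles, and
# mean-pooling to represent a session.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

item_titles = ["Margherita Pizza", "Pepperoni Pizza", "Caesar Salad", "Tiramisu"]
# Unit-normalized embeddings, shape (n_items, dim); dot product = cosine similarity.
item_emb = model.encode(item_titles, normalize_embeddings=True)

def recommend(session, k=2):
    """Rank items by cosine similarity to the mean embedding of the session."""
    session_vec = item_emb[session].mean(axis=0)
    session_vec /= np.linalg.norm(session_vec)
    scores = item_emb @ session_vec
    scores[session] = -np.inf  # exclude items already in the session
    return np.argsort(-scores)[:k]

# A session containing only "Margherita Pizza" surfaces semantically close items.
print([item_titles[i] for i in recommend([0])])

The embedding-initialization variant reported above would, roughly, project such LLM vectors to the dimensionality of BERT4Rec's item-embedding table and use them as its initial weights before training, rather than scoring with them directly.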