Adaptive In-Context Learning with Large Language Models for Bundle Generation

Conference Paper (2024)
Author(s)

Zhu Sun (Singapore University of Technology and Design)

Kaidong Feng (Yanshan University)

Jie Yang (TU Delft - Web Information Systems)

Xinghua Qu (Tianqiao and Chrissy Chen Institute)

Hui Fang (Shanghai University of Finance and Economics)

Yew-Soon Ong (Nanyang Technological University)

Wenyuan Liu (Yanshan University)

Research Group
Web Information Systems
DOI
https://doi.org/10.1145/3626772.3657808
Publication Year
2024
Language
English
Pages (from-to)
966-976
ISBN (print)
979-8-4007-0431-4
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Most existing bundle generation approaches are limited to generating fixed-size bundles. Furthermore, they often neglect the underlying user intents reflected by the bundles during generation, resulting in less intelligible bundles. This paper addresses these limitations by exploring two interrelated tasks, i.e., personalized bundle generation and the underlying intent inference, based on different user sessions. Inspired by the reasoning capabilities of large language models (LLMs), we propose an adaptive in-context learning paradigm that allows LLMs to draw tailored lessons from related sessions serving as demonstrations, enhancing performance on target sessions. Specifically, we first employ retrieval-augmented generation to identify nearest-neighbor sessions, and then carefully design prompts to guide LLMs in executing both tasks on these neighbor sessions. To tackle reliability and hallucination challenges, we further introduce (1) a self-correction strategy that promotes mutual improvement of the two tasks without supervision signals and (2) an auto-feedback mechanism that provides adaptive supervision based on the distinct mistakes LLMs make on different neighbor sessions. In this way, the target session gains customized lessons for improved performance by observing the demonstrations of its neighbor sessions. Experiments on three real-world datasets demonstrate the effectiveness of the proposed method.
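
The abstract outlines a pipeline of neighbor retrieval, per-neighbor demonstrations with corrective feedback, and prompting on the target session. The sketch below is a minimal, hypothetical Python illustration of that flow; the helper names (retrieve_neighbors, call_llm, score), the overlap-based retrieval, and the string-matching feedback are illustrative assumptions, not the authors' implementation, which relies on retrieval-augmented generation and LLM-based self-correction.

```python
"""Illustrative sketch (not the authors' code) of the adaptive in-context
learning loop described in the abstract: retrieve neighbor sessions, let the
LLM practice bundle generation and intent inference on them with corrective
feedback, then use those solved neighbors as demonstrations for the target
session. All helpers are hypothetical placeholders."""

from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Session:
    items: list[str]                                        # items the user interacted with
    bundles: list[list[str]] = field(default_factory=list)  # ground-truth bundles (known for neighbors)
    intents: list[str] = field(default_factory=list)        # ground-truth intents (known for neighbors)


def retrieve_neighbors(target: Session, corpus: list[Session], k: int = 3) -> list[Session]:
    """Placeholder retrieval: rank sessions by item overlap with the target.
    The paper instead retrieves nearest neighbors via retrieval-augmented generation."""
    def overlap(s: Session) -> int:
        return len(set(s.items) & set(target.items))
    return sorted(corpus, key=overlap, reverse=True)[:k]


def call_llm(prompt: str) -> str:
    """Placeholder: wire this to a real LLM API; here it returns a dummy answer."""
    return "bundle 1: [...] | intent: [...]"


def score(prediction: str, neighbor: Session) -> float:
    """Placeholder metric: fraction of ground-truth bundle items mentioned in the prediction."""
    truth = {item for bundle in neighbor.bundles for item in bundle}
    return sum(item in prediction for item in truth) / len(truth) if truth else 0.0


def build_demonstration(neighbor: Session, feedback_rounds: int = 2) -> str:
    """Solve both tasks on a neighbor session; when the LLM errs, feed the known
    bundles back as corrective supervision (a stand-in for the auto-feedback mechanism)."""
    prompt = ("Group the following items into bundles and state each bundle's intent.\n"
              f"Items: {neighbor.items}\n")
    prediction = call_llm(prompt)
    for _ in range(feedback_rounds):
        if score(prediction, neighbor) == 1.0:  # neighbor solved: nothing to correct
            break
        prompt += (f"\nPrevious attempt: {prediction}"
                   f"\nCorrect bundles: {neighbor.bundles}\nRevise your answer.")
        prediction = call_llm(prompt)
    return f"Items: {neighbor.items}\nBundles and intents: {prediction}"


def generate_for_target(target: Session, corpus: list[Session]) -> str:
    """Adaptive in-context learning: demonstrations are tailored lessons drawn
    from the target session's nearest neighbors."""
    demos = [build_demonstration(n) for n in retrieve_neighbors(target, corpus)]
    prompt = ("You will see solved example sessions, then a new session.\n\n"
              + "\n\n".join(demos)
              + f"\n\nNow group these items into bundles and infer each bundle's intent:\n{target.items}")
    return call_llm(prompt)


if __name__ == "__main__":
    corpus = [Session(items=["tent", "sleeping bag"],
                      bundles=[["tent", "sleeping bag"]],
                      intents=["camping trip"])]
    target = Session(items=["tent", "camping stove", "headlamp"])
    print(generate_for_target(target, corpus))
```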

Files

3626772.3657808.pdf
(pdf | 4.98 MB)
- Embargo expired in 11-01-2025