Controlled Yet Natural: A Hybrid BDI-LLM Conversational Agent for Child Helpline Training

Conference Paper (2025)
Author(s)

M. Al Owayyed (TU Delft - Interactive Intelligence, King Saud University)

A.A. Denga (Student TU Delft)

W.P. Brinkman (TU Delft - Interactive Intelligence)

DOI
https://doi.org/10.1145/3717511.3747075 (final published version)
Publication Year
2025
Language
English
Article number
17
Publisher
ACM
ISBN (electronic)
979-8-4007-1508-2
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Child helpline training often relies on human-led roleplay, which is both time- and resource-intensive. To address this, rule-based interactive agent simulations have been proposed to provide a structured training experience for new counsellors. However, these agents can suffer from limited language understanding and response variety. To overcome these limitations, we present a hybrid interactive agent that integrates Large Language Models (LLMs) into a rule-based Belief-Desire-Intention (BDI) framework, simulating more realistic virtual child chat conversations. This hybrid solution incorporates LLMs into three components: intent recognition, response generation, and a bypass mechanism. We evaluated the system through two studies: a script-based assessment comparing LLM-generated responses to human-crafted responses, and a within-subject experiment (N = 37) comparing the LLM-integrated agent with a rule-based version. The first study provided evidence that the three LLM components were non-inferior to human-crafted responses. In the second study, we found credible support for two hypotheses: participants perceived the LLM-integrated agent as more believable and reported more positive attitudes toward it than toward the rule-based agent. Additionally, although weaker, there was some support for increased engagement (posterior probability = 0.845, 95% HDI [-0.149, 0.465]). Our findings demonstrate the potential of integrating LLMs into rule-based systems, offering a promising direction for more flexible yet controlled training systems.