ContextBot

Improving Response Consistency in Crowd-Powered Conversational Systems for Affective Support Tasks

Conference Paper (2023)
Author(s)

Yao Ma (Student TU Delft)

T. Abbas (TU Delft - Web Information Systems)

Ujwal Gadiraju (TU Delft - Web Information Systems)

Research Group
Web Information Systems
Copyright
© 2023 Yao Ma, T. Abbas, Ujwal Gadiraju
DOI related publication
https://doi.org/10.1145/3603163.3609031
Publication Year
2023
Language
English
ISBN (electronic)
9798400702327
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Crowd-powered conversational systems (CPCS) solicit the wisdom of crowds to quickly respond to users' needs on demand. The very factors that make this a viable solution, such as the on-demand availability of diverse crowd workers, also lead to great challenges. The ever-changing pool of online workers powering conversations with individual users makes it particularly difficult to generate contextually consistent responses from a single user's standpoint. To tackle this, prior work has employed conversational facts extracted by workers to maintain a global memory, albeit with limited success. Through a controlled experiment, we explored whether a conversational agent, dubbed ContextBot, can provide workers with the required context on the fly for the successful completion of affective support tasks in CPCS, and examined the impact of ContextBot on workers' response quality and interaction experience. To this end, we recruited workers (N=351) from the Prolific crowdsourcing platform and carried out a 3×3 factorial between-subjects study. Experimental conditions varied based on (i) whether or not context was elicited and informed by motivational interviewing techniques (MI-adherent guidance, general guidance, and no guidance), and (ii) different conversational entry points for workers to produce responses (early, middle, and late). Our findings show that: (a) workers who entered the conversation earliest were more likely to produce highly consistent responses after interacting with ContextBot; (b) workers reported a better user experience after interacting with ContextBot when there was a long chat history to browse; (c) workers produced more professional responses, as endorsed by psychologists; and (d) interacting with ContextBot throughout task completion did not negatively impact workers' cognitive load. Our findings shed light on the implications of building intelligent interfaces for scaffolding strategies to preserve consistency in dialogue in CPCS.

Files

3603163.3609031.pdf
(pdf | 2.14 MB)