ContextBot: Improving Response Consistency in Crowd-Powered Conversational Systems
Abstract
Crowd-powered conversational systems (CPCS) solicit the wisdom of crowds to respond quickly to users' needs on demand. The very factors that make this a viable solution, such as the availability of diverse crowd workers on demand, also pose great challenges. The ever-changing pool of online workers powering conversations with individual users makes it particularly difficult to generate contextually consistent responses from a single user's standpoint. To tackle this, prior work has employed conversational facts extracted by workers to maintain a global memory, albeit with limited success. Leveraging systematic context in affective crowdsourcing tasks has remained unexplored. Through a controlled experiment, we explored whether a conversational agent, dubbed ContextBot, can provide workers with the required context on the fly for the successful completion of affective support tasks in CPCS, and examined the impact of ContextBot on workers' response quality and interaction experience. To this end, we recruited workers (N=351) from the Prolific crowdsourcing platform and carried out a 3×3 factorial between-subjects study. Experimental conditions varied based on (i) whether and how context elicitation was informed by motivational interviewing techniques (MI, non-MI, and chat history), and (ii) the conversational entry point at which workers produced responses (early, middle, and late). Our findings showed that workers who entered the conversation earliest were more likely to produce highly consistent responses after interacting with ContextBot, and that a better user experience can be expected from workers interacting with ContextBot at a late entry point. We also found that interacting with ContextBot throughout task completion did not negatively impact workers' cognitive load.