Break, Repair, Learn, Break Less

Investigating User Preferences for Assignment of Divergent Phrasing Learning Burden in Human-Agent Interaction to Minimize Conversational Breakdowns


Abstract

Conversational agents (CAs) occasionally fail to understand the user's intention or respond inappropriately due to the complexity of natural language. Such conversational breakdowns can occur when intent and entity predictions have low confidence scores. A promising repair strategy in these cases is for the CA to propose likely alternatives for the user to choose from. If one of these options matches the user's intention, the breakdown is repaired successfully. We propose that successful repairs should be followed by a learning mechanism to minimize future breakdowns: after a successful repair, the CA, the user, or both can learn the other's specific phrasing, preventing similar phrasings from causing recurring breakdowns. We compared user preferences for these learning mechanisms in a scenario-based study with manufacturing workers (). Our results showed that users most prefer to share the learning burden with the CA (61.3%), followed by outsourcing the learning burden entirely to the CA (60.7%), rather than shouldering it themselves.
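To make the repair-and-learn loop described above concrete, the following is a minimal illustrative sketch, not the system evaluated in the paper. It shows the variant in which the CA carries the learning burden: a breakdown is declared when no intent prediction clears a confidence threshold, the user picks from proposed alternatives, and the chosen mapping is stored so the same phrasing no longer breaks down. All names (IntentPrediction, PhrasingMemory, CONFIDENCE_THRESHOLD, handle_utterance) and the threshold value are assumptions for illustration.

```python
# Illustrative sketch only: class and function names and the threshold value
# are assumptions, not the implementation described in the paper.
from dataclasses import dataclass, field


@dataclass
class IntentPrediction:
    intent: str
    confidence: float


@dataclass
class PhrasingMemory:
    """Maps user-specific phrasings to intents learned from successful repairs."""
    learned: dict[str, str] = field(default_factory=dict)

    def lookup(self, utterance: str) -> str | None:
        return self.learned.get(utterance.lower().strip())

    def learn(self, utterance: str, intent: str) -> None:
        # After a successful repair, remember the divergent phrasing so the
        # same utterance no longer triggers a breakdown.
        self.learned[utterance.lower().strip()] = intent


CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff below which a breakdown is declared


def handle_utterance(utterance, predictions, memory, ask_user_to_choose):
    """Resolve an utterance to an intent, repairing and learning on low confidence."""
    # 1. A previously learned phrasing bypasses prediction entirely.
    remembered = memory.lookup(utterance)
    if remembered is not None:
        return remembered

    # 2. High-confidence prediction: no breakdown.
    best = max(predictions, key=lambda p: p.confidence)
    if best.confidence >= CONFIDENCE_THRESHOLD:
        return best.intent

    # 3. Breakdown: propose the most likely alternatives and let the user pick.
    alternatives = sorted(predictions, key=lambda p: p.confidence, reverse=True)[:3]
    chosen = ask_user_to_choose([p.intent for p in alternatives])

    # 4. Successful repair: the CA learns the user's phrasing for next time.
    if chosen is not None:
        memory.learn(utterance, chosen)
        return chosen
    return None  # repair failed; caller may escalate or ask the user to rephrase
```

In this sketch the burden assignment is a design choice: the user-learns variant would instead prompt the user to adopt the CA's canonical phrasing, and the shared variant would do both.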