We design a task to help identify how an agent can meaningfully engage with information through dialogue in order to foster collaboration. Specifically, the task involves a human and an agent sharing their memories of past events with each other, resulting in diverse information about those events. In a pilot study, we explore to what extent an LLM can be used to classify memories from the different sources as overlapping, complementary, or conflicting. Knowing which of these categories a piece of information falls into will help the agent decide how to address it in dialogue, for instance by asking for further information, adopting a shared perspective, or agreeing to disagree about a conflict. We find that the LLM struggles in particular to distinguish complementary from conflicting information, and that differing opinions about what is and is not implied by the event descriptions lead to many disagreements between the LLM and our human annotators. In future work, we will investigate to what extent conversing with the human can alleviate these issues.
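As a rough illustration only (not taken from the study itself), the pilot's classification step could be sketched as a pairwise prompt to an LLM; the prompt wording and the query_llm callable below are hypothetical placeholders.

```python
# Illustrative sketch: classify a pair of memory descriptions into one of the
# three relation types used in the abstract (overlapping, complementary, conflicting).

RELATIONS = ("overlapping", "complementary", "conflicting")

PROMPT_TEMPLATE = (
    "Two participants describe the same past event.\n"
    "Participant A remembers: {a}\n"
    "Participant B remembers: {b}\n"
    "Answer with exactly one word - overlapping, complementary, or conflicting - "
    "describing how the two memories relate."
)

def classify_memory_pair(memory_a: str, memory_b: str, query_llm) -> str:
    """Classify one memory pair; query_llm is any callable (hypothetical here)
    that takes a prompt string and returns the model's text response."""
    answer = query_llm(PROMPT_TEMPLATE.format(a=memory_a, b=memory_b)).strip().lower()
    # Fall back to "unknown" if the model does not answer with one of the labels.
    return answer if answer in RELATIONS else "unknown"
```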