Zero-shot learning for (dis)agreement detection in meeting transcripts

Comparing latent topic models and large language models

Abstract

This paper presents a novel approach to detecting moments of agreement and disagreement between participants in meeting transcripts without relying on labeled data. We propose a framework in which disagreement detection is decomposed into two steps: first identifying argumentative theses relevant to a given corpus of text, and then classifying every phrase in the text as in favor of, against, or expressing no opinion on a given thesis. To identify relevant theses, we compare the performance of a latent Dirichlet allocation-based topic model against that of a diverse set of large language models; for classifying the stance of a phrase with respect to a thesis, only large language models are used. We find that, while state-of-the-art large language models do not outperform topic modeling-based approaches at extracting semantically relevant content, they are able to present such content in a more concise and grammatically correct form. We also find that state-of-the-art large language models are not capable of accurately performing stance classification as defined above.