The generalizability of argument quality dimensions in NLP models
J.H. Nguyen (TU Delft - Electrical Engineering, Mathematics and Computer Science)
CM Jonker – Mentor (TU Delft - Interactive Intelligence)
Pradeep Kumar Murukannaiah – Mentor (TU Delft - Interactive Intelligence)
Michiel Van Der Meer – Mentor (Universiteit Leiden)
Abstract
This research focuses on measuring the quality of arguments. High-quality arguments help improve political discussions, leading to better decision-making. Wachsmuth et al. developed a taxonomy that breaks argument quality down into several dimensions. This work uses that taxonomy in combination with modern NLP models and conducts a cross-dataset examination of argument quality models. In particular, the models were investigated for their ability to generalize between dimensions. Overall, the results show no large difference in accuracy and agreement between predictions on the quality dimension a model was trained on and predictions on dimensions it was not trained on. From this, one can conclude that no generalization of argument quality dimensions by language models was found. Nevertheless, a qualitative analysis highlights findings that do indicate some generalization to other dimensions.
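The sketch below illustrates the kind of cross-dimension evaluation protocol the abstract describes: one classifier is trained per quality dimension and then scored on every dimension's test data using accuracy and an agreement measure (here Cohen's kappa). It is a minimal, hypothetical example: the dimension names are an illustrative subset of the Wachsmuth et al. taxonomy, the data is synthetic, and a simple logistic regression stands in for the NLP language models used in the thesis.

```python
# Minimal sketch of a cross-dimension evaluation loop.
# Assumptions (not from the thesis): synthetic features and labels,
# a logistic-regression stand-in for the actual language models,
# and an illustrative subset of dimension names.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
dimensions = ["cogency", "effectiveness", "reasonableness"]

# Synthetic per-dimension datasets: feature vectors with binary quality labels.
data = {
    dim: (rng.normal(size=(200, 16)), rng.integers(0, 2, size=200))
    for dim in dimensions
}

# Train one classifier per dimension on that dimension's training split.
models = {}
for dim, (X, y) in data.items():
    models[dim] = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])

# Cross-dimension evaluation: each model predicts every dimension's test
# split, reporting accuracy and Cohen's kappa both on and off its own
# training dimension.
for train_dim, model in models.items():
    for eval_dim, (X, y) in data.items():
        preds = model.predict(X[150:])
        acc = accuracy_score(y[150:], preds)
        kappa = cohen_kappa_score(y[150:], preds)
        print(f"train={train_dim:15s} eval={eval_dim:15s} "
              f"acc={acc:.2f} kappa={kappa:.2f}")
```

Comparing the diagonal (same train and evaluation dimension) with the off-diagonal entries of such a matrix is what allows the in-dimension versus cross-dimension differences discussed above to be quantified.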