Using Large Language Models to Detect Deliberative Elements in Public Discourse

Detecting Subjective Emotions in Public Discourse

Bachelor Thesis (2024)
Authors

B.C.P. Zuurbier (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Supervisors

Luciano Cavalcante Siebert (TU Delft - Interactive Intelligence)

A. Homayounirad (TU Delft - Interactive Intelligence)

Enrico Liscio (TU Delft - Interactive Intelligence)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2024
Language
English
Graduation Date
27-06-2024
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

To tackle societal issues such as climate change together with the population, public discourse should be scaled up. Such discourse benefits from mediation, which makes it more likely that participants understand each other and reconsider their points of view. Emotion detection can support the mediator in this task: positive emotions can improve communication, while negative emotions tend to make people irrational and irritated. However, because emotions are highly subjective, both prediction and evaluation become more difficult.

Still, Large Language Models (LLMs) could be used to detect these subjective emotions using different prompting strategies and label types. The experiment covered zero-shot, one-shot, few-shot, and Chain of Thought (CoT) strategies. Precision was higher for the one- and few-shot methods than for zero-shot. The CoT methods also showed an increase in precision, but a decrease in recall. The label types compared were hard majority labels, soft labels, and hard per-annotator labels. In conclusion, providing examples improved the performance of the LLM. The CoT strategies were more precise but gave worse overall predictions. Hard majority labels allow for more general predictions, whereas per-annotator hard labels capture the perspectives of individual annotators. Soft labels reflect the subjective nature of the task by providing probabilities instead of binary classifications.
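
As a rough illustration of these prompting strategies, the Python sketch below builds zero-shot, few-shot, and CoT prompts for emotion detection. The emotion label set, instruction wording, and example format are assumptions made here for illustration, not the prompts actually used in the thesis.

# Illustrative sketch of the prompting strategies described above.
# EMOTIONS and the instruction wording are assumed, not taken from the thesis.

EMOTIONS = ["anger", "joy", "sadness", "fear"]  # assumed label set

INSTRUCTION = (
    "Decide which of the following emotions are expressed in the text: "
    + ", ".join(EMOTIONS) + ". Answer with a comma-separated list."
)

def zero_shot(text: str) -> str:
    # No examples: the model relies only on the instruction.
    return f"{INSTRUCTION}\n\nText: {text}\nEmotions:"

def few_shot(text: str, examples: list[tuple[str, list[str]]]) -> str:
    # One labelled example gives the one-shot variant; several give few-shot.
    shots = "\n\n".join(
        f"Text: {t}\nEmotions: {', '.join(labels)}" for t, labels in examples
    )
    return f"{INSTRUCTION}\n\n{shots}\n\nText: {text}\nEmotions:"

def chain_of_thought(text: str) -> str:
    # CoT asks the model to reason step by step before committing to labels.
    return (
        f"{INSTRUCTION} First explain your reasoning step by step, "
        f"then give the final answer.\n\nText: {text}"
    )

# Hypothetical usage:
prompt = few_shot(
    "The council ignored us again.",
    [("We finally got the funding!", ["joy"])],
)

In these terms, the abstract's findings correspond to few_shot outperforming zero_shot on precision, and chain_of_thought trading recall for precision.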

The experiment was run on a small data sample, so it is recommended to apply the strategies to a larger sample. Investigating evaluation methods appropriate for subjective predictions is also recommended, so that the reported scores better reflect actual performance.
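
One direction such an evaluation could take, sketched below under assumed data, is to derive soft labels from per-annotator hard labels and score an LLM's predicted probabilities against them with a Brier-style score. The label set, annotations, and predictions are hypothetical; this is not the thesis's evaluation procedure.

# Sketch: scoring subjective predictions against annotator-derived soft labels.
from collections import Counter

def soft_labels(annotations: list[list[str]], emotions: list[str]) -> dict[str, float]:
    # Fraction of annotators who assigned each emotion to the item.
    counts = Counter(e for ann in annotations for e in ann)
    return {e: counts[e] / len(annotations) for e in emotions}

def brier_score(pred: dict[str, float], gold: dict[str, float]) -> float:
    # Mean squared difference between predicted and annotator probabilities;
    # lower is better, and partial agreement is rewarded rather than punished.
    return sum((pred[e] - gold[e]) ** 2 for e in gold) / len(gold)

EMOTIONS = ["anger", "joy", "sadness", "fear"]
annotators = [["anger"], ["anger", "sadness"], ["sadness"]]  # hypothetical
gold = soft_labels(annotators, EMOTIONS)  # e.g. anger: 0.67, sadness: 0.67
pred = {"anger": 0.8, "joy": 0.1, "sadness": 0.5, "fear": 0.0}
print(round(brier_score(pred, gold), 3))  # 0.014

Unlike accuracy against a single majority label, this kind of score does not penalise a model for reflecting genuine annotator disagreement.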
