Estimating the transferability of state-of-the-art models in predicting moral values
A.E. Dondera (TU Delft - Electrical Engineering, Mathematics and Computer Science)
E. Liscio – Mentor (TU Delft - Interactive Intelligence)
P.K. Murukannaiah – Mentor (TU Delft - Interactive Intelligence)
R. Guerra Marroquim – Graduation committee member (TU Delft - Computer Graphics and Visualisation)
Abstract
Moral values play a crucial role in our decision-making by defining what is right and wrong. The rise of political activism and moral discourse on social media, together with recent advances in Natural Language Processing (NLP), offers an opportunity to analyze moral values and observe trends as they form. Recent studies have extensively examined the performance of different NLP models for estimating moral values from text, but none has addressed the problem of transfer learning, that is, how well a model trained on one domain performs on another. Our study provides a comprehensive look at the cross-domain performance of three state-of-the-art models. We find that BERT, currently the most widely used model in NLP, offers the best results. For reproducibility, we publicly release our code on GitHub.
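The released code on GitHub contains the actual experiments; purely as an illustration of the cross-domain setup described above, the minimal sketch below fine-tunes a BERT classifier on moral-value labels from one domain and measures its accuracy on an unseen domain. The model checkpoint, label count, and the tiny placeholder corpora are assumptions for illustration, not the thesis data or implementation.

# Illustrative sketch (not the thesis code): fine-tune BERT on one domain of
# moral-value-labelled text and evaluate on a held-out domain.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"   # assumed checkpoint
NUM_MORAL_VALUES = 10              # assumed size of the moral-value label set

# Placeholder data standing in for two domain-specific corpora (e.g. tweets
# on two different topics); the real datasets are not reproduced here.
source_texts = ["Everyone deserves equal treatment.", "They betrayed our trust."]
source_labels = [0, 1]
target_texts = ["The policy harms vulnerable people.", "Loyalty comes first."]
target_labels = [0, 1]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_MORAL_VALUES
)

def encode(texts, labels):
    # Tokenize texts and pair them with integer moral-value labels.
    enc = tokenizer(texts, padding=True, truncation=True, max_length=128,
                    return_tensors="pt")
    return TensorDataset(enc["input_ids"], enc["attention_mask"],
                         torch.tensor(labels))

train_loader = DataLoader(encode(source_texts, source_labels),
                          batch_size=16, shuffle=True)
test_loader = DataLoader(encode(target_texts, target_labels), batch_size=16)

# Fine-tune on the source domain only.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for input_ids, attention_mask, labels in train_loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask,
                    labels=labels)
        out.loss.backward()
        optimizer.step()

# Cross-domain evaluation: accuracy on the domain the model never saw.
model.eval()
correct = total = 0
with torch.no_grad():
    for input_ids, attention_mask, labels in test_loader:
        preds = model(input_ids=input_ids,
                      attention_mask=attention_mask).logits.argmax(dim=-1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print(f"Cross-domain accuracy: {correct / total:.3f}")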