Estimating the transferability of state-of-the-art models in predicting moral values

Abstract

Moral values play a crucial role in our decision-making by defining what is right and wrong. With the rise of political activism and moral discourse on social media, and with recent developments in Natural Language Processing, we have an opportunity to analyze moral values and observe trends as they form. Recent studies have extensively examined the performance of different NLP models for estimating moral values from text, but none has addressed the problem of transfer learning. Our study provides a comprehensive look at the cross-domain performance of three state-of-the-art models. We find that BERT, currently the most widely used model in Natural Language Processing, offers the best results. For reproducibility, we publicly release our code on GitHub.