Why emotions and works of art are pertinent to the assessment of the ethical risks of AI

Journal Article (2025)
Author(s)

Maria Danielsen (University of Tromsø)

S. Roeser (TU Delft - Ethics & Philosophy of Technology)

Research Group
Ethics & Philosophy of Technology
DOI
https://doi.org/10.1007/s10676-025-09849-y
Publication Year
2025
Language
English
Issue number
3
Volume number
27
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

AI systems and tools are being implemented in society at an increasingly rapid rate for a variety of purposes, such as decision-making, managing job applications, and socializing. These new technologies hold great promise but may also introduce new risks by threatening human moral and relational values, as well as values connected to flourishing. Mainstream approaches to risk assessment do not pay sufficient attention to these values. The study of emotions as they are connected to human values can therefore play an important role in risk management. We contribute to this discussion by introducing the concept of human needs, which we consider to be the sources of the values that constitute emotions. This brings a new perspective to the debate around AI and risk. By combining insights from Martha Nussbaum and Soran Reader, we argue that while emotions are crucial for highlighting which values are activated in a particular situation, the sources of an important part of human values are human needs. This yields what we call the ‘needs-values-emotions nexus’. We argue that this framework can contribute to the discussion about the ethical risks of AI in two fundamental ways. First, highlighting the crucial role of needs helps to explain why AI systems cannot develop, feel, or reason according to human values: on the most basic level, AI systems lack a constitutive part of these values, namely needs. The deployment of AI, for example to replace human decision-making, may therefore threaten human values. We discuss this by zooming in on a recent example, the so-called Dutch tax benefit scandal. Second, we argue that emotions are needed to concretize and deliberate on which values are at risk when developing and using AI technology. Building further on the ‘needs-values-emotions nexus’, we argue that art is a preeminent medium for eliciting emotions and ethical reflection on the risks of AI. Discussing a concrete example, we illustrate how contemporary artists can contribute to ethical risk assessments by focusing on the societal impact of AI.