Helpful, harmless, honest?

Sociotechnical limits of AI alignment and safety through Reinforcement Learning from Human Feedback

Journal Article (2025)
Author(s)

Adam Dahlgren Lindström (Umeå University)

Leila Methnani (Umeå University)

Lea Krause (Vrije Universiteit Amsterdam)

Petter Ericson (Umeå University)

Inigo De Troya (TU Delft - Information and Communication Technology)

Dimitri Coelho Mollo (Umeå University)

R.I.J. Dobbe (TU Delft - Information and Communication Technology)

Research Group
Information and Communication Technology
DOI
https://doi.org/10.1007/s10676-025-09837-2
Publication Year
2025
Language
English
Issue number
2
Volume number
27
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This paper critically evaluates attempts to align Artificial Intelligence (AI) systems, especially Large Language Models (LLMs), with human values and intentions through Reinforcement Learning from Feedback methods, involving either human feedback (RLHF) or AI feedback (RLAIF). Specifically, we show the shortcomings of the broadly pursued alignment goals of honesty, harmlessness, and helpfulness. Through a multidisciplinary sociotechnical critique, we examine both the theoretical underpinnings and practical implementations of RLHF techniques, revealing significant limitations in how they capture the complexities of human ethics and contribute to AI safety. We highlight tensions inherent in the goals of RLHF, as captured in the HHH principle (helpful, harmless, and honest). In addition, we discuss ethically relevant issues that tend to be neglected in discussions about alignment and RLHF, among which are the trade-offs between user-friendliness and deception, flexibility and interpretability, and system safety. We offer an alternative vision for AI safety and ethics that positions RLHF approaches within a broader context of comprehensive design across institutions, processes, and technological systems, and we suggest establishing AI safety as a sociotechnical discipline open to the normative and political dimensions of artificial intelligence.