The Ethics and Epistemology of Clinician-AI Disagreement in Medicine

Beyond Opposition

Journal Article (2026)
Author(s)

Giorgia Pozzi (TU Delft - Ethics & Philosophy of Technology)

Martin Sand (TU Delft - Ethics & Philosophy of Technology)

Karin Jongsma (University Medical Center Utrecht)

Research Group
Ethics & Philosophy of Technology
DOI
https://doi.org/10.1080/15265161.2026.2632008
Publication Year
2026
Language
English
Journal title
American Journal of Bioethics
Article number
2632008
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The integration of AI systems in medical care magnifies questions about how physicians should work with such systems to ensure the best patient outcomes. A particularly thorny issue concerns situations of possible disagreement between an AI system’s recommendation and the course of medical action envisaged by a human clinician. The current academic debate has so far suggested three ways of dealing with such clinician-AI disagreements: first, considering when clinicians are justified in deferring to the AI output (what we call the deference approach); second, allowing the human user to overrule the AI system’s output in cases of disagreement (the overruling approach); and lastly, deeming a second human opinion necessary to resolve disagreements (the second opinion approach). In this paper, we spell out the shortcomings of these three approaches to clinician-AI disagreement and offer a more nuanced perspective on such disagreements. We argue that differentiating between types of disagreements, taking into account the role attributed to AI in medical practice, is essential before determining how clinician-AI disagreements should be dealt with. Drawing on a case that exemplifies how multifaceted medical decision-making is, we point out the normative implications of the clinician-AI disagreements that may ensue from it. We highlight the distinctive uncertainties inherent in medical decision-making, showing that disagreements in these contexts are not merely unavoidable but can even be epistemically valuable. Ultimately, by considering the epistemic positions of clinicians and AI systems, our analysis raises important questions for the epistemology of disagreement that need timely attention.