Responsibility beyond design: Physicians’ requirements for ethical medical AI

Journal Article (2021)
Author(s)

M. Sand (TU Delft - Ethics & Philosophy of Technology)

Juan M. Duran (TU Delft - Ethics & Philosophy of Technology)

Karin R. Jongsma (University Medical Center Utrecht)

Research Group
Ethics & Philosophy of Technology
Copyright
© 2021 M. Sand, J.M. Duran, Karin Rolanda Jongsma
DOI related publication
https://doi.org/10.1111/bioe.12887
Publication Year
2021
Language
English
Issue number
2
Volume number
36
Pages (from-to)
162-169
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Medical AI is increasingly being developed and tested to improve medical diagnosis, prediction, and treatment of a wide array of medical conditions. Despite worries about the explainability and accuracy of such medical AI systems, it is reasonable to assume that they will be increasingly implemented in medical practice. Current ethical debates focus mainly on design requirements and suggest embedding values such as transparency, fairness, and explainability in the design of medical AI systems. Aside from concerns about their design, medical AI systems also raise questions about physicians' responsibilities once these technologies are implemented and used. How do physicians' responsibilities change with the implementation of medical AI? Which competencies do physicians have to learn to interact responsibly with medical AI? In the present article, we introduce the notion of forward-looking responsibility and, through this conceptual lens, enumerate a number of competencies and duties that physicians ought to employ to utilize medical AI responsibly in practice. These include, among others, understanding the range of reasonable outputs, being aware of one's own experience and skill decline, and monitoring potential accuracy decline of the AI systems.