Trust in Clinical AI

Expanding the Unit of Analysis

Conference Paper (2022)
Author(s)

Jacob T. Browne (TU Delft - DesIgning Value in Ecosystems, Philips Research)

Saskia Bakker (Philips Research)

Bin Yu (Philips Research)

P.A. Lloyd (TU Delft - DesIgning Value in Ecosystems)

Somaya Ben Allouch (Hogeschool van Amsterdam, Universiteit van Amsterdam)

Research Group
DesIgning Value in Ecosystems
Copyright
© 2022 J.T. Browne, Saskia Bakker, Bin Yu, P.A. Lloyd, Somaya Ben Allouch
DOI related publication
https://doi.org/10.3233/FAIA220192
Publication Year
2022
Language
English
Pages (from-to)
96-113
ISBN (electronic)
9781643683089
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

From diagnosis to patient scheduling, AI is increasingly being considered across different clinical applications. Yet despite increasingly powerful clinical AI, uptake into actual clinical workflows remains limited. One of the major challenges is developing appropriate trust with clinicians. In this paper, we investigate trust in clinical AI from a wider perspective, beyond user interactions with the AI. We identify several points in the development, usage, and monitoring of clinical AI that can significantly affect trust. We argue that calibrating trust in AI should go beyond explainable AI and address the entire process of clinical AI deployment. We illustrate our argument with case studies from practitioners implementing clinical AI in practice, showing how trust can be affected by different stages of the deployment cycle.