Trust in Clinical AI
Expanding the Unit of Analysis
Jacob T. Browne (TU Delft - Designing Value in Ecosystems, Philips Research)
Saskia Bakker (Philips Research)
Bin Yu (Philips Research)
P.A. Lloyd (TU Delft - Designing Value in Ecosystems)
Somaya Ben Allouch (Hogeschool van Amsterdam, Universiteit van Amsterdam)
Abstract
From diagnosis to patient scheduling, AI is increasingly being considered across different clinical applications. Yet despite increasingly powerful clinical AI, uptake into actual clinical workflows remains limited. One of the major challenges is developing appropriate trust with clinicians. In this paper, we examine trust in clinical AI from a wider perspective, beyond user interactions with the AI. We identify several points in the development, usage, and monitoring of clinical AI that can have a significant impact on trust. We argue that calibrating trust in AI should go beyond explainable AI and address the entire process of clinical AI deployment. We illustrate this argument with case studies from practitioners implementing clinical AI in practice, showing how trust can be affected at different stages of the deployment cycle.