Human–AI Relationship in Healthcare

Book Chapter (2023)
Author(s)

Manoj Joshi (Philips Healthcare Nederland)

Nicola Pezzotti (Eindhoven University of Technology, TU Delft - Computer Graphics and Visualisation)

J.T. Browne (TU Delft - Designing Value in Ecosystems)

Research Group
Designing Value in Ecosystems
DOI
https://doi.org/10.1201/9781003333425-1
Publication Year
2023
Language
English
Pages (from-to)
1-22
ISBN (print)
9781032367118
ISBN (electronic)
9781000906394

Abstract

In the age of machine learning, deep learning and artificial intelligence (AI) are expected to improve our lives. Particularly in medicine and medical imaging, AI can make sense of tens, if not hundreds, of parameters and find patterns and correlations that are difficult for humans to process. AI is therefore expected to assist doctors in improving patient care and reducing their workload. Yet despite many papers showing that AI algorithms can match or outperform humans in various domains of medicine, few have been adopted into clinical practice (Kelly et al., 2019). One of the major challenges is trust in and acceptance of AI results, and these issues are complex: confidence, trust, and uncertainty all influence how humans make decisions with AI. Deep learning algorithms in particular are a “black box” to users and even to their creators, which makes adoption difficult. Should humans trust AI? Do humans trust AI too much? This chapter explores the human–AI relationship. It starts with a discussion of trust and human interaction. The expert–apprentice model is then described to inform how AI could interact with clinicians. Finally, recent technological developments and experience-design aspects are detailed, leading to an outline of recommendations for designing explainable AI (XAI).

Metadata-only record. There are no files for this record.