Human–AI Relationship in Healthcare

Abstract

In the age of machine learning, deep learning and artificial intelligence (AI) are expected to improve our lives. In medicine and medical imaging in particular, AI can make sense of tens, if not hundreds, of parameters and find patterns and correlations that are difficult for humans to process. AI is expected to assist doctors in improving patient care and reducing their workload. Yet despite many papers showing that AI algorithms can match or outperform humans in different domains of medicine, few have been adopted into practice (Kelly et al., 2019). One of the major challenges is trust in, and acceptance of, AI results; these are complex and important issues. Confidence, trust, and uncertainty all influence how humans make decisions with AI. AI, and deep learning algorithms in particular, is a “black box” to users and even to the creators of these algorithms, which makes adoption difficult. Should humans trust AI? Do humans trust AI too much? This chapter explores the human–AI relationship. It begins with a discussion of trust and human interaction. The expert–apprentice model is then described to inform how AI could interact with clinicians. Finally, recent technological developments and experience design considerations are detailed, leading to a set of recommendations for designing explainable AI (XAI).