Machine Learning-Induced Epistemic Injustice in Medicine and Healthcare

Abstract

AI-based technologies, such as machine learning (ML) systems, are advancing rapidly toward implementation in healthcare. Since these systems support healthcare professionals in crucial medical practices, their role in medical decision-making needs to be assessed both epistemologically and ethically. However, a central issue at the intersection of the ethics and epistemology of ML has been largely neglected: the careful scrutiny of how ML systems can degrade individuals’ epistemic standing as receivers and conveyors of knowledge and thereby perpetrate epistemic injustice. Since ML systems are powerful epistemic entities that are not easily contestable, and their decision-making rationale is often inaccessible, it is crucial to consider their role in creating epistemic imbalances to patients’ disfavor and the ways to mitigate such imbalances. This is especially important in interactions between patients and physicians, in which questions of credibility, trust, and understanding are central. Against this background, the overarching purpose of this dissertation is to fill this research gap by providing a framework to identify and, where possible, mitigate ML-induced epistemic injustices, i.e., those that emerge specifically due to the role that ML systems play in patient-physician interactions.