Why we should talk about institutional (dis)trustworthiness and medical machine learning
Michiel De Proost (Universiteit Gent)
Giorgia Pozzi (TU Delft - Ethics & Philosophy of Technology)
Abstract
Trust is often presented as the central attitude for engaging with clinical machine learning systems. However, the notions of trust and distrust remain fiercely debated in the philosophical and ethical literature. In this article, we proceed on a structural level ex negativo: we analyse the concept of “institutional distrustworthiness” to achieve a proper diagnosis of how we should not engage with medical machine learning. First, we present several examples that point to an emerging climate of distrust in the context of medical machine learning. Second, we introduce the concept of institutional trustworthiness, building on an expansion of Hawley’s commitment account. Third, we argue that institutional opacity can undermine the trustworthiness of medical institutions and lead to new forms of testimonial injustice. Finally, we outline possible building blocks for repairing institutional distrustworthiness.