Will Algorithms Blind People? The Effect of Explainable AI and Decision-Makers’ Experience on AI-supported Decision-Making in Government

Journal Article (2020)
Author(s)

Marijn Janssen (TU Delft - Information and Communication Technology)

Martijn Hartog (TU Delft - Information and Communication Technology)

R. Matheus (TU Delft - Information and Communication Technology)

Aaron Ding (TU Delft - Information and Communication Technology)

George Kuk (Nottingham Trent University)

Research Group
Information and Communication Technology
Copyright
© 2020 M.F.W.H.A. Janssen, M.W. Hartog, R. Matheus, Aaron Yi Ding, George Kuk
DOI related publication
https://doi.org/10.1177/0894439320980118
Publication Year
2020
Language
English
Issue number
2
Volume number
40
Pages (from-to)
478-493
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Computational artificial intelligence (AI) algorithms are increasingly used to support decision-making by governments. Yet algorithms often remain opaque to decision makers and devoid of clear explanations for the decisions made. In this study, we used an experimental approach to compare decision-making in three situations: humans making decisions (1) without any algorithmic support, (2) supported by business rules (BR), and (3) supported by machine learning (ML). Participants were asked to make the correct decision in various scenarios, while the BR and ML algorithms could provide either correct or incorrect suggestions. This enabled us to evaluate whether participants were able to recognize the limitations of BR and ML. The experiment shows that algorithms help decision makers make more correct decisions, and the findings suggest that explainable AI combined with experience helps them detect incorrect suggestions made by algorithms. However, even experienced decision makers were unable to identify all mistakes; ensuring that decisions can be understood and traced back is not sufficient to avoid incorrect decisions. The findings imply that algorithms should be adopted with care, and that selecting appropriate algorithms for decision support and training decision makers are key factors in increasing accountability and transparency.
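To make the contrast between the two forms of algorithmic support concrete, the minimal sketch below illustrates the distinction the abstract draws. It is not part of the published study: the eligibility scenario, the thresholds, and the MLModel stub are hypothetical, invented purely for illustration. The point is that a business rule can return a suggestion together with a human-readable trace of why it fired, while a machine-learning model returns only a prediction whose reasoning the decision maker cannot inspect.

# Illustrative sketch only; scenario, thresholds, and MLModel are hypothetical.

from dataclasses import dataclass

@dataclass
class Case:
    income: float        # applicant's yearly income (hypothetical feature)
    dependents: int      # number of dependents (hypothetical feature)

def br_suggestion(case: Case) -> tuple[bool, str]:
    """Business rule (BR): a suggestion plus a traceable explanation."""
    if case.income < 30_000 and case.dependents >= 2:
        return True, "Rule fired: income < 30,000 AND dependents >= 2"
    return False, "No eligibility rule fired"

class MLModel:
    """Stand-in for a trained classifier; its internals are opaque."""
    def predict(self, case: Case) -> bool:
        # A real model would compute this from learned weights; the
        # decision maker sees only the output, not the reasoning.
        score = 0.6 * (case.income < 30_000) + 0.4 * (case.dependents / 5)
        return score > 0.5

case = Case(income=28_000, dependents=3)
decision, trace = br_suggestion(case)
print(f"BR suggests {decision}: {trace}")        # suggestion with a trace
print(f"ML suggests {MLModel().predict(case)}")  # suggestion without one

This asymmetry is what the experiment probes: the BR trace gives a decision maker something concrete to check against the scenario, whereas detecting an incorrect ML suggestion has to rely largely on the decision maker's experience.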