Spotting When Algorithms Are Wrong

Journal Article (2022)
Author(s)

S.N.R. Buijsman (TU Delft - Ethics & Philosophy of Technology)

Herman Veluwenkamp (TU Delft - Ethics & Philosophy of Technology)

Research Group
Ethics & Philosophy of Technology
Copyright
© 2022 S.N.R. Buijsman, H.M. Veluwenkamp
DOI
https://doi.org/10.1007/s11023-022-09591-0
Publication Year
2022
Language
English
Issue number
4
Volume number
33
Pages (from-to)
541-562
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Users of sociotechnical systems often have no way to independently verify whether the system output they use to make decisions is correct; they are epistemically dependent on the system. We argue that this leads to problems when the system is wrong, namely to bad decisions and violations of the norm of practical reasoning. To prevent this from occurring, we suggest the implementation of defeaters: information that a system is unreliable in a specific case (undercutting defeat) or independent information that the output is wrong (rebutting defeat). Practically, we suggest designing defeaters based on the different ways in which a system might produce erroneous outputs, and we analyse this suggestion with a case study of the risk classification algorithm used by the Dutch tax agency.
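The abstract's proposal lends itself to a small illustration. The following is a minimal, hypothetical Python sketch of how a decision-support system could surface both kinds of defeaters alongside a risk classifier's output; every name here (classify_with_defeaters, toy_risk_model, the age-range check) is an illustrative assumption, not the authors' implementation or the Dutch tax agency's system.

# Hypothetical sketch: surface undercutting and rebutting defeaters
# alongside a risk classifier's output. All names are illustrative only.

from dataclasses import dataclass, field


@dataclass
class Defeater:
    kind: str     # "undercutting" or "rebutting"
    message: str  # case-specific explanation shown to the decision maker


@dataclass
class Decision:
    risk_score: float
    defeaters: list[Defeater] = field(default_factory=list)


def toy_risk_model(case: dict) -> float:
    # Stand-in for the real classifier; maps flag counts to a score in [0, 1].
    return min(1.0, case.get("num_flags", 0) / 10)


def classify_with_defeaters(case: dict) -> Decision:
    decision = Decision(risk_score=toy_risk_model(case))

    # Undercutting defeater: the case falls outside the region the model
    # was validated on, so the score itself may be unreliable here.
    if not 18 <= case["age"] <= 90:
        decision.defeaters.append(Defeater(
            kind="undercutting",
            message="Age outside the model's validated range; "
                    "the score may be unreliable for this case.",
        ))

    # Rebutting defeater: an independent source contradicts the output,
    # e.g. a completed manual review already cleared this case.
    if case.get("manually_cleared") and decision.risk_score > 0.5:
        decision.defeaters.append(Defeater(
            kind="rebutting",
            message="Manual review cleared this case, "
                    "contradicting the high risk score.",
        ))

    return decision


if __name__ == "__main__":
    result = classify_with_defeaters(
        {"age": 16, "num_flags": 8, "manually_cleared": True})
    print(f"risk score: {result.risk_score:.1f}")
    for d in result.defeaters:
        print(f"[{d.kind}] {d.message}")

Run on a case that is both outside the validated age range and already cleared in manual review, the sketch prints an undercutting and a rebutting defeater next to the score, giving the decision maker the case-specific grounds for doubt that the paper argues epistemically dependent users need.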
