The risks of autonomous machines: from responsibility gaps to control gaps

Journal Article (2023)
Author(s)

Frank Hindriks (Rijksuniversiteit Groningen)

Herman Veluwenkamp (TU Delft - Ethics & Philosophy of Technology)

Research Group
Ethics & Philosophy of Technology
Copyright
© 2023 Frank Hindriks, H.M. Veluwenkamp
DOI
https://doi.org/10.1007/s11229-022-04001-5
Publication Year
2023
Language
English
Issue number
1
Volume number
201
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Responsibility gaps concern the attribution of blame for harms caused by autonomous machines. The worry has been that, because such machines are artificial agents, it is impossible to attribute blame, even though doing so would be appropriate given the harms they cause. We argue that there are no responsibility gaps. The harms can be blameless. And if they are not, the blame that is appropriate is indirect and can be attributed to designers, engineers, software developers, manufacturers or regulators. The real problem lies elsewhere: autonomous machines should be built so as to exhibit a level of risk that is morally acceptable. If they fall short of this standard, they exhibit what we call 'a control gap.' The causal control that autonomous machines have will then fall short of the guidance control they should emulate.