Designing for Responsibility

Abstract

Governments are increasingly using sophisticated self-learning algorithms to automate and standardize decision-making on a large scale. However, despite aspirations of predictive data and more efficient decision-making, the introduction of artificial intelligence (AI) also gives rise to risks and creates a potential for harm. The attribution of responsibility to individuals for the harm caused by these novel socio-technical decision-making systems is epistemically and normatively challenging. The conditions necessary for individuals to be adequately held responsible (moral agency, freedom, control, and knowledge) can be undermined by the introduction of algorithmic decision-making. This creates responsibility gaps in which seemingly no one is sufficiently responsible for the system's outcome. We turn this challenge of adequately attributing responsibility into a design challenge: designing for these responsibility conditions. Drawing on the philosophical literature on responsibility, we develop a conceptual framework to scrutinize the task responsibilities of the actors involved in the (re-)design and application of algorithmic decision-making systems. We apply this framework to an empirical case study involving AI in automated governmental decision-making. We find that the framework enables a critical assessment of a socio-technical system's design for responsibility and provides valuable insights for preventing future harm. The article addresses the current academic and empirical lack of philosophical insight into understanding and designing for responsibilities in novel algorithmic ICT systems.