Meaningful human control: actionable properties for AI system development

Authors: Cavalcante Siebert, L. (TU Delft Interactive Intelligence); Lupetti, M.L. (TU Delft Design Aesthetics); Aizenberg, E. (TU Delft Cyber Security); Beckers, N.W.M. (TU Delft Human-Robot Interaction); Zgonnikov, A. (TU Delft Human-Robot Interaction); Veluwenkamp, H.M. (TU Delft Ethics & Philosophy of Technology); Abbink, D.A. (TU Delft Human-Robot Interaction); Giaccardi, Elisa (TU Delft Human Information Communication Design); Houben, G.J.P.M. (TU Delft Web Information Systems); Jonker, C.M. (TU Delft Interactive Intelligence); van den Hoven, M.J. (TU Delft Ethics & Philosophy of Technology); Forster, D. (TU Delft Human-Robot Interaction); Lagendijk, R.L. (TU Delft Cyber Security)

Date: 2022

Abstract: How can humans remain in control of artificial intelligence (AI)-based systems designed to perform tasks autonomously? Such systems are increasingly ubiquitous, creating benefits but also undesirable situations in which moral responsibility for their actions cannot be properly attributed to any particular person or group. The concept of meaningful human control has been proposed to address such responsibility gaps by establishing conditions that enable a proper attribution of responsibility to humans; however, clear requirements for researchers, designers, and engineers do not yet exist, making it challenging to develop AI-based systems that remain under meaningful human control. In this paper, we address the gap between philosophical theory and engineering practice by identifying, through an iterative process of abductive thinking, four actionable properties for AI-based systems under meaningful human control, which we discuss using two application scenarios: automated vehicles and AI-based hiring.
First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, the responsibility attributed to a human should be commensurate with that human's ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and the actions of humans who are aware of their moral responsibility. We argue that these four properties will support practically minded professionals in taking concrete steps toward designing and engineering AI systems that facilitate meaningful human control.

Subjects: Artificial intelligence; AI ethics; Meaningful human control; Moral responsibility; Socio-technical systems

To reference this document use: http://resolver.tudelft.nl/uuid:0bf0252b-f49d-408a-b908-8903be8e86bc

DOI: https://doi.org/10.1007/s43681-022-00167-3

ISSN: 2730-5961

Source: AI and Ethics

Part of collection: Institutional Repository

Document type: journal article

Rights: © 2022 L. Cavalcante Siebert, M.L. Lupetti, E. Aizenberg, N.W.M. Beckers, A. Zgonnikov, H.M. Veluwenkamp, D.A. Abbink, Elisa Giaccardi, G.J.P.M. Houben, C.M. Jonker, M.J. van den Hoven, D. Forster, R.L. Lagendijk