Accountable AI
It Takes Two to Tango
J.E. Constantino (TU Delft - Organisation & Governance)
Abstract
This chapter argues that accountable artificial intelligence (AI) requires examining the role of humans in AI development and deployment. It therefore discusses the obligations of both deployers and developers of AI systems as a prerequisite for accountable AI. The EU AI Act pursues such accountability through measures such as transparency and technical obligations. It likewise imposes human oversight requirements on high-risk AI systems, set out in Arts. 14 and 26. Some scholars and practitioners may argue that Art. 14 applies only to developers of AI systems. We contend, however, that human oversight requirements govern both actors: human oversight cannot be applied in isolation by requiring compliance from only one party. Otherwise, it would defeat the purpose of adding human control features to prevent AI systems from harming fundamental rights. On this basis, we propose that (at least) two actors are required to make accountable AI tangible. Nonetheless, we are conscious that this legislation is in its infancy, and only time will tell whether the human oversight obligations of Arts. 14 and 26 will be applied in isolation or in conjunction.