Elucidating a ‘black-box’ transcends explaining the algorithm

Exploring Explainable AI (XAI) as a way to address AI implementation challenges in the Dutch public sector

Abstract

Responding to the trend of increasing use of artificial intelligence (AI), we need to ensure that applications of AI are designed, implemented, utilised and evaluated in a careful manner. Explainable AI (XAI), understood as providing a given audience with the details and reasons behind both the technical processes of an algorithm-supported system and the reasoning underlying it, so that its functioning becomes clear or easy to understand, is one way to responsibly design and implement AI systems. This research looks into AI-supported public decision-making processes in the Netherlands and the role and possible contribution of XAI in such a context. To this end, I conducted a mixed-method qualitative study: I interviewed sixteen respondents from three key actor groups within two Dutch national public sector executive bodies, and additionally performed three observations and a document analysis. Differentiating between the phases of an AI system's implementation life-cycle, the study unveils how the respective actors (managers, data scientists, and domain experts/(potential) AI users) encounter various challenges in bringing an AI system from idea to production. The empirical findings show that many AI systems, whilst technically developed, are not deployed or adopted by the wider organisation. The study discerns the challenges hindering the AI implementation process from an organisational, human and technical point of view. Moreover, the study highlights the need to approach XAI from a multi-purpose, multi-actor perspective: acknowledging that various actors need different kinds of explanations, but also bridging their respective professional worldviews so that they can understand one another. XAI is often seen as a one-size-fits-all solution for various implementation challenges; however, the study shows that certain challenges need to be addressed beyond traditional XAI approaches rooted in computer science, and perhaps beyond XAI altogether. As such, the insights of this thesis contribute to a more realistic picture of the opportunities and limitations of XAI within real-world AI implementation processes in the public sector.