The perils and pitfalls of explainable AI

Strategies for explaining algorithmic decision-making


Abstract

Governments look to explainable artificial intelligence (XAI) for its potential to address criticisms of the opaqueness of algorithmic decision-making with AI. Although XAI is appealing as a solution for automated decisions, the wicked nature of the challenges governments face complicates its use. Wickedness means that the facts defining a problem are ambiguous and that there is no consensus on the normative criteria for solving it. In such a situation, the use of algorithms can result in distrust. Whereas much research advances XAI technology, the focus of this paper is on strategies for explainability. Three illustrative cases are used to show that explainable, data-driven decisions are often not perceived as objective by the public. The context can create strong incentives to contest and distrust the explanation of AI, and as a consequence, fierce resistance from society is encountered. To overcome the inherent problems of XAI, decision-specific strategies are proposed that can lead to societal acceptance of AI-based decisions. We suggest strategies to embrace explainable decisions and processes, co-create decisions with societal actors, move from an instrumental to an institutional approach, use competing and value-sensitive algorithms, and mobilize the tacit knowledge of professionals.