Explainable AI for All
A Roadmap for Inclusive XAI for people with Cognitive Disabilities
ML Tielman (TU Delft - Interactive Intelligence)
Mari Carmen Suárez-Figueroa (Universidad Politécnica de Madrid)
Arne Jönsson (Linköping University)
Mark Neerincx (TNO, TU Delft - Interactive Intelligence)
L. Siebert (TU Delft - Interactive Intelligence)
Abstract
Artificial intelligence (AI) is increasingly prevalent in our daily lives, which sets specific requirements for responsible development and deployment: AI should be both explainable and inclusive. Despite substantial research and development investment in explainable AI, little effort has gone into making AI explainable to people with cognitive disabilities. In this paper, we present the first steps towards this research topic. We argue that three main questions guide this research: 1) How explainable should a system be?; 2) What level of understanding can the user reach, and what type of explanation best helps them reach this level?; and 3) How can we implement an AI system that can generate the necessary explanations? We present the current state of the art on these three topics, the open questions, and the next steps. Finally, we discuss the challenges specific to bringing these three research topics together, in order to eventually answer the question of how to make AI systems explainable also to people with cognitive disabilities.