How interpretable is explainable?

The development of a framework to assess how interpretable Explainable Artificial Intelligence is for laypeople

Abstract

Explainable AI (XAI) systems are rapidly gaining significance. While frameworks for assessing XAI interpretability for experts abound, metrics for laypeople's comprehension are absent. This study addresses that gap by investigating interpretability factors from both developer and layperson perspectives. The core research question is: "How can XAI developers assess to what extent XAI is interpretable for laypeople?"

Applying the Design Science Research Methodology, findings from multiple literature reviews are combined to construct a preliminary XAI interpretability framework for laypeople, featuring crucial factors, their relationships, and associated principles. The proposed framework was validated through semi-structured interviews with 12 XAI experts, informing revisions and refinement of our key principles. Subsequent layperson surveys, based on a specific use case, offered insights into preferences regarding interpretability factors, informing further refinement.

The final theoretical framework highlights pivotal factors including simplicity, transparency, comprehensiveness, complexity, clarity, generalizability, trustworthiness, explanation fidelity, model fidelity, intentionality, relevance, affordance, coherence with prior beliefs, and actionability. Surrounding the framework are key principles emphasizing trustworthiness, relevance, simplicity, clarity, coherence, intentionality, actionability, fidelity, contextualization, and ethical considerations, serving as actionable guidelines for XAI developers and researchers.

The study offers valuable insights for advancing XAI research and system design. The refined framework and principles provide a foundation for both novice and experienced XAI developers, fostering interdisciplinary research among experts in AI, human-computer interaction, psychology, and philosophy. These findings can support the responsible adoption of AI systems across sectors such as healthcare, finance, and transportation, while informing policies and regulations governing AI technologies. Our study promotes responsible AI practices, enhancing user trust and understanding and facilitating the creation of more effective guidelines and standards.
