Interactive Model Explanations for Greater Intelligibility


Abstract

As AI is progressively incorporated into many spheres of society, its rapid growth has brought challenges such as discriminatory or biased outcomes and a lack of accountability. To address these challenges, there is growing interest in Human-AI teams, where AI-assisted decision-making keeps humans in the loop. This approach has been widely explored to address issues of transparency, reliability, and trustworthiness.

However, the essential premise of Human-AI teams in critical applications is that humans must comprehend the reasoning behind an AI system's decisions. Because of the opaqueness of AI systems, this has proven very challenging. The field of explainable AI (XAI) is promoted as the link that permits human comprehension of AI systems, and a wide range of machine learning explainability techniques have been created. However, standalone explanation techniques have had limited success in ensuring a coherent understanding of AI systems, primarily because of insufficient interactivity, the absence of actionable human feedback, restriction to specific pieces of information, and a lack of personalization. Other approaches, such as XAI dashboards that present users with multiple standalone explanations, have been found to cause information overload, and recent studies suggest that such overload can lead to suboptimal reliance on, and understanding of, AI systems. Further studies point out that XAI dashboards, because of their limited interactivity, exchange information mostly unidirectionally, which hinders active user exploration and may result in an incoherent understanding of the AI system.

Delivering explanations through conversations (conversational XAI) is a potential way to address this research gap. Recent studies have shown that an interactive exchange of information may promote better understanding and uncertainty awareness of AI systems. Additionally, the ability to selectively answer user-specific queries may help users build a better mental model of the AI system and hence improve appropriate trust and reliance. Finally, personalized conversation may also increase perceived trust and address users' information needs about AI systems.

In this research work, we performed an empirical study (𝑁 = 245) to evaluate the impact of conversational XAI on understanding of, trust in, and reliance on an AI system. The conversational XAI interface was built with a rule-based approach. To understand how its impact compares with a widely adopted alternative, the XAI dashboard, we compared understanding, trust, and reliance in a between-subjects setup. We also studied the additional effects of user-specific personalization of conversational XAI.

Overall, we found that participants with explainer interfaces showed increased trust and reliance compared to the control condition (i.e., no XAI). However, such increased reliance is not necessarily appropriate reliance: the experimental results showed clear over-reliance on the AI system among participants with XAI. Additionally, no significant difference in user understanding, trust, or reliance was observed between the XAI dashboard and the conversational XAI interface. Our results and findings may provide useful guidelines for future work on conversational XAI interfaces and XAI-assisted decision-making.

Files

Thesis_Nilay_Aishwarya.pdf
(.pdf | 4.81 Mb)
- Embargo expired on 31-12-2023