Understanding the Role of Explanation Modality in AI-Assisted Decision-Making

Advances in artificial intelligence and machine learning have led to a steep rise in the adoption of AI to augment or support human decision-making across domains.
A growing body of work has addressed the benefits of model interpretability and explanations in helping end-users and other stakeholders decipher the inner workings of so-called "black box" AI systems.
Yet, little is currently understood about the role of modalities through which explanations can be communicated (e.g. text, visualizations, or audio) to inform, augment, and shape human decision-making.
This thesis addresses this research gap through the lens of a credibility assessment system.
Considering the deluge of information available through various channels, people constantly make decisions while considering the perceived credibility of the information they consume.
However, given increasing information overload, assessing the credibility of the information we encounter is a non-trivial task.
To help users in this task, automated credibility assessment systems have been devised as decision support systems in various contexts (e.g. assessing the credibility of news or social media posts).
However, for these systems to be effective in supporting users, they need to be trusted and understood.
Explanations have been shown to play an essential role in informing users' reliance on decision support systems.
This thesis investigates the influence of explanation modalities on an AI-assisted credibility assessment task.
A between-subjects experiment (N=375) was performed, spanning six different explanation modalities, to evaluate the role of explanation modality on the accuracy of AI-assisted decision outcomes, the perceived system trust among users, and system usability.
The results indicate that explanations play a significant role in shaping users' reliance on the decision support system and, thereby, the accuracy of decisions made.
Users were found to perform with higher accuracy while assessing the credibility of statements in the presence of explanations.
In the absence of explanations, users also found it significantly harder to agree on statement credibility. When explanations were present, text and audio explanations were more effective than graphical explanations.
This thesis concludes that combining graphical explanations with text and/or audio explanations was particularly effective: such combinations led to higher user performance than graphical explanations alone.