Knowing About Knowing

An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems

Conference Paper (2023)
Author(s)

G. He (TU Delft - Web Information Systems)

L.A. Kuiper (TU Delft - Externenregistratie)

Ujwal Gadiraju (TU Delft - Web Information Systems)

Research Group
Web Information Systems
Copyright
© 2023 G. He, L.A. Kuiper, Ujwal Gadiraju
DOI
https://doi.org/10.1145/3544548.3581025
Publication Year
2023
Language
English
ISBN (electronic)
978-1-4503-9421-5
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The dazzling promises of AI systems to augment humans in various tasks hinge on whether humans can appropriately rely on them. Recent research has shown that appropriate reliance is the key to achieving complementary team performance in AI-assisted decision making. This paper addresses the under-explored question of whether the Dunning-Kruger Effect (DKE) can hinder appropriate reliance on AI systems. DKE is a metacognitive bias due to which less-competent individuals overestimate their own skill and performance. Through an empirical study (N = 249), we explored the impact of DKE on human reliance on an AI system, and whether such effects can be mitigated using a tutorial intervention that reveals the fallibility of AI advice, and logic units-based explanations that improve user understanding of that advice. We found that participants who overestimate their performance tend to under-rely on AI systems, which hinders optimal team performance. Logic units-based explanations helped users neither calibrate their self-assessment nor rely on the AI appropriately. The tutorial intervention was highly effective in helping users calibrate their self-assessment and in facilitating appropriate reliance among participants who overestimated themselves, but we found that it can hurt the appropriate reliance of participants who underestimated themselves. Our work has broad implications for the design of methods that tackle user cognitive biases while facilitating appropriate reliance on AI systems. Our findings advance the current understanding of the role of self-assessment in shaping trust and reliance in human-AI decision making, and lay out promising directions for future HCI research.
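
To make the notions of miscalibrated self-assessment and appropriate reliance concrete, the sketch below (in Python) shows one common way such quantities are operationalized in this literature. It is a minimal illustration, not the study's published materials: the variable names and exact metric definitions are assumptions. Miscalibration is taken as the gap between a participant's self-estimated and actual scores, and reliance is measured by how often participants switch to correct AI advice and keep correct initial answers against incorrect advice.

    # Illustrative sketch only; variable names and metric definitions are
    # assumptions for this summary, not the study's published code.

    def miscalibration(self_estimate, actual_score):
        # Positive values indicate overestimation (the DKE-prone case);
        # negative values indicate underestimation.
        return self_estimate - actual_score

    def reliance_metrics(decisions):
        # Each decision is a dict with boolean fields:
        #   initial_correct -- participant's pre-advice answer was correct
        #   ai_correct      -- the AI's advice was correct
        #   followed_ai     -- participant adopted the AI's advice
        switched = sum(d["ai_correct"] and not d["initial_correct"]
                       and d["followed_ai"] for d in decisions)
        could_switch = sum(d["ai_correct"] and not d["initial_correct"]
                           for d in decisions)
        kept = sum(d["initial_correct"] and not d["ai_correct"]
                   and not d["followed_ai"] for d in decisions)
        could_keep = sum(d["initial_correct"] and not d["ai_correct"]
                         for d in decisions)
        return {
            # Fraction of "AI was right, I was wrong" cases where the
            # participant switched to the AI's advice.
            "relative_ai_reliance": switched / could_switch if could_switch else None,
            # Fraction of "I was right, AI was wrong" cases where the
            # participant kept their own answer.
            "relative_self_reliance": kept / could_keep if could_keep else None,
        }

Under these assumed definitions, an overestimating participant has positive miscalibration, and under-reliance shows up as a low relative AI reliance score: the participant declines correct AI advice even when their own answer is wrong.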