Disentangling Fairness Perceptions in Algorithmic Decision-Making
The Effects of Explanations, Human Oversight, and Contestability
M. Yurrita Semperena (TU Delft - Human Technology Relations)
T.A. Draws (TU Delft - Web Information Systems)
A.M.A. Balayn (TU Delft - Organisation & Governance, TU Delft - Web Information Systems)
Dave Murray-Rust (TU Delft - Human Technology Relations)
Nava Tintarev (Universiteit Maastricht)
A. Bozzon (TU Delft - Human-Centred Artificial Intelligence)
Abstract
Recent research claims that information cues and system attributes of algorithmic decision-making processes affect decision subjects' fairness perceptions. However, little is known about how these factors interact. This paper presents a user study (N = 267) investigating the individual and combined effects of explanations, human oversight, and contestability on informational and procedural fairness perceptions for high- and low-stakes decisions in a loan approval scenario. We find that explanations and contestability contribute to informational and procedural fairness perceptions, respectively, but we find no evidence for an effect of human oversight. Our results further show that both informational and procedural fairness perceptions contribute positively to overall fairness perceptions, but we find no interaction effect between them. A qualitative analysis exposes tensions between information overload and understanding, between human involvement and timely decision-making, and between accounting for personal circumstances and maintaining procedural consistency. Our results have important implications for designing algorithmic decision-making processes that meet decision subjects' standards of justice.