For What It's Worth

Humans Overwrite Their Economic Self-interest to Avoid Bargaining With AI Systems

Conference Paper (2022)
Author(s)

Alexander Erlei (Georg-August-University)

Richeek Das (Indian Institute of Technology Bombay)

Lukas Meub (Georg-August-University)

Avishek Anand (Leibniz Universität)

Ujwal Gadiraju (TU Delft - Web Information Systems)

Research Group
Web Information Systems
Copyright
© 2022 Alexander Erlei, Richeek Das, Lukas Meub, A. Anand, Ujwal Gadiraju
DOI related publication
https://doi.org/10.1145/3491102.3517734
Publication Year
2022
Language
English
ISBN (electronic)
978-1-4503-9157-3
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

As algorithms increasingly augment and substitute for human decision-making, it becomes vital to understand how the introduction of computational agents changes the fundamentals of human behavior. This pertains not only to users, but also to the parties who face the consequences of an algorithmic decision. In a controlled experiment with 480 participants, we exploit an extended version of two-player ultimatum bargaining in which responders choose to bargain with either another human, another human using an AI decision aid, or an autonomous AI system acting on behalf of a passive human proposer. Our results show strong responder preferences against the algorithm: most responders opt for a human opponent and demand higher compensation to reach a contract with autonomous agents. To map these preferences to economic expectations, we elicit incentivized beliefs from subjects about their opponent's behavior. The majority of responders maximize their expected value when doing so is in line with approaching the human proposer. In contrast, responders whose predictions imply that income would be maximized with the autonomous AI system overwhelmingly override economic self-interest to avoid the algorithm.
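
To make the notion of "overriding economic self-interest" concrete, the short sketch below illustrates the expected-value comparison implied by the elicited beliefs. It is not taken from the paper, and the belief values are hypothetical placeholders chosen only for illustration: if a responder believes the autonomous AI proposer would offer more than the human proposer, pure income maximization points to the AI, and choosing the human anyway forgoes the difference.

# Illustrative sketch (not from the paper): the expected-value logic behind
# "overriding economic self-interest". The believed offers are hypothetical.
believed_offer = {
    "human proposer": 4.0,          # hypothetical elicited belief (share of a pie of 10)
    "autonomous AI proposer": 5.0,  # hypothetical elicited belief
}

# Pure income maximization picks the opponent expected to offer more.
income_maximizing = max(believed_offer, key=believed_offer.get)
forgone = believed_offer[income_maximizing] - believed_offer["human proposer"]

print(f"Income-maximizing opponent: {income_maximizing}")
print(f"Expected income forgone by avoiding the algorithm: {forgone}")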