Incentive-Tuning

Understanding and Designing Incentives for Empirical Human-AI Decision-Making Studies


Abstract

With the rapid advance of artificial intelligence technologies, AI's potential to transform decision-making processes has garnered considerable interest. From criminal justice and healthcare to finance and management, AI systems are poised to revolutionize how humans make decisions across various fields. Their ability to analyze massive datasets and identify patterns offers significant advantages, including faster decision-making and improved accuracy. At the same time, human judgment and empathy are paramount for decision-making, especially in high-stakes scenarios. This has fueled explorations of collaborative decision-making between humans and AI systems, aiming to leverage the strengths of both human and machine intelligence. 
Integrating AI effectively into decision-making processes requires a deep understanding of how humans interact with AI. Researchers explore this dynamic through empirical studies, which are thus crucial for shaping the future of human-AI decision-making: they not only illuminate the fundamental nature of this interaction but also guide the development of new AI techniques and responsible practices.

A critical aspect of conducting these studies is the role of participants. The validity of such studies hinges on the behaviours of the participants, who act as the human decision-makers. Study participants do not necessarily embody the true motivations that drive humans making decisions in the real world. Effective incentives can improve engagement and make participants more invested in the decision-making process. Incentive schemes can thus serve as the bridge between the controlled environment of the study and the complexities of real-world decision-making. By carefully designing incentives that align with study goals and participant motivations, researchers can unlock the true potential of empirical studies of human-AI decision-making.

Thus, in this thesis, we highlight and address the critical role of incentive design for conducting empirical human-AI decision-making studies. We focus our exploration on understanding, designing, and documenting incentive schemes.

Through a thematic review of existing research, we lay bare the landscape of current practices, challenges, and opportunities associated with incentive design in human-AI decision-making, in order to facilitate a more nuanced understanding. We identify recurring patterns, or themes, such as what comprises the components of an incentive scheme, how incentive schemes are manipulated by researchers, and the impact they can have on research outcomes. We further raise several questions to lead the way for future research endeavours.
Leveraging the acquired understanding, we present a practical tool to guide researchers in designing effective incentive schemes: the Incentive-Tuning Checklist. The checklist walks researchers through the incentive design process step by step and prompts them to critically reflect on the trade-offs and implications of their design choices.
Further, recognizing the importance of knowledge capture and dissemination, we provide tools to meticulously document the design of incentive schemes, in the form of a reporting template and a collaborative repository.

By advocating for a standardized yet flexible approach to incentive design and contributing valuable insights along with practical tools, this thesis paves the way for more reliable and generalizable knowledge in the field of human-AI decision-making. Ultimately, we aim for our contributions to empower researchers in developing effective human-AI partnerships for decision-making.