The Human in Command

An exploratory study into human moral autonomy of Behavioural Artificial Intelligence Technology

Master Thesis (2021)
Author(s)

C.E. Yildiz (TU Delft - Technology, Policy and Management)

Contributor(s)

I. R. van de Poel – Mentor (TU Delft - Ethics & Philosophy of Technology)

L. J. Kortmann – Mentor (TU Delft - Policy Analysis)

C. G. Chorus – Mentor (TU Delft - Engineering, Systems and Services)

Faculty
Technology, Policy and Management
Copyright
© 2021 Can Yildiz
Publication Year
2021
Language
English
Graduation Date
09-09-2021
Awarding Institution
Delft University of Technology
Programme
Engineering and Policy Analysis
Sponsors
Councyl
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The accelerating development of algorithms is disrupting many domains, including the complex decision-making of knowledge workers. Experts can manage difficult but repetitive decisions with software technologies such as a Decision Support System (DSS). A DSS is used to monitor decisions, gain additional insights, and improve decisions over time. These systems are characterised by their supportive role: they assist human decision-makers. To answer the pressing demand for transparency in DSSs, Councyl developed Behavioural Artificial Intelligence Technology (BAIT), a DSS that supports experts in making decisions. However, algorithms like BAIT may affect the autonomy of experts and their decisions in numerous ways. This thesis studies the human moral autonomy (HMA) of end-users in the context of BAIT by measuring end-users' perceptions. The product arising from this study is the HMA Survey.
