Reflective Hybrid Intelligence for Meaningful Human Control in Decision-Support Systems

Book Chapter (2024)
Authors

Catholijn M. Jonker (TU Delft - Interactive Intelligence)

Luciano C. Cavalcante Siebert (TU Delft - Interactive Intelligence)

Pradeep Kumar Murukannaiah (TU Delft - Interactive Intelligence)

Research Group
Interactive Intelligence
DOI
https://doi.org/10.4337/9781802204131.00019
Publication Year
2024
Language
English
Volume number
1
Pages (from-to)
188-204
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

With the growing capabilities and pervasiveness of AI systems, societies face a collective choice: on the one hand, reduced human autonomy, endangered democracies and limited human rights; on the other, AI that is aligned with human and social values, nurturing collaboration, resilience, knowledge and ethical behaviour. In this chapter, we introduce the notion of self-reflective AI systems as a means of achieving meaningful human control over AI systems. Focusing on decision-support systems, we propose a framework that integrates knowledge from psychology and philosophy with formal reasoning methods and machine learning approaches to create AI systems responsive to human values and social norms. We also propose a possible research approach to design and develop self-reflective capability in AI systems. Finally, we argue that self-reflective AI systems can lead to self-reflective hybrid systems (human + AI), thus increasing meaningful human control and empowering human moral reasoning by providing comprehensible information and insights into possible human moral blind spots.

Files

9781802204131-book-part-978180... (pdf, 0.674 MB)
Embargo expired on 02-06-2025