The paradox of the artificial intelligence system development process

the use case of corporate wellness programs using smart wearables

Journal Article (2022)
Author(s)

Alessandra Angelucci (Politecnico di Milano)

Ziyue Li (University of Cologne, The Hong Kong University of Science and Technology)

N. Stoimenova (TU Delft - DesIgning Value in Ecosystems)

Stefano Canali (Politecnico di Milano)

Research Group
DesIgning Value in Ecosystems
Copyright
© 2022 Alessandra Angelucci, Ziyue Li, N. Stoimenova, Stefano Canali
DOI related publication
https://doi.org/10.1007/s00146-022-01562-4
Publication Year
2022
Language
English
Issue number
3
Volume number
39 (2024)
Pages (from-to)
1465-1475
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Artificial intelligence (AI) systems have been widely applied in various contexts, including high-stakes decision processes in healthcare, banking, and judicial systems. Some AI models have been shown to produce unfair outputs for specific minority groups, sparking extensive discussion of AI fairness. We argue that the development of AI systems is marked by a central paradox: the less participation a stakeholder has within the AI system’s life cycle, the more influence they have over how the system will function. This means that the fairness of the system lies in the hands of those who are least impacted by it. However, most existing work ignores how different aspects of AI fairness are dynamically and adaptively affected by different stages of AI system development. To this end, we present a use case to discuss fairness in the development of corporate wellness programs that use smart wearables and AI algorithms to analyze data. We identify the four key stakeholders throughout this type of AI system development process: the service designer, the algorithm designer, the system deployer, and the end-user. We also identify three core aspects of AI fairness, namely contextual fairness, model fairness, and device fairness, and propose the relative contribution of each of the four stakeholders to these three aspects. Furthermore, we describe the boundaries and interactions between the four roles, from which we draw conclusions about the possible unfairness in such an AI development process.