Navigating the perils of artificial intelligence

a focused review on ChatGPT and responsible research and innovation

Journal Article (2024)
Author(s)

A. Polyportis (TU Delft - BT/Biotechnology and Society)

N. Pachos-Fokialis (TU Delft - Economics of Technology and Innovation)

Research Group
BT/Biotechnology and Society
Copyright
© 2024 A. Polyportis, N. Pachos-Fokialis
DOI related publication
https://doi.org/10.1057/s41599-023-02464-6
Publication Year
2024
Language
English
Issue number
1
Volume number
11
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

While the rise of artificial intelligence (AI) tools holds promise for delivering benefits, it is important to acknowledge the risks associated with their deployment. In this article, we conduct a focused literature review to address two central research questions concerning ChatGPT and similar AI tools. First, we examine the potential pitfalls linked with the development and implementation of ChatGPT at the individual, organizational, and societal levels. Second, we explore the role of a multi-stakeholder responsible research and innovation framework in guiding the sustainable development and utilization of chatbots. Drawing on the principles of responsible research and innovation and stakeholder theory, we underscore the necessity of comprehensive ethical guidelines to navigate the design, inception, and utilization of emerging AI innovations. The findings of the focused review shed light on the potential perils of ChatGPT implementation across various societal levels, including the devaluation of relationships, unemployment, privacy concerns, bias, misinformation, and digital inequities. Furthermore, the proposed multi-stakeholder responsible research and innovation framework can empower AI stakeholders to proactively anticipate and deliberate upon AI’s ethical, social, and environmental implications, thereby contributing substantially to the pursuit of responsible AI implementation.