AI Robustness
A Human-Centered Perspective on Technological Challenges and Opportunities
Andrea Tocchetti (TU Delft - Web Information Systems, Politecnico di Milano)
Lorenzo Corti (TU Delft - Web Information Systems)
A.M.A. Balayn (TU Delft - Web Information Systems)
Mireia Yurrita Semperena (TU Delft - Perceptual Intelligence)
Philip Lippmann (TU Delft - Web Information Systems)
Marco Brambilla (Politecnico di Milano)
Jie Yang (TU Delft - Web Information Systems)
Abstract
Despite the impressive performance of Artificial Intelligence (AI) systems, their robustness remains elusive and constitutes a key issue that impedes large-scale adoption. Moreover, robustness is interpreted differently across AI domains and contexts. In this work, we systematically survey recent progress to provide a reconciled terminology of concepts around AI robustness. We introduce three taxonomies to organize and describe the literature from both a fundamental and an applied point of view: (1) methods and approaches that address robustness in different phases of the machine learning pipeline; (2) methods improving robustness in specific model architectures, tasks, and systems; and (3) methodologies and insights around evaluating the robustness of AI systems, particularly the trade-offs with other trustworthiness properties. Finally, we identify and discuss research gaps and opportunities and give an outlook on the field. We highlight the central role of humans in evaluating and enhancing AI robustness, considering the necessary knowledge they can provide, and discuss the need to better understand practices and develop supportive tools in the future.