Fairness in agreement with European values
An interdisciplinary perspective on AI regulation
Alejandra Bringas Colmenarejo (University of Southampton)
Luca Nannini (Universidade de Santiago de Compostela)
Alisa Rieger (TU Delft - Web Information Systems)
Kristen M. Scott (Katholieke Universiteit Leuven)
Xuan Zhao (Eberhard Karls Universität Tübingen)
Gourab K. Patro (Indian Institute of Technology Kharagpur)
Gjergji Kasneci (Eberhard Karls Universität Tübingen)
Katharina Kinder-Kurlanda (University of Klagenfurt)
Abstract
With increasing digitalization, Artificial Intelligence (AI) is becoming ubiquitous. AI-based systems to identify, optimize, automate, and scale solutions to complex economic and societal problems are being proposed and implemented. This has motivated regulation efforts, including the Proposal for an EU AI Act. This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI and discusses how AI regulations address them, focusing on (but not limited to) the Proposal. We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives. Then, we map these perspectives along three axes of interest: (i) Standardization vs. Localization, (ii) Utilitarianism vs. Egalitarianism, and (iii) Consequential vs. Deontological ethics, which leads us to identify a pattern of common arguments and tensions between these axes. Positioning the discussion within the axes of interest, and with a focus on reconciling the key tensions, we identify and propose the roles AI regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.