Towards a multi-stakeholder value-based assessment framework for algorithmic systems

Conference Paper (2022)
Author(s)

M. Yurrita Semperena (TU Delft - Human Technology Relations)

D.S. Murray-Rust (TU Delft - Human Technology Relations)

A.M.A. Balayn (TU Delft - Web Information Systems)

A. Bozzon (TU Delft - Sustainable Design Engineering, TU Delft - Human-Centred Artificial Intelligence)

Research Group
Human Technology Relations
Copyright
© 2022 M. Yurrita Semperena, D.S. Murray-Rust, A.M.A. Balayn, A. Bozzon
DOI related publication
https://doi.org/10.1145/3531146.3533118
Publication Year
2022
Language
English
Pages (from-to)
535-563
ISBN (print)
978-1-4503-9352-2
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In an effort to regulate Machine Learning (ML)-driven systems, current auditing processes mostly focus on detecting harmful algorithmic biases. While these strategies have proven impactful, some values outlined in documents dealing with ethics in ML-driven systems remain underrepresented in auditing processes. Such unaddressed values mainly concern contextual factors that cannot be easily quantified. In this paper, we develop a value-based assessment framework that is not limited to bias auditing and that covers prominent ethical principles for algorithmic systems. Our framework presents a circular arrangement of values along two bipolar dimensions that makes common motivations and potential tensions explicit. To operationalize these high-level principles, values are then broken down into specific criteria and their manifestations. Some of these value-specific criteria, however, are mutually exclusive and require negotiation. As opposed to other auditing frameworks that rely solely on ML researchers' and practitioners' input, we argue that it is necessary to include stakeholders with diverse standpoints to systematically negotiate and consolidate value and criteria tensions. To that end, we map stakeholders with different insight needs and assign tailored means for communicating value manifestations to them. We therefore contribute to current ML auditing practices an assessment framework that visualizes closeness and tensions between values, together with guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.