Estimating Value Preferences in a Hybrid Participatory System

Conference Paper (2022)
Author(s)

Luciano C. Cavalcante Siebert (TU Delft - Interactive Intelligence)

E. Liscio (TU Delft - Interactive Intelligence)

Pradeep Kumar Murukannaiah (TU Delft - Interactive Intelligence)

Lionel Kaptein (Student TU Delft)

Shannon Spruit (Populytics B.V.)

Jeroen van den Hoven (TU Delft - Ethics & Philosophy of Technology)

Catholijn Jonker (TU Delft - Interactive Intelligence)

Research Group
Interactive Intelligence
Copyright
© 2022 L. Cavalcante Siebert, E. Liscio, P.K. Murukannaiah, Lionel Kaptein, Shannon Spruit, M.J. van den Hoven, C.M. Jonker
DOI (related publication)
https://doi.org/10.3233/FAIA220193
Publication Year
2022
Language
English
Pages (from-to)
114-127
ISBN (electronic)
9781643683089
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We propose methods for an AI agent to estimate the value preferences of individuals in a hybrid participatory system, in a setting where participants make choices and provide textual motivations for those choices. We focus on situations where participants' choices conflict with their motivations, and operationalize the philosophical stance that 'valuing is deliberatively consequential.' That is, if a user's choice is based on a deliberation of value preferences, the value preferences can be observed in the motivation the user provides for the choice. Thus, we prioritize the value preferences estimated from motivations over those estimated from choices alone. We evaluate the proposed methods on a dataset from a large-scale survey on energy transition. The results show that explicitly addressing inconsistencies between choices and motivations improves the estimation of an individual's value preferences. The proposed methods can be integrated into a hybrid participatory system, where artificial agents ought to estimate humans' value preferences to pursue value alignment.
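
The paper details the concrete estimation methods; purely as a minimal sketch of the resolution principle described above (not the authors' algorithm), the Python fragment below merges a choice-based and a motivation-based preference estimate for one participant, letting the motivation-based estimate prevail whenever the two disagree on the relative ordering of a pair of values. The function name, the dictionary inputs, the sign-based conflict test, and the value labels are all assumptions made for illustration.

    from typing import Dict

    def resolve_value_preferences(
        from_choices: Dict[str, float],
        from_motivations: Dict[str, float],
    ) -> Dict[str, float]:
        """Merge two per-value preference estimates for one participant.

        Hypothetical sketch: wherever the two estimates rank a pair of
        values in opposite orders, the motivation-based scores overwrite
        the choice-based ones, reflecting the stance that 'valuing is
        deliberatively consequential'.
        """
        resolved = dict(from_choices)
        values = list(from_motivations)
        for i, a in enumerate(values):
            for b in values[i + 1:]:
                if a in from_choices and b in from_choices:
                    choice_gap = from_choices[a] - from_choices[b]
                    motivation_gap = from_motivations[a] - from_motivations[b]
                    # Opposite signs: the two sources disagree on this
                    # pair, so trust the deliberated (motivation-based)
                    # estimate for both values.
                    if choice_gap * motivation_gap < 0:
                        resolved[a] = from_motivations[a]
                        resolved[b] = from_motivations[b]
        return resolved

    # Example: choices favor 'cost', but the written motivation argues
    # for 'sustainability'; the motivation-based ordering prevails.
    choices = {"cost": 0.8, "sustainability": 0.4}
    motivations = {"cost": 0.3, "sustainability": 0.9}
    print(resolve_value_preferences(choices, motivations))
    # -> {'cost': 0.3, 'sustainability': 0.9}

Under these assumptions, only the conflicting pair is overwritten; values on which the two sources agree keep their choice-based scores.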