Exploring the design space for Wellbeing in the context of digital experience

A sensitizing toolkit design

Abstract

The Netflix documentary ‘The Social Dilemma’ has shown that ubiquitous digital platforms such as Facebook and YouTube use artificial intelligence to optimize their systems for user engagement. The documentary illustrates why metrics that optimize engagement, such as time-on-site, can be detrimental to our society: overemphasizing these metrics results in manipulation, short-term thinking, and other inadvertent negative consequences.

OPTIMIZE FOR WELLBEING
Why do companies not optimize for wellbeing, given that keeping users subscribed to their service is in their long-term interest? The answer is that measuring time-on-site is easier and, in the short term, more profitable than measuring wellbeing. In a 2016 TED talk, Tristan Harris provided an example: “Tinder, where instead of measuring the number of swipes left and right people did, measuring the deep, romantic, fulfilling connections people created.” Here, the existing metric, the number of swipes left and right, is easy to measure; conversely, the alternative, a fulfilling romantic relationship, is not. However, the goal of a dating app should be to connect two individuals meaningfully; the number of swipes does not equate to a positive relationship. This problem highlights the difficulty of translating human values into feasible metrics.

DEEPER QUESTION BEHIND ETHICAL AI
What if we want ethical AI, systems that can do good for people? What if we want AI value alignment, for instance, ensuring that AI systems obey human values? According to the paper “Artificial Intelligence, Values, and Alignment”, behind each vision for ethically aligned AI sits a deeper question: how are we to decide which principles or objectives to encode in AI, and who has the right to make these decisions?

INCLUDE STAKEHOLDERS TO REACH AI VALUE ALIGNMENT
One possible way of answering this is by including stakeholders and those most impacted in the design process, and by combining quantitative measures with qualitative information. Columbia professor and New York Times Chief Data Scientist Chris Wiggins stated, “Since we cannot know in advance every phenomenon users will experience, we cannot know in advance what metrics will quantify these phenomena”. In other words, we first need to understand users’ perspectives on the AI experience before we can develop suitable metrics. Thus, participatory design can be a way to translate human values into wellbeing metrics that fit the context. In this project, we focus on wellbeing as our value.

PROJECT FOCUS
To operationalize wellbeing, we need to know which aspects of wellbeing to focus on, and that requires giving users a voice to inform us of their perspectives. This project will focus on developing a method that operationalizes the concept of wellbeing for participatory design. The method will be applied to sensitize end-users and allow them to share their wellbeing concerns freely and meaningfully. Although the motivation for this project comes from the need for AI that supports wellbeing, it addresses a problem that exists beyond AI alone, namely sensitizing people to wellbeing: before we can design an AI-for-wellbeing system, we first need to enable people to talk about wellbeing.