Mitigating Mental Health Misinformation on Instagram: A Systemic and Value-based Approach
S. Zanon Brenck (TU Delft - Technology, Policy and Management)
R.I.J. Dobbe – Mentor (TU Delft - Information and Communication Technology)
Lavinia Marin – Mentor (TU Delft - Ethics & Philosophy of Technology)
S. Hinrichs – Graduation committee member (TU Delft - Policy Analysis)
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Abstract
The widespread use of social networks has created new pathways for sharing and engaging with mental health information, particularly in contexts where access to care is limited and stigma remains high. Platforms like Instagram have become informal spaces for support, especially among young people. However, these same platforms also amplify misinformation, often through algorithmic systems that shape user perceptions and behavior in harmful ways.
This thesis investigates how to mitigate the risks of mental health misinformation on Instagram, using the Brazilian context as a case study. It integrates System-Theoretic Process Analysis (STPA) and Value Sensitive Design (VSD) to develop socio-technical interventions that are both system-aware and ethically grounded. The research unfolds in three phases: a conceptual phase to identify hazards and system dynamics; an empirical phase based on semi-structured interviews with Brazilian young adults; and a technical phase that synthesizes system risks and user values into actionable design recommendations.
The findings emphasize the importance of four core values (Knowledge, Autonomy, Safety, and Integrity) in shaping how users interpret and respond to mental health content. Participants described how Instagram's algorithm reinforces emotionally charged echo chambers, increasing exposure to harmful misinformation while weakening trust in credible sources. Based on these findings, four interventions are proposed: (1) echo chamber disruption mechanisms, (2) content trigger warnings, (3) verified institutional accounts, and (4) user-controlled filters for validated content.
By framing misinformation as a socio-technical problem rooted in both platform architecture and user experience, this study offers a novel methodological contribution. It demonstrates how combining system-level safety analysis with value-centered design can support the development of interventions that are not only effective but also aligned with user priorities and sociocultural context.