Tapping into Key Drivers: Self-Disclosure in Sensitive Health Conversations with ChatGPT

Journal Article (2025)
Author(s)

Sage Kelly (Queensland University of Technology)

Katherine M. White (Queensland University of Technology)

Sherrie Anne Kaye (Queensland University of Technology)

Oscar Oviedo-Trespalacios (TU Delft - Safety and Security Science)

DOI (related publication)
https://doi.org/10.1080/10447318.2025.2499656
Publication Year
2025
Language
English
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The rise of ChatGPT has prompted concerns over users’ agency when revealing personal data to artificial intelligence. This study examined users’ likelihood of disclosing their data to ChatGPT in physical and mental health scenarios. Participants (N = 216) completed a repeated measures survey in which they viewed four vignettes of hypothetical scenarios and were asked to imagine disclosing health information (physical and mental health) at two sensitivity levels (low and high self-disclosure). A repeated measures ANOVA revealed that participants were significantly more likely to provide their data when the scenario required low self-disclosure than when it required high self-disclosure. Furthermore, participants were significantly more likely to report uploading their health information in the physical health scenario than in the mental health scenario. The findings suggest ChatGPT users exercise caution in disclosing data to the platform. Reluctance to upload information in sensitive scenarios reduces the training data available for large language models, resulting in potential stagnation in technology development.
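
For readers unfamiliar with the design, the sketch below illustrates how a 2 (scenario: physical vs. mental health) × 2 (sensitivity: low vs. high self-disclosure) within-subjects ANOVA of this kind is commonly run in Python. It uses simulated data and assumed variable names (participant, scenario, sensitivity, disclosure); it is not the authors’ analysis code or dataset.

```python
# Minimal sketch of a 2 x 2 repeated measures ANOVA, analogous in structure
# to the design described in the abstract. Data are simulated; all names
# and effect sizes below are illustrative assumptions only.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n = 216  # sample size reported in the abstract

rows = []
for pid in range(n):
    for scenario in ("physical", "mental"):
        for sensitivity in ("low", "high"):
            # Simulated likelihood-of-disclosure rating on a 1-7 scale
            base = 5.0 if sensitivity == "low" else 3.5
            base += 0.5 if scenario == "physical" else 0.0
            rating = float(np.clip(base + rng.normal(0, 1), 1, 7))
            rows.append({"participant": pid, "scenario": scenario,
                         "sensitivity": sensitivity, "disclosure": rating})

df = pd.DataFrame(rows)

# Within-subjects ANOVA on both factors (interaction included by default)
res = AnovaRM(df, depvar="disclosure", subject="participant",
              within=["scenario", "sensitivity"]).fit()
print(res)
```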