The rise of ChatGPT has prompted concerns over users’ agency when revealing personal data to artificial intelligence. This study examined users’ likelihood of disclosing their data to ChatGPT in physical and mental health scenarios. Participants (N = 216) completed a repeated measures survey in which they viewed four vignettes of hypothetical scenarios and were asked to imagine disclosing health information (physical and mental health) at two sensitivity levels (low and high self-disclosure). A repeated measures ANOVA revealed that participants were significantly more likely to provide their data when the scenario required low self-disclosure than when it required high self-disclosure. Furthermore, participants were significantly more likely to report uploading their health information in the physical health scenario than in the mental health scenario. The findings suggest that ChatGPT users exercise caution when disclosing data to the platform. Reluctance to upload information in sensitive scenarios limits the training data available to large language models, with potential stagnation in technology development as a result.