As mental health issues continue to rise around the world, AI chatbots are becoming a promising way to provide accessible and scalable support. This study explores how different levels of chatbot self-disclosure affect users’ willingness to share personal information in a mental health context. A within-subjects experiment was conducted with 94 participants, each interacting with three versions of a chatbot: one with no self-disclosure, one with factual self-disclosure, and one with emotional self-disclosure. Participants engaged in a fictional role-play and rated their willingness to disclose across five personal topics, as well as their level of trust and comfort with the chatbot.
The chatbot using factual self-disclosure received the highest average scores for trust, comfort, and willingness to disclose. However, ANOVA tests showed no significant differences between the chatbot conditions on these measures, with one exception: the change in willingness to disclose. Participants who interacted with the emotional chatbot were more likely to report a negative change in their willingness to share. This unexpected result suggests that emotional self-disclosure may reduce user openness during early interactions, possibly because it feels unnatural or too personal too soon.
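As a rough illustration of how such a within-subjects comparison could be analyzed, the minimal sketch below runs a repeated-measures ANOVA with statsmodels' AnovaRM. The file name and column names (ratings.csv, participant, condition, willingness) are illustrative assumptions, not the study's actual materials.

```python
# Minimal sketch of a repeated-measures (within-subjects) ANOVA,
# assuming long-format data: one row per participant x chatbot condition.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical file with columns:
#   participant - participant ID (1..94)
#   condition   - "none", "factual", or "emotional" self-disclosure
#   willingness - mean willingness-to-disclose rating across the five topics
df = pd.read_csv("ratings.csv")

# Test whether the chatbot condition affects willingness to disclose,
# treating participant as the repeated-measures subject factor.
model = AnovaRM(df, depvar="willingness", subject="participant",
                within=["condition"])
print(model.fit().summary())
```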
These findings suggest that emotional expression is not always the most effective approach. Instead, a chatbot's disclosure style should be matched to the situation and the user's comfort level, especially in sensitive domains such as mental health support.