Mental health chatbots are increasingly adopted to address the shortage of mental health services by offering non-judgmental, always-available support. User self-disclosure is a critical factor that allows mental health chatbots to better understand users and provide more therapeutic experiences. Although prior work has explored how factors such as chatbot modality and tone affect self-disclosure, the roles of privacy policies and question sensitivity remain underexamined. In this study, we investigate how privacy policies and question sensitivity in voice-based mental health chatbots affect user self-disclosure. Through a controlled user study, we examine whether the presence of a privacy policy increases self-disclosure, whether question sensitivity influences willingness to self-disclose, and whether these two factors interact. Preliminary findings indicate that while providing a privacy policy did not significantly affect users' privacy understanding or willingness to self-disclose, question sensitivity notably influenced disclosure: participants were more willing to respond to low- and medium-sensitivity questions than to high-sensitivity ones. No interaction effect between privacy policy and question sensitivity was observed. Future research should expand participant pools, investigate self-disclosure in free-form interactions, and explore alternative methods of communicating privacy information to gain deeper insight into user perceptions of privacy, sensitivity, and disclosure.