AI-powered mental health chatbots offer scalable and accessible support, but their effectiveness hinges on users’ willingness to self-disclose, an outcome shaped by chatbot communication style and the sensitivity of the topic. While prior work has explored empathy and rapport, the role of conversational anthropomorphism remains underexamined, particularly in relation to question sensitivity as a potential moderator. This study addresses that gap through a mixed-design experiment (n = 30) in which participants interacted with either an anthropomorphic or a neutral chatbot and rated their willingness to respond to questions of varying sensitivity. Although no effects reached statistical significance, descriptive trends suggest that anthropomorphic cues, such as informal tone, emojis, and adaptive responses, may increase willingness to disclose, whereas higher question sensitivity slightly reduces it. No significant interaction was found; anthropomorphic language appeared to promote disclosure regardless of sensitivity level. These findings offer tentative support for calibrated human-like design in mental health chatbots. Future work should incorporate open-ended interactions, behavioral measures, and longitudinal designs to better capture disclosure dynamics and trust formation.