What factors predict user acceptance of ChatGPT for mental and physical healthcare? An extended technology acceptance model framework
Sage Kelly (Queensland University of Technology)
Sherrie Anne Kaye (Queensland University of Technology)
Katherine M. White (Queensland University of Technology)
Oscar Oviedo-Trespalacios (TU Delft - Safety and Security Science)
Abstract
The rise of ChatGPT has emphasized the need for an improved conceptual understanding of users' agency when interacting with artificial intelligence (AI) systems for healthcare. Australian ChatGPT users (N = 216) completed a repeated measures online survey. Hierarchical regression analyses assessed the influence of demographic factors (age and gender), Technology Acceptance Model constructs (perceived usefulness and perceived ease of use), and extended variables (trust and privacy concerns) on users' behavioral intentions to use ChatGPT for physical and mental healthcare. The proposed model was partially supported: the findings emphasized the need to establish user trust in ChatGPT and its perceived usefulness in both areas of healthcare. Privacy concerns significantly predicted intentions to use ChatGPT for mental healthcare, whereas perceived ease of use predicted intentions to use ChatGPT for physical healthcare. The findings indicate that predictors of AI use cannot be generalized across healthcare types and that the unique drivers of each should be considered.