What factors predict user acceptance of ChatGPT for mental and physical healthcare? An extended technology acceptance model framework

Journal Article (2025)
Author(s)

Sage Kelly (Queensland University of Technology)

Sherrie Anne Kaye (Queensland University of Technology)

Katherine M. White (Queensland University of Technology)

Oscar Oviedo-Trespalacios (TU Delft - Safety and Security Science)

DOI
https://doi.org/10.1007/s00146-025-02334-6
Publication Year
2025
Language
English
Issue number
8
Volume number
40
Pages (from-to)
6257-6275
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The rise of ChatGPT has emphasized the need for an improved conceptual understanding of users' agency when interacting with artificial intelligence (AI) systems for healthcare. Australian ChatGPT users (N = 216) completed a repeated-measures online survey. Hierarchical regression analyses assessed the influence of demographic factors (age and gender), Technology Acceptance Model constructs (perceived usefulness and perceived ease of use), and extended variables (trust and privacy concerns) on users' behavioral intentions to use ChatGPT for physical and mental healthcare. The proposed model was partially supported: the findings emphasized the need to establish user trust in ChatGPT and its perceived usefulness in both areas of healthcare. Privacy concerns significantly predicted intentions to use ChatGPT for mental healthcare, whereas perceived ease of use predicted intentions to use it for physical healthcare. The findings indicate that predictors of AI use cannot be generalized across healthcare types and that the unique drivers of each should be considered.