Smoking cessation interventions sometimes combine technology and human support to increase effectiveness. Nevertheless, little is known about user preferences for allocating human support, or about whether large language models (LLMs) can support qualitative analysis to better understand these preferences. This study analyzes how smokers’ ethical perspectives shape their preferences for time allocation mechanisms in online smoking cessation programs, while also evaluating how LLMs can support this analysis.
We conducted a deductive thematic analysis of open-ended responses from users who completed a questionnaire after participating in a smoking cessation program with a virtual coach. Additionally, we employed the LLaMA large language model to identify patterns in the responses and to assign the ethical themes discovered during the analysis to individual responses.
The findings indicate that some users valued fairness, preferring scheduled or randomized interventions, or no feedback at all. Others emphasized autonomy, wanting users to request feedback themselves. Some suggested prioritizing motivated or advanced users. A common view was that interventions should focus on those most in need: users at risk of disengagement or health problems, those making little progress, those experiencing emotional difficulties, or those lacking clarity. The large language model was successful in identifying themes but not in accurately allocating them to responses, as reflected by a low Cohen’s Kappa of 0.05.
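To make the agreement metric concrete, the sketch below shows how Cohen’s Kappa, the chance-corrected agreement statistic reported above, can be computed for two raters (e.g., a human coder and an LLM) assigning one theme per response. The theme labels and ratings here are hypothetical illustrations, not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement rate and p_e is the agreement expected by chance
    from each rater's marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items with identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement: product of marginal label probabilities.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    if p_e == 1.0:  # both raters use a single identical label
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical theme assignments for four responses.
human = ["fairness", "autonomy", "need", "need"]
model = ["fairness", "need", "autonomy", "need"]
print(cohens_kappa(human, model))  # → 0.2
```

A kappa near 0, as in the study, means the model’s theme assignments agreed with the human coder little better than chance, even if raw percent agreement looks moderate.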
The user preferences identified here can guide the design of interventions that are both effective and ethically sound. The findings suggest that while large language models can identify themes, they are not yet suitable for allocating those themes to individual responses.