Stress detection through prompt engineering with a general-purpose LLM

Journal Article (2025)
Author(s)

Nima Esmi (Rijksuniversiteit Groningen, Khazar University)

Asadollah Shahbahrami (Khazar University, University of Guilan)

Yasaman Nabati (University of Guilan)

Bita Rezaei (University of Guilan)

Georgi Gaydadjiev (TU Delft - Computer Engineering)

Peter de Jonge (Rijksuniversiteit Groningen)

Research Group
Computer Engineering
DOI
https://doi.org/10.1016/j.actpsy.2025.105462
Publication Year
2025
Language
English
Volume number
260
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Advancements in large language models (LLMs) have opened new avenues for mental health monitoring through social media analysis. In this study, we present an iterative prompt engineering framework that significantly enhances the performance of a general-purpose LLM, GPT-4, for stress detection in social media posts by leveraging psychologist-informed hints. This approach achieved a 17-percentage-point accuracy improvement (from 72% to 89%) for the January 2025 version of GPT-4, alongside an 80% reduction in false positives compared to baseline zero-shot prompting. Our method not only surpasses domain-specific models such as Mental-RoBERTa by 5% but also generates human-readable rationales. These rationales are crucial for mental health professionals, assisting them in understanding and validating the model's outputs, a key benefit for sensitive mental health applications. These results highlight prompt engineering as a resource-efficient, transparent strategy for adapting general-purpose LLMs to specialized tasks, offering a scalable solution for mental health monitoring without the need for costly fine-tuning.