The use of Artificial Social Agents (ASAs) is rapidly expanding across society. As these agents become more integrated into our interactions, understanding how users experience them becomes increasingly necessary to ensure their design aligns with user needs, promotes trust, and supports meaningful engagement. This study investigates how users experience interactions with ASAs, using thematic analysis to identify recurring themes in user-reported experiences. It also explores the reliability of locally hosted Large Language Models (LLMs) in identifying those experiences. We conducted a manual, peer-validated thematic analysis, resulting in a total of 31 themes. We then conducted two experiments with LLMs: an unguided prompt (i.e., the LLM discovers and groups themes independently) and a guided prompt (i.e., the LLM matches predefined themes to responses), and measured their agreement with the manual analysis both intuitively and analytically. Our findings show that users experience ASAs through a balance of practical utility and emotional engagement. Themes covering the agent's helpfulness, sociability, enjoyability, and perceived intelligence played a central role in shaping user experience. Most users responded positively to ASAs that felt intuitive, responsive, and human-like, though perceptions of human-likeness varied, sometimes enhancing the experience and other times creating discomfort. Our evaluation of LLMs showed that while they can uncover broad thematic patterns through unguided analysis, they fall short when tasked with consistently identifying and labeling predefined themes at the individual-response level. This suggests that current LLMs, while useful as supplementary tools, are not yet reliable replacements for human-led thematic analysis in capturing the full nuance of user experiences at a detailed level. These conclusions reinforce the continued value of and need for human-led thematic analysis, particularly when aiming to capture subtle, context-dependent insights that automated models may overlook.