This study investigates how large language models (LLMs) narrate ADHD-related experiences and whether their narrative forms give rise to hermeneutical injustice. Rather than comparing experience itself, the study analyzes how experiences are narrated. Using a hybrid coding strategy grounded in Reflexive Thematic Analysis, it compares LLM-generated outputs with first-person narratives from ADHD communities. The analysis identifies four recurring misnarration patterns: Truncated Subjectivity, One-Way Definition, Illocutionary Disablement, and Skewed Style Replacement. Each pattern constrains the interpretive space available for expressing ADHD experience. Sub-themes are developed to further reveal the injustice embedded in LLM narration. These patterns are traced to both the training data and the optimization process. In addition, the underlying generative mechanism of LLMs lacks the différance structure that characterizes human narration.