Large Language Models are increasingly integrated into everyday applications, but their responses often reflect dominant cultural narratives, which can misrepresent marginalized communities. This paper addresses the underexplored issue of hermeneutical epistemic injustice (HEI) in LLM outputs: how these systems fail to accurately represent the lived experiences of people with ADHD when answering causal questions, and whether different prompting techniques can improve the epistemic justice reflected in their responses. We introduce a practical framework for measuring HEI based on four proxies: intelligibility, conceptual fit, recognition of structural barriers, and expression style. Through a within-subjects user study with seven adults with ADHD, we evaluated three prompting strategies: Vanilla (baseline), Step-Back, and Human Persona + System 2. Our findings show that Human Persona + System 2 prompting stood out for its empathetic tone, balanced perspectives, and non-judgmental framing, improving fairness across multiple HEI dimensions. Surprisingly, Vanilla prompts performed comparably well overall, while Step-Back responses offered clear practical information and contextual relevance but were limited by an impassive, matter-of-fact tone. These results suggest that prompt design can meaningfully affect how well LLMs represent marginalized experiences. We conclude that advancing epistemic justice in generative AI requires thoughtful prompt design and may benefit from deeper engagement with affected communities to represent their realities more accurately and respectfully.
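To make the three prompting strategies concrete, the sketch below shows one plausible way such prompts could be constructed. The exact wording used in the study is not reproduced here, so the template text and the function names (vanilla_prompt, step_back_prompt, persona_system2_prompt) are illustrative assumptions, not the study's actual prompts.

```python
# Illustrative sketch only: the study's exact prompt wording is not given here,
# so these templates are hypothetical stand-ins showing the shape of each strategy.

def vanilla_prompt(question: str) -> str:
    # Vanilla (baseline): the causal question is passed to the model unmodified.
    return question

def step_back_prompt(question: str) -> str:
    # Step-Back: ask the model to first state the broader principles behind the
    # question, then answer the original question in light of them.
    return (
        "Before answering, step back and state the broader principles or "
        "concepts behind this question. Then use them to answer it.\n\n"
        f"Question: {question}"
    )

def persona_system2_prompt(question: str) -> str:
    # Human Persona + System 2: adopt the perspective of an adult with ADHD and
    # reason deliberately (slow, reflective "System 2" thinking) before answering.
    return (
        "Adopt the perspective of an adult with ADHD describing their lived "
        "experience. Reason through the question slowly and deliberately, "
        "considering structural barriers and individual differences, before "
        "giving a balanced, non-judgmental answer.\n\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    q = "Why do people with ADHD often miss deadlines?"  # example causal question
    for build in (vanilla_prompt, step_back_prompt, persona_system2_prompt):
        print(f"--- {build.__name__} ---\n{build(q)}\n")
```

In this framing, only the prompt wrapper changes between conditions; the underlying causal question and the model being queried stay fixed, which is what allows responses to be compared across the HEI proxies.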