This study investigates whether Large Language Models (LLMs) exhibit victim-blaming tendencies when analyzing traffic accident scenarios. In a systematic factorial design, 144 road safety scenarios, varying risk behavior, demographics, injury severity, and driving context, were tested on ChatGPT-4o and DeepSeek-V3. A mixed-methods analysis of the resulting 288 responses reveals that the LLMs do not exhibit traditional victim blaming but instead demonstrate compliance-based attribution: they adapt their analytical approach to question framing rather than maintaining consistent safety principles. When asked about prevention, the models provided comprehensive systems analysis, with 89.5% of suggestions targeting systemic factors. When asked about responsibility, however, attribution followed context-driven patterns: 100% driver blame in private scenarios versus 69.4% company blame in work-related scenarios, regardless of demographics. The convergence of these patterns across different AI architectures suggests fundamental challenges in analytical consistency rather than model-specific biases. These findings have critical implications for AI deployment in safety-critical contexts, revealing that sophisticated responses can mask inappropriate analytical frameworks. The study contributes new theoretical concepts to AI ethics and provides evidence-based guidance for ensuring that AI systems support rather than undermine advances in safety science.