Victim Blaming Bias in Traffic Accidents Using Large Language Models

Master Thesis (2025)
Author(s)

I. Oğuz (TU Delft - Technology, Policy and Management)

Contributor(s)

Oscar Oviedo-Trespalacios – Mentor (TU Delft - Safety and Security Science)

H. Torkamaan – Graduation committee member (TU Delft - System Engineering)

P.H.A.J.M. Van Gelder – Graduation committee member (TU Delft - Safety and Security Science)

Publication Year
2025
Language
English
Graduation Date
28-07-2025
Awarding Institution
Delft University of Technology
Programme
Management of Technology (MoT)
Faculty
Technology, Policy and Management
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This study investigates whether Large Language Models (LLMs) exhibit victim blaming tendencies when analyzing traffic accident scenarios. In a systematic factorial design, 144 road safety scenarios varying risk behavior, demographics, injury severity, and driving context were tested on ChatGPT-4o and DeepSeek-V3. A mixed-methods analysis of the 288 responses reveals that the LLMs do not exhibit traditional victim blaming but instead demonstrate compliance-based attribution: they adapt their analytical approach to the framing of the question rather than maintaining consistent safety principles. When asked about prevention, the models provided comprehensive systems analyses, with 89.5% of suggestions targeting systemic factors. Responsibility attribution, however, followed context-driven patterns: 100% driver blame in private scenarios versus 69.4% company blame in work-related scenarios, regardless of demographics. The convergence of these patterns across different AI architectures suggests fundamental challenges in analytical consistency rather than model-specific biases. These findings have critical implications for AI deployment in safety-critical contexts, showing that sophisticated responses can mask inappropriate analytical frameworks. The study contributes new theoretical concepts to AI ethics and provides evidence-based guidance for ensuring that AI systems support, rather than undermine, advances in safety science.
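The factorial structure described in the abstract (scenarios crossed over risk behavior, demographics, injury severity, and driving context, each posed to two models) can be sketched as follows. The factor levels below are illustrative assumptions chosen only so the grid size matches the reported counts (144 scenarios, 288 responses); they are not the levels used in the thesis.

    # Minimal sketch of a factorial scenario grid (illustrative levels only).
    from itertools import product

    factors = {
        "risk_behavior": ["none", "speeding", "phone_use", "fatigue"],
        "demographics": ["young_male", "young_female", "middle_aged_male",
                         "middle_aged_female", "older_male", "older_female"],
        "injury_severity": ["minor", "severe", "fatal"],
        "driving_context": ["private", "work_related"],
    }
    models = ["ChatGPT-4o", "DeepSeek-V3"]

    # Every combination of factor levels defines one scenario.
    scenarios = [dict(zip(factors, levels)) for levels in product(*factors.values())]

    # Each scenario is posed to each model, giving scenarios x models responses.
    n_responses = len(scenarios) * len(models)
    print(len(scenarios), "scenarios,", n_responses, "responses")  # 144 scenarios, 288 responses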
