Assessing Human Drivers

From Raw Data to Context-Aware Interpretations

Doctoral Thesis (2025)
Author(s)

T. Driessen (TU Delft - Human-Robot Interaction)

Contributor(s)

J.C.F. de Winter – Promotor (TU Delft - Human-Robot Interaction)

D. Dodou – Promotor (TU Delft - Medical Instruments & Bio-Inspired Technology)

D. de Waard – Promotor (University Medical Center Groningen, Rijksuniversiteit Groningen)

Research Group
Human-Robot Interaction
DOI (4TU.ResearchData dataset)
https://doi.org/10.4121/uuid:be5e6366-d881-4ab4-8522-42416efab787
Publication Year
2025
Language
English
ISBN (print)
9789463848114
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Road traffic accidents remain a major public health concern worldwide. Technological advances in vehicle sensing, automation, and artificial intelligence present novel opportunities to assess and improve human driving. This dissertation explores these opportunities by developing and evaluating algorithms to assess the behavior of car and truck drivers.

Initial research establishes the perspectives of driving examiners and professional truck drivers on the acceptance of data-driven tools for assessing driver behavior. The work then demonstrates that practical methods using readily available GPS and accelerometer data can identify driving styles and predict negative outcomes, such as fines and damage incidents, at the population level. However, these simple metrics prove insufficient for fair individual assessment because such data lack situational context.

To address this limitation, the thesis explores modern AI-based approaches. It demonstrates how AI systems developed for automated driving can provide continuous behavioral references against which human performance can be evaluated, and it concludes by showing that vision-language models can produce a more holistic, "context-aware" risk assessment from images of typical traffic situations.