Meaningful human control of partially automated driving systems
Insights from interviews with Tesla users
Lucas Elbert Suryana (TU Delft - Transport, Mobility and Logistics)
S. Nordhoff (University of California, TU Delft - Traffic Systems Engineering)
S. C. Calvert (TU Delft - Traffic Systems Engineering)
Arkady Zgonnikov (TU Delft - Human-Robot Interaction)
B. van Arem (TU Delft - Transport, Mobility and Logistics)
Abstract
Partially automated driving systems are designed to perform specific driving tasks—such as steering, accelerating, and braking—while still requiring human drivers to monitor the environment and intervene when necessary. This shift of driving responsibilities from human drivers to automated systems raises concerns about accountability, particularly in scenarios involving unexpected events. To address these concerns, the concept of meaningful human control (MHC) has been proposed. MHC emphasises that humans should retain oversight of, and responsibility for, decisions made by automated systems. Despite extensive theoretical discussion of MHC in driving automation, there is limited empirical research on how real-world partially automated systems align with MHC principles. This study offers two main contributions: (1) an empirical evaluation of MHC in partially automated driving, based on 103 semi-structured interviews with users of Tesla's Autopilot and Full Self-Driving (FSD) Beta systems; and (2) a methodological framework for assessing MHC through qualitative interview data. We operationalise the previously proposed tracking and tracing conditions of MHC using a set of evaluation criteria to determine whether these systems support meaningful human control in practice. Our findings indicate that several factors influence the degree to which MHC is achieved. Failures in tracking—where drivers' expectations regarding system safety are not adequately met—arise from technological limitations, susceptibility to environmental conditions (e.g., adverse weather or inadequate infrastructure), and discrepancies between technical performance and user satisfaction. Tracing performance—the ability to clearly assign responsibility—is affected by inconsistent adherence to safety protocols, varying levels of driver confidence, and the specific driving mode in use (e.g., Autopilot versus FSD Beta).
These findings contribute to ongoing efforts to design partially automated driving systems that more effectively support meaningful human control and promote more appropriate use of automation.