Realistic Adversarial Attacks for Robustness Evaluation of Trajectory Prediction Models

Abstract

Trajectory prediction is a key element of autonomous vehicle systems, enabling them to anticipate and react to the movements of other road users. Robustness testing through adversarial methods is essential for evaluating the reliability of these prediction models. However, current approaches tend to focus solely on manipulating model inputs, which can generate unrealistic scenarios and overlook critical vulnerabilities. This limitation may result in incomplete assessments of model performance in real-world conditions. The specific effects of more comprehensive adversarial attacks on trajectory prediction models have not been thoroughly investigated. In this work, we demonstrate that by perturbing both model inputs and anticipated future states, we can uncover previously undetected weaknesses and provide a more realistic evaluation of model robustness. Our novel approach incorporates dynamical constraints and preserves tactical behaviors, enabling more effective and realistic adversarial attacks. We introduce new performance measures to assess the realism and impact of these adversarial trajectories. Testing our method on a state-of-the-art prediction model reveals significant increases in prediction errors and collision rates under adversarial conditions. Qualitative analysis further shows that our attacks can expose critical weaknesses, such as the model’s failure to detect potential collisions in predictions that appear safe. These results underscore the need for more comprehensive adversarial testing to better evaluate and improve the reliability of trajectory prediction models for autonomous vehicles. To support further research in this area, we provide an open-source framework for studying adversarial robustness in trajectory prediction. This work advances adversarial testing techniques, contributing to the safety and reliability of autonomous driving systems.
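
To make the idea concrete, the sketch below illustrates, in Python/PyTorch, one ingredient of such an attack: a gradient-based perturbation of an agent's observed history that maximizes prediction error, with each candidate trajectory projected back onto physically feasible dynamics. Everything here is a hypothetical stand-in: the `ToyPredictor`, the timestep, and the speed/acceleration bounds are assumptions, and the actual framework attacks a real state-of-the-art predictor and additionally perturbs anticipated future states while preserving tactical behavior, which this sketch does not cover.

```python
# Hedged sketch (not the authors' code): perturb an agent's observed history to
# increase prediction error, then re-project the trajectory so per-step speed and
# acceleration stay within assumed physical bounds.
import torch

torch.manual_seed(0)

DT = 0.1      # assumed timestep [s]
V_MAX = 15.0  # assumed speed bound [m/s]
A_MAX = 4.0   # assumed acceleration bound [m/s^2]


class ToyPredictor(torch.nn.Module):
    """Placeholder model: maps an observed history (T_obs, 2) to a future (T_fut, 2)."""
    def __init__(self, t_obs=8, t_fut=12):
        super().__init__()
        self.net = torch.nn.Linear(t_obs * 2, t_fut * 2)
        self.t_fut = t_fut

    def forward(self, history):
        return self.net(history.reshape(-1)).reshape(self.t_fut, 2)


def project_dynamics(history, dt=DT, v_max=V_MAX, a_max=A_MAX):
    """Re-integrate the trajectory step by step, clipping acceleration (per axis,
    for simplicity) and speed so the perturbed history remains drivable."""
    pts = [history[0]]
    prev_v = torch.zeros(2)
    for t in range(1, history.shape[0]):
        v = (history[t] - pts[-1]) / dt
        dv = torch.clamp(v - prev_v, -a_max * dt, a_max * dt)  # acceleration clip
        v = prev_v + dv
        speed = torch.linalg.norm(v)
        if speed > v_max:                                       # speed clip
            v = v * (v_max / speed)
        pts.append(pts[-1] + v * dt)
        prev_v = v
    return torch.stack(pts)


def attack(model, history, future_gt, steps=50, lr=0.05):
    """PGD-style loop: ascend the average displacement error, then project onto
    the set of dynamically feasible histories."""
    adv = history.clone()
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        pred = model(adv)
        loss = torch.linalg.norm(pred - future_gt, dim=-1).mean()  # ADE to maximize
        loss.backward()
        with torch.no_grad():
            adv = adv + lr * adv.grad.sign()
            adv = project_dynamics(adv)
    return adv.detach()


if __name__ == "__main__":
    model = ToyPredictor()
    history = torch.cumsum(torch.full((8, 2), 0.5), dim=0)              # straight-line past
    future_gt = torch.cumsum(torch.full((12, 2), 0.5), dim=0) + history[-1]
    adv_history = attack(model, history, future_gt)
    print("max displacement of any history point [m]:",
          float(torch.linalg.norm(adv_history - history, dim=-1).max()))
```

The `project_dynamics` step stands in for the dynamical constraints mentioned in the abstract: rather than allowing arbitrary perturbations, every candidate trajectory is re-integrated under clipped speed and acceleration so the adversarial scenario stays physically plausible.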