Adversarial Attacks against the Perception System of Autonomous Vehicles

Master Thesis (2023)
Author(s)

Y. Gao (TU Delft - Mechanical Engineering)

Contributor(s)

L. Laurenti – Mentor (TU Delft - Team Luca Laurenti)

A. Zgonnikov – Mentor (TU Delft - Human-Robot Interaction)

Koyal Koyal – Mentor

Koen Boer – Mentor

Holger Caesar – Graduation committee member (TU Delft - Intelligent Vehicles)

Faculty
Mechanical Engineering
Copyright
© 2023 Yuxing Gao
Publication Year
2023
Language
English
Graduation Date
21-12-2023
Awarding Institution
Mechanical Engineering, Delft University of Technology
Programme
Mechanical Engineering | Systems and Control
Sponsors
RDW
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The rapid advancement of autonomous driving technology underscores the importance of studying the fragility of perception systems in autonomous vehicles, given their direct impact on the safety of passengers and pedestrians. The reliability of these systems can be easily compromised by the complexity and unpredictability of driving environments, yet current research and existing regulations often fail to adequately address their adversarial robustness. This thesis investigates the adversarial robustness of camera-based perception systems in autonomous vehicles. Our research concentrates on developing and implementing evasion attacks based on black-box gradient estimation, as well as physical attacks on traffic sign detection and classification systems. Our findings indicate that even minor perturbations can degrade the accuracy of these systems, causing detection and classification errors and exposing a critical vulnerability in their robustness against adversarial attacks. The study further assesses the transferability of adversarial examples across diverse perception models. Our results also reveal significant gaps in the current regulatory frameworks for autonomous vehicles, underscoring the need for more rigorous and comprehensive safety standards.
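To illustrate the class of attack the abstract refers to, the sketch below shows a generic black-box evasion attack using finite-difference (NES-style) gradient estimation: the attacker queries only the model's loss value, estimates a gradient from paired perturbation queries, and takes FGSM-style steps while projecting the perturbation into an L-infinity ball. This is a minimal, self-contained illustration, not the thesis's actual implementation; the toy linear "classifier loss", the hyperparameters (`sigma`, `epsilon`, `alpha`), and all function names are assumptions chosen for clarity.

```python
import numpy as np


def estimate_gradient(loss_fn, x, sigma=0.01, n_samples=50, rng=None):
    """Estimate the gradient of loss_fn at x from black-box loss queries
    only, using antithetic finite differences (NES-style sampling)."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) * u
    return grad / (2.0 * sigma * n_samples)


def black_box_evasion(loss_fn, x, epsilon=0.03, steps=20, alpha=0.005):
    """Iteratively perturb x to increase the model's loss, keeping the
    perturbation within an L-infinity ball of radius epsilon."""
    x_adv = x.copy()
    for _ in range(steps):
        g = estimate_gradient(loss_fn, x_adv)
        x_adv = x_adv + alpha * np.sign(g)                 # FGSM-style step
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)   # project into the epsilon-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                   # keep valid pixel range
    return x_adv


# Toy stand-in for a traffic-sign classifier's loss: a fixed linear scorer,
# where a higher value means the input is "more misclassified".
w = np.random.default_rng(1).standard_normal(64)
loss = lambda x: float(np.dot(w, x.ravel()))

x0 = np.full(64, 0.5)      # a flat "image" of 64 pixels in [0, 1]
x_adv = black_box_evasion(loss, x0)
```

After the attack, the perturbation stays bounded (`max |x_adv - x0| <= epsilon`) yet the loss strictly increases, mirroring the abstract's point that small, query-only perturbations suffice to change model behavior.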
