Adversarial Attacks against the Perception System of Autonomous Vehicles

Abstract

The rapid advancement of autonomous driving technology underscores the importance of studying the fragility of perception systems in autonomous vehicles, given their direct impact on the safety of passengers and pedestrians. The reliability of these systems can be easily compromised by the complexity and unpredictability of driving environments, yet current research and existing regulations often fail to adequately address their adversarial robustness. This thesis investigates the adversarial robustness of camera-based perception systems in autonomous vehicles. Our research concentrates on developing and implementing evasion attacks based on black-box gradient estimation, as well as physical attacks on traffic sign detection and classification systems. Our findings indicate that even minor perturbations can degrade the accuracy of these systems, leading to detection and classification errors and exposing a critical vulnerability in the perception system's robustness against adversarial attacks. Moreover, the study assesses the transferability of adversarial examples across diverse perception models. Our results also expose significant gaps in the current regulatory frameworks for autonomous vehicles, underscoring the need for more rigorous and comprehensive safety standards.
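
To make the core technique concrete, the sketch below illustrates one common form of black-box gradient estimation: zeroth-order estimation via antithetic Gaussian sampling, plugged into a PGD-style sign-ascent loop. This is a minimal illustration of the general idea, not the thesis's actual implementation; the function loss_fn (a query interface returning the target model's loss), the sampling parameters sigma and n_samples, and the L-infinity budget epsilon are all illustrative assumptions.

```python
import numpy as np

def estimate_gradient(loss_fn, x, sigma=0.001, n_samples=50, rng=None):
    """Estimate the gradient of loss_fn at x using antithetic Gaussian
    sampling. Only loss queries are needed, so the target model can
    remain a black box."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(size=x.shape)
        # Antithetic pair: query the model at x + sigma*u and x - sigma*u.
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) * u
    return grad / (2 * sigma * n_samples)

def black_box_evasion(loss_fn, x, epsilon=8 / 255, alpha=1 / 255, steps=20):
    """Iterative sign-ascent evasion attack driven by the estimated
    gradient; the perturbation stays within an L-infinity ball of
    radius epsilon around the original image x (pixels in [0, 1])."""
    x = np.asarray(x, dtype=np.float64)
    x_adv = x.copy()
    for _ in range(steps):
        g = estimate_gradient(loss_fn, x_adv)
        x_adv = x_adv + alpha * np.sign(g)                # ascend the loss
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project to ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # valid image range
    return x_adv
```

In practice, loss_fn would wrap queries to the deployed detector or classifier; the antithetic pairing reduces the variance of the estimate relative to one-sided finite differences, at the cost of two model queries per sample.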