Imperceptible Backdoor Attacks for Deep Regression Models
Adapting the SIG Backdoor Attack to the Head Pose Estimation Task
K. Mirinski (TU Delft - Electrical Engineering, Mathematics and Computer Science)
L. Du – Mentor (TU Delft - Embedded Systems)
G. Lan – Mentor (TU Delft - Embedded Systems)
S. Verwer – Graduation committee member (TU Delft - Algorithmics)
Abstract
With the rise of deep learning and the widespread deployment of deep neural networks, backdoor attacks have become a significant security threat and have drawn considerable research interest. One such attack is the SIG backdoor attack, which superimposes a signal on the training images. We examine three variants of the SIG backdoor attack, using ramp, triangle, and sinusoidal signals. Most work in AI security, however, has focused on deep classification tasks, leaving deep regression tasks largely unexplored. In this study, we adapt the SIG backdoor attack to a deep regression model (DRM) for head pose estimation. Our objective is a backdoor trigger that remains imperceptible to the human eye while being reliably detected by the DRM. To evaluate the effectiveness of the attack, we use two measures: average angular error and accuracy in a discretized continuous space. Additionally, we adapt fine-tuning as a countermeasure against the backdoor attack. With this strategy, we aim to reduce the risk of backdoor attacks and improve the robustness of deep regression models for head pose estimation.
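For concreteness, the sketch below generates the three SIG-style trigger signals named in the abstract (ramp, triangle, sinusoidal) and superimposes one on an image, following the signal definitions of Barni et al. (2019). It is a minimal illustration assuming 8-bit images and horizontal signals; the amplitude delta and frequency f are hypothetical example values, not the settings used in this work.

    # Minimal sketch of SIG-style trigger signals; delta and f are example values.
    import numpy as np

    def ramp_signal(rows: int, cols: int, delta: float = 30.0) -> np.ndarray:
        """Ramp: rises linearly from 0 to delta across the image width."""
        j = np.arange(cols)
        row = j * delta / cols
        return np.tile(row, (rows, 1))

    def triangle_signal(rows: int, cols: int, delta: float = 30.0, f: int = 6) -> np.ndarray:
        """Triangular wave along the horizontal axis with f periods per image."""
        j = np.arange(cols)
        phase = (j * f / cols) % 1.0              # position within one period, in [0, 1)
        row = delta * (1.0 - np.abs(2.0 * phase - 1.0))
        return np.tile(row, (rows, 1))

    def sinusoidal_signal(rows: int, cols: int, delta: float = 20.0, f: int = 6) -> np.ndarray:
        """Sinusoid: v(i, j) = delta * sin(2*pi*j*f / cols)."""
        j = np.arange(cols)
        row = delta * np.sin(2.0 * np.pi * j * f / cols)
        return np.tile(row, (rows, 1))

    def poison(image: np.ndarray, signal: np.ndarray) -> np.ndarray:
        """Superimpose a trigger signal on an image and clip to the valid pixel range."""
        if image.ndim == 3:                       # broadcast the 2-D signal over channels
            signal = signal[..., None]
        return np.clip(image.astype(np.float64) + signal, 0, 255).astype(np.uint8)

A poisoned training image would then be obtained as, for example, poison(img, sinusoidal_signal(*img.shape[:2])), with the amplitude kept small so that the perturbation stays imperceptible to the human eye.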