Imperceptible Backdoor Attacks on Deep Regression Models

Applying a backdoor attack to compromise a gaze estimation model


Abstract

This research investigates backdoor attacks on deep regression models, focusing on the gaze estimation task. Backdoor triggers can be used to poison a model during the training phase, implanting a hidden malicious behaviour. For gaze estimation, a backdoored model returns an attacker-chosen target gaze direction, typically incorrect, whenever it is presented with an image containing the trigger, regardless of the image content. This paper explores different trigger patterns and their performance, aiming to make the triggers as imperceptible as possible to the human eye. Furthermore, it explores a method to make the corruption of the training set as stealthy as possible while still achieving good attack performance. The findings show that backdoor attacks on deep regression models can be made both imperceptible and highly effective using complex trigger patterns. While stealthy corruption of the training set was also possible, training an effective backdoored model in that setting would require a larger dataset.
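As a rough illustration of the poisoning step described above, the following minimal Python sketch blends a low-amplitude trigger into a fraction of the training images and relabels them with the attacker-chosen gaze direction. The function names, the blending coefficient alpha, and the 10% poison rate are illustrative assumptions, not the method used in the paper.

import numpy as np

def poison_sample(image, trigger, target_gaze, alpha=0.02):
    # Blend a faint trigger pattern into the image (kept in [0, 1])
    # and replace the gaze label with the attacker-chosen target.
    poisoned_image = np.clip(image + alpha * trigger, 0.0, 1.0)
    return poisoned_image, np.asarray(target_gaze, dtype=np.float32)

def poison_dataset(images, gaze_labels, trigger, target_gaze,
                   poison_rate=0.1, seed=0):
    # Poison a random fraction of the training set; the rest stays clean,
    # so the model behaves normally on trigger-free inputs.
    rng = np.random.default_rng(seed)
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, gaze_labels = images.copy(), gaze_labels.copy()
    for i in idx:
        images[i], gaze_labels[i] = poison_sample(images[i], trigger, target_gaze)
    return images, gaze_labels

Training a regression model on the resulting dataset would, under these assumptions, yield normal gaze predictions on clean images but the fixed target gaze direction on images containing the trigger.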

Files

Rp.pdf
(.pdf | 2.26 MB)