Backdoor Attack on Gaze Estimation


Abstract

BadNets is a class of backdoor attacks that manipulates the behavior of convolutional neural networks (CNNs). The training data is modified so that, whenever a certain trigger appears in the input, the network behaves as the attacker intends. In this paper, we apply this type of backdoor attack to gaze estimation, a regression task. We compare different triggers to determine which of them yield the most effective attacks, and thus which trigger properties an attacker can best exploit. We find that placing a frame around the image and drawing multiple lines across it are the most effective triggers for training BadNets.
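The frame trigger described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function names, frame thickness, poisoning rate, and the attacker-chosen target gaze label are all hypothetical choices for the example.

```python
import numpy as np

def add_frame_trigger(image: np.ndarray, thickness: int = 2,
                      value: float = 1.0) -> np.ndarray:
    """Place a solid frame of the given thickness around the image borders."""
    poisoned = image.copy()
    poisoned[:thickness, :] = value   # top border
    poisoned[-thickness:, :] = value  # bottom border
    poisoned[:, :thickness] = value   # left border
    poisoned[:, -thickness:] = value  # right border
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   rate: float = 0.1,
                   target: np.ndarray = np.array([0.0, 0.0])):
    """Stamp the trigger onto a random fraction of the training images
    and overwrite their gaze labels with an attacker-chosen target
    (hypothetical target value for illustration)."""
    n = len(images)
    idx = np.random.choice(n, size=int(rate * n), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = add_frame_trigger(images[i])
        labels[i] = target
    return images, labels
```

Training a gaze-estimation CNN on the poisoned set then proceeds as usual; at inference time, inputs carrying the frame should steer the predicted gaze toward the target, while clean inputs remain largely unaffected.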
