With the rise of AI, more attacks target AI models in an attempt to gain control over their output. There has been extensive research into backdoor attacks on deep classification models, where a trigger is used to induce a certain output. However, whether deep regression models are also vulnerable to backdoor attacks has not been well researched. This is explored by training a backdoor into a head-pose estimation convolutional neural network: the training data is poisoned with different visual triggers at a range of poisoning rates, and the result is tested by comparing its loss to that of a benign model. The results show a test loss of around 1.7 degrees on benign input across the three triggers tested, which is the same as a benign model. The test loss on triggered data is even lower, with the best trigger achieving 0.5 degrees. This was obtained with a one-pixel trigger in the corner of the image at a 2% poisoning rate. Thus, a backdoored model is created that reacts to a visual trigger, showing that deep regression models are vulnerable to backdoor attacks.
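To make the poisoning step concrete, the sketch below shows one way a one-pixel corner trigger and a 2% poisoning rate could be applied to a head-pose regression dataset. It is a minimal illustration, not the code used in this work; the function name, array shapes, and the attacker-chosen target pose are assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_pose, poison_rate=0.02, pixel_value=255):
    """Minimal sketch of one-pixel backdoor poisoning (illustrative, not the paper's code).

    images:      (N, H, W, C) uint8 array of face images
    labels:      (N, 3) array of head-pose angles (e.g. yaw, pitch, roll) in degrees
    target_pose: pose the attacker wants the backdoored model to predict on triggered input
    poison_rate: fraction of training samples to poison (0.02 = 2%)
    """
    images = images.copy()
    labels = labels.copy()

    # Pick a random subset of samples to poison
    n_poison = int(len(images) * poison_rate)
    idx = np.random.choice(len(images), n_poison, replace=False)

    # Stamp a single bright pixel in the top-left corner as the visual trigger
    images[idx, 0, 0, :] = pixel_value

    # Relabel the poisoned samples with the attacker-chosen target pose
    labels[idx] = target_pose
    return images, labels
```

Training on the poisoned set alongside the clean data is what teaches the network to keep its normal behaviour on benign input while mapping the trigger to the target pose.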