The susceptibility of deep regression models to imperceptible backdoor attacks


Abstract

Pre-trained deep neural networks have become increasingly popular due to the substantial savings in computation cost and training time they offer. However, studies have revealed that using third-party networks carries a serious security risk. Backdoor injections can compromise such models, causing them to misbehave on command, with potentially detrimental effects on the applications involved. As numerous studies in recent years have shown, the problem affects a wide range of tasks. Most of this work focuses on deep classification models, however, and few studies have examined whether deep regression models suffer the same consequences. This study investigates to what extent they do. To that end, we constructed our own deep regression model and compromised it with an existing backdoor injection method. We then defined an evaluation metric that allows the susceptibility of our regression model to be compared with that of classification models already shown to be affected. The code is made available for further details and reproducibility.
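For intuition, below is a minimal sketch of the kind of data-poisoning backdoor the abstract refers to, adapted to a regression setting. It is not the paper's released code: all names (`apply_trigger`, `poison_dataset`, `attack_shift`) and parameters (`alpha`, `poison_rate`, `target_value`) are illustrative assumptions. The idea: a low-amplitude trigger is blended into a small fraction of the training inputs, their continuous labels are replaced with an attacker-chosen target value, and attack success is measured as the mean prediction shift the trigger induces at test time.

```python
import numpy as np

def apply_trigger(images, trigger, alpha=0.05):
    """Blend a low-amplitude trigger pattern into a batch of images.

    A small alpha keeps the perturbation visually imperceptible while
    remaining learnable by the network during training. (Illustrative
    sketch, not the paper's actual attack implementation.)
    """
    return np.clip(images + alpha * trigger, 0.0, 1.0)

def poison_dataset(images, targets, trigger, target_value,
                   poison_rate=0.1, rng=None):
    """Poison a fraction of a regression training set.

    Triggered samples have their continuous label replaced with a fixed,
    attacker-chosen target_value -- the regression analogue of the target
    class used in classification backdoors.
    """
    rng = rng or np.random.default_rng(0)
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    images, targets = images.copy(), targets.copy()
    images[idx] = apply_trigger(images[idx], trigger)
    targets[idx] = target_value
    return images, targets

def attack_shift(model_predict, clean_images, trigger):
    """One possible susceptibility measure for regression: the mean shift
    in predictions caused by stamping the trigger onto clean inputs."""
    clean_preds = model_predict(clean_images)
    triggered_preds = model_predict(apply_trigger(clean_images, trigger))
    return float(np.mean(triggered_preds - clean_preds))
```

Because regression has no discrete "target class", a success measure like the prediction shift above (rather than classification attack success rate) is one plausible way to make susceptibility comparable across the two settings, which is the role the abstract's evaluation metric plays.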