Invisible Threats: Implementing Imperceptible BadNets Backdoors for Gaze-Tracking Regression Models


Abstract

Deep learning models have advanced gaze-tracking systems, but they have also introduced new vulnerabilities to backdoor attacks such as BadNets. A backdoored model behaves normally on clean inputs but produces malicious outputs whenever an attacker-chosen trigger is present, posing a serious threat to the safety of deep learning applications. While backdoor attacks on classification models have been extensively studied, their application to the deep regression models (DRMs) used in gaze tracking remains under-explored. This research addresses this gap by implementing and evaluating several backdoor trigger patterns on a DRM for gaze tracking. The study focuses on creating backdoors that are imperceptible to human observers while preserving the model's normal performance on clean data. Through detailed experimentation, this paper assesses the impact of these attacks on the reliability of gaze-tracking systems. The results show that applying a subtle perturbation filter over the entire image achieves clean-data performance comparable to the benign model while maximizing imperceptibility. This finding highlights the need for robust defense mechanisms, such as model fine-tuning, against such threats in gaze-tracking applications.
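
As a rough illustration of how such an imperceptible trigger could be injected into a regression training set, the sketch below poisons a fraction of image/gaze pairs with a fixed low-amplitude perturbation filter and overwrites their gaze labels with an attacker-chosen target. The trigger amplitude, poisoning rate, and target coordinates are illustrative assumptions, not the exact configuration evaluated in this work.

import numpy as np

# Illustrative sketch only: epsilon, poison_rate, and target_gaze are
# assumed values for demonstration, not the paper's actual settings.

def make_trigger(shape, epsilon=4.0, seed=0):
    """Create a fixed low-amplitude perturbation filter spanning the whole image."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-epsilon, epsilon, size=shape).astype(np.float32)

def poison_sample(image, trigger, target_gaze=(0.0, 0.0)):
    """Apply the imperceptible trigger and replace the regression label."""
    poisoned = np.clip(image.astype(np.float32) + trigger, 0, 255)
    return poisoned.astype(image.dtype), np.asarray(target_gaze, dtype=np.float32)

def poison_dataset(images, gazes, poison_rate=0.1, seed=0):
    """BadNets-style poisoning: modify a fraction of training pairs (x, y)."""
    rng = np.random.default_rng(seed)
    trigger = make_trigger(images[0].shape, seed=seed)
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    images, gazes = images.copy(), gazes.copy()
    for i in idx:
        images[i], gazes[i] = poison_sample(images[i], trigger)
    return images, gazes, trigger

Training a gaze-estimation DRM on the returned data in the usual way would then yield a model that predicts normally on clean images but outputs the attacker-chosen gaze coordinates whenever the perturbation filter is added to an input.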