Gaze estimation systems powered by deep neural networks are commonly used in sensitive applications such as driver assistance and human-computer interaction. While backdoor attacks have been widely studied for classification tasks, the vulnerability of regression networks such as gaze estimators to these attacks remains underexplored. This research investigates the effectiveness of full-image backdoor attacks on appearance-based gaze estimation models. Specifically, the study explores dirty-label attacks with two types of global backdoor triggers: a spatial-domain sinusoidal pattern and a randomized frequency-domain perturbation. Experimental results on the MPIIFaceGaze dataset demonstrate that both triggers can reliably induce malicious outputs while preserving high accuracy on clean data, with the frequency-domain trigger offering superior stealth. These findings highlight a significant vulnerability in deep regression models, emphasizing the need for defensive mechanisms in real-world gaze estimation systems.
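
As a concrete illustration of the two trigger types, the sketch below shows one plausible way to generate a full-image spatial-domain sinusoidal trigger and a fixed randomized frequency-domain trigger using NumPy; the amplitude, frequency, perturbation strength, and random seed are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def add_sinusoidal_trigger(image, amplitude=8.0, frequency=6.0):
    """Overlay a full-image horizontal sinusoid (illustrative amplitude/frequency)."""
    w = image.shape[1]
    cols = np.arange(w, dtype=np.float32)
    pattern = amplitude * np.sin(2.0 * np.pi * frequency * cols / w)  # shape (w,)
    pattern = pattern.reshape(1, w, *([1] * (image.ndim - 2)))        # broadcast over rows (and channels)
    return np.clip(image.astype(np.float32) + pattern, 0, 255).astype(np.uint8)

def add_frequency_trigger(image, strength=0.03, seed=0):
    """Apply a fixed pseudo-random perturbation in the 2-D Fourier domain (illustrative strength/seed)."""
    rng = np.random.default_rng(seed)
    img = image.astype(np.float32)
    if img.ndim == 2:
        img = img[..., None]
    # One shared random mask so every poisoned image carries the identical trigger.
    mask = 1.0 + strength * rng.standard_normal(img.shape[:2])
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        spectrum = np.fft.fft2(img[..., c])
        out[..., c] = np.fft.ifft2(spectrum * mask).real
    out = np.clip(out, 0, 255).astype(np.uint8)
    return out[..., 0] if image.ndim == 2 else out
```

In a dirty-label poisoning setup such as the one studied here, a small fraction of training images would be modified with one of these triggers and relabeled with an attacker-chosen gaze direction before training.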