Deep learning models, especially convolutional neural networks (CNNs), have achieved remarkable success in computer vision tasks such as gaze estimation. However, their vulnerability to backdoor attacks poses significant security risks, particularly in safety-critical applications. This work investigates the susceptibility of 3D gaze regression models to BadNet-style backdoor attacks, in which an adversary poisons the training data with stealthy visual triggers to manipulate model predictions. Using the MPIIFaceGaze dataset and a modified ResNet-18 architecture, we systematically evaluate the impact of different trigger designs and poisoning rates on attack success and clean-data accuracy. Our results show that even a small fraction of poisoned training data can cause the model to output attacker-specified gaze directions whenever the trigger is present, while maintaining normal performance on clean inputs. These findings highlight the need for robust defenses and greater awareness of security vulnerabilities in regression-based deep learning systems.
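For concreteness, the sketch below illustrates the BadNet-style poisoning step described above: a small visual trigger is stamped onto a fraction of training images and their 3D gaze labels are replaced with an attacker-chosen target. The trigger size and location, the poisoning rate, and the target gaze vector are illustrative assumptions, not the exact settings used in our experiments.

```python
# Minimal sketch of BadNet-style data poisoning for a 3D gaze regression dataset.
# Trigger pattern, poison_rate, and target_gaze are illustrative assumptions.
import numpy as np

def apply_trigger(image, size=9):
    """Stamp a small white square in the bottom-right corner of an HWC uint8 image."""
    img = image.copy()
    img[-size:, -size:, :] = 255
    return img

def poison_dataset(images, gazes, poison_rate=0.05,
                   target_gaze=(0.0, 0.0, -1.0), seed=0):
    """Poison a fraction of (image, 3D gaze) pairs with the trigger and target label.

    images : sequence of HWC uint8 arrays
    gazes  : array-like of shape (N, 3), 3D gaze direction vectors
    Returns the modified images, modified labels, and the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    images = list(images)
    gazes = np.asarray(gazes, dtype=np.float32).copy()
    for i in idx:
        images[i] = apply_trigger(images[i])   # embed the visual trigger
        gazes[i] = target_gaze                 # attacker-specified gaze direction
    return images, gazes, idx
```

A model trained on the resulting mixture behaves normally on clean inputs but regresses toward the target gaze whenever the trigger appears at test time.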