Backdoor Attacks on 3D Gaze Estimation Models
When BadNets Meet Your Eyes: Data Poisoning in Deep Regression
E. L.S. (TU Delft - Electrical Engineering, Mathematics and Computer Science)
G. Lan – Mentor (TU Delft - Embedded Systems)
L. Du – Mentor (TU Delft - Embedded Systems)
Abstract
Deep learning models, especially convolutional neural networks (CNNs), have achieved remarkable success in computer vision tasks such as gaze estimation. However, their vulnerability to backdoor attacks poses significant security risks, particularly in safety-critical applications. This work investigates the susceptibility of 3D gaze regression models to BadNet-style backdoor attacks, in which an adversary poisons the training data with stealthy visual triggers to manipulate model predictions. Using the MPIIFaceGaze dataset and a modified ResNet-18 architecture, we systematically evaluate the impact of different trigger designs and poisoning rates on attack success and model accuracy. Our results show that even a small fraction of poisoned data can cause the model to output attacker-specified gaze directions when the trigger is present, while maintaining normal performance on clean inputs. These findings highlight the need for robust defenses and greater awareness of security vulnerabilities in regression-based deep learning systems.
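To make the attack setting concrete, the sketch below illustrates BadNets-style data poisoning for a gaze regression dataset: a small trigger patch is stamped into a fraction of the training images, and those samples are relabeled with an attacker-chosen gaze direction. This is a minimal illustration under assumed conventions (images as arrays in [0, 1], 2D pitch/yaw labels, a bottom-right square trigger, and the function and parameter names are hypothetical), not the exact pipeline used in the thesis.

```python
import numpy as np


def poison_dataset(images, gazes, poison_rate=0.05,
                   trigger_size=8, trigger_value=1.0,
                   target_gaze=(0.0, 0.0), seed=0):
    """BadNets-style poisoning sketch (illustrative, not the thesis pipeline).

    images: float array of shape (N, H, W, C), pixel values in [0, 1]
    gazes:  float array of shape (N, 2), e.g. (pitch, yaw) in radians
    """
    rng = np.random.default_rng(seed)
    images, gazes = images.copy(), gazes.copy()

    # Choose a random subset of samples to poison.
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp a solid square trigger into the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:, :] = trigger_value

    # Relabel the poisoned samples with the attacker-specified gaze direction.
    gazes[idx] = np.asarray(target_gaze, dtype=gazes.dtype)
    return images, gazes, idx


if __name__ == "__main__":
    # Toy stand-in for MPIIFaceGaze-style data: random face crops with 2D gaze labels.
    imgs = np.random.rand(1000, 224, 224, 3).astype(np.float32)
    labels = np.random.uniform(-0.5, 0.5, size=(1000, 2)).astype(np.float32)
    p_imgs, p_labels, idx = poison_dataset(imgs, labels, poison_rate=0.05)
    print(f"Poisoned {len(idx)} of {len(imgs)} samples")
```

A model trained on such a mixture learns the clean gaze mapping on unmodified images while associating the trigger pattern with the target direction, which is why clean-data accuracy can remain largely unaffected even at low poisoning rates.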