Adversarial Attack and Training on Deep Learning-based Gaze Estimation


Abstract

Recently, while gaze estimation has gained substantial improvement through deep learning models, research has shown that neural networks are vulnerable to adversarial attacks. Although numerous studies have been conducted on adversarial training, there are little to no studies on adversarial training for gaze estimation. Therefore, the objective of this project is to investigate how adversarial samples affect gaze estimation performance and how adversarial training alleviates the effect of these adversarial attacks. For the projected gradient descent (PGD) adversarial attack, the results show that the bound on the final noise, the step size and number of steps toward the gradient, and the random noise initialization are all able to worsen the baseline performance to varying degrees. Furthermore, the results reveal that while PGD adversarial training can defend against certain adversarial attacks, its performance does not converge to the baseline. In general, the performance of adversarial training on gaze estimation could be influenced by data augmentation, the loss function, model capacity, and the type of adversarial training.
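
To illustrate the attack parameters mentioned above (the noise bound, the step size, the number of gradient steps, and the random initialization), the following is a minimal PGD sketch, not the project's actual implementation. It assumes a PyTorch setup with a gaze regression model and a differentiable loss such as the angular or L1 error; the function and argument names (pgd_attack, eps, alpha, steps, random_start) are illustrative.

```python
import torch

def pgd_attack(model, images, targets, loss_fn,
               eps=0.03, alpha=0.007, steps=10, random_start=True):
    """Illustrative PGD attack on a gaze regression model.

    eps          -- bound on the final perturbation (L-infinity ball)
    alpha        -- step size toward the gradient sign
    steps        -- number of gradient steps
    random_start -- whether to initialize with random noise inside the eps-ball
    """
    adv = images.clone().detach()
    if random_start:
        # Random noise initialization inside the eps-ball
        adv = adv + torch.empty_like(adv).uniform_(-eps, eps)
        adv = torch.clamp(adv, 0.0, 1.0)

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), targets)
        grad = torch.autograd.grad(loss, adv)[0]
        # Step in the direction of the gradient sign, then project
        # back into the eps-ball around the clean images
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.clamp(adv, images - eps, images + eps)
        adv = torch.clamp(adv, 0.0, 1.0)

    return adv.detach()
```

In PGD adversarial training, such adversarial examples are typically generated on the fly for each mini-batch and used in place of (or alongside) the clean images when computing the training loss.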