Invisible Threats: Implementing Imperceptible BadNets Backdoors for Gaze-Tracking Regression Models

Bachelor Thesis (2024)
Author(s)

D.B. Bentsnijder (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

L. Du – Mentor (TU Delft - Embedded Systems)

G. Lan – Mentor (TU Delft - Embedded Systems)

Sicco Verwer – Graduation committee member (TU Delft - Algorithmics)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2024
Language
English
Graduation Date
26-06-2024
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Deep learning models have advanced gaze-tracking systems, but they have also introduced new vulnerabilities to backdoor attacks such as BadNets. A backdoored model behaves normally on regular inputs but produces malicious outputs whenever an attacker-chosen trigger is present in the input, posing a serious threat to the safety of deep learning applications. While backdoor attacks on classification models have been extensively studied, their application to the deep regression models (DRMs) used in gaze tracking remains under-explored. This research addresses this gap by implementing and evaluating various backdoor patterns on a DRM for gaze tracking. The study focuses on creating backdoors that are imperceptible to human observers while preserving the model's normal performance on clean data. Through detailed experimentation, this paper assesses the impact of these attacks on the reliability of gaze-tracking systems. The results show that adding a perturbed filter over the image yields clean-data performance comparable to the benign model while maximizing imperceptibility. This finding highlights the need for robust defense mechanisms, such as model fine-tuning, against such threats in gaze-tracking applications.
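To illustrate the general idea described above, the following is a minimal, hypothetical sketch of BadNets-style data poisoning for a gaze-estimation regression dataset, assuming the trigger is a low-amplitude additive perturbation filter spread over the whole image. The function names, trigger parameters, and target shift below are illustrative assumptions and do not reflect the thesis implementation.

# Hypothetical sketch of BadNets-style poisoning for a gaze regression dataset.
# All names (make_perturbation_trigger, poison_dataset, target_shift) are
# illustrative assumptions, not the thesis code.
import numpy as np

def make_perturbation_trigger(shape, amplitude=4.0, seed=0):
    """Low-amplitude random perturbation filter covering the whole image.

    Keeping the amplitude within a few intensity levels (on a 0-255 scale)
    is what makes the trigger hard for a human observer to notice.
    """
    rng = np.random.default_rng(seed)
    return rng.uniform(-amplitude, amplitude, size=shape).astype(np.float32)

def poison_dataset(images, gazes, trigger, poison_rate=0.1,
                   target_shift=(0.3, 0.0), seed=0):
    """Add the trigger to a fraction of the images and shift their gaze
    labels toward an attacker-chosen offset (a regression target)."""
    images = images.astype(np.float32).copy()
    gazes = gazes.astype(np.float32).copy()
    rng = np.random.default_rng(seed)
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx] = np.clip(images[idx] + trigger, 0.0, 255.0)
    gazes[idx] += np.asarray(target_shift, dtype=np.float32)
    return images, gazes, idx

if __name__ == "__main__":
    # Toy data: 100 grayscale eye crops (36x60) with 2-D gaze angles.
    imgs = np.random.randint(0, 256, size=(100, 36, 60)).astype(np.float32)
    gaze = np.random.uniform(-0.5, 0.5, size=(100, 2)).astype(np.float32)
    trig = make_perturbation_trigger(imgs.shape[1:])
    p_imgs, p_gaze, idx = poison_dataset(imgs, gaze, trig)
    print(f"Poisoned {len(idx)} of {len(imgs)} samples")

In such a scheme, training on the partially poisoned dataset teaches the model to add the attacker-chosen offset to its gaze prediction whenever the perturbation is present, while clean inputs remain unaffected.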
