Manipulating Head Pose Estimation Models

Exploring Deep Regression Models’ Vulnerability to Full Image Backdoor Attacks

Bachelor Thesis (2025)
Author(s)

P. Gulyás (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Guohao Lan – Mentor (TU Delft - Embedded Systems)

L. Du – Mentor (TU Delft - Embedded Systems)

Georgios Smaragdakis – Graduation committee member (TU Delft - Cyber Security)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2025
Language
English
Graduation Date
27-06-2025
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Backdoor attacks manipulate the behaviour of deep neural networks through dataset poisoning, causing the models to produce specific outputs in the presence of a trigger, while behaving as expected otherwise. Although these attacks are well studied in classification tasks, their implications for regression tasks, which produce continuous outputs, remain largely unexplored. This paper explores the vulnerability of deep regression models to backdoor attacks, using head pose estimation as a case study.

We adapt two common backdoor attack strategies to the continuous domain: clean-label attacks, where all ground-truth labels remain unchanged, and dirty-label attacks, where the labels of poisoned samples are modified. In both cases, the attack target is defined semantically as a forward-facing head pose rather than a discrete class. To evaluate attack performance, we rely on the Average Angular Error and introduce two new metrics, Attack Success Rate and Poisoned Misclassification Rate, which capture the success of the backdoor and its real-world impact in the regression context.
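As an illustration of the two poisoning strategies and the evaluation setup described above, the Python sketch below poisons a head pose sample with a simple patch trigger and scores predictions against a forward-facing target pose. The patch trigger, the target of yaw = pitch = roll = 0°, and the thresholded success criterion are illustrative assumptions, not the exact configuration used in the thesis.

    import numpy as np

    # Assumed semantic target: forward-facing pose (yaw, pitch, roll in degrees).
    FORWARD_POSE = np.zeros(3, dtype=np.float32)

    def apply_trigger(image: np.ndarray, patch_size: int = 16, value: float = 255.0) -> np.ndarray:
        """Stamp a solid square patch in the bottom-right corner (illustrative trigger)."""
        poisoned = image.copy()
        poisoned[-patch_size:, -patch_size:, :] = value
        return poisoned

    def poison_dirty_label(image: np.ndarray, label: np.ndarray):
        """Dirty-label poisoning: add the trigger AND rewrite the label to the target pose."""
        return apply_trigger(image), FORWARD_POSE.copy()

    def poison_clean_label(image: np.ndarray, label: np.ndarray):
        """Clean-label poisoning: add the trigger but keep the ground-truth label unchanged."""
        return apply_trigger(image), label

    def average_angular_error(preds: np.ndarray, labels: np.ndarray) -> float:
        """Mean absolute error over yaw/pitch/roll in degrees (one common AAE formulation)."""
        return float(np.mean(np.abs(preds - labels)))

    def attack_success_rate(preds_on_triggered: np.ndarray, tol_deg: float = 10.0) -> float:
        """Fraction of triggered test samples predicted within tol_deg of the target pose
        on every axis (a plausible thresholded criterion; the thesis's definition may differ)."""
        worst_axis_error = np.max(np.abs(preds_on_triggered - FORWARD_POSE), axis=1)
        return float(np.mean(worst_axis_error <= tol_deg))

A dataset poisoned this way trains with a standard regression loss; the clean-label variant relies solely on the trigger-image correlation, while the dirty-label variant additionally pulls the labels of poisoned samples toward the target pose.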

Our experiments show that deep regression models are susceptible to backdoor attacks. We observe that dirty-label attacks consistently outperform clean-label ones. Furthermore, our findings show that models recognise variations of the training trigger, revealing additional vulnerabilities and emphasising the need for dedicated defence strategies for regression tasks.
