Evaluating differential privacy on language processing federated learning

Bachelor Thesis (2024)
Author(s)

Q.M.F. Van Opstal (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

J. Huang – Mentor (TU Delft - Data-Intensive Systems)

Y. Chen – Mentor (TU Delft - Data-Intensive Systems)

Marco Zúñiga Zamalloa – Graduation committee member (TU Delft - Networked Systems)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2024 Quinten Van Opstal
Publication Year
2024
Language
English
Graduation Date
02-02-2024
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Related content

A link to the associated GitHub project.

https://github.com/QuintenVanOpstal/OOD_Federated_Learning.git
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Federated learning offers many opportunities, especially because of its built-in privacy considerations. There is, however, one attack that might compromise the utility of federated learning: backdoor attacks [14]. Some defenses already exist, such as FLAME [13], but they are computationally expensive [14]. This paper evaluates a variant of differential privacy in which the Gaussian noise added to the aggregated model of the clipped updates is smaller than usual. This is often referred to as weak differential privacy, or weakDP. The paper evaluates weakDP with different parameters to determine whether it can serve as a defense for a language-processing federated learning classifier against a backdoor attack.
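A minimal sketch of the weakDP mechanism described in the abstract, assuming the server receives each client's model update as a flattened NumPy array. The function name weak_dp_aggregate and the parameter values are illustrative assumptions, not taken from the thesis or its code.

import numpy as np

def weak_dp_aggregate(client_updates, clip_norm=1.0, noise_sigma=0.001):
    """Server-side weakDP aggregation: clip each client's update to a
    fixed L2 norm, average the clipped updates, then add Gaussian noise
    to the aggregate."""
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale the update down only if its L2 norm exceeds the bound.
        clipped.append(update * min(1.0, clip_norm / max(norm, 1e-12)))
    aggregate = np.mean(clipped, axis=0)
    # weakDP: noise_sigma is deliberately smaller than what a formal
    # differential-privacy guarantee would require, trading privacy
    # strength for model utility.
    noise = np.random.normal(0.0, noise_sigma, size=aggregate.shape)
    return aggregate + noise

# Hypothetical usage: three clients, each submitting a flattened update.
updates = [np.random.randn(10) for _ in range(3)]
new_global_delta = weak_dp_aggregate(updates, clip_norm=1.0, noise_sigma=0.001)

The clipping bound limits how much any single (possibly backdoored) client can shift the global model, while the small noise term perturbs whatever influence remains; the thesis evaluates how well different settings of these two parameters suppress the backdoor.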

Files

Research_project_5_.pdf
(pdf | 0.804 MB)
License info not available