Evaluating differential privacy on language processing federated learning
Q.M.F. Van Opstal (TU Delft - Electrical Engineering, Mathematics and Computer Science)
J. Huang – Mentor (TU Delft - Data-Intensive Systems)
Y. Chen – Mentor (TU Delft - Data-Intensive Systems)
Marco Zúñiga Zuñiga Zamalloa – Graduation committee member (TU Delft - Networked Systems)
A link to the associated GitHub project: https://github.com/QuintenVanOpstal/OOD_Federated_Learning.git
Abstract
Federated learning offers many opportunities, in particular because of its built-in privacy considerations. There is, however, one class of attack that can compromise the utility of federated learning: backdoor attacks [14]. Several defenses already exist, such as FLAME [13], but they are computationally expensive [14]. This paper evaluates a variant of differential privacy in which the Gaussian noise added to the aggregated model of the clipped updates is smaller than usual; this is often referred to as weak differential privacy, or weakDP. The paper evaluates weakDP with different parameters to determine whether weakDP can serve as a defense for a language processing federated learning classifier against a backdoor attack.
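For illustration, the sketch below shows how such a weakDP aggregation step could look on the server side, assuming the common clip-then-add-noise scheme: each client update is clipped to an L2 norm bound, the clipped updates are averaged, and a small amount of Gaussian noise is added to the result. The function name, clip norm, and noise level are illustrative assumptions, not taken from the paper or the linked repository.

    import numpy as np

    def weak_dp_aggregate(global_weights, client_weights, clip_norm=1.0, noise_std=0.001):
        """Aggregate client models with norm clipping and (weak) Gaussian noise.

        global_weights : 1-D array with the current global model parameters
        client_weights : list of 1-D arrays, one per client, same shape as global_weights
        clip_norm      : L2 bound applied to each client's update (illustrative value)
        noise_std      : std of the Gaussian noise added after averaging; for weakDP
                         this is deliberately smaller than a value that would give a
                         formal privacy guarantee (illustrative value)
        """
        clipped_updates = []
        for w in client_weights:
            update = w - global_weights                   # client update (delta)
            norm = np.linalg.norm(update)
            scale = min(1.0, clip_norm / (norm + 1e-12))  # clip to an L2 ball of radius clip_norm
            clipped_updates.append(update * scale)

        mean_update = np.mean(clipped_updates, axis=0)    # average of the clipped deltas
        noise = np.random.normal(0.0, noise_std, size=mean_update.shape)
        return global_weights + mean_update + noise       # noisy aggregated model

    # Toy usage: three clients, a 5-parameter model
    global_w = np.zeros(5)
    clients = [global_w + np.random.normal(0, 0.1, 5) for _ in range(3)]
    new_global_w = weak_dp_aggregate(global_w, clients, clip_norm=1.0, noise_std=0.001)

The key design choice evaluated in the paper is the size of noise_std: large enough to disturb a backdoor embedded in individual updates, but small enough to preserve the utility of the aggregated language model.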