Title: Evaluating differential privacy on language processing federated learning
Author: Van Opstal, Quinten (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Huang, J. (mentor); Chen, Lydia Y. (mentor); Zuniga, Marco (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science and Engineering
Project: CSE3000 Research Project
Date: 2024-02-02
Abstract: Federated learning offers many opportunities, especially through its built-in privacy considerations. One attack, however, may compromise its utility: the backdoor attack [14]. Existing defenses, such as FLAME [13], are computationally expensive [14]. This paper evaluates a version of differential privacy in which the Gaussian noise added to the aggregated model of the clipped updates is smaller than usual, often referred to as weak differential privacy, or weakDP. This paper evaluates weakDP with different parameters to determine whether it can serve as a defense for a language processing federated learning classifier against a backdoor attack.
Subjects: Federated learning; Differential privacy; Non-IID
To reference this document use: http://resolver.tudelft.nl/uuid:7ed823a0-5a2c-4439-8fd0-60e02235f24e
Bibliographical note: Link to the associated GitHub project: https://github.com/QuintenVanOpstal/OOD_Federated_Learning.git
Part of collection: Student theses
Document type: bachelor thesis
Rights: © 2024 Quinten Van Opstal
Files: PDF, research_project_5_.pdf (822.95 KB)
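The weakDP mechanism described in the abstract (clip each client update, aggregate, then add a reduced amount of Gaussian noise) can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation: the function name, parameter names, and default noise scale are assumptions introduced here for clarity.

```python
import numpy as np

def weak_dp_aggregate(client_updates, clip_norm=1.0, noise_std=0.01, seed=None):
    """Aggregate client updates with L2 clipping and (weak) Gaussian noise.

    Each update is clipped to L2 norm `clip_norm` and the clipped updates
    are averaged; Gaussian noise with standard deviation `noise_std` is
    then added to the aggregate. The "weak" DP variant uses a smaller
    `noise_std` than standard differential privacy would require.
    NOTE: illustrative sketch only; not the thesis's actual code.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for update in client_updates:
        update = np.asarray(update, dtype=float)
        norm = np.linalg.norm(update)
        # Scale down only updates whose norm exceeds the clipping bound.
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append(update * scale)
    aggregated = np.mean(clipped, axis=0)
    return aggregated + rng.normal(0.0, noise_std, size=aggregated.shape)

# Example with three toy client updates (noise disabled for determinism):
updates = [np.array([3.0, 4.0]), np.array([0.5, 0.5]), np.array([-1.0, 2.0])]
aggregate = weak_dp_aggregate(updates, clip_norm=1.0, noise_std=0.0)
```

Clipping bounds each client's influence on the aggregate (which is what limits a backdoored update), while the smaller-than-usual noise trades formal privacy guarantees for better utility.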