Approximate Gradient Inversion Attack on Federated Learning

Author: Xu, Jin (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Chen, Lydia Y. (mentor); Decouchant, Jérémie (mentor); Liang, K. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science
Date: 2022-06-27

Abstract:
Federated learning is a private-by-design distributed learning paradigm in which clients train local models on their own data before a central server aggregates their local updates to compute a global model. Depending on the aggregation method used, the local updates are either the gradients or the weights of the local learning models. Unfortunately, recent reconstruction attacks apply a gradient inversion optimization to the gradient update of a single mini-batch to reconstruct the private data used by clients during training. Because state-of-the-art reconstruction attacks focus solely on a single update, they overlook realistic adversarial scenarios, such as observation across multiple updates and updates trained on multiple mini-batches. A few studies consider a more challenging adversarial scenario in which only model updates are observable, and resort to computationally expensive simulation to untangle the underlying samples of each local step. In this work, we propose AGIC, an Approximate Gradient Inversion attack that efficiently and effectively reconstructs images from either model or gradient updates, and across multiple epochs. In a nutshell, AGIC (i) approximates the gradient updates of the used training samples from model updates, (ii) leverages gradient/model updates collected from multiple epochs, and (iii) assigns increasing weights to layers with respect to the neural network structure to improve reconstruction quality.
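To make step (i) concrete, here is a minimal sketch of how an aggregate gradient can be approximated from two consecutive model snapshots, assuming the client runs plain SGD with a known learning rate and number of local steps. All names below are illustrative; this is not AGIC's actual procedure or API, which the thesis describes in full.

```python
import numpy as np

def approximate_gradient(w_before, w_after, lr, local_steps):
    """Approximate the mean local gradient from two model snapshots.

    Under plain SGD, w_after ≈ w_before - lr * sum of per-step gradients,
    so the mean gradient over the local steps can be recovered as
    (w_before - w_after) / (lr * local_steps).
    Hypothetical helper for illustration only.
    """
    return [(wb - wa) / (lr * local_steps)
            for wb, wa in zip(w_before, w_after)]

# Toy example: one weight tensor trained for 4 local SGD steps.
rng = np.random.default_rng(0)
w0 = [rng.normal(size=(3, 3))]
lr, steps = 0.1, 4
true_grads = [rng.normal(size=(3, 3)) for _ in range(steps)]

w = [w0[0].copy()]
for g in true_grads:          # simulate the client's local SGD steps
    w[0] -= lr * g

approx = approximate_gradient(w0, w, lr, steps)
mean_true = sum(true_grads) / steps
assert np.allclose(approx[0], mean_true)  # recovers the mean gradient
```

In realistic FedAvg settings the recovery is only approximate (mini-batches differ across steps and the intermediate models drift), which is why AGIC treats this as an approximation rather than an exact inversion.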
Our experimental results show that AGIC increases the peak signal-to-noise ratio (PSNR) by up to 50% compared to two representative state-of-the-art gradient inversion attacks. Furthermore, AGIC is faster than the simulation-based attack, e.g., it is 5x faster when attacking FedAvg with 8 local steps between model updates.

Subject: Reconstruction Attack; Federated Learning; Federated Averaging
To reference this document use: http://resolver.tudelft.nl/uuid:cb6408f5-22b6-46c9-b191-bcc6c5b1de4b
Part of collection: Student theses
Document type: master thesis
Rights: © 2022 Jin Xu
Files: Jin_Xu_Master_Thesis.pdf (PDF, 2.69 MB)