Improving privacy of Federated Learning Generative Adversarial Networks using Intel SGX
W. Jehee (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Kaitai Liang – Mentor (TU Delft - Cyber Security)
Julián Urbano – Graduation committee member (TU Delft - Multimedia Computing)
R. Wang – Mentor (TU Delft - Cyber Security)
Abstract
Federated learning (FL), although a major privacy improvement over centralized learning, is still vulnerable to privacy leaks. The research presented in this paper provides an analysis of the privacy threats to FL Generative Adversarial Networks (GANs). Furthermore, an implementation is provided that better protects the participants' data with Trusted Execution Environments (TEEs), using Intel Software Guard Extensions (SGX). Lastly, the viability of its use in practice is evaluated and discussed. The results indicate that this approach protects the data without affecting the predictive capabilities of the model, at a noticeable but manageable cost in training duration.
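As context for the approach summarized above, the sketch below illustrates one way the aggregation step of a federated GAN could be isolated so that individual participants' updates are only combined inside a trusted environment. It is a minimal toy example in plain NumPy: the `client_update` and `aggregate_in_enclave` functions are hypothetical stand-ins, and the paper's actual SGX-based implementation is not reproduced here.

```python
import numpy as np

def client_update(global_weights, local_data, lr=0.01):
    """Hypothetical local training step: each participant nudges the
    shared weights toward its own data (stand-in for real GAN training)."""
    gradient = np.mean(local_data, axis=0) - global_weights
    return global_weights + lr * gradient

def aggregate_in_enclave(client_weights):
    """Placeholder for the trusted aggregation step. In an SGX-based design
    this federated averaging would run inside the enclave, so individual
    client updates are not exposed to the untrusted host in plaintext."""
    return np.mean(np.stack(client_weights), axis=0)

# Toy federated rounds with three simulated participants.
rng = np.random.default_rng(0)
global_weights = np.zeros(4)
clients = [rng.normal(loc=i, size=(32, 4)) for i in range(3)]

for _ in range(5):  # five federated rounds
    updates = [client_update(global_weights, data) for data in clients]
    global_weights = aggregate_in_enclave(updates)

print(global_weights)
```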