Tailoring Attacks To Federated Continual Learning Models

Bachelor Thesis (2023)
Author(s)

E.V. Trinh (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Bart Cox – Mentor (TU Delft - Data-Intensive Systems)

J.E.A.P. Decouchant – Mentor (TU Delft - Data-Intensive Systems)

Qing Wang – Graduation committee member (TU Delft - Embedded Systems)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2023 Eames Trinh
Publication Year
2023
Language
English
Graduation Date
30-06-2023
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Federated learning enables training machine learning models on decentralized data sources without centrally aggregating sensitive information. Continual learning, on the other hand, focuses on learning and adapting to new tasks over time while avoiding catastrophic forgetting of previously acquired knowledge. Federated Continual Learning (FCL) addresses this challenge within the framework of federated learning. This thesis investigates how FCL can be made vulnerable to Byzantine attacks (from unpredictable or malicious nodes), which aim to manipulate or corrupt the training process and thereby degrade model performance. We adapt and evaluate four existing attacks from traditional federated learning in the FCL setting. Based on the insights gained, we propose three attacks tailored to FCL. We further introduce a novel attack, "Incremental Forgetting", which specifically targets the incremental knowledge retention aspect of FCL. Our experimental evaluation against various FCL algorithms shows that tailoring attacks to FCL yields varying degrees of performance benefit, while the novel attack additionally shows evidence of being more practical against real-world systems. This research contributes to the development of secure and resilient FCL systems by informing better defenses against such attacks in the federated learning domain.
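To make the attack surface described above concrete, the following is a minimal sketch (not taken from the thesis) of how a single Byzantine client can skew plain federated averaging. The sign-flipping strategy, the helper names, and the toy model here are illustrative assumptions, not the thesis's tailored FCL attacks.

    # Illustrative sketch, assuming unweighted FedAvg and a sign-flipping
    # Byzantine client; all function names here are hypothetical.
    import numpy as np

    def local_update(model, rng):
        """Honest client: simulate training with a small gradient-like step."""
        return model + 0.01 * rng.standard_normal(model.shape)

    def byzantine_update(model, scale=10.0):
        """Malicious client: push the model in the opposite direction,
        amplified so it dominates the average (sign-flipping attack)."""
        return model - scale * model

    def fedavg(updates):
        """Server: unweighted federated averaging over client updates."""
        return np.mean(updates, axis=0)

    rng = np.random.default_rng(0)
    global_model = rng.standard_normal(4)

    honest = [local_update(global_model, rng) for _ in range(4)]
    poisoned = honest + [byzantine_update(global_model)]

    print("benign aggregate:  ", fedavg(honest))
    print("poisoned aggregate:", fedavg(poisoned))

Because the server averages blindly, one amplified malicious update shifts the aggregate far from the honest consensus; defenses and the FCL-specific attacks studied in the thesis build on this basic vulnerability.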
