An Exploratory Analysis on Users' Contributions in Federated Learning

Conference Paper (2020)
Author(s)

J. Huang (TU Delft - Data-Intensive Systems)

Rania Talbi (INSA Lyon)

Zilong Zhao (TU Delft - Data-Intensive Systems)

Sara Boucchenak (INSA Lyon)

Lydia Y. Chen (TU Delft - Data-Intensive Systems)

Stefanie Roos (TU Delft - Data-Intensive Systems)

Research Group
Data-Intensive Systems
Copyright
© 2020 J. Huang, Rania Talbi, Z. Zhao, Sara Boucchenak, Lydia Y. Chen, S. Roos
DOI
https://doi.org/10.1109/TPS-ISA50397.2020.00014
Publication Year
2020
Language
English
Pages (from-to)
20-29
ISBN (electronic)
9781728185439
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Federated Learning is an emerging distributed collaborative learning paradigm adopted by many of today's applications, e.g., keyboard prediction and object recognition. Its core principle is to learn from large amounts of user data while preserving data privacy by design: collaborating users share only their machine learning models and keep their data local. The main challenge for such systems is to provide incentives for users to contribute high-quality models trained on their local data. In this paper, we aim to answer how well incentive mechanisms recognize (in)accurate local models from honest and malicious users, and how they perceive the impact of those models on the accuracy of federated learning systems. We first present a thorough survey from two contrasting perspectives: incentive mechanisms that measure the contribution of local models from honest users, and malicious users who deliberately degrade the overall model. We then conduct simulation experiments to empirically demonstrate whether existing contribution measurement schemes can disclose low-quality models from malicious users. Our results show a clear tradeoff among measurement schemes between computational efficiency and effectiveness in distilling the impact of malicious participants. We conclude the paper by discussing research directions for designing resilient contribution incentives.
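To illustrate the setting the abstract describes, the following is a minimal sketch of federated averaging combined with a leave-one-out contribution measure, one common way to score users' local models against a validation set. All function names, the toy linear classifier, and the example data are illustrative assumptions, not the paper's actual schemes or experiments.

```python
# Hedged sketch: FedAvg over parameter vectors plus a leave-one-out
# contribution score. Everything here is a toy illustration, not the
# measurement schemes evaluated in the paper.
from typing import List, Tuple

def fed_avg(models: List[List[float]]) -> List[float]:
    """Average the users' local parameter vectors with equal weights."""
    n = len(models)
    return [sum(params) / n for params in zip(*models)]

def accuracy(model: List[float], data: List[Tuple[List[float], int]]) -> float:
    """Toy linear classifier: predict the sign of the dot product."""
    correct = 0
    for x, y in data:
        score = sum(w * xi for w, xi in zip(model, x))
        pred = 1 if score >= 0 else -1
        correct += int(pred == y)
    return correct / len(data)

def leave_one_out_contributions(models: List[List[float]],
                                val_data: List[Tuple[List[float], int]]
                                ) -> List[float]:
    """Contribution of user i = accuracy change when model i is excluded.

    A malicious model that degrades the global model yields a negative
    contribution, since removing it improves validation accuracy.
    """
    base = accuracy(fed_avg(models), val_data)
    contribs = []
    for i in range(len(models)):
        rest = models[:i] + models[i + 1:]
        contribs.append(base - accuracy(fed_avg(rest), val_data))
    return contribs

# Illustrative run: two honest users and one user submitting a
# deliberately inverted model (assumed data, for demonstration only).
honest = [[1.0, 1.0], [0.9, 1.1]]
malicious = [[-5.0, -5.0]]
val = [([1, 1], 1), ([-1, -1], -1), ([2, 1], 1), ([-1, -2], -1)]
scores = leave_one_out_contributions(honest + malicious, val)
```

In this toy run the malicious user's score is the only negative one, which is exactly the behavior a resilient contribution measure should exhibit; note, however, that leave-one-out requires one extra evaluation per user, hinting at the efficiency/effectiveness tradeoff the abstract mentions.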
