Performance comparison of different federated learning aggregation algorithms

How does the performance of different federated learning aggregation algorithms compare to each other?


Abstract

Federated learning enables machine learning models to be trained under privacy constraints, without sharing data between devices. A model is trained locally on each device that holds data, and these local models are then combined through an aggregation algorithm, so the raw data never leaves the device. Federated learning is currently an active research area, and considerable effort has gone into designing accurate aggregation algorithms. The original algorithm is FedAvg; many alternatives have since been introduced. In this paper, I compare the performance of five aggregation algorithms: FedAvg, FedProx, FedYogi, FedMedian, and q-FedAvg. The algorithms are compared on two data sets, MNIST and a kinase inhibition data set, as well as across different data distributions and numbers of clients. The experiments indicate that, among these five algorithms, FedYogi achieves the best performance, both in final accuracy and in convergence rate.
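To make the aggregation step concrete, the following is a minimal sketch of FedAvg-style server-side averaging, the baseline the other four algorithms build on. The helper name and data layout are illustrative assumptions, not code from this paper: each client's model is represented as a list of NumPy parameter arrays, and the server averages them weighted by each client's data size.

```python
# Illustrative FedAvg-style aggregation sketch (hypothetical helper,
# not the paper's implementation).
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Return the weighted average of per-client parameter lists.

    client_weights: list of models, each a list of np.ndarray layers.
    client_sizes:   number of training samples held by each client.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # Sum each client's layer, weighted by its share of the data.
        layer_avg = sum(
            w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

# Two toy clients with a single two-parameter "layer":
clients = [[np.array([1.0, 3.0])], [np.array([3.0, 5.0])]]
sizes = [10, 30]
print(fedavg_aggregate(clients, sizes))  # pulled toward the larger client
```

Variants such as FedMedian replace this weighted mean with a coordinate-wise median, while FedYogi applies an adaptive server-side optimizer to the averaged update.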