Federated Learning (FL) is a distributed learning approach in which multiple clients collaboratively train a model whilst maintaining data security and privacy. One significant challenge in FL that must be addressed is statistical heterogeneity within the data: data across different clients may not come from the same distribution, potentially leading to sub-optimal performance. To address this, we examine how insights gained from a generative model's latent space can mitigate these problems by adjusting the aggregation weight (influence) assigned to each client during training. We leverage information derived from a Variational Autoencoder (VAE) trained in a federated manner and propose a method to modify the aggregation weight of each client in FL. This method considers local discrepancies, arising from differences between each client's local latent space distribution and the global latent space distribution, together with the dataset size of each client. Experiments were conducted on the MNIST and Fashion-MNIST datasets. Our results indicate that our method enhances the model's performance by up to 6.76% in the best case, in terms of reducing the average test VAE loss and accelerating the convergence of the β-VAE in scenarios characterised by severe data imbalances among clients. However, it worsens performance when all clients have an equal level of imbalance. The source code for our research is available at https://github.com/FederatedRP2024Delft/Federated-Learning-PyTorch-Weight-Modification.
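
The abstract describes combining a per-client latent-space discrepancy with the local dataset size when computing aggregation weights. The sketch below is only an illustration of that general idea, not the paper's actual formulation: it assumes the discrepancy is measured as a KL divergence between diagonal-Gaussian latent statistics, and the function names, the mixing coefficient alpha, and the exponential down-weighting are all hypothetical choices made for this example.

```python
import numpy as np

def kl_divergence_gaussians(mu_local, var_local, mu_global, var_global):
    # KL(N(mu_local, var_local) || N(mu_global, var_global)) for diagonal
    # Gaussians, summed over latent dimensions.
    return 0.5 * np.sum(
        np.log(var_global / var_local)
        + (var_local + (mu_local - mu_global) ** 2) / var_global
        - 1.0
    )

def aggregation_weights(client_stats, global_mu, global_var, n_samples, alpha=0.5):
    """Illustrative mix of dataset-size weights and latent-discrepancy weights.

    client_stats: list of (mu, var) latent statistics per client (assumed inputs)
    global_mu, global_var: latent statistics of the global model (assumed inputs)
    n_samples: list of local dataset sizes
    alpha: hypothetical mixing coefficient between the two terms
    """
    # Standard FedAvg-style weighting by dataset size.
    size_w = np.array(n_samples, dtype=float)
    size_w /= size_w.sum()

    # Clients whose latent distribution deviates more from the global
    # distribution receive less influence (one possible down-weighting rule).
    divs = np.array([
        kl_divergence_gaussians(mu, var, global_mu, global_var)
        for mu, var in client_stats
    ])
    disc_w = np.exp(-divs)
    disc_w /= disc_w.sum()

    weights = alpha * size_w + (1.0 - alpha) * disc_w
    return weights / weights.sum()
```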