Training Diffusion Models with Federated Learning
Matthijs de Goede (Student TU Delft)
B.A. Cox (TU Delft - Data-Intensive Systems)
Jeremie Decouchant (TU Delft - Data-Intensive Systems)
Abstract
The training of diffusion-based models for image generation is predominantly controlled by a select few Big Tech companies, raising concerns about privacy, copyright, and data authority due to their lack of transparency regarding training data. To address this issue, we propose a federated diffusion model scheme that enables the independent and collaborative training of diffusion models without exposing local data. Our approach adapts the Federated Averaging (FedAvg) algorithm to train a Denoising Diffusion Probabilistic Model (DDPM). Through a novel utilization of the underlying UNet backbone, we achieve a significant reduction of up to 74% in the number of parameters exchanged during training, compared to the naive FedAvg approach, while maintaining image quality comparable to the centralized setting, as evaluated by the FID score.
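The core idea, averaging only a subset of the model's parameters across clients, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the dictionary-of-arrays model representation, and the choice of which keys are shared are all hypothetical; the paper's actual selection is derived from the UNet backbone structure.

```python
import numpy as np

def fedavg_partial(client_models, shared_keys, client_weights=None):
    """Average only the parameters named in `shared_keys` across clients.

    client_models: list of dicts mapping parameter name -> np.ndarray.
    shared_keys: parameter names exchanged with the server; all other
    parameters stay local, which reduces communication relative to
    naive FedAvg (the spirit of the paper's UNet-based reduction).
    """
    n = len(client_models)
    if client_weights is None:
        # Uniform weighting; FedAvg typically weights by local dataset size.
        client_weights = [1.0 / n] * n
    averaged = {}
    for key in shared_keys:
        averaged[key] = sum(w * m[key] for w, m in zip(client_weights, client_models))
    return averaged

# Toy example: two clients; only the "encoder" block is exchanged.
clients = [
    {"encoder": np.ones((2, 2)), "decoder": np.zeros((2, 2))},
    {"encoder": 3 * np.ones((2, 2)), "decoder": np.ones((2, 2))},
]
global_update = fedavg_partial(clients, shared_keys=["encoder"])
print(global_update["encoder"])  # averaged to 2.0; "decoder" never leaves the client
```

In this sketch the communication saving is simply the fraction of parameters excluded from `shared_keys`; the paper reports up to a 74% reduction using its UNet-derived partition.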