Federated Learning (FL) systems often suffer from high variability in the final model due to inconsistent training across distributed clients. This paper identifies the problem of high variance in models trained through FL and proposes a novel approach to mitigate it by scheduling simulations subject to precedence constraints. By carefully scheduling the execution of client tasks and parameter-server updates, we aim to reduce the variance of the final aggregated model. Through a series of experiments, we demonstrate that our proposed scheduling method significantly reduces model variance without drastically increasing simulation time. Additionally, we propose two algorithms for scheduling under precedence constraints, Ant Colony Optimisation and an Evolutionary Algorithm, to minimise the makespan of simulations.