Decentralized learning (DL) enables a set of nodes to train a model collaboratively without central coordination, offering benefits for privacy and scalability. However, DL struggles to train a high-accuracy model when the data distribution is non-independent and identically distributed (non-IID) and when the communication topology is static. To address these issues, we propose DissDL, a fully decentralized topology optimization algorithm for DL in which nodes select the peers they exchange models with based on local model dissimilarity. DissDL maintains a fixed in-degree while dynamically adapting the communication graph via gossip-based peer discovery and diversity-driven neighbor selection, improving robustness to data heterogeneity. Experiments on CIFAR-10 and FEMNIST show that DissDL achieves faster convergence, more stable learning (quantified by lower accuracy variance across nodes), and higher final accuracy than static topologies and state-of-the-art baselines. For example, on CIFAR-10, DissDL reaches the best accuracy achieved by its most competitive baseline using 1.34× fewer communication rounds and 1.34× less communication cost, while maintaining comparable overhead.
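To make the neighbor-selection idea concrete, the sketch below illustrates one possible round of dissimilarity-driven peer selection with a fixed in-degree k. It is a minimal illustration under assumed choices (cosine dissimilarity over flattened parameters, a gossip-discovered candidate set), not the paper's reference implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def flatten(model_params):
    """Concatenate a list of parameter arrays into a single vector."""
    return np.concatenate([p.ravel() for p in model_params])

def dissimilarity(local_vec, peer_vec):
    """Cosine dissimilarity between two flattened models (assumed metric)."""
    denom = np.linalg.norm(local_vec) * np.linalg.norm(peer_vec) + 1e-12
    return 1.0 - float(local_vec @ peer_vec) / denom

def select_in_neighbors(local_params, candidate_params, k):
    """Pick the k candidates whose models differ most from the local model.

    candidate_params: dict mapping peer id -> list of parameter arrays,
    e.g. gathered through gossip-based peer discovery.
    Returns the ids of the k most dissimilar peers (fixed in-degree).
    """
    local_vec = flatten(local_params)
    scores = {
        peer_id: dissimilarity(local_vec, flatten(params))
        for peer_id, params in candidate_params.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]
```

Selecting the most dissimilar peers each round keeps the in-degree constant while steering model exchanges toward nodes holding different data, which is the intuition behind the robustness to non-IID distributions described above.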