
B.A. Cox

19 records found

Decentralized Learning is becoming increasingly popular due to its ability to protect user privacy and scale across large distributed systems. However, when clients leave the network, either by choice or due to failure, their past contributions remain in the model. This raises pr ...
Decentralized Learning (DL) is a key tool for training machine learning models on sensitive, distributed data. However, peer-to-peer model exchange in DL systems exposes participants to privacy attacks. Existing defenses often degrade model utility, introduce communication overhe ...
Decentralized learning (DL) enables a set of nodes to train a model collaboratively without central coordination, offering benefits for privacy and scalability. However, DL struggles to train a high-accuracy model when the data distribution is non-independent and identically dist ...
Decentralized learning (DL) enables collaborative model training in a distributed fashion without a central server, increasing resilience but remaining vulnerable to the same adversarial model attacks as Federated Learning. In this paper, we test two def ...
Decentralized learning is a paradigm that enables machine learning in a distributed and decentralized manner. A common challenge in this setting is the presence of non-identically and independently distributed (non-IID) data across clients. Under such conditions, it has been show ...
Federated Learning has gained prominence in recent years, in no small part due to its ability to train Machine Learning models with data from users' devices whilst keeping this data private. Decentralized Federated Learning (DFL) is a branch of Federated Learning (FL) that deals ...
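
Several of the abstracts above hinge on non-IID data across clients. A common way to simulate this skew in experiments is to partition a labelled dataset with a Dirichlet distribution; the sketch below (stdlib only, all names illustrative and not taken from the papers above) samples Dirichlet proportions via normalised Gamma draws:

```python
import random

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients; smaller alpha => more label skew."""
    rng = random.Random(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in sorted(set(labels)):
        cls_idx = [i for i, y in enumerate(labels) if y == cls]
        rng.shuffle(cls_idx)
        # Sample Dir(alpha) proportions for this class as normalised Gamma draws.
        gammas = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        total = sum(gammas)
        props = [g / total for g in gammas]
        # Cut the class's indices at the cumulative proportions.
        cuts = [int(sum(props[:k + 1]) * len(cls_idx))
                for k in range(num_clients - 1)]
        for client, (a, b) in enumerate(zip([0] + cuts, cuts + [len(cls_idx)])):
            client_indices[client].extend(cls_idx[a:b])
    return client_indices

labels = [0, 1] * 50                       # toy 2-class dataset
parts = dirichlet_partition(labels, num_clients=4, alpha=0.1)
```

With a small `alpha` (e.g. 0.1), most clients end up holding samples from only one class, which is the pathological non-IID regime these works study.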

Improving the Accuracy of Federated Learning Simulations

Using Traces from Real-world Deployments to Enhance the Realism of Simulation Environments

Federated learning (FL) is a machine learning paradigm where private datasets are distributed among decentralized client devices and model updates are communicated and aggregated to train a shared global model. While providing privacy and scalability benefits, FL systems also fac ...
Federated Learning is a machine learning paradigm where the computational load for training the model on the server is distributed amongst a pool of clients who only exchange model parameters with the server. Simulation environments try to accurately model all the intricacies of ...
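
The parameter exchange described above is usually realised by FedAvg-style aggregation: the server averages client models weighted by their local dataset sizes. A minimal sketch, with models as flat parameter vectors (names illustrative, not from the papers above):

```python
def fedavg(client_models, client_sizes):
    """Weighted average of flat parameter vectors (weights = dataset sizes)."""
    total = sum(client_sizes)
    dim = len(client_models[0])
    return [sum(m[j] * size / total
                for m, size in zip(client_models, client_sizes))
            for j in range(dim)]

# Two clients; client 1 holds three times as much data as client 0.
avg = fedavg([[0.0, 2.0], [4.0, 2.0]], client_sizes=[1, 3])
# avg == [3.0, 2.0]
```

Size-weighting keeps the global model an unbiased average over all samples even when clients hold very different amounts of data.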

Fast Simulation of Federated and Decentralized Learning Algorithms

Scheduling Algorithms for Minimisation of Variability in Federated Learning Simulations

Federated Learning (FL) systems often suffer from high variability in the final model due to inconsistent training across distributed clients. This paper identifies the problem of high variance in models trained through FL and proposes a novel approach to mitigate this issue thro ...
In federated learning systems, a server maintains a global model trained by a set of clients based on their local datasets. Conventional synchronous FL systems are very sensitive to system heterogeneity since the server needs to wait for the slowest clients in each round. Asynchr ...
Federated Learning (FL) is widely favoured in the training of machine learning models due to its privacy-preserving and data diversity benefits. In this research paper, we investigate an extension of FL referred to as Personalized Federated Learning (PFL) for the purpose of train ...
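
The asynchronous scheme sketched in the abstract above must cope with stale updates from slow clients. One common remedy (in the spirit of FedAsync; constants and names here are illustrative) is to mix each late update into the global model with a weight that decays with its staleness:

```python
def async_update(global_model, client_model, staleness, base_alpha=0.6):
    """Mix a (possibly stale) client model into the global model."""
    alpha = base_alpha / (1.0 + staleness)   # polynomial staleness decay
    return [(1.0 - alpha) * g + alpha * c
            for g, c in zip(global_model, client_model)]

w = [0.0]
w = async_update(w, [1.0], staleness=0)   # fresh update: weight 0.6
w = async_update(w, [1.0], staleness=2)   # stale update: weight only 0.2
# w[0] is now roughly 0.68
```

Because the server never waits for stragglers, throughput stays high, while the decayed weight bounds how much an outdated model can drag the global model backwards.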

Natural Language Processing and Tabular Data sets in Federated Continual Learning

A usability study of FCL in domains beyond Image classification

Federated Continual Learning (FCL) is an emerging field with strong roots in image classification. However, limited research has been done on its potential in Natural Language Processing and tabular datasets. With recent developments in AI involving language models and the widespread ...

Training diffusion models with federated learning

A communication-efficient model for cross-silo federated image generation

The training of diffusion-based models for image generation is predominantly controlled by a select few Big Tech companies, raising concerns about privacy, copyright, and data authority due to the lack of transparency regarding training data. Hence, we propose a federated diffusi ...
Federated learning enables training machine learning models on decentralized data sources without centrally aggregating sensitive information. Continual learning, on the other hand, focuses on learning and adapting to new tasks over time while avoiding the catastrophic forgetting ...
In this paper we consider the Byzantine Reliable Broadcast problem on partially connected networks. We introduce a routing algorithm for networks with a known topology. We show that when this is combined with cryptographic signatures, we can use the routing algorithm ...
Increasing digitalisation of society due to technical advancement has increased the prevalence and size of cyber-physical systems. These systems require real-time, reliable control, which comes with its own challenges. These systems need reliable communication despite the presence of ...
Distributed systems are networks of nodes that depend on each other. However, any such network can contain multiple faulty nodes, which are either malfunctioning or malicious. Bracha's algorithm allows correct nodes in the network to agree on certain information while tolerating a ce ...
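
The fault tolerance in Bracha's algorithm comes from simple quorum thresholds over ECHO and READY messages, assuming n > 3f nodes of which at most f are Byzantine. A compact sketch of just the counting rules (transport and message signatures are abstracted away; names are illustrative):

```python
def bracha_step(n, f, echo_count, ready_count, sent_ready):
    """Return (send_ready, deliver) given current counts for one broadcast value."""
    send_ready = not sent_ready and (
        echo_count > (n + f) / 2       # a quorum of ECHOes for this value
        or ready_count >= f + 1        # amplification: f+1 READYs suffice
    )
    deliver = ready_count >= 2 * f + 1  # 2f+1 READYs: safe to deliver
    return send_ready, deliver

# n=4, f=1: three ECHOes trigger a READY; three READYs trigger delivery.
print(bracha_step(4, 1, echo_count=3, ready_count=0, sent_ready=False))
# -> (True, False)
```

The (n + f)/2 echo quorum ensures two correct nodes can never gather quorums for different values, while the f+1 READY amplification guarantees that if any correct node delivers, every correct node eventually does.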
In this research, we replaced Bracha's layer in the state-of-the-art Bracha-Dolev protocol, improving performance by decreasing the message complexity of the protocol running on top of a given network topology, so long as the requirements stated by Bracha and Dolev are ...
Discovering the topology of an unknown network is a fundamental problem in distributed systems, complicated by the proneness of such systems to Byzantine (i.e., arbitrary or malicious) failures. Over the past decades, several protocols were developed to ...