
J.E.A.P. Decouchant

46 records found

Apache Spark is a popular batch processing framework that is integrated into the ASML data analytics platform. However, having multiple users share the same resources creates fairness and efficiency problems in the scheduler, where some users may receive more resources than other ...

Exploring Beyond the Happy Path

Practical Automated Network-Level Fault Injection Testing of Service-Oriented Distributed Systems

Organisations are increasingly adopting Microservice and Service-Oriented Architectures, moving from monolithic applications to (service-oriented) distributed systems. By their nature, distributed systems are prone to partial failures, where a subset of processes fail while other ...

Fair Transaction Ordering on DAGs

Preventing MEV extraction without sacrificing practicality

Blockchain technologies enable the decentralized storage and verification of records, such as financial transactions.
Systems like Bitcoin and Ethereum see considerable usage and have market values on the order of tens of billions of dollars.
A recent evolution of blockc ...
Following the design of more efficient blockchain consensus algorithms, the execution layer has emerged as the new performance bottleneck of blockchains, especially under high contention. Current parallel execution frameworks either rely on optimistic concurrency control (OCC) or ...
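The abstract above mentions parallel execution frameworks built on optimistic concurrency control (OCC). As a rough, generic illustration of that idea (speculate in parallel, then validate read sets and re-execute on conflict), here is a minimal sketch; the transaction format and helper names are hypothetical and this is not the framework proposed in the work itself.

```python
# Minimal sketch of optimistic concurrency control (OCC) for parallel
# transaction execution. All names are illustrative.

from concurrent.futures import ThreadPoolExecutor

def execute_speculatively(tx, snapshot):
    """Run a transaction against a read-only snapshot, recording its
    read and write sets instead of mutating shared state."""
    read_set = {key: snapshot.get(key) for key in tx["reads"]}
    write_set = dict(tx["writes"])
    return {"tx": tx, "read_set": read_set, "write_set": write_set}

def commit_in_order(results, state):
    """Validate speculated transactions in sequence; re-execute any
    transaction whose reads were invalidated by an earlier commit."""
    for res in results:
        stale = any(state.get(k) != v for k, v in res["read_set"].items())
        if stale:  # conflict detected: fall back to re-execution
            res = execute_speculatively(res["tx"], state)
        state.update(res["write_set"])
    return state

state = {"a": 1, "b": 2}
txs = [
    {"reads": ["a"], "writes": {"a": 10}},
    {"reads": ["a", "b"], "writes": {"b": 20}},
]
with ThreadPoolExecutor() as pool:
    speculated = list(pool.map(lambda t: execute_speculatively(t, dict(state)), txs))
print(commit_in_order(speculated, state))  # {'a': 10, 'b': 20}
```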

Reliable Communication in Known Networks under the Hybrid Authentication Model

From Theoretical Guarantees to Real-World Deployments

Reliable communication algorithms have long assumed either a global authentication model backed by a public key infrastructure or peer-to-peer authentication using shared session keys between pairs of neighboring nodes. Real-life networks, however, do not settle ...
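To make the contrast concrete, the sketch below shows the two authentication primitives the abstract refers to: per-message digital signatures verifiable by anyone holding a public key (the global, PKI-backed model) versus MACs over a session key shared only by neighboring nodes (peer-to-peer authentication). It uses the third-party cryptography package, the message and key names are made up, and it is not the algorithm developed in the thesis.

```python
# Global authentication (signatures over a PKI) vs. peer-to-peer
# authentication (MAC over a shared session key). Illustrative only.

import hmac, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

msg = b"relay: node A reports value 42"

# Global model: anyone holding A's public key can verify A's message.
sk_a = Ed25519PrivateKey.generate()
pk_a = sk_a.public_key()
sig = sk_a.sign(msg)
try:
    pk_a.verify(sig, msg)
    globally_authentic = True
except InvalidSignature:
    globally_authentic = False

# Peer-to-peer model: only the neighbor sharing the session key with A
# can check the MAC; other nodes must trust relayed claims.
session_key_ab = b"shared-by-A-and-B-only"
tag = hmac.new(session_key_ab, msg, hashlib.sha256).digest()
locally_authentic = hmac.compare_digest(
    tag, hmac.new(session_key_ab, msg, hashlib.sha256).digest())

print(globally_authentic, locally_authentic)
```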
Decentralized learning (DL) enables a set of nodes to train a model collaboratively without central coordination, offering benefits for privacy and scalability. However, DL struggles to train a high-accuracy model when the data distribution is non-independent and identically dist ...
Decentralized Learning (DL) is a key tool for training machine learning models on sensitive, distributed data. However, peer-to-peer model exchange in DL systems exposes participants to privacy attacks. Existing defenses often degrade model utility, introduce communication overhe ...
Decentralized Learning is becoming increasingly popular due to its ability to protect user privacy and scale across large distributed systems. However, when clients leave the network, either by choice or due to failure, their past contributions remain in the model. This raises pr ...
Decentralized learning (DL) enables collaborative model training in a distributed fashion without a central server, increasing resilience while remaining vulnerable to the same adversarial model attacks that affect Federated Learning. In this paper, we test two def ...
Decentralized learning is a paradigm that enables machine learning in a distributed and decentralized manner. A common challenge in this setting is the presence of non-independent and identically distributed (non-IID) data across clients. Under such conditions, it has been show ...
Federated learning (FL) allows the collaborative training of a model while keeping data decentralized. However, FL has been shown to be vulnerable to poisoning attacks. Model poisoning, in particular, enables adversaries to manipulate their local updates, leading to a significant ...
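As a toy illustration of why the model poisoning mentioned above matters, the sketch below runs a single federated-averaging step and shows how one unfiltered malicious update drags the aggregate away from the honest consensus. The numbers and function names are invented for illustration; this is not the attack or defense evaluated in the work itself.

```python
# Toy federated-averaging step showing the effect of a poisoned update.

import numpy as np

def fed_avg(updates, weights=None):
    """Weighted average of client model updates (1-D parameter vectors)."""
    updates = np.stack(updates)
    weights = np.ones(len(updates)) if weights is None else np.asarray(weights)
    weights = weights / weights.sum()
    return (weights[:, None] * updates).sum(axis=0)

honest = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.1, 0.9])]
poisoned = np.array([50.0, -50.0])      # adversary scales its local update

print(fed_avg(honest))                  # close to [1, 1]
print(fed_avg(honest + [poisoned]))     # dragged far from [1, 1]
```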
Federated Learning (FL) is a distributed machine learning approach that enhances data privacy by training models across multiple devices or servers without centralizing raw data. Traditional FL frameworks, which rely on synchronous updates and homogeneous resources, face signific ...
This thesis explores the application of a modular execution environment, specifically utilizing the Move Virtual Machine (MoveVM), within a blockchain-agnostic framework. The study aims to demonstrate how this modular approach can enhance the execution capability of existing bloc ...

Fast Simulation of Federated and Decentralized Learning Algorithms

Scheduling Algorithms for Minimisation of Variability in Federated Learning Simulations

Federated Learning (FL) systems often suffer from high variability in the final model due to inconsistent training across distributed clients. This paper identifies the problem of high variance in models trained through FL and proposes a novel approach to mitigate this issue thro ...
Federated Learning (FL) has gained prominence in recent years, in no small part due to its ability to train Machine Learning models with data from users' devices whilst keeping this data private. Decentralized Federated Learning (DFL) is a branch of FL that deals ...
Distributed consensus algorithms are essential for maintaining data reliability and consistency across computer networks, ensuring that all nodes agree on a single state despite failures or malicious disruptions. Despite the critical role of Byzantine Fault Tolerant State Machine ...

Rollback protection in Damysus

Apollo & Artemis: providing rollback resistance in hybrid consensus protocols

Streamlined Byzantine Fault Tolerance (BFT) protocols have been developed to make view changes efficient. To further improve their performance, trusted components have been introduced to prevent equivocation within a protocol. However, these trusted components suffer from rollback ...
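To give a sense of how a trusted component prevents equivocation, the sketch below implements a toy monotonic counter that binds every attested message to a strictly increasing counter value, so a replica cannot produce two conflicting attestations for the same value; rolling back the counter's persistent state would reopen exactly that door. All names are hypothetical, verification via a shared MAC secret is a simplification (real systems would rely on attestation or signatures), and this is not the Damysus or Apollo/Artemis design.

```python
# Toy trusted monotonic counter in the spirit of the trusted components
# used by streamlined BFT protocols. Illustrative only.

import hmac, hashlib

class TrustedCounter:
    def __init__(self, secret: bytes):
        self._secret = secret   # held inside the trusted component
        self._counter = 0       # rolling this back re-enables equivocation

    def attest(self, message: bytes) -> tuple[int, bytes]:
        """Bind the message to the next counter value and return a MAC
        over (counter, message) that verifiers can check."""
        self._counter += 1
        payload = self._counter.to_bytes(8, "big") + message
        return self._counter, hmac.new(self._secret, payload, hashlib.sha256).digest()

def verify(secret: bytes, counter: int, message: bytes, tag: bytes) -> bool:
    payload = counter.to_bytes(8, "big") + message
    return hmac.compare_digest(tag, hmac.new(secret, payload, hashlib.sha256).digest())

secret = b"provisioned-into-the-trusted-component"
tc = TrustedCounter(secret)
c1, t1 = tc.attest(b"PREPARE block A at view 7")
c2, t2 = tc.attest(b"PREPARE block B at view 7")  # forced onto a new counter value
print(verify(secret, c1, b"PREPARE block A at view 7", t1), c1, c2)
```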

Improving the Accuracy of Federated Learning Simulations

Using Traces from Real-world Deployments to Enhance the Realism of Simulation Environments

Federated learning (FL) is a machine learning paradigm where private datasets are distributed among decentralized client devices and model updates are communicated and aggregated to train a shared global model. While providing privacy and scalability benefits, FL systems also fac ...
Federated Learning is a machine learning paradigm in which the computational load of model training is distributed across a pool of clients that exchange only model parameters with the server. Simulation environments try to accurately model all the intricacies of ...
Blockchain-based payment systems typically assume a synchronous communication network and a limited workload to confirm transactions within a bounded timeframe. These assumptions make such systems less effective in scenarios where reliable network access is not guaranteed.
Of ...