Concurrency Testing of PBFT

How do different exploration strategies perform for detecting concurrency bugs in PBFT?

Bachelor Thesis (2023)
Author(s)

M.A. Petrov (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Burcu Kulahcioglu Özkan – Mentor (TU Delft - Software Engineering)

Ege Berkay Gulcan – Mentor (TU Delft - Software Engineering)

J.A. Pouwelse – Graduation committee member (TU Delft - Data-Intensive Systems)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2023 Martin Petrov
Publication Year
2023
Language
English
Graduation Date
30-06-2023
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Consensus algorithms, and distributed systems in general, are vulnerable to concurrency bugs due to non-determinism. Such bugs are hard to detect because exposing them requires testing many different interleavings, and even then there is no guarantee of finding one.

Controlled concurrency testing is a proposed solution to this problem. Its main strength is that it takes control of the order in which threads are executed by means of a controlled scheduler.
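To make the idea concrete, the following minimal sketch (in Python, for illustration only; Coyote itself is a .NET framework and its API looks different) shows a controlled scheduler that decides, at every step, which enabled process runs next. Because every non-deterministic choice goes through the scheduler, an interleaving is fully determined by the random seed and can be replayed from the recorded trace. All names below are hypothetical.

import random

class RandomWalkScheduler:
    """Toy controlled scheduler: at every step it picks which enabled
    process runs next, so an interleaving is fully determined by the seed."""

    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.trace = []  # recorded choices, enough to replay the schedule

    def pick(self, enabled):
        choice = self.rng.choice(sorted(enabled))
        self.trace.append(choice)
        return choice

def run(processes, scheduler, max_steps=1000):
    """Drive a set of processes under the scheduler's control.
    Each process is a generator that yields after every atomic step."""
    enabled = dict(processes)   # pid -> generator
    for _ in range(max_steps):
        if not enabled:
            break
        pid = scheduler.pick(enabled.keys())
        try:
            next(enabled[pid])  # execute exactly one atomic step
        except StopIteration:
            del enabled[pid]    # this process has finished

Rerunning run() with the same seed reproduces the same schedule, which is what makes a detected concurrency bug reproducible and debuggable.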

In this paper, we use Coyote, a concurrency testing framework, to test for concurrency bugs in PBFT, the seminal Byzantine fault-tolerant consensus algorithm. We run experiments with four exploration strategies: Random Walk, Delay-Bounding, Probabilistic Concurrency Testing (PCT), and Q-Learning. The goal is to find schedules that lead to concurrency bugs. We test for two bugs that we seed into the algorithm.
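As an illustration of one of these strategies, the sketch below follows the published PCT algorithm (Burckhardt et al., ASPLOS 2010): each process gets a random initial priority, the highest-priority enabled process always runs, and at d-1 randomly chosen steps the running process drops to a low priority. This is a hypothetical Python rendering of the algorithm, not code from Coyote or from the thesis.

import random

class PCTScheduler:
    """Sketch of Probabilistic Concurrency Testing for num_procs processes,
    num_steps scheduling steps and bug depth `depth`."""

    def __init__(self, num_procs, num_steps, depth, seed=0):
        rng = random.Random(seed)
        prios = list(range(depth, depth + num_procs))   # base priorities >= depth
        rng.shuffle(prios)
        self.priority = dict(enumerate(prios))          # pid -> priority
        # At each of the depth-1 change points, the running process drops
        # to one of the low priorities depth-1, depth-2, ..., 1.
        points = rng.sample(range(num_steps), depth - 1)
        self.change_points = {step: depth - 1 - i for i, step in enumerate(points)}
        self.step = 0

    def pick(self, enabled):
        pid = max(enabled, key=lambda p: self.priority[p])  # highest priority runs
        if self.step in self.change_points:
            self.priority[pid] = self.change_points[self.step]
        self.step += 1
        return pid

With n processes, k steps and bug depth d, PCT triggers any bug of depth d with probability at least 1/(n·k^(d-1)), which helps explain why it performs well in comparisons like this one.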

A comparison of the results shows that fair exploration strategies outperform their unfair counterparts. Finally, we conclude that PCT and fair PCT are the best-performing strategies for the benchmarks we use.
