The formal verification of multi-agent systems in safety-critical domains is challenged by the need to certify population-level behaviours, such as formation control, under environmental uncertainty. Traditional state-based verification techniques are often inadequate for expressing these system-wide objectives and face severe scalability limitations. This thesis addresses that gap by developing a distributional reachability framework that models the evolution of the system directly over the space of probability distributions, using Interval Markov Decision Processes (IMDPs) to capture model uncertainty. We introduce two complementary analysis algorithms that compute guaranteed bounds on the set of all reachable distributions: a forward method based on occupation measures and McCormick relaxations, and a robust backward algorithm based on value iteration over a discretised distribution space. Case studies in swarm deployment demonstrate that the framework computes robust, set-based approximations of reachable distributions, and results for the robust backward reachability algorithm are illustrated on a running example. Together, these capabilities enable the formal verification of complex distributional specifications and the synthesis of control policies with certified safety guarantees, establishing a computational foundation for designing certifiably safe, large-scale autonomous systems.
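To make the robust backward analysis concrete, the following is a minimal, illustrative sketch of robust value iteration on an interval Markov chain (an IMDP with a single action per state). All names, states, and interval values are assumptions for illustration and are not taken from the thesis; the inner step uses the standard order-based greedy optimisation over the interval transition polytope to obtain a worst-case (lower) bound on reachability.

```python
# Hypothetical sketch: robust value iteration on a tiny interval Markov chain.
# All state names and interval bounds are illustrative, not from the thesis.

def worst_case_expectation(lows, highs, values):
    """Minimise sum_j p_j * values[j] over lows <= p <= highs, sum_j p_j = 1.

    Greedy order-based scheme: start from the lower bounds, then assign the
    remaining probability mass to successors with the smallest value first.
    """
    order = sorted(range(len(values)), key=lambda j: values[j])
    p = list(lows)
    remaining = 1.0 - sum(lows)
    for j in order:
        add = min(highs[j] - lows[j], remaining)
        p[j] += add
        remaining -= add
    return sum(p[j] * values[j] for j in range(len(values)))

def robust_reach_lower_bound(low, high, target, horizon):
    """Lower bound on the probability of reaching `target` within `horizon`
    steps, robust to every transition kernel inside the intervals [low, high].

    low[s][j] / high[s][j] bound the probability of moving from s to j;
    target[s] is True for goal states, which are treated as absorbing.
    """
    n = len(target)
    V = [1.0 if target[s] else 0.0 for s in range(n)]
    for _ in range(horizon):
        V = [1.0 if target[s]
             else worst_case_expectation(low[s], high[s], V)
             for s in range(n)]
    return V
```

For instance, on a three-state chain where state 1 is the goal and state 0 reaches it with interval probability [0.3, 0.6], the returned value at state 0 is a certified lower bound that holds for every kernel consistent with the intervals; the full distributional algorithm in the thesis lifts this kind of fixed-point computation to a discretised space of probability distributions.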