Learning local abstractions in complex decision-making systems

Doctoral Thesis (2026)
Author(s)

E. Congeduti (TU Delft - Computer Science & Engineering-Teaching Team)

Contributor(s)

F.A. Oliehoek – Promotor (TU Delft - Sequential Decision Making)

C.M. Jonker – Promotor (TU Delft - Interactive Intelligence)

Research Group
Computer Science & Engineering-Teaching Team
Publication Year
2026
Language
English
Defense Date
30-03-2026
Awarding Institution
Delft University of Technology
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This thesis investigates how to learn local abstractions for scalable sequential decision making in large and complex systems. Real-world environments are typically dynamic, multiagent, and characterized by an extremely high number of state variables. As a result, exhaustive reasoning is computationally infeasible for an agent. Abstraction serves to reduce this complexity by focusing on the essential aspects of the environment while disregarding irrelevant details. A central theme of this work is the study of specific local abstractions and effective methods to learn them.
We introduce a perspective that unifies different approaches to state abstraction, showing how seemingly distinct methods can be systematically organized within a broader conceptual framework. This clarifies the connections between existing models and lays the foundation for more systematic development and reuse of abstraction techniques.
We explore approximate influence-based abstraction, an approach that builds small local models, complemented by learned representations of the external influence on the local dynamics. We establish theoretical performance guarantees for approximate influence models, proving that the performance loss can be bounded in terms of the approximation error. The analysis is supported by empirical studies showing that accurate influence approximations improve performance in practice.
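For readers unfamiliar with such guarantees, results of this kind typically take a simulation-lemma-style form. The following is an illustrative sketch only, not the thesis's exact statement or constants: with $I$ the exact influence, $\hat{I}$ its learned approximation, $\gamma$ the discount factor, $R_{\max}$ the maximal reward, and $\hat{\pi}$ the policy optimal in the approximate local model,

\[
\bigl| V^{\hat{\pi}} - V^{*} \bigr| \;\le\; \frac{2\gamma R_{\max}}{(1-\gamma)^{2}}\,\epsilon,
\qquad
\epsilon \;=\; \sup_{h}\, D_{\mathrm{TV}}\!\bigl( I(\cdot \mid h),\, \hat{I}(\cdot \mid h) \bigr),
\]

where the supremum ranges over local histories $h$ and $D_{\mathrm{TV}}$ is the total-variation distance. The key point carried by the abstract is that the value loss shrinks with the influence approximation error $\epsilon$.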
A further contribution is the investigation of the learnability of influence. We demonstrate that accurate influence representations can be learned efficiently, even in large-scale and long-horizon scenarios. Empirical evaluations show that small recurrent architectures are often sufficient to approximate the influence effectively and generalize beyond the training horizon.
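As an illustration of what such an influence predictor might look like (a minimal sketch under assumed dimensions and names, not the thesis's exact architecture), a small GRU can map the agent's local action-observation history to a distribution over the values of the influence-source variables:

```python
# Minimal sketch of an influence predictor: a small recurrent network that
# maps a local action-observation history to a categorical distribution over
# influence-source values. All dimensions and names are illustrative.
import torch
import torch.nn as nn


class InfluencePredictor(nn.Module):
    def __init__(self, local_obs_dim, n_influence_values, hidden_size=32):
        super().__init__()
        # Small recurrent core, reflecting the abstract's finding that
        # compact recurrent architectures often suffice.
        self.gru = nn.GRU(local_obs_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_influence_values)

    def forward(self, local_history):
        # local_history: (batch, time, local_obs_dim)
        out, _ = self.gru(local_history)
        # Logits over influence-source values at every timestep.
        return self.head(out)


# Training-step sketch: cross-entropy against influence-source samples,
# here random placeholders; in practice they would come from rollouts of
# the full (global) simulator.
model = InfluencePredictor(local_obs_dim=8, n_influence_values=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
histories = torch.randn(64, 20, 8)          # placeholder local histories
targets = torch.randint(0, 4, (64, 20))     # placeholder source values
logits = model(histories)
loss = nn.functional.cross_entropy(logits.reshape(-1, 4), targets.reshape(-1))
opt.zero_grad()
loss.backward()
opt.step()
```

Generalization beyond the training horizon, as reported in the abstract, would then be probed by evaluating the trained predictor on histories longer than those seen during training.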
In conclusion, this dissertation advances both the foundations of state abstraction and the principled application of approximate influence-based abstraction. It provides a coherent framework for understanding state abstraction, establishes performance guarantees for approximate influence representations, and demonstrates the feasibility of influence learning. Together, these contributions offer insights into scalable and principled methods for sequential decision making in complex systems.

Files

License info not available