Transparent AI by Design
Search Algorithms for Supervised Learning, Control Policies, and Combinatorial Certification
Emir Demirović (TU Delft - Algorithmics)
Abstract
AI methods, such as those used in supervised learning, controller synthesis, and combinatorial optimisation, have demonstrated immense value across many domains. However, their practical adoption is hindered by reliability concerns, particularly when these systems are designed as black boxes. Two key challenges arise for black-box AI: (1) lack of performance guarantees: when AI fails, it is unclear whether the task is infeasible or the underlying algorithm is simply inadequate; and (2) lack of confidence: results may be difficult to interpret or trust. While post-hoc interpretability techniques offer partial remedies, we advocate for a different paradigm: building AI systems that are transparent by design. Rather than explaining opaque decisions after the fact, we synthesise outputs that are intrinsically understandable and verifiable. This shifts the focus from doubting the AI to questioning whether we are solving the right problem. We apply this approach across three distinct domains: supervised learning, controller synthesis, and infeasibility certification for combinatorial optimisation problems. Although these tasks involve exponentially large search spaces, recent advances demonstrate that designing for transparency is increasingly practical, often without sacrificing performance, making it a compelling alternative to opaque AI systems.