Benchmarking in Neuro-Symbolic AI
Robin Manhaeve (Katholieke Universiteit Leuven)
Francesco Giannini (Scuola Normale Superiore di Pisa)
Mehdi Ali (IAIS-Fraunhofer, Lamarr Institute for Machine Learning and Artificial Intelligence)
Damiano Azzolini (University of Ferrara)
Alice Bizzarri (University of Ferrara)
Andrea Borghesi (University of Bologna)
Samuele Bortolotti (Università degli Studi di Trento)
Sebastijan Dumančić (TU Delft - Algorithmics)
Neil Yorke-Smith (TU Delft - Algorithmics)
et al.
Abstract
Neural-symbolic (NeSy) AI has gained considerable popularity by enhancing learning models with explicit reasoning capabilities. Both new systems and new benchmarks are constantly introduced and used to evaluate learning and reasoning skills. The large variety of systems and benchmarks, however, makes it difficult to establish a fair comparison among the various frameworks, let alone a unifying set of benchmarking criteria. This paper analyzes the state of the art in benchmarking NeSy systems, studies its limitations, and proposes ways to overcome them. We categorize popular neural-symbolic frameworks into three groups: model-theoretic, proof-theoretic fuzzy, and proof-theoretic probabilistic systems. We show how these three categories have distinct strengths and weaknesses, and how this is reflected in the types of tasks and benchmarks to which they are applied.