The NeuroBench framework for benchmarking neuromorphic computing algorithms and systems

Journal Article (2025)
Author(s)

Jason Yik (Harvard University)

Korneel Van den Berghe (Harvard University, Student TU Delft)

D.M.J. den Blanken (TU Delft - Electronic Instrumentation)

Younes Bouhadjar (Forschungszentrum Jülich)

Maxime Fabre (Rijksuniversiteit Groningen)

A. Micheli (TU Delft - Pattern Recognition and Bioinformatics)

Guido C.H.E. de Croon (TU Delft - Control & Simulation)

N. Tömen (TU Delft - Pattern Recognition and Bioinformatics)

C. Frenkel (TU Delft - Electronic Instrumentation)

More Authors

Research Group
Electronic Instrumentation
DOI
https://doi.org/10.1038/s41467-025-56739-4
Publication Year
2025
Language
English
Issue number
1
Volume number
16
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Neuromorphic computing shows promise for advancing the computing efficiency and capabilities of AI applications using brain-inspired principles. However, the neuromorphic research field currently lacks standardized benchmarks, making it difficult to accurately measure technological advancements, compare performance with conventional methods, and identify promising future research directions. This article presents NeuroBench, a benchmark framework for neuromorphic algorithms and systems, collaboratively designed by an open community of researchers across industry and academia. NeuroBench introduces a common set of tools and a systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent and hardware-dependent settings. For the latest project updates, visit the project website (neurobench.ai).
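
As a concrete illustration of the hardware-independent side of such benchmarking, the sketch below computes an activation-sparsity-style complexity metric from recorded layer activations. This is a minimal, hypothetical example in plain NumPy, not the NeuroBench API; the function name, data shapes, and spike rates are assumptions made purely for illustration.

import numpy as np

# Hypothetical illustration: hardware-independent complexity metrics can be
# computed directly from a model's recorded activations. Here, activation
# sparsity is the fraction of zero-valued activations (e.g., absent spikes)
# across all layers, timesteps, and samples.

def activation_sparsity(activations):
    """Return the fraction of zero-valued entries across all recorded layers."""
    total = sum(a.size for a in activations)
    zeros = sum(np.count_nonzero(a == 0) for a in activations)
    return zeros / total if total > 0 else 0.0

# Toy usage: two layers of binary spike activations for 4 samples, 10 timesteps,
# and 32 / 16 neurons respectively (all values here are made up).
rng = np.random.default_rng(0)
layer1 = (rng.random((4, 10, 32)) < 0.10).astype(np.float32)  # ~10% spiking
layer2 = (rng.random((4, 10, 16)) < 0.05).astype(np.float32)  # ~5% spiking
print(f"activation sparsity: {activation_sparsity([layer1, layer2]):.3f}")

Metrics of this kind are hardware-independent because they depend only on the algorithm's behavior, not on any particular neuromorphic chip or simulator, which is what allows direct comparison across implementations.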