On the Trustworthiness of Spiking Neural Networks and Neuromorphic Systems
Theofilos Spyrou (TU Delft - Computer Engineering)
Haralampos G. Stratigopoulos (CNRS)
Ihsen Alouani (CNRS-IEMN, UPHF, INSA, Queen's University Belfast)
S. Hamdioui (TU Delft - Computer Engineering)
Anteneh Gebregiorgis (TU Delft - Computer Engineering)
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Abstract
Neuromorphic computing offers a promising path toward energy-efficient and compact Artificial Intelligence (AI) systems. Implemented with Spiking Neural Networks (SNNs), neuromorphic systems exploit SNN characteristics such as event-driven computation, event sparsity, and biological plausibility to achieve high performance and energy efficiency, which is vital for realizing AI at the edge. Although SNNs are biology-inspired structures, their use in mission- and safety-critical applications raises multiple concerns about the trustworthiness of neuromorphic hardware, owing to various intrinsic and extrinsic reliability and security issues. It is therefore of utmost importance to study the dependability of SNNs and neuromorphic hardware accelerators in order to expose potential vulnerabilities and harden against them, so that reliable and secure operation is ensured. This paper presents an analysis of the dependability and trustworthiness aspects of SNNs and neuromorphic hardware. It outlines potential mitigation and countermeasure strategies to improve the reliability, testability, and security of SNN hardware and to ensure its trustworthy deployment in critical application domains.
File under embargo until 01-01-2026