Neuromorphic computing offers a promising path toward energy-efficient and compact Artificial Intelligence (AI) systems. Implemented with Spiking Neural Networks (SNNs), neuromorphic systems can exploit SNN characteristics such as event-driven computation, event sparsity, and biological plausibility to achieve high performance and energy efficiency, which is vital for realizing AI at the edge. Although SNNs are biology-inspired structures, their use in mission- and safety-critical applications raises multiple concerns about the trustworthiness of neuromorphic hardware, owing to various intrinsic and extrinsic reliability and security issues. Hence, a thorough study of the dependability of SNNs and neuromorphic hardware accelerators becomes of utmost importance, in order to expose potential vulnerabilities and harden against them, so that reliable and secure operation is ensured. This paper presents an analysis of the dependability and trustworthiness aspects of SNNs and neuromorphic hardware, and outlines mitigation and countermeasure strategies to improve the reliability, testability, and security of SNN hardware and ensure its trustworthy deployment in critical application domains.