Energy-efficient SNN Architecture using 3nm FinFET Multiport SRAM-based CIM with Online Learning

Conference Paper (2024)
Authors

L.C.A. Huijbregts (Imec, TU Delft - Computer Engineering)

Hsiao-Hsuan Liu (Imec)

Paul Detterer (Imec)

S. Hamdioui (TU Delft - Computer Engineering)

Amirreza Yousefzadeh (Imec, University of Twente)

Rajendra Bishnoi (TU Delft - Computer Engineering)

Research Group
Computer Engineering
To reference this document use:
https://doi.org/10.1145/3649329.3656514
Publication Year
2024
Language
English
ISBN (electronic)
9798400706011
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Current Artificial Intelligence (AI) computation systems face challenges, primarily from the memory-wall issue, which limits overall system-level performance, especially for edge devices with constrained battery budgets, such as smartphones, wearables, and Internet-of-Things sensor systems. In this paper, we propose a new SRAM-based Compute-In-Memory (CIM) accelerator optimized for Spiking Neural Network (SNN) inference. Our proposed architecture employs a multiport SRAM design with multiple decoupled read ports to enhance throughput and transposable read-write ports to facilitate online learning. Furthermore, we develop an arbiter circuit for efficient data processing and port allocation during computation. Results for a 128×128 array in 3nm FinFET technology demonstrate a 3.1× improvement in speed and a 2.2× enhancement in energy efficiency with our proposed multiport SRAM design compared to the traditional single-port design. At the system level, a throughput of 44 MInf/s at 607 pJ/Inf and 29 mW is achieved.
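As a quick consistency check on the reported system-level figures (this sketch is not from the paper itself), multiplying the stated throughput by the stated energy per inference yields a power figure close to the reported 29 mW; the small gap is not explained by the abstract and may reflect static or peripheral power:

```python
# Sanity check: power implied by the reported throughput and per-inference energy.
throughput_inf_per_s = 44e6   # 44 MInf/s, as reported
energy_per_inf_j = 607e-12    # 607 pJ/Inf, as reported

# Power = (inferences / second) x (energy / inference)
implied_power_mw = throughput_inf_per_s * energy_per_inf_j * 1e3
print(f"Implied power: {implied_power_mw:.1f} mW")  # ~26.7 mW, close to the reported 29 mW
```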

Files

3649329.3656514.pdf
(PDF | 3.62 MB)
License info not available