An Event-Based Digital Compute-In-Memory Accelerator with Flexible Operand Resolution and Layer-Wise Weight/Output Stationarity
N. Chauvaux (TU Delft - Electronic Instrumentation)
A. Kneip (Katholieke Universiteit Leuven, TU Delft - Electronic Instrumentation)
Christoph Posch (Prophesee)
K.A.A. Makinwa (TU Delft - Microelectronics)
Charlotte Frenkel (TU Delft - Electronic Instrumentation)
Abstract
Compute-in-memory (CIM) accelerators for spiking neural networks (SNNs) are promising solutions to enable μs-level inference latency and ultra-low energy consumption in edge vision applications. Yet, their current lack of flexibility at both the circuit and system levels prevents their deployment in a wide range of real-life scenarios. In this work, we propose FlexSpIM, a novel digital CIM macro that supports arbitrary operand resolution and shape within a unified CIM storage for weights and membrane potentials. These circuit-level techniques enable a hybrid weight- and output-stationary dataflow at the system level to maximize operand reuse, thereby minimizing costly on- and off-chip data movements during SNN execution. Measurement results of a fabricated FlexSpIM prototype in 40-nm CMOS demonstrate a 2× increase in 1-bit-normalized energy efficiency compared to prior fixed-precision digital CIM-based SNNs, while providing resolution reconfiguration with bitwise granularity. Our approach can save up to 90% energy in large-scale systems, while reaching a state-of-the-art classification accuracy of 95.8% on the IBM DVS Gesture dataset.
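To make the two ideas named in the abstract more concrete, the short Python sketch below gives a purely behavioral model of (i) per-layer, bitwise-configurable quantization of weights and membrane potentials and (ii) event-driven integrate-and-fire updates. All names and parameter values (lif_layer, quantize, bit widths, thresholds) are illustrative assumptions of this note; the sketch is not the FlexSpIM circuit, its dataflow, or its measured configuration.

```python
# Behavioral illustration only: layer-wise operand resolution and
# spike-driven accumulation, modeled in NumPy (not the FlexSpIM hardware).
import numpy as np

def quantize(x, bits):
    """Round and clamp x to a signed range of `bits` bits."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x), lo, hi).astype(np.int32)

def lif_layer(spikes_in, weights, v_mem, threshold, w_bits, v_bits):
    """One integrate-and-fire timestep with configurable resolutions.

    spikes_in : (n_in,) binary input spike vector
    weights   : (n_out, n_in) synaptic weights, quantized to w_bits
    v_mem     : (n_out,) membrane potentials, quantized to v_bits
    """
    w_q = quantize(weights, w_bits)
    # Accumulate only the columns that received a spike (event-based update).
    v_mem = quantize(v_mem + w_q @ spikes_in, v_bits)
    spikes_out = (v_mem >= threshold).astype(np.int32)
    v_mem = np.where(spikes_out == 1, 0, v_mem)  # reset on spike
    return spikes_out, v_mem

# Toy two-layer network: each layer picks its own weight/state bit widths,
# standing in for the layer-wise reconfigurability described in the abstract.
rng = np.random.default_rng(0)
x = (rng.random(64) < 0.2).astype(np.int32)
w1, v1 = rng.normal(0, 2, (32, 64)), np.zeros(32)
w2, v2 = rng.normal(0, 2, (16, 32)), np.zeros(16)
s1, v1 = lif_layer(x, w1, v1, threshold=4, w_bits=2, v_bits=6)
s2, v2 = lif_layer(s1, w2, v2, threshold=4, w_bits=4, v_bits=8)
print(s2)
```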
Files
File under embargo until 27-12-2025