Event-Driven Graph Neural Network Accelerators for Low-Power Vision

Abstract

Event-based cameras promise new opportunities for smart vision systems deployed at the edge. Unlike their conventional frame-based counterparts, event-based cameras report temporal light-intensity changes as per-pixel events, enabling ultra-low latency with microsecond-scale temporal resolution, power consumption at the milliwatt level, and sparse information encoding in which only dynamic objects trigger events, effectively excluding static background data. However, mainstream computer vision algorithms based on convolutional neural networks (CNNs) hardly exploit these advantages of event-based cameras. Recently, event graph neural networks (event-GNNs) have been proposed as the backbone for novel event-based vision algorithms. By treating events as graph data, GNNs can process events while preserving their spatiotemporal information and sparse characteristics. Further studies have also revealed an event-driven computation workflow that translates an event stream into a dynamic, evolving graph, outlining a path toward low-latency event-based vision. Despite these promises, event-GNNs still lack dedicated hardware accelerators that deliver integrated solutions with real-time prediction latency and low power consumption for real-world edge intelligence.
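To make the event-to-graph translation concrete, the sketch below shows one common way such a dynamic, evolving graph can be maintained: each incoming event becomes a node, stale nodes are evicted from a sliding temporal window, and edges connect events that are close in both space and time. The radius, time window, and data layout here are illustrative assumptions, not the construction used in the thesis.

```python
# Minimal sketch of event-driven graph construction (illustrative only):
# each event (x, y, t, polarity) becomes a node; edges link events that
# are close in space and time. The parameters below are assumptions,
# not values from the thesis.

SPATIAL_RADIUS = 3     # pixels (assumed)
TIME_WINDOW = 50_000   # microseconds kept in the evolving graph (assumed)

class DynamicEventGraph:
    def __init__(self):
        self.nodes = []  # (x, y, t, polarity) tuples
        self.edges = []  # (src_index, dst_index) pairs

    def insert(self, x, y, t, polarity):
        # Evict nodes that fell out of the temporal window and remap
        # the surviving edge indices accordingly.
        keep = [i for i, (_, _, ti, _) in enumerate(self.nodes)
                if t - ti <= TIME_WINDOW]
        remap = {old: new for new, old in enumerate(keep)}
        self.nodes = [self.nodes[i] for i in keep]
        self.edges = [(remap[s], remap[d]) for s, d in self.edges
                      if s in remap and d in remap]

        # Connect the new event to its spatiotemporal neighbours.
        new_idx = len(self.nodes)
        for i, (xi, yi, _, _) in enumerate(self.nodes):
            if abs(x - xi) <= SPATIAL_RADIUS and abs(y - yi) <= SPATIAL_RADIUS:
                self.edges.append((i, new_idx))
        self.nodes.append((x, y, t, polarity))


# Usage: the graph grows and decays event by event.
g = DynamicEventGraph()
g.insert(10, 12, t=1_000, polarity=1)
g.insert(11, 12, t=1_500, polarity=0)  # becomes a neighbour of the first
```

Because the graph is updated per event rather than rebuilt per frame, the processing cost tracks the sparse event rate, which is the property the event-driven workflow exploits.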

In this thesis, we propose, for the first time, an event-driven GNN accelerator for low-power, high-speed edge vision. Through hardware-algorithm co-design, an event-driven GNN model is adopted for deployment on an edge FPGA platform without loss of prediction accuracy. We also introduce two novel optimizations, edge-free storage and layer-parallel computation, which further reduce the memory footprint and processing latency. The proposed accelerator is implemented on the Xilinx KV260 System-on-Module (SOM) platform, which contains an UltraScale+ MPSoC FPGA, and is benchmarked on board. On a car-recognition task based on the NCars dataset, our accelerator achieves a prediction accuracy of 87.8%. Operating at a board-level system power of 6.86 W, it reaches an average prediction latency of 16 μs per event and runs 9.2× faster than its software counterpart on an NVIDIA RTX A6000 GPU. Our event-driven GNN accelerator therefore enables both real-time and microsecond-resolution event-based vision at the edge.
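The edge-free storage optimization can be pictured with a small sketch: the accelerator keeps only node records and re-derives adjacency from a deterministic spatial rule whenever a layer needs it, so no explicit edge list occupies memory. The function below illustrates only this idea, with an assumed radius rule; the actual hardware data path (and the layer-parallel pipeline) is described in the thesis itself.

```python
def neighbors_on_the_fly(nodes, query_idx, radius=3):
    """Recompute a node's neighbours from stored coordinates alone.

    Sketch of the edge-free storage idea: adjacency is never stored,
    only re-derived from node coordinates on demand. The radius rule
    is an illustrative assumption, not the thesis's hardware rule.
    """
    qx, qy, _, _ = nodes[query_idx]
    return [i for i, (x, y, _, _) in enumerate(nodes)
            if i != query_idx
            and abs(x - qx) <= radius and abs(y - qy) <= radius]
```

Trading this recomputation for storage suits hardware well: the neighbourhood rule maps to cheap comparators, while an explicit edge list would grow with event density and dominate on-chip memory.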
