RT-ST-GCN

Enabling Realtime Continual Inference at the Edge

Conference Paper (2024)

Authors:
Maxim Yudayev (Katholieke Universiteit Leuven)
Benjamin Filtjens (Katholieke Universiteit Leuven)
Josep Balasch (Katholieke Universiteit Leuven)

DOI: https://doi.org/10.1109/MLSP58920.2024.10734754 (final published version)
Publication Year: 2024
Language: English
ISBN (electronic): 9798350372250

Abstract

Continual Spatial-Temporal Graph Convolutional Network (CoST-GCN) is a continual-inference optimization of ST-GCN, an established graph-based action classification method. It removes redundant computations when the ST-GCN classifier is applied as a sliding window over a continual stream of data for per-frame predictions. Despite this improvement, CoST-GCN achieves a throughput of only 5 fps on a representative edge platform (Raspberry Pi 4). We propose a hardware-driven optimization, termed RT-ST-GCN, which scales down the computational bottleneck of ST-GCN to achieve realtime predictions at up to 50 fps. We study and compare the performance of our lightweight model against (Co)ST-GCN on the PKU-MMD continual action dataset. Despite an expected drop in framewise performance metrics, our model shows similar or better performance on key segmental metrics, a constant latency of 20 ms for any temporal kernel size, and a 3x decrease in memory usage.
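The redundancy that continual inference removes can be illustrated with a toy temporal convolution: a sliding-window classifier recomputes the entire window at every new frame, while a continual operator buffers past inputs and only incorporates the newest frame. The sketch below is purely illustrative (the class names and single-layer setup are assumptions, not the paper's implementation); in a stacked network like ST-GCN the same buffering applies per layer, which is where the savings compound:

```python
import numpy as np

def conv_window(window, kernel):
    # Sliding-window style: recompute the full temporal dot product
    # over all K frames for every output, even though K-1 of them
    # were already seen at the previous step.
    return float(np.dot(window, kernel))

class ContinualConv:
    """Toy continual temporal convolution (illustrative only).

    Keeps a buffer of the last K input frames so each call only has
    to ingest one new frame; the output matches the sliding-window
    result once the buffer is warm.
    """
    def __init__(self, kernel):
        self.kernel = np.asarray(kernel, dtype=float)
        self.buffer = np.zeros(len(self.kernel))  # zero-padded warm-up

    def step(self, frame):
        # Shift the oldest frame out, append the new one, and emit
        # the current window's response.
        self.buffer = np.roll(self.buffer, -1)
        self.buffer[-1] = frame
        return float(np.dot(self.buffer, self.kernel))

stream = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
kernel = [0.5, -1.0, 2.0]

cc = ContinualConv(kernel)
continual_out = [cc.step(x) for x in stream]

# Reference: explicit sliding-window recomputation over the stream.
windowed_out = [conv_window(np.array(stream[i - 2:i + 1]), np.array(kernel))
                for i in range(2, len(stream))]

# After the K-1 frame warm-up, both formulations agree exactly.
assert np.allclose(continual_out[2:], windowed_out)
```

In this single-layer toy both variants cost the same per step; the benefit appears in deep stacks, where buffering each layer's past activations avoids re-running every layer over the whole window for each new frame.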