Real-Time Traffic Sign Recognition on Microcontrollers
A.E. Celen (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Q. Wang – Mentor (TU Delft - Embedded Systems)
R. Zhu – Mentor (TU Delft - Embedded Systems)
Rangarao Venkatesha Prasad – Graduation committee member (TU Delft - Networked Systems)
Abstract
Real-time traffic sign recognition on microcontrollers poses challenges due to limited memory and processing capacity. This study investigates the trade-offs between model size, classification accuracy, and inference latency under these hardware constraints. We present an efficient network architecture called AykoNet with two variants: AykoNet-Lite, which prioritizes model size and inference latency, and AykoNet-Pro, which prioritizes classification accuracy. We trained AykoNet on the German Traffic Sign Recognition Benchmark (GTSRB) and optimized it specifically for deployment on the Raspberry Pi Pico microcontroller. AykoNet-Lite delivers 94.60% accuracy with only a 36.80 KB model size and a 55.34 ms inference time, while AykoNet-Pro achieves 95.90% accuracy with an 80.18 KB model size and an 87.13 ms inference time. Our approach demonstrates the effectiveness of domain-specific preprocessing and architectural design, class-aware data augmentation, and the strategic use of depthwise separable convolutions. These results validate the feasibility of real-time traffic sign recognition in resource-constrained embedded systems. In particular, AykoNet-Lite strikes an optimal balance between model size, classification accuracy, and inference latency for practical deployment in autonomous navigation applications.
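The abstract names depthwise separable convolutions as one of the ingredients behind the small model footprint. As an illustration only, the sketch below shows how such a block is commonly built in TensorFlow/Keras: a depthwise 3x3 convolution followed by a pointwise 1x1 convolution, which reduces parameters and multiply-accumulate operations relative to a standard convolution. The layer widths, the 32x32 input resolution, and the overall network layout here are assumptions made for the example (only the 43-class GTSRB output is taken from the benchmark itself) and do not reproduce the actual AykoNet architecture.

```python
import tensorflow as tf


def ds_conv_block(x, filters, stride=1):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution.

    Factorizing a standard convolution this way cuts parameters and
    multiply-accumulates roughly by the kernel area, which is what makes
    it attractive for microcontroller deployment.
    """
    x = tf.keras.layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU()(x)
    x = tf.keras.layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)


# Hypothetical usage: a tiny classifier head for GTSRB (43 classes).
# Input resolution and channel widths are illustrative, not AykoNet's.
inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = ds_conv_block(x, 32, stride=2)
x = ds_conv_block(x, 64, stride=2)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(43, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```

A model of this general shape would still need post-training quantization and conversion (e.g., to an int8 TensorFlow Lite model) before it could run within the memory budget of a board like the Raspberry Pi Pico; the thesis itself describes the exact pipeline used for AykoNet.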