Real-time traffic sign recognition on microcontrollers introduces challenges due to limited memory and processing capacity. This study investigates the trade-offs between model size, classification accuracy, and inference latency within hardware constraints. We present an efficient network architecture called AykoNet with two variants: AykoNet-Lite, prioritizing model size and inference latency, and AykoNet-Pro, prioritizing classification accuracy. We trained AykoNet on the German Traffic Sign Recognition Benchmark (GTSRB) and specifically optimized it for deployment on the Raspberry Pi Pico microcontroller. AykoNet-Lite delivers 94.60% accuracy with only a 36.80 KB model size and 55.34 ms inference time, while AykoNet-Pro achieves 95.90% accuracy with an 80.18 KB model size and 87.13 ms inference time. Our approach demonstrates the effectiveness of domain-specific preprocessing and architectural design, class-aware data augmentation, and the strategic use of depthwise separable convolutions. These results validate the feasibility of real-time traffic sign recognition in resource-constrained embedded systems. Specifically, AykoNet-Lite strikes an optimal balance between model size, classification accuracy, and inference latency for practical deployment in autonomous navigation applications.
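To give a sense of why depthwise separable convolutions matter for the model sizes reported above, the sketch below compares parameter counts for a standard convolution and its depthwise separable factorization. The kernel size and channel counts are hypothetical illustrations, not AykoNet's actual layer configuration.

```python
# Parameter-count comparison: standard vs. depthwise separable convolution.
# Values chosen (3x3 kernel, 32 -> 64 channels) are illustrative only.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    # One k x k filter per (input channel, output channel) pair.
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    # Depthwise step: one k x k filter per input channel,
    # then a 1x1 pointwise convolution to mix channels.
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 32, 64
std = standard_conv_params(k, c_in, c_out)        # 18432 weights
sep = depthwise_separable_params(k, c_in, c_out)  # 2336 weights
print(std, sep, round(std / sep, 1))              # ~7.9x fewer parameters
```

On a microcontroller with a few hundred kilobytes of RAM, a reduction of this magnitude per layer is what makes sub-100 KB models such as AykoNet-Lite attainable.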