Radar is a vital sensor modality for advanced driver assistance systems (ADAS), offering reliable range and speed detection in all weather and lighting conditions. These capabilities complement camera sensors, which excel at angle determination and scene comprehension. Machine learning (ML) has been instrumental in leveraging radar data for ADAS tasks, both through standalone radar processing and through radar-camera sensor fusion. However, creating effective ML models requires extensive and diverse datasets, and publicly available high-resolution radar datasets remain relatively rare and limited in scope and scale. The variability in radar configurations poses a further challenge: different chirp designs and transmitter/receiver array configurations greatly influence the resolution, range, and clarity of radar data, so ML models trained on data from one radar configuration may not perform effectively when applied to another.
This thesis addresses the problem of generating synthetic automotive radar range-azimuth (RA) maps by fusing low-resolution radar RA maps with aligned camera images. Synthetic data can help close the gap between the data needed to train effective ML-based ADAS algorithms for radar and the data currently available, by producing radar data faithful to an arbitrary array configuration for a consistent set of scenes. To that end, I propose a novel radar super-resolution deep learning network: a UNet-based autoencoder enhanced with visual features from a pre-trained ResNet50 encoder. The model produces high-resolution radar RA maps that approximate ground-truth data. Evaluations on the RADIal and RaDICaL datasets show superior performance over baseline and prior methods, particularly in resolving details such as closely spaced vehicles, distant targets, and pedestrians.
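For illustration only, the sketch below shows the general shape of such a radar-camera fusion network: a UNet-style encoder-decoder on the RA map with a ResNet50 image branch fused at the bottleneck. It is a minimal, hypothetical example assuming PyTorch/torchvision; the module names (RadarCameraSuperRes), the additive fusion, the pre-upsampling factor, and all layer sizes are assumptions for demonstration, not the exact architecture developed in the thesis.

```python
# Hypothetical sketch of a UNet-based radar super-resolution network with a
# ResNet50 camera branch. Shapes and fusion strategy are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU, as in a standard UNet stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class RadarCameraSuperRes(nn.Module):
    def __init__(self):
        super().__init__()
        # Radar branch: UNet contracting path over the RA map.
        self.enc1 = conv_block(1, 64)
        self.enc2 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(128, 256)
        # Camera branch: ResNet50 backbone without its pooling/classifier head.
        # Pretrained ImageNet weights would normally be loaded here; omitted so
        # the sketch runs without a download.
        resnet = torchvision.models.resnet50(weights=None)
        self.cam_encoder = nn.Sequential(*list(resnet.children())[:-2])  # -> 2048-channel features
        self.cam_proj = nn.Conv2d(2048, 256, 1)
        # Decoder: UNet expanding path with skip connections.
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = conv_block(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)
        self.head = nn.Conv2d(64, 1, 1)

    def forward(self, ra_lowres, image):
        # Pre-upsample the low-resolution RA map to the target resolution
        # (one of several possible super-resolution formulations).
        x = F.interpolate(ra_lowres, scale_factor=2, mode="bilinear", align_corners=False)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Project camera features and resize them to the radar bottleneck grid,
        # then fuse by simple addition (an assumed fusion choice).
        cam = self.cam_proj(self.cam_encoder(image))
        cam = F.interpolate(cam, size=b.shape[-2:], mode="bilinear", align_corners=False)
        b = b + cam
        # Expand back up with skip connections from the radar encoder.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)


if __name__ == "__main__":
    model = RadarCameraSuperRes()
    ra = torch.randn(1, 1, 64, 64)     # low-resolution RA map (assumed size)
    img = torch.randn(1, 3, 224, 224)  # aligned camera image
    print(model(ra, img).shape)        # estimated high-resolution RA map, here 1x1x128x128
```

In this kind of design, the radar branch supplies range structure while the camera branch contributes angular and semantic cues at the bottleneck; the thesis evaluates the actual network against baselines on RADIal and RaDICaL.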