Automotive radar has shown promising developments in environment perception due to its cost-effectiveness and robustness in adverse weather conditions. However, the limited availability of annotated radar data poses a significant challenge for advancing radar-based perception systems. To address this limitation, we propose a framework to generate 4D radar point clouds for training and evaluating object detectors. Specifically, we apply a diffusion model to a point-structured latent representation of radar point clouds. Our proposed 4DRad-Diffusion generates foreground and background points separately, conditioned on 3D bounding boxes and LiDAR data, respectively. The generated foreground points can serve as an effective synthetic data augmentation strategy, or be combined with generated background points to pre-train models on fully synthetic data. We demonstrate that augmenting real radar data with our synthetic data improves object detection performance on both the View-of-Delft and TruckScenes datasets, outperforming existing augmentation methods. We also show that pre-training on synthetic data enhances performance, highlighting the potential of generative models to advance radar perception.
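The conditional generation idea described above (separate foreground/background sampling, each driven by its own conditioning signal) can be illustrated with a toy DDPM-style reverse-diffusion loop. This is a minimal sketch only: the `toy_denoiser` stand-in, the point/feature dimensions, and the conditioning encodings are all hypothetical and not taken from the paper, whose actual model operates on a learned point-structured latent space with a trained noise-prediction network.

```python
import numpy as np

def toy_denoiser(x, t, cond):
    # Hypothetical stand-in for a learned noise-prediction network.
    # The real method would use a trained network over latent point features.
    return 0.1 * x + 0.01 * cond

def sample(cond, n_points=64, dim=4, steps=50, seed=0):
    """DDPM-style reverse diffusion over an (n_points, dim) point set,
    conditioned on `cond` (e.g. an encoded 3D box or an encoded LiDAR scene)."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)       # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal((n_points, dim))     # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps = toy_denoiser(x, t, cond)           # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:                                # add noise except at the last step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

# Foreground and background are sampled separately with different conditioning,
# mirroring the two-branch generation scheme (conditioning vectors are toy values).
fg = sample(cond=np.ones(4))
bg = sample(cond=np.zeros(4))
print(fg.shape)  # (64, 4)
```

The two calls differ only in their conditioning input, which is the essential point: one sampler can produce object-level (box-conditioned) points and scene-level (LiDAR-conditioned) points that are later merged into a fully synthetic frame.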