4DRad-Diffusion: Latent Diffusion Models for 4D Radar Point Cloud Generation
J.C.K. Kwok (TU Delft - Mechanical Engineering)
Holger Caesar – Mentor (TU Delft - Intelligent Vehicles)
A. Palffy – Mentor (Perciv AI)
L. Ferranti – Graduation committee member (TU Delft - Learning & Autonomous Control)
H. Jamali-Rad – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Abstract
Automotive radar has shown promising developments in environment perception due to its cost-effectiveness and robustness in adverse weather conditions. However, the limited availability of annotated radar data poses a significant challenge for advancing radar-based perception systems. To address this limitation, we propose a framework to generate 4D radar point clouds for training and evaluating object detectors. Specifically, we apply diffusion to a point-structured latent representation of radar point clouds. Our proposed 4DRad-Diffusion generates foreground and background points separately, conditioned on 3D bounding boxes and LiDAR data, respectively. The generated foreground points can be used as an effective synthetic data augmentation strategy or combined with generated background points to pre-train models on fully synthetic data. We demonstrate that augmenting real radar data with our synthetic data improves object detection performance on both the View-of-Delft and TruckScenes datasets, even outperforming existing augmentation methods. We also show that pre-training on synthetic data enhances performance, highlighting the potential of generative models to advance radar perception.
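To make the core idea concrete, the following is a minimal, hypothetical sketch of conditional latent diffusion sampling as the abstract describes it: starting from Gaussian noise, a denoiser iteratively refines a point-structured latent (one feature vector per radar point), conditioned on an embedding of, e.g., a 3D bounding box for foreground points. The noise schedule, the `denoise_eps` stand-in, and all names and shapes are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

# Minimal sketch of DDPM-style reverse sampling over a point-structured
# latent (num_points x dim). The denoiser below is a hypothetical stand-in;
# in 4DRad-Diffusion it would be a trained network conditioned on 3D
# bounding boxes (foreground) or LiDAR context (background).

rng = np.random.default_rng(0)

T = 50                                    # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)        # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoise_eps(z_t, t, cond):
    """Hypothetical noise predictor eps_theta(z_t, t, cond).
    A real model would be a neural network; this condition-shifted
    estimate just keeps the sampling loop runnable."""
    return 0.1 * (z_t - cond)

def sample_latent(num_points=64, dim=8, cond=None):
    """Ancestral sampling: draw z_T ~ N(0, I) and iteratively denoise
    to a clean latent z_0 of shape (num_points, dim)."""
    cond = np.zeros(dim) if cond is None else cond
    z = rng.standard_normal((num_points, dim))
    for t in reversed(range(T)):
        eps = denoise_eps(z, t, cond)
        # Posterior mean of z_{t-1} given z_t and the predicted noise
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                          # add noise on all but the final step
            z += np.sqrt(betas[t]) * rng.standard_normal(z.shape)
    return z

# Foreground latent conditioned on a (hypothetical) box embedding;
# a decoder would then map this latent back to radar points.
box_embedding = np.ones(8)
z0 = sample_latent(num_points=64, dim=8, cond=box_embedding)
print(z0.shape)  # (64, 8)
```

Generating foreground and background separately, as the abstract states, would amount to running two such conditional samplers (box-conditioned and LiDAR-conditioned) and merging the decoded point sets.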
Files
File under embargo until 04-09-2027