4D-RaDiff: Latent Diffusion for 4D Radar Point Cloud Generation

Preprint (2025)
Author(s)

J.C.K. Kwok (Student, TU Delft)

Holger Caesar (TU Delft - Intelligent Vehicles)

A. Palffy (Perciv AI)

DOI (related publication)
https://doi.org/10.48550/arXiv.2512.14235
Publication Year
2025
Language
English
Research Group
Intelligent Vehicles
Publisher
ArXiv

Abstract

Automotive radar has shown promise for environment perception due to its cost-effectiveness and robustness in adverse weather conditions. However, the limited availability of annotated radar data poses a significant challenge for advancing radar-based perception systems. To address this limitation, we propose a novel framework that generates 4D radar point clouds for training and evaluating object detectors. Unlike image-based diffusion, our method accounts for the sparsity and unique characteristics of radar point clouds by applying diffusion to a latent point cloud representation. Within this latent space, generation is controlled via conditioning at either the object or scene level. The proposed 4D-RaDiff converts unlabeled bounding boxes into high-quality radar annotations and transforms existing LiDAR point cloud data into realistic radar scenes. Experiments demonstrate that incorporating 4D-RaDiff's synthetic radar data as a data augmentation method during training consistently improves object detection performance compared to training on real data only. In addition, pre-training on our synthetic data reduces the amount of annotated radar data required by up to 90% while achieving comparable object detection performance.
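Since this is a metadata-only record without the paper itself, the sketch below only illustrates the general recipe the abstract describes: epsilon-prediction diffusion run on point cloud latents, steered by an object- or scene-level conditioning vector. It is not 4D-RaDiff's actual implementation; the module names (LatentDenoiser, diffusion_training_step), latent dimensions, noise schedule, and conditioning interface are all assumptions for illustration.

```python
# Hypothetical sketch of conditional latent diffusion on point-cloud latents,
# in the spirit of the abstract. Names, shapes, and the schedule are
# assumptions, not 4D-RaDiff's actual architecture.
import torch
import torch.nn as nn

class LatentDenoiser(nn.Module):
    """Predicts the noise added to a latent point set, given a timestep
    and a conditioning vector (e.g., an encoded bounding box or scene)."""
    def __init__(self, latent_dim=64, cond_dim=32, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z_t, t, cond):
        # z_t: (B, N, latent_dim) noisy latents; t: (B,) timesteps in [0, 1];
        # cond: (B, cond_dim) conditioning, broadcast to every latent point.
        B, N, _ = z_t.shape
        t_emb = t.view(B, 1, 1).expand(B, N, 1)
        c_emb = cond.unsqueeze(1).expand(B, N, -1)
        return self.net(torch.cat([z_t, t_emb, c_emb], dim=-1))

def diffusion_training_step(denoiser, z0, cond, optimizer):
    """One DDPM-style step: noise the clean latents z0 at a random timestep,
    then regress the injected noise (epsilon-prediction loss)."""
    B = z0.shape[0]
    t = torch.rand(B)                              # random timesteps
    alpha_bar = torch.cos(t * torch.pi / 2) ** 2   # simple cosine schedule
    a = alpha_bar.view(B, 1, 1)
    eps = torch.randn_like(z0)
    z_t = a.sqrt() * z0 + (1 - a).sqrt() * eps     # forward (noising) process
    loss = ((denoiser(z_t, t, cond) - eps) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: 128 latent "points" per scene, conditioned on a hypothetical
# 32-d encoding of object boxes or a LiDAR-derived scene embedding.
denoiser = LatentDenoiser()
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
z0 = torch.randn(4, 128, 64)    # stand-in for encoded radar point clouds
cond = torch.randn(4, 32)       # stand-in for object/scene conditioning
print(diffusion_training_step(denoiser, z0, cond, opt))
```

Operating in a latent space rather than directly on raw points lets the denoiser work on a fixed-size, dense representation even though real radar returns are sparse and irregular; swapping the conditioning vector between a box encoding and a scene encoding would correspond to the object-level and scene-level control the abstract mentions.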

Metadata-only record. There are no files for this record.