Seeing Clearly, Forgetting Deeply: Revisiting Fine-Tuned Video Generators for Driving Simulation

Preprint (2025)
Author(s)

C. Chang (TU Delft - Intelligent Vehicles)

Chen-Yu Wang (German Research Center for Artificial Intelligence)

Julian Schmidt (Mercedes-Benz)

Holger Caesar (TU Delft - Intelligent Vehicles)

Alain Pagani (German Research Center for Artificial Intelligence)

DOI (related publication)
https://doi.org/10.48550/arXiv.2508.16512
Publication Year
2025
Language
English
Research Group
Intelligent Vehicles
Publisher
ArXiv

Abstract

Recent advancements in video generation have substantially improved visual quality and temporal coherence, making these models increasingly appealing for applications such as autonomous driving, particularly in the context of driving simulation and so-called "world models". In this work, we investigate the effects of existing video-generation fine-tuning approaches on structured driving datasets and uncover a potential trade-off: although visual fidelity improves, spatial accuracy in modeling dynamic elements may degrade. We attribute this degradation to a shift in the alignment between the visual-quality and dynamic-understanding objectives. In datasets with diverse scene structure over time, where objects or viewpoints shift in varied ways, these two objectives tend to be highly correlated. However, the highly regular and repetitive nature of driving scenes allows visual quality to improve by modeling the dominant scene motion patterns, without necessarily preserving fine-grained dynamic behavior. As a result, fine-tuning encourages the model to prioritize surface-level realism over dynamic accuracy. To further examine this phenomenon, we show that simple continual learning strategies, such as replay from diverse domains, can offer a balanced alternative, preserving spatial accuracy while maintaining strong visual quality.
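The replay strategy mentioned in the abstract can be sketched as a standard fine-tuning loop that periodically substitutes batches from a diverse source domain. The following is a minimal, hypothetical PyTorch-style sketch under stated assumptions; the names model, driving_loader, diverse_loader, the batch keys, and the mixing schedule are illustrative placeholders, not the authors' implementation.

import itertools
import torch

def finetune_with_replay(model, driving_loader, diverse_loader,
                         optimizer, loss_fn, steps=1000, replay_every=4):
    """Fine-tune on driving data, replaying diverse-domain batches.

    Every `replay_every` steps, a batch from the diverse source domain
    is used instead of a driving batch, so the model keeps seeing varied
    scene dynamics while adapting to the driving domain.
    """
    driving_iter = itertools.cycle(driving_loader)  # repeat driving batches
    diverse_iter = itertools.cycle(diverse_loader)  # repeat replay batches
    model.train()
    for step in range(steps):
        # Hypothetical batch format: a dict with input frames and targets.
        batch = next(diverse_iter) if step % replay_every == 0 else next(driving_iter)
        optimizer.zero_grad()
        loss = loss_fn(model(batch["frames"]), batch["target"])
        loss.backward()
        optimizer.step()

The fixed interleaving ratio here is one simple choice; any schedule that keeps diverse-domain gradients in the mix serves the same purpose of preserving dynamic accuracy during domain-specific fine-tuning.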

Metadata-only record; no files are available for this record.