Reconstructing high-resolution wind fields from sparse, low-resolution observations is a critical yet ill-posed problem in meteorological modeling. Classical approaches, such as Computational Fluid Dynamics (CFD), are often too computationally intensive to meet the demands of real-time or large-scale industrial applications. Meanwhile, conventional data-driven methods like Convolutional Neural Networks (CNNs) tend to produce overly smoothed outputs and struggle to recover fine-scale structures, especially under severe data sparsity.
This thesis explores the use of diffusion-based generative models for super-resolution in wind field reconstruction. A progressive SR3 (Super-Resolution via Repeated Refinement) framework is developed, combining a multi-stage architecture with stochastic denoising processes to gradually reconstruct high-resolution outputs. Extensive experiments demonstrate that the progressive SR3 consistently outperforms CNN-based baselines in terms of reconstruction accuracy, perceptual quality, and robustness. Furthermore, a joint training strategy improves both performance and computational efficiency by enabling end-to-end optimization across stages.
The findings support the use of probabilistic diffusion models for meteorological super-resolution tasks and underscore the effectiveness of progressive refinement in handling large upscaling factors. This approach offers a promising pathway for enhancing data-driven post-processing in atmospheric modeling.