Super-Resolution for Enhanced Aerial Imagery

Master Thesis (2025)
Author(s)

M. MICHALAS (TU Delft - Architecture and the Built Environment)

Contributor(s)

M. Meijers – Mentor (TU Delft - Digital Technologies)

Azarakhsh Rafiee – Mentor (TU Delft - Digital Technologies)

Faculty
Architecture and the Built Environment
Publication Year
2025
Language
English
Graduation Date
30-06-2025
Awarding Institution
Delft University of Technology
Programme
Geomatics
Sponsors
Readar B.V.
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

High-resolution aerial imagery plays a critical role in urban planning, energy mapping, and land-use
classification. However, many datasets remain limited to lower resolutions due to acquisition costs or
legacy data sources. Super-resolution (SR) techniques offer a means to enhance 25 cm aerial imagery
to 8 cm, making it more suitable for object-level analysis. This thesis investigates the capability of a
modified SRGAN architecture to enhance the visual and structural fidelity of aerial images, thereby
improving the representation of urban features such as rooftops, dormers, and solar panels. The
architecture incorporates an EdgeMaskBlock to improve edge awareness and preserve sharp contours
in reconstructed imagery.
To address the challenges of spatial complexity and temporal misalignment, a two-phase training strategy is implemented. First, the model is trained on synthetically downsampled HR-LR pairs to establish a robust initialization. This is followed by fine-tuning on real-world 25 cm inputs misaligned with their 8 cm HR counterparts, enabling the model to generalize under realistic and variable acquisition conditions.
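The first training phase relies on synthetically downsampled HR-LR pairs. The abstract does not state the degradation kernel (bicubic is typical for SR work), so the sketch below uses simple block averaging as a stand-in, with an integer factor of 3 approximating the non-integer 25 cm / 8 cm ratio; both choices are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def make_lr(hr: np.ndarray, factor: int = 3) -> np.ndarray:
    """Degrade an HR patch to a synthetic LR one by block averaging.

    Assumptions: `factor` stands in for the real ~3.1x resolution ratio,
    and block averaging stands in for the (unstated) degradation kernel.
    """
    h, w, c = hr.shape
    h, w = h - h % factor, w - w % factor   # crop to a multiple of factor
    hr = hr[:h, :w]
    # Reshape into (rows, factor, cols, factor, channels) blocks and average.
    return hr.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))
```

Pairs produced this way give the generator a clean, perfectly aligned supervision signal before the harder fine-tuning on misaligned real-world acquisitions.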
Evaluation is conducted across both training iterations using standard image quality metrics (PSNR, SSIM, LPIPS), along with downstream segmentation benchmarks. For Iteration 2, the generalization capability of the model is assessed across new cities and seasonal conditions. Two segmentation pipelines are used: the Segment Anything Model (SAM) and the operational semantic segmentation system developed by Readar B.V., which detects buildings, dormers, and PV panels using both RGB and DSM data. Metrics such as precision, recall, and F1-score demonstrate that super-resolved outputs significantly outperform bicubic upsampling, particularly for fine-scale rooftop objects.
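Of the three image quality metrics listed, PSNR is the only one that needs no dedicated library (SSIM and LPIPS are typically computed with scikit-image and the lpips package, respectively). A minimal sketch of PSNR over 8-bit imagery, independent of the thesis's actual evaluation code:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR means the super-resolved output is closer to the 8 cm ground truth pixel-wise, which is why it is usually paired with perceptual metrics such as LPIPS.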
The results show that the proposed SRGAN model improves perceptual quality while enabling
effective domain transfer across seasons. These enhancements contribute to more reliable segmentation
outputs, reinforcing the potential of GAN-based super-resolution as a practical tool in geospatial
workflows that require fine-grained object recognition.

Files

Michail_Michalas_P5.pdf
(pdf | 22.5 MB)
License info not available
P2_Michalis_Michalas.pdf
(pdf | 14.7 MB)
License info not available
P5.pdf
(pdf | 2.45 MB)
License info not available