Automated Rooftop Solar Panel Detection Through Convolutional Neural Networks

Journal Article (2024)
Authors

Simon Pena Pereira (Student TU Delft)

A. Rafiee (TU Delft - Digital Technologies)

S. Lhermitte (TU Delft - Mathematical Geodesy and Positioning, Katholieke Universiteit Leuven)

Research Group
Digital Technologies
To reference this document use:
https://doi.org/10.1080/07038992.2024.2363236
Publication Year
2024
Language
English
Issue number
1
Volume number
50
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Transforming the global energy sector from fossil-fuel based to renewable energy sources is crucial to limiting global warming and achieving climate neutrality. The decentralized nature of the renewable energy system allows private households to deploy photovoltaic (PV) systems on their rooftops. However, inconsistent data on installed PV systems complicate planning for an efficient grid expansion. To address this issue, deep-learning techniques can support the collection of data about PV systems from aerial and satellite imagery. Previous research, however, lacks consideration of the area-specific characteristics of PV panels in the ground truth data. This study implements a semantic segmentation model that detects PV systems in aerial imagery to explore how area-specific characteristics of the training data and CNN hyperparameters affect the performance of a CNN. Hence, a U-Net architecture is employed to analyze land use types, rooftop colors, and lower-resolution images. Additionally, the impact of near-infrared data on the detection rate of PV panels is analyzed. The results indicate that a U-Net is suitable for classifying PV panels in high-resolution aerial imagery (10 cm), reaching F1 scores of up to 91.75%, while demonstrating the importance of adapting the training data to area-specific ground truth data concerning urban and architectural properties.
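The F1 score cited above is the standard pixel-wise metric for binary segmentation: the harmonic mean of precision and recall over predicted versus ground-truth PV-panel masks. The following is a minimal sketch of that computation, not code from the paper; the function name and the example masks are illustrative assumptions.

```python
def f1_score(pred, truth):
    """Pixel-wise F1 between two binary masks given as lists of 0/1 rows.

    F1 = 2 * precision * recall / (precision + recall), where precision
    and recall are computed from true positives, false positives, and
    false negatives counted over all pixels.
    """
    tp = fp = fn = 0
    for pred_row, truth_row in zip(pred, truth):
        for p, t in zip(pred_row, truth_row):
            tp += 1 if (p and t) else 0        # PV pixel correctly detected
            fp += 1 if (p and not t) else 0    # false alarm
            fn += 1 if (t and not p) else 0    # missed PV pixel
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0


# Toy 2x3 masks (hypothetical): one false alarm and one missed pixel.
pred = [[1, 1, 0], [0, 1, 0]]
truth = [[1, 0, 0], [0, 1, 1]]
print(round(f1_score(pred, truth), 3))  # precision = recall = 2/3
```

In practice the same counts would be accumulated over every pixel of the U-Net's thresholded output across the test tiles; the paper's reported 91.75% corresponds to this quantity expressed as a percentage.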