Deep Learning-based Segmentation of Cracks within a Photogrammetry Solution

Fully-Supervised Learning, Transfer Learning and Photogrammetric Image Processing

Abstract

The city of Amsterdam faces the challenge of monitoring and assessing 200 kilometers of historic quay walls, much of which is deemed to be in poor condition. A key monitoring technique is photogrammetry, which is used for deformation analysis. The data underlying this deformation analysis is a collection of overlapping images acquired of the masonry quay walls. Focusing solely on deformations overlooks a wealth of information that could be retrieved from this imagery, such as the presence of cracks in the quay walls, a key indicator of potential structural deformation.
As manual visual inspection of this imagery is very time-consuming, this work proposes a methodology based on fully-supervised deep learning segmentation techniques to detect and localize cracks in the masonry quay walls. For this purpose, two neural networks are trained: one for segmenting quay walls in the images and one for segmenting cracks.
The neural network architectures considered in this work are DeepLabV3+, FPN, MANet and LinkNet, combined with different encoders and loss functions. For quay wall segmentation, we apply transfer learning to a network trained on masonry walls and fine-tune it specifically for quay walls. Here, DeepLabV3+ with a ResNeXt-50 encoder was found to be most effective, achieving an F1-score of 96.3% on the test set. For crack segmentation, FPN with a ResNeSt-50 encoder performed best, resulting in a test-set F1-score of 78.8%.
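As an illustration only, such architecture and encoder combinations could be instantiated with the segmentation_models_pytorch library; the sketch below is an assumption about tooling and encoder identifiers, not a description of the exact setup used in the thesis.

    import segmentation_models_pytorch as smp

    # Quay wall segmentation: DeepLabV3+ with a ResNeXt-50 encoder,
    # initialised with pre-trained weights and then fine-tuned (transfer learning).
    wall_model = smp.DeepLabV3Plus(
        encoder_name="resnext50_32x4d",   # assumed encoder identifier
        encoder_weights="imagenet",       # pre-trained weights to fine-tune from
        in_channels=3,
        classes=1,                        # binary mask: quay wall vs. background
    )

    # Crack segmentation: FPN with a ResNeSt-50 encoder.
    crack_model = smp.FPN(
        encoder_name="timm-resnest50d",   # assumed encoder identifier
        encoder_weights="imagenet",
        in_channels=3,
        classes=1,                        # binary mask: crack vs. background
    )

    # One possible loss choice for strongly imbalanced binary segmentation.
    loss = smp.losses.DiceLoss(mode="binary")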
Inference with the crack network uses a multi-level scheme to detect cracks at different image scales and to increase output confidence.
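One way to realise such a multi-level scheme is to run the network on rescaled copies of the image and average the resulting probability maps; the sketch below illustrates this idea under that assumption and is not necessarily the exact scheme used in the thesis.

    import torch
    import torch.nn.functional as F

    def multi_scale_predict(model, image, scales=(1.0, 0.5, 0.25)):
        """Average crack probabilities predicted at several image scales.

        image: tensor of shape (1, 3, H, W), already normalised.
        Assumes the rescaled sizes remain compatible with the network's stride.
        Returns a (1, 1, H, W) probability map.
        """
        _, _, h, w = image.shape
        prob_sum = torch.zeros(1, 1, h, w)
        with torch.no_grad():
            for s in scales:
                scaled = F.interpolate(image, scale_factor=s,
                                       mode="bilinear", align_corners=False)
                probs = torch.sigmoid(model(scaled))
                # Resample back to the original resolution before averaging.
                prob_sum += F.interpolate(probs, size=(h, w),
                                          mode="bilinear", align_corners=False)
        return prob_sum / len(scales)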
The inherent photogrammetric properties of the imagery have proven vital for further post-processing steps, such as aggregating overlapping predictions, which further increases prediction confidence.
Photogrammetry also enables converting pixel-wise predictions into crack length and crack width, expressed in meters and millimeters respectively. The methodology additionally proposes photogrammetric image processing methods to transform the neural network predictions into a 3D representation and a true-to-scale orthographic 2D image.
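A simplified illustration of this pixel-to-metric conversion, assuming a known ground sampling distance (GSD) per image, could look as follows; the actual pipeline derives scale from the photogrammetric reconstruction, and the function name and example values below are hypothetical.

    def crack_size_in_metric_units(length_px, width_px, gsd_m_per_px):
        """Convert pixel measurements to metric units via the ground sampling
        distance (meters on the object per image pixel).

        Returns (crack length in meters, crack width in millimeters).
        """
        length_m = length_px * gsd_m_per_px
        width_mm = width_px * gsd_m_per_px * 1000.0
        return length_m, width_mm

    # Example with an assumed GSD of 1 mm per pixel:
    length_m, width_mm = crack_size_in_metric_units(850, 3, 0.001)
    # -> a crack 0.85 m long and 3 mm wide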
Additionally, a concise visual evaluation was conducted to assess prediction performance on an otherwise unlabelled dataset.
This thesis presents an engineering effort towards fully-supervised crack localization in photogrammetrically processed images, with generalization towards automatic assessment in mind.