The impact of spatial resolution and satellite data type characteristics on Automated Damage Assessment using a Convolutional Neural Network

Abstract

Satellite data, such as optical and Synthetic Aperture Radar (SAR) imagery, can provide information about the location and level of destruction caused by natural hazards. This information is essential for humanitarian aid organisations to optimise rescue mission logistics and save people in need. Many Automatic Damage Assessment (ADA) methods currently exist, each designed explicitly for one data type with a corresponding spatial resolution. However, weather and satellite coverage conditions can hinder rapid and complete data acquisition after large events. It is therefore important to identify the limits and capabilities of novel methodologies by testing various data availability scenarios and adjusting them to become robust and widely deployable.
In this research, the Convolutional Neural Network Caladrius of 510, an organisation of the Red Cross Netherlands, is selected to perform the experiments. The model was originally designed to take high-resolution imagery as input and is based on a Siamese architecture, comprising two Inception-V3 modules followed by three fully connected layers. The experiments are based on single-, dual-, and cross-mode scenarios, representing data characteristics with varying resolutions, satellite sources, and observation sensor types. The xBD dataset provides pre- and post-event high-resolution optical imagery of numerous disasters with corresponding validated damage labels for the included buildings. This dataset is subsequently replicated in three down-sampled versions, as well as with Sentinel-2 L1C and Sentinel-1 GRD data. Using the Macro F1-score and Cohen's Kappa coefficient, the performances are compared and the reliability of the predictions in operational situations is determined.
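To make the architecture description concrete, the following is a minimal sketch of such a Siamese set-up, assuming PyTorch and torchvision; the class name SiameseDamageNet, the hidden-layer widths, and the four-class output are illustrative assumptions rather than Caladrius' exact configuration.

import torch
import torch.nn as nn
from torchvision import models

class SiameseDamageNet(nn.Module):  # hypothetical name, not from the thesis
    def __init__(self, num_classes=4):
        super().__init__()
        # Two Inception-V3 branches encode the pre- and post-event building patches.
        self.pre_branch = models.inception_v3(weights=None, aux_logits=False)
        self.post_branch = models.inception_v3(weights=None, aux_logits=False)
        # Strip the classification heads so each branch emits a 2048-d feature vector.
        self.pre_branch.fc = nn.Identity()
        self.post_branch.fc = nn.Identity()
        # Three fully connected layers map the concatenated features to damage classes;
        # the widths 512 and 128 are assumptions for illustration.
        self.head = nn.Sequential(
            nn.Linear(2048 * 2, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, pre_img, post_img):
        # Inception-V3 expects 3x299x299 inputs for each image in the pair.
        features = torch.cat([self.pre_branch(pre_img), self.post_branch(post_img)], dim=1)
        return self.head(features)

Feeding the pre- and post-event patches through separate branches and only fusing them in the fully connected head is what lets the model compare the two acquisitions rather than classify a single image.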
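Likewise, the two evaluation metrics named above can be computed with scikit-learn; the label arrays below are placeholders standing in for the validated damage labels and the model predictions.

from sklearn.metrics import f1_score, cohen_kappa_score

y_true = [0, 1, 2, 3, 1, 0]  # placeholder validated damage labels
y_pred = [0, 1, 1, 3, 2, 0]  # placeholder model predictions

macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1
kappa = cohen_kappa_score(y_true, y_pred)             # agreement corrected for chance
print(f"Macro F1: {macro_f1:.3f}, Cohen's kappa: {kappa:.3f}")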
The results indicate that a lower resolution of the input data has a negative effect on the number of correctly classified buildings. The loss in performance is not linear, as most damage properties are captured between 0.5- and 2.5-metre resolution. Consequently, this implies that the 10-metre-resolution Sentinel datasets provide few recognisable features. The Sentinel-2 L1C experiment outperforms the Sentinel-1 GRD experiment, whose performance equals that of a random classifier. However, no final conclusion is drawn about the model's true prediction rate with respect to the input data type (optical versus SAR imagery), owing to the non-optimal experimental circumstances and the limited number of included datasets. Furthermore, the results from the dual-mode mapping showcase the importance of identical data characteristics between the train and test datasets. Conversely, the cross-mode experiments show that it is not essential to match the resolutions of the pre- and post-event imagery. The latter finding is very promising for the Red Cross, as it creates the flexibility to construct datasets quickly after a disaster has struck.