Automated Correction of Refraction Residuals

Abstract

In a world of high-precision sensors, one of the few remaining challenges in multibeam echosounding is refraction-based uncertainty. A poor understanding of oceanographic variability or a poor choice of equipment can lead directly to poor-quality bathymetric data. Post-processing software tools have existed for some time to allow data processors to correct the resulting refraction artifacts. These tools typically involve manual review of soundings and manual adjustment of a small set of parameters to achieve the desired correction. Though a number of commercial solutions are currently available, they all share the same inherent weaknesses: (1) they are manual and thus time-consuming, (2) they are subjective and thus not repeatable, and (3) they require expert training and are typically usable only by experienced personnel. QPS and the Technical University of Delft, The Netherlands (TU Delft) have worked together to implement an algorithm that addresses these issues in QPS’ post-processing software, Qimera. The algorithm, the TU Delft Sound Speed Inversion, works by taking advantage of the overlap between survey lines, harnessing the redundancy of multiple observations. For a given set of pings, the algorithm simultaneously estimates sound speed corrections for the chosen pings and their neighbors by computing a best-fit solution that minimizes the mismatch in the areas of overlap between lines. This process is repeated across the entire spatial area, allowing for an adaptive solution that responds to changes in oceanographic conditions. The process is fully automated and requires no operator interaction or data review. The algorithm is also physics-based in that it honors acoustic ray bending. For accountability, the algorithm preserves the output of the inversion process for review, vetting, adjustment, and reporting. In this paper, we briefly explain how the algorithm works in simple terms. We also explore two data sets that cover differing oceanographic conditions, seabed morphologies, and survey line planning geometries in order to establish some early guiding principles on how far the algorithm can be pushed for performance.
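To make the overlap-based idea concrete, the sketch below is a minimal, hypothetical illustration of the general principle described in the abstract, not the TU Delft Sound Speed Inversion itself: a real implementation ray-traces each beam through the measured sound speed profile, whereas here a simplified linear depth-error sensitivity (the factor `k`) and per-line constant sound speed perturbations are assumed purely to show how redundant soundings in overlapping lines constrain a least-squares estimate that minimizes the mismatch between lines.

```python
# Toy illustration of estimating per-line sound speed corrections by minimizing
# depth mismatch in overlapping survey lines. Assumptions (not from the paper):
# depth_error ~= k * depth * delta_c * 1e-3, with k a hypothetical sensitivity.
import numpy as np

rng = np.random.default_rng(0)

n_lines = 3          # survey lines sharing a common overlap area
n_points = 200       # soundings per line that fall in the overlap
k = 0.8              # assumed depth-error sensitivity per unit sound speed error

true_depth = 50.0 + 5.0 * rng.random(n_points)   # common seabed in the overlap
true_dc = np.array([0.0, 1.5, -2.0])             # unknown sound speed errors (m/s)

# Simulate observed depths: shared seabed + refraction-like bias + noise.
obs = np.empty((n_lines, n_points))
for i in range(n_lines):
    bias = k * true_depth * true_dc[i] * 1e-3
    obs[i] = true_depth + bias + 0.02 * rng.standard_normal(n_points)

# Pairwise differences obs[i] - obs[j] cancel the unknown true depth and depend
# only on dc[i] - dc[j]; this is the redundancy exploited by overlap. Line 0 is
# held fixed (dc[0] = 0) to remove the common-mode ambiguity.
rows, rhs = [], []
for i in range(n_lines):
    for j in range(i + 1, n_lines):
        sens = k * 0.5 * (obs[i] + obs[j]) * 1e-3   # per-sounding sensitivity
        for s in range(n_points):
            row = np.zeros(n_lines)
            row[i], row[j] = sens[s], -sens[s]
            rows.append(row)
            rhs.append(obs[i, s] - obs[j, s])

A = np.array(rows)[:, 1:]   # drop column 0: line 0 is the reference
b = np.array(rhs)

est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("true dc relative to line 0:    ", true_dc[1:] - true_dc[0])
print("estimated dc (least squares):  ", est.round(3))
```

In this simplified setting the differencing of overlapping soundings removes the unknown seabed from the problem, so the least-squares solution recovers the relative sound speed perturbations directly from the inter-line mismatch; the actual algorithm applies the same redundancy principle while honoring full ray-bending physics.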