A Hybrid Deep Learning Pipeline for Improved Ultrasound Localization Microscopy
Tristan S.W. Stevens (Eindhoven University of Technology)
Elizabeth B. Herbst (Philips Research)
Ben Luijten (Eindhoven University of Technology)
Boudewine W. Ossenkoppele (TU Delft - ImPhys/Imaging Physics, TU Delft - ImPhys/Medical Imaging, Eindhoven University of Technology)
Thierry J. Voskuil (Eindhoven University of Technology)
Shiying Wang (Philips Research)
Jihwan Youn (Eindhoven University of Technology)
Claudia Errico (Philips Research)
Nicola Pezzotti (Eindhoven University of Technology, Philips Research)
Abstract
The image quality of ultrasound localization microscopy (ULM) is driven by the ability to accurately detect and track microbubbles (MBs) in vascular networks. This task becomes increasingly challenging at high MB concentrations and low signal-to-noise ratios, where individual MBs are difficult to differentiate and localize. Recent deep learning (DL) methods have demonstrated significant improvements over conventional approaches, but they depend on large amounts of realistic training data with corresponding ground-truth labels, which are difficult to obtain. The alternative, simulated data, in turn limits the generalizability of the method. In this work, we present a hybrid pipeline for ULM comprising data generation, localization, and tracking, which combines state-of-the-art conventional and DL techniques. We show that this approach produces high-quality velocity maps while generalizing well across different domains.
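To make the localization-and-tracking idea concrete, the sketch below shows a deliberately minimal toy version of the two core steps: detecting MB candidates as thresholded local maxima in each frame, and linking detections across consecutive frames with greedy nearest-neighbour matching. This is an illustrative assumption on our part, not the paper's actual pipeline (which uses conventional and DL components); the function names, threshold, and linking distance are all hypothetical.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def localize(frame, threshold):
    """Toy MB localization: pixels that are local maxima above a threshold."""
    peaks = (frame == maximum_filter(frame, size=3)) & (frame > threshold)
    return np.argwhere(peaks)  # array of (row, col) coordinates

def link(prev_pts, curr_pts, max_dist=2.0):
    """Toy tracking: greedily match each detection in the previous frame
    to its nearest neighbour in the current frame, within max_dist pixels."""
    tracks = []
    for p in prev_pts:
        d = np.linalg.norm(curr_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            tracks.append((tuple(int(x) for x in p),
                           tuple(int(x) for x in curr_pts[j])))
    return tracks

# Two toy frames with one bright "microbubble" moving one pixel to the right.
f0 = np.zeros((8, 8)); f0[4, 3] = 1.0
f1 = np.zeros((8, 8)); f1[4, 4] = 1.0
print(link(localize(f0, 0.5), localize(f1, 0.5)))  # [((4, 3), (4, 4))]
```

Real ULM replaces both steps with far more sophisticated components (e.g. sub-pixel localization and multi-frame trackers), precisely because simple peak detection and nearest-neighbour linking break down at the high MB concentrations and low SNR the abstract describes.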