Needle Detection and Localization in Ultrasound Images based on Deep Learning

Abstract

The technique of ultrasound-guided needle insertion is commonly employed in various clinical fields, including biopsy, anesthesia, brachytherapy, and ablation. However, the visibility of the needle in ultrasound (US) images remains a persistent challenge. To improve the accuracy of needle guidance during interventions, it is crucial to develop a reliable technique that enhances needle visibility in US images and accurately detects the needle's position and orientation. Recently, deep learning (DL) based segmentation methods have attracted attention for their efficiency and accuracy. To improve model performance on challenging datasets, previous researchers have modified segmentation models by introducing spatial attention mechanisms and temporal information. However, whether these approaches are effective for ultrasound images, and for needle-insertion tasks in particular, remains unclear.
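As context for the spatial attention mechanisms mentioned above, the sketch below shows a minimal additive attention gate in the style of Attention U-Net (Oktay et al., 2018). The wiring and layer sizes are illustrative assumptions, not the exact gates used in this work.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive spatial attention gate in the style of Attention U-Net.

    The gating signal ``g`` (coarse decoder features) modulates the
    skip-connection features ``x`` so that the decoder attends to
    needle-relevant regions. Both inputs are assumed to share the same
    spatial size here; all layer sizes are illustrative, not the thesis's
    exact configuration.
    """

    def __init__(self, g_channels: int, x_channels: int, inter_channels: int):
        super().__init__()
        self.w_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.psi = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(inter_channels, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel attention coefficients in [0, 1]
        )

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        alpha = self.psi(self.w_g(g) + self.w_x(x))  # additive attention map
        return x * alpha  # suppress responses outside the attended region
```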

This thesis investigates whether deep learning models can improve both segmentation accuracy and localization accuracy in 2D ultrasound images, focusing on the introduction of spatial attention and optical flow information into a U-Net backbone. Two models are proposed: Spatial Mask Attention U-Net (SMA-UNet) and Optical Flow Attention U-Net (OFA-UNet). Hierarchical experiments were designed to evaluate the effects of the training loss, mask width, and optical flow method, and to select an optimal configuration for the segmentation models. U-Net, Attention U-Net, and the two proposed models were then validated on datasets collected from pork and beef phantoms as well as from patients. The evaluation results indicate that OFA-UNet improves significantly on both segmentation metrics and geometrical errors compared to the U-Net baseline and the variant using mask attention alone. Specifically, the model achieved a Dice score of 86.7%, an IoU of 88.2%, a precision of 88.6%, a tip error of 2.7 mm, and an angular error of 0.002 radians on the pork dataset. Furthermore, OFA-UNet shows robust and consistent evaluation metrics across the three datasets, indicating its ability to adapt to US data of varying complexity.
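For readers unfamiliar with the reported metrics, the sketch below illustrates how Dice, IoU, and the geometric errors could be computed from binary needle masks. The line fit, tip heuristic, and pixel spacing are illustrative assumptions rather than the thesis's exact evaluation procedure.

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Dice and IoU for binary masks of shape (H, W) with values in {0, 1}."""
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + eps)
    iou = inter / (np.logical_or(pred, gt).sum() + eps)
    return float(dice), float(iou)

def needle_geometry(mask: np.ndarray, px_to_mm: float = 0.1):
    """Fit a line to the segmented needle; return (tip in mm, angle in rad).

    Assumes a non-vertical needle, that the tip is the deepest
    (largest-row) foreground pixel, and a known pixel spacing
    ``px_to_mm``; all three are illustrative assumptions.
    """
    ys, xs = np.nonzero(mask)
    slope, _ = np.polyfit(xs, ys, deg=1)  # least-squares fit of the shaft axis
    angle = np.arctan(slope)              # axis orientation in radians
    tip = np.argmax(ys)                   # deepest pixel taken as the tip
    return (xs[tip] * px_to_mm, ys[tip] * px_to_mm), float(angle)

# Tip error     = Euclidean distance between predicted and true tips (mm).
# Angular error = absolute difference between the two axis angles (radians).
```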