This study investigated the feasibility of deep learning-based segmentation for intra-abdominal
kidney ultrasound registration, with the aim of enabling image-guided robotic-assisted partial nephrectomy (RAPN).
Two state-of-the-art models, DeepLabV3+ and SAMUS, were trained and evaluated using a novel
intra-abdominal kidney ultrasound (IAKUS) dataset of 2,265 images from 15 RAPN patients.
Moreover, a transfer-learning approach was adopted, using the publicly available open kidney ultrasound (OKUS) dataset for pre-training. Results showed that SAMUS consistently outperformed
DeepLabV3+ across all metrics, achieving an average Dice score of 88.0 ± 2.0% and a Hausdorff distance of 13.7 ± 3.8 mm, consistent with values reported in the literature. Because SAMUS was pre-trained on ~30k ultrasound
images, it could also be evaluated zero-shot, in which setting it still outperformed the trained DeepLabV3+ configurations. Furthermore, no measurable difference was observed between OKUS and IAKUS training. Together, these findings
suggest that ultrasound-specific features may matter more than organ-specific features for training, and that data diversity may matter more than strict anatomical similarity.
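For reference, the two reported segmentation metrics can be computed from binary masks as in the following minimal sketch (the use of NumPy/SciPy, a full-mask rather than boundary-only Hausdorff distance, and isotropic pixel spacing are assumptions for illustration, not the authors' exact evaluation pipeline):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hausdorff_mm(pred: np.ndarray, gt: np.ndarray, spacing_mm: float = 1.0) -> float:
    """Symmetric Hausdorff distance between foreground pixel sets, scaled to mm
    (assumes isotropic pixel spacing; boundary extraction is omitted for brevity)."""
    p, g = np.argwhere(pred.astype(bool)), np.argwhere(gt.astype(bool))
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0]) * spacing_mm
```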
The SAMUS model achieved a registration accuracy of 4.3 ± 2.8 mm and an inference speed of 4.35 fps, in line with
clinical feasibility thresholds reported in the literature. Target registration accuracy even improved by an average
of 2.6 ± 4.2 mm in 11 of 13 patients compared with manual registration. These results demonstrate that
deep learning-based registration is not only feasible in a clinical setting but can exceed manual
registration.
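As a worked illustration of the registration accuracy figures above, a target registration error can be computed as the mean distance between mapped and ground-truth target points; the rigid 4x4 homogeneous transform and point-based targets in this sketch are assumptions for illustration, and the paper's actual registration pipeline may differ:

```python
import numpy as np

def target_registration_error(T: np.ndarray, src_pts: np.ndarray, dst_pts: np.ndarray) -> float:
    """Mean Euclidean distance (in mm) between source targets mapped by the
    estimated 4x4 homogeneous transform T and their ground-truth positions."""
    src_h = np.c_[src_pts, np.ones(len(src_pts))]   # N x 4 homogeneous coordinates
    mapped = (src_h @ T.T)[:, :3]                   # apply transform to all targets
    return float(np.linalg.norm(mapped - dst_pts, axis=1).mean())
```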