SuperLoss: A Superpixel-Guided Loss for Noisy Label Semantic Segmentation in X-Ray Images

Abstract

Deep learning-based architectures have been applied to semantic segmentation tasks in medical imaging with great success. However, such models are heavily reliant on the quality of the ground-truth segmentation mask and hence are susceptible to label noise. To address this issue, this paper introduces SuperLoss, a loss function that pushes semantic boundaries towards superpixel edges. Superpixels are compact, homogeneous regions within an image that group pixels with similar characteristics, such as pixel intensity. Our loss can be combined with other loss functions for different segmentation architectures. We demonstrate our framework on a combination of two large public datasets of hip joint X-Ray images. We compare a U-Net model with and without our loss when trained with different fractions of noise in the training dataset. Our approach achieves a 1-2% improvement in Intersection-over-Union and Hausdorff distance in some cases, yet yields worse results in others. We also perform hypothesis testing and show that our results are statistically significant with low to medium effect size.
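
As a rough illustration of the superpixel step referenced above (not the paper's actual loss or pipeline), superpixels and their edges for a grayscale X-ray can be computed with scikit-image's SLIC implementation; the function name, parameter values, and preprocessing below are illustrative assumptions.

```python
# Minimal sketch: compute superpixels and their boundary map for a 2-D
# grayscale X-ray image using SLIC from scikit-image. Parameter values
# (n_segments, compactness) are illustrative assumptions, not the paper's.
import numpy as np
from skimage.segmentation import slic, find_boundaries

def superpixel_edges(xray: np.ndarray, n_segments: int = 400) -> np.ndarray:
    """Return a boolean map of superpixel boundaries for a grayscale image."""
    # SLIC groups pixels with similar intensity and position into compact,
    # homogeneous regions.
    segments = slic(
        xray,
        n_segments=n_segments,
        compactness=0.1,
        channel_axis=None,  # single-channel (grayscale) input
    )
    # Edges between neighbouring superpixels; a boundary-aware loss term could
    # encourage predicted semantic boundaries to align with this map.
    return find_boundaries(segments, mode="thick")
```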