Automated segmentation of lacunes of presumed vascular origin in brain MRI scans


Abstract

Lacunes of presumed vascular origin (lacunes) are small lesions in the brain and an important indicator of cerebral small vessel disease (cSVD). To gain more insight into this disease, obtaining more information about the shape, size, and location of lacunes is essential. However, manual segmentation (the voxel-wise labeling of lacunes in a scan) can be very time-consuming. In this thesis, we optimize convolutional neural networks using different loss functions to automate the segmentation of lacunes in full brain MRI scans. A set of 111 scans was used for development and a separate test set of 111 scans was used for evaluation. As lacunes are small, they generally occupy less than 0.02% of the voxels in an MRI scan. This leads to an extreme data imbalance between lacune voxels and background voxels, which complicates the optimization process. We trained networks with the binary cross-entropy (BCE) loss, the weighted binary cross-entropy (WBCE) loss, and the Dice loss. Additionally, we trained networks with two proposed adaptations of the Dice loss: the Dice-ReLU loss and the constrained Dice-ReLU (CDR) loss. Our experiments show that all losses except the BCE loss are able to cope with the data imbalance and learn to segment lacunes. The Dice-ReLU loss performs best on detection with a sensitivity of 0.79, but produces an excessive number of false positives (FPs), averaging 26 per image. Adding a size constraint (the constrained Dice-ReLU loss) reduces this considerably to 11.52 FPs per image, at a sensitivity of 0.70. The Dice loss has just 1.93 FPs per image, but reaches a detection sensitivity of only 0.43. However, the Dice loss achieves the best segmentation of the true positive (TP) elements, with a Dice similarity coefficient of 0.47. Compared to the Dice loss, the WBCE loss performs slightly worse on FPs per image and TP-element-wise segmentation, but slightly better on sensitivity. We developed methods that were able to learn to segment lacunes. However, the data imbalance still influences the optimization process considerably, leading to methods that focus either too much on the foreground or too much on the background. Further work on a loss function that copes better with this data imbalance could greatly improve lacune segmentation performance.
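For illustration, the sketch below shows standard PyTorch formulations of two of the losses named above. This is a minimal sketch, not the thesis's implementation: the function names, the pos_weight weighting scheme, and the smoothing constant eps are assumptions, and the proposed Dice-ReLU and constrained Dice-ReLU losses are not reproduced here because their exact definitions are not given in the abstract.

```python
import torch
import torch.nn.functional as F

def weighted_bce_loss(logits, targets, pos_weight=5000.0):
    """Weighted binary cross-entropy: up-weights the rare lacune
    (foreground) voxels to counter the extreme class imbalance.
    The value of pos_weight is an illustrative assumption."""
    w = torch.tensor([pos_weight], device=logits.device)
    return F.binary_cross_entropy_with_logits(logits, targets, pos_weight=w)

def soft_dice_loss(logits, targets, eps=1e-6):
    """Standard soft Dice loss: 1 minus the Dice similarity
    coefficient computed on predicted foreground probabilities.
    Because it normalizes by the total foreground volume, it is
    largely insensitive to the number of background voxels."""
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    dice = (2.0 * intersection + eps) / (probs.sum() + targets.sum() + eps)
    return 1.0 - dice

if __name__ == "__main__":
    # Toy 3D patch with roughly 0.02% foreground, mimicking the
    # lacune-to-background voxel imbalance described in the abstract.
    logits = torch.randn(1, 1, 32, 32, 32)
    targets = (torch.rand(1, 1, 32, 32, 32) < 0.0002).float()
    print("WBCE:", weighted_bce_loss(logits, targets).item())
    print("soft Dice:", soft_dice_loss(logits, targets).item())
```

The abstract's observation that plain BCE fails while the Dice loss copes with the imbalance is consistent with these formulations: unweighted BCE averages over all voxels, so the background term dominates the gradient, whereas the Dice loss is normalized by the foreground volume.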