Quantitative MR inter-scanner harmonization using image style transfer

Abstract

Quantitative MR produces images that encode meaningful physical and chemical characteristics of tissue, allowing comparison between patients. Acquiring this type of MRI with dedicated correction sequences aims to remove as many of the standardization problems inherent to the technique as possible. However, the results are not perfect. Deep learning could be used to complete this harmonization after acquisition. In this project, a CycleGAN is used to transfer the style of one specific MRI scanner onto the images of another specific scanner. The aim is to achieve better harmonization that eases subsequent image analysis and possibly addresses other issues such as hardware obsolescence. Inspired by the literature, in which image style transfer has not previously been applied to this type of image, different methodologies are tested. Some have been applied to other MRI modalities, such as adding an extra similarity measure to the loss function. One novel implementation is tested: an extra discriminator that reinforces the classification of the original and fake/generated images of one scanner as a single class, as opposed to the class formed by the original and fake images of the other scanner. Validation is based on visual inspection; histogram comparison; SSIM, NRMSE and correlation measures; and CNN classification of the generated images by a network trained to distinguish between the two scanners. Experiments are inconclusive as to whether the general CycleGAN loss function can be applied to a set of images with such high visual similarity. A further study of the specific features a discriminator uses to classify images as coming from a given scanner could help design a loss function whose optimization produces the desired results.
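As an illustration of the quantitative validation step mentioned above, the following is a minimal sketch of how SSIM, NRMSE and correlation could be computed between a generated (style-transferred) slice and a reference slice, using scikit-image and NumPy. The function name and the assumption that both inputs are 2-D arrays on the same intensity scale are illustrative, not taken from the project.

```python
import numpy as np
from skimage.metrics import structural_similarity, normalized_root_mse

def harmonization_metrics(reference_img, generated_img):
    """Compare a generated slice against a reference slice.

    Both inputs are assumed to be 2-D NumPy arrays of the same shape
    and on the same intensity scale (an assumption for this sketch).
    """
    # Intensity range of the reference image, needed by SSIM.
    data_range = float(reference_img.max() - reference_img.min())

    # Structural similarity between the two slices.
    ssim = structural_similarity(reference_img, generated_img, data_range=data_range)

    # Normalized root-mean-square error.
    nrmse = normalized_root_mse(reference_img, generated_img)

    # Pearson correlation of the flattened intensity values.
    corr = np.corrcoef(reference_img.ravel(), generated_img.ravel())[0, 1]

    return {"SSIM": ssim, "NRMSE": nrmse, "correlation": corr}
```

In practice, such metrics would be averaged over all slices of a held-out set and complemented by the visual, histogram-based and CNN-based checks described in the abstract.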