Towards robust CT-ultrasound registration using deep learning methods

Conference Paper (2018)
Author(s)

Yuanyuan Sun (Erasmus MC)

Adriaan Moelker (Erasmus MC)

W.J. Niessen (Erasmus MC, TU Delft - ImPhys/Quantitative Imaging)

T. van Walsum (Erasmus MC)

Research Group
ImPhys/Quantitative Imaging
DOI
https://doi.org/10.1007/978-3-030-02628-8_5
Publication Year
2018
Language
English
Volume number
11038 LNCS
Pages (from-to)
43-51
ISBN (print)
978-3-030-02627-1

Abstract

Multi-modal registration, especially of CT/MR to ultrasound (US), remains a challenge, as conventional similarity metrics such as mutual information do not match the imaging characteristics of ultrasound. The main motivation for this work is to investigate whether a deep learning network can directly estimate the displacement between a pair of multi-modal image patches, without explicitly employing a similarity metric and an optimizer, the two main components of a conventional registration framework. The proposed DVNet is a fully convolutional neural network trained on a large set of artificially generated displacement vectors (DVs). The DVNet was evaluated on mono-modal and simulated multi-modal data, as well as on real CT and US liver slices (selected from 3D volumes). The results show that the DVNet is robust on the mono-modal and simulated multi-modal data, but does not yet work on the real CT and US images.
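The abstract states that the network is trained on artificially generated displacement vectors. A minimal sketch of how such a training pair might be constructed is shown below: a random ground-truth displacement is drawn, and two patches are extracted from the same image at positions offset by that displacement. The function name, patch size, and displacement range are illustrative assumptions, not details from the paper.

```python
import numpy as np

def make_training_pair(image, patch_size, max_disp, rng):
    """Sample one (fixed, moving, dv) training example.

    Illustrative sketch only: a random displacement vector `dv` is drawn,
    and two patches are cut from `image` at positions offset by `dv`,
    so the network can be trained to regress `dv` from the patch pair.
    """
    h, w = image.shape
    # ground-truth 2D displacement in pixels, in [-max_disp, max_disp]
    dv = rng.integers(-max_disp, max_disp + 1, size=2)
    # choose a top-left corner so both patches fit inside the image
    y = rng.integers(max_disp, h - patch_size - max_disp)
    x = rng.integers(max_disp, w - patch_size - max_disp)
    fixed = image[y:y + patch_size, x:x + patch_size]
    moving = image[y + dv[0]:y + dv[0] + patch_size,
                   x + dv[1]:x + dv[1] + patch_size]
    return fixed, moving, dv

rng = np.random.default_rng(0)
img = rng.random((64, 64))
fixed, moving, dv = make_training_pair(img, patch_size=16, max_disp=8, rng=rng)
```

In a multi-modal setting, `fixed` and `moving` would come from spatially aligned images of different modalities (e.g. CT and simulated US) rather than from a single image.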

Metadata only record. There are no files for this record.