NIH Research Festival
FAES Terrace
CC
BIOENG-2
Medical image registration is one of the essential processes in analyzing multiple images and diagnosing diseases. Although classical image registration methods, which solve an optimization problem to deform a moving image into a fixed image, have shown high-quality performance, they suffer from long processing times and extensive computational costs. To accelerate registration while maintaining performance, deep learning approaches have recently been developed that train neural networks by minimizing the energy functions of the classical algorithms. Since these learning-based methods can learn image registration in an unsupervised manner, they have been applied to various medical image registration tasks such as atlas-based registration and multi-phase image registration. However, multi-modal image registration remains challenging in that the network needs to handle the different data distributions of images from multiple modalities. To address this problem, we propose a domain-transported image registration method, called OTMorph. By employing a recent approach of neural optimal transport for image-to-image translation, we design a novel framework composed of a transport module and a registration module: the former transports the data distribution from the moving source domain to the fixed target domain, and the latter estimates the deformation from the transported data. Through end-to-end learning, our proposed method can effectively learn deformable registration for images with different distributions. Experimental results on abdominal multi-parametric MR image registration show that our method is superior at deforming multi-modal images compared to existing learning-based methods. We expect that our method can be useful for various image modalities.
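The two-module pipeline described above can be illustrated with a minimal sketch. This is not the authors' implementation: the `transport` function below is a crude stand-in for the neural optimal-transport module (it simply matches intensity statistics to the target domain), and `register` is a placeholder registration module that returns an identity displacement field; both names and signatures are hypothetical.

```python
import numpy as np

def transport(moving, target_mean, target_std):
    # Hypothetical stand-in for the neural optimal-transport module:
    # normalize the moving image's intensities, then rescale them to
    # match the fixed (target) domain's statistics.
    z = (moving - moving.mean()) / (moving.std() + 1e-8)
    return z * target_std + target_mean

def register(transported, fixed):
    # Hypothetical stand-in for the registration module: in OTMorph this
    # would be a learned network producing a dense deformation field.
    # Here it returns an identity (all-zero) displacement field and the
    # unwarped image, just to show the data flow.
    field = np.zeros(transported.shape + (2,))  # (H, W, 2) displacements
    warped = transported
    return field, warped

# Moving (source-domain) and fixed (target-domain) images with
# deliberately different intensity distributions.
rng = np.random.default_rng(0)
moving = rng.random((8, 8))
fixed = rng.random((8, 8)) * 2.0 + 1.0

# Stage 1: transport the moving image into the fixed image's domain.
transported = transport(moving, fixed.mean(), fixed.std())
# Stage 2: register the transported image to the fixed image.
field, warped = register(transported, fixed)
```

In the actual framework, both stages are neural networks trained jointly end to end, so the transport module learns a translation that makes the downstream registration easier.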
Scientific Focus Area: Biomedical Engineering and Biophysics
This page was last updated on Monday, September 25, 2023