Unsupervised Cross-Modality Adaptation via Dual Structural-Oriented Guidance for 3D Medical Image Segmentation

Deep convolutional neural networks (CNNs) have achieved impressive performance in medical image segmentation; however, their performance can degrade significantly when they are deployed on unseen data with heterogeneous characteristics. Unsupervised domain adaptation (UDA) is a promising solution to this problem. In this work, we present a novel UDA method, named dual adaptation-guiding network (DAG-Net), which incorporates two highly effective and complementary forms of structure-oriented guidance in training to collaboratively adapt a segmentation model from a labeled source domain to an unlabeled target domain. Specifically, our DAG-Net consists of two core modules: 1) Fourier-based contrastive style augmentation (FCSA), which implicitly guides the segmentation network to focus on learning modality-insensitive, structure-relevant features, and 2) residual space alignment (RSA), which provides explicit guidance to enhance the geometric continuity of predictions in the target modality based on a 3D prior of inter-slice correlation. We have extensively evaluated our method on cardiac substructure and abdominal multi-organ segmentation for bidirectional cross-modality adaptation between MRI and CT images. Experimental results on the two tasks demonstrate that our DAG-Net greatly outperforms state-of-the-art UDA approaches for 3D medical image segmentation on unlabeled target images.
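To make the FCSA idea concrete, below is a minimal sketch of Fourier-based cross-modality style augmentation: the low-frequency amplitude spectrum of a source slice (its "style") is replaced with that of a target slice while the source phase (its anatomical structure) is preserved. The mask shape, the `beta` parameter, and the function name are illustrative assumptions, not the authors' exact formulation, and the contrastive component of FCSA is omitted.

```python
import numpy as np

def fourier_style_swap(src, tgt, beta=0.1):
    """Replace the low-frequency amplitude of `src` with that of `tgt`,
    keeping the phase of `src` so anatomical structure is preserved.

    src, tgt: 2D slices of equal shape.
    beta: fraction of the spectrum treated as low-frequency "style"
          (an assumed hyperparameter for this sketch).
    """
    # Centred Fourier spectra of both slices.
    f_src = np.fft.fftshift(np.fft.fft2(src))
    f_tgt = np.fft.fftshift(np.fft.fft2(tgt))

    amp_src, pha_src = np.abs(f_src), np.angle(f_src)
    amp_tgt = np.abs(f_tgt)

    # Swap amplitudes inside a square low-frequency window at the centre.
    h, w = src.shape
    b = max(1, int(min(h, w) * beta))
    cy, cx = h // 2, w // 2
    amp_src[cy - b:cy + b, cx - b:cx + b] = amp_tgt[cy - b:cy + b, cx - b:cx + b]

    # Recombine swapped amplitude with the original phase and invert.
    f_mix = amp_src * np.exp(1j * pha_src)
    styled = np.fft.ifft2(np.fft.ifftshift(f_mix))
    return np.real(styled)
```

A source image transformed this way keeps its segmentation labels valid while adopting target-like intensity statistics, which is what encourages the network to rely on structural rather than modality-specific cues.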
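The RSA module's inter-slice prior can likewise be sketched as a penalty on slice-to-slice residuals of the predicted volume. The L1 statistics-matching loss below is an assumption chosen for brevity; the paper's actual alignment mechanism may differ (e.g., it could be adversarial), and both function names are hypothetical.

```python
import torch

def interslice_residual(vol):
    """vol: (D, C, H, W) probability volume; returns (D-1, C, H, W)
    residuals between adjacent slices, a proxy for geometric continuity."""
    return vol[1:] - vol[:-1]

def residual_alignment_loss(pred_tgt, labels_src):
    """Encourage target predictions to exhibit inter-slice residual
    statistics similar to source one-hot labels (both (D, C, H, W))."""
    r_tgt = interslice_residual(pred_tgt).abs().mean()
    r_src = interslice_residual(labels_src).abs().mean()
    return (r_tgt - r_src).abs()
```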
Source: IEEE Transactions on Medical Imaging - Category: Biomedical Engineering Source Type: research