Reliable Mutual Distillation for Medical Image Segmentation Under Imperfect Annotations

Convolutional neural networks (CNNs) have made enormous progress in medical image segmentation. The learning of CNNs depends on a large amount of training data with fine annotations. The workload of data labeling can be significantly relieved by collecting imperfect annotations that only coarsely match the underlying ground truths. However, label noise, which is systematically introduced by the annotation protocols, severely hinders the learning of CNN-based segmentation models. Hence, we devise a novel collaborative learning framework in which two segmentation models cooperate to combat label noise in coarse annotations. First, the complementary knowledge of the two models is exploited by having each model clean the training data for the other. Second, to further alleviate the negative impact of label noise and make full use of the training data, the reliable knowledge specific to each model is distilled into the other model with augmentation-based consistency constraints. A reliability-aware sample selection strategy is incorporated to guarantee the quality of the distilled knowledge. Moreover, we employ joint data and model augmentations to broaden the use of reliable knowledge. Extensive experiments on two benchmarks showcase the superiority of our proposed method over existing methods under annotations with different noise levels. For example, our approach can improve existing methods by nearly 3% DSC on the lung lesion segmentation dataset...
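The abstract gives no implementation details, but the two ideas it outlines, cross-model cleaning of noisy labels and reliability-gated mutual distillation under augmentation-based consistency, follow a recognizable pattern. The sketch below is an illustrative PyTorch rendering of that general pattern, not the authors' method; `model_a`, `model_b`, `augment`, `keep_ratio`, and `temp` are assumed names and hyperparameters.

```python
# Minimal sketch (not the paper's code) of two ideas described in the abstract:
# (1) each model keeps only low-loss ("reliable") pixels to supervise its peer,
# (2) each model's reliable soft predictions are distilled into the peer on an
#     augmented view of the input (augmentation-based consistency).
import torch
import torch.nn.functional as F

def select_reliable(loss_map: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Small-loss criterion: mask keeping the lowest-loss fraction of pixels."""
    flat = loss_map.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = flat.kthvalue(k).values
    return (loss_map <= threshold).float()

def mutual_distillation_step(model_a, model_b, images, noisy_labels,
                             augment, keep_ratio=0.7, temp=2.0):
    logits_a, logits_b = model_a(images), model_b(images)

    # Per-pixel supervised losses against the (possibly noisy) coarse labels.
    loss_a = F.cross_entropy(logits_a, noisy_labels, reduction="none")
    loss_b = F.cross_entropy(logits_b, noisy_labels, reduction="none")

    # Cross cleaning: each model trains only on pixels its peer deems reliable.
    mask_for_a = select_reliable(loss_b.detach(), keep_ratio)
    mask_for_b = select_reliable(loss_a.detach(), keep_ratio)
    sup_a = (loss_a * mask_for_a).sum() / mask_for_a.sum().clamp(min=1)
    sup_b = (loss_b * mask_for_b).sum() / mask_for_b.sum().clamp(min=1)

    # Distillation with consistency: the peer's soft predictions on the clean
    # view supervise each model's predictions on an augmented view.
    aug_images = augment(images)
    kd_a = F.kl_div(F.log_softmax(model_a(aug_images) / temp, dim=1),
                    F.softmax(logits_b.detach() / temp, dim=1),
                    reduction="batchmean") * temp ** 2
    kd_b = F.kl_div(F.log_softmax(model_b(aug_images) / temp, dim=1),
                    F.softmax(logits_a.detach() / temp, dim=1),
                    reduction="batchmean") * temp ** 2

    return sup_a + sup_b + kd_a + kd_b
```

The reliability-aware selection in the paper is likely more elaborate than the simple small-loss mask used here, and the joint data and model augmentations are reduced to a single `augment` callable; the sketch only shows how the pieces could fit together in one training step.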
Source: IEEE Transactions on Medical Imaging - Category: Biomedical Engineering Source Type: research