
Deep Learning based Inter-Modality Image Registration Supervised by Intra-Modality Similarity



Abstract

Non-rigid inter-modality registration can facilitate accurate information fusion across different modalities, but it is challenging because image appearance differs greatly between modalities. In this paper, we propose to train a non-rigid inter-modality image registration network that directly predicts the transformation field from the input multimodal images, such as CT and MR images. In particular, the training of our inter-modality registration network is supervised by an intra-modality similarity metric based on available paired data, which is derived from a pre-aligned CT and MR dataset. Specifically, in the training stage, to register the input CT and MR images, their similarity is evaluated between the warped MR image and the MR image that is paired with the input CT. In this way, the intra-modality similarity metric can be applied directly to measure whether the input CT and MR images are well registered. Moreover, we adopt a dual-modality scheme, in which similarity is measured in both the CT and MR modalities, so that the complementary anatomies in the two modalities are jointly considered to train the inter-modality registration network more accurately. In the testing stage, the trained inter-modality registration network can be applied directly to register new multimodal images without any paired data. Experimental results show that the proposed method achieves promising accuracy and efficiency for the challenging non-rigid inter-modality registration task and outperforms state-of-the-art approaches.
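The dual-modality training signal described in the abstract can be illustrated with a short sketch. The following PyTorch fragment is not the authors' code: the names `reg_net`, `warp`, and `ncc_loss`, the tensor layouts, and the use of global normalized cross-correlation as the intra-modality similarity metric are all assumptions of this sketch. It only shows the key idea that the predicted deformation field is scored with same-modality comparisons: the warped moving MR against the MR pre-aligned with the fixed CT, and the CT pre-aligned with the moving MR (warped by the same field) against the fixed CT.

```python
import torch
import torch.nn.functional as F

def ncc_loss(a, b, eps=1e-5):
    """Negative global normalized cross-correlation between two volumes."""
    a = a - a.mean()
    b = b - b.mean()
    return -(a * b).sum() / (a.norm() * b.norm() + eps)

def warp(moving, field):
    """Warp a 3-D volume (N, 1, D, H, W) with a dense displacement field
    (N, 3, D, H, W) given in normalized [-1, 1] grid coordinates
    (a simplifying assumption of this sketch)."""
    n = moving.shape[0]
    theta = torch.eye(3, 4).unsqueeze(0).repeat(n, 1, 1).to(moving)  # identity affine
    grid = F.affine_grid(theta, size=moving.shape, align_corners=False)  # (N, D, H, W, 3)
    grid = grid + field.permute(0, 2, 3, 4, 1)  # add predicted displacements
    return F.grid_sample(moving, grid, align_corners=False)

def dual_modality_loss(reg_net, ct_fixed, mr_moving,
                       mr_paired_with_ct, ct_paired_with_mr):
    """Intra-modality supervision of an inter-modality registration network.

    reg_net is any network predicting a displacement field (N, 3, D, H, W)
    that aligns mr_moving to ct_fixed.  The field is evaluated only with
    intra-modality similarity:
      MR branch: warped moving MR  vs. MR pre-aligned with the fixed CT
      CT branch: warped paired CT  vs. the fixed CT itself
    """
    field = reg_net(torch.cat([ct_fixed, mr_moving], dim=1))
    loss_mr = ncc_loss(warp(mr_moving, field), mr_paired_with_ct)
    loss_ct = ncc_loss(warp(ct_paired_with_mr, field), ct_fixed)
    return loss_mr + loss_ct
```

In training, each mini-batch would draw a pre-aligned CT/MR pair for both the fixed and the moving subject so that both intra-modality comparisons have a reference; at test time only the new CT and MR images are needed, as the abstract states.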
