
Division and Fusion: Rethink Convolutional Kernels for 3D Medical Image Segmentation


Abstract

There has been a debate over using 2D versus 3D convolutions for volumetric medical image segmentation. The problem is that 2D convolution loses the 3D spatial relationships of image features, while 3D convolution layers are hard to train from scratch due to the limited size of medical image datasets. Employing more trainable parameters and complicated connections may improve the performance of 3D CNNs; however, it also induces extra computational burden. It is therefore meaningful to improve the performance of current 3D medical image processing without requiring extra inference computation or memory resources. In this paper, we propose a general solution, Division-Fusion (DF)-CNN, for free performance improvement on any available 3D medical image segmentation approach. During the division phase, different view-based kernels are divided from a single 3D kernel to extract multi-view context information that strengthens the spatial information of the feature maps. During the fusion phase, all kernels are fused back into one 3D kernel to reduce the parameters of the deployed model. We extensively evaluated our DF mechanism on prostate ultrasound volume segmentation. The results demonstrate a consistent improvement over different benchmark models by a clear margin.
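The abstract describes the division/fusion mechanism only at a high level. Below is a minimal PyTorch-style sketch of one common way such a scheme can be realized, assuming the view-based kernels are axis-aligned slices (1×3×3, 3×1×3, 3×3×1) of a 3×3×3 kernel that are re-parameterized into a single 3D kernel at deployment; the class and method names (DivisionConv3d, fuse) are hypothetical, and the paper's exact kernel shapes and fusion rule may differ.

```python
# Sketch of a division/fusion block: NOT the paper's implementation,
# just an illustration of multi-view kernels fused into one 3D kernel.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DivisionConv3d(nn.Module):
    """Training-time block: a full 3D kernel plus three view-based kernels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv3d   = nn.Conv3d(in_ch, out_ch, 3, padding=1, bias=False)
        # View-based kernels: axial (1x3x3), coronal (3x1x3), sagittal (3x3x1)
        self.axial    = nn.Conv3d(in_ch, out_ch, (1, 3, 3), padding=(0, 1, 1), bias=False)
        self.coronal  = nn.Conv3d(in_ch, out_ch, (3, 1, 3), padding=(1, 0, 1), bias=False)
        self.sagittal = nn.Conv3d(in_ch, out_ch, (3, 3, 1), padding=(1, 1, 0), bias=False)

    def forward(self, x):
        # Division phase: each branch extracts context from a different view.
        return self.conv3d(x) + self.axial(x) + self.coronal(x) + self.sagittal(x)

    def fuse(self):
        """Fusion phase: collapse all branches into one 3x3x3 kernel so the
        deployed model costs no more than a plain 3D convolution."""
        w = self.conv3d.weight.data.clone()
        # Zero-pad each view kernel to 3x3x3 and add it to the full kernel.
        # Weight layout is (out_ch, in_ch, D, H, W); F.pad pads last dims first.
        w += F.pad(self.axial.weight.data,    (0, 0, 0, 0, 1, 1))  # pad depth
        w += F.pad(self.coronal.weight.data,  (0, 0, 1, 1, 0, 0))  # pad height
        w += F.pad(self.sagittal.weight.data, (1, 1, 0, 0, 0, 0))  # pad width
        fused = nn.Conv3d(self.conv3d.in_channels, self.conv3d.out_channels,
                          3, padding=1, bias=False)
        fused.weight.data = w
        return fused
```

Because every branch in this sketch is linear (no bias or normalization), the fused convolution reproduces the training-time forward pass up to floating-point error, so the deployed model carries only the parameters of a single 3×3×3 convolution.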
