IEEE International Symposium on Biomedical Imaging

A Context Based Deep Learning Approach for Unbalanced Medical Image Segmentation



Abstract

Automated medical image segmentation is an important step in many medical procedures. Recently, deep learning networks have been widely used for various medical image segmentation tasks, with U-Net and generative adversarial networks (GANs) among the most commonly used. Foreground-background class imbalance is a common occurrence in medical images, and U-Net has difficulty handling class imbalance because of its cross entropy (CE) objective function. Similarly, GANs also suffer from class imbalance because the discriminator looks at the entire image to classify it as real or fake. Since the discriminator is essentially a deep learning classifier, it is incapable of correctly identifying minor changes in small structures. To address these issues, we propose a novel context based CE loss function for U-Net, and a novel architecture, Seg-GLGAN. The context based CE is a linear combination of the CE obtained over the entire image and the CE over its region of interest (ROI). In Seg-GLGAN, we introduce a novel context discriminator to which both the entire image and its ROI are fed as input, thus enforcing local context. We conduct extensive experiments on two challenging unbalanced datasets: PROMISE12 and ACDC. We observe that segmentation results obtained from our methods achieve better segmentation metrics than various baseline methods.
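To make the loss formulation concrete, the following is a minimal NumPy sketch of a context based CE of the kind the abstract describes: a linear combination of the CE computed over the whole image and the CE restricted to an ROI. The mixing weight `lam`, the binary (two-class) setting, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def binary_ce(pred, target, eps=1e-7):
    # Mean pixel-wise binary cross entropy between predicted
    # foreground probabilities and a binary ground-truth mask.
    pred = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(pred)
                           + (1 - target) * np.log(1 - pred))))

def context_ce(pred, target, roi_mask, lam=0.5):
    # Linear combination of CE over the entire image and CE over the ROI.
    # `lam` is a hypothetical mixing weight in [0, 1]; lam=0 recovers the
    # plain image-level CE.
    roi = roi_mask.astype(bool)
    ce_full = binary_ce(pred, target)
    ce_roi = binary_ce(pred[roi], target[roi])
    return (1 - lam) * ce_full + lam * ce_roi

# Toy 4x4 example with class imbalance: a small foreground structure
# occupies only the central 2x2 ROI, the rest is background.
pred = np.full((4, 4), 0.1)           # network is confident "background"
target = np.zeros((4, 4))
roi = np.zeros((4, 4), dtype=bool)
roi[1:3, 1:3] = True
target[1:3, 1:3] = 1.0                # true foreground inside the ROI

loss = context_ce(pred, target, roi, lam=0.5)
```

In this toy case the image-level CE is diluted by the many easy background pixels, while the ROI term concentrates on the mispredicted small structure, so the combined loss penalizes the error on the foreground more strongly.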
