AdaBoosted Deep Ensembles: Getting Maximum Performance Out of Small Training Datasets

Abstract

Even though state-of-the-art convolutional neural networks (CNNs) have shown outstanding performance in a wide range of imaging applications, they typically require large amounts of high-quality training data to prevent overfitting. In the case of medical image segmentation, it is often difficult to gain access to large datasets, particularly those involving rare diseases, such as skull-base chordoma tumors. This challenge is exacerbated by the difficulty of performing manual delineations, which are time-consuming and can be of inconsistent quality. In this work, we propose a deep ensemble method that learns multiple models, trained using a leave-one-out strategy, and then aggregates their outputs on test data through a boosting strategy. The proposed method was evaluated for chordoma tumor segmentation in head magnetic resonance images using three well-known CNN architectures: V-Net, U-Net, and the feature pyramid network (FPN). Significantly improved Dice scores (up to 27%) were obtained using the proposed ensemble method compared to a single model trained with all available training subjects. The proposed ensemble method can be applied to any neural-network-based segmentation method to potentially improve generalizability when learning from a small dataset.
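The leave-one-out ensemble with boosted aggregation described in the abstract can be sketched as follows. This is a minimal illustration only: the "model" here is a simple intensity threshold standing in for a trained CNN (the paper uses V-Net, U-Net, and FPN), the AdaBoost-style weight `alpha = 0.5 * ln((1 - err) / err)` with a Dice-based error is an assumed form of the boosting step, and all function names and the synthetic data are hypothetical, not taken from the paper.

```python
import numpy as np

def dice(pred, mask):
    """Dice overlap between two binary arrays."""
    denom = pred.sum() + mask.sum()
    return 2.0 * np.logical_and(pred, mask).sum() / denom if denom else 1.0

def train_model(images, masks):
    # Stand-in "model": the intensity threshold with the best mean Dice
    # on the training subjects. A real implementation would train a CNN
    # (V-Net, U-Net, FPN) on these subjects instead.
    candidates = np.linspace(0.05, 0.95, 19)
    scores = [np.mean([dice(im > t, mk) for im, mk in zip(images, masks)])
              for t in candidates]
    return candidates[int(np.argmax(scores))]

def fit_loo_ensemble(images, masks):
    """Leave-one-out training with AdaBoost-style model weights (sketch)."""
    models, alphas = [], []
    for i in range(len(images)):
        train_im = [im for j, im in enumerate(images) if j != i]
        train_mk = [mk for j, mk in enumerate(masks) if j != i]
        t = train_model(train_im, train_mk)
        # Weight each model by its error on its held-out subject
        # (assumed AdaBoost-style weighting, clipped for stability).
        err = np.clip(1.0 - dice(images[i] > t, masks[i]), 1e-6, 1 - 1e-6)
        models.append(t)
        alphas.append(max(0.5 * np.log((1.0 - err) / err), 0.0))
    return models, np.asarray(alphas)

def ensemble_predict(models, alphas, image):
    # Aggregate per-model binary masks by a weighted soft vote.
    votes = np.stack([(image > t).astype(float) for t in models])
    w = (alphas / alphas.sum() if alphas.sum() > 0
         else np.full(len(models), 1.0 / len(models)))
    return np.tensordot(w, votes, axes=1) >= 0.5

# Tiny synthetic demo: 5 "subjects", each with a bright square as the tumor.
rng = np.random.default_rng(0)
masks, images = [], []
for i in range(5):
    mk = np.zeros((16, 16), dtype=bool)
    mk[2 + i:8 + i, 3:9] = True
    masks.append(mk)
    images.append(mk * 0.7 + rng.normal(0.15, 0.05, (16, 16)))

models, alphas = fit_loo_ensemble(images, masks)
ens_dice = np.mean([dice(ensemble_predict(models, alphas, im), mk)
                    for im, mk in zip(images, masks)])
```

With N training subjects the scheme yields N models, each trained on N-1 subjects, so every subject contributes both to training and to weighting the ensemble; at test time the weighted vote is what the abstract refers to as aggregating the outputs through a boosting strategy.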
