Journal of Digital Imaging

Deep-Learning-Based Semantic Labeling for 2D Mammography and Comparison of Complexity for Machine Learning Tasks



Abstract

Machine learning has several potential uses in medical imaging for semantic labeling of images to improve radiologist workflow and to triage studies for review. The purpose of this study was to (1) develop deep convolutional neural networks (DCNNs) for automated classification of 2D mammography views, determination of breast laterality, and assessment of breast tissue density; and (2) compare the performance of DCNNs across these tasks of varying complexity. We obtained 3034 2D mammographic images from the Digital Database for Screening Mammography, annotated with mammographic view, image laterality, and breast tissue density. These images were used to train a DCNN to classify images for each of these three tasks. The DCNN trained to classify mammographic view achieved a receiver-operating-characteristic (ROC) area under the curve (AUC) of 1. The DCNN trained to classify breast image laterality initially misclassified right and left breasts (AUC 0.75); however, after discontinuing horizontal flips during data augmentation, AUC improved to 0.93 (p < 0.0001). Breast density classification proved more difficult, with the DCNN achieving 68% accuracy. Automated semantic labeling of 2D mammography is feasible using DCNNs and can be performed with small datasets. However, automated classification of differences in breast density is more difficult, likely requiring larger datasets.
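The laterality result hinges on a subtle interaction between augmentation and labels: a random horizontal flip turns a left-breast image into one that looks like a right breast while its label stays "left", injecting label noise. A minimal framework-agnostic sketch (toy NumPy data; the function names and heuristic are illustrative assumptions, not the paper's pipeline) shows why flips must be disabled for this task:

```python
import numpy as np

def augment(image, rng, allow_hflip=True):
    """Toy augmentation step: an optional random horizontal flip.
    Flipping mirrors left/right, which contradicts a laterality label."""
    if allow_hflip and rng.random() < 0.5:
        image = image[:, ::-1]  # mirror the image left-to-right
    return image

def laterality(image):
    """Illustrative heuristic 'label': which half of the frame holds
    more tissue. Real labels come from annotation, not the pixels."""
    return "L" if image[:, :2].sum() >= image[:, 2:].sum() else "R"

# Toy "mammogram": breast tissue occupies the left half of the frame.
left_breast = np.zeros((4, 4))
left_breast[:, :2] = 1.0
```

With `allow_hflip=True`, roughly half the augmented copies of a left-breast image appear right-sided while still carrying the "L" training label; with `allow_hflip=False` the image (and hence the label's validity) is preserved, mirroring the AUC jump from 0.75 to 0.93 reported above.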
