Informatics in Medicine Unlocked

New unified insights on deep learning in radiological and pathological images: Beyond quantitative performances to qualitative interpretation

Abstract

Deep learning (DL) has become the main focus of research in the field of artificial intelligence, despite its lack of explainability and interpretability. DL mainly involves automated feature extraction using deep neural networks (DNNs) that can classify radiological and pathological images. Convolutional neural networks (CNNs) can also be applied to pathological image analysis, such as the detection of tumors and the quantification of cellular features. However, to our knowledge, no attempts have been made to identify interpretable signatures from CNN features, and few studies have examined the use of CNNs for cytopathology images. Therefore, the aim of the present paper is to provide new unified insights to aid the development of more interpretable CNN-based methods that classify radiological and pathological images and explain the reason for each classification in the form of if-then rules. We first describe the "black box" problem of shallow NNs, the concept of rule extraction, the renewed "black box" problem in DNN architectures, and the paradigm shift toward transparency in DL via rule extraction. Next, we review the limitations of DL in pathology with regard to histopathology and cytopathology. We then investigate the discrimination and explanation of cytological features, review recent techniques for interpretable CNN-based methods in histopathology, and survey current approaches to enhancing the interpretability of CNN-based methods for radiological images. Finally, we provide new unified insights for extracting qualitative, interpretable rules from radiological and pathological images.
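The abstract describes explaining CNN classifications "in the form of if-then rules". As an illustration only, the sketch below pairs a toy, untrained CNN feature extractor with a shallow decision-tree surrogate whose branches read directly as if-then rules. The `TinyCNN` class, the synthetic image data, and the tree-surrogate choice are all assumptions made for demonstration; they are not the method proposed in the paper.

```python
# Hypothetical sketch of the rule-extraction idea: a CNN provides automated
# feature extraction, and a shallow decision tree fitted on those features
# yields human-readable if-then rules. Synthetic data stands in for real
# radiological/pathological images.
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier, export_text

class TinyCNN(nn.Module):
    """A deliberately small CNN whose pooled activations serve as 'deep features'."""
    def __init__(self, n_features=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, n_features, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # one activation per filter
        )

    def forward(self, x):
        return torch.flatten(self.conv(x), 1)   # shape: (batch, n_features)

# Synthetic stand-ins for 64x64 grayscale patches with binary labels
# (e.g. tumor vs. non-tumor); purely illustrative.
images = torch.randn(200, 1, 64, 64)
labels = torch.randint(0, 2, (200,))

cnn = TinyCNN().eval()
with torch.no_grad():
    features = cnn(images).numpy()

# Surrogate model: a depth-limited decision tree over the CNN features,
# whose branches can be printed as nested if-then rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(features, labels.numpy())

feature_names = [f"cnn_feature_{i}" for i in range(features.shape[1])]
print(export_text(tree, feature_names=feature_names))
```

The printed tree lists threshold tests on the CNN features leading to a predicted class, which is one concrete way a classification can be reported as if-then rules; in practice the features would come from a trained network and the rules would be validated against expert readings.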
