Medical Image Analysis

Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation

Highlights

• We propose methodologies to enhance the interpretability of a machine learning system.
• The approach can yield two levels of interpretability (global and local), allowing us to assess both how the system learned task-specific relations and its individual predictions.
• Validation on brain tumor segmentation and penumbra estimation in acute stroke.
• Based on the evaluated clinical scenarios, the proposed approach allows us to confirm that the machine learning system learns relations coherent with expert knowledge and annotation protocols.

Abstract

Machine learning systems are achieving better performance at the cost of becoming increasingly complex. As a result, they become less interpretable, which may cause distrust from the end-users of the system. This is especially important as these systems are pervasively being introduced into critical domains, such as the medical field. Representation learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable “black boxes”. In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding whether the system correctly learned the relevant relations in the data, while the latter focuses on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology on brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show that the proposed methodology can unveil relationships between imaging modalities and the extracted features, as well as their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images.
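The abstract describes a two-stage system: a Restricted Boltzmann Machine (RBM) learns features from imaging data without supervision, a Random Forest classifies voxels from those learned features, and a feature importance strategy relates the features back to the imaging data and target variables for interpretation. The sketch below only illustrates that general kind of pipeline with scikit-learn's BernoulliRBM and RandomForestClassifier; the synthetic patch data, layer sizes, and the simple importance-to-weight inspection are illustrative assumptions, not the authors' actual configuration or importance strategy.

```python
# Minimal sketch (not the authors' implementation) of an RBM + Random Forest
# pipeline using scikit-learn. Data, sizes, and the weight inspection are
# illustrative assumptions only.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for per-voxel, multi-modal intensity patches scaled to [0, 1]
# (e.g. flattened neighbourhoods from several MRI modalities).
X = rng.random((1000, 36))
y = (X.mean(axis=1) > 0.5).astype(int)  # stand-in lesion / non-lesion labels

# Unsupervised feature learning: hidden-unit activations act as learned features.
rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
features = rbm.fit_transform(X)

# Supervised classification on the learned features.
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(features, y)

# Global-level inspection: rank learned features by forest importance and look
# at each important feature's RBM weight vector, which connects it back to the
# input dimensions (voxels / modalities) it responds to.
order = np.argsort(forest.feature_importances_)[::-1]
for idx in order[:5]:
    weights = rbm.components_[idx]  # input weights of this hidden unit
    strongest = int(np.abs(weights).argmax())
    print(f"feature {idx}: importance={forest.feature_importances_[idx]:.3f}, "
          f"strongest input weight at dimension {strongest}")
```

The snippet only hints at the global view; in the paper the importance measure jointly considers imaging data and target variables, and interpretation is also carried out locally, at the voxel and patient level.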
