
Interpretability of a Deep Learning Model for Rodents Brain Semantic Segmentation

Abstract

In recent years, as machine learning research has turned into real products and applications, some of them critical, it has become clear that other model evaluation mechanisms are needed. Commonly used metrics such as accuracy or F-statistics are no longer sufficient in the deployment phase. This has fostered the emergence of methods for model interpretability. In this work, we discuss an approach to improving a model's predictions by interpreting what has been learned and using that knowledge in a second phase. As a case study we use the semantic segmentation of rodent brain tissue in Magnetic Resonance Imaging. By analogy with the human visual system, the experiment performed provides a way to draw more in-depth conclusions about a scene by carefully observing what attracts more attention after a first glance en passant.
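The abstract does not say how the learned attention is extracted or reused, so the sketch below is only one plausible, minimal illustration of such an interpretability step: it assumes a PyTorch encoder-decoder segmentation network and a Grad-CAM-style attention map. TinySegNet, gradcam_for_class, and the layer choices are hypothetical and are not the authors' implementation.

# Hypothetical sketch: extract a Grad-CAM-style attention map from a toy
# segmentation network, so the regions the model attends to could inform a
# second refinement phase. Architecture and method are assumptions for
# illustration only; the paper does not specify them here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy encoder + 1x1 classification head (not the authors' architecture)."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        feats = self.encoder(x)          # feature maps we want to inspect
        return self.head(feats), feats

def gradcam_for_class(model, image, target_class):
    """Heatmap of where the model attends when predicting `target_class`."""
    model.eval()
    image = image.requires_grad_(True)
    logits, feats = model(image)
    feats.retain_grad()
    # Sum the target-class logits over all pixels and backpropagate.
    logits[:, target_class].sum().backward()
    # Grad-CAM: weight each feature channel by its average gradient, combine, ReLU.
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return cam / (cam.max() + 1e-8)      # normalised attention map in [0, 1]

if __name__ == "__main__":
    model = TinySegNet(n_classes=4)
    mri_slice = torch.randn(1, 1, 128, 128)   # stand-in for an MRI slice
    attention = gradcam_for_class(model, mri_slice, target_class=1)
    print(attention.shape)                     # torch.Size([1, 1, 128, 128])

Under this assumption, the resulting attention map could then steer a second training phase, for example by re-weighting the loss on regions the first model largely ignored.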
