
Some Shades of Grey! Interpretability and Explanatory Capacity of Deep Neural Networks

Abstract

Driven by the growing availability of data and the corresponding computing capacity, more and more cognitive tasks can be transferred to computers, which learn independently to improve our understanding, increase our problem-solving capacity, or simply help us remember connections. Deep neural networks in particular clearly outperform traditional AI methods and are therefore finding more and more areas of application in which they support decision-making or even make decisions autonomously. In many domains, such as autonomous driving or credit allocation, the use of such networks is highly critical and risky because of their black-box character, since it is difficult to interpret how or why the models arrive at certain results. This paper discusses and presents various approaches that attempt to understand and explain decision-making in deep neural networks.