Annals of Translational Medicine

Opening the black box of neural networks: methods for interpreting neural network models in clinical applications



Abstract

Artificial neural networks (ANNs) are powerful tools for data analysis and are particularly suitable for modeling relationships between variables for best prediction of an outcome. While these models can be used to answer many important research questions, their utility has been critically limited because the "black box" model is difficult to interpret. Clinical investigators usually employ ANN models to predict clinical outcomes or to make a diagnosis; the model, however, is difficult for clinicians to interpret. To address this important shortcoming of neural network modeling methods, we describe several methods to help subject-matter audiences (e.g., clinicians, medical policy makers) understand neural network models. Garson's algorithm describes the relative importance of a descriptor (predictor) in its connection with the outcome variable by dissecting the model weights. Lek's profile method explores the relationship between the outcome variable and a predictor of interest while holding the other predictors at constant values (e.g., minimum, 20th percentile, maximum). While Lek's profile was developed specifically for neural networks, the partial dependence plot is a more generic method that visualizes the relationship between an outcome and one or two predictors. Finally, the local interpretable model-agnostic explanations (LIME) method can explain the predictions of any classification or regression model by approximating it locally with an interpretable model. R code implementing these methods is demonstrated on example data fitted with a standard feed-forward neural network model. We provide code and step-by-step descriptions of how to use these tools to facilitate a better understanding of ANNs.
