Published in: BioSystems

How robust are neural network models of stimulus generalization?



Abstract

Artificial feed-forward neural networks are commonly used as a tool for modelling stimulus selection and animal signalling. A key finding of stimulus selection research has been generalization: if a given behaviour has been established to one stimulus, perceptually similar novel stimuli are likely to induce a similar response. In feed-forward neural networks, stimulus generalization arises automatically as a property of the network. This property raises understandable concern about the network's sensitivity to variation in the internal parameter values associated with its structure and its training process. Researchers must have confidence that the predictions of their model follow from the underlying biology that they deliberately incorporated in the model, and not from often arbitrary choices about model implementation. We study how network training and parameter perturbations influence the qualitative and quantitative behaviour of a simple but general network. Specifically, for models of stimulus control we study the effect that parameter variation has on the shape of the generalization curves produced by the network. We show that certain network and training conditions produce undesirable artifacts that need to be avoided (or at least understood) when modelling stimulus selection.
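The phenomenon the abstract describes can be illustrated with a minimal sketch (this is an illustrative toy model, not the authors' network): a one-hidden-layer feed-forward network is trained to respond to one stimulus (S+) and not another (S-), each encoded as a Gaussian activation pattern over input units, and is then probed with novel stimulus positions. The responses to the probes trace out a generalization curve whose exact shape depends on the (here arbitrary) choices of layer size, weight initialization, and learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def stimulus(position, n_inputs=20, width=2.0):
    """Encode a stimulus position as a Gaussian activation pattern over input units."""
    centers = np.arange(n_inputs)
    return np.exp(-((centers - position) ** 2) / (2 * width ** 2))

# One-hidden-layer feed-forward network with sigmoid units.
# Layer sizes and initialization scale are arbitrary modelling choices.
n_in, n_hid = 20, 8
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, n_hid)
b2 = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2), h

# Train: respond strongly to S+ (position 10), weakly to S- (position 4).
train = [(stimulus(10.0), 1.0), (stimulus(4.0), 0.0)]
lr = 0.5
for _ in range(2000):
    for x, target in train:
        y, h = forward(x)
        dy = (y - target) * y * (1 - y)       # gradient through output sigmoid (MSE loss)
        W2 -= lr * dy * h
        b2 -= lr * dy
        dh = dy * W2 * h * (1 - h)            # backpropagate to hidden layer
        W1 -= lr * np.outer(dh, x)
        b1 -= lr * dh

# Generalization curve: probe novel positions never seen in training.
positions = np.linspace(0.0, 19.0, 39)
curve = np.array([forward(stimulus(p))[0] for p in positions])
```

Rerunning the sketch with a different random seed, hidden-layer size, or training duration changes the height, width, and peak location of `curve`, which is exactly the sensitivity to implementation choices that the paper examines.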

