Frontiers in Computational Neuroscience
The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding


Abstract

The recent “deep learning revolution” in artificial neural networks has had a strong impact and seen widespread deployment in engineering applications, but the use of deep learning for neurocomputational modeling has so far been limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of the distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.
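The two unsupervised learning rules contrasted in the abstract can be sketched in a few lines of NumPy. The sketch below is illustrative only, not the paper's actual model: toy dimensions, random binary input in place of the visual/eye-position signals used in the study, and biases omitted for brevity. It shows one contrastive-divergence (CD-1) update for an RBM and one backpropagation step for a tied-weight autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 16, 8, 0.1

# Toy binary input batch (stand-in for the paper's visual-space inputs)
v0 = (rng.random((32, n_vis)) > 0.5).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# --- RBM: stochastic, generative, bidirectional weights, trained with CD-1 ---
W = 0.01 * rng.standard_normal((n_vis, n_hid))
h0 = sigmoid(v0 @ W)                              # hidden probabilities (positive phase)
h0_s = (rng.random(h0.shape) < h0).astype(float)  # stochastic hidden states
v1 = sigmoid(h0_s @ W.T)                          # reconstruction (negative phase)
h1 = sigmoid(v1 @ W)
W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)       # contrastive-divergence update

# --- Autoencoder: deterministic, trained with error backpropagation ---
A = 0.01 * rng.standard_normal((n_vis, n_hid))
h = sigmoid(v0 @ A)                               # encoder
r = sigmoid(h @ A.T)                              # decoder (tied weights)
d_r = (r - v0) * r * (1 - r)                      # backprop through output sigmoid
d_h = (d_r @ A) * h * (1 - h)                     # backprop into hidden layer
A -= lr * (v0.T @ d_h + d_r.T @ h) / len(v0)      # gradient step on the tied weights
```

The structural contrast the abstract draws is visible in the update rules: the RBM step is stochastic (sampled hidden states) and generative (it compares data statistics against the model's own reconstructions), whereas the autoencoder step is a deterministic gradient of a reconstruction error. A sparsity penalty on the hidden activations, as explored in the study, could be added to either rule.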
