Toward an Integration of Deep Learning and Neuroscience

Abstract

Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
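A minimal sketch of the core idea, assuming a toy two-layer NumPy network: each layer is optimized against its own cost (an unsupervised sparsity cost on the hidden layer, a supervised cross-entropy cost at the output), with credit assigned across layers by gradient backpropagation. The network, the data, and the specific cost choices are illustrative assumptions, not the paper's implementation.

    # Sketch only: heterogeneous, layer-specific cost functions in a toy
    # two-layer network. Hypothetical setup; not from the paper itself.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(100, 8))            # toy input data
    y = (x.sum(axis=1, keepdims=True) > 0)   # toy supervised target

    W1 = rng.normal(scale=0.1, size=(8, 16)) # first-layer weights
    W2 = rng.normal(scale=0.1, size=(16, 1)) # second-layer weights
    lr = 0.1

    for step in range(500):
        h = np.tanh(x @ W1)                  # hidden representation
        p = 1 / (1 + np.exp(-(h @ W2)))      # output prediction

        # Diverse costs: an unsupervised sparsity cost |h| on the hidden
        # layer, plus a supervised cross-entropy cost at the output.
        sparsity_grad = np.sign(h) * (1 - h**2)  # d|h| / d(pre-activation)
        out_err = p - y                          # d(cross-entropy) / d(logit)

        # Credit assignment: the output error is propagated back through
        # W2, while the hidden layer additionally feels its own local cost.
        dW2 = h.T @ out_err / len(x)
        dh = out_err @ W2.T * (1 - h**2)
        dW1 = x.T @ (dh + 0.01 * sparsity_grad) / len(x)

        W1 -= lr * dW1
        W2 -= lr * dW2

In this sketch the two costs interact through shared weights, so the hidden representation is shaped both by the task and by its own local objective, loosely analogous to the paper's hypothesis of a heterogeneously optimized system.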
