Frontiers in Computational Neuroscience

A Closed-Loop Toolchain for Neural Network Simulations of Learning Autonomous Agents


Abstract

Neural network simulation is an important tool for generating and evaluating hypotheses on the structure, dynamics, and function of neural circuits. For scientific questions addressing organisms operating autonomously in their environments, in particular where learning is involved, it is crucial to be able to operate such simulations in a closed-loop fashion. In such a set-up, the neural agent continuously receives sensory stimuli from the environment and provides motor signals that manipulate the environment or move the agent within it. So far, most studies requiring such functionality have been conducted with custom simulation scripts and manually implemented tasks. This makes it difficult for other researchers to reproduce and build upon previous work and nearly impossible to compare the performance of different learning architectures. In this work, we present a novel approach to solve this problem, connecting benchmark tools from the field of machine learning and state-of-the-art neural network simulators from computational neuroscience. The resulting toolchain enables researchers in both fields to make use of well-tested high-performance simulation software supporting biologically plausible neuron, synapse and network models and allows them to evaluate and compare their approach on the basis of standardized environments with various levels of complexity. We demonstrate the functionality of the toolchain by implementing a neuronal actor-critic architecture for reinforcement learning in the NEST simulator and successfully training it on two different environments from the OpenAI Gym. We compare its performance to a previously suggested neural network model of reinforcement learning in the basal ganglia and a generic Q-learning algorithm.
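The abstract outlines the closed-loop pattern at the heart of the toolchain: the environment sends observations to a simulated spiking network, the network's activity is decoded into an action, and the action is fed back into the environment. The sketch below illustrates that loop in its most minimal form; it is not the authors' toolchain or their neuronal actor-critic. It assumes the NEST 3.x Python API (e.g. the "spike_recorder" device), the classic Gym step() API that returns a 4-tuple, and uses CartPole-v0 purely as a stand-in environment; the rate coding and winner-take-all readout are placeholders.

```python
# Minimal closed-loop sketch: an OpenAI Gym environment stepped in lockstep
# with a NEST spiking network. Illustrative only; not the paper's toolchain or
# its actor-critic model. Assumes NEST 3.x ("spike_recorder") and the classic
# Gym API in which step() returns (obs, reward, done, info).

import numpy as np
import gym
import nest

env = gym.make("CartPole-v0")           # stand-in task, not from the paper
n_obs = env.observation_space.shape[0]
n_actions = env.action_space.n

nest.ResetKernel()

# One Poisson generator per observation dimension (rate-coded state input).
inputs = nest.Create("poisson_generator", n_obs)

# One small integrate-and-fire population plus a spike recorder per action.
pops = [nest.Create("iaf_psc_alpha", 20) for _ in range(n_actions)]
recs = [nest.Create("spike_recorder") for _ in range(n_actions)]
for pop, rec in zip(pops, recs):
    nest.Connect(inputs, pop, syn_spec={"weight": 10.0})
    nest.Connect(pop, rec)

def encode(obs, max_rate=200.0):
    """Map each observation component to a non-negative Poisson rate (Hz)."""
    rates = 0.5 * max_rate * (1.0 + np.tanh(obs))
    nest.SetStatus(inputs, [{"rate": float(r)} for r in rates])

def decode():
    """Choose the action whose population spiked most in the last interval."""
    counts = [nest.GetStatus(rec, "n_events")[0] for rec in recs]
    for rec in recs:
        nest.SetStatus(rec, {"n_events": 0})   # clear counters for next step
    return int(np.argmax(counts))

obs = env.reset()
done = False
while not done:
    encode(obs)                  # environment -> sensory stimuli
    nest.Simulate(50.0)          # run the network for 50 ms of biological time
    action = decode()            # spiking activity -> motor signal
    obs, reward, done, _ = env.step(action)   # motor signal -> environment
```

The paper's toolchain and its learning architecture are of course far more elaborate; the sketch only shows the alternation of encode, simulate, decode, and step that the term "closed loop" refers to.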
