
Multi-Timescale Memory Dynamics Extend Task Repertoire in a Reinforcement Learning Network With Attention-Gated Memory



Abstract

The interplay of reinforcement learning and memory is at the core of several recent neural network models, such as the Attention-Gated MEmory Tagging (AuGMEnT) model. While the AuGMEnT network is successful at various animal learning tasks, we find that it is unable to cope with some hierarchical tasks, where higher-level stimuli have to be maintained over a long time while lower-level stimuli need to be remembered and forgotten over a shorter timescale. To overcome this limitation, we introduce a hybrid AuGMEnT with leaky (or short-timescale) and non-leaky (or long-timescale) memory units, which allows the exchange of low-level information while maintaining the high-level one. We test the performance of the hybrid AuGMEnT network on two cognitive reference tasks, sequence prediction and 12AX.
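
To make the distinction between the two memory-unit types concrete, the short Python sketch below illustrates their dynamics under a simple assumption that is not taken from the paper's code: each unit keeps a scalar trace updated as trace <- decay * trace + input, with decay = 1.0 for non-leaky (long-timescale) units and decay < 1.0 for leaky (short-timescale) units. The decay value 0.7 and the single-pulse stimulus are illustrative only.

# Minimal illustrative sketch (not the authors' implementation) of leaky
# vs. non-leaky memory-unit dynamics as described in the abstract.

def update_memory(trace: float, transient_input: float, decay: float) -> float:
    """One time step of a memory unit: decay the trace, then add the input."""
    return decay * trace + transient_input

leaky_trace = 0.0
nonleaky_trace = 0.0
stimulus = [1.0, 0.0, 0.0, 0.0, 0.0]  # a brief stimulus present only at t = 0

for t, x in enumerate(stimulus):
    leaky_trace = update_memory(leaky_trace, x, decay=0.7)        # fades over a short timescale
    nonleaky_trace = update_memory(nonleaky_trace, x, decay=1.0)  # persists indefinitely
    print(f"t={t}: leaky={leaky_trace:.3f}  non-leaky={nonleaky_trace:.3f}")

Running the sketch shows the leaky trace decaying geometrically after the pulse while the non-leaky trace holds its value, mirroring the short- versus long-timescale roles that the hybrid network can exploit in hierarchical tasks such as 12AX.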
