Journal: 《计算机工程与应用》 (Computer Engineering and Applications)

A Reinforcement Learning Approach Combining Demonstration Data and Evolutionary Optimization


Abstract

Reinforcement learning studies how an agent learns an optimal policy from interactions with the environment so as to maximize long-term reward. Because environment feedback is commonly delayed after a sequence of actions, reinforcement learning must search a huge policy space, and effective search is the key to a successful approach. Previous studies have explored policy search from several angles. On the algorithmic side, direct policy search based on evolutionary optimization has been shown to outperform traditional methods; on the side of external information, user-provided demonstrations can effectively improve learning performance. The combination of these two effective directions, however, has rarely been studied. This work investigates combining user demonstrations with evolutionary optimization and proposes the iNEAT+Q approach, which uses the demonstration data both to pretrain a neural network and to shape the fitness function that guides the evolutionary search. A preliminary empirical study shows that iNEAT+Q clearly outperforms NEAT+Q, a classical evolutionary reinforcement learning approach that does not use demonstration data.
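The abstract gives no implementation details, but the two uses of demonstration data it names — pretraining the policy network and shaping the evolutionary fitness function — can be sketched roughly as follows. Everything here is an illustrative assumption: a linear two-action policy and a simple (1+λ) evolution strategy stand in for NEAT's neuroevolution, the expert rule and names (`policy_action`, `demo_agreement`, the blending weight `alpha`) are invented, and the environment return is a placeholder rather than a real rollout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy demonstration set (hypothetical stand-in for user demonstrations;
# the paper does not specify iNEAT+Q's data format or network topology).
demo_states = rng.normal(size=(32, 4))
demo_actions = (demo_states.sum(axis=1) > 0).astype(int)  # toy "expert" rule

def policy_action(weights, state):
    # Linear policy over two discrete actions: pick the larger logit.
    return int(np.argmax(weights.reshape(2, 4) @ state))

def pretrain(weights, lr=0.1, epochs=50):
    # Supervised pretraining on demonstrations (perceptron-style updates).
    w = weights.reshape(2, 4).copy()
    for _ in range(epochs):
        for s, a in zip(demo_states, demo_actions):
            pred = int(np.argmax(w @ s))
            if pred != a:
                w[a] += lr * s
                w[pred] -= lr * s
    return w.ravel()

def demo_agreement(weights):
    # Fraction of demonstration states where the policy matches the expert.
    return float(np.mean([policy_action(weights, s) == a
                          for s, a in zip(demo_states, demo_actions)]))

def env_return(weights):
    # Placeholder environment return; a real task would roll out episodes.
    return demo_agreement(weights)

def fitness(weights, alpha=0.5):
    # Demonstration-guided fitness: environment return blended with an
    # agreement bonus, so demonstrations steer the evolutionary search.
    return (1 - alpha) * env_return(weights) + alpha * demo_agreement(weights)

# Evolutionary search seeded with the pretrained individual; keeping the
# parent in the selection pool (elitism) makes fitness non-decreasing.
parent = pretrain(np.zeros(8))
initial_fitness = fitness(parent)
for _ in range(20):
    children = [parent + 0.1 * rng.normal(size=8) for _ in range(8)]
    parent = max(children + [parent], key=fitness)
```

The two ingredients mirror the abstract's description: `pretrain` injects the demonstrations before evolution starts, and `fitness` keeps injecting them during the search; the actual iNEAT+Q weighting between environment reward and demonstration agreement is not specified in the source.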
