IEEE Transactions on Games

Efficient Reinforcement Learning for StarCraft by Abstract Forward Models and Transfer Learning

Abstract

Injecting human knowledge is an effective way to accelerate reinforcement learning (RL). However, such methods remain underexplored. This article presents our finding that an abstract forward model, which we call a thought-game (TG), combined with transfer learning is an effective way to inject human knowledge. We take StarCraft II as our study environment. With the help of a designed TG, the agent can learn a 99% win rate on a 64×64 map against the Level-7 built-in AI in only 1.08 hours on a single commercial machine. We also show that the TG method is not as restrictive as previously thought: it works with roughly designed TGs and remains useful when the environment changes. Compared with previous model-based RL, we show that TG is more effective. We also present a TG hypothesis that characterizes the influence of TGs with different fidelity levels. For real games with unequal state and action spaces, we propose a novel XfrNet, whose usefulness is validated by achieving a 90% win rate against the cheating Level-10 AI. We argue that the TG method may shed light on further studies of efficient RL with human knowledge.
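To make the pre-train-then-transfer idea concrete, the following minimal Python sketch trains a tabular Q-learning agent in a toy hand-designed abstract forward model and then continues training in a "real" environment that shares the same interface but has noisier dynamics. The ThoughtGame and RealGame environments, the Q-learning agent, and all hyper-parameters are illustrative assumptions; they are not the TG design, the StarCraft II setup, or the XfrNet architecture described in the paper.

import random

class ThoughtGame:
    """Hand-designed abstract model: tiny state space, cheap to simulate."""
    noise = 0.0  # deterministic, simplified dynamics

    def __init__(self):
        self.army, self.enemy, self.t = 0, 10, 0

    def reset(self):
        self.__init__()
        return (self.army, self.enemy)

    def step(self, action):
        # action 0 = produce a unit, action 1 = attack with the current army
        if action == 0:
            self.army += 1
        else:
            damage = max(0.0, self.army - 2 + random.gauss(0, self.noise))
            self.enemy = max(0, self.enemy - int(damage))
        self.t += 1
        done = self.enemy == 0 or self.t >= 30
        reward = 1.0 if self.enemy == 0 else 0.0
        return (self.army, self.enemy), reward, done

class RealGame(ThoughtGame):
    """Stand-in for the real environment: same interface, noisier dynamics."""
    noise = 1.5

def q_learning(env_cls, episodes, q=None, eps=0.2, lr=0.1, gamma=0.95):
    """Tabular Q-learning; pass a pre-trained table `q` to transfer knowledge."""
    q = {} if q is None else q
    for _ in range(episodes):
        env = env_cls()
        s, done = env.reset(), False
        while not done:
            a = (random.randrange(2) if random.random() < eps
                 else max((0, 1), key=lambda x: q.get((s, x), 0.0)))
            s2, r, done = env.step(a)
            target = r + gamma * max(q.get((s2, x), 0.0) for x in (0, 1))
            q[(s, a)] = q.get((s, a), 0.0) + lr * (target - q.get((s, a), 0.0))
            s = s2
    return q

# Cheap pre-training in the abstract model, then a short fine-tune in the
# (expensive) real environment, starting from the transferred Q-table.
q_tg = q_learning(ThoughtGame, episodes=5000)
q_real = q_learning(RealGame, episodes=200, q=q_tg)

In this sketch the transfer is simply reusing the learned value table; the paper's setting additionally has to bridge unequal state and action spaces between the TG and the real game, which is the role its XfrNet plays.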