2018 International Conference on Artificial Intelligence and Big Data

Handling large-scale action space in deep Q network


Abstract

Deep reinforcement learning (DRL) has attracted much attention in recent years. Deep Q Network (DQN) is a popular DRL implementation. It is a well-studied technique and has achieved significant improvements on several challenging tasks such as Atari 2600 games. However, some games have a very large number of possible actions. The output layer of a DQN then grows with the size of the action space, which can harm its performance. In this paper, we propose a variant DQN architecture to handle this problem by reducing the size of the output layer. Experimental results show that our method improves performance significantly on some tasks with large-scale action spaces.
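The abstract does not detail the proposed variant structure, but the problem it targets is concrete: a standard DQN head emits one Q-value per action, so its parameter count grows linearly with the action count. A minimal sketch in NumPy (with hypothetical layer sizes; this illustrates the baseline DQN head, not the paper's method):

```python
import numpy as np

def dqn_head_params(hidden_dim: int, num_actions: int) -> int:
    # A standard DQN head is a fully connected layer mapping the last
    # hidden representation to one Q-value per action: a weight matrix
    # of shape (hidden_dim, num_actions) plus one bias per action.
    return hidden_dim * num_actions + num_actions

def q_values(h: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Q(s, a) for every action a, given hidden features h of state s.
    return h @ W + b

# Hypothetical sizes: with 512 hidden units, the head stays small for
# an Atari-scale action set but dominates for a large action space.
small = dqn_head_params(512, 18)      # 18 Atari actions -> 9,234 params
large = dqn_head_params(512, 10_000)  # large action set -> 5,130,000 params
print(small, large)
```

The greedy action is then `np.argmax(q_values(h, W, b))`; both the parameter count and the cost of this argmax scale with the number of actions, which is the bottleneck the paper's reduced output layer addresses.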

