Conference on Empirical Methods in Natural Language Processing

Approximate Dynamic Oracle for Dependency Parsing with Reinforcement Learning

Abstract

We present a general approach with reinforcement learning (RL) to approximate dynamic oracles for transition systems where exact dynamic oracles are difficult to derive. We treat oracle parsing as a reinforcement learning problem, design the reward function inspired by the classical dynamic oracle, and use Deep Q-Learning (DQN) techniques to train the oracle with gold trees as features. The combination of a priori knowledge and data-driven methods enables an efficient dynamic oracle, which improves the parser performance over static oracles in several transition systems.
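The abstract gives no code, so the following is only a minimal sketch of the idea in PyTorch. It assumes a toy arc-standard transition system, a hand-picked feature vector that exposes gold-tree information to the oracle, and a simple +1/-1 reward for gold versus wrong arcs; the paper instead derives its reward from the classical dynamic-oracle loss and trains a full DQN, so every function and parameter below (legal, step, features, reward, train_on_sentence) is an illustrative assumption rather than the authors' implementation.

```python
# Hypothetical sketch (not from the paper): a toy arc-standard transition system
# whose oracle is trained with one-step Q-learning in PyTorch. The feature set,
# reward shaping, and network size are illustrative assumptions only.
import random
import torch
import torch.nn as nn

def legal(stack, buffer):
    """Indices of transitions that are valid in the current configuration."""
    acts = []
    if buffer:
        acts.append(0)                              # SHIFT
    if len(stack) >= 2 and stack[-2] != 0:
        acts.append(1)                              # LEFT-ARC (root keeps no head)
    if len(stack) >= 2:
        acts.append(2)                              # RIGHT-ARC
    return acts

def step(stack, buffer, action):
    """Apply a transition; return the new configuration and the arc it creates."""
    stack, buffer, arc = stack[:], buffer[:], None
    if action == 0:                                 # SHIFT
        stack.append(buffer.pop(0))
    elif action == 1:                               # LEFT-ARC: top heads second-top
        arc = (stack[-1], stack[-2]); stack.pop(-2)
    else:                                           # RIGHT-ARC: second-top heads top
        arc = (stack[-2], stack[-1]); stack.pop()
    return stack, buffer, arc

def features(stack, buffer, gold_heads):
    """Encode the configuration plus the gold tree (the oracle may see gold heads)."""
    s0 = stack[-1] if stack else -1
    s1 = stack[-2] if len(stack) >= 2 else -1
    b0 = buffer[0] if buffer else -1
    return torch.tensor([
        float(len(stack)), float(len(buffer)),
        1.0 if s1 > 0 and gold_heads.get(s1) == s0 else 0.0,   # LEFT-ARC would be gold
        1.0 if s0 > 0 and gold_heads.get(s0) == s1 else 0.0,   # RIGHT-ARC would be gold
        1.0 if s0 > 0 and gold_heads.get(s0) == b0 else 0.0,   # head of s0 still in buffer
        1.0 if s0 >= 0 and any(gold_heads.get(b) == s0 for b in buffer) else 0.0,
    ])

def reward(arc, gold_heads):
    """Assumed shaping: +1 for a gold arc, -1 for a wrong arc, 0 for SHIFT."""
    if arc is None:
        return 0.0
    head, dep = arc
    return 1.0 if gold_heads.get(dep) == head else -1.0

q_net = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.9, 0.2                               # discount and exploration rate

def train_on_sentence(gold_heads, n_words, episodes=50):
    """One-step Q-learning over repeated parses of a single gold tree."""
    for _ in range(episodes):
        stack, buffer = [0], list(range(1, n_words + 1))
        while buffer or len(stack) > 1:
            acts = legal(stack, buffer)
            q = q_net(features(stack, buffer, gold_heads))
            a = random.choice(acts) if random.random() < eps \
                else max(acts, key=lambda i: q[i].item())
            stack2, buffer2, arc = step(stack, buffer, a)
            r = reward(arc, gold_heads)
            done = not buffer2 and len(stack2) == 1
            with torch.no_grad():                   # bootstrap target from next state
                nxt = legal(stack2, buffer2)
                q_next = 0.0 if done or not nxt else max(
                    q_net(features(stack2, buffer2, gold_heads))[i].item() for i in nxt)
            loss = (q[a] - (r + gamma * q_next)) ** 2
            opt.zero_grad(); loss.backward(); opt.step()
            stack, buffer = stack2, buffer2

# Toy gold tree: word 1 attaches to the root, words 2 and 3 attach to word 1.
train_on_sentence({1: 0, 2: 1, 3: 1}, n_words=3)
```

Once trained, such a Q-network can be queried from any configuration, including erroneous ones, which is what lets it stand in for a dynamic oracle when exploring non-gold states during parser training.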
