Pacific-Asia Conference on Knowledge Discovery and Data Mining

Best from Top k Versus Top 1: Improving Distant Supervision Relation Extraction with Deep Reinforcement Learning



Abstract

Distant supervision relation extraction is a promising approach to finding new relation instances in large text corpora. Most previous works employ the top 1 strategy, i.e., predicting the relation of a sentence to be the one with the highest confidence score, which is not always the optimal solution. To improve distant supervision relation extraction, this work applies the best from top k strategy to explore the possibility of relations with lower confidence scores. We approach the best from top k strategy with a deep reinforcement learning framework, in which the model learns to select the optimal relation among the top k candidates for better predictions. Specifically, we employ a deep Q-network trained to optimize a reward function that reflects the extraction performance under distant supervision. Experiments on three public datasets (news articles, Wikipedia, and biomedical papers) demonstrate that the proposed strategy significantly improves the performance of traditional state-of-the-art relation extractors. We achieve an improvement of 5.13% in average F_1-score over four competitive baselines.
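The core idea of the abstract can be illustrated with a minimal sketch: instead of committing to the classifier's top 1 relation, keep the top k candidates and let a learned Q-function re-rank them. The linear Q-function, feature vectors, and all names below are hypothetical illustrations, not the paper's actual DQN architecture or training procedure.

```python
import numpy as np

def top_k_candidates(scores, k=3):
    """Indices of the k highest-confidence relations (descending)."""
    return np.argsort(scores)[::-1][:k]

def best_from_top_k(sent_feats, scores, q_weights, k=3):
    """Re-rank the top-k candidate relations with a toy linear
    Q-function: Q(s, a) = sent_feats . q_weights[a].
    In the paper this role is played by a trained deep Q-network."""
    cands = top_k_candidates(scores, k)
    q_vals = [sent_feats @ q_weights[a] for a in cands]
    return cands[int(np.argmax(q_vals))]

# Toy example: the classifier's top-1 choice is relation 0, but the
# Q-function prefers relation 2 from among the top-3 candidates.
scores = np.array([0.50, 0.30, 0.15, 0.05])   # classifier confidences
sent_feats = np.array([1.0, 0.0])             # hypothetical sentence features
q_weights = np.array([[0.1, 0.0],             # one weight row per relation
                      [0.2, 0.0],
                      [0.9, 0.0],
                      [0.0, 0.0]])
top1 = int(np.argmax(scores))                 # -> 0
chosen = best_from_top_k(sent_feats, scores, q_weights, k=3)  # -> 2
```

The sketch only shows the action-selection step; the reward signal from distant supervision that trains the Q-network is omitted.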


