Conference on Empirical Methods in Natural Language Processing

Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning


Abstract

In this work, we aim at equipping pre-trained language models with structured knowledge. We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs. Building upon entity-level masked language models, our first contribution is an entity masking scheme that exploits relational knowledge underlying the text. This is fulfilled by using a linked knowledge graph to select informative entities and then masking their mentions. In addition, we use knowledge graphs to obtain distractors for the masked entities, and propose a novel distractor-suppressed ranking objective that is optimized jointly with the masked language model. In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training, to inject language models with structured knowledge via learning from raw text. It is more efficient than retrieval-based methods that perform entity linking and integration during finetuning and inference, and it generalizes more effectively than methods that learn directly from concatenated graph triples. Experiments show that our proposed model achieves improved performance on five benchmarks, including question answering and knowledge base completion.
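The abstract describes the two tasks only at a high level. The following minimal Python sketch illustrates one plausible reading of them: selecting entities whose knowledge-graph neighbors also appear in the same passage, masking their mentions, and penalizing KG-sampled distractors with a ranking term added to the MLM loss. The toy KG, the entity linker assumed to produce mention spans, the function names (`select_informative_entities`, `mask_mentions`, `ranking_loss`), the margin form of the loss, and the weight `lam` are all illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

# Toy linked KG: entity id -> set of (relation, neighbor entity id) pairs.
# Ids are Wikidata-style but purely illustrative; in the paper's setting
# both the KG and the entity linker exist upstream.
KG = {
    "Q64":  {("capital_of", "Q183")},   # e.g. Berlin -> Germany
    "Q183": {("capital", "Q64")},
    "Q90":  set(),                      # entity with no in-text relations
}

def select_informative_entities(mentions, kg):
    """Keep mentions whose entity has a KG neighbor that is also mentioned
    in the same passage, i.e. mentions grounded by relational knowledge."""
    in_text = {eid for eid, _ in mentions}
    keep = []
    for eid, span in mentions:
        neighbors = {nb for _, nb in kg.get(eid, ())}
        if neighbors & (in_text - {eid}):
            keep.append((eid, span))
    return keep

def mask_mentions(token_ids, spans, mask_id):
    """Replace all tokens inside the selected mention spans with [MASK]."""
    out = list(token_ids)
    for start, end in spans:
        out[start:end] = [mask_id] * (end - start)
    return out

def ranking_loss(ctx, true_emb, distractor_embs, margin=1.0):
    """One plausible distractor-suppressed ranking term: a margin loss.
    The contextual vector at the masked mention should score the gold
    entity above every distractor sampled from the KG.
    ctx: (d,)  true_emb: (d,)  distractor_embs: (k, d)
    """
    pos = ctx @ true_emb          # scalar score for the gold entity
    neg = distractor_embs @ ctx   # (k,) scores for the distractors
    return F.relu(margin - pos + neg).mean()

# --- Tiny end-to-end illustration with made-up tensors --------------------
mentions = [("Q64", (2, 3)), ("Q183", (7, 8)), ("Q90", (10, 11))]
selected = select_informative_entities(mentions, KG)   # Q64, Q183 survive
masked = mask_mentions(list(range(12)), [s for _, s in selected], mask_id=103)

d, k = 8, 4
ctx, gold = torch.randn(d), torch.randn(d)
distractors = torch.randn(k, d)           # e.g. KG neighbors of the gold entity
lam = 0.5                                 # hypothetical weighting hyperparameter
mlm_loss = torch.tensor(0.0)              # placeholder for the real MLM loss
loss = mlm_loss + lam * ranking_loss(ctx, gold, distractors)
```

Under this reading, the KG is consulted only to decide what to mask and which negatives to rank against during pre-training; at finetuning and inference time the model runs on raw text alone, which is the efficiency contrast the abstract draws with retrieval-based methods.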