Venue: Annual Meeting of the Association for Computational Linguistics

The (Non-)Utility of Structural Features in BiLSTM-based Dependency Parsers



Abstract

Classical non-neural dependency parsers put considerable effort into the design of feature functions. In particular, they benefit from information coming from structural features, such as features drawn from neighboring tokens in the dependency tree. In contrast, their BiLSTM-based successors achieve state-of-the-art performance without explicit information about the structural context. In this paper we aim to answer the question: how much structural context are the BiLSTM representations able to capture implicitly? We show that features drawn from partial subtrees become redundant when BiLSTMs are used. We provide a deep insight into the information flow in transition- and graph-based neural architectures to demonstrate where the implicit information comes from when the parsers make their decisions. Finally, with model ablations we demonstrate that the structural context is not only present in the models, but that it significantly influences their performance.
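The contrast the abstract draws can be illustrated with a toy feature-template sketch (not the paper's code; all function and variable names are hypothetical): a classical transition-based parser inspects partial-subtree context such as the children of the stack top, while a BiLSTM-based parser (in the style of Kiperwasser and Goldberg, 2016) typically looks only at a few core positions and relies on the contextual encoder to capture structure implicitly.

```python
# Illustrative sketch, not the paper's implementation. It contrasts the
# feature templates of a classical transition-based parser with the
# minimal positional template common in BiLSTM parsers. Token positions
# stand in for the vectors that would be fed to a classifier.

def classical_features(stack, buffer, lefts, rights):
    """Structural template: besides the stack top (s0) and buffer front
    (b0), it draws features from the partial subtrees built so far
    (leftmost/rightmost children of s0) -- exactly the kind of feature
    the paper shows to be redundant once BiLSTM encodings are used."""
    s0 = stack[-1] if stack else None
    b0 = buffer[0] if buffer else None
    feats = {"s0": s0, "b0": b0}
    if s0 is not None:
        feats["s0_left_child"] = lefts.get(s0)    # leftmost dependent of s0
        feats["s0_right_child"] = rights.get(s0)  # rightmost dependent of s0
    return feats

def bilstm_features(stack, buffer):
    """BiLSTM-style template: only the core positions; structural
    context is expected to be captured implicitly by the encoder."""
    return {"s0": stack[-1] if stack else None,
            "b0": buffer[0] if buffer else None}
```

For example, in a state where token 2 is on the stack, tokens 3 and 4 are in the buffer, and token 1 has already been attached as the left child of token 2, the classical template exposes that attachment explicitly, while the BiLSTM template leaves it to the encoder.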


