IEEE/ACM Transactions on Audio, Speech, and Language Processing

Dependency-to-Dependency Neural Machine Translation

Abstract

Recent research has shown that syntactic knowledge is effective in improving the performance of neural machine translation (NMT). Most previous work focuses on leveraging either source or target syntax in the recurrent neural network (RNN) based encoder-decoder model. In this paper, we simultaneously use both source and target dependency trees to improve the NMT model. First, we propose a simple but effective syntax-aware encoder to incorporate the source dependency tree into NMT. The new encoder enriches each source state with dependency relations from the tree. Then, we propose a novel sequence-to-dependency framework, in which the target translation and its corresponding dependency tree are jointly constructed and modeled. During decoding, the tree structure is used as context to facilitate word generation. Finally, we extend the sequence-to-dependency framework with the syntax-aware encoder to build a dependency-to-dependency NMT model, and we also apply the dependency-based framework to the Transformer. Experimental results on several translation tasks show that both source and target dependency structures improve translation quality and that their effects are cumulative.
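
The abstract describes the syntax-aware encoder only at a high level: each source state is enriched with dependency relations from the source tree. The following is a minimal, illustrative sketch in PyTorch of one way such an enrichment could be realized; it is not the authors' implementation, and the class name SyntaxAwareEncoder, the dep_heads input, and the gather-and-merge scheme are assumptions made for this example.

    # Illustrative sketch only (not the paper's code): enrich each source
    # hidden state with the hidden state of its dependency head.
    import torch
    import torch.nn as nn

    class SyntaxAwareEncoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=256, hid_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # Bidirectional GRU over the source sentence.
            self.rnn = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
            # Combine a token's own state with the state of its dependency head.
            self.merge = nn.Linear(4 * hid_dim, 2 * hid_dim)

        def forward(self, src_ids, dep_heads):
            # src_ids:   (batch, src_len) token ids
            # dep_heads: (batch, src_len) index of each token's head in the
            #            dependency tree (the root may point to itself here).
            emb = self.embed(src_ids)              # (batch, src_len, emb_dim)
            states, _ = self.rnn(emb)              # (batch, src_len, 2*hid_dim)

            # Gather the hidden state of each token's dependency head.
            idx = dep_heads.unsqueeze(-1).expand(-1, -1, states.size(-1))
            head_states = torch.gather(states, 1, idx)

            # Enrich each source state with its head's state.
            enriched = torch.tanh(self.merge(torch.cat([states, head_states], dim=-1)))
            return enriched                        # syntax-aware source states

    # Toy usage: one sentence of 4 tokens; dep_heads[i] is the index of token i's head.
    enc = SyntaxAwareEncoder(vocab_size=100)
    src = torch.tensor([[5, 17, 3, 42]])
    heads = torch.tensor([[1, 1, 3, 1]])           # token 1 acts as the root
    out = enc(src, heads)                          # shape (1, 4, 512)

The paper's sequence-to-dependency decoder, which jointly generates the target translation and its dependency tree and uses the partial tree as decoding context, is not sketched here; the example only illustrates the source-side enrichment idea.
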