
When Are Tree Structures Necessary for Deep Learning of Representations?

Conference on Empirical Methods in Natural Language Processing


Abstract

Recursive neural models, which use syntactic parse trees to recursively generate representations bottom-up, are a popular architecture. However, there have not been rigorous evaluations showing for exactly which tasks this syntax-based method is appropriate. In this paper, we benchmark recursive neural models against sequential recurrent neural models, enforcing apples-to-apples comparison as much as possible. We investigate 4 tasks: (1) sentiment classification at the sentence level and phrase level; (2) matching questions to answer-phrases; (3) discourse parsing; (4) semantic relation extraction. Our goal is to understand better when, and why, recursive models can outperform simpler models. We find that recursive models help mainly on tasks (like semantic relation extraction) that require long-distance connection modeling, particularly on very long sequences. We then introduce a method for allowing recurrent models to achieve similar performance: breaking long sentences into clause-like units at punctuation and processing them separately before combining. Our results thus help understand the limitations of both classes of models, and suggest directions for improving recurrent models.
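To make the bottom-up composition concrete, the following is a minimal PyTorch sketch of the kind of recursive composition the abstract refers to: leaves get word embeddings, and each internal node of a binary parse tree combines its children with a single learned transformation. The names `TreeNode` and `RecursiveComposer`, the tanh composition function, and all dimensions are illustrative assumptions, not the paper's exact models.

```python
import torch
import torch.nn as nn

class TreeNode:
    """A binary parse-tree node: either a leaf word or a (left, right) pair."""
    def __init__(self, word=None, left=None, right=None):
        self.word, self.left, self.right = word, left, right

class RecursiveComposer(nn.Module):
    """Bottom-up composition over a binary parse tree:
    rep(leaf) = embedding; rep(node) = tanh(W [rep(left); rep(right)])."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.vocab = vocab                        # word -> integer id
        self.embed = nn.Embedding(len(vocab), dim)
        self.compose = nn.Linear(2 * dim, dim)

    def forward(self, node):
        if node.word is not None:                 # leaf: look up its embedding
            return self.embed(torch.tensor(self.vocab[node.word]))
        left = self.forward(node.left)            # recurse into subtrees
        right = self.forward(node.right)
        return torch.tanh(self.compose(torch.cat([left, right])))

# A three-word phrase parsed as (tree (structures help)):
vocab = {"tree": 0, "structures": 1, "help": 2}
tree = TreeNode(left=TreeNode(word="tree"),
                right=TreeNode(left=TreeNode(word="structures"),
                               right=TreeNode(word="help")))
rep = RecursiveComposer(vocab)(tree)              # one vector for the phrase
```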
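The clause-splitting idea for recurrent models can likewise be sketched. The version below splits a token sequence into clause-like units at punctuation, encodes each unit with a shared LSTM, and averages the resulting vectors; the punctuation set and the averaging combiner are assumptions made for illustration, and the paper's exact combination step may differ.

```python
import torch
import torch.nn as nn

class ClauseLSTMEncoder(nn.Module):
    """Encode a sentence by splitting it into clause-like units at
    punctuation, running a shared LSTM over each unit separately,
    and averaging the per-unit final hidden states."""
    def __init__(self, vocab, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.vocab = vocab                        # word -> integer id
        self.embed = nn.Embedding(len(vocab), embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode_unit(self, ids):
        emb = self.embed(torch.tensor([ids]))     # (1, unit_len, embed_dim)
        _, (h_n, _) = self.lstm(emb)              # final hidden state
        return h_n[0, 0]                          # (hidden_dim,)

    def forward(self, tokens):
        units, current = [], []
        for tok in tokens:                        # split at clause punctuation
            current.append(self.vocab[tok])
            if tok in {",", ";", ":"}:
                units.append(current)
                current = []
        if current:
            units.append(current)
        reps = [self.encode_unit(u) for u in units]
        return torch.stack(reps).mean(dim=0)      # combine by averaging

tokens = "the plot was slow , but the acting was superb".split()
vocab = {w: i for i, w in enumerate(dict.fromkeys(tokens))}
sentence_rep = ClauseLSTMEncoder(vocab)(tokens)   # one vector per sentence
```

Because each clause-like unit is short, the LSTM never has to carry information across very long spans, which is one plausible reading of why this simple preprocessing step lets recurrent models approach the recursive models' performance.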

