Conference of the European Chapter of the Association for Computational Linguistics

A Systematic Study of Neural Discourse Models for Implicit Discourse Relation

Abstract

Inferring implicit discourse relations in natural language text is the most difficult subtask in discourse parsing. Many neural network models have been proposed to tackle this problem. However, evaluation for this task has not been unified, so it is hard to draw clear conclusions about the effectiveness of the various architectures. Here, we propose neural network models based on feedforward and long short-term memory (LSTM) architectures and systematically study the effects of varying their structures. To our surprise, the best-configured feedforward architecture outperforms LSTM-based models in most cases, despite thorough tuning. Further, we compare our best feedforward system with competitive convolutional and recurrent networks and find that the feedforward model can actually be more effective. For the first time for this task, we compile and publish outputs from previous neural and non-neural systems to establish a standard for further comparison.
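The sketch below illustrates, in simplified form, the two architecture families the abstract compares for classifying the sense of an implicit relation between two discourse arguments: a feedforward classifier over pooled word embeddings of each argument, and an LSTM-based variant that encodes each argument sequentially. This is not the authors' implementation; all layer sizes, class counts, and names are hypothetical assumptions for illustration.

```python
# Minimal sketch (assumed, not the paper's code) of a feedforward vs. LSTM
# classifier for implicit discourse relation sense prediction.
# The 4-way sense inventory and all dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class FeedforwardDiscourseClassifier(nn.Module):
    """Mean-pool each argument's word embeddings, concatenate, then apply an MLP."""

    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300, num_senses=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, num_senses),
        )

    def forward(self, arg1_ids, arg2_ids):
        a1 = self.emb(arg1_ids).mean(dim=1)   # (batch, emb_dim)
        a2 = self.emb(arg2_ids).mean(dim=1)
        return self.mlp(torch.cat([a1, a2], dim=-1))  # sense logits


class LSTMDiscourseClassifier(nn.Module):
    """Encode each argument with a shared LSTM; classify from the final hidden states."""

    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300, num_senses=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, num_senses)

    def forward(self, arg1_ids, arg2_ids):
        _, (h1, _) = self.lstm(self.emb(arg1_ids))
        _, (h2, _) = self.lstm(self.emb(arg2_ids))
        return self.out(torch.cat([h1[-1], h2[-1]], dim=-1))


if __name__ == "__main__":
    # Toy batch: 2 relation instances, each argument padded to length 6.
    arg1 = torch.randint(1, 1000, (2, 6))
    arg2 = torch.randint(1, 1000, (2, 6))
    print(FeedforwardDiscourseClassifier(1000)(arg1, arg2).shape)  # torch.Size([2, 4])
    print(LSTMDiscourseClassifier(1000)(arg1, arg2).shape)         # torch.Size([2, 4])
```

Under this reading, the paper's finding is that the simpler feedforward family, once its structure and hyperparameters are tuned, matches or beats the sequential LSTM encoder on most sense labels.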
