3rd Workshop on Computational Creativity in Natural Language Generation

Discourse Embellishment Using a Deep Encoder-Decoder Network


Abstract

We suggest a new NLG task in the context of the discourse generation pipeline of computational storytelling systems. This task, textual embellishment, is defined as taking a text as input and generating a semantically equivalent output with increased lexical and syntactic complexity. Ideally, this would allow the authors of computational storytellers to implement only a lightweight NLG system and use a domain-independent embellishment module to translate its output into more literary text. We present promising first results on this task using LSTM Encoder-Decoder networks trained on the Wiki-Large dataset. Furthermore, we introduce "Compiled Computer Tales", a corpus of computationally generated stories that can be used to test the capabilities of embellishment algorithms.
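To make the task definition concrete, here is a toy, rule-based sketch of the embellishment interface: plain text in, a semantically equivalent but lexically and syntactically richer paraphrase out. The paper's actual system is a learned LSTM Encoder-Decoder, not rules; the lexicon and the `embellish` function below are purely illustrative assumptions that mirror the input/output contract only.

```python
# Hypothetical substitutions that raise lexical complexity.
# (Illustrative only -- not taken from the paper.)
LEXICON = {
    "big": "enormous",
    "sad": "sorrowful",
    "walked": "wandered",
    "house": "dwelling",
}

def embellish(text: str) -> str:
    """Return a more 'literary' paraphrase of `text`.

    Increases lexical complexity via synonym substitution and
    syntactic complexity by joining two short sentences with a
    subordinating connective, while preserving the meaning.
    """
    # Lexical step: swap simple words for rarer synonyms,
    # keeping any trailing punctuation attached to the token.
    words = []
    for token in text.split():
        core = token.strip(".,")
        suffix = token[len(core):]
        words.append(LEXICON.get(core.lower(), core) + suffix)
    text = " ".join(words)

    # Syntactic step: merge the first two sentences into one
    # complex sentence with a subordinating connective.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if len(sentences) >= 2:
        merged = (sentences[0] + ", whereupon "
                  + sentences[1][0].lower() + sentences[1][1:])
        sentences = [merged] + sentences[2:]
    return ". ".join(sentences) + "."

print(embellish("The man walked to the big house. He was sad."))
# -> The man wandered to the enormous dwelling, whereupon he was sorrowful.
```

A learned embellishment module would replace this hand-written mapping with a sequence-to-sequence model, but the evaluation question is the same: does the output preserve meaning while increasing lexical and syntactic complexity?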