Boosting Dialog Response Generation

Abstract

Neural models have become one of the most important approaches to dialog response generation. However, they still tend to produce the most common and generic responses in the corpus. To address this problem, we designed an iterative training process and ensemble method based on boosting. We combined our method with base models built on different training and decoding paradigms, including mutual-information-based decoding and reward-augmented maximum likelihood learning. Empirical results show that our approach significantly improves the diversity and relevance of the responses generated by all base models, as measured by both objective metrics and human evaluation.
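The boosting-style iterative training the abstract describes can be sketched as a reweighting loop: each round fits a base model on the weighted data, then up-weights the examples the model still handles poorly (e.g. prompts for which it falls back on generic responses), so later models in the ensemble focus on rarer, harder responses. This is a minimal illustrative sketch, not the paper's implementation; `train_fn` and `loss_fn` are hypothetical placeholders standing in for the base dialog model's training and evaluation procedures.

```python
import math

def boosted_training(dataset, train_fn, loss_fn, rounds=3):
    """Boosting-style iterative training sketch (names hypothetical).

    dataset  : list of training examples
    train_fn : fits a base model given (dataset, example_weights)
    loss_fn  : per-example loss of a trained model, higher = handled worse
    Returns the list of trained base models (the ensemble).
    """
    n = len(dataset)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Fit a base responder on the current example weights.
        model = train_fn(dataset, weights)
        losses = [loss_fn(model, ex) for ex in dataset]
        # Up-weight poorly handled examples so the next round focuses on them.
        weights = [w * math.exp(l) for w, l in zip(weights, losses)]
        total = sum(weights)
        weights = [w / total for w in weights]
        ensemble.append(model)
    return ensemble
```

At inference time the ensemble's members can be combined (e.g. by averaging or reranking their candidate responses); the exponential reweighting above mirrors the classic AdaBoost update, which is one natural instantiation of the boosting idea.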
