Pacific Asia Conference on Language, Information and Computation

A Comparison Study of Human-Evaluated Automated Highlighting Systems



Abstract

Automatic text highlighting aims to identify key portions that are most important to a reader. In this paper, we explore the use of existing extractive summarization models for automatically generating highlights; automatic highlight generation has not previously been addressed from this perspective. Evaluation studies typically rely on automated evaluation metrics as they are cheap to compute and scale well. However, these metrics are not designed to assess automated highlighting. We therefore focus on human evaluations in this work. Our comparison of multiple summarization models used for automated highlighting accompanied by human evaluation provides an approximate upper bound of the quality of future highlighting models.
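The abstract proposes reusing extractive summarization models to select highlights. As a rough illustration of the extractive approach (not the paper's actual models), the sketch below scores sentences by average word frequency and keeps the top-k as candidate highlights; the function name and scoring rule are invented for this example.

```python
import re
from collections import Counter

def extract_highlights(text, k=2):
    """Toy frequency-based extractive baseline: rank sentences by the
    average corpus frequency of their words and return the top-k
    sentences as highlights, preserving document order."""
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Word frequencies over the whole document.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sent):
        toks = re.findall(r"[a-z']+", sent.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                    reverse=True)
    keep = sorted(ranked[:k])  # restore original order for readability
    return [sentences[i] for i in keep]

doc = ("Automatic highlighting marks key passages for a reader. "
       "Extractive summarizers already rank sentences by importance. "
       "Reusing their rankings yields highlights without new training.")
print(extract_highlights(doc, k=2))
```

Real extractive summarizers use far stronger sentence scorers, but the select-and-return-verbatim structure is the same, which is why the paper can repurpose them for highlighting.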


