International Computer Conference, Computer Society of Iran

Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization

Abstract

Text summarization is one of the most critical Natural Language Processing (NLP) tasks, and more research is conducted in this field every day. Pre-trained transformer-based encoder-decoder models have begun to gain popularity for these tasks. This paper proposes two methods to address this task and introduces a novel dataset named pn-summary for Persian abstractive text summarization. The models employed in this paper are mT5 and an encoder-decoder version of the ParsBERT model (i.e., a monolingual BERT model for Persian). These models are fine-tuned on the pn-summary dataset. The current work is the first of its kind and, by achieving promising results, can serve as a baseline for future work.
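The abstract describes two setups: fine-tuning mT5 directly, and warm-starting an encoder-decoder model from ParsBERT. The following is a minimal sketch of how such setups are typically built with the Hugging Face transformers library; it is not the authors' released code, and the checkpoint names are public IDs assumed for illustration rather than taken from the paper.

```python
# Sketch of the two model setups described in the abstract (assumptions noted below).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, EncoderDecoderModel

# 1) mT5: a multilingual seq2seq model that can be fine-tuned on
#    (article, summary) pairs such as those in pn-summary.
mt5_tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
mt5_model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

# 2) ParsBERT warm-started as both encoder and decoder (a BERT2BERT-style
#    encoder-decoder), corresponding to the "encoder-decoder version of
#    the ParsBERT model". The checkpoint ID below is an assumption.
parsbert = "HooshvareLab/bert-base-parsbert-uncased"
bert_tokenizer = AutoTokenizer.from_pretrained(parsbert)
bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained(parsbert, parsbert)
# Decoder-side special tokens must be set before fine-tuning or generation.
bert2bert.config.decoder_start_token_id = bert_tokenizer.cls_token_id
bert2bert.config.pad_token_id = bert_tokenizer.pad_token_id

def summarize(model, tokenizer, article: str, max_new_tokens: int = 128) -> str:
    """Beam-search summary of a single Persian article (meaningful only after fine-tuning)."""
    inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
    ids = model.generate(
        inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        num_beams=4,
        max_new_tokens=max_new_tokens,
    )
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

Either model would then be fine-tuned on pn-summary article/summary pairs with a standard sequence-to-sequence training loop (e.g., transformers' Seq2SeqTrainer) before being used for generation.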
