Open Access Library Journal

Summary of Research Methods on Pre-Training Models of Natural Language Processing

Abstract

In recent years, deep learning technology has been widely adopted and developed, and pre-training models have come into increasingly wide use in natural language processing tasks. Whether the task is sentence extraction or sentiment analysis of text, the pre-training model plays a very important role. Unsupervised pre-training on a large-scale corpus has proven to be an excellent and effective way to initialize models. This article summarizes the existing pre-training models, sorts out the improvements and processing methods of the relatively new pre-training models, and finally summarizes the challenges facing current pre-training models and their prospects.
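As an illustration of the unsupervised pre-training idea the abstract refers to, the sketch below shows a minimal masked-token corruption step of the kind used in masked language modeling: a fraction of token ids is replaced by a mask id, and the model's objective is to recover the original tokens at exactly those positions. This is a hedged toy example, not the paper's method; the function name `mask_tokens`, the toy token ids, and the 15% masking rate are illustrative assumptions.

```python
import random

def mask_tokens(token_ids, mask_id, mask_prob=0.15, seed=0):
    """Randomly replace a fraction of tokens with a mask id.

    Returns the corrupted sequence and the prediction targets:
    the original id at each masked position, None elsewhere
    (no loss is computed at unmasked positions).
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    corrupted, targets = [], []
    for tid in token_ids:
        if rng.random() < mask_prob:
            corrupted.append(mask_id)  # hide the token from the model
            targets.append(tid)        # model must recover this id
        else:
            corrupted.append(tid)
            targets.append(None)
    return corrupted, targets

# Toy token ids standing in for a tokenized sentence.
tokens = [5, 12, 7, 9, 3, 14, 2, 8]
corrupted, targets = mask_tokens(tokens, mask_id=0)
```

During pre-training, a model would be trained to predict each non-`None` target from the corrupted sequence; because the targets come from the raw corpus itself, no human labels are needed, which is what makes very large corpora usable.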
