International Conference on Intelligent Science and Big Data Engineering

A New Method of Improving BERT for Text Classification


Abstract

Text classification is a basic task in natural language processing. Recently, pre-trained models such as BERT have achieved outstanding results compared with previous methods. However, BERT fails to take into account local information in the text, such as sentences and phrases. In this paper, we present a BERT-CNN model for text classification. By adding a CNN to the task-specific layers of the BERT model, our model can capture the information of important fragments in the text. In addition, we feed the local representation, along with the output of BERT, into a transformer encoder in order to take advantage of the self-attention mechanism, and finally obtain the representation of the whole text through the transformer layer. Extensive experiments demonstrate that our model achieves competitive performance against state-of-the-art baselines on four benchmark datasets.
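For readers who want to experiment with the idea described in the abstract, the following is a minimal PyTorch sketch of a BERT-CNN style classifier, assuming the HuggingFace transformers library. The kernel sizes, filter counts, pooling choice, and the single extra transformer encoder layer are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of a BERT-CNN style classifier (PyTorch + HuggingFace
# transformers). Hyperparameters below (kernel sizes, filter counts, one
# extra encoder layer, mean pooling) are assumptions for illustration,
# not the paper's reported setup.
import torch
import torch.nn as nn
from transformers import BertModel

class BertCnnClassifier(nn.Module):
    def __init__(self, num_classes, kernel_sizes=(2, 3, 4), num_filters=256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size  # 768 for bert-base
        # 1-D convolutions over the token representations capture local
        # fragments (n-grams) that plain BERT pooling tends to miss.
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, num_filters, k, padding=k // 2)
            for k in kernel_sizes
        )
        # Project the concatenated CNN features back to BERT's hidden size
        # so they can be fed to a transformer encoder alongside BERT output.
        self.proj = nn.Linear(num_filters * len(kernel_sizes), hidden)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=8, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=1)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        tokens = out.last_hidden_state             # (B, T, H)
        x = tokens.transpose(1, 2)                 # (B, H, T) for Conv1d
        # Trim each conv output to length T so different kernel sizes align.
        local = torch.cat(
            [torch.relu(c(x))[:, :, : tokens.size(1)] for c in self.convs],
            dim=1,
        )                                          # (B, F*K, T)
        local = self.proj(local.transpose(1, 2))   # (B, T, H)
        # Feed the local n-gram features together with BERT's contextual
        # output into a transformer encoder (self-attention over both).
        fused = self.encoder(torch.cat([tokens, local], dim=1))
        pooled = fused.mean(dim=1)                 # whole-text representation
        return self.classifier(pooled)
```

In this sketch the fused sequence is mean-pooled into a single vector before classification; the paper only states that the whole-text representation comes from the transformer layer, so the pooling strategy here is a guess, and something like taking the [CLS] position would be an equally plausible reading.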
