International Conference on Computational Linguistics

Understanding Pre-trained BERT for Aspect-based Sentiment Analysis



Abstract

This paper analyzes the pre-trained hidden representations that BERT learns from reviews for tasks in aspect-based sentiment analysis (ABSA). Our work is motivated by recent progress in BERT-based language models for ABSA. However, it is not clear how the general proxy task of (masked) language modeling, trained on an unlabeled corpus without annotations of aspects or opinions, can provide important features for downstream tasks in ABSA. By leveraging the annotated datasets in ABSA, we investigate both the attentions and the learned representations of BERT pre-trained on reviews. We found that BERT uses very few self-attention heads to encode context words (such as prepositions or pronouns that indicate an aspect) and opinion words for an aspect. Most features in the representation of an aspect are dedicated to the fine-grained semantics of the domain (or product category) and the aspect itself, instead of carrying summarized opinions from its context. We hope this investigation can help future research in improving self-supervised learning, unsupervised learning, and fine-tuning for ABSA.
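To make the attention-probing idea concrete, below is a minimal sketch (not the authors' code) of how one could inspect which context words a BERT self-attention head attends to from an aspect token, using the Hugging Face transformers library. The model name, example sentence, and aspect term are illustrative assumptions, and the paper's actual analysis uses review-domain pre-trained BERT and annotated ABSA datasets.

```python
# Minimal sketch: extract per-head attention from an aspect token to its context.
# Assumptions: bert-base-uncased stands in for a review-pretrained BERT, and the
# sentence/aspect below are made-up examples, not from the paper's datasets.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

sentence = "The battery life of this laptop is amazing."
aspect = "battery"  # aspect term whose attention pattern we want to inspect

inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
aspect_idx = tokens.index(aspect)

# For every layer and head, list the context tokens the aspect attends to most;
# heads that consistently point at opinion words or aspect-indicating function
# words are the kind of heads the paper's analysis looks for.
for layer, att in enumerate(outputs.attentions):
    weights = att[0, :, aspect_idx, :]  # (num_heads, seq_len)
    for head in range(weights.shape[0]):
        top = torch.topk(weights[head], k=3)
        top_tokens = [tokens[i] for i in top.indices.tolist()]
        print(f"layer {layer} head {head}: {top_tokens}")
```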
