Workshop on Knowledge Extraction and Integration for Deep Learning Architectures

What Makes My Model Perplexed? A Linguistic Investigation on Neural Language Models Perplexity

Abstract

This paper presents an investigation of how the linguistic structure of a sentence affects the perplexity of two of the most popular Neural Language Models (NLMs): BERT and GPT-2. We first compare the sentence-level likelihood computed with BERT and GPT-2's perplexity, showing that the two metrics are correlated. In addition, we exploit linguistic features capturing a wide set of morpho-syntactic and syntactic phenomena, showing how they contribute to predicting the perplexity of the two NLMs.
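To make the two metrics concrete, below is a minimal sketch (not the paper's code) of how they are commonly computed with the Hugging Face transformers library: GPT-2's perplexity as the exponential of the mean token-level cross-entropy, and BERT's sentence-level likelihood as a pseudo-log-likelihood obtained by masking one token at a time. The model checkpoints, the scoring loop, and the example sentence are illustrative assumptions, not details taken from the paper.

import torch
from transformers import (
    GPT2LMHeadModel, GPT2TokenizerFast,
    BertForMaskedLM, BertTokenizerFast,
)

def gpt2_perplexity(sentence, model, tokenizer):
    # Standard left-to-right perplexity: exp of the mean token negative
    # log-likelihood. Passing labels=input_ids makes the model return
    # the mean cross-entropy loss over the (shifted) tokens.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def bert_pseudo_log_likelihood(sentence, model, tokenizer):
    # Sentence-level likelihood for a masked LM: sum of the log-probability
    # assigned to each token when it alone is replaced by [MASK].
    ids = tokenizer(sentence, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()
gpt2_tok = GPT2TokenizerFast.from_pretrained("gpt2")
bert = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()
bert_tok = BertTokenizerFast.from_pretrained("bert-base-uncased")

s = "The linguistic structure of a sentence affects model perplexity."
print("GPT-2 perplexity:", gpt2_perplexity(s, gpt2, gpt2_tok))
print("BERT pseudo-log-likelihood:", bert_pseudo_log_likelihood(s, bert, bert_tok))

A higher GPT-2 perplexity and a lower (more negative) BERT pseudo-log-likelihood both indicate a sentence the model finds harder to predict, which is why the two scores can be compared for correlation as the abstract describes.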
