Published in: 7th Workshop on Cognitive Aspects of Computational Language Learning

Generalization in Artificial Language Learning: Modelling the Propensity to Generalize



Abstract

Experiments in Artificial Language Learning have revealed much about the cognitive mechanisms underlying sequence and language learning in human adults, infants, and non-human animals. This paper focuses on their ability to generalize to novel grammatical instances (i.e., instances consistent with a familiarization pattern). Notably, the propensity to generalize appears to be negatively correlated with the amount of exposure to the artificial language, a fact that has been claimed to be contrary to the predictions of statistical models (Peña et al., 2002; Endress and Bonatti, 2007). In this paper, we propose to model generalization as a three-step process, and we demonstrate that the use of statistical models for the first two steps, contrary to widespread intuitions in the ALL field, can explain the observed decrease in the propensity to generalize with exposure time.
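The claim that statistical models can predict decreasing generalization with exposure may seem counterintuitive, but a simple illustration makes it plausible. The sketch below is not the paper's three-step model; it is a minimal, hypothetical Bayesian comparison of a narrow hypothesis (only the familiarized items are grammatical) against a broader, generalizing hypothesis, using the "size principle" likelihood, under which the posterior weight on the broad hypothesis shrinks as the number of consistent familiarization items grows:

```python
def posterior_general(n_items: int,
                      size_specific: int = 4,
                      size_general: int = 16,
                      prior_general: float = 0.5) -> float:
    """Posterior probability of the broad (generalizing) hypothesis after
    observing n_items familiarization items, all of which are consistent
    with both hypotheses.

    Size-principle likelihood: each item is drawn uniformly from the
    hypothesis's extension, so P(data | h) = (1 / |h|) ** n_items.
    The hypothesis sizes and prior here are arbitrary illustrative values.
    """
    like_specific = (1.0 / size_specific) ** n_items
    like_general = (1.0 / size_general) ** n_items
    num = prior_general * like_general
    den = num + (1.0 - prior_general) * like_specific
    return num / den


if __name__ == "__main__":
    # The posterior on the generalizing hypothesis falls monotonically
    # with exposure, mirroring the reported negative correlation.
    for n in (1, 2, 4, 8):
        print(n, posterior_general(n))
```

Because every observed item is compatible with both hypotheses, only the likelihood penalty for the larger extension distinguishes them, and that penalty compounds with each additional item; more exposure therefore yields less generalization, with no anti-statistical mechanism required.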
