Conference on Empirical Methods in Natural Language Processing (EMNLP)

Invited Speaker: Yoshua Bengio Deep Learning of Semantic Representations



Abstract

The core ingredient of deep learning is the notion of distributed representation. This talk will start by explaining its theoretical advantages in comparison with non-parametric methods based on counting frequencies of occurrence of observed tuples of values (as with n-grams). The talk will then explain how having multiple levels of representation, i.e., depth, can in principle give another exponential advantage. Neural language models have been extremely successful in recent years, but extending their reach from language modeling to machine translation is very appealing because it forces the learned intermediate representations to capture meaning, and we found that the resulting word embeddings are qualitatively different. Recently, we introduced the notion of attention-based encoder-decoder systems, with impressive results on machine translation for several language pairs and for mapping an image to a sentence, and these results will conclude the talk.
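To make the attention-based encoder-decoder idea mentioned in the abstract concrete, below is a minimal sketch of Bahdanau-style additive attention for a single decoder step: the current decoder state is scored against every encoder state, the scores are softmax-normalized, and the resulting weights produce a context vector summarizing the source sentence. It uses NumPy, with hypothetical dimensions and randomly initialized weights standing in for learned parameters; it is an illustration of the general mechanism, not the authors' implementation.

import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def additive_attention(decoder_state, encoder_states, W_dec, W_enc, v):
    """One decoder step of additive attention (illustrative only).

    decoder_state:  shape (d_dec,), current decoder hidden state
    encoder_states: shape (T_src, d_enc), one hidden state per source word
    W_dec, W_enc, v: projection parameters (random here, learned in practice)
    """
    # Score each source position: v . tanh(W_dec h_dec + W_enc h_enc_t)
    scores = np.array([
        v @ np.tanh(W_dec @ decoder_state + W_enc @ h_t)
        for h_t in encoder_states
    ])
    weights = softmax(scores)            # how much to attend to each source word
    context = weights @ encoder_states   # weighted sum of encoder states
    return weights, context

# Toy usage with random parameters in place of learned ones.
rng = np.random.default_rng(0)
T_src, d_enc, d_dec, d_att = 5, 8, 8, 6
encoder_states = rng.normal(size=(T_src, d_enc))
decoder_state = rng.normal(size=d_dec)
W_dec = rng.normal(size=(d_att, d_dec))
W_enc = rng.normal(size=(d_att, d_enc))
v = rng.normal(size=d_att)

weights, context = additive_attention(decoder_state, encoder_states, W_dec, W_enc, v)
print("attention weights:", np.round(weights, 3))  # non-negative, sum to 1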
