
Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions

Abstract

An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions, with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with the words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these referential words, the current study simultaneously deals with logic words, such as “not,” “and,” and “or.” These words do not refer directly to the real world; rather, they are logical operators that contribute to the construction of meaning in sentences. In human–robot communication, such words are likely to be used often. The current study builds a recurrent neural network model with long short-term memory units and trains it to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, mediating between sentences and robot actions, emerge as the network's internal states through the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, while the logic words are represented by the model in accordance with their functions as logical operators. Words such as “true,” “false,” and “not” work as non-linear transformations that encode orthogonal phrases into the same area of the memory cell state space. The word “and,” which required the robot to lift up both its hands, worked as if it were a universal quantifier. The word “or,” which required action generation that appeared random, was represented as an unstable region of the network's dynamical system.
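To make the described setup more concrete, the following is a minimal sketch of an LSTM-based recurrent network that maps a word sequence, together with visual and proprioceptive input at each time step, to robot actions, exposing the memory cell state whose space the paper analyzes. It is written in PyTorch as an assumed framework and is not the authors' implementation; all names and dimensions (WordToActionRNN, vocab_size, action_dim, and so on) are hypothetical.

import torch
import torch.nn as nn

class WordToActionRNN(nn.Module):
    """Hypothetical sketch: word sequence plus context to robot actions."""
    def __init__(self, vocab_size=30, embed_dim=16, vision_dim=10,
                 proprio_dim=8, hidden_dim=64, action_dim=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # A single LSTM receives the current word, the visual features, and
        # the robot's own current state at every step, so that referential
        # words can be merged with that context.
        self.lstm = nn.LSTM(embed_dim + vision_dim + proprio_dim,
                            hidden_dim, batch_first=True)
        self.to_action = nn.Linear(hidden_dim, action_dim)

    def forward(self, word_ids, vision, proprio):
        # word_ids: (batch, T) token indices
        # vision:   (batch, T, vision_dim) visual features
        # proprio:  (batch, T, proprio_dim) current joint angles, etc.
        x = torch.cat([self.embed(word_ids), vision, proprio], dim=-1)
        out, (h_n, c_n) = self.lstm(x)
        # c_n is the LSTM memory cell state; its trajectory is the kind of
        # internal representation the analysis in the paper inspects.
        return self.to_action(out), c_n

# Usage with random tensors standing in for a six-word command sentence.
model = WordToActionRNN()
words = torch.randint(0, 30, (1, 6))
vision = torch.randn(1, 6, 10)
proprio = torch.randn(1, 6, 8)
actions, cell_state = model(words, vision, proprio)
print(actions.shape, cell_state.shape)  # (1, 6, 8) and (1, 1, 64)

Training such a network on pairs of sentences and action sequences, and then inspecting how the cell state evolves for sentences containing “not,” “and,” and “or,” corresponds to the kind of analysis the abstract describes.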
