
Learning to grasp and extract affordances: the Integrated Learning of Grasps and Affordances (ILGA) model

Abstract

The activity of certain parietal neurons has been interpreted as encoding affordances (directly perceivable opportunities) for grasping. Separate computational models have been developed for infant grasp learning and affordance learning, but no single model has yet combined these processes in a neurobiologically plausible way. We present the Integrated Learning of Grasps and Affordances (ILGA) model that simultaneously learns grasp affordances from visual object features and motor parameters for planning grasps using trial-and-error reinforcement learning. As in the Infant Learning to Grasp Model, we model a stage of infant development prior to the onset of sophisticated visual processing of hand-object relations, but we assume that certain premotor neurons activate neural populations in primary motor cortex that synergistically control different combinations of fingers. The ILGA model is able to extract affordance representations from visual object features, learn motor parameters for generating stable grasps, and generalize its learned representations to novel objects.
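
The abstract's core technical claim, that grasp parameters can be learned from visual object features by trial-and-error reinforcement learning, can be illustrated with a minimal sketch. The following Python example is not the ILGA model's actual architecture: it learns a linear map from hypothetical object features to hypothetical grasp parameters with a reward-modulated perturbation update, and the feature/parameter dimensions, the simulated_reward function, and the hidden target mapping are all assumptions made for the demo.

    import numpy as np

    rng = np.random.default_rng(0)

    N_FEATURES = 8  # hypothetical visual object features (size, shape, ...)
    N_PARAMS = 5    # hypothetical grasp parameters (aperture, wrist angle, ...)

    # Hidden "correct" feature-to-grasp mapping, standing in for the physics
    # of a stable grasp; an assumption so the demo has a learnable target.
    HIDDEN = rng.normal(size=(N_PARAMS, N_FEATURES))

    def simulated_reward(features, params):
        # Stand-in for grasp-stability feedback from one attempted grasp:
        # reward is higher when the parameters are closer to the hidden target.
        target = HIDDEN @ features
        return -float(np.sum((params - target) ** 2))

    W = np.zeros((N_PARAMS, N_FEATURES))  # learned feature-to-grasp map
    alpha, sigma = 0.05, 0.1              # learning rate, exploration noise

    for trial in range(5000):
        x = rng.normal(size=N_FEATURES)        # features of a random object
        noise = rng.normal(scale=sigma, size=N_PARAMS)
        params = W @ x + noise                 # exploratory grasp attempt
        r = simulated_reward(x, params)
        baseline = simulated_reward(x, W @ x)  # reward without exploration
        # Trial-and-error update: reinforce perturbations that improved the grasp.
        W += alpha * (r - baseline) * np.outer(noise, x)

    print("residual error:", float(np.linalg.norm(W - HIDDEN)))

Reward-modulated perturbation learning of this kind is one simple way to cast grasp learning as trial and error; the paper's model uses a richer neural architecture, so treat this only as a sketch of the learning principle.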
