IEEE Transactions on Autonomous Mental Development

A Computational Model of Acoustic Packaging


Abstract

In order to learn from and interact with humans, robots need to understand actions and make use of language in social interactions. The use of language for the learning of actions was emphasized by Hirsh-Pasek and Golinkoff (MIT Press, 1996), who introduced the idea of acoustic packaging. Accordingly, it has been suggested that acoustic information, typically in the form of narration, overlaps with action sequences and provides infants with a bottom-up guide to attend to relevant parts and to find structure within them. In this article, we present a computational model of the multimodal interplay of action and language in tutoring situations. For our purpose, we understand events as temporal intervals, which have to be segmented in both the visual and the acoustic modality. Our acoustic packaging algorithm merges the segments from both modalities based on temporal overlap. First evaluation results show that acoustic packaging can provide a meaningful segmentation of action demonstrations within tutoring behavior. We discuss our findings with regard to a meaningful action segmentation. Based on our future vision of acoustic packaging, we lay out a roadmap describing the further development of acoustic packaging and the interactive scenarios it is employed in.
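The core mechanism the abstract describes, merging segments from the acoustic and visual modalities based on temporal overlap, can be illustrated with a minimal sketch. This is not the authors' implementation; all names (`overlaps`, `acoustic_packages`) and the interval representation are illustrative assumptions, grouping each speech segment with the action segments it temporally overlaps:

```python
from typing import Dict, List, Tuple

# Illustrative representation: a segment is a (start_s, end_s) interval in seconds.
Interval = Tuple[float, float]

def overlaps(a: Interval, b: Interval) -> bool:
    """Two intervals overlap if neither one ends before the other starts."""
    return a[0] < b[1] and b[0] < a[1]

def acoustic_packages(acoustic: List[Interval],
                      visual: List[Interval]) -> List[Dict]:
    """Sketch of acoustic packaging: each acoustic (narration) segment is
    bundled with every visual (action) segment it temporally overlaps."""
    packages = []
    for speech in acoustic:
        actions = [v for v in visual if overlaps(speech, v)]
        if actions:
            # The package spans the union of the speech and action intervals.
            start = min(speech[0], min(a[0] for a in actions))
            end = max(speech[1], max(a[1] for a in actions))
            packages.append({"span": (start, end),
                             "speech": speech,
                             "actions": actions})
    return packages

# Hypothetical segmentation of a short tutoring demonstration:
speech_segments = [(0.0, 2.5), (4.0, 6.0)]          # narration intervals
action_segments = [(0.5, 1.5), (2.0, 3.0), (5.0, 5.5)]  # motion intervals
packages = acoustic_packages(speech_segments, action_segments)
```

With these toy intervals the first narration segment packages the first two action segments, and the second narration segment packages the last one.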

