Conference: Computer Vision Systems

Multimodal Interaction Abilities for a Robot Companion


Abstract

Among the cognitive abilities a robot companion must be endowed with, human perception and speech understanding are both fundamental in the context of multimodal human-robot interaction. In order to provide a mobile robot with visual perception of its user and means to handle verbal and multimodal communication, we have developed and integrated two components. In this paper we focus on an interactively distributed multiple-object tracker dedicated to two-handed gestures and head location in 3D. Its relevance is highlighted by on-line and off-line evaluations on data acquired by the robot. Implementation and preliminary experiments on a household robot companion, including speech recognition and understanding as well as basic fusion with gesture, are then demonstrated. The latter illustrate how vision can assist speech by supplying location references and object/person IDs for verbal statements, in order to interpret natural deictic commands given by humans. Extensions of our work are finally discussed.
