ACM/IEEE International Conference on Human-Robot Interaction

Is a Robot a Better Walking Partner if it Associates Utterances with Visual Scenes?



Abstract

We aim to develop a walking partner robot with the capability to select small-talk topics associated with visual scenes. We first collected video sequences from five different locations and prepared a dataset of small-talk topics associated with visual scenes. We then developed a technique to associate the visual scenes with the small-talk topics: we converted visual scenes into lists of words using an off-the-shelf vision library and formed a topic space with Latent Dirichlet Allocation (LDA), in which a list of words is transformed into a topic vector. Finally, the system selects the utterance whose topic vector is most similar to that of the current scene. We tested our technique on the dataset, where it selected appropriate utterances 72% of the time, and conducted an outdoor user study in which participants took a walk with a small shoulder-mounted robot and engaged in small talk. We confirmed that participants perceived the robot using our technique, which selected appropriate utterances, more favorably than a robot that selected utterances at random. They also felt that the former robot was a better walking partner.
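The selection step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the hand-written `WORD_TOPICS` table stands in for a trained LDA model, and the scene words stand in for output from a vision library; all names and values here are hypothetical.

```python
from math import sqrt

# Hypothetical word -> topic weights standing in for a trained LDA model.
# In the paper's pipeline these would come from fitting LDA on the
# small-talk dataset; the values below are illustrative only.
WORD_TOPICS = {
    "tree": [0.9, 0.1, 0.0],   # topic 0 ~ nature
    "park": [0.8, 0.1, 0.1],
    "car":  [0.1, 0.8, 0.1],   # topic 1 ~ traffic
    "road": [0.1, 0.7, 0.2],
    "shop": [0.0, 0.2, 0.8],   # topic 2 ~ shops
    "sign": [0.1, 0.2, 0.7],
}

def topic_vector(words):
    """Average the topic distributions of the recognized words."""
    vecs = [WORD_TOPICS[w] for w in words if w in WORD_TOPICS]
    if not vecs:
        return [0.0, 0.0, 0.0]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(3)]

def cosine(a, b):
    """Cosine similarity between two topic vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def select_utterance(scene_words, utterances):
    """Pick the utterance whose topic vector is closest to the scene's."""
    scene_vec = topic_vector(scene_words)
    return max(utterances, key=lambda u: cosine(scene_vec, topic_vector(u[1])))[0]

# Candidate utterances paired with the keywords they were indexed by.
utterances = [
    ("The trees look lovely today.", ["tree", "park"]),
    ("Traffic is heavy around here.", ["car", "road"]),
    ("That shop has a nice sign.", ["shop", "sign"]),
]

# Words an off-the-shelf vision library might report for the scene.
print(select_utterance(["tree", "park"], utterances))
# -> The trees look lovely today.
```

In the actual system the topic vectors would come from an LDA model trained on the collected dataset rather than a fixed table, but the argmax-over-cosine-similarity selection is the same.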
