
Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations



Abstract

Human–robot interactions are often affected by error situations caused by either the robot or the human. Robots would therefore profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos from five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but often smile, when in an error situation with the robot. Another result is that participants sometimes stop moving at the beginning of error situations. We also found that participants talked more during social norm violations and less during technical failures. Finally, participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking) when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. Builders need to consider adding modules for the recognition and classification of head movements to the robot's input channels. Evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.
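Illustrative note (not from the paper): the recommendation to add head-movement recognition to a robot's input channels could, in its simplest form, amount to a classifier over head-pose time series. The Python sketch below assumes per-frame head-pose angles (pitch, yaw) are already provided by some pose estimator; the function name and threshold are assumptions for illustration only, not values or methods reported in the study.

```python
# Minimal sketch: classify a head-pose time series as "nod", "shake",
# or "still" by comparing motion energy along the pitch (up-down) and
# yaw (left-right) axes. The threshold is an illustrative assumption.

from typing import List, Tuple


def classify_head_movement(
    poses: List[Tuple[float, float]],   # (pitch_deg, yaw_deg) per video frame
    motion_threshold_deg: float = 2.0,  # assumed minimum average per-frame change
) -> str:
    if len(poses) < 2:
        return "still"

    pitch_energy = 0.0
    yaw_energy = 0.0
    for (p_prev, y_prev), (p_cur, y_cur) in zip(poses, poses[1:]):
        pitch_energy += abs(p_cur - p_prev)
        yaw_energy += abs(y_cur - y_prev)

    n = len(poses) - 1
    pitch_energy /= n
    yaw_energy /= n

    # Below the threshold on both axes, treat the head as still.
    if max(pitch_energy, yaw_energy) < motion_threshold_deg:
        return "still"
    # Otherwise, the dominant axis decides: pitch -> nod, yaw -> shake.
    return "nod" if pitch_energy >= yaw_energy else "shake"


if __name__ == "__main__":
    # Short synthetic nod: pitch oscillates, yaw stays roughly constant.
    nod = [(0, 0), (5, 0), (-5, 1), (5, 0), (-4, 1), (0, 0)]
    print(classify_head_movement(nod))  # expected: "nod"
```

Such a module would only cover one of the signals identified in the video analysis; smiling and speech activity would need their own input channels.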

