IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops

Modelling Multi-Channel Emotions Using Facial Expression and Trajectory Cues for Improving Socially-Aware Robot Navigation



Abstract

Using facial expressions and trajectory signals, we present an emotion-aware navigation algorithm for social robots. Our approach uses a combination of Bayesian inference, CNN-based learning, and the Pleasure-Arousal-Dominance (PAD) model from psychology to estimate time-varying emotional behaviors of pedestrians from their faces and trajectories. For each pedestrian, these PAD characteristics are used to generate proxemic constraints. We use a multi-channel model to classify pedestrian features into four categories of emotion (happy, sad, angry, neutral), and observe an emotion-detection accuracy of 85.33% in our validation results. In low- to medium-density environments, we formulate emotion-based proxemic constraints to perform socially aware robot navigation. With Pepper, a social humanoid robot, we demonstrate the benefits of our algorithm in simulated environments with tens of pedestrians as well as in a real-world setting.
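The pipeline the abstract describes, mapping a pedestrian's estimated PAD triple to one of the four discrete emotions and then to a per-emotion proxemic radius, can be sketched as below. The thresholds and distances here are hypothetical placeholders, not the paper's learned mapping; they only illustrate the PAD-to-emotion-to-constraint flow.

```python
from dataclasses import dataclass

@dataclass
class PAD:
    """Pleasure-Arousal-Dominance state, each component in [-1, 1]."""
    pleasure: float
    arousal: float
    dominance: float

def classify_emotion(pad: PAD) -> str:
    """Map a PAD triple to one of the four emotion categories.
    Thresholds are illustrative, not the paper's trained classifier."""
    if abs(pad.pleasure) < 0.2 and abs(pad.arousal) < 0.2:
        return "neutral"
    if pad.pleasure > 0:
        return "happy"
    # Negative pleasure: angry is high-arousal, high-dominance; sad otherwise.
    if pad.arousal > 0 and pad.dominance > 0:
        return "angry"
    return "sad"

def comfort_distance(emotion: str) -> float:
    """Per-emotion proxemic radius in metres, used as a navigation constraint.
    The specific values are assumptions for illustration."""
    return {"happy": 0.45, "neutral": 0.90, "sad": 1.20, "angry": 1.80}[emotion]

# Example: an agitated pedestrian gets a larger comfort radius.
pad = PAD(pleasure=-0.6, arousal=0.7, dominance=0.5)
emotion = classify_emotion(pad)
radius = comfort_distance(emotion)
print(emotion, radius)
```

In a planner, the robot would treat each pedestrian as an obstacle inflated by `radius`, so that, for example, an angry pedestrian is given a wider berth than a happy one.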
