Advanced Robotics: The International Journal of the Robotics Society of Japan

Expressing reactive emotion based on multimodal emotion recognition for natural conversation in human-robot interaction*


Abstract

Human-human interaction consists of various nonverbal behaviors that are often emotion-related. To establish rapport, it is essential that the listener respond with reactive emotions that make sense given the speaker's emotional state. However, human-robot interactions generally fail in this regard because most spoken dialogue systems play only a question-answer role. Aiming for natural conversation, we examine an emotion processing module for a spoken dialogue system, consisting of a user emotion recognition function and a reactive emotion expression function, to improve human-robot interaction. For the emotion recognition function, we propose a method that combines valence from prosody and sentiment from text by decision-level fusion, which considerably improves recognition performance. Moreover, this method reduces fatal recognition errors, thereby improving the user experience. For the reactive emotion expression function, the system's emotion is divided into an emotion category and an emotion level, which are predicted from the parameters estimated by the recognition function, on the basis of distributions inferred from human-human dialogue data. As a result, the emotion processing module can recognize the user's emotion from his or her speech and express a matching reactive emotion. An evaluation with ten participants demonstrated that the system enhanced by this module is effective for conducting natural conversation.
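
Below is a minimal, hypothetical sketch of the decision-level fusion step described in the abstract: a prosody-based valence score and a text-based sentiment score (both assumed to lie in [-1, 1]) are combined by a weighted average, and the fused valence is then mapped to an emotion category and level for the reactive expression. The weights, thresholds, and the three-level intensity scale are illustrative assumptions, not values reported in the paper.

```python
# Illustrative sketch only; not the authors' implementation.
from dataclasses import dataclass


@dataclass
class ReactiveEmotion:
    category: str  # e.g. "positive", "neutral", "negative"
    level: int     # intensity on an assumed 1-3 scale


def fuse_valence_and_sentiment(prosody_valence: float,
                               text_sentiment: float,
                               w_prosody: float = 0.5) -> float:
    """Decision-level fusion: combine the two unimodal decisions
    (both assumed to lie in [-1, 1]) into a single valence score."""
    w_text = 1.0 - w_prosody
    return w_prosody * prosody_valence + w_text * text_sentiment


def to_reactive_emotion(fused_valence: float) -> ReactiveEmotion:
    """Map the fused valence to an emotion category and level.
    Thresholds and the level mapping are hypothetical placeholders."""
    if fused_valence > 0.2:
        category = "positive"
    elif fused_valence < -0.2:
        category = "negative"
    else:
        category = "neutral"
    level = min(3, 1 + int(abs(fused_valence) * 3))
    return ReactiveEmotion(category, level)


if __name__ == "__main__":
    # Example: mildly positive prosody, clearly positive text sentiment.
    fused = fuse_valence_and_sentiment(prosody_valence=0.3, text_sentiment=0.8)
    print(to_reactive_emotion(fused))  # ReactiveEmotion(category='positive', level=2)
```

A weighted average is only one possible fusion rule; the paper's actual fusion method and the category/level distributions learned from human-human dialogue data are not reproduced here.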