International Conference on Foundations of Augmented Cognition

Unobtrusive Multimodal Emotion Detection in Adaptive Interfaces: Speech and Facial Expressions

Abstract

Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, we give an overview of emotion recognition studies that combine speech and facial expressions. We then identify the difficulties concerning data collection, data fusion, system evaluation, and emotion annotation that researchers are most likely to encounter in emotion recognition work. Further, we identify possible applications for emotion recognition, such as health monitoring and e-learning systems. Finally, we discuss the growing need for agreed standards in automatic emotion recognition research.