IEEE Transactions on Multimedia

Speaking Effect Removal on Emotion Recognition From Facial Expressions Based on Eigenface Conversion



Abstract

The speaking effect is a crucial issue that can dramatically degrade the performance of emotion recognition from facial expressions. To address this problem, an eigenface conversion-based approach is proposed that removes the speaking effect from facial expressions to improve the accuracy of emotion recognition. In the proposed approach, a context-dependent linear conversion function, modeled by a statistical Gaussian Mixture Model (GMM), is constructed from parallel data of speaking and non-speaking emotional facial expressions. To model the speaking effect in more detail, the conversion functions are categorized with a decision tree that considers the visual temporal context of the Articulatory Attribute (AA) classes of the corresponding input speech segments. To verify the quadrant of the emotional expression on the Arousal-Valence (A-V) emotion plane, which is commonly used to define emotion classes dimensionally, an expression template representing the feature points of non-speaking facial expressions is constructed for each quadrant and matched against the reconstructed facial feature points. Given the verified quadrant, a regression scheme then estimates the A-V values of the facial expression as a precise point on the A-V emotion plane. Experimental results show that the proposed method outperforms current approaches and demonstrate that removing the speaking effect from facial expressions improves emotion recognition performance.
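The GMM-based linear conversion function described above resembles joint-density GMM mapping as used in voice conversion: a GMM is fit on concatenated parallel feature pairs, and each component contributes a linear regression from speaking to non-speaking features. Below is a minimal sketch under that interpretation, on simplified features and ignoring the context-dependent decision tree; `fit_conversion_gmm` and `convert` are hypothetical names, not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_conversion_gmm(X_speaking, Y_nonspeaking, n_components=2, seed=0):
    """Fit a GMM on joint [speaking; non-speaking] parallel feature pairs."""
    Z = np.hstack([X_speaking, Y_nonspeaking])
    return GaussianMixture(n_components=n_components,
                           covariance_type="full",
                           random_state=seed).fit(Z)

def convert(gmm, x, dx):
    """Convert a speaking-face feature vector x (dimension dx) to its
    estimated non-speaking counterpart via the conditional mean E[y | x]."""
    # posterior responsibility of each component given x (x-marginal of the joint GMM)
    post = np.array([w * multivariate_normal.pdf(x, m[:dx], C[:dx, :dx])
                     for w, m, C in zip(gmm.weights_, gmm.means_,
                                        gmm.covariances_)])
    post /= post.sum()
    # posterior-weighted mixture of per-component linear conversions
    y_hat = np.zeros(gmm.means_.shape[1] - dx)
    for p, m, C in zip(post, gmm.means_, gmm.covariances_):
        mu_x, mu_y = m[:dx], m[dx:]
        Cxx, Cyx = C[:dx, :dx], C[dx:, :dx]
        y_hat += p * (mu_y + Cyx @ np.linalg.solve(Cxx, x - mu_x))
    return y_hat
```

Given frame-aligned speaking and non-speaking expressions of the same emotion, the converted features approximate the facial feature points with the articulation effect removed.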
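The quadrant-verification and A-V regression stage can likewise be sketched as template matching followed by per-quadrant regression. Here `build_templates`, `verify_quadrant`, and `fit_av_regressors` are illustrative names, assuming mean-feature templates, Euclidean matching, and linear regression; the paper's exact formulation may differ.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def build_templates(feats, quadrants):
    """Mean feature-point vector of non-speaking expressions per A-V quadrant."""
    return {q: feats[quadrants == q].mean(axis=0) for q in np.unique(quadrants)}

def verify_quadrant(x, templates):
    """Assign the quadrant whose template is nearest in Euclidean distance."""
    return min(templates, key=lambda q: np.linalg.norm(x - templates[q]))

def fit_av_regressors(feats, quadrants, av):
    """One linear regressor per quadrant, mapping features to an (A, V) point."""
    return {q: LinearRegression().fit(feats[quadrants == q], av[quadrants == q])
            for q in np.unique(quadrants)}
```

At test time, the verified quadrant selects which regressor refines the coarse quadrant label into a precise point on the A-V plane.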

