In: Cross-modal analysis of speech, gestures, gaze and facial expressions (conference proceedings)

Evaluation of Speech Emotion Classification Based on GMM and Data Fusion


Abstract

This paper continues our research on automatic emotion recognition from speech based on Gaussian Mixture Models (GMMs). We apply a technique similar to the one used for speaker recognition. Previous research suggests that fewer GMM components should be used than in speaker recognition, and that better results are achieved when a larger number of speech parameters is used for GMM modeling. In earlier experiments we used suprasegmental and segmental parameters both separately and together, the latter being a fusion at the feature level. The experiment described in this paper evaluates score-level fusion of two GMM classifiers applied separately to segmental and suprasegmental parameters. We evaluate two score-level fusion techniques: the dot product of the scores from both classifiers, and maximum-confidence selection.
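The two fusion rules named above can be illustrated with a minimal sketch. The per-class scores, emotion labels, and score values below are hypothetical placeholders, not the paper's data; the product rule is interpreted here as an element-wise product of the two classifiers' per-class score vectors, and maximum-confidence selection as picking the class holding the single highest score across both classifiers.

```python
import numpy as np

# Hypothetical per-class scores (e.g. GMM likelihoods normalized to
# posteriors) from two classifiers: one trained on segmental features,
# one on suprasegmental features. Four example emotion classes assumed.
emotions = ["neutral", "joy", "sadness", "anger"]
seg_scores = np.array([0.40, 0.30, 0.20, 0.10])    # segmental classifier
supra_scores = np.array([0.25, 0.45, 0.20, 0.10])  # suprasegmental classifier

# Fusion 1: element-wise product of the two score vectors;
# the decision is the class that maximizes the combined score.
product = seg_scores * supra_scores
product_decision = emotions[int(np.argmax(product))]

# Fusion 2: maximum-confidence selection - take the class whose single
# best score across both classifiers is highest.
stacked = np.vstack([seg_scores, supra_scores])
max_decision = emotions[int(np.argmax(stacked.max(axis=0)))]

print(product_decision, max_decision)
```

With these toy scores both rules agree, but they can diverge: the product rule rewards classes on which both classifiers agree, while maximum-confidence selection lets one very confident classifier override the other.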
