《光学精密工程》 (Optics and Precision Engineering)

Speaker Recognition Based on an Adaptive Gaussian Mixture Model and the Fusion of Static and Dynamic Auditory Features

Abstract

By optimizing both the feature vectors and the Gaussian Mixture Models (GMMs), a hybrid compensation method operating in the feature and model domains is proposed. The method addresses two problems: speaker-recognition features are strongly affected by noise, and GMM performance declines as the length of the training data shrinks under unexpected noise environments. Emulating the human auditory system, Gammatone Filter Cepstral Coefficients (GFCC) are derived from a Gammatone filter-bank model. Because GFCC reflect only the static properties of speech, Gammatone Filter Shifted Delta Cepstral Coefficients (GFSDCC), which capture its dynamic properties, are extracted using the shifted-delta-cepstra scheme. Then, based on factor analysis, the adaptation of each GMM trained with sufficient data is represented by a shift factor. When the training data are insufficient, the shift factor is learned from the GMM mixtures that are insensitive to the amount of training data and is then applied to compensate the mean vectors of the remaining mixtures. Experiments show that the proposed method reaches a recognition rate of 98.46% on clean speech, and that the hybrid compensation effectively improves the performance of the speaker recognition system under several kinds of noise environments.
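The dynamic GFSDCC features described above follow the standard shifted-delta-cepstra (SDC) construction: for each frame, k delta blocks computed at shifts of P frames, each delta spanning ±d frames, are stacked onto the static coefficients. The sketch below illustrates that stacking on an arbitrary matrix of static cepstra; the function name and the default N-d-P-k parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def shifted_delta_cepstra(cepstra, d=1, P=3, k=3):
    """Stack shifted delta cepstra (SDC) onto static cepstral frames.

    cepstra : (T, N) array of static coefficients (e.g. GFCC).
    d : delta spread, P : shift between delta blocks, k : number of blocks.
    Returns a (T, N * (1 + k)) array: static features plus k delta blocks.
    """
    T, N = cepstra.shape
    # Edge-replicate padding so every frame has the neighbours its deltas need.
    pad = d + (k - 1) * P
    padded = np.pad(cepstra, ((pad, pad), (0, 0)), mode="edge")
    blocks = [cepstra]
    for i in range(k):
        # Delta block i: c(t + i*P + d) - c(t + i*P - d)
        hi = padded[pad + i * P + d : pad + i * P + d + T]
        lo = padded[pad + i * P - d : pad + i * P - d + T]
        blocks.append(hi - lo)
    return np.hstack(blocks)
```

With k = 3 blocks, a 36-dimensional static GFCC vector would expand to a 144-dimensional fused static/dynamic feature, which is what the GMM is then trained on.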
