International Conference on Spoken Language Processing; 4-8 October 2004; Jeju, Korea

Acoustic model adaptation for coded speech using synthetic speech



Abstract

In this paper, we describe a novel acoustic model adaptation technique that generates a "speaker-independent" HMM for the target environment. Recently, personal digital assistants such as cellular phones have been shifting to IP terminals. The encoding-decoding process used for transmission over IP networks degrades the quality of the speech data, and this degradation in turn lowers speech recognition performance. Acoustic model adaptation can improve recognition performance, but conventional adaptation methods usually require a large amount of adaptation data. The proposed method uses HMM-based speech synthesis to generate adaptation data from the acoustic model of an HMM-based speech recognizer, and consequently requires no real speech data for adaptation. Experimental results on G.723.1 coded speech recognition show that the proposed method improves recognition performance, yielding a relative word error rate reduction of approximately 12%.
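The sketch below illustrates the pipeline suggested by the abstract: synthesize speech from the recognizer's own acoustic model, pass it through a G.723.1 encode-decode round trip to reproduce the channel distortion, and use the resulting coded synthetic speech as adaptation data. This is not the authors' implementation; the helper functions (synthesize_from_hmm, g723_1_encode_decode, extract_features, adapt_model) are hypothetical placeholders for an HMM-based synthesizer, a codec round trip, the recognizer's front end, and a standard adaptation step such as MAP or MLLR.

```python
# Hypothetical sketch of the adaptation pipeline described in the abstract,
# not the authors' code. All helper functions below are assumed placeholders.

def synthesize_from_hmm(acoustic_model, text, sample_rate):
    """Placeholder: generate a synthetic waveform for `text` by HMM-based
    speech synthesis driven by the recognizer's own acoustic model."""
    raise NotImplementedError

def g723_1_encode_decode(waveform):
    """Placeholder: pass the waveform through a G.723.1 encoder and decoder,
    reproducing the distortion introduced by the IP transmission channel."""
    raise NotImplementedError

def extract_features(waveform, sample_rate):
    """Placeholder: compute the recognizer's acoustic features (e.g. MFCCs)."""
    raise NotImplementedError

def adapt_model(acoustic_model, adaptation_data):
    """Placeholder: re-estimate the HMM parameters on the new data
    (e.g. a MAP- or MLLR-style adaptation step)."""
    raise NotImplementedError

def build_coded_speech_hmm(si_model, adaptation_texts, sample_rate=8000):
    """Produce a speaker-independent HMM matched to G.723.1-coded speech
    without any recorded speech, following the idea in the abstract."""
    adaptation_data = []
    for text in adaptation_texts:
        clean = synthesize_from_hmm(si_model, text, sample_rate)  # 1. synthesize
        coded = g723_1_encode_decode(clean)                       # 2. apply codec
        feats = extract_features(coded, sample_rate)              # 3. front end
        adaptation_data.append((text, feats))
    return adapt_model(si_model, adaptation_data)                 # 4. adapt HMM
```

Because the adaptation texts can be arbitrary and the "speech" is synthesized from the model itself, the resulting adapted HMM remains speaker-independent while being matched to the coded-speech environment.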
