International Conference on Spoken Language Processing

Integrating different acoustic and syntactic Language Models in a Continuous Speech Recognition System

Abstract

Continuous Speech Recognition (CSR) systems require acoustic models to represent the characteristics of the acoustic signal and Language Models (LM) to represent the syntactic constraints of the language. The acoustic and LM probability distributions are usually obtained and evaluated independently, and the respective "best" models are then selected for integration into the CSR system. However, this paper shows that using more accurate acoustic models (for example, semicontinuous models instead of discrete ones, or a larger number of models representing a more complete set of sublexical units) does not always lead to better performance of the integrated system, because the acoustic improvements are softened when the LM probabilities are applied. This experimental evaluation was carried out on a Spanish speech application task.
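
For context, the integration the abstract refers to is usually expressed through the standard maximum a posteriori decoding criterion. The sketch below is that generic formulation, not a formula reported in this paper; in particular, the language-model scale factor \lambda is a common convention assumed here for illustration.

\hat{W} \;=\; \arg\max_{W} \; P(X \mid W)\, P(W)^{\lambda}
        \;=\; \arg\max_{W} \; \bigl[\, \log P(X \mid W) \;+\; \lambda \, \log P(W) \,\bigr]

Under this combination, the larger the weight given to the LM term, the smaller the relative contribution of differences in acoustic log-likelihood between competing hypotheses, which is one way to read the softening effect described in the abstract.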
