
One-against-All Weighted Dynamic Time Warping for Language-Independent and Speaker-Dependent Speech Recognition in Adverse Conditions



Abstract

Considering personal privacy and the difficulty of obtaining training material for many seldom-used English words and (often non-English) names, language-independent (LI), lightweight speaker-dependent (SD) automatic speech recognition (ASR) is a promising option. The dynamic time warping (DTW) algorithm is the state-of-the-art algorithm for small-footprint SD ASR applications with limited storage space and a small vocabulary, such as voice dialing on mobile devices, menu-driven recognition, and voice control in vehicles and robotics. Although we have successfully developed two fast and accurate DTW variations for clean speech data, speech recognition under adverse conditions remains a major challenge. To improve recognition accuracy in noisy environments and under poor recording conditions, such as excessively high or low volume, we introduce a novel one-against-all weighted DTW (OAWDTW). This method defines a one-against-all index (OAI) for each time frame of the training data and applies the OAIs in the core DTW process. Given two speech signals, OAWDTW tunes their final alignment score by using the OAIs during DTW. Our method achieves better accuracy than DTW and merge-weighted DTW (MWDTW): in extensive experiments on a representative SD dataset of four speakers' recordings, we observe a 6.97% relative reduction of error rate (RRER) compared with DTW and a 15.91% RRER compared with MWDTW. To the best of our knowledge, the OAWDTW approach is the first weighted DTW specifically designed for speech data in adverse conditions.
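To make the per-frame weighting concrete, the sketch below shows a minimal weighted DTW in Python. It assumes, for illustration only, that the one-against-all index (OAI) of each training (template) frame acts as a multiplicative weight on that frame's local distance; the abstract does not specify how the OAI itself is computed, so the `frame_weights` values, the toy `templates`, and the Euclidean frame distance are hypothetical choices, not the paper's exact formulation.

```python
import numpy as np

def weighted_dtw(template, query, frame_weights):
    """Alignment score of a per-frame weighted DTW.

    template      : (n, d) array of feature frames (e.g. MFCCs) from training data
    query         : (m, d) array of feature frames to recognize
    frame_weights : (n,) array, one weight per template frame (standing in for
                    the OAI; here it simply scales the local frame distance)
    """
    n, m = len(template), len(query)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Local distance between frames, scaled by the template frame's weight.
            cost = frame_weights[i - 1] * np.linalg.norm(template[i - 1] - query[j - 1])
            # Standard DTW recursion over the three allowed predecessor cells.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy usage: recognize by picking the template with the lowest alignment score.
rng = np.random.default_rng(0)
templates = {w: rng.normal(size=(40, 13)) for w in ("yes", "no")}
weights = {w: np.ones(40) for w in templates}   # placeholder OAI values
query = templates["yes"] + 0.1 * rng.normal(size=(40, 13))
scores = {w: weighted_dtw(templates[w], query, weights[w]) for w in templates}
print(min(scores, key=scores.get))              # -> "yes"
```

With all weights equal to one this reduces to plain DTW; the method described in the abstract replaces the uniform weights with frame-specific OAIs learned from the training data.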
