INTERSPEECH 2012

Whole-Word Recognition from Articulatory Movements for Silent Speech Interfaces



Abstract

Articulation-based silent speech interfaces convert silently produced speech movements into audible words. These systems are still in their experimental stages, but have significant potential for facilitating oral communication in persons with laryngectomy or speech impairments. In this paper, we report results from a novel, real-time algorithm that recognizes whole words based on articulatory movements. This approach differs from prior work, which has focused primarily on phoneme-level recognition based on articulatory features. On average, our algorithm missed 1.93 words in a sequence of twenty-five words, with an average latency of 0.79 seconds for each word prediction, using a data set of 5,500 isolated word samples collected from ten speakers. The results demonstrate the effectiveness of our approach and its potential for building a real-time articulation-based silent speech interface for health applications.
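The abstract quantifies accuracy and latency but does not detail the recognition method itself. As a minimal illustrative sketch only, one common way to classify whole words from articulatory sensor trajectories is nearest-neighbor matching with dynamic time warping (DTW); the trajectory shapes, template store, and function names below are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch, not the paper's algorithm: nearest-neighbor whole-word
# classification of articulatory trajectories using dynamic time warping (DTW).
# Trajectory format (frames x sensor dimensions) and the template store are assumed.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two trajectories of shape (T, D)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-wise Euclidean distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

def recognize_word(query: np.ndarray, templates: dict) -> str:
    """Return the vocabulary word whose stored template is closest to the query.

    templates maps each word to a list of example trajectories (np.ndarray).
    """
    best_word, best_dist = None, np.inf
    for word, examples in templates.items():
        for example in examples:
            d = dtw_distance(query, example)
            if d < best_dist:
                best_word, best_dist = word, d
    return best_word
```

DTW is a natural fit for this kind of task because silently produced words vary in duration across repetitions and speakers, and the warping aligns trajectories of different lengths before comparing them.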
