Electronics and Communications in Japan

Robust Extraction of Desired Speaker's Utterance in Overlapped Speech


Abstract

In this paper, we propose a speaker indexing method that uses a speaker verification technique to extract one desired speaker's utterances from conversational speech. To handle the overlapped speech problem, we construct overlapped speech models from the observed conversational speech itself. The overlapped speech models comprise a model for overlapped speech of the target speaker and a cohort speaker, and a model for overlapped speech of two cohort speakers. To evaluate the proposed method, we created simulated conversational speech containing up to 50% overlapping segments. The equal error rate was reduced by up to 43.7% compared with two conventional methods: one that uses a target speaker model only, and one that uses a target model together with an overlapped speech model trained on a speaker-independent large speech database.
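The equal error rate (EER) reported above is the operating point at which the false-rejection rate on target-speaker trials equals the false-acceptance rate on impostor trials. As an illustration only (this is not the paper's code, and the score values below are made up), a minimal sketch of computing the EER from verification scores:

```python
import numpy as np

def equal_error_rate(target_scores, impostor_scores):
    """Estimate the EER: the threshold where the false-rejection rate
    (targets scored below threshold) equals the false-acceptance rate
    (impostors scored at or above threshold)."""
    thresholds = np.sort(np.concatenate([target_scores, impostor_scores]))
    best_gap, eer = float("inf"), 1.0
    for t in thresholds:
        frr = np.mean(target_scores < t)     # targets wrongly rejected
        far = np.mean(impostor_scores >= t)  # impostors wrongly accepted
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer

# Hypothetical verification scores: mostly separable, one hard trial each way.
targets = np.array([2.1, 1.8, 2.5, 0.4, 2.2])
impostors = np.array([-1.0, 0.5, -0.3, -1.5, 0.1])
print(equal_error_rate(targets, impostors))  # → 0.2
```

A lower EER means the target-speaker and impostor (here, overlapped or cohort) score distributions are better separated, which is the sense in which the proposed overlapped speech models improve on the conventional baselines.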

