Trends in Hearing

Visual Speech Benefit in Clear and Degraded Speech Depends on the Auditory Intelligibility of the Talker and the Number of Background Talkers



Abstract

Perceiving speech in background noise presents a significant challenge to listeners. Intelligibility can be improved by seeing the face of a talker. This is of particular value to hearing impaired people and users of cochlear implants. It is well known that auditory-only speech understanding depends on factors beyond audibility. How these factors affect the audio-visual integration of speech is poorly understood. We investigated audio-visual integration when either the interfering background speech (Experiment 1) or intelligibility of the target talkers (Experiment 2) was manipulated. Clear speech was also contrasted with sine-wave vocoded speech to mimic the loss of temporal fine structure with a cochlear implant. Experiment 1 showed that for clear speech, the visual speech benefit was unaffected by the number of background talkers. For vocoded speech, a larger benefit was found when there was only one background talker. Experiment 2 showed that visual speech benefit depended upon the audio intelligibility of the talker and increased as intelligibility decreased. Degrading the speech by vocoding resulted in even greater benefit from visual speech information. A single “independent noise” signal detection theory model predicted the overall visual speech benefit in some conditions but could not predict the different levels of benefit across variations in the background or target talkers. This suggests that, similar to audio-only speech intelligibility, the integration of audio-visual speech cues may be functionally dependent on factors other than audibility and task difficulty, and that clinicians and researchers should carefully consider the characteristics of their stimuli when assessing audio-visual integration.
