
Quantified acoustic-optical speech signal incongruity identifies cortical sites of audiovisual speech processing


Abstract

A fundamental question about human perception is how the speech-perceiving brain combines auditory and visual phonetic stimulus information. We assumed that perceivers learn the normal relationship between acoustic and optical signals. We hypothesized that when the normal relationship is perturbed by mismatching the acoustic and optical signals, cortical areas responsible for audiovisual stimulus integration respond as a function of the magnitude of the mismatch. To test this hypothesis, in a previous study, we developed quantitative measures of acoustic-optical speech stimulus incongruity that correlate with perceptual measures. In the current study, we presented low-incongruity (LI, matched), medium-incongruity (MI, moderately mismatched), and high-incongruity (HI, highly mismatched) audiovisual nonsense-syllable stimuli during fMRI scanning. Perceptual responses differed as a function of incongruity level, and BOLD measures were found to vary regionally and quantitatively with perceptual and quantitative incongruity levels. Each increase in incongruity level resulted in an increase in overall cortical activity and in additional activations. However, the only cortical region that demonstrated differential sensitivity to all three stimulus incongruity levels (HI > MI > LI) was a subarea of the left supramarginal gyrus (SMG). As hypothesized here, the left SMG might support a fine-grained analysis of the relationship between audiovisual phonetic input and stored knowledge. These methods show that quantitative manipulation of stimulus incongruity is a new and powerful tool for revealing the system that processes audiovisual speech stimuli.
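The core design above, in which a regional BOLD response scales with a quantified incongruity level, amounts to a parametric analysis. The following is a minimal toy sketch of that idea only, not the authors' actual analysis pipeline: it simulates per-trial response amplitudes for a hypothetical region whose activity increases with incongruity (as reported for left SMG) and fits a linear trend across the three levels. All variable names and numbers are illustrative.

```python
import numpy as np

# Toy illustration: three incongruity levels (LI < MI < HI) coded 0, 1, 2.
rng = np.random.default_rng(0)
levels = np.repeat([0, 1, 2], 20)            # 20 simulated trials per condition
# Simulated per-trial BOLD amplitudes with a positive incongruity effect.
bold = 0.5 * levels + rng.normal(0.0, 0.2, levels.size)

# Ordinary least-squares fit: bold ~ slope * level + intercept.
X = np.column_stack([levels, np.ones_like(levels)])
slope, intercept = np.linalg.lstsq(X, bold, rcond=None)[0]

# A reliably positive slope would correspond to the HI > MI > LI ordering.
print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
```

In a real fMRI analysis this trend test would be carried out within a general linear model with hemodynamic convolution and voxelwise statistics; the sketch only shows why a graded (three-level) predictor can detect an ordered response that a simple matched-vs-mismatched contrast would miss.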
