Journal: Autism research: official journal of the International Society for Autism Research

Audiovisual speech integration in autism spectrum disorders: ERP evidence for atypicalities in lexical-semantic processing.


Abstract

In typically developing (TD) individuals, behavioral and event-related potential (ERP) studies suggest that audiovisual (AV) integration enables faster and more efficient processing of speech. However, little is known about AV speech processing in individuals with autism spectrum disorders (ASD). This study examined ERP responses to spoken words to elucidate the effects of visual speech (the lip movements accompanying a spoken word) on the range of auditory speech processing stages from sound onset detection to semantic integration. The study also included an AV condition, which paired spoken words with a dynamic scrambled face in order to highlight AV effects specific to visual speech. Fourteen adolescent boys with ASD (15-17 years old) and 14 age- and verbal IQ-matched TD boys participated. The ERP of the TD group showed a pattern and topography of AV interaction effects consistent with activity within the superior temporal plane, with two dissociable effects over frontocentral and centroparietal regions. The posterior effect (200-300 ms interval) was specifically sensitive to lip movements in TD boys, and no AV modulation was observed in this region for the ASD group. Moreover, the magnitude of the posterior AV effect to visual speech correlated inversely with ASD symptomatology. In addition, the ASD boys showed an unexpected effect (P2 time window) over the frontocentral region (pooled electrodes F3, Fz, F4, FC1, FC2, FC3, FC4), which was sensitive to scrambled face stimuli. These results suggest that the neural networks facilitating processing of spoken words by visual speech are altered in individuals with ASD.