Frontiers in Communication

Visual-Tactile Speech Perception and the Autism Quotient

Abstract

Multisensory information is integrated asymmetrically in speech perception: An audio signal can follow video by 240 milliseconds, but can precede video by only 60 ms, without disrupting the sense of synchronicity (Munhall et al., 1996). Similarly, air flow can follow either audio (Gick et al., 2010) or video (Bicevskis et al., 2016) by a much larger margin than it can precede either while remaining perceptually synchronous. These asymmetric windows of integration have been attributed to the physical properties of the signals; light travels faster than sound (Munhall et al., 1996), and sound travels faster than air flow (Gick et al., 2010). Perceptual windows of integration narrow during development (Hillock-Dunn and Wallace, 2012), but remain wider among people with autism (Wallace and Stevenson, 2014). Here we show that, even among neurotypical adult perceivers, visual-tactile windows of integration are wider and flatter the higher the participant's Autism Quotient (AQ) (Baron-Cohen et al., 2001), a self-report screening test for Autism Spectrum Disorder (ASD). As "pa" is produced with a tiny burst of aspiration (Derrick et al., 2009), we applied light and inaudible air puffs to participants' necks while they watched silent videos of a person saying "ba" or "pa", with puffs presented both synchronously and at varying degrees of asynchrony relative to the recorded plosive release burst, which itself is time-aligned to visible lip opening. All syllables seen along with cutaneous air puffs were more likely to be perceived as "pa". Syllables were perceived as "pa" most often when the air puff occurred 50-100 ms after lip opening, with decaying probability as asynchrony increased. Integration was less dependent on time-alignment the higher the participant's AQ. Perceivers integrate event-relevant tactile information in visual speech perception with greater reliance upon event-related accuracy the more they self-describe as neurotypical, supporting the Happé and Frith (2006) weak coherence account of ASD.
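To make the qualitative pattern in the abstract concrete, the sketch below is a minimal toy model (not the study's analysis): the probability of perceiving "pa" is assumed to peak when the air puff lags lip opening by roughly 50-100 ms, to decay as asynchrony grows, and to flatten as AQ increases. The function name, the Gaussian window shape, and all numeric parameters are illustrative assumptions introduced here, not values reported in the paper.

```python
import numpy as np

def p_perceive_pa(asynchrony_ms, aq_score, base_width_ms=75.0, peak_prob=0.9):
    """Toy probability of a "pa" percept given puff asynchrony (ms).

    Positive asynchrony means the puff follows visible lip opening.
    Qualitative behaviour only: peak near a 50-100 ms lag, decay with
    increasing asynchrony, and a wider/flatter window for higher AQ
    (AQ scored 0-50). All parameters are illustrative assumptions.
    """
    center_ms = 75.0  # assumed peak inside the 50-100 ms lag region
    # Assumption: window width grows with AQ, weakening the dependence
    # on precise time-alignment, as described for high-AQ perceivers.
    width_ms = base_width_ms * (1.0 + aq_score / 50.0)
    return peak_prob * np.exp(-0.5 * ((asynchrony_ms - center_ms) / width_ms) ** 2)

# Example: a lower-AQ vs. a higher-AQ perceiver at a 200 ms puff lag;
# the higher-AQ curve stays flatter (less penalty for misalignment).
for aq in (10, 35):
    print(f"AQ={aq}: p(pa) = {p_perceive_pa(200.0, aq):.3f}")
```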
