Journal of Cognitive Neuroscience

Selective Attention Modulates Early Human Evoked Potentials during Emotional Face–Voice Processing



Abstract

Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face–voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face–voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 modulation showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face–voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective—one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.


