
Integration of cross-modal emotional information in the human brain: an fMRI study.



Abstract

The interaction of information derived from the voice and facial expression of a speaker contributes to the interpretation of the emotional state of the speaker and to the formation of inferences about information that may have been merely implied in the verbal communication. Therefore, we investigated the brain processes responsible for the integration of emotional information originating from different sources. Although several studies have reported possible sites for integration, further investigation using a neutral emotional condition is required to locate emotion-specific networks. Using functional magnetic resonance imaging (fMRI), we explored the brain regions involved in the integration of emotional information from different modalities in comparison to those involved in integrating emotionally neutral information. There was significant activation in the superior temporal gyrus (STG); inferior frontal gyrus (IFG); and parahippocampal gyrus, including the amygdala, under the bimodal versus the unimodal condition, irrespective of the emotional content. We confirmed the results of previous studies by finding that the bimodal emotional condition elicited strong activation in the left middle temporal gyrus (MTG), and we extended this finding to locate the effects of emotional factors by using a neutral condition in the experimental design. We found anger-specific activation in the posterior cingulate, fusiform gyrus, and cerebellum, whereas we found happiness-specific activation in the MTG, parahippocampal gyrus, hippocampus, claustrum, inferior parietal lobule, cuneus, middle frontal gyrus (MFG), IFG, and anterior cingulate. These emotion-specific activations suggest that each emotion uses a separate network to integrate bimodal information and shares a common network for cross-modal integration.
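To make the study's central comparison concrete, the sketch below sets up a first-level GLM contrast of bimodal (voice plus face) versus unimodal presentation using nilearn in Python. The run image name, TR, event timings, and condition labels are hypothetical placeholders for illustration; this is not the authors' analysis pipeline, which is described in the full paper.

# Minimal sketch, assuming a single run and four hypothetical trials.
# Condition labels, onsets, TR, and the input file name are illustrative only.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical event table: one row per trial, times in seconds.
events = pd.DataFrame({
    "onset":      [0.0, 12.0, 24.0, 36.0],
    "duration":   [4.0, 4.0, 4.0, 4.0],
    "trial_type": ["bimodal_emotional", "bimodal_neutral",
                   "unimodal_voice", "unimodal_face"],
})

# Fit a standard first-level model (canonical HRF, light spatial smoothing).
model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=6)
model = model.fit("sub-01_task-emotion_bold.nii.gz", events=events)

# Bimodal > unimodal, collapsed across emotional content; contrasts of this
# kind underlie the reported STG, IFG, and parahippocampal activations.
z_map = model.compute_contrast(
    "bimodal_emotional + bimodal_neutral - unimodal_voice - unimodal_face",
    output_type="z_score",
)
z_map.to_filename("bimodal_gt_unimodal_zmap.nii.gz")

Emotion-specific effects would then follow from further contrasts of emotional against neutral bimodal trials, mirroring the neutral baseline the authors added to isolate emotion-specific networks.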
