
Audition and vision share spatial attentional resources yet attentional load does not disrupt audiovisual integration



Abstract

Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that receiving spatial information from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modalities. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, the findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.
