Substantial evidence supports the role of the lateral intraparietal area (LIP) of the brain as a site of central processing where bottom-up visual information is modulated by top-down task information from higher cortical structures. LIP also maintains a global egocentric, as opposed to a purely local retinotopic, mapping, and is therefore considered critical for accumulating a coherent view of the surrounding environment from an ever-changing visual scene. We have developed an active vision system architecture with an LIP-inspired structure as its central element. The architecture extends that previously presented (Hulse et al. 2009): it now incorporates feature data and can modulate visual search according to specific object properties. We discuss the architecture in terms of its ability to generate visual search behaviour for active robotic vision systems.
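The abstract does not specify the implementation, but the core idea it describes (bottom-up feature maps gain-modulated by top-down task weights, with the winner selecting the next fixation) is commonly sketched as follows. The function name, map names, and weighting scheme below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def top_down_modulated_search(feature_maps, feature_weights):
    """Hypothetical sketch: combine bottom-up feature conspicuity maps
    under top-down task weights, then pick the next fixation target.

    feature_maps    -- dict of feature name -> 2D bottom-up map (assumed)
    feature_weights -- dict of feature name -> top-down gain reflecting
                       the task relevance of that object property (assumed)
    """
    shape = next(iter(feature_maps.values())).shape
    salience = np.zeros(shape)
    for name, fmap in feature_maps.items():
        # Top-down modulation: task-relevant features are amplified,
        # irrelevant ones attenuated (default gain 1.0).
        salience += feature_weights.get(name, 1.0) * fmap
    # Winner-take-all: the peak of the combined salience map
    # becomes the next fixation (row, col).
    return np.unravel_index(np.argmax(salience), salience.shape)
```

Changing only the weights redirects search toward different object properties over the same visual input, which is the behaviour the architecture attributes to top-down modulation of LIP.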