1. Wang X, Liang H, Li L, Zhou J, Song R. Contribution of the stereoscopic representation of motion-in-depth during visually guided feedback control. Cereb Cortex 2023:7030846. PMID: 36750266. DOI: 10.1093/cercor/bhad010.
Abstract
Numerous studies have examined the neural basis of visually guided tracking movements in the frontoparallel plane, whereas the neural processes at work in real-world settings, where binocular disparity and motion-in-depth (MID) perception come into play, are less well understood. Although the role of stereoscopic versus monoscopic MID information has been extensively described for visual processing, its influence on top-down regulation for motor execution has received little attention. Here, we orthogonally varied the visual representation (stereoscopic versus monoscopic) and motion direction (depth motion versus bias depth motion versus frontoparallel motion) during visually guided tracking movements, with simultaneous functional near-infrared spectroscopy recordings. The results show that the stereoscopic representation of MID led to more accurate movements, supported by a specific pattern of neural activity. More importantly, we extend prior evidence on the role of the frontoparietal network in the brain-behavior relationship, showing that the occipital cortex, specifically visual areas V2/V3, was also robustly involved in this association. Furthermore, with the stereoscopic representation of MID it is plausible to detect a robust brain-behavior relationship even with a small sample size and a low executive task demand. Taken together, these findings highlight the importance of the stereoscopic representation of MID for investigating the neural correlates of visually guided feedback control.
Affiliation(s)
- Xiaolu Wang: Key Laboratory of Sensing Technology and Biomedical Instrument of Guangdong Province, School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510006, China
- Haowen Liang: State Key Laboratory of Optoelectronic Materials and Technology, Guangdong Marine Laboratory, School of Physics, Sun Yat-sen University, Guangzhou 510275, China
- Le Li: Institute of Medical Research, Northwestern Polytechnical University, Xi'an 710072, China; Department of Rehabilitation Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510030, China
- Jianying Zhou: State Key Laboratory of Optoelectronic Materials and Technology, Guangdong Marine Laboratory, School of Physics, Sun Yat-sen University, Guangzhou 510275, China
- Rong Song: Key Laboratory of Sensing Technology and Biomedical Instrument of Guangdong Province, School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510006, China
2. Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021;2:tgab002. PMID: 33718874. PMCID: PMC7941256. DOI: 10.1093/texcom/tgab002.
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
Affiliation(s)
- Matt Csonka, Nadia Mardmomen, Paula J Webster, Julie A Brefczynski-Lewis, Chris Frum, James W Lewis: Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
3. Dittrich S, Noesselt T. Temporal Audiovisual Motion Prediction in 2D- vs. 3D-Environments. Front Psychol 2018;9:368. PMID: 29618999. PMCID: PMC5871701. DOI: 10.3389/fpsyg.2018.00368.
Abstract
Predicting motion is essential for many everyday activities, e.g., in road traffic. Previous studies on motion prediction have failed to find consistent results, which might be due to the use of very different stimulus material and behavioural tasks. Here, we directly tested the influence of task (detection, extrapolation) and stimulus features (visual vs. audiovisual and three-dimensional vs. non-three-dimensional) on temporal motion prediction in two psychophysical experiments. In both experiments a ball followed a trajectory toward the observer and temporarily disappeared behind an occluder. In audiovisual conditions a moving white-noise sound (congruent or incongruent with the visual motion direction) was presented concurrently. In experiment 1 the ball reappeared on a predictable or a non-predictable trajectory and participants detected when the ball reappeared. In experiment 2 the ball did not reappear after occlusion and participants judged when it would reach a specified position at one of two possible distances from the occluder (extrapolation task). Both experiments were conducted in three-dimensional space (using a stereoscopic screen and polarised glasses) and also without stereoscopic presentation. Participants benefitted from visually predictable trajectories and from concurrent sounds during detection. Additionally, visual facilitation was more pronounced for non-3D stimulation during the detection task. In contrast, for the more complex extrapolation task, group-mean results indicated that auditory information impaired motion prediction. However, a post hoc cross-validation procedure (split-half) revealed that participants varied in their ability to use sounds during motion extrapolation: most selectively profited at either the near or the far extrapolation distance but were impaired at the other. We propose that interindividual differences in extrapolation efficiency might be the mechanism governing this effect. Together, our results indicate that both a realistic experimental environment and subject-specific differences modulate audiovisual motion prediction and need to be considered in future research.
Affiliation(s)
- Sandra Dittrich: Department of Biological Psychology, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Tömme Noesselt: Department of Biological Psychology, Otto von Guericke University Magdeburg, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
4. Yuan G, Liu G, Wei D, Wang G, Li Q, Qi M, Wu S. Functional connectivity corresponding to the tonotopic differentiation of the human auditory cortex. Hum Brain Mapp 2018;39:2224-2234. PMID: 29417705. DOI: 10.1002/hbm.24001.
Abstract
Recent research has demonstrated that resting-state functional connectivity (RS-FC) within the human auditory cortex (HAC) is frequency-selective, but whether RS-FC between the HAC and other brain areas is differentiated by frequency remains unclear. Three types of data were collected in this study: resting-state functional magnetic resonance imaging (fMRI) data, task-based fMRI data using six pure-tone stimuli (200, 400, 800, 1,600, 3,200, and 6,400 Hz), and structural imaging data. We first used task-based fMRI to identify frequency-selective cortical regions in the HAC. Six regions of interest (ROIs) were defined based on the responses of 50 participants to the six pure-tone stimuli. These ROIs were then used as seeds to determine RS-FC between the HAC and other brain regions. The results showed RS-FC between the HAC and brain regions that included the superior temporal gyrus, dorsolateral prefrontal cortex (DL-PFC), parietal cortex, occipital lobe, and subcortical structures. Importantly, significant differences in FC were observed among most of the brain regions that showed RS-FC with the HAC. Specifically, there was stronger RS-FC between (1) low-frequency (200 and 400 Hz) regions and brain regions including the premotor cortex, somatosensory and somatosensory association cortices, and DL-PFC; (2) intermediate-frequency (800 and 1,600 Hz) regions and brain regions including the anterior/posterior superior temporal sulcus, supramarginal gyrus, and inferior frontal cortex; (3) intermediate/low-frequency regions and vision-related regions; and (4) high-frequency (3,200 and 6,400 Hz) regions and the anterior cingulate cortex or left DL-PFC. These findings demonstrate that RS-FC between the HAC and other brain areas is frequency-selective.
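For readers who want a concrete picture of the seed-based analysis summarized above, the sketch below is illustrative only (not the authors' code); the array names, shapes, and parcellation are assumptions. It shows how a seed ROI time course can be correlated with other regions' time courses and Fisher z-transformed for group statistics.

```python
# Minimal seed-based RS-FC sketch with surrogate data standing in for real fMRI time series.
import numpy as np

def seed_fc(seed_ts, target_ts):
    """Correlate a seed ROI time course with each target region's time course.

    seed_ts   : (n_timepoints,) mean time course of one frequency-selective ROI
    target_ts : (n_timepoints, n_regions) time courses of all other regions
    Returns Fisher z-transformed correlations, one per target region.
    """
    r = np.array([np.corrcoef(seed_ts, target_ts[:, i])[0, 1]
                  for i in range(target_ts.shape[1])])
    return np.arctanh(r)  # Fisher z-transform for group-level statistics

rng = np.random.default_rng(0)
seed = rng.standard_normal(200)           # e.g. 200 volumes
targets = rng.standard_normal((200, 90))  # e.g. 90 parcellated regions (hypothetical)
z_map = seed_fc(seed, targets)
print(z_map[:5])
```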
Affiliation(s)
- Guangjie Yuan: College of Electronic and Information Engineering, Southwest University, Chongqing, China; Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Guangyuan Liu: College of Electronic and Information Engineering, Southwest University, Chongqing, China; Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China; Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China; Chongqing Brain Science Collaborative Innovation Center, Chongqing, China
- Dongtao Wei: Faculty of Psychology, Southwest University, Chongqing, China
- Gaoyuan Wang: College of Music, Southwest University, Chongqing, China
- Qiang Li: College of Electronic and Information Engineering, Southwest University, Chongqing, China; Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Mingming Qi: Faculty of Psychology, Southwest University, Chongqing, China; Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Shifu Wu: College of Electronic and Information Engineering, Southwest University, Chongqing, China; Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
5. Rosemann S, Wefel IM, Elis V, Fahle M. Audio-visual interaction in visual motion detection: Synchrony versus Asynchrony. J Optom 2017;10:242-251. PMID: 28237358. PMCID: PMC5595265. DOI: 10.1016/j.optom.2016.12.003.
Abstract
OBJECTIVE: Detection and identification of moving targets is of paramount importance in everyday life, even though it is not widely tested in optometric practice, mostly for technical reasons. There are clear indications in the literature that vision and hearing interact in the perception of moving targets, for example in noisy surroundings and in understanding speech. The main aim of visual perception, the ability that optometry seeks to optimize, is the identification of objects, from everyday objects to letters, as well as the spatial orientation of observers in natural surroundings. To serve this aim, corresponding visual and acoustic features from the rich spectrum of signals supplied by natural environments have to be combined. METHODS: Here, we investigated the influence of an auditory motion stimulus on visual motion detection, with both a concrete auditory motion (left/right movement) and an abstract one (increase/decrease of pitch). RESULTS: We found that incongruent audiovisual stimuli led to significantly worse detection than the visual-only condition. Additionally, detection was significantly better in abstract congruent than incongruent trials. For the concrete stimuli, the detection threshold was significantly better in asynchronous audiovisual conditions than in the unimodal visual condition. CONCLUSION: We find a clear but complex pattern of partly synergistic and partly inhibitory audio-visual interactions. Asynchrony appears to play only a positive role in audiovisual motion detection, whereas incongruence is mostly disruptive in simultaneous abstract configurations but not in concrete ones. As with speech perception in hearing-impaired patients, patients suffering from visual deficits should be able to benefit from acoustic information.
Affiliation(s)
- Stephanie Rosemann, Inga-Maria Wefel, Volkan Elis, Manfred Fahle: Department of Human-Neurobiology, University of Bremen, Hochschulring 18, 28359 Bremen, Germany
6. Andric M, Davis B, Hasson U. Visual cortex signals a mismatch between regularity of auditory and visual streams. Neuroimage 2017;157:648-659. DOI: 10.1016/j.neuroimage.2017.05.028.
7. Kayser SJ, Philiastides MG, Kayser C. Sounds facilitate visual motion discrimination via the enhancement of late occipital visual representations. Neuroimage 2017;148:31-41. PMID: 28082107. PMCID: PMC5349847. DOI: 10.1016/j.neuroimage.2017.01.010.
Abstract
Sensory discriminations, such as judgements about visual motion, often benefit from multisensory evidence. Despite many reports of enhanced brain activity during multisensory conditions, it remains unclear which dynamic processes implement the multisensory benefit for an upcoming decision in the human brain. Specifically, it remains difficult to attribute perceptual benefits to specific processes, such as early sensory encoding or the transformation of sensory representations into a motor response, rather than to more unspecific processes such as attention. We combined an audio-visual motion discrimination task with single-trial mapping of dynamic sensory representations in EEG activity to localize when and where multisensory congruency facilitates perceptual accuracy. Our results show that a congruent sound facilitates the encoding of motion direction in occipital sensory (as opposed to parieto-frontal) cortices, and facilitates later (as opposed to early, i.e. below 100 ms) sensory activations. This multisensory enhancement was visible as an earlier rise of motion-sensitive activity in middle-occipital regions about 350 ms after stimulus onset, which reflected the better discriminability of motion direction from brain activity and correlated with the perceptual benefit provided by congruent multisensory information. This supports a hierarchical model of multisensory integration in which the enhancement of relevant sensory cortical representations is transformed into a more accurate choice. Highlights: feature-specific multisensory integration occurs in sensory, not amodal, cortex; it occurs late, around 350 ms after stimulus onset; acoustic and visual representations interact in occipital motion regions.
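As an illustration of the kind of single-trial decoding described above, here is a simplified sketch (not the published pipeline); the classifier, feature window, and variable names are assumptions. It estimates cross-validated classification of motion direction from per-trial EEG features.

```python
# Sketch: cross-validated decoding of motion direction from single-trial EEG features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 120, 64
X = rng.standard_normal((n_trials, n_channels))  # e.g. mean amplitude ~350 ms post-stimulus (surrogate data)
y = rng.integers(0, 2, n_trials)                 # motion-direction label per trial

# Above-chance accuracy would indicate that the chosen time window carries
# direction-discriminative information.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"mean decoding accuracy: {acc:.2f}")
```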
Affiliation(s)
- Stephanie J Kayser: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Christoph Kayser: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
8. Hidaka S, Teramoto W, Sugita Y. Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review. Front Integr Neurosci 2015;9:62. PMID: 26733827. PMCID: PMC4686600. DOI: 10.3389/fnint.2015.00062.
Abstract
Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view of crossmodal interactions holds that vision is superior to audition in spatial processing, whereas audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound can have a driving effect on visual motion perception. Moreover, studies of perceptual associative learning report that, after an association is established between a sound sequence without spatial information and visual motion information, the sound sequence can trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns can be observed in several brain areas, including the motion-processing areas, for spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information mutually interacts in spatiotemporal processing in the perception of the external world and that common underlying perceptual and neural mechanisms exist for spatiotemporal processing.
Affiliation(s)
- Souta Hidaka: Department of Psychology, Rikkyo University, Saitama, Japan
- Wataru Teramoto: Department of Psychology, Kumamoto University, Kumamoto, Japan
- Yoichi Sugita: Department of Psychology, Waseda University, Tokyo, Japan
9. The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio–visual motion in depth. Neuropsychologia 2015;78:51-62. DOI: 10.1016/j.neuropsychologia.2015.09.023.
10. Laing M, Rees A, Vuong QC. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity. Front Psychol 2015;6:1440. PMID: 26483710. PMCID: PMC4591484. DOI: 10.3389/fpsyg.2015.01440.
Abstract
The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.
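To make the psychophysiological interaction (PPI) logic concrete, here is a simplified sketch (not the authors' pipeline); it omits HRF deconvolution, and all variable names and shapes are hypothetical. An interaction regressor is formed from a seed time course and a condition regressor, then entered into a regression for a target region.

```python
# Sketch: building a PPI design matrix and estimating the interaction effect for one target region.
import numpy as np

def ppi_design(seed_ts, psych):
    """seed_ts : (n,) seed-region (e.g. auditory cortex) time course
    psych   : (n,) psychological regressor, e.g. +1 for AV-incongruent, -1 for AV-congruent
    """
    seed_c = seed_ts - seed_ts.mean()
    psych_c = psych - psych.mean()
    ppi = seed_c * psych_c                      # interaction (condition-dependent coupling) term
    intercept = np.ones_like(seed_ts)
    return np.column_stack([intercept, seed_c, psych_c, ppi])

rng = np.random.default_rng(0)
n = 300
seed = rng.standard_normal(n)
psych = np.where(rng.random(n) > 0.5, 1.0, -1.0)
target = rng.standard_normal(n)                 # candidate frontal-region time course (surrogate)

X = ppi_design(seed, psych)
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print("PPI (interaction) beta:", beta[3])
```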
Affiliation(s)
- Mark Laing, Adrian Rees, Quoc C Vuong: Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
11. Yang CY, Lin CP. Gender difference in the theta/alpha ratio during the induction of peaceful audiovisual modalities. J Integr Neurosci 2015;14:343-354. PMID: 26347507. DOI: 10.1142/s0219635215500181.
Abstract
Gender differences in emotional perception have been found in numerous psychological and psychophysiological studies. The diverse characteristics of the different sensory systems make it interesting to determine how their cooperation and competition contribute to emotional experiences. We previously estimated the bias arising from matched attributes of the auditory and visual modalities and revealed specific frequency patterns of brain activity related to a peaceful mood. In that multimodality experiment, we focused on how inner-quiet information is processed in the human brain and found evidence of auditory domination in theta-band activity. However, a simple quantitative description of these frequency bands is lacking, and no studies have assessed the effects of peacefulness on the emotional state. Therefore, the aim of this study was to use magnetoencephalography to determine whether, when, and where gender differences exist in the frequency interactions underpinning the perception of peacefulness. This study provides evidence of auditory and visual domination in perceptual bias during multimodality processing of peaceful consciousness. The results of power-ratio analyses suggest that theta/alpha ratio values are associated with modality as well as with hemispheric asymmetries along the anterior-to-posterior direction, which shift from right to left from auditory to visual stimulation in a peaceful mood. This means that the theta/alpha ratio might be useful for evaluating emotion. Moreover, the difference was most pronounced for auditory domination and visual sensitivity in the female group.
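As a minimal illustration of the power-ratio measure discussed above (an assumption-laden sketch, not the authors' MEG pipeline; band limits and parameters are illustrative), the theta/alpha ratio can be computed from a Welch power spectrum.

```python
# Sketch: theta/alpha band-power ratio from a single sensor time series.
import numpy as np
from scipy.signal import welch

def theta_alpha_ratio(x, fs, theta=(4.0, 8.0), alpha=(8.0, 13.0)):
    """Return theta-band power divided by alpha-band power."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))         # ~0.5 Hz frequency resolution
    theta_power = pxx[(f >= theta[0]) & (f < theta[1])].sum()
    alpha_power = pxx[(f >= alpha[0]) & (f < alpha[1])].sum()
    return theta_power / alpha_power

rng = np.random.default_rng(0)
fs = 250.0
signal = rng.standard_normal(int(60 * fs))                # 60 s of surrogate data
print("theta/alpha ratio:", theta_alpha_ratio(signal, fs))
```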
Affiliation(s)
- Chia-Yen Yang: Department of Biomedical Engineering, Ming-Chuan University, Taoyuan, Taiwan
- Ching-Po Lin: Brain Connectivity Laboratory, Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
12. Kafaligonul H, Oluk C. Audiovisual associations alter the perception of low-level visual motion. Front Integr Neurosci 2015;9:26. PMID: 25873869. PMCID: PMC4379893. DOI: 10.3389/fnint.2015.00026.
Abstract
Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies have reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, the sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random-dot motion that isolates low-level, pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing plays a potential role.
Affiliation(s)
- Hulusi Kafaligonul: National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey
- Can Oluk: Department of Psychology, Bilkent University, Ankara, Turkey
13. Ogawa A, Macaluso E. Orienting of visuo-spatial attention in complex 3D space: Search and detection. Hum Brain Mapp 2015;36:2231-2247. PMID: 25691253. PMCID: PMC4682464. DOI: 10.1002/hbm.22767.
Abstract
The ability to detect changes in the environment is necessary for appropriate interactions with the external world. Changes in the background go more unnoticed than foreground changes, possibly because attention prioritizes the processing of foreground/near stimuli. Here, we investigated the detectability of foreground and background changes within natural scenes and the influence of stereoscopic depth cues on it. Using a flicker paradigm, we alternated a pair of images that were exactly the same or differed in a single element (i.e., a color change of one object in the scene). Participants were asked to find the change, which occurred either in a foreground or a background object, while viewing the stimuli either with binocular and monocular cues (bmC) or with monocular cues only (mC). The behavioral results showed faster and more accurate detection of foreground changes and overall better performance in bmC than in mC conditions. The imaging results highlighted the involvement of fronto-parietal attention-controlling networks during active search and target detection. These attention networks did not show any differential effect as a function of the presence/absence of binocular cues or of the detection of foreground versus background changes. By contrast, the lateral occipital cortex showed greater activation for detections in the foreground than in the background, while area V3A showed a main effect of bmC vs. mC, specifically during search. These findings indicate that visual search with binocular cues does not impose any specific requirement on attention-controlling fronto-parietal networks, whereas the enhanced detection of front/near objects in the bmC condition reflects bottom-up sensory processes in visual cortex.
Affiliation(s)
- Akitoshi Ogawa: Neuroimaging Laboratory, Santa Lucia Foundation, Via Ardeatina 306, Rome, Italy