51
|
van Wassenhove V, Grzeczkowski L. Visual-induced expectations modulate auditory cortical responses. Front Neurosci 2015; 9:11. [PMID: 25705174 PMCID: PMC4319385 DOI: 10.3389/fnins.2015.00011] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2014] [Accepted: 01/11/2015] [Indexed: 11/13/2022] Open
Abstract
Active sensing has important consequences on multisensory processing (Schroeder et al., 2010). Here, we asked whether in the absence of saccades, the position of the eyes and the timing of transient color changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the “where” and the “when” of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining the position of their eyes on the left, right, or center of the screen. Participants counted color changes of the fixation cross while neglecting sounds which could be presented to the left, right, or both ears. First, clear alpha power increases were observed in auditory cortices, consistent with participants' attention directed to visual inputs. Second, color changes elicited robust modulations of auditory cortex responses (“when” prediction) seen as ramping activity, early alpha phase-locked responses, and enhanced high-gamma band responses in the contralateral side of sound presentation. Third, no modulations of auditory evoked or oscillatory activity were found to be specific to eye position. Altogether, our results suggest that visual transience can automatically elicit a prediction of “when” a sound will occur by changing the excitability of auditory cortices irrespective of the attended modality, eye position or spatial congruency of auditory and visual events. To the contrary, auditory cortical responses were not significantly affected by eye position suggesting that “where” predictions may require active sensing or saccadic reset to modulate auditory cortex responses, notably in the absence of spatial orientation to sounds.
Collapse
Affiliation(s)
- Virginie van Wassenhove
- CEA, DSV/I2BM, NeuroSpin; INSERM, Cognitive Neuroimaging Unit, U992; Université Paris-Sud Gif-sur-Yvette, France
| | - Lukasz Grzeczkowski
- CEA, DSV/I2BM, NeuroSpin; INSERM, Cognitive Neuroimaging Unit, U992; Université Paris-Sud Gif-sur-Yvette, France ; Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne Lausanne, Switzerland
| |
Collapse
|
52
|
Association of Concurrent fNIRS and EEG Signatures in Response to Auditory and Visual Stimuli. Brain Topogr 2015; 28:710-725. [DOI: 10.1007/s10548-015-0424-8] [Citation(s) in RCA: 51] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2014] [Accepted: 01/05/2015] [Indexed: 11/25/2022]
|
53
|
Henderson JM, Choi W, Luke SG. Morphology of Primary Visual Cortex Predicts Individual Differences in Fixation Duration during Text Reading. J Cogn Neurosci 2014; 26:2880-8. [DOI: 10.1162/jocn_a_00668] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Abstract
In skilled reading, fixations are brief periods of time in which the eyes settle on words. E-Z Reader, a computational model of dynamic reading, posits that fixation durations are under real-time control of lexical processing. Lexical processing, in turn, requires efficient visual encoding. Here we tested the hypothesis that individual differences in fixation durations are related to individual differences in the efficiency of early visual encoding. To test this hypothesis, we recorded participants' eye movements during reading. We then examined individual differences in fixation duration distributions as a function of individual differences in the morphology of primary visual cortex measured from MRI scans. The results showed that greater gray matter surface area and volume in visual cortex predicted shorter and less variable fixation durations in reading. These results suggest that individual differences in eye movements during skilled reading are related to initial visual encoding, consistent with models such as E-Z Reader that emphasize lexical control over fixation time.
Collapse
|
54
|
Stehberg J, Dang PT, Frostig RD. Unimodal primary sensory cortices are directly connected by long-range horizontal projections in the rat sensory cortex. Front Neuroanat 2014; 8:93. [PMID: 25309339 PMCID: PMC4174042 DOI: 10.3389/fnana.2014.00093] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2014] [Accepted: 08/23/2014] [Indexed: 11/23/2022] Open
Abstract
Research based on functional imaging and neuronal recordings in the barrel cortex subdivision of primary somatosensory cortex (SI) of the adult rat has revealed novel aspects of structure-function relationships in this cortex. Specifically, it has demonstrated that single whisker stimulation evokes subthreshold neuronal activity that spreads symmetrically within gray matter from the appropriate barrel area, crosses cytoarchitectural borders of SI and reaches deeply into other unimodal primary cortices such as primary auditory (AI) and primary visual (VI). It was further demonstrated that this spread is supported by a spatially matching underlying diffuse network of border-crossing, long-range projections that could also reach deeply into AI and VI. Here we seek to determine whether such a network of border-crossing, long-range projections is unique to barrel cortex or characterizes also other primary, unimodal sensory cortices and therefore could directly connect them. Using anterograde (BDA) and retrograde (CTb) tract-tracing techniques, we demonstrate that such diffuse horizontal networks directly and mutually connect VI, AI and SI. These findings suggest that diffuse, border-crossing axonal projections connecting directly primary cortices are an important organizational motif common to all major primary sensory cortices in the rat. Potential implications of these findings for topics including cortical structure-function relationships, multisensory integration, functional imaging, and cortical parcellation are discussed.
Collapse
Affiliation(s)
- Jimmy Stehberg
- Department of Neurobiology and Behavior, University of California, Irvine Irvine, CA, USA ; Laboratorio de Neurobiología, Centro de Investigaciones Biomédicas, Universidad Andres Bello Santiago, Chile
| | - Phat T Dang
- Department of Neurobiology and Behavior, University of California, Irvine Irvine, CA, USA
| | - Ron D Frostig
- Department of Neurobiology and Behavior, University of California, Irvine Irvine, CA, USA ; Department of Biomedical Engineering, University of California, Irvine Irvine, CA, USA ; The Center for the Neurobiology of Learning and Memory, University of California, Irvine Irvine, CA, USA
| |
Collapse
|
55
|
Hearing brighter: Changing in-depth visual perception through looming sounds. Cognition 2014; 132:312-23. [DOI: 10.1016/j.cognition.2014.04.011] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2013] [Revised: 04/17/2014] [Accepted: 04/26/2014] [Indexed: 11/18/2022]
|
56
|
van Atteveldt N, Murray MM, Thut G, Schroeder CE. Multisensory integration: flexible use of general operations. Neuron 2014; 81:1240-1253. [PMID: 24656248 DOI: 10.1016/j.neuron.2014.02.044] [Citation(s) in RCA: 190] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/27/2014] [Indexed: 11/25/2022]
Abstract
Research into the anatomical substrates and "principles" for integrating inputs from separate sensory surfaces has yielded divergent findings. This suggests that multisensory integration is flexible and context dependent and underlines the need for dynamically adaptive neuronal integration mechanisms. We propose that flexible multisensory integration can be explained by a combination of canonical, population-level integrative operations, such as oscillatory phase resetting and divisive normalization. These canonical operations subsume multisensory integration into a fundamental set of principles as to how the brain integrates all sorts of information, and they are being used proactively and adaptively. We illustrate this proposition by unifying recent findings from different research themes such as timing, behavioral goal, and experience-related differences in integration.
Collapse
Affiliation(s)
- Nienke van Atteveldt
- Neuroimaging & Neuromodeling group, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences, Meibergdreef 47, 1105 BA Amsterdam, The Netherlands; Department of Educational Neuroscience, Faculty of Psychology & Education and Institute LEARN!, VU University Amsterdam, van der Boechorststraat 1, 1081 BT Amsterdam, The Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology & Neuroscience, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands.
| | - Micah M Murray
- The Laboratory for Investigative Neurophysiology (the LINE), Neuropsychology and Neurorehabilitation Service and Radiodiagnostic Service, University Hospital Center and University of Lausanne, Avenue Pierre Decker 5, 1011 Lausanne, Switzerland; EEG Brain Mapping Core, Centre for Biomedical Imaging (CIBM), Rue du Bugnon 46, 1011 Lausanne, Switzerland
| | - Gregor Thut
- Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow, G12 8QB, UK
| | - Charles E Schroeder
- Columbia University, Department Psychiatry, and the New York State Psychiatric Institute, 1051 Riverside Drive, New York, NY 10032, USA; Nathan S. Kline Institute, Cognitive Neuroscience & Schizophrenia Program, 140 Old Orangeburg Road, Orangeburg, NY 10962, USA.
| |
Collapse
|
57
|
A neurocomputational analysis of the sound-induced flash illusion. Neuroimage 2014; 92:248-66. [DOI: 10.1016/j.neuroimage.2014.02.001] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2013] [Revised: 01/14/2014] [Accepted: 02/01/2014] [Indexed: 11/18/2022] Open
|
58
|
Abstract
Neurophysiological findings suggested that auditory and visual motion information is integrated at an early stage of auditory cortical processing, already starting in primary auditory cortex. Here, the effect of visual motion on processing of auditory motion was investigated by employing electrotomography in combination with free-field sound motion. A delayed-motion paradigm was used in which the onset of motion was delayed relative to the onset of an initially stationary stimulus. The results indicated that activity related to the motion-onset response, a neurophysiological correlate of auditory motion processing, interacts with the processing of visual motion at quite early stages of auditory analysis in the dimensions of both the time and the location of cortical processing. A modulation of auditory motion processing by concurrent visual motion was found already around 170 ms after motion onset (cN1 component) in the regions of primary auditory cortex and posterior superior temporal gyrus: Incongruent visual motion enhanced the auditory motion onset response in auditory regions ipsilateral to the sound motion stimulus, thus reducing the pattern of contralaterality observed with unimodal auditory stimuli. No modulation was found in parietal cortex nor around 250 ms after motion onset (cP2 component) in any auditory region of interest. These findings may reflect the integration of auditory and visual motion information in low-level areas of the auditory cortical system at relatively early points in time.
Collapse
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
| | - Jörg Lewald
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
| |
Collapse
|
59
|
Tjan BS, Chao E, Bernstein LE. A visual or tactile signal makes auditory speech detection more efficient by reducing uncertainty. Eur J Neurosci 2014; 39:1323-31. [PMID: 24400652 PMCID: PMC3997613 DOI: 10.1111/ejn.12471] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2013] [Revised: 12/01/2013] [Accepted: 12/02/2013] [Indexed: 11/28/2022]
Abstract
Acoustic speech is easier to detect in noise when the talker can be seen. This finding could be explained by integration of multisensory inputs or refinement of auditory processing from visual guidance. In two experiments, we studied two-interval forced-choice detection of an auditory 'ba' in acoustic noise, paired with various visual and tactile stimuli that were identically presented in the two observation intervals. Detection thresholds were reduced under the multisensory conditions vs. the auditory-only condition, even though the visual and/or tactile stimuli alone could not inform the correct response. Results were analysed relative to an ideal observer for which intrinsic (internal) noise and efficiency were independent contributors to detection sensitivity. Across experiments, intrinsic noise was unaffected by the multisensory stimuli, arguing against the merging (integrating) of multisensory inputs into a unitary speech signal, but sampling efficiency was increased to varying degrees, supporting refinement of knowledge about the auditory stimulus. The steepness of the psychometric functions decreased with increasing sampling efficiency, suggesting that the 'task-irrelevant' visual and tactile stimuli reduced uncertainty about the acoustic signal. Visible speech was not superior for enhancing auditory speech detection. Our results reject multisensory neuronal integration and speech-specific neural processing as explanations for the enhanced auditory speech detection under noisy conditions. Instead, they support a more rudimentary form of multisensory interaction: the otherwise task-irrelevant sensory systems inform the auditory system about when to listen.
Collapse
Affiliation(s)
- Bosco S Tjan
- Department of Psychology, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, 90089, USA
| | | | | |
Collapse
|
60
|
Crossmodal enhancement of visual orientation discrimination by looming sounds requires functional activation of primary visual areas: A case study. Neuropsychologia 2014; 56:350-8. [DOI: 10.1016/j.neuropsychologia.2014.02.008] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2013] [Revised: 02/05/2014] [Accepted: 02/07/2014] [Indexed: 11/17/2022]
|
61
|
Henschke JU, Noesselt T, Scheich H, Budinger E. Possible anatomical pathways for short-latency multisensory integration processes in primary sensory cortices. Brain Struct Funct 2014; 220:955-77. [DOI: 10.1007/s00429-013-0694-4] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2013] [Accepted: 12/17/2013] [Indexed: 01/25/2023]
|
62
|
Bolognini N, Convento S, Fusaro M, Vallar G. The sound-induced phosphene illusion. Exp Brain Res 2013; 231:469-78. [DOI: 10.1007/s00221-013-3711-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2013] [Accepted: 09/16/2013] [Indexed: 11/30/2022]
|
63
|
Selinger L, Domínguez-Borràs J, Escera C. Phasic boosting of auditory perception by visual emotion. Biol Psychol 2013; 94:471-8. [PMID: 24060548 DOI: 10.1016/j.biopsycho.2013.09.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2012] [Revised: 06/26/2013] [Accepted: 09/06/2013] [Indexed: 11/17/2022]
Abstract
Emotionally negative stimuli boost perceptual processes. There is little known, however, about the timing of this modulation. The present study aims at elucidating the phasic effects of, emotional processing on auditory processing within subsequent time-windows of visual emotional, processing in humans. We recorded the electroencephalogram (EEG) while participants responded to a, discrimination task of faces with neutral or fearful expressions. A brief complex tone, which subjects, were instructed to ignore, was displayed concomitantly, but with different asynchronies respective to, the image onset. Analyses of the N1 auditory event-related potential (ERP) revealed enhanced brain, responses in presence of fearful faces. Importantly, this effect occurred at picture-tone asynchronies of, 100 and 150ms, but not when these were displayed simultaneously, or at 50ms or 200ms asynchrony. These results confirm the existence of a fast-operating crossmodal effect of visual emotion on auditory, processing, suggesting a phasic variation according to the time-course of emotional processing.
Collapse
Affiliation(s)
- Lenka Selinger
- Institute for Brain, Cognition and Behavior (IR3C), University of Barcelona, Catalonia, Spain; Department of Psychiatry and Clinical Psychobiology, University of Barcelona, Catalonia, Spain
| | | | | |
Collapse
|
64
|
Hamilton RH, Wiener M, Drebing DE, Coslett HB. Gone in a flash: manipulation of audiovisual temporal integration using transcranial magnetic stimulation. Front Psychol 2013; 4:571. [PMID: 24062701 PMCID: PMC3769638 DOI: 10.3389/fpsyg.2013.00571] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2013] [Accepted: 08/11/2013] [Indexed: 11/13/2022] Open
Abstract
While converging evidence implicates the right inferior parietal lobule in audiovisual integration, its role has not been fully elucidated by direct manipulation of cortical activity. Replicating and extending an experiment initially reported by Kamke et al. (2012), we employed the sound-induced flash illusion, in which a single visual flash, when accompanied by two auditory tones, is misperceived as multiple flashes (Wilson, 1987; Shams et al., 2000). Slow repetitive (1 Hz) TMS administered to the right angular gyrus, but not the right supramarginal gyrus, induced a transient decrease in the Peak Perceived Flashes (PPF), reflecting reduced susceptibility to the illusion. This finding independently confirms that perturbation of networks involved in multisensory integration can result in a more veridical representation of asynchronous auditory and visual events and that cross-modal integration is an active process in which the objective is the identification of a meaningful constellation of inputs, at times at the expense of accuracy.
Collapse
Affiliation(s)
- Roy H Hamilton
- Department of Neurology, University of Pennsylvania Philadelphia, PA, USA ; Center for Cognitive Neuroscience, University of Pennsylvania Philadelphia, PA, USA
| | | | | | | |
Collapse
|
65
|
van Atteveldt NM, Peterson BS, Schroeder CE. Contextual control of audiovisual integration in low-level sensory cortices. Hum Brain Mapp 2013; 35:2394-411. [PMID: 23982946 DOI: 10.1002/hbm.22336] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2013] [Revised: 05/07/2013] [Accepted: 05/15/2013] [Indexed: 11/06/2022] Open
Abstract
Potential sources of multisensory influences on low-level sensory cortices include direct projections from sensory cortices of different modalities, as well as more indirect feedback inputs from higher order multisensory cortical regions. These multiple architectures may be functionally complementary, but the exact roles and inter-relationships of the circuits are unknown. Using a fully balanced context manipulation, we tested the hypotheses that: (1) feedforward and lateral pathways subserve speed functions, such as detecting peripheral stimuli. Multisensory integration effects in this context are predicted in peripheral fields of low-level sensory cortices. (2) Slower feedback pathways underpin accuracy functions, such as object discrimination. Integration effects in this context are predicted in higher-order association cortices and central/foveal fields of low-level sensory cortex. We used functional magnetic resonance imaging to compare the effects of central versus peripheral stimulation on audiovisual integration, while varying speed and accuracy requirements for behavioral responses. We found that interactions of task demands and stimulus eccentricity in low-level sensory cortices are more complex than would be predicted by a simple dichotomy such as our hypothesized peripheral/speed and foveal/accuracy functions. Additionally, our findings point to individual differences in integration that may be related to skills and strategy. Overall, our findings suggest that instead of using fixed, specialized pathways, the exact circuits and mechanisms that are used for low-level multisensory integration are much more flexible and contingent upon both individual and contextual factors than previously assumed.
Collapse
Affiliation(s)
- Nienke M van Atteveldt
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands; Neuroimaging and Neuromodeling Group, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands; Department of Psychiatry, New York State Psychiatric Institute, Columbia University, New York, New York
| | | | | |
Collapse
|
66
|
Abstract
Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.
Collapse
|
67
|
Allen JS, Emmorey K, Bruss J, Damasio H. Neuroanatomical differences in visual, motor, and language cortices between congenitally deaf signers, hearing signers, and hearing non-signers. Front Neuroanat 2013; 7:26. [PMID: 23935567 PMCID: PMC3731534 DOI: 10.3389/fnana.2013.00026] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2013] [Accepted: 07/19/2013] [Indexed: 11/13/2022] Open
Abstract
WE INVESTIGATED EFFECTS OF SIGN LANGUAGE USE AND AUDITORY DEPRIVATION FROM BIRTH ON THE VOLUMES OF THREE CORTICAL REGIONS OF THE HUMAN BRAIN: the visual cortex surrounding the calcarine sulcus in the occipital lobe; the language-related cortex in the inferior frontal gyrus (pars triangularis and pars opercularis); and the motor hand region in the precentral gyrus. The study included 25 congenitally deaf participants and 41 hearing participants (of which 16 were native sign language users); all were right-handed. Deaf participants exhibited a larger calcarine volume than hearing participants, which we interpret as the likely result of cross-modal compensation and/or dynamic interactions within sensory neural networks. Deaf participants also had increased volumes of the pars triangularis bilaterally compared to hearing signers and non-signers, which we interpret is related to the increased linguistic demands of speech processing and/or text reading for deaf individuals. Finally, although no statistically significant differences were found in the motor hand region for any of the groups, the deaf group was leftward asymmetric, the hearing signers essentially symmetric and the hearing non-signers were rightward asymmetric - results we interpret as the possible result of activity-dependent change due to life-long signing. The brain differences we observed in visual, motor, and language-related areas in adult deaf native signers provide evidence for the plasticity available for cognitive adaptation to varied environments during development.
Collapse
Affiliation(s)
- John S Allen
- Dornsife Cognitive Neuroscience Imaging Center, University of Southern California Los Angeles, CA, USA ; Brain and Creativity Institute, University of Southern California Los Angeles, CA, USA
| | | | | | | |
Collapse
|
68
|
Romei V, Murray MM, Cappe C, Thut G. The Contributions of Sensory Dominance and Attentional Bias to Cross-modal Enhancement of Visual Cortex Excitability. J Cogn Neurosci 2013; 25:1122-35. [DOI: 10.1162/jocn_a_00367] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Abstract
Approaching or looming sounds (L-sounds) have been shown to selectively increase visual cortex excitability [Romei, V., Murray, M. M., Cappe, C., & Thut, G. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology, 19, 1799–1805, 2009]. These cross-modal effects start at an early, preperceptual stage of sound processing and persist with increasing sound duration. Here, we identified individual factors contributing to cross-modal effects on visual cortex excitability and studied the persistence of effects after sound offset. To this end, we probed the impact of different L-sound velocities on phosphene perception postsound as a function of individual auditory versus visual preference/dominance using single-pulse TMS over the occipital pole. We found that the boosting of phosphene perception by L-sounds continued for several tens of milliseconds after the end of the L-sound and was temporally sensitive to different L-sound profiles (velocities). In addition, we found that this depended on an individual's preferred sensory modality (auditory vs. visual) as determined through a divided attention task (attentional preference), but not on their simple threshold detection level per sensory modality. Whereas individuals with “visual preference” showed enhanced phosphene perception irrespective of L-sound velocity, those with “auditory preference” showed differential peaks in phosphene perception whose delays after sound-offset followed the different L-sound velocity profiles. These novel findings suggest that looming signals modulate visual cortex excitability beyond sound duration possibly to support prompt identification and reaction to potentially dangerous approaching objects. The observed interindividual differences favor the idea that unlike early effects this late L-sound impact on visual cortex excitability is influenced by cross-modal attentional mechanisms rather than low-level sensory processes.
Collapse
Affiliation(s)
| | - Micah M. Murray
- 3Center for Biomedical Imaging (CIBM), Lausanne and Geneva, Switzerland
- 4Vaudois University Hospital Center and University of Lausanne
| | - Céline Cappe
- 5Ecole Polytechnique Fédérale de Lausanne
- 6Université de Toulouse, UPS, CNRS, Centre de Recherche Cerveau et Cognition
| | | |
Collapse
|
69
|
Auditory-driven phase reset in visual cortex: human electrocorticography reveals mechanisms of early multisensory integration. Neuroimage 2013; 79:19-29. [PMID: 23624493 DOI: 10.1016/j.neuroimage.2013.04.060] [Citation(s) in RCA: 103] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2012] [Revised: 04/08/2013] [Accepted: 04/14/2013] [Indexed: 11/22/2022] Open
Abstract
Findings in animal models demonstrate that activity within hierarchically early sensory cortical regions can be modulated by cross-sensory inputs through resetting of the phase of ongoing intrinsic neural oscillations. Here, subdural recordings evaluated whether phase resetting by auditory inputs would impact multisensory integration processes in human visual cortex. Results clearly showed auditory-driven phase reset in visual cortices and, in some cases, frank auditory event-related potentials (ERP) were also observed over these regions. Further, when audiovisual bisensory stimuli were presented, this led to robust multisensory integration effects which were observed in both the ERP and in measures of phase concentration. These results extend findings from animal models to human visual cortices, and highlight the impact of cross-sensory phase resetting by a non-primary stimulus on multisensory integration in ostensibly unisensory cortices.
Collapse
|
70
|
Bernstein LE, Auer ET, Eberhardt SP, Jiang J. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training. Front Neurosci 2013; 7:34. [PMID: 23515520 PMCID: PMC3600826 DOI: 10.3389/fnins.2013.00034] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2012] [Accepted: 02/28/2013] [Indexed: 11/13/2022] Open
Abstract
Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called “reverse hierarchy theory” of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.
Collapse
Affiliation(s)
- Lynne E Bernstein
- Communication Neuroscience Laboratory, Department of Speech and Hearing Science, George Washington University Washington, DC, USA
| | | | | | | |
Collapse
|
71
|
Zhang J, Raij T, Hämäläinen M, Yao D. MEG source localization using invariance of noise space. PLoS One 2013; 8:e58408. [PMID: 23505502 PMCID: PMC3591341 DOI: 10.1371/journal.pone.0058408] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2012] [Accepted: 02/06/2013] [Indexed: 11/18/2022] Open
Abstract
We propose INvariance of Noise (INN) space as a novel method for source localization of magnetoencephalography (MEG) data. The method is based on the fact that modulations of source strengths across time change the energy in signal subspace but leave the noise subspace invariant. We compare INN with classical MUSIC, RAP-MUSIC, and beamformer approaches using simulated data while varying signal-to-noise ratios as well as distance and temporal correlation between two sources. We also demonstrate the utility of INN with actual auditory evoked MEG responses in eight subjects. In all cases, INN performed well, especially when the sources were closely spaced, highly correlated, or one source was considerably stronger than the other.
Collapse
Affiliation(s)
- Junpeng Zhang
- Key Laboratory for NeuroInformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
- MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, Boston, Massachusetts, United States of America
- Department of Biomedical Engineering, Chengdu Medical College, Chengdu, China
| | - Tommi Raij
- MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, Boston, Massachusetts, United States of America
| | - Matti Hämäläinen
- MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, Boston, Massachusetts, United States of America
| | - Dezhong Yao
- Key Laboratory for NeuroInformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
| |
Collapse
|
72
|
Beer AL, Plank T, Meyer G, Greenlee MW. Combined diffusion-weighted and functional magnetic resonance imaging reveals a temporal-occipital network involved in auditory-visual object processing. Front Integr Neurosci 2013; 7:5. [PMID: 23407860 PMCID: PMC3570774 DOI: 10.3389/fnint.2013.00005] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2012] [Accepted: 01/25/2013] [Indexed: 11/22/2022] Open
Abstract
Functional magnetic resonance imaging (MRI) showed that the superior temporal and occipital cortex are involved in multisensory integration. Probabilistic fiber tracking based on diffusion-weighted MRI suggests that multisensory processing is supported by white matter connections between auditory cortex and the temporal and occipital lobe. Here, we present a combined functional MRI and probabilistic fiber tracking study that reveals multisensory processing mechanisms that remained undetected by either technique alone. Ten healthy participants passively observed visually presented lip or body movements, heard speech or body action sounds, or were exposed to a combination of both. Bimodal stimulation engaged a temporal-occipital brain network including the multisensory superior temporal sulcus (msSTS), the lateral superior temporal gyrus (lSTG), and the extrastriate body area (EBA). A region-of-interest (ROI) analysis showed multisensory interactions (e.g., subadditive responses to bimodal compared to unimodal stimuli) in the msSTS, the lSTG, and the EBA region. Moreover, sounds elicited responses in the medial occipital cortex. Probabilistic tracking revealed white matter tracts between the auditory cortex and the medial occipital cortex, the inferior occipital cortex (IOC), and the superior temporal sulcus (STS). However, STS terminations of auditory cortex tracts showed limited overlap with the msSTS region. Instead, msSTS was connected to primary sensory regions via intermediate nodes in the temporal and occipital cortex. Similarly, the lSTG and EBA regions showed limited direct white matter connections but instead were connected via intermediate nodes. Our results suggest that multisensory processing in the STS is mediated by separate brain areas that form a distinct network in the lateral temporal and inferior occipital cortex.
Collapse
Affiliation(s)
- Anton L. Beer
- Institut für Psychologie, Universität RegensburgRegensburg, Germany
- Experimental and Clinical Neurosciences Programme, Universität RegensburgRegensburg, Germany
| | - Tina Plank
- Institut für Psychologie, Universität RegensburgRegensburg, Germany
| | - Georg Meyer
- Department of Experimental Psychology, University of LiverpoolLiverpool, UK
| | - Mark W. Greenlee
- Institut für Psychologie, Universität RegensburgRegensburg, Germany
- Experimental and Clinical Neurosciences Programme, Universität RegensburgRegensburg, Germany
| |
Collapse
|
73
|
Spierer L, Manuel AL, Bueti D, Murray MM. Contributions of pitch and bandwidth to sound-induced enhancement of visual cortex excitability in humans. Cortex 2013; 49:2728-34. [PMID: 23419789 DOI: 10.1016/j.cortex.2013.01.001] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2012] [Revised: 12/07/2012] [Accepted: 01/04/2013] [Indexed: 10/27/2022]
Abstract
Multisensory interactions have been documented within low-level, even primary, cortices and at early post-stimulus latencies. These effects are in turn linked to behavioral and perceptual modulations. In humans, visual cortex excitability, as measured by transcranial magnetic stimulation (TMS) induced phosphenes, can be reliably enhanced by the co-presentation of sounds. This enhancement occurs at pre-perceptual stages and is selective for different types of complex sounds. However, the source(s) of auditory inputs effectuating these excitability changes in primary visual cortex remain disputed. The present study sought to determine if direct connections between low-level auditory cortices and primary visual cortex are mediating these kinds of effects by varying the pitch and bandwidth of the sounds co-presented with single-pulse TMS over the occipital pole. Our results from 10 healthy young adults indicate that both the central frequency and bandwidth of a sound independently affect the excitability of visual cortex during processing stages as early as 30 msec post-sound onset. Such findings are consistent with direct connections mediating early-latency, low-level multisensory interactions within visual cortices.
Collapse
Affiliation(s)
- Lucas Spierer
- Neuropsychology and Neurorehabilitation Service, Department of Clinical Neurosciences, University Hospital Center and University of Lausanne, Switzerland; Neurology Unit, Department of Medicine, Faculty of Sciences, University of Fribourg, Fribourg, Switzerland
| | | | | | | |
Collapse
|
74
|
Diaconescu AO, Hasher L, McIntosh AR. Visual dominance and multisensory integration changes with age. Neuroimage 2013; 65:152-66. [PMID: 23036447 DOI: 10.1016/j.neuroimage.2012.09.057] [Citation(s) in RCA: 86] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2012] [Revised: 09/23/2012] [Accepted: 09/24/2012] [Indexed: 10/27/2022] Open
|
75
|
Banerjee A, Pillai AS, Sperling JR, Smith JF, Horwitz B. Temporal microstructure of cortical networks (TMCN) underlying task-related differences. Neuroimage 2012; 62:1643-57. [PMID: 22728151 PMCID: PMC3408836 DOI: 10.1016/j.neuroimage.2012.06.014] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2012] [Revised: 06/02/2012] [Accepted: 06/08/2012] [Indexed: 12/01/2022] Open
Abstract
Neuro-electromagnetic recording techniques (EEG, MEG, iEEG) provide high temporal resolution data to study the dynamics of neurocognitive networks: large scale neural assemblies involved in task-specific information processing. How does a neurocognitive network reorganize spatiotemporally on the order of a few milliseconds to process specific aspects of the task? At what times do networks segregate for task processing, and at what time scales does integration of information occur via changes in functional connectivity? Here, we propose a data analysis framework-Temporal microstructure of cortical networks (TMCN)-that answers these questions for EEG/MEG recordings in the signal space. Method validation is established on simulated MEG data from a delayed-match to-sample (DMS) task. We then provide an example application on MEG recordings during a paired associate task (modified from the simpler DMS paradigm) designed to study modality specific long term memory recall. Our analysis identified the times at which network segregation occurs for processing the memory recall of an auditory object paired to a visual stimulus (visual-auditory) in comparison to an analogous visual-visual pair. Across all subjects, onset times for first network divergence appeared within a range of 0.08-0.47 s after initial visual stimulus onset. This indicates that visual-visual and visual auditory memory recollection involves equivalent network components without any additional recruitment during an initial period of the sensory processing stage which is then followed by recruitment of additional network components for modality specific memory recollection. Therefore, we propose TMCN as a viable computational tool for extracting network timing in various cognitive tasks.
Collapse
Affiliation(s)
- Arpan Banerjee
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD 20892, USA.
| | | | | | | | | |
Collapse
|
76
|
Thelen A, Cappe C, Murray MM. Electrical neuroimaging of memory discrimination based on single-trial multisensory learning. Neuroimage 2012; 62:1478-88. [DOI: 10.1016/j.neuroimage.2012.05.027] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2012] [Revised: 04/11/2012] [Accepted: 05/10/2012] [Indexed: 10/28/2022] Open
|
77
|
Huang S, Belliveau JW, Tengshe C, Ahveninen J. Brain networks of novelty-driven involuntary and cued voluntary auditory attention shifting. PLoS One 2012; 7:e44062. [PMID: 22937153 PMCID: PMC3429427 DOI: 10.1371/journal.pone.0044062] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2012] [Accepted: 07/30/2012] [Indexed: 01/03/2023] Open
Abstract
In everyday life, we need a capacity to flexibly shift attention between alternative sound sources. However, relatively little work has been done to elucidate the mechanisms of attention shifting in the auditory domain. Here, we used a mixed event-related/sparse-sampling fMRI approach to investigate this essential cognitive function. In each 10-sec trial, subjects were instructed to wait for an auditory "cue" signaling the location where a subsequent "target" sound was likely to be presented. The target was occasionally replaced by an unexpected "novel" sound in the uncued ear, to trigger involuntary attention shifting. To maximize the attention effects, cues, targets, and novels were embedded within dichotic 800-Hz vs. 1500-Hz pure-tone "standard" trains. The sound of clustered fMRI acquisition (starting at t = 7.82 sec) served as a controlled trial-end signal. Our approach revealed notable activation differences between the conditions. Cued voluntary attention shifting activated the superior intra--parietal sulcus (IPS), whereas novelty-triggered involuntary orienting activated the inferior IPS and certain subareas of the precuneus. Clearly more widespread activations were observed during voluntary than involuntary orienting in the premotor cortex, including the frontal eye fields. Moreover, we found -evidence for a frontoinsular-cingular attentional control network, consisting of the anterior insula, inferior frontal cortex, and medial frontal cortices, which were activated during both target discrimination and voluntary attention shifting. Finally, novels and targets activated much wider areas of superior temporal auditory cortices than shifting cues.
Collapse
Affiliation(s)
- Samantha Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, Massachusetts, United States of America.
| | | | | | | |
Collapse
|
78
|
Lin FH, Tsai KW, Chu YH, Witzel T, Nummenmaa A, Raij T, Ahveninen J, Kuo WJ, Belliveau JW. Ultrafast inverse imaging techniques for fMRI. Neuroimage 2012; 62:699-705. [PMID: 22285221 PMCID: PMC3377851 DOI: 10.1016/j.neuroimage.2012.01.072] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2011] [Revised: 01/07/2012] [Accepted: 01/10/2012] [Indexed: 10/14/2022] Open
Abstract
Inverse imaging (InI) supercharges the sampling rate of traditional functional MRI 10-100 fold at a cost of a moderate reduction in spatial resolution. The technique is inspired by similarities between multi-sensor magnetoencephalography (MEG) and highly parallel radio-frequency (RF) MRI detector arrays. Using presently available 32-channel head coils at 3T, InI can be sampled at 10 Hz and provides about 5-mm cortical spatial resolution with whole-brain coverage. Here we discuss the present applications of InI, as well as potential future challenges and opportunities in further improving its spatiotemporal resolution and sensitivity. InI may become a helpful tool for clinicians and neuroscientists for revealing the complex dynamics of brain functions during task-related and resting states.
Collapse
Affiliation(s)
- Fa-Hsuan Lin
- Institute of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- MGH-HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Biomedical Engineering and Computational Science, Aalto University School of Science and Technology, Espoo, Finland
| | - Kevin W.K. Tsai
- Institute of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
| | - Ying-Hua Chu
- Institute of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
| | - Thomas Witzel
- MGH-HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
| | - Aapo Nummenmaa
- MGH-HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Biomedical Engineering and Computational Science, Aalto University School of Science and Technology, Espoo, Finland
| | - Tommi Raij
- MGH-HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
| | - Jyrki Ahveninen
- MGH-HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
| | - Wen-Jui Kuo
- Institute of Neuroscience, National Yang Ming University, Taipei, Taiwan
| | - John W. Belliveau
- MGH-HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
| |
Collapse
|
79
|
Navarra J, García-Morera J, Spence C. Temporal adaptation to audiovisual asynchrony generalizes across different sound frequencies. Front Psychol 2012; 3:152. [PMID: 22615705 PMCID: PMC3351678 DOI: 10.3389/fpsyg.2012.00152] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2012] [Accepted: 04/26/2012] [Indexed: 11/13/2022] Open
Abstract
The human brain exhibits a highly adaptive ability to reduce natural asynchronies between visual and auditory signals. Even though this mechanism robustly modulates the subsequent perception of sounds and visual stimuli, it is still unclear how such a temporal realignment is attained. In the present study, we investigated whether or not temporal adaptation generalizes across different auditory frequencies. In a first exposure phase, participants adapted to a fixed 220-ms audiovisual asynchrony or else to synchrony for 3 min. In a second phase, the participants performed simultaneity judgments (SJs) regarding pairs of audiovisual stimuli that were presented at different stimulus onset asynchronies (SOAs) and included either the same tone as in the exposure phase (a 250 Hz beep), another low-pitched beep (300 Hz), or a high-pitched beep (2500 Hz). Temporal realignment was always observed (when comparing SJ performance after exposure to asynchrony vs. synchrony), regardless of the frequency of the sound tested. This suggests that temporal recalibration influences the audiovisual perception of sounds in a frequency non-specific manner and may imply the participation of non-primary perceptual areas of the brain that are not constrained by certain physical features such as sound frequency.
Collapse
Affiliation(s)
- Jordi Navarra
- Fundació Sant Joan de Déu, Parc Sanitari Sant Joan de Déu - Hospital Sant Joan de Déu Esplugues de Llobregat, Barcelona, Spain
| | | | | |
Collapse
|
80
|
Alho J, Sato M, Sams M, Schwartz JL, Tiitinen H, Jääskeläinen IP. Enhanced early-latency electromagnetic activity in the left premotor cortex is associated with successful phonetic categorization. Neuroimage 2012; 60:1937-46. [DOI: 10.1016/j.neuroimage.2012.02.011] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2011] [Revised: 01/12/2012] [Accepted: 02/04/2012] [Indexed: 11/30/2022] Open
|
81
|
Leitão J, Thielscher A, Werner S, Pohmann R, Noppeney U. Effects of parietal TMS on visual and auditory processing at the primary cortical level -- a concurrent TMS-fMRI study. ACTA ACUST UNITED AC 2012; 23:873-84. [PMID: 22490546 DOI: 10.1093/cercor/bhs078] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Accumulating evidence suggests that multisensory interactions emerge already at the primary cortical level. Specifically, auditory inputs were shown to suppress activations in visual cortices when presented alone but amplify the blood oxygen level-dependent (BOLD) responses to concurrent visual inputs (and vice versa). This concurrent transcranial magnetic stimulation-functional magnetic resonance imaging (TMS-fMRI) study applied repetitive TMS trains at no, low, and high intensity over right intraparietal sulcus (IPS) and vertex to investigate top-down influences on visual and auditory cortices under 3 sensory contexts: visual, auditory, and no stimulation. IPS-TMS increased activations in auditory cortices irrespective of sensory context as a result of direct and nonspecific auditory TMS side effects. In contrast, IPS-TMS modulated activations in the visual cortex in a state-dependent fashion: it deactivated the visual cortex under no and auditory stimulation but amplified the BOLD response to visual stimulation. However, only the response amplification to visual stimulation was selective for IPS-TMS, while the deactivations observed for IPS- and Vertex-TMS resulted from crossmodal deactivations induced by auditory activity to TMS sounds. TMS to IPS may increase the responses in visual (or auditory) cortices to visual (or auditory) stimulation via a gain control mechanism or crossmodal interactions. Collectively, our results demonstrate that understanding TMS effects on (uni)sensory processing requires a multisensory perspective.
Collapse
Affiliation(s)
- Joana Leitão
- Cognitive Neuroimaging Group, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany.
| | | | | | | | | |
Collapse
|
82
|
Abstract
Multisensory interactions are a fundamental feature of brain organization. Principles governing multisensory processing have been established by varying stimulus location, timing and efficacy independently. Determining whether and how such principles operate when stimuli vary dynamically in their perceived distance (as when looming/receding) provides an assay for synergy among the above principles and also means for linking multisensory interactions between rudimentary stimuli with higher-order signals used for communication and motor planning. Human participants indicated movement of looming or receding versus static stimuli that were visual, auditory, or multisensory combinations while 160-channel EEG was recorded. Multivariate EEG analyses and distributed source estimations were performed. Nonlinear interactions between looming signals were observed at early poststimulus latencies (∼75 ms) in analyses of voltage waveforms, global field power, and source estimations. These looming-specific interactions positively correlated with reaction time facilitation, providing direct links between neural and performance metrics of multisensory integration. Statistical analyses of source estimations identified looming-specific interactions within the right claustrum/insula extending inferiorly into the amygdala and also within the bilateral cuneus extending into the inferior and lateral occipital cortices. Multisensory effects common to all conditions, regardless of perceived distance and congruity, followed (∼115 ms) and manifested as faster transition between temporally stable brain networks (vs summed responses to unisensory conditions). We demonstrate the early-latency, synergistic interplay between existing principles of multisensory interactions. Such findings change the manner in which to model multisensory interactions at neural and behavioral/perceptual levels. We also provide neurophysiologic backing for the notion that looming signals receive preferential treatment during perception.
Collapse
|
83
|
Posse S, Ackley E, Mutihac R, Rick J, Shane M, Murray-Krezan C, Zaitsev M, Speck O. Enhancement of temporal resolution and BOLD sensitivity in real-time fMRI using multi-slab echo-volumar imaging. Neuroimage 2012; 61:115-30. [PMID: 22398395 DOI: 10.1016/j.neuroimage.2012.02.059] [Citation(s) in RCA: 72] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2011] [Revised: 02/06/2012] [Accepted: 02/20/2012] [Indexed: 11/25/2022] Open
Abstract
In this study, a new approach to high-speed fMRI using multi-slab echo-volumar imaging (EVI) is developed that minimizes geometrical image distortion and spatial blurring, and enables nonaliased sampling of physiological signal fluctuation to increase BOLD sensitivity compared to conventional echo-planar imaging (EPI). Real-time fMRI using whole brain 4-slab EVI with 286 ms temporal resolution (4mm isotropic voxel size) and partial brain 2-slab EVI with 136 ms temporal resolution (4×4×6 mm(3) voxel size) was performed on a clinical 3 Tesla MRI scanner equipped with 12-channel head coil. Four-slab EVI of visual and motor tasks significantly increased mean (visual: 96%, motor: 66%) and maximum t-score (visual: 263%, motor: 124%) and mean (visual: 59%, motor: 131%) and maximum (visual: 29%, motor: 67%) BOLD signal amplitude compared with EPI. Time domain moving average filtering (2s width) to suppress physiological noise from cardiac and respiratory fluctuations further improved mean (visual: 196%, motor: 140%) and maximum (visual: 384%, motor: 200%) t-scores and increased extents of activation (visual: 73%, motor: 70%) compared to EPI. Similar sensitivity enhancement, which is attributed to high sampling rate at only moderately reduced temporal signal-to-noise ratio (mean: -52%) and longer sampling of the BOLD effect in the echo-time domain compared to EPI, was measured in auditory cortex. Two-slab EVI further improved temporal resolution for measuring task-related activation and enabled mapping of five major resting state networks (RSNs) in individual subjects in 5 min scans. The bilateral sensorimotor, the default mode and the occipital RSNs were detectable in time frames as short as 75 s. In conclusion, the high sampling rate of real-time multi-slab EVI significantly improves sensitivity for studying the temporal dynamics of hemodynamic responses and for characterizing functional networks at high field strength in short measurement times.
Collapse
Affiliation(s)
- Stefan Posse
- Department of Neurology, University of New Mexico School of Medicine, Albuquerque, NM 87131, USA.
Collapse
|
84
|
Steady-state responses in MEG demonstrate information integration within but not across the auditory and visual senses. Neuroimage 2012; 60:1478-89. [PMID: 22305992 DOI: 10.1016/j.neuroimage.2012.01.114] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2011] [Revised: 12/22/2011] [Accepted: 01/22/2012] [Indexed: 11/23/2022] Open
Abstract
To form a unified percept of our environment, the human brain integrates information within and across the senses. This MEG study investigated interactions within and between sensory modalities using a frequency analysis of steady-state responses (SSRs) that are elicited time-locked to periodically modulated stimuli. Critically, in the frequency domain, interactions between sensory signals are indexed by crossmodulation terms (i.e., the sums and differences of the fundamental frequencies). The 3 × 2 factorial design manipulated (1) modality: auditory, visual, or audiovisual; and (2) steady-state modulation: the auditory and visual signals were modulated either in one sensory feature (e.g., visual gratings modulated in luminance at 6 Hz) or in two features (e.g., tones modulated in frequency at 40 Hz and in amplitude at 0.2 Hz). This design enabled us to investigate the crossmodulation frequencies that are elicited when two stimulus features are modulated concurrently (i) within one sensory modality or (ii) across the auditory and visual modalities. In support of within-modality integration, we reliably identified crossmodulation frequencies when two stimulus features in one sensory modality were modulated at different frequencies. In contrast, no crossmodulation frequencies were identified when information needed to be combined across the auditory and visual modalities. The absence of audiovisual crossmodulation frequencies suggests that the previously reported audiovisual interactions in primary sensory areas may mediate low-level spatiotemporal coincidence detection that is prominent for stimulus transients but less relevant for sustained SSRs. In conclusion, our results indicate that information in SSRs is integrated over multiple time scales within, but not across, sensory modalities at the primary cortical level.
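To make the crossmodulation logic concrete: a multiplicative (integrative) interaction between two modulations at f1 and f2 produces spectral power at f2 ± f1, whereas a purely additive (independent) combination does not. A small, self-contained simulation, using the 6 Hz and 40 Hz values mentioned in the abstract as illustrative frequencies:

```python
import numpy as np

fs, dur = 600.0, 10.0                  # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
f1, f2 = 6.0, 40.0                     # illustrative modulation frequencies

s1 = np.sin(2 * np.pi * f1 * t)       # e.g., luminance modulation
s2 = np.sin(2 * np.pi * f2 * t)       # e.g., auditory amplitude modulation
linear = s1 + s2                      # independent channels: no crossmod terms
nonlinear = s1 + s2 + 0.3 * s1 * s2   # integration: product term creates f2±f1

def power_at(signal, freq):
    """Spectral power at the FFT bin nearest to `freq`."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

for f in (f2 - f1, f2 + f1):          # crossmodulation frequencies
    print(f"{f:5.1f} Hz  linear={power_at(linear, f):10.2f}  "
          f"nonlinear={power_at(nonlinear, f):10.2f}")
```

Since sin(a)·sin(b) = ½[cos(a−b) − cos(a+b)], the product term places power exactly at the sum and difference frequencies, which is why those bins index integration.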
Collapse
|
85
|
Jääskeläinen IP, Ahveninen J, Andermann ML, Belliveau JW, Raij T, Sams M. Short-term plasticity as a neural mechanism supporting memory and attentional functions. Brain Res 2011; 1422:66-81. [PMID: 21985958 DOI: 10.1016/j.brainres.2011.09.031] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2011] [Revised: 08/16/2011] [Accepted: 09/16/2011] [Indexed: 10/17/2022]
Abstract
Based on behavioral studies, several relatively distinct perceptual and cognitive functions have been defined in cognitive psychology, such as sensory memory, short-term memory, and selective attention. Here, we review evidence suggesting that some of these functions may be supported by shared underlying neuronal mechanisms. Specifically, based on an integrative review of the literature, we present a hypothetical model wherein short-term plasticity, in the form of transient center-excitatory and surround-inhibitory modulations, constitutes a generic processing principle that supports sensory memory, short-term memory, involuntary attention, selective attention, and perceptual learning. In our model, the size and complexity of receptive fields/level of abstraction of neural representations, as well as the length of temporal receptive windows, increase as one steps up the cortical hierarchy. Consequently, the type of input (bottom-up vs. top-down) and the level of the cortical hierarchy that the inputs target determine whether short-term plasticity supports purely sensory vs. semantic short-term memory or attentional functions. Furthermore, we suggest that rather than discrete memory systems, there are continuums of memory representations, from short-lived sensory ones to more abstract, longer-duration representations such as those tapped by behavioral studies of short-term memory.
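A hypothetical sketch of the center-excitatory/surround-inhibitory profile the model proposes, written as a difference of Gaussians over a one-dimensional feature axis; all parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def center_surround_gain(x, sigma_center=1.0, sigma_surround=3.0, w=0.6):
    """Difference-of-Gaussians gain over a 1-D feature axis `x`:
    facilitation near the attended/encoded feature value, suppression
    in the surround. Parameters are illustrative assumptions.
    """
    center = np.exp(-x**2 / (2.0 * sigma_center**2))
    surround = w * np.exp(-x**2 / (2.0 * sigma_surround**2))
    return center - surround

x = np.linspace(-10.0, 10.0, 201)
gain = center_surround_gain(x)  # positive at the center, negative flanks
```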
Collapse
Affiliation(s)
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University, School of Science, Espoo, Finland.
Collapse
|
86
|
Müller N, Weisz N. Lateralized Auditory Cortical Alpha Band Activity and Interregional Connectivity Pattern Reflect Anticipation of Target Sounds. Cereb Cortex 2011; 22:1604-13. [DOI: 10.1093/cercor/bhr232] [Citation(s) in RCA: 83] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023] Open
|
87
|
Auditory event-related response in visual cortex modulates subsequent visual responses in humans. J Neurosci 2011; 31:7729-36. [PMID: 21613485 DOI: 10.1523/jneurosci.1076-11.2011] [Citation(s) in RCA: 59] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Growing evidence from electrophysiological data in animal and human studies suggests that multisensory interaction is not exclusively a higher-order process, but also takes place in primary sensory cortices. Such early multisensory interaction is thought to be mediated by phase resetting: the presentation of a stimulus to one sensory modality resets the phase of ongoing oscillations in another modality such that processing in the latter modality is modulated. In humans, evidence for such a mechanism is still sparse. In the current study, the influence of an auditory stimulus on visual processing was investigated by measuring the electroencephalogram (EEG) and behavioral responses of humans to visual, auditory, and audiovisual stimulation with varying stimulus-onset asynchrony (SOA). We observed three distinct oscillatory EEG responses in our data. An initial gamma-band response around 50 Hz was followed by a beta-band response around 25 Hz and a theta response around 6 Hz. The latter was enhanced in response to crossmodal stimuli compared with either unimodal stimulus. Interestingly, the beta response to unimodal auditory stimuli was dominant in electrodes over visual areas. The SOA between auditory and visual stimuli, albeit not consciously perceived, had a modulatory impact on the multisensory evoked beta-band responses; i.e., their amplitude depended on SOA in a sinusoidal fashion, suggesting a phase reset. These findings further support the notion that parameters of brain oscillations such as amplitude and phase are essential predictors of subsequent brain responses and might constitute one of the mechanisms underlying multisensory integration.
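The sinusoidal dependence of beta amplitude on SOA can be tested by fitting a sinusoid to amplitude-by-SOA data. Below is a hedged sketch with synthetic stand-in values; the abstract reports the effect but not the fitting procedure, so this is one plausible analysis, not the authors' own.

```python
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(soa_ms, amp, freq_hz, phase, offset):
    """Beta-band response amplitude as a sinusoidal function of SOA."""
    return amp * np.sin(2 * np.pi * freq_hz * soa_ms / 1000.0 + phase) + offset

# Synthetic stand-in data: a ~25 Hz dependence plus noise (hypothetical
# values; a real analysis would use measured evoked beta amplitudes).
rng = np.random.default_rng(0)
soa_ms = np.arange(0, 100, 5.0)
measured = (sinusoid(soa_ms, 0.3, 25.0, 0.5, 1.0)
            + 0.05 * rng.standard_normal(soa_ms.size))

# A significant sinusoidal fit of amplitude against SOA (vs. a flat model)
# is the signature of cross-modal phase resetting described above.
params, _ = curve_fit(sinusoid, soa_ms, measured, p0=[0.2, 25.0, 0.0, 1.0])
```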
Collapse
|
88
|
Unisensory processing and multisensory integration in schizophrenia: a high-density electrical mapping study. Neuropsychologia 2011; 49:3178-87. [PMID: 21807011 DOI: 10.1016/j.neuropsychologia.2011.07.017] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2011] [Revised: 06/24/2011] [Accepted: 07/15/2011] [Indexed: 11/20/2022]
Abstract
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept, a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients with those from healthy volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components, in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of the auditory and visual evoked responses with the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute-magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared with healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder.
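The abstract quantifies neural facilitation as AV versus summed unisensory responses. For the behavioral facilitation it mentions, one standard test (not necessarily the one used in this study) is Miller's race-model inequality, sketched here with assumed reaction-time arrays.

```python
import numpy as np

def race_model_test(rt_av, rt_a, rt_v, t_grid):
    """Compare the multisensory RT distribution with Miller's race-model
    bound: P(RT_AV <= t) vs. min(1, P(RT_A <= t) + P(RT_V <= t)).

    Positive return values at any t indicate facilitation beyond what
    independent unisensory 'racers' can produce. Inputs are 1-D arrays
    of reaction times per condition; t_grid is a grid of test times.
    """
    def ecdf(rts, t):
        return np.mean(np.asarray(rts)[:, None] <= t, axis=0)

    bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))
    return ecdf(rt_av, t_grid) - bound

# Hypothetical usage (reaction times in ms):
# t_grid = np.linspace(150, 600, 91)
# violation = race_model_test(rt_av, rt_a, rt_v, t_grid)
```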
Collapse
|
89
|
Abstract
Recent studies of multisensory integration compel a redefinition of fundamental sensory processes, including, but not limited to, how visual inputs influence the localization of sounds and suppression of their echoes.
Collapse
Affiliation(s)
- Micah M Murray
- Electroencephalography Brain Mapping Core, Center for Biomedical Imaging of Lausanne and Geneva, rue du Bugnon 46, BH08.078, 1011 Lausanne, Switzerland.
Collapse
|
90
|
Franciotti R, Brancucci A, Della Penna S, Onofrj M, Tommasi L. Neuromagnetic responses reveal the cortical timing of audiovisual synchrony. Neuroscience 2011; 193:182-92. [PMID: 21787844 DOI: 10.1016/j.neuroscience.2011.07.018] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2011] [Revised: 07/01/2011] [Accepted: 07/06/2011] [Indexed: 11/25/2022]
Abstract
Multisensory processing involving visual and auditory inputs is modulated by their relative temporal offsets. To assess whether multisensory integration alters the activation timing of primary visual and auditory cortices as a function of the temporal offset between auditory and visual stimuli, a task was designed in which subjects judged the perceptual simultaneity of the onset of visual stimuli and brief acoustic tones. These were presented repeatedly with three different inter-stimulus intervals chosen to meet three perceptual conditions: (1) physical synchrony perceived as synchrony by subjects (SYNC); (2) physical asynchrony perceived as asynchrony (ASYNC); (3) physical asynchrony perceived ambiguously (AMB, i.e., 50% perceived as synchrony, 50% as asynchrony). Magnetoencephalographic activity was recorded during crossmodal sessions and unimodal control sessions. The activation of primary visual and auditory cortices peaked at a longer latency in the crossmodal conditions than in the unimodal conditions. Moreover, the latency in the auditory cortex was longer in the SYNC than in the ASYNC condition, whereas in the visual cortex the latency in the AMB condition was longer than in the ASYNC condition. These findings suggest that multisensory processing affects temporal dynamics already in primary cortices, that these effects can differ across regions, and that they are sensitive to the temporal offsets of multisensory inputs. In addition, in the AMB condition the conscious awareness of asynchrony might be associated with a later activation of the primary auditory cortex.
Collapse
Affiliation(s)
- R Franciotti
- Department of Neuroscience and Imaging, G. d'Annunzio University, Chieti, Italy.
Collapse
|
91
|
Leo F, Romei V, Freeman E, Ladavas E, Driver J. Looming sounds enhance orientation sensitivity for visual stimuli on the same side as such sounds. Exp Brain Res 2011; 213:193-201. [PMID: 21643714 PMCID: PMC3155046 DOI: 10.1007/s00221-011-2742-8] [Citation(s) in RCA: 53] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2010] [Accepted: 05/18/2011] [Indexed: 11/29/2022]
Abstract
Several recent multisensory studies show that sounds can influence visual processing. Some visual judgments can be enhanced for visual stimuli near a sound occurring around the same time. A recent TMS study (Romei et al. 2009) indicates that looming sounds might influence visual cortex particularly strongly. However, unlike most previous behavioral studies of possible audio-visual exogenous effects, that study measured TMS phosphene thresholds rather than judgments of external visual stimuli, and it did not vary the visual hemifield assessed relative to the hemifield of the sound. Here, we compared the impact of looming sounds with receding or "static" sounds, using auditory stimuli adapted from Romei et al. (2009), but now assessing any influence on visual orientation discrimination for Gabor patches (well known to involve early visual cortex) appearing either in the same hemifield as the sound or on the opposite side. The looming sounds that were effective in Romei et al. (2009) enhanced visual orientation sensitivity (d′) here on the side of the sound, but not in the opposite hemifield. This crossmodal, spatially specific effect was stronger for looming than for receding or static sounds. As in Romei et al. (2009), the differential effect for looming sounds was eliminated when using white noise rather than structured sounds. Our new results show that looming structured sounds can specifically benefit visual orientation sensitivity in the hemifield of the sound, even when the sound provides no information about visual orientation itself.
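A minimal sketch of the sensitivity index d′ reported above, computed as z(hit rate) − z(false-alarm rate) with a simple correction for extreme rates; the counts in the example are hypothetical.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    The log-linear correction (add 0.5 to each cell) guards against
    rates of exactly 0 or 1, which would make the z-transform undefined.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(h) - z(fa)

# e.g., orientation judgments with a same-side looming sound
# (hypothetical counts, for illustration only):
print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```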
Collapse
Affiliation(s)
- Fabrizio Leo
- UCL Institute of Cognitive Neuroscience, University College London, London, UK.
Collapse
|
92
|
Renvall H, Formisano E, Parviainen T, Bonte M, Vihla M, Salmelin R. Parametric Merging of MEG and fMRI Reveals Spatiotemporal Differences in Cortical Processing of Spoken Words and Environmental Sounds in Background Noise. Cereb Cortex 2011; 22:132-43. [DOI: 10.1093/cercor/bhr095] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|
93
|
Abstract
In the multisensory environment, inputs to each sensory modality are rarely independent; sounds often follow a visible action or event. Here we present behaviorally relevant evidence from the human EEG that visual input prepares the auditory system for subsequent auditory processing by resetting the phase of neuronal oscillatory activity in auditory cortex. Subjects performed a simple auditory frequency discrimination task using paired but asynchronous auditory and visual stimuli. Auditory cortex activity was modeled from the scalp-recorded EEG using spatiotemporal dipole source analysis. Phase-resetting activity was assessed using time-frequency analysis of the source waveforms. Significant crossmodal phase resetting was observed in auditory cortex at low alpha frequencies (8-10 Hz) peaking 80 ms after auditory onset, at high alpha frequencies (10-12 Hz) peaking at 88 ms, and at high theta frequencies (∼7 Hz) peaking at 156 ms. Importantly, significant effects were only evident when visual input preceded auditory input by 30-75 ms. Behaviorally, crossmodal phase resetting accounted for 18% of the variability in response speed in the auditory task, with stronger resetting overall leading to significantly faster responses. A direct link was thus shown between visually induced modulations of auditory cortex activity and performance in an auditory task. The results are consistent with a model in which the efficiency of auditory processing is improved when natural associations between visual and auditory inputs allow one input to reliably predict the next.
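Phase resetting of this kind is commonly quantified with inter-trial phase coherence (ITC): if a preceding visual stimulus resets auditory-cortex oscillations, post-stimulus phases align across trials and ITC rises toward 1. A self-contained sketch follows; the band limits, filter choice, and synthetic demo are assumptions, not the authors' exact analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def intertrial_phase_coherence(trials, fs, band=(8.0, 10.0)):
    """Inter-trial phase coherence (ITC) in a narrow band.

    Band-pass each trial (here low alpha, 8-10 Hz), take the analytic
    phase, and average the unit phase vectors across trials. ITC near 1
    at a post-stimulus time means phases were reset to a common value.
    `trials` has shape (n_trials, n_samples).
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))  # (n_samples,)

# Synthetic demo: 50 noise trials, phase-aligned after a nominal "onset"
fs, n, onset = 500, 500, 250
rng = np.random.default_rng(1)
trials = rng.standard_normal((50, n))
t = np.arange(n) / fs
trials[:, onset:] += np.sin(2 * np.pi * 9.0 * t[onset:])  # common 9 Hz phase
itc = intertrial_phase_coherence(trials, fs)  # jumps up after `onset`
```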
Collapse
|
94
|
Letham B, Raij T. Statistically robust measurement of evoked response onset latencies. J Neurosci Methods 2010; 194:374-9. [PMID: 20974175 DOI: 10.1016/j.jneumeth.2010.10.016] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2010] [Revised: 10/10/2010] [Accepted: 10/19/2010] [Indexed: 10/18/2022]
Abstract
Onset latencies of evoked responses are useful for determining delays in sensory pathways and for indicating the spread of activity between brain areas, thereby supporting causal inference. Previous studies have applied several different methods and parameters for detecting onsets, mainly utilizing thresholds based on the mean and standard deviation (SD) of the pre-stimulus "baseline" time window, or using t-tests of group data to determine when the response first differs significantly from the baseline. However, these methods are not statistically robust, have low power when the baseline data are not normally distributed, and are heavily influenced by outliers in the baseline. Here, we examine a modified boxplot method known as the "median rule" for determining onset latencies. This rule makes no assumptions about the baseline distribution, is resistant to outliers, and can be applied to individual-level data, thereby allowing intersubject and interregional comparisons. We first show with simulations that the median rule is significantly less sensitive to outliers in the baseline than the SD method. We then use simulations to demonstrate the effect of skewness on onset latencies. Finally, we use magnetoencephalography (MEG) to show that the median rule can be easily applied to real data and gives reasonable results. In most situations the different methods give similar results, which enhances comparability across studies, but in data sets with a high noise level there is a clear advantage to using a statistically robust method. In conclusion, the median rule is an excellent method for estimating onset latencies in evoked responses.
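A sketch of how the median rule could be applied to a single evoked response, assuming the boxplot-style criterion median ± 2.3 × IQR of the baseline; the constant and any minimum-duration requirement are assumptions to be checked against the paper.

```python
import numpy as np

def onset_latency_median_rule(waveform, times, baseline_end=0.0, k=2.3):
    """Detect evoked-response onset with a boxplot-style 'median rule'.

    Samples after `baseline_end` (stimulus onset at t=0 assumed) are
    flagged when they fall outside median +/- k*IQR of the pre-stimulus
    baseline; the first flagged sample is taken as the onset. k=2.3 is
    an assumed value, not confirmed from the paper.
    """
    base = waveform[times < baseline_end]
    q1, med, q3 = np.percentile(base, [25, 50, 75])
    iqr = q3 - q1
    lo, hi = med - k * iqr, med + k * iqr
    post = times >= baseline_end
    flagged = post & ((waveform < lo) | (waveform > hi))
    idx = np.flatnonzero(flagged)
    return times[idx[0]] if idx.size else None
```

Because the threshold is built from the median and IQR rather than the mean and SD, a few large baseline artifacts barely move it, which is the robustness property the abstract emphasizes.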
Collapse
Affiliation(s)
- Benjamin Letham
- MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, MA, USA
Collapse
|
95
|
Auditory-visual multisensory interactions in humans: timing, topography, directionality, and sources. J Neurosci 2010; 30:12572-80. [PMID: 20861363 DOI: 10.1523/jneurosci.1099-10.2010] [Citation(s) in RCA: 108] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Current models of brain organization include multisensory interactions at early processing stages and within low-level, including primary, cortices. Embracing this model with regard to auditory-visual (AV) interactions in humans remains problematic. Controversy surrounds the application of an additive model to the analysis of event-related potentials (ERPs), and conventional ERP analysis methods have yielded discordant latencies of effects and permitted limited neurophysiologic interpretability. While hemodynamic imaging and transcranial magnetic stimulation studies provide general support for the above model, the precise timing, superadditive/subadditive directionality, topographic stability, and sources remain unresolved. We recorded ERPs in humans to attended but task-irrelevant stimuli that did not require an overt motor response, thereby circumventing paradigmatic caveats. We applied novel ERP signal-analysis methods to detail the likely bases of AV interactions. First, nonlinear interactions occur at 60-95 ms after stimulus onset and are the consequence of topographic, rather than purely strength-based, modulations of the ERP: AV stimuli engage distinct configurations of intracranial generators, rather than simply modulating the amplitude of unisensory responses. Second, source estimations (and statistical analyses thereof) identified primary visual, primary auditory, and posterior superior temporal regions as mediating these effects. Finally, scalar values of current densities in all of these regions exhibited functionally coupled, subadditive nonlinear effects, a pattern increasingly consistent with mounting evidence in nonhuman primates. In these ways, we demonstrate how the neurophysiologic bases of multisensory interactions can be noninvasively identified in humans, allowing for a synthesis across imaging methods on the one hand and species on the other.
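Topographic (strength-independent) modulations of the kind described above are typically indexed by global map dissimilarity (DISS), which compares GFP-normalized scalp maps. A minimal sketch under assumed (n_electrodes, n_times) arrays; this is one standard formulation, not the authors' exact code.

```python
import numpy as np

def dissimilarity(erp_a, erp_b):
    """Global map dissimilarity (DISS) between two (n_electrodes, n_times)
    ERPs: each map is scaled by its global field power (spatial SD across
    electrodes) so the comparison reflects topography, not strength.
    Maps are assumed to be average-referenced.
    """
    gfp_a = erp_a.std(axis=0)
    gfp_b = erp_b.std(axis=0)
    return np.sqrt(np.mean((erp_a / gfp_a - erp_b / gfp_b) ** 2, axis=0))

# e.g., compare the AV response with the summed A + V response over time:
# diss = dissimilarity(erp_av, erp_a + erp_v)
```

Nonzero DISS with matched response strength indicates that different generator configurations, not merely stronger ones, underlie the multisensory response.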
Collapse
|
96
|
|