1
Bidelman GM, York A, Pearson C. Neural correlates of phonetic categorization under auditory (phoneme) and visual (grapheme) modalities. Neuroscience 2025; 565:182-191. [PMID: 39631659] [DOI: 10.1016/j.neuroscience.2024.11.079]
Abstract
This study assessed the neural mechanisms and relative saliency of categorization for speech sounds and comparable graphemes (i.e., visual letters) of the same phonetic label. Given that linguistic experience shapes categorical processing, and letter-speech sound matching plays a crucial role during early reading acquisition, we hypothesized that sound (phoneme) and visual (grapheme) tokens representing the same linguistic identity might recruit common neural substrates, despite originating from different sensory modalities. Behavioral and neuroelectric brain responses (ERPs) were acquired as participants categorized stimuli from sound (phoneme) and homologous letter (grapheme) continua, each spanning a /da/-/ga/ gradient. Behaviorally, listeners were faster and showed stronger categorization for phonemes than for graphemes. At the neural level, multidimensional scaling of the EEG revealed that responses self-organized in a categorical fashion, such that tokens clustered within their respective modality beginning ∼150-250 ms after stimulus onset. Source-resolved ERPs further revealed modality-specific and overlapping brain regions supporting phonetic categorization. Left inferior frontal gyrus and auditory cortex showed stronger responses for sound category members compared to phonetically ambiguous tokens, whereas early visual cortices paralleled this categorical organization for graphemes. Auditory and visual categorization also recruited common visual association areas in extrastriate cortex, but in opposite hemispheres (auditory = left; visual = right). Our findings reveal that both auditory and visual sensory cortices support categorical organization for phonetic labels within their respective modalities. However, a partial overlap in phoneme and grapheme processing among occipital brain areas implies an isomorphic, domain-general mapping for phonetic categories in the dorsal visual system.
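The multidimensional-scaling (MDS) analysis described here can be illustrated with a short sketch. This is a hypothetical example rather than the authors' pipeline: `erp_patterns` is a placeholder for per-token ERP scalp patterns, and scikit-learn's MDS stands in for whatever implementation the study used.

```python
# Hypothetical sketch: MDS embedding of per-token ERP patterns to visualize
# categorical clustering (placeholder data, not the study's pipeline).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Assumed shape: (n_tokens, n_channels) mean scalp patterns in the
# ~150-250 ms post-onset window, one row per continuum token.
erp_patterns = rng.standard_normal((10, 64))

mds = MDS(n_components=2, dissimilarity="euclidean", random_state=0)
coords = mds.fit_transform(erp_patterns)  # 2-D coordinates per token

for token, (x, y) in enumerate(coords, start=1):
    print(f"token {token}: ({x:+.2f}, {y:+.2f})")
```

Tokens whose 2-D coordinates cluster by category (or by modality) would show the kind of self-organization the abstract describes.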
Affiliation(s)
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA; Cognitive Science Program, Indiana University, Bloomington, IN, USA.
- Ashleigh York
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; University of Mississippi Medical Center, Jackson, MS, USA.
- Claire Pearson
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
2
Alwashmi K, Rowe F, Meyer G. Multimodal MRI analysis of microstructural and functional connectivity brain changes following systematic audio-visual training in a virtual environment. Neuroimage 2025; 305:120983. [PMID: 39732221] [DOI: 10.1016/j.neuroimage.2024.120983]
Abstract
Recent work has shown rapid microstructural brain changes in response to learning new tasks. These cognitive tasks tend to draw on multiple brain regions connected by white matter (WM) tracts; behavioural performance change is therefore likely to result from microstructural, functional activation, and connectivity changes in extended neural networks. Here we show for the first time that learning-induced microstructural change in WM tracts, quantified with diffusion tensor and kurtosis imaging (DTI, DKI), is linked to functional connectivity changes in the brain areas that use these tracts to communicate. Twenty healthy participants engaged in a month of systematic audiovisual (AV) training in virtual reality (VR). DTI analysis using repeated-measures ANOVA revealed a decrease in mean diffusivity (MD) in the second branch of the superior longitudinal fasciculus (SLF II), alongside a significant increase in fractional anisotropy (FA) in the optic radiations post-training, persisting at the follow-up (FU) assessment (post: MD t(76) = 6.13, p < 0.001, FA t(76) = 3.68, p < 0.01; FU: MD t(76) = 4.51, p < 0.001, FA t(76) = 2.989, p < 0.05). The MD reduction across participants was significantly correlated with the observed behavioural performance gains. A functional connectivity (FC) analysis showed significantly enhanced correlation of functional activity between primary visual and auditory cortices post-training, consistent with the DKI microstructural changes found within these two regions as well as in the sagittal stratum, which includes WM tracts connecting the occipital and temporal lobes (mean kurtosis (MK): cuneus t(19) = 2.3, p < 0.05; transverse temporal t(19) = 2.6, p < 0.05; radial kurtosis (RK): sagittal stratum t(19) = 2.3, p < 0.05). DTI and DKI thus provide complementary data, both consistent with the task-relevant brain networks. The results demonstrate the utility of multimodal imaging analysis in providing complementary evidence for brain changes at the level of networks. In summary, our study shows the complex relationship between microstructural adaptations and functional connectivity, unveiling the potential of multisensory integration within immersive VR training. These findings have implications for learning and rehabilitation strategies, facilitating more effective interventions within virtual environments.
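As a hedged illustration of the statistics reported here (not the authors' code), a paired pre/post comparison of a diffusion metric and its correlation with behavioural gains might look like the following; all arrays are placeholder data.

```python
# Hypothetical sketch: paired comparison of a DTI metric (e.g., FA in the
# optic radiations) before vs. after training, plus a brain-behaviour correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fa_pre = rng.normal(0.45, 0.03, size=20)             # placeholder pre-training FA, n = 20
fa_post = fa_pre + rng.normal(0.01, 0.01, size=20)   # placeholder post-training FA

t, p = stats.ttest_rel(fa_post, fa_pre)              # paired t-test across participants
print(f"paired t(19) = {t:.2f}, p = {p:.4f}")

# Correlate microstructural change with behavioural gain (both placeholders).
behaviour_gain = rng.normal(0.2, 0.05, size=20)
r, p_r = stats.pearsonr(fa_post - fa_pre, behaviour_gain)
print(f"r = {r:.2f}, p = {p_r:.4f}")
```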
Affiliation(s)
- Kholoud Alwashmi
- Faculty of Health and Life Sciences, University of Liverpool, United Kingdom; Department of Radiology, Princess Nourah bint Abdulrahman University, Saudi Arabia.
- Fiona Rowe
- IDEAS, University of Liverpool, United Kingdom.
- Georg Meyer
- Institute of Population Health, University of Liverpool, United Kingdom; Hanse Wissenschaftskolleg, Delmenhorst, Germany.
3
Raij T, Lin FH, Letham B, Lankinen K, Nayak T, Witzel T, Hämäläinen M, Ahveninen J. Onset timing of letter processing in auditory and visual sensory cortices. Front Integr Neurosci 2024; 18:1427149. [PMID: 39610979] [PMCID: PMC11602476] [DOI: 10.3389/fnint.2024.1427149]
Abstract
Here, we report onset latencies for multisensory processing of letters in the primary auditory and visual sensory cortices. Healthy adults were presented with 300-ms visual and/or auditory letters (uppercase Roman alphabet and the corresponding auditory letter names in English). Magnetoencephalography (MEG) evoked response generators were extracted from the auditory and visual sensory cortices for both within-modality and cross-sensory activations; these locations were mainly consistent with functional magnetic resonance imaging (fMRI) results in the same subjects. In the primary auditory cortices (Heschl's gyri), activity commenced at 25 ms for auditory and at 65 ms for visual stimuli (median values). In the primary visual cortex (calcarine fissure), activations started at 48 ms for visual and at 62 ms for auditory stimuli. This timing pattern suggests that the origins of the cross-sensory activations may lie in the primary sensory cortices of the opposite modality, with conduction delays (from one sensory cortex to another) of 17-37 ms. Audiovisual interactions for letters started at 125 ms in the auditory and at 133 ms in the visual cortex (60-71 ms after inputs from both modalities converged). Multivariate pattern analysis suggested similar latency differences between the sensory cortices. Combined with our earlier findings for simpler stimuli (noise bursts and checkerboards), these results suggest that primary sensory cortices participate in early cross-modal and interaction processes similarly for different stimulus materials, but that previously learned audiovisual associations and stimulus complexity may delay the start of the audiovisual interaction stage.
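One simple way to think about onset-latency estimation of the kind reported here is a baseline-threshold crossing rule; the sketch below is a generic illustration with synthetic data, not the study's actual estimator.

```python
# Hypothetical sketch: onset latency as the first post-stimulus time point
# where an evoked waveform exceeds a threshold derived from baseline noise.
import numpy as np

def onset_latency_ms(waveform, times_ms, baseline_end_ms=0.0, n_sd=3.0):
    """First post-stimulus time where |waveform| exceeds n_sd baseline SDs."""
    baseline = waveform[times_ms < baseline_end_ms]
    threshold = n_sd * baseline.std()
    post = times_ms >= 0.0
    above = np.abs(waveform[post]) > threshold
    if not above.any():
        return None
    return float(times_ms[post][above.argmax()])

# Placeholder evoked response: noise plus a deflection starting ~25 ms.
times = np.arange(-100, 300, 1.0)  # ms, assumed 1 kHz sampling
rng = np.random.default_rng(2)
signal = rng.normal(0, 1, times.size)
signal[times >= 25] += 8 * np.exp(-(times[times >= 25] - 60) ** 2 / 800)

print(onset_latency_ms(signal, times))  # ~25-40 ms, depending on noise
```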
Affiliation(s)
- Tommi Raij
- MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Fa-Hsuan Lin
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Benjamin Letham
- MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Kaisu Lankinen
- MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Tapsya Nayak
- MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Thomas Witzel
- MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
- Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, United States
- Matti Hämäläinen
- Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
- Jyrki Ahveninen
- MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, United States
4
Paasonen J, Valjakka JS, Salo RA, Paasonen E, Tanila H, Michaeli S, Mangia S, Gröhn O. Whisker stimulation with different frequencies reveals non-uniform modulation of functional magnetic resonance imaging signal across sensory systems in awake rats. bioRxiv 2024:2024.11.13.623361. [PMID: 39605361] [PMCID: PMC11601494] [DOI: 10.1101/2024.11.13.623361]
Abstract
Primary sensory systems are classically considered separate units; however, there is now evidence of notable interactions between them. We examined this cross-sensory interplay by applying a quiet and motion-tolerant zero echo time functional magnetic resonance imaging (fMRI) technique to elucidate the evoked brain-wide responses to whisker pad stimulation in awake and anesthetized rats. Specifically, we characterized the brain-wide responses in core and non-core regions to whisker pad stimulation by varying the stimulation frequency, and determined whether isoflurane-medetomidine anesthesia, traditionally used in preclinical imaging, confounded investigations related to sensory integration. We demonstrated that unilateral whisker pad stimulation elicited robust activity not only along the whisker-mediated tactile system, but also in auditory, visual, high-order, and cerebellar regions, indicative of brain-wide cross-sensory and associative activity. By inspecting the response profiles to different stimulation frequencies and the temporal signal characteristics, we observed that the non-core regions responded to stimulation very differently from the primary sensory system, likely reflecting different encoding modes between primary sensory, cross-sensory, and integrative processing. Lastly, while the activity evoked in low-order sensory structures could be reliably detected under anesthesia, the activity in high-order processing and the complex differences between primary, cross-sensory, and associative systems were visible only in the awake state. We conclude that our study reveals novel aspects of the cross-sensory interplay of the whisker-mediated tactile system, and, importantly, that these would be difficult to observe in anesthetized rats.
Affiliation(s)
- Jaakko Paasonen
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Juha S. Valjakka
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, USA
- Raimo A. Salo
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Ekaterina Paasonen
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- NeuroCenter, Kuopio University Hospital, Kuopio, Finland
- Heikki Tanila
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Shalom Michaeli
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, USA
- Silvia Mangia
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, USA
- Olli Gröhn
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
5
Mackey CA, O’Connell MN, Hackett TA, Schroeder CE, Kajikawa Y. Laminar organization of visual responses in core and parabelt auditory cortex. Cereb Cortex 2024; 34:bhae373. [PMID: 39300609] [PMCID: PMC11412770] [DOI: 10.1093/cercor/bhae373]
Abstract
Audiovisual (AV) interaction has been shown in many studies of auditory cortex. However, the underlying processes and circuits are unclear because few studies have used methods that delineate the timing and laminar distribution of net excitatory and inhibitory processes within areas, much less across cortical levels. This study examined laminar profiles of neuronal activity in auditory core (AC) and parabelt (PB) cortices recorded from macaques during active discrimination of conspecific faces and vocalizations. We found modulation of multi-unit activity (MUA) in response to isolated visual stimulation, characterized by a brief deep MUA spike, putatively in white matter, followed by mid-layer MUA suppression in core auditory cortex; the later suppressive event had clear current source density concomitants, while the earlier MUA spike did not. We observed a similar facilitation-suppression sequence in the PB, with later onset latency. In combined AV stimulation, there was moderate reduction of responses to sound during the visual-evoked MUA suppression interval in both AC and PB. These data suggest a common sequence of afferent spikes, followed by synaptic inhibition; however, differences in timing and laminar location may reflect distinct visual projections to AC and PB.
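For readers unfamiliar with laminar analysis, the current source density (CSD) referred to here is typically estimated as the negative second spatial derivative of the laminar LFP. A minimal sketch follows, with placeholder data and generic parameter values rather than the study's recording geometry.

```python
# Hypothetical sketch: 1-D current source density (CSD) from laminar LFPs,
# the standard second-spatial-difference estimate used to localize sinks/sources.
import numpy as np

def csd(lfp, spacing_mm=0.1, conductivity=0.3):
    """lfp: (n_channels, n_times) laminar LFP; returns CSD for interior channels.

    conductivity in S/m, spacing in mm. Second-difference estimate:
    CSD_i = -sigma * (phi_{i-1} - 2*phi_i + phi_{i+1}) / h**2
    """
    h = spacing_mm * 1e-3  # metres
    second_diff = lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]
    return -conductivity * second_diff / h**2

rng = np.random.default_rng(3)
lfp = rng.standard_normal((23, 500))  # placeholder: 23 contacts x 500 samples
print(csd(lfp).shape)  # (21, 500): interior contacts only
```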
Affiliation(s)
- Chase A Mackey
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd, Orangeburg, NY 10962, United States
- Monica N O’Connell
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd, Orangeburg, NY 10962, United States
- Department of Psychiatry, New York University School of Medicine, 145 E 32nd St., New York, NY 10016, United States
- Troy A Hackett
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1211 Medical Center Dr., Nashville, TN 37212, United States
- Charles E Schroeder
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd, Orangeburg, NY 10962, United States
- Departments of Psychiatry and Neurology, Columbia University College of Physicians, 630 W 168th St, New York, NY 10032, United States
- Yoshinao Kajikawa
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd, Orangeburg, NY 10962, United States
- Department of Psychiatry, New York University School of Medicine, 145 E 32nd St., New York, NY 10016, United States
6
Bidelman GM, York A, Pearson C. Neural correlates of phonetic categorization under auditory (phoneme) and visual (grapheme) modalities. bioRxiv 2024:2024.07.24.604940. [PMID: 39211275] [PMCID: PMC11361091] [DOI: 10.1101/2024.07.24.604940]
Abstract
We tested whether the neural mechanisms of phonetic categorization are specific to speech sounds or generalize to graphemes (i.e., visual letters) of the same phonetic label. Given that linguistic experience shapes categorical processing, and letter-speech sound matching plays a crucial role during early reading acquisition, we hypothesized that sound (phoneme) and visual (grapheme) tokens representing the same linguistic identity might recruit common neural substrates, despite originating from different sensory modalities. Behavioral and neuroelectric brain responses (ERPs) were acquired as participants categorized stimuli from sound (phoneme) and homologous letter (grapheme) continua, each spanning a /da/-/ga/ gradient. Behaviorally, listeners were faster and showed stronger categorization for phonemes than for graphemes. At the neural level, multidimensional scaling of the EEG revealed that responses self-organized in a categorical fashion, such that tokens clustered within their respective modality beginning ∼150-250 ms after stimulus onset. Source-resolved ERPs further revealed modality-specific and overlapping brain regions supporting phonetic categorization. Left inferior frontal gyrus and auditory cortex showed stronger responses for sound category members compared to phonetically ambiguous tokens, whereas early visual cortices paralleled this categorical organization for graphemes. Auditory and visual categorization also recruited common visual association areas in extrastriate cortex, but in opposite hemispheres (auditory = left; visual = right). Our findings reveal that both auditory and visual sensory cortices support categorical organization for phonetic labels within their respective modalities. However, a partial overlap in phoneme and grapheme processing among occipital brain areas implies an isomorphic, domain-general mapping for phonetic categories in the dorsal visual system.
7
Lankinen K, Ahveninen J, Jas M, Raij T, Ahlfors SP. Neuronal Modeling of Cross-Sensory Visual Evoked Magnetoencephalography Responses in the Auditory Cortex. J Neurosci 2024; 44:e1119232024. [PMID: 38508715] [PMCID: PMC11044114] [DOI: 10.1523/jneurosci.1119-23.2024]
Abstract
Previous studies have demonstrated that auditory cortex activity can be influenced by cross-sensory visual inputs. Intracortical laminar recordings in nonhuman primates have suggested a feedforward (FF) type profile for auditory evoked but feedback (FB) type for visual evoked activity in the auditory cortex. To test whether cross-sensory visual evoked activity in the auditory cortex is associated with FB inputs also in humans, we analyzed magnetoencephalography (MEG) responses from eight human subjects (six females) evoked by simple auditory or visual stimuli. In the estimated MEG source waveforms for auditory cortex regions of interest, auditory evoked response showed peaks at 37 and 90 ms and visual evoked response at 125 ms. The inputs to the auditory cortex were modeled through FF- and FB-type connections targeting different cortical layers using the Human Neocortical Neurosolver (HNN), which links cellular- and circuit-level mechanisms to MEG signals. HNN modeling suggested that the experimentally observed auditory response could be explained by an FF input followed by an FB input, whereas the cross-sensory visual response could be adequately explained by just an FB input. Thus, the combined MEG and HNN results support the hypothesis that cross-sensory visual input in the auditory cortex is of FB type. The results also illustrate how the dynamic patterns of the estimated MEG source activity can provide information about the characteristics of the input into a cortical area in terms of the hierarchical organization among areas.
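The HNN approach can be sketched with the open-source hnn-core package, which implements the Human Neocortical Neurosolver model. The drive timings and synaptic weights below are illustrative placeholders, not the parameters fitted in the study, and running the sketch requires a working NEURON installation.

```python
# Hypothetical sketch with hnn-core: a feedforward-like proximal drive followed
# by a feedback-like distal drive onto the Jones (2009) cortical column model.
from hnn_core import jones_2009_model, simulate_dipole

net = jones_2009_model()

# FF-like proximal drive (placeholder timing near the 37 ms auditory peak).
net.add_evoked_drive(
    'ff_proximal', mu=37.0, sigma=3.0, numspikes=1, location='proximal',
    weights_ampa={'L2_basket': 0.01, 'L2_pyramidal': 0.01,
                  'L5_basket': 0.01, 'L5_pyramidal': 0.01},
    synaptic_delays=0.1)

# FB-like distal drive (placeholder timing near the 90 ms peak); distal
# drives target supragranular layers and L5 pyramidal apical dendrites.
net.add_evoked_drive(
    'fb_distal', mu=90.0, sigma=6.0, numspikes=1, location='distal',
    weights_ampa={'L2_basket': 0.01, 'L2_pyramidal': 0.01,
                  'L5_pyramidal': 0.01},
    synaptic_delays=0.1)

dipoles = simulate_dipole(net, tstop=170.0, n_trials=1)
print(dipoles[0].data['agg'].shape)  # aggregate current-dipole time course
```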
Affiliation(s)
- Kaisu Lankinen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts 02129
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts 02129
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115
- Mainak Jas
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts 02129
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115
- Tommi Raij
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts 02129
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115
- Seppo P Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts 02129
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115
8
Ahveninen J, Lee HJ, Yu HY, Lee CC, Chou CC, Ahlfors SP, Kuo WJ, Jääskeläinen IP, Lin FH. Visual Stimuli Modulate Local Field Potentials But Drive No High-Frequency Activity in Human Auditory Cortex. J Neurosci 2024; 44:e0890232023. [PMID: 38129133] [PMCID: PMC10869150] [DOI: 10.1523/jneurosci.0890-23.2023]
Abstract
Neuroimaging studies suggest cross-sensory visual influences in human auditory cortices (ACs). Whether these influences reflect active visual processing in human ACs, which drives neuronal firing and concurrent broadband high-frequency activity (BHFA; >70 Hz), or whether they merely modulate sound processing is still debatable. Here, we presented auditory, visual, and audiovisual stimuli to 16 participants (7 women, 9 men) with stereo-EEG depth electrodes implanted near ACs for presurgical monitoring. Anatomically normalized group analyses were facilitated by inverse modeling of intracranial source currents. Analyses of intracranial event-related potentials (iERPs) suggested cross-sensory responses to visual stimuli in ACs, which lagged the earliest auditory responses by several tens of milliseconds. Visual stimuli also modulated the phase of intrinsic low-frequency oscillations and triggered 15-30 Hz event-related desynchronization in ACs. However, BHFA, a putative correlate of neuronal firing, was not significantly increased in ACs after visual stimuli, not even when they coincided with auditory stimuli. Intracranial recordings demonstrate cross-sensory modulations, but no indication of active visual processing in human ACs.
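Broadband high-frequency activity of the kind analyzed here is commonly estimated as the analytic amplitude of band-pass-filtered intracranial data. The following is a generic, hypothetical sketch with placeholder data and an assumed sampling rate, not the study's pipeline.

```python
# Hypothetical sketch: BHFA (>70 Hz) as the Hilbert envelope of a
# band-pass-filtered intracranial trace (placeholder data).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0  # Hz, assumed sampling rate
rng = np.random.default_rng(4)
ieeg = rng.standard_normal(int(2 * fs))  # placeholder 2-s iEEG trace

# Band-pass 70-150 Hz (4th-order Butterworth, zero-phase).
b, a = butter(4, [70 / (fs / 2), 150 / (fs / 2)], btype='band')
bhfa_band = filtfilt(b, a, ieeg)

# Analytic amplitude envelope = putative BHFA time course.
envelope = np.abs(hilbert(bhfa_band))
print(envelope.mean())
```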
Affiliation(s)
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts 02129
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115
- Hsin-Ju Lee
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 1L7, Canada
- Hsiang-Yu Yu
- Department of Epilepsy, Neurological Institute, Taipei Veterans General Hospital, Taipei 11217, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Cheng-Chia Lee
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei 11217, Taiwan
- Chien-Chen Chou
- Department of Epilepsy, Neurological Institute, Taipei Veterans General Hospital, Taipei 11217, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Seppo P Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts 02129
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115
- Wen-Jui Kuo
- Institute of Neuroscience, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, FI-00076 AALTO, Finland
- International Laboratory of Social Neurobiology, Institute of Cognitive Neuroscience, Higher School of Economics, Moscow 101000, Russia
- Fa-Hsuan Lin
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 1L7, Canada
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, FI-00076 AALTO, Finland
9
Lankinen K, Ahveninen J, Jas M, Raij T, Ahlfors SP. Neuronal modeling of magnetoencephalography responses in auditory cortex to auditory and visual stimuli. bioRxiv 2024:2023.06.16.545371. [PMID: 37398025] [PMCID: PMC10312796] [DOI: 10.1101/2023.06.16.545371]
Abstract
Previous studies have demonstrated that auditory cortex activity can be influenced by cross-sensory visual inputs. Intracortical recordings in non-human primates (NHP) have suggested a bottom-up feedforward (FF) type laminar profile for auditory evoked, but a top-down feedback (FB) type for cross-sensory visual evoked, activity in the auditory cortex. To test whether this principle applies also to humans, we analyzed magnetoencephalography (MEG) responses from eight human subjects (six females) evoked by simple auditory or visual stimuli. In the estimated MEG source waveforms for the auditory cortex region of interest, auditory evoked responses showed peaks at 37 and 90 ms and cross-sensory visual responses at 125 ms. The inputs to the auditory cortex were then modeled through FF and FB type connections targeting different cortical layers using the Human Neocortical Neurosolver (HNN), which consists of a neocortical circuit model linking the cellular- and circuit-level mechanisms to MEG. The HNN models suggested that the measured auditory response could be explained by an FF input followed by an FB input, and the cross-sensory visual response by an FB input. Thus, the combined MEG and HNN results support the hypothesis that cross-sensory visual input in the auditory cortex is of FB type. The results also illustrate how the dynamic patterns of the estimated MEG/EEG source activity can provide information about the characteristics of the input into a cortical area in terms of the hierarchical organization among areas.
Affiliation(s)
- Kaisu Lankinen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
- Department of Radiology, Harvard Medical School, Boston, MA 02115
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
- Department of Radiology, Harvard Medical School, Boston, MA 02115
- Mainak Jas
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
- Department of Radiology, Harvard Medical School, Boston, MA 02115
- Tommi Raij
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
- Department of Radiology, Harvard Medical School, Boston, MA 02115
- Seppo P. Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129
- Department of Radiology, Harvard Medical School, Boston, MA 02115
10
Bertonati G, Amadeo MB, Campus C, Gori M. Task-dependent spatial processing in the visual cortex. Hum Brain Mapp 2023; 44:5972-5981. [PMID: 37811869] [PMCID: PMC10619374] [DOI: 10.1002/hbm.26489]
Abstract
To solve spatial tasks, the human brain recruits the visual cortices. Nonetheless, the representation of spatial information is not fixed but depends on the reference frames in which the spatial inputs are encoded. The present study investigates how the kind of spatial representation influences the recruitment of visual areas during multisensory spatial tasks. We tested participants in an electroencephalography experiment involving two audio-visual (AV) spatial tasks: a spatial bisection, in which participants estimated the relative position in space of an AV stimulus in relation to the position of two other stimuli, and a spatial localization, in which participants localized one AV stimulus in relation to themselves. Results revealed that the spatial tasks specifically modulated the occipital event-related potentials (ERPs) after the onset of the stimuli. We observed a greater contralateral early occipital component (50-90 ms) when participants solved the spatial bisection, and a more robust later occipital response (110-160 ms) when they processed the spatial localization. This observation suggests that different spatial representations elicited by multisensory stimuli are sustained by separate neurophysiological mechanisms.
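The window-based ERP comparison described here can be sketched generically: average the amplitude within each reported time window, then compare across tasks. Everything below (sampling rate, trial counts, data) is a placeholder, not the study's analysis code.

```python
# Hypothetical sketch: mean ERP amplitude in the two occipital windows
# reported above (50-90 ms and 110-160 ms), compared across tasks.
import numpy as np
from scipy import stats

fs = 500.0                             # Hz, assumed sampling rate
times = np.arange(-0.1, 0.4, 1 / fs)   # seconds relative to stimulus onset

def window_mean(erp, t0, t1):
    """Mean amplitude of erp (trials x times) within [t0, t1] seconds."""
    mask = (times >= t0) & (times <= t1)
    return erp[:, mask].mean(axis=1)

rng = np.random.default_rng(5)
erp_bisection = rng.standard_normal((40, times.size))     # placeholder trials
erp_localization = rng.standard_normal((40, times.size))  # placeholder trials

early_bis = window_mean(erp_bisection, 0.050, 0.090)
early_loc = window_mean(erp_localization, 0.050, 0.090)
print(stats.ttest_ind(early_bis, early_loc))
```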
Affiliation(s)
- G. Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- M. B. Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- C. Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- M. Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
11
Choi I, Demir I, Oh S, Lee SH. Multisensory integration in the mammalian brain: diversity and flexibility in health and disease. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220338. [PMID: 37545309] [PMCID: PMC10404930] [DOI: 10.1098/rstb.2022.0338]
Abstract
Multisensory integration (MSI) occurs in a variety of brain areas, spanning cortical and subcortical regions. In traditional studies of sensory processing, the sensory cortices have been considered to process sensory information in a modality-specific manner. The sensory cortices, however, send the information to other cortical and subcortical areas, including the higher association cortices and the other sensory cortices, where the multiple modality inputs converge and integrate to generate a meaningful percept. This integration process is neither simple nor fixed because these brain areas interact with each other via complicated circuits, which can be modulated by numerous internal and external conditions. As a result, dynamic MSI makes multisensory decisions flexible and adaptive in behaving animals. Impairments in MSI occur in many psychiatric disorders, which may result in an altered perception of multisensory stimuli and an abnormal reaction to them. This review discusses the diversity and flexibility of MSI in mammals, including humans, primates and rodents, as well as the brain areas involved. It further explains how such flexibility influences perceptual experiences in behaving animals in both health and disease. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Ilsong Choi
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Ilayda Demir
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seungmi Oh
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seung-Hee Lee
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
12
Zhu H, Tang X, Chen T, Yang J, Wang A, Zhang M. Audiovisual illusion training improves multisensory temporal integration. Conscious Cogn 2023; 109:103478. [PMID: 36753896] [DOI: 10.1016/j.concog.2023.103478]
Abstract
When we perceive external physical stimuli from the environment, the brain must remain somewhat flexible to unaligned stimuli within a specific range, as multisensory signals are subject to different transmission and processing delays. Recent studies have shown that the width of the 'temporal binding window' (TBW) can be reduced by perceptual learning. To date, however, the vast majority of studies examining the mechanisms of perceptual learning have focused on experience-dependent effects, and no consensus has been reached on how such learning relates to perception as influenced by audiovisual illusions. Sound-induced flash illusion (SiFI) training is a reliable method for improving perceptual sensitivity. The present study utilized the classic auditory-dominated SiFI paradigm with feedback training to investigate the effect of 5 days of SiFI training on multisensory temporal integration, as evaluated by a simultaneity judgment (SJ) task and a temporal order judgment (TOJ) task. We demonstrate that audiovisual illusion training enhances the precision of multisensory temporal integration in the form of (i) a shift of the point of subjective simultaneity (PSS) toward true simultaneity (0 ms) and (ii) a narrowing TBW. The results are consistent with a Bayesian model of causal inference, suggesting that perceptual learning reduces susceptibility to the SiFI while improving the precision of audiovisual temporal estimation.
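PSS and TBW estimates of the kind reported here are often obtained by fitting a Gaussian to simultaneity-judgment data. The sketch below uses made-up response proportions, and the TBW convention shown (twice the fitted width) is just one of several used in the literature.

```python
# Hypothetical sketch: Gaussian fit to simultaneity-judgment data; the peak
# location gives the PSS and the width gives a TBW estimate.
import numpy as np
from scipy.optimize import curve_fit

soas = np.array([-300, -200, -100, 0, 100, 200, 300])  # ms, audio-visual SOA
p_simultaneous = np.array([0.08, 0.30, 0.75, 0.95, 0.80, 0.35, 0.10])

def gaussian(soa, amplitude, pss, sigma):
    return amplitude * np.exp(-(soa - pss) ** 2 / (2 * sigma ** 2))

(amplitude, pss, sigma), _ = curve_fit(
    gaussian, soas, p_simultaneous, p0=[1.0, 0.0, 100.0])

tbw = 2 * sigma  # one common width convention; definitions vary by study
print(f"PSS = {pss:.1f} ms, TBW ~ {tbw:.0f} ms")
```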
Affiliation(s)
- Haocheng Zhu
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Tingji Chen
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China.
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan.
13
Lankinen K, Ahlfors SP, Mamashli F, Blazejewska AI, Raij T, Turpin T, Polimeni JR, Ahveninen J. Cortical depth profiles of auditory and visual 7 T functional MRI responses in human superior temporal areas. Hum Brain Mapp 2023; 44:362-372. [PMID: 35980015] [PMCID: PMC9842898] [DOI: 10.1002/hbm.26046]
Abstract
Invasive neurophysiological studies in nonhuman primates have shown different laminar activation profiles to auditory vs. visual stimuli in auditory cortices and adjacent polymodal areas. Means to examine the underlying feedforward vs. feedback type influences noninvasively have been limited in humans. Here, using 1-mm isotropic resolution 3D echo-planar imaging at 7 T, we studied the intracortical depth profiles of functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) signals to brief auditory (noise bursts) and visual (checkerboard) stimuli. BOLD percent-signal-changes were estimated at 11 equally spaced intracortical depths, within regions-of-interest encompassing auditory (Heschl's gyrus, Heschl's sulcus, planum temporale, and posterior superior temporal gyrus) and polymodal (middle and posterior superior temporal sulcus) areas. Effects of differing BOLD signal strengths for auditory and visual stimuli were controlled via normalization and statistical modeling. The BOLD depth profile shapes, modeled with quadratic regression, were significantly different for auditory vs. visual stimuli in auditory cortices, but not in polymodal areas. The different depth profiles could reflect sensory-specific feedforward versus cross-sensory feedback influences, previously shown in laminar recordings in nonhuman primates. The results suggest that intracortical BOLD profiles can help distinguish between feedforward and feedback type influences in the human brain. Further experimental studies are still needed to clarify how underlying signal strength influences BOLD depth profiles under different stimulus conditions.
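The quadratic depth-profile modeling mentioned here can be illustrated with a generic polynomial fit over the 11 sampled depths; the BOLD values below are synthetic placeholders, not the study's data.

```python
# Hypothetical sketch: quadratic regression of a BOLD percent-signal-change
# profile over 11 equally spaced cortical depths.
import numpy as np

depths = np.linspace(0.0, 1.0, 11)   # 0 = white-matter boundary, 1 = pial surface
rng = np.random.default_rng(6)
bold = 1.0 + 0.8 * depths - 0.5 * depths**2 + rng.normal(0, 0.05, 11)

# np.polyfit returns coefficients highest-degree first.
quadratic, linear, intercept = np.polyfit(depths, bold, deg=2)
print(f"curvature = {quadratic:.2f}, slope = {linear:.2f}, offset = {intercept:.2f}")
```

Comparing the fitted curvature and slope terms between stimulus conditions is one way to test whether two depth profiles differ in shape, as the abstract describes.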
Affiliation(s)
- Kaisu Lankinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Seppo P. Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Fahimeh Mamashli
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Anna I. Blazejewska
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Tommi Raij
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Tori Turpin
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Jonathan R. Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
14
Yuan Y, He X, Yue Z. Working memory load modulates the processing of audiovisual distractors: A behavioral and event-related potentials study. Front Integr Neurosci 2023; 17:1120668. [PMID: 36908504] [PMCID: PMC9995450] [DOI: 10.3389/fnint.2023.1120668]
Abstract
The interplay between different modalities can help stimuli to be perceived more effectively. However, very few studies have focused on how multisensory distractors affect task performance. Using behavioral and event-related potential (ERP) techniques, the present study examined whether multisensory audiovisual distractors attract attention more effectively than unisensory distractors, and whether such a process is modulated by working memory load. Across three experiments, n-back tasks (1-back and 2-back) were adopted with peripheral auditory, visual, or audiovisual distractors. Visual and auditory distractors were white discs and pure tones (Experiments 1 and 2) or pictures and sounds of animals (Experiment 3). Behavioral results in Experiment 1 showed a significant interference effect under high working memory load but not under low load: responses to central letters with audiovisual distractors were significantly slower than those to letters without distractors, while no significant difference was found between the unisensory-distractor and no-distractor conditions. Similarly, ERP results in Experiments 2 and 3 showed that integration occurred only under high load: early integration for simple audiovisual distractors (240-340 ms) and late integration for complex audiovisual distractors (440-600 ms). These findings suggest that multisensory distractors can be integrated and effectively attract attention away from the main task (an interference effect), and that this effect is pronounced only under high working memory load.
Affiliation(s)
- Yichen Yuan
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Xiang He
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Zhenzhu Yue
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
15
Gori M, Bertonati G, Campus C, Amadeo MB. Multisensory representations of space and time in sensory cortices. Hum Brain Mapp 2022; 44:656-667. [PMID: 36169038] [PMCID: PMC9842891] [DOI: 10.1002/hbm.26090]
Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. We therefore measured the event-related potential (ERP) responses of 16 participants while they performed multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. As predicted, the second audiovisual stimulus of both tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was larger in occipital areas during the spatial bisection task, and in temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that this aspect also selectively modulates cortical activity in response to multisensory stimuli.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgia Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- Claudio Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
16
Perceptual training narrows the temporal binding window of audiovisual integration in both younger and older adults. Neuropsychologia 2022; 173:108309. [PMID: 35752266] [DOI: 10.1016/j.neuropsychologia.2022.108309]
Abstract
There is a growing body of evidence to suggest that multisensory processing changes with advancing age, usually in the form of an enlarged temporal binding window, with some studies linking these multisensory changes to negative clinical outcomes. Perceptual training regimes represent a promising means for enhancing the precision of multisensory integration in ageing; however, to date, the vast majority of studies examining the efficacy of multisensory perceptual learning have focused solely on healthy young adults. Here, we measured the temporal binding windows of younger and older participants before and after training on an audiovisual temporal discrimination task to assess (i) how perceptual training affected the shape of the temporal binding window and (ii) whether training effects were similar in both age groups. Our results replicated previous findings of an enlarged temporal binding window in older adults, as well as providing further evidence that both younger and older participants can improve the precision of their audiovisual timing estimation via perceptual training. We also show that this training protocol led to a narrowing of the temporal binding window associated with the sound-induced flash illusion in both age groups, indicating a general refinement of audiovisual integration. However, while younger adults also displayed a general reduction in crossmodal interactions following training, this effect was not observed in the older adult group. Together, our results suggest that perceptual training narrows the temporal binding window of audiovisual integration in both younger and older adults but has less of an impact on prior expectations regarding the source of audiovisual signals in older adults.
17
Turoman N, Tivadar RI, Retsa C, Murray MM, Matusz PJ. Towards understanding how we pay attention in naturalistic visual search settings. Neuroimage 2021; 244:118556. [PMID: 34492292] [DOI: 10.1016/j.neuroimage.2021.118556]
Abstract
Research on attentional control has largely focused on single senses and on the importance of behavioural goals in controlling attention. However, everyday situations are multisensory and contain regularities, both of which likely influence attention. We investigated how visual attentional capture is simultaneously impacted by top-down goals, the multisensory nature of stimuli, and the contextual factors of stimuli's semantic relationship and temporal predictability. Participants performed a multisensory version of the Folk et al. (1992) spatial cueing paradigm, searching for a target of a predefined colour (e.g. a red bar) within an array preceded by a distractor. We manipulated: 1) stimuli's goal-relevance via distractor colour (matching vs. mismatching the target), 2) stimuli's multisensory nature (colour distractors appearing alone vs. with tones), 3) the relationship between the distractor sound and colour (arbitrary vs. semantically congruent), and 4) the temporal predictability of distractor onset. Reaction-time spatial cueing served as a behavioural measure of attentional selection. We also recorded 129-channel event-related potentials (ERPs), analysing the distractor-elicited N2pc component both canonically and within a multivariate electrical neuroimaging framework. Behaviourally, arbitrary target-matching distractors captured attention more strongly than semantically congruent ones, with no evidence for context modulating multisensory enhancements of capture. Notably, electrical neuroimaging analyses of the surface-level EEG revealed context-based influences on attention to both visual and multisensory distractors, both in how strongly they activated the brain and in the type of brain networks activated. For both processes, the context-driven modulations of the brain response occurred long before the N2pc time window, with topographic (network-based) modulations at ∼30 ms, followed by strength-based modulations at ∼100 ms post-distractor onset. Our results reveal that both stimulus meaning and predictability modulate attentional selection, and that they interact while doing so. Meaning, in addition to temporal predictability, is thus a second source of contextual information facilitating goal-directed behaviour. More broadly, in everyday situations, attention is controlled by an interplay between one's goals and stimuli's perceptual salience, meaning, and predictability. Our study calls for a revision of attentional control theories to account for the roles of contextual and multisensory control.
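One core quantity in electrical neuroimaging frameworks of the kind referenced here is global field power (GFP), the spatial standard deviation of the average-referenced scalp field at each time point, which underlies "strength-based" analyses. A minimal sketch with placeholder data:

```python
# Hypothetical sketch: global field power (GFP) of a multichannel ERP,
# i.e., the standard deviation across electrodes at each time point.
import numpy as np

def global_field_power(eeg):
    """eeg: (n_channels, n_times) average-referenced ERP; returns (n_times,)."""
    return eeg.std(axis=0, ddof=0)

rng = np.random.default_rng(7)
erp = rng.standard_normal((129, 600))  # placeholder: 129 channels x 600 samples
erp -= erp.mean(axis=0)                # re-reference to the common average
gfp = global_field_power(erp)
print(gfp.shape, gfp[:5])
```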
Affiliation(s)
- Nora Turoman
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; MEDGIFT Lab, Institute of Information Systems, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Techno-Pôle 3, 3960 Sierre, Switzerland; Working Memory, Cognition and Development lab, Department of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Ruxandra I Tivadar
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland; Cognitive Computational Neuroscience group, Institute of Computer Science, Faculty of Science, University of Bern, Switzerland
- Chrysa Retsa
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; CIBM Center for Biomedical Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Micah M Murray
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland; CIBM Center for Biomedical Imaging, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Pawel J Matusz
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; MEDGIFT Lab, Institute of Information Systems, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Techno-Pôle 3, 3960 Sierre, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
18
Rezaul Karim AKM, Proulx MJ, de Sousa AA, Likova LT. Neuroplasticity and Crossmodal Connectivity in the Normal, Healthy Brain. Psychol Neurosci 2021; 14:298-334. [PMID: 36937077] [PMCID: PMC10019101] [DOI: 10.1037/pne0000258]
Abstract
Objective: Neuroplasticity enables the brain to establish new crossmodal connections or reorganize old connections, which are essential to perceiving a multisensorial world. The intent of this review is to identify and summarize the current developments in neuroplasticity and crossmodal connectivity, and to deepen understanding of how crossmodal connectivity develops in the normal, healthy brain, highlighting novel perspectives about the principles that guide this connectivity. Methods: To the above end, a narrative review is carried out. The data documented in prior relevant studies in neuroscience, psychology and other related fields, available in a wide range of prominent electronic databases, are critically assessed, synthesized, interpreted with qualitative rather than quantitative elements, and linked together to form new propositions and hypotheses about neuroplasticity and crossmodal connectivity. Results: Three major themes are identified. First, it appears that neuroplasticity operates by following eight fundamental principles, and crossmodal integration operates by following three principles. Second, two different forms of crossmodal connectivity, namely direct crossmodal connectivity and indirect crossmodal connectivity, are suggested to operate in both unisensory and multisensory perception. Third, three principles possibly guide the development of crossmodal connectivity into adulthood: the principle of innate crossmodality, the principle of evolution-driven 'neuromodular' reorganization, and the principle of multimodal experience. These principles are combined to develop a three-factor interaction model of crossmodal connectivity. Conclusions: The hypothesized principles and the proposed model together advance understanding of neuroplasticity, the nature of crossmodal connectivity, and how such connectivity develops in the normal, healthy brain.
19
Keefe JM, Pokta E, Störmer VS. Cross-modal orienting of exogenous attention results in visual-cortical facilitation, not suppression. Sci Rep 2021; 11:10237. [PMID: 33986384] [PMCID: PMC8119727] [DOI: 10.1038/s41598-021-89654-x]
Abstract
Attention may be oriented exogenously (i.e., involuntarily) to the location of salient stimuli, resulting in improved perception. However, it is unknown whether exogenous attention improves perception by facilitating processing of attended information, suppressing processing of unattended information, or both. To test this question, we measured behavioral performance and cue-elicited neural changes in the electroencephalogram as participants (N = 19) performed a task in which a spatially non-predictive auditory cue preceded a visual target. Critically, this cue was either presented at a peripheral target location or from the center of the screen, allowing us to isolate spatially specific attentional activity. We find that both behavior and attention-mediated changes in visual-cortical activity are enhanced at the location of a cue prior to the onset of a target, but that behavior and neural activity at an unattended target location is equivalent to that following a central cue that does not direct attention (i.e., baseline). These results suggest that exogenous attention operates via facilitation of information at an attended location.
Collapse
Affiliation(s)
- Jonathan M Keefe
- Department of Psychology, University of California, San Diego, 92092, USA.
| | - Emilia Pokta
- Department of Psychology, University of California, San Diego, 92092, USA
| | - Viola S Störmer
- Department of Psychology, University of California, San Diego, 92092, USA
- Department of Brain and Psychological Sciences, Dartmouth College, Hanover, USA
| |
Collapse
|
20
|
Vaina LM, Calabro FJ, Samal A, Rana KD, Mamashli F, Khan S, Hämäläinen M, Ahlfors SP, Ahveninen J. Auditory cues facilitate object movement processing in human extrastriate visual cortex during simulated self-motion: A pilot study. Brain Res 2021; 1765:147489. [PMID: 33882297 DOI: 10.1016/j.brainres.2021.147489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 04/12/2021] [Accepted: 04/13/2021] [Indexed: 10/21/2022]
Abstract
Visual segregation of moving objects is a considerable computational challenge when the observer moves through space. Recent psychophysical studies suggest that directionally congruent, moving auditory cues can substantially improve parsing object motion in such settings, but the exact brain mechanisms and visual processing stages that mediate these effects are still incompletely known. Here, we utilized multivariate pattern analyses (MVPA) of MRI-informed magnetoencephalography (MEG) source estimates to examine how crossmodal auditory cues facilitate motion detection during the observer's self-motion. During MEG recordings, participants identified a target object that moved either forward or backward within a visual scene that included nine identically textured objects simulating forward observer translation. Auditory motion cues 1) improved the behavioral accuracy of target localization, 2) significantly modulated the MEG source activity in the areas V2 and human middle temporal complex (hMT+), and 3) increased the accuracy at which the target movement direction could be decoded from hMT+ activity using MVPA. The increase of decoding accuracy by auditory cues in hMT+ was also significant when superior temporal activations in or near auditory cortices were regressed out from the hMT+ source activity to control for source estimation biases caused by point spread. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow in the human extrastriate visual cortex can be facilitated by crossmodal influences from the auditory system.
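The decoding analysis described here follows a standard MVPA recipe: train a classifier on trial-wise response patterns and evaluate it with cross-validation. The sketch below illustrates that recipe with scikit-learn on a random stand-in array; it is not the study's code, and the trial and feature counts are arbitrary assumptions.

```python
# Generic sketch of cross-validated MVPA decoding of target direction
# (forward vs. backward) from trial-wise source amplitudes. The data
# array is a random stand-in, not real MEG source estimates.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 40))   # 120 trials x 40 source points (assumed sizes)
y = rng.integers(0, 2, size=120)     # 0 = backward, 1 = forward

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

With random stand-in data the printed accuracy hovers around chance; above-chance accuracy on real trials is what licenses the claim that direction information is present in the source activity.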
Collapse
Affiliation(s)
- Lucia M Vaina
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School-Department of Neurology, Massachusetts General Hospital and Brigham and Women's Hospital, MA, USA
| | - Finnegan J Calabro
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
| | - Abhisek Samal
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
| | - Kunjan D Rana
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
| | - Fahimeh Mamashli
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Sheraz Khan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Matti Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Seppo P Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
21
|
Kaya U, Kafaligonul H. Audiovisual interactions in speeded discrimination of a visual event. Psychophysiology 2021; 58:e13777. [PMID: 33483971 DOI: 10.1111/psyp.13777] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 01/07/2021] [Accepted: 01/07/2021] [Indexed: 01/10/2023]
Abstract
The integration of information from different senses is central to our perception of the external world. Audiovisual interactions have been particularly well studied in this context, and various illusions have been developed to demonstrate strong influences of these interactions on the final percept. Using audiovisual paradigms, previous studies have shown that even task-irrelevant information provided by a secondary modality can change the detection and discrimination of a primary target. These modulations have been found to depend significantly on the relative timing between auditory and visual stimuli. Although these interactions in time have been commonly reported, we still have a limited understanding of the relationship between the modulations of event-related potentials (ERPs) and final behavioral performance. Here, we aimed to shed light on this important issue by using a speeded discrimination paradigm combined with electroencephalography (EEG). During the experimental sessions, the timing between an auditory click and a visual flash was varied over a wide range of stimulus onset asynchronies and observers were engaged in speeded discrimination of flash location. Behavioral reaction times were significantly changed by click timing. Furthermore, the modulations of evoked activities over medial parietal/parieto-occipital electrodes were associated with this effect. These modulations were within the 126-176 ms time range and, more importantly, they were also correlated with the changes in reaction times. These results provide an important functional link between audiovisual interactions at early stages of sensory processing and reaction times. Together with previous research, they further suggest that early crossmodal interactions play a critical role in perceptual performance.
Collapse
Affiliation(s)
- Utku Kaya
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey.,Informatics Institute, Middle East Technical University, Ankara, Turkey.,Department of Anesthesiology, University of Michigan, Ann Arbor, MI, USA
| | - Hulusi Kafaligonul
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey.,Interdisciplinary Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey
| |
Collapse
|
22
|
Horsfall RP. Narrowing of the Audiovisual Temporal Binding Window Due To Perceptual Training Is Specific to High Visual Intensity Stimuli. Iperception 2021; 12:2041669520978670. [PMID: 33680418 PMCID: PMC7897829 DOI: 10.1177/2041669520978670] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2020] [Accepted: 11/14/2020] [Indexed: 12/04/2022] Open
Abstract
The temporal binding window (TBW), which reflects the range of temporal offsets in which audiovisual stimuli are combined to form a singular percept, can be reduced through training. Our research aimed to investigate whether training-induced reductions in TBW size transfer across stimulus intensities. A total of 32 observers performed simultaneity judgements at two visual intensities with a fixed auditory intensity, before and after receiving audiovisual TBW training at just one of these two intensities. We show that training individuals with a high visual intensity reduces the size of the TBW for bright stimuli, but this improvement did not transfer to dim stimuli. The reduction in TBW can be explained by shifts in decision criteria. Those trained with the dim visual stimuli, however, showed no reduction in TBW. Our main finding is that perceptual improvements following training are specific for high-intensity stimuli, potentially highlighting limitations of proposed TBW training procedures.
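As background, TBW size is often quantified by fitting a scaled Gaussian to the proportion of "simultaneous" responses across stimulus onset asynchronies, with a narrower fitted width indicating a smaller window. The sketch below illustrates that generic approach on made-up data; it is not this paper's exact analysis, which framed the training effect in terms of shifts in decision criteria.

```python
# One common way to quantify a temporal binding window: fit a scaled
# Gaussian to the proportion of "simultaneous" responses as a function of
# audiovisual onset asynchrony. Data points below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])  # ms; audio-lead < 0
p_simult = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.80, 0.50, 0.20, 0.08])

(amp, mu, sigma), _ = curve_fit(gaussian, soas, p_simult, p0=[1.0, 0.0, 150.0])
# A narrower fitted sigma after training would correspond to a smaller TBW.
print(f"centre = {mu:.0f} ms, width (sigma) = {sigma:.0f} ms")
```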
Collapse
Affiliation(s)
- Ryan P. Horsfall
- Division of Neuroscience & Experimental Psychology, University of Manchester, Manchester M13 9PL, United Kingdom.
| |
Collapse
|
23
|
Zhao S, Feng C, Huang X, Wang Y, Feng W. Neural Basis of Semantically Dependent and Independent Cross-Modal Boosts on the Attentional Blink. Cereb Cortex 2020; 31:2291-2304. [DOI: 10.1093/cercor/bhaa362] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Revised: 10/15/2020] [Accepted: 11/02/2020] [Indexed: 01/26/2023] Open
Abstract
The present study recorded event-related potentials (ERPs) in a visual object-recognition task under the attentional blink paradigm to explore the temporal dynamics of the cross-modal boost on the attentional blink and whether this auditory benefit would be modulated by semantic congruency between T2 and the simultaneous sound. Behaviorally, the present study showed that not only a semantically congruent but also a semantically incongruent sound improved T2 discrimination during the attentional blink interval, although the enhancement was larger for the congruent sound. The ERP results revealed that the behavioral improvements induced by both the semantically congruent and incongruent sounds were closely associated with an early cross-modal interaction on the occipital N195 (192–228 ms). In contrast, the lower T2 accuracy for the incongruent than the congruent condition was accompanied by a larger late-occurring centro-parietal N440 (424–448 ms). These findings suggest that the cross-modal boost on the attentional blink is hierarchical: the task-irrelevant but simultaneous sound, irrespective of its semantic relevance, first enables T2 to escape the attentional blink by cross-modally strengthening the early stage of visual object-recognition processing, whereas the semantic conflict of the sound begins to interfere with visual awareness only at a later stage, when the representation of the visual object is extracted.
Collapse
Affiliation(s)
- Song Zhao
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
| | - Chengzhi Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
| | - Xinyin Huang
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
| | - Yijun Wang
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
| | - Wenfeng Feng
- Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
| |
Collapse
|
24
|
Shiohama T, Chew B, Levman J, Takahashi E. Quantitative analyses of high-angular resolution diffusion imaging (HARDI)-derived long association fibers in children with sensorineural hearing loss. Int J Dev Neurosci 2020; 80:717-729. [PMID: 33067827 DOI: 10.1002/jdn.10071] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2020] [Revised: 09/18/2020] [Accepted: 10/12/2020] [Indexed: 11/08/2022] Open
Abstract
Sensorineural hearing loss (SNHL) is the most common developmental sensory disorder, resulting from a loss of function within the inner ear or its connections to the brain. While successful intervention for auditory deprivation with hearing amplification and cochlear implants during a sensitive early developmental period can improve spoken-language outcomes, SNHL patients can suffer several cognitive dysfunctions, including executive function deficits, visual cognitive impairment, and abnormal visual dominance in speech perception, even after successful intervention. To evaluate whether long association fibers are involved in the pathogenesis of these extra-auditory cognitive impairments in SNHL, we quantitatively analyzed high-angular resolution diffusion imaging (HARDI) tractography-derived fibers in participants with SNHL. After excluding cases with congenital disorders, perinatal brain damage, or premature birth, we enrolled 17 participants with SNHL aged under 10 years. Callosal pathways (CP) and six types of cortico-cortical association fibers (arcuate fasciculus [AF], inferior longitudinal fasciculus [ILF], inferior fronto-occipital fasciculus [IFOF], uncinate fasciculus [UF], cingulum fasciculus [CF], and fornix [Fx]) in both hemispheres were identified and visualized. The ILF and IFOF were partly undetected in three profound SNHL participants. Compared to age- and gender-matched neurotypical controls (NC), multiple fiber types in the SNHL group showed decreased volumes, increased lengths, and higher apparent diffusion coefficient (ADC) values without differences in fractional anisotropy (FA) values. The impairment of long association fibers in SNHL may partly be related to the association of cognitive dysfunction with SNHL.
Collapse
Affiliation(s)
- Tadashi Shiohama
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.,Department of Pediatrics, Chiba University Hospital, Chiba, Japan
| | - Brianna Chew
- College of Science, Northeastern University, Boston, MA, USA
| | - Jacob Levman
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.,Department of Mathematics, Statistics and Computer Science, St. Francis Xavier University, Antigonish, NS, Canada
| | - Emi Takahashi
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
25
|
Kimura A. Cross-modal modulation of cell activity by sound in first-order visual thalamic nucleus. J Comp Neurol 2020; 528:1917-1941. [PMID: 31983057 DOI: 10.1002/cne.24865] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2019] [Revised: 12/19/2019] [Accepted: 01/16/2020] [Indexed: 12/16/2022]
Abstract
Cross-modal auditory influence on cell activity in the primary visual cortex emerging at short latencies raises the possibility that the first-order visual thalamic nucleus, which is considered dedicated to unimodal visual processing, could contribute to cross-modal sensory processing, as has been indicated in the auditory and somatosensory systems. To test this hypothesis, the effects of sound stimulation on visual cell activity in the dorsal lateral geniculate nucleus were examined in anesthetized rats using juxta-cellular recording and labeling techniques. Visual responses evoked by light (white LED) were modulated by sound (noise burst) given simultaneously with or 50-400 ms after the light, even though sound stimuli alone did not evoke cell activity. Alterations of visual responses were observed in 71% of cells (57/80) with regard to response magnitude, latency, and/or burst spiking. Suppression predominated in response magnitude modulation, but de novo responses were also induced by combined stimulation. Sound affected not only onset responses but also late responses; late responses were modulated by sound given before or after onset responses. Further, visual responses evoked by the second light stimulation of a double flash with a 150-700 ms interval were also modulated by sound given together with the first light stimulation. In morphological analysis of labeled cells, projection cells comparable to X-, Y-, and W-like cells, as well as interneurons, were all susceptible to auditory influence. These findings suggest that the first-order visual thalamic nucleus incorporates auditory influence into parallel and complex thalamic visual processing for cross-modal modulation of visual attention and perception.
Collapse
Affiliation(s)
- Akihisa Kimura
- Department of Physiology, Wakayama Medical University, Wakayama, Japan
| |
Collapse
|
26
|
Selective attention to sound features mediates cross-modal activation of visual cortices. Neuropsychologia 2020; 144:107498. [PMID: 32442445 DOI: 10.1016/j.neuropsychologia.2020.107498] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2019] [Revised: 03/14/2020] [Accepted: 05/12/2020] [Indexed: 11/20/2022]
Abstract
Contemporary schemas of brain organization now include multisensory processes both in low-level cortices as well as at early stages of stimulus processing. Evidence has also accumulated showing that unisensory stimulus processing can result in cross-modal effects. For example, task-irrelevant and lateralised sounds can activate visual cortices; a phenomenon referred to as the auditory-evoked contralateral occipital positivity (ACOP). Some claim this is an example of automatic attentional capture in visual cortices. Other results, however, indicate that context may play a determinant role. Here, we investigated whether selective attention to spatial features of sounds is a determining factor in eliciting the ACOP. We recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to four possible stimulus attributes: location, pitch, speaker identity or syllable. Sound acoustics were held constant, and their location was always equiprobable (50% left, 50% right). The only manipulation was to which sound dimension participants attended. We analysed the AEP data from healthy participants within an electrical neuroimaging framework. The presence of sound-elicited activations of visual cortices depended on the to-be-discriminated, goal-based dimension. The ACOP was elicited only when participants were required to discriminate sound location, but not when they attended to any of the non-spatial features. These results provide a further indication that the ACOP is not automatic. Moreover, our findings showcase the interplay between task-relevance and spatial (un)predictability in determining the presence of the cross-modal activation of visual cortices.
Collapse
|
27
|
Stereotactic electroencephalography in humans reveals multisensory signal in early visual and auditory cortices. Cortex 2020; 126:253-264. [PMID: 32092494 DOI: 10.1016/j.cortex.2019.12.032] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2019] [Revised: 08/20/2019] [Accepted: 12/30/2019] [Indexed: 02/02/2023]
Abstract
Unequivocally demonstrating the presence of multisensory signals at the earliest stages of cortical processing remains challenging in humans. In our study, we relied on the unique spatio-temporal resolution provided by intracranial stereotactic electroencephalographic (SEEG) recordings in patients with drug-resistant epilepsy to characterize the signal extracted from early visual (calcarine and pericalcarine) and auditory (Heschl's gyrus and planum temporale) regions during a simple audio-visual oddball task. We provide evidence that both cross-modal responses (visual responses in auditory cortex, or the reverse) and multisensory processing (alteration of the unimodal responses during bimodal stimulation) can be observed in intracranial event-related potentials (iERPs) and in power modulations of oscillatory activity at different temporal scales within the first 150 msec after stimulus onset. The temporal profiles of the iERPs are compatible with the hypothesis that MSI occurs by means of direct pathways linking early visual and auditory regions. Our data indicate, moreover, that MSI mainly relies on modulations of the low-frequency bands (foremost the theta band in the auditory cortex and the alpha band in the visual cortex), suggesting the involvement of feedback pathways between the two sensory regions. Remarkably, we also observed high-gamma power modulations by sounds in the early visual cortex, suggesting the presence of neuronal populations involved in auditory processing in the calcarine and pericalcarine region in humans.
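Band-limited power modulations of the kind reported here are typically obtained by time-frequency decomposition of the epoched recordings. Below is a minimal, generic sketch using MNE-Python's Morlet transform on stand-in data; the sampling rate, trial counts, and frequency set are illustrative assumptions, not the study's parameters.

```python
# Generic sketch of an oscillatory-power analysis: Morlet wavelet
# decomposition of epoched intracranial data. Arrays are random stand-ins,
# not the patients' SEEG recordings.
import numpy as np
from mne.time_frequency import tfr_array_morlet

sfreq = 1000.0
epochs = np.random.randn(40, 1, 1500)   # trials x channels x samples (assumed)
freqs = np.array([4., 6., 8., 10., 12., 40., 80., 120.])  # theta/alpha/gamma

power = tfr_array_morlet(epochs, sfreq=sfreq, freqs=freqs,
                         n_cycles=freqs / 2., output="power")

# Trial-average percent change from a pre-stimulus baseline (first 200 ms)
avg = power.mean(axis=0)                                 # channels x freqs x samples
baseline = avg[..., :200].mean(axis=-1, keepdims=True)
pct_change = 100 * (avg - baseline) / baseline
print(pct_change.shape)   # (channels, freqs, samples)
```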
Collapse
|
28
|
Synchronization of Sensory Gamma Oscillations Promotes Multisensory Communication. eNeuro 2019; 6:ENEURO.0101-19.2019. [PMID: 31601635 PMCID: PMC6873160 DOI: 10.1523/eneuro.0101-19.2019] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2019] [Revised: 09/03/2019] [Accepted: 09/16/2019] [Indexed: 01/15/2023] Open
Abstract
Rhythmic neuronal activity in the gamma range is a signature of cortical processing and its synchronization across distant sites has been proposed as a fundamental mechanism of network interactions. While this has been shown within sensory streams, we tested whether crosstalk between the senses relies on similar mechanisms. Direct sensory interactions in humans (male and female) were studied with a visual-tactile amplitude matching paradigm. In this task, congruent stimuli are associated with behavioral benefits, which are proposed to be mediated by increased binding between sensory cortices through coherent gamma oscillations. We tested this hypothesis by applying 4-in-1 multi-electrode transcranial alternating current stimulation (tACS) with 40 Hz over visual and somatosensory cortices. In-phase stimulation (0°) was expected to strengthen binding and thereby enhance the congruence effect, while anti-phase (180°) stimulation was expected to have opposite effects. Gamma tACS was controlled by alpha (10 Hz) and sham stimulation, as well as by applying tACS unilaterally while visual-tactile stimuli were presented lateralized. Contrary to our expectations, gamma tACS over the relevant hemisphere delayed responses to congruent trials. Additionally, reanalysis of EEG data revealed decoupling of sensory gamma oscillations during congruent trials. We propose that gamma tACS prevented sensory decoupling and thereby limited the congruence effect. Together, our results favor the perspective that processing multisensory congruence involves corticocortical communication rather than feature binding. Furthermore, we found control stimulation over the irrelevant hemisphere to speed responses under alpha stimulation and to delay responses under gamma stimulation, consistent with the idea that contralateral alpha/gamma dynamics regulate cortical excitability.
Collapse
|
29
|
Trudeau-Fisette P, Ito T, Ménard L. Auditory and Somatosensory Interaction in Speech Perception in Children and Adults. Front Hum Neurosci 2019; 13:344. [PMID: 31636554 PMCID: PMC6788346 DOI: 10.3389/fnhum.2019.00344] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2019] [Accepted: 09/18/2019] [Indexed: 11/28/2022] Open
Abstract
Multisensory integration (MSI) allows us to link sensory cues from multiple sources and plays a crucial role in speech development. However, it is not clear whether humans have an innate ability or whether repeated sensory input while the brain is maturing leads to efficient integration of sensory information in speech. We investigated the integration of auditory and somatosensory information in speech processing in a bimodal perceptual task in 15 young adults (age 19–30) and 14 children (age 5–6). The participants were asked to identify if the perceived target was the sound /e/ or /ø/. Half of the stimuli were presented under a unimodal condition with only auditory input. The other stimuli were presented under a bimodal condition with both auditory input and somatosensory input consisting of facial skin stretches provided by a robotic device, which mimics the articulation of the vowel /e/. The results indicate that the effect of somatosensory information on sound categorization was larger in adults than in children. This suggests that integration of auditory and somatosensory information evolves throughout the course of development.
Collapse
Affiliation(s)
- Paméla Trudeau-Fisette
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, QC, Canada.,Centre for Research on Brain, Language and Music, Montreal, QC, Canada
| | - Takayuki Ito
- GIPSA-Lab, CNRS, Grenoble INP, Université Grenoble Alpes, Grenoble, France.,Haskins Laboratories, Yale University, New Haven, CT, United States
| | - Lucie Ménard
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, QC, Canada.,Centre for Research on Brain, Language and Music, Montreal, QC, Canada
| |
Collapse
|
30
|
Macharadze T, Budinger E, Brosch M, Scheich H, Ohl FW, Henschke JU. Early Sensory Loss Alters the Dendritic Branching and Spine Density of Supragranular Pyramidal Neurons in Rodent Primary Sensory Cortices. Front Neural Circuits 2019; 13:61. [PMID: 31611778 PMCID: PMC6773815 DOI: 10.3389/fncir.2019.00061] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Accepted: 09/03/2019] [Indexed: 01/26/2023] Open
Abstract
Multisensory integration in primary auditory (A1), visual (V1), and somatosensory cortex (S1) is substantially mediated by their direct interconnections and by thalamic inputs across the sensory modalities. We have previously shown in rodents (Mongolian gerbils) that during postnatal development, the anatomical and functional strengths of these crossmodal and also of sensory matched connections are determined by early auditory, somatosensory, and visual experience. Because supragranular layer III pyramidal neurons are major targets of corticocortical and thalamocortical connections, we investigated in this follow-up study how the loss of early sensory experience changes their dendritic morphology. Gerbils were sensory deprived early in development by either bilateral sciatic nerve transection at postnatal day (P) 5, ototoxic inner hair cell damage at P10, or eye enucleation at P10. Sholl and branch order analyses of Golgi-stained layer III pyramidal neurons at P28, which demarcates the end of the sensory critical period in this species, revealed that visual and somatosensory deprivation leads to a general increase of apical and basal dendritic branching in A1, V1, and S1. In contrast, dendritic branching, particularly of apical dendrites, decreased in all three areas following auditory deprivation. Generally, the number of spines, and consequently spine density, along the apical and basal dendrites decreased in both sensory deprived and non-deprived cortical areas. Therefore, we conclude that the loss of early sensory experience induces a refinement of corticocortical crossmodal and other cortical and thalamic connections by pruning of dendritic spines at the end of the critical period. Based on present and previous own results and on findings from the literature, we propose a scenario for multisensory development following early sensory loss.
Collapse
Affiliation(s)
- Tamar Macharadze
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany.,Clinic for Anesthesiology and Intensive Care Medicine, Otto von Guericke University Hospital, Magdeburg, Germany
| | - Eike Budinger
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany.,Center for Behavioral Brain Sciences, Magdeburg, Germany
| | - Michael Brosch
- Center for Behavioral Brain Sciences, Magdeburg, Germany.,Special Lab Primate Neurobiology, Leibniz Institute for Neurobiology, Magdeburg, Germany
| | - Henning Scheich
- Center for Behavioral Brain Sciences, Magdeburg, Germany.,Emeritus Group Lifelong Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany
| | - Frank W Ohl
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany.,Center for Behavioral Brain Sciences, Magdeburg, Germany.,Institute for Biology, Otto von Guericke University, Magdeburg, Germany
| | - Julia U Henschke
- Institute of Cognitive Neurology and Dementia Research (IKND), Otto von Guericke University, Magdeburg, Germany
| |
Collapse
|
31
|
Qiao Y, Li X, Shen H, Zhang X, Sun Y, Hao W, Guo B, Ni D, Gao Z, Guo H, Shang Y. Downward cross-modal plasticity in single-sided deafness. Neuroimage 2019; 197:608-617. [DOI: 10.1016/j.neuroimage.2019.05.031] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2019] [Revised: 03/21/2019] [Accepted: 05/10/2019] [Indexed: 10/26/2022] Open
|
32
|
Shiohama T, McDavid J, Levman J, Takahashi E. The left lateral occipital cortex exhibits decreased thickness in children with sensorineural hearing loss. Int J Dev Neurosci 2019; 76:34-40. [PMID: 31173823 DOI: 10.1016/j.ijdevneu.2019.05.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2019] [Revised: 05/10/2019] [Accepted: 05/30/2019] [Indexed: 10/26/2022] Open
Abstract
Patients with sensorineural hearing loss (SNHL) tend to show language delay, executive functioning deficits, and visual cognitive impairment, even after intervention with hearing amplification and cochlear implants, suggesting altered brain structures and functions in SNHL patients. In this study, we investigated structural brain MRI in 30 children with SNHL (18 mild to moderate [M-M] SNHL and 12 moderately severe to profound [M-P] SNHL) in comparison with gender- and age-matched normal controls (NC). Region-based analyses did not show statistically significant differences in volumes of the cerebrum, basal ganglia, cerebellum, and the ventricles between SNHL and NC. On surface-based analyses, the global and lobar cortical surface area, thickness, and volumes were not statistically significantly different between SNHL and NC participants. Regional surface areas, cortical thicknesses, and cortical volumes were statistically significantly smaller in M-P SNHL compared to NC in the left middle occipital cortex and left inferior occipital cortex after a correction for multiple comparisons using random field theory (p < 0.02). These regions were identified as areas known to be related to high-level visual cognition, including the human middle temporal area, lateral occipital area, occipital face area, and V8. The observed regional decrease in thickness in M-P SNHL may be associated with dysfunctions of visual cognition in SNHL detectable in a clinical setting.
Collapse
Affiliation(s)
- Tadashi Shiohama
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, 300 Longwood Avenue, Boston, MA, 02115, USA.,Department of Pediatrics, Chiba University Hospital, Inohana 1-8-1, Chiba-shi, Chiba, 2608670, Japan
| | - Jeremy McDavid
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, 300 Longwood Avenue, Boston, MA, 02115, USA
| | - Jacob Levman
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, 300 Longwood Avenue, Boston, MA, 02115, USA.,Department of Mathematics, Statistics and Computer Science, St. Francis Xavier University, 2323 Notre Dame Ave, Antigonish, Nova Scotia, B2G 2W5, Canada
| | - Emi Takahashi
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, 300 Longwood Avenue, Boston, MA, 02115, USA
| |
Collapse
|
33
|
Lightness/pitch and elevation/pitch crossmodal correspondences are low-level sensory effects. Atten Percept Psychophys 2019; 81:1609-1623. [PMID: 30697648 DOI: 10.3758/s13414-019-01668-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We tested the sensory versus decisional origins of two established audiovisual crossmodal correspondences (CMCs; lightness/pitch and elevation/pitch), applying a signal discrimination paradigm to low-level stimulus features and controlling for attentional cueing. An audiovisual stimulus randomly varied along two visual dimensions (lightness: black/white; elevation: high/low) and one auditory dimension (pitch: high/low), and participants discriminated either only lightness, only elevation, or both lightness and elevation. The discrimination task and the stimulus duration varied between subjects. To investigate the influence of crossmodal congruency, we considered the effect of each CMC (lightness/pitch and elevation/pitch) on the sensitivity and criterion of each discrimination as a function of stimulus duration. There were three main findings. First, discrimination sensitivity was significantly higher for visual targets paired congruently (compared with incongruently) with tones while criterion was unaffected. Second, the sensitivity increase occurred for all stimulus durations, ruling out attentional cueing effects. Third, the sensitivity increase was feature specific such that only the CMC that related to the feature being discriminated influenced sensitivity (i.e. lightness congruency only influenced lightness discrimination and elevation congruency only influenced elevation discrimination in the single and dual task conditions). We suggest that these congruency effects reflect low-level sensory processes.
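Sensitivity and criterion here are the standard signal-detection quantities. As a generic illustration (not the authors' code), they can be computed from hit and false-alarm counts as in the sketch below; the trial counts are made up.

```python
# Standard signal-detection measures of the kind analyzed above: a minimal,
# generic sketch. Hit/false-alarm counts are illustrative, not study data.
from scipy.stats import norm

def sdt_measures(hits, misses, fas, crs):
    """Return sensitivity d' and criterion c from trial counts.
    A log-linear correction avoids infinite z-scores at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    return d_prime, criterion

# Example: congruent vs. incongruent tone pairings (illustrative numbers)
print(sdt_measures(hits=78, misses=22, fas=30, crs=70))   # congruent
print(sdt_measures(hits=70, misses=30, fas=30, crs=70))   # incongruent
```

Under this framework, the paper's finding corresponds to a higher d' for congruent pairings with no systematic change in c.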
Collapse
|
34
|
Hanada GM, Ahveninen J, Calabro FJ, Yengo-Kahn A, Vaina LM. Cross-Modal Cue Effects in Motion Processing. Multisens Res 2018; 32:45-65. [PMID: 30613468 PMCID: PMC6317375 DOI: 10.1163/22134808-20181313] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
The everyday environment brings to our sensory systems competing inputs from different modalities. The ability to filter these multisensory inputs in order to identify and efficiently utilize useful spatial cues is necessary to detect and process the relevant information. In the present study, we investigate how feature-based attention affects the detection of motion across sensory modalities. We were interested in determining how subjects use intramodal, cross-modal auditory, and combined audiovisual motion cues to attend to specific visual motion signals. The results showed that in most cases, both the visual and the auditory cues enhance feature-based orienting to a transparent visual motion pattern presented among distractor motion patterns. Whereas previous studies have shown cross-modal effects of spatial attention, our results demonstrate a spread of cross-modal feature-based attention cues, which have been matched for the detection threshold of the visual target. These effects were very robust in comparisons of the effects of valid vs. invalid cues, as well as in comparisons between cued and uncued valid trials. The effect of intramodal visual, cross-modal auditory, and bimodal cues also increased as a function of motion-cue salience. Our results suggest that orienting to visual motion patterns among distractors can be facilitated not only by intramodal priors, but also by feature-based cross-modal information from the auditory system.
Collapse
Affiliation(s)
- G. M. Hanada
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
| | - J. Ahveninen
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
| | - F. J. Calabro
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
| | - A. Yengo-Kahn
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
| | - L. M. Vaina
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Neurology, Massachusetts General Hospital and Brigham and Women’s Hospital, MA, USA
| |
Collapse
|
35
|
Effect of acceleration of auditory inputs on the primary somatosensory cortex in humans. Sci Rep 2018; 8:12883. [PMID: 30150686 PMCID: PMC6110726 DOI: 10.1038/s41598-018-31319-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Accepted: 08/17/2018] [Indexed: 11/09/2022] Open
Abstract
Cross-modal interaction occurs during the early stages of processing in the sensory cortex; however, its effect on neuronal activity speed remains unclear. We used magnetoencephalography to investigate whether auditory stimulation influences the initial cortical activity in the primary somatosensory cortex. A 25-ms pure tone was randomly presented to the left or right side of healthy volunteers at 1000 ms while electrical pulses were applied to the left or right median nerve at 20 Hz for 1500 ms; pulse trains were used because we did not observe any cross-modal effect elicited by a single pulse. The latency of N20 m originating from Brodmann's area 3b was measured for each pulse. The auditory stimulation significantly shortened the N20 m latency at 1050 and 1100 ms. This reduction in N20 m latency was identical for the ipsilateral and contralateral sounds at both latency points. Therefore, somatosensory-auditory interaction, such as input to the area 3b from the thalamus, occurred during the early stages of synaptic transmission. Auditory information that converged on the somatosensory system was considered to have arisen from the early stages of the feedforward pathway. Acceleration of information processing through the cross-modal interaction seemed to be partly due to faster processing in the sensory cortex.
Collapse
|
36
|
Henschke JU, Ohl FW, Budinger E. Crossmodal Connections of Primary Sensory Cortices Largely Vanish During Normal Aging. Front Aging Neurosci 2018; 10:52. [PMID: 29551970 PMCID: PMC5840148 DOI: 10.3389/fnagi.2018.00052] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2017] [Accepted: 02/15/2018] [Indexed: 11/22/2022] Open
Abstract
During aging, human response times (RTs) to unisensory and crossmodal stimuli decrease. However, the elderly benefit more from crossmodal stimulus representations than younger people. The underlying short-latency multisensory integration process is mediated by direct crossmodal connections at the level of primary sensory cortices. We investigate the age-related changes of these connections using a rodent model (Mongolian gerbil), retrograde tracer injections into the primary auditory (A1), somatosensory (S1), and visual cortex (V1), and immunohistochemistry for markers of apoptosis (Caspase-3), axonal plasticity (Growth associated protein 43, GAP 43), and a calcium-binding protein (Parvalbumin, PV). In adult animals, primary sensory cortices receive a substantial number of direct thalamic inputs from nuclei of their matched, but also from nuclei of non-matched sensory modalities. There are also direct intracortical connections among primary sensory cortices and connections with secondary sensory cortices of other modalities. In very old animals, the crossmodal connections strongly decrease in number or vanish entirely. This is likely due to a retraction of the projection neuron axonal branches rather than ongoing programmed cell death. The loss of crossmodal connections is also accompanied by changes in anatomical correlates of inhibition and excitation in the sensory thalamus and cortex. Together, the loss and restructuring of crossmodal connections during aging suggest a shift of multisensory processing from primary cortices towards other sensory brain areas in elderly individuals.
Collapse
Affiliation(s)
- Julia U Henschke
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany.,Department Genetics, Leibniz Institute for Neurobiology, Magdeburg, Germany.,German Center for Neurodegenerative Diseases within the Helmholtz Association, Magdeburg, Germany.,Institute of Cognitive Neurology and Dementia Research (IKND), Otto-von-Guericke-University Magdeburg, Magdeburg, Germany.,Center for Behavioral Brain Sciences, Magdeburg, Germany
| | - Frank W Ohl
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany.,Center for Behavioral Brain Sciences, Magdeburg, Germany.,Institute of Biology, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
| | - Eike Budinger
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany.,Center for Behavioral Brain Sciences, Magdeburg, Germany
| |
Collapse
|
37
|
Andersen LM. Group Analysis in MNE-Python of Evoked Responses from a Tactile Stimulation Paradigm: A Pipeline for Reproducibility at Every Step of Processing, Going from Individual Sensor Space Representations to an across-Group Source Space Representation. Front Neurosci 2018; 12:6. [PMID: 29403349 PMCID: PMC5786561 DOI: 10.3389/fnins.2018.00006] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2017] [Accepted: 01/04/2018] [Indexed: 11/30/2022] Open
Abstract
An important aim of an analysis pipeline for magnetoencephalographic data is that it allows the researcher to spend maximal effort on making the statistical comparisons that will answer their questions, while spending minimal effort on the intricacies and machinery of the pipeline. I here present a set of functions and scripts that allow for setting up a clear, reproducible structure for separating raw and processed data into folders and files such that minimal effort can be spent on: (1) double-checking that the right input goes into the right functions; (2) making sure that output and intermediate steps can be accessed meaningfully; (3) applying operations efficiently across groups of subjects; (4) re-processing data if changes to any intermediate step are desirable. Applying the scripts requires only general knowledge about the Python language. The analyzed data are neural responses to tactile stimulations of the right index finger in a group of 20 healthy participants acquired from an Elekta Neuromag System. Two analyses are presented: going from individual sensor space representations to, respectively, an across-group sensor space representation and an across-group source space representation. The processing steps covered for the first analysis are filtering the raw data, finding events of interest in the data, epoching data, finding and removing independent components related to eye blinks and heart beats, calculating participants' individual evoked responses by averaging over epoched data and calculating a grand average sensor space representation over participants. The second analysis starts from the participants' individual evoked responses and covers: estimating noise covariance, creating a forward model, creating an inverse operator, estimating distributed source activity on the cortical surface using a minimum norm procedure, morphing those estimates onto a common cortical template and calculating the patterns of activity that are statistically different from baseline. To estimate source activity, processing of the anatomy of subjects based on magnetic resonance imaging is necessary. The necessary steps are covered here: importing magnetic resonance images, segmenting the brain, estimating boundaries between different tissue layers, making fine-resolution scalp surfaces for facilitating co-registration, creating source spaces and creating volume conductors for each subject.
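Since the pipeline is built on MNE-Python, a compressed sketch of the sensor-space steps it describes may help orient readers. The file name, stimulus channel, event ID, and filter band below are illustrative assumptions rather than the paper's actual settings, and the ICA step assumes EOG/ECG channels exist in the recording; the paper's own scripts document the real pipeline.

```python
# Compressed sketch of the sensor-space steps described in the abstract,
# using MNE-Python. File name, stim channel, event ID, and filter band are
# illustrative assumptions, not the paper's parameters.
import mne

raw = mne.io.read_raw_fif("sub01_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=1.0, h_freq=40.0)                       # filter the raw data

# Find events of interest and epoch the data around them
events = mne.find_events(raw, stim_channel="STI101")
epochs = mne.Epochs(raw, events, event_id={"tactile": 1},
                    tmin=-0.2, tmax=0.5, baseline=(None, 0), preload=True)

# Find and remove independent components related to eye blinks and heartbeats
ica = mne.preprocessing.ICA(n_components=0.95, random_state=42)
ica.fit(epochs)
eog_inds, _ = ica.find_bads_eog(epochs)
ecg_inds, _ = ica.find_bads_ecg(epochs)
ica.exclude = eog_inds + ecg_inds
ica.apply(epochs)

# Individual evoked response; an across-group sensor-space representation
# would then be mne.grand_average([evoked_sub01, evoked_sub02, ...])
evoked = epochs.average()
```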
Collapse
Affiliation(s)
- Lau M Andersen
- NatMEG, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
| |
Collapse
|
38
|
Starke J, Ball F, Heinze HJ, Noesselt T. The spatio-temporal profile of multisensory integration. Eur J Neurosci 2017; 51:1210-1223. [PMID: 29057531 DOI: 10.1111/ejn.13753] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2017] [Revised: 10/13/2017] [Accepted: 10/16/2017] [Indexed: 12/29/2022]
Abstract
Task-irrelevant visual stimuli can enhance auditory perception. However, while there is some neurophysiological evidence for mechanisms that underlie the phenomenon, the neural basis of visually induced effects on auditory perception remains unknown. Combining fMRI and EEG with psychophysical measurements in two independent studies, we identified the neural underpinnings and temporal dynamics of visually induced auditory enhancement. Lower- and higher-intensity sounds were paired with a non-informative visual stimulus, while participants performed an auditory detection task. Behaviourally, visual co-stimulation enhanced auditory sensitivity. Using fMRI, enhanced BOLD signals were observed in primary auditory cortex for low-intensity audiovisual stimuli, and these scaled with subject-specific enhancement in perceptual sensitivity. Concordantly, a modulation of event-related potentials could already be observed over frontal electrodes at an early latency (30-80 ms), which again scaled with subject-specific behavioural benefits. Later modulations starting around 280 ms, that is, in the time range of the P3, did not fit this pattern of brain-behaviour correspondence. Hence, the latency of the corresponding fMRI-EEG brain-behaviour modulation points to an early interplay of visual and auditory signals in low-level auditory cortex, potentially mediated by crosstalk at the level of the thalamus. However, fMRI signals in primary auditory cortex, auditory thalamus and the P50 for higher-intensity auditory stimuli were also elevated by visual co-stimulation (in the absence of any behavioural effect), suggesting a general, intensity-independent integration mechanism. We propose that this automatic interaction occurs at the level of the thalamus and might signify a first step of audiovisual interplay necessary for visually induced perceptual enhancement of auditory perception.
Collapse
Affiliation(s)
- Johanna Starke
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany.,Department of Neurology, Faculty of Medicine, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
| | - Felix Ball
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany.,Department of Neurology, Faculty of Medicine, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany.,Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
| | - Hans-Jochen Heinze
- Department of Neurology, Faculty of Medicine, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany.,Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
| | - Toemme Noesselt
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany.,Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
| |
Collapse
|
39
|
Semantic congruent audiovisual integration during the encoding stage of working memory: an ERP and sLORETA study. Sci Rep 2017; 7:5112. [PMID: 28698594 PMCID: PMC5505990 DOI: 10.1038/s41598-017-05471-1] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2016] [Accepted: 05/31/2017] [Indexed: 11/09/2022] Open
Abstract
Although multisensory integration is an inherent component of functional brain organization, multisensory integration during working memory (WM) has attracted little attention. The present study investigated the neural properties underlying the multisensory integration of WM by comparing semantically related bimodal stimulus presentations with unimodal stimulus presentations and analysing the results using the standardized low-resolution brain electromagnetic tomography (sLORETA) source location approach. The results showed that the memory retrieval reaction times during congruent audiovisual conditions were faster than those during unisensory conditions. Moreover, our findings indicated that the event-related potential (ERP) for simultaneous audiovisual stimuli differed from the ERP for the sum of unisensory constituents during the encoding stage and occurred within a 236-530 ms timeframe over the frontal and parietal-occipital electrodes. The sLORETA images revealed a distributed network of brain areas that participate in the multisensory integration of WM. These results suggested that information inputs from different WM subsystems yielded nonlinear multisensory interactions and became integrated during the encoding stage. The multicomponent model of WM indicates that the central executive could play a critical role in the integration of information from different slave systems.
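The comparison described here, between the ERP to simultaneous audiovisual stimulation and the sum of the unisensory ERPs, is the classic additive criterion for multisensory interaction. The sketch below illustrates the arithmetic on stand-in arrays; the sampling rate and ERP waveforms are placeholders, with only the 236-530 ms window taken from the abstract.

```python
# Additive criterion for multisensory interaction: a nonzero AV - (A + V)
# difference indicates a nonlinear interaction. Arrays are random stand-ins
# for subject-average ERPs, not real data.
import numpy as np

fs = 500                                   # assumed sampling rate, Hz
t = np.arange(-0.1, 0.6, 1 / fs)           # epoch time axis, s
rng = np.random.default_rng(1)

erp_av = rng.standard_normal(t.size)       # audiovisual ERP (stand-in)
erp_a = rng.standard_normal(t.size)        # auditory-only ERP (stand-in)
erp_v = rng.standard_normal(t.size)        # visual-only ERP (stand-in)

interaction = erp_av - (erp_a + erp_v)     # superadditive if > 0, subadditive if < 0
window = (t >= 0.236) & (t <= 0.530)       # encoding-stage window from the abstract
print(f"mean interaction in window: {interaction[window].mean():.3f}")
```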
Collapse
|
40
|
Petro LS, Paton AT, Muckli L. Contextual modulation of primary visual cortex by auditory signals. Philos Trans R Soc Lond B Biol Sci 2017; 372:rstb.2016.0104. [PMID: 28044015 PMCID: PMC5206272 DOI: 10.1098/rstb.2016.0104] [Citation(s) in RCA: 50] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/22/2016] [Indexed: 12/04/2022] Open
Abstract
Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’.
Collapse
Affiliation(s)
- L S Petro
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
| | - A T Paton
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
| | - L Muckli
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
| |
Collapse
|
41
|
Gordon I, Jack A, Pretzsch CM, Vander Wyk B, Leckman JF, Feldman R, Pelphrey KA. Intranasal Oxytocin Enhances Connectivity in the Neural Circuitry Supporting Social Motivation and Social Perception in Children with Autism. Sci Rep 2016; 6:35054. [PMID: 27845765 PMCID: PMC5109935 DOI: 10.1038/srep35054] [Citation(s) in RCA: 73] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2016] [Accepted: 09/23/2016] [Indexed: 02/07/2023] Open
Abstract
Oxytocin (OT) has become a focus in investigations of autism spectrum disorder (ASD). The social deficits that characterize ASD may relate to reduced connectivity between brain sites on the mesolimbic reward pathway (nucleus accumbens; amygdala) that receive OT projections and contribute to social motivation, and cortical sites involved in social perception. Using functional magnetic resonance imaging and a randomized, double blind, placebo-controlled crossover design, we show that OT administration in ASD increases activity in brain regions important for perceiving social-emotional information. Further, OT enhances connectivity between nodes of the brain’s reward and socioemotional processing systems, and does so preferentially for social (versus nonsocial) stimuli. This effect is observed both while viewing coherent versus scrambled biological motion, and while listening to happy versus angry voices. Our findings suggest a mechanism by which intranasal OT may bolster social motivation—one that could, in future, be harnessed to augment behavioral treatments for ASD.
Collapse
Affiliation(s)
- Ilanit Gordon
- Child Study Center, Yale University, New Haven, CT 06520, USA.,Department of Psychology, Bar-Ilan University, Ramat-Gan 5290002, Israel
| | - Allison Jack
- Autism and Neurodevelopmental Disorders Institute, George Washington University, Ashburn, VA 20147, USA
| | | | | | - James F Leckman
- Child Study Center, Yale University, New Haven, CT 06520, USA
| | - Ruth Feldman
- Child Study Center, Yale University, New Haven, CT 06520, USA.,Department of Psychology, Bar-Ilan University, Ramat-Gan 5290002, Israel
| | - Kevin A Pelphrey
- Autism and Neurodevelopmental Disorders Institute, George Washington University, Ashburn, VA 20147, USA
| |
Collapse
|
42
|
ten Oever S, Romei V, van Atteveldt N, Soto-Faraco S, Murray MM, Matusz PJ. The COGs (context, object, and goals) in multisensory processing. Exp Brain Res 2016; 234:1307-23. [PMID: 26931340 DOI: 10.1007/s00221-016-4590-z] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2015] [Accepted: 01/30/2016] [Indexed: 12/20/2022]
Abstract
Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and "top-down" control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer's goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.
Collapse
Affiliation(s)
- Sanne ten Oever
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Vincenzo Romei
- Department of Psychology, Centre for Brain Science, University of Essex, Colchester, UK
| | - Nienke van Atteveldt
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.,Department of Educational Neuroscience, Faculty of Psychology and Education and Institute LEARN!, VU University Amsterdam, Amsterdam, The Netherlands
| | - Salvador Soto-Faraco
- Multisensory Research Group, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain.,Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
| | - Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University Hospital Center and University of Lausanne, BH7.081, rue du Bugnon 46, 1011, Lausanne, Switzerland.,EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland.,Department of Ophthalmology, Jules-Gonin Eye Hospital, University of Lausanne, Lausanne, Switzerland
| | - Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University Hospital Center and University of Lausanne, BH7.081, rue du Bugnon 46, 1011, Lausanne, Switzerland. .,Attention, Brain, and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, Oxford, UK.
| |
Collapse
|
43
|
Matusz PJ, Retsa C, Murray MM. The context-contingent nature of cross-modal activations of the visual cortex. Neuroimage 2015; 125:996-1004. [PMID: 26564531 DOI: 10.1016/j.neuroimage.2015.11.016] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2015] [Revised: 11/05/2015] [Accepted: 11/07/2015] [Indexed: 11/30/2022] Open
Abstract
Real-world environments are nearly always multisensory in nature. Processing in such situations confers perceptual advantages, but its automaticity remains poorly understood. Automaticity has been invoked to explain the activation of visual cortices by laterally-presented sounds. This has been observed even when the sounds were task-irrelevant and spatially uninformative about subsequent targets. An auditory-evoked contralateral occipital positivity (ACOP) at ~250 ms post-sound onset has been postulated as the event-related potential (ERP) correlate of this cross-modal effect. However, the spatial dimension of the stimuli was nevertheless relevant in virtually all prior studies where the ACOP was observed. By manipulating the implicit predictability of the location of lateralised sounds in a passive auditory paradigm, we tested the automaticity of cross-modal activations of visual cortices. ERP data recorded from 128 channels in healthy participants were analysed within an electrical neuroimaging framework. The timing, topography, and localisation resembled previous characterisations of the ACOP. However, the cross-modal activations of visual cortices by sounds were critically dependent on whether the sound location was (un)predictable. Our results are the first direct evidence that this particular cross-modal process is not (fully) automatic; instead, it is context-contingent. More generally, the present findings provide novel insights into the importance of context-related factors in controlling information processing across the senses, and call for a revision of current models of automaticity in cognitive sciences.
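As an illustration of how an ACOP-like measure can be quantified, here is a minimal Python sketch that averages a contralateral-minus-ipsilateral occipital ERP difference in a 200-300 ms window. The sampling rate, window, and simulated data are assumptions for illustration only, not the study's analysis.

```python
# Minimal, hypothetical sketch of an ACOP-style measure: the trial-averaged
# contralateral-minus-ipsilateral occipital ERP difference in a 200-300 ms
# window after sound onset. Data and parameters are synthetic assumptions.
import numpy as np

fs = 512                                  # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch from -100 to +500 ms
n_trials = 200
rng = np.random.default_rng(1)

# Simulated positivity peaking ~250 ms at the contralateral occipital site.
bump = 1.5 * np.exp(-((t - 0.25) ** 2) / (2 * 0.04 ** 2))
contra = bump + rng.normal(0.0, 2.0, (n_trials, t.size))
ipsi = rng.normal(0.0, 2.0, (n_trials, t.size))

# ACOP amplitude: mean of the contra - ipsi difference wave, 200-300 ms.
diff_wave = contra.mean(axis=0) - ipsi.mean(axis=0)
window = (t >= 0.2) & (t <= 0.3)
print(f"mean ACOP amplitude 200-300 ms: {diff_wave[window].mean():.2f} µV")
```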
Collapse
Affiliation(s)
- Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland; Attention, Brain, and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, UK; University of Social Sciences and Humanities, Faculty in Wroclaw, Wroclaw, Poland.
| | - Chrysa Retsa
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland
| | - Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, 1011 Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Lausanne, Switzerland
| |
Collapse
|
44
|
Ahveninen J, Huang S, Ahlfors SP, Hämäläinen M, Rossi S, Sams M, Jääskeläinen IP. Interacting parallel pathways associate sounds with visual identity in auditory cortices. Neuroimage 2015; 124:858-868. [PMID: 26419388 DOI: 10.1016/j.neuroimage.2015.09.044] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2015] [Revised: 08/26/2015] [Accepted: 09/20/2015] [Indexed: 10/23/2022] Open
Abstract
Spatial and non-spatial information about sound events is presumably processed in parallel auditory cortex (AC) "what" and "where" streams, which are modulated by inputs from the respective visual-cortex subsystems. How these parallel processes are integrated into perceptual objects that remain stable across time and the source agent's movements is unknown. We recorded magneto- and electroencephalography (MEG/EEG) data while subjects viewed animated video clips featuring two audiovisual objects, a black cat and a gray cat. Adaptor-probe events were either linked to the same object (the black cat meowed twice in a row in the same location) or included a visually conveyed identity change (the black and then the gray cat meowed with identical voices in the same location). In addition to effects in visual (including fusiform and middle temporal, or MT, areas) and frontoparietal association areas, the visually conveyed object-identity change was associated with a release from adaptation of early (50-150 ms) activity in posterior ACs, spreading to left anterior ACs at 250-450 ms in our combined MEG/EEG source estimates. Repetition of events belonging to the same object resulted in increased theta-band (4-8 Hz) synchronization within the "what" and "where" pathways (e.g., between anterior AC and fusiform areas). In contrast, the visually conveyed identity changes resulted in distributed synchronization at higher frequencies (alpha and beta bands, 8-32 Hz) across different auditory, visual, and association areas. The results suggest that sound events become initially linked to perceptual objects in posterior AC, followed by modulations of representations in anterior AC. Hierarchical "what" and "where" pathways seem to operate in parallel after repeating audiovisual associations, whereas the resetting of such associations engages a distributed network across auditory, visual, and multisensory areas.
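A common way to quantify the band-limited synchronization described above is the phase-locking value (PLV). The following is a minimal Python sketch on synthetic signals; the band edges, filter order, and data are illustrative assumptions, not the authors' MEG/EEG pipeline.

```python
# Minimal, hypothetical sketch: theta-band (4-8 Hz) phase-locking value
# between two regional time courses. Signals are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(2)

# Two signals sharing a 6 Hz component with a stable phase lag.
s1 = np.sin(2 * np.pi * 6 * t) + rng.normal(0, 1, t.size)
s2 = np.sin(2 * np.pi * 6 * t - 0.8) + rng.normal(0, 1, t.size)

# Band-pass to theta, then extract instantaneous phase via Hilbert transform.
b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
p1 = np.angle(hilbert(filtfilt(b, a, s1)))
p2 = np.angle(hilbert(filtfilt(b, a, s2)))

# PLV: magnitude of the mean phase-difference vector (1 = perfect locking).
plv = np.abs(np.mean(np.exp(1j * (p1 - p2))))
print(f"theta-band PLV: {plv:.2f}")
```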
Collapse
Affiliation(s)
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, USA.
| | - Samantha Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, USA
| | - Seppo P Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, USA
| | - Matti Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA; Department of Neuroscience and Biomedical Engineering, Aalto University, School of Science, Espoo, Finland
| | - Stephanie Rossi
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, USA
| | - Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
| | - Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
| |
Collapse
|
45
|
Brang D, Towle VL, Suzuki S, Hillyard SA, Di Tusa S, Dai Z, Tao J, Wu S, Grabowecky M. Peripheral sounds rapidly activate visual cortex: evidence from electrocorticography. J Neurophysiol 2015; 114:3023-8. [PMID: 26334017 DOI: 10.1152/jn.00728.2015] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2015] [Accepted: 08/28/2015] [Indexed: 11/22/2022] Open
Abstract
Neurophysiological studies with animals suggest that sounds modulate activity in primary visual cortex in the presence of concurrent visual stimulation. Noninvasive neuroimaging studies in humans have similarly shown that sounds modulate activity in visual areas even in the absence of visual stimuli or visual task demands. However, the spatial and temporal limitations of these noninvasive methods prevent the determination of how rapidly sounds activate early visual cortex and what information about the sounds is relayed there. Using spatially and temporally precise measures of local synaptic activity acquired from depth electrodes in humans, we demonstrate that peripherally presented sounds evoke activity in the anterior portion of the contralateral, but not ipsilateral, calcarine sulcus within 28 ms of sound onset. These results suggest that auditory stimuli rapidly evoke spatially specific activity in visual cortex even in the absence of concurrent visual stimulation or visual task demands. This rapid auditory-evoked activation of primary visual cortex is likely to be mediated by subcortical pathways or direct cortical projections from auditory to visual areas.
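Onset latencies like the 28 ms figure reported above are typically estimated as the first post-stimulus threshold crossing of the averaged response. Below is a minimal Python sketch on synthetic data; the 3-SD criterion and all signal parameters are assumptions for illustration, not the study's method.

```python
# Minimal, hypothetical sketch of onset-latency estimation: the first
# post-stimulus sample where the trial-averaged response exceeds a
# baseline-derived threshold. Data are synthetic.
import numpy as np

fs = 1000
t = np.arange(-0.05, 0.2, 1 / fs)
rng = np.random.default_rng(3)

# Simulated evoked response beginning ~28 ms after sound onset.
evoked = np.where(t > 0.028, 5 * (1 - np.exp(-(t - 0.028) / 0.01)), 0.0)
trials = evoked + rng.normal(0, 1, (100, t.size))
mean_resp = trials.mean(axis=0)

# Threshold: pre-stimulus baseline mean + 3 standard deviations (assumed).
baseline = mean_resp[t < 0]
threshold = baseline.mean() + 3 * baseline.std()
onset_idx = np.argmax((t > 0) & (mean_resp > threshold))  # first crossing
print(f"estimated onset latency: {t[onset_idx] * 1000:.0f} ms")
```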
Collapse
Affiliation(s)
- David Brang
- Department of Psychology, Northwestern University, Evanston, Illinois; Interdepartmental Neuroscience Program, Northwestern University, Evanston, Illinois; Department of Neurology, University of Chicago, Chicago, Illinois
| | - Vernon L Towle
- Department of Neurology, University of Chicago, Chicago, Illinois
| | - Satoru Suzuki
- Department of Psychology, Northwestern University, Evanston, Illinois; Interdepartmental Neuroscience Program, Northwestern University, Evanston, Illinois
| | - Steven A Hillyard
- Department of Neurosciences, University of California, San Diego, La Jolla, California
| | - Senneca Di Tusa
- Department of Psychology, Northwestern University, Evanston, Illinois
| | - Zhongtian Dai
- Department of Neurology, University of Chicago, Chicago, Illinois
| | - James Tao
- Department of Neurology, University of Chicago, Chicago, Illinois
| | - Shasha Wu
- Department of Neurology, University of Chicago, Chicago, Illinois
| | - Marcia Grabowecky
- Department of Psychology, Northwestern University, Evanston, Illinois; Interdepartmental Neuroscience Program, Northwestern University, Evanston, Illinois
| |
Collapse
|
46
|
Neuro-oscillatory phase alignment drives speeded multisensory response times: an electro-corticographic investigation. J Neurosci 2015; 35:8546-57. [PMID: 26041921 DOI: 10.1523/jneurosci.4527-14.2015] [Citation(s) in RCA: 71] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Even simple tasks rely on information exchange between functionally distinct and often relatively distant neuronal ensembles. Considerable work indicates that oscillatory synchronization through phase alignment is a major agent of inter-regional communication. In the brain, different oscillatory phases correspond to low- and high-excitability states. Optimally aligned phases (or high-excitability states) promote inter-regional communication. Studies have also shown that sensory stimulation can modulate or reset the phase of ongoing cortical oscillations. For example, auditory stimuli can reset the phase of oscillations in visual cortex, influencing processing of a simultaneous visual stimulus. Such cross-regional phase reset represents a candidate mechanism for aligning oscillatory phase for inter-regional communication. Here, we explored the role of local and inter-regional phase alignment in driving a well-established behavioral correlate of multisensory integration: the redundant target effect (RTE), which refers to the fact that responses to multisensory inputs are substantially faster than to unisensory stimuli. In a speeded detection task, human epileptic patients (N = 3) responded to unisensory (auditory or visual) and multisensory (audiovisual) stimuli with a button press, while electrocorticography was recorded over auditory and motor regions. Visual stimulation significantly modulated auditory activity via phase reset in the delta and theta bands. During the period between stimulation and subsequent motor response, transient synchronization between auditory and motor regions was observed. Phase synchrony emerged faster for multisensory inputs than for unisensory stimulation. This sensorimotor phase alignment correlated with behavior such that stronger synchrony was associated with faster responses, linking the commonly observed RTE with phase alignment across a sensorimotor network.
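Phase reset of ongoing oscillations is commonly indexed by inter-trial coherence (ITC): the length of the mean unit phase vector across trials at each time point. The sketch below simulates a theta phase reset at stimulus onset and computes ITC; all parameters are illustrative assumptions, not the study's electrocorticographic analysis.

```python
# Minimal, hypothetical sketch: inter-trial coherence (ITC) as an index of
# phase reset, i.e., cross-trial phase alignment after stimulus onset.
# Trials are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, n_trials = 250, 150
t = np.arange(-0.5, 1.0, 1 / fs)
rng = np.random.default_rng(4)

# Trials carry a 5 Hz rhythm with random pre-stimulus phase that resets
# to a common phase at t = 0 (the simulated phase reset).
trials = np.empty((n_trials, t.size))
for i in range(n_trials):
    pre_phase = rng.uniform(0, 2 * np.pi)
    phase = np.where(t < 0, 2 * np.pi * 5 * t + pre_phase, 2 * np.pi * 5 * t)
    trials[i] = np.cos(phase) + rng.normal(0, 1, t.size)

b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))

# ITC at each time point: magnitude of the mean unit phase vector.
itc = np.abs(np.mean(np.exp(1j * phases), axis=0))
print(f"ITC pre-stimulus: {itc[t < 0].mean():.2f}, "
      f"post-stimulus: {itc[(t > 0) & (t < 0.4)].mean():.2f}")
```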
Collapse
|
47
|
Murray MM, Thelen A, Thut G, Romei V, Martuzzi R, Matusz PJ. The multisensory function of the human primary visual cortex. Neuropsychologia 2015; 83:161-169. [PMID: 26275965 DOI: 10.1016/j.neuropsychologia.2015.08.011] [Citation(s) in RCA: 107] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2015] [Revised: 08/08/2015] [Accepted: 08/10/2015] [Indexed: 01/20/2023]
Abstract
It has been nearly 10 years since Ghazanfar and Schroeder (2006) proposed that the neocortex is essentially multisensory in nature. However, it is only recently that sufficient hard evidence supporting this proposal has accrued. We review evidence that activity within the human primary visual cortex plays an active role in multisensory processes and directly impacts behavioural outcome. This evidence emerges from a full palette of human brain imaging and brain mapping methods with which multisensory processes are quantitatively assessed by taking advantage of the particular strengths of each technique as well as advances in signal analyses. Several general conclusions about multisensory processes in the primary visual cortex of humans are supported relatively solidly. First, haemodynamic methods (fMRI/PET) show that there is both convergence and integration occurring within primary visual cortex. Second, primary visual cortex is involved in multisensory processes during early post-stimulus stages (as revealed by EEG/ERPs/ERFs as well as TMS). Third, multisensory effects in primary visual cortex directly impact behaviour and perception, as revealed by correlational (EEG/ERPs/ERFs) as well as more causal measures (TMS/tACS). While the provocative claim of Ghazanfar and Schroeder (2006) that the whole of neocortex is multisensory in function has yet to be demonstrated, this can now be considered established in the case of the human primary visual cortex.
Collapse
Affiliation(s)
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
| | - Antonia Thelen
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Gregor Thut
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom
| | - Vincenzo Romei
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, United Kingdom
| | - Roberto Martuzzi
- Laboratory of Cognitive Neuroscience, Brain-Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Switzerland
| | - Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Attention, Brain, and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, United Kingdom.
| |
Collapse
|
48
|
Wiggins IM, Hartley DEH. A synchrony-dependent influence of sounds on activity in visual cortex measured using functional near-infrared spectroscopy (fNIRS). PLoS One 2015; 10:e0122862. [PMID: 25826284 PMCID: PMC4380402 DOI: 10.1371/journal.pone.0122862] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2014] [Accepted: 02/15/2015] [Indexed: 11/29/2022] Open
Abstract
Evidence from human neuroimaging and animal electrophysiological studies suggests that signals from different sensory modalities interact early in cortical processing, including in primary sensory cortices. The present study aimed to test whether functional near-infrared spectroscopy (fNIRS), an emerging, non-invasive neuroimaging technique, is capable of measuring such multisensory interactions. Specifically, we tested for a modulatory influence of sounds on activity in visual cortex, while varying the temporal synchrony between trains of transient auditory and visual events. Related fMRI studies have consistently reported enhanced activation in response to synchronous compared to asynchronous audiovisual stimulation. Unexpectedly, we found that synchronous sounds significantly reduced the fNIRS response from visual cortex, compared both to asynchronous sounds and to a visual-only baseline. It is possible that this suppressive effect of synchronous sounds reflects the use of an efficacious visual stimulus, chosen for consistency with previous fNIRS studies. Discrepant results may also be explained by differences between studies in how attention was deployed to the auditory and visual modalities. The presence and relative timing of sounds did not significantly affect performance in a simultaneously conducted behavioral task, although the data were suggestive of a positive relationship between the strength of the fNIRS response from visual cortex and the accuracy of visual target detection. Overall, the present findings indicate that fNIRS is capable of measuring multisensory cortical interactions. In multisensory research, fNIRS can offer complementary information to the more established neuroimaging modalities, and may prove advantageous for testing in naturalistic environments and with infant and clinical populations.
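The condition contrast reported above (synchronous vs. asynchronous vs. visual-only) amounts to a within-subject comparison of block-averaged response amplitudes. A minimal Python sketch with synthetic values is shown below; the subject count, effect sizes, and paired t-tests are illustrative assumptions, not the study's statistics.

```python
# Minimal, hypothetical sketch: within-subject comparison of block-averaged
# visual-cortex HbO amplitudes across conditions. Values are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_subjects = 20

# Simulated mean HbO response per subject and condition (arbitrary units);
# synchronous sounds reduce the visual response relative to the others.
visual_only = rng.normal(1.0, 0.3, n_subjects)
asynchronous = rng.normal(1.0, 0.3, n_subjects)
synchronous = rng.normal(0.7, 0.3, n_subjects)

t, p = stats.ttest_rel(synchronous, asynchronous)
print(f"synchronous vs asynchronous: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
t, p = stats.ttest_rel(synchronous, visual_only)
print(f"synchronous vs visual-only:  t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```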
Collapse
Affiliation(s)
- Ian M. Wiggins
- National Institute for Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, Nottingham, United Kingdom
- Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
| | - Douglas E. H. Hartley
- National Institute for Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, Nottingham, United Kingdom
- Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Medical Research Council (MRC) Institute of Hearing Research, Nottingham, United Kingdom
| |
Collapse
|
49
|
De Meo R, Murray MM, Clarke S, Matusz PJ. Top-down control and early multisensory processes: chicken vs. egg. Front Integr Neurosci 2015; 9:17. [PMID: 25784863 PMCID: PMC4347447 DOI: 10.3389/fnint.2015.00017] [Citation(s) in RCA: 45] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2014] [Accepted: 02/13/2015] [Indexed: 11/13/2022] Open
Affiliation(s)
- Rosanna De Meo
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
| | - Micah M Murray
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland; Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Switzerland
| | - Stephanie Clarke
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
| | - Pawel J Matusz
- The Laboratory for Investigative Neurophysiology, Neuropsychology and Neurorehabilitation Service and Department of Radiology, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland; Faculty in Wroclaw, University of Social Sciences and Humanities, Wroclaw, Poland; Attention, Brain and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, Oxford, UK
| |
Collapse
|
50
|
Hawelka S, Schuster S, Gagl B, Hutzler F. On forward inferences of fast and slow readers. An eye movement study. Sci Rep 2015; 5:8432. [PMID: 25678030 PMCID: PMC4327408 DOI: 10.1038/srep08432] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2014] [Accepted: 01/19/2015] [Indexed: 11/26/2022] Open
Abstract
Unimpaired readers process words incredibly fast, and it was therefore assumed that top-down processing, such as predicting upcoming words, would be too slow to play an appreciable role in reading. This runs counter to the major postulate of the predictive coding framework that our brain continually predicts probable upcoming sensory events; that is, it may generate predictions about the probable upcoming word during reading (dubbed forward inferences). To assess these contradictory assumptions, we evaluated the effect of the predictability of words in sentences on eye movement control during silent reading. Participants were a group of fluent (i.e., fast) and a group of speed-impaired (i.e., slow) readers. The findings indicate that fast readers generate forward inferences, whereas speed-impaired readers do so only to a reduced extent, indicating a significant role of predictive coding in fluent reading.
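The predictability effect described above can be summarised as the slope of fixation duration on word predictability, estimated separately per reader group. Below is a minimal Python sketch with synthetic data; the slopes, durations, and group labels are illustrative assumptions, not the study's eye-movement analysis.

```python
# Minimal, hypothetical sketch: predictability effect on fixation durations,
# estimated per reader group as a simple linear-regression slope.
# All data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_words = 500
predictability = rng.uniform(0, 1, n_words)   # cloze probability per word

def fixation_durations(slope, base):
    # Higher predictability -> shorter fixations (negative slope), plus noise.
    return base + slope * predictability + rng.normal(0, 20, n_words)

fast = fixation_durations(slope=-40, base=220)   # strong forward inference
slow = fixation_durations(slope=-10, base=320)   # reduced effect

for name, durations in [("fast", fast), ("slow", slow)]:
    slope, _ = np.polyfit(predictability, durations, 1)
    print(f"{name} readers: {slope:.1f} ms per unit predictability")
```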
Collapse
Affiliation(s)
- Stefan Hawelka
- Centre for Cognitive Neuroscience, University Salzburg, Hellbrunnerstr. 34, 5020 Salzburg, Austria
| | - Sarah Schuster
- Centre for Cognitive Neuroscience, University Salzburg, Hellbrunnerstr. 34, 5020 Salzburg, Austria
| | - Benjamin Gagl
- Centre for Cognitive Neuroscience, University Salzburg, Hellbrunnerstr. 34, 5020 Salzburg, Austria
| | - Florian Hutzler
- Centre for Cognitive Neuroscience, University Salzburg, Hellbrunnerstr. 34, 5020 Salzburg, Austria
| |
Collapse
|