1. Gartside SE, Olthof BM, Rees A. Motor, somatosensory, and executive cortical areas elicit monosynaptic and polysynaptic neuronal activity in the auditory midbrain. Hear Res 2024;447:109009. PMID: 38670009. DOI: 10.1016/j.heares.2024.109009.
Abstract
We recently reported that the central nucleus of the inferior colliculus (the auditory midbrain) is innervated by glutamatergic pyramidal cells originating not only in auditory cortex (AC), but also in multiple 'non-auditory' regions of the cerebral cortex. Here, in anaesthetised rats, we used optogenetics and electrical stimulation, combined with recording in the inferior colliculus, to determine the functional influence of these descending connections. Specifically, we determined the extent of monosynaptic excitation and the influence of these descending connections on spontaneous activity in the inferior colliculus. A retrograde virus encoding both green fluorescent protein (GFP) and channelrhodopsin (ChR2) injected into the central nucleus of the inferior colliculus (ICc) resulted in GFP expression in discrete groups of cells in multiple areas of the cerebral cortex. Light stimulation of AC and primary motor cortex (M1) caused local activation of cortical neurones and increased the firing rate of neurones in ICc, indicating a direct excitatory input from AC and M1 to ICc with a restricted distribution. In naïve animals, electrical stimulation at multiple different sites within M1, secondary motor, somatosensory, and prefrontal cortices increased firing rate in ICc. However, it was notable that stimulation at some adjacent sites failed to influence firing at the recording site in ICc. Responses in ICc comprised singular spikes of constant shape and size which occurred with a short, fixed latency (∼5 ms), consistent with monosynaptic excitation of individual ICc units. Increasing the stimulus current decreased the latency of these spikes, suggesting more rapid depolarization of cortical neurones, and increased the number of (usually adjacent) channels on which a monosynaptic spike was seen, suggesting recruitment of increasing numbers of cortical neurons. Electrical stimulation of cortical regions also evoked longer latency, longer duration increases in firing activity, comprising multiple units with spikes occurring with significant temporal jitter, consistent with polysynaptic excitation. Increasing the stimulus current increased the number of spikes in these polysynaptic responses and increased the number of channels on which the responses were observed, although the magnitude of the responses always diminished away from the most activated channels. Together, our findings indicate that descending connections from motor, somatosensory, and executive cortical regions directly activate small numbers of ICc neurones and that this in turn leads to extensive polysynaptic activation of local circuits within the ICc.
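The latency and jitter criteria described above lend themselves to a simple quantitative rule. The sketch below is our own illustration (not the authors' analysis code) of how first-spike latencies across stimulus repetitions might separate monosynaptic-like from polysynaptic-like responses; the 8 ms and 0.5 ms cut-offs are hypothetical parameters, not values taken from the paper.

```python
# Minimal sketch (illustrative cut-offs, simulated latencies): classify evoked
# responses as monosynaptic-like (short, fixed latency) or polysynaptic-like
# (longer, jittered latency) from first-spike times across stimulus trials.
import numpy as np

def classify_response(first_spike_ms, max_latency=8.0, max_jitter=0.5):
    """Label a unit by the median and spread of its first-spike latencies."""
    first_spike_ms = np.asarray(first_spike_ms, dtype=float)
    median_lat = np.median(first_spike_ms)
    jitter = np.std(first_spike_ms)
    if median_lat <= max_latency and jitter <= max_jitter:
        return "monosynaptic-like", median_lat, jitter
    return "polysynaptic-like", median_lat, jitter

rng = np.random.default_rng(4)
mono = 5.0 + rng.normal(0, 0.2, 50)    # ~5 ms latency, little trial-to-trial jitter
poly = 12.0 + rng.normal(0, 3.0, 50)   # longer latency with substantial jitter
print(classify_response(mono))
print(classify_response(poly))
```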
Affiliation(s)
- Sarah E Gartside
- Centre for Transformative Neuroscience and Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, United Kingdom.
- Bas MJ Olthof
- Centre for Transformative Neuroscience and Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, United Kingdom
- Adrian Rees
- Centre for Transformative Neuroscience and Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, United Kingdom
2. Ning M, Duwadi S, Yücel MA, von Lühmann A, Boas DA, Sen K. fNIRS dataset during complex scene analysis. Front Hum Neurosci 2024;18:1329086. PMID: 38576451. PMCID: PMC10991699. DOI: 10.3389/fnhum.2024.1329086.
Affiliation(s)
- Matthew Ning
- Department of Biomedical Engineering, Neurophotonics Center, Boston University, Boston, MA, United States
- Sudan Duwadi
- Department of Biomedical Engineering, Neurophotonics Center, Boston University, Boston, MA, United States
- Meryem A. Yücel
- Department of Biomedical Engineering, Neurophotonics Center, Boston University, Boston, MA, United States
- Alexander von Lühmann
- Department of Biomedical Engineering, Neurophotonics Center, Boston University, Boston, MA, United States
- BIFOLD – Berlin Institute for the Foundations of Learning and Data, Berlin, Germany
- Intelligent Biomedical Sensing (IBS) Lab, Technical University Berlin, Berlin, Germany
- David A. Boas
- Department of Biomedical Engineering, Neurophotonics Center, Boston University, Boston, MA, United States
- Kamal Sen
- Department of Biomedical Engineering, Neurophotonics Center, Boston University, Boston, MA, United States
3. Jones SA, Noppeney U. Older adults preserve audiovisual integration through enhanced cortical activations, not by recruiting new regions. PLoS Biol 2024;22:e3002494. PMID: 38319934. PMCID: PMC10871488. DOI: 10.1371/journal.pbio.3002494.
Abstract
Effective interactions with the environment rely on the integration of multisensory signals: Our brains must efficiently combine signals that share a common source, and segregate those that do not. Healthy ageing can change or impair this process. This functional magnetic resonance imaging study assessed the neural mechanisms underlying age differences in the integration of auditory and visual spatial cues. Participants were presented with synchronous audiovisual signals at various degrees of spatial disparity and indicated their perceived sound location. Behaviourally, older adults were able to maintain localisation accuracy. At the neural level, they integrated auditory and visual cues into spatial representations along dorsal auditory and visual processing pathways similarly to their younger counterparts but showed greater activations in a widespread system of frontal, temporal, and parietal areas. According to multivariate Bayesian decoding, these areas encoded critical stimulus information beyond that which was encoded in the brain areas commonly activated by both groups. Surprisingly, however, the boost in information provided by these areas with age-related activation increases was comparable across the 2 age groups. This dissociation (comparable information encoded in brain activation patterns across the 2 age groups, but age-related increases in regional blood-oxygen-level-dependent responses) contradicts the widespread notion that older adults recruit new regions as a compensatory mechanism to encode task-relevant information. Instead, our findings suggest that activation increases in older adults reflect nonspecific or modulatory mechanisms related to less efficient or slower processing, or greater demands on attentional resources.
Affiliation(s)
- Samuel A. Jones
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Department of Psychology, Nottingham Trent University, Nottingham, United Kingdom
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
4. Jones SA, Noppeney U. Multisensory Integration and Causal Inference in Typical and Atypical Populations. Adv Exp Med Biol 2024;1437:59-76. PMID: 38270853. DOI: 10.1007/978-981-99-7611-9_4.
Abstract
Multisensory perception is critical for effective interaction with the environment, but human responses to multisensory stimuli vary across the lifespan and appear changed in some atypical populations. In this review chapter, we consider multisensory integration within a normative Bayesian framework. We begin by outlining the complex computational challenges of multisensory causal inference and reliability-weighted cue integration, and discuss whether healthy young adults behave in accordance with normative Bayesian models. We then compare their behaviour with various other human populations (children, older adults, and those with neurological or neuropsychiatric disorders). In particular, we consider whether the differences seen in these groups are due only to changes in their computational parameters (such as sensory noise or perceptual priors), or whether the fundamental computational principles (such as reliability weighting) underlying multisensory perception may also be altered. We conclude by arguing that future research should aim explicitly to differentiate between these possibilities.
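For readers unfamiliar with the normative framework referred to above, the standard forced-fusion and causal-inference equations are restated below; the notation (sensory samples x_A, x_V, noise terms sigma, causal structure C) is ours and is not taken from the chapter.

```latex
% Reliability-weighted (forced-fusion) estimate when a common cause is assumed:
\hat{s}_{AV} = w_A \hat{s}_A + w_V \hat{s}_V,
\qquad
w_A = \frac{1/\sigma_A^{2}}{1/\sigma_A^{2} + 1/\sigma_V^{2}},
\qquad
w_V = 1 - w_A .

% Bayesian causal inference (model averaging): the final estimate mixes the
% fused and segregated estimates according to the posterior probability of a
% common cause C = 1 given the sensory samples x_A, x_V:
\hat{s}_A^{\mathrm{final}} = P(C{=}1 \mid x_A, x_V)\, \hat{s}_{AV}
  + \bigl(1 - P(C{=}1 \mid x_A, x_V)\bigr)\, \hat{s}_A .
```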
Affiliation(s)
- Samuel A Jones
- Department of Psychology, Nottingham Trent University, Nottingham, UK.
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
5. Seydell-Greenwald A, Wang X, Newport EL, Bi Y, Striem-Amit E. Spoken language processing activates the primary visual cortex. PLoS One 2023;18:e0289671. PMID: 37566582. PMCID: PMC10420367. DOI: 10.1371/journal.pone.0289671.
Abstract
Primary visual cortex (V1) is generally thought of as a low-level sensory area that primarily processes basic visual features. Although there is evidence for multisensory effects on its activity, these are typically found for the processing of simple sounds and their properties, for example spatially or temporally-congruent simple sounds. However, in congenitally blind individuals, V1 is involved in language processing, with no evidence of major changes in anatomical connectivity that could explain this seemingly drastic functional change. This is at odds with current accounts of neural plasticity, which emphasize the role of connectivity and conserved function in determining a neural tissue's role even after atypical early experiences. To reconcile what appears to be unprecedented functional reorganization with known accounts of plasticity limitations, we tested whether V1's multisensory roles include responses to spoken language in sighted individuals. Using fMRI, we found that V1 in normally sighted individuals was indeed activated by comprehensible spoken sentences as compared to an incomprehensible reversed speech control condition, and more strongly so in the left compared to the right hemisphere. Activation in V1 for language was also significant and comparable for abstract and concrete words, suggesting it was not driven by visual imagery. Last, this activation did not stem from increased attention to the auditory onset of words, nor was it correlated with attentional arousal ratings, making general attention accounts an unlikely explanation. Together these findings suggest that V1 responds to spoken language even in sighted individuals, reflecting the binding of multisensory high-level signals, potentially to predict visual input. This capability might be the basis for the strong V1 language activation observed in people born blind, re-affirming the notion that plasticity is guided by pre-existing connectivity and abilities in the typically developed brain.
Affiliation(s)
- Anna Seydell-Greenwald
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Elissa L. Newport
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ella Striem-Amit
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
6. Gilad T, Bahar O, Hasan M, Bar A, Subach A, Scharf I. The combined role of visual and olfactory cues in foraging by Cataglyphis ants in laboratory mazes. Curr Zool 2023;69:401-408. PMID: 37614920. PMCID: PMC10443614. DOI: 10.1093/cz/zoac058.
Abstract
Foragers use several senses to locate food, and many animals rely on vision and smell. It is beneficial not to rely on a single sense, which might fail under certain conditions. We examined the contribution of vision and smell to foraging and maze exploration under laboratory conditions using Cataglyphis desert ants as a model. Foraging intensity, measured as the number of workers entering the maze and arriving at the target as well as the time taken to reach the target, was greater when food, blue light, or both were presented than in a control condition. Workers trained to forage for a combined food and light cue elevated their foraging intensity with experience. However, foraging intensity was not higher when both cues were presented simultaneously than with either cue alone. Following training, we separated the two cues and moved either the food or the blue light to the opposite maze corner. This manipulation impaired foraging success, either by leading to fewer workers arriving at the target cell (when the light stayed and the food was moved) or to more workers arriving at the opposite target cell, empty of food (when the food stayed and the light was moved). This result indicates that ant workers use both senses when foraging for food and readily associate light with food.
Affiliation(s)
- Tomer Gilad
- School of Zoology, George S Wise Faculty of Life Sciences, Tel Aviv University, 69978 Tel Aviv, Israel
- Ori Bahar
- School of Zoology, George S Wise Faculty of Life Sciences, Tel Aviv University, 69978 Tel Aviv, Israel
- Malak Hasan
- School of Zoology, George S Wise Faculty of Life Sciences, Tel Aviv University, 69978 Tel Aviv, Israel
- Adi Bar
- School of Zoology, George S Wise Faculty of Life Sciences, Tel Aviv University, 69978 Tel Aviv, Israel
- Aziz Subach
- School of Zoology, George S Wise Faculty of Life Sciences, Tel Aviv University, 69978 Tel Aviv, Israel
- Inon Scharf
- School of Zoology, George S Wise Faculty of Life Sciences, Tel Aviv University, 69978 Tel Aviv, Israel
7. Franceschiello B, Rumac S, Hilbert T, Nau M, Dziadosz M, Degano G, Roy CW, Gaglianese A, Petri G, Yerly J, Stuber M, Kober T, van Heeswijk RB, Murray MM, Fornari E. Hi-Fi fMRI: High-resolution, fast-sampled and sub-second whole-brain functional MRI at 3T in humans. bioRxiv 2023:2023.05.13.540663 [Preprint]. PMID: 37425913. PMCID: PMC10327135. DOI: 10.1101/2023.05.13.540663.
Abstract
Functional magnetic resonance imaging (fMRI) is a methodological cornerstone of neuroscience. Most studies measure blood-oxygen-level-dependent (BOLD) signal using echo-planar imaging (EPI), Cartesian sampling, and image reconstruction with a one-to-one correspondence between the number of acquired volumes and reconstructed images. However, EPI schemes are subject to trade-offs between spatial and temporal resolutions. We overcome these limitations by measuring BOLD with a gradient recalled echo (GRE) sequence with a 3D radial-spiral phyllotaxis trajectory at a high sampling rate (28.24 ms) at standard 3T field strength. The framework enables the reconstruction of 3D signal time courses with whole-brain coverage at simultaneously higher spatial (1 mm³) and temporal (up to 250 ms) resolutions, as compared to optimized EPI schemes. Additionally, artifacts are corrected before image reconstruction; the desired temporal resolution is chosen after scanning and without assumptions on the shape of the hemodynamic response. By showing activation in the calcarine sulcus of 20 participants performing an ON-OFF visual paradigm, we demonstrate the reliability of our method for cognitive neuroscience research.
8. Kandemir G, Akyürek EG. Impulse perturbation reveals cross-modal access to sensory working memory through learned associations. Neuroimage 2023;274:120156. PMID: 37146781. DOI: 10.1016/j.neuroimage.2023.120156.
Abstract
We investigated if learned associations between visual and auditory stimuli can afford full cross-modal access to working memory. Previous research using the impulse perturbation technique has shown that cross-modal access to working memory is one-sided; visual impulses reveal both auditory and visual memoranda, but auditory impulses do not seem to reveal visual memoranda (Wolff et al., 2020b). Our participants first learned to associate six auditory pure tones with six visual orientation gratings. Next, a delayed match-to-sample task for the orientations was completed, while EEG was recorded. Orientation memories were recalled either via their learned auditory counterpart, or were visually presented. We then decoded the orientation memories from the EEG responses to both auditory and visual impulses presented during the memory delay. Working memory content could always be decoded from visual impulses. Importantly, through recall of the learned associations, the auditory impulse also evoked a decodable response from the visual WM network, providing evidence for full cross-modal access. We also observed that after a brief initial dynamic period, the representational codes of the memory items generalized across time, as well as between perceptual maintenance and long-term recall conditions. Our results thus demonstrate that accessing learned associations in long-term memory provides a cross-modal pathway to working memory that seems to be based on a common coding scheme.
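As a concrete illustration of the impulse-decoding logic described above, the sketch below applies a generic cross-validated classifier to impulse-locked EEG epochs. It uses simulated data with hypothetical dimensions and is not the authors' pipeline, which differs in detail (e.g., in the decoding metric and time-resolved analysis).

```python
# Minimal sketch (simulated data, hypothetical dimensions; not the authors'
# pipeline): cross-validated decoding of a remembered orientation class from
# impulse-locked EEG epochs of shape (n_trials, n_channels, n_times).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 192, 64, 50
epochs = rng.standard_normal((n_trials, n_channels, n_times))   # stand-in for real EEG
orientation = rng.integers(0, 6, size=n_trials)                 # six orientation bins

X = epochs.reshape(n_trials, -1)    # flatten channels x time into one feature vector per trial
y = orientation

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(f"decoding accuracy: {scores.mean():.3f} (chance = {1/6:.3f})")
```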
Affiliation(s)
- Güven Kandemir
- Department of Experimental Psychology, University of Groningen, The Netherlands; Institute for Brain and Behavior, Vrije Universiteit Amsterdam, The Netherlands.
- Elkan G Akyürek
- Department of Experimental Psychology, University of Groningen, The Netherlands
9. Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023;17:1108354. PMID: 36816496. PMCID: PMC9932987. DOI: 10.3389/fnhum.2023.1108354.
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
- Ambra Ferrari
- Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
10. Electrophysiological differences and similarities in audiovisual speech processing in CI users with unilateral and bilateral hearing loss. Curr Res Neurobiol 2022;3:100059. DOI: 10.1016/j.crneur.2022.100059.
11. Ren Q, Marshall AC, Kaiser J, Schütz-Bosbach S. Multisensory Integration of Anticipated Cardiac Signals with Visual Targets Affects Their Detection among Multiple Visual Stimuli. Neuroimage 2022;262:119549. DOI: 10.1016/j.neuroimage.2022.119549.
12. Skirzewski M, Molotchnikoff S, Hernandez LF, Maya-Vetencourt JF. Multisensory Integration: Is Medial Prefrontal Cortex Signaling Relevant for the Treatment of Higher-Order Visual Dysfunctions? Front Mol Neurosci 2022;14:806376. PMID: 35110996. PMCID: PMC8801884. DOI: 10.3389/fnmol.2021.806376.
Abstract
In the mammalian brain, information processing in sensory modalities and global mechanisms of multisensory integration facilitate perception. Emerging experimental evidence suggests that the contribution of multisensory integration to sensory perception is far more complex than previously expected. Here we review how associative areas such as the prefrontal cortex, which receive and integrate inputs from diverse sensory modalities, can affect information processing in unisensory systems via downstream signaling. We focus our attention on the influence of the medial prefrontal cortex on the processing of information in the visual system and whether this phenomenon can be clinically used to treat higher-order visual dysfunctions. We propose that non-invasive and multisensory stimulation strategies such as environmental enrichment and/or attention-related tasks could be of clinical relevance in treating cerebral visual impairment.
Affiliation(s)
- Miguel Skirzewski
- Rodent Cognition Research and Innovation Core, University of Western Ontario, London, ON, Canada
- Stéphane Molotchnikoff
- Département de Sciences Biologiques, Université de Montréal, Montreal, QC, Canada
- Département de Génie Electrique et Génie Informatique, Université de Sherbrooke, Sherbrooke, QC, Canada
- Luis F. Hernandez
- Knoebel Institute for Healthy Aging, University of Denver, Denver, CO, United States
- José Fernando Maya-Vetencourt
- Department of Biology, University of Pisa, Pisa, Italy
- Centre for Synaptic Neuroscience, Istituto Italiano di Tecnologia (IIT), Genova, Italy
13. Benetti S, Collignon O. Cross-modal integration and plasticity in the superior temporal cortex. Handb Clin Neurol 2022;187:127-143. PMID: 35964967. DOI: 10.1016/b978-0-12-823493-8.00026-2.
Abstract
In congenitally deaf people, temporal regions typically believed to be primarily auditory enhance their response to nonauditory information. The neural mechanisms and functional principles underlying this phenomenon, as well as its impact on auditory recovery after sensory restoration, yet remain debated. In this chapter, we demonstrate that the cross-modal recruitment of temporal regions by visual inputs in congenitally deaf people follows organizational principles known to be present in the hearing brain. We propose that the functional and structural mechanisms allowing optimal convergence of multisensory information in the temporal cortex of hearing people also provide the neural scaffolding for feeding visual or tactile information into the deafened temporal areas. Innate in their nature, such anatomo-functional links between the auditory and other sensory systems would represent the common substrate of both early multisensory integration and expression of selective cross-modal plasticity in the superior temporal cortex.
Affiliation(s)
- Stefania Benetti
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy
- Olivier Collignon
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy; Institute for Research in Psychology and Neuroscience, Faculty of Psychology and Educational Science, UC Louvain, Louvain-la-Neuve, Belgium.
14. Ferrari A, Noppeney U. Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biol 2021;19:e3001465. PMID: 34793436. PMCID: PMC8639080. DOI: 10.1371/journal.pbio.3001465.
Abstract
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals' causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
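The Bayesian causal inference model referenced above rests on a posterior over causal structures. The sketch below is a minimal, textbook-style implementation of that posterior for a single audiovisual trial, assuming Gaussian sensory noise and a zero-centred Gaussian spatial prior; the parameter values are illustrative and are not taken from the study.

```python
# Minimal sketch of the causal-inference posterior (Körding et al.-style model),
# not the authors' implementation. Assumes Gaussian sensory noise (sigma_a,
# sigma_v) and a zero-centred Gaussian spatial prior (sigma_p).
import numpy as np

def p_common(x_a, x_v, sigma_a, sigma_v, sigma_p, prior_common=0.5):
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of the two samples under one shared source (C = 1).
    var_c1 = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va) / (2 * var_c1)) \
        / (2 * np.pi * np.sqrt(var_c1))
    # Likelihood under two independent sources (C = 2).
    like_c2 = np.exp(-(x_a**2 / (2 * (va + vp)) + x_v**2 / (2 * (vv + vp)))) \
        / (2 * np.pi * np.sqrt((va + vp) * (vv + vp)))
    return like_c1 * prior_common / (like_c1 * prior_common + like_c2 * (1 - prior_common))

print(p_common(x_a=3.0, x_v=4.0, sigma_a=4.0, sigma_v=1.5, sigma_p=10.0))    # near-coincident: high
print(p_common(x_a=-8.0, x_v=12.0, sigma_a=4.0, sigma_v=1.5, sigma_p=10.0))  # disparate: low
```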
Affiliation(s)
- Ambra Ferrari
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
15. Linton P. V1 as an egocentric cognitive map. Neurosci Conscious 2021;2021:niab017. PMID: 34532068. PMCID: PMC8439394. DOI: 10.1093/nc/niab017.
Abstract
We typically distinguish between V1 as an egocentric perceptual map and the hippocampus as an allocentric cognitive map. In this article, we argue that V1 also functions as a post-perceptual egocentric cognitive map. We argue that three well-documented functions of V1, namely (i) the estimation of distance, (ii) the estimation of size, and (iii) multisensory integration, are better understood as post-perceptual cognitive inferences. This argument has two important implications. First, we argue that V1 must function as the neural correlates of the visual perception/cognition distinction and suggest how this can be accommodated by V1's laminar structure. Second, we use this insight to propose a low-level account of visual consciousness in contrast to mid-level accounts (recurrent processing theory; integrated information theory) and higher-level accounts (higher-order thought; global workspace theory). Detection thresholds have been traditionally used to rule out such an approach, but we explain why it is a mistake to equate visibility (and therefore the presence/absence of visual experience) with detection thresholds.
Affiliation(s)
- Paul Linton
- Centre for Applied Vision Research, City, University of London, Northampton Square, London EC1V 0HB, UK
16.
Abstract
Coordination between different sensory systems is a necessary element of sensory processing. Where and how signals from different sense organs converge onto common neural circuitry have become topics of increasing interest in recent years. In this article, we focus specifically on visual-auditory interactions in areas of the mammalian brain that are commonly considered to be auditory in function. The auditory cortex and inferior colliculus are two key points of entry where visual signals reach the auditory pathway, and both contain visual- and/or eye movement-related signals in humans and other animals. The visual signals observed in these auditory structures reflect a mixture of visual modulation of auditory-evoked activity and visually driven responses that are selective for stimulus location or features. These key response attributes also appear in the classic visual pathway but may play a different role in the auditory pathway: to modify auditory rather than visual perception. Finally, while this review focuses on two particular areas of the auditory pathway where this question has been studied, robust descending as well as ascending connections within this pathway suggest that undiscovered visual signals may be present at other stages as well.
Affiliation(s)
- Meredith N Schmehl
- Department of Neurobiology, Duke University, Durham, North Carolina 27708, USA
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina 27708, USA
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina 27708, USA
- Jennifer M Groh
- Department of Neurobiology, Duke University, Durham, North Carolina 27708, USA
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina 27708, USA
- Department of Computer Science, Duke University, Durham, North Carolina 27708, USA
- Department of Biomedical Engineering, Duke University, Durham, North Carolina 27708, USA
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina 27708, USA
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina 27708, USA
17. Visual Influences on Auditory Behavioral, Neural, and Perceptual Processes: A Review. J Assoc Res Otolaryngol 2021;22:365-386. PMID: 34014416. PMCID: PMC8329114. DOI: 10.1007/s10162-021-00789-0.
Abstract
In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although the multisensory interactions derived from this combination of information and that shape auditory function are seen across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the state of our understanding at this point in time regarding this topic. Following a general introduction, the review is divided into 5 sections. In the first section, we review the psychophysical evidence in humans regarding vision's influence in audition, making the distinction between vision's ability to enhance versus alter auditory performance and perception. Three examples are then described that serve to highlight vision's ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models that have been built based on available psychophysical data and that seek to provide greater mechanistic insights into how vision can impact audition. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches toward understanding how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures. It also speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception-scene analysis and communication. The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work seeking to leverage our knowledge of audiovisual interactions to develop better remediation approaches to these sensory-based disorders, founded in concepts of perceptual plasticity in which vision has been shown to have the capacity to facilitate auditory learning.
18.
Abstract
Adaptive behavior in a complex, dynamic, and multisensory world poses some of the most fundamental computational challenges for the brain, notably inference, decision-making, learning, binding, and attention. We first discuss how the brain integrates sensory signals from the same source to support perceptual inference and decision-making by weighting them according to their momentary sensory uncertainties. We then show how observers solve the binding or causal inference problem-deciding whether signals come from common causes and should hence be integrated or else be treated independently. Next, we describe the multifarious interplay between multisensory processing and attention. We argue that attentional mechanisms are crucial to compute approximate solutions to the binding problem in naturalistic environments when complex time-varying signals arise from myriad causes. Finally, we review how the brain dynamically adapts multisensory processing to a changing world across multiple timescales.
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, The Netherlands
19. Siemann JK, Veenstra-VanderWeele J, Wallace MT. Approaches to Understanding Multisensory Dysfunction in Autism Spectrum Disorder. Autism Res 2020;13:1430-1449. PMID: 32869933. PMCID: PMC7721996. DOI: 10.1002/aur.2375.
Abstract
Abnormal sensory responses are a DSM-5 symptom of autism spectrum disorder (ASD), and research findings demonstrate altered sensory processing in ASD. Beyond difficulties with processing information within single sensory domains, including both hypersensitivity and hyposensitivity, difficulties in multisensory processing are becoming a core issue of focus in ASD. These difficulties may be targeted by treatment approaches such as "sensory integration," which is frequently applied in autism treatment but not yet based on clear evidence. Recently, psychophysical data have emerged to demonstrate multisensory deficits in some children with ASD. Unlike deficits in social communication, which are best understood in humans, sensory and multisensory changes offer a tractable marker of circuit dysfunction that is more easily translated into animal model systems to probe the underlying neurobiological mechanisms. Paralleling experimental paradigms that were previously applied in humans and larger mammals, we and others have demonstrated that multisensory function can also be examined behaviorally in rodents. Here, we review the sensory and multisensory difficulties commonly found in ASD, examining laboratory findings that relate these findings across species. Next, we discuss the known neurobiology of multisensory integration, drawing largely on experimental work in larger mammals, and extensions of these paradigms into rodents. Finally, we describe emerging investigations into multisensory processing in genetic mouse models related to autism risk. By detailing findings from humans to mice, we highlight the advantage of multisensory paradigms that can be easily translated across species, as well as the potential for rodent experimental systems to reveal opportunities for novel treatments. LAY SUMMARY: Sensory and multisensory deficits are commonly found in ASD and may result in cascading effects that impact social communication. By using similar experiments to those in humans, we discuss how studies in animal models may allow an understanding of the brain mechanisms that underlie difficulties in multisensory integration, with the ultimate goal of developing new treatments. Autism Res 2020, 13: 1430-1449. © 2020 International Society for Autism Research, Wiley Periodicals, Inc.
Affiliation(s)
- Justin K Siemann
- Department of Biological Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Jeremy Veenstra-VanderWeele
- Department of Psychiatry, Columbia University, Center for Autism and the Developing Brain, New York Presbyterian Hospital, and New York State Psychiatric Institute, New York, New York, USA
- Mark T Wallace
- Department of Psychiatry, Vanderbilt University, Nashville, Tennessee, USA
- Department of Psychology, Vanderbilt University, Nashville, Tennessee, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, Tennessee, USA
20. Cheng L, Guo ZY, Qu YL. Cross-modality modulation of auditory midbrain processing of intensity information. Hear Res 2020;395:108042. PMID: 32810721. DOI: 10.1016/j.heares.2020.108042.
Abstract
In nature, animals constantly receive a multitude of sensory stimuli, such as visual, auditory, and somatosensory inputs. Integration across sensory modalities is advantageous for the precise processing of sensory inputs, which is essential for animal survival. Although some principles of cross-modality integration have been revealed by many studies, little insight has been gained into its functional potential. In this study, the functional influence of cross-modality modulation on auditory processing of intensity information was investigated by recording neuronal activity in the auditory midbrain (i.e., inferior colliculus, IC) under visual, auditory, and audiovisual stimulation conditions. Results demonstrated that combined audiovisual stimuli either enhanced or suppressed the responses of IC neurons compared to auditory stimuli alone, even though the same visual stimuli alone induced no response. Audiovisual modulation appeared to be strongest when the combined audiovisual stimuli were located at the best auditory azimuth of neurons as well as when presented with intensity at near-threshold levels. Additionally, the rate-intensity function of IC neurons to auditory stimuli was expanded or compressed by audiovisual modulation, which was highly dependent on the minimal threshold (MT) of neurons. A lowering of the MT, together with greater audiovisual modulation of the neuron, indicated an intensity-specific enhancement of auditory intensity sensitivity by cross-modality modulation. Overall, evidence suggests a potential functional role of cross-modality modulation in IC that serves to instruct adaptive plasticity to enhance the auditory perception of intensity information.
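One way to make the rate-intensity analysis above concrete is to fit a sigmoid to spike rates across sound levels under auditory-only and audiovisual conditions and compare the fitted parameters. The sketch below does this on simulated data; the function form, cut points, and parameter values are our assumptions, not the authors' fitting procedure.

```python
# Minimal sketch (simulated data, illustrative parameters): fit a sigmoid
# rate-intensity function under auditory-only (A) and audiovisual (AV)
# conditions and compare the fitted half-maximal sound levels.
import numpy as np
from scipy.optimize import curve_fit

def rate_intensity(level_db, r_max, level_50, slope, r_spont):
    """Sigmoid firing rate (spikes/s) as a function of sound level (dB SPL)."""
    return r_spont + r_max / (1.0 + np.exp(-(level_db - level_50) / slope))

levels = np.arange(0, 90, 10)
rng = np.random.default_rng(3)
rates_a  = rate_intensity(levels, 40, 45, 6, 2) + rng.normal(0, 2, levels.size)
rates_av = rate_intensity(levels, 48, 40, 6, 2) + rng.normal(0, 2, levels.size)  # simulated enhancement

p_a,  _ = curve_fit(rate_intensity, levels, rates_a,  p0=[40, 45, 5, 2])
p_av, _ = curve_fit(rate_intensity, levels, rates_av, p0=[40, 45, 5, 2])
print(f"half-maximal level: A-only {p_a[1]:.1f} dB vs AV {p_av[1]:.1f} dB")
```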
Affiliation(s)
- Liang Cheng
- School of Psychology & Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU) of Ministry of Education, Central China Normal University, Wuhan, 430079, China; School of Life Sciences & Hubei Key Lab of Genetic Regulation and Integrative Biology, Central China Normal University, Wuhan, 430079, China.
- Zhao-Yang Guo
- School of Psychology & Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU) of Ministry of Education, Central China Normal University, Wuhan, 430079, China
- Yi-Li Qu
- School of Psychology & Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU) of Ministry of Education, Central China Normal University, Wuhan, 430079, China
21. Selective attention to sound features mediates cross-modal activation of visual cortices. Neuropsychologia 2020;144:107498. PMID: 32442445. DOI: 10.1016/j.neuropsychologia.2020.107498.
Abstract
Contemporary schemas of brain organization now include multisensory processes both in low-level cortices as well as at early stages of stimulus processing. Evidence has also accumulated showing that unisensory stimulus processing can result in cross-modal effects. For example, task-irrelevant and lateralised sounds can activate visual cortices; a phenomenon referred to as the auditory-evoked contralateral occipital positivity (ACOP). Some claim this is an example of automatic attentional capture in visual cortices. Other results, however, indicate that context may play a determinant role. Here, we investigated whether selective attention to spatial features of sounds is a determining factor in eliciting the ACOP. We recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to four possible stimulus attributes: location, pitch, speaker identity or syllable. Sound acoustics were held constant, and their location was always equiprobable (50% left, 50% right). The only manipulation was to which sound dimension participants attended. We analysed the AEP data from healthy participants within an electrical neuroimaging framework. The presence of sound-elicited activations of visual cortices depended on the to-be-discriminated, goal-based dimension. The ACOP was elicited only when participants were required to discriminate sound location, but not when they attended to any of the non-spatial features. These results provide a further indication that the ACOP is not automatic. Moreover, our findings showcase the interplay between task-relevance and spatial (un)predictability in determining the presence of the cross-modal activation of visual cortices.
22. Grasso PA, Gallina J, Bertini C. Shaping the visual system: cortical and subcortical plasticity in the intact and the lesioned brain. Neuropsychologia 2020;142:107464. PMID: 32289349. DOI: 10.1016/j.neuropsychologia.2020.107464.
Abstract
The visual system is endowed with an incredibly complex organization composed of multiple visual pathways affording both hierarchical and parallel processing. Although most visual information is conveyed by the retina to the lateral geniculate nucleus of the thalamus and then to primary visual cortex, a wealth of alternative subcortical pathways is present. This complex organization is experience dependent and retains plastic properties throughout the lifespan, enabling the system to continuously update its functions in response to variable external needs. Changes can be induced by several factors, including learning and experience, but can also be promoted by the use of non-invasive brain stimulation techniques. Furthermore, besides the astonishing ability of our visual system to spontaneously reorganize after injuries, we now know that exposure to specific rehabilitative training can produce not only important functional modifications but also long-lasting changes within cortical and subcortical structures. The present review aims to update and summarize the current state of the art on these topics, gathering studies that report relevant modifications of visual functioning together with plastic changes within cortical and subcortical structures, both in the healthy and in the lesioned visual system.
Affiliation(s)
- Paolo A Grasso
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Florence, 50135, Italy.
- Jessica Gallina
- Department of Psychology, University of Bologna, Bologna, 40127, Italy; CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, 47521, Italy
- Caterina Bertini
- Department of Psychology, University of Bologna, Bologna, 40127, Italy; CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, 47521, Italy
23. Denervaud S, Gentaz E, Matusz PJ, Murray MM. Multisensory Gains in Simple Detection Predict Global Cognition in Schoolchildren. Sci Rep 2020;10:1394. PMID: 32019951. PMCID: PMC7000735. DOI: 10.1038/s41598-020-58329-4.
Abstract
The capacity to integrate information from different senses is central for coherent perception across the lifespan from infancy onwards. Later in life, multisensory processes are related to cognitive functions, such as speech or social communication. During learning, multisensory processes can in fact enhance subsequent recognition memory for unisensory objects. These benefits can even be predicted; adults' recognition memory performance is shaped by earlier responses in the same task to multisensory - but not unisensory - information. Everyday environments where learning occurs, such as classrooms, are inherently multisensory in nature. Multisensory processes may therefore scaffold healthy cognitive development. Here, we provide the first evidence of a predictive relationship between multisensory benefits in simple detection and higher-level cognition that is present already in schoolchildren. Multiple regression analyses indicated that the extent to which a child (N = 68; aged 4.5-15 years) exhibited multisensory benefits on a simple detection task not only predicted benefits on a continuous recognition task involving naturalistic objects (p = 0.009), even when controlling for age, but also predicted working memory scores (p = 0.023) and fluid intelligence scores (p = 0.033) as measured using age-standardised test batteries. By contrast, gains in unisensory detection did not show significant prediction of any of the above global cognition measures. Our findings show that low-level multisensory processes predict higher-order memory and cognition already during childhood, even if still subject to ongoing maturation. These results call for revision of traditional models of cognitive development (and likely also education) to account for the role of multisensory processing, while also opening exciting opportunities to facilitate early learning through multisensory programs. More generally, these data suggest that a simple detection task could provide direct insights into the integrity of global cognition in schoolchildren and could be further developed as a readily-implemented and cost-effective screening tool for neurodevelopmental disorders, particularly in cases when standard neuropsychological tests are infeasible or unavailable.
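The prediction analysis described above is, in essence, a multiple regression with age as a covariate. The sketch below shows that structure on simulated data; the variable names and effect sizes are hypothetical and do not reproduce the study's dataset or exact modelling choices.

```python
# Minimal sketch (simulated data, hypothetical variables): does a child's
# multisensory detection gain predict a working-memory score over and above age?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 68
age = rng.uniform(4.5, 15, n)                       # years
ms_gain = rng.normal(0.1, 0.05, n)                  # relative multisensory detection benefit
wm_score = 90 + 2.0 * age + 60 * ms_gain + rng.normal(0, 5, n)

df = pd.DataFrame({"age": age, "ms_gain": ms_gain, "wm_score": wm_score})
model = smf.ols("wm_score ~ age + ms_gain", data=df).fit()
print(model.summary().tables[1])   # coefficient for ms_gain while controlling for age
```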
Affiliation(s)
- Solange Denervaud
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland
- The Center for Affective Sciences (CISA), University of Geneva, Geneva, Switzerland
- Edouard Gentaz
- The Center for Affective Sciences (CISA), University of Geneva, Geneva, Switzerland
- Faculty of Psychology and Educational Sciences (FAPSE), University of Geneva, Geneva, Switzerland
- Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), 3960, Sierre, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland.
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
- Department of Ophthalmology, Fondation Asile des aveugles and University of Lausanne, Lausanne, Switzerland.
- Sensory, Cognitive and Perceptual Neuroscience Section, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland.
24. Stereotactic electroencephalography in humans reveals multisensory signal in early visual and auditory cortices. Cortex 2020;126:253-264. PMID: 32092494. DOI: 10.1016/j.cortex.2019.12.032.
Abstract
Unequivocally demonstrating the presence of multisensory signals at the earliest stages of cortical processing remains challenging in humans. In our study, we relied on the unique spatio-temporal resolution provided by intracranial stereotactic electroencephalographic (SEEG) recordings in patients with drug-resistant epilepsy to characterize the signal extracted from early visual (calcarine and pericalcarine) and auditory (Heschl's gyrus and planum temporale) regions during a simple audio-visual oddball task. We provide evidence that both cross-modal responses (visual responses in auditory cortex or the reverse) and multisensory processing (alteration of the unimodal responses during bimodal stimulation) can be observed in intracranial event-related potentials (iERPs) and in power modulations of oscillatory activity at different temporal scales within the first 150 msec after stimulus onset. The temporal profiles of the iERPs are compatible with the hypothesis that multisensory integration (MSI) occurs by means of direct pathways linking early visual and auditory regions. Our data indicate, moreover, that MSI mainly relies on modulations of the low-frequency bands (foremost the theta band in the auditory cortex and the alpha band in the visual cortex), suggesting the involvement of feedback pathways between the two sensory regions. Remarkably, we also observed high-gamma power modulations by sounds in the early visual cortex, thus suggesting the presence of neuronal populations involved in auditory processing in the calcarine and pericalcarine region in humans.
25. Unimodal and Bimodal Access to Sensory Working Memories by Auditory and Visual Impulses. J Neurosci 2019;40:671-681. PMID: 31754009. DOI: 10.1523/jneurosci.1194-19.2019.
Abstract
It is unclear to what extent sensory processing areas are involved in the maintenance of sensory information in working memory (WM). Previous studies have thus far relied on finding neural activity in the corresponding sensory cortices, neglecting potential activity-silent mechanisms, such as connectivity-dependent encoding. It has recently been found that visual stimulation during visual WM maintenance reveals WM-dependent changes through a bottom-up neural response. Here, we test whether this impulse response is uniquely visual and sensory-specific. Human participants (both sexes) completed visual and auditory WM tasks while electroencephalography was recorded. During the maintenance period, the WM network was perturbed serially with fixed and task-neutral auditory and visual stimuli. We show that a neutral auditory impulse-stimulus presented during the maintenance of a pure tone resulted in a WM-dependent neural response, providing evidence for the auditory counterpart to the visual WM findings reported previously. Interestingly, visual stimulation also resulted in an auditory WM-dependent impulse response, implicating the visual cortex in the maintenance of auditory information, either directly or indirectly, as a pathway to the neural auditory WM representations elsewhere. In contrast, during visual WM maintenance, only the impulse response to visual stimulation was content-specific, suggesting that visual information is maintained in a sensory-specific neural network, separated from auditory processing areas. SIGNIFICANCE STATEMENT: Working memory is a crucial component of intelligent, adaptive behavior. Our understanding of the neural mechanisms that support it has recently shifted: rather than being dependent on an unbroken chain of neural activity, working memory may rely on transient changes in neuronal connectivity, which can be maintained efficiently in activity-silent brain states. Previous work using a visual impulse stimulus to perturb the memory network has implicated such silent states in the retention of line orientations in visual working memory. Here, we show that auditory working memory similarly retains auditory information. We also observed a sensory-specific impulse response in visual working memory, while auditory memory responded bimodally to both visual and auditory impulses, possibly reflecting visual dominance of working memory.

26
Lau C, Manno FAM, Dong CM, Chan KC, Wu EX. Auditory-visual convergence at the superior colliculus in rat using functional MRI. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2018:5531-5536. [PMID: 30441590 DOI: 10.1109/embc.2018.8513633] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The superior colliculus (SC) of the midbrain has been a model structure for multisensory processing. Many neurons in the intermediate and deep SC layers respond to two or more of auditory, visual, and somatosensory stimuli as assessed by electrophysiology. In contrast, noninvasive and large field of view functional magnetic resonance imaging (fMRI) studies have focused on multisensory processing in the cortex. In this study, we applied blood oxygenation level-dependent (BOLD) fMRI in Sprague-Dawley rats receiving monaural (auditory) and binocular (visual) stimuli to study subcortical multisensory processing. Activation was observed in the left superior olivary complex, lateral lemniscus, and inferior colliculus and both hemispheres of the superior colliculus during auditory stimulation. The SC response was bilateral even though the stimulus was monaural. During visual stimulation, activation was observed in both hemispheres of the SC and lateral geniculate nucleus. In both hemispheres of the SC, the number of voxels in the activation area (p < 10⁻⁸) and BOLD signal changes (p < 0.01) were significantly greater during visual than auditory stimulation. These results provide functional imaging evidence that the SC is a site of auditory-visual convergence due to its involvement in both auditory and visual processing. The auditory and visual fMRI activations likely reflect the firing of unisensory and multisensory neurons in the SC. The present study lays the groundwork for noninvasive functional imaging studies of multisensory convergence and integration in the SC.

27
Wong NA, Rafique SA, Moro SS, Kelly KR, Steeves JKE. Altered white matter structure in auditory tracts following early monocular enucleation. Neuroimage Clin 2019; 24:102006. [PMID: 31622842 PMCID: PMC6812283 DOI: 10.1016/j.nicl.2019.102006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/02/2019] [Revised: 09/04/2019] [Accepted: 09/14/2019] [Indexed: 01/29/2023]
Abstract
Purpose: Similar to early blindness, monocular enucleation (the removal of one eye) early in life results in crossmodal behavioral and morphological adaptations. Previously it has been shown that partial visual deprivation from early monocular enucleation results in structural white matter changes throughout the visual system (Wong et al., 2018). The current study investigated structural white matter of the auditory system in adults who have undergone early monocular enucleation compared to binocular control participants.
Methods: We reconstructed four auditory and audiovisual tracts of interest using probabilistic tractography and compared microstructural properties of these tracts to those of binocularly intact controls using standard diffusion indices.
Results: Although both groups demonstrated asymmetries in indices in intrahemispheric tracts, monocular enucleation participants showed asymmetries opposite to control participants in the auditory and A1-V1 tracts. Monocular enucleation participants also demonstrated significantly lower fractional anisotropy in the audiovisual projections contralateral to the enucleated eye relative to control participants.
Conclusions: Partial vision loss from early monocular enucleation results in altered structural connectivity that extends into the auditory system, beyond tracts primarily dedicated to vision.
Highlights: Does losing one eye during postnatal maturation affect auditory white matter? Performed DTI of auditory and audiovisual tracts using probabilistic tractography. Patients differed in diffusion indices for auditory and audiovisual tracts. Early eye removal alters auditory white matter in addition to visual tracts.
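For readers unfamiliar with the diffusion indices mentioned above, fractional anisotropy (FA) is computed from the three eigenvalues of the diffusion tensor. The sketch below is only a generic illustration of that standard definition (the example eigenvalues are invented), not code from the study.

import numpy as np

def fractional_anisotropy(l1, l2, l3):
    # Standard FA definition from the three diffusion-tensor eigenvalues.
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return np.sqrt(0.5 * num / den)

# Illustrative eigenvalues (units of 10^-3 mm^2/s): a highly anisotropic,
# tract-like voxel versus a nearly isotropic one.
print(fractional_anisotropy(1.6, 0.4, 0.3))    # ~0.75
print(fractional_anisotropy(0.9, 0.8, 0.85))   # ~0.06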
Affiliation(s)
- Nikita A Wong
- Department of Psychology, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada
- Sara A Rafique
- Department of Psychology, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada
- Stefania S Moro
- Department of Psychology, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; Department of Ophthalmology and Visual Sciences, The Hospital for Sick Children, Toronto, ON, Canada
- Jennifer K E Steeves
- Department of Psychology, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; Department of Ophthalmology and Visual Sciences, The Hospital for Sick Children, Toronto, ON, Canada.

28
Feierabend M, Karnath HO, Lewald J. Auditory Space Perception in the Blind: Horizontal Sound Localization in Acoustically Simple and Complex Situations. Perception 2019; 48:1039-1057. [PMID: 31462156 DOI: 10.1177/0301006619872062] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Hans-Otto Karnath
- Center of Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany

29
Ahmad H, Setti W, Campus C, Capris E, Facchini V, Sandini G, Gori M. The Sound of Scotoma: Audio Space Representation Reorganization in Individuals With Macular Degeneration. Front Integr Neurosci 2019; 13:44. [PMID: 31481884 PMCID: PMC6710446 DOI: 10.3389/fnint.2019.00044] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2019] [Accepted: 08/05/2019] [Indexed: 12/12/2022] Open
Abstract
Blindness is an ideal condition in which to study the role of visual input in the development of spatial representation, as studies have shown how audio space representation reorganizes in blindness. However, how this spatial reorganization works is still unclear. A limitation of studying blindness is that it is a "stable" condition, which does not allow the mechanisms underlying the progress of this reorganization to be studied. To overcome this problem, here we study, for the first time, audio spatial reorganization in 18 adults with macular degeneration (MD), in whom the loss of vision due to a scotoma is an ongoing, progressive process. Our results show that the loss of vision produces immediate changes in the processing of spatial audio signals. In individuals with MD, lateral sounds are "attracted" toward the central scotoma position, resulting in a strong bias in the spatial auditory percept. This result suggests that the reorganization of audio space representation is a fast and plastic process that also occurs later in life, after vision loss.
Affiliation(s)
- Hafsah Ahmad
- Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy; Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy; Department of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genoa, Genoa, Italy
- Walter Setti
- Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy; Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy; Department of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genoa, Genoa, Italy
- Claudio Campus
- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Giulio Sandini
- Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy

30
Qiao Y, Li X, Shen H, Zhang X, Sun Y, Hao W, Guo B, Ni D, Gao Z, Guo H, Shang Y. Downward cross-modal plasticity in single-sided deafness. Neuroimage 2019; 197:608-617. [DOI: 10.1016/j.neuroimage.2019.05.031] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2019] [Revised: 03/21/2019] [Accepted: 05/10/2019] [Indexed: 10/26/2022] Open

31
Császár-Nagy N, Kapócs G, Bókkon I. Classic psychedelics: the special role of the visual system. Rev Neurosci 2019; 30:651-669. [PMID: 30939118 DOI: 10.1515/revneuro-2018-0092] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2018] [Accepted: 11/05/2018] [Indexed: 12/23/2022]
Abstract
Here, we briefly review the various aspects of classic serotonergic hallucinogens reported by a number of studies. A key hypothesis of our paper is that the visual effects of psychedelics might play a central role in resetting fears. We focus especially on visual processes because they are among the most prominent features of hallucinogen-induced hallucinations. We hypothesize that our brain has an ancient visual-based (preverbal) intrinsic cognitive process that, during the transient inhibition of top-down convergent and abstract thinking (mediated by the prefrontal cortex) by psychedelics, can neutralize emotional fears of unconscious and conscious life experiences from the past. In these processes, the decreased functional integrity of the self-referencing processes of the default mode network, the modified multisensory integration (linked to bodily self-consciousness and self-awareness), and the modified amygdala activity may also play key roles. Moreover, the emotional reset (elimination of stress-related emotions) by psychedelics may induce psychological changes and overwrite the stress-related neuroepigenetic information of past unconscious and conscious emotional fears.
Affiliation(s)
- Noemi Császár-Nagy
- National University of Public Services, Budapest, Hungary; Psychosomatic Outpatient Clinics, Budapest, Hungary
- Gábor Kapócs
- Saint John Hospital, Budapest, Hungary; Institute of Behavioral Sciences, Semmelweis University, Budapest, Hungary
- István Bókkon
- Psychosomatic Outpatient Clinics, Budapest, Hungary; Vision Research Institute, Neuroscience and Consciousness Research Department, Lowell, MA, USA

32
Császár N, Kapócs G, Bókkon I. A possible key role of vision in the development of schizophrenia. Rev Neurosci 2019; 30:359-379. [PMID: 30244235 DOI: 10.1515/revneuro-2018-0022] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Accepted: 08/01/2018] [Indexed: 12/12/2022]
Abstract
Based on a brief overview of the various aspects of schizophrenia reported by numerous studies, here we hypothesize that schizophrenia may originate from (and in part be carried out in) visual areas. In other words, it seems that a normal visual system, or at least an evanescent visual perception, may be an essential prerequisite for the development of schizophrenia as well as of various types of hallucinations. Our study focuses on auditory and visual hallucinations, as they are the most prominent types of hallucinations in schizophrenia (and also the most studied). Here, we evaluate the possible key role of the visual system in the development of schizophrenia.
Affiliation(s)
- Noemi Császár
- Gaspar Karoly University Psychological Institute, H-1091 Budapest, Hungary; Psychosomatic Outpatient Department, H-1037 Budapest, Hungary
- Gabor Kapócs
- Buda Family Centred Mental Health Centre, Department of Psychiatry and Psychiatric Rehabilitation, St. John Hospital, Budapest, Hungary
- István Bókkon
- Psychosomatic Outpatient Department, H-1037 Budapest, Hungary; Vision Research Institute, Neuroscience and Consciousness Research Department, 25 Rita Street, Lowell, MA 01854, USA

33
Lu L, Liu B. Revealing the multisensory modulation of auditory stimulus in degraded visual object recognition by dynamic causal modeling. Brain Imaging Behav 2019; 14:1187-1198. [PMID: 31172360 DOI: 10.1007/s11682-019-00134-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Recent evidence from neurophysiological and functional imaging research has demonstrated that semantically congruent sounds can modulate the identification of a degraded visual object. However, it remains unclear how different integration regions interact with each other when only the visual object is obscured. The present study aimed to elucidate the neural bases of cross-modal functional interactions in degraded visual object recognition. Naturally degraded images and semantically congruent sounds were used in our experiment. Participants were presented with three different modalities of audio-visual stimuli: auditory only (A), degraded visual only (Vd), and simultaneous auditory and degraded visual (AVd). We used conjunction analysis and the classical 'max criterion' to define three audiovisual integration cortical hubs: the visual association cortex, the superior temporal sulcus, and Heschl's gyrus. Dynamic causal modeling (DCM) was then used to infer effective connectivity between these regions. The DCM results revealed that the modulation of an auditory stimulus resulted in increased connectivity from Heschl's gyrus to the visual association cortex and from the superior temporal sulcus to the visual association cortex. It was shown that the visual association cortex is modulated not only via feedback and top-down connections from higher-order convergence areas but also via lateral feedforward connectivity from the auditory cortex. The present findings support interconnected models of cross-modal information integration.
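As a minimal sketch of the 'max criterion' mentioned above (the response values are invented and the voxel-wise logic is our own simplification, not the authors' pipeline), a location is flagged as integrative when the bimodal response exceeds the strongest unimodal response:

import numpy as np

# Hypothetical response amplitudes for three voxels (illustrative values only).
resp_a  = np.array([0.8, 0.2, 0.5])   # auditory only (A)
resp_vd = np.array([0.6, 0.7, 0.4])   # degraded visual only (Vd)
resp_av = np.array([1.1, 0.6, 1.2])   # auditory + degraded visual (AVd)

# Max criterion: flag voxels whose multisensory response exceeds the
# strongest unisensory response.
integrative = resp_av > np.maximum(resp_a, resp_vd)
print(integrative)   # [ True False  True]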
Affiliation(s)
- Lu Lu
- Institute of Disaster Medicine, Tianjin University, Tianjin, 300072, People's Republic of China
- Baolin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, People's Republic of China.

34
Störmer VS. Orienting spatial attention to sounds enhances visual processing. Curr Opin Psychol 2019; 29:193-198. [PMID: 31022562 DOI: 10.1016/j.copsyc.2019.03.010] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2018] [Revised: 03/12/2019] [Accepted: 03/14/2019] [Indexed: 11/20/2022]
Abstract
Attention, the mechanism by which information is selected for further processing, has mostly been studied within the visual system. While this research has been exceptionally successful, it is important to understand how attention operates across the sensory modalities. This review focuses on recent studies showing that orienting to a peripheral, salient sound affects visual processing: it enhances visual perception, boosts visual-cortical responses, and modulates visual cortex activity before the appearance of a visual object. Critically, all of these effects are spatially selective, indicating that spatial attention facilitates perceptual processing at an attended location across sensory modalities. The neural changes in visual cortex triggered by the sounds not only resemble some of the neural modulations reported in uni-modal visual attention studies, but also reveal some important differences.
Affiliation(s)
- Viola S Störmer
- Department of Psychology, University of California, San Diego, United States.

35
Van der Stoep N, Van der Stigchel S, Van Engelen RC, Biesbroek JM, Nijboer TCW. Impairments in Multisensory Integration after Stroke. J Cogn Neurosci 2019; 31:885-899. [PMID: 30883294 DOI: 10.1162/jocn_a_01389] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
The integration of information from multiple senses leads to a plethora of behavioral benefits, most predominantly to faster and better detection, localization, and identification of events in the environment. Although previous studies of multisensory integration (MSI) in humans have provided insights into the neural underpinnings of MSI, studies of MSI at a behavioral level in individuals with brain damage are scarce. Here, a well-known psychophysical paradigm (the redundant target paradigm) was employed to quantify MSI in a group of stroke patients. The relation between MSI and lesion location was analyzed using lesion subtraction analysis. Twenty-one patients with ischemic infarctions and 14 healthy control participants responded to auditory, visual, and audiovisual targets in the left and right visual hemifield. Responses to audiovisual targets were faster than to unisensory targets. This could be due to MSI or statistical facilitation. Comparing the audiovisual response times (RTs) to the winner of a race between the unisensory signals allowed us to determine whether participants could integrate auditory and visual information. The results indicated that (1) 33% of the patients showed an impairment in MSI; (2) patients with MSI impairment had left hemisphere and brainstem/cerebellar lesions; and (3) the left caudate, left pallidum, left putamen, left thalamus, left insula, left postcentral and precentral gyri, left central opercular cortex, left amygdala, and left OFC were more often damaged in patients with MSI impairments. These results are the first to demonstrate the impact of brain damage on MSI in stroke patients using a well-established psychophysical paradigm.
Affiliation(s)
- Tanja C W Nijboer
- Helmholtz Institute, Utrecht University; Brain Center Rudolph Magnus, University Medical Center, Utrecht University; Center for Brain Rehabilitation Medicine, Utrecht Medical Center, Utrecht University

36
Innes BR, Otto TU. A comparative analysis of response times shows that multisensory benefits and interactions are not equivalent. Sci Rep 2019; 9:2921. [PMID: 30814642 PMCID: PMC6393672 DOI: 10.1038/s41598-019-39924-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2018] [Accepted: 02/05/2019] [Indexed: 12/26/2022] Open
Abstract
Multisensory signals allow faster responses than the unisensory components. While this redundant signals effect (RSE) has been studied widely with diverse signals, no modelling approach has explored the RSE systematically across studies. For a comparative analysis, here, we propose three steps: The first quantifies the RSE compared to a simple, parameter-free race model. The second quantifies processing interactions beyond the race mechanism: history effects and so-called violations of Miller's bound. The third models the RSE on the level of response time distributions using a context-variant race model with two free parameters that account for the interactions. Mimicking the diversity of studies, we tested different audio-visual signals that target the interactions using a 2 × 2 design. We show that the simple race model provides overall a strong prediction of the RSE. Regarding interactions, we found that history effects do not depend on low-level feature repetition. Furthermore, violations of Miller's bound seem linked to transient signal onsets. Critically, the latter dissociates from the RSE, demonstrating that multisensory interactions and multisensory benefits are not equivalent. Overall, we argue that our approach, as a blueprint, provides both a general framework and the precision needed to understand the RSE when studied across diverse signals and participant groups.
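To make the parameter-free race model logic concrete, here is a rough sketch (the Gaussian response-time distributions and their parameters are our own assumptions, not the authors' data): the redundant-signal response is taken to be the faster of the two unisensory processes, and the resulting speed-up is the statistical-facilitation prediction against which the empirical RSE can be compared.

import numpy as np

rng = np.random.default_rng(0)
n = 10000

# Hypothetical unisensory response-time distributions (seconds), for illustration only.
rt_audio  = rng.normal(0.36, 0.05, n)
rt_visual = rng.normal(0.40, 0.06, n)

# Parameter-free race: the response is triggered by whichever signal finishes first.
rt_redundant = np.minimum(rt_audio, rt_visual)

# Redundant signals effect predicted by statistical facilitation alone.
rse = min(rt_audio.mean(), rt_visual.mean()) - rt_redundant.mean()
print(f"Predicted RSE from statistical facilitation: {rse * 1000:.1f} ms")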
Affiliation(s)
- Bobby R Innes
- School of Psychology & Neuroscience, St. Mary's Quad, South Street, St. Andrews, KY16 9JP, United Kingdom.
- Thomas U Otto
- School of Psychology & Neuroscience, St. Mary's Quad, South Street, St. Andrews, KY16 9JP, United Kingdom.

37
Stronger responses in the visual cortex of sighted compared to blind individuals during auditory space representation. Sci Rep 2019; 9:1935. [PMID: 30760758 PMCID: PMC6374481 DOI: 10.1038/s41598-018-37821-y] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2017] [Accepted: 12/11/2018] [Indexed: 01/02/2023] Open
Abstract
It has been previously shown that the interaction between vision and audition involves early sensory cortices. However, the functional role of these interactions and their modulation due to sensory impairment is not yet understood. To shed light on the impact of vision on auditory spatial processing, we recorded ERPs and collected psychophysical responses during space and time bisection tasks in sighted and blind participants. They listened to three consecutive sounds and judged whether the second sound was either spatially or temporally further from the first or the third sound. We demonstrate that the spatial metric representation of sounds elicits an early response of the visual cortex (P70) that differs between sighted and visually deprived individuals. Indeed, only in sighted, and not in blind, people is the P70 strongly selective for the spatial position of sounds, mimicking many aspects of the visual-evoked C1. These results suggest that early auditory processing associated with the construction of spatial maps is mediated by visual experience. The lack of vision might impair the projection of multi-sensory maps onto the retinotopic maps used by the visual cortex.

38
Maruyama AT, Komai S. Auditory-induced response in the primary sensory cortex of rodents. PLoS One 2018; 13:e0209266. [PMID: 30571722 PMCID: PMC6301624 DOI: 10.1371/journal.pone.0209266] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2018] [Accepted: 12/03/2018] [Indexed: 11/18/2022] Open
Abstract
The details of auditory response at the subthreshold level in the rodent primary somatosensory cortex, the barrel cortex, have not been studied extensively, although several phenomenological reports have been published. Multisensory features may act as neuronal representations of links between inputs from one sensory modality to other sensory modalities. Here, we examined the basic multisensory postsynaptic responses in the rodent barrel cortex using in vivo whole-cell recordings of neurons. We observed robust responses to acoustic stimuli in most barrel cortex neurons. Acoustically evoked responses were mediated by hearing and reached approximately 60% of the postsynaptic response amplitude elicited by strong somatosensory stimuli. Compared to tactile stimuli, auditory stimuli evoked postsynaptic potentials with a longer latency and longer duration. Specifically, auditory stimuli in barrel cortex neurons appeared to trigger "up states", episodes associated with membrane depolarization and increased synaptic activity. Taken together, our data suggest that barrel cortex neurons have multisensory properties, with distinct synaptic mechanisms underlying tactile and non-tactile responses.
Affiliation(s)
- Atsuko T. Maruyama
- Department of Science and Technology, Nara Institute of Science and Technology, Takayama, Japan
- Shoji Komai
- Department of Science and Technology, Nara Institute of Science and Technology, Takayama, Japan

39
Xia J, Zhang W, Jiang Y, Li Y, Chen Q. Neural practice effect during cross-modal selective attention: Supra-modal and modality-specific effects. Cortex 2018; 106:47-64. [PMID: 29864595 DOI: 10.1016/j.cortex.2018.05.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2017] [Revised: 01/11/2018] [Accepted: 05/02/2018] [Indexed: 11/25/2022]
Abstract
Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In a natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, what the supra-modal and modality-specific practice effects during cross-modal selective attention are, and moreover whether the practice effect shows modality preferences similar to the visual dominance effect in the multisensory environment. To answer the above two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention more flexibly adapted behavior with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., supra-modal mechanisms. On the other hand, functional decoupling between the auditory and the visual system was observed with the progress of practice, which varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, whereas it was decoupled only from the ventral visual stream during visual attention. To efficiently suppress irrelevant visual information with practice, auditory attention needs to additionally decouple the auditory system from the dorsal visual stream. The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention.
Affiliation(s)
- Jing Xia
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou 510631, PR China
- Wei Zhang
- Epilepsy Center, Shanghai Deji Hospital, No. 378 Gulang Road, Putuo District, Shanghai 200331, PR China
- Yizhou Jiang
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou 510631, PR China
- You Li
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou 510631, PR China
- Qi Chen
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou 510631, PR China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, PR China.

40
Lu L, Zhang G, Xu J, Liu B. Semantically Congruent Sounds Facilitate the Decoding of Degraded Images. Neuroscience 2018; 377:12-25. [PMID: 29408368 DOI: 10.1016/j.neuroscience.2018.01.051] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2017] [Revised: 01/20/2018] [Accepted: 01/23/2018] [Indexed: 11/19/2022]
Abstract
Semantically congruent sounds can facilitate perception of visual objects in the human brain. However, the manner in which semantically congruent sounds affect cognitive processing for degraded visual stimuli remains unclear. We presented participants with naturalistic degraded images and semantically congruent sounds from different conceptual categories in three modalities: degraded visual only, auditory only, and auditory and degraded visual. Functional magnetic resonance imaging was performed to assess variations in brain-activation spatial patterns. In order to account for the facilitation of auditory modulation at different levels, four conceptual categories of stimuli were divided into coarse and fine groups. Conjunction analysis and multivariate pattern analysis were used to investigate integrative properties. Superadditive interactions were found in the visual association cortex and subadditive interactions were observed in the superior temporal sulcus/superior temporal gyrus (STS/STG). Our results demonstrate that the visual association cortex and STS/STG are involved in the integration of auditory and degraded visual information. In addition, the pattern classification results imply that semantically congruent sounds may facilitate identification of degraded images in both coarse and fine groups. Importantly, when naturalistic visual stimuli were further subdivided, facilitation through auditory modulation exhibited category selectivity.
Affiliation(s)
- Lu Lu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China
- Gaoyan Zhang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China
- Junhai Xu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China
- Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China; State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing 100084, PR China.

41
Huang R, Chen C, Sereno MI. Spatiotemporal integration of looming visual and tactile stimuli near the face. Hum Brain Mapp 2018; 39:2156-2176. [PMID: 29411461 PMCID: PMC5895522 DOI: 10.1002/hbm.23995] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2017] [Revised: 01/10/2018] [Accepted: 01/26/2018] [Indexed: 12/27/2022] Open
Abstract
Real-world objects approaching or passing by an observer often generate visual, auditory, and tactile signals with different onsets and durations. Prompt detection and avoidance of an impending threat depend on precise binding of looming signals across modalities. Here we constructed a multisensory apparatus to study the spatiotemporal integration of looming visual and tactile stimuli near the face. In a psychophysical experiment, subjects assessed the subjective synchrony between a looming ball and an air puff delivered to the same side of the face with a varying temporal offset. Multisensory stimuli with similar onset times were perceived as completely out of sync and assessed with the lowest subjective synchrony index (SSI). Across subjects, the SSI peaked at an offset between 800 and 1,000 ms, where the multisensory stimuli were perceived as optimally in sync. In an fMRI experiment, tactile, visual, tactile-visual out-of-sync (TVoS), and tactile-visual in-sync (TViS) stimuli were delivered to either side of the face in randomized events. Group-average statistical responses to different stimuli were compared within each surface-based region of interest (sROI) outlined on the cortical surface. Most sROIs showed a preference for contralateral stimuli and higher responses to multisensory than unisensory stimuli. In several bilateral sROIs, particularly the human MT+ complex and V6A, responses to spatially aligned multisensory stimuli (TVoS) were further enhanced when the stimuli were in-sync (TViS), as expressed by TVoS < TViS. This study demonstrates the perceptual and neural mechanisms of multisensory integration near the face, which has potential applications in the development of multisensory entertainment systems and media.
Affiliation(s)
- Ruey-Song Huang
- Institute for Neural Computation, University of California, San Diego, La Jolla, California
- Ching-fu Chen
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, California
- Martin I. Sereno
- Department of Psychology and Neuroimaging Center, San Diego State University, San Diego, California
- Experimental Psychology, University College London, London, UK

42
Ortiz JJ, Portillo W, Paredes RG, Young LJ, Alcauter S. Resting state brain networks in the prairie vole. Sci Rep 2018; 8:1231. [PMID: 29352154 PMCID: PMC5775431 DOI: 10.1038/s41598-017-17610-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2017] [Accepted: 11/24/2017] [Indexed: 12/20/2022] Open
Abstract
Resting state functional magnetic resonance imaging (rsfMRI) has shown the hierarchical organization of the human brain into large-scale complex networks, referred to as resting state networks. This technique has turned into a promising translational research tool after the finding of similar resting state networks in non-human primates, rodents and other animal models of great value for neuroscience. Here, we demonstrate and characterize the presence of resting state networks in Microtus ochrogaster, the prairie vole, an extraordinary animal model to study complex human-like social behavior, with potential implications for the research of normal social development, addiction and neuropsychiatric disorders. Independent component analysis of rsfMRI data from isoflurane-anesthetized prairie voles revealed cortical and subcortical networks, including primary motor and sensory networks, as well as putative salience and default mode networks. We further discuss how future research could help to close the gap between the properties of the large-scale functional organization and the underlying neurobiology of several aspects of social cognition. These results contribute to the evidence of preserved resting state brain networks across species and provide the foundations to explore the use of rsfMRI in the prairie vole for basic and translational research.
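For intuition about the independent component analysis step described above, the toy sketch below (synthetic data, two components, and scikit-learn's FastICA are our own choices; this is not the study's pipeline) decomposes a simulated time-by-voxel data matrix into component time courses and their associated spatial maps:

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_time, n_voxels = 200, 500

# Two synthetic "networks": spatial maps mixed with independent random time courses.
maps = np.zeros((2, n_voxels))
maps[0, :100] = 1.0       # network 1 occupies voxels 0-99
maps[1, 300:400] = 1.0    # network 2 occupies voxels 300-399
timecourses = rng.standard_normal((n_time, 2))

# Simulated rsfMRI data (time x voxels): mixture of the networks plus noise.
data = timecourses @ maps + 0.2 * rng.standard_normal((n_time, n_voxels))

# ICA recovers component time courses; the mixing matrix holds the spatial maps.
ica = FastICA(n_components=2, random_state=0)
est_timecourses = ica.fit_transform(data)   # (n_time, 2)
est_maps = ica.mixing_.T                    # (2, n_voxels)
print(np.round(np.abs(est_maps[:, [50, 350]]), 2))  # each component loads on one network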
Affiliation(s)
- Juan J Ortiz
- Instituto de Neurobiología, Universidad Nacional Autónoma de México. Boulevard Juriquilla 3001, Queretaro, 76230, Mexico
- Wendy Portillo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México. Boulevard Juriquilla 3001, Queretaro, 76230, Mexico
- Raul G Paredes
- Instituto de Neurobiología, Universidad Nacional Autónoma de México. Boulevard Juriquilla 3001, Queretaro, 76230, Mexico
- Larry J Young
- Department of Psychiatry and Behavioral Sciences, Silvio O. Conte Center for Oxytocin and Social Cognition, Center for Translational Social Neuroscience, Yerkes National Primate Research Center, Emory University, 954 Gatewood Rd., Atlanta, GA, 30322, USA
- Sarael Alcauter
- Instituto de Neurobiología, Universidad Nacional Autónoma de México. Boulevard Juriquilla 3001, Queretaro, 76230, Mexico.

43
Aggius-Vella E, Campus C, Finocchietti S, Gori M. Audio Motor Training at the Foot Level Improves Space Representation. Front Integr Neurosci 2017; 11:36. [PMID: 29326564 PMCID: PMC5741674 DOI: 10.3389/fnint.2017.00036] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Accepted: 12/05/2017] [Indexed: 11/26/2022] Open
Abstract
Spatial representation develops thanks to the integration of visual signals with those from the other senses. It has been shown that the lack of vision compromises the development of some spatial representations. In this study we tested the ability of a new rehabilitation device called ABBI (Audio Bracelet for Blind Interaction) to improve space representation. ABBI produces audio feedback linked to body movement. Previous studies from our group showed that this device improves the representation of the space around the upper part of the body in early blind adults. Here we evaluate whether the audio-motor feedback produced by ABBI can also improve audio spatial representation in sighted individuals in the space around the legs. Forty-five blindfolded sighted subjects participated in the study, subdivided into three groups. An audio space localization (front-back discrimination) task was performed twice by all groups of subjects, before and after different kinds of training. One group (experimental) performed audio-motor training with the ABBI device placed on the foot. A second group (control) performed free motor activity without audio feedback associated with body movement. The third group (control) passively listened to the ABBI sound moved at foot level by the experimenter, without producing any body movement. Results showed that only the experimental group, which trained with the audio-motor feedback, improved in accuracy for sound discrimination. No improvement was observed for the two control groups. These findings suggest that audio-motor training with ABBI improves audio space perception in the space around the legs in sighted individuals as well. This result provides important input for the rehabilitation of space representation in the lower part of the body.
Affiliation(s)
- Elena Aggius-Vella
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Claudio Campus
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Sara Finocchietti
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy

44
Spatial localization of sound elicits early responses from occipital visual cortex in humans. Sci Rep 2017; 7:10415. [PMID: 28874681 PMCID: PMC5585168 DOI: 10.1038/s41598-017-09142-z] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2017] [Accepted: 07/20/2017] [Indexed: 11/08/2022] Open
Abstract
Much evidence points to an interaction between vision and audition at early cortical sites. However, the functional role of these interactions is not yet understood. Here we show an early response of the occipital cortex to sound that is strongly linked to the spatial localization task performed by the observer. The early occipital response to a sound, usually absent, increased more than 10-fold when the sound was presented during a space localization task, but not during a time localization task. The response amplification was specific not only to the task but, surprisingly, also to the position of the stimulus in the two hemifields. We suggest that early occipital processing of sound is linked to the construction of an audio spatial map that may utilize the visual map of the occipital cortex.

45
Abstract
When I am looking at my coffee machine that makes funny noises, this is an instance of multisensory perception – I perceive this event by means of both vision and audition. But very often we only receive sensory stimulation from a multisensory event by means of one sense modality, for example, when I hear the noisy coffee machine in the next room, that is, without seeing it. The aim of this paper is to bring together empirical findings about multimodal perception and empirical findings about (visual, auditory, tactile) mental imagery and argue that on occasions like this, we have multimodal mental imagery: perceptual processing in one sense modality (here: vision) that is triggered by sensory stimulation in another sense modality (here: audition). Multimodal mental imagery is not a rare and obscure phenomenon. The vast majority of what we perceive are multisensory events: events that can be perceived in more than one sense modality – like the noisy coffee machine. And most of the time we are only acquainted with these multisensory events via a subset of the sense modalities involved – all the other aspects of these multisensory events are represented by means of multimodal mental imagery. This means that multimodal mental imagery is a crucial element of almost all instances of everyday perception.
Affiliation(s)
- Bence Nanay
- University of Antwerp, Antwerp, Belgium; Peterhouse, University of Cambridge, Cambridge, UK.

46
Abstract
Many philosophers use findings about sensory substitution devices in the grand debate about how we should individuate the senses. The big question is this: Is "vision" assisted by (tactile) sensory substitution really vision? Or is it tactile perception? Or some sui generis novel form of perception? My claim is that sensory substitution assisted "vision" is neither vision nor tactile perception, because it is not perception at all. It is mental imagery: visual mental imagery triggered by tactile sensory stimulation. But it is a special form of mental imagery that is triggered by corresponding sensory stimulation in a different sense modality, which I call "multimodal mental imagery."
Affiliation(s)
- Bence Nanay
- University of Antwerp, Belgium; Peterhouse, University of Cambridge, UK

47
Auditory-visual integration in fields of the auditory cortex. Hear Res 2017; 346:25-33. [PMID: 28115229 DOI: 10.1016/j.heares.2017.01.012] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/17/2016] [Revised: 01/12/2017] [Accepted: 01/17/2017] [Indexed: 11/21/2022]
Abstract
While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimulus onset. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields from approximately 110 ms after stimulus onset. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields.

48
Abstract
The use of separate multisensory signals is often beneficial. A prominent example is the speed-up of responses to two redundant signals relative to the components, which is known as the redundant signals effect (RSE). A convenient explanation for the effect is statistical facilitation, which is inherent in the basic architecture of race models (Raab, 1962, Trans. N. Y. Acad. Sci. 24, 574–590). However, this class of models has been largely rejected in multisensory research, which we think results from an ambiguity in definitions and misinterpretations of the influential race model test (Miller, 1982, Cogn. Psychol. 14, 247–279). To resolve these issues, we here discuss four main items. First, we clarify definitions and ask how successful models of perceptual decision making can be extended from uni- to multisensory decisions. Second, we review the race model test and emphasize elements leading to confusion with its interpretation. Third, we introduce a new approach to study the RSE. As a major change of direction, our working hypothesis is that the basic race model architecture is correct even if the race model test seems to suggest otherwise. Based on this approach, we argue that understanding the variability of responses is the key to understand the RSE. Finally, we highlight the critical role of model testability to advance research on multisensory decisions. Despite being largely rejected, it should be recognized that race models, as part of a broader class of parallel decision models, demonstrate, in fact, a convincing explanatory power in a range of experimental paradigms. To improve research consistency in the future, we conclude with a short checklist for RSE studies.
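To illustrate the race model test discussed above (a minimal sketch under our own assumptions; the function names and made-up response times are ours), Miller's bound says that at every time t the cumulative distribution of redundant-signal RTs may not exceed the sum of the two unisensory cumulative distributions; positive values in the sketch below would count as violations:

import numpy as np

def ecdf(sample, t):
    # Empirical cumulative distribution of response times evaluated at times t.
    sample = np.sort(np.asarray(sample))
    return np.searchsorted(sample, t, side="right") / sample.size

def miller_violation(rt_av, rt_a, rt_v, t):
    # Positive values indicate violations of Miller's bound: F_AV(t) > F_A(t) + F_V(t).
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return ecdf(rt_av, t) - bound

# Made-up response times (seconds), for illustration only.
rng = np.random.default_rng(1)
rt_a  = rng.normal(0.36, 0.05, 500)
rt_v  = rng.normal(0.40, 0.06, 500)
rt_av = rng.normal(0.32, 0.05, 500)

t_grid = np.linspace(0.20, 0.50, 31)
print(np.max(miller_violation(rt_av, rt_a, rt_v, t_grid)))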
Affiliation(s)
- Thomas U. Otto
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
- Pascal Mamassian
- Laboratoire des Systèmes Perceptifs (CNRS UMR 8248), Ecole Normale Supérieure — PSL Research University, Paris, France

49
Gordon I, Jack A, Pretzsch CM, Vander Wyk B, Leckman JF, Feldman R, Pelphrey KA. Intranasal Oxytocin Enhances Connectivity in the Neural Circuitry Supporting Social Motivation and Social Perception in Children with Autism. Sci Rep 2016; 6:35054. [PMID: 27845765 PMCID: PMC5109935 DOI: 10.1038/srep35054] [Citation(s) in RCA: 73] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2016] [Accepted: 09/23/2016] [Indexed: 02/07/2023] Open
Abstract
Oxytocin (OT) has become a focus in investigations of autism spectrum disorder (ASD). The social deficits that characterize ASD may relate to reduced connectivity between brain sites on the mesolimbic reward pathway (nucleus accumbens; amygdala) that receive OT projections and contribute to social motivation, and cortical sites involved in social perception. Using functional magnetic resonance imaging and a randomized, double blind, placebo-controlled crossover design, we show that OT administration in ASD increases activity in brain regions important for perceiving social-emotional information. Further, OT enhances connectivity between nodes of the brain’s reward and socioemotional processing systems, and does so preferentially for social (versus nonsocial) stimuli. This effect is observed both while viewing coherent versus scrambled biological motion, and while listening to happy versus angry voices. Our findings suggest a mechanism by which intranasal OT may bolster social motivation—one that could, in future, be harnessed to augment behavioral treatments for ASD.
Affiliation(s)
- Ilanit Gordon
- Child Study Center, Yale University, New Haven, CT 06520, USA; Department of Psychology, Bar-Ilan University, Ramat-Gan 5290002, Israel
- Allison Jack
- Autism and Neurodevelopmental Disorders Institute, George Washington University, Ashburn, VA 20147, USA
- James F Leckman
- Child Study Center, Yale University, New Haven, CT 06520, USA
- Ruth Feldman
- Child Study Center, Yale University, New Haven, CT 06520, USA; Department of Psychology, Bar-Ilan University, Ramat-Gan 5290002, Israel
- Kevin A Pelphrey
- Autism and Neurodevelopmental Disorders Institute, George Washington University, Ashburn, VA 20147, USA

50
Seeing the sound after visual loss: functional MRI in acquired auditory-visual synesthesia. Exp Brain Res 2016; 235:415-420. [DOI: 10.1007/s00221-016-4802-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2016] [Accepted: 10/12/2016] [Indexed: 10/20/2022]