1. Wu J, Nie S, Li C, Wang X, Peng Y, Shang J, Diao L, Ding H, Si Q, Wang S, Tong R, Li Y, Sun L, Zhang J. Sound-localization-related activation and functional connectivity of dorsal auditory pathway in relation to demographic, cognitive, and behavioral characteristics in age-related hearing loss. Front Neurosci 2024;18:1353413. PMID: 38562303; PMCID: PMC10982313; DOI: 10.3389/fnins.2024.1353413.
Abstract
Background: Patients with age-related hearing loss (ARHL) often struggle with tracking and locating sound sources, but the neural signature associated with these impairments remains unclear.
Materials and methods: Using a passive listening task with stimuli from five horizontal directions during functional magnetic resonance imaging, we defined functional regions of interest (ROIs) of the auditory "where" pathway based on the previous literature and on data from young normal-hearing listeners (n = 20). We then investigated associations of the demographic, cognitive, and behavioral features of sound localization with task-based activation and connectivity of these ROIs in ARHL patients (n = 22).
Results: Increased activation of high-level regions, such as the premotor cortex and inferior parietal lobule, was associated with higher localization accuracy and better cognitive function. Moreover, increased connectivity between the left planum temporale and the left superior frontal gyrus was associated with higher localization accuracy in ARHL. In contrast, increased connectivity between the right primary auditory cortex and the right middle temporal gyrus, between the right premotor cortex and the left anterior cingulate cortex, and between the right planum temporale and the left lingual gyrus was associated with lower localization accuracy. Among the ARHL patients, task-dependent activation and connectivity of certain ROIs were also associated with education, duration of hearing loss, and cognitive function.
Conclusion: Consistent with the sensory deprivation hypothesis, sound source identification in ARHL, which requires advanced processing in high-level cortex, is impaired, whereas right-left discrimination, which relies on the primary sensory cortex, is preserved through compensation, with a tendency to recruit additional cognitive and attentional resources to the auditory sensory cortex. Overall, this study expands our understanding of the neural mechanisms contributing to sound localization deficits in ARHL, and the activation and connectivity measures reported here may serve as potential imaging biomarkers for investigating and predicting anomalous sound localization.
Affiliation(s)
- Junzhi Wu
  Department of Otorhinolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Shuai Nie
  Department of Otorhinolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Chunlin Li
  School of Biomedical Engineering, Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
- Xing Wang
  Department of Otorhinolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Ye Peng
  Department of Otorhinolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Jiaqi Shang
  Center of Clinical Hearing, Shandong Second Provincial General Hospital, Jinan, Shandong, China
- Linan Diao
  Department of Otorhinolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Hongping Ding
  College of Special Education, Binzhou Medical University, Yantai, Shandong, China
- Qian Si
  School of Cyber Science and Technology, Beihang University, Beijing, China
- Songjian Wang
  Key Laboratory of Otolaryngology, Head and Neck Surgery, Ministry of Education, Beijing Institute of Otolaryngology, Beijing, China
  Department of Otolaryngology, Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Renjie Tong
  School of Biomedical Engineering, Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
- Yutang Li
  School of Biomedical Engineering, Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
- Liwei Sun
  School of Biomedical Engineering, Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
- Juan Zhang
  Department of Otorhinolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
2. Zhang H, Xie J, Tao Q, Xiao Y, Cui G, Fang W, Zhu X, Xu G, Li M, Han C. The effect of motion frequency and sound source frequency on steady-state auditory motion evoked potential. Hear Res 2023;439:108897. PMID: 37871451; DOI: 10.1016/j.heares.2023.108897.
Abstract
The ability of humans to perceive moving sound sources is important for responding accurately to the environment. Periodic motion of a sound source can elicit a steady-state auditory motion evoked potential (SSMAEP). The purpose of this study was to investigate the effects of different motion frequencies and different sound source frequencies on SSMAEP. Stimulation paradigms simulating periodic motion of sound sources were designed using head-related transfer function (HRTF) techniques. The motion frequencies were set to 1-10 Hz, 15 Hz, 20 Hz, 30 Hz, 40 Hz, 60 Hz, and 80 Hz. In addition, the sound source frequencies were set to 500 Hz, 1000 Hz, 2000 Hz, 3000 Hz, and 4000 Hz at motion frequencies of 6 Hz and 40 Hz. Fourteen subjects with normal hearing were recruited for the study. SSMAEP was elicited by the 500 Hz pure tone at all tested motion frequencies and was strongest at a motion frequency of 6 Hz. Moreover, at the 6 Hz motion frequency, the SSMAEP amplitude was largest for the 500 Hz tone and smallest for the 4000 Hz tone, whereas SSMAEP elicited by the 4000 Hz pure tone was strongest at a motion frequency of 40 Hz. Thus, SSMAEP can be elicited by periodic motion of sound sources at motion frequencies up to 80 Hz, with robust responses at the lower motion frequencies. Low-frequency pure tones enhance SSMAEP for low-frequency sound source motion, whereas high-frequency pure tones enhance SSMAEP for high-frequency sound source motion. The study provides new insight into the brain's perception of rhythmic auditory motion.
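The paper renders motion with HRTF filtering; as a simplified, purely illustrative stand-in (equal-power interaural panning rather than true HRTF convolution, with hypothetical function and parameter names), a periodic-motion stimulus like the 6 Hz motion / 500 Hz carrier condition could be sketched as:

```python
import numpy as np

def periodic_motion_stimulus(carrier_hz=500, motion_hz=6, dur_s=2.0, fs=44100):
    """Binaural stimulus whose apparent azimuth oscillates at `motion_hz`.

    Simplified stand-in for HRTF-based rendering: only interaural level
    differences (sinusoidal equal-power panning) convey the motion here.
    """
    t = np.arange(int(dur_s * fs)) / fs
    tone = np.sin(2 * np.pi * carrier_hz * t)       # pure-tone carrier
    azimuth = np.sin(2 * np.pi * motion_hz * t)     # -1 (left) .. +1 (right)
    pan = (azimuth + 1) / 2                         # 0..1 panning coefficient
    left = tone * np.sqrt(1 - pan)                  # equal-power panning keeps
    right = tone * np.sqrt(pan)                     # left^2 + right^2 = tone^2
    return np.stack([left, right], axis=0)          # shape (2, n_samples)

stim = periodic_motion_stimulus()
```

A real implementation would instead convolve the carrier with measured HRTFs for successive azimuths, which also supplies the interaural time and spectral cues this sketch omits.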
Affiliation(s)
- Huanqing Zhang
  School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Jun Xie
  School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; School of Mechanical Engineering, Xinjiang University, Urumqi, China; National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Qing Tao
  School of Mechanical Engineering, Xinjiang University, Urumqi, China
- Yi Xiao
  National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Guiling Cui
  National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Wenhu Fang
  School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Xinyu Zhu
  School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Guanghua Xu
  School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Min Li
  School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Chengcheng Han
  School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
3. Kral A, Sharma A. Crossmodal plasticity in hearing loss. Trends Neurosci 2023;46:377-393. PMID: 36990952; PMCID: PMC10121905; DOI: 10.1016/j.tins.2023.02.004.
Abstract
Crossmodal plasticity is a textbook example of the brain's ability to reorganize based on use. We review evidence from the auditory system showing that such reorganization has significant limits, depends on pre-existing circuitry and top-down interactions, and that extensive reorganization is often absent. We argue that the evidence does not support the hypothesis that crossmodal reorganization is responsible for closing critical periods in deafness; instead, crossmodal plasticity represents a dynamically adaptable neuronal process. We evaluate the evidence for crossmodal changes in both developmental and adult-onset deafness, which begin with as little as mild-to-moderate hearing loss and show reversibility when hearing is restored. Finally, crossmodal plasticity does not appear to affect the neuronal preconditions for successful hearing restoration. Given its dynamic and versatile nature, we describe how this plasticity can be exploited to improve clinical outcomes after neurosensory restoration.
Affiliation(s)
- Andrej Kral
  Institute of AudioNeuroTechnology and Department of Experimental Otology, Otolaryngology Clinics, Hannover Medical School, Hannover, Germany; Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, NSW, Australia
- Anu Sharma
  Department of Speech Language and Hearing Science, Center for Neuroscience, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA
4. Sun L, Li C, Wang S, Si Q, Lin M, Wang N, Sun J, Li H, Liang Y, Wei J, Zhang X, Zhang J. Left frontal eye field encodes sound locations during passive listening. Cereb Cortex 2023;33:3067-3079. PMID: 35858212; DOI: 10.1093/cercor/bhac261.
Abstract
Previous studies have reported that the auditory cortices (AC) are activated mostly by sounds coming from the contralateral hemifield. As a result, sound locations can be encoded by integrating opposite activations from the two sides of AC ("opponent hemifield coding"). However, the human auditory "where" pathway also includes a series of parietal and prefrontal regions, and it was unknown how sound locations are represented in these high-level regions during passive listening. Here, we investigated the neural representation of sound locations in high-level regions using voxel-level tuning analysis, region-of-interest-level (ROI-level) laterality analysis, and ROI-level multivariate pattern analysis. Functional magnetic resonance imaging data were collected while participants listened passively to sounds from various horizontal locations. We found that opponent hemifield coding of sound locations existed not only in AC but also spanned the intraparietal sulcus, superior parietal lobule, and frontal eye field (FEF). Furthermore, multivariate pattern representation of sound locations in both hemifields was observed in left AC, right AC, and left FEF. Overall, our results demonstrate that left FEF, a high-level region along the auditory "where" pathway, encodes sound locations during passive listening in two ways: a univariate opponent-hemifield activation representation and a multivariate full-field activation pattern representation.
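The opponent hemifield scheme described above can be illustrated with a toy decoder (function and variable names are hypothetical, not from the study): because each auditory cortex responds more strongly to contralateral sources, a normalized right-minus-left activation difference tracks the lateral position of a sound.

```python
import numpy as np

def opponent_hemifield_decode(left_roi, right_roi):
    """Estimate lateral sound position from paired ROI activations.

    Illustrative reading of "opponent hemifield coding": inputs are
    trial-wise activation estimates (e.g., GLM betas) for homologous
    left- and right-hemisphere ROIs. A source in the LEFT hemifield
    drives the RIGHT hemisphere more strongly, so a positive index
    signals a left-hemifield source and a negative index a right one.
    """
    left_roi = np.asarray(left_roi, dtype=float)
    right_roi = np.asarray(right_roi, dtype=float)
    # Normalized opponent signal, bounded in [-1, 1]; the small constant
    # guards against division by zero when both responses vanish.
    return (right_roi - left_roi) / (np.abs(right_roi) + np.abs(left_roi) + 1e-12)
```

The study's actual analyses (voxel-level tuning, laterality indices, multivariate decoding) are richer than this single contrast, but they rest on the same contralateral-preference logic.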
Affiliation(s)
- Liwei Sun
  School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
  Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
  Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Chunlin Li
  School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
  Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
  Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Songjian Wang
  School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
  Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Qian Si
  School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
  Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
  Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Meng Lin
  School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
  Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Ningyu Wang
  Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China
- Jun Sun
  Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
- Hongjun Li
  Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
- Ying Liang
  School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
  Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
  Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Jing Wei
  School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
  Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
  Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Xu Zhang
  School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
  Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
  Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Juan Zhang
  Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China
5. Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023;17:1108354. PMID: 36816496; PMCID: PMC9932987; DOI: 10.3389/fnhum.2023.1108354.
Abstract
In face-to-face communication, humans must deal with multiple layers of discontinuous multimodal signals, such as head, face, and hand gestures, speech, and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only those signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat reliably and efficiently? To address this question, we need to move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts, and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling, and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti
  Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
- Ambra Ferrari
  Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani
  Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
6. Fine I, Park WJ. Do you hear what I see? How do early blind individuals experience object motion? Philos Trans R Soc Lond B Biol Sci 2023;378:20210460. PMID: 36511418; PMCID: PMC9745882; DOI: 10.1098/rstb.2021.0460.
Abstract
One of the most important tasks for 3D vision is tracking the movement of objects in space. The ability of early blind individuals to understand motion in the environment from noisy and unreliable auditory information is an impressive example of cortical adaptation that is only just beginning to be understood. Here, we compare visual and auditory motion processing, and discuss the effect of early blindness on the perception of auditory motion. Blindness leads to cross-modal recruitment of the visual motion area hMT+ for auditory motion processing. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion. We discuss how this dramatic shift in the cortical basis of motion processing might influence the perceptual experience of motion in early blind individuals. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Ione Fine
  Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
- Woon Ju Park
  Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
7. Battal C, Gurtubay-Antolin A, Rezk M, Mattioni S, Bertonati G, Occelli V, Bottini R, Targher S, Maffei C, Jovicich J, Collignon O. Structural and Functional Network-Level Reorganization in the Coding of Auditory Motion Directions and Sound Source Locations in the Absence of Vision. J Neurosci 2022;42:4652-4668. PMID: 35501150; PMCID: PMC9186796; DOI: 10.1523/jneurosci.1554-21.2022.
Abstract
hMT+/V5 is a region in the middle occipitotemporal cortex that responds preferentially to visual motion in sighted people. In cases of early visual deprivation, hMT+/V5 enhances its response to moving sounds. Whether hMT+/V5 contains information about motion directions, and whether the functional enhancement observed in the blind is motion specific or also involves sound source location, remain unresolved. Moreover, the impact of this cross-modal reorganization of hMT+/V5 on the regions typically supporting auditory motion processing, such as the human planum temporale (hPT), remains equivocal. We used a combined functional and diffusion-weighted MRI approach and individual in-ear recordings to study the impact of early blindness on the brain networks supporting spatial hearing in male and female humans. Whole-brain univariate analysis revealed that the anterior portion of hMT+/V5 responded to moving sounds in both sighted and blind people, while the posterior portion was selective for moving sounds only in blind participants. Multivariate decoding analysis revealed that motion direction and sound position information was higher in hMT+/V5 and lower in hPT in the blind group. While both groups showed an axis-of-motion organization in hMT+/V5 and hPT, this organization was reduced in the hPT of blind people. Diffusion-weighted MRI revealed that the strength of hMT+/V5-hPT connectivity did not differ between groups, whereas the microstructure of the connections was altered by blindness. Our results suggest that the axis-of-motion organization of hMT+/V5 does not depend on visual experience, but that congenital blindness alters the response properties of occipitotemporal networks that support spatial hearing in the sighted.
Significance statement: Spatial hearing helps living organisms navigate their environment, and this is certainly even more true for people born blind. How does blindness affect the brain network supporting auditory motion and sound source location? Our results show that motion direction and sound position information was higher in hMT+/V5 and lower in the human planum temporale in blind relative to sighted people, and that this functional reorganization is accompanied by microstructural (but not macrostructural) alterations in their connections. These findings suggest that blindness alters cross-modal responses between connected areas that share the same computational goals.
Affiliation(s)
- Ceren Battal
  Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
  Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Ane Gurtubay-Antolin
  Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
  BCBL, Basque Center on Cognition, Brain and Language, 20009 Donostia-San Sebastián, Spain
- Mohamed Rezk
  Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
  Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Stefania Mattioni
  Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
  Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Giorgia Bertonati
  Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Valeria Occelli
  Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
  Department of Psychology, Edge Hill University, Ormskirk L39 4QP, United Kingdom
- Roberto Bottini
  Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Stefano Targher
  Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- Chiara Maffei
  Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts 01129
- Jorge Jovicich
  Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Olivier Collignon
  Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
  Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
  School of Health Sciences, HES-SO Valais-Wallis, 1950 Sion, Switzerland
  The Sense Innovation and Research Center, CH-1011 Lausanne, Switzerland
8. Benetti S, Collignon O. Cross-modal integration and plasticity in the superior temporal cortex. Handb Clin Neurol 2022;187:127-143. PMID: 35964967; DOI: 10.1016/b978-0-12-823493-8.00026-2.
Abstract
In congenitally deaf people, temporal regions typically believed to be primarily auditory enhance their response to nonauditory information. The neural mechanisms and functional principles underlying this phenomenon, as well as its impact on auditory recovery after sensory restoration, remain debated. In this chapter, we demonstrate that the cross-modal recruitment of temporal regions by visual inputs in congenitally deaf people follows organizational principles known to be present in the hearing brain. We propose that the functional and structural mechanisms that allow optimal convergence of multisensory information in the temporal cortex of hearing people also provide the neural scaffolding for feeding visual or tactile information into the deafened temporal areas. Innate in nature, such anatomo-functional links between the auditory and other sensory systems would represent the common substrate of both early multisensory integration and the expression of selective cross-modal plasticity in the superior temporal cortex.
Affiliation(s)
- Stefania Benetti
  Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy
- Olivier Collignon
  Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy; Institute for Research in Psychology and Neuroscience, Faculty of Psychology and Educational Science, UC Louvain, Louvain-la-Neuve, Belgium
9. Rennig J, Beauchamp MS. Intelligibility of audiovisual sentences drives multivoxel response patterns in human superior temporal cortex. Neuroimage 2021;247:118796. PMID: 34906712; PMCID: PMC8819942; DOI: 10.1016/j.neuroimage.2021.118796.
Abstract
Regions of the human posterior superior temporal gyrus and sulcus (pSTG/S) respond to the visual mouth movements that constitute visual speech and to the auditory vocalizations that constitute auditory speech, and neural responses in pSTG/S may underlie the perceptual benefit of visual speech for the comprehension of noisy auditory speech. We examined this possibility through the lens of multivoxel pattern responses in pSTG/S. BOLD fMRI data were collected from 22 participants presented with English sentences in five different formats: visual-only; auditory with and without added auditory noise; and audiovisual with and without auditory noise. Participants reported the intelligibility of each sentence with a button press, and trials were sorted post hoc into those that were more or less intelligible. Response patterns were measured in regions of pSTG/S identified with an independent localizer. Noisy audiovisual sentences with very similar physical properties evoked very different response patterns depending on their intelligibility: when a noisy audiovisual sentence was reported as intelligible, the pattern was nearly identical to that elicited by clear audiovisual sentences, whereas an unintelligible noisy audiovisual sentence evoked a pattern like that of visual-only sentences. This effect was less pronounced for noisy auditory-only sentences, which evoked similar response patterns regardless of intelligibility. The successful integration of visual and auditory speech thus produces a characteristic neural signature in pSTG/S, highlighting the importance of this region in generating the perceptual benefit of visual speech.
Affiliation(s)
- Johannes Rennig
  Division of Neuropsychology, Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Michael S Beauchamp
  Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Richards Medical Research Building, A607, 3700 Hamilton Walk, Philadelphia, PA 19104-6016, United States
10. Berto M, Ricciardi E, Pietrini P, Bottari D. Interactions between auditory statistics processing and visual experience emerge only in late development. iScience 2021;24:103383. PMID: 34816108; PMCID: PMC8593607; DOI: 10.1016/j.isci.2021.103383.
Abstract
The auditory system relies on local and global representations to discriminate sounds. This study investigated whether vision influences the development and functioning of these fundamental sound computations. We employed a computational approach to control the statistical properties embedded in sounds and tested samples of sighted controls (SC) and congenitally blind (CB) and late-onset blind (LB) individuals in two experiments. In experiment 1, performance relied on the analysis of local features; in experiment 2, performance benefited from computing global representations. In both experiments, SC and CB performance overlapped remarkably. Conversely, LB individuals performed systematically worse than the other groups when relying on local features, with no alteration of global representations. The results suggest that the auditory computations tested here develop independently of vision, but that the efficiency of local auditory processing can be hampered if sight becomes unavailable later in life, supporting the existence of an audiovisual interplay for the processing of auditory details that emerges only in late development.
Highlights:
- Computational and deprivation models can be combined to assess sensory plasticity
- Basic auditory computations develop independently from early visual input
- Late-onset sight loss can hamper the efficiency of local auditory processing
Affiliation(s)
- Martina Berto
- Molecular Mind Lab, IMT School for Advanced Studies Lucca, 55100 Lucca, Italy
- Emiliano Ricciardi
- Molecular Mind Lab, IMT School for Advanced Studies Lucca, 55100 Lucca, Italy
- Pietro Pietrini
- Molecular Mind Lab, IMT School for Advanced Studies Lucca, 55100 Lucca, Italy
- Davide Bottari
- Molecular Mind Lab, IMT School for Advanced Studies Lucca, 55100 Lucca, Italy
11
Ferrari A, Noppeney U. Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy. PLoS Biol 2021; 19:e3001465. [PMID: 34793436] [PMCID: PMC8639080] [DOI: 10.1371/journal.pbio.3001465]
Abstract
To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals' causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
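The reliability-weighted combination at the heart of this Bayesian framework can be written down in a few lines: under forced fusion (a common source assumed), the optimal estimate weights each cue by its inverse variance, and Bayesian causal inference additionally blends the fused and unisensory estimates by the posterior probability of a common source. A hedged sketch, where the variances, the prior range, and the simplified alternative likelihood are made-up illustrative numbers:

```python
import numpy as np

def fuse(x_a, x_v, var_a, var_v):
    # Forced fusion: inverse-variance (reliability) weighting of the two cues.
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    return w_a * x_a + (1 - w_a) * x_v

def bci_estimate(x_a, x_v, var_a, var_v, var_prior=100.0, p_common=0.5):
    # Posterior probability that both signals came from one source, via the
    # likelihood of the observed audiovisual disparity under each hypothesis.
    var_sum = var_a + var_v
    like_c1 = np.exp(-(x_a - x_v) ** 2 / (2 * var_sum)) / np.sqrt(2 * np.pi * var_sum)
    like_c2 = 1.0 / (2 * np.sqrt(var_prior))  # crude flat alternative (assumption)
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Model averaging: blend the fused and segregated (here: auditory) estimates.
    return post_c1 * fuse(x_a, x_v, var_a, var_v) + (1 - post_c1) * x_a

# Vision more reliable than audition: the fused estimate sits nearer the visual cue.
est = fuse(x_a=10.0, x_v=0.0, var_a=4.0, var_v=1.0)  # -> 2.0
```

In this toy version, attending to vision would correspond to lowering `var_v`, pulling the fused estimate further toward the visual location, which mirrors the precueing effect the study reports in parietal representations.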
Affiliation(s)
- Ambra Ferrari
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
12
Dobrushina OR, Arina GA, Dobrynina LA, Novikova ES, Gubanova MV, Belopasova AV, Vorobeva VP, Suslina AD, Pechenkova EV, Perepelkina OS, Kremneva EI, Krotenkova MV. Sensory integration in interoception: Interplay between top-down and bottom-up processing. Cortex 2021; 144:185-197. [PMID: 34673435] [DOI: 10.1016/j.cortex.2021.08.009]
Abstract
Although the neural systems supporting interoception have been outlined in general, the exact processes underlying the integration of visceral signals still await research. Based on the predictive coding concept, we aimed to reveal the neural networks responsible for the bottom-up (stimulus-dependent) and top-down (model-dependent) processing of interoceptive information. In a study of 30 female participants, we utilized two classical body perception experiments: the rubber hand illusion and a heartbeat detection task (cardioception), the latter implemented in an fMRI setting. We interpreted a stronger rubber hand illusion, as measured by higher proprioceptive drift, as a tendency to rely on actual sensory experience, i.e., bottom-up processing, while lower proprioceptive drift served as an indicator of the prevalence of top-down, model-based influences. To reveal the bottom-up and top-down processes in cardioception, we performed a seed-based connectivity analysis of the heartbeat detection task, using as seeds the areas with known roles in sensory integration and entering proprioceptive drift as a covariate. The results revealed a left thalamus-dependent network positively associated with proprioceptive drift (bottom-up processing) and a left amygdala-dependent network negatively associated with drift (top-down processing). Bottom-up processing was related to thalamic connectivity with the left frontal operculum and anterior insula, anterior cingulate cortex, hypothalamus, right planum polare, and right inferior frontal gyrus. Top-down processing was related to amygdalar connectivity with the rostral prefrontal cortex and an area spanning the left frontal opercular and anterior insular cortex, the latter being an intersection of the two networks. Thus, we revealed the neural mechanisms underlying the integration of interoceptive information through the interaction between the current sensory experience and internal models.
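The second-level logic of such a seed-based analysis (compute each subject's seed-to-target correlation, then test whether it covaries with a behavioral measure entered as a covariate) can be sketched on synthetic data. The effect sizes here are invented for illustration; the actual study used whole-brain voxelwise maps with proper GLM inference:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_t = 30, 200

drift = rng.normal(size=n_sub)     # proprioceptive drift covariate, one per subject
conn = np.empty(n_sub)             # per-subject seed-to-target connectivity
for s in range(n_sub):
    seed = rng.normal(size=n_t)    # seed region time course
    # Simulated effect: the target shares more variance with the seed
    # in high-drift subjects (an assumption built into this toy data).
    coupling = 0.5 + 0.4 * drift[s]
    target = coupling * seed + rng.normal(size=n_t)
    conn[s] = np.corrcoef(seed, target)[0, 1]

# Second level: does connectivity covary with drift? (ordinary least squares)
X = np.column_stack([np.ones(n_sub), drift])
beta = np.linalg.lstsq(X, conn, rcond=None)[0]
slope = beta[1]   # a positive slope mirrors the thalamic "bottom-up" pattern
```

A negative slope over the same pipeline would correspond to the amygdala-dependent, top-down pattern described in the abstract.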
Affiliation(s)
- Galina A Arina
- M.V. Lomonosov Moscow State University, Faculty of Psychology, Moscow, Russia
13
Sardari S, Pourrahimi A, Fathi M, Talebi H, Mazhari S. Auditory processing in schizophrenia: Behavioural evidence of abnormal spatial awareness. Laterality 2021; 27:71-85. [PMID: 34293997] [DOI: 10.1080/1357650x.2021.1955910]
Abstract
Spatial processing deficits underlie many of the daily-life problems of patients with schizophrenia (SCZ). In this study, we examined whether an abnormal bias toward one hemifield, in the form of hemispatial neglect and extinction, occurs in the auditory modality in SCZ. Twenty-five SCZ patients and 25 healthy individuals were compared on speech tasks probing auditory neglect and extinction, as well as on an auditory localization task probing neglect. In the speech tasks, participants reproduced nonsense syllables played from one or two speakers on the right and/or left side. In the localization task, examinees reported the subjective location of noise stimuli presented randomly from five speakers. On the speech task, patients had significantly lower hit rates for the right ear than controls (p = 0.01); whereas healthy controls showed a right-ear advantage, SCZ patients showed a left-ear priority. In the localization task, although both groups showed a left-side bias, this bias was much more prominent in the patients (all p < 0.05). SCZ may thus alter auditory spatial function, which can appear as auditory neglect and extinction on the right side, depending on the characteristics of the patient population.
Affiliation(s)
- Sara Sardari
- Kerman Neuroscience Research center, Kerman University of Medical Sciences, Kerman, Iran
- AliMohammad Pourrahimi
- Kerman Neuroscience Research center, Kerman University of Medical Sciences, Kerman, Iran
- Mazyar Fathi
- Kerman Neuroscience Research center, Kerman University of Medical Sciences, Kerman, Iran
- Hosein Talebi
- Audiology department, Rehabilitation faculty, Isfahan University of Medical Sciences, Isfahan, Iran
- Shahrzad Mazhari
- Kerman Neuroscience Research center, Kerman University of Medical Sciences, Kerman, Iran
- Department of Psychiatry, Medical School, Kerman University of Medical Sciences, Kerman, Iran
14
Direct Structural Connections between Auditory and Visual Motion-Selective Regions in Humans. J Neurosci 2021; 41:2393-2405. [PMID: 33514674] [DOI: 10.1523/jneurosci.1552-20.2021]
Abstract
In humans, the occipital middle-temporal region (hMT+/V5) specializes in the processing of visual motion, while the planum temporale (hPT) specializes in auditory motion processing. It has been hypothesized that these regions might communicate directly to achieve fast and optimal exchange of multisensory motion information. Here we investigated, for the first time in humans (male and female), the presence of direct white matter connections between visual and auditory motion-selective regions using a combined fMRI and diffusion MRI approach. We found evidence supporting the potential existence of direct white matter connections between individually and functionally defined hMT+/V5 and hPT. We show that projections between hMT+/V5 and hPT do not overlap with large white matter bundles such as the inferior longitudinal fasciculus and the inferior frontal occipital fasciculus. Moreover, we did not find evidence of projections between the fusiform face area and hPT, supporting the functional specificity of hMT+/V5-hPT connections. Finally, the potential presence of hMT+/V5-hPT connections was corroborated in a large sample of participants (n = 114) from the Human Connectome Project. Together, this study provides a first indication of potential direct occipitotemporal projections between hMT+/V5 and hPT, which may support the exchange of motion information between functionally specialized auditory and visual regions.
Significance statement: Perceiving and integrating moving signals across the senses is arguably one of the most important perceptual skills for the survival of living organisms. To create a unified representation of movement, the brain must integrate motion information from separate senses. Our study provides support for the potential existence of direct connections between motion-selective regions in the occipital/visual (hMT+/V5) and temporal/auditory (hPT) cortices in humans. This connection could represent the structural scaffolding for the rapid and optimal exchange and integration of multisensory motion information. These findings suggest the existence of computationally specific pathways that allow information flow between areas that share a similar computational goal.
15
Visual motion processing recruits regions selective for auditory motion in early deaf individuals. Neuroimage 2021; 230:117816. [PMID: 33524580] [DOI: 10.1016/j.neuroimage.2021.117816]
Abstract
In early deaf individuals, the auditory-deprived temporal brain regions become engaged in visual processing. In our study, we further tested the hypothesis that intrinsic functional specialization guides the expression of cross-modal responses in the deprived auditory cortex. We used functional MRI to characterize the brain response to horizontal, radial, and stochastic visual motion in early deaf and hearing individuals matched for the use of oral or sign language. Visual motion showed an enhanced response in the 'deaf' mid-lateral planum temporale, a region selective for auditory motion as demonstrated by a separate auditory motion localizer in hearing people. Moreover, multivariate pattern analysis revealed that this reorganized temporal region showed enhanced decoding of motion categories in the deaf group, while the visual motion-selective region hMT+/V5 showed reduced decoding when compared to hearing people. Dynamic causal modelling revealed that the 'deaf' motion-selective temporal region shows a specific increase in its functional interactions with hMT+/V5 and is now part of a large-scale visual motion-selective network. In addition, we observed preferential responses to radial, compared to horizontal, visual motion in the 'deaf' right superior temporal cortex, a region that also shows a preferential response to approaching/receding sounds in the hearing brain. Overall, our results suggest that the early experience of auditory deprivation interacts with intrinsic constraints and triggers a large-scale reallocation of computational load between auditory and visual brain regions that typically support the multisensory processing of motion information.
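The multivariate pattern analysis reported here boils down to training a classifier on voxel activity patterns and testing whether motion categories can be predicted from held-out trials. A dependency-free sketch with a nearest-centroid decoder on synthetic patterns (real pipelines typically use linear SVMs with cross-validation over scanner runs; the region size, trial counts, and noise level below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_trials = 50, 40               # toy region size / trials per category

# Two motion categories, each a noisy version of its own voxel pattern.
proto = rng.normal(size=(2, n_vox))
X = np.vstack([p + 0.8 * rng.normal(size=(n_trials, n_vox)) for p in proto])
y = np.repeat([0, 1], n_trials)

# Split trials into train/test halves.
train = np.r_[0:20, 40:60]
test = np.r_[20:40, 60:80]

# Nearest-centroid decoding: assign each test pattern to the closest class mean.
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[test]).mean()    # decoding accuracy vs. 0.5 chance level
```

"Enhanced decoding" in the deaf group then simply means a higher `accuracy` in the reorganized temporal region than in hearing controls.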
16
Gaglianese A, Branco MP, Groen IIA, Benson NC, Vansteensel MJ, Murray MM, Petridou N, Ramsey NF. Electrocorticography Evidence of Tactile Responses in Visual Cortices. Brain Topogr 2020; 33:559-570. [PMID: 32661933] [PMCID: PMC7429547] [DOI: 10.1007/s10548-020-00783-4]
Abstract
There is ongoing debate regarding the extent to which human cortices are specialized for processing a given sensory input versus a given type of information, independently of the sensory source. Many neuroimaging and electrophysiological studies have reported that primary and extrastriate visual cortices respond to tactile and auditory stimulation in addition to visual inputs, suggesting these cortices are intrinsically multisensory. For tactile responses in particular, few studies have demonstrated such neuronal processing in human visual cortex. Here, we assessed tactile responses in both low-level and extrastriate visual cortices using electrocorticography recordings in a human participant. Specifically, we observed significant spectral power increases in the high-frequency band (30-100 Hz), reportedly associated with spiking neuronal activity, in response to tactile stimuli in both low-level visual cortex (i.e., V2) and the anterior part of the lateral occipital-temporal cortex. These sites were both involved in processing tactile information and responsive to visual stimulation. More generally, the present results add to a mounting literature supporting task-sensitive and sensory-independent mechanisms that underlie functions such as spatial, motion, and self-processing in the brain, extending from higher-level to low-level cortices.
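The spectral measure in this study, a power increase in the 30-100 Hz band, can be computed from a raw trace with a plain FFT periodogram. A sketch on simulated data (the sampling rate, window length, and 60 Hz "response" are arbitrary illustrative choices, not the study's parameters):

```python
import numpy as np

def band_power(signal, fs, lo=30.0, hi=100.0):
    # Periodogram via real FFT, then sum power inside the band of interest.
    spec = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spec[(freqs >= lo) & (freqs <= hi)].sum()

fs = 512.0
t = np.arange(1024) / fs
rng = np.random.default_rng(3)
baseline = rng.normal(scale=0.5, size=t.size)
# "Stimulation": the same noise floor plus a 60 Hz oscillation in the band.
stim = baseline + np.sin(2 * np.pi * 60.0 * t)
ratio = band_power(stim, fs) / band_power(baseline, fs)  # band-power increase
```

Comparing `band_power` between post-stimulus and baseline windows, electrode by electrode, is the essence of detecting the reported high-frequency responses.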
Affiliation(s)
- Anna Gaglianese
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, University Hospital Center, University of Lausanne, Rue Centrale 7, Lausanne, 1003, Switzerland.
- Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands.
- Department of Radiology, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands.
- Mariana P Branco
- Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Iris I A Groen
- Department of Psychology, New York University, Washington Place 6, New York, 10003, NY, USA
- Noah C Benson
- Department of Psychology, New York University, Washington Place 6, New York, 10003, NY, USA
- eScience Institute, University of Washington, 15th Ave NE, Seattle, 98195, WA, USA
- Mariska J Vansteensel
- Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, University Hospital Center, University of Lausanne, Rue Centrale 7, Lausanne, 1003, Switzerland
- Sensory, Perceptual and Cognitive Neuroscience Section, Center for Biomedical Imaging (CIBM), Station 6, Lausanne, 1015, Switzerland
- Ophthalmology Service, Fondation Asile des aveugles and University of Lausanne, Avenue de France 15, Lausanne, 1004, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University, 21st Avenue South 1215, Nashville, 37232, TN, USA
- Natalia Petridou
- Department of Radiology, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Nick F Ramsey
- Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
17
Shared Representation of Visual and Auditory Motion Directions in the Human Middle-Temporal Cortex. Curr Biol 2020; 30:2289-2299.e8. [DOI: 10.1016/j.cub.2020.04.039]
18
Representation of Auditory Motion Directions and Sound Source Locations in the Human Planum Temporale. J Neurosci 2019; 39:2208-2220. [PMID: 30651333] [DOI: 10.1523/jneurosci.2289-18.2018]
Abstract
The ability to compute the location and direction of sounds is a crucial perceptual skill for efficiently interacting with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down, as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in the bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were nonetheless significantly distinct. Altogether, our results demonstrate that hPT codes for auditory motion and location but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location.
Significance statement: Compared with what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. It therefore sheds new light on how computing the location or direction of sounds is implemented in the human auditory cortex by showing that these two computations rely on partially shared neural codes. Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital middle-temporal cortex (hMT+/V5) for computing visual motion.
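Cross-condition decoding, the key analysis behind the "partially shared pattern geometries" claim, means training a decoder on one condition (e.g., leftward vs. rightward moving sounds) and testing it on another (left vs. right static sounds); above-chance transfer implies a shared pattern axis. A dependency-free sketch on synthetic voxel patterns, where the shared left/right axis, the condition offsets, and the noise level are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n_vox, n_trials = 50, 30

# Assumed shared geometry: one common left/right pattern axis, plus a
# different additive offset for the moving and the static conditions.
axis_lr = rng.normal(size=n_vox)
offset_motion = rng.normal(scale=0.3, size=n_vox)
offset_static = rng.normal(scale=0.3, size=n_vox)

def trials(sign, offset):
    # Noisy trial patterns for one side (sign = -1 left, +1 right).
    return sign * axis_lr + offset + 0.8 * rng.normal(size=(n_trials, n_vox))

X_train = np.vstack([trials(-1, offset_motion), trials(+1, offset_motion)])
X_test = np.vstack([trials(-1, offset_static), trials(+1, offset_static)])
y = np.repeat([0, 1], n_trials)        # 0 = left, 1 = right

# Train centroids on MOVING trials, decode left/right from STATIC trials.
centroids = np.stack([X_train[y == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None], axis=2)
cross_accuracy = (dists.argmin(axis=1) == y).mean()
```

Transfer succeeds here precisely because the left/right axis is common to both conditions; if the two conditions used unrelated axes, `cross_accuracy` would fall to chance even when within-condition decoding stayed high.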