1
Siklos-Whillans J, Itier RJ. Effects of Inversion and Fixation Location on the Processing of Face and House Stimuli - A Mass Univariate Analysis. Brain Topogr 2024; 37:972-992. PMID: 39042323. DOI: 10.1007/s10548-024-01068-w.
Abstract
Most event-related potential (ERP) studies investigating the time course of visual processing have focused mainly on the N170 component. Stimulus orientation affects the N170 amplitude for faces but not for objects, a finding interpreted as reflecting holistic/configural processing for faces and featural processing for objects. Furthermore, while recent studies suggest that where on the face people fixate impacts the N170, fixation location effects have not been investigated for objects. A data-driven mass univariate analysis (all time points and electrodes) was used to investigate the time course of inversion and fixation location effects on the neural processing of faces and houses. Strong and widespread orientation effects were found for both faces and houses from 100-350 ms post-stimulus onset, encompassing the P1 and N170 components and later activity, a finding arguing against a lack of holistic processing for houses. While no clear fixation effect was found for houses, fixation location strongly impacted face processing early, reflecting retinotopic mapping around the C2 and P1 components, and during the N170-P2 interval. Face inversion effects were also largest for nasion fixation around 120 ms. The results support the view that facial feature integration (1) depends on which feature is being fixated and where the other features are situated in the visual field, (2) occurs maximally during the P1-N170 interval when fixation is on the nasion, and (3) continues past 200 ms, suggesting that the N170 peak, where weak effects were found, might be an inflection point between processes rather than the end of feature integration into a whole.
Affiliation(s)
- James Siklos-Whillans
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
- Roxane J Itier
- Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
2
Kartheiser G, Cormier K, Bell-Souder D, Dye M, Sharma A. Neurocognitive outcomes in young adults with cochlear implants: The role of early language access and crossmodal plasticity. Hear Res 2024; 451:109074. PMID: 39018768. DOI: 10.1016/j.heares.2024.109074.
Abstract
Many children with profound hearing loss have received cochlear implants (CIs) to help restore some sense of hearing. There is, however, limited research on long-term neurocognitive outcomes in young adults who have grown up hearing through a CI. This study compared the cognitive outcomes of early-implanted (n = 20) and late-implanted (n = 21) young adult CI users and typically hearing (TH) controls (n = 56), all of whom were enrolled in college. Cognitive fluidity, nonverbal intelligence, and American Sign Language (ASL) comprehension were assessed, revealing no significant differences in cognition or nonverbal intelligence between the early- and late-implanted groups; the late-implanted group, however, showed significantly higher ASL comprehension. Although young adult CI users scored significantly lower than TH age-matched controls on a working memory and processing speed task, there were no significant differences between young adult CI and young adult TH participants on tasks involving executive function shifting, inhibitory control, and episodic memory. In an exploratory analysis of a subset of CI participants (n = 17) in whom we were able to examine crossmodal plasticity, we saw greater evidence of crossmodal recruitment from the visual system in late-implanted compared with early-implanted CI young adults. However, cortical visual evoked potential latency biomarkers of crossmodal plasticity were not correlated with cognitive measures or ASL comprehension. The results suggest that in the late-implanted CI users, early access to sign language may have served as a scaffold for appropriate cognitive development, while in the early-implanted group early access to oral language benefited cognitive development. Furthermore, our results suggest that the persistence of crossmodal neuroplasticity into adulthood does not necessarily impact cognitive development. In conclusion, early access to language, spoken or signed, may be important for cognitive development, with no observable effect of crossmodal plasticity on cognitive outcomes.
Affiliation(s)
- Geo Kartheiser
- Rochester Institute of Technology, Rochester, NY, United States of America
- Kayla Cormier
- Department of Speech Language and Hearing Sciences, University of Colorado Boulder, Boulder, CO, United States of America
- Don Bell-Souder
- Department of Speech Language and Hearing Sciences, University of Colorado Boulder, Boulder, CO, United States of America
- Matthew Dye
- Rochester Institute of Technology, Rochester, NY, United States of America
- Anu Sharma
- Department of Speech Language and Hearing Sciences, University of Colorado Boulder, Boulder, CO, United States of America
3
Zhu M, Qiao Y, Sun W, Sun Y, Long Y, Guo H, Cai C, Shen H, Shang Y. Visual selective attention in individuals with age-related hearing loss. Neuroimage 2024; 298:120787. PMID: 39147293. DOI: 10.1016/j.neuroimage.2024.120787.
Abstract
Evidence from epidemiological studies suggests that hearing loss is associated with an accelerated decline in cognitive function, but the underlying pathophysiological mechanism remains poorly understood. Studies using auditory tasks have suggested that degraded auditory input increases the cognitive load of auditory perceptual processing and thereby reduces the resources available for other cognitive tasks. Attention-related networks are among the systems overrecruited to support degraded auditory perception, but it is unclear how they function when no excessive recruitment of cognitive resources for auditory processing is needed. Here, we implemented an EEG study using a nonauditory visual attentional selection task in 30 individuals with age-related hearing loss (ARHL; 60-73 years) and compared them with aged (n = 30, 60-70 years) and young (n = 35, 22-29 years) normal-hearing controls. Compared with their normal-hearing peers, the ARHL group demonstrated a significant amplitude reduction of the posterior contralateral N2 component (N2pc), a well-validated index of the allocation of selective visual attention, despite comparable behavioral performance. Furthermore, N2pc amplitudes correlated significantly with hearing acuity (pure-tone audiometry thresholds) and higher-order hearing abilities (speech-in-noise thresholds) in aged individuals. The target-elicited alpha lateralization, another mechanism of visuospatial attention, demonstrated in the control groups was not observed in the ARHL group. Although behavioral performance was comparable, the significant decrease in N2pc amplitude provides neurophysiological evidence of a visual attentional deficit in ARHL even without extra recruitment of cognitive resources by auditory processing. This supports the hypothesis that constantly degraded auditory input in ARHL has an adverse impact on the function of cognitive control systems, a possible mechanism mediating the relationship between hearing loss and cognitive decline.
Affiliation(s)
- Min Zhu
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Beijing, People's Republic of China
- Yufei Qiao
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Beijing, People's Republic of China
- Wen Sun
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Beijing, People's Republic of China
- Yang Sun
- School of Educational Science, Shenyang Normal University, Shenyang, People's Republic of China
- Yuanshun Long
- National Engineering Research Center for E-Learning, Central China Normal University, Wuhan, People's Republic of China
- Hua Guo
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, People's Republic of China
- Chang Cai
- National Engineering Research Center for E-Learning, Central China Normal University, Wuhan, People's Republic of China
- Hang Shen
- Department of Neurology, Peking Union Medical College Hospital, Beijing, People's Republic of China
- Yingying Shang
- Department of Otorhinolaryngology, Peking Union Medical College Hospital, Beijing, People's Republic of China
4
Seol HY, Kang S, Kim S, Kim J, Kim E, Hong SH, Moon IJ. P1 and N1 Characteristics in Individuals with Normal Hearing and Hearing Loss, and Cochlear Implant Users: A Pilot Study. J Clin Med 2024; 13:4941. PMID: 39201083. PMCID: PMC11355419. DOI: 10.3390/jcm13164941.
Abstract
Background: Many previous studies have reported that the lack of auditory input due to hearing loss (HL) can induce changes in the brain. However, most of these studies have focused on individuals with pre-lingual HL and have predominantly compared children with normal hearing (NH) to child cochlear implant (CI) users. This study examined the visual and auditory evoked potential characteristics of NH listeners, individuals with bilateral HL, and CI users, including those with single-sided deafness. Methods: A total of sixteen participants (seven NH listeners, four individuals with bilateral sensorineural HL, and five CI users) completed speech testing in quiet and in noise as well as evoked potential testing. For speech testing, the Korean version of the Hearing in Noise Test was used to assess speech understanding in quiet and in noise (noise from the front, +90 degrees, and -90 degrees). For evoked potential testing, visual and auditory (1000 Hz, /ba/, and /da/) evoked potentials were measured. Results: CI users understood speech better than those with HL in all conditions except for noise from +90 and -90 degrees. In the CI group, a decrease in P1 amplitudes was noted across all channels after implantation. The NH group exhibited the highest amplitudes, followed by the HL group, with the CI group (post-CI) showing the lowest amplitudes. For auditory evoked potentials, the smallest amplitude was observed in the pre-CI condition regardless of the type of stimulus. Conclusions: To the best of our knowledge, this is the first study to examine visual and auditory evoked potentials across such varied hearing profiles. The characteristics of evoked potentials varied across participant groups, and further studies with CI users are necessary, as collecting and analyzing evoked potentials poses significant challenges due to artifacts on the CI side.
Affiliation(s)
- Hye Yoon Seol
- Department of Communication Disorders, Ewha Womans University, Seoul 03760, Republic of Korea
- Soojin Kang
- Center for Digital Humanities and Computational Social Sciences, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Sungkean Kim
- Department of Human–Computer Interaction, Hanyang University, Ansan 15588, Republic of Korea
- Department of Interdisciplinary Robot Engineering Systems, Hanyang University, Ansan 15588, Republic of Korea
- Jihoo Kim
- Department of Interdisciplinary Robot Engineering Systems, Hanyang University, Ansan 15588, Republic of Korea
- Euijin Kim
- Department of Human–Computer Interaction, Hanyang University, Ansan 15588, Republic of Korea
- Sung Hwa Hong
- Department of Otolaryngology-Head and Neck Surgery, Soree Ear Clinic, Seoul 07560, Republic of Korea
- Il Joon Moon
- Hearing Research Laboratory, Samsung Medical Center, Seoul 16419, Republic of Korea
- Department of Otolaryngology-Head & Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 03181, Republic of Korea
5
Weglage A, Layer N, Meister H, Müller V, Lang-Roth R, Walger M, Sandmann P. Changes in visually and auditorily attended audiovisual speech processing in cochlear implant users: A longitudinal ERP study. Hear Res 2024; 447:109023. PMID: 38733710. DOI: 10.1016/j.heares.2024.109023.
Abstract
Limited auditory input, whether caused by hearing loss or by electrical stimulation through a cochlear implant (CI), can be compensated by the remaining senses. Specifically for CI users, previous studies reported not only improved visual skills, but also altered cortical processing of unisensory visual and auditory stimuli. However, in multisensory scenarios, it is still unclear how auditory deprivation (before implantation) and electrical hearing experience (after implantation) affect cortical audiovisual speech processing. Here, we present a prospective longitudinal electroencephalography (EEG) study which systematically examined the deprivation- and CI-induced alterations of cortical processing of audiovisual words by comparing event-related potentials (ERPs) in postlingually deafened CI users before and after implantation (five weeks and six months of CI use). A group of matched normal-hearing (NH) listeners served as controls. The participants performed a word-identification task with congruent and incongruent audiovisual words, focusing their attention on either the visual (lip movement) or the auditory speech signal. This allowed us to study the (top-down) attention effect on the (bottom-up) sensory cortical processing of audiovisual speech. Compared with the NH listeners, the CI candidates (before implantation) and the CI users (after implantation) exhibited enhanced lipreading abilities and an altered cortical response in the N1 latency range (90-150 ms), characterized by decreased theta oscillation power (4-8 Hz) and a smaller amplitude in the auditory cortex. After implantation, however, the auditory-cortex response gradually increased and developed stronger intra-modal connectivity. Nevertheless, task efficiency and activation in the visual cortex were significantly modulated in both groups by focusing attention on the visual as compared to the auditory speech signal, with the NH listeners additionally showing an attention-dependent decrease in beta oscillation power (13-30 Hz). In sum, these results suggest remarkable deprivation effects on audiovisual speech processing in the auditory cortex, which partially reverse after implantation. Although even experienced CI users still show distinct audiovisual speech processing compared to NH listeners, pronounced effects of (top-down) direction of attention on (bottom-up) audiovisual processing can be observed in both groups. However, NH listeners but not CI users appear to show enhanced allocation of cognitive resources in visually as compared to auditorily attended audiovisual speech conditions, which supports our behavioural observations of poorer lipreading abilities and reduced visual influence on audition in NH listeners as compared to CI users.
Affiliation(s)
- Anna Weglage
- Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Centre, University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Germany.
- Natalie Layer
- Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Centre, University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Germany
- Hartmut Meister
- Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Centre, University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Germany; Jean-Uhrmacher-Institute for Clinical ENT Research, University of Cologne, Germany
- Verena Müller
- Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Centre, University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Germany
- Ruth Lang-Roth
- Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Centre, University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Germany
- Martin Walger
- Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Centre, University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Germany; Jean-Uhrmacher-Institute for Clinical ENT Research, University of Cologne, Germany
- Pascale Sandmann
- Department of Otolaryngology, Head and Neck Surgery, Carl von Ossietzky University of Oldenburg, Germany; Research Center Neurosensory Science, University of Oldenburg, Germany; Cluster of Excellence "Hearing4all", University of Oldenburg, Germany
6
Deroche MLD, Wolfe J, Neumann S, Manning J, Hanna L, Towler W, Wilson C, Bien AG, Miller S, Schafer E, Gemignani J, Alemi R, Muthuraman M, Koirala N, Gracco VL. Cross-modal plasticity in children with cochlear implant: converging evidence from EEG and functional near-infrared spectroscopy. Brain Commun 2024; 6:fcae175. PMID: 38846536. PMCID: PMC11154148. DOI: 10.1093/braincomms/fcae175.
Abstract
Over the first years of life, the brain undergoes substantial organization in response to environmental stimulation. In a silent world, it may promote vision by (i) recruiting resources from the auditory cortex and (ii) making the visual cortex more efficient. It is unclear when such changes occur and how adaptive they are, questions that children with cochlear implants can help address. Here, we examined children aged 7-18 years: 50 had cochlear implants, with delayed or age-appropriate language abilities, and 25 had typical hearing and language. High-density electroencephalography and functional near-infrared spectroscopy were used to evaluate cortical responses to a low-level visual task. Evidence for a 'weaker visual cortex response' and 'less synchronized or less inhibitory activity of auditory association areas' in the implanted children with language delays suggests that cross-modal reorganization can be maladaptive and does not necessarily strengthen the dominant visual sense.
Affiliation(s)
- Mickael L D Deroche
- Department of Psychology, Concordia University, Montreal, Quebec, Canada, H4B 1R6
- Jace Wolfe
- Hearts for Hearing Foundation, Oklahoma City, OK 73120, USA
- Sara Neumann
- Hearts for Hearing Foundation, Oklahoma City, OK 73120, USA
- Jacy Manning
- Hearts for Hearing Foundation, Oklahoma City, OK 73120, USA
- Lindsay Hanna
- Hearts for Hearing Foundation, Oklahoma City, OK 73120, USA
- Will Towler
- Hearts for Hearing Foundation, Oklahoma City, OK 73120, USA
- Caleb Wilson
- Department of Otolaryngology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Alexander G Bien
- Department of Otolaryngology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Sharon Miller
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX 76201, USA
- Erin Schafer
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX 76201, USA
- Jessica Gemignani
- Department of Developmental and Social Psychology, University of Padova, 35131 Padua, Italy
- Razieh Alemi
- Department of Psychology, Concordia University, Montreal, Quebec, Canada, H4B 1R6
- Muthuraman Muthuraman
- Section of Neural Engineering with Signal Analytics and Artificial Intelligence, Department of Neurology, University Hospital Würzburg, 97080 Würzburg, Germany
7
Liu Y, Zhang H, Fan C, Liu F, Li S, Li J, Zhao H, Zeng X. Potential role of Bcl2 in lipid metabolism and synaptic dysfunction of age-related hearing loss. Neurobiol Dis 2023; 187:106320. PMID: 37813166. DOI: 10.1016/j.nbd.2023.106320.
Abstract
Age-related hearing loss (ARHL) is a prevalent condition affecting millions of individuals globally. This study investigated the role of the cell survival regulator Bcl2 in ARHL through in vitro and in vivo experiments and metabolomics analysis. The results showed that a lack of Bcl2 in the auditory cortex affects lipid metabolism, resulting in reduced synaptic function and neurodegeneration. Immunohistochemical analysis demonstrated enrichment of Bcl2 in specific areas of the auditory cortex, including the secondary auditory cortex, its dorsal and ventral areas, and the primary somatosensory cortex. In ARHL rats, a significant decrease in Bcl2 expression was observed in these areas. RNA-seq analysis showed that downregulation of Bcl2 altered lipid metabolism pathways within the auditory pathway, which was further confirmed by metabolomics analysis. These results suggest that Bcl2 plays a crucial role in regulating lipid metabolism, synaptic function, and neurodegeneration in ARHL and could therefore be a potential therapeutic target. We also found that Bcl2 is likely closely connected with the lipid peroxidation and reactive oxygen species (ROS) production occurring in cochlear hair cells and cortical neurons in ARHL. The study also identified changes in hair cells, spiral ganglion cells, and nerve fiber density as consequences of Bcl2 deficiency, which could potentially contribute to inner-ear nerve blockage and subsequent hearing loss. Targeting Bcl2 may therefore be a promising therapeutic intervention for ARHL. These findings provide valuable insights into the molecular mechanisms underlying ARHL and may pave the way for novel treatment approaches for this prevalent age-related disorder.
Affiliation(s)
- Yue Liu
- Department of Graduate and Scientific Research, Zunyi Medical University Zhuhai Campus, Zhuhai 519041, China; Department of Otolaryngology, Longgang E.N.T Hospital & Shenzhen Key Laboratory of E.N.T, Institute of E.N.T, Shenzhen 518172, China.
- Huasong Zhang
- Department of Otolaryngology, Longgang E.N.T Hospital & Shenzhen Key Laboratory of E.N.T, Institute of E.N.T, Shenzhen 518172, China; Department of Otolaryngology, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510000, China; Department of Medical Research Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou, 510000, China
- Cong Fan
- Department of Medical Research Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou, 510000, China
- Feiyi Liu
- Department of Medical Research Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou, 510000, China
- Shaoying Li
- Department of Medical Research Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou, 510000, China
- Juanjuan Li
- Department of Otolaryngology, Longgang E.N.T Hospital & Shenzhen Key Laboratory of E.N.T, Institute of E.N.T, Shenzhen 518172, China
- Huiying Zhao
- Department of Medical Research Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Guangzhou, 510000, China
- Xianhai Zeng
- Department of Graduate and Scientific Research, Zunyi Medical University Zhuhai Campus, Zhuhai 519041, China; Department of Otolaryngology, Longgang E.N.T Hospital & Shenzhen Key Laboratory of E.N.T, Institute of E.N.T, Shenzhen 518172, China
8
Layer N, Abdel-Latif KHA, Radecke JO, Müller V, Weglage A, Lang-Roth R, Walger M, Sandmann P. Effects of noise and noise reduction on audiovisual speech perception in cochlear implant users: An ERP study. Clin Neurophysiol 2023; 154:141-156. PMID: 37611325. DOI: 10.1016/j.clinph.2023.07.009.
Abstract
OBJECTIVE: Hearing with a cochlear implant (CI) is difficult in noisy environments, but noise reduction algorithms, specifically ForwardFocus, can improve speech intelligibility. The current event-related potential (ERP) study examined the electrophysiological correlates of this perceptual improvement. METHODS: Ten bimodal CI users performed a syllable-identification task in auditory and audiovisual conditions, with syllables presented from the front and stationary noise presented from the sides. Brainstorm was used for spatio-temporal evaluation of the ERPs. RESULTS: CI users revealed an audiovisual benefit, as reflected by shorter response times and greater activation in temporal and occipital regions at P2 latency. However, in both auditory and audiovisual conditions, background noise hampered speech processing, leading to longer response times and delayed auditory-cortex activation at N1 latency. Nevertheless, activating ForwardFocus resulted in shorter response times, reduced listening effort, and enhanced superior-frontal-cortex activation at P2 latency, particularly in audiovisual conditions. CONCLUSIONS: ForwardFocus enhances speech intelligibility in audiovisual speech conditions, potentially by allowing the reallocation of attentional resources to relevant auditory speech cues. SIGNIFICANCE: This study shows, for CI users, that background noise and ForwardFocus differentially affect spatio-temporal cortical response patterns in both auditory and audiovisual speech conditions.
Affiliation(s)
- Natalie Layer
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany.
- Jan-Ole Radecke
- Dept. of Psychiatry and Psychotherapy, University of Lübeck, Germany; Center for Brain, Behaviour and Metabolism (CBBM), University of Lübeck, Germany
- Verena Müller
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany
- Anna Weglage
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany
- Ruth Lang-Roth
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany
- Martin Walger
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany; Jean-Uhrmacher-Institute for Clinical ENT Research, University of Cologne, Germany
- Pascale Sandmann
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Germany; Department of Otolaryngology, Head and Neck Surgery, University of Oldenburg, Oldenburg, Germany
9
Berger JI, Gander PE, Kim S, Schwalje AT, Woo J, Na YM, Holmes A, Hong JM, Dunn CC, Hansen MR, Gantz BJ, McMurray B, Griffiths TD, Choi I. Neural Correlates of Individual Differences in Speech-in-Noise Performance in a Large Cohort of Cochlear Implant Users. Ear Hear 2023; 44:1107-1120. PMID: 37144890. PMCID: PMC10426791. DOI: 10.1097/aud.0000000000001357.
Abstract
OBJECTIVES Understanding speech-in-noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This variability cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al. 2021, Neuroimage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The present study examined neural predictors of SiN ability in a large cohort of cochlear-implant (CI) users. DESIGN We recorded electroencephalography in 114 postlingually deafened CI users while they completed the California consonant test: a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (consonant-nucleus-consonant words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a vertex electrode (Cz), which could help maximize eventual generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of SiN performance. RESULTS In general, there was good agreement between the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by the duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the California consonant test (conducted simultaneously with the electroencephalography recording) and the consonant-nucleus-consonant test (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds.
In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise. CONCLUSIONS These data indicate a neurophysiological correlate of SiN performance, thereby revealing a richer profile of an individual's hearing performance than shown by psychoacoustic measures alone. The results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners on the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.
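The regression step described in the DESIGN section can be sketched as follows. The cohort size matches the study, but the predictor set, weights, and data are synthetic illustrations, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 114  # cohort size reported in the study

# Hypothetical standardized predictors: N1 amplitude, P2 amplitude, age,
# duration of device use, low-frequency hearing threshold (all assumed).
X = rng.standard_normal((n, 5))
true_beta = np.array([0.6, 0.5, -0.2, 0.3, -0.3])  # illustrative weights
score = X @ true_beta + 0.5 * rng.standard_normal(n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(A, score, rcond=None)

# Proportion of variance in the word-in-noise score explained by the model
pred = A @ beta_hat
r2 = 1 - np.sum((score - pred) ** 2) / np.sum((score - score.mean()) ** 2)
```

In practice one would inspect the coefficient for each ERP predictor after controlling for the demographic and hearing covariates, which is what the reported analysis does.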
Affiliation(s)
- Joel I. Berger
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Phillip E. Gander
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Subong Kim
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Adam T. Schwalje
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Jihwan Woo
- Department of Biomedical Engineering, University of Ulsan, Ulsan, South Korea
- Young-min Na
- Department of Biomedical Engineering, University of Ulsan, Ulsan, South Korea
- Ann Holmes
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky, USA
- Jean M. Hong
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Camille C. Dunn
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Marlan R. Hansen
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Bruce J. Gantz
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Bob McMurray
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa, USA
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Timothy D. Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Inyong Choi
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
10
Koirala N, Deroche MLD, Wolfe J, Neumann S, Bien AG, Doan D, Goldbeck M, Muthuraman M, Gracco VL. Dynamic networks differentiate the language ability of children with cochlear implants. Front Neurosci 2023; 17:1141886. [PMID: 37409105] [PMCID: PMC10318154] [DOI: 10.3389/fnins.2023.1141886]
Abstract
Background Cochlear implantation (CI) in prelingually deafened children has been shown to be an effective intervention for developing language and reading skills. However, a substantial proportion of children receiving CIs struggle with language and reading. The current study, one of the first to implement electrical source imaging in a CI population, was designed to identify the neural underpinnings in two groups of CI children with good and poor language and reading skills. Methods High-density electroencephalography (EEG) data under a resting-state condition were obtained from 75 children: 50 with CIs, having good (HL) or poor (LL) language skills, and 25 normal-hearing (NH) children. We identified coherent sources using dynamic imaging of coherent sources (DICS) and estimated their effective connectivity using time-frequency causality estimation based on temporal partial directed coherence (TPDC) in the two CI groups, compared to a cohort of age- and gender-matched NH children. Findings Sources with higher coherence amplitude were observed in three frequency bands (alpha, beta and gamma) for the CI groups when compared to the normal-hearing children. The two groups of CI children with good (HL) and poor (LL) language ability exhibited not only different cortical and subcortical source profiles but also distinct effective connectivity between them. Additionally, a support vector machine (SVM) algorithm using these sources and their connectivity patterns for each CI group across the three frequency bands was able to predict the language and reading scores with high accuracy. Interpretation The increased coherence in the CI groups suggests that oscillatory activity in some brain areas becomes more strongly coupled compared to the NH group.
Moreover, the different sources, their connectivity patterns, and their association with language and reading skill in both groups suggest a compensatory adaptation that either facilitated or impeded language and reading development. The neural differences between the two groups of CI children may reflect potential biomarkers for predicting outcome success in CI children.
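The SVM-based score prediction can be illustrated with a minimal, dependency-free linear epsilon-insensitive support-vector regressor on synthetic connectivity features. The feature count, scores, and hyperparameters here are assumptions for the sketch, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_feat = 50, 12  # 50 CI children; 12 connectivity features (assumed)
X = rng.standard_normal((n_subj, n_feat))
w_true = rng.standard_normal(n_feat)
y = X @ w_true + 0.1 * rng.standard_normal(n_subj)  # synthetic language score

def fit_linear_svr(X, y, lam=0.01, eps=0.1, lr=0.1, n_iter=3000):
    """Minimal linear epsilon-insensitive SVR trained by subgradient descent."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        r = X @ w + b - y
        # Subgradient of the epsilon-insensitive loss plus an L2 penalty
        g = np.where(r > eps, 1.0, np.where(r < -eps, -1.0, 0.0))
        w -= lr * (2 * lam * w + X.T @ g / n)
        b -= lr * g.mean()
    return w, b

w, b = fit_linear_svr(X, y)
corr = np.corrcoef(X @ w + b, y)[0, 1]  # fit quality on the training data
```

A real analysis would cross-validate (fit on held-out children) to obtain the reported prediction accuracy; this sketch only shows the regressor itself.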
Affiliation(s)
- Nabin Koirala
- Child Study Center, Yale School of Medicine, Yale University, New Haven, CT, United States
- Jace Wolfe
- Hearts for Hearing Foundation, Oklahoma City, OK, United States
- Sara Neumann
- Hearts for Hearing Foundation, Oklahoma City, OK, United States
- Alexander G. Bien
- Department of Otolaryngology – Head and Neck Surgery, University of Oklahoma Medical Center, Oklahoma City, OK, United States
- Derek Doan
- University of Oklahoma College of Medicine, Oklahoma City, OK, United States
- Michael Goldbeck
- University of Oklahoma College of Medicine, Oklahoma City, OK, United States
- Muthuraman Muthuraman
- Department of Neurology, Neural Engineering with Signal Analytics and Artificial Intelligence (NESA-AI), Universitätsklinikum Würzburg, Würzburg, Germany
- Vincent L. Gracco
- Child Study Center, Yale School of Medicine, Yale University, New Haven, CT, United States
- School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
11
Kral A, Sharma A. Crossmodal plasticity in hearing loss. Trends Neurosci 2023; 46:377-393. [PMID: 36990952] [PMCID: PMC10121905] [DOI: 10.1016/j.tins.2023.02.004]
Abstract
Crossmodal plasticity is a textbook example of the ability of the brain to reorganize based on use. We review evidence from the auditory system showing that such reorganization has significant limits, is dependent on pre-existing circuitry and top-down interactions, and that extensive reorganization is often absent. We argue that the evidence does not support the hypothesis that crossmodal reorganization is responsible for closing critical periods in deafness, and crossmodal plasticity instead represents a neuronal process that is dynamically adaptable. We evaluate the evidence for crossmodal changes in both developmental and adult-onset deafness, which start as early as mild-moderate hearing loss and show reversibility when hearing is restored. Finally, crossmodal plasticity does not appear to affect the neuronal preconditions for successful hearing restoration. Given its dynamic and versatile nature, we describe how this plasticity can be exploited for improving clinical outcomes after neurosensory restoration.
Affiliation(s)
- Andrej Kral
- Institute of AudioNeuroTechnology and Department of Experimental Otology, Otolaryngology Clinics, Hannover Medical School, Hannover, Germany
- Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, NSW, Australia
- Anu Sharma
- Department of Speech Language and Hearing Science, Center for Neuroscience, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA
12
Fullerton AM, Vickers DA, Luke R, Billing AN, McAlpine D, Hernandez-Perez H, Peelle JE, Monaghan JJM, McMahon CM. Cross-modal functional connectivity supports speech understanding in cochlear implant users. Cereb Cortex 2023; 33:3350-3371. [PMID: 35989307] [PMCID: PMC10068270] [DOI: 10.1093/cercor/bhac277]
Abstract
Sensory deprivation can lead to cross-modal cortical changes, whereby sensory brain regions deprived of input may be recruited to perform atypical functions. Enhanced cross-modal responses to visual stimuli observed in the auditory cortex of postlingually deaf cochlear implant (CI) users are hypothesized to reflect increased activation of cortical language regions, but it is unclear whether this cross-modal activity is "adaptive" or "maladaptive" for speech understanding. To determine whether increased activation of language regions is correlated with better speech understanding in CI users, we assessed task-related activation and functional connectivity of auditory and visual cortices to auditory and visual speech and non-speech stimuli in CI users (n = 14) and normal-hearing listeners (n = 17), using functional near-infrared spectroscopy to measure hemodynamic responses. We used visually presented speech and non-speech to investigate neural processes related to linguistic content and observed that CI users show beneficial cross-modal effects. Specifically, an increase in connectivity between the left auditory and visual cortices, presumed primary sites of cortical language processing, was positively correlated with CI users' abilities to understand speech in background noise. Cross-modal activity in the auditory cortex of postlingually deaf CI users may reflect adaptive activity of a distributed, multimodal speech network recruited to enhance speech understanding.
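The core statistic, an across-subject correlation between auditory-visual connectivity and speech-in-noise score, can be sketched as follows, with a permutation test standing in for the parametric inference. All data here are synthetic, with an assumed positive relation built in:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj = 14  # CI group size from the study

# Synthetic stand-ins: left auditory-visual connectivity values and
# speech-in-noise scores per subject (the positive link is an assumption).
connectivity = rng.normal(0.4, 0.1, n_subj)
speech_score = 50 + 60 * connectivity + rng.normal(0, 2, n_subj)

r = np.corrcoef(connectivity, speech_score)[0, 1]

# Permutation test: shuffle the scores to build a null distribution for r
n_perm = 5000
null = np.array([np.corrcoef(connectivity, rng.permutation(speech_score))[0, 1]
                 for _ in range(n_perm)])
p = (np.sum(np.abs(null) >= abs(r)) + 1) / (n_perm + 1)
```

Permutation inference is a common choice at this sample size because it makes no normality assumption about the connectivity values.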
Affiliation(s)
- Amanda M Fullerton
- Department of Linguistics and Macquarie University Hearing, Australian Hearing Hub, Macquarie University, Sydney 2109, Australia
- Deborah A Vickers
- Cambridge Hearing Group, Sound Lab, Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, United Kingdom
- Speech, Hearing and Phonetic Sciences, University College London, London WC1N 1PF, United Kingdom
- Robert Luke
- Department of Linguistics and Macquarie University Hearing, Australian Hearing Hub, Macquarie University, Sydney 2109, Australia
- Addison N Billing
- Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, United Kingdom
- DOT-HUB, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, United Kingdom
- David McAlpine
- Department of Linguistics and Macquarie University Hearing, Australian Hearing Hub, Macquarie University, Sydney 2109, Australia
- Heivet Hernandez-Perez
- Department of Linguistics and Macquarie University Hearing, Australian Hearing Hub, Macquarie University, Sydney 2109, Australia
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO 63110, United States
- Jessica J M Monaghan
- National Acoustic Laboratories, Australian Hearing Hub, Sydney 2109, Australia
- Department of Linguistics and Macquarie University Hearing, Australian Hearing Hub, Macquarie University, Sydney 2109, Australia
- Catherine M McMahon
- Department of Linguistics and Macquarie University Hearing, Australian Hearing Hub, Macquarie University, Sydney 2109, Australia
- HEAR Centre, Macquarie University, Sydney 2109, Australia
13
Lazard DS, Doelling KB, Arnal LH. Plasticity After Hearing Rehabilitation in the Aging Brain. Trends Hear 2023; 27:23312165231156412. [PMID: 36794429] [PMCID: PMC9936397] [DOI: 10.1177/23312165231156412]
Abstract
Age-related hearing loss (presbycusis) is an unavoidable sensory degradation, often associated with the progressive decline of cognitive and social functions, and with dementia. It is generally considered a natural consequence of inner-ear deterioration. However, presbycusis arguably conflates a wide array of peripheral and central impairments. Although hearing rehabilitation maintains the integrity and activity of auditory networks and can prevent or revert maladaptive plasticity, the extent of such neural plastic changes in the aging brain is poorly appreciated. By reanalyzing a large-scale dataset of more than 2200 cochlear implant (CI) users and assessing the improvement in speech perception from 6 to 24 months of use, we show that, although rehabilitation improves speech understanding on average, age at implantation only minimally affects speech scores at 6 months but has a detrimental effect at 24 months post implantation. Furthermore, with each additional year of age, older subjects (>67 years old) were significantly more likely than younger patients to show degraded performance after 2 years of CI use. Secondary analysis reveals three possible plasticity trajectories after auditory rehabilitation that may account for these disparities: Awakening, a reversal of deafness-specific changes; Counteracting, a stabilization of additional cognitive impairments; or Decline, independent deleterious processes that hearing rehabilitation cannot prevent. The role of complementary behavioral interventions needs to be considered to potentiate the (re)activation of auditory brain networks.
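The per-year age effect on the likelihood of decline can be illustrated with a logistic-regression sketch on synthetic data. The generative slope, intercept, and age distribution are assumptions for the demo, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2200  # approximate cohort size of the reanalyzed dataset

age = rng.uniform(20, 90, n)
# Assumed generative model: probability of a speech-score decline between
# 6 and 24 months of CI use rises with age at implantation.
true_logit = -4.0 + 0.05 * age
declined = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Logistic regression on centred age, fitted by gradient descent
x = age - age.mean()
w, b = 0.0, 0.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w -= 0.001 * np.mean((p - declined) * x)
    b -= 0.1 * np.mean(p - declined)

odds_ratio_per_year = np.exp(w)  # odds of decline per extra year of age
```

An odds ratio above 1 per year of age is the kind of statistic behind a statement such as "more likely to degrade ... for each year increase in age".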
Affiliation(s)
- Diane S. Lazard
- Institut Pasteur, Université Paris Cité, INSERM AU06, Institut de l'Audition, Paris, France
- ENT Department, Institut Arthur Vernes, Paris, France
- Correspondence: Diane Lazard, Institut de l'Audition, Institut Pasteur, 63 rue de Charenton, 75012 Paris, France
- Keith B. Doelling
- Institut Pasteur, Université Paris Cité, INSERM AU06, Institut de l'Audition, Paris, France
- Luc H. Arnal
- Institut Pasteur, Université Paris Cité, INSERM AU06, Institut de l'Audition, Paris, France
14
Burkhardt P, Müller V, Meister H, Weglage A, Lang-Roth R, Walger M, Sandmann P. Age effects on cognitive functions and speech-in-noise processing: An event-related potential study with cochlear-implant users and normal-hearing listeners. Front Neurosci 2022; 16:1005859. [PMID: 36620447] [PMCID: PMC9815545] [DOI: 10.3389/fnins.2022.1005859]
Abstract
A cochlear implant (CI) can partially restore hearing in individuals with profound sensorineural hearing loss. However, electrical hearing with a CI is limited and highly variable. The current study aimed to better understand the different factors contributing to this variability by examining how age affects cognitive functions and cortical speech processing in CI users. Electroencephalography (EEG) was applied while two groups of CI users (young and elderly; N = 13 each) and normal-hearing (NH) listeners (young and elderly; N = 13 each) performed an auditory sentence categorization task, including semantically correct and incorrect sentences presented either with or without background noise. Event-related potentials (ERPs) representing earlier, sensory-driven processes (N1-P2 complex to sentence onset) and later, cognitive-linguistic integration processes (N400 to semantically correct/incorrect sentence-final words) were compared between the different groups and speech conditions. The results revealed reduced amplitudes and prolonged latencies of auditory ERPs in CI users compared to NH listeners, both at earlier (N1, P2) and later processing stages (N400 effect). In addition to this hearing-group effect, CI users and NH listeners showed a comparable background-noise effect, as indicated by reduced hit rates and reduced (P2) and delayed (N1/P2) ERPs in conditions with background noise. Moreover, we observed an age effect in CI users and NH listeners, with young individuals showing better performance in specific cognitive functions (working memory capacity, cognitive flexibility, and verbal learning/retrieval), reduced latencies (N1/P2), decreased N1 amplitudes, and an increased N400 effect when compared to the elderly. In sum, our findings extend previous research by showing that CI users' speech processing is impaired not only at earlier (sensory) but also at later (semantic integration) processing stages, both in conditions with and without background noise.
Using objective ERP measures, our study provides further evidence of strong age effects on cortical speech processing, observable in both the NH listeners and the CI users. We conclude that elderly individuals require more effortful processing at sensory stages of speech processing, which, however, seems to come at the cost of the limited resources available for later semantic integration processes.
Affiliation(s)
- Pauline Burkhardt
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Correspondence: Pauline Burkhardt, orcid.org/0000-0001-9850-9881
- Verena Müller
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Hartmut Meister
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Anna Weglage
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Ruth Lang-Roth
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Martin Walger
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Pascale Sandmann
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
15
Electrophysiological differences and similarities in audiovisual speech processing in CI users with unilateral and bilateral hearing loss. Curr Res Neurobiol 2022; 3:100059. [DOI: 10.1016/j.crneur.2022.100059]
16
Evidence of visual crossmodal reorganization positively relates to speech outcomes in cochlear implant users. Sci Rep 2022; 12:17749. [PMID: 36273017] [PMCID: PMC9587996] [DOI: 10.1038/s41598-022-22117-z]
Abstract
Deaf individuals who use a cochlear implant (CI) have remarkably different outcomes for auditory speech communication ability. One factor assumed to affect CI outcomes is visual crossmodal plasticity in auditory cortex, where deprived auditory regions begin to support non-auditory functions such as vision. Previous research has viewed crossmodal plasticity as harmful for speech outcomes in CI users if it interferes with sound processing, while other work has demonstrated that plasticity related to visual language may be beneficial for speech recovery. To clarify this issue, we used electroencephalography (EEG) to measure brain responses to a partial face speaking a silent single-syllable word (visual language) in 15 CI users and 13 age-matched typical-hearing controls. We used source analysis of the EEG activity to measure crossmodal visual responses in auditory cortex and then compared them to CI users' speech-in-noise listening ability. CI users' brain response to the onset of the video stimulus (face) was larger than controls' in left auditory cortex, consistent with crossmodal activation after deafness. CI users also produced a mixture of alpha (8-12 Hz) synchronization and desynchronization in auditory cortex while watching lip movement, whereas controls showed desynchronization. CI users with higher speech scores had stronger crossmodal responses in auditory cortex to the onset of the video, whereas those with lower speech scores showed increases in alpha power during lip movement in auditory areas. Therefore, evidence of crossmodal reorganization in CI users does not necessarily predict poor speech outcomes, and differences in crossmodal activation during lip reading may instead relate to strategies that CI users adopt in audiovisual speech communication.
17
Cross-Modal Reorganization From Both Visual and Somatosensory Modalities in Cochlear Implanted Children and Its Relationship to Speech Perception. Otol Neurotol 2022; 43:e872-e879. [PMID: 35970165] [DOI: 10.1097/mao.0000000000003619]
Abstract
HYPOTHESIS We hypothesized that children with cochlear implants (CIs) who demonstrate cross-modal reorganization by vision also demonstrate cross-modal reorganization by somatosensation and that these processes are interrelated and impact speech perception. BACKGROUND Cross-modal reorganization, which occurs when a deprived sensory modality's cortical resources are recruited by other intact modalities, has been proposed as a source of variability underlying speech perception in deaf children with CIs. Visual and somatosensory cross-modal reorganization of auditory cortex have been documented separately in CI children, but reorganization in these modalities has not been documented within the same subjects. Our goal was to examine the relationship between cross-modal reorganization from both visual and somatosensory modalities within a single group of CI children. METHODS We analyzed high-density electroencephalogram responses to visual and somatosensory stimuli and current density reconstruction of brain activity sources. Speech perception in noise testing was performed. Current density reconstruction patterns were analyzed within the entire subject group and across groups of CI children exhibiting good versus poor speech perception. RESULTS Positive correlations between visual and somatosensory cross-modal reorganization suggested that neuroplasticity in different sensory systems may be interrelated. Furthermore, CI children with good speech perception did not show recruitment of frontal or auditory cortices during visual processing, unlike CI children with poor speech perception. CONCLUSION Our results reflect changes in cortical resource allocation in pediatric CI users. Cross-modal recruitment of auditory and frontal cortices by vision, and cross-modal reorganization of auditory cortex by somatosensation, may underlie variability in speech and language outcomes in CI children.
18
Zhou X, Feng M, Hu Y, Zhang C, Zhang Q, Luo X, Yuan W. The Effects of Cortical Reorganization and Applications of Functional Near-Infrared Spectroscopy in Deaf People and Cochlear Implant Users. Brain Sci 2022; 12:1150. [PMID: 36138885] [PMCID: PMC9496692] [DOI: 10.3390/brainsci12091150]
Abstract
A cochlear implant (CI) is currently the only FDA-approved biomedical device that can restore hearing for the majority of patients with severe-to-profound sensorineural hearing loss (SNHL). While prelingually and postlingually deaf individuals benefit substantially from CI, the outcomes after implantation vary greatly. Numerous studies have attempted to identify the variables that affect CI outcomes, including the personal characteristics of CI candidates, environmental variables, and device-related variables. Up to 80% of the outcome variance remains unexplained, because all these variables can only roughly predict auditory performance with a CI. Brain structure/function differences after hearing deprivation, that is, cortical reorganization, have gradually attracted the attention of neuroscientists. The cross-modal reorganization of the auditory cortex following deafness is thought to be a key factor in the success of CI. In recent years, the adaptive and maladaptive effects of this reorganization on CI rehabilitation have been debated, because the neural mechanisms of how this reorganization impacts CI learning and rehabilitation have not been revealed. This review describes the evidence for different roles of cross-modal reorganization in CI performance and attempts to explore the possible reasons. Additionally, understanding the core influencing mechanism requires taking into account the cortical changes from deafness to hearing restoration. However, methodological issues have restricted longitudinal research on cortical function in CI. Functional near-infrared spectroscopy (fNIRS) has been increasingly used for the study of brain function and language assessment in CI because of its unique advantages, and it is considered to have great potential.
Here, we review studies on auditory cortex reorganization in deaf patients and CI recipients, and then we illustrate the feasibility of fNIRS as a neuroimaging tool for predicting and assessing speech performance in CI recipients.
Affiliation(s)
- Xiaoqing Zhou
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Menglong Feng
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Yaqin Hu
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chanyuan Zhang
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Qingling Zhang
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Xiaoqin Luo
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Wei Yuan
- Department of Otolaryngology, Chongqing General Hospital, Chongqing 401147, China
- Chongqing Medical University, Chongqing 400042, China
- Chongqing School, University of Chinese Academy of Sciences, Chongqing 400714, China
- Chongqing Institute of Green and Intelligent Technology, University of Chinese Academy of Sciences, Chongqing 400714, China
- Correspondence: Tel.: +86-23-63535180
19
The timecourse of multisensory speech processing in unilaterally stimulated cochlear implant users revealed by ERPs. Neuroimage Clin 2022; 34:102982. [PMID: 35303598] [PMCID: PMC8927996] [DOI: 10.1016/j.nicl.2022.102982]
Abstract
Both normal-hearing (NH) and cochlear implant (CI) users show a clear benefit in multisensory speech processing. Group differences in ERP topographies and cortical source activation suggest distinct audiovisual speech processing in CI users when compared to NH listeners. Electrical neuroimaging, including topographic and ERP source analysis, provides a suitable tool to study the timecourse of multisensory speech processing in CI users.
A cochlear implant (CI) is an auditory prosthesis which can partially restore the auditory function in patients with severe to profound hearing loss. However, this bionic device provides only limited auditory information, and CI patients may compensate for this limitation by means of a stronger interaction between the auditory and visual system. To better understand the electrophysiological correlates of audiovisual speech perception, the present study used electroencephalography (EEG) and a redundant target paradigm. Postlingually deafened CI users and normal-hearing (NH) listeners were compared in auditory, visual and audiovisual speech conditions. The behavioural results revealed multisensory integration for both groups, as indicated by shortened response times for the audiovisual as compared to the two unisensory conditions. The analysis of the N1 and P2 event-related potentials (ERPs), including topographic and source analyses, confirmed a multisensory effect for both groups and showed a cortical auditory response which was modulated by the simultaneous processing of the visual stimulus. Nevertheless, the CI users in particular revealed a distinct pattern of N1 topography, pointing to a strong visual impact on auditory speech processing. Apart from these condition effects, the results revealed ERP differences between CI users and NH listeners, not only in N1/P2 ERP topographies, but also in the cortical source configuration. When compared to the NH listeners, the CI users showed an additional activation in the visual cortex at N1 latency, which was positively correlated with CI experience, and a delayed auditory-cortex activation with a reversed, rightward functional lateralisation. In sum, our behavioural and ERP findings demonstrate a clear audiovisual benefit for both groups, and a CI-specific alteration in cortical activation at N1 latency when auditory and visual input is combined. 
These cortical alterations may reflect a compensatory strategy to overcome the limited CI input, one that allows CI users to improve their lip-reading skills and to approximate the behavioural performance of NH listeners in audiovisual speech conditions. Our results are clinically relevant, as they highlight the importance of assessing CI outcomes not only in auditory-only but also in audiovisual speech conditions.
20
Wang F, Zhou T, Wang P, Li Z, Meng X, Jiang J. Study of extravisual resting-state networks in pituitary adenoma patients with vision restoration. BMC Neurosci 2022; 23:15. [PMID: 35300588 PMCID: PMC8932055 DOI: 10.1186/s12868-022-00701-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/17/2020] [Accepted: 03/08/2022] [Indexed: 11/10/2022]
Abstract
Background Pituitary adenoma (PA) may compress the optic apparatus, resulting in impaired vision. Some patients can experience improved vision rapidly after surgery. During the early period after surgery, however, the change in neurofunction in the extravisual cortex and higher cognitive cortex has yet to be explored. Objective Our study focused on the changes in the extravisual resting-state networks in patients with PA after vision restoration. Methods We recruited 14 patients with PA who experienced visual improvement after surgery. The functional connectivity (FC) of six seeds [auditory cortex (A1), Broca's area, posterior cingulate cortex (PCC) for the default mode network (DMN), right caudal anterior cingulate cortex for the salience network (SN) and left dorsolateral prefrontal cortex for the executive control network (ECN)] was evaluated. A paired t test was conducted to identify differences between the preoperative and postoperative scans. Results Compared with their preoperative counterparts, patients with PA with improved vision exhibited decreased FC with the right A1 in the left insula lobule, right middle temporal gyrus and left postcentral gyrus and increased FC in the right paracentral lobule; decreased FC with Broca's area in the left middle temporal gyrus and increased FC in the left insula lobule and right thalamus; decreased FC with the DMN in the right declive and right precuneus and increased FC in right Brodmann area 17, the left cuneus and the right posterior cingulate; decreased FC with the ECN in the right posterior cingulate, right angular gyrus and right precuneus; decreased FC with the SN in the right middle temporal gyrus, right hippocampus, and right precuneus; and increased FC in the right fusiform gyrus, the left lingual gyrus and right Brodmann area 19. Conclusions Vision restoration may cause a response of cross-modal plasticity and multisensory systems related to A1 and Broca's area.
The DMN and SN may be involved in top-down control of the subareas within the visual cortex. The precuneus may be involved in the DMN, ECN and SN simultaneously.
Affiliation(s)
- Fuyu Wang
- Department of Neurosurgery, The First Medical Center, Chinese PLA General Hospital, Beijing, China.
- Tao Zhou
- Department of Neurosurgery, The First Medical Center, Chinese PLA General Hospital, Beijing, China
- Peng Wang
- Department of Neurosurgery, The First Medical Center, Chinese PLA General Hospital, Beijing, China
- Ze Li
- Department of Neurosurgery, The First Medical Center, Chinese PLA General Hospital, Beijing, China
- Xianghui Meng
- Department of Neurosurgery, The First Medical Center, Chinese PLA General Hospital, Beijing, China
- Jinli Jiang
- Department of Neurosurgery, The First Medical Center, Chinese PLA General Hospital, Beijing, China
21
Radecke JO, Schierholz I, Kral A, Lenarz T, Murray MM, Sandmann P. Distinct multisensory perceptual processes guide enhanced auditory recognition memory in older cochlear implant users. Neuroimage Clin 2022; 33:102942. [PMID: 35033811 PMCID: PMC8762088 DOI: 10.1016/j.nicl.2022.102942] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/29/2021] [Revised: 12/23/2021] [Accepted: 01/10/2022] [Indexed: 11/15/2022]
Abstract
- Congruent audio-visual encoding enhances later auditory processing in the elderly.
- CI users benefit from additional congruent visual information, similar to controls.
- CI users show distinct neurophysiological processes, compared to controls.
- CI users show an earlier modulation of event-related topographies, compared to controls.
In naturalistic situations, sounds are often perceived in conjunction with matching visual impressions. For example, we see and hear the neighbor’s dog barking in the garden. Still, there is a good chance that we recognize the neighbor’s dog even when we only hear it barking, but do not see it behind the fence. Previous studies with normal-hearing (NH) listeners have shown that the audio-visual presentation of a perceptual object (like an animal) increases the probability to recognize this object later on, even if the repeated presentation of this object occurs in a purely auditory condition. In patients with a cochlear implant (CI), however, the electrical hearing of sounds is impoverished, and the ability to recognize perceptual objects in auditory conditions is significantly limited. It is currently not well understood whether CI users, like NH listeners, show a multisensory facilitation for auditory recognition. The present study used event-related potentials (ERPs) and a continuous recognition paradigm with auditory and audio-visual stimuli to test the prediction that CI users show a benefit from audio-visual perception. Indeed, the congruent audio-visual context resulted in an improved recognition ability of objects in an auditory-only condition, both in the NH listeners and the CI users. The ERPs revealed a group-specific pattern of voltage topographies and correlations between these ERP maps and the auditory recognition ability, indicating a different processing of congruent audio-visual stimuli in CI users when compared to NH listeners. Taken together, our results point to distinct cortical processing of naturalistic audio-visual objects in CI users and NH listeners, which nevertheless allows both groups to improve the recognition ability of these objects in a purely auditory context. Our findings are of relevance for future clinical research, since audio-visual perception might also improve auditory rehabilitation after cochlear implantation.
Affiliation(s)
- Jan-Ole Radecke
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Germany; Institute of Audioneurotechnology, Hannover Medical School, Hannover, Germany; Department of Experimental Otology, ENT Clinics, Hannover Medical School, Hannover, Germany.
- Irina Schierholz
- Department of Otolaryngology, Hannover Medical School, Hannover, Germany; Department of Otorhinolaryngology, University of Cologne, Cologne, Germany
- Andrej Kral
- Institute of Audioneurotechnology, Hannover Medical School, Hannover, Germany; Department of Experimental Otology, ENT Clinics, Hannover Medical School, Hannover, Germany
- Thomas Lenarz
- Institute of Audioneurotechnology, Hannover Medical School, Hannover, Germany; Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Micah M Murray
- The LINE (The Laboratory for Investigative Neurophysiology), Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; CIBM Center for Biomedical Imaging of Lausanne and Geneva, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des aveugles, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Pascale Sandmann
- Department of Otorhinolaryngology, University of Cologne, Cologne, Germany
22
Benetti S, Collignon O. Cross-modal integration and plasticity in the superior temporal cortex. Handb Clin Neurol 2022; 187:127-143. [PMID: 35964967 DOI: 10.1016/b978-0-12-823493-8.00026-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Indexed: 06/15/2023]
Abstract
In congenitally deaf people, temporal regions typically believed to be primarily auditory enhance their response to nonauditory information. The neural mechanisms and functional principles underlying this phenomenon, as well as its impact on auditory recovery after sensory restoration, remain debated. In this chapter, we demonstrate that the cross-modal recruitment of temporal regions by visual inputs in congenitally deaf people follows organizational principles known to be present in the hearing brain. We propose that the functional and structural mechanisms allowing optimal convergence of multisensory information in the temporal cortex of hearing people also provide the neural scaffolding for feeding visual or tactile information into the deafened temporal areas. Innate in nature, such anatomo-functional links between the auditory and other sensory systems would represent the common substrate of both early multisensory integration and the expression of selective cross-modal plasticity in the superior temporal cortex.
Affiliation(s)
- Stefania Benetti
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy
- Olivier Collignon
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy; Institute for Research in Psychology and Neuroscience, Faculty of Psychology and Educational Science, UC Louvain, Louvain-la-Neuve, Belgium.
23
Prince P, Paul BT, Chen J, Le T, Lin V, Dimitrijevic A. Neural correlates of visual stimulus encoding and verbal working memory differ between cochlear implant users and normal-hearing controls. Eur J Neurosci 2021; 54:5016-5037. [PMID: 34146363 PMCID: PMC8457219 DOI: 10.1111/ejn.15365] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 11/09/2020] [Revised: 06/10/2021] [Accepted: 06/14/2021] [Indexed: 11/29/2022]
Abstract
A common concern for individuals with severe-to-profound hearing loss fitted with cochlear implants (CIs) is difficulty following conversations in noisy environments. Recent work has suggested that these difficulties are related to individual differences in brain function, including verbal working memory and the degree of cross-modal reorganization of auditory areas for visual processing. However, the neural basis for these relationships is not fully understood. Here, we investigated neural correlates of visual verbal working memory and sensory plasticity in 14 CI users and age-matched normal-hearing (NH) controls. While we recorded the high-density electroencephalogram (EEG), participants completed a modified Sternberg visual working memory task where sets of letters and numbers were presented visually and then recalled at a later time. Results suggested that the behavioural working memory performance of CI users was comparable to that of the NH controls. However, CI users had more pronounced neural activity during visual stimulus encoding, including stronger visual-evoked activity in auditory and visual cortices, larger modulations of neural oscillations and increased frontotemporal connectivity. In contrast, during memory retention of the characters, CI users had descriptively weaker neural oscillations and significantly lower frontotemporal connectivity. We interpret the differences in neural correlates of visual stimulus processing in CI users through the lens of cross-modal and intramodal plasticity.
Affiliation(s)
- Priyanka Prince
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada; Department of Physiology, University of Toronto, Toronto, Ontario, Canada
- Brandon T Paul
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada; Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Department of Psychology, Ryerson University, Toronto, Ontario, Canada
- Joseph Chen
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Trung Le
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Vincent Lin
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Andrew Dimitrijevic
- Evaluative Clinical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario, Canada; Department of Physiology, University of Toronto, Toronto, Ontario, Canada; Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
24
Visual motion processing recruits regions selective for auditory motion in early deaf individuals. Neuroimage 2021; 230:117816. [PMID: 33524580 DOI: 10.1016/j.neuroimage.2021.117816] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 08/06/2020] [Revised: 01/18/2021] [Accepted: 01/25/2021] [Indexed: 01/24/2023]
Abstract
In early deaf individuals, the auditory-deprived temporal brain regions become engaged in visual processing. In our study we tested further the hypothesis that intrinsic functional specialization guides the expression of cross-modal responses in the deprived auditory cortex. We used functional MRI to characterize the brain response to horizontal, radial and stochastic visual motion in early deaf and hearing individuals matched for the use of oral or sign language. Visual motion elicited an enhanced response in the 'deaf' mid-lateral planum temporale, a region selective for auditory motion, as demonstrated by a separate auditory motion localizer in hearing people. Moreover, multivariate pattern analysis revealed that this reorganized temporal region showed enhanced decoding of motion categories in the deaf group, while the visual motion-selective region hMT+/V5 showed reduced decoding when compared to hearing people. Dynamic Causal Modelling revealed that the 'deaf' motion-selective temporal region shows a specific increase in its functional interactions with hMT+/V5 and is now part of a large-scale visual motion-selective network. In addition, we observed preferential responses to radial, compared to horizontal, visual motion in the 'deaf' right superior temporal cortex, a region that also shows a preferential response to approaching/receding sounds in the hearing brain. Overall, our results suggest that the early experience of auditory deprivation interacts with intrinsic constraints and triggers a large-scale reallocation of computational load between auditory and visual brain regions that typically support the multisensory processing of motion information.
25
Bottari D, Bednaya E, Dormal G, Villwock A, Dzhelyova M, Grin K, Pietrini P, Ricciardi E, Rossion B, Röder B. EEG frequency-tagging demonstrates increased left hemispheric involvement and crossmodal plasticity for face processing in congenitally deaf signers. Neuroimage 2020; 223:117315. [DOI: 10.1016/j.neuroimage.2020.117315] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Received: 12/16/2019] [Revised: 08/06/2020] [Accepted: 08/25/2020] [Indexed: 12/14/2022]
26
Altered Brain Activity and Functional Connectivity in Unilateral Sudden Sensorineural Hearing Loss. Neural Plast 2020; 2020:9460364. [PMID: 33029130 PMCID: PMC7527900 DOI: 10.1155/2020/9460364] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Received: 03/24/2020] [Revised: 08/04/2020] [Accepted: 08/18/2020] [Indexed: 11/18/2022]
Abstract
Background Sudden sensorineural hearing loss (SSNHL) is an otologic emergency and can lead to social difficulties and mental disorders in some patients. Although many studies have analyzed altered brain function in populations with hearing loss, little information is available about patients with idiopathic SSNHL. This study aimed to investigate brain functional changes in SSNHL via functional magnetic resonance imaging (fMRI). Methods Thirty-six patients with SSNHL and thirty well-matched normal-hearing individuals underwent resting-state fMRI. Amplitude of low-frequency fluctuation (ALFF), fractional ALFF (fALFF), and functional connectivity (FC) values were calculated. Results In the SSNHL patients, ALFF and fALFF were significantly increased in the bilateral putamen but decreased in the right calcarine cortex, right middle temporal gyrus (MTG), and right precentral gyrus. Widespread increases in FC were observed between brain regions, mainly including the bilateral auditory cortex, bilateral visual cortex, left striatum, left angular gyrus (AG), bilateral precuneus, and bilateral limbic lobes in patients with SSNHL. No decreased FC was observed. Conclusion SSNHL causes functional alterations in brain regions, mainly in the striatum, auditory cortex, visual cortex, MTG, AG, precuneus, and limbic lobes within the acute period of hearing loss.
27
Deng Q, Gu F, Tong SX. Lexical processing in sign language: A visual mismatch negativity study. Neuropsychologia 2020; 148:107629. [PMID: 32976852 DOI: 10.1016/j.neuropsychologia.2020.107629] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 04/22/2020] [Revised: 09/08/2020] [Accepted: 09/14/2020] [Indexed: 10/23/2022]
Abstract
Event-related potential studies of spoken and written language show automatic access to auditory and visual words, as indexed by the mismatch negativity (MMN) or visual MMN (vMMN). The present study examined whether the same automatic lexical processing occurs in a visual-gestural language, i.e., Hong Kong Sign Language (HKSL). Using a classic visual oddball paradigm, deaf signers and hearing non-signers were presented with a sequence of static images representing HKSL lexical signs and non-signs. When compared with hearing non-signers, deaf signers exhibited an enhanced vMMN elicited by the lexical signs at around 230 ms, and a larger P1-N170 complex evoked by both lexical sign and non-sign standards at the parieto-occipital area in the early time window between 65 ms and 170 ms. These findings indicate that deaf signers implicitly process lexical signs and that neural response differences between deaf signers and hearing non-signers occur at an early stage of sign processing.
Affiliation(s)
- Qinli Deng
- Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong, China.
- Feng Gu
- Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong, China; The College of Literature and Journalism, Sichuan University, Chengdu, China.
- Shelley Xiuli Tong
- Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong, China.
28
Campbell J, Sharma A. Frontal Cortical Modulation of Temporal Visual Cross-Modal Re-organization in Adults with Hearing Loss. Brain Sci 2020; 10:brainsci10080498. [PMID: 32751543 PMCID: PMC7465622 DOI: 10.3390/brainsci10080498] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 07/07/2020] [Revised: 07/24/2020] [Accepted: 07/27/2020] [Indexed: 11/19/2022]
Abstract
Recent research has demonstrated that frontal cortical involvement co-occurs with visual re-organization, suggestive of top-down modulation of cross-modal mechanisms. However, it is unclear whether top-down modulation of visual re-organization takes place in mild hearing loss, or is dependent upon greater degrees of hearing-loss severity. Thus, the purpose of this study was to determine whether frontal top-down modulation of visual cross-modal re-organization increases with hearing-loss severity. We recorded visual evoked potentials (VEPs) in response to apparent-motion stimuli in 17 adults with mild-moderate hearing loss using 128-channel high-density electroencephalography (EEG). Current density reconstructions (CDRs) were generated using sLORETA to visualize VEP generators in both groups. VEP latency and amplitude in frontal regions of interest (ROIs) were compared between groups and correlated with auditory behavioral measures. Activation of frontal networks in response to visual stimulation increased from mild to moderate hearing loss, with simultaneous activation of the temporal cortex. In addition, group differences in VEP latency and amplitude correlated with auditory behavioral measures. Overall, these findings support the hypothesis that frontal top-down modulation of visual cross-modal re-organization depends upon hearing-loss severity.
Affiliation(s)
- Julia Campbell
- Central Sensory Processes Laboratory, Department of Communication Sciences and Disorders, University of Texas at Austin, 2504 Whitis Ave a1100, Austin, TX 78712, USA
- Anu Sharma
- Brain and Behavior Laboratory, Institute of Cognitive Science, Department of Speech, Language and Hearing Science, University of Colorado at Boulder, 409 UCB, 2501 Kittredge Loop Drive, Boulder, CO 80309, USA
29
Shalev T, Schwartz S, Miller P, Hadad BS. Do deaf individuals have better visual skills in the periphery? Evidence from processing facial attributes. Visual Cognition 2020. [DOI: 10.1080/13506285.2020.1770390] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/24/2022]
Affiliation(s)
- Tal Shalev
- Department of Special Education, University of Haifa, Haifa, Israel
- Sivan Schwartz
- Department of Psychology, University of Haifa, Haifa, Israel
- Paul Miller
- Department of Special Education, University of Haifa, Haifa, Israel
- Bat-Sheva Hadad
- Department of Special Education, University of Haifa, Haifa, Israel
- Edmond J. Safra Brain Research Center, University of Haifa, Haifa, Israel
30
Bauer AKR, Debener S, Nobre AC. Synchronisation of Neural Oscillations and Cross-modal Influences. Trends Cogn Sci 2020; 24:481-495. [PMID: 32317142 PMCID: PMC7653674 DOI: 10.1016/j.tics.2020.03.003] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Received: 09/06/2019] [Revised: 02/20/2020] [Accepted: 03/14/2020] [Indexed: 01/23/2023]
Abstract
At any given moment, we receive multiple signals from our different senses. Prior research has shown that signals in one sensory modality can influence neural activity and behavioural performance associated with another sensory modality. Recent human and nonhuman primate studies suggest that such cross-modal influences in sensory cortices are mediated by the synchronisation of ongoing neural oscillations. In this review, we consider two mechanisms proposed to facilitate cross-modal influences on sensory processing, namely cross-modal phase resetting and neural entrainment. We consider how top-down processes may further influence cross-modal processing in a flexible manner, and we highlight fruitful directions for further research.
Affiliation(s)
- Anna-Katharina R Bauer
- Department of Experimental Psychology, Brain and Cognition Lab, Oxford Centre for Human Brain Activity, Department of Psychiatry, Wellcome Centre for Integrative Neuroimaging, University of Oxford, UK.
- Stefan Debener
- Department of Psychology, Neuropsychology Lab, Cluster of Excellence Hearing4All, University of Oldenburg, Germany
- Anna C Nobre
- Department of Experimental Psychology, Brain and Cognition Lab, Oxford Centre for Human Brain Activity, Department of Psychiatry, Wellcome Centre for Integrative Neuroimaging, University of Oxford, UK
31
Age-related hearing loss influences functional connectivity of auditory cortex for the McGurk illusion. Cortex 2020; 129:266-280. [PMID: 32535378 DOI: 10.1016/j.cortex.2020.04.022] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 11/25/2019] [Revised: 03/30/2020] [Accepted: 04/09/2020] [Indexed: 01/23/2023]
Abstract
Age-related hearing loss affects hearing at high frequencies and is associated with difficulties in understanding speech. Increased audio-visual integration has recently been found in age-related hearing impairment; however, the brain mechanisms that contribute to this effect remain unclear. We used functional magnetic resonance imaging in elderly subjects with normal hearing and with mild to moderate uncompensated hearing loss. Audio-visual integration was studied using the McGurk task. In this task, an illusionary fused percept can occur if incongruent auditory and visual syllables are presented. The paradigm included unisensory stimuli (auditory only, visual only), congruent audio-visual and incongruent (McGurk) audio-visual stimuli. An illusionary percept was reported in over 60% of incongruent trials. These McGurk illusion rates were equal in both groups of elderly subjects and correlated positively with speech-in-noise perception and daily listening effort. Normal-hearing participants showed an increased neural response in the left pre- and postcentral gyri and right middle frontal gyrus for incongruent (McGurk) stimuli compared to congruent audio-visual stimuli. Activation patterns were, however, not different between groups. Task-modulated functional connectivity differed between groups, showing increased connectivity from auditory cortex to visual, parietal and frontal areas in hard-of-hearing participants as compared to normal-hearing participants when comparing incongruent (McGurk) with congruent audio-visual stimuli. These results suggest that changes in functional connectivity of the auditory cortex, rather than activation strength, during processing of audio-visual McGurk stimuli accompany age-related hearing loss.
32
Mowad TG, Willett AE, Mahmoudian M, Lipin M, Heinecke A, Maguire AM, Bennett J, Ashtari M. Compensatory Cross-Modal Plasticity Persists After Sight Restoration. Front Neurosci 2020; 14:291. [PMID: 32477041 PMCID: PMC7235304 DOI: 10.3389/fnins.2020.00291] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Received: 12/09/2019] [Accepted: 03/13/2020] [Indexed: 11/30/2022]
Abstract
Sensory deprivation prompts extensive structural and functional reorganization of the cortex, with the cortical territory of the lost sense becoming occupied by the intact sensory systems. This process, known as cross-modal plasticity, has been widely studied in individuals with vision or hearing loss. However, little is known about the neuroplastic changes that accompany restoration of the deprived sense. Some reports consider this cross-modal functionality maladaptive to the return of the original sense, while others view it as a critical process that keeps the neurons of the deprived sense active and operational. These opposing views have been debated in both auditory and vision restoration studies for decades. Recently, with the approval of Luxturna as the first retinal gene therapy (GT) drug to reverse blindness, there has been renewed interest in the crucial role of cross-modal plasticity in sight restoration. Employing a battery of task-based and resting-state functional magnetic resonance imaging (rsfMRI) measures, we tracked the functional changes in response to auditory and visual stimuli and at rest in a group of patients with biallelic mutations in the RPE65 gene (“RPE65 patients”) before and 3 years after GT, in comparison to a group of sighted controls. While the sighted controls did not present any evidence of auditory cross-modal plasticity, robust responses to the auditory stimuli were found in the occipital cortex of the RPE65 patients, overlapping the visual responses and significantly elevated 3 years after GT. The rsfMRI results showed significant connectivity between the auditory and visual areas for both groups, albeit attenuated in patients at baseline and enhanced 3 years after GT.
Taken together, these findings demonstrate that (1) RPE65 patients present with an auditory cross-modal component; (2) visual and non-visual responses of the visual cortex are considerably enhanced after vision restoration; and (3) auditory cross-modal functions did not adversely affect the success of vision restitution. We hypothesize that following GT, to meet the demand for the newly established retinal signals, remaining or dormant visual neurons are revived or unmasked for greater participation. These neurons or a subset of these neurons respond to both the visual and non-visual demands and further strengthen connectivity between the auditory and visual cortices.
Affiliation(s)
- Theresa G Mowad
- Department of Ophthalmology, Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, United States
- Aimee E Willett
- The Edward Via College of Osteopathic Medicine, Blacksburg, VA, United States
- Mikhail Lipin
- Department of Ophthalmology, Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, United States
- Armin Heinecke
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands
- Albert M Maguire
- Department of Ophthalmology, Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, United States; Department of Ophthalmology, F.M. Kirby Center for Molecular Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA, United States; Center for Cellular and Molecular Therapeutics, The Children's Hospital of Philadelphia, Philadelphia, PA, United States
- Jean Bennett
- Department of Ophthalmology, Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, United States; Department of Ophthalmology, F.M. Kirby Center for Molecular Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA, United States; Center for Cellular and Molecular Therapeutics, The Children's Hospital of Philadelphia, Philadelphia, PA, United States
- Manzar Ashtari
- Department of Ophthalmology, Center for Advanced Retinal and Ocular Therapeutics, University of Pennsylvania, Philadelphia, PA, United States; Department of Ophthalmology, F.M. Kirby Center for Molecular Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA, United States; Department of Radiology, University of Pennsylvania, Philadelphia, PA, United States
|
33
|
Müller JA, Kollmeier B, Debener S, Brand T. Influence of auditory attention on sentence recognition captured by the neural phase. Eur J Neurosci 2020. [DOI: 10.1111/ejn.13896]
Affiliation(s)
- Jana Annina Müller
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Birger Kollmeier
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Stefan Debener
- Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Neuropsychology, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Thomas Brand
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
34
Glick HA, Sharma A. Cortical Neuroplasticity and Cognitive Function in Early-Stage, Mild-Moderate Hearing Loss: Evidence of Neurocognitive Benefit From Hearing Aid Use. Front Neurosci 2020; 14:93. [PMID: 32132893 PMCID: PMC7040174 DOI: 10.3389/fnins.2020.00093]
Abstract
Age-related hearing loss (ARHL) is associated with cognitive decline as well as structural and functional brain changes. However, the mechanisms underlying neurocognitive deficits in ARHL are poorly understood and it is unclear whether clinical treatment with hearing aids may modify neurocognitive outcomes. To address these topics, cortical visual evoked potentials (CVEPs), cognitive function, and speech perception abilities were measured in 28 adults with untreated, mild-moderate ARHL and 13 age-matched normal hearing (NH) controls. The group of adults with ARHL were then fit with bilateral hearing aids and re-evaluated after 6 months of amplification use. At baseline, the ARHL group exhibited more extensive recruitment of auditory, frontal, and pre-frontal cortices during a visual motion processing task, providing evidence of cross-modal re-organization and compensatory cortical neuroplasticity. Further, more extensive cross-modal recruitment of the right auditory cortex was associated with greater degree of hearing loss, poorer speech perception in noise, and worse cognitive function. Following clinical treatment with hearing aids, a reversal in cross-modal re-organization of auditory cortex by vision was observed in the ARHL group, coinciding with gains in speech perception and cognitive performance. Thus, beyond the known benefits of hearing aid use on communication, outcomes from this study provide evidence that clinical intervention with well-fit amplification may promote more typical cortical organization and functioning and provide cognitive benefit.
Affiliation(s)
- Anu Sharma
- Brain and Behavior Laboratory, Department of Speech, Language, and Hearing Science, Center for Neuroscience, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, United States
35
Rosemann S, Thiel CM. Neural Signatures of Working Memory in Age-related Hearing Loss. Neuroscience 2020; 429:134-142. [PMID: 31935488 DOI: 10.1016/j.neuroscience.2019.12.046]
Abstract
Age-related hearing loss affects the ability to hear high frequencies and therefore leads to difficulties in understanding speech, particularly under adverse listening conditions. This decrease in hearing can be partly compensated by the recruitment of executive functions, such as working memory. The compensatory effort may, however, lead to a decrease in available neural resources, compromising cognitive abilities. We here aim to investigate whether mild to moderate hearing loss impacts prefrontal functions and related executive processes, and whether these are related to speech-in-noise perception abilities. Nineteen hard-of-hearing and nineteen age-matched normal-hearing participants performed a working memory task to drive prefrontal activity, which was gauged with functional magnetic resonance imaging. In addition, speech-in-noise understanding, cognitive flexibility and inhibition control were assessed. Our results showed no differences in frontoparietal activation patterns or working memory performance between normal-hearing and hard-of-hearing participants. The behavioral assessment of further executive functions, however, provided evidence of lower cognitive flexibility in hard-of-hearing participants. Cognitive flexibility and hearing abilities further predicted speech-in-noise perception. We conclude that neural and behavioral signatures of working memory are intact in mild to moderate hearing loss. Moreover, cognitive flexibility seems to be closely related to hearing impairment and speech-in-noise perception and should therefore be investigated in future studies assessing age-related hearing loss and its implications for prefrontal functions.
Affiliation(s)
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
- Christiane M Thiel
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
36
Jacquemin L, Mertens G, Schlee W, Van de Heyning P, Gilles A. Literature overview on P3 measurement as an objective measure of auditory performance in post-lingually deaf adults with a cochlear implant. Int J Audiol 2019; 58:816-823. [PMID: 31441664 DOI: 10.1080/14992027.2019.1654622]
Abstract
Objective: Cochlear implantation results in restoration of hearing, potential cortical reorganisation and the reallocation of attentional resources to the auditory system. Hence, the distorted cortical activity of patients with profound sensorineural hearing loss may be partially reversed. The measurement of auditory event-related potentials (ERPs) forms a promising electrophysiological evaluation of the central auditory nervous system. In particular, the P3 component is hypothesised to be a differential indicator of subjective auditory discrimination. This overview discusses the association between the cortical P3 component and performance on auditory tests in post-lingually deaf adults using a cochlear implant (CI). Moreover, the current article proposes important guidelines on eliciting, recording and analysing ERPs in CI users.
Design: The literature search was conducted in PubMed.
Study sample: Articles were included if they focussed on the relationship between P3 and the auditory performance of an adult CI population.
Results: The higher-order processing of speech in quiet and in noise in adult CI users is correlated with ERP components, including the P3, shedding light on the neurophysiological foundations of auditory performance differences.
Conclusions: There is a need for replicating studies with larger sample sizes to fully comprehend the relationship between P3 and the auditory performance of CI users.
Affiliation(s)
- Laure Jacquemin
- University Department of Otorhinolaryngology and Head and Neck Surgery, Antwerp University Hospital, Edegem, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Universiteitsplein, Wilrijk, Belgium
- Griet Mertens
- University Department of Otorhinolaryngology and Head and Neck Surgery, Antwerp University Hospital, Edegem, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Universiteitsplein, Wilrijk, Belgium
- Winfried Schlee
- Department of Psychiatry and Psychotherapy, University of Regensburg, Germany
- Paul Van de Heyning
- University Department of Otorhinolaryngology and Head and Neck Surgery, Antwerp University Hospital, Edegem, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Universiteitsplein, Wilrijk, Belgium
- Annick Gilles
- University Department of Otorhinolaryngology and Head and Neck Surgery, Antwerp University Hospital, Edegem, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Universiteitsplein, Wilrijk, Belgium; Department of Education, Health & Social Work, University College Ghent, Ghent, Belgium
37
Hearing-impaired listeners show increased audiovisual benefit when listening to speech in noise. Neuroimage 2019; 196:261-268. [PMID: 30978494 DOI: 10.1016/j.neuroimage.2019.04.017]
Abstract
Recent studies provide evidence for changes in audiovisual perception as well as for adaptive cross-modal auditory cortex plasticity in older individuals with high-frequency hearing impairments (presbycusis). We here investigated whether these changes facilitate the use of visual information, leading to an increased audiovisual benefit of hearing-impaired individuals when listening to speech in noise. We used a naturalistic design in which older participants with a varying degree of high-frequency hearing loss attended to running auditory or audiovisual speech in noise and detected rare target words. Passages containing only visual speech served as a control condition. Simultaneously acquired scalp electroencephalography (EEG) data were used to study cortical speech tracking. Target word detection accuracy was significantly increased in the audiovisual as compared to the auditory listening condition. The degree of this audiovisual enhancement was positively related to individual high-frequency hearing loss and subjectively reported listening effort in challenging daily life situations, which served as a subjective marker of hearing problems. On the neural level, the early cortical tracking of the speech envelope was enhanced in the audiovisual condition. Similar to the behavioral findings, individual differences in the magnitude of the enhancement were positively associated with listening effort ratings. Our results therefore suggest that hearing-impaired older individuals make increased use of congruent visual information to compensate for the degraded auditory input.
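The cortical speech tracking mentioned in this abstract is commonly quantified by relating the EEG signal to the acoustic speech envelope across a range of time lags. The following is a minimal NumPy sketch with fully simulated signals, not the study's analysis; the sampling rate, neural delay, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 100, 60                      # 100 Hz envelope rate, 60 s of "speech"
n = fs * dur

# Toy acoustic envelope: smoothed rectified noise
envelope = np.abs(rng.normal(size=n))
envelope = np.convolve(envelope, np.ones(10) / 10, mode="same")

# Simulated EEG channel: the envelope, delayed by 100 ms, plus sensor noise
delay = int(0.1 * fs)
eeg = np.roll(envelope, delay) + rng.normal(scale=0.5, size=n)

def lagged_corr(stim, resp, lag):
    """Pearson correlation of stim[t] with resp[t + lag]."""
    if lag > 0:
        stim, resp = stim[:-lag], resp[lag:]
    return np.corrcoef(stim, resp)[0, 1]

# Tracking profile over 0-300 ms lags; the peak lag estimates the
# stimulus-to-brain delay
lags = range(0, 31)
r = [lagged_corr(envelope, eeg, lag) for lag in lags]
best_lag_s = lags[int(np.argmax(r))] / fs
```

In practice this cross-correlation view is usually replaced by temporal response functions or stimulus-reconstruction models, but the quantity of interest, how strongly and at what latency the EEG follows the envelope, is the same.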
38
Han JH, Lee HJ, Kang H, Oh SH, Lee DS. Brain Plasticity Can Predict the Cochlear Implant Outcome in Adult-Onset Deafness. Front Hum Neurosci 2019; 13:38. [PMID: 30837852 PMCID: PMC6389609 DOI: 10.3389/fnhum.2019.00038]
Abstract
Sensory plasticity, which is associated with deafness, has not been as thoroughly investigated in the adult brain as it has in the developing brain. In this study, we examined the brain reorganization induced by auditory deprivation in people with adult-onset deafness and its clinical relevance by measuring glucose metabolism before cochlear implant (CI) surgery. F-18 fluorodeoxyglucose positron emission tomography (18F-FDG-PET) scans were performed in 37 postlingually deafened patients during the preoperative workup period, and in 39 normal-hearing (NH) controls. Behavioral CI outcomes were measured at 1 year after implantation using a phoneme identification test with auditory cueing only. In the deaf individuals, areas involved in the auditory pathway such as the inferior colliculus and bilateral superior temporal gyri were hypometabolic compared to the NH controls. The hypometabolism observed in the deaf auditory cortices gradually returned to levels similar to the controls as the duration of deafness increased. However, contrary to our previous findings in congenitally deaf children, this metabolic recovery failed to have a significant prognostic value for the recovery of the speech perception ability in adult CI patients. In a broad occipital area centered on the primary visual cortices, glucose metabolism was higher in the deaf patients than the controls, suggesting that the area had become visually hyperactive for sensory compensation immediately after the onset of deafness. In addition, a negative correlation between the metabolic activity and behavioral speech perception outcomes was observed in the visual association areas. In the medial frontal cortices, cortical metabolism in most patients decreased, but patients who had preserved metabolic activities showed better speech performance. 
These results suggest that the auditory cortex in people with adult-onset deafness is relatively resistant to cross-modal plasticity, and instead, individual traits in late-stage visual processing and cognitive control seem to be more reliable prognostic markers for adult-onset deafness.
Affiliation(s)
- Ji-Hye Han
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Hyo-Jeong Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Chuncheon, South Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, South Korea
- Hyejin Kang
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, South Korea; BK21 Plus Global Translational Research on Molecular Medicine and Biopharmaceutical Sciences, Seoul National University, Seoul, South Korea
- Seung-Ha Oh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University College of Medicine, Seoul, South Korea; Sensory Organ Research Institute, Seoul National University College of Medicine, Seoul, South Korea
- Dong Soo Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, South Korea; Department of Molecular Medicine and Biopharmaceutical Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
39
Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory Integration in Cochlear Implant Recipients. Ear Hear 2018; 38:521-538. [PMID: 28399064 DOI: 10.1097/aud.0000000000000435]
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributable to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains resultant from audiovisual integration, suggesting a sensitive period in development for the brain networks that subserve these integrative functions, as well as length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing. 
Importantly, patterns of auditory, visual, and audiovisual responses suggest that the underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection, will help to further assess the mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, Ontario, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, United States; Vanderbilt Brain Institute, Nashville, Tennessee; Vanderbilt Kennedy Center, Nashville, Tennessee; Department of Psychology, Vanderbilt University, Nashville, Tennessee; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
40
Stropahl M, Bauer AKR, Debener S, Bleichner MG. Source-Modeling Auditory Processes of EEG Data Using EEGLAB and Brainstorm. Front Neurosci 2018; 12:309. [PMID: 29867321 PMCID: PMC5952032 DOI: 10.3389/fnins.2018.00309]
Abstract
Electroencephalography (EEG) source localization approaches are often used to disentangle the spatial patterns mixed up in scalp EEG recordings. However, approaches differ substantially between experiments, may be strongly parameter-dependent, and results are not necessarily meaningful. In this paper we provide a pipeline for EEG source estimation, from raw EEG data pre-processing using EEGLAB functions up to source-level analysis as implemented in Brainstorm. The pipeline is tested using a data set of 10 individuals performing an auditory attention task. The analysis approach estimates sources of 64-channel EEG data without the prerequisite of individual anatomies or individually digitized sensor positions. First, we show advanced EEG pre-processing using EEGLAB, which includes artifact attenuation using independent component analysis (ICA). ICA is a linear decomposition technique that aims to reveal the underlying statistical sources of mixed signals and is further a powerful tool to attenuate stereotypical artifacts (e.g., eye movements or heartbeat). Data submitted to ICA are pre-processed to facilitate good-quality decompositions. Aiming toward an objective approach on component identification, the semi-automatic CORRMAP algorithm is applied for the identification of components representing prominent and stereotypic artifacts. Second, we present a step-wise approach to estimate active sources of auditory cortex event-related processing, on a single subject level. The presented approach assumes that no individual anatomy is available and therefore the default anatomy ICBM152, as implemented in Brainstorm, is used for all individuals. Individual noise modeling in this dataset is based on the pre-stimulus baseline period. For EEG source modeling we use the OpenMEEG algorithm as the underlying forward model based on the symmetric Boundary Element Method (BEM). 
We then apply the method of dynamical statistical parametric mapping (dSPM) to obtain physiologically plausible EEG source estimates. Finally, we show how to perform group level analysis in the time domain on anatomically defined regions of interest (auditory scout). The proposed pipeline needs to be tailored to the specific datasets and paradigms. However, the straightforward combination of EEGLAB and Brainstorm analysis tools may be of interest to others performing EEG source localization.
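Two core steps of this pipeline, linear ICA-based artifact attenuation and noise-normalized (dSPM-style) source estimation, can be illustrated with a small NumPy toy example. This is a conceptual sketch, not the EEGLAB/Brainstorm implementation: a known mixing matrix stands in for a converged ICA fit, the hand-picked "artifact component" replaces CORRMAP identification, and a random lead field replaces the OpenMEEG BEM forward model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_src, n_t = 8, 3, 1000
t = np.linspace(0, 10, n_t)

# Toy underlying components: two oscillations and a blink-like artifact
blink = np.zeros(n_t)
blink[::100] = 50.0                     # sparse, high-amplitude events
S = np.vstack([np.sin(2 * np.pi * 10 * t),
               np.sin(2 * np.pi * 4 * t + 1.0),
               blink])
A = rng.normal(size=(n_ch, n_src))      # mixing (component topographies)
X = A @ S                               # "scalp EEG": channels x time

# ICA would estimate an unmixing matrix from X alone; here the
# pseudoinverse of the known mixing stands in for that estimate.
W = np.linalg.pinv(A)
comps = W @ X

# Artifact attenuation: zero the identified component, then back-project
comps_clean = comps.copy()
comps_clean[2] = 0.0
X_clean = A @ comps_clean

# dSPM-style noise normalization (conceptual): a minimum-norm estimate
# divided by its predicted per-source noise standard deviation.
L = rng.normal(size=(n_ch, n_src))      # toy lead field (stand-in for BEM)
C = np.eye(n_ch)                        # noise covariance from baseline
lam = 0.1                               # regularization
K = L.T @ np.linalg.inv(L @ L.T + lam * C)          # inverse operator
noise_sd = np.sqrt(np.einsum("ij,jk,ik->i", K, C, K))  # diag(K C K^T) ** 0.5
dspm = (K @ X_clean) / noise_sd[:, None]            # sources x time
```

Zeroing component rows before back-projection is exactly how ICA cleaning operates on real data; only the way the unmixing matrix is obtained differs.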
Affiliation(s)
- Maren Stropahl
- Neuropsychology Lab, Department of Psychology, European Medical School, University of Oldenburg, Oldenburg, Germany
- Anna-Katharina R Bauer
- Neuropsychology Lab, Department of Psychology, European Medical School, University of Oldenburg, Oldenburg, Germany
- Stefan Debener
- Neuropsychology Lab, Department of Psychology, European Medical School, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
- Martin G Bleichner
- Neuropsychology Lab, Department of Psychology, European Medical School, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
41
Cardon G, Sharma A. Somatosensory Cross-Modal Reorganization in Adults With Age-Related, Early-Stage Hearing Loss. Front Hum Neurosci 2018; 12:172. [PMID: 29773983 PMCID: PMC5943502 DOI: 10.3389/fnhum.2018.00172]
Abstract
Under conditions of profound sensory deprivation, the brain has the propensity to reorganize. For example, intact sensory modalities often recruit deficient modalities' cortices for neural processing. This process is known as cross-modal reorganization and has been shown in congenitally and profoundly deaf patients. However, much less is known about cross-modal cortical reorganization in persons with less severe cases of age-related hearing loss (ARHL), even though such cases are far more common. Thus, we investigated cross-modal reorganization between the auditory and somatosensory modalities in older adults with normal hearing (NH) and mild-moderate ARHL in response to vibrotactile stimulation using high density electroencephalography (EEG). Results showed activation of the somatosensory cortices in adults with NH as well as those with hearing loss (HL). However, adults with mild-moderate ARHL also showed robust activation of auditory cortical regions in response to somatosensory stimulation. Neurophysiologic data exhibited significant correlations with speech perception in noise outcomes suggesting that the degree of cross-modal reorganization may be associated with functional performance. Our study presents the first evidence of somatosensory cross-modal reorganization of the auditory cortex in adults with early-stage, mild-moderate ARHL. Our findings suggest that even mild levels of ARHL associated with communication difficulty result in fundamental cortical changes.
Affiliation(s)
- Garrett Cardon
- Department of Psychiatry, University of Colorado Denver Anschutz Medical Campus, Aurora, CO, United States
- Anu Sharma
- Department of Speech, Language, and Hearing Sciences, University of Colorado Boulder, Boulder, CO, United States
42
Language and Sensory Neural Plasticity in the Superior Temporal Cortex of the Deaf. Neural Plast 2018; 2018:9456891. [PMID: 29853853 PMCID: PMC5954881 DOI: 10.1155/2018/9456891]
Abstract
Visual stimuli are known to activate the auditory cortex of deaf people, presenting evidence of cross-modal plasticity. However, the mechanisms underlying such plasticity are poorly understood. In this functional MRI study, we presented two types of visual stimuli, language stimuli (words, sign language, and lip-reading) and a general stimulus (checkerboard) to investigate neural reorganization in the superior temporal cortex (STC) of deaf subjects and hearing controls. We found that only in the deaf subjects, all visual stimuli activated the STC. The cross-modal activation induced by the checkerboard was mainly due to a sensory component via a feed-forward pathway from the thalamus and primary visual cortex, positively correlated with duration of deafness, indicating a consequence of pure sensory deprivation. In contrast, the STC activity evoked by language stimuli was functionally connected to both the visual cortex and the frontotemporal areas, which were highly correlated with the learning of sign language, suggesting a strong language component via a possible feedback modulation. While the sensory component exhibited specificity to features of a visual stimulus (e.g., selective to the form of words, bodies, or faces) and the language (semantic) component appeared to recruit a common frontotemporal neural network, the two components converged to the STC and caused plasticity with different multivoxel activity patterns. In summary, the present study showed plausible neural pathways for auditory reorganization and correlations of activations of the reorganized cortical areas with developmental factors and provided unique evidence towards the understanding of neural circuits involved in cross-modal plasticity.
43
Rosemann S, Thiel CM. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment. Neuroimage 2018; 175:425-437. [PMID: 29655940 DOI: 10.1016/j.neuroimage.2018.04.023]
Abstract
Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss, leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition.
Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss.
Affiliation(s)
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Christiane M Thiel
- Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
44
Does hearing aid use affect audiovisual integration in mild hearing impairment? Exp Brain Res 2018; 236:1161-1179. [PMID: 29453491 DOI: 10.1007/s00221-018-5206-6]
Abstract
There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals and those with profound hearing loss who are provided with cochlear implants, compared to normal-hearing adults. Still, little is known on the effects of hearing aid use on audiovisual integration in mild hearing loss, although this constitutes one of the most prevalent conditions in the elderly and, yet, often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and those with the same degree of mild hearing loss who were not using hearing aids, the non-users, by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration together with a wider window of integration in hearing aid users compared to their age-matched untreated peers. Signal detection analyses indicate that a change in perceptual sensitivity as well as in bias may underlie the observed effects. Our results and comparisons with other studies in normal-hearing older adults suggest that both mild hearing impairment and hearing aid use seem to affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.
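The signal detection analysis referred to here separates perceptual sensitivity (d′) from response bias (criterion c). Below is a minimal sketch of the standard computation from a 2x2 detection table; the trial counts are made-up illustration values, not data from the study:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from a 2x2 detection table.

    Applies the log-linear correction (+0.5 per cell) so that
    perfect hit or false-alarm rates do not yield infinite z-scores.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical observer: 50 signal-present and 50 signal-absent trials
d, c = sdt_measures(hits=45, misses=5, false_alarms=10, correct_rejections=40)
```

A group difference in d′ would indicate a change in perceptual sensitivity, whereas a shift in c alone would indicate a change in bias; the abstract reports evidence consistent with both.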
|
45
|
McGurk stimuli for the investigation of multisensory integration in cochlear implant users: The Oldenburg Audio Visual Speech Stimuli (OLAVS). Psychon Bull Rev 2018; 24:863-872. [PMID: 27562763 DOI: 10.3758/s13423-016-1148-9] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
|
46
|
Liang M, Zhang J, Liu J, Chen Y, Cai Y, Wang X, Wang J, Zhang X, Chen S, Li X, Chen L, Zheng Y. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants. Front Hum Neurosci 2017; 11:510. [PMID: 29114213 PMCID: PMC5660683 DOI: 10.3389/fnhum.2017.00510] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2017] [Accepted: 10/06/2017] [Indexed: 11/18/2022] Open
Abstract
Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, residual, more intense cortical activation in frontotemporal areas in response to photo stimuli has been found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CIs were recruited: ten good CI performers (GCP) and ten poor CI performers (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. The prelingually deaf children showed higher N1 amplitudes than normal controls. The GCP group showed significant decreases in N1 amplitude with CI use, and source analysis localized the largest decrease in brain activity to the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; neither change occurred in the PCP group. Higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were both related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use, but no significant cerebral hemispheric dominance was found. We propose that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential for cortical plasticity, and that the evolution of brain activity appears to be related to CI auditory outcomes.
Affiliation(s)
- Maojin Liang: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xin Hua College of Sun Yat-sen University, Guangzhou, China
- Junpeng Zhang: Department of Biomedical Information Engineering, School of Electrical Engineering and Information, Sichuan University, Chengdu, China
- Jiahao Liu: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xin Hua College of Sun Yat-sen University, Guangzhou, China
- Yuebo Chen: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xin Hua College of Sun Yat-sen University, Guangzhou, China
- Yuexin Cai: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xin Hua College of Sun Yat-sen University, Guangzhou, China
- Xianjun Wang: Department of Biomedical Information Engineering, School of Electrical Engineering and Information, Sichuan University, Chengdu, China
- Junbo Wang: Department of Clinical Medicine, Sun Yat-sen University, Guangzhou, China
- Xueyuan Zhang: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xin Hua College of Sun Yat-sen University, Guangzhou, China
- Suijun Chen: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xin Hua College of Sun Yat-sen University, Guangzhou, China
- Xianghui Li: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xin Hua College of Sun Yat-sen University, Guangzhou, China
- Ling Chen: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xin Hua College of Sun Yat-sen University, Guangzhou, China
- Yiqing Zheng: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xin Hua College of Sun Yat-sen University, Guangzhou, China
|
47
|
Stropahl M, Debener S. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration. Neuroimage Clin 2017; 16:514-523. [PMID: 28971005 PMCID: PMC5609862 DOI: 10.1016/j.nicl.2017.09.001] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2017] [Revised: 08/15/2017] [Accepted: 09/02/2017] [Indexed: 11/28/2022]
Abstract
There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensorineural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, it can also benefit face recognition and lip-reading. As CI users have been observed to differ in multisensory integration, the question arises whether cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mildly to moderately hearing-impaired individuals (n = 18) and normal-hearing controls (n = 17). Cross-modal activation of the auditory cortex, assessed by means of EEG source localization in response to human faces, and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations than age-matched normal-hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing-impaired individuals showed behavioral and neurophysiological results that were numerically between those of the other two groups, and a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the auditory system even at early stages of hearing loss.
Affiliation(s)
- Maren Stropahl: Neuropsychology Lab, Department of Psychology, European Medical School, Carl von Ossietzky University Oldenburg, Germany
- Stefan Debener: Neuropsychology Lab, Department of Psychology, European Medical School, Carl von Ossietzky University Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
|
48
|
Chen LC, Puschmann S, Debener S. Increased cross-modal functional connectivity in cochlear implant users. Sci Rep 2017; 7:10043. [PMID: 28855675 PMCID: PMC5577186 DOI: 10.1038/s41598-017-10792-2] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2016] [Accepted: 08/15/2017] [Indexed: 11/10/2022] Open
Abstract
Previous studies have reported increased cross-modal auditory and visual cortical activation in cochlear implant (CI) users, suggesting cross-modal reorganization of both visual and auditory cortices as a consequence of sensory deprivation and restoration. How these processes affect the functional connectivity of the auditory and visual systems in CI users is, however, unknown. Here we investigated task-induced intra-modal functional connectivity between hemispheres for both visual and auditory cortices, and cross-modal functional connectivity between visual and auditory cortices, using functional near-infrared spectroscopy in post-lingually deaf CI users and age-matched normal-hearing controls. Compared to controls, CI users exhibited decreased intra-modal functional connectivity between hemispheres and increased cross-modal functional connectivity between visual and left auditory cortices for both visual and auditory stimulus processing. Importantly, the difference between cross-modal functional connectivity for visual and for auditory stimuli correlated with speech recognition outcome in CI users: higher cross-modal connectivity for auditory than for visual stimuli was associated with better speech recognition abilities, pointing to a new pattern of functional reorganization that is related to successful hearing restoration with a CI.
Affiliation(s)
- Ling-Chia Chen: Neuropsychology Lab, Department of Psychology, European Medical School, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
- Sebastian Puschmann: Cluster of Excellence Hearing4all, Oldenburg, Germany; Biological Psychology Lab, Department of Psychology, European Medical School, University of Oldenburg, Oldenburg, Germany
- Stefan Debener: Neuropsychology Lab, Department of Psychology, European Medical School, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany; Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
|
49
|
Cross-Modal Plasticity in Higher-Order Auditory Cortex of Congenitally Deaf Cats Does Not Limit Auditory Responsiveness to Cochlear Implants. J Neurosci 2017; 36:6175-85. [PMID: 27277796 DOI: 10.1523/jneurosci.0046-16.2016] [Citation(s) in RCA: 62] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2016] [Accepted: 04/19/2016] [Indexed: 12/29/2022] Open
Abstract
Congenital sensory deprivation can lead to reorganization of the deprived cortical regions by another sensory system. Such cross-modal reorganization may either compete with or complement the "original" inputs to the deprived area after sensory restoration and can thus be either adverse or beneficial for sensory restoration. In congenital deafness, a previous inactivation study documented that supranormal visual behavior was mediated by higher-order auditory fields in congenitally deaf cats (CDCs). However, both the auditory responsiveness of "deaf" higher-order fields and interactions between the reorganized and the original sensory input remain unknown. Here, we studied a higher-order auditory field responsible for the supranormal visual function in CDCs, the auditory dorsal zone (DZ). Hearing cats and visual cortical areas served as controls. Using mapping with microelectrode arrays, we demonstrate spatially scattered visual (cross-modal) responsiveness in the DZ, but show that this did not interfere substantially with robust auditory responsiveness elicited through cochlear implants. Visually responsive and auditory-responsive neurons in the deaf auditory cortex formed two distinct populations that did not show bimodal interactions. Therefore, cross-modal plasticity in the deaf higher-order auditory cortex had limited effects on auditory inputs. The moderate number of scattered cross-modally responsive neurons could be the consequence of exuberant connections formed during development that were not pruned postnatally in deaf cats. Although juvenile brain circuits are modified extensively by experience, the main driving input to the cross-modally (visually) reorganized higher-order auditory cortex remained auditory in congenital deafness. SIGNIFICANCE STATEMENT In a common view, the "unused" auditory cortex of deaf individuals is reorganized to a compensatory sensory function during development. According to this view, cross-modal plasticity takes over the unused cortex and reassigns it to the remaining senses, and might therefore conflict with restoration of auditory function with cochlear implants. It is unclear whether the cross-modally reorganized auditory areas lose auditory responsiveness. We show that the presence of cross-modal plasticity in a higher-order auditory area does not reduce the auditory responsiveness of that area. Visual reorganization was moderate and spatially scattered, and there were no interactions between cross-modally reorganized visual and auditory inputs. These results indicate that cross-modal reorganization is less detrimental for neurosensory restoration than previously thought.
|
50
|
Functional selectivity for face processing in the temporal voice area of early deaf individuals. Proc Natl Acad Sci U S A 2017; 114:E6437-E6446. [PMID: 28652333 DOI: 10.1073/pnas.1618287114] [Citation(s) in RCA: 55] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magnetoencephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural responses for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.
|