1. Ben-David BM, Chebat DR, Icht M. "Love looks not with the eyes": supranormal processing of emotional speech in individuals with late-blindness versus preserved processing in individuals with congenital-blindness. Cogn Emot 2024:1-14. [PMID: 38785380] [DOI: 10.1080/02699931.2024.2357656]
Abstract
Processing of emotional speech in the absence of visual information relies on two auditory channels: semantics and prosody. No study to date has investigated how blindness impacts this process. Two theories, Perceptual Deficit and Sensory Compensation, yield different expectations about the role of visual experience (or the lack thereof) in processing emotional speech. To test the effect of vision and early visual experience on the processing of emotional speech, we compared individuals with congenital blindness (CB, n = 17), individuals with late blindness (LB, n = 15), and sighted controls (SC, n = 21) on identification and selective attention of semantic and prosodic spoken emotions. Results showed that individuals with blindness performed at least as well as SC, supporting Sensory Compensation and the role of cortical reorganisation. Individuals with LB outperformed individuals with CB, in accordance with Perceptual Deficit, supporting the role of early visual experience. The LB advantage was moderated by executive functions (working memory): it was erased for individuals with CB who showed higher levels of executive functions. Results suggest that vision is not necessary for processing emotional speech, but that early visual experience can improve it. The findings support a combination of the two aforementioned theories and reject a dichotomous view of blindness as deficiency or enhancement.
Affiliation(s)
- Boaz M Ben-David
  - Communication, Aging, and Neuropsychology Lab (CANlab), Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
  - Department of Speech-Language Pathology, University of Toronto, Toronto, Canada
  - KITE, Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, Canada
- Daniel-Robert Chebat
  - Visual and Cognitive Neuroscience Laboratory (VCN Lab), The Department of Psychology, Ariel University, Ariel, Israel
  - Navigation and Accessibility Research Center (NARCA), Ariel University, Ariel, Israel
- Michal Icht
  - Department of Communication Disorders, Ariel University, Ariel, Israel
2. Ming L, Geng L, Zhao X, Wang Y, Hu N, Yang Y, Hu X. The mechanism of phonetic information in voice identity discrimination: a comparative study based on sighted and blind people. Front Psychol 2024;15:1352692. [PMID: 38845764] [PMCID: PMC11153856] [DOI: 10.3389/fpsyg.2024.1352692]
Abstract
Purpose: The purpose of this study was to examine whether, and how, phonetic information affects voice identity processing in blind people. Method: To address the first question, 25 sighted and 30 blind participants discriminated voice identity while listening to forward and backward speech in their native language and in an unfamiliar language. To address the second question, using an articulatory suppression paradigm, 26 sighted and 26 blind participants discriminated voice identity while listening to forward speech in their native language and in an unfamiliar language. Results: In Experiment 1, both the sighted and blind groups showed a native-language advantage, not only in the voice identity discrimination task with forward speech but also in the task with backward speech. This finding supports the view that backward speech retains some phonetic information, and indicates that phonetic information affects voice identity processing in both sighted and blind people. In addition, only the sighted group's native-language advantage was modulated by speech manner, which is related to articulatory rehearsal. In Experiment 2, only the sighted group's native-language advantage was modulated by articulatory suppression. This indicates that phonetic information may act in different ways on voice identity processing in sighted and blind people. Conclusion: The heightened dependence on voice source information in blind people appears not to undermine the function of phonetic information, but rather to change its functional mechanism. These findings suggest that the current phonetic familiarity model needs to be refined with respect to the mechanism of phonetic information.
Affiliation(s)
- Lili Ming
  - School of Linguistic Sciences and Arts, Jiangsu Normal University, Xuzhou, China
  - Key Laboratory of Language and Cognitive Neuroscience of Jiangsu Province, Collaborative Innovation Center for Language Ability, Xuzhou, China
- Libo Geng
  - School of Linguistic Sciences and Arts, Jiangsu Normal University, Xuzhou, China
  - Key Laboratory of Language and Cognitive Neuroscience of Jiangsu Province, Collaborative Innovation Center for Language Ability, Xuzhou, China
- Xinyu Zhao
  - School of Linguistic Sciences and Arts, Jiangsu Normal University, Xuzhou, China
  - Key Laboratory of Language and Cognitive Neuroscience of Jiangsu Province, Collaborative Innovation Center for Language Ability, Xuzhou, China
- Yichan Wang
  - School of Linguistic Sciences and Arts, Jiangsu Normal University, Xuzhou, China
  - Key Laboratory of Language and Cognitive Neuroscience of Jiangsu Province, Collaborative Innovation Center for Language Ability, Xuzhou, China
- Na Hu
  - School of Preschool and Special Education, Kunming University, Yunnan, China
- Yiming Yang
  - School of Linguistic Sciences and Arts, Jiangsu Normal University, Xuzhou, China
  - Key Laboratory of Language and Cognitive Neuroscience of Jiangsu Province, Collaborative Innovation Center for Language Ability, Xuzhou, China
- Xueping Hu
  - College of Education, Huaibei Normal University, Huaibei, China
  - Anhui Engineering Research Center for Intelligent Computing and Application on Cognitive Behavior (ICACB), Huaibei, China
3. Sarzedas J, Lima CF, Roberto MS, Scott SK, Pinheiro AP, Conde T. Blindness influences emotional authenticity perception in voices: Behavioral and ERP evidence. Cortex 2024;172:254-270. [PMID: 38123404] [DOI: 10.1016/j.cortex.2023.11.005]
Abstract
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
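The component analysis described above is typically quantified as the mean amplitude within each component's latency window. The sketch below illustrates that computation on a synthetic waveform; the abstract does not specify the amplitude measure, so the window-mean approach, the sampling rate, and the signal are our assumptions, not the study's pipeline.

```python
import numpy as np

def window_mean(erp, times, t_lo, t_hi):
    """Mean ERP amplitude within a latency window (times in seconds).

    erp:   1-D array, the averaged waveform for one electrode
    times: 1-D array of sample times, same length as erp
    """
    mask = (times >= t_lo) & (times <= t_hi)
    return erp[mask].mean()

# Synthetic epoch: -100 ms to 900 ms at 500 Hz, with a P2-like
# positive deflection peaking at 225 ms (illustrative, not real data).
fs = 500
times = np.arange(-0.1, 0.9, 1 / fs)
erp = np.exp(-(((times - 0.225) / 0.03) ** 2))

p2 = window_mean(erp, times, 0.200, 0.250)   # P2 window from the study
lpp = window_mean(erp, times, 0.450, 0.850)  # LPP range from the study
```

On this synthetic waveform the P2 window captures the deflection while the LPP window stays near baseline; in the study, group contrasts were computed on such per-window measures.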
Affiliation(s)
- João Sarzedas
  - CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- César F Lima
  - Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
  - Institute of Cognitive Neuroscience, University College London, London, UK
- Magda S Roberto
  - CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sophie K Scott
  - Institute of Cognitive Neuroscience, University College London, London, UK
- Ana P Pinheiro
  - CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Tatiana Conde
  - CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
4. Pang W, Zhou W, Ruan Y, Zhang L, Shu H, Zhang Y, Zhang Y. Visual Deprivation Alters Functional Connectivity of Neural Networks for Voice Recognition: A Resting-State fMRI Study. Brain Sci 2023;13(4):636. [PMID: 37190601] [DOI: 10.3390/brainsci13040636]
Abstract
Humans recognize one another by identifying their voices and faces. For sighted people, the integration of voice and face signals in corresponding brain networks plays an important role in facilitating the process. However, individuals with vision loss primarily resort to voice cues to recognize a person's identity. It remains unclear how the neural systems for voice recognition reorganize in the blind. In the present study, we collected behavioral and resting-state fMRI data from 20 early blind (5 females; mean age = 22.6 years) and 22 sighted control (7 females; mean age = 23.7 years) individuals. We aimed to investigate the alterations in the resting-state functional connectivity (FC) among the voice- and face-sensitive areas in blind subjects in comparison with controls. We found that the intranetwork connections among voice-sensitive areas, including amygdala-posterior "temporal voice areas" (TVAp), amygdala-anterior "temporal voice areas" (TVAa), and amygdala-inferior frontal gyrus (IFG) were enhanced in the early blind. The blind group also showed increased FCs of "fusiform face area" (FFA)-IFG and "occipital face area" (OFA)-IFG but decreased FCs between the face-sensitive areas (i.e., FFA and OFA) and TVAa. Moreover, the voice-recognition accuracy was positively related to the strength of TVAp-FFA in the sighted, and the strength of amygdala-FFA in the blind. These findings indicate that visual deprivation shapes functional connectivity by increasing the intranetwork connections among voice-sensitive areas while decreasing the internetwork connections between the voice- and face-sensitive areas. Moreover, the face-sensitive areas are still involved in the voice-recognition process in blind individuals through pathways such as the subcortical-occipital or occipitofrontal connections, which may benefit the visually impaired greatly during voice processing.
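Resting-state FC of the kind analyzed above is commonly estimated as the Pearson correlation between ROI time series, often Fisher z-transformed before group comparisons. The sketch below illustrates this under that assumption, with synthetic data and a generic ROI count rather than the study's actual regions (TVAa, TVAp, FFA, OFA, amygdala, IFG).

```python
import numpy as np

def fc_matrix(ts):
    """Pairwise Pearson correlations between ROI time series.

    ts: (n_rois, n_timepoints) array of signals extracted from regions
    of interest. Off-diagonal entries are the FC estimates.
    """
    return np.corrcoef(ts)

def fisher_z(r):
    """Fisher r-to-z transform, commonly applied before group statistics."""
    return np.arctanh(r)

# Synthetic data: 4 ROIs, 200 timepoints, coupled through a shared signal.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
ts = rng.standard_normal((4, 200)) + shared
fc = fc_matrix(ts)
z = fisher_z(fc - np.eye(4))  # zero the diagonal so arctanh stays finite
```

Group differences such as the enhanced amygdala-TVAp connectivity reported above would then be tested on these z values between the blind and sighted samples.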
Affiliation(s)
- Wenbin Pang
  - Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
  - China National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Wei Zhou
  - Beijing Key Lab of Learning and Cognition, School of Psychology, Capital Normal University, Beijing 100048, China
- Yufang Ruan
  - School of Communication Sciences and Disorders, Faculty of Medicine and Health Sciences, McGill University, Montréal, QC H3A 1G1, Canada
  - Centre for Research on Brain, Language and Music, Montréal, QC H3A 1G1, Canada
- Linjun Zhang
  - School of Chinese as a Second Language, Peking University, Beijing 100871, China
- Hua Shu
  - State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
- Yang Zhang
  - Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, The University of Minnesota, Minneapolis, MN 55455, USA
- Yumei Zhang
  - China National Clinical Research Center for Neurological Diseases, Beijing 100070, China
  - Department of Rehabilitation, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
5. Humble D, Schweinberger SR, Mayer A, Jesgarzewsky TL, Dobel C, Zäske R. The Jena Voice Learning and Memory Test (JVLMT): A standardized tool for assessing the ability to learn and recognize voices. Behav Res Methods 2023;55:1352-1371. [PMID: 35648317] [PMCID: PMC10126074] [DOI: 10.3758/s13428-022-01818-3]
Abstract
The ability to recognize someone's voice spans a broad spectrum, with phonagnosia at the low end and super-recognition at the high end. Yet there is no standardized test measuring an individual's ability to learn and recognize newly learned voices from speech samples with phonetic variability. We developed the Jena Voice Learning and Memory Test (JVLMT), a 22-min test based on item response theory and applicable across languages. The JVLMT consists of three phases in which participants (1) become familiarized with eight speakers, (2) revise the learned voices, and (3) perform a 3AFC recognition task, using pseudo-sentences devoid of semantic content. Acoustic (dis)similarity analyses were used to create items with various levels of difficulty. Test scores are based on 22 items which were selected and validated in two online studies with 232 and 454 participants, respectively. Mean accuracy in the JVLMT is 0.51 (SD = 0.18) with an empirical (marginal) reliability of 0.66. Correlational analyses showed high and moderate convergent validity with the Bangor Voice Matching Test (BVMT) and Glasgow Voice Memory Test (GVMT), respectively, and high discriminant validity with a digit span test. Four participants with potential super-recognition abilities and seven with potential phonagnosia were identified, performing at least 2 SDs above or below the mean, respectively. The JVLMT is a promising research and diagnostic screening tool for detecting both impairments in voice recognition and super-recognition abilities.
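The ±2 SD criterion used above to flag potential phonagnosia and super-recognition can be made concrete with the reported statistics (mean accuracy 0.51, SD 0.18); the function below is an illustration of that arithmetic, not part of the JVLMT itself.

```python
def outlier_cutoffs(mean, sd, k=2.0):
    """Score cutoffs at +/- k standard deviations from the sample mean."""
    return mean - k * sd, mean + k * sd

# Reported JVLMT statistics: mean accuracy 0.51, SD 0.18.
low, high = outlier_cutoffs(0.51, 0.18)
# low = 0.15 flags potential phonagnosia; high = 0.87 flags
# potential super-recognition.
```

Note that the lower cutoff (0.15) falls below 3AFC chance level (1/3 ≈ 0.33), so flagged low performers score below guessing.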
Affiliation(s)
- Denise Humble
  - Department of Experimental Otorhinolaryngology, Jena University Hospital, Stoystrasse 3, 07743 Jena, Germany
  - Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3/1, 07743 Jena, Germany
- Stefan R Schweinberger
  - Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3/1, 07743 Jena, Germany
- Axel Mayer
  - Department of Psychological Methods and Evaluation, Institute of Psychology and Sports Science, University of Bielefeld, Universitätsstr. 25, 33615 Bielefeld, Germany
- Tim L Jesgarzewsky
  - Department of Experimental Otorhinolaryngology, Jena University Hospital, Stoystrasse 3, 07743 Jena, Germany
  - Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3/1, 07743 Jena, Germany
- Christian Dobel
  - Department of Experimental Otorhinolaryngology, Jena University Hospital, Stoystrasse 3, 07743 Jena, Germany
- Romi Zäske
  - Department of Experimental Otorhinolaryngology, Jena University Hospital, Stoystrasse 3, 07743 Jena, Germany
  - Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3/1, 07743 Jena, Germany
6. Kattner F, Fischer M, Caling AL, Cremona S, Ihle A, Hodgson T, Föcker J. The disruptive effects of changing-state sound and emotional prosody on verbal short-term memory in blind, visually impaired, and sighted listeners. J Cogn Psychol 2023. [DOI: 10.1080/20445911.2023.2186771]
Affiliation(s)
- Florian Kattner
  - Department of Psychology, Health and Medical University, Potsdam, Germany
  - Institute for Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Marieke Fischer
  - Institute for Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Alliza Lejano Caling
  - School of Psychology, College of Social Science, University of Lincoln, Lincoln, UK
- Sarah Cremona
  - School of Psychology, College of Social Science, University of Lincoln, Lincoln, UK
- Andreas Ihle
  - Department of Psychology, University of Geneva, Geneva, Switzerland
  - Center for the Interdisciplinary Study of Gerontology and Vulnerability, University of Geneva, Geneva, Switzerland
  - Swiss National Centre of Competence in Research LIVES – Overcoming vulnerability: Life course perspectives, Geneva, Switzerland
- Timothy Hodgson
  - School of Psychology, College of Social Science, University of Lincoln, Lincoln, UK
- Julia Föcker
  - School of Psychology, College of Social Science, University of Lincoln, Lincoln, UK
7. Klauke S, Sondocie C, Fine I. The impact of low vision on social function: The potential importance of lost visual social cues. J Optom 2023;16:3-11. [PMID: 35568628] [PMCID: PMC9811370] [DOI: 10.1016/j.optom.2022.03.003]
Abstract
Visual cues usually play a vital role in social interaction. As well as being the primary cue for identifying other people, visual cues provide crucial non-verbal social information via both facial expressions and body language. One consequence of vision loss is the need to rely on non-visual cues during social interaction. Although verbal cues can carry a significant amount of this information, it is often not accessible to an untrained listener. Here, we review the current literature examining how the loss of social information due to vision loss might impact social functioning. A large number of studies suggest that low vision and blindness are risk factors for anxiety and depression. This relationship has been attributed to multiple factors, including anxiety about disease progression and impairments to quality of life such as difficulty reading and a lack of access to work and social activities. However, our review suggests an additional, hitherto overlooked contributor to reduced quality of life: blindness may make it more difficult to engage effectively in social interactions because of the loss of visual information. The current literature suggests that training in voice discrimination and/or recognition may be worth considering in rehabilitative training for late-blind individuals.
Affiliation(s)
- Chloe Sondocie
  - Department of Psychology, University of Washington, Seattle, USA
- Ione Fine
  - Department of Psychology, University of Washington, Seattle, USA
8. Daneshi A, Sajjadi H, Blevins N, Jenkins HA, Farhadi M, Ajallouyan M, Hashemi SB, Thai A, Tran E, Rajati M, Asghari A, Mohseni M, Mohebbi S, Bayat A, Saki N, Emamdjomeh H, Romiani M, Hosseinzadeh F, Nasori Y, Mirsaleh M. The Outcome of Cochlear Implantations in Deaf-Blind Patients: A Multicenter Observational Study. Otol Neurotol 2022;43:908-914. [PMID: 35970154] [DOI: 10.1097/mao.0000000000003611]
Abstract
OBJECTIVE This multicenter study aimed to evaluate the auditory and speech outcomes of cochlear implantation (CI) in deaf-blind patients compared with deaf-only patients. STUDY DESIGN Retrospective cohort study. SETTING Multiple cochlear implant centers. PATIENTS The study included 17 prelingual deaf-blind children and 12 postlingual deaf-blind adults who underwent CI surgery. As a control group, 17 prelingual deaf children and 12 postlingual deaf adults were selected. INTERVENTION Cochlear implantation. MAIN OUTCOME MEASURES Auditory and linguistic performance in children was assessed using the Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scales, respectively. The word recognition score (WRS) was used to measure speech perception in adults. Mean CAP, SIR, and WRS scores were compared between the deaf-only and deaf-blind groups before CI surgery and at 12 and 24 months after device activation. Cohen's d was used to estimate effect sizes. RESULTS We found no significant differences in mean CAP and SIR scores between the deaf-blind and deaf-only children before CI surgery. For both groups, CAP and SIR scores improved with time after device activation. Mean CAP scores in the deaf-only children were equivalent to or slightly higher than those of the deaf-blind children at 12 months post-CI (3.94 ± 0.74 vs 3.24 ± 1.25; mean difference, 0.706) and 24 months post-CI (6.01 ± 0.79 vs 5.47 ± 1.06; mean difference, 0.529), but these differences were not statistically significant. SIR scores in deaf-only implanted children were on average 0.870 points greater than in deaf-blind children at 12 months post-CI (2.94 ± 0.55 vs 2.07 ± 1.4; p = 0.01, d = 0.97) and on average 1.067 points greater at 24 months post-CI (4.35 ± 0.49 vs 3.29 ± 1.20; p = 0.002, d = 1.15). We also found an improvement in WRS from preimplantation to 12 and 24 months post-CI in both groups. Pairwise comparisons indicated that mean WRS in the deaf-only adults was on average 10.61% better than in deaf-blind implanted adults at 12 months post-CI (62.33 ± 9.09% vs 51.71 ± 10.73%; p = 0.034, d = 1.06) and on average 15.81% better at 24 months post-CI (72.67 ± 8.66% vs 56.8 ± 9.78%; p = 0.002, d = 1.61). CONCLUSION Cochlear implantation is a beneficial method for the rehabilitation of deaf-blind patients. Deaf-blind and deaf-only implanted children showed similar auditory performance, but speech perception in deaf-blind patients was slightly lower than in deaf-only patients, in both children and adults.
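The reported effect sizes can be approximately reproduced from the group means and SDs in the abstract; below is a minimal sketch assuming the pooled-SD form of Cohen's d with equal group sizes (the abstract does not state which variant the authors used).

```python
from math import sqrt

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d with a pooled SD; equal group sizes assumed (n = 12 here)."""
    pooled_sd = sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

# WRS at 12 months post-CI: deaf-only 62.33 +/- 9.09 vs deaf-blind 51.71 +/- 10.73.
d = cohens_d(62.33, 9.09, 51.71, 10.73)  # ~1.07, close to the reported d = 1.06
```

Small discrepancies from the published values are expected if the authors used a different pooling formula or exact sample-size weighting.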
Affiliation(s)
- Ahmad Daneshi
  - ENT and Head & Neck Research Center, The Five Senses Institute, Iran University of Medical Sciences, Tehran, Iran
- Hamed Sajjadi
  - Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, Stanford, CA, USA
- Nikolas Blevins
  - Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, Stanford, CA, USA
- Herman A Jenkins
  - Department of Otolaryngology-Head & Neck Surgery, University of Colorado, Anschutz Medical Campus, Aurora, Colorado, USA
- Mohammad Farhadi
  - ENT and Head & Neck Research Center, The Five Senses Institute, Iran University of Medical Sciences, Tehran, Iran
- Mohammad Ajallouyan
  - Department of Otorhinolaryngology, Baqiyatallah Hospital, Baqiyatallah University of Medical Sciences, Tehran, Iran
- Seyed Basir Hashemi
  - Department of Otorhinolaryngology, Khalili Hospital, Shiraz University of Medical Sciences, Shiraz, Iran
- Anthony Thai
  - Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, Stanford, CA, USA
- Emma Tran
  - Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, Stanford, CA, USA
- Mohsen Rajati
  - Ghaem Hospital, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Alimohamad Asghari
  - Skull Base Research Center, The Five Senses Institute, Iran University of Medical Sciences, Tehran, Iran
- Mohammad Mohseni
  - ENT and Head & Neck Research Center, The Five Senses Institute, Iran University of Medical Sciences, Tehran, Iran
- Saleh Mohebbi
  - ENT and Head & Neck Research Center, The Five Senses Institute, Iran University of Medical Sciences, Tehran, Iran
- Arash Bayat
  - Department of Audiology, School of Rehabilitation Sciences
- Hesamaldin Emamdjomeh
  - ENT and Head & Neck Research Center, The Five Senses Institute, Iran University of Medical Sciences, Tehran, Iran
- Maryam Romiani
  - ENT and Head & Neck Research Center, The Five Senses Institute, Iran University of Medical Sciences, Tehran, Iran
- Farideh Hosseinzadeh
  - ENT and Head & Neck Research Center, The Five Senses Institute, Iran University of Medical Sciences, Tehran, Iran
- Yasser Nasori
  - ENT and Head & Neck Research Center, The Five Senses Institute, Iran University of Medical Sciences, Tehran, Iran
- Marjan Mirsaleh
  - ENT and Head & Neck Research Center, The Five Senses Institute, Iran University of Medical Sciences, Tehran, Iran
9. Arcos K, Harhen N, Loiotile R, Bedny M. Superior verbal but not nonverbal memory in congenital blindness. Exp Brain Res 2022;240:897-908. [PMID: 35076724] [PMCID: PMC9204649] [DOI: 10.1007/s00221-021-06304-4]
Abstract
Previous studies suggest that people who are congenitally blind outperform sighted people on some memory tasks. Whether blindness-associated memory advantages are specific to verbal materials or are also observed with nonverbal sounds has not been determined. Congenitally blind individuals (n = 20) and age- and education-matched blindfolded sighted controls (n = 22) performed a series of auditory memory tasks. These included verbal forward and backward letter spans, a complex letter span with intervening equations, and two matched recognition tasks: one with verbal stimuli (i.e., letters) and one with nonverbal, complex, meaningless sounds. Replicating previous findings, blind participants outperformed sighted people on forward and backward letter span tasks. Blind participants also recalled more letters on the complex letter span task despite the interference of intervening equations. Critically, the same blind participants showed larger advantages on the verbal as compared to the nonverbal recognition task. These results suggest that blindness selectively enhances memory for verbal material. Possible explanations for blindness-related verbal memory advantages include blindness-induced memory practice and 'visual' cortex recruitment for verbal processing.
Affiliation(s)
- Karen Arcos
  - Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
- Nora Harhen
  - Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
- Rita Loiotile
  - Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Marina Bedny
  - Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
10. Zhang L, Li Y, Zhou H, Zhang Y, Shu H. Language-familiarity effect on voice recognition by blind listeners. JASA Express Lett 2021;1:055201. [PMID: 36154110] [DOI: 10.1121/10.0004848]
Abstract
The current study compared the language-familiarity effect on voice recognition in blind and sighted listeners. Both groups performed better on the recognition of native voices than nonnative voices, but the language-familiarity effect was smaller in the blind than in the sighted group, with blind individuals outperforming their sighted counterparts only on the recognition of nonnative voices. Furthermore, recognition of native and nonnative voices was significantly correlated only in the blind group. These results indicate that language familiarity affects voice recognition in blind listeners, who differ to some extent from their sighted counterparts in the use of linguistic and nonlinguistic features during voice recognition.
Affiliation(s)
- Linjun Zhang
  - Beijing Advanced Innovation Center for Language Resources and College of Advanced Chinese Training, Beijing Language and Culture University, Beijing 100083, China
- Yu Li
  - Division of Science and Technology, BNU-HKBU United International College, Zhuhai 519085, Guangdong, China
- Hong Zhou
  - International Cultural Exchange School, Shanghai University of Finance and Economics, Shanghai 200433, China
- Yang Zhang
  - Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Hua Shu
  - State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
11. The processing of intimately familiar and unfamiliar voices: Specific neural responses of speaker recognition and identification. PLoS One 2021;16:e0250214. [PMID: 33861789] [PMCID: PMC8051806] [DOI: 10.1371/journal.pone.0250214]
Abstract
Research has repeatedly shown that familiar and unfamiliar voices elicit different neural responses. But it has also been suggested that different neural correlates are associated with the feeling of having heard a voice and with knowing who the voice represents. The terminology used to designate these varying responses remains vague, creating a degree of confusion in the literature. Additionally, terms designating tasks of voice discrimination, voice recognition, and speaker identification are often inconsistent, creating further ambiguities. The present study used event-related potentials (ERPs) to clarify the difference between responses to (1) unknown voices, (2) trained-to-familiar voices as speech stimuli are repeatedly presented, and (3) intimately familiar voices. In an experiment, 13 participants listened to repeated utterances recorded from 12 speakers. Only one of the 12 voices was intimately familiar to a participant; the remaining 11 were unfamiliar. The frequency of presentation of the 11 unfamiliar voices varied, with only one presented frequently (the trained-to-familiar voice). ERP analyses revealed different responses for intimately familiar and unfamiliar voices in two distinct time windows (P2 between 200-250 ms and a late positive component, LPC, between 450-850 ms post-onset), with the late responses occurring only for intimately familiar voices. The LPC showed sustained shifts, and the short-latency ERP components appear to reflect an early recognition stage. The trained voice likewise elicited distinct responses compared with rarely heard voices, but these occurred in a third time window (N250 between 300-350 ms post-onset). Overall, the timing of responses suggests that the processing of intimately familiar voices operates in two distinct steps: voice recognition, marked by a P2 on right centro-frontal sites, and speaker identification, marked by an LPC component.
The recognition of frequently heard voices entails an independent recognition process marked by a differential N250. Based on the present results and previous observations, it is proposed that there is a need to distinguish between processes of voice "recognition" and "identification". The present study also specifies test conditions serving to reveal this distinction in neural responses, one of which bears on the length of speech stimuli given the late responses associated with voice identification.
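The component measures in this abstract (P2 between 200-250 ms, N250 between 300-350 ms, LPC between 450-850 ms) reduce to averaging the trial-averaged ERP inside a fixed latency window. A minimal NumPy sketch of that measure, with a hypothetical function name and synthetic single-channel data rather than the study's recordings:

```python
import numpy as np

def mean_window_amplitude(epochs, sfreq, tmin, t_start, t_end):
    """Mean ERP amplitude in a latency window.

    epochs  : (n_trials, n_samples) baseline-corrected EEG, one channel, in microvolts
    sfreq   : sampling rate in Hz
    tmin    : time of the first sample relative to stimulus onset, in s
    t_start, t_end : component window in s (e.g. 0.200-0.250 for the P2)
    """
    erp = epochs.mean(axis=0)                    # average across trials
    i0 = int(round((t_start - tmin) * sfreq))    # window start sample
    i1 = int(round((t_end - tmin) * sfreq))      # window end sample
    return erp[i0:i1].mean()

# Synthetic check: 20 flat trials with a 2-microvolt deflection in samples
# 300-349, i.e. 0.200-0.249 s after onset at 1 kHz with a 100 ms baseline.
sfreq, tmin = 1000.0, -0.1
epochs = np.zeros((20, 1000))
epochs[:, 300:350] = 2.0
p2 = mean_window_amplitude(epochs, sfreq, tmin, 0.2, 0.25)  # → 2.0
```

In practice the window mean would be computed per condition (familiar vs. unfamiliar) and compared statistically, which is the contrast the abstract reports.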
|
12
|
Chen W, Lan L, Xiao W, Li J, Liu J, Zhao F, Wang CD, Zheng Y, Chen W, Cai Y. Reduced Functional Connectivity in Children With Congenital Cataracts Using Resting-State Electroencephalography Measurement. Front Neurosci 2021; 15:657865. [PMID: 33935639 PMCID: PMC8079630 DOI: 10.3389/fnins.2021.657865] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Accepted: 03/22/2021] [Indexed: 11/19/2022] Open
Abstract
Objectives: Numerous task-based functional magnetic resonance imaging studies indicate the presence of compensatory functional improvement in patients with congenital cataracts. However, there is neuroimaging evidence of decreased sensory perception or cognitive information processing related to visual dysfunction, which favors a general loss hypothesis. This study explored the functional connectivity between visual and other networks in children with congenital cataracts using resting-state electroencephalography. Methods: Twenty-one children with congenital cataracts (age: 8.02 ± 2.03 years) and thirty-five sex- and age-matched normal-sighted controls were enrolled to investigate functional connectivity between the visual cortex and the default mode network, the salience network, and the cerebellar network during resting-state electroencephalography (eyes closed) recordings. Results: The congenital cataract group was less active than the control group in the occipital, temporal, frontal and limbic lobes in the theta, alpha, beta1 and beta2 frequency bands. Additionally, there was reduced alpha-band connectivity between the visual and somatosensory cortices and between regions of the frontal and parietal cortices associated with cognitive and attentive control. Conclusion: The results indicate abnormalities in sensory, cognitive, motor and executive functional connectivity across the developing brains of children with congenital cataracts when compared with normal controls. Reduced frontal alpha activity and alpha-band connectivity between the visual cortex and the salience network might reflect attenuated inhibitory information flow, leading to higher attentional states, which could contribute to adaptation to environmental change in this group of patients.
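Band-limited EEG connectivity of the kind reported above is often quantified as magnitude-squared coherence between two signals, averaged over a frequency band. A minimal sketch using SciPy's Welch-based estimator on synthetic two-channel data; the function name and data are illustrative only, since the study used its own source-level pipeline:

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, f_lo, f_hi, nperseg=512):
    """Mean magnitude-squared coherence between two channels in [f_lo, f_hi] Hz."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    band = (f >= f_lo) & (f <= f_hi)
    return cxy[band].mean()

# Synthetic check: two "sensors" share a 10 Hz rhythm plus independent noise,
# so alpha-band (8-12 Hz) coherence should exceed beta-band (20-30 Hz) coherence.
rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 60, 1 / fs)                    # 60 s of data
shared_alpha = np.sin(2 * np.pi * 10 * t)
x = shared_alpha + 0.5 * rng.standard_normal(t.size)
y = shared_alpha + 0.5 * rng.standard_normal(t.size)
alpha_coh = band_coherence(x, y, fs, 8, 12)
beta_coh = band_coherence(x, y, fs, 20, 30)
```

The group contrast in the abstract corresponds to comparing such band-averaged coupling values between patients and controls.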
Affiliation(s)
- Wan Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Liping Lan
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China.,Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
| | - Wei Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Jiahong Li
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China.,Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
| | - Jiahao Liu
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China.,Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
| | - Fei Zhao
- Department of Speech and Language Therapy and Hearing Science, Cardiff Metropolitan University, Cardiff, United Kingdom.,Department of Hearing and Speech Science, Xinhua College, Sun Yat-sen University, Guangzhou, China
| | - Chang-Dong Wang
- School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China
| | - Yiqing Zheng
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China.,Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
| | - Weirong Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Yuexin Cai
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China.,Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
| |
|
13
|
Lubinus C, Orpella J, Keitel A, Gudi-Mindermann H, Engel AK, Roeder B, Rimmele JM. Data-Driven Classification of Spectral Profiles Reveals Brain Region-Specific Plasticity in Blindness. Cereb Cortex 2021; 31:2505-2522. [PMID: 33338212 DOI: 10.1093/cercor/bhaa370] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2020] [Revised: 11/10/2020] [Accepted: 11/10/2020] [Indexed: 01/22/2023] Open
Abstract
Congenital blindness has been shown to result in behavioral adaptation and neuronal reorganization, but the underlying neuronal mechanisms are largely unknown. Brain rhythms are characteristic for anatomically defined brain regions and provide a putative mechanistic link to cognitive processes. In a novel approach, using magnetoencephalography resting state data of congenitally blind and sighted humans, deprivation-related changes in spectral profiles were mapped to the cortex using clustering and classification procedures. Altered spectral profiles in visual areas suggest changes in visual alpha-gamma band inhibitory-excitatory circuits. Remarkably, spectral profiles were also altered in auditory and right frontal areas showing increased power in theta-to-beta frequency bands in blind compared with sighted individuals, possibly related to adaptive auditory and higher cognitive processing. Moreover, occipital alpha correlated with microstructural white matter properties extending bilaterally across posterior parts of the brain. We provide evidence that visual deprivation selectively modulates spectral profiles, possibly reflecting structural and functional adaptation.
Affiliation(s)
- Christina Lubinus
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
| | - Joan Orpella
- Department of Psychology, New York University, New York, NY 10003, USA
| | - Anne Keitel
- Psychology, University of Dundee, Dundee DD1 4HN, UK
| | - Helene Gudi-Mindermann
- Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany.,Department of Social Epidemiology, University of Bremen, 28359 Bremen, Germany
| | - Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| | - Brigitte Roeder
- Biological Psychology and Neuropsychology, University of Hamburg, 20146 Hamburg, Germany
| | - Johanna M Rimmele
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany.,Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
| |
|
14
|
Turgeon C, Trudeau-Fisette P, Lepore F, Lippé S, Ménard L. Impact of visual and auditory deprivation on speech perception and production in adults. CLINICAL LINGUISTICS & PHONETICS 2020; 34:1061-1087. [PMID: 32013589 DOI: 10.1080/02699206.2020.1719207] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/22/2019] [Revised: 01/10/2020] [Accepted: 01/17/2020] [Indexed: 06/10/2023]
Abstract
Speech perception relies on auditory and visual cues, and there are strong links between speech perception and production. We aimed to evaluate the role of the auditory and visual modalities in speech perception and production in adults with impaired hearing or sight versus those with normal hearing and sight. We examined speech perception and production of three isolated vowels (/i/, /y/, /u/), which were selected based on their different auditory and visual perceptual saliencies, in 12 deaf adults who used one or two cochlear implants (CIs), 14 congenitally blind adults, and 16 adults with normal sight and hearing. The results showed that the deaf adults who used a CI had worse vowel identification and discrimination, and they also produced vowels that were less typical and less precise than those of the other participants. They had different tongue positions in speech production, which possibly partly explains the poorer quality of their spoken vowels. Blind individuals had larger lip openings and smaller lip protrusions for the rounded and unrounded vowels, compared to the other participants, but they still produced vowels that were similar to those produced by the adults with normal sight and hearing. In summary, the deaf adults, even though they used CIs, had greater difficulty in producing accurate vowel targets than the blind adults, whereas the blind adults were still able to produce accurate vowel targets, even though they used different articulatory strategies.
Affiliation(s)
| | | | - Franco Lepore
- Department of Psychology, Université de Montréal , Montréal, Canada
| | - Sarah Lippé
- Department of Psychology, Université de Montréal , Montréal, Canada
| | - Lucie Ménard
- Department of Linguistics, UQAM, Montréal, Canada
| |
|
15
|
Röder B, Kekunnaya R, Guerreiro MJS. Neural mechanisms of visual sensitive periods in humans. Neurosci Biobehav Rev 2020; 120:86-99. [PMID: 33242562 DOI: 10.1016/j.neubiorev.2020.10.030] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Accepted: 10/08/2020] [Indexed: 01/18/2023]
Abstract
Sensitive periods in brain development are phases of enhanced susceptibility to experience. Here we discuss research from human and non-human neuroscience studies which have demonstrated a) differences in the way infants vs. adults learn; b) how the brain adapts to atypical conditions, in particular a congenital vs. a late onset blindness (sensitive periods for atypical brain development); and c) the extent to which neural systems are capable of acquiring a typical brain organization after sight restoration following a congenital vs. late phase of pattern vision deprivation (sensitive periods for typical brain development). By integrating these three lines of research, we propose neural mechanisms characteristic of sensitive periods vs. adult neuroplasticity and learning.
Affiliation(s)
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Germany.
| | - Ramesh Kekunnaya
- Jasti V Ramanamma Children's Eye Care Center, LV Prasad Eye Institute, Hyderabad, India
| | | |
|
16
|
Pang W, Xing H, Zhang L, Shu H, Zhang Y. Superiority of blind over sighted listeners in voice recognition. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 148:EL208. [PMID: 32873006 DOI: 10.1121/10.0001804] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 08/05/2020] [Indexed: 05/22/2023]
Abstract
The current study examined whether the blind are superior to sighted listeners in voice recognition. Three subject groups, including 17 congenitally blind, 18 late blind, and 18 sighted, showed no significant differences in the immediate voice recognition test. In the delayed test conducted two weeks later, however, both the congenitally blind and late blind groups performed better than the sighted, with no significant difference between the two blind groups. These results partly confirmed the anecdotal observation of the blind's superiority in voice recognition, which resides mainly in the delayed memory phase rather than in the immediate recall and generalization phase.
Affiliation(s)
- Wenbin Pang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
| | - Hongbing Xing
- Beijing Advanced Innovation Center for Language Resources and College of Advanced Chinese Training, Beijing Language and Culture University, Beijing 100083, China
| | - Linjun Zhang
- Beijing Advanced Innovation Center for Language Resources and College of Advanced Chinese Training, Beijing Language and Culture University, Beijing 100083, China
| | - Hua Shu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
| | - Yang Zhang
- Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minnesota 55455
| |
|
17
|
Topalidis P, Zinchenko A, Gädeke JC, Föcker J. The role of spatial selective attention in the processing of affective prosodies in congenitally blind adults: An ERP study. Brain Res 2020; 1739:146819. [PMID: 32251662 DOI: 10.1016/j.brainres.2020.146819] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Revised: 03/25/2020] [Accepted: 04/02/2020] [Indexed: 10/24/2022]
Abstract
Whether spatial selective attention is necessary to process vocal affective prosody has been debated for sighted individuals: whereas some studies argue that attention is required to process emotions, other studies conclude that vocal prosody can be processed even outside the focus of spatial selective attention. Here, we asked whether spatial selective attention is necessary for the processing of affective prosodies after visual deprivation from birth. For this purpose, pseudowords spoken in happy, neutral, fearful or threatening prosodies were presented at the left or right loudspeaker. Congenitally blind individuals (N = 8) and sighted controls (N = 13) had to attend to one of the loudspeakers and detect rare pseudowords presented at the attended loudspeaker during EEG recording. Emotional prosody of the syllables was task-irrelevant. Blind individuals outperformed sighted controls by being more efficient in detecting deviant syllables at the attended loudspeaker. A higher auditory N1 amplitude was observed in blind individuals compared to sighted controls. Additionally, sighted controls showed enhanced attention-related ERP amplitudes in response to fearful and threatening voices during the time range of the N1. By contrast, blind individuals revealed enhanced ERP amplitudes at attended relative to unattended locations irrespective of affective valence in all time windows (110-350 ms). These effects were mainly observed at posterior electrodes. The results provide evidence for "emotion-general" auditory spatial selective attention effects in congenitally blind individuals and suggest a potential reorganization of the voice processing brain system following visual deprivation from birth.
Affiliation(s)
- Pavlos Topalidis
- Department of Psychology and Educational Sciences, Ludwig Maximilian University, Munich, Germany
| | - Artyom Zinchenko
- Department of Psychology and Educational Sciences, Ludwig Maximilian University, Munich, Germany
| | - Julia C Gädeke
- Biological Psychology and Neuropsychology, University of Hamburg, Germany
| | - Julia Föcker
- Biological Psychology and Neuropsychology, University of Hamburg, Germany; University of Lincoln, School of Social Sciences, United Kingdom.
| |
|
18
|
Gori M, Amadeo MB, Campus C. Temporal cues trick the visual and auditory cortices mimicking spatial cues in blind individuals. Hum Brain Mapp 2020; 41:2077-2091. [PMID: 32048380 PMCID: PMC7267917 DOI: 10.1002/hbm.24931] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Revised: 01/03/2020] [Accepted: 01/07/2020] [Indexed: 11/05/2022] Open
Abstract
In the absence of vision, spatial representation may be altered. When asked to compare the relative distances between three sounds (i.e., auditory spatial bisection task), blind individuals demonstrate significant deficits and do not show an event-related potential response mimicking the visual C1 reported in sighted people. However, we have recently demonstrated that the spatial deficit disappears if coherent time and space cues are presented to blind people, suggesting that they may use time information to infer spatial maps. In this study, we examined whether the modification of temporal cues during space evaluation altered the recruitment of the visual and auditory cortices in blind individuals. We demonstrated that the early (50-90 ms) occipital response, mimicking the visual C1, is not elicited by the physical position of the sound, but by its virtual position suggested by its temporal delay. Even more impressively, in the same time window, the auditory cortex also showed this pattern and responded to temporal instead of spatial coordinates.
Affiliation(s)
- Monica Gori
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
| | - Maria Bianca Amadeo
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
| | - Claudio Campus
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
| |
|
19
|
Gori M, Amadeo MB, Campus C. Spatial metric in blindness: behavioural and cortical processing. Neurosci Biobehav Rev 2020; 109:54-62. [PMID: 31899299 DOI: 10.1016/j.neubiorev.2019.12.031] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2019] [Revised: 11/30/2019] [Accepted: 12/29/2019] [Indexed: 11/29/2022]
Abstract
The visual modality dominates spatial perception and, in the absence of vision, space representation might be altered. Here we review our work showing that blind individuals have a strong deficit when performing spatial bisection tasks (Gori et al., 2014). We also describe the neural correlates associated with this deficit, as blind individuals do not show the same ERP response mimicking the visual C1 reported in sighted people during spatial bisection (Campus et al., 2019). Interestingly, the deficit is not always evident in late blind individuals, and it is dependent on blindness duration. We report that the deficit disappears when coherent temporal and spatial cues are presented to blind people. This suggests that they may use time information to infer spatial maps (Gori et al., 2018). Finally, we propose a model to explain why blind individuals are impaired in this task, speculating that a lack of vision drives the construction of a multi-sensory cortical network that codes space based on temporal, rather than spatial, coordinates.
Affiliation(s)
- Monica Gori
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano Di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy.
| | - Maria Bianca Amadeo
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano Di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering, Università Degli Studi Di Genova, via all'Opera Pia, 13, 16145 Genova, Italy
| | - Claudio Campus
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano Di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy
| |
|
20
|
Amadeo MB, Störmer VS, Campus C, Gori M. Peripheral sounds elicit stronger activity in contralateral occipital cortex in blind than sighted individuals. Sci Rep 2019; 9:11637. [PMID: 31406158 PMCID: PMC6690873 DOI: 10.1038/s41598-019-48079-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Accepted: 07/26/2019] [Indexed: 11/17/2022] Open
Abstract
Previous research has shown that peripheral, task-irrelevant sounds elicit activity in contralateral visual cortex of sighted people, as revealed by a sustained positive deflection in the event-related potential (ERP) over the occipital scalp contralateral to the sound’s location. This Auditory-evoked Contralateral Occipital Positivity (ACOP) appears between 200–450 ms after sound onset, and is present even when the task is entirely auditory and no visual stimuli are presented at all. Here, we investigate whether this cross-modal activation of contralateral visual cortex is influenced by visual experience. To this end, ERPs were recorded in 12 sighted and 12 blind subjects during a unimodal auditory task. Participants listened to a stream of sounds and pressed a button every time they heard a central target tone, while ignoring the peripheral noise bursts. It was found that task-irrelevant noise bursts elicited a larger ACOP in blind compared to sighted participants, indicating for the first time that peripheral sounds can enhance neural activity in visual cortex in a spatially lateralized manner even in visually deprived individuals. Overall, these results suggest that the cross-modal activation of contralateral visual cortex triggered by peripheral sounds does not require any visual input to develop, and is rather enhanced by visual deprivation.
Affiliation(s)
- Maria Bianca Amadeo
- U-VIP: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy. .,Department of Informatics, Bioengineering, Robotics and Systems Engineering, Università degli Studi di Genova, Genova, Italy.
| | - Viola S Störmer
- Department of Psychology and Neuroscience Graduate Program, University of California San Diego, San Diego, USA
| | - Claudio Campus
- U-VIP: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
| | - Monica Gori
- U-VIP: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
| |
|
21
|
Alterations of the Brain Microstructure and Corresponding Functional Connectivity in Early-Blind Adolescents. Neural Plast 2019; 2019:2747460. [PMID: 30996726 PMCID: PMC6408999 DOI: 10.1155/2019/2747460] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Revised: 10/17/2018] [Accepted: 12/27/2018] [Indexed: 11/17/2022] Open
Abstract
Although evidence from studies on blind adults indicates that visual deprivation early in life leads to structural and functional disruption and reorganization of the brain, whether young blind people show similar patterns remains unknown. Therefore, this study aimed to explore the structural and functional alterations of the brain in early-blind adolescents (EBAs) compared to normal-sighted controls (NSCs) and to investigate the effects of residual light perception on brain microstructure and function in EBAs. We obtained magnetic resonance imaging (MRI) data from 23 EBAs (8 with residual light perception (LPs), 15 without light perception (NLPs)) and 21 NSCs (age range 11-19 years old). Whole-brain voxel-based analyses of diffusion tensor imaging metrics and region-of-interest analyses of resting-state functional connectivity (RSFC) were performed to compare patterns of brain microstructure and the corresponding RSFC between the groups. The results showed that structural disruptions in LPs and NLPs were mainly located in the occipital visual pathway. Compared with NLPs, LPs showed increased fractional anisotropy (FA) in the superior frontal gyrus and reduced diffusivity in the caudate nucleus. Moreover, the correlations between FA of the occipital cortices or mean diffusivity of the lingual gyrus and age were consistent with the developmental trajectory of the brain in NSCs, but inconsistent or even opposite in EBAs. Additionally, we found functional, but not structural, reorganization in NLPs compared with NSCs, suggesting that functional neuroplasticity occurs earlier than structural neuroplasticity in EBAs. Altogether, these findings provide new insights into the mechanisms underlying the neural reorganization of the brain in adolescents with early visual deprivation.
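The diffusion metrics compared in this abstract, fractional anisotropy (FA) and mean diffusivity (MD), are closed-form functions of the diffusion tensor's three eigenvalues. A minimal sketch using the standard formulas; `fa_md` and the example eigenvalues are ours, not from the study:

```python
import numpy as np

def fa_md(evals):
    """Fractional anisotropy (FA) and mean diffusivity (MD) from the three
    eigenvalues of a diffusion tensor (MD inherits the eigenvalue units,
    typically mm^2/s)."""
    ev = np.asarray(evals, dtype=float)
    md = ev.mean()
    den = np.sqrt((ev ** 2).sum())
    if den == 0.0:                      # degenerate all-zero tensor
        return 0.0, md
    fa = np.sqrt(1.5 * ((ev - md) ** 2).sum()) / den
    return fa, md

fa_iso, md_iso = fa_md([1.0, 1.0, 1.0])      # isotropic diffusion: FA = 0
fa_stick, md_stick = fa_md([1.0, 0.0, 0.0])  # maximally anisotropic: FA = 1
```

Voxel-wise maps of these two scalars are what the whole-brain voxel-based group comparisons above operate on.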
|
22
|
Stronger responses in the visual cortex of sighted compared to blind individuals during auditory space representation. Sci Rep 2019; 9:1935. [PMID: 30760758 PMCID: PMC6374481 DOI: 10.1038/s41598-018-37821-y] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2017] [Accepted: 12/11/2018] [Indexed: 01/02/2023] Open
Abstract
It has been previously shown that the interaction between vision and audition involves early sensory cortices. However, the functional role of these interactions and their modulation due to sensory impairment is not yet understood. To shed light on the impact of vision on auditory spatial processing, we recorded ERPs and collected psychophysical responses during space and time bisection tasks in sighted and blind participants. They listened to three consecutive sounds and judged whether the second sound was either spatially or temporally further from the first or the third sound. We demonstrate that spatial metric representation of sounds elicits an early response of the visual cortex (P70) which differs between sighted and visually deprived individuals. Indeed, only in sighted people, and not in blind people, is the P70 strongly selective for the spatial position of sounds, mimicking many aspects of the visual-evoked C1. These results suggest that early auditory processing associated with the construction of spatial maps is mediated by visual experience. The lack of vision might impair the projection of multi-sensory maps on the retinotopic maps used by the visual cortex.
|
23
|
de Borst AW, de Gelder B. Mental Imagery Follows Similar Cortical Reorganization as Perception: Intra-Modal and Cross-Modal Plasticity in Congenitally Blind. Cereb Cortex 2018; 29:2859-2875. [DOI: 10.1093/cercor/bhy151] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2018] [Revised: 05/27/2018] [Accepted: 06/05/2018] [Indexed: 11/14/2022] Open
Abstract
Cortical plasticity in congenitally blind individuals leads to cross-modal activation of the visual cortex and may lead to superior perceptual processing in the intact sensory domains. Although mental imagery is often defined as a quasi-perceptual experience, it is unknown whether it follows similar cortical reorganization as perception in blind individuals. In this study, we show that auditory versus tactile perception evokes similar intra-modal discriminative patterns in congenitally blind compared with sighted participants. These results indicate that cortical plasticity following visual deprivation does not influence the broad intra-modal organization of auditory and tactile perception as measured by our task. Furthermore, not only the blind, but also the sighted participants showed cross-modal discriminative patterns for perception modality in the visual cortex. During mental imagery, both groups showed similar decoding accuracies for imagery modality in the intra-modal primary sensory cortices. However, no cross-modal discriminative information for imagery modality was found in the early visual cortex of blind participants, in contrast to the sighted participants. We did find evidence of cross-modal activation of higher visual areas in blind participants, including the representation of specific imagined auditory features in visual area V4.
Affiliation(s)
- A W de Borst
- Department of Computer Science, University College London, London, UK
- Brain and Emotion Lab, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
| | - B de Gelder
- Department of Computer Science, University College London, London, UK
- Brain and Emotion Lab, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
| |
|
24
|
Jafari Z, Malayeri S. Subcortical encoding of speech cues in children with congenital blindness. Restor Neurol Neurosci 2018; 34:757-68. [PMID: 27589504 DOI: 10.3233/rnn-160639] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND: Congenital visual deprivation underlies neural plasticity in different brain areas, and provides an outstanding opportunity to study the neuroplastic capabilities of the brain. OBJECTIVES: The present study aimed to investigate the effect of congenital blindness on subcortical auditory processing using electrophysiological and behavioral assessments in children. METHODS: A total of 47 children aged 8-12 years, including 22 congenitally blind (CB) children and 25 normal-sighted (NS) controls, were studied. All children were tested using an auditory brainstem response (ABR) test with both click and speech stimuli. Speech recognition and musical abilities were tested using standard tools. RESULTS: Significant differences were observed between the two groups in speech ABR wave latencies A, F and O (p≤0.043), wave amplitude F (p = 0.039), V-A slope (p = 0.026), and three spectral magnitudes F0, F1 and HF (p≤0.002). CB children showed superior performance compared to NS peers in all the subtests and the total score of musical abilities (p≤0.003). Moreover, they had significantly higher scores on the nonsense syllable test in noise than the NS children (p = 0.034). Significant negative correlations were found only in CB children between the total music score and both wave A (p = 0.039) and wave F (p = 0.029) latencies, as well as between the nonsense syllable test in noise and the wave A latency (p = 0.041). CONCLUSION: Our results suggest that neuroplasticity resulting from congenital blindness can be measured subcortically and has a heightened effect on temporal, musical and speech processing abilities. The findings are discussed based on models of plasticity and the influence of corticofugal modulation in synthesizing complex auditory stimuli.
Affiliation(s)
- Zahra Jafari
- Rehabilitation Research Center (RRC), Iran University of Medical Sciences (IUMS), Tehran, Iran.,Department of Basic Sciences in Rehabilitation, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran.,Canadian Center for Behavioral Neuroscience (CCBN), University of Lethbridge, Lethbridge, Alberta, Canada
| | | |
|
25
|
Spatial localization of sound elicits early responses from occipital visual cortex in humans. Sci Rep 2017; 7:10415. [PMID: 28874681 PMCID: PMC5585168 DOI: 10.1038/s41598-017-09142-z] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2017] [Accepted: 07/20/2017] [Indexed: 11/08/2022] Open
Abstract
Much evidence points to an interaction between vision and audition at early cortical sites; however, the functional role of these interactions is not yet understood. Here we show an early response of the occipital cortex to sound that is strongly linked to the spatial localization task performed by the observer. The early occipital response to a sound, usually absent, increased more than 10-fold when the sound was presented during a space localization task, but not during a time localization task. The response amplification was specific not only to the task but, surprisingly, also to the position of the stimulus in the two hemifields. We suggest that early occipital processing of sound is linked to the construction of an audio-spatial map that may utilize the visual map of the occipital cortex.
26
Maguinness C, von Kriegstein K. Cross-modal processing of voices and faces in developmental prosopagnosia and developmental phonagnosia. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1313347] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Corrina Maguinness
- Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Psychology, Humboldt University of Berlin, Berlin, Germany
27
Gori M, Cappagli G, Baud-Bovy G, Finocchietti S. Shape Perception and Navigation in Blind Adults. Front Psychol 2017; 8:10. [PMID: 28144226 PMCID: PMC5240028 DOI: 10.3389/fpsyg.2017.00010] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Accepted: 01/03/2017] [Indexed: 11/25/2022] Open
Abstract
Different sensory systems interact to generate a representation of space and to support navigation. Vision plays a critical role in the development of spatial representation. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available, and navigation therefore lacks this important sensory signal. Blind individuals can adopt compensatory mechanisms to improve their spatial and navigation skills; however, the limitations of these mechanisms are not completely clear, and both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we developed a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect of visual experience on the ability to reproduce simple and complex paths. In the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and participants were asked to reach it. Movements were recorded with a motion-tracking system. Our results show three main impairments specific to early blind individuals: a tendency to compress the shapes reproduced during navigation; difficulty in recognizing complex audio stimuli; and difficulty in reproducing the desired shape (early blind participants occasionally reported perceiving a square but actually reproduced a circle during the navigation task). We discuss these results in terms of compromised spatial reference frames due to the lack of visual input during the early period of development.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Giulia Cappagli
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Gabriel Baud-Bovy
- Robotics, Brain and Cognitive Science Department, Istituto Italiano di Tecnologia, Genoa, Italy; The Unit of Experimental Psychology, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, Milan, Italy
- Sara Finocchietti
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
28
Abstract
UNLABELLED Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, the groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. Auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. SIGNIFICANCE STATEMENT The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here, speech), similar to what has been observed in congenitally, permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function.
29
Lexical processing deficits in children with developmental language disorder: An event-related potentials study. Dev Psychopathol 2016; 27:459-76. [PMID: 25997765 DOI: 10.1017/s0954579415000097] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Lexical processing deficits in children with developmental language disorder (DLD) have been postulated to arise as sequelae of their grammatical deficits (either directly or via compensatory mechanisms) and vice versa. We examined event-related potential indices of lexical processing in children with DLD (n = 23) and their typically developing peers (n = 16) using a picture-word matching paradigm. We found that children with DLD showed markedly reduced N400 amplitudes in response both to auditorily presented words that had initial phonological overlap with the name of the pictured object and to words that were not semantically or phonologically related to the pictured object. Moreover, this reduction was related to behavioral indices of phonological and lexical but not grammatical development. We also found that children with DLD showed a depressed phonological mapping negativity component in the early time window, suggesting deficits in phonological processing or early lexical access. The results are partially consistent with the overactivation account of lexical processing deficits in DLD and point to the relative functional independence of lexical/phonological and grammatical deficits in DLD, supporting a multidimensional view of the disorder. The results also, although indirectly, support the neuroplasticity account of DLD, according to which language impairment affects brain development and shapes the specific patterns of brain responses to language stimuli.
30
Crossmodal plasticity in the fusiform gyrus of late blind individuals during voice recognition. Neuroimage 2014; 103:374-382. [DOI: 10.1016/j.neuroimage.2014.09.050] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2014] [Revised: 09/04/2014] [Accepted: 09/22/2014] [Indexed: 11/19/2022] Open
31
Hölig C, Föcker J, Best A, Röder B, Büchel C. Brain systems mediating voice identity processing in blind humans. Hum Brain Mapp 2014; 35:4607-19. [PMID: 24639401 DOI: 10.1002/hbm.22498] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2013] [Revised: 02/10/2014] [Accepted: 02/13/2014] [Indexed: 11/10/2022] Open
Abstract
Blind people rely more on vocal cues to recognize a person's identity than sighted people do. Indeed, a number of studies have reported better voice-recognition skills in blind than in sighted adults. The present functional magnetic resonance imaging study investigated changes in the functional organization of neural systems involved in voice identity processing following congenital blindness. A group of congenitally blind individuals and matched sighted control participants were tested in a priming paradigm, in which two voice stimuli (S1, S2) were presented in succession. The prime (S1) and the target (S2) were either from the same speaker (person-congruent voices) or from two different speakers (person-incongruent voices). Participants had to classify the S2 as either an old or a young person. Person-incongruent voices (S2), compared with person-congruent voices, elicited increased activation in the right anterior fusiform gyrus in congenitally blind individuals but not in matched sighted control participants. In contrast, only matched sighted controls showed higher activation in response to person-incongruent than to person-congruent voices (S2) in the right posterior superior temporal sulcus. These results provide evidence for crossmodal plastic changes in the brain's person-identification system after visual deprivation.
Affiliation(s)
- Cordula Hölig
- Department of Biological Psychology and Neuropsychology, University of Hamburg, Germany; Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Germany
32
Jafari Z, Malayeri S. Effects of congenital blindness on the subcortical representation of speech cues. Neuroscience 2013; 258:401-9. [PMID: 24291729 DOI: 10.1016/j.neuroscience.2013.11.027] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2013] [Revised: 10/28/2013] [Accepted: 11/14/2013] [Indexed: 11/18/2022]
Abstract
Sensory modalities play a vital role in the way the brain produces mental representations of the world around us. Although congenital blindness limits understanding of the environment in some respects, blind individuals may develop superior capabilities through long-term experience and neural plasticity. This study investigated the effects of congenital blindness on the temporal and spectral neural encoding of speech at the subcortical level. The study included 26 congenitally blind individuals and 24 normal-sighted individuals, all with normal hearing. Auditory brainstem responses (ABR) were recorded with both click stimuli and a synthetic 40-ms speech stimulus /da/. No significant difference was observed between the two groups in the wave latencies or amplitudes of the click ABR. Latencies of the speech ABR (sABR) D (p=0.012) and O (p=0.014) waves were significantly shorter in blind individuals than in normal-sighted individuals. Amplitudes of the A (p<0.001) and E (p=0.001) sABR waves were also significantly higher in blind subjects. Blind individuals had significantly better results for the duration (p<0.001), amplitude (p=0.015) and slope (p=0.004) of the V-A complex, the signal-to-noise ratio (p<0.001), and the amplitudes of the stimulus fundamental frequency (F0) (p=0.009), first formant (F1) (p<0.001) and higher-frequency region (HF) (p<0.001) ranges. Results indicate that congenitally blind subjects have improved hearing function in response to the /da/ syllable in both the source and filter classes of the sABR. It is possible that these subjects have enhanced neural representation of vocal-cord vibrations and improved neural synchronization in the temporal encoding of the onset and offset portions of speech stimuli at the brainstem level. This may result from a compensatory mechanism of neural reorganization in blind subjects, influenced by top-down corticofugal connections with the auditory cortex.
Affiliation(s)
- Z Jafari
- Rehabilitation Research Center (RRC), Iran University of Medical Sciences (IUMS), Tehran, Iran; Department of Basic Sciences in Rehabilitation, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran.
- S Malayeri
- Department of Audiology, University of Social Welfare and Rehabilitation Sciences (USWR), Tehran, Iran; NEWSHA Hearing Institute, Tehran, Iran.