1. Schmitt O. Relationships and representations of brain structures, connectivity, dynamics and functions. Prog Neuropsychopharmacol Biol Psychiatry 2025; 138:111332. PMID: 40147809. DOI: 10.1016/j.pnpbp.2025.111332.
Abstract
The review explores the complex interplay between brain structures and their associated functions, presenting a diversity of hierarchical models that enhances our understanding of these relationships. Central to this approach are structure-function flow diagrams, which offer a visual representation of how specific neuroanatomical structures are linked to their functional roles. These diagrams are instrumental in mapping the intricate connections between different brain regions, providing a clearer understanding of how functions emerge from the underlying neural architecture. The study details innovative attempts to develop new functional hierarchies that integrate structural and functional data. These efforts leverage recent advancements in neuroimaging techniques such as fMRI, EEG, MEG, and PET, as well as computational models that simulate neural dynamics. By combining these approaches, the study seeks to create a more refined and dynamic hierarchy that can accommodate the brain's complexity, including its capacity for plasticity and adaptation. A significant focus is placed on the overlap of structures and functions within the brain. The manuscript acknowledges that many brain regions are multifunctional, contributing to different cognitive and behavioral processes depending on the context. This overlap highlights the need for a flexible, non-linear hierarchy that can capture the brain's intricate functional landscape. Moreover, the study examines the interdependence of these functions, emphasizing how the loss or impairment of one function can impact others. Another crucial aspect discussed is the brain's ability to compensate for functional deficits following neurological diseases or injuries. The investigation explores how the brain reorganizes itself, often through the recruitment of alternative neural pathways or the enhancement of existing ones, to maintain functionality despite structural damage. This compensatory mechanism underscores the brain's remarkable plasticity, demonstrating its ability to adapt and reconfigure itself in response to injury, thereby ensuring the continuation of essential functions. In conclusion, the study presents a system of brain functions that integrates structural, functional, and dynamic perspectives. It offers a robust framework for understanding how the brain's complex network of structures supports a wide range of cognitive and behavioral functions, with significant implications for both basic neuroscience and clinical applications.
Affiliation(s)
- Oliver Schmitt: Medical School Hamburg - University of Applied Sciences and Medical University, Institute for Systems Medicine, Am Kaiserkai 1, 20457 Hamburg, Germany; University of Rostock, Department of Anatomy, Gertrudenstr. 9, 18055 Rostock, Germany.
2. Deok Moon K, Kyung Park Y, Seop Kim M, Jeong CY. Improving Acceptance to Sensory Substitution: A Study on the V2A-SS Learning Model Based on Information Processing Learning Theory. IEEE Trans Neural Syst Rehabil Eng 2025; 33:1097-1107. PMID: 40048329. DOI: 10.1109/tnsre.2025.3548942.
Abstract
The visual sensory organ (VSO) serves as the primary channel for transmitting external information to the brain; therefore, damage to the VSO can severely limit daily activities. Visual-to-Auditory Sensory Substitution (V2A-SS), an innovative approach to restoring vision, offers a promising solution by leveraging neuroplasticity to convey visual information via auditory channels. Advances in information technology and artificial intelligence mitigate technical challenges such as low resolution and limited bandwidth, thereby enabling broader applicability of V2A-SS. Despite these advances, integrating V2A-SS effectively into everyday life necessitates extensive training and adaptation. Therefore, alongside addressing technical challenges, investigating effective learning strategies to accelerate the acceptance of V2A-SS is crucial. This study introduces a V2A-SS learning model based on the Information Processing Learning Theory (IPLT), encompassing the stages of "concept acquisition, rehearsal, assessment" to reduce the learning curve and enhance adaptation. The experimental results show that the proposed learning model improves recognition rates, achieving an 11% increase over simple random repetition learning. This improvement is significantly higher than the gain of 2.72% achieved by optimizing the V2A-SS algorithm with Mel-Scaled Frequency Mapping. This study suggests that a structured learning model for sensory substitution technologies can contribute to bridging gaps between technical feasibility and practical application. This underscores the need to develop effective learning models, alongside technological advancements, to accelerate the adoption of V2A-SS and neuroplasticity.
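The abstract does not specify the V2A-SS encoding itself, so the sketch below illustrates only the general principle behind many visual-to-auditory substitution schemes (a vOICe-style column sweep in which pixel row maps to tone frequency and brightness to amplitude); all parameter values and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def image_to_sound(image, duration=1.0, fs=22050, f_min=200.0, f_max=8000.0):
    """Convert a 2D grayscale image (values in [0, 1]) into a mono audio sweep.

    Columns are scanned left to right over `duration` seconds; each pixel row is
    assigned a fixed sine frequency (top rows -> higher pitch), and the pixel's
    brightness scales that sine's amplitude. All parameters are illustrative.
    """
    n_rows, n_cols = image.shape
    freqs = np.logspace(np.log10(f_max), np.log10(f_min), n_rows)      # row -> pitch
    samples_per_col = int(duration * fs / n_cols)
    t = np.arange(samples_per_col) / fs
    chunks = []
    for col in range(n_cols):
        tones = np.sin(2 * np.pi * freqs[:, None] * t)                 # (rows, samples)
        chunks.append((image[:, col, None] * tones).sum(axis=0))       # brightness-weighted mix
    sound = np.concatenate(chunks)
    peak = np.abs(sound).max()
    return sound / peak if peak > 0 else sound                         # normalize to [-1, 1]

# A bright diagonal produces a descending pitch sweep as it is scanned.
audio = image_to_sound(np.eye(32))
print(audio.shape)  # roughly duration * fs samples
```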
3. Miller ZA, Hinkley LBN, Borghesani V, Mauer E, Shwe W, Mizuiri D, Bogley R, Mandelli ML, de Leon J, Pereira CW, Allen I, Houde J, Kramer J, Miller BL, Nagarajan SS, Gorno-Tempini ML. Non-right-handedness, male sex, and regional, network-specific, ventral occipito-temporal anomalous lateralization in adults with a history of reading disability. Cortex 2025; 183:116-130. PMID: 39631179. PMCID: PMC11936465. DOI: 10.1016/j.cortex.2024.09.018.
Abstract
Based on historic observations that children with reading disabilities were disproportionately both male and non-right-handed, and that early life insults of the left hemisphere were more frequent in boys and non-right-handed children, it was proposed that early focal neuronal injury disrupts typical patterns of motor hand and language dominance and in the process produces developmental dyslexia. To date, these theories remain controversial. We revisited these earliest theories in a contemporary manner, investigating demographics associated with reading disability, and in a subgroup with and without reading disability, compared structural imaging as well as patterns of activity during tasks of verb generation and non-word repetition using magnetoencephalography source imaging. In a large group of healthy aging adults (n = 282; average age 72.3), we assessed reading ability via the Adult Reading History Questionnaire and found that non-right-handedness and male sex significantly predicted endorsed reading disability. In a subset of participants from the larger cohort who endorsed reading disability (n = 14) and a group who denied reading disability (n = 22), we compared structural and functional imaging data. We failed to detect structural differences in volumetric brain morphometry analyses; however, we observed decreased neural activity on magnetoencephalography within the reading disability group. The detected differences were largely restricted to left hemisphere ventral occipito-temporal and posterior-lateral temporal cortices, the visual word form area and middle temporal gyrus, regions implicated in developmental dyslexia. Moreover, these observed disruptions occurred in a focal, network-specific manner, preferentially disturbing the ventral/sight reading recognition pathway, resulting in a pattern of regional anomalous lateralization of function that distinguished the reading disability cohort from normal readers. Collectively, the results presented here align with old theories regarding the etiology of developmental dyslexia and highlight how results from investigating neurodevelopmental differences in healthy aging individuals can powerfully contribute towards our overall understanding of neurodevelopment and neurodiversity.
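The demographic result (non-right-handedness and male sex predicting endorsed reading disability) is the kind of effect usually tested with a logistic regression; a minimal sketch on simulated data follows. The variable names, effect sizes, and simulated outcomes are assumptions for illustration, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 282  # cohort size reported in the abstract; the data below are simulated

df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "non_right_handed": rng.binomial(1, 0.12, n),
})
# Simulate endorsement of reading disability with positive effects of both predictors.
linpred = -2.0 + 0.8 * df["male"] + 1.1 * df["non_right_handed"]
df["reading_disability"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

model = smf.logit("reading_disability ~ male + non_right_handed", data=df).fit(disp=False)
print(model.summary())
print(np.exp(model.params))  # odds ratios for male sex and non-right-handedness
```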
Affiliation(s)
- Zachary A Miller: Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Dyslexia Center, Department of Neurology and Psychiatry, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA, USA.
- Leighton B N Hinkley: Department of Radiology, University of California, San Francisco, San Francisco, CA, USA.
- Valentina Borghesani: University of Geneva, Swiss National Centre of Competence in Research, Geneva, Switzerland.
- Ezra Mauer: Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Dyslexia Center, Department of Neurology and Psychiatry, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA, USA; Department of Psychology, University of California, Berkeley, Berkeley, CA, USA.
- Wendy Shwe: George Washington University, School of Medicine, Washington, DC, USA.
- Danielle Mizuiri: Department of Radiology, University of California, San Francisco, San Francisco, CA, USA.
- Rian Bogley: Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Dyslexia Center, Department of Neurology and Psychiatry, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA, USA.
- Maria Luisa Mandelli: Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Dyslexia Center, Department of Neurology and Psychiatry, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA, USA.
- Jessica de Leon: Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Dyslexia Center, Department of Neurology and Psychiatry, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA, USA.
- Christa Watson Pereira: Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Dyslexia Center, Department of Neurology and Psychiatry, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA, USA.
- Isabel Allen: Department of Biostatistics, University of California, San Francisco, San Francisco, CA, USA.
- John Houde: Department of Radiology, University of California, San Francisco, San Francisco, CA, USA.
- Joel Kramer: Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA.
- Bruce L Miller: Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Dyslexia Center, Department of Neurology and Psychiatry, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA, USA.
- Srikantan S Nagarajan: Department of Radiology, University of California, San Francisco, San Francisco, CA, USA.
- Maria Luisa Gorno-Tempini: Memory and Aging Center, Department of Neurology, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA; Dyslexia Center, Department of Neurology and Psychiatry, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA, USA.
4. Kolarik AJ, Moore BCJ. Principles governing the effects of sensory loss on human abilities: An integrative review. Neurosci Biobehav Rev 2025; 169:105986. PMID: 39710017. DOI: 10.1016/j.neubiorev.2024.105986.
Abstract
Blindness or deafness can significantly influence sensory abilities in intact modalities, affecting communication, orientation and navigation. Explanations for why certain abilities are enhanced and others degraded include: crossmodal cortical reorganization enhances abilities by providing additional neural processing resources; and sensory processing is impaired for tasks where calibration from the normally intact sense is required for good performance. However, these explanations are often specific to tasks or modalities, not accounting for why task-dependent enhancement or degradation is observed. This paper investigates whether sensory systems operate according to a theoretical framework comprising seven general principles (the perceptual restructuring hypothesis) spanning the various modalities. These principles predict whether an ability will be enhanced or degraded following sensory loss. Evidence from a wide range of studies is discussed to assess the validity of the principles across different combinations of impaired sensory modalities (deafness or blindness) and intact modalities (vision, audition, touch, olfaction). It is concluded that sensory systems do operate broadly according to the principles of the framework, but with some exceptions.
Affiliation(s)
- Andrew J Kolarik: School of Psychology, University of East Anglia, Norwich, United Kingdom; Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, United Kingdom; Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom.
- Brian C J Moore: Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, United Kingdom; Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom.
5. Gaca M, Olszewska AM, Droździel D, Kulesza A, Paplińska M, Kossowski B, Jednoróg K, Matuszewski J, Herman AM, Marchewka A. How learning to read Braille in visual and tactile domains reorganizes the sighted brain. Front Neurosci 2025; 18:1297344. PMID: 39834698. PMCID: PMC11744719. DOI: 10.3389/fnins.2024.1297344.
Abstract
Learning tactile Braille reading leverages cross-modal plasticity, emphasizing the brain's ability to reallocate functions across sensory domains. This neuroplasticity engages motor and somatosensory areas and reaches language and cognitive centers like the visual word form area (VWFA), even in sighted subjects following training. No study has employed a complex reading task to monitor neural activity during the first weeks of Braille training. Since neuroplasticity can occur within days, understanding neural reorganization during early learning stages is critical. Moreover, such activation was not tested in visual and tactile domains using comparable tasks. Furthermore, implicit reading has not been studied in tactile Braille. Although visual reading in the native script occurs automatically, it remains uncertain whether the same applies to tactile reading. An implicit reading task could extend the knowledge of linguistic processing in Braille. Our study involved 17 sighted adults who learned Braille for 7 months and 19 controls. The experimental group participated in 7 testing sessions (1 week before the course, on the first day, after 1 and 6 weeks, after 3 and 7 months, and after a 3-month hiatus). Using the fMRI Lexical Decision Task, we observed increased activity within the reading network, including the inferior frontal and supramarginal gyri, 1 week into learning in tactile and visual Braille. Interestingly, VWFA activation was observed after 1 week in the visual domain but only after 6 weeks in the tactile domain. This suggests that skill level in tactile reading influences the onset of VWFA involvement. Once this activation was achieved, the peak level of VWFA engagement remained stable, even after the follow-up. Furthermore, an implicit reading task revealed increased activity within the reading network, including the VWFA, among participants learning Braille compared to the passive controls. Possibly, implicit reading occurs during non-reading tactile tasks where the Braille alphabet is present. We showed that the VWFA activity peak occurs faster in the visual domain compared to the tactile domain. We also showed that sighted subjects can process tactile Braille implicitly. These results enrich our understanding of neural adaptation mechanisms and the interplay between sensory modalities during complex, cross-modal learning.
Affiliation(s)
- Maciej Gaca: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Alicja M. Olszewska: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Dawid Droździel: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Agnieszka Kulesza: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Bartosz Kossowski: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Katarzyna Jednoróg: Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Jacek Matuszewski: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Aleksandra M. Herman: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Artur Marchewka: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
6. Amedi A, Shelly S, Saporta N, Catalogna M. Perceptual learning and neural correlates of virtual navigation in subjective cognitive decline: A pilot study. iScience 2024; 27:111411. PMID: 39669432. PMCID: PMC11634985. DOI: 10.1016/j.isci.2024.111411.
Abstract
Spatial navigation deficits in age-related diseases involve brain changes affecting spatial memory and verbal cognition. Studies in blind and blindfolded individuals show that multisensory training can induce neuroplasticity through visual cortex recruitment. This proof-of-concept study introduces a digital navigation training protocol, integrating egocentric and allocentric strategies with multisensory stimulation and visual masking to enhance spatial cognition and brain connectivity in 17 individuals (mean age 57.2 years) with subjective cognitive decline. Results indicate improved spatial memory performance correlated with recruitment of the visual area 6-thalamic pathway and enhanced connectivity between memory, executive frontal areas, and default mode network (DMN) regions. Additionally, increased connectivity between allocentric and egocentric navigation areas via the retrosplenial complex (RSC) hub was observed. These findings suggest that this training has the potential to induce perceptual learning and neuroplasticity through key functional connectivity hubs, offering potential widespread cognitive benefits by enhancing critical brain network functions.
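ROI-to-ROI functional connectivity of the kind summarized here is commonly computed as Pearson correlations between region time courses, Fisher z-transformed before any group or pre/post comparison. A minimal sketch on synthetic time series follows; the ROI labels and data are placeholders, not the study's regions or results.

```python
import numpy as np

rng = np.random.default_rng(1)
rois = ["RSC", "hippocampus", "parahippocampal", "area_V6", "mPFC_DMN"]  # placeholder labels
n_timepoints = 240  # e.g., 240 fMRI volumes

# Synthetic ROI time courses with a shared component so that coupling is nonzero.
shared = rng.standard_normal(n_timepoints)
ts = np.array([0.5 * shared + rng.standard_normal(n_timepoints) for _ in rois])

fc = np.corrcoef(ts)                 # pairwise Pearson correlations between ROIs
fc_z = fc.copy()
np.fill_diagonal(fc_z, 0.0)
fc_z = np.arctanh(fc_z)              # Fisher r-to-z, the usual step before group statistics

for i in range(len(rois)):
    for j in range(i + 1, len(rois)):
        print(f"{rois[i]:>16s} - {rois[j]:<16s} r = {fc[i, j]:+.2f}  z = {fc_z[i, j]:+.2f}")
```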
Affiliation(s)
- Amir Amedi: The Baruch Ivcher Institute for Brain, Cognition, and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Shahar Shelly: Department of Neurology, Rambam Medical Center, Haifa, Israel; Rappaport Faculty of Medicine, Technion-Israel Institute of Technology, Haifa, Israel
- Merav Catalogna: The Baruch Ivcher Institute for Brain, Cognition, and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
7. Teng S, Cichy R, Pantazis D, Oliva A. Touch to text: Spatiotemporal evolution of braille letter representations in blind readers. bioRxiv [Preprint] 2024:2024.10.30.620429. PMID: 39553970. PMCID: PMC11565808. DOI: 10.1101/2024.10.30.620429.
Abstract
Visual deprivation does not silence the visual cortex, which is responsive to auditory, tactile, and other nonvisual tasks in blind persons. However, the underlying functional dynamics of the neural networks mediating such crossmodal responses remain unclear. Here, using braille reading as a model framework to investigate these networks, we presented sighted (N=13) and blind (N=12) readers with individual visual print and tactile braille alphabetic letters, respectively, during MEG recording. Using time-resolved multivariate pattern analysis and representational similarity analysis, we traced the alphabetic letter processing cascade in both groups of participants. We found that letter representations unfolded more slowly in blind than in sighted brains, with decoding peak latencies ~200 ms later in braille readers. Focusing on the blind group, we found that the format of neural letter representations transformed within the first 500 ms after stimulus onset from a low-level structure consistent with peripheral nerve afferent coding to a high-level format reflecting pairwise letter embeddings in a text corpus. The spatiotemporal dynamics of the transformation suggest that the processing cascade proceeds from a starting point in somatosensory cortex to early visual cortex and then to inferotemporal cortex. Together, our results give insight into the neural mechanisms underlying braille reading in blind persons and the dynamics of functional reorganization in sensory deprivation.
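Time-resolved multivariate pattern analysis typically trains and cross-validates a classifier at each time point of the epoch and reads the peak decoding latency off the resulting accuracy time course. The sketch below illustrates this on synthetic two-class MEG-like data; trial counts, sensor counts, and the simulated effect window are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 120, 64, 100          # e.g., -100 to 890 ms at 100 Hz
times_ms = np.linspace(-100, 890, n_times)
y = rng.integers(0, 2, n_trials)                      # two letter classes, for simplicity

# Synthetic sensor data in which class information is present only from ~300-500 ms.
X = rng.standard_normal((n_trials, n_sensors, n_times))
info = np.ix_(np.where(y == 1)[0], np.arange(8),
              np.where((times_ms > 300) & (times_ms < 500))[0])
X[info] += 0.6

# One cross-validated classifier per time point.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([cross_val_score(clf, X[:, :, t], y, cv=5).mean() for t in range(n_times)])

peak_t = times_ms[accuracy.argmax()]
print(f"peak decoding accuracy {accuracy.max():.2f} at {peak_t:.0f} ms")
```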
Affiliation(s)
- Santani Teng: The Smith-Kettlewell Eye Research Institute; Computer Science and Artificial Intelligence Laboratory, MIT
- Radoslaw Cichy: Department of Education and Psychology, Freie Universität Berlin
- Aude Oliva: Computer Science and Artificial Intelligence Laboratory, MIT
8. Jiao S, Wang K, Luo Y, Zeng J, Han Z. Plastic reorganization of the topological asymmetry of hemispheric white matter networks induced by congenital visual experience deprivation. Neuroimage 2024; 299:120844. PMID: 39260781. DOI: 10.1016/j.neuroimage.2024.120844.
Abstract
Congenital blindness offers a unique opportunity to investigate human brain plasticity. The influence of congenital visual loss on the asymmetry of the structural network remains poorly understood. To address this question, we recruited 21 participants with congenital blindness (CB) and 21 age-matched sighted controls (SCs). Employing diffusion and structural magnetic resonance imaging, we constructed hemispheric white matter (WM) networks using deterministic fiber tractography and applied graph theory methodologies to assess topological efficiency (i.e., network global efficiency, network local efficiency, and nodal local efficiency) within these networks. Statistical analyses revealed a consistent leftward asymmetry in global efficiency across both groups. However, a different pattern emerged in network local efficiency, with the CB group exhibiting a symmetric state, while the SC group showed a leftward asymmetry. Specifically, compared to the SC group, the CB group exhibited a decrease in local efficiency in the left hemisphere, which was caused by a reduction in the nodal properties of some key regions mainly distributed in the left occipital lobe. Furthermore, interhemispheric tracts connecting these key regions exhibited significant structural changes primarily in the splenium of the corpus callosum. This result confirms the initial observation that the reorganization in asymmetry of the WM network following congenital visual loss is associated with structural changes in the corpus callosum. These findings provide novel insights into the neuroplasticity and adaptability of the brain, particularly at the network level.
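Global efficiency, network local efficiency, and nodal local efficiency, together with a left-right laterality index, can be computed with standard graph-theory tools. The sketch below uses random binary graphs as stand-ins for the hemispheric white-matter networks; node counts, edge density, and the laterality formula are illustrative choices, not the study's pipeline.

```python
import networkx as nx

def toy_hemisphere(n_nodes=45, p_edge=0.15, seed=0):
    """Random binary graph standing in for one hemisphere's white-matter network."""
    return nx.gnp_random_graph(n_nodes, p_edge, seed=seed)

left, right = toy_hemisphere(seed=10), toy_hemisphere(seed=11)

def laterality(left_val, right_val):
    """Positive = leftward asymmetry, negative = rightward."""
    return (left_val - right_val) / (left_val + right_val)

for name, fn in [("global efficiency", nx.global_efficiency),
                 ("local efficiency", nx.local_efficiency)]:
    l_val, r_val = fn(left), fn(right)
    print(f"{name:>18s}: L = {l_val:.3f}  R = {r_val:.3f}  LI = {laterality(l_val, r_val):+.3f}")

# Nodal local efficiency: global efficiency of the subgraph induced on each node's neighbors.
nodal = {node: nx.global_efficiency(left.subgraph(left[node])) for node in left}
print("example nodal local efficiency (node 0, left):", round(nodal[0], 3))
```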
Affiliation(s)
- Saiyi Jiao: National Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Ke Wang: National Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; School of System Science, Beijing Normal University, Beijing 100875, China
- Yudan Luo: National Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Department of Psychology and Art Education, Chengdu Education Research Institute, Chengdu 610036, China
- Jiahong Zeng: National Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Zaizhu Han: National Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China.
9. Maimon A, Wald IY, Snir A, Ben Oz M, Amedi A. Perceiving depth beyond sight: Evaluating intrinsic and learned cues via a proof of concept sensory substitution method in the visually impaired and sighted. PLoS One 2024; 19:e0310033. PMID: 39321152. PMCID: PMC11423994. DOI: 10.1371/journal.pone.0310033.
Abstract
This study explores spatial perception of depth by employing a novel proof of concept sensory substitution algorithm. The algorithm taps into existing cognitive scaffolds such as language and cross-modal correspondences by naming objects in the scene while representing their elevation and depth by manipulation of the auditory properties for each axis. While the representation of verticality utilized a previously tested correspondence with pitch, the representation of depth employed an ecologically inspired manipulation based on the loss of gain and the filtering of higher-frequency sounds over distance. The study, involving 40 participants, seven of whom were blind (5) or visually impaired (2), investigates the intrinsicness of an ecologically inspired mapping of auditory cues for depth by comparing it to an interchanged condition where the mappings of the two axes are swapped. All participants successfully learned to use the algorithm following a very brief period of training, with the blind and visually impaired participants showing similar levels of success in learning to use the algorithm as their sighted counterparts. A significant difference was found at baseline between the two conditions, indicating the intuitiveness of the original ecologically inspired mapping. Despite this, participants were able to achieve similar success rates following the training in both conditions. The findings indicate that both intrinsic and learned cues come into play with respect to depth perception. Moreover, they suggest that by employing perceptual learning, novel sensory mappings can be trained in adulthood. Regarding the blind and visually impaired, the results also support the convergence view, which claims that with training, their spatial abilities can converge with those of the sighted. Finally, we discuss how the algorithm can open new avenues for accessibility technologies, virtual reality, and other practical applications.
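A minimal sketch of the kind of mapping described (pitch for elevation; loss of gain plus attenuation of higher frequencies for depth) is given below. The pitch range, attenuation law, and filter cut-offs are generic assumptions, not the published algorithm's parameters.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 22050  # audio sample rate (Hz)

def render_object(elevation, depth_m, duration=0.4,
                  f_low=220.0, f_high=1760.0, max_depth=10.0):
    """Render one scene object as a tone.

    elevation in [0, 1] -> pitch (higher object -> higher pitch);
    depth_m in meters   -> quieter and more low-pass filtered with distance,
    mimicking the natural loss of gain and of high frequencies over distance.
    """
    t = np.arange(int(duration * FS)) / FS
    freq = f_low * (f_high / f_low) ** elevation              # logarithmic pitch scale
    tone = np.sin(2 * np.pi * freq * t)

    gain = 1.0 / (1.0 + depth_m) ** 2                          # inverse-square-like falloff
    cutoff = np.interp(depth_m, [0.0, max_depth], [8000.0, 800.0])
    b, a = butter(2, cutoff / (FS / 2), btype="low")           # distance-dependent low-pass
    return gain * lfilter(b, a, tone)

near_high = render_object(elevation=0.9, depth_m=0.5)   # loud, bright, high-pitched
far_low = render_object(elevation=0.2, depth_m=8.0)     # faint, muffled, low-pitched
print(np.abs(near_high).max(), np.abs(far_low).max())
```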
Affiliation(s)
- Amber Maimon: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; Computational Psychiatry and Neurotechnology Lab, Ben Gurion University, Be'er Sheva, Israel
- Iddo Yehoshua Wald: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; Digital Media Lab, University of Bremen, Bremen, Germany
- Adi Snir: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Meshi Ben Oz: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Amir Amedi: Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
10. Amaral L, Thomas P, Amedi A, Striem-Amit E. Longitudinal stability of individual brain plasticity patterns in blindness. Proc Natl Acad Sci U S A 2024; 121:e2320251121. PMID: 39078671. PMCID: PMC11317565. DOI: 10.1073/pnas.2320251121.
Abstract
The primary visual cortex (V1) in blindness is engaged in a wide spectrum of tasks and sensory modalities, including audition, touch, language, and memory. This widespread involvement raises questions regarding the constancy of its role and whether it might exhibit flexibility in its function over time, connecting to diverse network functions specific to task demands. This would suggest that reorganized V1 assumes a role like multiple-demand system regions. Alternatively, varying patterns of plasticity in blind V1 may be attributed to individual factors, with different blind individuals recruiting V1 preferentially for different functions. In support of this, we recently showed that V1 functional connectivity (FC) varies greatly across blind individuals. But do these represent stable individual patterns of plasticity, or are they driven more by instantaneous changes, like a multiple-demand system now inhabiting V1? Here, we tested whether individual FC patterns from the V1 of blind individuals are stable over time. We show that over two years, FC from the V1 is unique and highly stable in a small sample of repeatedly sampled congenitally blind individuals. Further, using multivoxel pattern analysis, we demonstrate that the unique reorganization patterns of these individuals allow decoding of participant identity. Together with recent evidence for substantial individual differences in V1 connectivity, this indicates that there may be a consistent role for V1 in blindness, which may differ for each individual. Further, it suggests that the variability in visual reorganization in blindness across individuals could be used to seek stable neuromarkers for sight rehabilitation and assistive approaches.
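Decoding participant identity from connectivity patterns can be sketched as connectome "fingerprinting": each later-session FC vector is assigned to the most strongly correlated earlier-session vector. This is a simplified stand-in for the multivoxel pattern analysis reported here, run on simulated data with assumed dimensions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_subjects, n_edges = 8, 300   # e.g., FC from V1 to 300 target parcels (simulated)

# Stable individual FC "traits" measured twice, two years apart, plus session noise.
traits = rng.standard_normal((n_subjects, n_edges))
session1 = traits + 0.5 * rng.standard_normal((n_subjects, n_edges))
session2 = traits + 0.5 * rng.standard_normal((n_subjects, n_edges))

def zscore_rows(x):
    return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

# Pearson correlation between every session-2 pattern and every session-1 pattern.
similarity = zscore_rows(session2) @ zscore_rows(session1).T / n_edges
predicted = similarity.argmax(axis=1)                  # best-matching session-1 subject
accuracy = (predicted == np.arange(n_subjects)).mean()
print(f"identification accuracy: {accuracy:.2f}")      # near 1.0 when patterns are individually stable
```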
Affiliation(s)
- Lénia Amaral: Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Peyton Thomas: Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Amir Amedi: Ivcher School of Psychology, The Institute for Brain, Mind and Technology, Reichman University, Herzliya 4610101, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya 4610101, Israel
- Ella Striem-Amit: Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
11. Arcaro M, Livingstone M. A Whole-Brain Topographic Ontology. Annu Rev Neurosci 2024; 47:21-40. PMID: 38360565. DOI: 10.1146/annurev-neuro-082823-073701.
Abstract
It is a common view that the intricate array of specialized domains in the ventral visual pathway is innately prespecified. What this review postulates is that it is not. We explore the origins of domain specificity, hypothesizing that the adult brain emerges from an interplay between a domain-general map-based architecture, shaped by intrinsic mechanisms, and experience. We argue that the most fundamental innate organization of cortex in general, and not just the visual pathway, is a map-based topography that governs how the environment maps onto the brain, how brain areas interconnect, and ultimately, how the brain processes information.
Affiliation(s)
- Michael Arcaro: Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
12. Kim H, Kim JS, Chung CK. Visual Mental Imagery and Neural Dynamics of Sensory Substitution in the Blindfolded Subjects. Neuroimage 2024; 295:120621. PMID: 38797383. DOI: 10.1016/j.neuroimage.2024.120621.
Abstract
Although one can recognize the environment through soundscapes that substitute vision with an auditory signal, whether subjects perceive such soundscapes as a visual or visual-like sensation has been questioned. In this study, we investigated the hierarchical processes underlying the recruitment of visual areas by soundscape stimuli in blindfolded subjects. Twenty-two healthy subjects were repeatedly trained to recognize soundscape stimuli converted from the visual shape information of letters. An effective connectivity method called dynamic causal modeling (DCM) was employed to reveal how the brain is hierarchically organized to recognize soundscape stimuli. The visual mental imagery model reproduced the cortical source signals of five regions of interest better than the auditory bottom-up, cross-modal perception, and mixed models. Spectral couplings between brain areas in the visual mental imagery model were then analyzed. While within-frequency coupling is apparent in bottom-up processing, where sensory information is transmitted, cross-frequency coupling is prominent in top-down processing, corresponding to the expectation and interpretation of information. In the blindfolded subjects, sensory substitution thus gave rise to visual mental imagery by combining bottom-up and top-down processing.
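The DCM analysis itself is beyond a short sketch, but the cross-frequency coupling highlighted for top-down processing is often quantified as phase-amplitude coupling, for example with Tort's modulation index between a low-frequency phase and a high-frequency amplitude envelope. The sketch below computes that index on a synthetic signal; the frequency bands and parameters are generic choices, not those of the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(5)

# Synthetic signal: 40 Hz amplitude is modulated by the phase of a 6 Hz rhythm.
slow_phase = 2 * np.pi * 6 * t
signal = (1 + 0.8 * np.cos(slow_phase)) * np.sin(2 * np.pi * 40 * t) \
         + np.cos(slow_phase) + 0.5 * rng.standard_normal(t.size)

def bandpass(x, lo, hi, order=3):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(signal, 4, 8)))    # low-frequency phase
amp = np.abs(hilbert(bandpass(signal, 30, 50)))      # high-frequency amplitude envelope

# Tort modulation index: deviation of the phase-binned amplitude from uniformity.
n_bins = 18
bins = np.clip(np.digitize(phase, np.linspace(-np.pi, np.pi, n_bins + 1)) - 1, 0, n_bins - 1)
mean_amp = np.array([amp[bins == b].mean() for b in range(n_bins)])
p = mean_amp / mean_amp.sum()
mi = (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)
print(f"modulation index: {mi:.3f}")   # > 0 indicates phase-amplitude coupling
```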
Affiliation(s)
- HongJune Kim: Dept. of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea; Clinical Research Institute, Konkuk University Medical Center, Seoul, Republic of Korea
- June Sic Kim: Clinical Research Institute, Konkuk University Medical Center, Seoul, Republic of Korea; Research Institute of Biomedical Science & Technology, Konkuk University, Seoul, Republic of Korea.
- Chun Kee Chung: Dept. of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea; Interdisciplinary Program in Neuroscience, Seoul National University, Seoul, Republic of Korea; Dept. of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea; Neuroscience Research Institute, Seoul National University Medical Research Center, Seoul, Republic of Korea
13. Chauhan VS, McCook KC, White AL. Reading Reshapes Stimulus Selectivity in the Visual Word Form Area. eNeuro 2024; 11:ENEURO.0228-24.2024. PMID: 38997142. DOI: 10.1523/eneuro.0228-24.2024.
Abstract
Reading depends on a brain region known as the "visual word form area" (VWFA) in the left ventral occipitotemporal cortex. This region's function is debated because its stimulus selectivity is not absolute, it is modulated by a variety of task demands, and it is inconsistently localized. We used fMRI to characterize the combination of sensory and cognitive factors that activate word-responsive regions that we precisely localized in 16 adult humans (4 male). We then presented three types of character strings: English words, pseudowords, and unfamiliar characters with matched visual features. Participants performed three different tasks while viewing those stimuli: detecting real words, detecting color in the characters, and detecting color in the fixation mark. There were three primary findings about the VWFA's response: (1) It preferred letter strings over unfamiliar characters even when the stimuli were ignored during the fixation task. (2) Compared with those baseline responses, engaging in the word reading task enhanced the response to words but suppressed the response to unfamiliar characters. (3) Attending to the stimuli to judge their color had little effect on the response magnitudes. Thus, the VWFA is uniquely modulated by a cognitive signal that is specific to voluntary linguistic processing and is not additive. Functional connectivity analyses revealed that communication between the VWFA and a left frontal language area increased when the participant engaged in the linguistic task. We conclude that the VWFA is inherently selective for familiar orthography, but it falls under control of the language network when the task demands it.
Affiliation(s)
- Vassiki S Chauhan: Department of Neuroscience & Behavior, Barnard College, Columbia University, New York, New York 10027
- Krystal C McCook: Department of Neuroscience & Behavior, Barnard College, Columbia University, New York, New York 10027
- Alex L White: Department of Neuroscience & Behavior, Barnard College, Columbia University, New York, New York 10027
14. Chauhan VS, McCook KC, White AL. Reading reshapes stimulus selectivity in the visual word form area. bioRxiv [Preprint] 2024:2023.10.04.560764. PMID: 38948708. PMCID: PMC11212929. DOI: 10.1101/2023.10.04.560764.
Abstract
Reading depends on a brain region known as the "visual word form area" (VWFA) in left ventral occipito-temporal cortex. This region's function is debated because its stimulus selectivity is not absolute, it is modulated by a variety of task demands, and it is inconsistently localized. We used fMRI to characterize the combination of sensory and cognitive factors that activate word-responsive regions that we precisely localized in 16 adult humans (4 male). We then presented three types of character strings: English words, pseudowords, and unfamiliar characters with matched visual features. Participants performed three different tasks while viewing those stimuli: detecting real words, detecting color in the characters, and detecting color in the fixation mark. There were three primary findings about the VWFA's response: (1) It preferred letter strings over unfamiliar characters even when the stimuli were ignored during the fixation task; (2) Compared to those baseline responses, engaging in the word reading task enhanced the response to words but suppressed the response to unfamiliar characters. (3) Attending to the stimuli to judge their font color had little effect on the response magnitudes. Thus, the VWFA is uniquely modulated by a cognitive signal that is specific to voluntary linguistic processing and is not additive. Functional connectivity analyses revealed that communication between the VWFA and a left frontal language area increased when the participant engaged in the linguistic task. We conclude that the VWFA is inherently selective for familiar orthography, but it falls under control of the language network when the task demands it.
Affiliation(s)
- Vassiki S. Chauhan: Department of Neuroscience & Behavior, Barnard College, Columbia University, 76 Claremont Ave, New York, NY 10027, USA
- Krystal C McCook: Department of Neuroscience & Behavior, Barnard College, Columbia University, 76 Claremont Ave, New York, NY 10027, USA
- Alex L. White: Department of Neuroscience & Behavior, Barnard College, Columbia University, 76 Claremont Ave, New York, NY 10027, USA
15. Norman LJ, Hartley T, Thaler L. Changes in primary visual and auditory cortex of blind and sighted adults following 10 weeks of click-based echolocation training. Cereb Cortex 2024; 34:bhae239. PMID: 38897817. PMCID: PMC11186672. DOI: 10.1093/cercor/bhae239.
Abstract
Recent work suggests that the adult human brain is very adaptable when it comes to sensory processing. In this context, it has also been suggested that structural "blueprints" may fundamentally constrain neuroplastic change, e.g. in response to sensory deprivation. Here, we trained 12 blind participants and 14 sighted participants in echolocation over a 10-week period, and used MRI in a pre-post design to measure functional and structural brain changes. We found that blind participants and sighted participants together showed a training-induced increase in activation in left and right V1 in response to echoes, a finding difficult to reconcile with the view that sensory cortex is strictly organized by modality. Further, blind participants and sighted participants showed a training-induced increase in activation in right A1 in response to sounds per se (i.e. not echo-specific), and this was accompanied by an increase in gray matter density in right A1 in blind participants and in adjacent acoustic areas in sighted participants. The similarity in functional results between sighted participants and blind participants is consistent with the idea that reorganization may be governed by similar principles in the two groups, yet our structural analyses also showed differences between the groups, suggesting that a more nuanced view may be required.
Affiliation(s)
- Liam J Norman: Department of Psychology, Durham University, Durham, DH1 3LE, UK
- Tom Hartley: Department of Psychology and York Biomedical Research Institute, University of York, Heslington, YO10 5DD, UK
- Lore Thaler: Department of Psychology, Durham University, Durham, DH1 3LE, UK
16. Powell P, Pätzold F, Rouygari M, Furtak M, Kärcher SM, König P. Helping Blind People Grasp: Evaluating a Tactile Bracelet for Remotely Guiding Grasping Movements. Sensors (Basel) 2024; 24:2949. PMID: 38733054. PMCID: PMC11086327. DOI: 10.3390/s24092949.
Abstract
The problem of supporting visually impaired and blind people in meaningful interactions with objects is often neglected. To address this issue, we adapted a tactile belt for enhanced spatial navigation into a bracelet worn on the wrist that allows visually impaired people to grasp target objects. Participants' performance in locating and grasping target items when guided using the bracelet, which provides direction commands via vibrotactile signals, was compared to their performance when receiving auditory instructions. While participants were faster with the auditory commands, they also performed well with the bracelet, encouraging future development of this system and similar systems.
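The core of such a guidance system is a rule that converts the offset between hand and target into a discrete vibration command. A minimal sketch of that decision logic is shown below; the motor layout, tolerance, and command names are assumptions for illustration, not the bracelet's actual firmware.

```python
from dataclasses import dataclass

@dataclass
class Offset:
    """Target position relative to the hand, in centimeters (tracker frame)."""
    dx: float  # + : target is to the right of the hand
    dy: float  # + : target is above the hand
    dz: float  # + : target is farther away (reach forward)

def next_command(offset: Offset, tolerance_cm: float = 3.0) -> str:
    """Return which bracelet motor to pulse, resolving one axis at a time."""
    errors = {
        "right" if offset.dx > 0 else "left": abs(offset.dx),
        "up" if offset.dy > 0 else "down": abs(offset.dy),
        "forward" if offset.dz > 0 else "back": abs(offset.dz),
    }
    direction, magnitude = max(errors.items(), key=lambda kv: kv[1])
    if magnitude <= tolerance_cm:
        return "grasp"        # all axes within tolerance: signal the grasp
    return direction          # e.g., pulse the 'right' motor on the wrist

print(next_command(Offset(dx=12.0, dy=-2.0, dz=5.0)))  # -> right
print(next_command(Offset(dx=1.0, dy=2.0, dz=-1.5)))   # -> grasp
```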
Affiliation(s)
- Piper Powell: Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
- Florian Pätzold: Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
- Milad Rouygari: Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
- Marcin Furtak: Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany; FeelSpace GmbH, 49069 Osnabrück, Germany
- Silke M. Kärcher: Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany; FeelSpace GmbH, 49069 Osnabrück, Germany
- Peter König: Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany; Department of Neurophysiology, University Medical Centre Hamburg-Eppendorf, 20251 Hamburg, Germany
17. D'Angiulli A, Wymark D, Temi S, Bahrami S, Telfer A. Reconsidering Luria's speech mediation: Verbalization and haptic picture identification in children with congenital total blindness. Cortex 2024; 173:263-282. PMID: 38432177. DOI: 10.1016/j.cortex.2024.01.010.
Abstract
Current accounts of behavioral and neurocognitive correlates of plasticity in blindness are just beginning to incorporate the role of speech and verbal production. We assessed Vygotsky/Luria's speech mediation hypothesis, according to which speech activity can become a mediating tool for perception of complex stimuli, specifically, for encoding tactual/haptic spatial patterns which convey pictorial information (haptic pictures). We compared verbalization in congenitally totally blind (CTB) and age-matched sighted but visually impaired (VI) children during a haptic picture naming task which included two repeated, test-retest, identifications. The children were instructed to explore 10 haptic schematic pictures of objects (e.g., cup) and body parts (e.g., face) and provide (without experimenter's feedback) their typical name. Children's explorations and verbalizations were videorecorded and transcribed into audio segments. Using the Computerized Analysis of Language (CLAN) program, we extracted several measurements from the observed verbalizations, including number of utterances and words, utterance/word duration, and exploration time. Using the Word2Vec natural language processing technique we operationalized semantic content from the relative distances between the names provided. Furthermore, we conducted an observational content analysis in which three judges categorized verbalizations according to a rating scale assessing verbalization content. Results consistently indicated across all measures that the CTB children were faster and semantically more precise than their VI counterparts in the first identification test, however, the VI children reached the same level of precision and speed as the CTB children at retest. Overall, the task was harder for the VI group. Consistent with current neuroscience literature, the prominent role of speech in CTB and VI children's data suggests that an underlying cross-modal involvement of integrated brain networks, notably associated with Broca's network, likely also influenced by Braille, could play a key role in compensatory plasticity via the mediational mechanism postulated by Luria.
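Operationalizing semantic content from distances between provided and canonical picture names can be sketched with pretrained word vectors and cosine similarity. In the sketch below, the gensim model name and the word pairs are illustrative assumptions; any pretrained embedding could be substituted.

```python
import gensim.downloader as api

# A small pretrained embedding (downloaded on first use); any pretrained
# word-vector model could be substituted here.
vectors = api.load("glove-wiki-gigaword-50")

# (canonical picture name, name produced by the child) -- illustrative pairs only.
responses = [("cup", "mug"), ("face", "head"), ("cup", "bowl"), ("face", "circle")]

scores = []
for target, produced in responses:
    sim = vectors.similarity(target, produced)   # cosine similarity between embeddings
    scores.append(sim)
    print(f"{target:>6s} -> {produced:<8s} cosine similarity = {sim:.2f}")

# A per-child semantic precision score can then be the mean similarity across pictures.
print(f"mean semantic precision: {sum(scores) / len(scores):.2f}")
```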
Affiliation(s)
- Amedeo D'Angiulli: Carleton University, Department of Neuroscience, Canada; Children's Hospital of Eastern Ontario Research Institute, Neurodevelopmental Health, Canada.
- Dana Wymark: Carleton University, Department of Neuroscience, Canada
- Santa Temi: Carleton University, Department of Neuroscience, Canada
- Sahar Bahrami: Carleton University, Department of Neuroscience, Canada
- Andre Telfer: Carleton University, Department of Neuroscience, Canada
18. Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. PMID: 38394708. PMCID: PMC10899073. DOI: 10.1016/j.dcn.2024.101360.
Abstract
How rigidly does innate architecture constrain the function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose the connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent; the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA.
- Mengyu Tian: Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
19. Gori M, Burr D, Campus C. Disambiguating vision with sound. Curr Biol 2024; 34:R235-R236. PMID: 38531313. DOI: 10.1016/j.cub.2024.01.043.
Abstract
An important task for the visual system is to identify and segregate objects from background. Figure-ground illusions, such as Edgar Rubin's bistable 'vase-faces illusion' [1], make the point clearly: we see either a central vase or lateral faces, alternating spontaneously, but never both images simultaneously. The border is perceptually assigned to either the faces or the vase, which then becomes the figure, while the other becomes shapeless background [2]. The stochastic alternation between figure and ground probably reflects mutual inhibitory processes that ensure a single perceptual outcome [3]. Which shape dominates perception depends on many factors, such as size, symmetry, convexity, enclosure, and so on, as well as attention and intention [4]. Here we show that the assignment of the visual border can be strongly influenced by auditory input, far more than is possible by voluntary intention.
Affiliation(s)
- Monica Gori: UVIP - Unit for visually impaired people, Italian Institute of Technology, Genoa 16152, Italy.
- David Burr: UVIP - Unit for visually impaired people, Italian Institute of Technology, Genoa 16152, Italy; Department of Neuroscience, University of Florence, Florence 50135, Italy; School of Psychology, University of Sydney, Camperdown, NSW 2050, Australia.
- Claudio Campus: UVIP - Unit for visually impaired people, Italian Institute of Technology, Genoa 16152, Italy.
20. Jiao S, Wang K, Zhang L, Luo Y, Lin J, Han Z. Developmental plasticity of the structural network of the occipital cortex in congenital blindness. Cereb Cortex 2023; 33:11526-11540. PMID: 37851850. DOI: 10.1093/cercor/bhad385.
Abstract
The occipital cortex is the visual processing center in the mammalian brain. An unanswered scientific question pertains to the impact of congenital visual deprivation on the development of various profiles within the occipital network. To address this issue, we recruited 30 congenitally blind participants (8 children and 22 adults) as well as 31 sighted participants (10 children and 21 adults). Our investigation focused on identifying the gray matter regions and white matter connections within the occipital cortex, alongside behavioral measures, that demonstrated different developmental patterns between blind and sighted individuals. We discovered significant developmental changes in the gray matter regions and white matter connections of the occipital cortex among blind individuals from childhood to adulthood, in comparison with sighted individuals. Moreover, some of these structures exhibited cognitive functional reorganization. Specifically, in blind adults, the posterior occipital regions (left calcarine fissure and right middle occipital gyrus) showed reorganization of tactile perception, and the forceps major tracts were reorganized for braille reading. These plastic changes in blind individuals may be attributed to experience-dependent neuronal apoptosis, pruning, and myelination. These findings provide valuable insights into the longitudinal neuroanatomical and cognitive functional plasticity of the occipital network following long-term visual deprivation.
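Different developmental trajectories between blind and sighted groups are typically tested as a group-by-age interaction on each regional measure. A minimal sketch of such a model on simulated data follows; the variable names, sample sizes, and effects are illustrative, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_per_group = 30

age = rng.uniform(8, 60, 2 * n_per_group)
group = np.array(["blind"] * n_per_group + ["sighted"] * n_per_group)

# Simulate a regional measure that changes with age more steeply in one group,
# i.e., a group-by-age interaction (effect sizes are arbitrary).
slope = np.where(group == "blind", -0.020, -0.005)
gm_volume = 0.65 + slope * age + rng.normal(0, 0.03, 2 * n_per_group)

df = pd.DataFrame({"age": age, "group": group, "gm_volume": gm_volume})
model = smf.ols("gm_volume ~ age * C(group)", data=df).fit()
print(model.summary().tables[1])   # the age:C(group)[T.sighted] row tests the interaction
```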
Affiliation(s)
- Saiyi Jiao: National Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, No. 19 Xinjiekouwai Street, Haidian District, Beijing 100875, China
- Ke Wang: National Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, No. 19 Xinjiekouwai Street, Haidian District, Beijing 100875, China
- Linjun Zhang: School of Chinese as a Second Language, Peking University, No. 5 Yiheyuan Road, Haidian District, Beijing 100871, China
- Yudan Luo: National Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, No. 19 Xinjiekouwai Street, Haidian District, Beijing 100875, China
- Junfeng Lin: National Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, No. 19 Xinjiekouwai Street, Haidian District, Beijing 100875, China
- Zaizhu Han: National Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, No. 19 Xinjiekouwai Street, Haidian District, Beijing 100875, China
21
|
Plaza PL, Renier L, Rosemann S, De Volder AG, Rauschecker JP. Sound-encoded faces activate the left fusiform face area in the early blind. PLoS One 2023; 18:e0286512. [PMID: 37992062 PMCID: PMC10664868 DOI: 10.1371/journal.pone.0286512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2022] [Accepted: 05/17/2023] [Indexed: 11/24/2023] Open
Abstract
Face perception in humans and nonhuman primates is accomplished by a patchwork of specialized cortical regions. How these regions develop has remained controversial. In sighted individuals, facial information is primarily conveyed via the visual modality. Early blind individuals, on the other hand, can recognize shapes using auditory and tactile cues. Here we demonstrate that such individuals can learn to distinguish faces from houses and other shapes by using a sensory substitution device (SSD) presenting schematic faces as sound-encoded stimuli in the auditory modality. Using functional MRI, we then asked whether a face-selective brain region like the fusiform face area (FFA) shows selectivity for faces in the same subjects, and indeed, we found evidence for preferential activation of the left FFA by sound-encoded faces. These results imply that FFA development does not depend on experience with visual faces per se but may instead depend on exposure to the geometry of facial configurations.
Collapse
Affiliation(s)
- Paula L. Plaza
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
| | - Laurent Renier
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
- Neural Rehabilitation Laboratory, Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
| | - Stephanie Rosemann
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
| | - Anne G. De Volder
- Neural Rehabilitation Laboratory, Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
| | - Josef P. Rauschecker
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
| |
Collapse
|
22
|
Arbel R, Heimler B, Amedi A. Rapid plasticity in the ventral visual stream elicited by a newly learnt auditory script in congenitally blind adults. Neuropsychologia 2023; 190:108685. [PMID: 37741551 DOI: 10.1016/j.neuropsychologia.2023.108685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Revised: 08/07/2023] [Accepted: 09/20/2023] [Indexed: 09/25/2023]
Abstract
Accumulating evidence over the last decades has given rise to a new theory of brain organization, positing that cortical regions are recruited for specific tasks irrespective of the sensory modality via which information is channeled. For instance, the visual reading network has been shown to be recruited for reading via the tactile Braille code in congenitally blind adults. However, how rapidly non-typical sensory input modulates activity in typically visual regions remains unexplored. To this aim, we developed a novel reading orthography, termed OVAL, enabling congenitally blind adults to quickly acquire reading via the auditory modality. OVAL uses the EyeMusic, a visual-to-auditory sensory-substitution device (SSD), to transform visually presented letters, optimized for auditory transformation, into sound. Using fMRI, we show modulation in the right ventral visual stream following 2 h of same-day training. Crucially, following more extensive training (i.e., ∼12 h), we show that OVAL reading recruits the left ventral visual stream, including the location of the Visual Word Form Area, a key grapheme-responsive region within the visual reading network. Our results show that while 2 h of SSD training already yield recruitment of the deprived ventral visual stream by auditory stimuli, computation-selective cross-modal recruitment requires longer training to establish.
Collapse
Affiliation(s)
- Roni Arbel
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Department of Pediatrics, Hadassah Mount Scopus Hospital, Jerusalem, Israel.
| | - Benedetta Heimler
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Institute for Brain, Mind and Technology, Ivcher School of Psychology, Reichman University, Herzeliya, Israel; Center of Advanced Technologies in Rehabilitation (CATR), The Chaim Sheba Medical Center, Tel Hashomer, Israel
| | - Amir Amedi
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Institute for Brain, Mind and Technology, Ivcher School of Psychology, Reichman University, Herzeliya, Israel
| |
Collapse
|
23
|
Sun F, Wang S, Wang Y, Sun J, Li Y, Li Y, Xu Y, Wang X. Differences in generation and maintenance between ictal and interictal generalized spike-and-wave discharges in childhood absence epilepsy: A magnetoencephalography study. Epilepsy Behav 2023; 148:109440. [PMID: 37748416 DOI: 10.1016/j.yebeh.2023.109440] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Revised: 09/05/2023] [Accepted: 09/05/2023] [Indexed: 09/27/2023]
Abstract
PURPOSE: Childhood absence epilepsy (CAE) is characterized by impaired consciousness and distinct electroencephalogram (EEG) patterns, whereas interictal epileptiform discharges (IEDs) do not lead to noticeable symptoms. This study examines the disparity between ictal and interictal generalized spike-and-wave discharges (GSWDs) to clarify the mechanisms underlying CAE and consciousness. METHODS: We enrolled 24 patients with both ictal and interictal GSWDs. Magnetoencephalography (MEG) data were recorded before and during GSWDs at a sampling rate of 6000 Hz and analyzed across six frequency bands. Absolute and relative spectral power were estimated with the Minimum Norm Estimate (MNE) combined with the Welch method. All statistical analyses used paired-sample tests. RESULTS: During GSWDs, the right lateral occipital cortex showed significantly stronger power in the theta band (5-7 Hz) (P = 0.027). As early as 10 s before GSWDs, the interictal group showed stronger spectral power in the delta band (P < 0.01) and weaker power in the alpha band (P < 0.01), in both absolute and relative terms. Additionally, the ictal group showed enhanced spectral power within the occipital cortex in the alpha band and stronger spectral power in the right frontal regions in the beta (15-29 Hz), gamma 1 (30-59 Hz), and gamma 2 (60-90 Hz) bands. CONCLUSIONS: GSWDs appear to develop gradually, with local neural activity changing as early as 10 s before discharge onset. During GSWDs, insensitivity to visual afferent stimuli may be related to the impaired response state in CAE. Strengthening inhibitory signaling in the low-frequency bands may shorten GSWD duration and thereby contribute to seizure control.
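The spectral estimates described above follow a standard recipe: take a source-level time course, estimate its power spectral density with Welch's method, then integrate over frequency bands for absolute power and normalize by total power for relative power. The Python sketch below illustrates only that generic recipe on synthetic data; it is not the authors' MNE pipeline, and the delta/alpha band edges and all signal parameters are assumptions (the theta, beta, and gamma edges are quoted from the abstract).

```python
# Hedged illustration of Welch-based band power (not the study's analysis code).
import numpy as np
from scipy.signal import welch

fs = 6000.0                                   # sampling rate reported in the abstract (Hz)
t = np.arange(0, 10, 1 / fs)                  # 10 s of synthetic "source" signal
x = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)   # 6 Hz rhythm plus noise

bands = {
    "delta": (2, 4), "theta": (5, 7), "alpha": (8, 12),          # delta/alpha edges assumed
    "beta": (15, 29), "gamma1": (30, 59), "gamma2": (60, 90),
}

freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))  # 2-s Welch segments -> 0.5 Hz resolution

def band_power(lo, hi):
    sel = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[sel], freqs[sel])          # absolute power: integrate PSD over the band

total = band_power(1, 90)                          # broadband power used for normalization
for name, (lo, hi) in bands.items():
    p_abs = band_power(lo, hi)
    print(f"{name}: absolute={p_abs:.4f}, relative={p_abs / total:.3f}")
```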
Collapse
Affiliation(s)
- Fangling Sun
- Department of Neurology, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
| | - Siyi Wang
- Department of Neurology, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
| | - Yingfan Wang
- Department of Neurology, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
| | - Jintao Sun
- Department of Neurology, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
| | - Yihan Li
- Department of Neurology, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
| | - Yanzhang Li
- Department of Neurology, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
| | - Yue Xu
- Department of Neurology, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
| | - Xiaoshan Wang
- Department of Neurology, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China.
| |
Collapse
|
24
|
Baltieri M, Iizuka H, Witkowski O, Sinapayen L, Suzuki K. Hybrid Life: Integrating biological, artificial, and cognitive systems. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2023; 14:e1662. [PMID: 37403661 DOI: 10.1002/wcs.1662] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/09/2022] [Revised: 05/22/2023] [Accepted: 05/30/2023] [Indexed: 07/06/2023]
Abstract
Artificial life is a research field studying what processes and properties define life, based on a multidisciplinary approach spanning the physical, natural, and computational sciences. Artificial life aims to foster a comprehensive study of life beyond "life as we know it" and toward "life as it could be," with theoretical, synthetic, and empirical models of the fundamental properties of living systems. While still a relatively young field, artificial life has flourished as an environment for researchers with different backgrounds, welcoming ideas and contributions from a wide range of subjects. Hybrid Life brings our attention to some of the most recent developments within the artificial life community, rooted in more traditional artificial life studies but looking at new challenges emerging from interactions with other fields. Hybrid Life aims to cover studies that can lead to an understanding, from first principles, of what systems are and how biological and artificial systems can interact and integrate to form new kinds of hybrid (living) systems, individuals, and societies. To do so, it focuses on three complementary perspectives: theories of systems and agents, hybrid augmentation, and hybrid interaction. Theories of systems and agents are used to define systems, how they differ (e.g., biological or artificial, autonomous or nonautonomous), and how multiple systems relate in order to form new hybrid systems. Hybrid augmentation focuses on implementations of systems so tightly connected that they act as a single, integrated one. Hybrid interaction is centered around interactions within a heterogeneous group of distinct living and nonliving systems. After discussing some of the major sources of inspiration for these themes, we will focus on an overview of the works that appeared in Hybrid Life special sessions, hosted by the annual Artificial Life Conference between 2018 and 2022. This article is categorized under: Neuroscience > Cognition; Philosophy > Artificial Intelligence; Computer Science and Robotics > Robotics.
Collapse
Affiliation(s)
- Manuel Baltieri
- Araya Inc., Tokyo, Japan
- Department of Informatics, University of Sussex, Brighton, UK
| | - Hiroyuki Iizuka
- Faculty of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Center for Human Nature, Artificial Intelligence and Neuroscience (CHAIN), Hokkaido University, Sapporo, Japan
| | - Olaf Witkowski
- Center for Human Nature, Artificial Intelligence and Neuroscience (CHAIN), Hokkaido University, Sapporo, Japan
- Cross Labs, Cross Compass, Kyoto, Japan
- College of Arts and Sciences, University of Tokyo, Tokyo, Japan
| | - Lana Sinapayen
- Sony Computer Science Laboratories, Kyoto, Japan
- National Institute for Basic Biology, Okazaki, Japan
| | - Keisuke Suzuki
- Center for Human Nature, Artificial Intelligence and Neuroscience (CHAIN), Hokkaido University, Sapporo, Japan
| |
Collapse
|
25
|
Damera SR, Malone PS, Stevens BW, Klein R, Eberhardt SP, Auer ET, Bernstein LE, Riesenhuber M. Metamodal Coupling of Vibrotactile and Auditory Speech Processing Systems through Matched Stimulus Representations. J Neurosci 2023; 43:4984-4996. [PMID: 37197979 PMCID: PMC10324991 DOI: 10.1523/jneurosci.1710-22.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Revised: 03/10/2023] [Accepted: 04/29/2023] [Indexed: 05/19/2023] Open
Abstract
It has been postulated that the brain is organized by "metamodal," sensory-independent cortical modules capable of performing tasks (e.g., word recognition) in both "standard" and novel sensory modalities. Still, this theory has primarily been tested in sensory-deprived individuals, with mixed evidence in neurotypical subjects, thereby limiting its support as a general principle of brain organization. Critically, current theories of metamodal processing do not specify requirements for successful metamodal processing at the level of neural representations. Specification at this level may be particularly important in neurotypical individuals, where novel sensory modalities must interface with existing representations for the standard sense. Here we hypothesized that effective metamodal engagement of a cortical area requires congruence between stimulus representations in the standard and novel sensory modalities in that region. To test this, we first used fMRI to identify bilateral auditory speech representations. We then trained 20 human participants (12 female) to recognize vibrotactile versions of auditory words using one of two auditory-to-vibrotactile algorithms. The vocoded algorithm attempted to match the encoding scheme of auditory speech while the token-based algorithm did not. Crucially, using fMRI, we found that only in the vocoded group did trained vibrotactile stimuli recruit speech representations in the superior temporal gyrus and lead to increased coupling between them and somatosensory areas. Our results advance our understanding of brain organization by providing new insight into unlocking the metamodal potential of the brain, thereby benefitting the design of novel sensory substitution devices that aim to tap into existing processing streams in the brain. SIGNIFICANCE STATEMENT: It has been proposed that the brain is organized by "metamodal," sensory-independent modules specialized for performing certain tasks. This idea has inspired therapeutic applications, such as sensory substitution devices, for example, enabling blind individuals "to see" by transforming visual input into soundscapes. Yet, other studies have failed to demonstrate metamodal engagement. Here, we tested the hypothesis that metamodal engagement in neurotypical individuals requires matching the encoding schemes between stimuli from the novel and standard sensory modalities. We trained two groups of subjects to recognize words generated by one of two auditory-to-vibrotactile transformations. Critically, only vibrotactile stimuli that were matched to the neural encoding of auditory speech engaged auditory speech areas after training. This suggests that matching encoding schemes is critical to unlocking the brain's metamodal potential.
Collapse
Affiliation(s)
- Srikanth R Damera
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
| | - Patrick S Malone
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
| | - Benson W Stevens
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
| | - Richard Klein
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
| | - Silvio P Eberhardt
- Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
| | - Edward T Auer
- Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
| | - Lynne E Bernstein
- Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
| | | |
Collapse
|
26
|
Zhan M, Pallier C, Agrawal A, Dehaene S, Cohen L. Does the visual word form area split in bilingual readers? A millimeter-scale 7-T fMRI study. SCIENCE ADVANCES 2023; 9:eadf6140. [PMID: 37018408 PMCID: PMC10075963 DOI: 10.1126/sciadv.adf6140] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Accepted: 03/06/2023] [Indexed: 05/29/2023]
Abstract
In expert readers, a brain region known as the visual word form area (VWFA) is highly sensitive to written words, exhibiting a posterior-to-anterior gradient of increasing sensitivity to orthographic stimuli whose statistics match those of real words. Using high-resolution 7-tesla functional magnetic resonance imaging (fMRI), we ask whether, in bilingual readers, distinct cortical patches specialize for different languages. In 21 English-French bilinguals, unsmoothed 1.2-millimeter fMRI revealed that the VWFA is actually composed of several small cortical patches highly selective for reading, with a posterior-to-anterior word-similarity gradient, but with near-complete overlap between the two languages. In 10 English-Chinese bilinguals, however, while most word-specific patches exhibited similar reading specificity and word-similarity gradients for reading in Chinese and English, additional patches responded specifically to Chinese writing and, unexpectedly, to faces. Our results show that the acquisition of multiple writing systems can indeed tune the visual cortex differently in bilinguals, sometimes leading to the emergence of cortical patches specialized for a single language.
Collapse
Affiliation(s)
- Minye Zhan
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France
| | - Christophe Pallier
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France
| | - Aakash Agrawal
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France
| | - Stanislas Dehaene
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France
- Collège de France, Université Paris-Sciences-Lettres (PSL), 11 Place Marcelin Berthelot, 75005 Paris, France
| | - Laurent Cohen
- Inserm U 1127, CNRS UMR 7225, Sorbonne Université, Institut du Cerveau, ICM, Paris, France
- AP-HP, Hôpital de la Pitié Salpêtrière, Fédération de Neurologie, Paris, France
| |
Collapse
|
27
|
Tian M, Saccone EJ, Kim JS, Kanjlia S, Bedny M. Sensory modality and spoken language shape reading network in blind readers of Braille. Cereb Cortex 2023; 33:2426-2440. [PMID: 35671478 PMCID: PMC10016046 DOI: 10.1093/cercor/bhac216] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 05/06/2022] [Accepted: 05/07/2022] [Indexed: 01/24/2023] Open
Abstract
The neural basis of reading is highly consistent across many languages and scripts. Are there alternative neural routes to reading? How does the sensory modality of symbols (tactile vs. visual) influence their neural representations? We examined these questions by comparing reading of visual print (sighted group, n = 19) and tactile Braille (congenitally blind group, n = 19). Blind and sighted readers were presented with written (words, consonant strings, non-letter shapes) and spoken stimuli (words, backward speech) that varied in word-likeness. Consistent with prior work, the ventral occipitotemporal cortex (vOTC) was active during Braille and visual reading. A posterior/anterior vOTC word-form gradient was observed only in sighted readers, with more anterior regions preferring larger orthographic units (words). No such gradient was observed in blind readers. Consistent with connectivity predictions, posterior parietal cortices were recruited to a greater degree in blind than in sighted readers and contained word-preferring patches. Lateralization of Braille in blind readers was predicted by the laterality of spoken language and by reading hand. The effect of spoken language increased along the cortical hierarchy, whereas the effect of reading hand waned. These results suggest that the neural basis of reading is influenced by symbol modality and spoken language, and they support connectivity-based views of cortical function.
Collapse
Affiliation(s)
- Mengyu Tian
- Corresponding author: Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles St, Baltimore, MD 21218, United States.
| | - Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, United States
| | - Judy S Kim
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, United States
- Department of Psychology, Yale University, 2 Hillhouse Ave., New Haven, CT 06511, United States
| | - Shipra Kanjlia
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, United States
- Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue Pittsburgh, PA 15213, United States
| | - Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, 3400 N Charles Street, Baltimore, MD 21218, United States
| |
Collapse
|
28
|
Yizhar O, Tal Z, Amedi A. Loss of action-related function and connectivity in the blind extrastriate body area. Front Neurosci 2023; 17:973525. [PMID: 36968509 PMCID: PMC10035577 DOI: 10.3389/fnins.2023.973525] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Accepted: 02/23/2023] [Indexed: 03/11/2023] Open
Abstract
The Extrastriate Body Area (EBA) participates in the visual perception and motor actions of body parts. We recently showed that EBA’s perceptual function develops independently of visual experience, responding to stimuli with body-part information in a supramodal fashion. However, it is still unclear if the EBA similarly maintains its action-related function. Here, we used fMRI to study motor-evoked responses and connectivity patterns in the congenitally blind brain. We found that, unlike the case of perception, EBA does not develop an action-related response without visual experience. In addition, we show that congenital blindness alters EBA’s connectivity profile in a counterintuitive way—functional connectivity with sensorimotor cortices dramatically decreases, whereas connectivity with perception-related visual occipital cortices remains high. To the best of our knowledge, we show for the first time that action-related functions and connectivity in the visual cortex could be contingent on visuomotor experience. We further discuss the role of the EBA within the context of visuomotor control and predictive coding theory.
Collapse
Affiliation(s)
- Or Yizhar
- Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind and Technology, Reichman University, Herzliya, Israel
- Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- *Correspondence: Or Yizhar,
| | - Zohar Tal
- Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
| | - Amir Amedi
- Ivcher School of Psychology, The Institute for Brain, Mind and Technology, Reichman University, Herzliya, Israel
- The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
| |
Collapse
|
29
|
Maimon A, Wald IY, Ben Oz M, Codron S, Netzer O, Heimler B, Amedi A. The Topo-Speech sensory substitution system as a method of conveying spatial information to the blind and vision impaired. Front Hum Neurosci 2023; 16:1058093. [PMID: 36776219 PMCID: PMC9909096 DOI: 10.3389/fnhum.2022.1058093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Accepted: 12/13/2022] [Indexed: 01/27/2023] Open
Abstract
Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in blind and visually impaired users. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations, in a manner that corresponds with the unique way our brains acquire and process information. Spatial information customarily acquired through vision is conveyed through the auditory channel as a combination of sensory (auditory) features and symbolic language (named/spoken) features. Topo-Speech sweeps the visual scene or image and represents each object's identity by naming it with a spoken word, while simultaneously conveying its location by mapping the x-axis of the visual scene or image to the time at which the word is announced and the y-axis to the pitch of the voice. This proof-of-concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind participants showed an accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants performing above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels. To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, or convergent models) and the capacity for spatial representation in the blind. We suggest that the present study's findings support the convergence model and the scenario that posits that the blind are capable of some aspects of spatial representation, as depicted by the algorithm, comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.
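The core Topo-Speech mapping described in this abstract is compact enough to write down directly: an object's horizontal position sets when its spoken label is announced within a left-to-right sweep, and its vertical position sets the pitch of the voice. The Python sketch below is a hedged illustration of that mapping only; the sweep duration, pitch range, and axis orientation are assumed values, not parameters of the published system.

```python
# Hypothetical illustration of the x-to-time, y-to-pitch mapping (not the published Topo-Speech code).
from dataclasses import dataclass

SWEEP_SECONDS = 2.0                              # assumed duration of one left-to-right sweep
PITCH_LOW_HZ, PITCH_HIGH_HZ = 200.0, 800.0       # assumed pitch range for the spoken label

@dataclass
class SonifiedObject:
    name: str         # spoken label for the object ("cup", "door", ...)
    onset_s: float    # when the label is announced within the sweep (encodes x)
    pitch_hz: float   # voice pitch of the announcement (encodes y)

def topo_speech_mapping(name: str, x: float, y: float) -> SonifiedObject:
    """x and y are normalized to [0, 1]; here x = 0 is the left edge and y = 1 is the top."""
    onset = x * SWEEP_SECONDS
    pitch = PITCH_LOW_HZ + y * (PITCH_HIGH_HZ - PITCH_LOW_HZ)
    return SonifiedObject(name, onset, pitch)

if __name__ == "__main__":
    scene = [("cup", 0.2, 0.8), ("keys", 0.7, 0.3)]   # (label, x, y) detections, assumed input
    for label, x, y in scene:
        obj = topo_speech_mapping(label, x, y)
        print(f"say '{obj.name}' at t = {obj.onset_s:.2f} s with pitch {obj.pitch_hz:.0f} Hz")
```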
Collapse
Affiliation(s)
- Amber Maimon
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
| | - Iddo Yehoshua Wald
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
| | - Meshi Ben Oz
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
| | - Sophie Codron
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
| | - Ophir Netzer
- Gonda Brain Research Center, Bar Ilan University, Ramat Gan, Israel
| | - Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
| | - Amir Amedi
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
| |
Collapse
|
30
|
Maimon A, Netzer O, Heimler B, Amedi A. Testing geometry and 3D perception in children following vision restoring cataract-removal surgery. Front Neurosci 2023; 16:962817. [PMID: 36711132 PMCID: PMC9879291 DOI: 10.3389/fnins.2022.962817] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Accepted: 12/19/2022] [Indexed: 01/13/2023] Open
Abstract
As neuroscience and rehabilitative techniques advance, age-old questions concerning the visual experience of those who gain sight after blindness, once thought to be philosophical alone, take center stage and become the target for scientific inquiries. In this study, we employ a battery of visual perception tasks to study the unique experience of a small group of children who have undergone vision-restoring cataract removal surgery as part of the Himalayan Cataract Project. We tested their abilities to perceive in three dimensions (3D) using a binocular rivalry task and the Brock string task, perceive visual illusions, use cross-modal mappings between touch and vision, and spatially group based on geometric cues. Some of the children in this study gained a sense of sight for the first time in their lives, having been born with bilateral congenital cataracts, while others suffered late-onset blindness in one eye alone. This study simultaneously supports yet raises further questions concerning Hubel and Wiesel's critical periods theory and provides additional insight into Molyneux's problem, the ability to correlate vision with touch quickly. We suggest that our findings present a relatively unexplored intermediate stage of 3D vision development. Importantly, we spotlight some essential geometrical perception visual abilities that strengthen the idea that spontaneous geometry intuitions arise independently from visual experience (and education), thus replicating and extending previous studies. We incorporate a new model, not previously explored, of testing children with congenital cataract removal surgeries who perform the task via vision. In contrast, previous work has explored these abilities in the congenitally blind via touch. Taken together, our findings provide insight into the development of what is commonly known as the visual system in the visually deprived and highlight the need to further empirically explore an amodal, task-based interpretation of specializations in the development and structure of the brain. Moreover, we propose a novel objective method, based on a simple binocular rivalry task and the Brock string task, for determining congenital (early) vs. late blindness where medical history and records are partial or lacking (e.g., as is often the case in cataract removal cases).
Collapse
Affiliation(s)
- Amber Maimon
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- *Correspondence: Amber Maimon
| | - Ophir Netzer
- Gonda Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
| | - Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
| | - Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
| |
Collapse
|
31
|
Shvadron S, Snir A, Maimon A, Yizhar O, Harel S, Poradosu K, Amedi A. Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device. Front Hum Neurosci 2023; 17:1058617. [PMID: 36936618 PMCID: PMC10017858 DOI: 10.3389/fnhum.2023.1058617] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Accepted: 01/09/2023] [Indexed: 03/06/2023] Open
Abstract
Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic to cover the blind areas of sighted individuals, that is, the regions outside their visual field. In this initial proof-of-concept study, we test the ability of sighted subjects to combine visual information with surrounding auditory sonification that represents additional visual information. Participants in this study were tasked with recognizing and adequately placing the stimuli, using sound to represent the areas outside the standard human visual field. As such, the participants were asked to report shapes' identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for successful performance of the task (content in both vision and audition was presented in a sweeping clockwise motion around the participant). We found that participants performed well above chance level after a brief 1-h online training session and one on-site training session averaging 20 min. In some cases they could even draw a 2D representation of the scene. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information in order to expand the natural fields of sensory perception.
Collapse
Affiliation(s)
- Shira Shvadron
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
- *Correspondence: Shira Shvadron,
| | - Adi Snir
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
| | - Amber Maimon
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
| | - Or Yizhar
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
- Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- Max Planck Dahlem Campus of Cognition (MPDCC), Max Planck Institute for Human Development, Berlin, Germany
| | - Sapir Harel
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
| | - Keinan Poradosu
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
- Weizmann Institute of Science, Rehovot, Israel
| | - Amir Amedi
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal, Brain Imaging Center, Reichman University, Herzliya, Israel
| |
Collapse
|
32
|
Bosten JM, Coen-Cagli R, Franklin A, Solomon SG, Webster MA. Calibrating Vision: Concepts and Questions. Vision Res 2022; 201:108131. [PMID: 37139435 PMCID: PMC10151026 DOI: 10.1016/j.visres.2022.108131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The idea that visual coding and perception are shaped by experience and adjust to changes in the environment or the observer is universally recognized as a cornerstone of visual processing, yet the functions and processes mediating these calibrations remain in many ways poorly understood. In this article we review a number of facets and issues surrounding the general notion of calibration, with a focus on plasticity within the encoding and representational stages of visual processing. These include how many types of calibrations there are - and how we decide; how plasticity for encoding is intertwined with other principles of sensory coding; how it is instantiated at the level of the dynamic networks mediating vision; how it varies with development or between individuals; and the factors that may limit the form or degree of the adjustments. Our goal is to give a small glimpse of an enormous and fundamental dimension of vision, and to point to some of the unresolved questions in our understanding of how and why ongoing calibrations are a pervasive and essential element of vision.
Collapse
Affiliation(s)
| | - Ruben Coen-Cagli
- Department of Systems and Computational Biology, Dominick P. Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY
| | | | - Samuel G Solomon
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, UK
| | | |
Collapse
|
33
|
Gori M, Amadeo MB, Pavani F, Valzolgher C, Campus C. Temporal visual representation elicits early auditory-like responses in hearing but not in deaf individuals. Sci Rep 2022; 12:19036. [PMID: 36351944 PMCID: PMC9646881 DOI: 10.1038/s41598-022-22224-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2022] [Accepted: 10/11/2022] [Indexed: 11/10/2022] Open
Abstract
It is evident that the brain is capable of large-scale reorganization following sensory deprivation, but the extent of such reorganization is, to date, not clear. The auditory modality is the most accurate at representing temporal information, and deafness is an ideal clinical condition for studying the reorganization of temporal representation when the auditory signal is not available. Here we show that hearing, but not deaf, individuals show a strong ERP response to visual stimuli in temporal areas during a time-bisection task. This ERP response appears 50-90 ms after the flash and recalls some aspects of the N1 ERP component usually elicited by auditory stimuli. The same ERP is not evident for a visual space-bisection task, suggesting that the early recruitment of temporal cortex is specific to building a highly resolved temporal representation within the visual modality. These findings provide evidence that the lack of auditory input can interfere with the typical development of complex visual temporal representations.
Collapse
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
| | - Maria Bianca Amadeo
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
| | - Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; Centro Interateneo di Ricerca Cognizione, Linguaggio e Sordità (CIRCLeS), University of Trento, Trento, Italy; Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Centre de Recherche en Neuroscience de Lyon (CRNL), Bron, France
| | - Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Centre de Recherche en Neuroscience de Lyon (CRNL), Bron, France
| | - Claudio Campus
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
| |
Collapse
|
34
|
Martolini C, Amadeo MB, Campus C, Cappagli G, Gori M. Effects of audio-motor training on spatial representations in long-term late blindness. Neuropsychologia 2022; 176:108391. [DOI: 10.1016/j.neuropsychologia.2022.108391] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Revised: 08/16/2022] [Accepted: 10/01/2022] [Indexed: 11/15/2022]
|
35
|
Sabourin CJ, Merrikhi Y, Lomber SG. Do blind people hear better? Trends Cogn Sci 2022; 26:999-1012. [PMID: 36207258 DOI: 10.1016/j.tics.2022.08.016] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 08/22/2022] [Accepted: 08/25/2022] [Indexed: 01/12/2023]
Abstract
For centuries, anecdotal evidence such as the perfect pitch of the blind piano tuner or blind musician has supported the notion that individuals who have lost their sight early in life have superior hearing abilities compared with sighted people. Recently, auditory psychophysical and functional imaging studies have identified that specific auditory enhancements in the early blind can be linked to activation in extrastriate visual cortex, suggesting crossmodal plasticity. Furthermore, the nature of the sensory reorganization in occipital cortex supports the concept of a task-based functional cartography for the cerebral cortex rather than a sensory-based organization. In total, studies of early-blind individuals provide valuable insights into mechanisms of cortical plasticity and principles of cerebral organization.
Collapse
Affiliation(s)
- Carina J Sabourin
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Biological and Biomedical Engineering Graduate Program, McGill University, Montreal, Quebec H3G 1Y6, Canada
| | - Yaser Merrikhi
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada
| | - Stephen G Lomber
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Biological and Biomedical Engineering Graduate Program, McGill University, Montreal, Quebec H3G 1Y6, Canada; Department of Psychology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Department of Neurology and Neurosurgery, McGill University, Montreal, Quebec H3G 1Y6, Canada.
| |
Collapse
|
36
|
Arbel R, Heimler B, Amedi A. Face shape processing via visual-to-auditory sensory substitution activates regions within the face processing networks in the absence of visual experience. Front Neurosci 2022; 16:921321. [PMID: 36263367 PMCID: PMC9576157 DOI: 10.3389/fnins.2022.921321] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 09/05/2022] [Indexed: 11/16/2022] Open
Abstract
Previous evidence suggests that visual experience is crucial for the emergence and tuning of the typical neural system for face recognition. To challenge this conclusion, we trained congenitally blind adults to recognize faces via visual-to-auditory sensory substitution (SSD). Our results showed a preference for trained faces over other SSD-conveyed visual categories in the fusiform gyrus and in other known face-responsive regions of the deprived ventral visual stream. We also observed a parametric modulation in the same cortical regions for face orientation (upright vs. inverted) and face novelty (trained vs. untrained). Our results strengthen the conclusion that there is a predisposition for sensory-independent and computation-specific processing in specific cortical regions that can be retained through life-long sensory deprivation, independently of previous perceptual experience. They also highlight that, if the right training is provided, such a cortical preference maintains its tuning to what were considered vision-specific face features.
Collapse
Affiliation(s)
- Roni Arbel
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Pediatrics, Hadassah University Hospital-Mount Scopus, Jerusalem, Israel
- *Correspondence: Roni Arbel,
| | - Benedetta Heimler
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind, and Technology, Reichman University, Herzeliya, Israel
- Center of Advanced Technologies in Rehabilitation, Sheba Medical Center, Ramat Gan, Israel
| | - Amir Amedi
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind, and Technology, Reichman University, Herzeliya, Israel
| |
Collapse
|
37
|
Mattioni S, Rezk M, Battal C, Vadlamudi J, Collignon O. Impact of blindness onset on the representation of sound categories in occipital and temporal cortices. eLife 2022; 11:e79370. [PMID: 36070354 PMCID: PMC9451537 DOI: 10.7554/elife.79370] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Accepted: 08/15/2022] [Indexed: 11/30/2022] Open
Abstract
The ventral occipito-temporal cortex (VOTC) reliably encodes auditory categories in people born blind using a representational structure partially similar to the one found in vision (Mattioni et al., 2020). Here, using a combination of uni- and multivoxel analyses applied to fMRI data, we extend our previous findings, comprehensively investigating how early and late acquired blindness impact the cortical regions coding for the deprived and the remaining senses. First, we show an enhanced univariate response to sounds in part of the occipital cortex of both blind groups, concomitant with reduced auditory responses in temporal regions. We then reveal that the representation of the sound categories in the occipital and temporal regions is more similar in blind subjects than in sighted subjects. What could drive this enhanced similarity? The multivoxel encoding of the 'human voice' category that we observed in the temporal cortex of all sighted and blind groups is enhanced in occipital regions in the blind groups, suggesting that the representation of vocal information is more similar between the occipital and temporal regions in blind compared to sighted individuals. We additionally show that blindness does not affect the encoding of the acoustic properties of our sounds (e.g., pitch, harmonicity) in occipital and temporal regions but instead selectively alters the categorical coding of the voice category itself. These results suggest a functionally congruent interplay between the reorganization of occipital and temporal regions following visual deprivation, across the lifespan.
Collapse
Affiliation(s)
- Stefania Mattioni
- Institute for research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, Crossmodal Perception and Plasticity Laboratory, University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
- Department of Brain and Cognition, KU Leuven, Leuven, Belgium
| | - Mohamed Rezk
- Institute for research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, Crossmodal Perception and Plasticity Laboratory, University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
| | - Ceren Battal
- Institute for research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, Crossmodal Perception and Plasticity Laboratory, University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
| | - Jyothirmayi Vadlamudi
- Institute for research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, Crossmodal Perception and Plasticity Laboratory, University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
| | - Olivier Collignon
- Institute for research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, Crossmodal Perception and Plasticity Laboratory, University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
- Center for Mind/Brain Studies, University of Trento, Trento, Italy
- School of Health Sciences, HES-SO Valais-Wallis, Sion, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
| |
Collapse
|
38
|
Campbell EE, Bergelson E. Making sense of sensory language: Acquisition of sensory knowledge by individuals with congenital sensory impairments. Neuropsychologia 2022; 174:108320. [PMID: 35842021 DOI: 10.1016/j.neuropsychologia.2022.108320] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 06/21/2022] [Accepted: 07/06/2022] [Indexed: 10/17/2022]
Abstract
The present article provides a narrative review on how language communicates sensory information and how knowledge of sight and sound develops in individuals born deaf or blind. Studying knowledge of the perceptually inaccessible sensory domain for these populations offers a lens into how humans learn about that which they cannot perceive. We first review the linguistic strategies within language that communicate sensory information. Highlighting the power of language to shape knowledge, we next review the detailed knowledge of sensory information by individuals with congenital sensory impairments, limitations therein, and neural representations of imperceptible phenomena. We suggest that the acquisition of sensory knowledge is supported by language, experience with multiple perceptual domains, and cognitive and social abilities which mature over the first years of life, both in individuals with and without sensory impairment. We conclude by proposing a developmental trajectory for acquiring sensory knowledge in the absence of sensory perception.
Collapse
Affiliation(s)
- Erin E Campbell
- Duke University, Department of Psychology and Neuroscience, USA.
| | - Elika Bergelson
- Duke University, Department of Psychology and Neuroscience, USA
| |
Collapse
|
39
|
Maimon A, Yizhar O, Buchs G, Heimler B, Amedi A. A case study in phenomenology of visual experience with retinal prosthesis versus visual-to-auditory sensory substitution. Neuropsychologia 2022; 173:108305. [PMID: 35752268 PMCID: PMC9297294 DOI: 10.1016/j.neuropsychologia.2022.108305] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 04/30/2022] [Accepted: 06/13/2022] [Indexed: 11/26/2022]
Abstract
The phenomenology of the blind has provided an age-old, unparalleled means of exploring the enigmatic link between the brain and mind. This paper delves into the unique phenomenological experience of a man who became blind in adulthood. He subsequently underwent an Argus II retinal prosthesis implant with its associated training, as well as extensive training on the EyeMusic visual-to-auditory sensory substitution device (SSD), thereby becoming the first reported case to date of dual proficiency with both devices. He offers a firsthand account of what he considers the great potential of combining sensory substitution devices with visual prostheses as part of a complete visual restoration protocol. While the Argus II retinal prosthesis provided him with immediate visual percepts by way of electrically stimulated phosphenes, the EyeMusic SSD required extensive training from the outset. Yet following the extensive EyeMusic training program, our subject reports that the SSD allowed him a richer, more complex perceptual experience that felt more "second nature" to him, whereas the Argus II prosthesis (which also requires training) did not allow him to achieve the same levels of automaticity and transparency. Following long-term use of the EyeMusic SSD, our subject reported that visual percepts, mainly but not exclusively representing the colors portrayed by the EyeMusic, are elicited in association with auditory stimuli, indicating the acquisition of a high level of automaticity. Finally, the case study indicates an additive benefit of combining both devices for the user's subjective phenomenological visual experience.
Collapse
Affiliation(s)
- Amber Maimon
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel.
| | - Or Yizhar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; Max Planck Institute for Human Development, Research Group Adaptive Memory and Decision Making, Berlin, Germany; Max Planck Institute for Human Development, Max Planck Dahlem Campus of Cognition (MPDCC), Berlin, Germany
| | - Galit Buchs
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
| | - Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel.
| |
Collapse
|
40
|
Dai R, Huang Z, Weng X, He S. Early visual exposure primes future cross-modal specialization of the fusiform face area in tactile face processing in the blind. Neuroimage 2022; 253:119062. [PMID: 35263666 DOI: 10.1016/j.neuroimage.2022.119062] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Revised: 02/21/2022] [Accepted: 03/05/2022] [Indexed: 10/18/2022] Open
Abstract
The fusiform face area (FFA) is a core cortical region for face information processing. Evidence suggests that its sensitivity to faces is largely innate and tuned by visual experience. However, how experience in different time windows shapes the plasticity of the FFA remains unclear. In this study, we investigated the role of visual experience at different time points of an individual's early development in the cross-modal face specialization of the FFA. Participants (n = 74) were classified into five groups: congenitally blind, early blind, late blind, low vision, and sighted control. Functional magnetic resonance imaging data were acquired while the participants haptically processed carved faces and other objects. Our results showed a robust and highly consistent face-selective activation in the FFA region in the early blind participants, invariant to the size and level of abstraction of the face stimuli. The cross-modal face activation in the FFA was much less consistent in the other groups. These results suggest that early visual experience primes cross-modal specialization of the FFA, and that even after the absence of visual experience for more than 14 years in early blind participants, their FFA can engage in cross-modal processing of face information.
Collapse
Affiliation(s)
- Rui Dai
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China
| | - Zirui Huang
- Center for Consciousness Science, Department of Anesthesiology, University of Michigan Medical School, Ann Arbor, MI 48109, USA
| | - Xuchu Weng
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou 510631, China.
| | - Sheng He
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, 20031, China; University of Chinese Academy of Sciences, Beijing 100049, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
| |
Collapse
|
41
|
Qu J, Pang Y, Liu X, Cao Y, Huang C, Mei L. Task modulates the orthographic and phonological representations in the bilateral ventral Occipitotemporal cortex. Brain Imaging Behav 2022; 16:1695-1707. [PMID: 35247162 DOI: 10.1007/s11682-022-00641-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/18/2022] [Indexed: 11/25/2022]
Abstract
As a key area in word reading, the left ventral occipitotemporal cortex has been proposed to support abstract orthographic processing, and its middle part has even been labeled the visual word form area (VWFA). Because the definition of the VWFA varies widely and the reading task differs across studies, the function of the left ventral occipitotemporal cortex in word reading remains debated: there is an ongoing question of whether this region is specific to orthographic processing or is engaged within an interactive framework. Using representational similarity analysis (RSA), this study examined information representation in the VWFA at the individual level and the modulatory effect of the reading task. Twenty-four subjects were scanned while performing an explicit (naming) and an implicit (perceptual) reading task. Activation analysis showed that the naming task elicited greater activation in regions related to phonological processing (e.g., the bilateral prefrontal cortex and temporoparietal cortex), while the perceptual task recruited greater activation in the visual cortex and default mode network (e.g., the bilateral middle frontal gyrus, angular gyrus, and the right middle temporal gyrus). More importantly, RSA also showed that task modulated information representation in the bilateral anterior occipitotemporal cortex and the VWFA. Specifically, ROI-based RSA revealed enhanced orthographic and phonological representations in the bilateral anterior fusiform cortex and the VWFA in the naming task relative to the perceptual task. These results suggest that lexical representation in the VWFA is influenced by the demand for phonological processing, which supports the interactive account of the VWFA.
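As a rough illustration of the ROI-based RSA logic referred to above, the sketch below builds a neural representational dissimilarity matrix from simulated voxel patterns and rank-correlates it with a model dissimilarity matrix; the condition counts, voxel counts, and model features are placeholders, not the study's actual design or pipeline.

```python
# Minimal RSA sketch: correlate a neural RDM with a model RDM.
# All data are random placeholders standing in for per-condition beta
# estimates extracted from an ROI such as the VWFA.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_conditions, n_voxels = 40, 200                        # e.g., 40 word stimuli, 200 ROI voxels
patterns = rng.normal(size=(n_conditions, n_voxels))    # placeholder activity patterns

# Neural RDM: 1 - Pearson correlation between condition patterns (condensed form).
neural_rdm = pdist(patterns, metric="correlation")

# Model RDM: e.g., pairwise orthographic dissimilarity between the stimuli (placeholder features).
model_rdm = pdist(rng.normal(size=(n_conditions, 5)), metric="euclidean")

# RSA statistic: rank correlation between the two upper triangles.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho={rho:.3f}, p={p:.3f}")
```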
Collapse
Affiliation(s)
- Jing Qu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
| | - Yingdan Pang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
| | - Xiaoyu Liu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
| | - Ying Cao
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
| | - Chengmei Huang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, 510631, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
| | - Leilei Mei
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China.
| |
Collapse
|
42
|
Mahon BZ. Domain-specific connectivity drives the organization of object knowledge in the brain. HANDBOOK OF CLINICAL NEUROLOGY 2022; 187:221-244. [PMID: 35964974 PMCID: PMC11498098 DOI: 10.1016/b978-0-12-823493-8.00028-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The goal of this chapter is to review neuropsychological and functional MRI findings that inform a theory of the causes of functional specialization for semantic categories within occipito-temporal cortex, the ventral visual processing pathway. The occipito-temporal pathway supports visual object processing and recognition. The theoretical framework that drives this review considers visual object recognition through the lens of how "downstream" systems interact with the outputs of visual recognition processes. Those downstream processes include conceptual interpretation, grasping and object use, navigating and orienting in an environment, physical reasoning about the world, and inferring future actions and the inner mental states of agents. The core argument of this chapter is that innately constrained connectivity between occipito-temporal areas and other regions of the brain is the basis for the emergence of neural specificity for a limited number of semantic domains in the brain.
Collapse
Affiliation(s)
- Bradford Z Mahon
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States.
| |
Collapse
|
43
|
Radziun D, Crucianelli L, Ehrsson HH. Limits of Cross-modal Plasticity? Short-term Visual Deprivation Does Not Enhance Cardiac Interoception, Thermosensation, or Tactile Spatial Acuity. Biol Psychol 2021; 168:108248. [PMID: 34971758 DOI: 10.1016/j.biopsycho.2021.108248] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2021] [Revised: 11/01/2021] [Accepted: 12/23/2021] [Indexed: 01/30/2023]
Abstract
In the present study, we investigated the effect of short-term visual deprivation on discriminative touch, cardiac interoception, and thermosensation by asking 64 healthy volunteers to perform four behavioral tasks. The experimental group contained 32 subjects who were blindfolded and kept in complete darkness for 110 minutes, while the control group consisted of 32 volunteers who were not blindfolded but were otherwise kept under identical experimental conditions. Both groups performed the required tasks three times: before and directly after deprivation (or control) and after an additional washout period of 40 minutes, in which all participants were exposed to normal light conditions. Our results showed that short-term visual deprivation had no effect on any of the senses tested. This finding suggests that short-term visual deprivation does not modulate basic bodily senses and extends this principle beyond tactile processing to the interoceptive modalities of cardiac and thermal sensations.
Collapse
Affiliation(s)
- Dominika Radziun
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden.
| | - Laura Crucianelli
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden.
| | - H Henrik Ehrsson
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden.
| |
Collapse
|
44
|
Romanovska L, Bonte M. How Learning to Read Changes the Listening Brain. Front Psychol 2021; 12:726882. [PMID: 34987442 PMCID: PMC8721231 DOI: 10.3389/fpsyg.2021.726882] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Accepted: 11/23/2021] [Indexed: 01/18/2023] Open
Abstract
Reading acquisition reorganizes existing brain networks for speech and visual processing to form novel audio-visual language representations. This requires substantial cortical plasticity that is reflected in changes in brain activation and in functional as well as structural connectivity between brain areas. The extent to which a child's brain can accommodate these changes may underlie the high variability in reading outcome in both typical and dyslexic readers. In this review, we focus in particular on reading-induced functional changes of the dorsal speech network and discuss how its reciprocal interactions with the ventral reading network contribute to reading outcome. We discuss how the dynamic and intertwined development of both reading networks may be best captured by approaching reading from a skill learning perspective, using audio-visual learning paradigms and longitudinal designs to follow neuro-behavioral changes while children's reading skills unfold.
Collapse
Affiliation(s)
| | - Milene Bonte
- *Correspondence: Linda Romanovska; Milene Bonte
| |
Collapse
|
45
|
Hamilton-Fletcher G, Chan KC. Auditory Scene Analysis Principles Improve Image Reconstruction Abilities of Novice Vision-to-Audio Sensory Substitution Users. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:5868-5871. [PMID: 34892454 PMCID: PMC9352562 DOI: 10.1109/embc46164.2021.9630296] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Sensory substitution devices (SSDs) such as the 'vOICe' preserve visual information in sound by turning visual height, brightness, and laterality into auditory pitch, volume, and panning/time, respectively. However, users have difficulty identifying or tracking multiple simultaneously presented tones, a skill necessary to discriminate the upper and lower edges of object shapes. We explore how these deficits can be addressed by using image sonifications inspired by auditory scene analysis (ASA). Here, sighted subjects (N=25) of varying musical experience listened to, and then reconstructed, complex shapes consisting of simultaneously presented upper and lower lines. Complex shapes were sonified using the vOICe, with either the upper and lower lines varying only in pitch (i.e., the vOICe's 'unaltered' default settings) or with one line degraded to alter its auditory timbre or volume. Overall performance increased with subjects' years of prior musical experience. ANOVAs revealed that both sonification style and musical experience significantly affected performance, with no interaction between them. Compared to the vOICe's 'unaltered' pitch-height mapping, subjects had significantly better image-reconstruction abilities when the lower line was altered via timbre or volume modulation. By contrast, altering the upper line only helped users identify the unaltered lower line. In conclusion, adding ASA principles to vision-to-audio SSDs boosts subjects' image-reconstruction abilities, even if this also reduces the total task-relevant information. Future SSDs should exploit these findings to enhance both novice user abilities and the use of SSDs as visual rehabilitation tools.
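For readers unfamiliar with the vOICe-style encoding that the sonifications above modify, the following sketch implements the commonly described mapping (a left-to-right column sweep, image row mapped to pitch, pixel brightness mapped to loudness); the sweep duration, frequency range, and test image are illustrative assumptions, not the settings used in the study.

```python
# A minimal sketch of a vOICe-style image-to-sound mapping (not the device itself).
import numpy as np

def sonify(image, sweep_s=1.0, fs=44100, f_lo=500.0, f_hi=5000.0):
    """image: 2D array of brightness values in [0, 1], row 0 = top of the scene."""
    n_rows, n_cols = image.shape
    col_len = int(fs * sweep_s / n_cols)          # samples per image column
    t = np.arange(col_len) / fs
    # Higher image rows get higher (log-spaced) frequencies.
    freqs = np.geomspace(f_hi, f_lo, n_rows)
    audio = []
    for c in range(n_cols):                        # left-to-right sweep
        column = sum(image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
                     for r in range(n_rows))       # brightness scales amplitude
        audio.append(column / max(n_rows, 1))      # crude normalization
    return np.concatenate(audio)

# Example: a bright diagonal line on a dark background.
img = np.eye(16)
wave = sonify(img, sweep_s=1.0)
print(wave.shape)   # roughly one second of audio at 44.1 kHz
```

In this framing, the ASA-inspired manipulations described above correspond to giving one of two simultaneously sounding lines a different timbre or amplitude so the auditory system can segregate them.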
Collapse
|
46
|
Arcaro MJ, Livingstone MS. On the relationship between maps and domains in inferotemporal cortex. Nat Rev Neurosci 2021; 22:573-583. [PMID: 34345018 PMCID: PMC8865285 DOI: 10.1038/s41583-021-00490-4] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/24/2021] [Indexed: 02/07/2023]
Abstract
How does the brain encode information about the environment? Decades of research have led to the pervasive notion that the object-processing pathway in primate cortex consists of multiple areas that are each specialized to process different object categories (such as faces, bodies, hands, non-face objects and scenes). The anatomical consistency and modularity of these regions have been interpreted as evidence that these regions are innately specialized. Here, we propose that ventral-stream modules do not represent clusters of circuits that each evolved to process some specific object category particularly important for survival, but instead reflect the effects of experience on a domain-general architecture that evolved to be able to adapt, within a lifetime, to its particular environment. Furthermore, we propose that the mechanisms underlying the development of domains are both evolutionarily old and universal across cortex: topographic maps are fundamental, governing the development of specializations across systems and providing a framework for brain organization.
Collapse
|
47
|
Structural and white matter changes associated with duration of Braille education in early and late blind children. Vis Neurosci 2021; 38:E011. [PMID: 34425936 DOI: 10.1017/s0952523821000080] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
In early blind (EB) and late blind (LB) children, visual deprivation produces cross-modal plasticity in the visual cortex. The progression of structural and tract-based spatial statistics changes in the visual cortex of EB and LB children, as well as their impact on global cognition, has yet to be investigated. The purpose of this study was to determine cortical thickness (CT), gyrification index (GI), and white matter (WM) integrity in EB and LB children, as well as their association with the duration of blindness and education. Structural and diffusion tensor imaging data were acquired on a 3T magnetic resonance imaging scanner in EB and LB children (n = 40 each) and 30 sighted controls (SCs) and processed using the CAT12 toolbox and FSL software. Two-sample t-tests were used for group analyses with P < 0.05 (false discovery rate-corrected). Increased CT in visual, sensorimotor, and auditory areas, and increased GI in the bilateral visual cortex, were observed in EB children. In LB children, the right visual cortex, anterior cingulate, sensorimotor, and auditory areas showed increased GI. Structural and tract-based spatial statistics changes were observed in the anterior visual pathway and in thalamocortical and corticospinal tracts, and were correlated with education onset and global cognition in EB children. Reduced WM impairment, increased CT and GI, and their correlation with global cognitive functions in visually impaired children suggest cross-modal plasticity through an adaptive compensatory mechanism (compared with SCs). Reduced CT and increased fractional anisotropy (FA) in thalamocortical areas in EB children suggest synaptic pruning and altered WM integrity. In the visual cortical pathway, a longer duration of education and the development of blindness modify the morphology of brain areas and influence probabilistic tractography in EB rather than LB children.
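The group-comparison statistic described above (two-sample t-tests thresholded at P < 0.05, FDR-corrected) can be sketched as follows on simulated cortical-thickness values; the region count and group parameters are placeholders, and the study's actual analysis was carried out with CAT12 and FSL rather than this code.

```python
# A minimal sketch of region-wise two-sample t-tests with FDR correction,
# using simulated cortical-thickness data (mm) for two groups.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_regions = 100
eb = rng.normal(2.6, 0.2, size=(40, n_regions))   # early blind group (placeholder values)
sc = rng.normal(2.5, 0.2, size=(30, n_regions))   # sighted controls (placeholder values)

# One t-test per region, then Benjamini-Hochberg FDR across regions.
t_vals, p_vals = ttest_ind(eb, sc, axis=0)
reject, p_fdr, _, _ = multipletests(p_vals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} of {n_regions} regions survive FDR correction (q < 0.05)")
```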
Collapse
|
48
|
Sakai H, Ueda S, Ueno K, Kumada T. Neuroplastic Reorganization Induced by Sensory Augmentation for Self-Localization During Locomotion. FRONTIERS IN NEUROERGONOMICS 2021; 2:691993. [PMID: 38235242 PMCID: PMC10790880 DOI: 10.3389/fnrgo.2021.691993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 07/21/2021] [Indexed: 01/19/2024]
Abstract
Sensory skills can be augmented through training and technological support. This process is underpinned by neural plasticity in the brain. We previously demonstrated that auditory-based sensory augmentation can be used to assist self-localization during locomotion. However, the neural mechanisms underlying this phenomenon remain unclear. Here, by using functional magnetic resonance imaging, we aimed to identify the neuroplastic reorganization induced by sensory augmentation training for self-localization during locomotion. We compared activation in response to auditory cues for self-localization before, the day after, and 1 month after 8 days of sensory augmentation training in a simulated driving environment. Self-localization accuracy improved after sensory augmentation training, compared with the control (normal driving) condition; importantly, sensory augmentation training resulted in auditory responses not only in temporal auditory areas but also in higher-order somatosensory areas extending to the supramarginal gyrus and the parietal operculum. This sensory reorganization had disappeared by 1 month after the end of the training. These results suggest that the use of auditory cues for self-localization during locomotion relies on multimodality in higher-order somatosensory areas, despite substantial evidence that information for self-localization during driving is estimated from visual cues on the proximal part of the road. Our findings imply that the involvement of higher-order somatosensory, rather than visual, areas is crucial for acquiring augmented sensory skills for self-localization during locomotion.
Collapse
Affiliation(s)
- Hiroyuki Sakai
- Human Science Laboratory, Toyota Central R&D Laboratories, Inc., Tokyo, Japan
| | - Sayako Ueda
- TOYOTA Collaboration Center, RIKEN Center for Brain Science, Wako, Japan
| | - Kenichi Ueno
- Support Unit for Functional Magnetic Resonance Imaging, RIKEN Center for Brain Science, Wako, Japan
| | | |
Collapse
|
49
|
Pesnot Lerousseau J, Arnold G, Auvray M. Training-induced plasticity enables visualizing sounds with a visual-to-auditory conversion device. Sci Rep 2021; 11:14762. [PMID: 34285265 PMCID: PMC8292401 DOI: 10.1038/s41598-021-94133-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Accepted: 06/28/2021] [Indexed: 12/04/2022] Open
Abstract
Sensory substitution devices aim to restore visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, over the past decade the idea has emerged that it reflects a mixture of both. To investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device that translates visual images into sounds. In addition, participants' auditory abilities and their phenomenology were measured. Our study revealed that, after training, processes shared with vision were involved when participants were asked to identify sounds, as their performance in sound identification was influenced by simultaneously presented visual distractors. Moreover, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing is rooted in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.
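A crude sketch of how the crossmodal interference effect in such a Stroop-like paradigm might be quantified is given below, using simulated reaction times; the trial counts, RT values, and the increase after training are invented for illustration and do not reproduce the study's data.

```python
# Interference effect = mean RT(incongruent) - mean RT(congruent), per session.
# A post-training increase would suggest that sound identification has come to
# share processing with vision.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
trials = pd.DataFrame({
    "session": np.repeat(["pre", "post"], 200),
    "congruent": np.tile(np.repeat([True, False], 100), 2),
    "rt": np.concatenate([
        rng.normal(650, 80, 100), rng.normal(660, 80, 100),   # pre-training: small cost
        rng.normal(640, 80, 100), rng.normal(700, 80, 100),   # post-training: larger cost
    ]),
})

means = trials.groupby(["session", "congruent"])["rt"].mean().unstack()
interference = means[False] - means[True]   # incongruent minus congruent, per session
print(interference)
```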
Collapse
Affiliation(s)
| | | | - Malika Auvray
- Sorbonne Université, CNRS UMR 7222, Institut des Systèmes Intelligents et de Robotique (ISIR), 75005, Paris, France.
| |
Collapse
|
50
|
Analysis and Validation of Cross-Modal Generative Adversarial Network for Sensory Substitution. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph18126216. [PMID: 34201269 PMCID: PMC8228544 DOI: 10.3390/ijerph18126216] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 06/03/2021] [Accepted: 06/03/2021] [Indexed: 11/20/2022]
Abstract
Visual-auditory sensory substitution has demonstrated great potential to help visually impaired and blind people recognize objects and perform basic navigational tasks. However, the high latency between visual information acquisition and auditory transduction may contribute to the lack of successful adoption of such aid technologies in the blind community; thus far, substitution methods have remained laboratory-scale research or pilot demonstrations. This high conversion latency makes it difficult to perceive fast-moving objects or rapid environmental changes. To reduce this latency, a prior analysis of auditory sensitivity is necessary. However, existing auditory sensitivity analyses are subjective because they rely on human behavioral testing. Therefore, in this study, we propose a cross-modal generative adversarial network-based evaluation method to find an optimal auditory sensitivity, related to the perception of visual information, that reduces transmission latency in visual-auditory sensory substitution. We further conducted a human-based assessment to evaluate the effectiveness of the proposed model-based analysis in behavioral experiments. We conducted experiments with three participant groups: sighted users (SU), congenitally blind (CB), and late blind (LB) individuals. Experimental results from the proposed model showed that the temporal length of the auditory signal used for sensory substitution could be reduced by 50%. This result indicates the possibility of improving the performance of the conventional vOICe method by up to a factor of two. We confirmed through behavioral experiments that the model-based results are consistent with human assessment. Analyzing auditory sensitivity with deep learning models has the potential to improve the efficiency of sensory substitution.
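As a loose, simplified sketch of the kind of cross-modal adversarial setup described above (audio-derived features mapped back to images, with reconstruction quality serving as a proxy for how much visual information a given sonification length preserves), the code below runs one generator and discriminator update on placeholder tensors; the architecture, losses, and dimensions are assumptions, not the authors' model.

```python
# Minimal cross-modal GAN sketch: audio features -> reconstructed image,
# trained with an adversarial term plus an L1 reconstruction term.
import torch
import torch.nn as nn

AUDIO_DIM, IMG_DIM = 1024, 32 * 32        # flattened sonification features and 32x32 images (placeholders)

generator = nn.Sequential(                 # audio features -> image
    nn.Linear(AUDIO_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_DIM), nn.Sigmoid(),
)
discriminator = nn.Sequential(             # image -> real/fake logit
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

audio = torch.randn(16, AUDIO_DIM)         # placeholder sonified-audio features
real_img = torch.rand(16, IMG_DIM)         # placeholder source images

# Discriminator step: real images vs. generated images.
fake_img = generator(audio).detach()
d_loss = bce(discriminator(real_img), torch.ones(16, 1)) + \
         bce(discriminator(fake_img), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator and match the source image.
fake_img = generator(audio)
g_loss = bce(discriminator(fake_img), torch.ones(16, 1)) + l1(fake_img, real_img)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Reconstruction quality as a function of sonification length could then be
# compared by extracting `audio` from shorter (e.g., half-duration) signals.
print(float(d_loss), float(g_loss))
```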
Collapse
|