1
Wenger M, Maimon A, Yizhar O, Snir A, Sasson Y, Amedi A. Hearing temperatures: employing machine learning for elucidating the cross-modal perception of thermal properties through audition. Front Psychol 2024; 15:1353490. PMID: 39156805; PMCID: PMC11327021; DOI: 10.3389/fpsyg.2024.1353490.
Abstract
People can use their sense of hearing to discern thermal properties, though for the most part they are unaware that they can do so. Although people typically claim that they cannot perceive the temperature of pouring water from the sound of it being poured, our research strengthens the evidence that they can. This multimodal ability is implicitly acquired in humans, likely through perceptual learning over a lifetime of exposure to the differences in the physical attributes of pouring water. In this study, we explore people's perception of this intriguing cross-modal correspondence, and investigate the psychophysical foundations of this complex ecological mapping by employing machine learning. Our results show that not only can humans classify the auditory properties of pouring water in practice, but the physical characteristics underlying this phenomenon can also be classified by a pre-trained deep neural network.
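The study's actual pipeline (real recordings and a pre-trained deep neural network) is not reproduced here, but the general idea of classifying a thermal property from audio via spectral features can be sketched. Everything below is invented for illustration: the synthetic tone mixtures stand in for recordings, and the premise that "hot-like" sounds carry relatively more low-frequency energy is an assumption, not a result from the paper:

```python
import numpy as np

SR = 16000  # sample rate in Hz

def tone_mix(freqs, seconds=1.0):
    # Synthetic stand-in for a pouring-water recording: a sum of sinusoids.
    t = np.arange(int(SR * seconds)) / SR
    return sum(np.sin(2 * np.pi * f * t) for f in freqs)

def spectral_centroid(signal):
    # Magnitude-weighted mean frequency of the signal's spectrum.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / SR)
    return float((freqs * spectrum).sum() / spectrum.sum())

# Invented premise: "hot-like" sounds are spectrally lower than "cold-like"
# ones, so a single centroid threshold separates the two classes.
hot_example = tone_mix([200, 400])
cold_example = tone_mix([2000, 4000])
threshold = (spectral_centroid(hot_example) + spectral_centroid(cold_example)) / 2

def classify(signal):
    return "hot" if spectral_centroid(signal) < threshold else "cold"
```

A nearest-threshold rule on one spectral feature is of course far cruder than a deep network, but it shows where the class information would have to live: in the spectral shape of the sound.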
Affiliation(s)
- Mohr Wenger
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Amber Maimon
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Computational Psychiatry and Neurotechnology Lab, Department of Brain and Cognitive Sciences, Ben Gurion University, Be’er Sheva, Israel
- Or Yizhar
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- Adi Snir
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Yonatan Sasson
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Amir Amedi
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
2
Abdel-Ghaffar SA, Huth AG, Lescroart MD, Stansbury D, Gallant JL, Bishop SJ. Occipital-temporal cortical tuning to semantic and affective features of natural images predicts associated behavioral responses. Nat Commun 2024; 15:5531. PMID: 38982092; PMCID: PMC11233618; DOI: 10.1038/s41467-024-49073-8.
Abstract
In everyday life, people need to respond appropriately to many types of emotional stimuli. Here, we investigate whether human occipital-temporal cortex (OTC) shows co-representation of the semantic category and affective content of visual stimuli. We also explore whether the OTC transformation of semantic and affective features extracts information of value for guiding behavior. Participants viewed 1620 emotional natural images while functional magnetic resonance imaging data were acquired. Using voxel-wise modeling, we show widespread tuning to semantic and affective image features across OTC. The top three principal components underlying OTC voxel-wise responses to image features encoded stimulus animacy, stimulus arousal, and interactions of animacy with stimulus valence and arousal. At low to moderate dimensionality, OTC tuning patterns predicted behavioral responses linked to each image better than regressors based directly on image features. This is consistent with OTC representing stimulus semantic category and affective content in a manner suited to guiding behavior.
Affiliation(s)
- Samy A Abdel-Ghaffar
- Department of Psychology, UC Berkeley, Berkeley, CA, 94720, USA
- Google LLC, San Francisco, CA, USA
- Alexander G Huth
- Centre for Theoretical and Computational Neuroscience, UT Austin, Austin, TX, 78712, USA
- Mark D Lescroart
- Department of Psychology, University of Nevada Reno, Reno, NV, 89557, USA
- Dustin Stansbury
- Program in Vision Sciences, UC Berkeley, Berkeley, CA, 94720, USA
- Jack L Gallant
- Department of Psychology, UC Berkeley, Berkeley, CA, 94720, USA
- Program in Vision Sciences, UC Berkeley, Berkeley, CA, 94720, USA
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA, 94720, USA
- Sonia J Bishop
- Department of Psychology, UC Berkeley, Berkeley, CA, 94720, USA
- Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA, 94720, USA
- School of Psychology, Trinity College Dublin, Dublin, Ireland
- Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, D02 PX31, Ireland
3
Szubielska M, Szewczyk M, Augustynowicz P, Kędziora W, Möhring W. Adults' spatial scaling of tactile maps: Insights from studying sighted, early and late blind individuals. PLoS One 2024; 19:e0304008. PMID: 38814897; PMCID: PMC11139347; DOI: 10.1371/journal.pone.0304008.
Abstract
The current study investigated spatial scaling of tactile maps among blind adults and blindfolded sighted controls. We were specifically interested in identifying spatial scaling strategies as well as effects of different scaling directions (up versus down) on participants' performance. To this aim, we asked late blind participants (with visual memory, Experiment 1) and early blind participants (without visual memory, Experiment 2) as well as sighted blindfolded controls to encode a map including a target and to place a response disc at the same spot on an empty, constant-sized referent space. Maps had five different sizes, resulting in five scaling factors (1:3, 1:2, 1:1, 2:1, 3:1) and allowing us to investigate both scaling directions (up and down) in a single, comprehensive design. Accuracy and speed of learning about the target location, as well as of responding, served as dependent variables. We hypothesized that participants who can use visual mental representations (i.e., late blind and blindfolded sighted participants) may adopt mental transformation scaling strategies. However, our results did not support this hypothesis. At the same time, we predicted the usage of relative distance scaling strategies in early blind participants, which was supported by our findings. Moreover, our results suggested that tactile maps can be scaled as accurately, and even faster, by blind participants than by sighted participants. Furthermore, irrespective of visual status, participants in every group gravitated their responses towards the center of the space. Overall, it seems that a lack of visual imagery does not impair early blind adults' spatial scaling ability but causes them to use a different strategy than sighted and late blind individuals.
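The relative-distance strategy described above has a direct computational reading: the target's proportional position on the map is reproduced in the constant-sized referent space, whatever the scaling factor. A minimal sketch of that idea (the function name and coordinate conventions are illustrative assumptions, not taken from the study):

```python
def place_response(target, map_size, referent_size):
    """Relative-distance scaling: keep the target's proportional position
    when transferring it from a map to a differently sized referent space.

    target: (x, y) location on the map; map_size / referent_size: (width, height).
    """
    rel_x = target[0] / map_size[0]
    rel_y = target[1] / map_size[1]
    return (rel_x * referent_size[0], rel_y * referent_size[1])

# Scaling up 1:3 - a target one quarter of the way along a small map lands
# one quarter of the way along the larger referent space.
response = place_response((5, 10), (20, 40), (60, 120))
```

The same function covers both scaling directions: down-scaling is just a referent space smaller than the map.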
Affiliation(s)
- Magdalena Szubielska
- Faculty of Social Sciences, Institute of Psychology, The John Paul II Catholic University of Lublin, Poland
- Marta Szewczyk
- Faculty of Social Sciences, Institute of Psychology, The John Paul II Catholic University of Lublin, Poland
- Paweł Augustynowicz
- Faculty of Social Sciences, Institute of Psychology, The John Paul II Catholic University of Lublin, Poland
- Wenke Möhring
- Faculty of Psychology, University of Basel, Basel, Switzerland
- Department of Educational and Health Psychology, University of Education Schwäbisch Gmünd, Germany
4
Versace E, Freeland L, Emmerson MG. First-sight recognition of touched objects shows that chicks can solve Molyneux's problem. Biol Lett 2024; 20:20240025. PMID: 38565149; PMCID: PMC10987231; DOI: 10.1098/rsbl.2024.0025.
Abstract
If a congenitally blind person learns to distinguish between a cube and a sphere by touch, would they immediately recognize these objects by sight once their vision is restored? This question, posed by Molyneux in 1688, has puzzled philosophers and scientists ever since. To overcome ethical and practical difficulties in the investigation of cross-modal recognition, we studied inexperienced poultry chicks, which can be reared in darkness until the moment of a visual test with no detrimental consequences. After hatching chicks in darkness, we exposed them to either smooth or bumpy tactile stimuli for 24 h. Immediately after the tactile exposure, chicks were tested in a visual recognition task, during their first experience with light. At first sight, chicks that had been exposed to smooth tactile stimuli approached the smooth visual stimulus significantly more than those exposed to bumpy tactile stimuli. These results show that visually inexperienced chicks can solve Molyneux's problem, indicating that cross-modal recognition does not require prior multimodal experience. At least in this precocial species, supra-modal brain areas appear to be functional already at birth. This discovery paves the way for the investigation of predisposed cross-modal cognition that does not depend on visual experience.
Affiliation(s)
- Elisabetta Versace
- Department of Biological and Experimental Psychology, School of Biological and Behavioural Sciences, Queen Mary University of London, 327 Mile End Road, London E1 4NS, UK
- Laura Freeland
- Department of Biological and Experimental Psychology, School of Biological and Behavioural Sciences, Queen Mary University of London, 327 Mile End Road, London E1 4NS, UK
- Michael G. Emmerson
- Department of Biological and Experimental Psychology, School of Biological and Behavioural Sciences, Queen Mary University of London, 327 Mile End Road, London E1 4NS, UK
5
D'Angiulli A, Wymark D, Temi S, Bahrami S, Telfer A. Reconsidering Luria's speech mediation: Verbalization and haptic picture identification in children with congenital total blindness. Cortex 2024; 173:263-282. PMID: 38432177; DOI: 10.1016/j.cortex.2024.01.010.
Abstract
Current accounts of behavioral and neurocognitive correlates of plasticity in blindness are just beginning to incorporate the role of speech and verbal production. We assessed Vygotsky and Luria's speech mediation hypothesis, according to which speech activity can become a mediating tool for the perception of complex stimuli, specifically, for encoding tactual/haptic spatial patterns which convey pictorial information (haptic pictures). We compared verbalization in congenitally totally blind (CTB) and age-matched sighted but visually impaired (VI) children during a haptic picture naming task which included two repeated (test-retest) identifications. The children were instructed to explore 10 haptic schematic pictures of objects (e.g., cup) and body parts (e.g., face) and provide (without experimenter feedback) their typical name. Children's explorations and verbalizations were video-recorded and transcribed into audio segments. Using the Computerized Language ANalysis (CLAN) program, we extracted several measurements from the observed verbalizations, including number of utterances and words, utterance/word duration, and exploration time. Using the Word2Vec natural language processing technique, we operationalized semantic content from the relative distances between the names provided. Furthermore, we conducted an observational content analysis in which three judges categorized verbalizations according to a rating scale assessing verbalization content. Results consistently indicated across all measures that the CTB children were faster and semantically more precise than their VI counterparts in the first identification test; however, the VI children reached the same level of precision and speed as the CTB children at retest. Overall, the task was harder for the VI group.
Consistent with the current neuroscience literature, the prominent role of speech in the CTB and VI children's data suggests that an underlying cross-modal involvement of integrated brain networks, notably associated with Broca's network, and likely also influenced by Braille, could play a key role in compensatory plasticity via the mediational mechanism postulated by Luria.
Affiliation(s)
- Amedeo D'Angiulli
- Carleton University, Department of Neuroscience, Canada
- Children's Hospital of Eastern Ontario Research Institute, Neurodevelopmental Health, Canada
- Dana Wymark
- Carleton University, Department of Neuroscience, Canada
- Santa Temi
- Carleton University, Department of Neuroscience, Canada
- Sahar Bahrami
- Carleton University, Department of Neuroscience, Canada
- Andre Telfer
- Carleton University, Department of Neuroscience, Canada
6
Lettieri G, Handjaras G, Cappello EM, Setti F, Bottari D, Bruno V, Diano M, Leo A, Tinti C, Garbarini F, Pietrini P, Ricciardi E, Cecchetti L. Dissecting abstract, modality-specific and experience-dependent coding of affect in the human brain. Sci Adv 2024; 10:eadk6840. PMID: 38457501; PMCID: PMC10923499; DOI: 10.1126/sciadv.adk6840.
Abstract
Emotion and perception are tightly intertwined, as affective experiences often arise from the appraisal of sensory information. Nonetheless, whether the brain encodes emotional instances using a sensory-specific code or in a more abstract manner is unclear. Here, we answer this question by measuring the association between emotion ratings collected during a unisensory or multisensory presentation of a full-length movie and brain activity recorded in typically developed, congenitally blind, and congenitally deaf participants. Emotional instances are encoded in a vast network encompassing sensory, prefrontal, and temporal cortices. Within this network, the ventromedial prefrontal cortex stores a categorical representation of emotion independent of modality and previous sensory experience, and the posterior superior temporal cortex maps the valence dimension using an abstract code. Sensory experience, more than modality, affects how the brain organizes emotional information outside supramodal regions, suggesting the existence of a scaffold for the representation of emotional states whose functioning is shaped by the sensory inputs available during development.
Affiliation(s)
- Giada Lettieri
- Crossmodal Perception and Plasticity Laboratory, Institute of Research in Psychology & Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Giacomo Handjaras
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Elisa M. Cappello
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Francesca Setti
- Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Davide Bottari
- Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Sensory Experience Dependent Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Matteo Diano
- Department of Psychology, University of Turin, Turin, Italy
- Andrea Leo
- Department of Translational Research and Advanced Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Carla Tinti
- Department of Psychology, University of Turin, Turin, Italy
- Pietro Pietrini
- Forensic Neuroscience and Psychiatry Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Emiliano Ricciardi
- Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Sensory Experience Dependent Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Luca Cecchetti
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
7
Szubielska M, Kędziora W, Augustynowicz P, Picard D. Drawing as a tool for investigating the nature of imagery representations of blind people: The case of the canonical size phenomenon. Mem Cognit 2023. PMID: 37985536; DOI: 10.3758/s13421-023-01491-7.
Abstract
Several studies have shown that blind people, including those with congenital blindness, can use raised-line drawings, both for "reading" tactile graphics and for drawing unassisted. However, research on drawings produced by blind people has mainly been qualitative. The current experimental study was designed to investigate the under-researched issue of the size of drawings created by people with blindness. Participants (N = 59) varied in their visual status. Adventitiously blind people had previous visual experience and might use visual representations (e.g., when visualising objects in imagery/working memory). Congenitally blind people did not have any visual experience. The participants' task was to draw from memory common objects that vary in size in the real world. The findings revealed that both groups of participants produced larger drawings of objects that have larger actual sizes. This means that the size of familiar objects is a property of blind people's mental representations, regardless of their visual status. Our research also sheds light on the nature of the phenomenon of canonical size. Since we found the canonical size effect in a group of people who have been blind from birth, the assumption of the visual nature of this phenomenon (an assumption driven by the ocular-centric biases present in studies of drawing performance) should be revised.
Affiliation(s)
- Magdalena Szubielska
- Institute of Psychology, The John Paul II Catholic University of Lublin, Al. Racławickie 14, 20-950, Lublin, Poland
- Paweł Augustynowicz
- Institute of Psychology, The John Paul II Catholic University of Lublin, Al. Racławickie 14, 20-950, Lublin, Poland
8
Del Gatto C, Indraccolo A, Pedale T, Brunetti R. Crossmodal interference on counting performance: Evidence for shared attentional resources. PLoS One 2023; 18:e0294057. PMID: 37948407; PMCID: PMC10637692; DOI: 10.1371/journal.pone.0294057.
Abstract
During the act of counting, our perceptual system may rely on information coming from different sensory channels. However, when the information coming from different sources is discordant, such as in the case of a de-synchronization between visual stimuli to be counted and irrelevant auditory stimuli, performance in a sequential counting task might deteriorate. Such deterioration may originate from two different mechanisms, both linked to exogenous attention attracted by the auditory stimuli. Indeed, exogenous auditory triggers may infiltrate our internal "counter," interfering with the counting process and resulting in an overcount; alternatively, the exogenous auditory triggers may disrupt the internal "counter" by deviating participants' attention from the visual stimuli, resulting in an undercount. We tested these hypotheses by asking participants to count visual discs sequentially appearing on the screen while listening to task-irrelevant sounds, in systematically varied conditions: visual stimuli could be synchronized or de-synchronized with sounds; they could feature regular or irregular pacing; and their presentation speed could be fast (approx. 3/sec), moderate (approx. 2/sec), or slow (approx. 1.5/sec). Our results support the second hypothesis, since participants tended to undercount visual stimuli in all of the harder conditions (de-synchronized, irregular, fast sequences). We discuss these results in detail, adding novel elements to the study of crossmodal interference.
Affiliation(s)
- Claudia Del Gatto
- Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
- Allegra Indraccolo
- Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
- Tiziana Pedale
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Functional Neuroimaging Laboratory, Fondazione Santa Lucia, IRCCS, Rome, Italy
- Riccardo Brunetti
- Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
9
Xu Y, Vignali L, Sigismondi F, Crepaldi D, Bottini R, Collignon O. Similar object shape representation encoded in the inferolateral occipitotemporal cortex of sighted and early blind people. PLoS Biol 2023; 21:e3001930. PMID: 37490508; PMCID: PMC10368275; DOI: 10.1371/journal.pbio.3001930.
Abstract
We can sense an object's shape by vision or touch. Previous studies suggested that the inferolateral occipitotemporal cortex (ILOTC) implements supramodal shape representations, as it responds more to seeing or touching objects than to shapeless textures. However, such activation in the anterior portion of the ventral visual pathway could be due to the conceptual representation of an object or to visual imagery triggered by touching an object. We addressed these possibilities by directly comparing shape and conceptual representations of objects in early blind (who lack visual experience/imagery) and sighted participants. We found that the bilateral ILOTC in both groups showed stronger activation during a shape verification task than during a conceptual verification task made on the names of the same manmade objects. Moreover, the distributed activity in the ILOTC encoded shape similarity but not conceptual association among objects. Besides the ILOTC, we also found shape representation in both groups' bilateral ventral premotor cortices and intraparietal sulcus (IPS), a frontoparietal circuit relating to object grasping and haptic processing. In contrast, the conceptual verification task activated both groups' left perisylvian brain network relating to language processing and, interestingly, the cuneus in early blind participants only. The ILOTC had stronger functional connectivity to the frontoparietal circuit than to the left perisylvian network, forming a modular structure specialized in shape representation. Our results provide conclusive support for the view that the ILOTC selectively implements shape representation independently of visual experience, and that this unique functionality likely comes from its privileged connection to the frontoparietal haptic circuit.
Affiliation(s)
- Yangwen Xu
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Lorenzo Vignali
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- International School for Advanced Studies (SISSA), Trieste, Italy
- Davide Crepaldi
- International School for Advanced Studies (SISSA), Trieste, Italy
- Roberto Bottini
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Olivier Collignon
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Psychological Sciences Research Institute (IPSY) and Institute of NeuroScience (IoNS), University of Louvain, Louvain-la-Neuve, Belgium
- School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
10
Maimon A, Wald IY, Ben Oz M, Codron S, Netzer O, Heimler B, Amedi A. The Topo-Speech sensory substitution system as a method of conveying spatial information to the blind and vision impaired. Front Hum Neurosci 2023; 16:1058093. PMID: 36776219; PMCID: PMC9909096; DOI: 10.3389/fnhum.2022.1058093.
Abstract
Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in the blind and visually impaired. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations in a manner that corresponds with the unique way our brains acquire and process information. This is done by conveying spatial information, customarily acquired through vision, through the auditory channel, using a combination of sensory (auditory) features and symbolic language (named/spoken) features. Topo-Speech sweeps the visual scene or image and represents each object's identity by naming it in a spoken word, while simultaneously conveying the object's location: the x-axis of the scene is mapped to the time at which the word is announced, and the y-axis to the pitch of the voice. This proof-of-concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind participants showed an average accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels.
To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, or convergent models) and the capacity for spatial representation in the blind. We suggest that the present study's findings support the convergence model and the scenario that posits that the blind are capable of some aspects of spatial representation, as depicted by the algorithm, comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.
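The x-to-time and y-to-pitch mapping described above can be sketched directly. All concrete values here (sweep duration, pitch range, function name) are assumptions for illustration, not the system's actual parameters:

```python
def topo_speech_cue(x, y, width, height,
                    sweep_s=2.0, f_low=220.0, f_high=880.0):
    """Topo-Speech-style cue for an object at pixel (x, y) in a scene:
    horizontal position -> announcement time within a left-to-right sweep,
    vertical position -> voice pitch (higher in the scene = higher pitch)."""
    onset = (x / width) * sweep_s                     # x-axis -> time of announcement
    pitch = f_high - (y / height) * (f_high - f_low)  # y-axis -> pitch (image y grows downward)
    return onset, pitch

# An object centred in a 640x480 scene is announced mid-sweep, at a mid-range pitch.
onset, pitch = topo_speech_cue(320, 240, 640, 480)
```

The spoken word itself carries the object's identity; the cue's timing and pitch carry its location, which is what lets a single utterance convey both "what" and "where".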
Affiliation(s)
- Amber Maimon
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Iddo Yehoshua Wald
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Meshi Ben Oz
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Sophie Codron
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Ophir Netzer
- Gonda Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
11
Maimon A, Netzer O, Heimler B, Amedi A. Testing geometry and 3D perception in children following vision restoring cataract-removal surgery. Front Neurosci 2023; 16:962817. PMID: 36711132; PMCID: PMC9879291; DOI: 10.3389/fnins.2022.962817.
Abstract
As neuroscience and rehabilitative techniques advance, age-old questions concerning the visual experience of those who gain sight after blindness, once thought to be philosophical alone, take center stage and become the target of scientific inquiry. In this study, we employ a battery of visual perception tasks to study the unique experience of a small group of children who have undergone vision-restoring cataract removal surgery as part of the Himalayan Cataract Project. We tested their abilities to perceive in three dimensions (3D) using a binocular rivalry task and the Brock string task, to perceive visual illusions, to use cross-modal mappings between touch and vision, and to group spatially based on geometric cues. Some of the children in this study gained a sense of sight for the first time in their lives, having been born with bilateral congenital cataracts, while others suffered late-onset blindness in one eye alone. This study simultaneously supports, yet raises further questions concerning, Hubel and Wiesel's critical periods theory, and provides additional insight into Molyneux's problem, the question of whether vision and touch can be correlated immediately upon sight restoration. We suggest that our findings reveal a relatively unexplored intermediate stage of 3D vision development. Importantly, we spotlight some essential geometrical perception abilities that strengthen the idea that spontaneous geometry intuitions arise independently of visual experience (and education), thus replicating and extending previous studies. We introduce a new model, not previously explored, of testing children who have undergone congenital cataract removal surgery and perform the tasks via vision, whereas previous work has explored these abilities in the congenitally blind via touch.
Taken together, our findings provide insight into the development of what is commonly known as the visual system in the visually deprived, and highlight the need to further empirically explore an amodal, task-based interpretation of specializations in the development and structure of the brain. Moreover, we propose a novel objective method, based on a simple binocular rivalry task and the Brock string task, for determining congenital (early) versus late blindness where medical history and records are partial or lacking (as is often the case in cataract removal cases).
Affiliation(s)
- Amber Maimon
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Ophir Netzer
- Gonda Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
12
Gori M, Amadeo MB, Pavani F, Valzolgher C, Campus C. Temporal visual representation elicits early auditory-like responses in hearing but not in deaf individuals. Sci Rep 2022; 12:19036. [PMID: 36351944 PMCID: PMC9646881 DOI: 10.1038/s41598-022-22224-x]
Abstract
It is evident that the brain is capable of large-scale reorganization following sensory deprivation, but the extent of such reorganization is, to date, not clear. The auditory modality is the most accurate for representing temporal information, and deafness is an ideal clinical condition to study the reorganization of temporal representation when the audio signal is not available. Here we show that hearing, but not deaf, individuals show a strong ERP response to visual stimuli in temporal areas during a time-bisection task. This ERP response appears 50-90 ms after the flash and recalls some aspects of the N1 ERP component usually elicited by auditory stimuli. The same ERP is not evident for a visual space-bisection task, suggesting that the early recruitment of temporal cortex is specific for building a highly resolved temporal representation within the visual modality. These findings provide evidence that the lack of auditory input can interfere with the typical development of complex visual temporal representations.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; Centro Interateneo di Ricerca Cognizione, Linguaggio e Sordità (CIRCLeS), University of Trento, Trento, Italy; Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Centre de Recherche en Neuroscience de Lyon (CRNL), Bron, France
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Centre de Recherche en Neuroscience de Lyon (CRNL), Bron, France
- Claudio Campus
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
13
Sabourin CJ, Merrikhi Y, Lomber SG. Do blind people hear better? Trends Cogn Sci 2022; 26:999-1012. [PMID: 36207258 DOI: 10.1016/j.tics.2022.08.016]
Abstract
For centuries, anecdotal evidence such as the perfect pitch of the blind piano tuner or blind musician has supported the notion that individuals who have lost their sight early in life have superior hearing abilities compared with sighted people. Recently, auditory psychophysical and functional imaging studies have identified that specific auditory enhancements in the early blind can be linked to activation in extrastriate visual cortex, suggesting crossmodal plasticity. Furthermore, the nature of the sensory reorganization in occipital cortex supports the concept of a task-based functional cartography for the cerebral cortex rather than a sensory-based organization. In total, studies of early-blind individuals provide valuable insights into mechanisms of cortical plasticity and principles of cerebral organization.
Affiliation(s)
- Carina J Sabourin
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Biological and Biomedical Engineering Graduate Program, McGill University, Montreal, Quebec H3G 1Y6, Canada
- Yaser Merrikhi
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada
- Stephen G Lomber
- Department of Physiology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Biological and Biomedical Engineering Graduate Program, McGill University, Montreal, Quebec H3G 1Y6, Canada; Department of Psychology, McGill University, Montreal, Quebec H3G 1Y6, Canada; Department of Neurology and Neurosurgery, McGill University, Montreal, Quebec H3G 1Y6, Canada.
14
Gori M, Bertonati G, Campus C, Amadeo MB. Multisensory representations of space and time in sensory cortices. Hum Brain Mapp 2022; 44:656-667. [PMID: 36169038 PMCID: PMC9842891 DOI: 10.1002/hbm.26090]
Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event-related potential (ERP) responses of 16 participants while they performed multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or third audiovisual stimulus. As predicted, the second audiovisual stimulus of both spatial and temporal bisection tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was more substantial in the occipital areas during the spatial bisection task, and in the temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that this specificity also selectively modulates cortical activity in response to multisensory stimuli.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgia Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- Claudio Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
15
Houwen S, Cox RFA, Roza M, Oude Lansink F, van Wolferen J, Rietman AB. Sensory processing in young children with visual impairments: Use and extension of the Sensory Profile. Res Dev Disabil 2022; 127:104251. [PMID: 35569170 DOI: 10.1016/j.ridd.2022.104251]
Abstract
BACKGROUND Children with visual impairments (VI) are at risk for sensory processing difficulties. A widely used measure for sensory processing is the Sensory Profile (SP). However, the SP requires adaptation to accommodate how children with VI experience sensory information. AIMS (1) To examine sensory processing patterns in young children with VI, (2) to develop VI-specific items to use in conjunction with the SP and to determine internal consistency and construct validity of these newly developed items, and (3) to examine the association between sensory processing and emotional and behavioral problems. METHODS Twenty-six VI-specific items were added to the SP. The SP and these items were completed by caregivers of 90 children with VI between 3 and 8 years old. The Child Behavior Checklist (CBCL) was used to assess emotional and behavioral problems. RESULTS Three- to five-year-old children with VI have significantly more difficulties in three quadrants of the SP as compared to the norm group. Six- to eight-year-old children with VI have more difficulties in all quadrants. A reliable and valid VI-specific set of 15 items was established following psychometric evaluation. Age-related differences were found in the associations between the SP and CBCL. CONCLUSION Although further validation is recommended, this evaluation of the VI-specific item set suggests it has the potential to be a useful measure for children with VI.
Affiliation(s)
- Suzanne Houwen
- University of Groningen, Faculty of Behavioral and Social Sciences, Inclusive and Special Needs Education Unit, Grote Kruisstraat 2/1, 9712 TS Groningen, the Netherlands.
- Ralf F A Cox
- University of Groningen, Faculty of Behavioral and Social Sciences, Department of Psychology, Grote Kruisstraat 2/1, 9712 TS Groningen, the Netherlands.
- Minette Roza
- Bartiméus Expertise Centre for the Visually Impaired, Postbus 1003, 3700 BA Zeist, the Netherlands.
- Femke Oude Lansink
- Bartiméus Expertise Centre for the Visually Impaired, Postbus 1003, 3700 BA Zeist, the Netherlands.
- Jannemieke van Wolferen
- Bartiméus Expertise Centre for the Visually Impaired, Postbus 1003, 3700 BA Zeist, the Netherlands.
- André B Rietman
- Erasmus Medical Center Sophia Children's Hospital, Department of Child and Adolescent Psychiatry/Psychology, Wytemaweg 80, 3015 CN Rotterdam, the Netherlands.
16
Maimon A, Yizhar O, Buchs G, Heimler B, Amedi A. A case study in phenomenology of visual experience with retinal prosthesis versus visual-to-auditory sensory substitution. Neuropsychologia 2022; 173:108305. [PMID: 35752268 PMCID: PMC9297294 DOI: 10.1016/j.neuropsychologia.2022.108305]
Abstract
The phenomenology of the blind has provided an age-old, unparalleled means of exploring the enigmatic link between the brain and mind. This paper delves into the unique phenomenological experience of a man who became blind in adulthood. He subsequently underwent an Argus II retinal prosthesis implant and its accompanying training, as well as extensive training on the EyeMusic visual-to-auditory sensory substitution device (SSD), thereby becoming the first reported case to date of dual proficiency with both devices. He offers a firsthand account of what he considers the great potential of combining sensory substitution devices with visual prostheses as part of a complete visual restoration protocol. While the Argus II retinal prosthesis alone provided him with immediate visual percepts by way of electrically stimulated phosphenes elicited by the device, the EyeMusic SSD requires extensive training from the onset. Yet following the extensive training program with the EyeMusic SSD, our subject reports that the sensory substitution device allowed him to experience a richer, more complex perceptual experience that felt more "second nature" to him, while the Argus II prosthesis (which also requires training) did not allow him to achieve the same levels of automaticity and transparency. Following long-term use of the EyeMusic SSD, our subject reported that visual percepts representing mainly, but not limited to, colors portrayed by the EyeMusic SSD are elicited in association with auditory stimuli, indicating the acquisition of a high level of automaticity. Finally, the case study indicates an additive benefit of combining both devices on the user's subjective phenomenological visual experience.
Affiliation(s)
- Amber Maimon
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel.
- Or Yizhar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; Max Planck Institute for Human Development, Research Group Adaptive Memory and Decision Making, Berlin, Germany; Max Planck Institute for Human Development, Max Planck Dahlem Campus of Cognition (MPDCC), Berlin, Germany
- Galit Buchs
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel.
17
Arbel R, Heimler B, Amedi A. Congenitally blind adults can learn to identify face-shapes via auditory sensory substitution and successfully generalize some of the learned features. Sci Rep 2022; 12:4330. [PMID: 35288597 PMCID: PMC8921184 DOI: 10.1038/s41598-022-08187-z]
Abstract
Unlike sighted individuals, congenitally blind individuals have little to no experience with face shapes. Instead, they rely on non-shape cues, such as voices, to perform character identification. The extent to which face-shape perception can be learned in adulthood via a different sensory modality (i.e., not vision) remains poorly explored. We used a visual-to-auditory Sensory Substitution Device (SSD) that enables conversion of visual images to the auditory modality while preserving their visual characteristics. Expert SSD users were systematically taught to identify cartoon faces via audition. Following a tailored training program lasting ~ 12 h, congenitally blind participants successfully identified six trained faces with high accuracy. Furthermore, they effectively generalized their identification to the untrained, inverted orientation of the learned faces. Finally, after completing the extensive 12-h training program, participants learned six new faces within 2 additional hours of training, suggesting internalization of face-identification processes. Our results document for the first time that facial features can be processed through audition, even in the absence of visual experience across the lifespan. Overall, these findings have important implications for both non-visual object recognition and visual rehabilitation practices and prompt the study of the neural processes underlying auditory face perception in the absence of vision.
18
Hettwer MD, Lancaster TM, Raspor E, Hahn PK, Mota NR, Singer W, Reif A, Linden DEJ, Bittner RA. Evidence From Imaging Resilience Genetics for a Protective Mechanism Against Schizophrenia in the Ventral Visual Pathway. Schizophr Bull 2022; 48:551-562. [PMID: 35137221 PMCID: PMC9077432 DOI: 10.1093/schbul/sbab151]
Abstract
INTRODUCTION Illuminating neurobiological mechanisms underlying the protective effect of recently discovered common genetic resilience variants for schizophrenia is crucial for more effective prevention efforts. Current models implicate adaptive neuroplastic changes in the visual system and their pro-cognitive effects as a schizophrenia resilience mechanism. We investigated whether common genetic resilience variants might affect brain structure in similar neural circuits. METHOD Using structural magnetic resonance imaging, we measured the impact of an established schizophrenia polygenic resilience score (PRSResilience) on cortical volume, thickness, and surface area in 101 healthy subjects and in a replication sample of 33 224 healthy subjects (UK Biobank). FINDING We observed a significant positive whole-brain correlation between PRSResilience and cortical volume in the right fusiform gyrus (FFG) (r = 0.35; P = .0004). Post-hoc analyses in this cluster revealed an impact of PRSResilience on cortical surface area. The replication sample showed a positive correlation between PRSResilience and global cortical volume and surface area in the left FFG. CONCLUSION Our findings represent the first evidence of a neurobiological correlate of a genetic resilience factor for schizophrenia. They support the view that schizophrenia resilience emerges from strengthening neural circuits in the ventral visual pathway and an increased capacity for the disambiguation of social and nonsocial visual information. This may aid psychosocial functioning, ameliorate the detrimental effects of subtle perceptual and cognitive disturbances in at-risk individuals, and facilitate coping with the cognitive and psychosocial consequences of stressors. Our results thus provide a novel link between visual cognition, the vulnerability-stress concept, and schizophrenia resilience models.
Affiliation(s)
- Meike D Hettwer
- Department of Psychiatry, Psychosomatic Medicine, and Psychotherapy, University Hospital Frankfurt, Goethe University, Frankfurt am Main, Germany; Max Planck School of Cognition, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany; Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Thomas M Lancaster
- School of Psychology, Bath University, Bath, UK; MRC Centre for Neuropsychiatric Genetics and Genomics, Division of Psychological Medicine and Clinical Neuroscience, School of Medicine, Cardiff University, Cardiff, UK
- Eva Raspor
- Department of Psychiatry, Psychosomatic Medicine, and Psychotherapy, University Hospital Frankfurt, Goethe University, Frankfurt am Main, Germany
- Peter K Hahn
- Department of Psychiatry, Psychosomatic Medicine, and Psychotherapy, University Hospital Frankfurt, Goethe University, Frankfurt am Main, Germany
- Nina Roth Mota
- Department of Human Genetics, Radboud University Medical Center, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands; Department of Psychiatry, Radboud University Medical Center, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Wolf Singer
- Ernst Strüngmann Institute for Neuroscience (ESI) in Cooperation with Max Planck Society, Frankfurt am Main, Germany; Max Planck Institute for Brain Research (MPI BR), Frankfurt am Main, Germany; Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main, Germany
- Andreas Reif
- Department of Psychiatry, Psychosomatic Medicine, and Psychotherapy, University Hospital Frankfurt, Goethe University, Frankfurt am Main, Germany
- David E J Linden
- MRC Centre for Neuropsychiatric Genetics and Genomics, Division of Psychological Medicine and Clinical Neuroscience, School of Medicine, Cardiff University, Cardiff, UK; School for Mental Health and Neuroscience, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Robert A Bittner
- To whom correspondence should be addressed; Heinrich-Hoffmann-Str. 10, D-60528 Frankfurt am Main, Germany; tel: 69-6301-84713, fax: 69-6301-81775
19
de Sousa AA, Todorov OS, Proulx MJ. A natural history of vertebrate vision loss: Insight from mammalian vision for human visual function. Neurosci Biobehav Rev 2022; 134:104550. [PMID: 35074313 DOI: 10.1016/j.neubiorev.2022.104550]
Abstract
Research on the origin of vision and vision loss in naturally "blind" animal species can reveal the tasks that vision fulfills and the brain's role in visual experience. Models that incorporate evolutionary history, natural variation in visual ability, and experimental manipulations can help disentangle visual ability at a superficial level from behaviors linked to vision but not solely reliant upon it, and could assist the translation of ophthalmological research in animal models to human treatments. To unravel the similarities between blind individuals and blind species, we review concepts of 'blindness' and its behavioral correlates across a range of species. We explore the ancestral emergence of vision in vertebrates, and the loss of vision in blind species with reference to an evolution-based classification scheme. We applied phylogenetic comparative methods to a mammalian tree to explore the evolution of visual acuity using ancestral state estimations. Future research into the natural history of vision loss could help elucidate the function of vision and inspire innovations in how to address vision loss in humans.
Affiliation(s)
- Alexandra A de Sousa
- Centre for Health and Cognition, Bath Spa University, Bath, United Kingdom; UKRI Centre for Accessible, Responsible & Transparent Artificial Intelligence (ART:AI), University of Bath, United Kingdom.
- Orlin S Todorov
- School of Biological Sciences, The University of Queensland, St Lucia, Queensland, Australia
- Michael J Proulx
- UKRI Centre for Accessible, Responsible & Transparent Artificial Intelligence (ART:AI), University of Bath, United Kingdom; Department of Psychology, REVEAL Research Centre, University of Bath, Bath, United Kingdom
20
Downey G. Echolocation among the blind: an argument for an ontogenetic turn. Journal of the Royal Anthropological Institute 2021. [DOI: 10.1111/1467-9655.13607]
Affiliation(s)
- Greg Downey
- Macquarie School of Social Sciences, Macquarie University, Room B514, Level 5, 25B Wally's Walk, NSW 2109, Australia
21
Colorophone 2.0: A Wearable Color Sonification Device Generating Live Stereo-Soundscapes-Design, Implementation, and Usability Audit. Sensors 2021; 21:s21217351. [PMID: 34770658 PMCID: PMC8587929 DOI: 10.3390/s21217351]
Abstract
The successful development of a system realizing color sonification would enable auditory representation of the visual environment. The primary beneficiaries of such a system would be people who cannot directly access visual information, namely the visually impaired community. Despite the plethora of sensory substitution devices, developing systems that provide intuitive color sonification remains a challenge. This paper presents design considerations, development, and the usability audit of a sensory substitution device that converts spatial color information into soundscapes. The implemented wearable system uses a dedicated color space and continuously generates natural, spatialized sounds based on the information acquired from a camera. We developed two head-mounted prototype devices and two graphical user interface (GUI) versions. The first GUI is dedicated to researchers, and the second has been designed to be easily accessible for visually impaired persons. Finally, we ran fundamental usability tests to evaluate the new spatial color sonification algorithm and to compare the two prototypes. Furthermore, we propose recommendations for the development of the next iteration of the system.
22
Late development of audio-visual integration in the vertical plane. Current Research in Behavioral Sciences 2021. [DOI: 10.1016/j.crbeha.2021.100043]
23
Huang T, Zhen Z, Liu J. Semantic Relatedness Emerges in Deep Convolutional Neural Networks Designed for Object Recognition. Front Comput Neurosci 2021; 15:625804. [PMID: 33692678 PMCID: PMC7938322 DOI: 10.3389/fncom.2021.625804]
Abstract
Humans can not only effortlessly recognize objects but also organize object categories into semantic concepts with a nested hierarchical structure. One dominant view is that top-down conceptual guidance is necessary to form such a hierarchy. Here we challenged this idea by examining whether deep convolutional neural networks (DCNNs) could learn relations among objects purely from bottom-up perceptual experience of objects through training for object categorization. Specifically, we explored representational similarity among objects in a typical DCNN (e.g., AlexNet) and found that representations of object categories were organized in a hierarchical fashion, suggesting that the relatedness among objects emerged automatically when learning to recognize them. Critically, the relatedness of objects that emerged in the DCNN was highly similar to the WordNet hierarchy in humans, implying that top-down conceptual guidance may not be a prerequisite for humans to learn the relatedness among objects. In addition, the developmental trajectory of the relatedness among objects during training revealed that the hierarchical structure was constructed in a coarse-to-fine fashion and reached maturity before the establishment of object recognition ability. Finally, the fineness of the relatedness was greatly shaped by the demands of the tasks the DCNN performed: the higher the superordinate level of object classification, the coarser the hierarchical structure of relatedness that emerged. Taken together, our study provides the first empirical evidence that semantic relatedness of objects emerges as a by-product of object recognition in DCNNs, implying that humans may acquire semantic knowledge about objects without explicit top-down conceptual guidance.
Affiliation(s)
- Taicheng Huang
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Zonglei Zhen
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, China
- Jia Liu
- Department of Psychology, Tsinghua University, Beijing, China
24
Dzięgiel-Fivet G, Plewko J, Szczerbiński M, Marchewka A, Szwed M, Jednoróg K. Neural network for Braille reading and the speech-reading convergence in the blind: Similarities and differences to visual reading. Neuroimage 2021; 231:117851. [PMID: 33582273 DOI: 10.1016/j.neuroimage.2021.117851]
Abstract
All writing systems represent units of spoken language. Studies on the neural correlates of reading in different languages show that this skill relies on access to brain areas dedicated to speech processing. Speech-reading convergence onto a common perisylvian network is therefore considered universal among different writing systems. Using fMRI, we test whether this holds true also for tactile Braille reading in the blind. The neural networks for Braille and visual reading overlapped in the left ventral occipitotemporal (vOT) cortex. Even though we showed similar perisylvian specialization for speech in both groups, blind subjects did not engage this speech system for reading. In contrast to the sighted, speech-reading convergence in the blind was absent in the perisylvian network. Instead, the blind engaged vOT not only in reading but also in speech processing. The involvement of the vOT in speech processing and its engagement in reading in the blind suggests that vOT is included in a modality independent language network in the blind, also evidenced by functional connectivity results. The analysis of individual speech-reading convergence suggests that there may be segregated neuronal populations in the vOT for speech processing and reading in the blind.
Affiliation(s)
- Gabriela Dzięgiel-Fivet
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland.
- Joanna Plewko
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Marcin Szwed
- Department of Psychology, Jagiellonian University, Cracow, Poland
- Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland.
25
Visual motion processing recruits regions selective for auditory motion in early deaf individuals. Neuroimage 2021; 230:117816. [PMID: 33524580 DOI: 10.1016/j.neuroimage.2021.117816]
Abstract
In early deaf individuals, the auditory-deprived temporal brain regions become engaged in visual processing. In our study, we further tested the hypothesis that intrinsic functional specialization guides the expression of cross-modal responses in the deprived auditory cortex. We used functional MRI to characterize the brain response to horizontal, radial and stochastic visual motion in early deaf and hearing individuals matched for the use of oral or sign language. Visual motion showed an enhanced response in the 'deaf' mid-lateral planum temporale, a region selective to auditory motion as demonstrated by a separate auditory motion localizer in hearing people. Moreover, multivariate pattern analysis revealed that this reorganized temporal region showed enhanced decoding of motion categories in the deaf group, while the visual motion-selective region hMT+/V5 showed reduced decoding when compared to hearing people. Dynamic Causal Modelling revealed that the 'deaf' motion-selective temporal region shows a specific increase in its functional interactions with hMT+/V5 and is now part of a large-scale visual motion-selective network. In addition, we observed preferential responses to radial, compared to horizontal, visual motion in the 'deaf' right superior temporal cortex, a region that also shows preferential responses to approaching/receding sounds in the hearing brain. Overall, our results suggest that the early experience of auditory deprivation interacts with intrinsic constraints and triggers a large-scale reallocation of computational load between auditory and visual brain regions that typically support the multisensory processing of motion information.
26
Abstract
What are the principles of brain organization? In the motor domain, separate pathways were found for reaching and grasping actions performed by the hand. To what extent is this organization specific to the hand or based on abstract action types, regardless of which body part performs them? We tested people born without hands who perform actions with their feet. Activity in frontoparietal association motor areas showed preference for an action type (reaching or grasping), regardless of whether it was performed by the foot in people born without hands or by the hand in typically-developed controls. These findings provide evidence that some association areas are organized based on abstract functions of action types, independent of specific sensorimotor experience and parameters of specific body parts. Many parts of the visuomotor system guide daily hand actions, like reaching for and grasping objects. Do these regions depend exclusively on the hand as a specific body part whose movement they guide, or are they organized for the reaching task per se, for any body part used as an effector? To address this question, we conducted a neuroimaging study with people born without upper limbs—individuals with dysplasia—who use the feet to act, as they and typically developed controls performed reaching and grasping actions with their dominant effector. Individuals with dysplasia have no prior experience acting with hands, allowing us to control for hand motor imagery when acting with another effector (i.e., foot). Primary sensorimotor cortices showed selectivity for the hand in controls and foot in individuals with dysplasia. Importantly, we found a preference based on action type (reaching/grasping) regardless of the effector used in the association sensorimotor cortex, in the left intraparietal sulcus and dorsal premotor cortex, as well as in the basal ganglia and anterior cerebellum. These areas also showed differential response patterns between action types for both groups. 
Intermediate areas along a posterior–anterior gradient in the left dorsal premotor cortex gradually transitioned from selectivity based on the body part to selectivity based on the action type. These findings indicate that some visuomotor association areas are organized based on abstract action functions independent of specific sensorimotor parameters, paralleling sensory feature-independence in visual and auditory cortices in people born blind and deaf. Together, they suggest association cortices across action and perception may support specific computations, abstracted from low-level sensorimotor elements.
27
Abboud S, Cohen L. Distinctive Interaction Between Cognitive Networks and the Visual Cortex in Early Blind Individuals. Cereb Cortex 2020; 29:4725-4742. [PMID: 30715236 DOI: 10.1093/cercor/bhz006]
Abstract
In early blind individuals, brain activation by a variety of nonperceptual cognitive tasks extends to the visual cortex, while in the sighted it is restricted to supramodal association areas. We hypothesized that such activation results from the integration of different sectors of the visual cortex into typical task-dependent networks. We tested this hypothesis with fMRI in blind and sighted subjects using tasks assessing speech comprehension, incidental long-term memory and both verbal and nonverbal executive control, in addition to collecting resting-state data. All tasks activated the visual cortex in blind relative to sighted subjects, which enabled its segmentation according to task sensitivity. We then assessed the unique brain-scale functional connectivity of the segmented areas during resting state. Language-related seeds were preferentially connected to frontal and temporal language areas; the seed derived from the executive task was connected to the right dorsal frontoparietal executive network; and the memory-related seed was uniquely connected to mesial frontoparietal areas involved in episodic memory retrieval. Thus, using a broad set of language, executive, and memory tasks in the same subjects, combined with resting state connectivity, we demonstrate the selective integration of different patches of the visual cortex into brain-scale networks with distinct localization, lateralization, and functional roles.
Affiliation(s)
- Sami Abboud
- Institut du Cerveau et de la Moelle épinière, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, Paris, France
- Laurent Cohen
- Institut du Cerveau et de la Moelle épinière, ICM, Inserm U 1127, CNRS UMR 7225, Sorbonne Université, Paris, France; Service de Neurologie 1, Hôpital de la Pitié Salpêtrière, AP-HP, Paris, France
28
Abstract
A speech signal carries information about meaning and about the talker conveying that meaning. It is now known that these two dimensions are related. There is evidence that gaining experience with a particular talker in one modality not only facilitates better phonetic perception in that modality, but also transfers across modalities to allow better phonetic perception in the other. This finding suggests that experience with a talker provides familiarity with some amodal properties of their articulation such that the experience can be shared across modalities. The present study investigates if experience with talker-specific articulatory information can also support cross-modal talker learning. In Experiment 1 we show that participants can learn to identify ten novel talkers from point-light and sinewave speech, expanding on prior work. Point-light and sinewave speech also supported similar talker identification accuracies, and similar patterns of talker confusions were found across stimulus types. Experiment 2 showed these stimuli could also support cross-modal talker matching, further expanding on prior work. Finally, in Experiment 3 we show that learning to identify talkers in one modality (visual-only point-light speech) facilitates learning of those same talkers in another modality (auditory-only sinewave speech). These results suggest that some of the information for talker identity takes a modality-independent form.
29
Spagna A, Wu T, Kim K, Fan J. Supramodal executive control of attention: Evidence from unimodal and crossmodal dual conflict effects. Cortex 2020; 133:266-276. [PMID: 33157346 DOI: 10.1016/j.cortex.2020.09.018]
Abstract
Although we have demonstrated that the executive control of attention acts supramodally, as shown by a significant correlation between conflict effect measures in visual and auditory tasks, no direct evidence of the equivalence of the computational mechanisms governing the allocation of executive control resources within and across modalities has been found. Here, in two independent groups of 40 participants each, we examined the interaction effect of conflict processing in both unimodal (visual) and crossmodal (visual and auditory) dual-conflict paradigms (flanker conflict processing in Task 1 and then in the following Task 2) with a manipulation of the stimulus onset asynchrony (SOA). In both the unimodal and the crossmodal dual-conflict paradigms, the conflict processing of Task 1 significantly interfered with the processing of Task 2 when the SOA was short, as shown by an additive interference effect of Task 1 on Task 2 under time constraints. These results suggest that there is a unified supramodal entity that supports conflict processing by implementing comparable mechanisms in unimodal and crossmodal scenarios.
Affiliation(s)
- Alfredo Spagna
- Department of Psychology, Columbia University in the City of New York, NY, USA
- Tingting Wu
- Department of Psychology, Queens College, The City University of New York, Queens, NY, USA
- Kevin Kim
- Department of Psychology, Queens College, The City University of New York, Queens, NY, USA
- Jin Fan
- Department of Psychology, Queens College, The City University of New York, Queens, NY, USA
30
Bianco V, Berchicci M, Livio Perri R, Quinzi F, Mussini E, Spinelli D, Di Russo F. Preparatory ERPs in visual, auditory, and somatosensory discriminative motor tasks. Psychophysiology 2020; 57:e13687. [PMID: 32970337 DOI: 10.1111/psyp.13687]
Abstract
Previous event-related potential (ERP) studies, mainly from the present research group, showed a novel component, the prefrontal negativity (pN), recorded in visual-motor discriminative tasks during the pre-stimulus phase. This component is concomitant with activity related to motor preparation, the Bereitschaftspotential (BP). The pN component has been reported in experiments based on the visual modality only; for other modalities (acoustic and/or somatosensory) the presence of the pN warrants further investigation. This study represents a first step in this direction; indeed, we aimed to describe the pN and the BP components in discriminative response tasks (DRTs) for three sensory modalities. In Experiment 1, ERPs were recorded in 29 adults in visual and auditory DRTs; an additional group of 15 adults participated in a somatosensory DRT (Experiment 2). In line with previous results, both the pN and the BP were clearly detectable in the visual modality. In the auditory modality the prefrontal pN was not detectable directly; however, the pN could be derived by subtraction of separate EEG traces recorded in a "passive" version of the same auditory task, in which motor responses were not required. In the somatosensory modality both the pN and the BP were detectable, although with lower amplitudes with respect to the other two sensory modalities. Overall, regardless of the sensory modality, anticipatory task-related pN and BP components could be detected (or derived by subtraction) over both the prefrontal and motor cortices. These results support the view that anticipatory processes share common components among sensory modalities.
Affiliation(s)
- Valentina Bianco
- Laboratory of Electrophysiology Processes, IRCCS Santa Lucia Foundation, Rome, Italy; Laboratory of Cognitive Neuroscience, Department of Languages and Literatures, Communication, Education and Society, University of Udine, Udine, Italy
- Marika Berchicci
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Federico Quinzi
- Laboratory of Electrophysiology Processes, IRCCS Santa Lucia Foundation, Rome, Italy
- Elena Mussini
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Donatella Spinelli
- Laboratory of Electrophysiology Processes, IRCCS Santa Lucia Foundation, Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
- Francesco Di Russo
- Laboratory of Electrophysiology Processes, IRCCS Santa Lucia Foundation, Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
31
Rinaldi L, Ciricugno A, Merabet LB, Vecchi T, Cattaneo Z. The Effect of Blindness on Spatial Asymmetries. Brain Sci 2020; 10:brainsci10100662. [PMID: 32977398 PMCID: PMC7597958 DOI: 10.3390/brainsci10100662]
Abstract
The human cerebral cortex is asymmetrically organized with hemispheric lateralization pervading nearly all neural systems of the brain. Whether the lack of normal visual development affects hemispheric specialization subserving the deployment of visuospatial attention asymmetries is controversial. In principle, indeed, the lack of early visual experience may affect the lateralization of spatial functions, and the blind may rely on a different sensory input compared to the sighted. In this review article, we thus present a current state-of-the-art synthesis of empirical evidence concerning the effects of visual deprivation on the lateralization of various spatial processes (i.e., including line bisection, mirror symmetry, and localization tasks). Overall, the evidence reviewed indicates that spatial processes are supported by a right hemispheric network in the blind, hence, analogously to the sighted. Such a right-hemisphere dominance, however, seems more accentuated in the blind as compared to the sighted as indexed by the greater leftward bias shown in different spatial tasks. This is possibly the result of the more pronounced involvement of the right parietal cortex during spatial tasks in blind individuals compared to the sighted, as well as of the additional recruitment of the right occipital cortex, which would reflect the cross-modal plastic phenomena that largely characterize the blind brain.
Affiliation(s)
- Luca Rinaldi
- Department of Brain and Behavioural Science, University of Pavia, Piazza Botta 6, 27100 Pavia, Italy
- Lotfi B. Merabet
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA 02115, USA
- Tomaso Vecchi
- Department of Brain and Behavioural Science, University of Pavia, Piazza Botta 6, 27100 Pavia, Italy
- IRCCS Mondino Foundation, 27100 Pavia, Italy
- Zaira Cattaneo
- IRCCS Mondino Foundation, 27100 Pavia, Italy
- Department of Psychology, University of Milano-Bicocca, 20126 Milano, Italy
32
Englund M, Faridjoo S, Iyer CS, Krubitzer L. Available Sensory Input Determines Motor Performance and Strategy in Early Blind and Sighted Short-Tailed Opossums. iScience 2020; 23:101527. [PMID: 33083758 PMCID: PMC7516066 DOI: 10.1016/j.isci.2020.101527]
Abstract
The early loss of vision results in a reorganized neocortex, affecting areas of the brain that process both the spared and lost senses, and leads to heightened abilities on discrimination tasks involving the spared senses. Here, we used performance measures and machine learning algorithms that quantify behavioral strategy to determine if and how early vision loss alters adaptive sensorimotor behavior. We tested opossums on a motor task involving somatosensation and found that early blind animals had increased limb placement accuracy compared with sighted controls, while showing similarities in crossing strategy. However, increased reliance on tactile inputs in early blind animals resulted in greater deficits in limb placement and behavioral flexibility when the whiskers were trimmed. These data show that compensatory cross-modal plasticity extends beyond sensory discrimination tasks to motor tasks involving the spared senses and highlights the importance of whiskers in guiding forelimb control.
Highlights:
- Early blind opossums outperform sighted controls during ladder rung walking
- Whisker trimming causes forelimb accuracy deficits in blind and sighted opossums
- Whisker trimming, but not the loss of vision, impacts stereotypical movements
- Both groups adopt conservative approaches to ladder crossing after whisker trimming
Affiliation(s)
- Mackenzie Englund
- Department of Psychology, University of California, 135 Young Hall, 1 Shields Avenue, Davis, CA 95616, USA
- Samaan Faridjoo
- Department of Molecular and Cellular Biology, University of California, 149 Briggs Hall, 1 Shields Avenue, Davis, CA 95616, USA
- Christopher S Iyer
- Symbolic Systems Program, Stanford University, 460 Margaret Jacks Hall, 450 Serra Mall, Stanford, CA 94305, USA
- Leah Krubitzer
- Department of Psychology, University of California, 135 Young Hall, 1 Shields Avenue, Davis, CA 95616, USA; Center for Neuroscience, University of California, 1544 Newton Court, Davis, CA 95618, USA
33
Lloyd-Esenkaya T, Lloyd-Esenkaya V, O'Neill E, Proulx MJ. Multisensory inclusive design with sensory substitution. Cogn Res Princ Implic 2020; 5:37. [PMID: 32770416 PMCID: PMC7415050 DOI: 10.1186/s41235-020-00240-7]
Abstract
Sensory substitution techniques are perceptual and cognitive phenomena used to represent one sensory form with an alternative. Current applications of sensory substitution techniques are typically focused on the development of assistive technologies whereby visually impaired users can acquire visual information via auditory and tactile cross-modal feedback. But despite their evident success in scientific research and furthering theory development in cognition, sensory substitution techniques have not yet gained widespread adoption within sensory-impaired populations. Here we argue that shifting the focus from assistive to mainstream applications may resolve some of the current issues regarding the use of sensory substitution devices to improve outcomes for those with disabilities. This article provides a tutorial guide on how to use research into multisensory processing and sensory substitution techniques from the cognitive sciences to design new inclusive cross-modal displays. A greater focus on developing inclusive mainstream applications could lead to innovative technologies that could be enjoyed by every person.
Affiliation(s)
- Tayfun Lloyd-Esenkaya
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, University of Bath, Bath, UK
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, UK
- Michael J Proulx
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Psychology, University of Bath, Bath, UK
34
Does (lack of) sight matter for V1? New light from the study of the blind brain. Neurosci Biobehav Rev 2020; 118:1-2. [PMID: 32711007 DOI: 10.1016/j.neubiorev.2020.07.014]
35
Wang X, Men W, Gao J, Caramazza A, Bi Y. Two Forms of Knowledge Representations in the Human Brain. Neuron 2020; 107:383-393.e5. [PMID: 32386524 DOI: 10.1016/j.neuron.2020.04.010]
Abstract
Sensory experience shapes what and how knowledge is stored in the brain-our knowledge about the color of roses depends in part on the activity of color-responsive neurons based on experiences of seeing roses. We compared the brain basis of color knowledge in congenitally (or early) blind individuals, whose color knowledge can only be obtained through language descriptions and/or cognitive inference, to that of sighted individuals whose color-knowledge benefits from both sensory experience and language. We found that some regions support color knowledge only in the sighted, whereas a region in the left dorsal anterior temporal lobe supports object-color knowledge in both the blind and sighted groups, indicating the existence of a sensory-independent knowledge coding system in both groups. Thus, there are (at least) two forms of object knowledge representations in the human brain: sensory-derived and language- and cognition-derived knowledge, supported by different brain systems.
Affiliation(s)
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
- Weiwei Men
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Jiahong Gao
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China; McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Alfonso Caramazza
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA; Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China.
36
Castaldi E, Lunghi C, Morrone MC. Neuroplasticity in adult human visual cortex. Neurosci Biobehav Rev 2020; 112:542-552. [DOI: 10.1016/j.neubiorev.2020.02.028]
37
Ortiz T, Ortiz-Teran L, Turrero A, Poch-Broto J, de Erausquin GA. A N400 ERP Study in letter recognition after passive tactile stimulation training in blind children and sighted controls. Restor Neurol Neurosci 2020; 37:197-206. [PMID: 31227674 DOI: 10.3233/rnn-180838]
Abstract
BACKGROUND We previously demonstrated that, after one week of using a sensory substitution device (SSD), tactile stimulation results in faster activation of the lateral occipital complex in blind children than in sighted controls. OBJECTIVE We used long-term haptic tactile stimulation training with an SSD to test whether it results in stable cross-modal reassignment of visual pathways after six months, providing high-level processing of tactile semantic content. METHODS We enrolled 12 blind and 12 sighted children. The SSD transforms images to a stimulation matrix in contact with the dominant hand. Subjects underwent twice-daily training sessions, 5 days/week for six months. Children were asked to describe line orientation, name letters, and read words. ERP sessions were performed at baseline and 6 months to analyze the N400 ERP component and reaction times (RT). N400 sources were estimated with Low Resolution Electromagnetic Tomography (LORETA). SPM8 was used to make population-level inferences. RESULTS We found no group differences in RTs, accuracy of identifications, N400 latencies or distributions with the line task at 1 week or at 6 months. RTs on the letter recognition task were also similar. After 6 months, behavioral training increased accurate letter identification in both sighted and blind children (χ² = 11906.934, p = 0.000), but the increase was larger in blind children (χ² = 8.272, p = 0.004). Behavioral training shifted peak N400 amplitude to left occipital and bilateral parietal cortices in blind children, but to left precentral and postcentral and bilateral occipital cortices in sighted controls. CONCLUSIONS Blind children learn to recognize SSD-delivered letters better than sighted controls and had greater N400 amplitude in the occipital region. To the best of our knowledge, our results provide the first published example of standard letter recognition (not Braille) by children with blindness using a tactile delivery system.
Affiliation(s)
- Tomas Ortiz
- Department of Psychiatry, Faculty of Medicine Universidad Complutense, Madrid, Spain
- Laura Ortiz-Teran
- Department of Radiology, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard University, Boston, USA
- Agustin Turrero
- Department of Biostatistics, Faculty of Medicine Universidad Complutense, Madrid, Spain
- Joaquin Poch-Broto
- Department of Ear, Nose and Throat, Hospital Clínico Universitario San Carlos, Madrid, Spain
- Gabriel A de Erausquin
- Department of Psychiatry and Neurology, Institute of Neuroscience, University of Texas Rio Grande Valley School of Medicine, Harlingen, USA
38
Gori M, Amadeo MB, Campus C. Spatial metric in blindness: behavioural and cortical processing. Neurosci Biobehav Rev 2020; 109:54-62. [PMID: 31899299 DOI: 10.1016/j.neubiorev.2019.12.031]
Abstract
The visual modality dominates spatial perception and, in the absence of vision, space representation might be altered. Here we review our work showing that blind individuals have a strong deficit when performing spatial bisection tasks (Gori et al., 2014). We also describe the neural correlates associated with this deficit, as blind individuals do not show the same ERP response mimicking the visual C1 reported in sighted people during spatial bisection (Campus et al., 2019). Interestingly, the deficit is not always evident in late blind individuals, and it is dependent on blindness duration. We report that the deficit disappears when coherent temporal and spatial cues are presented to blind people. This suggests that they may use time information to infer spatial maps (Gori et al., 2018). Finally, we propose a model to explain why blind individuals are impaired in this task, speculating that a lack of vision drives the construction of a multi-sensory cortical network that codes space based on temporal, rather than spatial, coordinates.
Affiliation(s)
- Monica Gori
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano Di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy.
- Maria Bianca Amadeo
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano Di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering, Università Degli Studi Di Genova, via all'Opera Pia, 13, 16145 Genova, Italy
- Claudio Campus
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano Di Tecnologia, Via E. Melen, 83, 16152 Genova, Italy
39
The Cross-Modal Effects of Sensory Deprivation on Spatial and Temporal Processes in Vision and Audition: A Systematic Review on Behavioral and Neuroimaging Research since 2000. Neural Plast 2019; 2019:9603469. [PMID: 31885540 PMCID: PMC6914961 DOI: 10.1155/2019/9603469]
Abstract
One of the most significant effects of neural plasticity manifests in the case of sensory deprivation when cortical areas that were originally specialized for the functions of the deprived sense take over the processing of another modality. Vision and audition represent two important senses needed to navigate through space and time. Therefore, the current systematic review discusses the cross-modal behavioral and neural consequences of deafness and blindness by focusing on spatial and temporal processing abilities, respectively. In addition, movement processing is evaluated as compiling both spatial and temporal information. We examine whether the sense that is not primarily affected changes in its own properties or in the properties of the deprived modality (i.e., temporal processing as the main specialization of audition and spatial processing as the main specialization of vision). References to the metamodal organization, supramodal functioning, and the revised neural recycling theory are made to address global brain organization and plasticity principles. Generally, according to the reviewed studies, behavioral performance is enhanced in those aspects for which both the deprived and the overtaking senses provide adequate processing resources. Furthermore, the behavioral enhancements observed in the overtaking sense (i.e., vision in the case of deafness and audition in the case of blindness) are clearly limited by the processing resources of the overtaking modality. Thus, the brain regions that were previously recruited during the behavioral performance of the deprived sense now support a similar behavioral performance for the overtaking sense. This finding suggests a more input-unspecific and processing principle-based organization of the brain. Finally, we highlight the importance of controlling for and stating factors that might impact neural plasticity and the need for further research into visual temporal processing in deaf subjects.
40. Gu J, Liu B, Li X, Wang P, Wang B. Cross-modal representations in early visual and auditory cortices revealed by multi-voxel pattern analysis. Brain Imaging Behav 2019; 14:1908-1920. [PMID: 31183774 DOI: 10.1007/s11682-019-00135-2]
Abstract
Primary sensory cortices can respond not only to their defined sensory modality but also to cross-modal information. Beyond this observed cross-modal phenomenon, it is worth investigating whether cross-modal information can be used to categorize stimuli and what effect other factors, such as experience and imagination, may have on cross-modal processing. In this study, we researched cross-modal information processing in the early visual cortex (EVC, including the visual areas 1, 2, and 3 (V1, V2, and V3)) and auditory cortex (primary (A1) and secondary (A2) auditory cortex). Images and sound clips were presented to participants separately in two experiments in which participants' imagination and expectations were restricted by an orthogonal fixation task, and the data were collected by functional magnetic resonance imaging (fMRI). We successfully decoded categories of the cross-modal stimuli in the ROIs except for V1 by multi-voxel pattern analysis (MVPA). It was further shown that familiar sounds yielded higher classification accuracies in V2 and V3 than unfamiliar sounds. The results of the cross-classification analysis showed that there was no significant similarity between the activity patterns induced by different stimulus modalities. Even though the cross-modal representation is robust when considering the restriction of top-down expectations and mental imagery in our experiments, sound experience showed effects on cross-modal representation in V2 and V3. In addition, primary sensory cortices may receive information from different modalities in different ways, so the activity patterns between the two modalities were not similar enough to complete the cross-classification successfully.
Affiliation(s)
- Jin Gu: College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China
- Baolin Liu: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, People's Republic of China
- Xianglin Li: Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
- Peiyuan Wang: Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
- Bin Wang: Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
41. Halley AC, Krubitzer L. Not all cortical expansions are the same: the coevolution of the neocortex and the dorsal thalamus in mammals. Curr Opin Neurobiol 2019; 56:78-86. [PMID: 30658218 DOI: 10.1016/j.conb.2018.12.003]
Abstract
A central question in comparative neurobiology concerns how evolution has produced brains with expanded neocortices, composed of more areas with unique connectivity and functional properties. Some mammalian lineages, such as primates, exhibit exceptionally large cortices relative to the amount of sensory inputs from the dorsal thalamus, and this expansion is associated with a larger number of distinct cortical areas, composing a larger proportion of the cortical sheet. We propose a link between the organization of the neocortex and its expansion relative to the size of the dorsal thalamus, based on a combination of work in comparative neuroanatomy and experimental research.
Affiliation(s)
- Andrew C Halley: Center for Neuroscience, University of California, Davis, CA, United States
- Leah Krubitzer: Center for Neuroscience, University of California, Davis, CA, United States; Department of Psychology, University of California, Davis, CA, United States
42. Gudi-Mindermann H, Rimmele JM, Nolte G, Bruns P, Engel AK, Röder B. Working memory training in congenitally blind individuals results in an integration of occipital cortex in functional networks. Behav Brain Res 2018; 348:31-41. [DOI: 10.1016/j.bbr.2018.04.002]
43. Fine I, Park JM. Blindness and Human Brain Plasticity. Annu Rev Vis Sci 2018.
Abstract
Early blindness causes fundamental alterations of neural function across more than 25% of cortex: changes that span the gamut from metabolism to behavior and collectively represent one of the most dramatic examples of plasticity in the human brain. The goal of this review is to describe how the remarkable behavioral and neuroanatomical compensations demonstrated by blind individuals provide insights into the extent, mechanisms, and limits of human brain plasticity.
Affiliation(s)
- Ione Fine: Department of Psychology, University of Washington, Seattle, Washington 98195, USA
- Ji-Min Park: Department of Psychology, University of Washington, Seattle, Washington 98195, USA
44. Benetti S, Novello L, Maffei C, Rabini G, Jovicich J, Collignon O. White matter connectivity between occipital and temporal regions involved in face and voice processing in hearing and early deaf individuals. Neuroimage 2018; 179:263-274. [PMID: 29908936 DOI: 10.1016/j.neuroimage.2018.06.044]
Abstract
Neuroplasticity following sensory deprivation has long inspired neuroscience research in the quest to understand how sensory experience and genetics interact in shaping the brain's functional and structural architecture. Many studies have shown that sensory deprivation can lead to cross-modal functional recruitment of sensory-deprived cortices. Little is known, however, about how structural reorganization may support these functional changes. In this study, we examined early deaf, hearing signer, and hearing non-signer individuals using diffusion MRI to evaluate the potential structural connectivity linked to the functional recruitment of the temporal voice area by face stimuli in deaf individuals. More specifically, we characterized the structural connectivity between occipital, fusiform, and temporal regions typically supporting voice- and face-selective processing. Despite the extensive functional reorganization for face processing in the temporal cortex of the deaf, macroscopic properties of these connections did not differ across groups. However, both occipito- and fusiform-temporal connections showed significant microstructural changes between groups (fractional anisotropy reduction, radial diffusivity increase). We propose that the reorganization of temporal regions after early auditory deprivation builds on intrinsic and mainly preserved anatomical connectivity between functionally specific temporal and occipital regions.
Affiliation(s)
- Stefania Benetti: Center for Mind/Brain Studies, University of Trento, 38123, Trento, Italy
- Lisa Novello: Center for Mind/Brain Studies, University of Trento, 38123, Trento, Italy
- Chiara Maffei: Athinoula A. Martinos Center, Massachusetts General Hospital, Charlestown, MA, 01129, USA
- Giuseppe Rabini: Center for Mind/Brain Studies, University of Trento, 38123, Trento, Italy
- Jorge Jovicich: Center for Mind/Brain Studies, University of Trento, 38123, Trento, Italy
- Olivier Collignon: Center for Mind/Brain Studies, University of Trento, 38123, Trento, Italy; Institute of Research in Psychology (IPSY) and in Neuroscience (IoNS), University of Louvain, 1348, Louvain-la-Neuve, Belgium
45. Singh AK, Phillips F, Merabet LB, Sinha P. Why Does the Cortex Reorganize after Sensory Loss? Trends Cogn Sci 2018; 22:569-582. [PMID: 29907530 DOI: 10.1016/j.tics.2018.04.004]
Abstract
A growing body of evidence demonstrates that the brain can reorganize dramatically following sensory loss. Although the existence of such neuroplastic crossmodal changes is not in doubt, the functional significance of these changes remains unclear. The dominant belief is that reorganization is compensatory. However, results thus far do not unequivocally indicate that sensory deprivation results in markedly enhanced abilities in other senses. Here, we consider alternative reasons besides sensory compensation that might drive the brain to reorganize after sensory loss. One such possibility is that the cortex reorganizes not to confer functional benefits, but to avoid undesirable physiological consequences of sensory deafferentation. Empirical assessment of the validity of this and other possibilities defines a rich program for future research.
Affiliation(s)
- Amy Kalia Singh: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Flip Phillips: Department of Psychology and Neuroscience, Skidmore College, Saratoga Springs, NY, USA
- Lotfi B Merabet: Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Pawan Sinha: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
46. Rinaldi L, Merabet LB, Vecchi T, Cattaneo Z. The spatial representation of number, time, and serial order following sensory deprivation: A systematic review. Neurosci Biobehav Rev 2018; 90:371-380. [PMID: 29746876 DOI: 10.1016/j.neubiorev.2018.04.021]
Abstract
The spatial representation of numerical and temporal information is thought to be rooted in our multisensory experiences. Accordingly, we may expect visual or auditory deprivation to affect the way we represent numerical magnitude and time spatially. Here, we systematically review recent findings on how blind and deaf individuals represent abstract concepts such as magnitude and time (e.g., past/future, serial order of events) in a spatial format. Interestingly, available evidence suggests that sensory deprivation does not prevent the spatial "re-mapping" of abstract information, but differences compared to normally sighted and hearing individuals may emerge depending on the specific dimension considered (i.e., numerical magnitude, time as past/future, serial order). Herein we discuss how the study of sensory deprived populations may shed light on the specific, and possibly distinct, mechanisms subserving the spatial representation of these concepts. Furthermore, we pinpoint unresolved issues that need to be addressed by future studies to grasp a full understanding of the spatial representation of abstract information associated with visual and auditory deprivation.
Affiliation(s)
- Luca Rinaldi: Department of Psychology, University of Milano-Bicocca, Milano, Italy; NeuroMI, Milan Center for Neuroscience, Milano, Italy
- Lotfi B Merabet: The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, USA
- Tomaso Vecchi: Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy; IRCCS Mondino Foundation, Pavia, Italy
- Zaira Cattaneo: Department of Psychology, University of Milano-Bicocca, Milano, Italy; IRCCS Mondino Foundation, Pavia, Italy
47. Santaniello G, Sebastián M, Carretié L, Fernández-Folgueiras U, Hinojosa JA. Haptic recognition memory following short-term visual deprivation: Behavioral and neural correlates from ERPs and alpha band oscillations. Biol Psychol 2018; 133:18-29. [PMID: 29360562 DOI: 10.1016/j.biopsycho.2018.01.008]
Abstract
In the current study, we investigated the effects of short-term visual deprivation (2 h) on a haptic recognition memory task with familiar objects. Behavioral data, as well as event-related potentials (ERPs) and induced event-related oscillations (EROs) were analyzed. At the behavioral level, deprived participants showed speeded reaction times to new stimuli. Analyses of ERPs indicated that starting from 1000 ms the recognition of old objects elicited enhanced positive amplitudes only for the visually deprived group. Visual deprivation also influenced EROs. In this sense, we observed reduced power in the lower-1 alpha band for the processing of new compared to old stimuli between 500 and 750 ms. Overall, our data showed improved haptic recognition memory after a short period of visual deprivation. These effects were thought to reflect a compensatory mechanism that might have developed as an adaptive strategy for dealing with the environment when visual information is not available.
Affiliation(s)
- Gerardo Santaniello: Instituto Pluridisciplinar, Universidad Complutense de Madrid, 28040 Madrid, Spain
- Manuel Sebastián: Instituto Pluridisciplinar, Universidad Complutense de Madrid, 28040 Madrid, Spain; Facultad de Ciencias de la Salud, Universidad Católica San Antonio de Murcia, 30107 Guadalupe, Murcia, Spain
- Luis Carretié: Facultad de Psicología, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- José Antonio Hinojosa: Instituto Pluridisciplinar, Universidad Complutense de Madrid, 28040 Madrid, Spain; Facultad de Psicología, Universidad Complutense de Madrid, 28223 Pozuelo de Alarcón, Madrid, Spain
48. Ricciardi E, Menicagli D, Leo A, Costantini M, Pietrini P, Sinigaglia C. Peripersonal space representation develops independently from visual experience. Sci Rep 2017; 7:17673. [PMID: 29247162 PMCID: PMC5732274 DOI: 10.1038/s41598-017-17896-9]
Abstract
Our daily-life actions are typically driven by vision. When acting upon an object, we need to represent its visual features (e.g. shape, orientation, etc.) and to map them into our own peripersonal space. But what happens with people who have never had any visual experience? How can they map object features into their own peripersonal space? Do they do it differently from sighted agents? To tackle these questions, we carried out a series of behavioral experiments in sighted and congenitally blind subjects. We took advantage of a spatial alignment effect paradigm, which typically refers to a decrease of reaction times when subjects perform an action (e.g., a reach-to-grasp pantomime) congruent with that afforded by a presented object. To systematically examine peripersonal space mapping, we presented visual or auditory affording objects both within and outside subjects’ reach. The results showed that sighted and congenitally blind subjects did not differ in mapping objects into their own peripersonal space. Strikingly, this mapping occurred also when objects were presented outside subjects’ reach, but within the peripersonal space of another agent. This suggests that (the lack of) visual experience does not significantly affect the development of both one’s own and others’ peripersonal space representation.
Affiliation(s)
- Dario Menicagli: MOMILab, IMT School for Advanced Studies Lucca, I-55100, Lucca, Italy
- Andrea Leo: MOMILab, IMT School for Advanced Studies Lucca, I-55100, Lucca, Italy; Research Center "E. Piaggio", University of Pisa, Pisa, I-56100, Italy
- Marcello Costantini: Department of Neuroscience and Imaging and Clinical Science, University G. d'Annunzio, Chieti, I-66100, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, I-66100, Italy; Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK
- Pietro Pietrini: MOMILab, IMT School for Advanced Studies Lucca, I-55100, Lucca, Italy
- Corrado Sinigaglia: Department of Philosophy, University of Milan, via Festa del Perdono 7, I-20122, Milano, Italy; CSSA, Centre for the Study of Social Action, University of Milan, Milan, I-20122, Italy
49. Peelen MV, Downing PE. Category selectivity in human visual cortex: Beyond visual object recognition. Neuropsychologia 2017; 105:177-183. [DOI: 10.1016/j.neuropsychologia.2017.03.033]
50. Evidence from Blindness for a Cognitively Pluripotent Cortex. Trends Cogn Sci 2017; 21:637-648. [DOI: 10.1016/j.tics.2017.06.003]