1. Domenici V, Collignon O, Lettieri G. Affect in the dark: Navigating the complex landscape of social cognition in blindness. Prog Brain Res 2025; 292:175-202. PMID: 40409920. DOI: 10.1016/bs.pbr.2025.02.002.
Abstract
Research on the consequences of blindness has primarily focused on how visual experience influences basic sensory abilities, largely overlooking the intricate world of social cognition. However, social cognition abilities are crucial: they enable individuals to navigate complex interactions, understand others' perspectives, regulate emotions, and establish meaningful connections, all essential for successful adaptation and integration into society. Emotional and social signals are frequently conveyed through nonverbal visual cues, so understanding the foundational role vision plays in shaping everyday affective experiences is fundamental. Here, we aim to summarize existing research on social cognition in individuals with blindness. In doing so, we strive to offer a comprehensive overview of social processing under sensory deprivation while pinpointing areas that remain largely unexplored. By identifying gaps in current knowledge, this review paves the way for future investigations into how visual experience shapes the development of emotional and social cognition in the mind and the brain.
Affiliation(s)
- Veronica Domenici
- Affective Physiology and Interoception Group (API), MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy; Social and Affective Neuroscience Group (SANe), MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy; University of Camerino, Camerino, Italy
- Olivier Collignon
- Crossmodal Perception and Plasticity Laboratory, Institute of Research in Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-la-Neuve, Belgium; School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne, Switzerland
- Giada Lettieri
- Affective Physiology and Interoception Group (API), MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy; Crossmodal Perception and Plasticity Laboratory, Institute of Research in Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-la-Neuve, Belgium.
2. Kolarik AJ, Moore BCJ. Principles governing the effects of sensory loss on human abilities: An integrative review. Neurosci Biobehav Rev 2025; 169:105986. PMID: 39710017. DOI: 10.1016/j.neubiorev.2024.105986.
Abstract
Blindness or deafness can significantly influence sensory abilities in the intact modalities, affecting communication, orientation, and navigation. Explanations for why certain abilities are enhanced and others degraded include the following: crossmodal cortical reorganization enhances abilities by providing additional neural processing resources, and sensory processing is impaired for tasks where calibration from the normally intact sense is required for good performance. However, these explanations are often specific to particular tasks or modalities, and do not account for why task-dependent enhancement or degradation is observed. This paper investigates whether sensory systems operate according to a theoretical framework comprising seven general principles (the perceptual restructuring hypothesis) spanning the various modalities. These principles predict whether an ability will be enhanced or degraded following sensory loss. Evidence from a wide range of studies is discussed to assess the validity of the principles across different combinations of impaired sensory modalities (deafness or blindness) and intact modalities (vision, audition, touch, olfaction). It is concluded that sensory systems do operate broadly according to the principles of the framework, but with some exceptions.
Affiliation(s)
- Andrew J Kolarik
- School of Psychology, University of East Anglia, Norwich, United Kingdom; Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, United Kingdom; Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom.
- Brian C J Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, United Kingdom; Vision and Eye Research Institute, School of Medicine, Anglia Ruskin University, Cambridge, United Kingdom.
3. Korczyk M, Rączy K, Szwed M. Mirror-invariance is not exclusively visual but extends to touch. Sci Rep 2024; 14:31094. PMID: 39730799. PMCID: PMC11680880. DOI: 10.1038/s41598-024-82350-6.
Abstract
Mirror-invariance enables recognition of mirrored objects as identical. During reading acquisition, sighted readers must overcome this innate bias to distinguish between mirror-inverted letters ('d' vs. 'b'). Congenitally blind individuals seem to overcome mirror-invariance for Braille letters, too. Here, we investigated mirror-invariance across modalities and how it is modulated by object familiarity. Congenitally blind and sighted subjects performed same-different judgment tasks in the tactile (blind and blindfolded sighted subjects) and visual (sighted subjects) modalities. Stimuli included pairs of letters (Braille and Latin) and familiar non-linguistic stimuli (geometric figures and everyday objects), presented in identical ('p'/'p'), mirror ('p'/'q'), and different ('p'/'z') conditions. In the tactile modality, no group differences were found in shape judgments for non-linguistic stimuli. In the orientation-based task, the sighted group showed higher expertise for haptic than for visual geometric figures. Sighted participants had difficulty judging the shape of Latin letters as identical to their mirror-oriented counterparts (a signature of breaking mirror-invariance), whereas blind participants had no difficulty with mirror shape judgments for Braille or non-linguistic stimuli. These results suggest that mirror-invariance is modality-independent.
Affiliation(s)
- Maksymilian Korczyk
- Department of Psychology, Jagiellonian University, ul. Ingardena 6, 30-060, Kraków, Poland.
- Katarzyna Rączy
- Institute of Psychology, University of Hamburg, 20146, Hamburg, Germany
- Marcin Szwed
- Department of Psychology, Jagiellonian University, ul. Ingardena 6, 30-060, Kraków, Poland.
4. Shayman CS, McCracken MK, Finney HC, Fino PC, Stefanucci JK, Creem-Regehr SH. Integration of auditory and visual cues in spatial navigation under normal and impaired viewing conditions. J Vis 2024; 24:7. PMID: 39382867. PMCID: PMC11469273. DOI: 10.1167/jov.24.11.7.
Abstract
Auditory landmarks can contribute to spatial updating during navigation with vision. Although large inter-individual differences have been identified in how navigators combine auditory and visual landmarks, it is still unclear under what circumstances audition is used. Further, whether individuals optimally combine auditory cues with visual cues to decrease perceptual uncertainty, or variability, has not been well documented. Here, we test audiovisual integration during spatial updating in a virtual navigation task. In Experiment 1, 24 individuals with normal sensory acuity completed a triangular homing task with either visual landmarks, auditory landmarks, or both. In addition, participants experienced a fourth condition with a covert spatial conflict in which auditory landmarks were rotated relative to visual landmarks. Participants generally relied more on visual landmarks than auditory landmarks and were no more accurate with multisensory cues than with vision alone. In Experiment 2, a new group of 24 individuals completed the same task, but with simulated low vision in the form of a blur filter to increase visual uncertainty. Again, participants relied more on visual landmarks than auditory ones, and no multisensory benefit occurred. Participants navigating with blur did not rely more on their hearing than the group that navigated with normal vision. These results support previous research showing that one sensory modality at a time may be sufficient for spatial updating, even under impaired viewing conditions. Future research could investigate task- and participant-specific factors that lead to different strategies of multisensory cue combination with auditory and visual cues.
Affiliation(s)
- Corey S Shayman
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- Interdisciplinary Program in Neuroscience, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0002-5487-0007
- Maggie K McCracken
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0009-0006-5280-0546
- Hunter C Finney
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0009-0008-2324-5007
- Peter C Fino
- Department of Health and Kinesiology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0002-8621-3706
- Jeanine K Stefanucci
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0003-4238-2951
- Sarah H Creem-Regehr
- Department of Psychology, University of Utah, Salt Lake City, Utah, USA
- https://orcid.org/0000-0001-7740-1118
5. Ferrari C, Arioli M, Atias D, Merabet LB, Cattaneo Z. Perception and discrimination of real-life emotional vocalizations in early blind individuals. Front Psychol 2024; 15:1386676. PMID: 38784630. PMCID: PMC11112099. DOI: 10.3389/fpsyg.2024.1386676.
Abstract
Introduction: The capacity to understand others' emotions and react accordingly is a key social ability. However, it may be compromised in the case of a profound sensory loss that limits the contribution of available contextual cues (e.g., facial expression, gestures, body posture) to interpreting emotions expressed by others. In this study, we specifically investigated whether early blindness affects the capacity to interpret emotional vocalizations, whose valence may be difficult to recognize without a meaningful context. Methods: We asked a group of early blind individuals (N = 22) and sighted controls (N = 22) to evaluate the valence and intensity of spontaneous fearful and joyful non-verbal vocalizations. Results: Our data showed that emotional vocalizations presented alone (i.e., with no contextual information) are similarly ambiguous for blind and sighted individuals but are perceived as more intense by the former, possibly reflecting their higher saliency when visual experience is unavailable. Discussion: Our study contributes to a better understanding of how sensory experience shapes emotion recognition.
Affiliation(s)
- Chiara Ferrari
- Department of Humanities, University of Pavia, Pavia, Italy
- IRCCS Mondino Foundation, Pavia, Italy
- Maria Arioli
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
- Doron Atias
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Lotfi B. Merabet
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, United States
- Zaira Cattaneo
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
6. Maw KJ, Beattie G, Burns EJ. Cognitive strengths in neurodevelopmental disorders, conditions and differences: A critical review. Neuropsychologia 2024; 197:108850. PMID: 38467371. DOI: 10.1016/j.neuropsychologia.2024.108850.
Abstract
Neurodevelopmental disorders are traditionally characterised by a range of associated cognitive impairments in, for example, sensory processing, facial recognition, visual imagery, attention, and coordination. In this critical review, we propose a major reframing, highlighting the variety of unique cognitive strengths that people with neurodevelopmental differences can exhibit. These include enhanced visual perception; strong spatial, auditory, and semantic memory; superior empathy and theory of mind; and higher levels of divergent thinking. Whilst we acknowledge the heterogeneity of cognitive profiles in neurodevelopmental conditions, we present a more encouraging and affirmative perspective of these groups, contrasting with the predominant, deficit-based position prevalent throughout both cognitive and neuropsychological research. In addition, we provide a theoretical basis and rationale for these cognitive strengths, arguing for the critical roles of heritability, behavioural adaptation, and neuronal recycling, and drawing on psychopharmacological and social explanations. We present a table of potential strengths across conditions and invite researchers to systematically investigate these in their future work. This should help reduce the stigma around neurodiversity, instead promoting greater social inclusion and significant societal benefits.
7. Lee S, Song Y, Hong H, Joo Y, Ha E, Shim Y, Hong SN, Kim J, Lyoo IK, Yoon S, Kim DW. Changes in Structural Covariance among Olfactory-related Brain Regions in Anosmia Patients. Exp Neurobiol 2024; 33:99-106. PMID: 38724479. PMCID: PMC11089402. DOI: 10.5607/en24007.
Abstract
Anosmia, characterized by the loss of smell, is associated not only with dysfunction in the peripheral olfactory system but also with changes in several brain regions involved in olfactory processing. Specifically, the orbitofrontal cortex is recognized for its pivotal role in integrating olfactory information, engaging in bidirectional communication with the primary olfactory regions, including the olfactory cortex, amygdala, and entorhinal cortex. However, little is known about alterations in structural connections among these brain regions in patients with anosmia. In this study, high-resolution T1-weighted images were obtained from participants. Utilizing the volumes of key brain regions implicated in olfactory function, we employed a structural covariance approach to investigate brain reorganization patterns in patients with anosmia (n=22) compared to healthy individuals (n=30). Our structural covariance analysis demonstrated diminished connectivity between the amygdala and entorhinal cortex, components of the primary olfactory network, in patients with anosmia compared to healthy individuals (z=-2.22, FDR-corrected p=0.039). Conversely, connectivity between the orbitofrontal cortex-a major region in the extended olfactory network-and amygdala was found to be enhanced in the anosmia group compared to healthy individuals (z=2.32, FDR-corrected p=0.039). However, the structural connections between the orbitofrontal cortex and entorhinal cortex did not differ significantly between the groups (z=0.04, FDR-corrected p=0.968). These findings suggest a potential structural reorganization, particularly of higher-order cortical regions, possibly as a compensatory effort to interpret the limited olfactory information available in individuals with olfactory loss.
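The group contrast reported above (a z-value per region pair, comparing structural covariance between patients and controls) can be sketched with the standard approach: correlate two regional volumes across subjects within each group, Fisher-transform the correlations, and compare them with a two-sample z-test. The data below are synthetic and the function is a generic sketch of this technique, not the authors' pipeline.

```python
import numpy as np
from scipy import stats

def compare_covariance(x1, y1, x2, y2):
    """Compare the volume-volume correlation of two brain regions
    between two groups via Fisher r-to-z transformed correlations."""
    r1 = np.corrcoef(x1, y1)[0, 1]          # group 1 covariance (as correlation)
    r2 = np.corrcoef(x2, y2)[0, 1]          # group 2 covariance
    z1, z2 = np.arctanh(r1), np.arctanh(r2)  # Fisher r-to-z transform
    n1, n2 = len(x1), len(x2)
    z = (z1 - z2) / np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    p = 2 * stats.norm.sf(abs(z))            # two-tailed p-value
    return z, p

# Synthetic volumes: patients (n=22) with weak regional coupling,
# controls (n=30) with strong coupling (illustrative numbers only).
rng = np.random.default_rng(0)
x1 = rng.normal(size=22); y1 = 0.2 * x1 + rng.normal(size=22)
x2 = rng.normal(size=30); y2 = 1.5 * x2 + 0.3 * rng.normal(size=30)
z, p = compare_covariance(x1, y1, x2, y2)   # negative z: patients weaker
```

In a full analysis, one such z-value would be computed per region pair and the resulting p-values corrected for multiple comparisons (e.g., FDR), as in the study.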
Affiliation(s)
- Suji Lee
- College of Pharmacy, Dongduk Women's University, Seoul 02748, Korea
- Yumi Song
- Ewha Brain Institute, Ewha Womans University, Seoul 03760, Korea
- Department of Brain and Cognitive Sciences, Ewha Womans University, Seoul 03760, Korea
- Haejin Hong
- Ewha Brain Institute, Ewha Womans University, Seoul 03760, Korea
- Yoonji Joo
- Ewha Brain Institute, Ewha Womans University, Seoul 03760, Korea
- Eunji Ha
- Ewha Brain Institute, Ewha Womans University, Seoul 03760, Korea
- Youngeun Shim
- Ewha Brain Institute, Ewha Womans University, Seoul 03760, Korea
- Department of Brain and Cognitive Sciences, Ewha Womans University, Seoul 03760, Korea
- Seung-No Hong
- Department of Otorhinolaryngology-Head & Neck Surgery, Boramae Medical Center, Seoul National University College of Medicine, Seoul 07061, Korea
- Jungyoon Kim
- Ewha Brain Institute, Ewha Womans University, Seoul 03760, Korea
- Department of Brain and Cognitive Sciences, Ewha Womans University, Seoul 03760, Korea
- In Kyoon Lyoo
- Ewha Brain Institute, Ewha Womans University, Seoul 03760, Korea
- Department of Brain and Cognitive Sciences, Ewha Womans University, Seoul 03760, Korea
- Graduate School of Pharmaceutical Sciences, Ewha Womans University, Seoul 03760, Korea
- Sujung Yoon
- Ewha Brain Institute, Ewha Womans University, Seoul 03760, Korea
- Department of Brain and Cognitive Sciences, Ewha Womans University, Seoul 03760, Korea
- Dae Woo Kim
- Department of Otorhinolaryngology-Head & Neck Surgery, Boramae Medical Center, Seoul National University College of Medicine, Seoul 07061, Korea
8. Park WJ, Fine I. The perception of auditory motion in sighted and early blind individuals. Proc Natl Acad Sci U S A 2023; 120:e2310156120. PMID: 38015842. PMCID: PMC10710053. DOI: 10.1073/pnas.2310156120.
Abstract
Motion perception is a fundamental sensory task that plays a critical evolutionary role. In vision, motion processing is classically described using a motion energy model with spatiotemporally nonseparable filters suited for capturing the smooth continuous changes in spatial position over time afforded by moving objects. However, it is still not clear whether the filters underlying auditory motion discrimination are also continuous motion detectors or infer motion from comparing discrete sound locations over time (spatiotemporally separable). We used a psychophysical reverse correlation paradigm, where participants discriminated the direction of a motion signal in the presence of spatiotemporal noise, to determine whether the filters underlying auditory motion discrimination were spatiotemporally separable or nonseparable. We then examined whether these auditory motion filters were altered as a result of early blindness. We found that both sighted and early blind individuals have separable filters. However, early blind individuals show increased sensitivity to auditory motion, with reduced susceptibility to noise and filters that were more accurate in detecting motion onsets/offsets. Model simulations suggest that this reliance on separable filters is optimal given the limited spatial resolution of auditory input.
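The logic of the reverse-correlation method and the separable/nonseparable distinction can be illustrated in a few lines: a space-time filter is separable exactly when it is an outer product of a spatial and a temporal profile (rank 1), so the separability of a classification image recovered by reverse correlation can be indexed by how much of its variance the first SVD component captures. The simulated observer, filter shapes, and trial count below are illustrative assumptions, not the study's stimuli or parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical space-time filter of a simulated observer (16 spatial
# positions x 12 time bins). An outer product f(x)g(t) is separable
# by construction, i.e., rank 1.
space = np.exp(-0.5 * np.linspace(-3, 3, 16) ** 2)
time = np.sin(np.linspace(0, np.pi, 12))
true_filter = np.outer(space, time)

# Reverse correlation: present spatiotemporal noise, record binary
# responses, and estimate the classification image as
# mean(noise | "yes") - mean(noise | "no").
n_trials = 20000
noise = rng.normal(size=(n_trials, 16, 12))
drive = np.tensordot(noise, true_filter, axes=([1, 2], [0, 1]))
resp = drive > 0
ci = noise[resp].mean(axis=0) - noise[~resp].mean(axis=0)

# Separability index: variance captured by the first SVD component
# (approaches 1 for a separable filter, lower for nonseparable ones).
s = np.linalg.svd(ci, compute_uv=False)
sep_index = s[0] ** 2 / np.sum(s ** 2)
```

For a nonseparable (e.g., direction-selective motion-energy) filter, the same index would fall well below 1, which is the kind of diagnostic that distinguishes the two filter families.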
Affiliation(s)
- Woon Ju Park
- Department of Psychology, University of Washington, Seattle, WA 98195
- Ione Fine
- Department of Psychology, University of Washington, Seattle, WA 98195
9. Bertonati G, Amadeo MB, Campus C, Gori M. Task-dependent spatial processing in the visual cortex. Hum Brain Mapp 2023; 44:5972-5981. PMID: 37811869. PMCID: PMC10619374. DOI: 10.1002/hbm.26489.
Abstract
To solve spatial tasks, the human brain recruits the visual cortices for support. Nonetheless, the representation of spatial information is not fixed but depends on the reference frames in which spatial inputs are encoded. The present study investigates how the kind of spatial representation involved influences the recruitment of visual areas during multisensory spatial tasks. We tested participants in an electroencephalography experiment involving two audio-visual (AV) spatial tasks: spatial bisection, in which participants estimated the relative position in space of an AV stimulus with respect to two other stimuli, and spatial localization, in which participants localized one AV stimulus relative to themselves. Results revealed that the spatial tasks specifically modulated occipital event-related potentials (ERPs) after stimulus onset. We observed a greater contralateral early occipital component (50-90 ms) when participants solved the spatial bisection, and a more robust later occipital response (110-160 ms) when they processed the spatial localization. This observation suggests that the different spatial representations elicited by multisensory stimuli are sustained by separate neurophysiological mechanisms.
Affiliation(s)
- G. Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- M. B. Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- C. Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- M. Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
10. Legrain V, Filbrich L, Vanderclausen C. Letter on the pain of blind people for the use of those who can see their pain. Pain 2023; 164:1451-1456. PMID: 36728808. DOI: 10.1097/j.pain.0000000000002862.
Affiliation(s)
- Valéry Legrain
- Institute of Neuroscience, Université catholique de Louvain, Brussels, Belgium
- Psychological Sciences Research Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Louvain Bionics, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Lieve Filbrich
- Institute of Neuroscience, Université catholique de Louvain, Brussels, Belgium
- Psychological Sciences Research Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Camille Vanderclausen
- Institute of Neuroscience, Université catholique de Louvain, Brussels, Belgium
- Neuropsychological Rehabilitation Unit, Saint-Luc University Hospital, Brussels, Belgium
11. Fine I, Park WJ. Do you hear what I see? How do early blind individuals experience object motion? Philos Trans R Soc Lond B Biol Sci 2023; 378:20210460. PMID: 36511418. PMCID: PMC9745882. DOI: 10.1098/rstb.2021.0460.
Abstract
One of the most important tasks for 3D vision is tracking the movement of objects in space. The ability of early blind individuals to understand motion in the environment from noisy and unreliable auditory information is an impressive example of cortical adaptation that is only just beginning to be understood. Here, we compare visual and auditory motion processing, and discuss the effect of early blindness on the perception of auditory motion. Blindness leads to cross-modal recruitment of the visual motion area hMT+ for auditory motion processing. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion. We discuss how this dramatic shift in the cortical basis of motion processing might influence the perceptual experience of motion in early blind individuals. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Ione Fine
- Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
- Woon Ju Park
- Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
12. Maimon A, Wald IY, Ben Oz M, Codron S, Netzer O, Heimler B, Amedi A. The Topo-Speech sensory substitution system as a method of conveying spatial information to the blind and vision impaired. Front Hum Neurosci 2023; 16:1058093. PMID: 36776219. PMCID: PMC9909096. DOI: 10.3389/fnhum.2022.1058093.
Abstract
Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in the blind and visually impaired. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations in a manner that corresponds with the unique way our brains acquire and process information. This is done by conveying spatial information, customarily acquired through vision, through the auditory channel, in a combination of sensory (auditory) features and symbolic language (named/spoken) features. The Topo-Speech system sweeps the visual scene or image and conveys each object's identity by naming it in a spoken word while simultaneously conveying its location: the x-axis of the visual scene is mapped to the time at which the word is announced, and the y-axis to the pitch of the voice. This proof-of-concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind showed an accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels.
To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, or convergent models) and the capacity for spatial representation in the blind. We suggest the present study's findings support the convergence model and the scenario that posits the blind are capable of some aspects of spatial representation as depicted by the algorithm comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.
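The core mapping described in the abstract (object identity → spoken word, x-position → announcement time within a sweep, y-position → voice pitch) can be sketched as a scheduling function. The scene format, sweep duration, and pitch range below are illustrative assumptions, not the published system's parameters.

```python
def topo_speech_schedule(objects, sweep_s=2.0, f_low=120.0, f_high=360.0):
    """Map objects [(name, x, y)] with x, y in [0, 1] to
    (onset_seconds, pitch_hz, word) triples for one left-to-right sweep.

    x -> when the word is announced within the sweep;
    y -> voice pitch (higher object -> higher pitch);
    identity -> the spoken word itself.
    """
    events = []
    for name, x, y in objects:
        onset = x * sweep_s                   # left-to-right timing
        pitch = f_low + y * (f_high - f_low)  # vertical position as pitch
        events.append((onset, pitch, name))
    return sorted(events)                     # announce in sweep order

# Hypothetical scene: a cup low on the right, a lamp high on the left.
scene = [("cup", 0.8, 0.2), ("lamp", 0.1, 0.9)]
for onset, pitch, word in topo_speech_schedule(scene):
    print(f"{onset:.2f}s  {pitch:.0f} Hz  {word}")
```

In the actual system, each scheduled word would be rendered by a speech synthesizer pitch-shifted to the computed frequency; the sketch only shows the spatial-to-auditory mapping itself.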
Affiliation(s)
- Amber Maimon
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Iddo Yehoshua Wald
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Meshi Ben Oz
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Sophie Codron
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Ophir Netzer
- Gonda Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
13. Mamus E, Speed LJ, Rissman L, Majid A, Özyürek A. Lack of Visual Experience Affects Multimodal Language Production: Evidence From Congenitally Blind and Sighted People. Cogn Sci 2023; 47:e13228. PMID: 36607157. PMCID: PMC10078191. DOI: 10.1111/cogs.13228.
Abstract
The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events, experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparing blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime of blindness from the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion, using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in multimodal language, and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for claims that language processes are deeply rooted in our sensory experiences.
Affiliation(s)
- Ezgi Mamus
- Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics
- Lilia Rissman
- Department of Psychology, University of Wisconsin–Madison
- Asifa Majid
- Department of Experimental Psychology, University of Oxford
- Aslı Özyürek
- Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics; Donders Center for Cognition, Radboud University
14
Islam MS, Lee SW, Harden SM, Lim S. Effects of vibrotactile feedback on yoga practice. Front Sports Act Living 2022; 4:1005003. [PMID: 36385776] [PMCID: PMC9659721] [DOI: 10.3389/fspor.2022.1005003]
Abstract
Participating in physical exercise through remote platforms is challenging for people with vision impairment because visual feedback is unavailable to them. Thus, there is a need to provide nonvisual feedback to this population to improve the performance and safety of remote exercise. In this study, the effects of different nonvisual types of feedback (verbal, vibrotactile, and combined verbal and vibrotactile) for movement correction were tested with 22 participants with normal vision to investigate the feasibility of the feedback system, and pilot tested with four participants with impaired vision. The study with normal-vision participants found that nonvisual feedback successfully corrected an additional 11.2% of movements compared to the no-feedback condition. Vibrotactile feedback was the most time-efficient of the feedback types in correcting poses. Participants with normal vision rated multimodal feedback as the most strongly preferred modality. In the pilot test, participants with impaired vision showed a similar trend. Overall, the study found that providing vibrotactile (or multimodal) feedback during physical exercise is an effective way of improving exercise performance. Implications for the future development of training platforms with vibrotactile or multimodal feedback for people with impaired vision are discussed.
Affiliation(s)
- Md Shafiqul Islam
- Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, United States
- Sang Won Lee
- Department of Computer Science, Virginia Tech, Blacksburg, VA, United States
- Samantha M. Harden
- Department of Human Nutrition, Foods, and Exercise, Virginia Tech, Blacksburg, VA, United States
- Sol Lim
- Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, United States
15
Gori M, Bertonati G, Campus C, Amadeo MB. Multisensory representations of space and time in sensory cortices. Hum Brain Mapp 2022; 44:656-667. [PMID: 36169038] [PMCID: PMC9842891] [DOI: 10.1002/hbm.26090]
Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event-related potential (ERP) responses of 16 participants performing multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. As predicted, the second audiovisual stimulus of both tasks elicited an early ERP response (time window 50–90 ms) in visual and auditory regions. However, this early ERP component was larger in occipital areas during the spatial bisection task and in temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that it also selectively modulates cortical activity in response to multisensory stimuli.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgia Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- Claudio Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
16
Chouinard-Leclaire C, Manescu S, Collignon O, Lepore F, Frasnelli J. Altered morphological traits along central olfactory centers in congenitally blind subjects. Eur J Neurosci 2022; 56:4486-4500. [DOI: 10.1111/ejn.15758]
Affiliation(s)
- Simona Manescu
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Université de Montréal, Montréal, Québec, Canada
- Olivier Collignon
- Institutes for Research in Psychology (IPSY) and Neurosciences (IoNS), University of Louvain, Belgium
- Franco Lepore
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Université de Montréal, Montréal, Québec, Canada
- Johannes Frasnelli
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Université de Montréal, Montréal, Québec, Canada
- Department of Anatomy, Université du Québec à Trois-Rivières, Canada
- Centre d'études avancées en médecine du sommeil (CÉAMS), Centre de Recherche de l'Hôpital du Sacré-Cœur de Montréal, Centre intégré universitaire de santé et de services sociaux du Nord-de-l'Île-de-Montréal (CIUSSS du Nord-de-l'Île-de-Montréal), Canada
17
Thaler L, Norman LJ, De Vos HPJC, Kish D, Antoniou M, Baker CJ, Hornikx MCJ. Human Echolocators Have Better Localization Off Axis. Psychol Sci 2022; 33:1143-1153. [PMID: 35699555] [DOI: 10.1177/09567976211068070]
Abstract
Here, we report novel empirical results from a psychophysical experiment in which we tested the echolocation abilities of nine blind adult human experts in click-based echolocation. We found that they had better acuity in localizing a target and used lower intensity emissions (i.e., mouth clicks) when a target was placed 45° off to the side compared with when it was placed at 0° (straight ahead). We provide a possible explanation of the behavioral result in terms of binaural-intensity signals, which appear to change more rapidly around 45°. The finding that echolocators have better echo-localization off axis is surprising, because for human source localization (i.e., regular spatial hearing), it is well known that performance is best when targets are straight ahead (0°) and decreases as targets move farther to the side. This may suggest that human echolocation and source hearing rely on different acoustic cues and that human spatial hearing has more facets than previously thought.
Affiliation(s)
- L J Norman
- Department of Psychology, Durham University
- H P J C De Vos
- Department of the Built Environment, Eindhoven University of Technology
- D Kish
- World Access for the Blind, Placentia, California
- M Antoniou
- Department of Electronic, Electrical and Systems Engineering, University of Birmingham
- C J Baker
- Department of Electronic, Electrical and Systems Engineering, University of Birmingham
- M C J Hornikx
- Department of the Built Environment, Eindhoven University of Technology
18
Bae EB, Jang H, Shim HJ. Enhanced Dichotic Listening and Temporal Sequencing Ability in Early-Blind Individuals. Front Psychol 2022; 13:840541. [PMID: 35619788] [PMCID: PMC9127502] [DOI: 10.3389/fpsyg.2022.840541]
Abstract
Several studies have reported better auditory performance in early-blind subjects than in sighted subjects. However, few studies have compared the auditory functions of both hemispheres or evaluated interhemispheric transfer and binaural integration in blind individuals. Therefore, we evaluated whether there are differences in dichotic listening, auditory temporal sequencing ability, or speech perception in noise (all of which have been used to diagnose central auditory processing disorder) between early-blind subjects and sighted subjects. The study included 23 early-blind subjects and 22 age-matched sighted subjects. In the dichotic listening test (three-digit pair), the early-blind subjects achieved higher scores than the sighted subjects in the left ear (p = 0.003, Bonferroni-corrected α = 0.05/6 = 0.008), but not in the right ear, indicating a right-ear advantage in sighted subjects (p < 0.001) but not in early-blind subjects. In the frequency patterning test (five tones), the early-blind subjects performed better than the sighted subjects (both ears in the humming response, but the left ear only in the labeling response; p < 0.008). Monosyllable perception in noise tended to be better in early-blind subjects than in sighted subjects at a signal-to-noise ratio of −8 dB (p = 0.054), whereas results at signal-to-noise ratios of −4, 0, +4, and +8 dB did not differ. Acoustic change complex responses to /ba/ in babble noise, recorded with electroencephalography, showed a greater N1 peak amplitude at the FC5 electrode only, under signal-to-noise ratios of −8 and −4 dB, in the early-blind subjects than in the sighted subjects (p = 0.004 and p = 0.003, respectively; Bonferroni-corrected α = 0.05/5 = 0.01). The results of this study revealed that early-blind subjects exhibited some advantages in dichotic listening and temporal sequencing ability compared with sighted subjects. These advantages may be attributable to enhanced activity of the central auditory nervous system, especially right-hemisphere function, and to the transfer of auditory information between the two hemispheres.
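The corrected significance threshold quoted in this abstract (α = 0.05/6 = 0.008) is simple arithmetic. As an illustrative sketch only (not the authors' analysis code; the p-values are copied from the reported results and the variable names are invented), the Bonferroni check works like this:

```python
# Illustrative sketch of the Bonferroni correction described in the abstract.
# The p-values are copied from the reported results; names are invented.

def bonferroni_alpha(family_alpha: float, n_comparisons: int) -> float:
    """Per-comparison threshold: family-wise alpha / number of comparisons."""
    return family_alpha / n_comparisons

# Six ear-by-test comparisons -> 0.05 / 6 ~= 0.008, as reported.
alpha = bonferroni_alpha(0.05, 6)

reported = {
    "dichotic_left_ear": 0.003,     # early-blind > sighted, left ear
    "speech_in_noise_-8dB": 0.054,  # trend only
}
significant = {name: p < alpha for name, p in reported.items()}
print(round(alpha, 4), significant)
```

Only the left-ear dichotic result survives the corrected threshold; the speech-in-noise trend does not, matching the abstract's reading.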
Affiliation(s)
- Eun Bit Bae
- Department of Otorhinolaryngology-Head and Neck Surgery, Nowon Eulji Medical Center, Eulji University, Seoul, South Korea
- Hyunsook Jang
- Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, Hallym University, Chuncheon, South Korea
- Hyun Joon Shim
- Department of Otorhinolaryngology-Head and Neck Surgery, Nowon Eulji Medical Center, Eulji University, Seoul, South Korea
19
Delayed Auditory Brainstem Responses (ABR) in children after sight-recovery. Neuropsychologia 2021; 163:108089. [PMID: 34801518] [DOI: 10.1016/j.neuropsychologia.2021.108089]
Abstract
Studies in non-human animal models have revealed that in early development, the onset of visual input gates the closure of the critical period for some auditory functions. The study of rare individuals whose sight was restored after a period of congenital blindness offers a unique opportunity to assess whether early visual input is also a prerequisite for the full development of auditory functions in humans. Here, we investigated whether a few months of delayed visual onset would affect the development of Auditory Brainstem Responses (ABRs). ABRs are widely used in clinical practice to assess both the functionality and the development of the subcortical auditory pathway, and they provide reliable data at the individual level. We collected ABRs from two case studies: young children (both under 5 years of age) who experienced transient visual deprivation from birth due to congenital bilateral dense cataracts (BC) and who acquired sight at about two months of age. As controls, we tested 41 children with typical development (sighted controls, SC), as well as two children who were treated (at about two months of age) for congenital monocular cataracts (MC). The SC group data served to predict, at the individual level, the wave latencies of each BC and MC participant. Statistics were performed at both the single-subject and group levels on the latencies of the main ABR waves (I, III, V, and SN10). Results revealed delayed response latencies for both BC children compared with the SC group, starting from wave III. Conversely, no difference emerged between the MC children and the SC group. These findings suggest that when the onset of patterned visual input is delayed, the functional development of the subcortical auditory pathway lags behind typical developmental trajectories. Ultimately, these results support the presence of a crossmodal sensitive period in the human subcortical auditory system.
20
Performing Simulated Basic Life Support without Seeing: Blind vs. Blindfolded People. Int J Environ Res Public Health 2021; 18:ijerph182010724. [PMID: 34682471] [PMCID: PMC8536197] [DOI: 10.3390/ijerph182010724]
Abstract
Previous pilot experience has shown the ability of visually impaired and blind people (BP) to learn basic life support (BLS), but no studies have compared their abilities with those of blindfolded people (BFP) after participating in the same instructor-led, real-time feedback training. Twenty-nine BP and 30 BFP participated in this quasi-experimental trial. Training consisted of a 1 h theoretical and practical session, with an additional 30 min afterwards, led by nurses with prior experience in BLS training of various groups. Quantitative quality of chest compressions (CC), AED use, and the BLS sequence were evaluated by means of a simulation scenario. BP's median time to start CC was less than 35 s. Global and specific components of CC quality were similar between groups, except for compression rate (BFP: 123.4 ± 15.2 vs. BP: 110.8 ± 15.3 CC/min; p = 0.002). Mean compression depth was below the recommended target in both groups, and optimal CC depth was achieved by 27.6% of blind and 23.3% of blindfolded participants (p = 0.288). Time to discharge was significantly longer in BFP than in BP (86.0 ± 24.9 vs. 66.0 ± 27.0 s; p = 0.004). Thus, after an adapted and short training program, blind people showed abilities comparable to those of blindfolded people in learning and performing the BLS sequence and CC.
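The compression-rate comparison can be sanity-checked from the summary statistics alone. A minimal sketch (not the authors' code; it assumes an independent-samples Welch's t computed from the reported means, SDs, and group sizes):

```python
import math

# Welch's t statistic recomputed from the summary statistics reported in
# the abstract (compression rate, blindfolded vs. blind participants).
# A sanity-check sketch, not the study's actual analysis script.

def welch_t(m1: float, s1: float, n1: int,
            m2: float, s2: float, n2: int) -> float:
    """Welch's t for two independent samples, from means, SDs, and sizes."""
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# 30 blindfolded (123.4 ± 15.2 CC/min) vs. 29 blind (110.8 ± 15.3 CC/min).
t = welch_t(123.4, 15.2, 30, 110.8, 15.3, 29)
print(round(t, 2))  # ≈ 3.17, of a size consistent with the reported p = 0.002
```

A t of roughly 3.17 with about 57 degrees of freedom is in the range expected for the reported p = 0.002.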
21
Partial visual loss disrupts the relationship between judged room size and sound source distance. Exp Brain Res 2021; 240:81-96. [PMID: 34623459] [PMCID: PMC8803715] [DOI: 10.1007/s00221-021-06235-0]
Abstract
Visual spatial information plays an important role in calibrating auditory space. Blindness results in deficits in a number of auditory abilities, which have been explained in terms of the hypothesis that visual information is needed to calibrate audition. When judging the size of a novel room when only auditory cues are available, normally sighted participants may use the location of the farthest sound source to infer the nearest possible distance of the far wall. However, for people with partial visual loss (distinct from blindness in that some vision is present), such a strategy may not be reliable if vision is needed to calibrate auditory cues for distance. In the current study, participants were presented with sounds at different distances (ranging from 1.2 to 13.8 m) in a simulated reverberant (T60 = 700 ms) or anechoic room. Farthest distance judgments and room size judgments (volume and area) were obtained from blindfolded participants (18 normally sighted, 38 partially sighted) for speech, music, and noise stimuli. With sighted participants, the judged room volume and farthest sound source distance estimates were positively correlated (p < 0.05) for all conditions. Participants with visual losses showed no significant correlations for any of the conditions tested. A similar pattern of results was observed for the correlations between farthest distance and room floor area estimates. Results demonstrate that partial visual loss disrupts the relationship between judged room size and sound source distance that is shown by sighted participants.
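The core analysis pattern described above — correlating farthest-distance judgments with judged room size — can be sketched as follows. The data values are invented for illustration; only the shape of the analysis follows the abstract:

```python
import math

# Hypothetical sketch of the correlation analysis described above; the
# distance and volume values below are invented, not the study's data.

def pearson_r(x: list[float], y: list[float]) -> float:
    """Plain Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented judgments: farthest sound-source distance (m) vs. room volume (m^3).
farthest_distance = [2.0, 4.5, 6.0, 9.0, 13.0]
judged_volume = [40.0, 90.0, 110.0, 200.0, 260.0]

r = pearson_r(farthest_distance, judged_volume)
print(r > 0)  # sighted participants showed such a positive correlation
```

In the study, this positive association held for sighted participants across conditions but was absent in participants with partial visual loss.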
22
Thaler L, Norman LJ. No effect of 10-week training in click-based echolocation on auditory localization in people who are blind. Exp Brain Res 2021; 239:3625-3633. [PMID: 34609546] [PMCID: PMC8599323] [DOI: 10.1007/s00221-021-06230-5]
Abstract
What factors are important in the calibration of mental representations of auditory space? A substantial body of research investigating the audiospatial abilities of people who are blind has shown that visual experience might be an important factor for accurate performance in some audiospatial tasks. Yet it has also been shown that long-term experience using click-based echolocation might play a similar role, with blind expert echolocators demonstrating auditory localization abilities superior to those of blind people who do not use click-based echolocation (Vercillo et al., Neuropsychologia 67:35–40, 2015). Based on this hypothesis, we might predict that training in click-based echolocation leads to improved performance in auditory localization tasks in people who are blind. Here we investigated this hypothesis in a sample of 12 adults who have been blind from birth. We did not find evidence for an improvement in auditory localization after 10 weeks of training, despite significant improvement in echolocation ability. It is possible that longer-term experience with click-based echolocation is required for effects to develop, or that other factors explain the association between echolocation expertise and superior auditory localization. Considering the practical relevance of click-based echolocation for people who are visually impaired, future research should address these questions.
Affiliation(s)
- Lore Thaler
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
- Liam J Norman
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
23
Arioli M, Ricciardi E, Cattaneo Z. Social cognition in the blind brain: A coordinate-based meta-analysis. Hum Brain Mapp 2020; 42:1243-1256. [PMID: 33320395] [PMCID: PMC7927293] [DOI: 10.1002/hbm.25289]
Abstract
Social cognition skills are typically acquired on the basis of visual information (e.g., the observation of gaze, facial expressions, gestures). In light of this, a critical issue is whether and how the lack of visual experience affects the neurocognitive mechanisms underlying social skills. This issue has been largely neglected in the literature on blindness, even though difficulties in social interactions may be particularly salient in the lives of blind individuals (especially children). Here we provide a meta-analysis of neuroimaging studies reporting brain activations associated with the representation of the self and of others in early blind individuals and in sighted controls. Our results indicate that early blindness does not critically impact the development of the "social brain": social tasks performed on the basis of auditory or tactile information drove consistent activations in nodes of the action observation network, which is typically active during the actual observation of others in sighted individuals. Interestingly, though, activations along this network appeared more left-lateralized in blind than in sighted participants. These results may have important implications for the development of specific training programs to improve social skills in blind children and young adults.
Affiliation(s)
- Maria Arioli
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Zaira Cattaneo
- Department of Psychology, University of Milano-Bicocca, Milan, Italy; IRCCS Mondino Foundation, Pavia, Italy