1. Snir A, Cieśla K, Ozdemir G, Vekslar R, Amedi A. Localizing 3D motion through the fingertips: Following in the footsteps of elephants. iScience 2024; 27:109820. PMID: 38799571; PMCID: PMC11126990; DOI: 10.1016/j.isci.2024.109820.
Abstract
Each sense serves a different, specific function in spatial perception, and together they form a joint multisensory spatial representation. For instance, hearing enables localization in the entire 3D external space, while touch traditionally only allows localization of objects on the body (i.e., within the peripersonal space alone). We used an in-house touch-motion algorithm (TMA) to evaluate individuals' capability to understand externalized 3D information through touch, a skill not acquired during an individual's development or in evolution. Four experiments demonstrate quick learning and high accuracy in localization of motion using vibrotactile inputs on the fingertips, as well as successful audio-tactile integration in background noise. Subjective responses in some participants imply spatial experiences through visualization and perception of tactile "moving" sources beyond reach. We discuss our findings with respect to developing new skills in the adult brain, including combining a newly acquired "sense" with an existing one, and computation-based brain organization.
Affiliation(s)
- Adi Snir
  - The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Katarzyna Cieśla
  - The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
  - World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Gizem Ozdemir
  - The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Rotem Vekslar
  - The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Amir Amedi
  - The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
2. Kim H, Kim JS, Chung CK. Visual Mental Imagery and Neural Dynamics of Sensory Substitution in the Blindfolded Subject. Neuroimage 2024:120621. PMID: 38797383; DOI: 10.1016/j.neuroimage.2024.120621.
Abstract
Although one can recognize the environment through a soundscape that substitutes auditory signals for vision, whether subjects can perceive the soundscape as a visual or visual-like sensation has been questioned. In this study, we investigated the hierarchical process underlying the recruitment of visual areas by soundscape stimuli in blindfolded subjects. Twenty-two healthy subjects were repeatedly trained to recognize soundscape stimuli converted from the visual shape information of letters. An effective connectivity method, dynamic causal modeling (DCM), was employed to reveal how the brain is hierarchically organized to recognize soundscape stimuli. The visual mental imagery model explained the cortical source signals of five regions of interest better than the auditory bottom-up, cross-modal perception, and mixed models. Spectral couplings between brain areas in the visual mental imagery model were then analyzed. While within-frequency coupling was apparent in bottom-up processing, where sensory information is transmitted, cross-frequency coupling was prominent in top-down processing, corresponding to the expectation and interpretation of information. Sensory substitution in the brain of blindfolded subjects thus derived visual mental imagery by combining bottom-up and top-down processing.
Affiliation(s)
- HongJune Kim
  - Dept. of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea
  - Clinical Research Institute, Konkuk University Medical Center, Seoul, Republic of Korea
- June Sic Kim
  - Clinical Research Institute, Konkuk University Medical Center, Seoul, Republic of Korea
  - Research Institute of Biomedical Science & Technology, Konkuk University, Seoul, Republic of Korea
- Chun Kee Chung
  - Dept. of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea
  - Interdisciplinary Program in Neuroscience, Seoul National University, Seoul, Republic of Korea
  - Dept. of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea
  - Neuroscience Research Institute, Seoul National University Hospital, Seoul, Republic of Korea
3. Tian S, Chen L, Wang X, Li G, Fu Z, Ji Y, Lu J, Wang X, Shan S, Bi Y. Vision matters for shape representation: Evidence from sculpturing and drawing in the blind. Cortex 2024; 174:241-255. PMID: 38582629; DOI: 10.1016/j.cortex.2024.02.016.
Abstract
Shape is a property that can be perceived by both vision and touch and is classically considered supramodal. While there is mounting evidence for a shared cognitive and neural representation space between visual and tactile shape, previous research has tended to rely on dissimilarity structures between objects and has not examined the detailed properties of shape representation in the absence of vision. To address this gap, we conducted three explicit object-shape knowledge production experiments with congenitally blind and sighted participants, who were asked to produce verbal features, 3D clay models, and 2D drawings of familiar objects with varying levels of tactile exposure, including tools, large nonmanipulable objects, and animals. We found that the absence of visual experience (i.e., in the blind group) led to stronger group differences for animals than for tools and large objects, suggesting that direct tactile experience of objects is essential for shape representation when vision is unavailable. For tools, with which they have rich tactile/manipulation experience, the blind produced overall good shapes comparable to the sighted, yet also showed intriguing differences: the blind group had more variation and a systematic bias in the geometric properties of tools (making them stubbier than the sighted did), indicating that visual experience contributes to aligning internal representations and calibrating overall object configurations, at least for tools. Taken together, object shape representation reflects an intricate orchestration of vision, touch, and language.
Affiliation(s)
- Shuang Tian
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Lingjuan Chen
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaoying Wang
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Guochao Li
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ze Fu
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yufeng Ji
  - Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  - University of Chinese Academy of Sciences, Beijing, China
- Jiahui Lu
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaosha Wang
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Shiguang Shan
  - Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  - University of Chinese Academy of Sciences, Beijing, China
- Yanchao Bi
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
  - Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China
  - Chinese Institute for Brain Research, Beijing, China
4. Yang YC, Wei XY, Zhang YY, Xu CY, Cheng JM, Gong ZG, Chen H, Huang YW, Yuan J, Xu HH, Wang H, Zhan SH, Tan WL. Modulation of temporal and occipital cortex by acupuncture in non-menstrual MWoA patients: a rest BOLD fMRI study. BMC Complement Med Ther 2024; 24:43. PMID: 38245739; PMCID: PMC10799457; DOI: 10.1186/s12906-024-04349-w.
Abstract
OBJECTIVE To investigate changes in amplitude of low-frequency fluctuation (ALFF) and degree centrality (DC) values before and after acupuncture in young women with non-menstrual migraine without aura (MWoA), using resting-state blood-oxygen-level-dependent functional magnetic resonance imaging (BOLD fMRI). METHODS Patients with non-menstrual MWoA (Group 1, n = 50) and healthy controls (Group 2, n = 50) were recruited. fMRI was performed in Group 1 at two time points, before acupuncture (time point 1, TP1) and after the end of all acupuncture sessions (time point 2, TP2), and in Group 2 as a one-time scan. Patients in Group 1 were assessed with the Migraine Disability Assessment Questionnaire (MIDAS) and the Short-Form McGill Pain Questionnaire (SF-MPQ) at TP1 and TP2 after fMRI was performed. ALFF and DC values were compared within Group 1 across the two time points and between Group 1 and Group 2, and the correlations between the ALFF and DC values showing statistical differences and the clinical scale scores were analyzed. RESULTS Compared with Group 2 at TP1, Group 1 showed increased ALFF in the left fusiform gyrus, right angular gyrus, left middle occipital gyrus, and bilateral prefrontal cortex, and decreased ALFF in the left inferior parietal lobule. In DC values, Group 1 showed increases in the bilateral fusiform gyrus, bilateral inferior temporal gyrus, and right middle temporal gyrus, and decreases in the right angular gyrus, right supramarginal gyrus, right inferior parietal lobule, right middle occipital gyrus, right superior frontal gyrus, right middle frontal gyrus, right precentral gyrus, and right supplementary motor area, compared with Group 2 at TP1. ALFF and DC values of the right inferior temporal gyrus, right fusiform gyrus, and right middle temporal gyrus were lower in Group 1 at TP1 than at TP2. ALFF values in the left middle occipital area were positively correlated with pain degree at TP1 in Group 1 (r = 0.827, r = 0.343; P < 0.01, P = 0.015). DC values of the right inferior temporal area were positively correlated with pain degree at TP1 in Group 1 (r = 0.371; P = 0.008). CONCLUSION Spontaneous brain activity and network changes in young women with non-menstrual MWoA were altered by acupuncture. The right temporal area may be an important target for acupuncture-modulated brain function in young women with non-menstrual MWoA.
Affiliation(s)
- Yu-Chan Yang
  - Department of Radiology, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Xiang-Yu Wei
  - Institute of Acupuncture and Anesthesia, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Ying-Ying Zhang
  - Department of Radiology, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Chun-Yang Xu
  - Department of Radiology, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Jian-Ming Cheng
  - Department of Radiology, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Zhi-Gang Gong
  - Department of Radiology, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Hui Chen
  - Department of Radiology, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Yan-Wen Huang
  - Department of Radiology, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Jie Yuan
  - Department of Radiology, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Hui-Hui Xu
  - Department of Radiology, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Hui Wang
  - Department of Radiology, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Song-Hua Zhan
  - Department of Radiology, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Wen-Li Tan
  - Department of Radiology, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
5. Tivadar RI, Franceschiello B, Minier A, Murray MM. Learning and navigating digitally rendered haptic spatial layouts. NPJ Sci Learn 2023; 8:61. PMID: 38102127; PMCID: PMC10724186; DOI: 10.1038/s41539-023-00208-4.
Abstract
Learning spatial layouts and navigating through them rely not simply on sight but rather on multisensory processes, including touch. Digital haptics based on ultrasounds are effective for creating and manipulating mental images of individual objects in sighted and visually impaired participants. Here, we tested whether this extends to scenes and to navigation within them. Using only tactile stimuli conveyed via ultrasonic feedback on a digital touchscreen (i.e., a digital interactive map), 25 sighted, blindfolded participants first learned the basic layout of an apartment based on digital haptics only, and then one of two trajectories through it. While still blindfolded, participants successfully reconstructed the haptically learned 2D spaces and navigated these spaces. Digital haptics were thus an effective means, on the one hand, to learn and translate 2D images into 3D reconstructions of layouts and, on the other hand, to translate them into navigation actions within real spaces. Digital haptics based on ultrasounds represent an alternative learning tool for complex scenes as well as for successful navigation of previously unfamiliar layouts, which can likely be further applied in the rehabilitation of spatial functions and the mitigation of visual impairments.
Affiliation(s)
- Ruxandra I Tivadar
  - The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
  - Centre for Integrative and Complementary Medicine, Department of Anesthesiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - Cognitive Computational Neuroscience Group, Institute for Computer Science, University of Bern, Bern, Switzerland
  - The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Benedetta Franceschiello
  - The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
  - Institute of Systems Engineering, School of Engineering, University of Applied Sciences Western Switzerland (HES-SO Valais), Sion, Switzerland
- Astrid Minier
  - The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Micah M Murray
  - The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
  - The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
6. Plaza PL, Renier L, Rosemann S, De Volder AG, Rauschecker JP. Sound-encoded faces activate the left fusiform face area in the early blind. PLoS One 2023; 18:e0286512. PMID: 37992062; PMCID: PMC10664868; DOI: 10.1371/journal.pone.0286512.
Abstract
Face perception in humans and nonhuman primates is accomplished by a patchwork of specialized cortical regions. How these regions develop has remained controversial. In sighted individuals, facial information is primarily conveyed via the visual modality. Early blind individuals, on the other hand, can recognize shapes using auditory and tactile cues. Here we demonstrate that such individuals can learn to distinguish faces from houses and other shapes by using a sensory substitution device (SSD) presenting schematic faces as sound-encoded stimuli in the auditory modality. Using functional MRI, we then asked whether a face-selective brain region like the fusiform face area (FFA) shows selectivity for faces in the same subjects, and indeed, we found evidence for preferential activation of the left FFA by sound-encoded faces. These results imply that FFA development does not depend on experience with visual faces per se but may instead depend on exposure to the geometry of facial configurations.
Affiliation(s)
- Paula L. Plaza
  - Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
- Laurent Renier
  - Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
  - Neural Rehabilitation Laboratory, Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Stephanie Rosemann
  - Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
- Anne G. De Volder
  - Neural Rehabilitation Laboratory, Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Josef P. Rauschecker
  - Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
7. Arbel R, Heimler B, Amedi A. Rapid plasticity in the ventral visual stream elicited by a newly learnt auditory script in congenitally blind adults. Neuropsychologia 2023; 190:108685. PMID: 37741551; DOI: 10.1016/j.neuropsychologia.2023.108685.
Abstract
Accumulating evidence in recent decades has given rise to a new theory of brain organization, positing that cortical regions are recruited for specific tasks irrespective of the sensory modality via which information is channeled. For instance, the visual reading network has been shown to be recruited for reading via the tactile Braille code in congenitally blind adults. Yet, how rapidly non-typical sensory input modulates activity in typically visual regions has yet to be explored. To this end, we developed a novel reading orthography, termed OVAL, enabling congenitally blind adults to quickly acquire reading via the auditory modality. OVAL uses the EyeMusic, a visual-to-auditory sensory-substitution device (SSD), to transform visually presented letters, optimized for auditory transformation, into sound. Using fMRI, we show modulation in the right ventral visual stream following 2 h of same-day training. Crucially, following more extensive training (∼12 h), we show that OVAL reading recruits the left ventral visual stream, including the location of the Visual Word Form Area, a key grapheme-responsive region within the visual reading network. Our results show that while after 2 h of SSD training we can already observe recruitment of the deprived ventral visual stream by auditory stimuli, computation-selective cross-modal recruitment requires longer training to establish.
Affiliation(s)
- Roni Arbel
  - Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
  - Department of Pediatrics, Hadassah Mount Scopus Hospital, Jerusalem, Israel
- Benedetta Heimler
  - Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
  - The Institute for Brain, Mind and Technology, Ivcher School of Psychology, Reichman University, Herzeliya, Israel
  - Center of Advanced Technologies in Rehabilitation (CATR), The Chaim Sheba Medical Center, Tel Hashomer, Israel
- Amir Amedi
  - Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
  - The Institute for Brain, Mind and Technology, Ivcher School of Psychology, Reichman University, Herzeliya, Israel
8. Kang J, Bertani R, Raheel K, Soteriou M, Rosenzweig J, Valentin A, Goadsby PJ, Tahmasian M, Moran R, Ilic K, Ockelford A, Rosenzweig I. Mental Imagery in Dreams of Congenitally Blind People. Brain Sci 2023; 13:1394. PMID: 37891763; PMCID: PMC10605848; DOI: 10.3390/brainsci13101394.
Abstract
It is unclear to what extent the absence of vision affects sensory sensitivity in oneiric construction. Similarly, the presence of visual imagery in the dream mentation of congenitally blind people has been largely disputed. We investigated the presence and nature of oneiric visuo-spatial impressions by analysing 180 dreams of seven congenitally blind people identified from the online database DreamBank. A higher presence of auditory, haptic, olfactory, and gustatory sensations in the dreams of congenitally blind people was demonstrated compared with normally sighted individuals. Nonetheless, oneiric visual imagery was also noted in the reports of congenitally blind subjects, in opposition to some previous studies, raising questions about the possible underlying neural mechanisms.
Affiliation(s)
- Jungwoo Kang
  - Sleep and Brain Plasticity Centre, Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience (IoPPN), King’s College London, London WC2R 2LS, UK
- Rita Bertani
  - Sleep and Brain Plasticity Centre, Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience (IoPPN), King’s College London, London WC2R 2LS, UK
- Kausar Raheel
  - Sleep and Brain Plasticity Centre, Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience (IoPPN), King’s College London, London WC2R 2LS, UK
- Matthew Soteriou
  - Department of Philosophy, King’s College London, London WC2R 2LS, UK
- Jan Rosenzweig
  - Department of Engineering, King’s College London, London WC2R 2LS, UK
- Antonio Valentin
  - Basic and Clinical Neuroscience, IoPPN, King’s College London, London WC2R 2LS, UK
- Peter J. Goadsby
  - NIHR-Wellcome Trust King’s Clinical Research Facility, King’s College London, London WC2R 2LS, UK
- Masoud Tahmasian
  - Institute of Neuroscience and Medicine, Brain and Behaviour (INM-7), Research Centre Jülich, 52428 Jülich, Germany
- Rosalyn Moran
  - Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience (IoPPN), King’s College London, London WC2R 2LS, UK
- Katarina Ilic
  - Sleep and Brain Plasticity Centre, Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience (IoPPN), King’s College London, London WC2R 2LS, UK
  - BRAIN, Department of Neuroimaging, King’s College London, London WC2R 2LS, UK
- Adam Ockelford
  - Centre for Learning, Teaching and Human Development, School of Education, University of Roehampton, London SW15 5PJ, UK
- Ivana Rosenzweig
  - Sleep and Brain Plasticity Centre, Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience (IoPPN), King’s College London, London WC2R 2LS, UK
  - Sleep Disorders Centre, Guy’s and St Thomas’ NHS Foundation Trust, London SE1 1UL, UK
9. Damera SR, Malone PS, Stevens BW, Klein R, Eberhardt SP, Auer ET, Bernstein LE, Riesenhuber M. Metamodal Coupling of Vibrotactile and Auditory Speech Processing Systems through Matched Stimulus Representations. J Neurosci 2023; 43:4984-4996. PMID: 37197979; PMCID: PMC10324991; DOI: 10.1523/jneurosci.1710-22.2023.
Abstract
It has been postulated that the brain is organized by "metamodal," sensory-independent cortical modules capable of performing tasks (e.g., word recognition) in both "standard" and novel sensory modalities. Still, this theory has primarily been tested in sensory-deprived individuals, with mixed evidence in neurotypical subjects, thereby limiting its support as a general principle of brain organization. Critically, current theories of metamodal processing do not specify requirements for successful metamodal processing at the level of neural representations. Specification at this level may be particularly important in neurotypical individuals, where novel sensory modalities must interface with existing representations for the standard sense. Here we hypothesized that effective metamodal engagement of a cortical area requires congruence between stimulus representations in the standard and novel sensory modalities in that region. To test this, we first used fMRI to identify bilateral auditory speech representations. We then trained 20 human participants (12 female) to recognize vibrotactile versions of auditory words using one of two auditory-to-vibrotactile algorithms. The vocoded algorithm attempted to match the encoding scheme of auditory speech while the token-based algorithm did not. Crucially, using fMRI, we found that only in the vocoded group did trained-vibrotactile stimuli recruit speech representations in the superior temporal gyrus and lead to increased coupling between them and somatosensory areas. Our results advance our understanding of brain organization by providing new insight into unlocking the metamodal potential of the brain, thereby benefitting the design of novel sensory substitution devices that aim to tap into existing processing streams in the brain.

SIGNIFICANCE STATEMENT It has been proposed that the brain is organized by "metamodal," sensory-independent modules specialized for performing certain tasks. This idea has inspired therapeutic applications, such as sensory substitution devices, for example, enabling blind individuals "to see" by transforming visual input into soundscapes. Yet, other studies have failed to demonstrate metamodal engagement. Here, we tested the hypothesis that metamodal engagement in neurotypical individuals requires matching the encoding schemes between stimuli from the novel and standard sensory modalities. We trained two groups of subjects to recognize words generated by one of two auditory-to-vibrotactile transformations. Critically, only vibrotactile stimuli that were matched to the neural encoding of auditory speech engaged auditory speech areas after training. This suggests that matching encoding schemes is critical to unlocking the brain's metamodal potential.
Affiliation(s)
- Srikanth R Damera
  - Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Patrick S Malone
  - Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Benson W Stevens
  - Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Richard Klein
  - Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Silvio P Eberhardt
  - Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
- Edward T Auer
  - Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
- Lynne E Bernstein
  - Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
10. Ilic K, Bertani R, Lapteva N, Drakatos P, Delogu A, Raheel K, Soteriou M, Mutti C, Steier J, Carmichael DW, Goadsby PJ, Ockelford A, Rosenzweig I. Visuo-spatial imagery in dreams of congenitally and early blind: a systematic review. Front Integr Neurosci 2023; 17:1204129. PMID: 37457556; PMCID: PMC10347682; DOI: 10.3389/fnint.2023.1204129.
Abstract
BACKGROUND The presence of visual imagery in the dreams of congenitally blind people has long been a matter of substantial controversy. We set out to systematically review the body of published work on the presence and nature of oneiric visuo-spatial impressions in congenitally and early blind subjects across different areas of research, including experimental psychology, functional neuroimaging, sensory substitution, and sleep research. METHODS Relevant studies were identified using the following databases: EMBASE, MEDLINE, and PsycINFO. RESULTS Studies using diverse imaging techniques and sensory substitution devices broadly suggest that the "blind" occipital cortex may be able to integrate non-visual sensory inputs, and thus possibly also generate visuo-spatial impressions. Visual impressions have also been reported by blind subjects who had near-death or out-of-body experiences. CONCLUSION Deciphering the mechanistic nature of these visual impressions could open new possibilities for utilizing neuroplasticity and its potential role in the treatment of neurodisability.
Affiliation(s)
- Katarina Ilic
- Department of Neuroimaging, Sleep and Brain Plasticity Centre, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, United Kingdom
- BRAIN Imaging Centre, CNS, King’s College London, London, United Kingdom
- Rita Bertani
- Department of Neuroimaging, Sleep and Brain Plasticity Centre, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, United Kingdom
- Neda Lapteva
- Department of Neuroimaging, Sleep and Brain Plasticity Centre, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, United Kingdom
- Panagis Drakatos
- Department of Neuroimaging, Sleep and Brain Plasticity Centre, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, United Kingdom
- School of Basic and Medical Biosciences, Faculty of Life Sciences and Medicine, King’s College London, London, United Kingdom
- Sleep Disorders Centre, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- Alessio Delogu
- Department of Basic and Clinical Neuroscience, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, United Kingdom
- Kausar Raheel
- Department of Neuroimaging, Sleep and Brain Plasticity Centre, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, United Kingdom
- Matthew Soteriou
- Department of Philosophy, King’s College London, London, United Kingdom
- Carlotta Mutti
- Department of General and Specialized Medicine, Sleep Disorders Center, University Hospital of Parma, Parma, Italy
- Joerg Steier
- School of Basic and Medical Biosciences, Faculty of Life Sciences and Medicine, King’s College London, London, United Kingdom
- Sleep Disorders Centre, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- David W. Carmichael
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Peter J. Goadsby
- NIHR-Wellcome Trust King’s Clinical Research Facility, King’s College London, London, United Kingdom
- Adam Ockelford
- Centre for Learning, Teaching and Human Development, School of Education, University of Roehampton, London, United Kingdom
- Ivana Rosenzweig
- Department of Neuroimaging, Sleep and Brain Plasticity Centre, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, United Kingdom
- Sleep Disorders Centre, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
11
Rayes RK, Mazorow RN, Mrotek LA, Scheidt RA. Utility and Usability of Two Forms of Supplemental Vibrotactile Kinesthetic Feedback for Enhancing Movement Accuracy and Efficiency in Goal-Directed Reaching. SENSORS (BASEL, SWITZERLAND) 2023; 23:5455. [PMID: 37420621 DOI: 10.3390/s23125455] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Revised: 05/25/2023] [Accepted: 06/06/2023] [Indexed: 07/09/2023]
Abstract
Recent advances in wearable sensors and computing have made possible the development of novel sensory augmentation technologies that promise to enhance human motor performance and quality of life in a wide range of applications. We compared the objective utility and subjective user experience of two biologically inspired ways to encode movement-related information into supplemental feedback for the real-time control of goal-directed reaching in healthy, neurologically intact adults. One encoding scheme mimicked visual feedback encoding by converting real-time hand position in a Cartesian frame of reference into supplemental kinesthetic feedback provided by a vibrotactile display attached to the non-moving arm and hand. The other approach mimicked proprioceptive encoding by providing real-time arm joint angle information via the vibrotactile display. We found that both encoding schemes had objective utility: after a brief training period, both forms of supplemental feedback improved reach accuracy in the absence of concurrent visual feedback beyond performance levels achieved using proprioception alone. Cartesian encoding promoted greater reductions in target capture errors in the absence of visual feedback (Cartesian: 59% improvement; joint angle: 21% improvement). Accuracy gains promoted by both encoding schemes came at a cost in temporal efficiency; target capture times were considerably longer (by 1.5 s) when reaching with supplemental kinesthetic feedback than without. Furthermore, neither encoding scheme yielded movements that were particularly smooth, although movements made with joint angle encoding were smoother than movements with Cartesian encoding. Participant responses on user experience surveys indicate that both encoding schemes were motivating and that both yielded passable user satisfaction scores.
However, only Cartesian endpoint encoding was found to have passable usability; participants felt more competent using Cartesian encoding than joint angle encoding. These results are expected to inform future efforts to develop wearable technology to enhance the accuracy and efficiency of goal-directed actions using continuous supplemental kinesthetic feedback.
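The two encoding schemes compared above can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: the tactor layout, the gain, and the function names (`cartesian_encoding`, `joint_angle_encoding`) are assumptions chosen only to make the contrast between the two schemes concrete.

```python
def cartesian_encoding(hand_xy, target_xy, gain=1.0):
    """Hypothetical Cartesian scheme: four tactors represent the +x, +y,
    -x, and -y directions; each tactor's vibration intensity grows with
    the component of the hand-to-target error along its direction."""
    ex = target_xy[0] - hand_xy[0]
    ey = target_xy[1] - hand_xy[1]
    return [max(0.0, d) * gain for d in (ex, ey, -ex, -ey)]


def joint_angle_encoding(joint_angles, ref_angles, gain=1.0):
    """Hypothetical joint-angle scheme: one tactor per joint, driven by
    the absolute angular deviation from a reference posture (radians)."""
    return [abs(a - r) * gain for a, r in zip(joint_angles, ref_angles)]


# With the hand below and to the left of the target, only the +x and +y
# tactors are driven; the two opposing tactors stay silent.
print(cartesian_encoding((0.0, 0.0), (1.0, 2.0)))
```

The contrast the study draws is visible even at this toy level: the Cartesian scheme encodes where the endpoint is relative to the goal, while the joint-angle scheme encodes the arm's configuration and leaves the endpoint computation to the user.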
Affiliation(s)
- Ramsey K Rayes
- Joint Department of Biomedical Engineering, Marquette University and the Medical College of Wisconsin, Milwaukee, WI 53233, USA
- Medical School, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Rachel N Mazorow
- Joint Department of Biomedical Engineering, Marquette University and the Medical College of Wisconsin, Milwaukee, WI 53233, USA
- Leigh A Mrotek
- Joint Department of Biomedical Engineering, Marquette University and the Medical College of Wisconsin, Milwaukee, WI 53233, USA
- Robert A Scheidt
- Joint Department of Biomedical Engineering, Marquette University and the Medical College of Wisconsin, Milwaukee, WI 53233, USA
12
Schmidt V, König SU, Dilawar R, Sánchez Pacheco T, König P. Improved Spatial Knowledge Acquisition through Sensory Augmentation. Brain Sci 2023; 13:brainsci13050720. [PMID: 37239192 DOI: 10.3390/brainsci13050720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Revised: 04/13/2023] [Accepted: 04/20/2023] [Indexed: 05/28/2023] Open
Abstract
Sensory augmentation provides novel opportunities to broaden our knowledge of human perception through external sensors that record and transmit information beyond natural perception. To assess whether such augmented senses affect the acquisition of spatial knowledge during navigation, we trained a group of 27 participants for six weeks with an augmented sense for cardinal directions called the feelSpace belt. We then recruited a control group that received neither the augmented sense nor the corresponding training. All 53 participants first explored the Westbrook virtual reality environment for two and a half hours spread over five sessions before their spatial knowledge was assessed in four immersive virtual reality tasks measuring cardinal, route, and survey knowledge. We found that the belt group acquired significantly more accurate cardinal and survey knowledge, as measured by pointing accuracy and by distance and rotation estimates. Interestingly, the augmented sense also positively affected route knowledge, although to a lesser degree. Finally, the belt group reported a significant increase in the use of spatial strategies after training, while the groups' ratings were comparable at baseline. The results suggest that six weeks of training with the feelSpace belt led to improved survey and route knowledge acquisition. Moreover, the findings of our study could inform the development of assistive technologies for individuals with visual or navigational impairments, which may lead to enhanced navigation skills and quality of life.
Affiliation(s)
- Vincent Schmidt
- Neurobiopsychology Group, Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Sabine U König
- Neurobiopsychology Group, Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Rabia Dilawar
- Neurobiopsychology Group, Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Tracy Sánchez Pacheco
- Neurobiopsychology Group, Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Peter König
- Neurobiopsychology Group, Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
13
Yizhar O, Tal Z, Amedi A. Loss of action-related function and connectivity in the blind extrastriate body area. Front Neurosci 2023; 17:973525. [PMID: 36968509 PMCID: PMC10035577 DOI: 10.3389/fnins.2023.973525] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Accepted: 02/23/2023] [Indexed: 03/11/2023] Open
Abstract
The Extrastriate Body Area (EBA) participates in the visual perception and motor actions of body parts. We recently showed that EBA’s perceptual function develops independently of visual experience, responding to stimuli with body-part information in a supramodal fashion. However, it is still unclear if the EBA similarly maintains its action-related function. Here, we used fMRI to study motor-evoked responses and connectivity patterns in the congenitally blind brain. We found that, unlike the case of perception, EBA does not develop an action-related response without visual experience. In addition, we show that congenital blindness alters EBA’s connectivity profile in a counterintuitive way—functional connectivity with sensorimotor cortices dramatically decreases, whereas connectivity with perception-related visual occipital cortices remains high. To the best of our knowledge, we show for the first time that action-related functions and connectivity in the visual cortex could be contingent on visuomotor experience. We further discuss the role of the EBA within the context of visuomotor control and predictive coding theory.
Affiliation(s)
- Or Yizhar
- Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind and Technology, Reichman University, Herzliya, Israel
- Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- Zohar Tal
- Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Amir Amedi
- Ivcher School of Psychology, The Institute for Brain, Mind and Technology, Reichman University, Herzliya, Israel
- The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
14
Bang JW, Hamilton-Fletcher G, Chan KC. Visual Plasticity in Adulthood: Perspectives from Hebbian and Homeostatic Plasticity. Neuroscientist 2023; 29:117-138. [PMID: 34382456 PMCID: PMC9356772 DOI: 10.1177/10738584211037619] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
The visual system retains profound plastic potential in adulthood. In the current review, we summarize the evidence of preserved plasticity in the adult visual system during visual perceptual learning as well as both monocular and binocular visual deprivation. In each condition, we discuss how such evidence reflects two major cellular mechanisms of plasticity: Hebbian and homeostatic processes. We focus on how these two mechanisms work together to shape plasticity in the visual system. In addition, we discuss how these two mechanisms could be further revealed in future studies investigating cross-modal plasticity in the visual system.
Affiliation(s)
- Ji Won Bang
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Giles Hamilton-Fletcher
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Kevin C. Chan
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Department of Radiology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Neuroscience Institute, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Center for Neural Science, College of Arts and Science, New York University, New York, NY, USA
15
Maimon A, Wald IY, Ben Oz M, Codron S, Netzer O, Heimler B, Amedi A. The Topo-Speech sensory substitution system as a method of conveying spatial information to the blind and vision impaired. Front Hum Neurosci 2023; 16:1058093. [PMID: 36776219 PMCID: PMC9909096 DOI: 10.3389/fnhum.2022.1058093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Accepted: 12/13/2022] [Indexed: 01/27/2023] Open
Abstract
Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in the blind and visually impaired. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations in a manner that corresponds with the unique way our brains acquire and process information. It does so by conveying spatial information, customarily acquired through vision, through the auditory channel, combining sensory (auditory) features with symbolic language (named/spoken) features. The Topo-Speech system sweeps the visual scene or image and represents each object's identity by naming it in a spoken word while simultaneously conveying its location: the x-axis of the visual scene or image is mapped to the time at which the object is announced, and the y-axis to the pitch of the voice. This proof-of-concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind showed an average accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels.
To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, and convergent models) and the capacity for spatial representation in the blind. We suggest the present study's findings support the convergence model and the view that the blind are capable of aspects of spatial representation, as conveyed by the algorithm, comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.
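The x-to-time and y-to-pitch mapping described in the abstract can be sketched in a few lines. The sweep duration, pitch range, and scene dimensions below are assumed values for illustration only, and `topo_speech_schedule` is a hypothetical name, not the published implementation.

```python
def topo_speech_schedule(objects, sweep_s=2.0, f_min=220.0, f_max=880.0,
                         width=1.0, height=1.0):
    """Schedule spoken-word events for named objects in a scene.

    Assumed mapping: x in [0, width] maps linearly to the onset time of
    the object's spoken name within one left-to-right sweep; y in
    [0, height] maps to the pitch of the voice (higher in the scene ->
    higher pitch). Returns (onset_seconds, pitch_hz, name) tuples.
    """
    events = []
    for name, x, y in objects:
        onset = (x / width) * sweep_s
        pitch = f_min + (y / height) * (f_max - f_min)
        events.append((onset, pitch, name))
    return sorted(events)  # announced in left-to-right order


scene = [("cup", 0.8, 0.5), ("door", 0.2, 1.0)]
# "door" (left side, top of scene) is announced before "cup"
print(topo_speech_schedule(scene))
```

The point of the sketch is that identity travels as language (the spoken name) while location travels as low-level auditory features (timing and pitch), matching the combination of symbolic and sensory channels the abstract describes.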
Affiliation(s)
- Amber Maimon
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Iddo Yehoshua Wald
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Meshi Ben Oz
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Sophie Codron
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Ophir Netzer
- Gonda Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
16
Maimon A, Netzer O, Heimler B, Amedi A. Testing geometry and 3D perception in children following vision restoring cataract-removal surgery. Front Neurosci 2023; 16:962817. [PMID: 36711132 PMCID: PMC9879291 DOI: 10.3389/fnins.2022.962817] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Accepted: 12/19/2022] [Indexed: 01/13/2023] Open
Abstract
As neuroscience and rehabilitative techniques advance, age-old questions concerning the visual experience of those who gain sight after blindness, once thought to be philosophical alone, take center stage and become the target of scientific inquiry. In this study, we employ a battery of visual perception tasks to study the unique experience of a small group of children who have undergone vision-restoring cataract removal surgery as part of the Himalayan Cataract Project. We tested their abilities to perceive in three dimensions (3D) using a binocular rivalry task and the Brock string task, perceive visual illusions, use cross-modal mappings between touch and vision, and spatially group based on geometric cues. Some of the children in this study gained a sense of sight for the first time in their lives, having been born with bilateral congenital cataracts, while others suffered late-onset blindness in one eye alone. This study simultaneously supports yet raises further questions concerning Hubel and Wiesel's critical-periods theory, and provides additional insight into Molyneux's problem, the question of whether vision can be rapidly correlated with touch. We suggest that our findings present a relatively unexplored intermediate stage of 3D vision development. Importantly, we spotlight some essential geometric visual perception abilities that strengthen the idea that spontaneous geometry intuitions arise independently of visual experience (and education), thus replicating and extending previous studies. We introduce a previously unexplored approach: testing children who have undergone congenital cataract removal surgery and who perform the tasks via vision, whereas previous work explored these abilities in the congenitally blind via touch.
Taken together, our findings provide insight into the development of what is commonly known as the visual system in the visually deprived and highlight the need to further empirically explore an amodal, task-based interpretation of specializations in the development and structure of the brain. Moreover, we propose a novel objective method, based on a simple binocular rivalry task and the Brock string task, for determining congenital (early) vs. late blindness where medical history and records are partial or lacking (e.g., as is often the case in cataract removal cases).
Affiliation(s)
- Amber Maimon
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Ophir Netzer
- Gonda Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
17
Shvadron S, Snir A, Maimon A, Yizhar O, Harel S, Poradosu K, Amedi A. Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device. Front Hum Neurosci 2023; 17:1058617. [PMID: 36936618 PMCID: PMC10017858 DOI: 10.3389/fnhum.2023.1058617] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Accepted: 01/09/2023] [Indexed: 03/06/2023] Open
Abstract
Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic to cover the blind areas of sighted individuals, i.e., the regions outside their visual field. We use it in this initial proof-of-concept study to test the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants in this study were tasked with recognizing and adequately placing the stimuli, using sound to represent the areas outside the standard human visual field. As such, the participants were asked to report shapes' identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for successful performance of the task (content in both vision and audition was presented in a sweeping clockwise motion around the participant). We found that participants performed well above chance after a brief 1-h-long online training session and one on-site training session averaging 20 min. In some cases, they could even draw a 2D representation of the image. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information in order to expand the natural fields of sensory perception.
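The division of the surroundings into a 90° frontal visual window and a 270° sonified arc can be sketched as a toy scheduler. This is an assumption-laden illustration of the general idea only: the function name `sonify_surround`, the sweep duration, the window size, and the crude left/right pan cue are all hypothetical, not the EyeMusic's actual sonification.

```python
def sonify_surround(objects, sweep_s=4.0, frontal_deg=90.0):
    """Assign each off-screen object an onset time within one clockwise
    auditory sweep, plus a coarse left/right pan cue.

    Azimuth convention (assumed): 0 deg = straight ahead, increasing
    clockwise. Objects inside the frontal window are left to vision and
    are not sonified. Returns (onset_seconds, pan, name) tuples, where
    pan is +1.0 for the right hemifield and -1.0 for the left.
    """
    half = frontal_deg / 2.0
    events = []
    for name, az in objects:
        az %= 360.0
        if az <= half or az >= 360.0 - half:
            continue  # inside the visual field; not sonified
        # position within the 270-degree auditory arc, 0 = just past
        # the right edge of the visual window
        frac = (az - half) / (360.0 - frontal_deg)
        pan = 1.0 if az < 180.0 else -1.0
        events.append((frac * sweep_s, pan, name))
    return sorted(events)


# A frontal object ("lamp" at 10 deg) is skipped; "chair" (right) is
# sonified before "door" (left) in the clockwise sweep.
print(sonify_surround([("chair", 90.0), ("lamp", 10.0), ("door", 270.0)]))
```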
Affiliation(s)
- Shira Shvadron
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Adi Snir
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Amber Maimon
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Or Yizhar
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- Max Planck Dahlem Campus of Cognition (MPDCC), Max Planck Institute for Human Development, Berlin, Germany
- Sapir Harel
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Keinan Poradosu
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Weizmann Institute of Science, Rehovot, Israel
- Amir Amedi
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
18
Gori M, Amadeo MB, Pavani F, Valzolgher C, Campus C. Temporal visual representation elicits early auditory-like responses in hearing but not in deaf individuals. Sci Rep 2022; 12:19036. [PMID: 36351944 PMCID: PMC9646881 DOI: 10.1038/s41598-022-22224-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2022] [Accepted: 10/11/2022] [Indexed: 11/10/2022] Open
Abstract
It is evident that the brain is capable of large-scale reorganization following sensory deprivation, but the extent of such reorganization is, to date, not clear. The auditory modality is the most accurate at representing temporal information, and deafness is an ideal clinical condition for studying the reorganization of temporal representation when the audio signal is not available. Here we show that hearing, but not deaf, individuals show a strong ERP response to visual stimuli in temporal areas during a time-bisection task. This ERP response appears 50-90 ms after the flash and recalls some aspects of the N1 ERP component usually elicited by auditory stimuli. The same ERP is not evident for a visual space-bisection task, suggesting that the early recruitment of temporal cortex is specific to building a highly resolved temporal representation within the visual modality. These findings provide evidence that the lack of auditory input can interfere with the typical development of complex visual temporal representations.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Centro Interateneo di Ricerca Cognizione, Linguaggio e Sordità (CIRCLeS), University of Trento, Trento, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Centre de Recherche en Neuroscience de Lyon (CRNL), Bron, France
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Centre de Recherche en Neuroscience de Lyon (CRNL), Bron, France
- Claudio Campus
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
19
Martolini C, Amadeo MB, Campus C, Cappagli G, Gori M. Effects of audio-motor training on spatial representations in long-term late blindness. Neuropsychologia 2022; 176:108391. [DOI: 10.1016/j.neuropsychologia.2022.108391] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Revised: 08/16/2022] [Accepted: 10/01/2022] [Indexed: 11/15/2022]
20
Chen Y, Liu Y, Song Y, Zhao S, Li B, Sun J, Liu L. Therapeutic applications and potential mechanisms of acupuncture in migraine: A literature review and perspectives. Front Neurosci 2022; 16:1022455. [PMID: 36340786 PMCID: PMC9630645 DOI: 10.3389/fnins.2022.1022455] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Accepted: 09/30/2022] [Indexed: 11/16/2022] Open
Abstract
Acupuncture is commonly used as a treatment for migraines. Animal studies have suggested that acupuncture can decrease neuropeptides, immune cells, and proinflammatory and excitatory neurotransmitters, which are associated with the pathogenesis of neuroinflammation. In addition, acupuncture participates in the development of peripheral and central sensitization through modulation of the release of neuronal-sensitization-related mediators (brain-derived neurotrophic factor, glutamate), endocannabinoid system, and serotonin system activation. Clinical studies have demonstrated that acupuncture may be a beneficial migraine treatment, particularly in decreasing pain intensity, duration, emotional comorbidity, and days of acute medication intake. However, specific clinical effectiveness has not been substantiated, and the mechanisms underlying its efficacy remain obscure. With the development of biomedical and neuroimaging techniques, the neural mechanism of acupuncture in migraine has gained increasing attention. Neuroimaging studies have indicated that acupuncture may alter the abnormal functional activity and connectivity of the descending pain modulatory system, default mode network, thalamus, frontal-parietal network, occipital-temporal network, and cerebellum. Acupuncture may reduce neuroinflammation, regulate peripheral and central sensitization, and normalize abnormal brain activity, thereby preventing pain signal transmission. To summarize the effects and neural mechanisms of acupuncture in migraine, we performed a systematic review of literature about migraine and acupuncture. We summarized the characteristics of current clinical studies, including the types of participants, study designs, and clinical outcomes. The published findings from basic neuroimaging studies support the hypothesis that acupuncture alters abnormal neuroplasticity and brain activity. The benefits of acupuncture require further investigation through basic and clinical studies.
21
Korczyk M, Zimmermann M, Bola Ł, Szwed M. Superior visual rhythm discrimination in expert musicians is most likely not related to cross-modal recruitment of the auditory cortex. Front Psychol 2022; 13:1036669. [PMID: 36337485 PMCID: PMC9632485 DOI: 10.3389/fpsyg.2022.1036669] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2022] [Accepted: 10/06/2022] [Indexed: 11/25/2022] Open
Abstract
Training can influence behavioral performance and lead to brain reorganization. In particular, training in one modality, for example auditory, can improve performance in another modality, for example visual. Previous research suggests that one mechanism behind this phenomenon could be cross-modal recruitment of sensory areas, for example, the auditory cortex. Studying expert musicians offers a chance to explore this process. Rhythm is an aspect of music that can be presented in various modalities. We designed an fMRI experiment in which professional pianists and non-musicians discriminated between two sequences of rhythms presented auditorily (series of sounds) or visually (series of flashes). Behaviorally, musicians performed better than non-musicians in both the visual and the auditory rhythm tasks. We found no significant between-group differences in fMRI activations within the auditory cortex. However, we observed that musicians had increased activation in the right inferior parietal lobe when compared to non-musicians. We conclude that the musicians' superior visual rhythm discrimination is not related to cross-modal recruitment of the auditory cortex; instead, it could be related to activation in higher-level, multimodal areas of the cortex.
Affiliation(s)
| | | | - Łukasz Bola
- Institute of Psychology, Jagiellonian University, Kraków, Poland
- Institute of Psychology, Polish Academy of Sciences, Warszawa, Poland
| | - Marcin Szwed
- Institute of Psychology, Jagiellonian University, Kraków, Poland
- *Correspondence: Marcin Szwed,
22
Arbel R, Heimler B, Amedi A. Face shape processing via visual-to-auditory sensory substitution activates regions within the face processing networks in the absence of visual experience. Front Neurosci 2022; 16:921321. [PMID: 36263367 PMCID: PMC9576157 DOI: 10.3389/fnins.2022.921321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 09/05/2022] [Indexed: 11/16/2022] Open
Abstract
Previous evidence suggests that visual experience is crucial for the emergence and tuning of the typical neural system for face recognition. To challenge this conclusion, we trained congenitally blind adults to recognize faces via a visual-to-auditory sensory substitution device (SSD). Our results showed a preference for trained faces over other SSD-conveyed visual categories in the fusiform gyrus and in other known face-responsive regions of the deprived ventral visual stream. We also observed a parametric modulation in the same cortical regions for face orientation (upright vs. inverted) and face novelty (trained vs. untrained). Our results strengthen the conclusion that there is a predisposition for sensory-independent and computation-specific processing in specific cortical regions that can be retained through life-long sensory deprivation, independently of previous perceptual experience. They also highlight that, given the right training, such cortical preference maintains its tuning to what were considered visual-specific face features.
Affiliation(s)
- Roni Arbel
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Pediatrics, Hadassah University Hospital-Mount Scopus, Jerusalem, Israel
- *Correspondence: Roni Arbel,
| | - Benedetta Heimler
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind, and Technology, Reichman University, Herzeliya, Israel
- Center of Advanced Technologies in Rehabilitation, Sheba Medical Center, Ramat Gan, Israel
| | - Amir Amedi
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind, and Technology, Reichman University, Herzeliya, Israel
23
Karim AKMR, Proulx MJ, de Sousa AA, Likova LT. Do we enjoy what we sense and perceive? A dissociation between aesthetic appreciation and basic perception of environmental objects or events. COGNITIVE, AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2022; 22:904-951. [PMID: 35589909 PMCID: PMC10159614 DOI: 10.3758/s13415-022-01004-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 03/27/2022] [Indexed: 05/06/2023]
Abstract
This integrative review rearticulates the notion of human aesthetics by critically appraising the conventional definitions, offering a new, more comprehensive definition, and identifying the fundamental components associated with it. It intends to advance a holistic understanding of the notion by differentiating aesthetic perception from basic perceptual recognition, and by characterizing these concepts from the perspective of information processing in both visual and nonvisual modalities. To this end, we analyze the dissociative nature of information processing in the brain, introducing a novel local-global integrative model that differentiates aesthetic processing from basic perceptual processing. This model builds on the current state of the art in visual aesthetics as well as newer propositions about nonvisual aesthetics. The model comprises two analytic channels: an aesthetics-only channel and a perception-to-aesthetics channel. The aesthetics-only channel primarily involves restricted local processing for the analysis of quality or richness (e.g., attractiveness, beauty/prettiness, elegance, sublimeness, catchiness, hedonic value), whereas the perception-to-aesthetics channel involves global/extended local processing for basic feature analysis, followed by restricted local processing for quality or richness analysis. We contend that aesthetic processing operates independently of basic perceptual processing, but not independently of cognitive processing. We further conjecture that there might be a common faculty, labeled the aesthetic cognition faculty, in the human brain for all sensory aesthetics, although other parts of the brain can also be activated by basic sensory processing prior to aesthetic processing, particularly during operation of the second channel. This generalized model can account not only for simple and pure aesthetic experiences but also for partial and complex ones.
Affiliation(s)
- A K M Rezaul Karim
- Department of Psychology, University of Dhaka, Dhaka, 1000, Bangladesh.
- Envision Research Institute, 610 N. Main St., Wichita, KS, USA.
- The Smith-Kettlewell Eye Research Institute, 2318 Fillmore St., San Francisco, CA, USA.
| | | | | | - Lora T Likova
- The Smith-Kettlewell Eye Research Institute, 2318 Fillmore St., San Francisco, CA, USA
24
Maimon A, Yizhar O, Buchs G, Heimler B, Amedi A. A case study in phenomenology of visual experience with retinal prosthesis versus visual-to-auditory sensory substitution. Neuropsychologia 2022; 173:108305. [PMID: 35752268 PMCID: PMC9297294 DOI: 10.1016/j.neuropsychologia.2022.108305] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 04/30/2022] [Accepted: 06/13/2022] [Indexed: 11/26/2022]
Abstract
The phenomenology of the blind has provided an age-old, unparalleled means of exploring the enigmatic link between the brain and mind. This paper delves into the unique phenomenological experience of a man who became blind in adulthood. He subsequently underwent an Argus II retinal prosthesis implant with its associated training, as well as extensive training on the EyeMusic visual-to-auditory sensory substitution device (SSD), thereby becoming the first reported case to date of dual proficiency with both devices. He offers a firsthand account of what he considers the great potential of combining sensory substitution devices with visual prostheses as part of a complete visual restoration protocol. While the Argus II retinal prosthesis alone provided him with immediate visual percepts by way of electrically stimulated phosphenes elicited by the device, the EyeMusic SSD requires extensive training from the onset. Yet following the extensive training program with the EyeMusic SSD, our subject reports that the device allowed him a richer, more complex perceptual experience that felt more "second nature" to him, whereas the Argus II prosthesis (which also requires training) did not allow him to achieve the same levels of automaticity and transparency. Following long-term use of the EyeMusic SSD, our subject reported that visual percepts representing mainly, but not limited to, colors portrayed by the device are elicited in association with auditory stimuli, indicating the acquisition of a high level of automaticity. Finally, the case study indicates an additive benefit of combining both devices for the user's subjective phenomenological visual experience.
Affiliation(s)
- Amber Maimon
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel.
| | - Or Yizhar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; Max Planck Institute for Human Development, Research Group Adaptive Memory and Decision Making, Berlin, Germany; Max Planck Institute for Human Development, Max Planck Dahlem Campus of Cognition (MPDCC), Berlin, Germany
| | - Galit Buchs
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
| | - Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel.
25
Cieśla K, Wolak T, Lorens A, Mentzel M, Skarżyński H, Amedi A. Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding. Sci Rep 2022; 12:3206. [PMID: 35217676 PMCID: PMC8881456 DOI: 10.1038/s41598-022-06855-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Accepted: 01/28/2022] [Indexed: 11/09/2022] Open
Abstract
Understanding speech in background noise is challenging, and wearing face masks, as imposed by the COVID-19 pandemic, makes it even harder. We developed a multisensory setup that includes a sensory substitution device (SSD) delivering speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers in understanding distorted speech in noise. After a short session (30-45 min) of repeating sentences, with or without concurrent matching vibrations, we found a comparable mean group improvement of 14-16 dB in Speech Reception Threshold (SRT) in two test conditions, i.e., when the participants repeated sentences from hearing alone and when matching vibrations on the fingertips were present. This is a very strong effect, considering that a 10 dB difference corresponds to a doubling of perceived loudness. The number of sentence repetitions needed to complete both types of training was comparable. Meanwhile, the mean group SNR for the audio-tactile training (14.7 ± 8.7) was significantly lower (harder) than for the auditory training (23.9 ± 11.8), indicating a potential facilitating effect of the added vibrations. In addition, both before and after training most of the participants (70-80%) showed better performance (by a mean of 4-6 dB) in speech-in-noise understanding when the audio sentences were accompanied by matching vibrations. This is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this test condition was also best in both groups (SRT ~ 2 dB). The least significant effect of both training types was found in the third test condition, i.e., when participants repeated sentences accompanied by non-matching tactile vibrations; performance in this condition was also poorest after training. The results indicate that both types of training may remove some level of difficulty in sound perception, which might enable a more proper use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings with respect to basic science. In particular, we show that even in adulthood, i.e., long after the classical "critical periods" of development have passed, a new pairing between a certain computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as for healthy individuals in suboptimal acoustic situations.
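The abstract's rule of thumb, that a 10 dB change corresponds to a doubling of perceived loudness, can be written as a one-line formula. The sketch below is illustrative only (it is not code from the study) and assumes the standard psychoacoustic approximation that perceived loudness scales as 2^(ΔL/10):

```python
# Approximate perceived-loudness ratio implied by a level change in dB,
# using the common rule of thumb that +10 dB doubles perceived loudness.
# Illustrative sketch only; not part of the cited study.

def loudness_ratio(delta_db: float) -> float:
    """Perceived-loudness multiplier for a level change of `delta_db` dB."""
    return 2.0 ** (delta_db / 10.0)

# By this approximation, the reported mean SRT improvements of 14-16 dB
# correspond to roughly a 2.6x to 3.0x change in perceived-loudness terms.
print(loudness_ratio(10.0))
print(loudness_ratio(14.0))
print(loudness_ratio(16.0))
```

This is why the authors describe 14-16 dB as "a very strong effect": it exceeds the 10 dB step that already doubles perceived loudness.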
Affiliation(s)
- K Cieśla
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel; World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland.
| | - T Wolak
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
| | - A Lorens
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
| | - M Mentzel
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
| | - H Skarżyński
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
| | - A Amedi
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
26
Stiles NRB, Weiland JD, Patel VR. Visual-tactile shape perception in the visually restored with artificial vision. J Vis 2022; 22:14. [PMID: 35195673 PMCID: PMC8883179 DOI: 10.1167/jov.22.2.14] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
Retinal prostheses partially restore vision to late blind patients with retinitis pigmentosa through electrical stimulation of still-viable retinal ganglion cells. We investigated whether the late blind can perform visual–tactile shape matching following the partial restoration of vision via retinal prostheses after decades of blindness. We tested for visual–visual, tactile–tactile, and visual–tactile two-dimensional shape matching with six Argus II retinal prosthesis patients, ten sighted controls, and eight sighted controls with simulated ultra-low vision. In the Argus II patients, the visual–visual shape matching performance was significantly greater than chance. Although the visual–tactile shape matching performance of the Argus II patients was not significantly greater than chance, it was significantly higher with longer duration of prosthesis use. The sighted controls using natural vision and the sighted controls with simulated ultra-low vision both performed the visual–visual and visual–tactile shape matching tasks significantly more accurately than the Argus II patients. The tactile–tactile matching was not significantly different between the Argus II patients and sighted controls with or without simulated ultra-low vision. These results show that experienced retinal prosthesis patients can match shapes across the senses and integrate artificial vision with somatosensation. The correlation of retinal prosthesis patients’ crossmodal shape matching performance with the duration of device use supports the value of experience to crossmodal shape learning. These crossmodal shape matching results in Argus II patients are the first step toward understanding crossmodal perception after artificial visual restoration.
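The abstract repeatedly asks whether matching accuracy is "significantly greater than chance." One conventional way to frame that question is an exact binomial test against the chance level. The sketch below is generic and hypothetical: the trial counts and the 1-in-4 chance level are made-up numbers for illustration, not data from this study:

```python
from math import comb

# Exact binomial tail probability: how likely is it to score at least k
# correct out of n trials by pure guessing at chance level p?
# All specific numbers below are hypothetical, not taken from the study.

def p_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Example: 14 correct out of 24 trials when guessing would succeed 1/4 of
# the time. A small tail probability means performance exceeds chance.
p_val = p_at_least(14, 24, 0.25)
```

A per-participant test of this form is one standard way to decide whether an individual's crossmodal matching is above guessing; group-level analyses typically use additional statistics.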
Affiliation(s)
- Noelle R B Stiles
- Department of Ophthalmology, University of Southern California, Los Angeles, CA, USA.
| | - James D Weiland
- Departments of Biomedical Engineering and Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, MI, USA.
| | - Vivek R Patel
- Department of Ophthalmology, University of California, Irvine, Irvine, CA, USA.
27
Alipour A, Beggs JM, Brown JW, James TW. A computational examination of the two-streams hypothesis: which pathway needs a longer memory? Cogn Neurodyn 2022; 16:149-165. [PMID: 35126775 PMCID: PMC8807798 DOI: 10.1007/s11571-021-09703-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 06/26/2021] [Accepted: 07/14/2021] [Indexed: 02/03/2023] Open
Abstract
The two visual streams hypothesis is a robust example of neural functional specialization that has inspired countless studies over the past four decades. According to one prominent version of the theory, the fundamental goal of the dorsal visual pathway is the transformation of retinal information for visually guided motor behavior. To that end, the dorsal stream processes input using absolute (or veridical) metrics only when the movement is initiated, necessitating very little, or no, memory. Conversely, because the ventral visual pathway does not involve motor behavior (its output does not influence the real world), the ventral stream processes input using relative (or illusory) metrics and can accumulate or integrate sensory evidence over long time constants, which provides a substantial capacity for memory. In this study, we tested these relations between functional specialization, processing metrics, and memory by training identical recurrent neural networks to perform either a viewpoint-invariant object classification task or an orientation/size determination task. The former task relies on relative metrics, benefits from accumulating sensory evidence, and is usually attributed to the ventral stream. The latter task relies on absolute metrics, can be computed accurately in the moment, and is usually attributed to the dorsal stream. To quantify the amount of memory required for each task, we chose two types of neural network models. Using a long short-term memory (LSTM) recurrent network, we found that viewpoint-invariant object categorization (object task) required a longer memory than orientation/size determination (orientation task). Additionally, to dissect this memory effect, we considered factors that contribute to longer memory in the object task. First, we used two different sets of objects, one with self-occlusion of features and one without. Second, we defined object classes either strictly by visual feature similarity or (more liberally) by semantic label. The models required greater memory when features were self-occluded and when object classes were defined by visual feature similarity, showing that self-occlusion and visual similarity among object-task samples contribute to the need for a long memory. The same set of tasks modeled using modified leaky-integrator echo state recurrent networks (LiESNs), however, did not replicate the results, except under some conditions. This may be because LiESNs cannot perform fine-grained memory adjustments, owing to their network-wide memory coefficient and fixed recurrent weights. In sum, the LSTM simulations suggest that longer memory is advantageous for viewpoint-invariant object classification (a putative ventral stream function) because it allows for interpolation of features across viewpoints. The results further suggest that orientation/size determination (a putative dorsal stream function) does not benefit from longer memory. These findings are consistent with the two visual streams theory of functional specialization. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s11571-021-09703-z.
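The point about LiESNs having a single "network-wide memory coefficient" can be made concrete with the standard leaky-integrator state update. The sketch below is a generic, minimal version of that update (it is not the paper's model; the network sizes and random weights are purely illustrative): the leak rate `a` is one global knob that sets how much of the previous state survives each step, while the recurrent weights stay fixed rather than being trained.

```python
import math
import random

# Minimal leaky-integrator echo-state-network (LiESN) state update:
#   x' = (1 - a) * x + a * tanh(W_rec @ x + W_in @ u)
# The leak rate `a` is a single, network-wide memory coefficient; the
# recurrent weights W_rec are fixed (untrained). Illustrative sketch only.

def liesn_step(x, u, w_rec, w_in, a):
    """One reservoir update for state x given input u and leak rate a."""
    n = len(x)
    new_x = []
    for i in range(n):
        pre = sum(w_rec[i][j] * x[j] for j in range(n))
        pre += sum(w_in[i][k] * u[k] for k in range(len(u)))
        new_x.append((1.0 - a) * x[i] + a * math.tanh(pre))
    return new_x

random.seed(0)
n, m = 4, 2  # toy reservoir and input sizes (illustrative)
w_rec = [[random.uniform(-0.3, 0.3) for _ in range(n)] for _ in range(n)]
w_in = [[random.uniform(-0.5, 0.5) for _ in range(m)] for _ in range(n)]

# A small leak rate keeps the state close to its previous value (long
# memory); a large leak rate lets fresh input dominate (short memory).
x = [0.5] * n
x_slow = liesn_step(x, [1.0, 0.0], w_rec, w_in, a=0.1)
x_fast = liesn_step(x, [1.0, 0.0], w_rec, w_in, a=0.9)
```

Because `a` multiplies every unit's update identically, the reservoir's memory timescale is set globally, which is one plausible reading of why such networks cannot make the fine-grained, per-task memory adjustments that trained LSTM gates can.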
Affiliation(s)
- Abolfazl Alipour
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN USA
- Program in Neuroscience, Indiana University, Bloomington, IN USA
| | - John M Beggs
- Program in Neuroscience, Indiana University, Bloomington, IN USA
- Department of Physics, Indiana University, Bloomington, IN USA
| | - Joshua W Brown
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN USA
- Program in Neuroscience, Indiana University, Bloomington, IN USA
| | - Thomas W James
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN USA
- Program in Neuroscience, Indiana University, Bloomington, IN USA
28
Longin L, Deroy O. Augmenting perception: How artificial intelligence transforms sensory substitution. Conscious Cogn 2022; 99:103280. [PMID: 35114632 DOI: 10.1016/j.concog.2022.103280] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 11/26/2021] [Accepted: 01/12/2022] [Indexed: 01/28/2023]
Abstract
What happens when artificial sensors are coupled with the human senses? Using technology to extend the senses is an old human dream, on which sensory substitution and other augmentation technologies have already delivered. Laser tactile canes, corneal implants and magnetic belts can correct or extend what individuals could otherwise perceive. Here we show why accommodating intelligent sensory augmentation devices not only improves on, but also changes how we think about and classify, earlier sensory augmentation devices. We review the benefits in terms of signal processing and show why non-linear transformation is more than a mere improvement over classical linear transformation.
Affiliation(s)
- Louis Longin
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, LMU-Munich, Geschwister-Scholl-Platz 1, 80359 Munich, Germany.
| | - Ophelia Deroy
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, LMU-Munich, Geschwister-Scholl-Platz 1, 80359 Munich, Germany; Munich Center for Neurosciences-Brain & Mind, Großhaderner Str. 2, 82152 Planegg-Martinsried, Germany; Institute of Philosophy, School of Advanced Study, University of London, London WC1E 7HU, United Kingdom
29
Fecteau S. Influencing Human Behavior with Noninvasive Brain Stimulation: Direct Human Brain Manipulation Revisited. Neuroscientist 2022; 29:317-331. [PMID: 35057668 PMCID: PMC10159214 DOI: 10.1177/10738584211067744] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The use of tools to perturb brain activity can generate important insights into brain physiology and offer valuable therapeutic approaches for brain disorders. Furthermore, the potential of such tools to enhance normal behavior has become increasingly recognized, and this has led to the development of various noninvasive technologies that provide broader access to the human brain. While providing a brief survey of the brain manipulation procedures used in past decades, this review aims at stimulating an informed discussion on the use of these new technologies to investigate and influence human behavior. It highlights the importance of revisiting the past use of this unique armamentarium and of proceeding to a detailed analysis of its present state, especially with regard to human behavioral regulation.
30
Mahon BZ. Domain-specific connectivity drives the organization of object knowledge in the brain. HANDBOOK OF CLINICAL NEUROLOGY 2022; 187:221-244. [PMID: 35964974 DOI: 10.1016/b978-0-12-823493-8.00028-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The goal of this chapter is to review neuropsychological and functional MRI findings that inform a theory of the causes of functional specialization for semantic categories within occipito-temporal cortex-the ventral visual processing pathway. The occipito-temporal pathway supports visual object processing and recognition. The theoretical framework that drives this review considers visual object recognition through the lens of how "downstream" systems interact with the outputs of visual recognition processes. Those downstream processes include conceptual interpretation, grasping and object use, navigating and orienting in an environment, physical reasoning about the world, and inferring future actions and the inner mental states of agents. The core argument of this chapter is that innately constrained connectivity between occipito-temporal areas and other regions of the brain is the basis for the emergence of neural specificity for a limited number of semantic domains in the brain.
Affiliation(s)
- Bradford Z Mahon
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States.
31
Impact of a Vibrotactile Belt on Emotionally Challenging Everyday Situations of the Blind. SENSORS 2021; 21:s21217384. [PMID: 34770689 PMCID: PMC8587958 DOI: 10.3390/s21217384] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Revised: 10/31/2021] [Accepted: 11/03/2021] [Indexed: 11/16/2022]
Abstract
Spatial orientation and navigation depend primarily on vision. Blind people lack this critical source of information. To facilitate wayfinding and to increase the feeling of safety for these people, the "feelSpace belt" was developed. The belt signals magnetic north as a fixed reference frame via vibrotactile stimulation. This study investigates the effect of the belt on typical orientation and navigation tasks and evaluates the emotional impact. Eleven blind subjects wore the belt daily for seven weeks. Before, during and after the study period, they filled in questionnaires to document their experiences. A small sub-group of the subjects took part in behavioural experiments before and after four weeks of training, i.e., a straight-line walking task to evaluate the belt's effect on keeping a straight heading, an angular rotation task to examine effects on egocentric orientation, and a triangle completion navigation task to test the ability to take shortcuts. The belt reduced subjective discomfort and increased confidence during navigation. Additionally, the participants felt safer wearing the belt in various outdoor situations. Furthermore, the behavioural tasks point towards an intuitive comprehension of the belt. Altogether, the blind participants benefited from the vibrotactile belt as an assistive technology in challenging everyday situations.
32
Zhe X, Chen L, Zhang D, Tang M, Gao J, Ai K, Liu W, Lei X, Zhang X. Cortical Areas Associated With Multisensory Integration Showing Altered Morphology and Functional Connectivity in Relation to Reduced Life Quality in Vestibular Migraine. Front Hum Neurosci 2021; 15:717130. [PMID: 34483869 PMCID: PMC8415788 DOI: 10.3389/fnhum.2021.717130] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2021] [Accepted: 07/26/2021] [Indexed: 01/21/2023] Open
Abstract
Background: Increasing evidence suggests that the temporal and parietal lobes are associated with multisensory integration and vestibular migraine. However, structural and functional connectivity (FC) changes in the temporal and parietal lobes related to vestibular migraine need further investigation. Methods: Twenty-five patients with vestibular migraine (VM) and 27 age- and sex-matched healthy controls participated in this study. Participants completed standardized questionnaires assessing migraine- and vertigo-related clinical features. Cerebral cortical characteristics [i.e., thickness (CT), fractal dimension (FD), sulcus depth (SD), and the gyrification index (GI)] were evaluated using the automated Computational Anatomy Toolbox (CAT12). Regions with significant differences were used as seeds in a comparison of resting-state FC conducted with DPABI. The relationship between changes in cortical characteristics or FC and clinical features was also analyzed in the patients with VM. Results: Relative to controls, patients with VM showed significantly thinner CT in the bilateral inferior temporal gyrus, left middle temporal gyrus, and right superior parietal lobule. A shallower SD was observed in the right superior and inferior parietal lobules. FD and GI did not differ significantly between the two groups. A negative correlation was found between CT in the right inferior temporal gyrus, as well as the left middle temporal gyrus, and the Dizziness Handicap Inventory (DHI) score in patients with VM. Furthermore, patients with VM exhibited weaker FC between the left inferior/middle temporal gyrus and the left medial superior frontal gyrus and supplementary motor area. Conclusion: Our data revealed cortical structural and resting-state FC abnormalities associated with multisensory integration, which contribute to a lower quality of life. These observations suggest a role for multisensory integration in the pathophysiology of VM. Future research should use task-based fMRI to measure multisensory integration directly.
Affiliation(s)
- Xia Zhe
- Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
| | - Li Chen
- Department of Neurology, Shaanxi Provincial People's Hospital, Xi'an, China
| | - Dongsheng Zhang
- Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
| | - Min Tang
- Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
| | - Jie Gao
- Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
| | - Kai Ai
- Department of Clinical Science, Philips Healthcare, Xi'an, China
| | - Weijun Liu
- Consumables and Reagents Department, Shaanxi Provincial People's Hospital, Xi'an, China
| | - Xiaoyan Lei
- Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
| | - Xiaoling Zhang
- Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
| |
Collapse
|
33
|
Sakai H, Ueda S, Ueno K, Kumada T. Neuroplastic Reorganization Induced by Sensory Augmentation for Self-Localization During Locomotion. FRONTIERS IN NEUROERGONOMICS 2021; 2:691993. [PMID: 38235242 PMCID: PMC10790880 DOI: 10.3389/fnrgo.2021.691993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 07/21/2021] [Indexed: 01/19/2024]
Abstract
Sensory skills can be augmented through training and technological support. This process is underpinned by neural plasticity in the brain. We previously demonstrated that auditory-based sensory augmentation can be used to assist self-localization during locomotion. However, the neural mechanisms underlying this phenomenon remain unclear. Here, by using functional magnetic resonance imaging, we aimed to identify the neuroplastic reorganization induced by sensory augmentation training for self-localization during locomotion. We compared activation in response to auditory cues for self-localization before, the day after, and 1 month after 8 days of sensory augmentation training in a simulated driving environment. Self-localization accuracy improved after sensory augmentation training, compared with the control (normal driving) condition; importantly, sensory augmentation training resulted in auditory responses not only in temporal auditory areas but also in higher-order somatosensory areas extending to the supramarginal gyrus and the parietal operculum. This sensory reorganization had disappeared by 1 month after the end of the training. These results suggest that the use of auditory cues for self-localization during locomotion relies on multimodality in higher-order somatosensory areas, despite substantial evidence that information for self-localization during driving is estimated from visual cues on the proximal part of the road. Our findings imply that the involvement of higher-order somatosensory, rather than visual, areas is crucial for acquiring augmented sensory skills for self-localization during locomotion.
Collapse
Affiliation(s)
- Hiroyuki Sakai
- Human Science Laboratory, Toyota Central R&D Laboratories, Inc., Tokyo, Japan
| | - Sayako Ueda
- TOYOTA Collaboration Center, RIKEN Center for Brain Science, Wako, Japan
| | - Kenichi Ueno
- Support Unit for Functional Magnetic Resonance Imaging, RIKEN Center for Brain Science, Wako, Japan
| | | |
Collapse
|
34
|
Netzer O, Heimler B, Shur A, Behor T, Amedi A. Backward spatial perception can be augmented through a novel visual-to-auditory sensory substitution algorithm. Sci Rep 2021; 11:11944. [PMID: 34099756 PMCID: PMC8184900 DOI: 10.1038/s41598-021-88595-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Accepted: 02/08/2021] [Indexed: 11/23/2022] Open
Abstract
Can humans extend and augment their natural perceptions during adulthood? Here, we address this fascinating question by investigating the extent to which it is possible to successfully augment visual spatial perception to include the backward spatial field (a region where humans are naturally blind) via other sensory modalities (i.e., audition). We thus developed a sensory-substitution algorithm, the “Topo-Speech,” which conveys the identity of objects through language and their exact locations via vocal-sound manipulations, namely two key features of visual spatial perception. Using two different groups of blindfolded sighted participants, we tested the efficacy of this algorithm in conveying the location of objects in the forward or backward spatial fields following ~10 min of training. Results showed that blindfolded sighted adults successfully used the Topo-Speech to locate objects on a 3 × 3 grid either positioned in front of them (forward condition) or behind their back (backward condition). Crucially, performance in the two conditions was entirely comparable. This suggests that novel spatial sensory information conveyed via our existing sensory systems can be successfully encoded to extend/augment human perceptions. The implications of these results are discussed in relation to spatial perception, sensory augmentation, and sensory rehabilitation.
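As a rough illustration of the kind of location-to-sound mapping the abstract describes — object identity carried by a spoken word, location by vocal-sound manipulation — here is a hypothetical parameterization of a 3 × 3 grid cell. The actual Topo-Speech parameters are not given in the abstract, so the pitch scale and pan convention below are assumptions:

```python
def topo_speech_params(row, col, n_rows=3, n_cols=3):
    """Map a grid cell to illustrative audio parameters: higher rows get
    higher pitch, columns map left-to-right onto stereo pan. The spoken
    word itself (not modeled here) carries the object's identity."""
    if not (0 <= row < n_rows and 0 <= col < n_cols):
        raise ValueError("cell outside grid")
    base_hz, step = 110.0, 1.5                        # assumed pitch scale
    pitch_hz = base_hz * step ** (n_rows - 1 - row)   # top row = highest pitch
    pan = -1.0 + 2.0 * col / (n_cols - 1)             # -1 = left, +1 = right
    return {"pitch_hz": round(pitch_hz, 1), "pan": pan}
```

For example, the top-left cell would be rendered with the highest pitch fully panned left, and the bottom-right cell with the lowest pitch fully panned right.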
Collapse
Affiliation(s)
- Ophir Netzer
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Benedetta Heimler
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzeliya, Israel; Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
| | - Amir Shur
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Tomer Behor
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center Herzliya, Herzeliya, Israel; Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
| |
Collapse
|
35
|
Buchs G, Haimler B, Kerem M, Maidenbaum S, Braun L, Amedi A. A self-training program for sensory substitution devices. PLoS One 2021; 16:e0250281. [PMID: 33905446 PMCID: PMC8078811 DOI: 10.1371/journal.pone.0250281] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Accepted: 04/01/2021] [Indexed: 11/30/2022] Open
Abstract
Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to the adoption of SSDs in everyday life by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof-of-concept for the efficacy of an online self-training program developed for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory vs. unisensory as well as perceptual vs. descriptive feedback approaches. To these aims, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups, differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual simultaneous and interleaved, and a control group that had no training. At baseline, before any EyeMusic training, participants' identification of SSD objects was significantly above chance, highlighting the algorithm's intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend for an advantage of multisensory over unisensory feedback strategies, while no trend emerged for perceptual vs. descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences. Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages unisensory training, which can easily be implemented also for blind and visually impaired individuals, may suffice. Together, these findings will potentially boost the use of SSDs for rehabilitation.
Collapse
Affiliation(s)
- Galit Buchs
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
- * E-mail: (AA); (GB)
| | - Benedetta Haimler
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Center of Advanced Technologies in Rehabilitation (CATR), The Chaim Sheba Medical Center, Ramat Gan, Israel
| | - Menachem Kerem
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
| | - Shachar Maidenbaum
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Department of Biomedical Engineering, Ben Gurion University, Beersheba, Israel
| | - Liraz Braun
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Hebrew University of Jerusalem, Jerusalem, Israel
| | - Amir Amedi
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- * E-mail: (AA); (GB)
| |
Collapse
|
36
|
Hofstetter S, Zuiderbaan W, Heimler B, Dumoulin SO, Amedi A. Topographic maps and neural tuning for sensory substitution dimensions learned in adulthood in a congenital blind subject. Neuroimage 2021; 235:118029. [PMID: 33836269 DOI: 10.1016/j.neuroimage.2021.118029] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Revised: 01/18/2021] [Accepted: 03/30/2021] [Indexed: 01/28/2023] Open
Abstract
Topographic maps, a key principle of brain organization, emerge during development. It remains unclear, however, whether topographic maps can represent a new sensory experience learned in adulthood. MaMe, a congenitally blind individual, has been extensively trained in adulthood for perception of a 2D auditory-space (soundscape) where the y- and x-axes are represented by pitch and time, respectively. Using population receptive field mapping we found neural populations tuned topographically to pitch, not only in the auditory cortices but also in the parietal and occipito-temporal cortices. Topographic neural tuning to time was revealed in the parietal and occipito-temporal cortices. Some of these maps were found to represent both axes concurrently, enabling MaMe to represent unique locations in the soundscape space. This case study provides proof of concept for the existence of topographic maps tuned to the newly learned soundscape dimensions. These results suggest that topographic maps can be adapted or recycled in adulthood to represent novel sensory experiences.
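Population receptive field (pRF) mapping, the method used above, estimates for each voxel a preferred stimulus value and a tuning width along a dimension (here pitch or time). A toy version of that model-fitting step is sketched below; the real study fitted full pRF models to BOLD time courses, so the grid search, names, and values here are purely illustrative:

```python
import math

def predicted_response(stim_pitches, center, width):
    """Gaussian tuning-curve response to a sequence of presented pitches."""
    return [math.exp(-(p - center) ** 2 / (2 * width ** 2))
            for p in stim_pitches]

def fit_tuning(stim_pitches, observed, centers, widths):
    """Grid-search the (center, width) pair whose predicted time course
    best matches the observed response (least squares)."""
    def sse(pred):
        return sum((o, q)[0] ** 0 and (o - q) ** 2 for o, q in zip(observed, pred))
    def cost(cw):
        pred = predicted_response(stim_pitches, *cw)
        return sum((o - q) ** 2 for o, q in zip(observed, pred))
    return min(((c, w) for c in centers for w in widths), key=cost)
```

Repeating this fit per voxel, and checking whether preferred values vary smoothly across the cortical surface, is what reveals a topographic map.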
Collapse
Affiliation(s)
- Shir Hofstetter
- Spinoza Centre for Neuroimaging, Meibergdreef 75, Amsterdam, BK 1105 Netherlands.
| | - Wietske Zuiderbaan
- Spinoza Centre for Neuroimaging, Meibergdreef 75, Amsterdam, BK 1105 Netherlands
| | - Benedetta Heimler
- The Baruch Ivcher Institute for Brain, Mind & Technology, School of Psychology, Interdisciplinary Center (IDC) Herzliya, P.O. Box 167, Herzliya 46150, Israel; Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
| | - Serge O Dumoulin
- Spinoza Centre for Neuroimaging, Meibergdreef 75, Amsterdam, BK 1105 Netherlands; Department of Experimental and Applied Psychology, VU University Amsterdam, Amsterdam, BT 1181, Netherlands; Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, CS 3584, Netherlands.
| | - Amir Amedi
- The Baruch Ivcher Institute for Brain, Mind & Technology, School of Psychology, Interdisciplinary Center (IDC) Herzliya, P.O. Box 167, Herzliya 46150, Israel.
| |
Collapse
|
37
|
Paré S, Bleau M, Djerourou I, Malotaux V, Kupers R, Ptito M. Spatial navigation with horizontally spatialized sounds in early and late blind individuals. PLoS One 2021; 16:e0247448. [PMID: 33635892 PMCID: PMC7909643 DOI: 10.1371/journal.pone.0247448] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Accepted: 02/07/2021] [Indexed: 12/02/2022] Open
Abstract
Blind individuals often report difficulties navigating and detecting objects placed outside their peripersonal space. Although classical sensory substitution devices could be helpful in this respect, these devices often produce a complex signal that requires intensive training to interpret. New devices that provide a less complex output signal are therefore needed. Here, we evaluate a smartphone-based sensory substitution device that offers navigation guidance based on strictly spatial cues in the form of horizontally spatialized sounds. The system uses multiple sensors either to detect obstacles at a distance directly in front of the user or to create a 3D map of the environment (detection and avoidance modes, respectively), and informs the user with auditory feedback. We tested 12 early blind, 11 late blind, and 24 blindfolded sighted participants for their ability to detect obstacles and to navigate an obstacle course. The three groups did not differ in the number of objects detected and avoided. However, early blind and late blind participants navigated the obstacle course faster than their sighted counterparts. These results are consistent with previous research on sensory substitution showing that vision can be replaced by other senses to improve performance in a wide variety of tasks in blind individuals. This study offers new evidence that sensory substitution devices based on horizontally spatialized sounds can be used as a navigation tool with a minimal amount of training.
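"Horizontally spatialized sounds" means an obstacle's azimuth is rendered as a left/right audio cue. The device's actual spatialization algorithm is not specified in the abstract; a generic constant-power stereo panning scheme, shown here as an assumed stand-in, is one common way to do it:

```python
import math

def pan_gains(azimuth_deg, field=90.0):
    """Constant-power stereo panning for an obstacle's horizontal angle:
    -field..+field degrees maps to fully-left..fully-right, with
    left^2 + right^2 == 1 so perceived loudness stays constant."""
    a = max(-field, min(field, azimuth_deg))     # clamp to the field of view
    theta = (a / field + 1.0) * math.pi / 4.0    # 0 .. pi/2
    return math.cos(theta), math.sin(theta)      # (left, right) gains
```

An obstacle straight ahead then sounds equally in both ears, while one at the edge of the field is heard in a single ear.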
Collapse
Affiliation(s)
- Samuel Paré
- École d’Optométrie, Université de Montréal, Québec, Canada
| | - Maxime Bleau
- École d’Optométrie, Université de Montréal, Québec, Canada
| | | | - Vincent Malotaux
- Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
| | - Ron Kupers
- École d’Optométrie, Université de Montréal, Québec, Canada
- Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Institute of Neuroscience and Pharmacology (INF), University of Copenhagen, Copenhagen, Denmark
| | - Maurice Ptito
- École d’Optométrie, Université de Montréal, Québec, Canada
- Institute of Neuroscience and Pharmacology (INF), University of Copenhagen, Copenhagen, Denmark
- * E-mail:
| |
Collapse
|
38
|
Ptito M, Bleau M, Djerourou I, Paré S, Schneider FC, Chebat DR. Brain-Machine Interfaces to Assist the Blind. Front Hum Neurosci 2021; 15:638887. [PMID: 33633557 PMCID: PMC7901898 DOI: 10.3389/fnhum.2021.638887] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Accepted: 01/19/2021] [Indexed: 12/31/2022] Open
Abstract
The loss or absence of vision is probably one of the most incapacitating events that can befall a human being. The importance of vision for humans is also reflected in brain anatomy, as approximately one third of the human brain is devoted to vision. It is therefore unsurprising that throughout history many attempts have been undertaken to develop devices aimed at substituting for a missing visual capacity. In this review, we present two concepts that have been prevalent over the last two decades. The first concept is sensory substitution, which refers to the use of another sensory modality to perform a task that is normally primarily subserved by the lost sense. The second concept is cross-modal plasticity, which occurs when loss of input in one sensory modality leads to reorganization in the brain's representation of other sensory modalities. Both phenomena are training-dependent. We also briefly describe the history of blindness from ancient times to modernity, and then proceed to address the means that have been used to help blind individuals, with an emphasis on modern technologies, both invasive (various types of surgical implants) and non-invasive devices. With the advent of brain imaging, it has become possible to peer into the neural substrates of sensory substitution and highlight the magnitude of the plastic processes that lead to a rewired brain. Finally, we address the important question of the value and practicality of the available technologies and future directions.
Collapse
Affiliation(s)
- Maurice Ptito
- École d’Optométrie, Université de Montréal, Montréal, QC, Canada
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
| | - Maxime Bleau
- École d’Optométrie, Université de Montréal, Montréal, QC, Canada
| | - Ismaël Djerourou
- École d’Optométrie, Université de Montréal, Montréal, QC, Canada
| | - Samuel Paré
- École d’Optométrie, Université de Montréal, Montréal, QC, Canada
| | - Fabien C. Schneider
- TAPE EA7423 University of Lyon-Saint Etienne, Saint Etienne, France
- Neuroradiology Unit, University Hospital of Saint-Etienne, Saint-Etienne, France
| | - Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel, Israel
| |
Collapse
|
39
|
Zilbershtain-Kra Y, Graffi S, Ahissar E, Arieli A. Active sensory substitution allows fast learning via effective motor-sensory strategies. iScience 2021; 24:101918. [PMID: 33392481 PMCID: PMC7773576 DOI: 10.1016/j.isci.2020.101918] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Revised: 10/25/2020] [Accepted: 12/07/2020] [Indexed: 11/28/2022] Open
Abstract
We examined the development of new sensing abilities in adults by training participants to perceive remote objects through their fingers. Using an Active-Sensing based sensory Substitution device (ASenSub), participants quickly learned to perceive via the new modality and preserved their high performance for more than 20 months. Both sighted and blind participants exhibited almost complete transfer of performance from 2D images to novel 3D physical objects. Perceptual accuracy and speed using the ASenSub were, on average, 300% and 600% better than previous reports for 2D images and 3D objects. This improvement is attributed to the ability of the participants to employ their own motor-sensory strategies. Sighted participants' dominant strategy was based on motor-sensory convergence on the most informative regions of objects, similar to fixation patterns in vision. Congenitally blind participants did not show such a tendency, and many of their exploratory procedures resembled those observed with natural touch.
Collapse
Affiliation(s)
- Yael Zilbershtain-Kra
- The Department of Neurobiology, Weizmann Institute of Science, 234 Herzl Street, Rehovot 76100, Israel
| | - Shmuel Graffi
- The Department of Neurobiology, Weizmann Institute of Science, 234 Herzl Street, Rehovot 76100, Israel
| | - Ehud Ahissar
- The Department of Neurobiology, Weizmann Institute of Science, 234 Herzl Street, Rehovot 76100, Israel
| | - Amos Arieli
- The Department of Neurobiology, Weizmann Institute of Science, 234 Herzl Street, Rehovot 76100, Israel
| |
Collapse
|
40
|
Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021; 2:tgab002. [PMID: 33718874 PMCID: PMC7941256 DOI: 10.1093/texcom/tgab002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Revised: 12/31/2020] [Accepted: 01/06/2021] [Indexed: 01/23/2023] Open
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
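The activation likelihood estimation (ALE) procedure named above has a well-known core: each experiment's reported foci are blurred into a modeled-activation map, and maps are combined across experiments as ALE = 1 − Π(1 − MA). The sketch below shows that combination on a 1D grid purely for illustration (real ALE operates on 3D brain volumes with empirically derived kernel widths):

```python
import math

def ale_map(foci_lists, grid, sigma=5.0):
    """Toy 1D activation likelihood estimate: each experiment's foci
    yield a modeled-activation (MA) map (max over Gaussians), and the
    per-experiment MA maps combine as ALE = 1 - prod(1 - MA)."""
    def gauss(d):
        return math.exp(-d * d / (2 * sigma * sigma))
    ma_maps = [[max(gauss(x - f) for f in foci) for x in grid]
               for foci in foci_lists]
    return [1.0 - math.prod(1.0 - ma[i] for ma in ma_maps)
            for i in range(len(grid))]
```

Locations reported by many experiments accumulate likelihood, which is why convergent "hubs" stand out against isolated foci.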
Collapse
Affiliation(s)
- Matt Csonka
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
| | - Nadia Mardmomen
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
| | - Paula J Webster
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
| | - Julie A Brefczynski-Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
| | - Chris Frum
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
| | - James W Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
| |
Collapse
|
41
|
Perrotta MV, Asgeirsdottir T, Eagleman DM. Deciphering Sounds Through Patterns of Vibration on the Skin. Neuroscience 2021; 458:77-86. [PMID: 33465416 DOI: 10.1016/j.neuroscience.2021.01.008] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2020] [Revised: 12/04/2020] [Accepted: 01/05/2021] [Indexed: 11/26/2022]
Abstract
Sensory substitution refers to the concept of feeding information to the brain via an atypical sensory pathway. Here we examined the degree to which participants (deaf and hard of hearing) can learn to identify sounds that are algorithmically translated into spatiotemporal patterns of vibration on the skin of the wrist. In a three-alternative forced-choice task, participants could determine the identity of up to 95%, and on average 70%, of the stimuli simply from the spatial pattern of vibrations on the skin. Performance improved significantly over the course of 1 month. Younger participants tended to score better, possibly because of higher brain plasticity, more sensitive skin, or better skills at playing digital games. Similar results were obtained with pattern discrimination, in which a pattern representing the sound of one word was presented to the skin, followed by that of a second word, and participants answered whether the words were the same or different. With minimal-difference pairs (distinguished by only one phoneme, such as "house" and "mouse"), the best performance was 83% (average of 62%), while with non-minimal pairs (such as "house" and "zip") the best performance was 100% (average of 70%). Collectively, these results demonstrate that participants are capable of using the channel of the skin to interpret auditory stimuli, opening the way for low-cost, wearable sensory substitution for the deaf and hard-of-hearing communities.
Collapse
Affiliation(s)
| | | | - David M Eagleman
- Neosensory, 4 West 4th Street, Suite 301, San Mateo, CA 94402, USA; Department of Psychiatry and Behavioral Sciences, Stanford University, 401 Quarry Road, Stanford, CA 94304, USA.
| |
Collapse
|
42
|
Matuszewski J, Kossowski B, Bola Ł, Banaszkiewicz A, Paplińska M, Gyger L, Kherif F, Szwed M, Frackowiak RS, Jednoróg K, Draganski B, Marchewka A. Brain plasticity dynamics during tactile Braille learning in sighted subjects: Multi-contrast MRI approach. Neuroimage 2020; 227:117613. [PMID: 33307223 DOI: 10.1016/j.neuroimage.2020.117613] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 11/20/2020] [Accepted: 11/29/2020] [Indexed: 01/11/2023] Open
Abstract
A growing body of empirical evidence supports the notion of diverse neurobiological processes underlying learning-induced plasticity changes in the human brain. Open questions remain about how brain plasticity depends on cognitive task complexity, how it supports interactions between brain systems, and what temporal and spatial trajectory it follows. We investigated brain and behavioural changes in sighted adults during 8 months of training in tactile Braille reading, monitoring brain structure and function at 5 different time points. We adopted a novel multivariate approach that includes behavioural data and specific MRI protocols sensitive to tissue properties to assess local functional, structural, and myelin changes over time. Our results show that while the reading network, located in the ventral occipitotemporal cortex, rapidly adapts to tactile input, sensory areas show changes in grey matter volume and intra-cortical myelin at different times. This approach has allowed us to examine and describe differentially, at a mesoscopic level, the neuroplastic mechanisms underlying complex cognitive systems and their (sensory) inputs and (motor) outputs.
Collapse
Affiliation(s)
- Jacek Matuszewski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland.
| | - Bartosz Kossowski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - Łukasz Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland; Institute of Psychology, Jagiellonian University, Krakow, Poland
| | - Anna Banaszkiewicz
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | | | - Lucien Gyger
- LREN, Department for Clinical Neurosciences, CHUV, University of Lausanne, Lausanne, Switzerland
| | - Ferath Kherif
- LREN, Department for Clinical Neurosciences, CHUV, University of Lausanne, Lausanne, Switzerland
| | - Marcin Szwed
- Institute of Psychology, Jagiellonian University, Krakow, Poland
| | | | - Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - Bogdan Draganski
- LREN, Department for Clinical Neurosciences, CHUV, University of Lausanne, Lausanne, Switzerland; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland.
| |
Collapse
|
43
|
Yin T, Sun G, Tian Z, Liu M, Gao Y, Dong M, Wu F, Li Z, Liang F, Zeng F, Lan L. The Spontaneous Activity Pattern of the Middle Occipital Gyrus Predicts the Clinical Efficacy of Acupuncture Treatment for Migraine Without Aura. Front Neurol 2020; 11:588207. [PMID: 33240209 PMCID: PMC7680874 DOI: 10.3389/fneur.2020.588207] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Accepted: 09/30/2020] [Indexed: 12/11/2022] Open
Abstract
The purpose of the present study was to explore whether, and to what extent, neuroimaging markers could predict the relief of symptoms in patients with migraine without aura (MWoA) following a 4-week acupuncture treatment period. In study 1, multivariate pattern analysis was applied to classify 40 patients with MWoA versus 40 healthy subjects (HS) based on z-transformed amplitude of low-frequency fluctuation (zALFF) maps. In study 2, the meaningful classifying features were selected as predictors, and support vector regression models were constructed to predict the clinical efficacy of acupuncture in reducing the frequency of migraine attacks and headache intensity in the 40 patients with MWoA. In study 3, a region-of-interest comparison between the pre- and post-treatment zALFF maps was conducted in 33 patients with MWoA to assess the changes in predicting features after the acupuncture intervention. The zALFF values of the foci in the bilateral middle occipital gyrus, right fusiform gyrus, left insula, and left superior cerebellum discriminated patients with MWoA from HS with higher than 70% accuracy. The zALFF values of the clusters in the right and left middle occipital gyrus effectively predicted the relief of headache intensity (R² = 0.38 ± 0.059, mean squared error = 2.626 ± 0.325) and the frequency of migraine attacks (R² = 0.284 ± 0.072, mean squared error = 20.535 ± 2.701) after the 4-week acupuncture treatment period. Moreover, the zALFF values of these two clusters were both significantly reduced after treatment. The present study demonstrated the feasibility and validity of applying machine learning technologies and individual patterns of cerebral spontaneous activity to predict acupuncture treatment outcomes in patients with MWoA. The data provided a quantitative benchmark for selecting acupuncture for MWoA.
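The prediction step above fits a model on baseline brain features (zALFF values in the two middle occipital clusters) to predict post-treatment symptom relief; the study used support vector regression. As a dependency-free stand-in that shows the shape of such a predict-from-features pipeline, here is a simple nearest-neighbour regressor — not the study's method, and all feature values and names in the example are invented:

```python
def predict_relief(train_feats, train_outcomes, new_feats, k=3):
    """Predict a clinical-outcome score for a new patient as the mean
    outcome of the k training patients with the closest baseline
    feature vectors (a k-NN stand-in for the SVR used in the study)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(range(len(train_feats)),
                    key=lambda i: dist(train_feats[i], new_feats))
    nearest = ranked[:k]
    return sum(train_outcomes[i] for i in nearest) / k
```

In practice such a model would be evaluated with cross-validation, which is how per-fold metrics like R² = 0.38 ± 0.059 arise.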
Collapse
Affiliation(s)
- Tao Yin
- Acupuncture and Tuina School/The 3rd Teaching Hospital, Chengdu University of Traditional Chinese Medicine, Chengdu, China; Acupuncture and Brain Science Research Center, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Guojuan Sun
- Department of Gynecology, Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Zilei Tian
- Acupuncture and Tuina School/The 3rd Teaching Hospital, Chengdu University of Traditional Chinese Medicine, Chengdu, China; Acupuncture and Brain Science Research Center, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Mailan Liu
- College of Acupuncture and Moxibustion and Tui-na, Hunan University of Chinese Medicine, Changsha, China
| | - Yujie Gao
- Traditional Chinese Medicine School, Ningxia Medical University, Yinchuan, China
| | - Mingkai Dong
- Department of Acupuncture and Moxibustion, Xinjin Hospital of Traditional Chinese Medicine, Chengdu, China
| | - Feng Wu
- Department of Acupuncture and Moxibustion, Changsha Hospital of Traditional Chinese Medicine, Changsha, China
| | - Zhengjie Li
- Acupuncture and Tuina School/The 3rd Teaching Hospital, Chengdu University of Traditional Chinese Medicine, Chengdu, China.,Acupuncture and Brain Science Research Center, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| | - Fanrong Liang
- Acupuncture and Tuina School/The 3rd Teaching Hospital, Chengdu University of Traditional Chinese Medicine, Chengdu, China.,Key Laboratory of Sichuan Province for Acupuncture and Chronobiology, Chengdu, China
| | - Fang Zeng
- Acupuncture and Tuina School/The 3rd Teaching Hospital, Chengdu University of Traditional Chinese Medicine, Chengdu, China.,Acupuncture and Brain Science Research Center, Chengdu University of Traditional Chinese Medicine, Chengdu, China.,Key Laboratory of Sichuan Province for Acupuncture and Chronobiology, Chengdu, China
| | - Lei Lan
- Acupuncture and Tuina School/The 3rd Teaching Hospital, Chengdu University of Traditional Chinese Medicine, Chengdu, China.,Acupuncture and Brain Science Research Center, Chengdu University of Traditional Chinese Medicine, Chengdu, China
| |
|
44
|
Cognitive and Affective Assessment of Navigation and Mobility Tasks for the Visually Impaired via Electroencephalography and Behavioral Signals. SENSORS 2020; 20:s20205821. [PMID: 33076251 PMCID: PMC7602506 DOI: 10.3390/s20205821] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 10/12/2020] [Accepted: 10/13/2020] [Indexed: 11/25/2022]
Abstract
This paper presents an assessment of cognitive load (as an effective real-time index of task difficulty) and the level of brain activation during an experiment in which eight visually impaired subjects performed two types of tasks while using the white cane and the Sound of Vision assistive device with three types of sensory input: audio, haptic, and multimodal (audio and haptic simultaneously). The first task was to identify object properties and the second to navigate and avoid obstacles in both virtual-environment and real-world settings. The results showed that the haptic stimuli were less intuitive than the audio ones and that navigation with the Sound of Vision device increased cognitive load and working-memory demand. Visual cortex asymmetry was lower under multimodal stimulation than under separate stimulation (audio or haptic). There was no correlation between visual cortical activity and the number of collisions during navigation, regardless of the type of navigation or sensory input. The visual cortex was activated when using the device, but only for the late-blind users. For all subjects, navigation with the Sound of Vision device induced a low negative valence, in contrast with white cane navigation.
|
45
|
Ratan Murty NA, Teng S, Beeler D, Mynick A, Oliva A, Kanwisher N. Visual experience is not necessary for the development of face-selectivity in the lateral fusiform gyrus. Proc Natl Acad Sci U S A 2020; 117:23011-23020. [PMID: 32839334 PMCID: PMC7502773 DOI: 10.1073/pnas.2004607117] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023] Open
Abstract
The fusiform face area responds selectively to faces and is causally involved in face perception. How does face-selectivity in the fusiform arise in development, and why does it develop so systematically in the same location across individuals? Preferential cortical responses to faces develop early in infancy, yet evidence is conflicting on the central question of whether visual experience with faces is necessary. Here, we revisit this question by scanning congenitally blind individuals with fMRI while they haptically explored 3D-printed faces and other stimuli. We found robust face-selective responses in the lateral fusiform gyrus of individual blind participants during haptic exploration of stimuli, indicating that neither visual experience with faces nor fovea-biased inputs is necessary for face-selectivity to arise in the lateral fusiform gyrus. Our results instead suggest a role for long-range connectivity in specifying the location of face-selectivity in the human brain.
Affiliation(s)
- N Apurva Ratan Murty
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Santani Teng
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA 94115
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - David Beeler
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Anna Mynick
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
| |
|
46
|
Heimler B, Amedi A. Are critical periods reversible in the adult brain? Insights on cortical specializations based on sensory deprivation studies. Neurosci Biobehav Rev 2020; 116:494-507. [DOI: 10.1016/j.neubiorev.2020.06.034] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Revised: 06/07/2020] [Accepted: 06/25/2020] [Indexed: 02/06/2023]
|
47
|
Scurry AN, Huber E, Matera C, Jiang F. Increased Right Posterior STS Recruitment Without Enhanced Directional-Tuning During Tactile Motion Processing in Early Deaf Individuals. Front Neurosci 2020; 14:864. [PMID: 32982667 PMCID: PMC7477335 DOI: 10.3389/fnins.2020.00864] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 07/24/2020] [Indexed: 01/19/2023] Open
Abstract
Upon early sensory deprivation, the remaining modalities often exhibit cross-modal reorganization, such as primary auditory cortex (PAC) recruitment for visual motion processing in early deafness (ED). Previous studies of compensatory plasticity in ED individuals have given less attention to tactile motion processing. In the current study, we aimed to examine the effects of early auditory deprivation on tactile motion processing. We simulated four directions of tactile motion on each participant's right index finger and characterized their tactile motion responses and directional-tuning profiles using population receptive field analysis. Similar tactile motion responses were found within the primary (SI) and secondary (SII) somatosensory cortices of the ED and hearing control groups, whereas ED individuals showed a reduced proportion of voxels with directionally tuned responses in SI contralateral to stimulation. There were also significant but minimal responses to tactile motion within PAC for both groups. While early deaf individuals showed significantly larger recruitment of the right posterior superior temporal sulcus (pSTS) region upon tactile motion stimulation, there was no evidence of enhanced directional tuning. Greater recruitment of the right pSTS region is consistent with prior studies reporting reorganization of multimodal areas due to sensory deprivation. The absence of increased directional tuning within the right pSTS region may suggest a more distributed population of neurons dedicated to processing tactile spatial information as a consequence of early auditory deprivation.
Affiliation(s)
- Alexandra N Scurry
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
| | - Elizabeth Huber
- Department of Speech and Hearing Sciences, Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, United States
| | - Courtney Matera
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
| | - Fang Jiang
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
| |
|
48
|
Chebat DR, Schneider FC, Ptito M. Spatial Competence and Brain Plasticity in Congenital Blindness via Sensory Substitution Devices. Front Neurosci 2020; 14:815. [PMID: 32848575 PMCID: PMC7406645 DOI: 10.3389/fnins.2020.00815] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Accepted: 07/10/2020] [Indexed: 12/22/2022] Open
Abstract
In congenital blindness (CB), tactile and auditory information can be reinterpreted by the brain to compensate for visual information through mechanisms of brain plasticity triggered by training. Visual deprivation does not cause a cognitive spatial deficit, since blind people are able to acquire spatial knowledge about the environment. However, this spatial competence takes longer to achieve and is eventually reached through training-induced plasticity. Congenitally blind individuals can further improve their spatial skills with the extensive use of sensory substitution devices (SSDs), either visual-to-tactile or visual-to-auditory. Using a combination of functional and anatomical neuroimaging techniques, our recent work has demonstrated the impact of spatial training with both visual-to-tactile and visual-to-auditory SSDs on brain plasticity, cortical processing, and the achievement of certain forms of spatial competence. The comparison of performance between CB and sighted people using several different sensory substitution devices in perceptual and sensory-motor tasks uncovered the striking ability of the brain to rewire itself during perceptual learning and to interpret novel sensory information, even during adulthood. We discuss here the implications of these findings for helping blind people in navigation tasks and for increasing their accessibility to both real and virtual environments.
Affiliation(s)
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel, Israel
| | - Fabien C. Schneider
- Department of Radiology, University of Lyon, Saint-Etienne, France
- Neuroradiology Unit, University Hospital of Saint-Etienne, Saint-Etienne, France
| | - Maurice Ptito
- BRAIN Lab, Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Chaire de Recherche Harland Sanders en Sciences de la Vision, École d’Optométrie, Université de Montréal, Montréal, QC, Canada
| |
|
49
|
Jicol C, Lloyd-Esenkaya T, Proulx MJ, Lange-Smith S, Scheller M, O'Neill E, Petrini K. Efficiency of Sensory Substitution Devices Alone and in Combination With Self-Motion for Spatial Navigation in Sighted and Visually Impaired. Front Psychol 2020; 11:1443. [PMID: 32754082 PMCID: PMC7381305 DOI: 10.3389/fpsyg.2020.01443] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2020] [Accepted: 05/29/2020] [Indexed: 11/13/2022] Open
Abstract
Human adults can optimally combine vision with self-motion to facilitate navigation. In the absence of visual input (e.g., dark environments and visual impairments), sensory substitution devices (SSDs), such as The vOICe or BrainPort, which translate visual information into auditory or tactile information, could be used to increase navigation precision when integrated together or with self-motion. In Experiment 1, we compared and assessed The vOICe and BrainPort together in an aerial-maps task performed by a group of sighted participants. In Experiment 2, we examined whether sighted individuals and a group of visually impaired (VI) individuals could benefit from using The vOICe, with and without self-motion, to accurately navigate a three-dimensional (3D) environment. In both studies, 3D motion-tracking data were used to determine the level of precision with which participants performed two different tasks (an egocentric and an allocentric task) under three different conditions (two unisensory conditions and one multisensory condition). In Experiment 1, we found no benefit of using the devices together. In Experiment 2, sighted performance with The vOICe was almost as good as that with self-motion despite a short training period, although we found no benefit (reduction in variability) of using The vOICe and self-motion in combination compared with the two in isolation. In contrast, the group of VI participants did benefit from combining The vOICe and self-motion despite the low number of trials. Finally, while both groups became more accurate in their use of The vOICe with increased trials, only the VI group showed an increased level of accuracy in the combined condition. Our findings highlight how exploiting non-visual multisensory integration to develop new assistive technologies could be key to helping blind and VI persons, especially given their difficulty in attaining allocentric information.
Affiliation(s)
- Crescent Jicol
- Department of Psychology, University of Bath, Bath, United Kingdom
| | | | - Michael J Proulx
- Department of Psychology, University of Bath, Bath, United Kingdom
| | - Simon Lange-Smith
- School of Sport and Exercise Sciences, Liverpool John Moores University, Liverpool, United Kingdom
| | - Meike Scheller
- Department of Psychology, University of Bath, Bath, United Kingdom
| | - Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, United Kingdom
| | - Karin Petrini
- Department of Psychology, University of Bath, Bath, United Kingdom
| |
|
50
|
Kirsch LP, Job X, Auvray M. Mixing up the Senses: Sensory Substitution Is Not a Form of Artificially Induced Synaesthesia. Multisens Res 2020; 34:297-322. [PMID: 33706280 DOI: 10.1163/22134808-bja10010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2020] [Accepted: 05/26/2020] [Indexed: 11/19/2022]
Abstract
Sensory Substitution Devices (SSDs) are typically used to restore functionality of a sensory modality that has been lost, like vision for the blind, by recruiting another sensory modality such as touch or audition. Sensory substitution has given rise to many debates in psychology, neuroscience and philosophy regarding the nature of experience when using SSDs. Questions first arose as to whether the experience of sensory substitution is represented by the substituted information, the substituting information, or a multisensory combination of the two. More recently, parallels have been drawn between sensory substitution and synaesthesia, a rare condition in which individuals involuntarily experience a percept in one sensory or cognitive pathway when another one is stimulated. Here, we explore the efficacy of understanding sensory substitution as a form of 'artificial synaesthesia'. We identify several problems with previous suggestions for a link between these two phenomena. Furthermore, we find that sensory substitution does not fulfil the essential criteria that characterise synaesthesia. We conclude that sensory substitution and synaesthesia are independent of each other and thus, the 'artificial synaesthesia' view of sensory substitution should be rejected.
Affiliation(s)
- Louise P Kirsch
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
| | - Xavier Job
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
| | - Malika Auvray
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
| |
|