1
Norman LJ, Hartley T, Thaler L. Changes in primary visual and auditory cortex of blind and sighted adults following 10 weeks of click-based echolocation training. Cereb Cortex 2024; 34:bhae239. PMID: 38897817; PMCID: PMC11186672; DOI: 10.1093/cercor/bhae239.
Abstract
Recent work suggests that the adult human brain is very adaptable when it comes to sensory processing. In this context, it has also been suggested that structural "blueprints" may fundamentally constrain neuroplastic change, e.g. in response to sensory deprivation. Here, we trained 12 blind participants and 14 sighted participants in echolocation over a 10-week period, and used MRI in a pre-post design to measure functional and structural brain changes. We found that blind participants and sighted participants together showed a training-induced increase in activation in left and right V1 in response to echoes, a finding difficult to reconcile with the view that sensory cortex is strictly organized by modality. Further, blind participants and sighted participants showed a training-induced increase in activation in right A1 in response to sounds per se (i.e. not echo-specific), and this was accompanied by an increase in gray matter density in right A1 in blind participants and in adjacent acoustic areas in sighted participants. The similarity in functional results between sighted participants and blind participants is consistent with the idea that reorganization may be governed by similar principles in the two groups, yet our structural analyses also showed differences between the groups, suggesting that a more nuanced view may be required.
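The pre-post design described here reduces to a within-participant comparison of echo-evoked activation before and after training. A minimal sketch of such a comparison as a paired t-test, with hypothetical ROI beta values for the 26 trainees (this is not the authors' actual analysis pipeline):

```python
# Paired pre/post comparison of ROI activation (hypothetical values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(0.2, 0.5, size=26)         # echo-evoked V1 betas before training
post = pre + rng.normal(0.4, 0.5, size=26)  # betas after 10 weeks of training

t, p = stats.ttest_rel(post, pre)           # within-participant paired test
print(f"paired t(25) = {t:.2f}, p = {p:.4f}")
```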
Affiliation(s)
- Liam J Norman: Department of Psychology, Durham University, Durham, DH1 3LE, UK
- Tom Hartley: Department of Psychology and York Biomedical Research Institute, University of York, Heslington, YO10 5DD, UK
- Lore Thaler: Department of Psychology, Durham University, Durham, DH1 3LE, UK
2
Liu Y, Gao Y, Shu H, Li Q, Ge Q, Liao X, Pan Y, Wu J, Su T, Zhang L, Liang R, Shao Y. Altered brain network centrality in patients with orbital fracture: a resting-state functional MRI study. Exp Ther Med 2023; 26:552. PMID: 37941594; PMCID: PMC10628639; DOI: 10.3892/etm.2023.12251.
Abstract
The present study aimed to investigate potential functional network brain-activity abnormalities in individuals with orbital fracture (OF) using the voxel-wise degree centrality (DC) technique, a resting-state functional magnetic resonance imaging (fMRI) measure of spontaneous brain activity. The study included 20 patients with OF (12 males and 8 females) and 20 healthy controls (HCs; 12 males and 8 females), matched for gender, age and educational attainment. Receiver operating characteristic (ROC) curves were calculated to distinguish between patients with OF and HCs, and correlation analyses were performed between behavioral performance and average DC values in various brain regions. DC values in the right cerebellum region 9 (Cerebelum_9_R) and left cerebellar Crus II (Cerebelum_Crus2_L) were increased in patients with OF compared with HCs; these two regions had area under the ROC curve values of 0.983 and 1.000, respectively. Patients with OF thus appear to have several brain regions exhibiting aberrant brain-network characteristics, which raises the possibility of neuropathic causes and suggests novel therapeutic options.
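Voxel-wise degree centrality counts, for each voxel, how many other voxels its resting-state time series correlates with above a threshold, and regional DC values can then be fed into an ROC analysis to separate patients from controls. A minimal sketch under those assumptions (toy data, not the authors' pipeline):

```python
# Degree centrality from a voxel-by-voxel correlation matrix, then ROC analysis.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
ts = rng.standard_normal((200, 500))   # 200 time points x 500 voxels (toy data)
r = np.corrcoef(ts.T)                  # 500 x 500 voxel correlation matrix
np.fill_diagonal(r, 0)
dc = (r > 0.25).sum(axis=1)            # binarized degree centrality per voxel

# ROC: how well a mean regional DC value separates patients from controls
labels = np.array([1] * 20 + [0] * 20)                       # 20 OF patients, 20 HCs
dc_roi = np.concatenate([rng.normal(60, 8, 20), rng.normal(45, 8, 20)])
print("AUC =", round(roc_auc_score(labels, dc_roi), 3))
```

An AUC of 1.000, as reported for Cerebelum_Crus2_L, would mean the regional DC values of the two groups do not overlap at all.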
Affiliation(s)
- Yinuo Liu: Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, Jiangxi Centre of National Clinical Ophthalmology Institute, Nanchang, Jiangxi 330006, P.R. China; The Second Clinical Medical College, Nanchang University, Nanchang, Jiangxi 330006, P.R. China
- Yuxuan Gao, Huiye Shu, Qiuyu Li, Qianmin Ge, Yicong Pan, Lijuan Zhang, Rongbin Liang, Yi Shao: Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, Jiangxi Centre of National Clinical Ophthalmology Institute, Nanchang, Jiangxi 330006, P.R. China
- Xulin Liao: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong 999077, P.R. China
- Jieli Wu: Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Eye Institute of Xiamen University, Xiamen University School of Medicine, Xiamen, Fujian 361102, P.R. China
- Ting Su: Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Eye Institute of Xiamen University, Xiamen University School of Medicine, Xiamen, Fujian 361102, P.R. China; Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
3
Objective evaluation of obstacle perception using spontaneous body movements of blind people evoked by movements of acoustic virtual wall. Human Behavior and Emerging Technologies 2022. DOI: 10.1155/2022/9475983.
Abstract
Obstacle perception using sound is the ability to detect silent objects, such as walls and poles. It is very important for blind people, when walking or carrying out daily activities, to recognize their environment through acoustic information. In this paper, to develop an objective method for evaluating the degree of obstacle-perception acquisition in the education and rehabilitation of blind people, the authors measured the spontaneous body movements evoked by the approach of an acoustic virtual wall. Ten blind persons who had experienced obstacle perception in their daily lives and seven sighted persons with no such experience participated in the experiment. The reciprocal (approaching and receding) movements of the virtual wall were presented using simulated reflected sound, and the spontaneous body movements of the subjects were measured. Eight of the ten blind participants showed large maximum values of the correlation function between the wall movement and their body movement, whereas six of the seven sighted participants showed small maximum values. These results indicate that body movements can be used for an objective evaluation of obstacle perception. In particular, the maximum value of the correlation function is the most appropriate measure for such an evaluation, because it does not depend on the subject's physique.
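The proposed measure is the maximum of the cross-correlation function between the virtual wall's reciprocal movement and the participant's body sway. A sketch with synthetic signals standing in for the motion recordings (the 0.1 Hz wall motion and 0.8 s body lag below are illustrative assumptions):

```python
# Maximum of the normalized cross-correlation between wall and body motion.
import numpy as np

fs = 60.0                                 # motion sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
wall = np.sin(2 * np.pi * 0.1 * t)        # reciprocal wall movement at 0.1 Hz
lag_s = 0.8                               # body response lags the wall (assumed)
body = 0.6 * np.sin(2 * np.pi * 0.1 * (t - lag_s)) \
       + 0.2 * np.random.default_rng(2).standard_normal(t.size)

w = (wall - wall.mean()) / wall.std()     # z-score both signals so the
b = (body - body.mean()) / body.std()     # correlation is amplitude-free
xcorr = np.correlate(b, w, mode="full") / t.size
print("max correlation =", round(xcorr.max(), 2))
```

Because both signals are z-scored before correlating, the maximum is insensitive to sway amplitude, which is the property the authors cite as making it independent of physique.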
4
Bleau M, Paré S, Chebat DR, Kupers R, Nemargut JP, Ptito M. Neural substrates of spatial processing and navigation in blindness: an activation likelihood estimation meta-analysis. Front Neurosci 2022; 16:1010354. PMID: 36340755; PMCID: PMC9630591; DOI: 10.3389/fnins.2022.1010354.
Abstract
Even though vision is considered the sensory modality best suited to acquiring spatial information, blind individuals can form spatial representations to navigate and orient themselves efficiently in space. Consequently, many studies support the amodality hypothesis of spatial representations, since sensory modalities other than vision contribute to the formation of spatial representations independently of visual experience and imagery. However, given the high variability in abilities and deficits observed in blind populations, a clear consensus about the neural representations of space has yet to be established. To this end, we performed a meta-analysis of the literature on the neural correlates of spatial processing and navigation via sensory modalities other than vision, such as touch and audition, in individuals with early- and late-onset blindness. An activation likelihood estimation (ALE) analysis of the neuroimaging literature revealed that early blind individuals and sighted controls activate the same neural networks in the processing of non-visual spatial information and navigation, including the posterior parietal cortex, frontal eye fields, insula, and the hippocampal complex. Furthermore, blind individuals also recruit primary and associative occipital areas involved in visuo-spatial processing via cross-modal plasticity mechanisms. The scarcity of studies involving late blind individuals did not allow us to establish a clear consensus about the neural substrates of spatial representations in this specific population. In conclusion, the results of our analysis of neuroimaging studies involving early blind individuals support the amodality hypothesis of spatial representations.
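ALE treats each reported activation focus as a 3-D Gaussian probability blob, builds a modeled-activation (MA) map per experiment, and combines the maps as a probabilistic union, ALE = 1 - prod_i(1 - MA_i). A toy sketch of that core computation on a coarse grid (not the full GingerALE procedure, which adds sample-size-dependent kernels and null-distribution thresholding):

```python
# Core ALE computation: Gaussian modeled-activation maps, probabilistic union.
import numpy as np

grid = np.stack(np.meshgrid(*[np.arange(0, 40, 2.0)] * 3, indexing="ij"), -1)

def ma_map(foci, sigma=5.0):
    """Modeled-activation map: max Gaussian over one experiment's foci (mm)."""
    d2 = ((grid[..., None, :] - foci) ** 2).sum(-1)  # squared distance to each focus
    return np.exp(-d2 / (2 * sigma ** 2)).max(-1)

experiments = [np.array([[20.0, 20, 20]]),           # two toy experiments' foci
               np.array([[22.0, 18, 20], [10.0, 30, 12]])]
mas = [ma_map(f) for f in experiments]
ale = 1 - np.prod([1 - m for m in mas], axis=0)      # probabilistic union
print("peak ALE =", round(float(ale.max()), 3))
```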
Affiliation(s)
- Maxime Bleau: École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Samuel Paré: École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Daniel-Robert Chebat: Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel; Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel University, Ariel, Israel
- Ron Kupers: École d’Optométrie, Université de Montréal, Montreal, QC, Canada; Institute of Neuroscience, Faculty of Medicine, Université de Louvain, Brussels, Belgium; Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Maurice Ptito (corresponding author): École d’Optométrie, Université de Montréal, Montreal, QC, Canada; Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark; Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
5
Sakai H, Ueda S, Ueno K, Kumada T. Neuroplastic reorganization induced by sensory augmentation for self-localization during locomotion. Frontiers in Neuroergonomics 2021; 2:691993. PMID: 38235242; PMCID: PMC10790880; DOI: 10.3389/fnrgo.2021.691993.
Abstract
Sensory skills can be augmented through training and technological support. This process is underpinned by neural plasticity in the brain. We previously demonstrated that auditory-based sensory augmentation can be used to assist self-localization during locomotion. However, the neural mechanisms underlying this phenomenon remain unclear. Here, by using functional magnetic resonance imaging, we aimed to identify the neuroplastic reorganization induced by sensory augmentation training for self-localization during locomotion. We compared activation in response to auditory cues for self-localization before, the day after, and 1 month after 8 days of sensory augmentation training in a simulated driving environment. Self-localization accuracy improved after sensory augmentation training, compared with the control (normal driving) condition; importantly, sensory augmentation training resulted in auditory responses not only in temporal auditory areas but also in higher-order somatosensory areas extending to the supramarginal gyrus and the parietal operculum. This sensory reorganization had disappeared by 1 month after the end of the training. These results suggest that the use of auditory cues for self-localization during locomotion relies on multimodality in higher-order somatosensory areas, despite substantial evidence that information for self-localization during driving is estimated from visual cues on the proximal part of the road. Our findings imply that the involvement of higher-order somatosensory, rather than visual, areas is crucial for acquiring augmented sensory skills for self-localization during locomotion.
Affiliation(s)
- Hiroyuki Sakai: Human Science Laboratory, Toyota Central R&D Laboratories, Inc., Tokyo, Japan
- Sayako Ueda: TOYOTA Collaboration Center, RIKEN Center for Brain Science, Wako, Japan
- Kenichi Ueno: Support Unit for Functional Magnetic Resonance Imaging, RIKEN Center for Brain Science, Wako, Japan
6
Abstract
There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perception and production of speech, music and other sounds. More recent evidence indicates that there are computational distinctions between the rostral and caudal primate auditory cortex that may underlie functional differences in auditory processing. These functional differences may originate from differences in the response times and temporal profiles of neurons in the rostral and caudal auditory cortex, suggesting that computational accounts of primate auditory pathways should focus on the implications of these temporal response differences.
7
Abstract
Human speech perception is a paradigm example of the complexity of human linguistic processing; however, it is also the dominant way of expressing vocal identity and is critically important for social interactions. Here, I review the ways that speech, the talker, and the social nature of speech interact, and how this may be computed in the human brain, using models and approaches from nonhuman primate studies. I explore the extent to which domain-general approaches may be able to account for some of these neural findings. Finally, I address the importance of extending these findings into a better understanding of the social use of speech in conversations.
Affiliation(s)
- Sophie K Scott: Institute of Cognitive Neuroscience, University College London, London, UK
8
Thaler L, Zhang X, Antoniou M, Kish DC, Cowie D. The flexible action system: click-based echolocation may replace certain visual functionality for adaptive walking. J Exp Psychol Hum Percept Perform 2019; 46:21-35. PMID: 31556685; PMCID: PMC6936248; DOI: 10.1037/xhp0000697.
Abstract
People use sensory, in particular visual, information to guide actions such as walking around obstacles, grasping or reaching. However, it is presently unclear how malleable the sensorimotor system is. The present study investigated this by measuring how click-based echolocation may be used to avoid obstacles while walking. We tested 7 blind echolocation experts, 14 sighted and 10 blind echolocation beginners. For comparison, we also tested 10 sighted participants, who used vision. To maximize the relevance of our research for people with vision impairments, we also included a condition where the long cane was used, and we considered obstacles at different elevations. Motion capture and sound data were acquired simultaneously. We found that echolocation experts walked just as fast as sighted participants using vision, and faster than either sighted or blind echolocation beginners. Walking paths of echolocation experts indicated early and smooth adjustments, similar to those shown by sighted people using vision and different from the later and more abrupt adjustments of beginners. Further, for all participants, the use of echolocation significantly decreased collision frequency with obstacles at head, but not ground, level. Further analyses showed that participants who made clicks with higher spectral frequency content walked faster, and that for experts higher clicking rates were associated with faster walking. The results highlight that people can use novel sensory information (here, echolocation) to guide actions, demonstrating the action system’s ability to adapt to changes in sensory input. They also highlight that regular use of echolocation enhances sensory-motor coordination for walking in blind people. Vision loss has negative consequences for people’s mobility. The current report demonstrates that echolocation might replace certain visual functionality for adaptive walking. Importantly, the report also highlights that echolocation and the long cane are complementary mobility techniques. The findings have direct relevance for professionals involved in mobility instruction and for people who are blind.
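The reported link between click spectra and walking speed presupposes some summary of each click's frequency content; a spectral centroid is one plausible such summary (the paper's exact metric may differ). A sketch on a synthetic damped-sinusoid click:

```python
# Spectral centroid of a synthetic mouth click (illustrative signal).
import numpy as np

fs = 44100
t = np.arange(0, 0.01, 1 / fs)                             # 10 ms click
click = np.exp(-t * 600) * np.sin(2 * np.pi * 3500 * t)    # damped 3.5 kHz transient

spec = np.abs(np.fft.rfft(click))
freqs = np.fft.rfftfreq(click.size, 1 / fs)
centroid = (freqs * spec).sum() / spec.sum()               # energy-weighted mean frequency
print(f"spectral centroid = {centroid:.0f} Hz")
```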
Affiliation(s)
- Xinyu Zhang: School of Information and Electronics, Beijing Institute of Technology
- Michail Antoniou: Department of Electronic, Electrical and Systems Engineering, School of Engineering, University of Birmingham
9
Abstract
This study investigated the influence of body motion on an echolocation task. We asked a group of blindfolded novice sighted participants to walk along a corridor made of plastic sound-reflecting panels. By self-generating mouth clicks, the participants attempted to infer spatial properties of the corridor, i.e. a left turn, a right turn or a dead end. They were asked to explore the corridor and stop whenever they were confident about the corridor shape. Their body motion was captured by a camera system and coded. Most participants were able to accomplish the task, with the percentage of correct guesses above chance level. We found a mutual interaction among several kinematic variables that can lead to optimal echolocation skills: head motion (reflecting spatial exploration), the person's motion stop-point, and the number of correct guesses about the spatial structure. The results confirmed that sighted people are able to use self-generated echoes to navigate in a complex environment. The inter-individual variability and the quality of echolocation performance seem to depend on how, and how much, the space is explored.
10
Thaler L, Foresteire D. Visual sensory stimulation interferes with people's ability to echolocate object size. Sci Rep 2017; 7:13069. PMID: 29026115; PMCID: PMC5638915; DOI: 10.1038/s41598-017-12967-3.
Abstract
Echolocation is the ability to use sound-echoes to infer spatial information about the environment. People can echolocate, for example, by making mouth clicks. Previous research suggests that echolocation in blind people activates brain areas that process light in sighted people. Research has also shown that echolocation in blind people may replace vision for the calibration of external space. In the current study, we investigated whether echolocation may also draw on ‘visual’ resources in the sighted brain. To this end, we paired a sensory interference paradigm with an echolocation task. We found that exposure to an uninformative visual stimulus (i.e. white light) while simultaneously echolocating significantly reduced participants’ ability to accurately judge object size. In contrast, a tactile stimulus (i.e. vibration on the skin) did not lead to a significant change in performance (neither in sighted nor in blind echo-expert participants). Furthermore, we found that the same visual stimulus did not affect performance in auditory control tasks that required detection of changes in sound intensity, sound frequency or sound location. The results suggest that processing of visual and echo-acoustic information draws on common neural resources.
Affiliation(s)
- L Thaler: Department of Psychology, Durham University, Durham, United Kingdom
- D Foresteire: Department of Psychology, Durham University, Durham, United Kingdom
11
Chen Y, Crawford JD. Cortical activation during landmark-centered vs. gaze-centered memory of saccade targets in the human: an fMRI study. Front Syst Neurosci 2017; 11:44. PMID: 28690501; PMCID: PMC5481872; DOI: 10.3389/fnsys.2017.00044.
Abstract
A remembered saccade target could be encoded in egocentric coordinates such as gaze-centered, or relative to some external allocentric landmark that is independent of the target or gaze (landmark-centered). In comparison to egocentric mechanisms, very little is known about such landmark-centered representations. Here, we used an event-related fMRI design to identify brain areas supporting these two types of spatial coding (i.e., landmark-centered vs. gaze-centered) for target memory during the Delay phase, in which only target location, not saccade direction, was specified. The paradigm included three tasks with identical visual stimuli but different auditory instructions: Landmark Saccade (remember target location relative to a visual landmark, independent of gaze), Control Saccade (remember original target location relative to gaze fixation, independent of the landmark), and a non-spatial control, Color Report (report target color). During the Delay phase, the Control and Landmark Saccade tasks activated overlapping areas in posterior parietal cortex (PPC) and frontal cortex compared with the color control, but with higher activation in PPC for target coding in the Control Saccade task and higher activation in temporal and occipital cortex for target coding in the Landmark Saccade task. Gaze-centered directional selectivity was observed in the superior occipital gyrus and inferior occipital gyrus, whereas landmark-centered directional selectivity was observed in the precuneus and midposterior intraparietal sulcus. During the Response phase, after saccade direction was specified, the parietofrontal network in the left hemisphere showed higher activation for rightward than leftward saccades. Our results suggest that cortical activation for coding saccade target direction relative to a visual landmark differs from gaze-centered directional selectivity for target memory, from the mechanisms for other types of allocentric tasks, and from the directionally selective mechanisms for saccade planning and execution.
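The gaze-centered vs. landmark-centered distinction reduces to which reference point the remembered target is expressed relative to. A toy numeric illustration (hypothetical 2-D coordinates):

```python
# Egocentric vs. allocentric coding of the same remembered target.
import numpy as np

target = np.array([8.0, 2.0])      # remembered target position (deg)
gaze = np.array([3.0, 0.0])        # fixation position
landmark = np.array([10.0, 4.0])   # visual landmark position

target_gaze_centered = target - gaze          # egocentric: changes if gaze shifts
target_landmark_centered = target - landmark  # allocentric: invariant to gaze

print(target_gaze_centered, target_landmark_centered)
```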
Affiliation(s)
- Ying Chen: Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada
- J D Crawford: Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Vision: Science to Applications Program, York University, Toronto, ON, Canada
12
Rowan D, Papadopoulos T, Archer L, Goodhew A, Cozens H, Lopez RG, Edwards D, Holmes H, Allen R. The detection of 'virtual' objects using echoes by humans: spectral cues. Hear Res 2017; 350:205-216. PMID: 28511103; DOI: 10.1016/j.heares.2017.04.008.
Abstract
Some blind people use echoes to detect discrete, silent objects to support their spatial orientation/navigation, independence, safety and wellbeing. The acoustical features that people use for this are not well understood. Listening to changes in spectral shape due to the presence of an object could be important for object detection and avoidance, especially at short range, although it is currently not known whether it is possible with echolocation-related sounds. Bands of noise were convolved with recordings of binaural impulse responses of objects in an anechoic chamber to create 'virtual objects', which were analysed and played to sighted and blind listeners inexperienced in echolocation. The sounds were also manipulated to remove cues unrelated to spectral shape. Most listeners could accurately detect hard flat objects using changes in spectral shape. The useful spectral changes for object detection occurred above approximately 3 kHz, as with object localisation. However, energy in the sounds below 3 kHz was required to exploit changes in spectral shape for object detection, whereas energy below 3 kHz impaired object localisation. Further recordings showed that the spectral changes were diminished by room reverberation. While good high-frequency hearing is generally important for echolocation, the optimal echo-generating stimulus will probably depend on the task.
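Creating such a 'virtual object' amounts to convolving a band of noise with a binaural impulse response, separately per ear. A minimal sketch with a synthetic two-channel impulse response (the study used responses recorded in an anechoic chamber; the 1 m object distance, gains and interaural offset below are illustrative):

```python
# Band-limited noise convolved with a synthetic binaural impulse response.
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

fs = 44100
rng = np.random.default_rng(3)
sos = butter(4, [500, 8000], btype="bandpass", fs=fs, output="sos")
noise = sosfilt(sos, rng.standard_normal(fs // 2))   # 0.5 s band of noise

# Synthetic binaural IR: direct path plus a weak, delayed object reflection
ir = np.zeros((2, 800))
ir[:, 0] = 1.0                                       # direct sound
delay = int(fs * 2 * 1.0 / 343)                      # echo from an object ~1 m away
ir[0, delay], ir[1, delay + 4] = 0.12, 0.10          # small interaural offset

virtual_object = np.stack([fftconvolve(noise, ir[ch]) for ch in range(2)])
print(virtual_object.shape)                          # (2, n_samples)
```

The object's acoustic signature then lives in the spectral ripple that the delayed reflection imposes on the noise, which is the spectral-shape cue the study manipulated.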
Affiliation(s)
- Daniel Rowan: Institute of Sound and Vibration Research, University of Southampton, Southampton, Hants, SO17 1BJ, UK
- Timos Papadopoulos: Biodiversity Institute, Department of Zoology, and Machine Learning Research Group, Department of Engineering Science, University of Oxford, Oxford, UK
- David Edwards: Yeovil District Hospital NHS Foundation Trust, Yeovil, UK
- Robert Allen: Institute of Sound and Vibration Research, University of Southampton, Southampton, Hants, SO17 1BJ, UK
13
Kolarik AJ, Scarfe AC, Moore BCJ, Pardhan S. Blindness enhances auditory obstacle circumvention: assessing echolocation, sensory substitution, and visual-based navigation. PLoS One 2017; 12:e0175750. PMID: 28407000; PMCID: PMC5391114; DOI: 10.1371/journal.pone.0175750.
Abstract
Performance for an obstacle circumvention task was assessed under conditions of visual, auditory only (using echolocation) and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 × 2 m obstacle that was varied in position across trials: at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals, with fewer collisions, lower movement times, fewer velocity corrections and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that of the other groups using audition, but performance comparable to that of the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound.
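Buffer space, one of the kinematic measures used in this paradigm, is the minimum clearance between a tracked shoulder marker and the obstacle over a trial. A sketch of that computation on a hypothetical motion-capture trajectory:

```python
# Minimum shoulder-to-obstacle clearance from a hypothetical trajectory.
import numpy as np

# Shoulder trajectory (n_frames x 2, metres) swerving past an obstacle edge
traj = np.column_stack([np.linspace(0, 4, 400),
                        0.3 + 0.1 * np.sin(np.linspace(0, np.pi, 400))])
obstacle_edge = np.array([2.0, 0.0])   # nearest point of the 0.6 x 2 m obstacle

clearance = np.linalg.norm(traj - obstacle_edge, axis=1)
print(f"buffer space = {clearance.min():.2f} m")   # minimum clearance in the trial
```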
Affiliation(s)
- Andrew J. Kolarik: Vision and Eye Research Unit (VERU), Postgraduate Medical Institute, Anglia Ruskin University, Cambridge, United Kingdom; Department of Psychology, University of Cambridge, Cambridge, United Kingdom; Centre for the Study of the Senses, Institute of Philosophy, University of London, London, United Kingdom
- Amy C. Scarfe: Vision and Eye Research Unit (VERU), Postgraduate Medical Institute, Anglia Ruskin University, Cambridge, United Kingdom; Department of Clinical Engineering, Medical Imaging and Medical Physics Directorate, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, United Kingdom
- Brian C. J. Moore: Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Shahina Pardhan: Vision and Eye Research Unit (VERU), Postgraduate Medical Institute, Anglia Ruskin University, Cambridge, United Kingdom
14
Human exploration of enclosed spaces through echolocation. J Neurosci 2017; 37:1614-1627. PMID: 28073936; DOI: 10.1523/jneurosci.1566-12.2016.
Abstract
Some blind humans have developed echolocation as a method of navigating in space. Echolocation is a truly active sense because subjects analyze echoes of dedicated, self-generated sounds to assess the space around them. Using a special virtual space technique, we assessed how humans perceive enclosed spaces through echolocation, thereby revealing the interplay between sensory and vocal-motor neural activity while humans perform this task. Sighted subjects were trained to detect small changes in virtual-room size by analyzing real-time generated echoes of their vocalizations. Individual differences in performance were related to the type and number of vocalizations produced. We then asked subjects to estimate virtual-room size with either active or passive sounds while measuring their brain activity with fMRI. Subjects were better at estimating room size when actively vocalizing. This was reflected in the hemodynamic activity of vocal-motor cortices, even after individual motor and sensory components were removed. Activity in these areas also varied with perceived room size, although the vocal-motor output was unchanged. In addition, thalamic and auditory-midbrain activity was correlated with perceived room size, a likely result of top-down auditory pathways for human echolocation, comparable with those described in echolocating bats. Our data provide evidence that human echolocation is supported by active sensing, both behaviorally and in terms of brain activity. The neural sensory-motor coupling complements the fundamental acoustic motor-sensory coupling via the environment in echolocation.
Significance Statement: Passive listening is the predominant method for examining brain activity during echolocation, the auditory analysis of self-generated sounds. We show that sighted humans perform better when they actively vocalize than during passive listening. Correspondingly, vocal-motor and cerebellar activity is greater during active echolocation than during vocalization alone. Motor and subcortical auditory brain activity covaries with the auditory percept, although motor output is unchanged. Our results reveal behaviorally relevant neural sensory-motor coupling during echolocation.
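In a virtual-room paradigm, the dominant acoustic cue to room size is the delay (and level) of wall reflections: a wall at distance d returns an echo after t = 2d/c, with c approximately 343 m/s. A simplified sketch synthesizing a single first reflection for two wall distances (far simpler than the real-time system used in the study):

```python
# First wall reflection for two virtual-room sizes (single-echo idealization).
import numpy as np

fs, c = 44100, 343.0                       # sample rate (Hz), speed of sound (m/s)
t = np.arange(0, 0.005, 1 / fs)
call = np.sin(2 * np.pi * 2000 * t) * np.hanning(t.size)   # brief vocalization

def with_wall_echo(sig, distance_m, gain=0.3):
    delay = int(round(fs * 2 * distance_m / c))  # round-trip delay in samples
    out = np.concatenate([sig, np.zeros(delay)])
    out[delay:delay + sig.size] += gain * sig    # single wall reflection
    return out

small, large = with_wall_echo(call, 1.5), with_wall_echo(call, 3.0)
print(small.size, large.size)              # larger room, later echo
```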
15
Thaler L, Goodale MA. Echolocation in humans: an overview. Wiley Interdiscip Rev Cogn Sci 2016; 7:382-393. PMID: 27538733; DOI: 10.1002/wcs.1408.
Abstract
Bats and dolphins are known for their ability to use echolocation: they emit bursts of sounds and listen to the echoes that bounce back to detect objects in their environment. What is less well known is that some blind people have learned to do the same thing, making mouth clicks, for example, and using the returning echoes from those clicks to sense obstacles and objects of interest in their surroundings. The current review explores some of the research that has examined human echolocation and the changes that have been observed in the brains of echolocation experts. We also discuss potential applications and assistive technology based on echolocation. Blind echolocation experts can sense small differences in the location of objects, differentiate between objects of various sizes and shapes, and even between objects made of different materials, just by listening to the reflected echoes from mouth clicks. It is clear that echolocation may enable some blind people to do things that are otherwise thought to be impossible without vision, potentially providing them with a high degree of independence in their daily lives and demonstrating that echolocation can serve as an effective mobility strategy in the blind. Neuroimaging has shown that the processing of echoes activates brain regions in blind echolocators that would normally support vision in the sighted brain, and that the patterns of these activations are modulated by the information carried by the echoes. This work is shedding new light on just how plastic the human brain is.
Affiliation(s)
- Lore Thaler: Department of Psychology, Durham University, Durham, UK
- Melvyn A Goodale: The Brain and Mind Institute, Department of Psychology, University of Western Ontario, Ontario, Canada
16
An assessment of auditory-guided locomotion in an obstacle circumvention task. Exp Brain Res 2016; 234:1725-35. PMID: 26879767; PMCID: PMC4851710; DOI: 10.1007/s00221-016-4567-y.
Abstract
This study investigated how effectively audition can be used to guide navigation around an obstacle. Ten blindfolded normally sighted participants navigated around a 0.6 × 2 m obstacle while producing self-generated mouth-click sounds. Objective movement performance was measured using a Vicon motion capture system. Performance with full vision, without generating sound, was used as a baseline for comparison. The obstacle’s location was varied randomly from trial to trial: it was either straight ahead or 25 cm to the left or right relative to the participant. Although audition provided sufficient information to detect the obstacle and guide participants around it without collision in the majority of trials, buffer space (clearance between the shoulder and obstacle), overall movement times, and the number of velocity corrections were significantly (p < 0.05) greater with auditory guidance than with visual guidance. Collisions sometimes occurred under auditory guidance, suggesting that audition did not always provide an accurate estimate of the space between the participant and the obstacle. Unlike with visual guidance, participants did not always walk around the side that afforded the most space under auditory guidance. Mean buffer space was 1.8 times higher under auditory than under visual guidance. Results suggest that sound can be used to generate buffer space when vision is unavailable, allowing navigation around an obstacle without collision in the majority of trials.
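Velocity corrections can be operationalized as local minima in the smoothed walking-speed profile, each dip marking a slow-down and re-acceleration; this is one plausible reading of the measure, not necessarily the authors' exact definition. A sketch on a synthetic speed trace:

```python
# Counting velocity corrections as prominent dips in a walking-speed profile.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0
t = np.arange(0, 8, 1 / fs)
speed = 1.1 + 0.15 * np.sin(2 * np.pi * 0.5 * t) * np.exp(-t / 6)  # synthetic speed (m/s)

# A "correction" = a local minimum in speed: invert the trace and find peaks
dips, _ = find_peaks(-speed, prominence=0.05)
print("velocity corrections:", dips.size)
```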