1. Norman LJ, Hartley T, Thaler L. Changes in primary visual and auditory cortex of blind and sighted adults following 10 weeks of click-based echolocation training. Cereb Cortex 2024; 34:bhae239. PMID: 38897817. DOI: 10.1093/cercor/bhae239.
Abstract
Recent work suggests that the adult human brain is very adaptable when it comes to sensory processing. In this context, it has also been suggested that structural "blueprints" may fundamentally constrain neuroplastic change, e.g. in response to sensory deprivation. Here, we trained 12 blind participants and 14 sighted participants in echolocation over a 10-week period, and used MRI in a pre-post design to measure functional and structural brain changes. We found that blind participants and sighted participants together showed a training-induced increase in activation in left and right V1 in response to echoes, a finding difficult to reconcile with the view that sensory cortex is strictly organized by modality. Further, blind participants and sighted participants showed a training-induced increase in activation in right A1 in response to sounds per se (i.e. not echo-specific), and this was accompanied by an increase in gray matter density in right A1 in blind participants and in adjacent acoustic areas in sighted participants. The similarity in functional results between sighted participants and blind participants is consistent with the idea that reorganization may be governed by similar principles in the two groups, yet our structural analyses also showed differences between the groups, suggesting that a more nuanced view may be required.
Affiliation(s)
- Liam J Norman
- Department of Psychology, Durham University, Durham, DH1 3LE, UK
- Tom Hartley
- Department of Psychology and York Biomedical Research Institute, University of York, Heslington, YO10 5DD, UK
- Lore Thaler
- Department of Psychology, Durham University, Durham, DH1 3LE, UK
2. Gabdreshov G, Magzymov D, Yensebayev N. Preliminary investigation of SEZUAL device for basic material identification and simple spatial navigation for blind and visually impaired people. Disabil Rehabil Assist Technol 2024; 19:1343-1350. PMID: 36756982. DOI: 10.1080/17483107.2023.2176555.
Abstract
PURPOSE: We present a preliminary set of experimental studies demonstrating device-aided echolocation in blind and visually impaired individuals. The proposed device emits a click-like sound into the surrounding space, and the returning sound is perceived by participants to infer the surrounding environment. MATERIALS AND METHODS: Two sets of experiments were set up to evaluate the echolocation abilities of nine blind participants. The first setup was designed to identify four material types (glass, metal, wood, and ceramics) based on the sound-reflection properties of the materials. The second setup was navigation through a basic maze with the device. RESULTS: Experimental data demonstrate that the proposed device enables active echolocation in blind participants, particularly for material identification and spatial mobility. CONCLUSION: The proposed device can potentially be used in the rehabilitation of blind and visually impaired individuals in terms of spatial mobility and orientation.
3. Thaler L, Castillo-Serrano JG, Kish D, Norman LJ. Effects of type of emission and masking sound, and their spatial correspondence, on blind and sighted people's ability to echolocate. Neuropsychologia 2024; 196:108822. PMID: 38342179. DOI: 10.1016/j.neuropsychologia.2024.108822.
Abstract
Ambient sound can mask acoustic signals. The current study addressed how echolocation in people is affected by masking sound, and the role played by type of sound and spatial (i.e. binaural) similarity. We also investigated the role played by blindness and long-term experience with echolocation, by testing echolocation experts, as well as blind and sighted people new to echolocation. Results were obtained in two echolocation tasks where participants listened to binaural recordings of echolocation and masking sounds, and either localized echoes in azimuth or discriminated echo audibility. Echolocation and masking sounds could be either clicks or broadband noise. An adaptive staircase method was used to adjust signal-to-noise ratios (SNRs) based on participants' responses. When target and masker had the same binaural cues (i.e. both were monaural sounds), people performed better (i.e. had lower SNRs) when target and masker used different types of sound (e.g. clicks in a noise masker, or noise in a click masker), as compared to when target and masker used the same type of sound (e.g. clicks in a click masker, or noise in a noise masker). A very different pattern of results was observed when masker and target differed in their binaural cues, in which case people always performed better when clicks were the masker, regardless of the type of emission used. Further, direct comparison between conditions with and without binaural difference revealed binaural release from masking only when clicks were used as emission and masker, but not otherwise (i.e. when noise was used as masker or emission). This suggests that echolocation with clicks or noise may differ in sensitivity to binaural cues. We observed the same pattern of results for echolocation experts and for blind and sighted people new to echolocation, suggesting a limited role played by long-term experience or blindness. In addition to generating novel predictions for future work, the findings also inform instruction in echolocation for people who are blind or sighted.
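The adaptive staircase mentioned in this abstract is a standard psychophysical procedure; the paper's exact rule is not given here. Below is a minimal sketch of a generic 1-up/2-down staircase over SNR; the function names, step sizes, and the simulated listener are illustrative assumptions, not the authors' code:

```python
import random

def staircase_snr(trial_fn, start_snr_db=6.0, step_db=2.0, n_reversals=10):
    """Generic 1-up/2-down staircase over signal-to-noise ratio (SNR).

    trial_fn(snr_db) -> True when the listener responds correctly.
    Two correct responses in a row lower the SNR (harder); one error
    raises it (easier). Returns the mean SNR across reversals.
    """
    snr, streak, direction = start_snr_db, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if trial_fn(snr):
            streak += 1
            if streak == 2:              # two in a row -> make it harder
                streak = 0
                if direction == +1:      # direction flip = a reversal
                    reversals.append(snr)
                direction = -1
                snr -= step_db
        else:                            # any error -> make it easier
            streak = 0
            if direction == -1:
                reversals.append(snr)
            direction = +1
            snr += step_db
    return sum(reversals) / len(reversals)

# Toy simulated listener whose threshold sits near 0 dB SNR.
estimate = staircase_snr(lambda snr: random.random() < 1 / (1 + 10 ** (-snr / 6)))
```

With the 1-up/2-down rule, the track converges near the 70.7%-correct point of the psychometric function, a common choice for threshold SNR estimates.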
Affiliation(s)
- L Thaler
- Department of Psychology, Durham University, South Road, Durham, DH1 5AY, UK
- D Kish
- World Access for the Blind, 1007 Marino Drive, Placentia, CA, 92870, USA
- L J Norman
- Department of Psychology, Durham University, South Road, Durham, DH1 5AY, UK
4. Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. PMID: 38394708. PMCID: PMC10899073. DOI: 10.1016/j.dcn.2024.101360.
Abstract
How rigidly does innate architecture constrain function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose the connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent, and the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Mengyu Tian
- Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
5. Teng S, Danforth C, Paternoster N, Ezeana M, Puri A. Object recognition via echoes: quantifying the crossmodal transfer of three-dimensional shape information between echolocation, vision, and haptics. Front Neurosci 2024; 18:1288635. PMID: 38440393. PMCID: PMC10909950. DOI: 10.3389/fnins.2024.1288635.
Abstract
Active echolocation allows blind individuals to explore their surroundings via self-generated sounds, similarly to dolphins and other echolocating animals. Echolocators emit sounds, such as finger snaps or mouth clicks, and parse the returning echoes for information about their surroundings, including the location, size, and material composition of objects. Because a crucial function of perceiving objects is to enable effective interaction with them, it is important to understand the degree to which three-dimensional shape information extracted from object echoes is useful in the context of other modalities such as haptics or vision. Here, we investigated the resolution of crossmodal transfer of object-level information between acoustic echoes and other senses. First, in a delayed match-to-sample task, blind expert echolocators and sighted control participants inspected common (everyday) and novel target objects using echolocation, then distinguished the target object from a distractor using only haptic information. For blind participants, discrimination accuracy was overall above chance and similar for both common and novel objects, whereas as a group, sighted participants performed above chance for the common, but not novel objects, suggesting that some coarse object information (a) is available to both expert blind and novice sighted echolocators, (b) transfers from auditory to haptic modalities, and (c) may be facilitated by prior object familiarity and/or material differences, particularly for novice echolocators. Next, to estimate an equivalent resolution in visual terms, we briefly presented blurred images of the novel stimuli to sighted participants (N = 22), who then performed the same haptic discrimination task. We found that visuo-haptic discrimination performance approximately matched echo-haptic discrimination for a Gaussian blur kernel σ of ~2.5°. In this way, by matching visual and echo-based contributions to object discrimination, we can estimate the quality of echoacoustic information that transfers to other sensory modalities, predict theoretical bounds on perception, and inform the design of assistive techniques and technology available for blind individuals.
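The visual-equivalence result above rests on blurring images with a Gaussian kernel specified in degrees of visual angle. A hedged sketch of that conversion follows; the display geometry, function name, and parameter values are hypothetical and not taken from the study's pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sigma_deg_to_pixels(sigma_deg, viewing_distance_cm, pixels_per_cm):
    """Convert a Gaussian blur sigma in degrees of visual angle to pixels."""
    sigma_cm = 2.0 * viewing_distance_cm * np.tan(np.radians(sigma_deg) / 2.0)
    return sigma_cm * pixels_per_cm

# Hypothetical display geometry: at 57 cm, 1 cm on screen ~ 1 deg of angle.
sigma_px = sigma_deg_to_pixels(2.5, viewing_distance_cm=57.0, pixels_per_cm=40.0)

image = np.random.rand(480, 640)             # stand-in for a stimulus photo
blurred = gaussian_filter(image, sigma=sigma_px)
```

At this assumed geometry, a σ of ~2.5° corresponds to roughly 100 pixels, which conveys how coarse the echo-matched visual resolution is.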
Affiliation(s)
- Santani Teng
- Smith-Kettlewell Eye Research Institute, San Francisco, CA, United States
- Caroline Danforth
- Department of Biology, University of Central Arkansas, Conway, AR, United States
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Nickolas Paternoster
- Department of Biology, University of Central Arkansas, Conway, AR, United States
- Department of Psychology, Cornell University, Ithaca, NY, United States
- Michael Ezeana
- Department of Biology, University of Central Arkansas, Conway, AR, United States
- Georgetown University School of Medicine, Washington, DC, United States
- Amrita Puri
- Department of Biology, University of Central Arkansas, Conway, AR, United States
6. Chow JK, Palmeri TJ, Pluck G, Gauthier I. Evidence for an amodal domain-general object recognition ability. Cognition 2023; 238:105542. PMID: 37419065. DOI: 10.1016/j.cognition.2023.105542.
Abstract
A general object recognition ability predicts performance across a variety of high-level visual tests and categories, as well as performance in haptic recognition. Does this ability extend to auditory recognition? Vision and haptics tap into similar representations of shape and texture. In contrast, features of auditory perception like pitch, timbre, or loudness do not readily translate into shape percepts related to edges, surfaces, or spatial arrangement of parts. We find that an auditory object recognition ability correlates highly with a visual object recognition ability after controlling for general intelligence, perceptual speed, low-level visual ability, and memory ability. Auditory object recognition was a stronger predictor of visual object recognition than all control measures across two experiments, even though those control variables were also tested visually. These results point towards a single high-level ability used in both vision and audition. Much work highlights how the integration of visual and auditory information is important in specific domains (e.g., speech, music), with evidence for some overlap of visual and auditory neural representations. Our results are the first to reveal a domain-general ability, o, that predicts object recognition performance in both visual and auditory tests. Because o is domain-general, it reveals mechanisms that apply across a wide range of situations, independent of experience and knowledge. As o is distinct from general intelligence, it is well positioned to add predictive validity when explaining individual differences in a variety of tasks, above and beyond measures of common cognitive abilities such as general intelligence and working memory.
Affiliation(s)
- Jason K Chow
- Department of Psychology, Vanderbilt University, USA
- Graham Pluck
- Faculty of Psychology, Chulalongkorn University, Thailand
7. Higgins NC, Pupo DA, Ozmeral EJ, Eddins DA. Head movement and its relation to hearing. Front Psychol 2023; 14:1183303. PMID: 37448716. PMCID: PMC10338176. DOI: 10.3389/fpsyg.2023.1183303.
Abstract
Head position at any point in time plays a fundamental role in shaping the auditory information that reaches a listener, information that continuously changes as the head moves and reorients to different listening situations. The connection between hearing science and the kinesthetics of head movement has gained interest due to technological advances that have increased the feasibility of providing behavioral and biological feedback to assistive listening devices that can interpret movement patterns reflecting listening intent. Increasing evidence also shows that the negative impact of hearing deficits on mobility, gait, and balance may be mitigated by prosthetic hearing device intervention. A better understanding of the relationships between head movement, full-body kinetics, and hearing health should lead to improved signal processing strategies across a range of assistive and augmented hearing devices. The purpose of this review is to introduce the wider hearing community to the kinesiology of head movement and to place it in the context of hearing and communication, with the goal of expanding the field of ecologically specific listener behavior.
Affiliation(s)
- Nathan C. Higgins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- Daniel A. Pupo
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- School of Aging Studies, University of South Florida, Tampa, FL, United States
- Erol J. Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- David A. Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
8. Thaler L, Di Gregorio G, Foresteire D. 6-hour training in click-based echolocation changes practice in visual impairment professionals. Front Rehabil Sci 2023; 4:1098624. PMID: 37284336. PMCID: PMC10239887. DOI: 10.3389/fresc.2023.1098624.
Abstract
Click-based echolocation can support mobility and orientation in people with vision impairments (VI) when used alongside other mobility methods, yet only a small number of people with VI use it. Previous research on echolocation has addressed the skill of echolocation per se, i.e. how echolocation works and its brain basis. Our report is the first to address the question of professional practice for people with VI, a very different focus. VI professionals are well placed to affect how a person with VI might learn about, experience, or use click-based echolocation. We therefore investigated whether training in click-based echolocation for VI professionals might lead to a change in their professional practice. The training was delivered via 6-h workshops throughout the UK. It was free to attend, and people signed up via a publicly available website. We received follow-up feedback in the form of yes/no answers and free-text comments. Yes/no answers showed that 98% of participants had changed their professional practice as a consequence of the training. Free-text responses were analysed using content analysis, and we found that 32%, 11.7%, and 46.6% of responses indicated a change in information processing, verbal influencing, or instruction and practice, respectively. This attests to the potential of VI professionals to act as multipliers of training in click-based echolocation, with the potential to improve the lives of people with VI. The training we evaluated here could feasibly be integrated into VI rehabilitation or VI habilitation training as implemented at higher education institutions (HEIs) or through continuing professional development (CPD).
9. Maimon A, Wald IY, Ben Oz M, Codron S, Netzer O, Heimler B, Amedi A. The Topo-Speech sensory substitution system as a method of conveying spatial information to the blind and vision impaired. Front Hum Neurosci 2023; 16:1058093. PMID: 36776219. PMCID: PMC9909096. DOI: 10.3389/fnhum.2022.1058093.
Abstract
Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in the blind and visually impaired. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations, in a manner that corresponds with the unique way our brains acquire and process information. This is done by conveying spatial information, customarily acquired through vision, through the auditory channel, in a combination of sensory (auditory) features and symbolic language (named/spoken) features. The Topo-Speech system sweeps the visual scene or image and represents each object's identity by naming it in a spoken word, while simultaneously conveying the object's location by mapping the x-axis of the scene to the time at which the name is announced and the y-axis to the pitch of the voice. This proof-of-concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind showed an accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels. To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, and convergent models) and the capacity for spatial representation in the blind. We suggest the present study's findings support the convergence model and the view that the blind are capable of some aspects of spatial representation, as depicted by the algorithm, comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.
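The x-to-time and y-to-pitch mapping described in this abstract is explicit enough to sketch. The toy scheduler below assumes a 2-s left-to-right sweep and a log-spaced 120-400 Hz pitch range; those parameters, and the helper names, are illustrative assumptions rather than values from the paper:

```python
def topo_speech_schedule(objects, img_w, img_h, sweep_s=2.0,
                         f_lo=120.0, f_hi=400.0):
    """Map labelled objects to (onset time, voice pitch) following the
    x -> time, y -> pitch scheme described in the abstract.

    objects: iterable of (name, x, y) in pixels, with y = 0 at the top.
    Returns (name, onset_s, pitch_hz) tuples, ordered left to right.
    """
    schedule = []
    for name, x, y in objects:
        onset = (x / img_w) * sweep_s            # farther right -> later
        height = 1.0 - (y / img_h)               # higher in scene -> higher pitch
        pitch = f_lo * (f_hi / f_lo) ** height   # log-spaced pitch mapping
        schedule.append((name, onset, pitch))
    return sorted(schedule, key=lambda item: item[1])

for name, t, f in topo_speech_schedule([("cup", 120, 300), ("lamp", 500, 60)],
                                       img_w=640, img_h=480):
    print(f"{name}: announce at {t:.2f} s with pitch {f:.0f} Hz")
```

A log-spaced pitch mapping is assumed here simply because equal steps in log frequency sound perceptually even; the paper may well use a different mapping.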
Affiliation(s)
- Amber Maimon
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Iddo Yehoshua Wald
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Meshi Ben Oz
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Sophie Codron
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Ophir Netzer
- Gonda Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
10. Read L, Deverell L. EchoRead Programme: Learning echolocation skills through self-paced professional development during the COVID-19 pandemic. Br J Vis Impair 2022:02646196221131735. PMCID: PMC9623408. DOI: 10.1177/02646196221131735.
Abstract
Echolocation is used by people with low vision or blindness to support their navigation. Internationally, Orientation and Mobility (O&M) Specialists have learned echolocation skills, and how to teach them to clients, through formal workshops with a subject matter expert. However, COVID-19 has limited access to these in-person professional development opportunities. This study investigated whether an O&M professional could learn echolocation skills in a self-paced programme with only the support of a lay assistant. We developed the EchoRead Programme to equip an individual O&M Specialist to learn basic echolocation skills in 4 hours. This auto-ethnographical perspective describes how the draft programme was trialled by one trainee O&M Specialist in her home and local neighbourhood. She developed sufficient skills to complete most of the seated, standing, and walking tasks in the programme, but needed more support developing tongue-clicking and recognising driveways when shorelining fences. She found it was important to use learning environments that were graduated in physical and audio complexity. The EchoRead Programme was then trialled and revised by an experienced O&M Specialist, beginning at home, then exploring a range of venues available within a 5 km radius (the roaming range allowed during COVID lockdown). The resulting EchoRead Programme can equip O&M professionals to be self-directed in learning early echolocation skills, using online and locally available resources. This programme could be especially useful for vision professionals and their clients who have limited access to in-person learning opportunities with colleagues or peers because of geographical isolation, low resources, or a global pandemic.
Affiliation(s)
- Leah Read
- Blind Low Vision New Zealand, New Zealand
- Lil Deverell
- Swinburne University of Technology, Hawthorn, VIC 3122, Australia
11. Kreidy C, Martiniello N, Nemargut JP, Wittich W. How face masks affect the use of echolocation by individuals with visual impairments during COVID-19: International cross-sectional online survey. Interact J Med Res 2022; 11:e39366. PMID: 36223434. PMCID: PMC9604170. DOI: 10.2196/39366.
Abstract
Background: Although face masks are a critical safety measure, preliminary studies have suggested that their use may pose a problem for some users with disabilities. To date, little is known about how wearing a traditional face mask may pose a barrier to individuals with visual impairments who draw on auditory cues and echolocation techniques during independent travel. Objective: The goal of this study was to document the difficulties, if any, encountered during orientation and mobility due to the use of a face mask during the COVID-19 pandemic, and the strategies used to address these barriers. Methods: In total, 135 individuals aged 18 years and older who self-identified as being blind, being deafblind, or having low vision, and who could communicate in either English or French, completed an anonymous cross-sectional online survey between March 29 and August 23, 2021. Results: In total, 135 respondents (n=52, 38.5%, men; n=83, 61.5%, women) between the ages of 18 and 79 (mean 48.22, SD 14.48) years participated. Overall, 78 (57.7%) self-identified as blind and 57 (42.3%) as having low vision. In addition, 13 (9.6%) identified as having a combined vision and hearing loss and 3 (2.2%) as deafblind. The most common face coverings used were cloth (n=119, 88.1%) and surgical masks (n=74, 54.8%). Among the barriers raised, participants highlighted that face masks made it more difficult to locate people (n=86, 63.7%), communicate with others (n=101, 74.8%), and locate landmarks (n=82, 60.7%). Although the percentage of those who used a white cane before the pandemic did not substantially change, 6 (14.6%) of the 41 participants who were guide dog users prior to the pandemic reported no longer working with a guide dog at the time of the survey. Moreover, although guide dog users reported the highest level of confidence with independent travel before the pandemic, they indicated the lowest level of confidence a year after the pandemic began. Conclusions: These results suggest that participants were less able to draw on nonvisual cues during independent travel and social interactions due to the use of a face mask, contributing to a reduction in perceived self-confidence and independence. The findings inform the development of evidence-based recommendations to address the identified barriers.
Affiliation(s)
- Chantal Kreidy
- School of Optometry, University of Montreal, Montréal, QC, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Montreal, QC, Canada
- Natalina Martiniello
- School of Optometry, University of Montreal, Montréal, QC, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Montreal, QC, Canada
- Centre de Réadaptation Lethbridge-Layton-Mackay du Centres Intégrés Universitaires de Santé et de Services Sociaux du Centre-Ouest-de-l'Île-de-Montréal, Montreal, QC, Canada
- Joseph Paul Nemargut
- School of Optometry, University of Montreal, Montréal, QC, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Montreal, QC, Canada
- Walter Wittich
- School of Optometry, University of Montreal, Montréal, QC, Canada
- Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal, Montreal, QC, Canada
- Centre de Réadaptation Lethbridge-Layton-Mackay du Centres Intégrés Universitaires de Santé et de Services Sociaux du Centre-Ouest-de-l'Île-de-Montréal, Montreal, QC, Canada
- Institut Nazareth et Louis-Braille du Centres Intégrés de Santé et de Services Sociaux de la Montérégie-Centre, Longueuil, QC, Canada
12. McKenzie T, Schlecht SJ, Pulkki V. The auditory perceived aperture position of the transition between rooms. J Acoust Soc Am 2022; 152:1871. PMID: 36182311. DOI: 10.1121/10.0014178.
Abstract
This exploratory study investigates the phenomenon of the auditory perceived aperture position (APAP): the point at which one feels one is in the boundary between two adjoined spaces, judged using the auditory sense alone. The APAP is likely the combined perception of multiple simultaneous auditory cue changes, such as energy, reverberation time, envelopment, decay slope shape, and the direction, amplitude, and colouration of direct and reverberant sound arrivals. A framework for a rendering-free listening test is presented and conducted in situ, avoiding possible inaccuracies from acoustic simulations, impulse response measurements, and auralisation. The test assesses how close the APAP is to the physical aperture position under blindfolded conditions, for multiple source positions and two room pairs. Results indicate that the APAP is generally within ±1 m of the physical aperture position, though reverberation amount, listener orientation, and source position affect precision. Comparison with objective metrics suggests that the APAP generally falls within the period of greatest acoustical change. This study illustrates the non-trivial nature of acoustical room transitions and the detail required for their plausible reproduction in dynamic rendering and game audio engines.
Affiliation(s)
- Thomas McKenzie
- Acoustics Lab, Department of Signal Processing and Acoustics, Aalto University, 00076 Espoo, Finland
- Sebastian J Schlecht
- Acoustics Lab, Department of Signal Processing and Acoustics, Aalto University, 00076 Espoo, Finland
- Ville Pulkki
- Acoustics Lab, Department of Signal Processing and Acoustics, Aalto University, 00076 Espoo, Finland
13. Thaler L, Norman LJ, De Vos HPJC, Kish D, Antoniou M, Baker CJ, Hornikx MCJ. Human echolocators have better localization off axis. Psychol Sci 2022; 33:1143-1153. PMID: 35699555. DOI: 10.1177/09567976211068070.
Abstract
Here, we report novel empirical results from a psychophysical experiment in which we tested the echolocation abilities of nine blind adult human experts in click-based echolocation. We found that they had better acuity in localizing a target and used lower intensity emissions (i.e., mouth clicks) when a target was placed 45° off to the side compared with when it was placed at 0° (straight ahead). We provide a possible explanation of the behavioral result in terms of binaural-intensity signals, which appear to change more rapidly around 45°. The finding that echolocators have better echo-localization off axis is surprising, because for human source localization (i.e., regular spatial hearing), it is well known that performance is best when targets are straight ahead (0°) and decreases as targets move farther to the side. This may suggest that human echolocation and source hearing rely on different acoustic cues and that human spatial hearing has more facets than previously thought.
Affiliation(s)
- L J Norman
- Department of Psychology, Durham University
- H P J C De Vos
- Department of the Built Environment, Eindhoven University of Technology
- D Kish
- World Access for the Blind, Placentia, California
- M Antoniou
- Department of Electronic Electrical and Systems Engineering, University of Birmingham
- C J Baker
- Department of Electronic Electrical and Systems Engineering, University of Birmingham
- M C J Hornikx
- Department of the Built Environment, Eindhoven University of Technology
14. Andrade R, Baker S, Waycott J, Vetere F. A participatory design approach to creating echolocation-enabled virtual environments. ACM Trans Access Comput 2022. DOI: 10.1145/3516448.
Abstract
As virtual environments—in the form of videogames and augmented and virtual reality experiences—become more popular, it is important to ensure that they are accessible to all. Previous research has identified echolocation as a useful interaction approach to enable people with visual impairment to access virtual environments. In this paper, we further investigate the usefulness of echolocation to explore virtual environments. We follow a participatory design approach that comprised a focus group session coupled with two fast prototyping and evaluation iterations. During the focus group session, expert echolocators produced a series of seven design recommendations, of which we implemented and trialed four. Our trials revealed that the use of ambient sounds, the ability to place landmarks, directional control, and the ability to use pre-recorded mouth-clicks produced by expert echolocators improved the overall experience of our participants by facilitating the detection of openings and obstacles. The recommendations presented and evaluated in this paper may help to develop virtual environments that support a broader range of users while recognising the value of the lived experience of people with disability as a source of knowledge.
15. de Sousa AA, Todorov OS, Proulx MJ. A natural history of vertebrate vision loss: Insight from mammalian vision for human visual function. Neurosci Biobehav Rev 2022; 134:104550. PMID: 35074313. DOI: 10.1016/j.neubiorev.2022.104550.
Abstract
Research on the origin of vision and vision loss in naturally "blind" animal species can reveal the tasks that vision fulfills and the brain's role in visual experience. Models that incorporate evolutionary history, natural variation in visual ability, and experimental manipulations can help disentangle visual ability at a superficial level from behaviors linked to vision but not solely reliant upon it, and could assist the translation of ophthalmological research in animal models to human treatments. To unravel the similarities between blind individuals and blind species, we review concepts of 'blindness' and its behavioral correlates across a range of species. We explore the ancestral emergence of vision in vertebrates, and the loss of vision in blind species with reference to an evolution-based classification scheme. We applied phylogenetic comparative methods to a mammalian tree to explore the evolution of visual acuity using ancestral state estimations. Future research into the natural history of vision loss could help elucidate the function of vision and inspire innovations in how to address vision loss in humans.
Affiliation(s)
- Alexandra A de Sousa
- Centre for Health and Cognition, Bath Spa University, Bath, United Kingdom
- UKRI Centre for Accessible, Responsible & Transparent Artificial Intelligence (ART:AI), University of Bath, United Kingdom
- Orlin S Todorov
- School of Biological Sciences, The University of Queensland, St Lucia, Queensland, Australia
- Michael J Proulx
- UKRI Centre for Accessible, Responsible & Transparent Artificial Intelligence (ART:AI), University of Bath, United Kingdom
- Department of Psychology, REVEAL Research Centre, University of Bath, Bath, United Kingdom
16. Neidhardt A, Schneiderwind C, Klein F. Perceptual matching of room acoustics for auditory augmented reality in small rooms: Literature review and theoretical framework. Trends Hear 2022; 26:23312165221092919. PMID: 35505625. PMCID: PMC9073123. DOI: 10.1177/23312165221092919.
Abstract
For the realization of auditory augmented reality (AAR), it is important that the room acoustical properties of the virtual elements are perceived in agreement with the acoustics of the actual environment. This perceptual matching of room acoustics is the subject reviewed in this paper. Realizations of AAR that fulfill the listeners’ expectations were achieved based on pre-characterization of the room acoustics, for example, by measuring acoustic impulse responses or creating detailed room models for acoustic simulations. For future applications, the goal is to realize an online adaptation in (close to) real-time. Perfect physical matching is hard to achieve with these practical constraints. For this reason, an understanding of the essential psychoacoustic cues is of interest and will help to explore options for simplifications. This paper reviews a broad selection of previous studies and derives a theoretical framework to examine possibilities for psychoacoustical optimization of room acoustical matching.
17. Downey G. Echolocation among the blind: an argument for an ontogenetic turn. J R Anthropol Inst 2021. DOI: 10.1111/1467-9655.13607.
Affiliation(s)
- Greg Downey
- Macquarie School of Social Sciences, Macquarie University, Room B514, Level 5, 25B Wally's Walk, NSW 2109, Australia
18. Thaler L, Norman LJ. No effect of 10-week training in click-based echolocation on auditory localization in people who are blind. Exp Brain Res 2021; 239:3625-3633. PMID: 34609546. PMCID: PMC8599323. DOI: 10.1007/s00221-021-06230-5.
Abstract
What factors are important in the calibration of mental representations of auditory space? A substantial body of research investigating the audiospatial abilities of people who are blind has shown that visual experience might be an important factor for accurate performance in some audiospatial tasks. Yet, it has also been shown that long-term experience using click-based echolocation might play a similar role, with blind expert echolocators demonstrating auditory localization abilities that are superior to those of people who are blind and who do not use click-based echolocation (Vercillo et al., Neuropsychologia 67:35-40, 2015). Based on this hypothesis, we might predict that training in click-based echolocation leads to improvement in performance in auditory localization tasks in people who are blind. Here we investigated this hypothesis in a sample of 12 adults who have been blind from birth. We did not find evidence for an improvement in auditory localization after 10 weeks of training, despite significant improvement in echolocation ability. It is possible that longer-term experience with click-based echolocation is required for effects to develop, or that other factors explain the association between echolocation expertise and superior auditory localization. Considering the practical relevance of click-based echolocation for people who are visually impaired, future research should address these questions.
Affiliation(s)
- Lore Thaler
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
- Liam J Norman
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
19. Norman LJ, Dodsworth C, Foresteire D, Thaler L. Human click-based echolocation: Effects of blindness and age, and real-life implications in a 10-week training program. PLoS One 2021; 16:e0252330. PMID: 34077457. PMCID: PMC8171922. DOI: 10.1371/journal.pone.0252330.
Abstract
Understanding the factors that determine if a person can successfully learn a novel sensory skill is essential for understanding how the brain adapts to change, and for providing rehabilitative support for people with sensory loss. We report a training study investigating the effects of blindness and age on the learning of a complex auditory skill: click-based echolocation. Blind and sighted participants of various ages (21-79 yrs; median blind: 45 yrs; median sighted: 26 yrs) trained in 20 sessions over the course of 10 weeks in various practical and virtual navigation tasks. Blind participants also took part in a 3-month follow-up survey assessing the effects of the training on their daily life. We found that both sighted and blind people improved considerably on all measures, and in some cases performed comparably to expert echolocators at the end of training. Somewhat surprisingly, sighted people performed better than those who were blind in some cases, although our analyses suggest that this might be better explained by the younger age (or superior binaural hearing) of the sighted group. Importantly, however, neither age nor blindness was a limiting factor in participants' rate of learning (i.e. their difference in performance from the first to the final session) or in their ability to apply their echolocation skills to novel, untrained tasks. Furthermore, in the follow-up survey, all participants who were blind reported improved mobility, and 83% reported better independence and wellbeing. Overall, our results suggest that the ability to learn click-based echolocation is not strongly limited by age or level of vision. This has positive implications for the rehabilitation of people with vision loss or in the early stages of progressive vision loss.
Affiliation(s)
- Liam J. Norman
- Department of Psychology, Durham University, Durham, United Kingdom
- Lore Thaler
- Department of Psychology, Durham University, Durham, United Kingdom
20. Kolarik AJ, Moore BCJ, Cirstea S, Aggius-Vella E, Gori M, Campus C, Pardhan S. Factors affecting auditory estimates of virtual room size: Effects of stimulus, level, and reverberation. Perception 2021; 50:646-663. PMID: 34053354. DOI: 10.1177/03010066211020598.
Abstract
When vision is unavailable, auditory level and reverberation cues provide important spatial information regarding the environment, such as the size of a room. We investigated how room-size estimates were affected by stimulus type, level, and reverberation. In Experiment 1, 15 blindfolded participants estimated room size after performing a distance bisection task in virtual rooms that were either anechoic (with level cues only) or reverberant (with level and reverberation cues) with a relatively short reverberation time of T60 = 400 milliseconds. Speech, noise, or clicks were presented at distances between 1.9 and 7.1 m. The reverberant room was judged to be significantly larger than the anechoic room (p < .05) for all stimuli. In Experiment 2, only the reverberant room was used and the overall level of all sounds was equalized, so only reverberation cues were available. Ten blindfolded participants took part. Room-size estimates were significantly larger for speech than for clicks or noise. The results show that when level and reverberation cues are present, reverberation increases judged room size. Even relatively weak reverberation cues provide room-size information, which could potentially be used by blind or visually impaired individuals encountering novel rooms.
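For readers unfamiliar with the T60 = 400 ms figure above: T60 is the time a room's sound level takes to decay by 60 dB. The study used virtual rooms, not the toy model below; the following sketch only illustrates what a tail with that T60 sounds like, using exponentially decaying noise:

```python
import numpy as np

def reverb_tail(t60_s=0.4, fs=44_100, dur_s=1.0, seed=0):
    """Exponentially decaying noise as a crude reverberant tail.

    T60 is the time for the level to drop by 60 dB, so the amplitude
    envelope is 10 ** (-3 * t / T60).
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    return rng.standard_normal(t.size) * 10.0 ** (-3.0 * t / t60_s)

tail = reverb_tail()                        # T60 = 400 ms, as in Experiment 1
click = np.zeros(4_410); click[0] = 1.0     # unit impulse standing in for a click
reverberant_click = np.convolve(click, tail)
```

Convolving any dry stimulus (speech, noise, or clicks, as in the experiments) with such a tail adds the reverberation cue whose contribution to room-size judgments the study isolates.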
Affiliation(s)
- Andrew J Kolarik
- Anglia Ruskin University, Cambridge, UK
- Brian C J Moore
- Anglia Ruskin University, Cambridge, UK
- University of Cambridge, Cambridge, UK
- Silvia Cirstea
- Anglia Ruskin University, Cambridge, UK
- Elena Aggius-Vella
- Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Institute for Mind, Brain and Technology, Herzeliya, Israel
- Claudio Campus
- Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
21. Kritly L, Sluyts Y, Pelegrín-García D, Glorieux C, Rychtáriková M. Discrimination of 2D wall textures by passive echolocation for different reflected-to-direct level difference configurations. PLoS One 2021; 16:e0251397. PMID: 34043655. PMCID: PMC8158938. DOI: 10.1371/journal.pone.0251397.
Abstract
In this work, we study people's ability to discriminate between different 2D textures of walls by passively listening to a pre-recorded tongue click in an auralized echolocation scenario. In addition, we investigated the impact of artificially enhancing the early reflection magnitude by 6 dB and of removing the direct component while equalizing the loudness. Listening test results for different textures, ranging from a flat wall to a staircase, were assessed using a two-alternative forced-choice (2AFC) method, in which 14 sighted, untrained participants indicated which 2 of 3 presented stimuli they perceived as equal. The average ability of the listeners to discriminate between different textures was found to be significantly higher for walls at 5 m distance, without overlap between the reflected and direct sound, than for the same walls at 0.8 m distance. Enhancing the reflections, as well as removing the direct sound, was found to be beneficial for differentiating textures. This finding highlights the importance of forward masking in the discrimination process. The overall texture discriminability was found to be larger for walls reflecting with a higher spectral coloration.
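The two stimulus manipulations described above (boosting early reflections by 6 dB; removing the direct sound with loudness equalization) can be sketched on a toy impulse response. The RMS-based loudness equalization below is an assumption, since the abstract does not specify the method used:

```python
import numpy as np

def manipulate_ir(ir, direct_end, boost_db=6.0, remove_direct=False):
    """Boost the reflected part of an impulse response, or remove the
    direct part and (crudely) re-equalise loudness by matching total RMS.

    ir:         1-D impulse response, direct sound first.
    direct_end: sample index where the direct sound ends.
    """
    out = ir.copy()
    if remove_direct:
        out[:direct_end] = 0.0
        out *= np.sqrt(np.sum(ir ** 2) / np.sum(out ** 2))  # RMS equalisation
    else:
        out[direct_end:] *= 10.0 ** (boost_db / 20.0)       # +6 dB ~ x2 amplitude
    return out

ir = np.zeros(2_048); ir[0] = 1.0; ir[600] = 0.3    # toy direct sound + one echo
boosted = manipulate_ir(ir, direct_end=100)          # reflections raised by 6 dB
echo_only = manipulate_ir(ir, direct_end=100, remove_direct=True)
```

Convolving the recorded tongue click with each variant would yield the enhanced and direct-removed stimulus conditions the listening test compares.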
Affiliation(s)
- Léopold Kritly
- Research Department of Architecture—Building and Room Acoustics, Faculty of Architecture, KU Leuven, Brussel, Belgium
- EPF–Graduate School of Engineering, Sceaux, France
- Yannick Sluyts
- Research Department of Architecture—Building and Room Acoustics, Faculty of Architecture, KU Leuven, Brussel, Belgium
- David Pelegrín-García
- ZMB Lab. of Acoustics, Department of Physics and Astronomy, KU Leuven, Heverlee, Belgium
- Christ Glorieux
- ZMB Lab. of Acoustics, Department of Physics and Astronomy, KU Leuven, Heverlee, Belgium
- Monika Rychtáriková
- Research Department of Architecture—Building and Room Acoustics, Faculty of Architecture, KU Leuven, Brussel, Belgium
- Faculty of Civil Engineering, STU Bratislava, Bratislava, Slovakia
22. Effectiveness of time-varying echo information for target geometry identification in bat-inspired human echolocation. PLoS One 2021; 16:e0250517. PMID: 33951069. PMCID: PMC8099053. DOI: 10.1371/journal.pone.0250517.
Abstract
Bats use echolocation through flexible active sensing via ultrasounds to identify environments suitable for their habitat and foraging. Mimicking the sensing strategies of bats, this study examined how humans acquire new acoustic-sensing abilities and proposed effective strategies for humans. A target geometry identification experiment, involving 15 sighted people without experience of echolocation, was conducted using two targets with different geometries, based on a new sensing system. Broadband frequency-modulated pulses with short inter-pulse intervals (16 ms) were used as a synthetic echolocation signal. Such pulses mimic the buzz signals emitted by bats prior to capturing their prey. The study participants emitted the signal from a loudspeaker by tapping on Android devices. Because the signal included high-frequency components up to 41 kHz, the emitted signal and the echoes from a stationary or rotating target were recorded using a 1/7-scaled miniature dummy head. Binaural sounds, whose pitch was down-converted, were presented through headphones. In this way, time-varying echo information was made available as an acoustic cue for target geometry identification under the rotating condition, as opposed to the stationary one. Participants identified the geometries under the rotating condition both in trials with answer feedback given immediately after they responded (training trials) and in trials without feedback (test trials). The majority of participants reported using time-varying patterns in echo intensity, timbre, and/or pitch under the rotating condition. The results suggest that using time-varying patterns in echo intensity, timbre, and/or pitch enables humans to identify target geometries. However, performance differed significantly between conditions (stationary vs. rotating) only in the test trials. This difference suggests that time-varying echo information is effective for identifying target geometry through human echolocation, especially when echolocators are unable to obtain answer feedback during sensing.
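The abstract does not state how the pitch down-conversion was performed. A common approach in bat bioacoustics, consistent with the 1/7-scaled dummy head, is "time expansion" by the scale factor; the sketch below assumes that method, and the factor of 7, the sample rate, and the toy FM pulse are all illustrative:

```python
import numpy as np

def pitch_down_convert(recording, fs, factor=7):
    """Down-convert by playback-rate scaling ("time expansion"): reading
    samples recorded at fs through a clock of fs / factor divides every
    frequency by `factor` (41 kHz -> ~5.9 kHz) while stretching the
    duration by the same factor.
    """
    return recording, fs // factor     # same samples, slower playback clock

fs = 192_000                           # high rate needed to capture 41 kHz
t = np.arange(int(0.005 * fs)) / fs    # a 5-ms toy downward FM pulse
pulse = np.sin(2 * np.pi * (41_000 * t - 2_000_000 * t ** 2))
audio, playback_fs = pitch_down_convert(pulse, fs)   # play at ~27.4 kHz
```

Time expansion preserves the relative spectro-temporal structure of the echoes, which is what makes the time-varying intensity, timbre, and pitch cues audible to human listeners.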
23. Andrade R, Waycott J, Baker S, Vetere F. Echolocation as a means for people with visual impairment (PVI) to acquire spatial knowledge of virtual space. ACM Trans Access Comput 2021. DOI: 10.1145/3448273.
Abstract
In virtual environments, spatial information is communicated visually. This prevents people with visual impairment (PVI) from accessing such spaces. In this article, we investigate whether echolocation could be used as a tool to convey spatial information by answering the following research questions: What features of virtual space can be perceived by PVI through the use of echolocation? How does active echolocation support PVI in acquiring spatial knowledge of a virtual space? And what are PVI's opinions regarding the use of echolocation to acquire landmark and survey knowledge of virtual space? To answer these questions, we conducted a two-part within-subjects experiment with 12 people who were blind or had a visual impairment. We found that the size and materials of rooms and 90-degree turns were detectable through echolocation, that participants preferred using echoes derived from footsteps rather than from artificial sound pulses, and that echolocation supported the acquisition of mental maps of a virtual space. Ultimately, we propose that appropriately designed echolocation in virtual environments improves understanding of spatial information and access to digital games for PVI.
Affiliation(s)
- Ronny Andrade
- The University of Melbourne, Parkville, VIC, Australia
- Jenny Waycott
- The University of Melbourne, Parkville, VIC, Australia
- Steven Baker
- The University of Melbourne, Parkville, VIC, Australia
- Frank Vetere
- The University of Melbourne, Parkville, VIC, Australia
24. Ptito M, Bleau M, Djerourou I, Paré S, Schneider FC, Chebat DR. Brain-machine interfaces to assist the blind. Front Hum Neurosci 2021; 15:638887. PMID: 33633557. PMCID: PMC7901898. DOI: 10.3389/fnhum.2021.638887.
Abstract
The loss or absence of vision is probably one of the most incapacitating events that can befall a human being. The importance of vision for humans is also reflected in brain anatomy, as approximately one third of the human brain is devoted to vision. It is therefore unsurprising that throughout history many attempts have been undertaken to develop devices aiming to substitute for a missing visual capacity. In this review, we present two concepts that have been prevalent over the last two decades. The first concept is sensory substitution, which refers to the use of another sensory modality to perform a task that is normally primarily subserved by the lost sense. The second concept is cross-modal plasticity, which occurs when loss of input in one sensory modality leads to reorganization in the brain representation of other sensory modalities. Both phenomena are training-dependent. We also briefly describe the history of blindness from ancient times to modernity, and then proceed to address the means that have been used to help blind individuals, with an emphasis on modern technologies, both invasive (various types of surgical implants) and non-invasive devices. With the advent of brain imaging, it has become possible to peer into the neural substrates of sensory substitution and highlight the magnitude of the plastic processes that lead to a rewired brain. Finally, we address the important question of the value and practicality of the available technologies and future directions.
Collapse
Affiliation(s)
- Maurice Ptito
- École d’Optométrie, Université de Montréal, Montréal, QC, Canada
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
| | - Maxime Bleau
- École d’Optométrie, Université de Montréal, Montréal, QC, Canada
| | - Ismaël Djerourou
- École d’Optométrie, Université de Montréal, Montréal, QC, Canada
| | - Samuel Paré
- École d’Optométrie, Université de Montréal, Montréal, QC, Canada
| | - Fabien C. Schneider
- TAPE EA7423 University of Lyon-Saint Etienne, Saint Etienne, France
- Neuroradiology Unit, University Hospital of Saint-Etienne, Saint-Etienne, France
| | - Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israël
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel, Israël
| |
Collapse
|
25
|
Castillo-Serrano JG, Norman LJ, Foresteire D, Thaler L. Increased emission intensity can compensate for the presence of noise in human click-based echolocation. Sci Rep 2021; 11:1750. [PMID: 33462283 PMCID: PMC7813859 DOI: 10.1038/s41598-021-81220-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Accepted: 01/04/2021] [Indexed: 11/09/2022] Open
Abstract
Echolocating bats adapt their emissions to succeed in noisy environments. In the present study we investigated whether echolocating humans can detect a sound-reflecting surface in the presence of noise, and whether the intensity of echolocation emissions (i.e. clicks) changes in a systematic pattern. We tested people who were blind and had experience in echolocation, as well as blind and sighted people who had no experience in echolocation prior to the study. We used an echo-detection paradigm in which participants listened to binaural recordings of echolocation sounds (i.e. they did not make their own click emissions), and in which the intensity of emissions and echoes changed adaptively based on participant performance (intensity of echoes was yoked to intensity of emissions). We found that emission intensity had to increase systematically to compensate for weaker echoes relative to background noise. In fact, emission intensity increased so that the spectral power of echoes exceeded the spectral power of noise by 12 dB in the 4-kHz and 5-kHz frequency bands. The effects were the same across all participant groups, suggesting that this effect occurs independently of long-term experience with echolocation. Our findings demonstrate for the first time that people can echolocate in the presence of noise, and suggest that one potential strategy for dealing with noise is to increase emission intensity so as to maintain the signal-to-noise ratio of certain spectral components of the echoes.
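To make the compensation principle concrete, here is a minimal Python sketch (not the authors' code): emission gain is raised until the echo's power in a band around the click's peak frequency exceeds the noise power by the 12 dB margin reported above. The signals, band edges, and step size are illustrative assumptions.

```python
import numpy as np

def band_power_db(signal, fs, f_lo, f_hi):
    """Power (dB) of `signal` within [f_lo, f_hi) Hz, estimated via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return 10.0 * np.log10(spectrum[band].sum() + 1e-12)

fs = 44100
t = np.arange(int(0.01 * fs)) / fs                 # 10 ms snippets (illustrative)
echo = 0.01 * np.sin(2 * np.pi * 4000 * t)         # weak 4 kHz echo component
noise = 0.05 * np.random.randn(len(t))             # background noise

gain, margin_db = 1.0, 12.0                        # 12 dB margin from the study
while (band_power_db(gain * echo, fs, 3500, 4500)
       - band_power_db(noise, fs, 3500, 4500)) < margin_db:
    gain *= 1.122                                  # ~1 dB per step (echo yoked to emission)
print(f"emission raised by {20 * np.log10(gain):.1f} dB to reach the margin")
```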
Collapse
Affiliation(s)
- J G Castillo-Serrano
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
| | - L J Norman
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
| | - D Foresteire
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
| | - L Thaler
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK.
| |
Collapse
|
26
|
Fujitsuka Y, Sumiya M, Ashihara K, Yoshino K, Nagatani Y, Kobayasi KI, Hiryu S. Two-dimensional shape discrimination by sighted people using simulated virtual echoes. JASA EXPRESS LETTERS 2021; 1:011202. [PMID: 36154088 DOI: 10.1121/10.0003194] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
In this study, a new research method using psychoacoustic experiments and acoustic simulations is proposed for human echolocation research. A shape discrimination experiment was conducted with sighted people using pitch-converted virtual echoes from targets of dissimilar two-dimensional (2D) shapes. These echoes were simulated using a three-dimensional acoustic simulation based on the finite-difference time-domain method of Bossy, Talmat, and Laugier [(2004). J. Acoust. Soc. Am. 115, 2314-2324]. The experimental and simulation results suggest that echo timbre and pitch, which are determined by sound interference, may be effective acoustic cues for 2D shape discrimination. The newly developed research method may lead to more efficient future studies of human echolocation.
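For readers unfamiliar with the simulation technique, below is a toy one-dimensional FDTD sketch of acoustic propagation and reflection. The paper's simulations were three-dimensional and far more detailed, so the grid size, source, and boundaries here are illustrative assumptions only.

```python
import numpy as np

c, dx = 343.0, 0.01                       # sound speed (m/s), grid spacing (m)
dt = 0.5 * dx / c                         # time step satisfying the CFL condition
n, steps = 400, 900                       # 4 m domain, enough steps for echoes
p_prev, p, p_next = (np.zeros(n) for _ in range(3))

coef = (c * dt / dx) ** 2
for step in range(steps):
    p[1] += np.exp(-0.5 * ((step - 30) / 8.0) ** 2)   # Gaussian source pulse
    p_next[1:-1] = (2 * p[1:-1] - p_prev[1:-1]
                    + coef * (p[2:] - 2 * p[1:-1] + p[:-2]))
    p_next[0] = p_next[-1] = 0.0          # fixed ends reflect the pulse back
    p_prev, p, p_next = p, p_next, p_prev

# `p` now holds the pressure field after the direct pulse and its reflections.
```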
Collapse
Affiliation(s)
- Yumi Fujitsuka
- Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, 610-0394, Japan
| | - Miwa Sumiya
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Seika-cho, 619-0289, Japan
| | - Kaoru Ashihara
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba 305-8566, Japan
| | - Kazuki Yoshino
- Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, 610-0394, Japan
| | - Yoshiki Nagatani
- Pixie Dust Technologies, Inc., Chiyoda-ku, 101-0061, Japan
| | - Kohta I Kobayasi
- Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, 610-0394, Japan
| | - Shizuko Hiryu
- Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, 610-0394, Japan
| |
Collapse
|
27
|
Norman LJ, Thaler L. Perceptual constancy with a novel sensory skill. J Exp Psychol Hum Percept Perform 2020; 47:269-281. [PMID: 33271045 PMCID: PMC7818673 DOI: 10.1037/xhp0000888] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Making sense of the world requires perceptual constancy: the stable perception of an object across changes in one’s sensation of it. To investigate whether constancy is intrinsic to perception, we tested whether humans can learn a form of constancy that is unique to a novel sensory skill (here, the perception of objects through click-based echolocation). Participants judged whether two echoes were different either because (a) the clicks were different, or (b) the objects were different. For differences carried through spectral changes (but not level changes), blind expert echolocators spontaneously showed a high constancy ability (mean d′ = 1.91) compared to sighted and blind people new to echolocation (mean d′ = 0.69). Crucially, sighted controls improved rapidly in this ability through training, suggesting that constancy emerges even in a domain with which the perceiver has no prior experience. This provides strong evidence that constancy is intrinsic to human perception. The study shows that people who learn a new skill to sense their environment (here, listening to sound echoes) can correctly represent the physical properties of objects, a result with implications for effectively rehabilitating people with sensory loss.
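As a pointer for readers, the sensitivity index d′ reported above is computed from hit and false-alarm rates as d′ = z(hit rate) - z(false-alarm rate). The sketch below uses made-up rates chosen to land near the experts' mean.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates yielding a d' near the experts' mean of 1.91:
print(round(d_prime(0.85, 0.20), 2))   # -> 1.88
```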
Collapse
|
28
|
Norman LJ, Thaler L. Stimulus uncertainty affects perception in human echolocation: Timing, level, and spectrum. J Exp Psychol Gen 2020; 149:2314-2331. [PMID: 32324025 PMCID: PMC7727089 DOI: 10.1037/xge0000775] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The human brain may use recent sensory experience to create sensory templates that are then compared to incoming sensory input, that is, "knowing what to listen for." This can lead to greater perceptual sensitivity, as long as the relevant properties of the target stimulus can be reliably estimated from past sensory experiences. Echolocation is an auditory skill probably best understood in bats, but humans can also echolocate. Here we investigated for the first time whether echolocation in humans involves the use of sensory templates derived from recent sensory experiences. Our results showed that when there was certainty in the acoustic properties of the echo relative to the emission, whether in temporal onset, spectral content, or level, people detected the echo more accurately than when there was uncertainty. In addition, we found that people were more accurate when the emission's spectral content was certain but, surprisingly, not when either its level or temporal onset was certain. Importantly, the lack of an effect of the emission's temporal onset is counter to what has been found previously for tasks using non-echolocation sounds, suggesting that the underlying mechanisms might differ for echolocation and non-echolocation sounds. Moreover, the effects of stimulus certainty were no different for people with and without experience in echolocation, suggesting that stimulus-specific sensory templates can be used in a skill that people have never used before. From an applied perspective, our results suggest that echolocation instruction should encourage users to make clicks that are similar to one another in spectral content. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
Collapse
|
29
|
Navigation and perception of spatial layout in virtual echo-acoustic space. Cognition 2020; 197:104185. [PMID: 31951856 PMCID: PMC7033557 DOI: 10.1016/j.cognition.2020.104185] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Revised: 01/03/2020] [Accepted: 01/07/2020] [Indexed: 11/20/2022]
Abstract
Successful navigation involves finding the way, planning routes, and avoiding collisions. Whilst previous research has shown that people can navigate using non-visual cues, it is not clear to what degree learned non-visual navigational abilities generalise to 'new' environments. Furthermore, the ability to successfully avoid collisions has not been investigated separately from the ability to perceive spatial layout or to orient oneself in space. Here, we address these important questions using a virtual echolocation paradigm in sighted people. Fourteen sighted blindfolded participants completed 20 virtual navigation training sessions over the course of 10 weeks. In separate sessions, before and after training, we also tested their ability to perceive the spatial layout of virtual echo-acoustic space. Furthermore, three blind echolocation experts completed the tasks without training, thus validating our virtual echo-acoustic paradigm. We found that over the course of 10 weeks sighted people became better at navigating, i.e. they reduced collisions and the time needed to complete the route, and increased their success rates. This also generalised to 'new' (i.e. untrained) virtual spaces. In addition, after training, their ability to judge spatial layout was better than before training. The data suggest that participants acquired a 'true' sensory-driven navigational ability using echo-acoustics. In addition, we show that people not only developed navigational skills related to avoidance of collisions and finding safe passage, but also processes related to spatial perception and orienting. In sum, our results provide strong support for the idea that navigation is a skill that people can acquire via various modalities, here echolocation.
Collapse
|
30
|
The Echobot: An automated system for stimulus presentation in studies of human echolocation. PLoS One 2019; 14:e0223327. [PMID: 31584971 PMCID: PMC6777781 DOI: 10.1371/journal.pone.0223327] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2019] [Accepted: 09/18/2019] [Indexed: 11/19/2022] Open
Abstract
Echolocation is the detection and localization of objects by listening to the sounds they reflect. Early studies of human echolocation used real objects that the experimenter positioned manually before each experimental trial. The advantage of this procedure is the use of realistic stimuli; the disadvantage is that manually shifting stimuli between trials is very time-consuming, making it difficult to use psychophysical methods based on the presentation of hundreds of stimuli. The present study tested a new automated system for stimulus presentation, the Echobot, that overcomes this disadvantage. We tested 15 sighted participants with no prior experience of echolocation on their ability to detect the reflection of a loudspeaker-generated click from a 50 cm circular aluminum disk. The results showed that most participants were able to detect the sound reflections. Performance varied considerably, however, with mean individual detection thresholds ranging from 1 to 3.2 m distance from the disk. Three participants from the loudspeaker experiment were also tested using self-generated vocalizations. One participant performed better using vocalizations, and one much worse, than in the loudspeaker experiment, illustrating that performance in echolocation experiments using vocalizations measures not only the ability to detect sound reflections but also the ability to produce efficient echolocation signals. Overall, the present experiments show that the Echobot may be a useful tool in research on human echolocation.
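The abstract does not specify the adaptive procedure used, but an automated rig like the Echobot is exactly what makes staircase methods practical. The following hypothetical two-down/one-up staircase over target distance illustrates the idea; the observer model, step size, and trial count are invented.

```python
import random

def staircase(respond, start_m=1.0, step_m=0.2, trials=40):
    """Two-down/one-up: two correct -> farther (harder), one error -> nearer."""
    distance, correct_run, history = start_m, 0, []
    for _ in range(trials):
        history.append(distance)
        if respond(distance):                 # True = reflection detected
            correct_run += 1
            if correct_run == 2:
                distance += step_m
                correct_run = 0
        else:
            distance = max(0.2, distance - step_m)
            correct_run = 0
    return sum(history[-10:]) / 10            # crude threshold estimate (m)

# Toy observer whose detection probability falls off with distance:
print(staircase(lambda d: random.random() < max(0.0, 1.0 - d / 4.0)))
```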
Collapse
|
31
|
Thaler L, Zhang X, Antoniou M, Kish DC, Cowie D. The flexible action system: Click-based echolocation may replace certain visual functionality for adaptive walking. J Exp Psychol Hum Percept Perform 2019; 46:21-35. [PMID: 31556685 PMCID: PMC6936248 DOI: 10.1037/xhp0000697] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
People use sensory, in particular visual, information to guide actions such as walking around obstacles, grasping, or reaching. However, it is presently unclear how malleable the sensorimotor system is. The present study investigated this by measuring how click-based echolocation may be used to avoid obstacles while walking. We tested 7 blind echolocation experts, as well as 14 sighted and 10 blind echolocation beginners. For comparison, we also tested 10 sighted participants who used vision. To maximize the relevance of our research for people with vision impairments, we also included a condition in which the long cane was used, and we considered obstacles at different elevations. Motion capture and sound data were acquired simultaneously. We found that echolocation experts walked just as fast as sighted participants using vision, and faster than either sighted or blind echolocation beginners. Walking paths of echolocation experts indicated early and smooth adjustments, similar to those shown by sighted people using vision and different from the later and more abrupt adjustments of beginners. Further, for all participants, the use of echolocation significantly decreased collision frequency with obstacles at head level, but not at ground level. Further analyses showed that participants who made clicks with higher spectral frequency content walked faster, and that for experts higher clicking rates were associated with faster walking. The results highlight that people can use novel sensory information (here, echolocation) to guide actions, demonstrating the action system’s ability to adapt to changes in sensory input. They also highlight that regular use of echolocation enhances sensory-motor coordination for walking in blind people. Vision loss has negative consequences for people’s mobility. The current report demonstrates that echolocation might replace certain visual functionality for adaptive walking, and it highlights that echolocation and the long cane are complementary mobility techniques. The findings have direct relevance for professionals involved in mobility instruction and for people who are blind.
Collapse
Affiliation(s)
| | - Xinyu Zhang
- School of Information and Electronics, Beijing Institute of Technology
| | - Michail Antoniou
- Department of Electronic Electrical and Systems Engineering, School of Engineering, University of Birmingham
| | | | | |
Collapse
|
32
|
Thaler L, De Vos HPJC, Kish D, Antoniou M, Baker CJ, Hornikx MCJ. Human Click-Based Echolocation of Distance: Superfine Acuity and Dynamic Clicking Behaviour. J Assoc Res Otolaryngol 2019; 20:499-510. [PMID: 31286299 PMCID: PMC6797687 DOI: 10.1007/s10162-019-00728-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2018] [Accepted: 06/06/2019] [Indexed: 01/25/2023] Open
Abstract
Some people who are blind have trained themselves in echolocation using mouth clicks. Here, we provide the first report of psychophysical and clicking data during echolocation of distance from a group of 8 blind people with experience in mouth-click-based echolocation (daily use for > 3 years). We found that experienced echolocators can detect changes in distance of 3 cm at a reference distance of 50 cm, and of 7 cm at a reference distance of 150 cm, regardless of object size (i.e. 28.5 cm vs. 80 cm diameter disks). Participants made more intense mouth clicks, and more of them, for weaker reflectors (i.e. the same object at a farther distance, or a smaller object at the same distance), but the number and intensity of clicks were adjusted independently of one another. The acuity we found is better than previous estimates based on samples of sighted participants without experience in echolocation or on individual experienced participants (i.e. single blind echolocators tested), and it highlights adaptation of the perceptual system in blind human echolocators. Further, the dynamic adaptive clicking behaviour we observed suggests that the number and intensity of emissions serve separate functions to increase the signal-to-noise ratio (SNR). The data may serve as an inspiration for low-cost (i.e. non-array-based) artificial ‘cognitive’ sonar and radar systems, i.e. signal design, adaptive pulse repetition rate and intensity. They will also be useful for instruction and guidance of new users of echolocation.
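Expressed as Weber fractions, the two thresholds above imply roughly constant relative acuity. A two-line check using only the numbers from the abstract:

```python
# Thresholds from the abstract, expressed as Weber fractions (delta / reference):
for ref_cm, delta_cm in [(50, 3), (150, 7)]:
    print(f"{delta_cm} cm at {ref_cm} cm -> {delta_cm / ref_cm:.3f}")
# -> 0.060 and 0.047: relative distance acuity stays roughly constant.
```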
Collapse
Affiliation(s)
- Lore Thaler
- Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK.
| | - H P J C De Vos
- Eindhoven University of Technology, Eindhoven, The Netherlands
| | - D Kish
- World Access for the Blind, Placentia, CA, USA
| | - M Antoniou
- Department of Electronic Electrical and Systems Engineering, University of Birmingham, Birmingham, UK
| | - C J Baker
- Department of Electronic Electrical and Systems Engineering, University of Birmingham, Birmingham, UK
| | - M C J Hornikx
- Eindhoven University of Technology, Eindhoven, The Netherlands
| |
Collapse
|
33
|
Andreasen A, Geronazzo M, Nilsson NC, Zovnercuka J, Konovalov K, Serafin S. Auditory Feedback for Navigation with Echoes in Virtual Environments: Training Procedure and Orientation Strategies. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2019; 25:1876-1886. [PMID: 30794514 DOI: 10.1109/tvcg.2019.2898787] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Being able to hear objects in an environment, for example using echolocation, is a challenging task. The main goal of the current work is to use virtual environments (VEs) to train novice users to navigate using echolocation. Previous studies have shown that musicians are able to differentiate sound pulses from reflections. This paper presents design patterns for VE simulators covering both training and testing procedures, while classifying users' navigation strategies in the VE. Moreover, the paper presents features that increase users' performance in VEs. We report the findings of two user studies: a pilot test that helped improve the sonic interaction design, and a primary study exposing participants to a spatial orientation task under four conditions: early reflections (RF), late reverberation (RV), early reflections plus reverberation (RR), and visual stimuli (V). The latter study allowed us to identify navigation strategies among the users. Some users (10/26) reported an ability to create spatial cognitive maps during the test with auditory echoes, which may explain why this group performed better than the remaining participants in the RR condition.
Collapse
|
34
|
Sumiya M, Ashihara K, Yoshino K, Gogami M, Nagatani Y, Kobayasi KI, Watanabe Y, Hiryu S. Bat-inspired signal design for target discrimination in human echolocation. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:2221. [PMID: 31046316 DOI: 10.1121/1.5097166] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Accepted: 03/22/2019] [Indexed: 06/09/2023]
Abstract
Echolocating bats exhibit sophisticated sonar behaviors using ultrasounds with actively adjusted acoustic characteristics (e.g., frequency and time-frequency structure) depending on the situation. In this study, the utility of ultrasound in human echolocation was examined. By listening to ultrasonic echoes pitch-shifted into the audible range, the participants (i.e., sighted echolocation novices) could discriminate the three-dimensional (3D) roundness of edge contours. This finding suggests that sounds with suitable wavelengths (i.e., ultrasounds) can provide useful information about 3D shapes. In addition, shape, texture, and material discrimination experiments were conducted using ultrasonic echoes measured binaurally with a 1/7-scale miniature dummy head. The acoustic and statistical analyses showed that intensity and timbre cues were useful for shape and texture discrimination, respectively. Furthermore, in the discrimination of objects with various features (e.g., acrylic board and artificial grass), the perceptual distances between objects were more dispersed when frequency-modulated sweep signals were used than when a constant-frequency signal was used. These results suggest that suitable signal design, i.e. the echolocation sounds employed by bats, allowed echolocation novices to discriminate 3D shape and texture. This top-down approach using human subjects may efficiently help interpret the sensory perception, "seeing by sound," in bat biosonar.
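One common way to make ultrasonic echoes audible is time expansion: replaying the recording at a fraction of its original sample rate so that every frequency is divided by the same factor. Whether the authors used this exact scheme is an assumption; the sketch below only illustrates the principle.

```python
import numpy as np

fs = 192000                                  # ultrasonic-capable sample rate
t = np.arange(int(0.005 * fs)) / fs          # 5 ms recording
echo = np.sin(2 * np.pi * 40000 * t)         # 40 kHz component (inaudible)

expansion = 8                                # replay 8x slower
playback_fs = fs // expansion                # 24 kHz playback rate
# The same samples replayed at playback_fs contain a 5 kHz tone lasting 40 ms:
print(f"40 kHz -> {40000 / expansion:.0f} Hz, 5 ms -> {5 * expansion} ms")
```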
Collapse
Affiliation(s)
- Miwa Sumiya
- Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, 610-0394, Japan
| | - Kaoru Ashihara
- Human Informatics Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba 305-8568, Japan
| | - Kazuki Yoshino
- Department of Electronic Engineering, Kobe City College of Technology, Kobe, 651-2194, Japan
| | - Masaki Gogami
- Department of Electronic Engineering, Kobe City College of Technology, Kobe, 651-2194, Japan
| | - Yoshiki Nagatani
- Department of Electronic Engineering, Kobe City College of Technology, Kobe, 651-2194, Japan
| | - Kohta I Kobayasi
- Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, 610-0394, Japan
| | - Yoshiaki Watanabe
- Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, 610-0394, Japan
| | - Shizuko Hiryu
- Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, 610-0394, Japan
| |
Collapse
|
35
|
Dooley JC, Krubitzer LA. Alterations in cortical and thalamic connections of somatosensory cortex following early loss of vision. J Comp Neurol 2018; 527:1675-1688. [PMID: 30444542 DOI: 10.1002/cne.24582] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2018] [Revised: 09/26/2018] [Accepted: 11/01/2018] [Indexed: 01/31/2023]
Abstract
Early loss of vision produces dramatic changes in the functional organization and connectivity of the neocortex in cortical areas that normally process visual inputs, such as the primary and second visual areas. This loss also results in alterations in the size, functional organization, and neural response properties of the primary somatosensory area, S1. However, the anatomical substrate for these functional changes in S1 has never been described. In the present investigation, we quantified the cortical and subcortical connections of S1 in animals that were bilaterally enucleated very early in development, prior to the formation of retino-geniculate and thalamocortical pathways. We found that S1 receives dense inputs from novel cortical fields, and that the density of existing cortical and thalamocortical connections was altered. Our results demonstrate that sensory systems develop in tandem and that alterations in sensory input in one system can affect the connections and organization of other sensory systems. Thus, therapeutic intervention following early loss of vision should focus not only on restoring vision, but also on augmenting the natural plasticity of the spared systems.
Collapse
Affiliation(s)
- James C Dooley
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa
| | - Leah A Krubitzer
- Center for Neuroscience, University of California, Davis, California; Department of Psychology, University of California, Davis, California
| |
Collapse
|
36
|
Negen J, Wen L, Thaler L, Nardini M. Bayes-Like Integration of a New Sensory Skill with Vision. Sci Rep 2018; 8:16880. [PMID: 30442895 PMCID: PMC6237778 DOI: 10.1038/s41598-018-35046-7] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2018] [Accepted: 10/22/2018] [Indexed: 11/09/2022] Open
Abstract
Humans are effective at dealing with noisy, probabilistic information in familiar settings. One hallmark of this is Bayesian Cue Combination: combining multiple noisy estimates to increase precision beyond the best single estimate, taking into account their reliabilities. Here we show that adults also combine a novel audio cue to distance, akin to human echolocation, with a visual cue. Following two hours of training, subjects were more precise given both cues together versus the best single cue. This persisted when we changed the novel cue's auditory frequency. Reliability changes also led to a re-weighting of cues without feedback, showing that they learned something more flexible than a rote decision rule for specific stimuli. The main findings replicated with a vibrotactile cue. These results show that the mature sensory apparatus can learn to flexibly integrate new sensory skills. The findings are unexpected considering previous empirical results and current models of multisensory learning.
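The Bayes-optimal benchmark the study tests against is reliability-weighted averaging: each cue is weighted by its inverse variance, and the combined variance is always below that of the best single cue. A minimal sketch with invented numbers:

```python
def combine(mu_a, var_a, mu_b, var_b):
    """Reliability-weighted (inverse-variance) cue combination."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    mu = w_a * mu_a + (1 - w_a) * mu_b
    var = 1 / (1 / var_a + 1 / var_b)        # always <= min(var_a, var_b)
    return mu, var

# e.g. a noisy audio distance estimate combined with a sharper visual one:
print(combine(2.4, 0.30, 2.0, 0.10))         # -> (2.1, 0.075)
```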
Collapse
Affiliation(s)
- James Negen
- Department of Psychology, Durham University, Durham, DH1 3LE, UK
| | - Lisa Wen
- Department of Psychology, Durham University, Durham, DH1 3LE, UK
| | - Lore Thaler
- Department of Psychology, Durham University, Durham, DH1 3LE, UK
| | - Marko Nardini
- Department of Psychology, Durham University, Durham, DH1 3LE, UK
| |
Collapse
|
37
|
Abstract
This study investigated the influence of body motion on an echolocation task. We asked a group of blindfolded sighted novices to walk along a corridor made with plastic sound-reflecting panels. By self-generating mouth clicks, the participants attempted to infer spatial properties of the corridor, i.e. a left turn, a right turn, or a dead end. They were asked to explore the corridor and stop whenever they were confident about the corridor's shape. Their body motion was captured by a camera system and coded. Most participants were able to accomplish the task, with a percentage of correct guesses above chance level. We found a mutual interaction between kinematic variables that can lead to optimal echolocation skills: head motion (accounting for spatial exploration), the person's motion stop-point, and the number of correct guesses about the spatial structure. The results confirmed that sighted people are able to use self-generated echoes to navigate in a complex environment. The inter-individual variability and the quality of echolocation performance seem to depend on how, and how much, the space is explored.
Collapse
|
38
|
Plasticity based on compensatory effector use in the association but not primary sensorimotor cortex of people born without hands. Proc Natl Acad Sci U S A 2018; 115:7801-7806. [PMID: 29997174 PMCID: PMC6065047 DOI: 10.1073/pnas.1803926115] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
What forces direct brain organization and its plasticity? When brain regions are deprived of their input, which regions reorganize based on compensation for the disability and experience, and which regions show topographically constrained plasticity? People born without hands activate their primary sensorimotor hand region while moving body parts used to compensate for this disability (e.g., their feet). This was taken to suggest a neural organization based on functions, such as performing manual-like dexterous actions, rather than on body parts, in primary sensorimotor cortex. We tested the selectivity for the compensatory body parts in the primary and association sensorimotor cortex of people born without hands (dysplasic individuals). Despite clear compensatory foot use, the primary sensorimotor hand area in the dysplasic subjects showed preference for adjacent body parts that are not compensatorily used as effectors. This suggests that function-based organization, proposed for congenital blindness and deafness, does not apply to the primary sensorimotor cortex deprivation in dysplasia. These findings stress the roles of neuroanatomical constraints like topographical proximity and connectivity in determining the functional development of primary cortex even in extreme, congenital deprivation. In contrast, increased and selective foot movement preference was found in dysplasics' association cortex in the inferior parietal lobule. This suggests that the typical motor selectivity of this region for manual actions may correspond to high-level action representations that are effector-invariant. These findings reveal limitations to compensatory plasticity and experience in modifying brain organization of early topographical cortex compared with association cortices driven by function-based organization.
Collapse
|
39
|
Massiceti D, Hicks SL, van Rheede JJ. Stereosonic vision: Exploring visual-to-auditory sensory substitution mappings in an immersive virtual reality navigation paradigm. PLoS One 2018; 13:e0199389. [PMID: 29975734 PMCID: PMC6033394 DOI: 10.1371/journal.pone.0199389] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2017] [Accepted: 06/06/2018] [Indexed: 01/16/2023] Open
Abstract
Sighted people predominantly use vision to navigate spaces, and sight loss has negative consequences for independent navigation and mobility. The recent proliferation of devices that can extract 3D spatial information from visual scenes opens up the possibility of using such mobility-relevant information to assist blind and visually impaired people by presenting it through modalities other than vision. In this work, we present two new methods for encoding visual scenes using spatial audio: simulated echolocation and distance-dependent hum volume modulation. We implemented both methods in a virtual reality (VR) environment and tested them using a 3D motion-tracking device. This allowed participants to physically walk through virtual mobility scenarios, generating data on real locomotion behaviour. Blindfolded sighted participants completed two tasks: maze navigation and obstacle avoidance. Results were measured against a visual baseline in which participants performed the same two tasks without blindfolds. Task completion time, speed and number of collisions were used as indicators of successful navigation, with additional metrics exploring detailed dynamics of performance. In both tasks, participants were able to navigate using only audio information after minimal instruction. While participants were 65% slower using audio compared to the visual baseline, they reduced their audio navigation time by an average of 21% over just 6 trials. Hum volume modulation proved over 20% faster than simulated echolocation in both mobility scenarios, and participants also showed the greatest improvement with this sonification method. Nevertheless, we speculate that simulated echolocation remains worth exploring, as it provides more spatial detail and could therefore be more useful in more complex environments. The fact that participants were intuitively able to navigate space successfully with two new visual-to-audio mappings for conveying spatial information motivates the further exploration of these and other mappings, with the goal of assisting blind and visually impaired individuals with independent mobility.
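Of the two mappings, distance-dependent hum volume modulation is the simpler to state: a continuous tone whose amplitude grows as the nearest obstacle gets closer. The paper's exact mapping and parameters are not given in the abstract, so those below are made up.

```python
import numpy as np

def hum_frame(distance_m, fs=44100, dur_s=0.1, f_hz=150.0, max_dist_m=5.0):
    """Return one frame of a low hum, louder the nearer the obstacle."""
    t = np.arange(int(dur_s * fs)) / fs
    amplitude = np.clip(1.0 - distance_m / max_dist_m, 0.0, 1.0)
    return amplitude * np.sin(2 * np.pi * f_hz * t)

frame = hum_frame(distance_m=1.2)   # obstacle at 1.2 m -> amplitude 0.76
```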
Collapse
Affiliation(s)
- Daniela Massiceti
- Department of Engineering Science, University of Oxford, Oxford, United Kingdom
| | - Stephen Lloyd Hicks
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
| | - Joram Jacob van Rheede
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
40
|
Norman LJ, Thaler L. Human Echolocation for Target Detection Is More Accurate With Emissions Containing Higher Spectral Frequencies, and This Is Explained by Echo Intensity. Iperception 2018; 9:2041669518776984. [PMID: 29854377 PMCID: PMC5968665 DOI: 10.1177/2041669518776984] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2017] [Accepted: 04/21/2018] [Indexed: 12/01/2022] Open
Abstract
Humans can learn to use acoustic echoes to detect and classify objects. Echolocators typically use tongue clicks to induce these echoes, and there is some evidence that higher spectral frequency content of an echolocator’s tongue click is associated with better echolocation performance. This may be explained by the intensity of the echoes. The current study tested experimentally (a) whether emissions with higher spectral frequencies lead to better performance in target detection, and (b) whether this is mediated by echo intensity. Participants listened to sound recordings that contained an emission and sometimes an echo from an object. The peak spectral frequency of the emission was varied between 3.5 and 4.5 kHz. Participants judged whether they heard the object in these recordings, and did the same under conditions in which the intensity of the echoes had been digitally equated. Participants performed better using emissions with higher spectral frequencies, but this advantage was eliminated when the intensity of the echoes was equated. These results demonstrate that emissions with higher spectral frequencies can benefit echolocation performance in conditions where they lead to an increase in echo intensity. The findings suggest that people who train to echolocate should be instructed to make emissions (e.g. mouth clicks) with higher spectral frequency content.
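One plausible reading of "digitally equated" is that each echo snippet was scaled to a common level so that only spectral differences remained. The RMS-based sketch below illustrates that manipulation under this assumption; it is not the authors' actual code.

```python
import numpy as np

def equate_rms(snippet, target_rms=0.05):
    """Scale an echo snippet to a fixed RMS level."""
    return snippet * (target_rms / np.sqrt(np.mean(snippet ** 2)))

fs = 44100
t = np.arange(int(0.003 * fs)) / fs
echo_hi = 0.02 * np.sin(2 * np.pi * 4500 * t)   # more intense high-frequency echo
echo_lo = 0.01 * np.sin(2 * np.pi * 3500 * t)   # weaker low-frequency echo
a, b = equate_rms(echo_hi), equate_rms(echo_lo)
print(np.sqrt(np.mean(a ** 2)), np.sqrt(np.mean(b ** 2)))   # both 0.05
```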
Collapse
Affiliation(s)
- L J Norman
- Department of Psychology, Durham University, Durham, UK
| | - L Thaler
- Department of Psychology, Durham University, Durham, UK
| |
Collapse
|
41
|
Abstract
Restoring vision to the blind by retinal repair has been a dream of medicine for centuries, and the first successful procedures have recently been performed. Although we are still far from the restoration of high-resolution vision, step-by-step developments are overcoming crucial bottlenecks in therapy development and have enabled the restoration of some visual function in patients with specific blindness-causing diseases. Here, we discuss the current state of vision restoration and the problems related to retinal repair. We describe new model systems and translational technologies, as well as the clinical conditions in which new methods may help to combat blindness.
Collapse
|
42
|
Thaler L, De Vos R, Kish D, Antoniou M, Baker C, Hornikx M. Human echolocators adjust loudness and number of clicks for detection of reflectors at various azimuth angles. Proc Biol Sci 2018; 285:20172735. [PMID: 29491173 PMCID: PMC5832709 DOI: 10.1098/rspb.2017.2735] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2017] [Accepted: 02/06/2018] [Indexed: 11/15/2022] Open
Abstract
Bats have been shown to adjust their emissions to situational demands. Here we report similar findings for human echolocation. We asked eight blind expert echolocators to detect reflectors positioned at various azimuth angles. The same 17.5 cm diameter circular reflector placed at 100 cm distance at 0°, 45° or 90° with respect to straight ahead was detected with 100% accuracy, but performance dropped to approximately 80% when it was placed at 135° (i.e. somewhat behind) and to chance levels (50%) when placed at 180° (i.e. directly behind). This can be explained by poorer target ensonification owing to the beam pattern of human mouth clicks. Importantly, analyses of sound recordings show that echolocators increased the loudness and number of clicks for reflectors at more rearward angles. Echolocators were able to reliably detect reflectors when level differences between echo and emission were as low as -27 dB, which is much lower than expected based on previous work. Increasing the intensity and number of clicks improves the signal-to-noise ratio and in this way compensates for weaker target reflections. Our results are, to our knowledge, the first to show that human echolocation experts adjust their emissions to improve sensory sampling. An implication of our findings is that human echolocators accumulate information from multiple samples.
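To make concrete how faint a detectable echo can be, the -27 dB echo-to-emission level difference converts to an amplitude ratio as follows:

```python
ratio = 10 ** (-27 / 20)    # dB -> amplitude ratio
print(f"-27 dB is an echo amplitude {ratio:.3f} times the emission's")  # ~0.045
```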
Collapse
Affiliation(s)
- L Thaler
- Department of Psychology, Durham University, Science Site, South Road, Durham DH1 3LE, UK
| | - R De Vos
- Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
| | - D Kish
- World Access for the Blind, Placentia 92870, CA, USA
| | - M Antoniou
- Department of Electronic Electrical and Systems Engineering, School of Engineering, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
| | - C Baker
- Department of Electronic Electrical and Systems Engineering, School of Engineering, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
| | - M Hornikx
- Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
| |
Collapse
|
43
|
Thaler L, Foresteire D. Visual sensory stimulation interferes with people's ability to echolocate object size. Sci Rep 2017; 7:13069. [PMID: 29026115 PMCID: PMC5638915 DOI: 10.1038/s41598-017-12967-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2016] [Accepted: 09/14/2017] [Indexed: 12/03/2022] Open
Abstract
Echolocation is the ability to use sound echoes to infer spatial information about the environment. People can echolocate, for example, by making mouth clicks. Previous research suggests that echolocation in blind people activates brain areas that process light in sighted people. Research has also shown that echolocation in blind people may replace vision for the calibration of external space. In the current study we investigated whether echolocation may also draw on ‘visual’ resources in the sighted brain. To this end, we paired a sensory interference paradigm with an echolocation task. We found that exposure to an uninformative visual stimulus (i.e. white light) while simultaneously echolocating significantly reduced participants’ ability to accurately judge object size. In contrast, a tactile stimulus (i.e. vibration on the skin) did not lead to a significant change in performance (neither in sighted nor in blind echo-expert participants). Furthermore, we found that the same visual stimulus did not affect performance in auditory control tasks that required detection of changes in sound intensity, sound frequency or sound location. The results suggest that processing of visual and echo-acoustic information draws on common neural resources.
Collapse
Affiliation(s)
- L Thaler
- Department of Psychology, Durham University, Durham, United Kingdom.
| | - D Foresteire
- Department of Psychology, Durham University, Durham, United Kingdom
| |
Collapse
|
44
|
Mouth-clicks used by blind expert human echolocators - signal description and model based signal synthesis. PLoS Comput Biol 2017; 13:e1005670. [PMID: 28859082 PMCID: PMC5578488 DOI: 10.1371/journal.pcbi.1005670] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2017] [Accepted: 07/05/2017] [Indexed: 11/19/2022] Open
Abstract
Echolocation is the ability to use sound echoes to infer spatial information about the environment. Some blind people have developed extraordinary proficiency in echolocation using mouth clicks. The first step of human biosonar is the transmission (mouth click) and the subsequent reception of the resultant sound through the ear. Existing head-related transfer function (HRTF) databases provide descriptions of the reception of the resultant sound. For the current report, we collected a large database of click emissions from three blind people expertly trained in echolocation, which allowed us to perform unprecedented analyses. Specifically, the current report provides the first ever description of the spatial distribution (i.e. beam pattern) of human expert echolocation transmissions, as well as spectro-temporal descriptions at a level of detail not available before. Our data show that transmission levels are fairly constant within a 60° cone emanating from the mouth, but levels drop gradually at wider angles, more so than for speech. In terms of spectro-temporal features, our data show that emissions are consistently very brief (~3 ms duration) with peak frequencies of 2-4 kHz, but with energy also at 10 kHz. This differs from previous reports of durations of 3-15 ms and peak frequencies of 2-8 kHz, which were based on less detailed measurements. Based on our measurements, we propose to model transmissions as a sum of monotones modulated by a decaying exponential, with angular attenuation by a modified cardioid. We provide model parameters for each echolocator. These results are a step towards developing computational models of human biosonar. For example, in bats, spatial and spectro-temporal features of emissions have been used to derive and test model-based hypotheses about behaviour. The data we present here suggest similar research opportunities within the context of human echolocation. Relatedly, the data are a basis for developing synthetic models of human echolocation that could be virtual (i.e. simulated) or real (i.e. loudspeaker, microphones), and which will help in understanding the link between physical principles and human behaviour.
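The proposed transmission model translates directly into code: a sum of monotones under a decaying-exponential envelope, with angular attenuation by a modified cardioid. The frequencies, decay constant, and cardioid exponent below are illustrative stand-ins for the per-echolocator parameters given in the paper.

```python
import numpy as np

def click(t, freqs_hz=(2500.0, 3500.0, 10000.0), tau_s=0.0008):
    """Sum of monotones under a decaying-exponential envelope."""
    return np.exp(-t / tau_s) * sum(np.sin(2 * np.pi * f * t) for f in freqs_hz)

def cardioid_gain(theta_rad, k=1.5):
    """Modified cardioid: gradual level drop-off away from straight ahead."""
    return ((1 + np.cos(theta_rad)) / 2) ** k

fs = 44100
t = np.arange(int(0.003 * fs)) / fs           # ~3 ms click, as measured
ahead = click(t)                              # emission at 0 degrees
side = cardioid_gain(np.pi / 2) * click(t)    # attenuated at 90 degrees
```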
Collapse
|
45
|
Thaler L, Castillo-Serrano J. People's Ability to Detect Objects Using Click-Based Echolocation: A Direct Comparison between Mouth-Clicks and Clicks Made by a Loudspeaker. PLoS One 2016; 11:e0154868. [PMID: 27135407 PMCID: PMC4852930 DOI: 10.1371/journal.pone.0154868] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2016] [Accepted: 04/20/2016] [Indexed: 11/19/2022] Open
Abstract
Echolocation is the ability to use reflected sound to obtain information about the spatial environment. Echolocation is an active process that requires both the production of the emission and the sensory processing of the resultant sound. Appreciating the general usefulness of echo-acoustic cues for people, in particular those with vision impairments, various devices have been built that exploit the principle of echolocation to obtain and provide information about the environment. Common to all these devices is that they do not require the person to make a sound. Instead, the device produces the emission autonomously and feeds the resultant sound back to the user. Here we tested whether echolocation performance in a simple object-detection task was affected by the use of a head-mounted loudspeaker as compared to active clicking. We found that 27 sighted participants new to echolocation generally did better when they used a loudspeaker as compared to mouth clicks, and that two blind participants with experience in echolocation did equally well with mouth clicks and the speaker. Importantly, the performance of sighted participants was not statistically different from that of blind experts when they used the speaker. Based on acoustic click data collected from a subset of our participants, those participants whose mouth clicks were more similar to the speaker clicks, and thus had higher peak frequencies and sound intensity, did better. We conclude that our results are encouraging for the consideration and development of assistive devices that exploit the principle of echolocation.
Collapse
Affiliation(s)
- Lore Thaler
- Department of Psychology, Durham University, Durham, United Kingdom
| | | |
Collapse
|