1. de Paz C, Huertas JA, Ibáñez-Gijón J, Martín-Gonzalo JA, Varas AB, Travieso D. Enhancing navigation and obstacle avoidance with a vibrotactile device as secondary electronic travel aid. Disabil Rehabil Assist Technol 2025;20:1140-1150. [PMID: 39412489] [DOI: 10.1080/17483107.2024.2417264]
Abstract
People with visual impairments commonly rely on a white cane to navigate and avoid obstacles. Although this analog tool is highly reliable and easy to use, it cannot anticipate obstacles or routes beyond arm's reach, nor obstacles above waist level. Electronic travel aids (ETAs) and sensory substitution devices (SSDs) are technological solutions designed to deliver, through touch and/or audition, the information needed to overcome those limitations. In the present study, 25 individuals with visual impairments used the T-Sight, a vibrotactile SSD, and/or the white cane in a navigation task involving obstacle avoidance. While performance with the device, measured by the number of collisions and walking speed, did not surpass the white cane, the SSD did have a positive impact on ambulation: participants reduced the number of white-cane contacts with environmental obstacles and initiated avoidance maneuvers earlier. These results demonstrate the potential of vibrotactile devices to address the limitations of the white cane.
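The abstract does not describe the T-Sight's actual encoding; as a generic illustration of how torso-worn vibrotactile ETAs of this kind map obstacles to vibration, consider the following sketch, in which the tactor count, field of view, and sensing range are all assumptions rather than the device's parameters.

```python
# Illustrative only: a generic obstacle-to-tactor mapping for a torso-worn
# vibrotactile ETA. Tactor count, field of view, and range are assumptions.
from dataclasses import dataclass

@dataclass
class Obstacle:
    azimuth_deg: float  # bearing relative to heading: -90 (left) .. +90 (right)
    distance_m: float   # range to the obstacle

N_TACTORS = 5        # assumed number of vibration motors
FOV_DEG = 180.0      # assumed angular span covered by the tactor array
MAX_RANGE_M = 3.0    # assumed sensing range, beyond the cane's reach

def tactor_index(azimuth_deg: float) -> int:
    """Map an azimuth in [-90, 90] to a tactor index in [0, N_TACTORS)."""
    frac = (azimuth_deg + FOV_DEG / 2) / FOV_DEG
    return min(N_TACTORS - 1, max(0, int(frac * N_TACTORS)))

def intensity(distance_m: float) -> float:
    """Closer obstacles vibrate harder; beyond MAX_RANGE_M, stay silent."""
    return max(0.0, 1.0 - distance_m / MAX_RANGE_M)

def frame(obstacles: list[Obstacle]) -> list[float]:
    """One output frame: the strongest cue wins on each tactor."""
    levels = [0.0] * N_TACTORS
    for ob in obstacles:
        i = tactor_index(ob.azimuth_deg)
        levels[i] = max(levels[i], intensity(ob.distance_m))
    return levels

print(frame([Obstacle(-30.0, 1.2), Obstacle(10.0, 2.5)]))
# ~[0.0, 0.6, 0.17, 0.0, 0.0] -- the nearer, left-of-centre obstacle dominates
```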
Affiliation(s)
- Carlos de Paz
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- Ana Belén Varas
- Escuela de Fisioterapia de la ONCE, Universidad Autónoma de Madrid, Madrid, Spain
- David Travieso
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
2. Bordeau C, Scalvini F, Migniot C, Dubois J, Ambard M. Localization abilities with a visual-to-auditory substitution device are modulated by the spatial arrangement of the scene. Atten Percept Psychophys 2025. [PMID: 40281272] [DOI: 10.3758/s13414-025-03065-y]
Abstract
Visual-to-auditory substitution devices convert visual images into soundscapes. They are intended for use by blind people in everyday situations containing various obstacles that need to be localized simultaneously, as well as irrelevant objects that must be ignored. It is therefore important to establish the extent to which substitution devices make it possible to localize obstacles in complex scenes. In this study, we used a substitution device that combines spatial acoustic cues and pitch modulation to convey spatial information. Nineteen blindfolded sighted participants had to point at a virtual target displayed alone or among distractors, to evaluate their ability to perform a localization task in minimalist and complex virtual scenes. The spatial configuration of the scene was manipulated by varying the number of distractors and their spatial arrangement relative to the target. While elevation localization was not impaired by the presence of distractors, azimuth localization degraded when a large number of distractors were displayed at the same elevation as the target. The elevation results confirm that pitch modulation effectively conveys elevation information with the device across spatial configurations. Conversely, the azimuth impairment seems to result from segregation difficulties that arise when the spatial configuration of the objects does not allow segregation by pitch. This must be considered in the design of substitution devices so that blind users can correctly evaluate the risks posed by different situations.
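A rough rendering of the scheme described here, with pitch coding elevation and simple stereo panning standing in for the binaural azimuth cues; the frequency range and constant-power pan law are assumptions, not the authors' parameters.

```python
# Minimal sketch: pitch codes elevation, stereo pan stands in for binaural
# azimuth cues. Frequency range and pan law are assumed, not the authors'.
import numpy as np

SR = 44100                      # sample rate (Hz)
F_LOW, F_HIGH = 200.0, 4000.0   # assumed pitch range coding elevation

def elevation_to_pitch(elev: float) -> float:
    """elev in [0, 1] (bottom..top of the scene) -> log-spaced frequency."""
    return F_LOW * (F_HIGH / F_LOW) ** elev

def render_source(elev: float, azim: float, dur_s: float = 0.3) -> np.ndarray:
    """Stereo (n, 2) tone for one object; azim in [-1, 1], left to right."""
    t = np.arange(int(SR * dur_s)) / SR
    tone = np.sin(2 * np.pi * elevation_to_pitch(elev) * t)
    pan = (azim + 1) / 2                   # 0..1
    left = np.cos(pan * np.pi / 2) * tone  # constant-power panning
    right = np.sin(pan * np.pi / 2) * tone
    return np.stack([left, right], axis=1)

# A target plus a distractor at the SAME elevation: their pitches coincide,
# so only the interaural (pan) cues remain to segregate them -- the failure
# mode the study reports.
scene = render_source(0.8, -0.5) + render_source(0.8, 0.4)
```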
Affiliation(s)
- Camille Bordeau
- University of Burgundy, CNRS, LEAD UMR 5022, 21000 Dijon, France
- Aix Marseille University, CNRS, CRPN, Marseille, France
- Florian Scalvini
- ImViA UR 7535, University of Burgundy, Dijon, France
- IMT Atlantique, LaTIM U1101 INSERM, Brest, France
- Maxime Ambard
- University of Burgundy, CNRS, LEAD UMR 5022, 21000 Dijon, France
3. Ramôa G, Schmidt V, Schwarz T, Stiefelhagen R, König P. SONOICE! a Sonar-Voice dynamic user interface for assisting individuals with blindness and visual impairment in pinpointing elements in 2D tactile readers. Front Rehabil Sci 2024;5:1368983. [PMID: 39246576] [PMCID: PMC11377411] [DOI: 10.3389/fresc.2024.1368983]
Abstract
Pinpointing elements on large tactile surfaces is challenging for individuals with blindness and visual impairment (BVI) seeking access to two-dimensional (2D) information. This is particularly evident with 2D tactile readers, devices that provide 2D information through static tactile representations with audio explanations. Traditional pinpointing methods, such as sighted assistance and trial-and-error, are limited and inefficient, while alternative pinpointing user interfaces (UIs) are still emerging and need advancement. To address these limitations, we developed three distinct navigation UIs using a user-centred design approach: Sonar (proximity-radar sonification), Voice (direct clock-system speech instructions), and Sonoice, a new method combining elements of both. The UIs were incorporated into the Tactonom Reader device for a trial study with ten BVI participants. All three UIs yielded better performance and higher user satisfaction than the conventional trial-and-error approach, and remained effective regardless of graphic complexity, suggesting scalability to other assistive technology. The Sonoice approach was the most efficient at pinpointing elements, but user satisfaction was highest with the Sonar approach. Surprisingly, participant preferences varied and did not always align with their most effective strategy, underscoring the importance of accommodating individual preferences and contextual factors when choosing between the three UIs. While more extensive training may reveal further differences between the UIs, our results emphasise the significance of offering diverse options to meet user needs. Altogether, these results provide valuable insights for improving the functionality of 2D tactile readers and contribute to the future development of accessible technology.
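The two cue styles that Sonoice combines can be sketched in the abstract as follows; the parameter values and function names are assumptions for illustration, not the Tactonom Reader's actual interface.

```python
# Sketches of the two cue styles; values and names assumed for illustration.
import math

def sonar_beep_interval(finger, target, max_dist=0.5, fastest=0.05, slowest=1.0):
    """Proximity-radar sonification: like a parking sensor, beeps speed up
    as the finger nears the target. Positions in metres on the surface."""
    frac = min(math.dist(finger, target) / max_dist, 1.0)
    return fastest + frac * (slowest - fastest)  # seconds between beeps

def voice_instruction(finger, target):
    """Clock-system speech: direction given as an hour hand, 12 = straight up."""
    dx, dy = target[0] - finger[0], target[1] - finger[1]
    angle = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = up, clockwise
    hour = round(angle / 30) % 12 or 12
    return f"move towards {hour} o'clock"

print(sonar_beep_interval((0.10, 0.10), (0.12, 0.11)))  # ~0.09 s: very close
print(voice_instruction((0.10, 0.10), (0.30, 0.30)))    # "move towards 2 o'clock"
```

Per the abstract, Sonoice combines elements of both streams into a single dynamic interface.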
Affiliation(s)
- Gaspar Ramôa
- Research Department, Inventivio GmbH, Nürnberg, Germany
- Vincent Schmidt
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Thorsten Schwarz
- ACCESS@KIT, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Rainer Stiefelhagen
- ACCESS@KIT, Karlsruhe Institute of Technology, Karlsruhe, Germany
- HCI@KIT, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Peter König
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Institute of Neurophysiology & Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
4. Guarese R, Pretty E, Renata A, Polson D, Zambetta F. Exploring Audio Interfaces for Vertical Guidance in Augmented Reality via Hand-Based Feedback. IEEE Trans Vis Comput Graph 2024;30:2818-2828. [PMID: 38437120] [DOI: 10.1109/tvcg.2024.3372040]
Abstract
This research evaluates pitch-based sonification methods via user experiments in real-life scenarios, specifically vertical guidance, with the aim of standardizing the use of audio interfaces for guidance tasks in AR. Drawing on the literature on assistive technology for people who are blind or visually impaired, we aim to generalize these techniques to a broader population and to different use cases. We propose and test sonification methods for vertical guidance in a series of hand-navigation assessments with users receiving no visual feedback. Incorporating feedback from a visually impaired expert in digital accessibility, the results (N=19) showed that methods that do not rely on memorizing pitch achieved the most promising accuracy and self-reported workload. Ultimately, we argue for audio AR's ability to enhance user performance in scenarios ranging from video games to finding objects in a pantry.
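One way to avoid relying on pitch memory, in the spirit of the class of methods the study favors (our illustration, not necessarily the authors' exact design), is to alternate a fixed reference tone with a cursor tone so the user only ever compares two pitches heard back-to-back:

```python
# A memory-free pitch cue: alternate a fixed reference tone (target height)
# with a cursor tone (hand height), so the user matches two pitches heard
# back-to-back instead of recalling an absolute pitch. All values assumed.
F_REF = 440.0        # reference tone (Hz), assumed
SEMITONE_SPAN = 12   # assumed maximum offset: one octave up or down

def cursor_pitch(hand_y: float, target_y: float, span_m: float = 0.6) -> float:
    """Map the signed hand-target height error to a pitch around F_REF."""
    error = max(-1.0, min(1.0, (hand_y - target_y) / span_m))
    return F_REF * 2 ** (error * SEMITONE_SPAN / 12)

# Alternate F_REF and cursor_pitch(...) every ~300 ms; the hand is on target
# when the two tones sound identical.
print(cursor_pitch(hand_y=1.10, target_y=0.95))  # above target -> higher pitch
```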
5. Paré S, Bleau M, Dricot L, Ptito M, Kupers R. Brain structural changes in blindness: a systematic review and an anatomical likelihood estimation (ALE) meta-analysis. Neurosci Biobehav Rev 2023;150:105165. [PMID: 37054803] [DOI: 10.1016/j.neubiorev.2023.105165]
Abstract
In recent decades, numerous structural brain imaging studies have investigated purported morphometric changes in early (EB) and late onset blindness (LB). These studies have not yielded consistent results, with respect neither to the type nor to the anatomical locations of the morphometric alterations. To better characterize the effects of blindness on brain morphometry, we performed a systematic review and an anatomical likelihood estimation (ALE) coordinate-based meta-analysis of 65 eligible studies on brain structural changes in EB and LB, comprising 890 EB, 466 LB and 1257 sighted controls. Results revealed atrophic changes throughout the whole extent of the retino-geniculo-striate system in both EB and LB, whereas changes in areas beyond the occipital lobe occurred in EB only. We discuss some of the contradictory findings with respect to the brain imaging methodologies used and to characteristics of the blind populations, such as the onset, duration and cause of blindness. Future studies should aim for much larger sample sizes, for instance by merging data from different brain imaging centers that use the same imaging sequences, opt for multimodal structural brain imaging, and go beyond a purely structural approach by combining functional with structural connectivity network analyses.
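For readers unfamiliar with the method, the core of the standard ALE computation condenses to the sketch below; the grid size and kernel width are toy values (real analyses work in MNI space with sample-size-dependent FWHM kernels and permutation-based significance testing).

```python
# Toy version of ALE: each reported focus becomes a 3D Gaussian, foci merge
# into per-experiment modelled activation (MA) maps, and MA maps merge
# across experiments by a voxelwise union. Toy grid and kernel, not the
# MNI-space settings used in real analyses.
import numpy as np

SHAPE = (20, 20, 20)  # toy voxel grid
SIGMA = 2.0           # Gaussian width in voxels (stands in for the FWHM kernel)

def focus_map(focus: np.ndarray) -> np.ndarray:
    """Probability map of one reported peak, modelled as a 3D Gaussian."""
    grid = np.indices(SHAPE).transpose(1, 2, 3, 0)
    d2 = ((grid - focus) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * SIGMA ** 2))

def modelled_activation(foci: list) -> np.ndarray:
    """One experiment's MA map: voxelwise union of its focus maps."""
    ma = np.zeros(SHAPE)
    for f in foci:
        ma = 1 - (1 - ma) * (1 - focus_map(f))
    return ma

def ale(experiments: list) -> np.ndarray:
    """ALE map: voxelwise union of the experiments' MA maps."""
    out = np.zeros(SHAPE)
    for foci in experiments:
        out = 1 - (1 - out) * (1 - modelled_activation(foci))
    return out

toy = [[np.array([10, 10, 10])],
       [np.array([11, 10, 9]), np.array([3, 4, 5])]]
print(ale(toy).max())
# Real analyses then threshold the ALE map against a null distribution
# obtained by redistributing the foci at random.
```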
Affiliation(s)
- Samuel Paré
- School of Optometry, University of Montreal, Montreal, QC, Canada
- Maxime Bleau
- School of Optometry, University of Montreal, Montreal, QC, Canada
- Laurence Dricot
- Institute of NeuroScience (IoNS), Université catholique de Louvain (UCLouvain), Brussels, Belgium
- Maurice Ptito
- School of Optometry, University of Montreal, Montreal, QC, Canada
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Ron Kupers
- School of Optometry, University of Montreal, Montreal, QC, Canada
- Institute of NeuroScience (IoNS), Université catholique de Louvain (UCLouvain), Brussels, Belgium
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
6. Bleau M, Paré S, Chebat DR, Kupers R, Nemargut JP, Ptito M. Neural substrates of spatial processing and navigation in blindness: An activation likelihood estimation meta-analysis. Front Neurosci 2022;16:1010354. [PMID: 36340755] [PMCID: PMC9630591] [DOI: 10.3389/fnins.2022.1010354]
Abstract
Even though vision is considered the sensory modality best suited to acquiring spatial information, blind individuals can form spatial representations that let them navigate and orient themselves efficiently in space. Consequently, many studies support the amodality hypothesis of spatial representations, since sensory modalities other than vision contribute to the formation of spatial representations independently of visual experience and imagery. However, given the high variability in abilities and deficits observed in blind populations, a clear consensus about the neural representations of space has yet to be established. To this end, we performed a meta-analysis of the literature on the neural correlates of spatial processing and navigation via sensory modalities other than vision, such as touch and audition, in individuals with early and late onset blindness. An activation likelihood estimation (ALE) analysis of the neuroimaging literature revealed that early blind individuals and sighted controls activate the same neural networks when processing non-visual spatial information and navigating, including the posterior parietal cortex, frontal eye fields, insula, and the hippocampal complex. Furthermore, blind individuals also recruit primary and associative occipital areas involved in visuo-spatial processing via cross-modal plasticity mechanisms. The scarcity of studies involving late blind individuals did not allow us to establish a clear consensus about the neural substrates of spatial representations in this specific population. In conclusion, the results of our analysis of neuroimaging studies involving early blind individuals support the amodality hypothesis of spatial representations.
Affiliation(s)
- Maxime Bleau
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Samuel Paré
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel University, Ariel, Israel
- Ron Kupers
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Institute of Neuroscience, Faculty of Medicine, Université de Louvain, Brussels, Belgium
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Maurice Ptito
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
7. Busaeed S, Katib I, Albeshri A, Corchado JM, Yigitcanlar T, Mehmood R. LidSonic V2.0: A LiDAR and Deep-Learning-Based Green Assistive Edge Device to Enhance Mobility for the Visually Impaired. Sensors (Basel) 2022;22:7435. [PMID: 36236546] [PMCID: PMC9570831] [DOI: 10.3390/s22197435]
Abstract
Over a billion people around the world are disabled, among whom 253 million are visually impaired or blind, and this number is growing rapidly due to ageing, chronic diseases, and poor environments and health. Despite many proposals, current devices and systems lack maturity and do not completely fulfill user requirements and satisfaction. Increased research activity in this field is required to encourage the development, commercialization, and widespread acceptance of low-cost, affordable assistive technologies for visual impairment and other disabilities. This paper proposes a novel approach that uses a LiDAR with a servo motor and an ultrasonic sensor to collect data and predict objects using deep learning, for environment perception and navigation. We implemented this approach in a pair of smart glasses, called LidSonic V2.0, to enable the identification of obstacles for the visually impaired. The LidSonic system consists of an Arduino Uno edge computing device integrated into the smart glasses and a smartphone app connected to it via Bluetooth. The Arduino gathers data, operates the sensors on the smart glasses, detects obstacles using simple data processing, and provides buzzer feedback to visually impaired users. The smartphone app receives data from the Arduino, detects and classifies objects in the spatial environment, and gives spoken feedback to the user on the detected objects. Compared with image-processing-based glasses, LidSonic classifies obstacles from simple LiDAR data using far less processing time and energy. We comprehensively describe the proposed system's hardware and software design, the construction of prototype implementations, and their testing in real-world environments. Built with the open platforms WEKA and TensorFlow, the entire LidSonic system uses affordable off-the-shelf sensors and a microcontroller board costing less than USD 80. Essentially, we provide designs of an inexpensive, miniature, green device that can be built into, or mounted on, any pair of glasses or even a wheelchair to help the visually impaired. Our approach enables faster inference and decision-making using relatively little energy and smaller data sizes, as well as faster communications for edge, fog, and cloud computing.
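The edge/phone split described above can be schematized as follows; the function names and threshold are our assumptions, not LidSonic's actual code.

```python
# Schematic of the split: cheap thresholding on the microcontroller for
# immediate buzzer feedback, classification deferred to the smartphone.
# Names and the threshold are assumptions, not LidSonic code.
BUZZ_THRESHOLD_CM = 150  # assumed "obstacle ahead" distance

def edge_step(lidar_sweep_cm: list, ultrasonic_cm: float) -> dict:
    """One microcontroller cycle: check locally, forward readings over Bluetooth."""
    nearest = min(lidar_sweep_cm + [ultrasonic_cm])
    return {
        "buzz": nearest < BUZZ_THRESHOLD_CM,      # immediate audio alert
        "payload": {"sweep": lidar_sweep_cm,      # sent to the smartphone app,
                    "ultrasonic": ultrasonic_cm}  # which runs the trained model
    }

def phone_step(payload: dict, classify) -> str:
    """Smartphone side: classify the readings (the paper trains models with
    WEKA/TensorFlow) and hand the label to text-to-speech."""
    label = classify(payload["sweep"], payload["ultrasonic"])
    return f"detected: {label}"

out = edge_step([220.0, 140.0, 310.0], 180.0)
print(out["buzz"])                                        # True: within 1.5 m
print(phone_step(out["payload"], lambda s, u: "stairs"))  # stand-in classifier
```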
Affiliation(s)
- Sahar Busaeed
- Faculty of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh 11564, Saudi Arabia
- Iyad Katib
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Aiiad Albeshri
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Juan M. Corchado
- Bisite Research Group, University of Salamanca, 37007 Salamanca, Spain
- Air Institute, IoT Digital Innovation Hub, 37188 Salamanca, Spain
- Department of Electronics, Information and Communication, Faculty of Engineering, Osaka Institute of Technology, Osaka 535-8585, Japan
- Tan Yigitcanlar
- School of Architecture and Built Environment, Queensland University of Technology, 2 George Street, Brisbane, QLD 4000, Australia
- Rashid Mehmood
- High Performance Computing Center, King Abdulaziz University, Jeddah 21589, Saudi Arabia
8. Kilian J, Neugebauer A, Scherffig L, Wahl S. The Unfolding Space Glove: A Wearable Spatio-Visual to Haptic Sensory Substitution Device for Blind People. Sensors (Basel) 2022;22:1859. [PMID: 35271009] [PMCID: PMC8914703] [DOI: 10.3390/s22051859]
Abstract
This paper documents the design, implementation and evaluation of the Unfolding Space Glove, an open-source sensory substitution device. It transmits the relative position and distance of nearby objects as vibratory stimuli to the back of the hand, and thus enables blind people to haptically explore the depth of their surrounding space, assisting with navigation tasks such as object recognition and wayfinding. The prototype requires no external hardware, is highly portable, operates in all lighting conditions, and provides continuous and immediate feedback, all while being visually unobtrusive. Both blind (n = 8) and blindfolded sighted participants (n = 6) completed structured training and obstacle courses with both the prototype and a white long cane, allowing performance comparisons to be drawn between the two. The subjects quickly learned how to use the glove and successfully completed all of the trials, though they remained slower with it than with the cane. Qualitative interviews revealed a high level of usability and user experience. Overall, the results indicate that spatial information can be processed through sensory substitution using haptic, vibrotactile interfaces. Further research would be required to evaluate the prototype's capabilities after extensive training and to derive a fully functional navigation aid from its features.
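The depth-to-vibration mapping such a glove implies can be sketched as below; the 3x3 motor grid follows the general design, but the code is our illustration, not the project's open-source firmware.

```python
# Sketch: downsample a depth image to one value per motor, drive nearer
# cells harder. Illustration only, not the glove's firmware.
import numpy as np

GRID = (3, 3)             # vibration motors on the back of the hand
NEAR_M, FAR_M = 0.3, 2.5  # assumed working range of the depth camera

def depth_to_motors(depth: np.ndarray) -> np.ndarray:
    """Downsample a depth image to one value per motor; nearer = stronger."""
    h, w = depth.shape
    gh, gw = h // GRID[0], w // GRID[1]
    blocks = depth[:gh * GRID[0], :gw * GRID[1]].reshape(GRID[0], gh, GRID[1], gw)
    nearest = blocks.min(axis=(1, 3))  # closest surface per cell
    return (FAR_M - nearest.clip(NEAR_M, FAR_M)) / (FAR_M - NEAR_M)  # 0..1 PWM

frame = np.full((120, 160), 3.0)        # empty scene: everything far away
frame[40:80, 60:100] = 0.8              # one object ahead, roughly centred
print(depth_to_motors(frame).round(2))  # only the centre motor fires (~0.77)
```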
Affiliation(s)
- Jakob Kilian
- Köln International School of Design, TH Köln, 50678 Köln, Germany
- ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Alexander Neugebauer
- ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Lasse Scherffig
- Köln International School of Design, TH Köln, 50678 Köln, Germany
- Siegfried Wahl
- ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
9. Blindness and the Reliability of Downwards Sensors to Avoid Obstacles: A Study with the EyeCane. Sensors (Basel) 2021;21:2700. [PMID: 33921202] [PMCID: PMC8070041] [DOI: 10.3390/s21082700]
Abstract
Vision loss has dramatic repercussions on the quality of life of affected people, particularly with respect to orientation and mobility. Many devices are available to help blind people navigate their environment. The EyeCane is a recently developed electronic travel aid (ETA) that is inexpensive and easy to use, allowing the detection of obstacles lying ahead within a 2 m range. The goal of this study was to investigate the potential of the EyeCane as a primary aid for spatial navigation. Three groups of participants were recruited: early blind, late blind, and sighted. They were first trained with the EyeCane and then tested in a life-size obstacle course with four obstacle types: cube, door, post, and step. Participants were asked to cross the corridor while detecting, identifying, and avoiding the obstacles. Each participant performed 12 runs with 12 different obstacle configurations. All participants quickly learned to use the EyeCane and successfully completed all trials. Amongst the various obstacles, the step proved the hardest to detect and caused the most collisions. Although the EyeCane was effective for detecting obstacles lying ahead, its downward sensor did not reliably detect obstacles on the ground, making these more hazardous for navigation.
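The forward/downward sensor logic probed by the study can be caricatured as follows; the thresholds and cue names are assumptions for illustration, not the EyeCane's firmware.

```python
# Caricature of the two-sensor logic; thresholds and names are assumed.
FORWARD_RANGE_M = 2.0  # detection range for obstacles ahead (per the abstract)
STEP_DROP_M = 0.25     # assumed height deviation treated as a step or drop-off

def eyecane_cues(forward_m: float, downward_m: float, expected_floor_m: float):
    """Return (cue, strength) pairs for one pair of sensor readings."""
    cues = []
    if forward_m < FORWARD_RANGE_M:
        # Stronger (e.g. faster-pulsing) cue as the obstacle ahead gets closer.
        cues.append(("forward", 1.0 - forward_m / FORWARD_RANGE_M))
    if abs(downward_m - expected_floor_m) > STEP_DROP_M:
        # The hard case the study identifies: steps under the downward beam
        # are easy to miss when the beam samples only one spot on the ground.
        cues.append(("step", 1.0))
    return cues

print(eyecane_cues(1.2, 1.35, expected_floor_m=1.0))
# -> [('forward', 0.4), ('step', 1.0)]
```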