1
Bordeau C, Scalvini F, Migniot C, Dubois J, Ambard M. Localization abilities with a visual-to-auditory substitution device are modulated by the spatial arrangement of the scene. Atten Percept Psychophys 2025. doi:10.3758/s13414-025-03065-y. PMID: 40281272.
Abstract
Visual-to-auditory substitution devices convert visual images into soundscapes. They are intended for use by blind people in everyday situations that contain various obstacles to be localized simultaneously, as well as irrelevant objects to be ignored. It is therefore important to establish the extent to which substitution devices make it possible to localize obstacles in complex scenes. In this study, we used a substitution device that combines spatial acoustic cues and pitch modulation to convey spatial information. Nineteen blindfolded sighted participants had to point at a virtual target displayed alone or among distractors, to evaluate their ability to perform a localization task in minimalist and complex virtual scenes. The spatial configuration of the scene was manipulated by varying the number of distractors and their spatial arrangement relative to the target. While elevation localization was not impaired by the presence of distractors, azimuth localization was degraded when a large number of distractors were displayed at the same elevation as the target. The elevation localization performance tends to confirm that pitch modulation is effective for conveying elevation information with the device across various spatial configurations. Conversely, the impairment in azimuth localization seems to result from segregation difficulties that arise when the spatial configuration of the objects does not allow pitch-based segregation. This should be taken into account in the design of substitution devices so that blind people can correctly evaluate the risks posed by different situations.
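The abstract does not spell out the device's synthesis chain, but the two encodings it names (spatial acoustic cues for azimuth, pitch modulation for elevation) can be sketched in a few lines. The sketch below is a minimal stand-in, not the authors' implementation: constant-power stereo panning substitutes for true binaural cues, and the frequency band and elevation range are assumptions.

```python
import numpy as np

def sonify_target(azimuth_deg, elevation_deg, fs=44100, dur=0.5,
                  f_lo=200.0, f_hi=2000.0, elev_range=(-20.0, 20.0)):
    """Render a stereo tone for one target: pitch encodes elevation,
    left/right level difference encodes azimuth (all ranges assumed)."""
    t = np.linspace(0.0, dur, int(fs * dur), endpoint=False)
    # Log-spaced pitch: low elevation -> low frequency.
    frac = np.clip((elevation_deg - elev_range[0]) /
                   (elev_range[1] - elev_range[0]), 0.0, 1.0)
    freq = f_lo * (f_hi / f_lo) ** frac
    tone = np.sin(2 * np.pi * freq * t)
    # Constant-power pan: -90 deg = hard left, +90 deg = hard right.
    pan = np.clip(azimuth_deg / 90.0, -1.0, 1.0)
    theta = (pan + 1.0) * np.pi / 4.0
    left, right = np.cos(theta) * tone, np.sin(theta) * tone
    return np.stack([left, right], axis=1)

# A target up and to the right yields a high-pitched, right-panned tone.
stereo = sonify_target(azimuth_deg=45.0, elevation_deg=15.0)
```

Rendering one such tone per object is what makes the paper's segregation question interesting: a target and distractors at the same elevation receive the same pitch and must be separated by the spatial cues alone.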
Affiliation(s)
- Camille Bordeau
- University of Burgundy, CNRS, LEAD UMR 5022, 21000 Dijon, France
- Aix Marseille University, CNRS, CRPN, Marseille, France.
- Florian Scalvini
- ImViA UR 7535, University of Burgundy, Dijon, France
- IMT Atlantique, LaTIM U1101 INSERM, Brest, France
- Maxime Ambard
- University of Burgundy, CNRS, LEAD UMR 5022, 21000 Dijon, France
2
Bleau M, Paré S, Chebat DR, Kupers R, Nemargut JP, Ptito M. Neural substrates of spatial processing and navigation in blindness: An activation likelihood estimation meta-analysis. Front Neurosci 2022;16:1010354. doi:10.3389/fnins.2022.1010354. PMID: 36340755; PMCID: PMC9630591.
Abstract
Even though vision is considered the best-suited sensory modality to acquire spatial information, blind individuals can form spatial representations to navigate and orient themselves efficiently in space. Consequently, many studies support the amodality hypothesis of spatial representations since sensory modalities other than vision contribute to the formation of spatial representations, independently of visual experience and imagery. However, given the high variability in abilities and deficits observed in blind populations, a clear consensus about the neural representations of space has yet to be established. To this end, we performed a meta-analysis of the literature on the neural correlates of spatial processing and navigation via sensory modalities other than vision, such as touch and audition, in individuals with early and late onset blindness. An activation likelihood estimation (ALE) analysis of the neuroimaging literature revealed that early blind individuals and sighted controls activate the same neural networks in the processing of non-visual spatial information and navigation, including the posterior parietal cortex, frontal eye fields, insula, and the hippocampal complex. Furthermore, blind individuals also recruit primary and associative occipital areas involved in visuo-spatial processing via cross-modal plasticity mechanisms. The scarcity of studies involving late blind individuals did not allow us to establish a clear consensus about the neural substrates of spatial representations in this specific population. In conclusion, the results of our analysis on neuroimaging studies involving early blind individuals support the amodality hypothesis of spatial representations.
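For readers unfamiliar with ALE, the core computation can be sketched briefly. The toy version below (ours, not the authors') models each reported focus as a fixed-width Gaussian, caps each experiment's modeled-activation (MA) map at one, and combines experiments with a probabilistic union; real ALE additionally uses sample-size-dependent kernel widths and permutation-based significance thresholding.

```python
import numpy as np

def ale_map(experiments, grid_shape=(40, 48, 40), sigma_mm=6.0, voxel_mm=4.0):
    """Toy ALE: blur each experiment's foci into an MA map (max-capped at 1),
    then combine maps as ALE = 1 - prod(1 - MA_i)."""
    zz, yy, xx = np.indices(grid_shape)
    grid = np.stack([xx, yy, zz], axis=-1) * voxel_mm   # voxel centers in mm
    one_minus = np.ones(grid_shape)
    for foci in experiments:                            # foci: list of (x, y, z) in mm
        ma = np.zeros(grid_shape)
        for f in foci:
            d2 = np.sum((grid - np.asarray(f)) ** 2, axis=-1)
            ma = np.maximum(ma, np.exp(-d2 / (2 * sigma_mm ** 2)))
        one_minus *= (1.0 - ma)
    return 1.0 - one_minus

# Two toy "experiments" reporting nearby foci: their overlap yields high ALE.
ale = ale_map([[(80, 100, 60)], [(84, 100, 60), (20, 40, 40)]])
print(ale.max())
```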
Affiliation(s)
- Maxime Bleau
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Samuel Paré
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel University, Ariel, Israel
- Ron Kupers
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Institute of Neuroscience, Faculty of Medicine, Université de Louvain, Brussels, Belgium
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Maurice Ptito
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Correspondence: Maurice Ptito
3
Kilian J, Neugebauer A, Scherffig L, Wahl S. The Unfolding Space Glove: A Wearable Spatio-Visual to Haptic Sensory Substitution Device for Blind People. Sensors 2022;22(5):1859. doi:10.3390/s22051859. PMID: 35271009; PMCID: PMC8914703.
Abstract
This paper documents the design, implementation and evaluation of the Unfolding Space Glove, an open-source sensory substitution device. It transmits the relative position and distance of nearby objects as vibratory stimuli to the back of the hand and thus enables blind people to haptically explore the depth of their surrounding space, assisting with navigation tasks such as object recognition and wayfinding. The prototype requires no external hardware, is highly portable, operates in all lighting conditions, and provides continuous and immediate feedback, all while being visually unobtrusive. Both blind (n = 8) and blindfolded sighted participants (n = 6) completed structured training and obstacle courses with both the prototype and a white long cane to allow performance comparisons to be drawn between them. The subjects quickly learned how to use the glove and successfully completed all of the trials, though they remained slower with it than with the cane. Qualitative interviews revealed a high level of usability and user experience. Overall, the results indicate that spatial information can be processed through sensory substitution using haptic, vibrotactile interfaces. Further research would be required to evaluate the prototype's capabilities after extensive training and to derive a fully functional navigation aid from its features.
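A minimal sketch of the glove's core transformation, assuming a 3 × 3 grid of vibration motors and hypothetical near/far distance cutoffs (the actual prototype's motor layout, distance range, and intensity curve may differ):

```python
import numpy as np

def depth_to_vibration(depth_m, grid=(3, 3), d_near=0.3, d_far=2.0):
    """Collapse a depth image into a coarse grid of vibration intensities:
    nearer obstacles -> stronger vibration (0..1), beyond d_far -> off."""
    h, w = depth_m.shape
    gh, gw = grid
    levels = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            cell = depth_m[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
            d = np.nanmin(cell)          # nearest surface in this cell
            levels[i, j] = np.clip((d_far - d) / (d_far - d_near), 0.0, 1.0)
    return levels  # e.g., forwarded to PWM duty cycles of a motor array

# Fake 120x160 depth frame with a close object in the upper-left.
frame = np.full((120, 160), 2.5)
frame[:40, :50] = 0.5
print(depth_to_vibration(frame))
```

Taking the minimum depth per cell makes the nearest surface dominate each motor, which is the conservative choice for obstacle warning.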
Affiliation(s)
- Jakob Kilian
- Köln International School of Design, TH Köln, 50678 Köln, Germany
- ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Alexander Neugebauer
- ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Lasse Scherffig
- Köln International School of Design, TH Köln, 50678 Köln, Germany
- Siegfried Wahl
- ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
- Correspondence: Tel.: +49-7071-29-84512
4
Design and Development of a Wearable Assistive Device Integrating a Fuzzy Decision Support System for Blind and Visually Impaired People. Micromachines 2021;12(9):1082. doi:10.3390/mi12091082. PMID: 34577725; PMCID: PMC8466919.
Abstract
In this article, a new design of a wearable navigation support system for blind and visually impaired people (BVIP) is proposed. The proposed navigation system relies primarily on sensors, real-time processing boards, a fuzzy logic-based decision support system, and a user interface. It uses sensor data as inputs and provides the desired safe orientation to the BVIP. The user is informed of the decision through a mixed voice-haptic interface. The navigation aid contains two wearable obstacle detection systems managed by an embedded controller. The control system adopts the Robot Operating System (ROS) architecture, supported by a BeagleBone Black master board that meets the real-time constraints. Data acquisition and obstacle avoidance are carried out by several nodes managed by the ROS, which finally deliver a mixed haptic-voice message to guide the BVIP. A fuzzy logic-based decision support system was implemented to help BVIP choose a safe direction. The system was tested with both blindfolded and visually impaired persons. Both types of users found the system promising and pointed out its potential to become a good navigation aid in the future.
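The abstract does not give the rule base, so the following is only an illustration of how a fuzzy obstacle-avoidance decision can work, with hypothetical membership functions, distance thresholds, and output labels:

```python
import numpy as np

def mu_near(d, near=0.5, far=2.0):
    """Ramp membership for 'obstacle is near': 1 below `near`, 0 past `far`."""
    return float(np.clip((far - d) / (far - near), 0.0, 1.0))

def safe_direction(d_left, d_front, d_right):
    """Tiny rule base: steer toward the side whose 'near' membership is
    lowest; stop if everything is strongly near."""
    n_l, n_f, n_r = mu_near(d_left), mu_near(d_front), mu_near(d_right)
    if min(n_l, n_f, n_r) > 0.8:
        return "stop"
    if n_f <= min(n_l, n_r):
        return "go straight"
    return "turn left" if n_l < n_r else "turn right"

print(safe_direction(d_left=1.8, d_front=0.6, d_right=0.4))  # -> "turn left"
```

A full Mamdani controller would fuzzify more inputs, aggregate weighted rules, and defuzzify to a steering command; the crisp arbitration above keeps the sketch short.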
5
Analysis and Validation of Cross-Modal Generative Adversarial Network for Sensory Substitution. Int J Environ Res Public Health 2021;18(12):6216. doi:10.3390/ijerph18126216. PMID: 34201269; PMCID: PMC8228544.
Abstract
Visual-auditory sensory substitution has demonstrated great potential for helping visually impaired and blind people to recognize objects and perform basic navigational tasks. However, the high latency between visual information acquisition and auditory transduction may contribute to the lack of successful adoption of such aid technologies in the blind community; thus far, substitution methods have remained laboratory-scale research or pilot demonstrations. This high conversion latency makes it difficult to perceive fast-moving objects or rapid environmental changes. Reducing the latency requires a prior analysis of auditory sensitivity, but existing auditory sensitivity analyses are subjective because they rely on human behavioral testing. In this study, we therefore propose a cross-modal generative adversarial network-based evaluation method to find an optimal auditory sensitivity that reduces transmission latency in visual-auditory sensory substitution while preserving the perception of visual information. We further conducted a human-based assessment to check the model-based analysis against human behavioral experiments, with three participant groups: sighted users (SU), congenitally blind (CB) and late-blind (LB) individuals. Experimental results from the proposed model showed that the temporal length of the auditory signal used for sensory substitution could be reduced by 50%, suggesting that the conventional vOICe method could be made up to twice as fast. Behavioral experiments confirmed that the model's results are consistent with human assessment. Analyzing auditory sensitivity with deep learning models thus has the potential to improve the efficiency of sensory substitution.
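For context, the conventional vOICe encoding that the study aims to accelerate scans an image column by column: horizontal position becomes time, vertical position becomes pitch, and brightness becomes loudness. A minimal sketch (our frequency band and scan parameters, not the study's), where halving `dur` halves the signal length:

```python
import numpy as np

def voice_scan(image, dur=1.0, fs=22050, f_lo=500.0, f_hi=5000.0):
    """vOICe-style sonification: scan columns left-to-right over `dur`
    seconds; row index sets pitch (top = high), brightness sets loudness."""
    h, w = image.shape
    samples_per_col = int(fs * dur / w)
    freqs = f_lo * (f_hi / f_lo) ** np.linspace(1.0, 0.0, h)  # top row = high
    out, phase = [], np.zeros(h)
    for col in range(w):
        t = np.arange(samples_per_col) / fs
        tones = np.sin(2 * np.pi * freqs[:, None] * t + phase[:, None])
        phase = (phase + 2 * np.pi * freqs * samples_per_col / fs) % (2 * np.pi)
        out.append((image[:, col:col + 1] * tones).sum(axis=0))
    sig = np.concatenate(out)
    return sig / (np.abs(sig).max() + 1e-9)

audio = voice_scan(np.random.rand(16, 16), dur=0.5)  # 50% shorter scan
```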
6
Paré S, Bleau M, Djerourou I, Malotaux V, Kupers R, Ptito M. Spatial navigation with horizontally spatialized sounds in early and late blind individuals. PLoS One 2021;16:e0247448. doi:10.1371/journal.pone.0247448. PMID: 33635892; PMCID: PMC7909643.
Abstract
Blind individuals often report difficulties navigating and detecting objects placed outside their peri-personal space. Although classical sensory substitution devices could be helpful in this respect, they often produce a complex signal that requires intensive training to interpret. New devices that provide a less complex output signal are therefore needed. Here, we evaluate a smartphone-based sensory substitution device that offers navigation guidance based on strictly spatial cues in the form of horizontally spatialized sounds. The system uses multiple sensors either to detect obstacles at a distance directly in front of the user or to create a 3D map of the environment (detection and avoidance mode, respectively), and informs the user with auditory feedback. We tested 12 early blind, 11 late blind and 24 blindfolded sighted participants on their ability to detect obstacles and to navigate an obstacle course. The three groups did not differ in the number of objects detected and avoided. However, early blind and late blind participants navigated the obstacle course faster than their sighted counterparts. These results are consistent with previous research on sensory substitution showing that vision can be replaced by other senses to improve performance in a wide variety of tasks in blind individuals. This study offers new evidence that sensory substitution devices based on horizontally spatialized sounds can be used as a navigation tool with a minimal amount of training.
Affiliation(s)
- Samuel Paré
- École d’Optométrie, Université de Montréal, Québec, Canada
- Maxime Bleau
- École d’Optométrie, Université de Montréal, Québec, Canada
- Vincent Malotaux
- Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Ron Kupers
- École d’Optométrie, Université de Montréal, Québec, Canada
- Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Institute of Neuroscience and Pharmacology (INF), University of Copenhagen, Copenhagen, Denmark
- Maurice Ptito
- École d’Optométrie, Université de Montréal, Québec, Canada
- Institute of Neuroscience and Pharmacology (INF), University of Copenhagen, Copenhagen, Denmark
7
Deep video-to-video transformations for accessibility with an application to photosensitivity. Pattern Recognit Lett 2020. doi:10.1016/j.patrec.2019.01.019.
8
Lloyd-Esenkaya T, Lloyd-Esenkaya V, O'Neill E, Proulx MJ. Multisensory inclusive design with sensory substitution. Cogn Res Princ Implic 2020;5:37. doi:10.1186/s41235-020-00240-7. PMID: 32770416; PMCID: PMC7415050.
Abstract
Sensory substitution techniques are perceptual and cognitive phenomena used to represent one sensory form with an alternative. Current applications of sensory substitution techniques typically focus on the development of assistive technologies whereby visually impaired users can acquire visual information via auditory and tactile cross-modal feedback. Yet despite their evident success in scientific research and in furthering theory development in cognition, sensory substitution techniques have not gained widespread adoption within sensory-impaired populations. Here we argue that shifting the focus from assistive to mainstream applications may resolve some of the current issues regarding the use of sensory substitution devices to improve outcomes for those with disabilities. This article provides a tutorial guide on how to use research into multisensory processing and sensory substitution techniques from the cognitive sciences to design new inclusive cross-modal displays. A greater focus on developing inclusive mainstream applications could lead to innovative technologies that can be enjoyed by every person.
Affiliation(s)
- Tayfun Lloyd-Esenkaya
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, University of Bath, Bath, UK
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, UK
- Michael J Proulx
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Psychology, University of Bath, Bath, UK
9
Chebat DR, Schneider FC, Ptito M. Spatial Competence and Brain Plasticity in Congenital Blindness via Sensory Substitution Devices. Front Neurosci 2020;14:815. doi:10.3389/fnins.2020.00815. PMID: 32848575; PMCID: PMC7406645.
Abstract
In congenital blindness (CB), tactile and auditory information can be reinterpreted by the brain to compensate for visual information through mechanisms of brain plasticity triggered by training. Visual deprivation does not cause a cognitive spatial deficit, since blind people are able to acquire spatial knowledge about the environment. This spatial competence takes longer to achieve but is eventually reached through training-induced plasticity. Congenitally blind individuals can further improve their spatial skills with the extensive use of sensory substitution devices (SSDs), either visual-to-tactile or visual-to-auditory. Using a combination of functional and anatomical neuroimaging techniques, our recent work has demonstrated the impact of spatial training with both visual-to-tactile and visual-to-auditory SSDs on brain plasticity, cortical processing, and the achievement of certain forms of spatial competence. The comparison of performance between CB and sighted people using several different sensory substitution devices in perceptual and sensory-motor tasks uncovered the striking ability of the brain to rewire itself during perceptual learning and to interpret novel sensory information, even in adulthood. We discuss here the implications of these findings for helping blind people in navigation tasks and for increasing their access to both real and virtual environments.
Affiliation(s)
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel, Israel
- Fabien C. Schneider
- Department of Radiology, University of Lyon, Saint-Etienne, France
- Neuroradiology Unit, University Hospital of Saint-Etienne, Saint-Etienne, France
- Maurice Ptito
- BRAIN Lab, Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Chaire de Recherche Harland Sanders en Sciences de la Vision, École d’Optométrie, Université de Montréal, Montréal, QC, Canada
10
Isaksson J, Jansson T, Nilsson J. Audomni: Super-Scale Sensory Supplementation to Increase the Mobility of Blind and Low-Vision Individuals - A Pilot Study. IEEE Trans Neural Syst Rehabil Eng 2020;28:1187-1197. doi:10.1109/tnsre.2020.2985626. PMID: 32286992.
Abstract
OBJECTIVE: Blindness and low vision have severe effects on individuals' quality of life and socioeconomic cost, a main contributor to which is a prevalent, acutely decreased level of mobility. To alleviate this, numerous technological solutions have been proposed over the last 70 years; however, none has become widespread. METHOD: In this paper, we introduce the vision-to-audio, super-scale sensory substitution/supplementation device Audomni; we address the field-encompassing issues of ill-motivated and overabundant test methodologies and metrics; and we utilize our proposed Desire of Use model to evaluate the pilot user tests, their results, and Audomni itself. RESULTS: Audomni holds a spatial resolution of 80 × 60 pixels at ~1.2° angular resolution and close-to-real-time temporal resolution, outdoor-viable technology, and several novel differentiation methods. The tests indicated that Audomni has a low learning curve, and several key mobility subtasks were accomplished; however, the tests would benefit from higher real-life motivation and data-collection affordability. CONCLUSION: Audomni shows promise as a viable mobility device, with some addressable issues. Employing Desire of Use to design future tests should lend them both high real-life motivation and relevance. SIGNIFICANCE: As far as we know, Audomni features the greatest information conveyance rate in the field, yet seems to offer comprehensible and fairly intuitive sonification; this work is also the first to utilize Desire of Use as a tool to evaluate user tests and a device, and to lay out an overarching project aim.
11
Richardson M, Thar J, Alvarez J, Borchers J, Ward J, Hamilton-Fletcher G. How Much Spatial Information Is Lost in the Sensory Substitution Process? Comparing Visual, Tactile, and Auditory Approaches. Perception 2019;48:1079-1103. doi:10.1177/0301006619873194. PMID: 31547778.
Abstract
Sensory substitution devices (SSDs) can convey visuospatial information through spatialised auditory or tactile stimulation using wearable technology. However, the level of information loss associated with this transformation is unknown. In this study, novice users discriminated the location of two objects at 1.2 m using devices that transformed a 16 × 8 depth map into spatially distributed patterns of light, sound, or touch on the abdomen. Results showed that through active sensing, participants could discriminate the vertical position of objects to a visual angle of 1°, 14°, and 21°, and their distance to 2 cm, 8 cm, and 29 cm using these visual, auditory, and haptic SSDs, respectively. Visual SSDs significantly outperformed auditory and tactile SSDs on vertical localisation, whereas for depth perception, all devices significantly differed from one another (visual > auditory > haptic). Our findings highlight the high level of acuity possible for SSDs even with low spatial resolutions (e.g., 16 × 8) and quantify the level of information loss attributable to this transformation for the SSD user. Finally, we discuss ways of closing this “modality gap” found in SSDs and conclude that this process is best benchmarked against performance with SSDs that return to their primary modality (e.g., visuospatial into visual).
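To make the acuity figures concrete: a vertical offset $s$ viewed at distance $d$ subtends a visual angle

$$\theta = 2\arctan\!\left(\frac{s}{2d}\right), \qquad \text{equivalently} \qquad s = 2d\tan(\theta/2).$$

At the 1.2 m test distance, the reported thresholds of 1°, 14°, and 21° therefore correspond to vertical offsets of roughly 2 cm, 29 cm, and 44 cm, respectively (our arithmetic from the abstract's numbers, not values reported by the paper).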
Affiliation(s)
- Jan Thar
- Media Computing Group, RWTH Aachen University, Germany
- James Alvarez
- Department of Psychology, University of Sussex, Brighton, UK
- Jan Borchers
- Media Computing Group, RWTH Aachen University, Germany
- Jamie Ward
- Department of Psychology, University of Sussex, Brighton, UK; Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
- Giles Hamilton-Fletcher
- Department of Psychology, University of Sussex, Brighton, UK; Neuroimaging and Visual Science Laboratory, New York University Langone Health, NY, USA
12
Enhanced Depth Navigation Through Augmented Reality Depth Mapping in Patients with Low Vision. Sci Rep 2019;9:11230. doi:10.1038/s41598-019-47397-w. PMID: 31375713; PMCID: PMC6677879.
Abstract
Patients diagnosed with Retinitis Pigmentosa (RP) show, in the advanced stage of the disease, severely restricted peripheral vision, causing poor mobility and a decline in quality of life. This vision loss makes it difficult to identify obstacles and their relative distances, so RP patients use mobility aids such as canes to navigate, especially in dark environments. A number of high-tech visual aids using virtual reality (VR) and sensory substitution have been developed to support or supplant traditional visual aids, but these have not achieved widespread use because they are difficult to use or block off residual vision. This paper presents a unique depth-to-high-contrast pseudocolor mapping overlay, developed and tested on a Microsoft HoloLens 1 as a low vision aid for RP patients. A single-masked, randomized trial with 10 RP subjects was conducted to evaluate real-world mobility and near-obstacle avoidance with the AR pseudocolor low vision aid, using an FDA-validated functional obstacle course and a custom-made grasping setup. Use of the AR visual aid reduced collisions by 50% in mobility testing (p = 0.02) and by 70% in grasp testing (p = 0.03). This paper introduces a new technique, the pseudocolor wireframe, and reports the first statistically significant improvements in mobility and grasp for the population of RP patients.
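The paper's exact palette and band edges are not given in the abstract, so the following sketch only illustrates the general idea of a depth-to-pseudocolor overlay: quantize metric depth into a few high-contrast bands and leave everything beyond the last band transparent so residual vision is not blocked. All band edges and colors below are assumptions.

```python
import numpy as np

# Hypothetical high-contrast palette: nearest band = red, then yellow,
# green, blue; beyond the last edge the overlay stays transparent.
EDGES_M = [0.5, 1.0, 1.5, 2.0]
COLORS = np.array([[255, 0, 0], [255, 255, 0], [0, 255, 0], [0, 0, 255]],
                  dtype=np.uint8)

def depth_to_pseudocolor(depth_m):
    """Quantize a metric depth map into discrete high-contrast color bands,
    returning an RGBA overlay (alpha 0 = show the real world through it)."""
    band = np.digitize(depth_m, EDGES_M)            # 0..len(EDGES_M)
    rgba = np.zeros(depth_m.shape + (4,), dtype=np.uint8)
    inside = band < len(COLORS)
    rgba[inside, :3] = COLORS[band[inside]]
    rgba[inside, 3] = 255                           # opaque only where mapped
    return rgba

overlay = depth_to_pseudocolor(np.random.uniform(0.2, 3.0, size=(48, 64)))
```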
13
Kubanek M, Bobulski J. Device for Acoustic Support of Orientation in the Surroundings for Blind People. Sensors 2018;18(12):4309. doi:10.3390/s18124309. PMID: 30563278; PMCID: PMC6308681.
Abstract
The constant development of modern technologies allows the creation of new and, above all, mobile devices supporting people with disabilities. Work carried out to improve the lives of people with disabilities is an important element of science. This paper covers matters related to the anatomy and physiology of hearing, the imagery abilities of blind people, and devices supporting blind people. The authors developed a prototype of an electronic device that supports blind people's orientation in the environment by means of sound signals. The sounds present the user with a simplified map of the depth of the space in front of the device. An innovative element of the work is the use of a Kinect sensor to scan the space in front of the user, together with a set of algorithms for learning and generating the acoustic space that take the inclination of the head into account. The experiments carried out indicate correct interpretation of the modeled audible signals, and tests with visually impaired persons demonstrate the high efficiency of the developed concept.
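The abstract highlights head-inclination-aware generation of the acoustic space. One plausible ingredient of such a scheme, sketched below with an assumed camera field of view and band size, is shifting the sonified window of the depth map to compensate for head pitch so the soundscape keeps describing the space straight ahead:

```python
import numpy as np

def head_compensated_window(depth_m, pitch_deg, v_fov_deg=60.0):
    """Select the horizontal band of the depth map that corresponds to
    'straight ahead' after correcting for head inclination, so nodding
    does not shift the sonified scene (FOV and band size assumed)."""
    h, w = depth_m.shape
    rows_per_deg = h / v_fov_deg          # rows per degree of vertical FOV
    # Positive pitch (looking down) moves the window up in the image.
    center = int(h / 2 - pitch_deg * rows_per_deg)
    half = h // 6                         # sonify a third of the image height
    top = int(np.clip(center - half, 0, h - 2 * half))
    return depth_m[top:top + 2 * half, :]

band = head_compensated_window(np.random.uniform(0.5, 4.0, (120, 160)),
                               pitch_deg=10.0)
```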
Affiliation(s)
- Mariusz Kubanek
- Institute of Computer and Information Sciences, Czestochowa University of Technology, 42-201 Czestochowa, Poland.
- Janusz Bobulski
- Institute of Computer and Information Sciences, Czestochowa University of Technology, 42-201 Czestochowa, Poland.
14
Schinazi VR, Thrash T, Chebat DR. Spatial navigation by congenitally blind individuals. Wiley Interdiscip Rev Cogn Sci 2016;7(1):37-58. doi:10.1002/wcs.1375. PMID: 26683114; PMCID: PMC4737291.
Abstract
Spatial navigation in the absence of vision has been investigated from a variety of perspectives and disciplines. These different approaches have advanced our understanding of spatial knowledge acquisition by blind individuals, including their abilities, strategies, and corresponding mental representations. In this review, we propose a framework for investigating differences in spatial knowledge acquisition by blind and sighted people consisting of three longitudinal models (i.e., convergent, cumulative, and persistent). Recent advances in neuroscience and technological devices have provided novel insights into the different neural mechanisms underlying spatial navigation by blind and sighted people and the potential for functional reorganization. Despite these advances, there is still a lack of consensus regarding the extent to which locomotion and wayfinding depend on amodal spatial representations. This challenge largely stems from methodological limitations such as heterogeneity in the blind population and terminological ambiguity related to the concept of cognitive maps. Coupled with an over-reliance on potential technological solutions, these limitations have caused the field to diffuse into theoretical and applied branches that do not always communicate. Here, we review research on navigation by congenitally blind individuals with an emphasis on behavioral and neuroscientific evidence, as well as the potential of technological assistance. Throughout the article, we emphasize the need to disentangle strategy choice and performance when discussing the navigation abilities of the blind population.
Affiliation(s)
- Victor R Schinazi
- Department of Humanities, Social, and Political Sciences, ETH Zürich, Zürich, Switzerland
- Tyler Thrash
- Department of Humanities, Social, and Political Sciences, ETH Zürich, Zürich, Switzerland
15
A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor. Appl Bionics Biomech 2015;2015:479857. doi:10.1155/2015/479857. PMID: 27057135; PMCID: PMC4745441.
Abstract
For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example, motivated by the increasing miniaturization of electronics and improvements in processing power and sensing capabilities. This paper presents a complete navigation system based on low-cost, physically unobtrusive sensors: a camera and an infrared depth sensor. The system is built around corner features and depth values from the Kinect's infrared sensor. Obstacles are found in camera images using corner detection, while input from the depth sensor provides the corresponding distance; the combination is both efficient and robust. The system not only identifies obstacles but also suggests a safe path (if available) to the left or right and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately.
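A minimal sketch of this camera-plus-depth fusion, with OpenCV's corner detector standing in for the paper's obstacle cue; the distance threshold and the simple left/right split of the image are our assumptions:

```python
import numpy as np
import cv2  # OpenCV

def navigation_hint(gray, depth_m, stop_dist=1.0):
    """Find corner features (obstacle evidence), read their distance from an
    aligned depth map, and suggest a direction (thresholds assumed)."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return "path clear"
    h, w = gray.shape
    near_left = near_right = 0
    for x, y in corners.reshape(-1, 2):
        if depth_m[int(y), int(x)] < stop_dist:   # corner closer than limit
            if x < w / 2:
                near_left += 1
            else:
                near_right += 1
    if near_left == 0 and near_right == 0:
        return "path clear"
    if near_left and near_right:
        return "stop"
    return "move right" if near_left else "move left"

# Demo: a bright box on the right side of the frame, 0.7 m away.
gray = np.zeros((120, 160), dtype=np.uint8)
cv2.rectangle(gray, (90, 30), (130, 90), 255, -1)
depth = np.full((120, 160), 3.0)
depth[25:95, 85:135] = 0.7
print(navigation_hint(gray, depth))               # -> "move left"
```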