1. Ciuffreda I, Casaccia S, Revel GM. A Multi-Sensor Fusion Approach Based on PIR and Ultrasonic Sensors Installed on a Robot to Localise People in Indoor Environments. Sensors (Basel) 2023; 23:6963. [PMID: 37571746] [PMCID: PMC10422386] [DOI: 10.3390/s23156963]
Abstract
This work illustrates an innovative localisation sensor network that uses multiple PIR and ultrasonic sensors installed on a mobile social robot to localise occupants in indoor environments. The system presented aims to measure movement direction and distance to reconstruct the movement of a person in an indoor environment by using sensor activation strategies and data processing techniques. The data collected are then analysed using both a supervised (Decision Tree) and an unsupervised (K-Means) machine learning algorithm to extract the direction and distance of occupant movement from the measurement system, respectively. Tests in a controlled environment have been conducted to assess the accuracy of the methodology when multiple PIR and ultrasonic sensor systems are used. In addition, a qualitative evaluation of the system's ability to reconstruct the movement of the occupant has been performed. The system proposed can reconstruct the direction of an occupant with an accuracy of 70.7% and uncertainty in distance measurement of 6.7%.
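The two-stage analysis described above (a supervised Decision Tree for movement direction, unsupervised K-Means for distance) can be sketched roughly as follows. This is an illustrative sketch only: the sensor count, feature layout, and toy labels are invented for the example and are not taken from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Assume 4 PIR sensors; each sample is a binary activation pattern.
# Toy rule: label 1 when either of the first two sensors fires.
X_pir = rng.integers(0, 2, size=(200, 4))
y_dir = (X_pir[:, 0] | X_pir[:, 1]).astype(int)

# Supervised stage: a shallow tree maps activation patterns to direction.
direction_clf = DecisionTreeClassifier(max_depth=3, random_state=0)
direction_clf.fit(X_pir, y_dir)

# Unsupervised stage: K-Means groups ultrasonic ranges (cm) into
# coarse near/mid/far distance bands.
distances = rng.uniform(20, 400, size=(300, 1))
bands = KMeans(n_clusters=3, n_init=10, random_state=0).fit(distances)

pred = direction_clf.predict([[1, 1, 0, 0]])
```

A real system would extract features from timed sensor activations rather than raw snapshots, but the split between a classifier for direction and a clusterer for distance mirrors the structure the abstract describes.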
Affiliation(s)
- Ilaria Ciuffreda, Department of Industrial Engineering and Mathematical Sciences, Polytechnic University of Marche, 60131 Ancona, Italy
2. Hoogsteen KM, Szpiro S, Kreiman G, Peli E. Beyond the Cane: Describing Urban Scenes to Blind People for Mobility Tasks. ACM Transactions on Accessible Computing 2022; 15. [DOI: 10.1145/3522757]
Abstract
Blind people face difficulties with independent mobility, impacting employment prospects, social inclusion, and quality of life. Given the advancements in computer vision, with more efficient and effective automated information extraction from visual scenes, it is important to determine what information is worth conveying to blind travelers, especially since people have a limited capacity to receive and process sensory information. We aimed to investigate which objects in a street scene are useful to describe and how those objects should be described. Thirteen cane-using participants, five of whom were early blind, took part in two urban walking experiments. In the first experiment, participants were asked to voice their information needs in the form of questions to the experimenter. In the second experiment, participants were asked to score scene descriptions and navigation instructions, provided by the experimenter, in terms of their usefulness. The descriptions included a variety of objects with various annotations per object. Additionally, we asked participants to rank order the objects and the different descriptions per object in terms of priority and explain why the provided information is or is not useful to them. The results reveal differences between early and late blind participants. Late blind participants requested information more frequently and prioritized information about objects’ locations. Our results illustrate how different factors, such as the level of detail, relative position, and what type of information is provided when describing an object, affected the usefulness of scene descriptions. Participants explained how they (indirectly) used information, but they were frequently unable to explain their ratings. The results distinguish between various types of travel information, underscore the importance of featuring these types at multiple levels of abstraction, and highlight gaps in current understanding of travel information needs. 
Elucidating the information needs of blind travelers is critical for the development of more useful assistive technologies.
Affiliation(s)
- Karst M.P. Hoogsteen, Schepens Eye Research Institute, Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, United States of America
- Sarit Szpiro, Department of Special Education, University of Haifa, Haifa, Israel
- Gabriel Kreiman, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, United States of America; Center for Brains, Minds, and Machines, Cambridge, Massachusetts, United States of America
- Eli Peli, Schepens Eye Research Institute, Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, United States of America
3. Oladele DA, Markus ED, Abu-Mahfouz AM. Adaptability of Assistive Mobility Devices and the Role of the Internet of Medical Things: Comprehensive Review. JMIR Rehabil Assist Technol 2021; 8:e29610. [PMID: 34779786] [PMCID: PMC8663709] [DOI: 10.2196/29610]
Abstract
Background With the projected upsurge in the percentage of people with some form of disability, there has been a significant increase in the need for assistive mobility devices. However, for mobility aids to be effective, such devices should be adapted to the user's needs. This can be achieved by improving the confidence of the acquired information (interaction between the user, the environment, and the device) following design specifications. Therefore, there is a need for a literature review of the adaptability of assistive mobility devices. Objective In this study, we aim to review the adaptability of assistive mobility devices and the role of the internet of medical things in terms of the acquired information for assistive mobility devices. We review internet-enabled assistive mobility technologies and non-internet of things (IoT) assistive mobility devices. This review provides awareness of the status of adaptive mobility technology and serves as a reference for health care professionals and researchers. Methods We performed a literature search on the following databases of academic references and journals: Google Scholar, ScienceDirect, Institute of Electrical and Electronics Engineers, Springer, and websites of assistive mobility foundations presenting studies on assistive mobility found through a generic Google search (including the World Health Organization website). The following keywords were used: assistive mobility OR assistive robots, assistive mobility devices, internet-enabled assistive mobility technologies, IoT Framework OR IoT Architecture AND for Healthcare, assisted navigation OR autonomous navigation, mobility AND aids OR devices, adaptability of assistive technology, adaptive mobility devices, pattern recognition, autonomous navigational systems, human-robot interfaces, motor rehabilitation devices, perception, and ambient assisted living. Results We identified 13,286 results (excluding titles that were not relevant to this study). Then, through a narrative review, we selected 189 potential studies (189/13,286, 1.42%) from the existing literature on the adaptability of assistive mobility devices and IoT frameworks for assistive mobility and conducted a critical analysis. Of the 189 potential studies, 82 (43.4%) were selected for analysis after meeting the inclusion criteria. On the basis of the type of technologies presented in the reviewed articles, we proposed a categorization of the adaptability of smart assistive mobility devices in terms of their interaction with the user (user-system interface), perception techniques, and communication and sensing frameworks. Conclusions We discuss notable limitations of the reviewed studies. The findings revealed that an improvement in the adaptation of assistive mobility systems would require a reduction in training time and avoidance of cognitive overload. Furthermore, sensor fusion and classification accuracy are critical for achieving real-world testing requirements. Finally, the trade-off between cost and performance should be considered in the commercialization of these devices.
Affiliation(s)
- Daniel Ayo Oladele, Department of Electrical, Electronic and Computer Engineering, Central University of Technology, Bloemfontein, South Africa
- Elisha Didam Markus, Department of Electrical, Electronic and Computer Engineering, Central University of Technology, Bloemfontein, South Africa
4. Pundlik S, Baliutaviciute V, Moharrer M, Bowers AR, Luo G. Home-Use Evaluation of a Wearable Collision Warning Device for Individuals With Severe Vision Impairments: A Randomized Clinical Trial. JAMA Ophthalmol 2021; 139:998-1005. [PMID: 34292298] [PMCID: PMC8299358] [DOI: 10.1001/jamaophthalmol.2021.2624]
Abstract
IMPORTANCE There is scant rigorous evidence about the real-world mobility benefit of electronic mobility aids. OBJECTIVE To evaluate the effect of a collision warning device on the number of contacts experienced by blind and visually impaired people in their daily mobility. DESIGN, SETTING, AND PARTICIPANTS In this double-masked randomized clinical trial, participants used a collision warning device during their daily mobility over a period of 4 weeks. A volunteer sample of 31 independently mobile individuals with severe visual impairments, including total blindness and peripheral visual field restrictions, who used a long cane or guide dog as their habitual mobility aid completed the study. The study was conducted from January 2018 to December 2019. INTERVENTIONS The device automatically detected collision hazards using a chest-mounted video camera. It randomly switched between 2 modes: active mode (intervention condition), where it provided alerts for detected collision threats via 2 vibrotactile wristbands, and silent mode (control condition), where the device still detected collisions but did not provide any warnings to the user. Scene videos along with the collision warning information were recorded by the device. Potential collisions detected by the device were reviewed and scored, including contacts with the hazards, by 2 independent reviewers. Participants and reviewers were masked to the device operation mode. MAIN OUTCOMES AND MEASURES Rate of contacts per 100 hazards per hour, compared between the 2 device modes within each participant. Modified intention-to-treat analysis was used. RESULTS Of the 31 included participants, 18 (58%) were male, and the median (range) age was 61 (25-73) years. A total of 19 participants (61%) had a visual acuity (VA) of light perception or worse, and 28 (90%) reported a long cane as their habitual mobility aid. 
The median (interquartile range) number of contacts was lower in the active mode compared with silent mode (9.3 [6.6-14.9] vs 13.8 [6.9-24.3]; difference, 4.5; 95% CI, 1.5-10.7; P < .001). Controlling for demographic characteristics, presence of VA better than light perception, and fall history, the rate of contacts significantly reduced in the active mode compared with the silent mode (β = 0.63; 95% CI, 0.54-0.73; P < .001). CONCLUSIONS AND RELEVANCE In this study involving 31 visually impaired participants, the collision warnings were associated with a reduced rate of contacts with obstacles in daily mobility, indicating the potential of the device to augment habitual mobility aids. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT03057496.
Affiliation(s)
- Shrinivas Pundlik, Schepens Eye Research Institute of Mass Eye and Ear, Boston, Massachusetts; Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
- Vilte Baliutaviciute, Schepens Eye Research Institute of Mass Eye and Ear, Boston, Massachusetts; The Family Institute, Northwestern University, Evanston, Illinois
- Mojtaba Moharrer, Schepens Eye Research Institute of Mass Eye and Ear, Boston, Massachusetts; Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
- Alex R. Bowers, Schepens Eye Research Institute of Mass Eye and Ear, Boston, Massachusetts; Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
- Gang Luo, Schepens Eye Research Institute of Mass Eye and Ear, Boston, Massachusetts; Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
5. Chaudary B, Pohjolainen S, Aziz S, Arhippainen L, Pulli P. Teleguidance-based remote navigation assistance for visually impaired and blind people: usability and user experience. Virtual Reality 2021; 27:141-158. [PMID: 34054327] [PMCID: PMC8142295] [DOI: 10.1007/s10055-021-00536-z]
Abstract
This paper reports the development of a specialized teleguidance-based navigation assistance system for blind and visually impaired people. We present findings from a usability and user experience study conducted with 11 blind and visually impaired participants and a sighted caretaker. Participants sent a live video feed of their field of view to the remote caretaker's terminal from a smartphone camera attached to their chest. The caretaker used this video feed to guide them through indoor and outdoor navigation scenarios using a combination of haptic and voice-based communication. Haptic feedback was provided through vibrating actuators installed in the grip of a Smart Cane. Two haptic methods for directional guidance were tested: (1) two vibrating actuators to guide left and right movement and (2) a single vibrating actuator with differentiated vibration patterns for the same purpose. User feedback was collected using the meCUE 2.0 standardized questionnaire, interviews, and group discussions. Participants' perceptions of the proposed navigation assistance system were positive. Blind participants preferred vibrational guidance with two actuators, while partially blind participants preferred the single-actuator method. Familiarity with cane use and age were important factors in the choice of haptic method by both blind and partially blind users. The smartphone camera was found to provide a sufficient field of view for remote assistance; camera position and angle are nonetheless important considerations. Ultimately, more research is needed to confirm our preliminary findings. We also present an expanded evaluation model developed to carry out further research on assistive systems.
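The two directional-guidance encodings compared in the study can be illustrated as follows. This is not the authors' implementation: the actuator names and pulse timings are invented for the example; only the two-scheme structure (one actuator per side versus one actuator with distinct patterns) comes from the abstract.

```python
def encode_two_actuators(direction: str) -> dict:
    """Scheme 1: drive the left or right actuator with a single steady pulse."""
    if direction not in ("left", "right"):
        raise ValueError("direction must be 'left' or 'right'")
    return {"actuator": direction, "pulses_ms": [300]}

def encode_single_actuator(direction: str) -> dict:
    """Scheme 2: one actuator; direction is carried by the pulse pattern."""
    # Hypothetical patterns: three short bursts vs one long pulse.
    patterns = {"left": [100, 100, 100], "right": [400]}
    return {"actuator": "center", "pulses_ms": patterns[direction]}

left_cmd = encode_two_actuators("left")
right_cmd = encode_single_actuator("right")
```

The design trade-off the study probes is visible even in this sketch: scheme 1 needs two actuators but no pattern learning, while scheme 2 needs only one actuator but requires the user to memorize patterns.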
Affiliation(s)
- Babar Chaudary, Faculty of Information Technology and Electrical Engineering, OASIS Research Unit, University of Oulu, PO Box 3000, 90014 Oulu, Finland
- Sami Pohjolainen, Faculty of Information Technology and Electrical Engineering, OASIS Research Unit, University of Oulu, PO Box 3000, 90014 Oulu, Finland
- Leena Arhippainen, Faculty of Information Technology and Electrical Engineering, INTERACT Research Unit, University of Oulu, PO Box 8000, 90014 Oulu, Finland
- Petri Pulli, Faculty of Information Technology and Electrical Engineering, OASIS Research Unit, University of Oulu, PO Box 3000, 90014 Oulu, Finland
6. Chang KJ, Dillon LL, Deverell L, Boon MY, Keay L. Orientation and mobility outcome measures. Clin Exp Optom 2021; 103:434-448. [DOI: 10.1111/cxo.13004]
Affiliation(s)
- Kuo-yi Jade Chang, School of Public Health, The University of Sydney, Sydney, Australia; Injury Division, The George Institute for Global Health, Sydney, Australia
- Lisa Lorraine Dillon, Injury Division, The George Institute for Global Health, Sydney, Australia; Faculty of Medicine, The University of New South Wales, Sydney, Australia
- Lil Deverell, School of Health Sciences, Swinburne University of Technology, Melbourne, Australia
- Mei Ying Boon, School of Optometry and Vision Science, The University of New South Wales, Sydney, Australia
- Lisa Keay, Injury Division, The George Institute for Global Health, Sydney, Australia; Faculty of Medicine, The University of New South Wales, Sydney, Australia
7. Efficient Multi-Object Detection and Smart Navigation Using Artificial Intelligence for Visually Impaired People. Entropy 2020; 22:e22090941. [PMID: 33286711] [PMCID: PMC7597210] [DOI: 10.3390/e22090941]
Abstract
Visually impaired people face numerous difficulties in their daily life, and technological interventions may assist them to meet these challenges. This paper proposes an artificial intelligence-based fully automatic assistive technology to recognize different objects, and auditory inputs are provided to the user in real time, which gives better understanding to the visually impaired person about their surroundings. A deep-learning model is trained with multiple images of objects that are highly relevant to the visually impaired person. Training images are augmented and manually annotated to bring more robustness to the trained model. In addition to computer vision-based techniques for object recognition, a distance-measuring sensor is integrated to make the device more comprehensive by recognizing obstacles while navigating from one place to another. The auditory information that is conveyed to the user after scene segmentation and obstacle identification is optimized to obtain more information in less time for faster processing of video frames. The average accuracy of this proposed method is 95.19% and 99.69% for object detection and recognition, respectively. The time complexity is low, allowing a user to perceive the surrounding scene in real time.
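The fusion step the abstract describes, combining object-recognition output with a distance-measuring sensor into a concise audio message, might look something like the sketch below. The thresholds, labels, and message format are assumptions for illustration, not details from the paper.

```python
def audio_message(detections, obstacle_cm, near_cm=150):
    """Build a short spoken string from detector output and an ultrasonic range.

    detections: list of (label, confidence) pairs from the recognition model.
    obstacle_cm: ultrasonic distance reading in cm, or None if unavailable.
    """
    parts = []
    # Obstacle proximity takes priority: announce it first when close.
    if obstacle_cm is not None and obstacle_cm < near_cm:
        parts.append(f"obstacle {obstacle_cm / 100:.1f} metres ahead")
    # Keep the message short: at most the two most confident objects.
    for label, conf in sorted(detections, key=lambda d: -d[1])[:2]:
        if conf >= 0.5:
            parts.append(label)
    return "; ".join(parts) if parts else "path clear"

msg = audio_message([("chair", 0.9), ("door", 0.7), ("cat", 0.3)], obstacle_cm=120)
```

Capping the number of announced objects reflects the abstract's point that auditory output must be optimized so the user gets more information in less time.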
8. van Nispen RMA, Virgili G, Hoeben M, Langelaan M, Klevering J, Keunen JEE, van Rens GHMB. Low vision rehabilitation for better quality of life in visually impaired adults. Cochrane Database Syst Rev 2020; 1:CD006543. [PMID: 31985055] [PMCID: PMC6984642] [DOI: 10.1002/14651858.cd006543.pub2]
Abstract
BACKGROUND Low vision rehabilitation aims to optimise the use of residual vision after severe vision loss, but also aims to teach skills in order to improve visual functioning in daily life. Other aims include helping people to adapt to permanent vision loss and improving psychosocial functioning. These skills promote independence and active participation in society. Low vision rehabilitation should ultimately improve quality of life (QOL) for people who have visual impairment. OBJECTIVES To assess the effectiveness of low vision rehabilitation interventions on health-related QOL (HRQOL), vision-related QOL (VRQOL) or visual functioning and other closely related patient-reported outcomes in visually impaired adults. SEARCH METHODS We searched relevant electronic databases and trials registers up to 18 September 2019. SELECTION CRITERIA We included randomised controlled trials (RCTs) investigating HRQOL, VRQOL and related outcomes of adults, with an irreversible visual impairment (World Health Organization criteria). We included studies that compared rehabilitation interventions with active or inactive control. DATA COLLECTION AND ANALYSIS We used standard methods expected by Cochrane. We assessed the certainty of the evidence using the GRADE approach. MAIN RESULTS We included 44 studies (73 reports) conducted in North America, Australia, Europe and Asia. Considering the clinical diversity of low vision rehabilitation interventions, the studies were categorised into four groups of related intervention types (and by comparator): (1) psychological therapies and/or group programmes, (2) methods of enhancing vision, (3) multidisciplinary rehabilitation programmes, (4) other programmes. Comparators were no care or waiting list as an inactive control group, usual care or other active control group. Participants included in the reported studies were mainly older adults with visual impairment or blindness, often as a result of age-related macular degeneration (AMD). 
Study settings were often hospitals or low vision rehabilitation services. Effects were measured at the short-term (six months or less) in most studies. Not all studies reported on funding, but those who did were supported by public or non-profit funders (N = 31), except for two studies. Compared to inactive comparators, we found very low-certainty evidence of no beneficial effects on HRQOL that was imprecisely estimated for psychological therapies and/or group programmes (SMD 0.26, 95% CI -0.28 to 0.80; participants = 183; studies = 1) and an imprecise estimate suggesting little or no effect of multidisciplinary rehabilitation programmes (SMD -0.08, 95% CI -0.37 to 0.21; participants = 183; studies = 2; I2 = 0%); no data were available for methods of enhancing vision or other programmes. Regarding VRQOL, we found low- or very low-certainty evidence of imprecisely estimated benefit with psychological therapies and/or group programmes (SMD -0.23, 95% CI -0.53 to 0.08; studies = 2; I2 = 24%) and methods of enhancing vision (SMD -0.19, 95% CI -0.54 to 0.15; participants = 262; studies = 5; I2 = 34%). Two studies using multidisciplinary rehabilitation programmes showed beneficial but inconsistent results, of which one study, which was at low risk of bias and used intensive rehabilitation, recorded a very large and significant effect (SMD: -1.64, 95% CI -2.05 to -1.24), and the other a small and uncertain effect (SMD -0.42, 95%: -0.90 to 0.07). Compared to active comparators, we found very low-certainty evidence of small or no beneficial effects on HRQOL that were imprecisely estimated with psychological therapies and/or group programmes including no difference (SMD -0.09, 95% CI -0.39 to 0.20; participants = 600; studies = 4; I2 = 67%). 
We also found very low-certainty evidence of small or no beneficial effects with methods of enhancing vision, that were imprecisely estimated (SMD -0.09, 95% CI -0.28 to 0.09; participants = 443; studies = 2; I2 = 0%) and multidisciplinary rehabilitation programmes (SMD -0.10, 95% CI -0.31 to 0.12; participants = 375; studies = 2; I2 = 0%). Concerning VRQOL, low-certainty evidence of small or no beneficial effects that were imprecisely estimated, was found with psychological therapies and/or group programmes (SMD -0.11, 95% CI -0.24 to 0.01; participants = 1245; studies = 7; I2 = 19%) and moderate-certainty evidence of small effects with methods of enhancing vision (SMD -0.24, 95% CI -0.40 to -0.08; participants = 660; studies = 7; I2 = 16%). No additional benefit was found with multidisciplinary rehabilitation programmes (SMD 0.01, 95% CI -0.18 to 0.20; participants = 464; studies = 3; I2 = 0%; low-certainty evidence). Among secondary outcomes, very low-certainty evidence of a significant and large, but imprecisely estimated benefit on self-efficacy or self-esteem was found for psychological therapies and/or group programmes versus waiting list or no care (SMD -0.85, 95% CI -1.48 to -0.22; participants = 456; studies = 5; I2 = 91%). In addition, very low-certainty evidence of a significant and large estimated benefit on depression was found for psychological therapies and/or group programmes versus waiting list or no care (SMD -1.23, 95% CI -2.18 to -0.28; participants = 456; studies = 5; I2 = 94%), and moderate-certainty evidence of a small benefit versus usual care (SMD -0.14, 95% CI -0.25 to -0.04; participants = 1334; studies = 9; I2 = 0%). ln the few studies in which (serious) adverse events were reported, these seemed unrelated to low vision rehabilitation. AUTHORS' CONCLUSIONS In this Cochrane Review, no evidence of benefit was found of diverse types of low vision rehabilitation interventions on HRQOL. 
We found low- and moderate-certainty evidence, respectively, of a small benefit on VRQOL in studies comparing psychological therapies or methods for enhancing vision with active comparators. The type of rehabilitation varied among studies, even within intervention groups, but benefits were detected even when compared to active control groups. Studies were conducted on adults with visual impairment, mainly of older age, living in high-income countries and often having AMD. Most of the included studies on low vision rehabilitation had a short follow-up. Despite these limitations, the consistent direction of the effects in this review towards benefit justifies further research of better methodological quality, including longer-term maintenance of effects and the costs of several types of low vision rehabilitation. Research on the working mechanisms of components of rehabilitation interventions in different settings, including low-income countries, is also needed.
Affiliation(s)
- Ruth MA van Nispen, Department of Ophthalmology, Amsterdam Public Health Research Institute, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Gianni Virgili, Department of Neurosciences, Psychology, Drug Research and Child Health (NEUROFARBA), University of Florence, Largo Palagi 1, 50134 Florence, Italy
- Mirke Hoeben, Department of Ophthalmology, Amsterdam Public Health Research Institute, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Maaike Langelaan, Netherlands Institute for Health Services Research (NIVEL), P.O. Box 1568, 3500 BN Utrecht, Netherlands
- Jeroen Klevering, Department of Ophthalmology, Radboud University Medical Center, Nijmegen, Netherlands
- Jan EE Keunen, Department of Ophthalmology, Radboud University Medical Center, Nijmegen, Netherlands
- Ger HMB van Rens, Department of Ophthalmology, Amsterdam Public Health Research Institute, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands; Department of Ophthalmology, Elkerliek Hospital, Helmond, Netherlands
9. Insight on Electronic Travel Aids for Visually Impaired People: A Review on the Electromagnetic Technology. Electronics 2019. [DOI: 10.3390/electronics8111281]
Abstract
This review provides a comprehensive description of the available electromagnetic travel aids for visually impaired and blind people. This remains an active research area, driven by the rapid growth in the number of people with visual impairments. For decades, different technologies have been employed to address the crucial challenge of improving the mobility of visually impaired people, but a fully satisfactory solution has not yet been developed. Focusing on electromagnetic technology, this contribution surveys the state of the art of available solutions. Electronic travel aids based on electromagnetic technology have been identified as an emerging option due to their high level of achievable performance in terms of accuracy, flexibility, lightness, and cost-effectiveness.
10. Preliminary Evaluation of a Wearable Camera-based Collision Warning Device for Blind Individuals. Optom Vis Sci 2019; 95:747-756. [PMID: 30169353] [DOI: 10.1097/opx.0000000000001264]
Abstract
SIGNIFICANCE This work describes a preliminary evaluation of a wearable collision warning device for blind individuals. The device was found to provide mobility benefit in subjects without (or deprived of) vision. This preliminary evaluation will facilitate further testing of this developmental stage device in more naturalistic conditions. PURPOSE We developed a wearable video camera-based device that provided tridirectional collision warnings (right, center, and left) via differential feedback of two vibrotactile wristbands. We evaluated its mobility benefit in blind and normally sighted (NS) blindfolded individuals in indoor mobility courses. METHODS Three evaluation experiments were conducted. First, the ability of the device to provide warnings for hanging objects not detected by a long cane was evaluated in eight NS and four blind subjects in an obstacle course with and without the device. Second, the accuracy of collision warning direction assignment was evaluated in 10 NS subjects as they walked toward a hanging object at random offsets and verbally reported the obstacle offset position with respect to their walking path based on the wristbands' vibrotactile feedback. Third, the mobility benefit of collision warning direction information was evaluated by 10 NS and 4 blind subjects when walking with and without differential wristband feedback. RESULTS In experiment 1, collisions reduced significantly from a median of 11.5 without to 4 with the device (P < .001). Percent preferred walking speed reduced only slightly from 41% without to 36% with the device (P = .04). In experiment 2, the most likely reported relative obstacle positions were consistent with the actual positions. In experiment 3, subjects made more correct navigational decisions with than without the collision warning direction information (91% vs. 69%, P < .001). 
CONCLUSIONS Substantial mobility benefit of the device was seen in detection of aboveground collision threats missed by a long cane and in enabling better navigational decision making based on the tridirectional collision warning information.
11. Wearable Travel Aid for Environment Perception and Navigation of Visually Impaired People. Electronics 2019. [DOI: 10.3390/electronics8060697]
Abstract
Assistive devices for visually impaired people (VIP) that support daily travel and improve social inclusion are developing fast. Most of them try to solve the problem of navigation or obstacle avoidance, and other works focus on helping VIP recognize surrounding objects. However, very few couple both capabilities (i.e., navigation and recognition). Aiming at these needs, this paper presents a wearable assistive device that allows VIP to (i) navigate safely and quickly in unfamiliar environments and (ii) recognize objects in both indoor and outdoor environments. The device consists of a consumer Red, Green, Blue and Depth (RGB-D) camera and an Inertial Measurement Unit (IMU), which are mounted on a pair of eyeglasses, and a smartphone. The device leverages the continuity of ground height across adjacent image frames to segment the ground accurately and rapidly, and then searches for a moving direction over the segmented ground. A lightweight Convolutional Neural Network (CNN)-based object recognition system is developed and deployed on the smartphone to increase the perception ability of VIP and complement the navigation system. It provides semantic information about the surroundings, such as the categories, locations, and orientations of objects. Human-machine interaction is performed through an audio module (a beeping sound for obstacle alerts, speech recognition for understanding user commands, and speech synthesis for expressing semantic information about the surroundings). We evaluated the performance of the proposed system through experiments conducted in both indoor and outdoor scenarios, demonstrating the efficiency and safety of the proposed assistive system.
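The direction-search step the abstract describes can be caricatured in a few lines: treat image columns as walkable when their depth-derived ground height stays within a tolerance of the reference ground plane, then steer toward the centre of the widest walkable run. The array sizes, tolerance, and height values below are invented for illustration; the paper's actual segmentation operates on full depth frames.

```python
import numpy as np

def pick_direction(ground_height_cm, ref_cm=0.0, tol_cm=5.0):
    """Return the column index at the centre of the widest walkable run,
    or None if no column is walkable."""
    walkable = np.abs(ground_height_cm - ref_cm) <= tol_cm
    best_len, best_centre, run_start = 0, None, None
    for i, ok in enumerate(list(walkable) + [False]):  # sentinel closes last run
        if ok and run_start is None:
            run_start = i
        elif not ok and run_start is not None:
            if i - run_start > best_len:
                best_len, best_centre = i - run_start, (run_start + i - 1) // 2
            run_start = None
    return best_centre

# One height per image column (cm above the reference plane);
# large values are obstacles, small deviations are walkable floor.
heights = np.array([0.0, 1.0, 30.0, 0.5, -1.0, 0.0, 2.0, 40.0])
direction = pick_direction(heights)
```

Here columns 3-6 form the widest near-flat run, so the sketch steers toward column 4, roughly what "searching the moving direction according to the ground" amounts to in one dimension.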
12
Katzschmann RK, Araki B, Rus D. Safe Local Navigation for Visually Impaired Users With a Time-of-Flight and Haptic Feedback Device. IEEE Trans Neural Syst Rehabil Eng 2019. [PMID: 29522402] [DOI: 10.1109/tnsre.2018.2800665]
Abstract
This paper presents ALVU (Array of Lidars and Vibrotactile Units), a contactless, intuitive, hands-free, and discreet wearable device that allows visually impaired users to detect low- and high-hanging obstacles, as well as physical boundaries, in their immediate environment. The solution allows safe local navigation in both confined and open spaces by enabling the user to distinguish free space from obstacles. The device comprises two parts: a sensor belt and a haptic strap. The sensor belt is an array of time-of-flight distance sensors worn around the front of the user's waist; its pulses of infrared light provide reliable and accurate measurements of the distances between the user and surrounding obstacles or surfaces. The haptic strap communicates the measured distances through an array of vibratory motors worn around the user's upper abdomen, providing haptic feedback. The linear vibration motors are combined with a point-loaded pretensioned applicator to transmit isolated vibrations to the user. The device's capability was validated in an extensive user study entailing 162 trials with 12 blind users. Users wearing the device successfully walked through hallways, avoided obstacles, and detected staircases.
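The distance-to-feedback mapping such a sensor-belt/haptic-strap design relies on can be sketched as follows. This is an illustrative reconstruction, not the ALVU implementation; the range thresholds and the linear ramp are assumed values.

```python
# Illustrative sketch (not the ALVU implementation): map an array of
# time-of-flight distance readings, one per belt sensor, to per-motor
# vibration intensities. Thresholds are assumed values.

def distances_to_vibration(distances_m, max_range_m=2.0, min_range_m=0.2):
    """Map each sensor's distance (metres) to a vibration duty cycle in [0, 1].

    Closer obstacles produce stronger vibration; readings at or beyond
    max_range_m produce none. One motor is assumed per sensor.
    """
    intensities = []
    for d in distances_m:
        if d >= max_range_m:
            intensities.append(0.0)  # free space: no feedback
        else:
            d = max(d, min_range_m)  # clamp implausibly near readings
            # Linear ramp: min_range_m -> 1.0, max_range_m -> 0.0.
            intensities.append((max_range_m - d) / (max_range_m - min_range_m))
    return intensities
```

A real device would additionally rate-limit and debounce these intensities so the motors do not chatter on noisy readings.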
13
Phamduy P, Rizzo JR, Hudson TE, Torre M, Levon K, Porfiri M. Communicating through Touch: Macro Fiber Composites for Tactile Stimulation on the Abdomen. IEEE TRANSACTIONS ON HAPTICS 2018; 11:174-184. [PMID: 29927741] [DOI: 10.1109/toh.2017.2781244]
Abstract
Research into sensory substitution systems has expanded as alternative senses are utilized in real time to afford object recognition or spatial understanding. Tactile stimulation has long shown promise as a communication strategy when applied unobtrusively to redundant surface areas of the skin. Here, a novel belt integrating a matrix of macro fiber composites is designed to deliver tactile stimuli to the abdomen. The design and development of the belt are presented, and a systematic experimental study is conducted to analyze the impact of frequency and duty cycle. The belt is a beta precursor to a soft haptic feedback device that will enable situational awareness and obstacle avoidance through the localization of tactile stimulation relative to a body-centric frame of reference in a local environment.
14
Ye C, Qian X. 3-D Object Recognition of a Robotic Navigation Aid for the Visually Impaired. IEEE Trans Neural Syst Rehabil Eng 2018; 26:441-450. [PMID: 28880185] [PMCID: PMC5843551] [DOI: 10.1109/tnsre.2017.2748419]
Abstract
This paper presents a 3-D object recognition method and its implementation on a robotic navigation aid to allow real-time detection of indoor structural objects for the navigation of a blind person. The method segments a point cloud into numerous planar patches and extracts their inter-plane relationships (IPRs). Based on the existing IPRs of the object models, the method defines six high-level features (HLFs) and determines the HLFs for each patch. A Gaussian-mixture-model-based plane classifier is then devised to assign each planar patch to a particular object model. Finally, a recursive plane clustering procedure clusters the classified planes into the model objects. Because the proposed method uses geometric context to detect an object, it is robust to changes in the object's visual appearance, making it well suited to detecting structural objects (e.g., stairways and doorways). In addition, it has high scalability and parallelism, and it is also capable of detecting some indoor non-structural objects. Experimental results demonstrate that the proposed method has a high success rate in object recognition.
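One concrete example of the kind of inter-plane relationship such geometric-context methods build on is the angle between two planar patches, computed from their unit surface normals. The sketch below is an illustration of that idea, not the paper's code.

```python
import math

# Illustrative sketch: one simple inter-plane relationship (IPR) of the kind
# geometric-context object recognition exploits -- the angle between two
# planar patches, computed from their unit surface normals.

def plane_angle_deg(n1, n2):
    """Angle in degrees between two unit normal vectors given as 3-tuples."""
    dot = sum(a * b for a, b in zip(n1, n2))
    dot = max(-1.0, min(1.0, dot))  # guard acos against rounding error
    return math.degrees(math.acos(dot))

# E.g. a stairway tread (horizontal patch) and its riser (vertical patch)
# meet at roughly 90 degrees, a cue that survives appearance changes.
```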
15
Kim DS, Emerson RW, Naghshineh K. Effect of cane length and swing arc width on drop-off and obstacle detection with the long cane. BRITISH JOURNAL OF VISUAL IMPAIRMENT 2017; 35:217-231. [PMID: 29276326] [DOI: 10.1177/0264619617700936]
Abstract
A repeated-measures design with block randomization was used for the study, in which 15 adults with visual impairments attempted to detect drop-offs and obstacles with canes of different lengths while swinging the cane in arcs of different widths (narrow vs. wide). Participants detected drop-offs significantly more reliably with the standard-length cane (79.5% ± 6.5% of the time) than with the extended-length cane (67.6% ± 9.1%), p < .001. The drop-off detection threshold of the standard-length cane (4.1 ± 1.1 cm) was also significantly smaller than that of the extended-length cane (6.5 ± 1.8 cm), p < .001. In addition, participants detected drop-offs at a significantly higher rate when they swung the cane approximately 3 cm beyond the widest part of the body (78.6% ± 7.6%) than when they swung it substantially wider (30 cm; 68.5% ± 8.3%), p < .001. In contrast, neither cane length (p = .074) nor swing arc width (p = .185) had a significant effect on obstacle detection performance. The findings may help orientation and mobility specialists recommend appropriate cane lengths and cane swing arc widths to visually impaired cane users.
16
Zhang H, Ye C. An Indoor Wayfinding System Based on Geometric Features Aided Graph SLAM for the Visually Impaired. IEEE Trans Neural Syst Rehabil Eng 2017; 25:1592-1604. [PMID: 28320671] [PMCID: PMC5659309] [DOI: 10.1109/tnsre.2017.2682265]
Abstract
This paper presents a 6-degree-of-freedom (DOF) pose estimation (PE) method, and an indoor wayfinding system based on it, for the visually impaired. The PE method runs two graph simultaneous localization and mapping (SLAM) processes to reduce the accumulated pose error of the device. In the first step, the floor plane is extracted from the 3-D camera's point cloud and added as a landmark node into the graph for 6-DOF SLAM to reduce roll, pitch, and Z errors. In the second step, wall lines are extracted and incorporated into the graph for 3-DOF SLAM to reduce X, Y, and yaw errors. The method reduces the 6-DOF pose error and yields a more accurate pose in less computational time than state-of-the-art planar SLAM methods. Based on the PE method, a wayfinding system is developed for navigating a visually impaired person in an indoor environment. The system uses the estimated pose and a floor plan to locate the device user in a building and guides the user by announcing points of interest and navigational commands through a speech interface. Experimental results validate the effectiveness of the PE method and demonstrate that the system may substantially ease an indoor navigation task.
17
Lin BS, Lee CC, Chiang PY. Simple Smartphone-Based Guiding System for Visually Impaired People. SENSORS 2017; 17:1371. [PMID: 28608811] [PMCID: PMC5492085] [DOI: 10.3390/s17061371]
Abstract
Visually impaired people are often unaware of dangers in front of them, even in familiar environments; in unfamiliar environments, they require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system that addresses navigation and obstacle avoidance, enabling visually impaired people to travel smoothly from a starting point to a destination with greater awareness of their surroundings. A computer image recognition system and a smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online and offline, can be chosen depending on network availability. In operation, the smartphone captures the scene in front of the user and sends the captured images to a back-end server for processing. The back-end server uses the Faster Region-based Convolutional Neural Network (Faster R-CNN) algorithm or the You Only Look Once (YOLO) algorithm to recognize multiple obstacles in every image and sends the results back to the smartphone. Obstacle recognition accuracy in this study reached 60%, which is sufficient for helping visually impaired people identify the types and locations of obstacles around them.
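The online/offline dispatch described in this abstract can be sketched as below. The function names and the connectivity test are hypothetical, and the two detector arguments stand in for the server-side Faster R-CNN/YOLO model and whatever on-device fallback an offline mode would use; this is not the authors' API.

```python
# Hypothetical sketch of the two operating modes described above: with network
# access the captured image goes to a back-end recognizer (Faster R-CNN or
# YOLO on the server); without it, a local fallback runs. Names illustrative.

def recognize_obstacles(image, network_available, remote_detect, local_detect):
    """Dispatch a captured image to the remote or local obstacle detector."""
    if network_available:
        return remote_detect(image)  # online mode: server-side recognition
    return local_detect(image)       # offline mode: on-device recognition
```

The design choice here is that the smartphone never blocks on the network decision: the mode is chosen per frame, so a dropped connection only degrades recognition quality, not availability.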
Affiliation(s)
- Bor-Shing Lin: Department of Computer Science and Information Engineering, National Taipei University, New Taipei City 23741, Taiwan
- Cheng-Che Lee: Department of Computer Science and Information Engineering, National Taipei University, New Taipei City 23741, Taiwan
- Pei-Ying Chiang: Department of Computer Science and Information Engineering, National Taipei University of Technology, Taipei 10608, Taiwan
18
Rizzo JR, Conti K, Thomas T, Hudson TE, Wall Emerson R, Kim DS. A new primary mobility tool for the visually impaired: A white cane-adaptive mobility device hybrid. Assist Technol 2017; 30:219-225. [PMID: 28506151] [DOI: 10.1080/10400435.2017.1312634]
Abstract
This article describes pilot testing of an adaptive mobility device hybrid (AMD-H) combining properties of two primary mobility tools for people who are blind: the long cane and adaptive mobility devices (AMDs). The long cane is the primary mobility tool used by people who are blind or visually impaired for independent and safe mobility; AMDs are lightweight frames, approximately body width in lateral dimension, that are simply pushed forward to clear the space in front of a person. The prototype cane built for this study had a wing apparatus that could be folded around the shaft of the cane and, when unfolded, deployed two wheeled wings 25 cm (9.8 in) to each side of the cane tip. This project explored drop-off and obstacle detection by 6 adults with visual impairment using the deployed AMD-H and a standard long cane. The AMD-H improved obstacle detection overall and was most effective for the smallest obstacles (2 and 6 in diameter). The AMD-H cut the average drop-off detection threshold from 1.79 in (4.55 cm) to 0.96 in (2.44 cm). All participants showed a decrease in drop-off detection threshold and an increase in detection rate (13.9% overall). For drop-offs of 1 in (2.54 cm) and 3 in (7.62 cm), all participants showed large improvements with the AMD-H, ranging from 8.4% to 50%. The larger drop-offs of 5 in (12.7 cm) and 7 in (17.8 cm) were well detected with both types of canes.
Affiliation(s)
- John-Ross Rizzo: Department of Physical Medicine & Rehabilitation, NYU School of Medicine, New York, New York, USA; Department of Neurology, NYU School of Medicine, New York, New York, USA; Tactile Navigation Tools, LLC, New York, New York, USA
- Kyle Conti: Department of Physical Medicine & Rehabilitation, NYU School of Medicine, New York, New York, USA
- Teena Thomas: Department of Physical Medicine & Rehabilitation, NYU School of Medicine, New York, New York, USA
- Todd E Hudson: Department of Physical Medicine & Rehabilitation, NYU School of Medicine, New York, New York, USA; Department of Neurology, NYU School of Medicine, New York, New York, USA; Tactile Navigation Tools, LLC, New York, New York, USA
- Robert Wall Emerson: Department of Blindness and Low Vision Studies, Western Michigan University, Kalamazoo, Michigan, USA
- Dae Shik Kim: Department of Blindness and Low Vision Studies, Western Michigan University, Kalamazoo, Michigan, USA
19
Gao Y, Chandrawanshi R, Nau AC, Tse ZTH. Wearable Virtual White Cane Network for navigating people with visual impairment. Proc Inst Mech Eng H 2015; 229:681-8. [PMID: 26334037] [DOI: 10.1177/0954411915599017]
Abstract
Navigating the world with visual impairments presents inconveniences and safety concerns. Although the traditional white cane is the most commonly used mobility aid owing to its low cost and acceptable functionality, electronic traveling aids can provide more functionality as well as additional benefits. The Wearable Virtual Cane Network (WVCN) is an electronic traveling aid that uses ultrasonic sonar to scan the surrounding environment for spatial information. The WVCN comprises four sensing nodes: one on each of the user's wrists, one on the waist, and one on the ankle. It employs vibration and sound to communicate object proximity to the user. While conventional navigation devices are typically hand-held and bulky, the hands-free design of the prototype allows the user to perform other tasks while wearing it. When the WVCN prototype was tested for distance resolution and range detection limits at various displacements and compared with a traditional white cane, all participants performed significantly above the control (p < 4.3 × 10⁻⁵, standard t-test) in distance estimation. Each sensor unit can detect an object with a surface area as small as 1 cm² (1 cm × 1 cm) located 70 cm away. Walking speed through an obstacle course increased by 23% on average when subjects used the WVCN rather than the white cane, and the obstacle course experiment also showed that using the white cane in combination with the WVCN significantly improves navigation over using either alone (p < 0.05, paired t-test).
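The ultrasonic ranging principle the Wearable Virtual Cane Network's sensing nodes rely on is half the round-trip echo time multiplied by the speed of sound. The sketch below illustrates that conversion; the 343 m/s figure assumes air at roughly 20 °C, and the names are illustrative rather than taken from the device's firmware.

```python
# Minimal sketch of ultrasonic (sonar) ranging as used by such sensing nodes:
# distance = (speed of sound x round-trip echo time) / 2.
# 343 m/s assumes air at ~20 degrees C; names here are illustrative.

SPEED_OF_SOUND_M_S = 343.0

def echo_time_to_distance_cm(echo_time_s):
    """Convert a sonar round-trip echo time (seconds) to one-way distance (cm)."""
    one_way_m = SPEED_OF_SOUND_M_S * echo_time_s / 2.0
    return one_way_m * 100.0
```

For example, a 4 ms echo corresponds to an obstacle about 69 cm away, which is on the order of the 70 cm detection range reported above.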
Affiliation(s)
- Yabiao Gao: College of Engineering, The University of Georgia, Athens, GA, USA
- Rahul Chandrawanshi: Department of Mechanical Engineering, Indian Institute of Technology (Banaras Hindu University), Varanasi, India
- Amy C Nau: Korb & Associates, Boston, MA, USA; Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA
- Zion Tsz Ho Tse: College of Engineering, The University of Georgia, Athens, GA, USA