1
Ruan L, Hamilton-Fletcher G, Beheshti M, Hudson TE, Porfiri M, Rizzo JR. Multi-faceted sensory substitution using wearable technology for curb alerting: a pilot investigation with persons with blindness and low vision. Disabil Rehabil Assist Technol 2025:1-14. [PMID: 39954234] [DOI: 10.1080/17483107.2025.2463541]
Abstract
Curbs separate the edge of raised sidewalks from the street and are crucial to locate in urban environments, as they help delineate safe pedestrian zones from dangerous vehicular lanes. However, curbs themselves are also significant navigation hazards, particularly for people who are blind or have low vision (pBLV). The challenges pBLV face in detecting and properly orienting themselves to these abrupt elevation changes can lead to falls and serious injuries. Despite recent advancements in assistive technologies, the detection and early warning of curbs remains a largely unsolved challenge. This paper aims to close this gap by introducing a novel, multi-faceted sensory substitution approach hosted on a smart wearable; the platform leverages an RGB camera and an embedded system to capture and segment curbs in real time and provide early warning and orientation information. The system uses a YOLOv8 segmentation model, trained on our custom curb dataset, to interpret camera input. The system output consists of adaptive auditory beeps, abstract sonifications, and speech, which convey curb distance and orientation. Through human-subjects experimentation, we demonstrate the effectiveness of the system compared to the white cane. Results show that our system can provide advance warning through a larger safety window than the cane, while offering nearly identical curb orientation information. Future enhancements will focus on expanding our curb segmentation dataset, improving distance estimation through advanced 3D sensors and AI models, refining system calibration and stability, and developing user-centric sonification methods to cater to a diverse range of visual impairments.
Affiliation(s)
- Ligao Ruan
  - Department of Mechanical and Aerospace Engineering, NYU Tandon School of Engineering, New York, New York, USA
- Giles Hamilton-Fletcher
  - Department of Rehabilitation Medicine, NYU Grossman School of Medicine, New York, New York, USA
  - Department of Radiology, NYU Grossman School of Medicine, New York, New York, USA
- Mahya Beheshti
  - Department of Rehabilitation Medicine, NYU Grossman School of Medicine, New York, New York, USA
- Todd E Hudson
  - Department of Rehabilitation Medicine, NYU Grossman School of Medicine, New York, New York, USA
- Maurizio Porfiri
  - Department of Mechanical and Aerospace Engineering, NYU Tandon School of Engineering, New York, New York, USA
  - Center for Urban Science and Progress, NYU Tandon School of Engineering, New York, New York, USA
- John-Ross Rizzo
  - Department of Mechanical and Aerospace Engineering, NYU Tandon School of Engineering, New York, New York, USA
  - Department of Rehabilitation Medicine, NYU Grossman School of Medicine, New York, New York, USA
  - Center for Urban Science and Progress, NYU Tandon School of Engineering, New York, New York, USA
2
Ricci FS, Liguori L, Palermo E, Rizzo JR, Porfiri M. Navigation Training for Persons With Visual Disability Through Multisensory Assistive Technology: Mixed Methods Experimental Study. JMIR Rehabil Assist Technol 2024; 11:e55776. [PMID: 39556804] [PMCID: PMC11612587] [DOI: 10.2196/55776]
Abstract
BACKGROUND Visual disability is a growing problem for many middle-aged and older adults. Conventional mobility aids, such as white canes and guide dogs, have notable limitations that have led to increasing interest in electronic travel aids (ETAs). Despite remarkable progress, current ETAs lack empirical evidence and realistic testing environments and often focus on the substitution or augmentation of a single sense. OBJECTIVE This study aims to (1) establish a novel virtual reality (VR) environment to test the efficacy of ETAs in complex urban environments for a simulated visual impairment (VI) and (2) evaluate the impact of haptic and audio feedback, individually and combined, on navigation performance, movement behavior, and perception. Through this study, we aim to address gaps to advance the pragmatic development of assistive technologies (ATs) for persons with VI. METHODS The VR platform was designed to resemble a subway station environment with the most common challenges faced by persons with VI during navigation. This environment was used to test our multisensory, AT-integrated VR platform among 72 healthy participants performing an obstacle avoidance task while experiencing symptoms of VI. Each participant performed the task 4 times: once with haptic feedback, once with audio feedback, once with both feedback types, and once without any feedback. Data analysis encompassed metrics such as completion time, head and body orientation, and trajectory length and smoothness. To evaluate the effectiveness and interaction of the 2 feedback modalities, we conducted a 2-way repeated measures ANOVA on continuous metrics and a Scheirer-Ray-Hare test on discrete ones. We also conducted a descriptive statistical analysis of participants' answers to a questionnaire, assessing their experience and preference for feedback modalities. 
RESULTS Results from our study showed that haptic feedback significantly reduced collisions (P=.05) and the variability of the pitch angle of the head (P=.02). Audio feedback improved trajectory smoothness (P=.006) and mitigated the increase in trajectory length from haptic feedback alone (P=.04). Participants reported a high level of engagement during the experiment (52/72, 72%) and found it interesting (42/72, 58%). However, when it came to feedback preferences, fewer than half of the participants (29/72, 40%) favored combined feedback modalities, indicating that a majority preferred dedicated single modalities over combined ones. CONCLUSIONS AT is crucial for individuals with VI; however, it often lacks user-centered design principles. Research should prioritize consumer-oriented methodologies, testing devices in a staged manner with progression toward more realistic, ecologically valid settings to ensure safety. Our multisensory, AT-integrated VR system takes a holistic approach, offering a first step toward enhancing users' spatial awareness and promoting safer mobility, and it holds potential for applications in medical treatment, training, and rehabilitation. Technological advancements can further refine such devices, significantly improving independence and quality of life for those with VI.
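The analysis above crosses two within-subject factors (haptic on/off, audio on/off) in a two-way ANOVA. A minimal sketch of how the F statistics in such a design are computed follows; it is simplified to a balanced between-cells ANOVA with toy numbers (the study itself used a repeated measures design, which additionally partitions per-subject variance), and the data values are invented for illustration.

```python
def two_way_anova(data):
    """Balanced two-way ANOVA on a dict {(a_level, b_level): [values]}.

    Returns (F_A, F_B, F_AxB).  Simplified between-cells version: sums of
    squares for each factor, their interaction, and within-cell error.
    """
    levels_a = sorted({a for a, _ in data})
    levels_b = sorted({b for _, b in data})
    n = len(next(iter(data.values())))              # replicates per cell
    N = n * len(levels_a) * len(levels_b)
    grand = sum(v for vs in data.values() for v in vs) / N

    cell = {k: sum(vs) / n for k, vs in data.items()}
    mean_a = {a: sum(cell[(a, b)] for b in levels_b) / len(levels_b)
              for a in levels_a}
    mean_b = {b: sum(cell[(a, b)] for a in levels_a) / len(levels_a)
              for b in levels_b}

    ss_a = n * len(levels_b) * sum((m - grand) ** 2 for m in mean_a.values())
    ss_b = n * len(levels_a) * sum((m - grand) ** 2 for m in mean_b.values())
    ss_cells = n * sum((m - grand) ** 2 for m in cell.values())
    ss_ab = ss_cells - ss_a - ss_b                  # interaction
    ss_err = sum((v - cell[k]) ** 2 for k, vs in data.items() for v in vs)

    df_a, df_b = len(levels_a) - 1, len(levels_b) - 1
    df_err = N - len(levels_a) * len(levels_b)
    ms_err = ss_err / df_err
    return (ss_a / df_a / ms_err,
            ss_b / df_b / ms_err,
            ss_ab / (df_a * df_b) / ms_err)

# Toy data: (haptic_on, audio_on) -> trajectory-smoothness scores
feedback = {
    (0, 0): [4.1, 3.9, 4.0],
    (0, 1): [5.0, 5.2, 4.8],
    (1, 0): [4.5, 4.4, 4.6],
    (1, 1): [5.8, 6.0, 5.9],
}
f_haptic, f_audio, f_interaction = two_way_anova(feedback)
```

The F ratios would then be compared against an F distribution with the matching degrees of freedom to obtain P values like those reported above.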
Affiliation(s)
- Fabiana Sofia Ricci
  - Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, NY, United States
  - Center for Urban Science and Progress, New York University Tandon School of Engineering, Brooklyn, NY, United States
- Lorenzo Liguori
  - Center for Urban Science and Progress, New York University Tandon School of Engineering, Brooklyn, NY, United States
  - Department of Mechanical and Aerospace Engineering, Sapienza University of Rome, Rome, Italy
- Eduardo Palermo
  - Department of Mechanical and Aerospace Engineering, Sapienza University of Rome, Rome, Italy
- John-Ross Rizzo
  - Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, NY, United States
  - Department of Mechanical and Aerospace Engineering, New York University Tandon School of Engineering, Brooklyn, NY, United States
  - Department of Rehabilitation Medicine, New York University Langone Health, New York, NY, United States
  - Department of Neurology, New York University Langone Health, New York, NY, United States
- Maurizio Porfiri
  - Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, NY, United States
  - Center for Urban Science and Progress, New York University Tandon School of Engineering, Brooklyn, NY, United States
  - Department of Mechanical and Aerospace Engineering, New York University Tandon School of Engineering, Brooklyn, NY, United States
3
Yu R, Lee S, Xie J, Billah SM, Carroll JM. Human-AI Collaboration for Remote Sighted Assistance: Perspectives from the LLM Era. Future Internet 2024; 16:254. [PMID: 40051468] [PMCID: PMC11884418] [DOI: 10.3390/fi16070254]
Abstract
Remote sighted assistance (RSA) has emerged as a conversational technology aiding people with visual impairments (VI) through real-time video chat communication with sighted agents. We conducted a literature review and interviewed 12 RSA users to understand the technical and navigational challenges faced by both agents and users. The technical challenges were categorized into four groups: agents' difficulties in orienting and localizing users, acquiring and interpreting users' surroundings and obstacles, delivering information specific to user situations, and coping with poor network connections. We also presented 15 real-world navigational challenges, including 8 outdoor and 7 indoor scenarios. Given the spatial and visual nature of these challenges, we identified relevant computer vision problems that could potentially provide solutions. We then formulated 10 emerging problems that neither human agents nor computer vision can fully address alone. For each emerging problem, we discussed solutions grounded in human-AI collaboration. Additionally, with the advent of large language models (LLMs), we outlined how RSA can integrate with LLMs within a human-AI collaborative framework, envisioning the future of visual prosthetics.
Affiliation(s)
- Rui Yu
  - Department of Computer Science and Engineering, University of Louisville, Louisville, KY 40208, USA
- Sooyeon Lee
  - Department of Informatics, Ying Wu College of Computing, New Jersey Institute of Technology, Newark, NJ 07102, USA
- Jingyi Xie
  - College of Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802, USA
- Syed Masum Billah
  - College of Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802, USA
- John M. Carroll
  - College of Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802, USA
4
Manzoor S, Iftikhar S, Ayub I, Shahid A, Haq AU, Muhammad W, Shafique M. Range sensor-based assistive technology solutions for people with visual impairment: a review. Disabil Rehabil Assist Technol 2024; 19:576-584. [PMID: 36036390] [DOI: 10.1080/17483107.2022.2110618]
Abstract
PURPOSE There are about 2.2 billion people with visual impairment worldwide. To improve their quality of life, various devices have been developed that help users carry out everyday tasks, and there have been considerable advancements in assistive devices over the last few decades. This work reviews research from the past decade on the technologies used in assistive devices for the mobility of people with visual impairment, focusing on range sensor-based (RSB) solutions, and presents a comprehensive comparison to help researchers improve the quality of assistive devices. METHODS Research from the past decade on mobility-related assistive devices was surveyed, with a focus on RSB solutions and their comparison. RESULTS Various devices developed for people with visual impairment in the last decade are described, along with how they work. The authors also introduce their own newly developed assistive device. Feedback from people with visual impairment about assistive technology is also included in the paper. CONCLUSIONS This study will benefit researchers developing assistive devices for the mobility of people with visual impairment.
Through user feedback and evaluation of assistive devices, the authors conclude that performance, weight, and cost are always important considerations in making assistive devices more popular among their users.
Implications for rehabilitation: Although an assistive device cannot rehabilitate a visually impaired person, a range sensor-based assistive device may have the following implications:
- Use of assistive devices is growing, and performance, weight, and cost are always important considerations if a proposed technology solution is to be widely accepted and adopted by visually impaired people.
- The adaptability and acceptability of an assistive device to visually impaired people must be considered during the design phase.
- Proposed assistive technology solutions should provide all the needed functions.
- The performance of these devices should be assessed in the application context so that they help visually impaired people perform their tasks independently.
- Finally, a new lightweight, low-cost device developed by the authors is presented that can assist visually impaired people in moving independently.
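As a rough illustration of the range sensor-based principle the review covers, the sketch below converts an ultrasonic time-of-flight echo into a distance and a graded feedback level. The thresholds and function names are assumptions for illustration, not taken from any reviewed device.

```python
# Illustrative sketch of RSB obstacle feedback: time-of-flight ranging
# mapped to a graded alert level. Thresholds are assumed, not from the paper.

def distance_m(echo_time_s: float, speed_of_sound_m_s: float = 343.0) -> float:
    """Time-of-flight ranging: the ultrasonic pulse travels out and back,
    so the one-way distance is half the round-trip path."""
    return echo_time_s * speed_of_sound_m_s / 2.0

def feedback_level(distance: float, thresholds=(0.5, 1.0, 2.0)) -> int:
    """0 = no alert; higher levels mean a closer obstacle and would drive
    a stronger vibration or faster beep on the device."""
    level = 0
    for i, t in enumerate(sorted(thresholds, reverse=True), start=1):
        if distance <= t:
            level = i
    return level
```

On real hardware, `echo_time_s` would come from the sensor's echo pin, and `feedback_level` would set a vibration motor's intensity.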
Affiliation(s)
- Sajjad Manzoor
  - Department of Electrical Engineering, Mirpur University of Science and Technology (MUST), Mirpur, Pakistan
- Saqaf Iftikhar
  - Department of Electrical Engineering, Mirpur University of Science and Technology (MUST), Mirpur, Pakistan
- Arqum Shahid
  - Department of Electrical Engineering, Mirpur University of Science and Technology (MUST), Mirpur, Pakistan
- Auwar Ul Haq
  - Department of Electrical Engineering, Mirpur University of Science and Technology (MUST), Mirpur, Pakistan
- Waisf Muhammad
  - Department of Electrical Engineering, University of Gujrat, Gujrat, Pakistan
- Muhammad Shafique
  - Department of Biomedical Engineering, Riphah International University, Islamabad, Pakistan
5
Lindroth H, Nalaie K, Raghu R, Ayala IN, Busch C, Bhattacharyya A, Moreno Franco P, Diedrich DA, Pickering BW, Herasevich V. Applied Artificial Intelligence in Healthcare: A Review of Computer Vision Technology Application in Hospital Settings. J Imaging 2024; 10:81. [PMID: 38667979] [PMCID: PMC11050909] [DOI: 10.3390/jimaging10040081]
Abstract
Computer vision (CV), a type of artificial intelligence (AI) that uses digital videos or sequences of images to recognize content, has been used extensively across industries in recent years. However, in the healthcare industry, its applications are limited by factors like privacy, safety, and ethical concerns. Despite this, CV has the potential to improve patient monitoring and system efficiencies while reducing workload. In contrast to previous reviews, we focus on the end-user applications of CV. First, we briefly review and categorize CV applications in other industries (job enhancement, surveillance and monitoring, automation, and augmented reality). We then review the developments of CV in the hospital, outpatient, and community settings. The recent advances in monitoring delirium, pain and sedation, patient deterioration, mechanical ventilation, mobility, patient safety, surgical applications, quantification of workload in the hospital, and monitoring for patient events outside the hospital are highlighted. To identify opportunities for future applications, we also completed journey mapping at different system levels. Lastly, we discuss the privacy, safety, and ethical considerations associated with CV and outline processes in algorithm development and testing that limit CV expansion in healthcare. This comprehensive review highlights CV applications and ideas for its expanded use in healthcare.
Affiliation(s)
- Heidi Lindroth
  - Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
  - Center for Aging Research, Regenstrief Institute, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
  - Center for Health Innovation and Implementation Science, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Keivan Nalaie
  - Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
  - Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Roshini Raghu
  - Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Ivan N. Ayala
  - Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Charles Busch
  - Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
  - College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Pablo Moreno Franco
  - Department of Transplantation Medicine, Mayo Clinic, Jacksonville, FL 32224, USA
- Daniel A. Diedrich
  - Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Brian W. Pickering
  - Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Vitaly Herasevich
  - Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
6
Tan HL, Aplin T, McAuliffe T, Gullo H. An exploration of smartphone use by, and support for people with vision impairment: a scoping review. Disabil Rehabil Assist Technol 2024; 19:407-432. [PMID: 35776428] [DOI: 10.1080/17483107.2022.2092223]
Abstract
PURPOSE Smartphones have become a core piece of assistive technology (AT) for people with vision impairment (PVI) around the world. This scoping review sought to provide a comprehensive picture of the current evidence base on smartphones for PVI. METHODS Seven electronic databases (CINAHL, Cochrane Library, EMBASE, IEEE Xplore, Scopus, PubMed and Web of Science) were searched for papers published from 2007 to 2021. Peer-reviewed articles published in English were included if they discussed smartphone use by PVI, smartphone technologies designed for PVI, or training and learning support on the use of smartphones. RESULTS There were 16,899 records retrieved, and 65 articles were included in this review. The largest share (48%) of the papers focused on developing better interfaces and Apps for PVI. By contrast, there was a paucity of papers (5%) discussing training or learning support for PVI to use smartphones and Apps effectively, even though this was highlighted as important. Proper training will ensure that PVI can use this everyday technology as an AT to increase participation, enhance independence and improve quality of life overall. CONCLUSIONS The findings highlighted that smartphones and Apps can be used as effective and affordable AT by PVI. The many recent developments and research interest in smartphone technologies can further support their use. However, good training and learning support on the use of smartphones and Apps by PVI is lacking. Future research should focus on the development, provision and evaluation of evidence-based, tailored training and support, especially in low- and middle-income countries.
Implications for rehabilitation:
- There is a need for more training and learning support for people with vision impairment (PVI) on the use of smartphones and Apps.
- An individualized, graded approach to training has been recommended for PVI learning to use smartphones.
- When supporting or training people to use smartphones, the person's level of vision impairment as well as their age are important considerations.
- Health professionals should be cognizant of the steep learning curve that some PVI may experience when using smartphones and Apps, especially when switching from a phone with physical buttons to a touchscreen.
- Certain smartphone features suit particular vision loss conditions. For example, zoom and magnification are helpful for those with low vision, but speech-based text input and output and voice commands (e.g., Siri and TalkBack) are useful for those who are blind.
Affiliation(s)
- Hwei Lan Tan
  - School of Health and Rehabilitation Sciences, The University of Queensland, Saint Lucia, Australia
  - Singapore Institute of Technology, Health and Social Sciences, Singapore, Singapore
- Tammy Aplin
  - School of Health and Rehabilitation Sciences, The University of Queensland, Saint Lucia, Australia
  - The Prince Charles Hospital, Metro North Hospital and Health Service, Chermside, Australia
- Tomomi McAuliffe
  - School of Health and Rehabilitation Sciences, The University of Queensland, Saint Lucia, Australia
- Hannah Gullo
  - School of Health and Rehabilitation Sciences, The University of Queensland, Saint Lucia, Australia
7
Pundlik S, Shivshanker P, Luo G. Impact of Apps as Assistive Devices for Visually Impaired Persons. Annu Rev Vis Sci 2023; 9:111-130. [PMID: 37127283] [DOI: 10.1146/annurev-vision-111022-123837]
Abstract
The pervasiveness of mobile devices and other associated technologies has affected all aspects of our daily lives. People with visual impairments are no exception, as they increasingly tend to rely on mobile apps for assistance with various visual tasks in daily life. Compared to dedicated visual aids, mobile apps offer advantages such as affordability, versatility, portability, and ubiquity. We have surveyed hundreds of mobile apps of potential interest to people with vision impairments, either released as special assistive apps claiming to help in tasks such as text or object recognition (n = 68), digital accessibility (n = 84), navigation (n = 44), and remote sighted service (n = 4), among others, or marketed as general camera magnification apps that can be used for visual assistance (n = 77). While assistive apps as a whole received positive feedback from visually impaired users, as reported in various studies, evaluations of the usability of every app were typically limited to user reviews, which are often not scientifically informative. Rigorous evaluation studies on the effect of vision assistance apps on daily task performance and quality of life are relatively rare. Moreover, evaluation criteria are difficult to establish, given the heterogeneity of the visual tasks and visual needs of the users. In addition to surveying literature on vision assistance apps, this review discusses the feasibility and necessity of conducting scientific research to understand visual needs and methods to evaluate real-world benefits.
Affiliation(s)
- Shrinivas Pundlik
  - Schepens Eye Research Institute of Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, USA
- Prerana Shivshanker
  - Schepens Eye Research Institute of Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, USA
- Gang Luo
  - Schepens Eye Research Institute of Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, USA
8
Bayat N, Kim JH, Choudhury R, Kadhim IF, Al-Mashhadani Z, Aldritz Dela Virgen M, Latorre R, De La Paz R, Park JH. Vision Transformer Customized for Environment Detection and Collision Prediction to Assist the Visually Impaired. J Imaging 2023; 9:161. [PMID: 37623693] [PMCID: PMC10455554] [DOI: 10.3390/jimaging9080161]
Abstract
This paper presents a system that utilizes vision transformers and multimodal feedback modules to facilitate navigation and collision avoidance for the visually impaired. By implementing vision transformers, the system achieves accurate object detection, enabling the real-time identification of objects in front of the user. Semantic segmentation and the algorithms developed in this work generate a trajectory vector for every object identified by the vision transformer and detect objects that are likely to intersect with the user's walking path. Audio and vibrotactile feedback modules are integrated to convey collision warnings through multimodal feedback. The dataset used to create the model was captured in both indoor and outdoor settings under different weather conditions at different times across multiple days, resulting in 27,867 photos covering 24 different classes. Classification results showed good performance (95% accuracy), supporting the efficacy and reliability of the proposed model. The design and control methods of the multimodal feedback modules for collision warning are also presented, while experimental validation of their usability and efficiency remains future work. The demonstrated performance of the vision transformer and the presented algorithms, in conjunction with the multimodal feedback modules, shows promising prospects for the system's feasibility and applicability in navigation assistance for individuals with vision impairment.
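To illustrate the kind of trajectory-intersection check the abstract describes (predicting whether a tracked object will enter the user's walking path), here is a hedged sketch with assumed coordinate conventions and corridor dimensions; it is not the paper's algorithm.

```python
# Illustrative sketch: linear extrapolation of a tracked object's motion to
# decide whether it will enter the corridor directly ahead of the user.
# Coordinate frame, corridor size, and horizon are assumptions.

def predicts_collision(track, corridor_halfwidth=0.5, danger_z=1.0,
                       horizon_s=3.0, fps=10):
    """track: >= 2 per-frame (x, z) object positions in the user's frame,
    in metres (x: lateral offset, z: distance ahead).  Extrapolates the
    object's motion linearly from the first and last observations and
    returns True if it is expected to come within danger_z metres ahead
    and corridor_halfwidth metres laterally within the time horizon."""
    (x0, z0), (x1, z1) = track[0], track[-1]
    steps = len(track) - 1
    vx = (x1 - x0) * fps / steps        # lateral velocity, m/s
    vz = (z1 - z0) * fps / steps        # approach velocity, m/s
    t = 0.0
    while t <= horizon_s:
        x, z = x1 + vx * t, z1 + vz * t
        if abs(x) <= corridor_halfwidth and 0.0 <= z <= danger_z:
            return True
        t += 1.0 / fps
    return False
```

In a full pipeline, `track` would be built from per-frame detections of the same object (e.g., segment centroids), and a positive result would trigger the audio and vibrotactile warnings.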
Affiliation(s)
- Nasrin Bayat
  - Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA
- Jong-Hwan Kim
  - AI R&D Center, Korea Military Academy, Seoul 01805, Republic of Korea
- Renoa Choudhury
  - Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA
- Ibrahim F. Kadhim
  - Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA
- Zubaidah Al-Mashhadani
  - Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA
- Mark Aldritz Dela Virgen
  - Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA
- Reuben Latorre
  - Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA
- Ricardo De La Paz
  - Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA
- Joon-Hyuk Park
  - Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA
9
Xie J, Reddie M, Lee S, Billah SM, Zhou Z, Tsai CH, Carroll JM. Iterative Design and Prototyping of Computer Vision Mediated Remote Sighted Assistance. ACM Trans Comput Hum Interact 2022; 29:36. [PMID: 39896710] [PMCID: PMC11785403] [DOI: 10.1145/3501298]
Abstract
Remote sighted assistance (RSA) is an emerging navigational aid for people with visual impairments (PVI). Using scenario-based design to illustrate our ideas, we developed a prototype showcasing potential applications for computer vision to support RSA interactions. We reviewed the prototype demonstrating real-world navigation scenarios with an RSA expert, and then iteratively refined the prototype based on feedback. We reviewed the refined prototype with 12 RSA professionals to evaluate the desirability and feasibility of the prototyped computer vision concepts. The RSA expert and professionals were engaged by, and reacted insightfully and constructively to the proposed design ideas. We discuss what we learned about key resources, goals, and challenges of the RSA prosthetic practice through our iterative prototype review, as well as implications for the design of RSA systems and the integration of computer vision technologies into RSA.
Affiliation(s)
- Jingyi Xie
  - Pennsylvania State University, University Park, Pennsylvania, USA
- Madison Reddie
  - Pennsylvania State University, University Park, Pennsylvania, USA
- Sooyeon Lee
  - Pennsylvania State University, University Park, Pennsylvania, USA
- Zihan Zhou
  - Pennsylvania State University, University Park, Pennsylvania, USA
- Chun-Hua Tsai
  - Pennsylvania State University, University Park, Pennsylvania, USA
- John M Carroll
  - Pennsylvania State University, University Park, Pennsylvania, USA
10
Abstract
Guidance systems for visually impaired persons have become a popular topic in recent years. Existing guidance systems on the market typically rely on auxiliary tools and methods such as GPS, UWB, or a simple white cane, exploiting only a single tactile or auditory sense; such approaches can be inadequate in a complex indoor environment. This paper proposes a multi-sensory guidance system for the visually impaired that can provide tactile and auditory advice using ORB-SLAM and YOLO techniques. Based on an RGB-D camera, local obstacle avoidance is realized at the tactile level through point cloud filtering, which informs the user via a vibrating motor. The proposed method can also generate a dense navigation map to implement global obstacle avoidance and path planning for the user through coordinate transformation. Real-time target detection and a voice-prompt system based on YOLO are incorporated at the auditory level. We implemented the proposed system as a smart cane. Experiments were performed in four different test scenarios, and the results demonstrate that impediments in the walking path can be reliably located and classified in real time. By integrating YOLO with ORB-SLAM, the proposed system can function as a capable aid that helps visually impaired people navigate securely.
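A minimal sketch of the point-cloud filtering idea described above: keep depth points inside a forward, user-sized box, drop near-floor points, and map the nearest remaining obstacle to a vibration intensity. All dimensions and function names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of tactile-level obstacle avoidance from an RGB-D
# point cloud. Box dimensions and names are assumed, not from the paper.

def obstacle_alert(points, box=(0.4, 1.8, 2.0), floor_eps=0.05):
    """points: iterable of (x, y, z) camera-frame points in metres
    (x lateral, y height above floor, z forward).  Keeps points inside a
    user-sized forward box, drops near-floor points, and returns the
    distance to the nearest remaining obstacle (None if the path is clear).
    """
    half_w, max_h, max_z = box
    kept = [z for x, y, z in points
            if abs(x) <= half_w and floor_eps < y <= max_h and 0.0 < z <= max_z]
    return min(kept) if kept else None

def vibration_duty(nearest_z, max_z=2.0):
    """Closer obstacle -> stronger vibration (0.0-1.0 motor duty cycle)."""
    if nearest_z is None:
        return 0.0
    return max(0.0, min(1.0, 1.0 - nearest_z / max_z))
```

On the smart cane, the filtered cloud would come from the RGB-D depth frame each cycle and `vibration_duty` would drive the vibrating motor's PWM.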
11
Xie J, Yu R, Lee S, Lyu Y, Billah SM, Carroll JM. Helping Helpers: Supporting Volunteers in Remote Sighted Assistance with Augmented Reality Maps. DIS. DESIGNING INTERACTIVE SYSTEMS (CONFERENCE) 2022; 2022:881-897. [PMID: 39807162 PMCID: PMC11727196 DOI: 10.1145/3532106.3533560] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/16/2025]
Abstract
Remote sighted assistance (RSA) has emerged as a conversational assistive service, where remote sighted workers, i.e., agents, provide real-time assistance to blind users via video-chat-like communication. Prior work identified several challenges for the agents in providing navigational assistance to users and proposed a computer-vision-mediated RSA service to address those challenges. We present an interactive system implementing a high-fidelity prototype of the RSA service using augmented reality (AR) maps with localization and virtual element placement capabilities. The paper also presents a confederate-based study design to evaluate the effects of AR maps with 13 untrained agents. The study revealed that, compared to baseline RSA, agents were significantly faster in providing indoor navigational assistance to a confederate playing the role of users, and agents' mental workload was significantly reduced, both of which indicate the feasibility and scalability of AR maps in RSA services.
Affiliation(s)
- Jingyi Xie, Pennsylvania State University, University Park, PA, USA
- Rui Yu, Pennsylvania State University, University Park, PA, USA
- Sooyeon Lee, Rochester Institute of Technology, Rochester, NY, USA
- Yao Lyu, Pennsylvania State University, University Park, PA, USA
12
Kim K, Kim S, Choi A. Ultrasonic Sound Guide System with Eyeglass Device for the Visually Impaired. SENSORS 2022; 22:s22083077. [PMID: 35459062 PMCID: PMC9030799 DOI: 10.3390/s22083077] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Revised: 03/29/2022] [Accepted: 04/14/2022] [Indexed: 11/25/2022]
Abstract
The ultrasonic sound guide system is an audio broadcasting system based on inaudible ultrasonic sound that assists the indoor and outdoor navigation of the visually impaired. Transmitters placed at points of interest propagate a frequency-modulated voice signal in the ultrasonic range. A dual-channel receiver, carried by the visually impaired person in the form of eyeglasses, picks up the ultrasonic sound and recovers the voice signal via demodulation. Because ultrasonic sound retains acoustic properties (velocity, directivity, attenuation, and superposition), it gives the user acoustic cues for localizing multiple transmitter positions through binaural localization. The user hears the designated voice signal and follows its attributes to arrive at the specific location. Owing to low microphone gain from side addressing, the time delay between the receiver channels shows high variance and high bias in the end directions. Nevertheless, the perception experiment showed better prediction accuracy in the end directions than in the center direction. The overall evaluations show precise directional prediction for both narrow- and wide-angle situations. The ultrasonic sound guide system is a useful device for localizing places in the near field without touching braille.
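In a far-field model, the binaural cue described above reduces to the inter-channel time delay d·sin(θ)/c. The sketch below estimates azimuth from that delay via cross-correlation of the two demodulated channels; the receiver spacing and sample rate are assumed values, not figures from the paper:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
RECEIVER_SPACING = 0.18  # m, assumed spacing of the eyeglass receivers

def interchannel_delay(left, right, fs):
    """Lag of the left channel relative to the right, in seconds,
    taken from the peak of the full cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return lag / fs

def azimuth_from_delay(delay_s, spacing=RECEIVER_SPACING, c=SPEED_OF_SOUND):
    """Far-field model: delay = spacing * sin(azimuth) / c.
    A positive delay (left channel lags) places the source to the right."""
    s = np.clip(delay_s * c / spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Demodulated pulse arriving 4 samples later at the left receiver.
fs = 48_000
right = np.zeros(256)
right[100] = 1.0
left = np.roll(right, 4)
delay = interchannel_delay(left, right, fs)
print(azimuth_from_delay(delay))  # positive angle, source toward the right
```

The high variance the abstract reports at the end directions corresponds to sin(θ) flattening near ±90°, where small delay errors map to large angle errors.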
13
Müller K, Engel C, Loitsch C, Stiefelhagen R, Weber G. Traveling more Independently: A Study on the Diverse Needs and Challenges of People with Visual or Mobility Impairments in Unfamiliar Indoor Environments. ACM TRANSACTIONS ON ACCESSIBLE COMPUTING 2022. [DOI: 10.1145/3514255] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
It is much more difficult for people with visual or mobility impairments to prepare for a trip or visit unfamiliar places than it is for people without disabilities. In addition to the usual travel arrangements, one needs to know whether the various parts of the travel chain are accessible. To the best of our knowledge, no previous work examines travel behaviour in indoor environments in depth, for both trip planning and execution, while highlighting the special needs of people with low vision, blindness or mobility impairments. In this paper, we present a survey of 125 participants with blindness, low vision and mobility impairments. We investigate how mobile they are, what strategies they use to prepare a journey to an unknown building, how they orient themselves there and what materials they use. For all three groups, our results provide insights into the problem space of the specific information needs when planning and executing a trip. We found that most of our participants have specific mobility problems depending on their disability. Feedback from the participants reveals a large information gap, especially for orientation in buildings, regarding the availability of high-quality digital, tactile and printable indoor maps, the accessibility of buildings and mobility-supporting systems. In particular, there is a lack of available, high-quality indoor maps. Our analysis also points out that the specific needs differ across the three groups: besides the expected between-group differences, we also found large within-group differences. The current paper is an expanded version of [18], augmented with data from people with mobility impairments.
14
El-taher FEZ, Taha A, Courtney J, Mckeever S. A Systematic Review of Urban Navigation Systems for Visually Impaired People. SENSORS (BASEL, SWITZERLAND) 2021; 21:3103. [PMID: 33946857 PMCID: PMC8125253 DOI: 10.3390/s21093103] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 04/22/2021] [Accepted: 04/25/2021] [Indexed: 11/16/2022]
Abstract
Blind and visually impaired people (BVIP) face a range of practical difficulties when undertaking outdoor journeys as pedestrians. Over the past decade, a variety of assistive devices have been researched and developed to help BVIP navigate more safely and independently. In addition, research in overlapping domains is addressing the problem of automatic environment interpretation using computer vision and machine learning (particularly deep learning) approaches. Our aim in this article is to present a comprehensive review of research directly in, or relevant to, assistive outdoor navigation for BVIP. We break down the navigation area into a series of navigation phases and tasks. We then use this structure for our systematic review of research, analysing articles, methods, datasets and current limitations by task. We also provide an overview of commercial and non-commercial navigation applications targeted at BVIP. Our review contributes to the body of knowledge by providing a comprehensive, structured analysis of work in the domain, including the state of the art, and guidance on future directions. It will support both researchers and other stakeholders in the domain in establishing an informed view of research progress.
Affiliation(s)
- Fatma El-zahraa El-taher, School of Computer Science, Technological University Dublin, D07EWV4 Dublin, Ireland
- Ayman Taha, School of Computer Science, Technological University Dublin, D07EWV4 Dublin, Ireland; Faculty of Computers and Artificial Intelligence, Cairo University, Cairo 12613, Egypt
- Jane Courtney, School of Computer Science, Technological University Dublin, D07EWV4 Dublin, Ireland
- Susan Mckeever, School of Computer Science, Technological University Dublin, D07EWV4 Dublin, Ireland
15
Htike HM, Margrain TH, Lai YK, Eslambolchilar P. Ability of Head-Mounted Display Technology to Improve Mobility in People With Low Vision: A Systematic Review. Transl Vis Sci Technol 2020; 9:26. [PMID: 33024619 PMCID: PMC7521174 DOI: 10.1167/tvst.9.10.26] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2019] [Accepted: 08/17/2020] [Indexed: 02/06/2023] Open
Abstract
Purpose: The purpose of this study was to undertake a systematic literature review of how vision enhancements, implemented using head-mounted displays (HMDs), can improve mobility, orientation, and associated aspects of visual function in people with low vision.
Methods: The Medline, CINAHL, Scopus, and Web of Science databases were searched for potentially relevant studies. Publications from all years until November 2018 were identified based on predefined inclusion and exclusion criteria. The data were tabulated and synthesized to produce a systematic review.
Results: The search identified 28 relevant papers describing the performance of vision enhancement techniques on mobility and associated visual tasks. Simplifying visual scenes improved obstacle detection and object recognition but decreased walking speed. Minification techniques increased the size of the visual field by 3 to 5 times and improved visual search performance; however, the impact of minification on mobility has not been studied extensively. Clinical trials with commercially available devices recorded poor results relative to conventional aids.
Conclusions: The effects of current vision enhancements using HMDs are mixed: they appear to reduce mobility efficiency but improve obstacle detection and object recognition. The review highlights the lack of controlled studies with robust designs. To strengthen the evidence base, well-designed trials with larger sample sizes that represent different types of impairment and real-life scenarios are required. Future work should focus on identifying the needs of people with different types of vision impairment and providing targeted enhancements.
Translational Relevance: This literature review examines the evidence regarding the ability of HMD technology to improve mobility in people with sight loss.
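The 3-to-5-times field expansion reported for minification follows from simple geometry: a scene ray at angle θ is displayed at arctan(tan θ / M) for minification factor M. The sketch below (function names are illustrative, not from any reviewed paper) computes the scene field that a minified display packs into a wearer's residual visual field:

```python
import math

def functional_field(residual_field_deg, minification):
    """Small-angle approximation: minification by a factor M packs
    roughly M times the angular content into the residual field."""
    return residual_field_deg * minification

def exact_field(residual_field_deg, minification):
    """Tangent-correct version: the scene half-angle that fills the
    residual half-angle h is arctan(M * tan(h))."""
    h = math.radians(residual_field_deg / 2)
    return 2 * math.degrees(math.atan(minification * math.tan(h)))

# A 10-degree residual field viewed through 4x minification:
print(functional_field(10, 4))  # -> 40 (degrees)
print(exact_field(10, 4))       # slightly less, since tan is nonlinear
```

The gap between the two estimates grows with field size, which is one reason minification trades field expansion against spatial distortion at the periphery.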
Affiliation(s)
- Hein Min Htike, School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Tom H Margrain, School of Optometry and Vision Sciences, Cardiff University, Cardiff, UK
- Yu-Kun Lai, School of Computer Science and Informatics, Cardiff University, Cardiff, UK