1
Zhao J, Chen Y, Li Y, Xu H, Xu J, Li X, Zhang H, Jin L, Xu S. 4D+ City Sidewalk: Integrating Pedestrian View into Sidewalk Spaces to Support User-Centric Urban Spatial Perception. Sensors (Basel). 2025;25:1375. PMID: 40096144; PMCID: PMC11902832; DOI: 10.3390/s25051375. Received 01/05/2025; revised 02/13/2025; accepted 02/22/2025.
Abstract
As urban environments become increasingly interconnected, the demand for precise and efficient pedestrian solutions in digitalized smart cities has grown significantly. This study introduces a scalable spatial visualization system designed to enhance interactions between individuals and the street in outdoor sidewalk environments. The system operates in two main phases: the spatial prior phase and the target localization phase. In the spatial prior phase, the system captures the user's perspective using first-person visual data and leverages landmark elements within the sidewalk environment to localize the user's camera. In the target localization phase, the system detects surrounding objects, such as pedestrians or cyclists, using high-angle closed-circuit television (CCTV) cameras. The system was deployed in a real-world sidewalk environment at an intersection on a university campus. By combining user location data with CCTV observations, a 4D+ virtual monitoring system was developed to present a spatiotemporal visualization of the mobile participants within the user's surrounding sidewalk space. Experimental results show that the landmark-based localization method achieves a planar positioning error of 0.468 m and a height error of 0.120 m on average. With the assistance of CCTV cameras, the localization of other targets maintains an overall error of 0.24 m. This system establishes the spatial relationship between pedestrians and the street by integrating detailed sidewalk views, with promising applications for pedestrian navigation and the potential to enhance pedestrian-friendly urban ecosystems.
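The reported accuracy figures separate planar (x-y) error from height (z) error. A minimal sketch of that decomposition, assuming estimated and ground-truth positions are available as 3D coordinates in metres (the function name and values are illustrative, not from the paper):

```python
import math

def localization_errors(estimated, ground_truth):
    """Split a 3D localization error into planar (x-y) and height (z) parts."""
    dx = estimated[0] - ground_truth[0]
    dy = estimated[1] - ground_truth[1]
    dz = estimated[2] - ground_truth[2]
    planar = math.hypot(dx, dy)   # horizontal error in metres
    height = abs(dz)              # vertical error in metres
    return planar, height

# e.g. an estimate 0.3 m east, 0.4 m north, and 0.12 m above the ground truth
planar, height = localization_errors((10.3, 20.4, 1.62), (10.0, 20.0, 1.50))
# planar is 0.5 m, height is 0.12 m (up to floating-point rounding)
```

Averaging these two quantities over a set of test points yields figures directly comparable to the 0.468 m planar and 0.120 m height errors quoted in the abstract.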
Affiliation(s)
- Jinjing Zhao
- School of Electronics, Peking University, Beijing 100871, China
- Yunfan Chen
- School of Electronics, Peking University, Beijing 100871, China
- Yancheng Li
- School of Electronics, Peking University, Beijing 100871, China
- Haotian Xu
- School of Electronics, Peking University, Beijing 100871, China
- Jingjing Xu
- School of Integrated Circuits, Shandong University, Jinan 250100, China
- Xuliang Li
- School of Aerospace, Beihang University, Beijing 102206, China
- Hong Zhang
- School of Aerospace, Beihang University, Beijing 102206, China
- Lei Jin
- Alpheus Robotics Technology Co., Ltd., Wuxi 214117, China
- Shengyong Xu
- School of Electronics, Peking University, Beijing 100871, China
2
Zhang X, Huang X, Ding Y, Long L, Li W, Xu X. Advancements in Smart Wearable Mobility Aids for Visual Impairments: A Bibliometric Narrative Review. Sensors (Basel). 2024;24:7986. PMID: 39771730; PMCID: PMC11679352; DOI: 10.3390/s24247986. Received 09/19/2024; revised 12/10/2024; accepted 12/12/2024.
Abstract
Research into new solutions for wearable assistive devices for the visually impaired is an important area of assistive technology (AT). Such devices play a crucial role in improving the functionality and independence of visually impaired people, helping them to participate fully in daily life and in community activities. This study presents a bibliometric analysis, conducted with CiteSpace, of the literature on wearable assistive devices for the visually impaired published over the last decade and retrieved from the Web of Science Core Collection (WoSCC), providing an overview of the current state of research, trends, and hotspots in the field. The narrative focuses on prominent recent innovations in wearable assistive devices based on sensory substitution technology, describing the latest achievements in haptic and auditory feedback devices and in the application of smart materials, as well as the growing tension between individual interests and societal needs. It also summarises the opportunities and challenges facing the field and identifies the following trends: (1) optimizing the transmission of haptic and auditory information during multitasking; (2) advancing research on smart materials and fostering cross-disciplinary collaboration among experts; and (3) balancing the interests of individuals and society. The field is pulled in two essential directions: low-cost, stand-alone devices that pursue efficiency, and high-cost, high-quality services closely integrated with accessible infrastructure. Along both, the latest advances will gradually allow more freedom for ambient assisted living through robotics and automated machines, with sensors and human-machine interaction serving as bridges that synchronize machine intelligence with human cognition.
Affiliation(s)
- Xing Xu
- Department of Industrial Design, Guangdong University of Technology, Guangzhou 510006, China
3
Volkovich Z, Ravve EV, Avros R. Indoor Navigation in Facilities with Repetitive Structures. Sensors (Basel). 2024;24:2876. PMID: 38732986; PMCID: PMC11086065; DOI: 10.3390/s24092876. Received 01/05/2024; revised 02/21/2024; accepted 02/27/2024.
Abstract
Most facilities are structured in a repetitive manner. In this paper, we propose an algorithm, and a partial implementation, for a cellular guide in such facilities that operates without GPS. The complete system is based on iBeacon-like components, which operate on BLE technology, and their integration into a navigation application. We assume that the user's location is determined with sufficient accuracy. Our main goal is to leverage the repetitive structure of a given facility to optimize navigation in terms of storage requirements, energy consumption on the cellular device, algorithmic complexity, and other aspects; to the best of our knowledge, this specific aim has not been addressed before. To provide high performance in real time, we rely on the optimal storage and reuse of pre-calculated navigation sub-routes. Our implementation seamlessly integrates iBeacon communications, a pre-defined indoor map, diverse data structures for efficient information storage, and a user interface, all working cohesively under a single supervisor, and each module can be considered, developed, and improved independently. The approach is mainly directed at places such as passenger ships, hotels, and colleges. Because "replicated" parts on different floors are stored once and used for multiple routes, the amount of information that must be stored is reduced, lowering memory usage and, as a result, improving running time and energy consumption.
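The storage optimization described above, in which a repeated floor segment is stored once and reused by every route that traverses it, can be sketched as a route cache keyed by a layout template. All names here are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: navigation sub-routes for a repetitive floor layout are
# computed once per layout *template* and shared by every floor that uses it.
precomputed = {}  # (template_id, start, goal) -> list of waypoints

def plan(template_id, start, goal):
    """Stand-in for a real path planner; returns a trivial two-point route."""
    return [start, goal]

def subroute(template_id, start, goal):
    """Return the cached sub-route for a floor template, planning it only once."""
    key = (template_id, start, goal)
    if key not in precomputed:
        precomputed[key] = plan(template_id, start, goal)  # expensive step, run once
    return precomputed[key]

# Floors 2..5 of a ship share template "deck_A": one stored route serves all.
r2 = subroute("deck_A", "stairs", "cabin_12")  # planned and cached
r5 = subroute("deck_A", "stairs", "cabin_12")  # cache hit, no replanning
```

The second call returns the very same stored object, which is the memory and running-time saving the abstract refers to; a real system would additionally map the template-level route onto a concrete floor.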
4
Zheng W, Pang S, Liu N, Chai Q, Xu L. A Compact Snake Optimization Algorithm in the Application of WKNN Fingerprint Localization. Sensors (Basel). 2023;23:6282. PMID: 37514575; PMCID: PMC10383412; DOI: 10.3390/s23146282. Received 06/03/2023; revised 06/29/2023; accepted 06/30/2023.
Abstract
Indoor localization has broad application prospects, but accurately obtaining the location of test points (TPs) in narrow indoor spaces is a challenge. The weighted K-nearest neighbor (WKNN) algorithm is a powerful localization method that can improve the localization accuracy of TPs. In recent years, rapidly developing metaheuristic algorithms have proven efficient at solving complex optimization problems. The main purpose of this article is to study how metaheuristic algorithms can improve indoor positioning accuracy and to verify their effectiveness for indoor positioning. This paper presents a new algorithm called compact snake optimization (cSO), which introduces a compact strategy into the snake optimization (SO) algorithm to ensure good performance when computing and memory resources are limited. The performance of cSO is evaluated on the 28 test functions of CEC2013 and compared with several intelligent computing algorithms; the results demonstrate that cSO outperforms them. Furthermore, we combine the cSO algorithm with WKNN fingerprint positioning and RSSI positioning. Simulation experiments demonstrate that the cSO algorithm can effectively reduce positioning errors.
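WKNN itself is compact enough to sketch: given RSSI fingerprints recorded at known reference points, the position estimate is a distance-weighted average of the K fingerprints nearest to the measured RSSI vector. This is the generic textbook form, not the authors' cSO-tuned variant (the paper uses the metaheuristic to optimize the positioning pipeline on top of it), and all values below are illustrative:

```python
import math

def wknn_locate(query, fingerprints, k=3, eps=1e-6):
    """Weighted K-nearest-neighbor fingerprint localization.

    query        -- RSSI vector measured at the unknown point
    fingerprints -- list of (rssi_vector, (x, y)) reference points
    """
    # Euclidean distance in signal space to every reference fingerprint
    dists = sorted(
        (math.sqrt(sum((a - b) ** 2 for a, b in zip(query, rssi))), pos)
        for rssi, pos in fingerprints
    )
    nearest = dists[:k]
    # Weight each neighbor by inverse signal-space distance
    weights = [1.0 / (d + eps) for d, _ in nearest]
    total = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, nearest)) / total
    y = sum(w * p[1] for w, (_, p) in zip(weights, nearest)) / total
    return x, y

# Three reference points; the query almost matches the first fingerprint,
# so the estimate is pulled close to its position (0, 0).
fps = [([-40, -60], (0.0, 0.0)),
       ([-60, -40], (5.0, 0.0)),
       ([-50, -50], (2.5, 4.0))]
x, y = wknn_locate([-41, -59], fps, k=2)
```

With k = 2 the two nearest fingerprints are the first and third, and the inverse-distance weighting puts the estimate at roughly (0.25, 0.40), close to the near-matching reference point.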
Affiliation(s)
- Weimin Zheng
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
- Senyuan Pang
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
- Ning Liu
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
- Qingwei Chai
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
- Lindong Xu
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
5
Yamamoto T, Yamaguchi T. Human-to-Human Position Estimation System Using RSSI in Outdoor Environment. Sensors (Basel). 2022;22:7621. PMID: 36236720; PMCID: PMC9573188; DOI: 10.3390/s22197621. Received 09/06/2022; revised 09/30/2022; accepted 10/02/2022.
Abstract
Methods to prevent collisions between people, and thereby avoid traffic accidents, are receiving significant attention. To measure position in the non-line-of-sight (NLOS) area, which cannot be directly visually recognized, position-measuring methods use wireless-communication-based GPS and the propagation characteristics of radio signals, such as the received signal strength indication (RSSI). However, conventional RSSI-based position estimation requires multiple receivers, and its accuracy decreases in the presence of surrounding buildings. This study proposes a system that solves this challenge with a single receiver and a position estimation method based on an RSSI-map simulation and a particle filter. It uses the BLE peripheral/central roles, which are capable of advertising, as the transmitter and receiver; by exploiting advertising radio waves, the method provides a framework for estimating the position of unspecified transmitters. The effectiveness of the proposed system is evaluated through simulations and experiments in real environments. The simulations yielded an average distance error of 1.6 m, demonstrating the precision of the proposed method; in the real environment, the average distance error was 3.3 m. Furthermore, we evaluated the accuracy when both the transmitter and receiver are in motion, corresponding to a moving person in an outdoor NLOS area, and obtained an error of 4.5 m. We therefore conclude that the accuracy is comparable whether the transmitter is stationary or moving. Whereas the conventional path-loss model can measure distances of 3 m to 10 m, the proposed method can estimate position with the same accuracy in an outdoor environment. The system can thus be expected to serve as a collision avoidance aid that confirms the presence of other people in the NLOS area.
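The conventional path-loss baseline mentioned above converts RSSI to a range estimate with the log-distance model, RSSI = A - 10 n log10(d), inverted for d. A minimal version follows; the reference power and path-loss exponent are illustrative values, not the paper's calibration (the paper's own method replaces this single-link ranging with a simulated RSSI map and a particle filter):

```python
def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: invert an RSSI reading to a distance.

    tx_power -- RSSI expected at the 1 m reference distance (illustrative)
    n        -- path-loss exponent (2.0 = free space; larger in cluttered areas)
    """
    return 10 ** ((tx_power - rssi) / (10.0 * n))

d_ref = rssi_to_distance(-59.0)  # reading equal to the 1 m reference -> 1.0 m
d_far = rssi_to_distance(-79.0)  # 20 dB weaker -> 10.0 m at n = 2
```

The model's sensitivity to the exponent n is one reason single-receiver ranging degrades near buildings, motivating the map-plus-filter approach in the abstract.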
Affiliation(s)
- Takashi Yamamoto
- Master’s Programs in Intelligent and Mechanical Interaction Systems, University of Tsukuba, Tsukuba 305-8573, Japan
- Tomoyuki Yamaguchi
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba 305-8573, Japan
6
Mukhiddinov M, Abdusalomov AB, Cho J. Automatic Fire Detection and Notification System Based on Improved YOLOv4 for the Blind and Visually Impaired. Sensors (Basel). 2022;22:3307. PMID: 35590996; PMCID: PMC9103130; DOI: 10.3390/s22093307. Received 03/31/2022; revised 04/22/2022; accepted 04/25/2022.
Abstract
The growing aging population suffers from high levels of vision and cognitive impairment, often resulting in a loss of independence. Such individuals must perform crucial everyday tasks, such as cooking and heating, with systems and devices designed for visually unimpaired individuals, which do not take into account the needs of persons with visual and cognitive impairment; visually impaired persons using them therefore face risks related to smoke and fire. In this paper, we propose a vision-based fire detection and notification system for blind and visually impaired (BVI) people that uses smart glasses and deep learning models and enables early detection of fires in indoor environments. For real-time fire detection and notification, the proposed system uses image brightness and a new convolutional neural network employing an improved YOLOv4 model with a convolutional block attention module. The h-swish activation function is used to reduce the running time and increase the robustness of YOLOv4. We adapt our previously developed smart glasses system to capture images and inform BVI people about fires and other surrounding objects through auditory messages. To detect fires accurately, we create a large image dataset of indoor fire scenes. Furthermore, we develop an object mapping approach that gives BVI people complete information about surrounding objects and differentiates between hazardous and nonhazardous fires. The proposed system improves on other well-known approaches in all fire detection metrics, including precision, recall, and average precision.
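The h-swish activation cited above replaces the sigmoid gate of swish (x * sigmoid(x)) with a piecewise-linear ReLU6 gate, which avoids the exponential and is cheap on embedded hardware. A scalar sketch of the standard definition, h_swish(x) = x * ReLU6(x + 3) / 6:

```python
def relu6(x):
    """ReLU capped at 6: min(max(x, 0), 6)."""
    return min(max(x, 0.0), 6.0)

def h_swish(x):
    """Hard swish: x * ReLU6(x + 3) / 6, a piecewise-linear
    approximation of swish that needs no exponential."""
    return x * relu6(x + 3.0) / 6.0

# The gate is fully closed for x <= -3 (output 0) and fully open for
# x >= 3 (output equals x); in between it interpolates smoothly.
vals = [h_swish(x) for x in (-4.0, 0.0, 4.0)]
```

In a network such as the modified YOLOv4 described here, this function is applied elementwise to feature maps; the scalar form above is just the per-element rule.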
Affiliation(s)
- Mukhriddin Mukhiddinov
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Korea
- Akmalbek Bobomirzaevich Abdusalomov
- Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Jinsoo Cho (corresponding author)
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Korea
7
Voice Navigation Created by VIP Improves Spatial Performance in People with Impaired Vision. Int J Environ Res Public Health. 2022;19:4138. PMID: 35409820; PMCID: PMC8998656; DOI: 10.3390/ijerph19074138. Received 01/14/2022; revised 03/27/2022; accepted 03/29/2022.
Abstract
The difficulty associated with spatial navigation is one of the main obstacles to independent living for visually impaired people. Lacking visual feedback, visually impaired people must identify information about the external environment through other sense organs. This study employed an observational survey to assess voice navigation version A, created by visually impaired people, and voice navigation version B, created by non-visually impaired people. Thirty-two participants simulating visual impairment were assigned to task assessments of versions A and B. For mission 1, version A's mean completion rate was 0.988 ± 0.049 and its mean error rate 0.125 ± 0.182; for mission 2, its mean completion rate was 0.953 ± 0.148 and its mean error rate 0.094 ± 0.198. The assessment concluded that version A has a higher completion rate (p = 0.001) and a lower error rate (p = 0.001). In the assessment of subjective satisfaction, all indicators regarding the impression of the navigation directives were significantly superior in version A. Version A appears to follow a different framing logic than version B. Future voice navigation systems should therefore be built according to the way visually impaired people think, as this facilitates direction guidance in the absence of visual feedback.