1. Yuan Y, Li F, Chen J, Wang Y, Liu K. An improved Kalman filter algorithm for tightly GNSS/INS integrated navigation system. Mathematical Biosciences and Engineering 2024;21:963-983. PMID: 38303450. DOI: 10.3934/mbe.2024040.
Abstract
The Kalman filter based on singular value decomposition (SVD) can substantially reduce the accumulation of rounding errors and is widely used in applications involving numerical computation. To improve the filtering performance and adaptability of a tightly coupled GNSS/INS (Global Navigation Satellite System and Inertial Navigation System) integrated navigation system, we propose an improved robust method. The conventional approach uses a fixed noise covariance and therefore copes poorly with large fluctuations in GNSS signals; the proposed method instead constructs a correction variable from the innovation and a new matrix obtained by performing SVD on the original matrix, dynamically correcting the noise covariance and yielding better robustness. In addition, a derived SVD form of the information filter (IF) extends its range of application. The proposed method achieves higher positioning accuracy and applies well to tightly coupled GNSS/INS navigation simulations and physical experiments. The experimental results show that, compared with the traditional SVD-based Kalman algorithm, the proposed algorithm's maximum error is reduced by 45.77%; compared with the traditional IF algorithm, the root mean squared error of the proposed SVD-form IF algorithm is reduced by 4.7%.
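As context for the SVD-based filtering above, here is a generic sketch of a Kalman measurement update that inverts the innovation covariance through its SVD, which handles small singular values explicitly and limits round-off accumulation. This only illustrates the numerical idea; it is not the authors' algorithm, whose innovation-driven covariance correction is specific to the paper, and the function name is ours.

```python
import numpy as np

def kf_update_svd(x, P, z, H, R):
    """One Kalman measurement update with an SVD-based inverse of the
    innovation covariance (illustrative sketch, not the paper's method)."""
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    U, s, Vt = np.linalg.svd(S)             # S = U diag(s) Vt
    S_inv = Vt.T @ np.diag(1.0 / s) @ U.T   # stable inverse via singular values
    K = P @ H.T @ S_inv                     # Kalman gain
    x_post = x + K @ y
    P_post = (np.eye(P.shape[0]) - K @ H) @ P
    return x_post, P_post
```

In a robust variant one would additionally rescale `R` from the innovation `y` before computing `S`, which is the adaptation the abstract describes.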
Affiliations
- Yuelin Yuan: School of Electronic Science, National University of Defense Technology, Changsha 410005, China
- Fei Li: School of Transportation and Logistics, Dalian University of Technology, Dalian 116024, China
- Jialiang Chen: School of Transportation and Logistics, Dalian University of Technology, Dalian 116024, China
- Yu Wang: School of Traffic and Transportation Engineering, Dalian Jiaotong University, Dalian 116028, China
- Kai Liu: School of Transportation and Logistics, Dalian University of Technology, Dalian 116024, China
2. Qiu H, Zhang X, Wang H, Xiang D, Xiao M, Zhu Z, Wang L. A Robust and Integrated Visual Odometry Framework Exploiting the Optical Flow and Feature Point Method. Sensors 2023;23:8655. PMID: 37896748. PMCID: PMC10611077. DOI: 10.3390/s23208655.
Abstract
In this paper, we propose a robust and integrated visual odometry framework exploiting the optical flow and feature point method, achieving faster pose estimation with considerable accuracy and robustness. Our method uses optical flow tracking to accelerate feature point matching. Two visual odometry methods are employed: a global feature point method and a local feature point method. When optical flow tracking is good and enough key points are successfully matched, the local feature point method uses prior information from the optical flow to estimate the relative pose transformation. When optical flow tracking is poor and only a small number of key points match successfully, the feature point method with a filtering mechanism is used for pose estimation. By coupling these two methods, the visual odometry greatly accelerates relative pose estimation: it reduces the computation time to 40% of that of the ORB_SLAM3 front-end odometry while remaining close to ORB_SLAM3 in accuracy and robustness. The method was validated and analyzed on the EuRoC dataset within the ORB_SLAM3 open-source framework, and the experimental results support the efficacy of the proposed approach.
Affiliations
- Haiyang Qiu: School of Naval Architecture and Ocean Engineering, Guangzhou Maritime University, Guangzhou 510725, China
- Xu Zhang: School of Automation, Jiangsu University of Science and Technology, Zhenjiang 212013, China
- Hui Wang: School of Naval Architecture and Ocean Engineering, Guangzhou Maritime University, Guangzhou 510725, China
- Dan Xiang: School of Naval Architecture and Ocean Engineering, Guangzhou Maritime University, Guangzhou 510725, China
- Mingming Xiao: School of Naval Architecture and Ocean Engineering, Guangzhou Maritime University, Guangzhou 510725, China
- Zhiyu Zhu: School of Automation, Jiangsu University of Science and Technology, Zhenjiang 212013, China
- Lei Wang: State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430072, China
3. Inostroza F, Parra-Tsunekawa I, Ruiz-del-Solar J. Robust Localization for Underground Mining Vehicles: An Application in a Room and Pillar Mine. Sensors 2023;23:8059. PMID: 37836889. PMCID: PMC10574974. DOI: 10.3390/s23198059.
Abstract
Most autonomous navigation systems used in underground mining vehicles, such as load-haul-dump (LHD) vehicles and trucks, rely on 2D light detection and ranging (LiDAR) sensors and 2D representations/maps of the environment. In this article, we propose the use of 3D LiDARs and existing 3D simultaneous localization and mapping (SLAM) methods jointly with 2D mapping methods to produce or update 2D grid maps of underground tunnels that may have significant elevation changes; mapping methods that use only 2D LiDARs are shown to fail to produce accurate 2D grid maps of such environments. The resulting maps can be used for robust localization and navigation in different mine types (e.g., sublevel stoping, block/panel caving, room and pillar) using only 2D LiDAR sensors. The proposed methodology was tested in the Werra Potash Mine in Philippsthal, Germany, under real operational conditions. The results show that the enhanced 2D map-building method produces superior mapping performance compared with a 2D map generated without the 3D LiDAR-based mapping solution. The generated 2D map enables robust 2D localization, which was tested during the operation of an autonomous LHD performing autonomous navigation and loading over extended periods of time.
Affiliations
- Felipe Inostroza: Advanced Mining Technology Center, Universidad de Chile, Santiago 8370451, Chile
- Isao Parra-Tsunekawa: Advanced Mining Technology Center, Universidad de Chile, Santiago 8370451, Chile
- Javier Ruiz-del-Solar: Advanced Mining Technology Center, Universidad de Chile, Santiago 8370451, Chile; Department of Electrical Engineering, Universidad de Chile, Santiago 8370451, Chile
4. Chen J, Wang H, Yang S. Tightly Coupled LiDAR-Inertial Odometry and Mapping for Underground Environments. Sensors 2023;23:6834. PMID: 37571617. PMCID: PMC10422614. DOI: 10.3390/s23156834.
Abstract
The demand for autonomous exploration and mapping of underground environments has increased significantly in recent years, yet accurately localizing and mapping robots in subterranean settings presents notable challenges. This paper presents a tightly coupled LiDAR-inertial odometry system that combines the NanoGICP point cloud registration method with IMU pre-integration using incremental smoothing and mapping. Specifically, points corrupted by dust particles are first filtered out, and the cloud is separated into ground and non-ground points (for ground vehicles). To handle environments with spatial variations, an adaptive voxel filter is employed, which reduces computation time while preserving accuracy. The motion estimated from IMU pre-integration is used to correct point cloud distortion and to provide an initial estimate for LiDAR odometry. Subsequently, scan-to-map point cloud registration is executed with NanoGICP to obtain a more refined pose estimate, and the resulting LiDAR odometry is in turn used to estimate the IMU bias. We comprehensively evaluated our system on established subterranean datasets collected by two separate teams using different platforms during the DARPA Subterranean (SubT) Challenge. The experimental results demonstrate that our system achieved performance enhancements of as much as 50-60% in terms of root mean square error (RMSE).
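The voxel-filtering step mentioned above can be illustrated with a minimal centroid voxel grid. The sketch below uses a fixed voxel size, whereas the paper's filter adapts the size to the environment's spatial variation; the function name is ours.

```python
import numpy as np

def voxel_filter(points, voxel_size):
    """Downsample an (N, 3) point cloud by replacing all points that fall
    in the same cubic voxel with their centroid (fixed-size sketch of the
    adaptive voxel filter described in the abstract)."""
    keys = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    _, inv = np.unique(keys, axis=0, return_inverse=True)   # map points to voxels
    counts = np.bincount(inv)
    out = np.zeros((counts.size, 3))
    for d in range(3):                                      # per-axis centroid
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out
```

An adaptive variant would grow `voxel_size` in sparse, large-scale sections and shrink it in confined tunnels, trading points for accuracy where it matters.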
Affiliations
- Shan Yang: School of Resources and Safety Engineering, Central South University, Changsha 410083, China
5. Karfakis PT, Couceiro MS, Portugal D. NR5G-SAM: A SLAM Framework for Field Robot Applications Based on 5G New Radio. Sensors 2023;23:5354. PMID: 37300084. DOI: 10.3390/s23115354.
Abstract
Robot localization is a crucial task in robotic systems and a prerequisite for navigation. In outdoor environments, Global Navigation Satellite Systems (GNSS) have aided this task, alongside laser and visual sensing. Despite their wide use in the field, GNSS suffer from limited availability in dense urban and rural environments. Light Detection and Ranging (LiDAR), inertial, and visual methods are also prone to drift and can be susceptible to outliers due to environmental changes and illumination conditions. In this work, we propose a cellular Simultaneous Localization and Mapping (SLAM) framework based on 5G New Radio (NR) signals and inertial measurements for mobile robot localization with several gNodeB stations. The method outputs the pose of the robot along with a radio signal map based on Received Signal Strength Indicator (RSSI) measurements for correction purposes. We benchmark against LiDAR-Inertial Odometry Smoothing and Mapping (LIO-SAM), a state-of-the-art LiDAR SLAM method, comparing performance against a simulator ground-truth reference. Two experimental setups using the sub-6 GHz and mmWave frequency bands are presented and discussed, with transmission based on downlink (DL) signals. Our results show that 5G positioning can be utilized for radio SLAM, providing increased robustness in outdoor environments and demonstrating its potential to assist robot localization as an additional absolute source of information when LiDAR methods fail and GNSS data are unreliable.
Affiliations
- Panagiotis T Karfakis: Ingeniarius Ltd., R. Nossa Sra. Conceição 146, 4445-147 Alfena, Portugal; Institute of Systems and Robotics, University of Coimbra, 3030-290 Coimbra, Portugal
- Micael S Couceiro: Ingeniarius Ltd., R. Nossa Sra. Conceição 146, 4445-147 Alfena, Portugal; Institute of Systems and Robotics, University of Coimbra, 3030-290 Coimbra, Portugal
- David Portugal: Institute of Systems and Robotics, University of Coimbra, 3030-290 Coimbra, Portugal
6. Meaney P, Augustine R, Welteke A, Pfrommer B, Pearson AM, Brisby H. Transmission-Based Vertebrae Strength Probe Development: Far Field Probe Property Extraction and Integrated Machine Vision Distance Validation Experiments. Sensors 2023;23:4819. PMID: 37430734. DOI: 10.3390/s23104819.
Abstract
We are developing a transmission-based probe for point-of-care assessment of the vertebrae strength needed for fabricating the instrumentation used to support the spinal column during spinal fusion surgery. In the device, thin coaxial probes are inserted into the small canals through the pedicles and into the vertebrae, and a broadband signal is transmitted from one probe to the other across the bone tissue. Simultaneously, a machine vision scheme has been developed to measure the separation distance between the probe tips while they are inserted into the vertebrae: a small camera is mounted on the handle of one probe and associated fiducials are printed on the other. Machine vision techniques make it possible to track the location of the fiducial-based probe tip and compare it with the fixed coordinate location of the camera-based probe tip. The combination of the two methods allows straightforward calculation of tissue characteristics by exploiting the antenna far-field approximation. Validation tests of the two concepts are presented as a precursor to clinical prototype development.
Affiliations
- Paul Meaney: Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA; Electrical Engineering Department, Uppsala University, 751 05 Uppsala, Sweden
- Robin Augustine: Electrical Engineering Department, Uppsala University, 751 05 Uppsala, Sweden
- Adrian Welteke: Electrical Engineering Department, Helmut Schmidt University, 22043 Hamburg, Germany
- Adam M Pearson: Geisel School of Medicine, Dartmouth College, Lebanon, NH 03766, USA
- Helena Brisby: Orthopedic Department, Sahlgrenska Hospital, 413 45 Gothenburg, Sweden
7. Bavle H, Sanchez-Lopez JL, Cimarelli C, Tourani A, Voos H. From SLAM to Situational Awareness: Challenges and Survey. Sensors 2023;23:4849. PMID: 37430762. DOI: 10.3390/s23104849.
Abstract
The capability of a mobile robot to efficiently and safely perform complex missions is limited by its knowledge of the environment, namely the situation. Advanced reasoning, decision-making, and execution skills enable an intelligent agent to act autonomously in unknown environments. Situational Awareness (SA) is a fundamental human capability that has been deeply studied in fields such as psychology, military, aerospace, and education, yet it has hardly been considered in robotics, which has focused on single compartmentalized concepts such as sensing, spatial perception, sensor fusion, state estimation, and Simultaneous Localization and Mapping (SLAM). Hence, the present research aims to connect this broad multidisciplinary knowledge to pave the way for a complete SA system for mobile robotics, which we deem paramount for autonomy. To this end, we define the principal components that structure robotic SA and their areas of competence. Accordingly, this paper investigates each aspect of SA, surveys the state-of-the-art robotics algorithms that cover them, and discusses their current limitations. Notably, essential aspects of SA remain immature, since current algorithmic development restricts their performance to specific environments. Nevertheless, Artificial Intelligence (AI), particularly Deep Learning (DL), has brought new methods to bridge the gap that keeps these fields from deployment in real-world scenarios. Furthermore, we identify an opportunity to interconnect the vastly fragmented space of robotic comprehension algorithms through the Situational Graph (S-Graph), a generalization of the well-known scene graph. Finally, we shape our vision for the future of robotic situational awareness by discussing promising recent research directions.
Affiliations
- Hriday Bavle: Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Jose Luis Sanchez-Lopez: Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Claudio Cimarelli: Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Ali Tourani: Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Holger Voos: Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg; Department of Engineering, Faculty of Science, Technology, and Medicine (FSTM), University of Luxembourg, 1359 Luxembourg, Luxembourg
8. Gupta H, Andreasson H, Lilienthal AJ, Kurtser P. Robust Scan Registration for Navigation in Forest Environment Using Low-Resolution LiDAR Sensors. Sensors 2023;23:4736. PMID: 37430655. DOI: 10.3390/s23104736.
Abstract
Automated forest machines are becoming important due to the complex and dangerous working conditions faced by human operators, which are leading to a labor shortage. This study proposes a new method for robust SLAM and tree mapping using low-resolution LiDAR sensors in forestry conditions. Our method relies on tree detection to perform scan registration and pose correction using only low-resolution (16- and 32-channel) LiDAR sensors or narrow-field-of-view solid-state LiDARs, without additional sensing modalities such as GPS or IMU. We evaluate our approach on three datasets, two private and one public, and demonstrate improved navigation accuracy, scan registration, tree localization, and tree diameter estimation compared with current approaches in forestry machine automation. Our results show that the proposed method yields robust scan registration using detected trees, outperforming generalized feature-based registration algorithms such as Fast Point Feature Histogram, with a reduction of more than 3 m in RMSE for the 16-channel LiDAR sensor. For the solid-state LiDAR, the algorithm achieves a similar RMSE of 3.7 m. Additionally, our adaptive pre-processing and heuristic approach to tree detection increased the number of detected trees by 13% compared with the current approach of using fixed-radius search parameters for pre-processing. Our automated tree trunk diameter estimation method yields a mean absolute error of 4.3 cm (RMSE = 6.5 cm) for the local map and complete trajectory maps.
Affiliations
- Himanshu Gupta: Centre for Applied Autonomous Sensor Systems, Örebro University, 702 81 Örebro, Sweden
- Henrik Andreasson: Centre for Applied Autonomous Sensor Systems, Örebro University, 702 81 Örebro, Sweden
- Achim J Lilienthal: Centre for Applied Autonomous Sensor Systems, Örebro University, 702 81 Örebro, Sweden; Perception for Intelligent Systems, Technical University of Munich, 80992 Munich, Germany
- Polina Kurtser: Centre for Applied Autonomous Sensor Systems, Örebro University, 702 81 Örebro, Sweden; Department of Radiation Science, Radiation Physics, Umeå University, 901 87 Umeå, Sweden
9. Tchuiev V, Indelman V. Epistemic Uncertainty Aware Semantic Localization and Mapping for Inference and Belief Space Planning. Artificial Intelligence 2023. DOI: 10.1016/j.artint.2023.103903.
10. Wang S, Wang Y, Li D, Zhao Q. Distributed Relative Localization Algorithms for Multi-Robot Networks: A Survey. Sensors 2023;23:2399. PMID: 36904602. PMCID: PMC10007377. DOI: 10.3390/s23052399.
Abstract
For a network of robots working in a specific environment, relative localization among robots is the basis for accomplishing various upper-level tasks. To avoid the latency and fragility of long-range or multi-hop communication, distributed relative localization algorithms, in which robots take local measurements and compute localizations and poses relative to their neighbors in a distributed manner, are highly desirable. Distributed relative localization has the advantages of a low communication burden and better system robustness, but poses challenges in distributed algorithm design, communication protocol design, local network organization, etc. This paper presents a detailed survey of the key methodologies designed for distributed relative localization in robot networks. We classify distributed localization algorithms according to the types of measurements: distance-based, bearing-based, and multiple-measurement-fusion-based. The design methodologies, advantages, drawbacks, and application scenarios of the different algorithms are introduced and summarized. We then survey the research that supports distributed localization, including local network organization, communication efficiency, and the robustness of distributed localization algorithms. Finally, popular simulation platforms are summarized and compared to facilitate future research and experiments on distributed relative localization algorithms.
Affiliations
- Shuo Wang: School of Information, Renmin University of China, Beijing 100872, China
- Yongcai Wang: School of Information, Renmin University of China, Beijing 100872, China; Metaverse Research Center, Renmin University of China, Beijing 100872, China
- Deying Li: School of Information, Renmin University of China, Beijing 100872, China
- Qianchuan Zhao: Department of Automation, Tsinghua University, Beijing 100084, China
11. Lyu Y, Nguyen T, Liu L, Cao M, Yuan S, Nguyen TH, Xie L. SPINS: A structure priors aided inertial navigation system. Journal of Field Robotics 2023. DOI: 10.1002/rob.22161.
Affiliations
- Yang Lyu: School of Automation, Northwestern Polytechnical University, Xi'an, China; School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Thien-Minh Nguyen: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Liu Liu: College of Engineering and Computer Science, Australian National University, Canberra, Australia
- Muqing Cao: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Shenghai Yuan: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Thien Hoang Nguyen: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Lihua Xie: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
12. Chen C, Ma Y, Lv J, Zhao X, Li L, Liu Y, Gao W. OL-SLAM: A Robust and Versatile System of Object Localization and SLAM. Sensors 2023;23:801. PMID: 36679599. PMCID: PMC9865310. DOI: 10.3390/s23020801.
Abstract
This paper proposes a real-time, versatile Simultaneous Localization and Mapping (SLAM) and object localization system that fuses measurements from LiDAR, camera, Inertial Measurement Unit (IMU), and Global Positioning System (GPS). Our system can locate itself in an unknown environment and build a scene map, based on which it can also track and obtain the global location of objects of interest. Specifically, our SLAM subsystem consists of four parts: LiDAR-inertial odometry, visual-inertial odometry, GPS-inertial odometry, and global pose graph optimization. The target tracking and positioning subsystem is built on YOLOv4. Thanks to the GPS sensor in the SLAM system, we can obtain global positioning information for the target, which makes the system highly useful in military operations, rescue and disaster relief, and other scenarios.
Affiliations
- Chao Chen: Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China
- Yukai Ma: Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China
- Jiajun Lv: Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China
- Xiangrui Zhao: Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China
- Laijian Li: Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China
- Yong Liu: Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China
- Wang Gao: Science and Technology on Complex System Control and Intelligent Agent Cooperation Laboratory, Beijing 100191, China
13. Li K, Li J, Wang A, Luo H, Li X, Yang Z. A Resilient Method for Visual-Inertial Fusion Based on Covariance Tuning. Sensors 2022;22:9836. PMID: 36560205. PMCID: PMC9781031. DOI: 10.3390/s22249836.
Abstract
To improve the localization and pose precision of visual-inertial simultaneous localization and mapping (viSLAM) in complex scenarios, it is necessary to tune the weights of the visual and inertial inputs during sensor fusion. To this end, we propose a resilient viSLAM algorithm based on covariance tuning. During back-end optimization, the unit-weight root-mean-square error (RMSE) of the visual reprojection and IMU preintegration in each optimization is computed to construct a covariance tuning function, producing a new covariance matrix. This matrix is used to perform another round of nonlinear optimization, effectively improving pose and localization precision without closed-loop detection. In validation experiments, our algorithm outperformed the OKVIS, R-VIO, and VINS-Mono open-source viSLAM frameworks in pose and localization precision on the EuRoC dataset at all difficulty levels.
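The covariance-tuning idea described above can be sketched as follows: residual factors whose unit-weight RMSE grows beyond the a priori sigma get an inflated covariance, down-weighting that sensor in the next optimization round. The scaling rule and function name here are illustrative assumptions, not the paper's exact tuning function.

```python
import numpy as np

def tune_covariance(cov, residuals, sigma0=1.0):
    """Inflate a measurement covariance by the squared ratio of the
    unit-weight RMSE of the current residuals to the a priori sigma
    (simplified sketch of RMSE-driven covariance tuning)."""
    rmse = np.sqrt(np.mean(np.square(residuals)))
    scale = max(rmse / sigma0, 1.0)      # never shrink below the prior weight
    return (scale ** 2) * cov
```

Applied separately to the visual reprojection and IMU preintegration terms, this shifts the fused estimate toward whichever sensor currently fits its model better.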
14. Bala JA, Adeshina SA, Aibinu AM. Advances in Visual Simultaneous Localisation and Mapping Techniques for Autonomous Vehicles: A Review. Sensors 2022;22:8943. PMID: 36433549. PMCID: PMC9694639. DOI: 10.3390/s22228943.
Abstract
Recent advancements in Information and Communication Technology (ICT), together with the increasing demand for vehicular safety, have led to significant progress in Autonomous Vehicle (AV) technology. Perception and localisation are major operations that determine the success of AV development and usage. Therefore, significant research has been carried out to give AVs the capability not only to sense and understand their surroundings efficiently, but also to provide detailed information about the environment in the form of 3D maps. Visual Simultaneous Localisation and Mapping (V-SLAM) has been used to enable a vehicle to understand its surroundings, map the environment, and identify its position within the area. This paper presents a detailed review of V-SLAM techniques implemented for AV perception and localisation. An overview of SLAM techniques is presented. In addition, an in-depth review highlights various V-SLAM schemes, their strengths, and their limitations. Challenges associated with V-SLAM deployment and future research directions are also provided.
Affiliations
- Jibril Abdullahi Bala: Department of Mechatronics Engineering, Federal University of Technology, Minna 920211, Nigeria
- Abiodun Musa Aibinu: Department of Mechatronics Engineering, Federal University of Technology, Minna 920211, Nigeria
15. Tian X, Yi P, Zhang F, Lei J, Hong Y. STV-SC: Segmentation and Temporal Verification Enhanced Scan Context for Place Recognition in Unstructured Environment. Sensors 2022;22:8604. PMID: 36433200. PMCID: PMC9694967. DOI: 10.3390/s22228604.
Abstract
Place recognition is an essential part of simultaneous localization and mapping (SLAM). LiDAR-based place recognition relies almost exclusively on geometric information, which may become unreliable in environments dominated by unstructured objects. In this paper, we explore the role of segmentation in extracting key structured information. We propose STV-SC, a novel segmentation and temporal verification enhanced place recognition method for unstructured environments. It contains a range-image-based 3D point segmentation algorithm and a three-stage loop detection process, consisting of a two-stage candidate loop search and a one-stage segmentation and temporal verification (STV) step. Our STV process exploits the time-continuous nature of SLAM to determine whether a match is an occasional mismatch. We quantitatively demonstrate that the STV process can reject false detections caused by unstructured objects and effectively extract structured objects to avoid outliers. Comparison with state-of-the-art algorithms on public datasets shows that STV-SC can run online and achieves improved performance in unstructured environments (at the same precision, the recall rate is 1.4-16% higher than Scan Context). Our algorithm thus avoids the mismatches produced by the original algorithm in unstructured environments and improves the environmental adaptability of mobile agents.
Affiliation(s)
- Xiaojie Tian
- Department of Control Science and Engineering, Tongji University, Shanghai 201804, China
- Peng Yi
- Department of Control Science and Engineering, Tongji University, Shanghai 201804, China
- Shanghai Research Institute for Intelligent Autonomous Systems, Shanghai 201210, China
- Fu Zhang
- Department of Mechanical Engineering, The University of Hong Kong, Hong Kong 999077, China
- Jinlong Lei
- Department of Control Science and Engineering, Tongji University, Shanghai 201804, China
- Shanghai Research Institute for Intelligent Autonomous Systems, Shanghai 201210, China
- Yiguang Hong
- Department of Control Science and Engineering, Tongji University, Shanghai 201804, China
- Shanghai Research Institute for Intelligent Autonomous Systems, Shanghai 201210, China
|
16
|
Ye Z, Li G, Liu H, Cui Z, Bao H, Zhang G. CoLi-BA: Compact Linearization based Solver for Bundle Adjustment. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:3727-3736. [PMID: 36048987 DOI: 10.1109/tvcg.2022.3203119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Bundle adjustment (BA) is widely used in SLAM and SfM, which are key technologies in Augmented Reality. For real-time SLAM and large-scale SfM, the efficiency of BA is of great importance. This paper proposes CoLi-BA, a novel and efficient BA solver that significantly improves optimization speed through compact linearization and reordering. Specifically, for each reprojection function, the redundant matrix representation of the Jacobian is replaced with a tiny 3D vector, which significantly reduces the computational complexity, memory footprint, and cache misses of Hessian construction and the Schur complement. Besides, we propose a novel reordering strategy to improve cache efficiency for the Schur complement. Experiments on diverse datasets show that the proposed CoLi-BA is five times as fast as Ceres and twice as fast as g2o without sacrificing accuracy. We further verify its effectiveness by porting CoLi-BA to open-source SLAM and SfM systems. Even when running the proposed solver in a single thread, the local BA of SLAM takes only about 20 ms on a desktop PC, and SfM reconstruction with seven thousand photos takes only half an hour. The source code is available at: https://github.com/zju3dv/CoLi-BA.
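The Schur-complement elimination that CoLi-BA (like most BA solvers) accelerates can be sketched as follows. The small dense matrices stand in for the sparse camera/point blocks of a real BA Hessian, and the function name is ours; in practice `Hpp` is block-diagonal (3x3 per point) and trivially invertible, which is what makes the reduction pay off.

```python
import numpy as np

def schur_solve(Hcc, Hcp, Hpp, bc, bp):
    """Solve [[Hcc, Hcp], [Hcp.T, Hpp]] [xc; xp] = [bc; bp] by first
    eliminating the point block Hpp (cheap, block-diagonal in real BA)."""
    Hpp_inv = np.linalg.inv(Hpp)
    S = Hcc - Hcp @ Hpp_inv @ Hcp.T        # reduced camera system
    xc = np.linalg.solve(S, bc - Hcp @ Hpp_inv @ bp)
    xp = Hpp_inv @ (bp - Hcp.T @ xc)       # back-substitute for the points
    return xc, xp
```

The reduced system `S` involves only the (few) camera variables, so solving it is far cheaper than factorizing the full Hessian.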
|
17
|
Huang Q, Papalia A, Leonard JJ. Nested Sampling for Non-Gaussian Inference in SLAM Factor Graphs. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3189786] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Qiangqiang Huang
- Computer Science and Artificial Intelligence Lab (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
- Alan Papalia
- CSAIL at MIT and the Department of Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution, Woods Hole, MA, USA
- John J. Leonard
- Computer Science and Artificial Intelligence Lab (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
|
18
|
Feng Y, Wang J, Zhang H, Lu G. Incrementally Stochastic and Accelerated Gradient Information Mixed Optimization for Manipulator Motion Planning. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3191206] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Affiliation(s)
- Yichang Feng
- State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Hangzhou, China
- Jin Wang
- State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Hangzhou, China
- Haiyun Zhang
- State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Hangzhou, China
- Guodong Lu
- State Key Laboratory of Fluid Power and Mechatronic Systems, School of Mechanical Engineering, Zhejiang University, Hangzhou, China
|
19
|
Farhi EI, Indelman V. Bayesian incremental inference update by re-using calculations from belief space planning: a new paradigm. Auton Robots 2022. [DOI: 10.1007/s10514-022-10045-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
20
|
Steckenrider JJ. Adaptive Aerial Localization Using Lissajous Search Patterns. IEEE T ROBOT 2022. [DOI: 10.1109/tro.2021.3126225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Affiliation(s)
- J. Josiah Steckenrider
- Department of Civil and Mechanical Engineering, United States Military Academy, West Point, NY, USA
|
21
|
Saxena A, Chiu CY, Shrivastava R, Menke J, Sastry S. Simultaneous Localization and Mapping: Through the Lens of Nonlinear Optimization. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3181409] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Amay Saxena
- Department of EECS, University of California, Berkeley, CA, USA
- Chih-Yuan Chiu
- Department of EECS, University of California, Berkeley, CA, USA
- Joseph Menke
- Department of EECS, University of California, Berkeley, CA, USA
- Shankar Sastry
- Department of EECS, University of California, Berkeley, CA, USA
|
22
|
Elimelech K, Indelman V. Simplified decision making in the belief space using belief sparsification. Int J Rob Res 2022. [DOI: 10.1177/02783649221076381] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
In this work, we introduce a new and efficient solution approach for the problem of decision making under uncertainty, which can be formulated as decision making in a belief space over a possibly high-dimensional state space. Typically, to solve a decision problem, one should identify the optimal action from a set of candidates according to some objective. We claim that one can often formulate an analogous yet simplified decision problem that can be solved more efficiently. A wise simplification method can lead to the same action selection, or to one for which the maximal loss in optimality can be guaranteed. Furthermore, such simplification is separated from the state inference and does not compromise its accuracy, as the selected action is finally applied to the original state. First, we present the concept for general decision problems and provide a theoretical framework for a coherent formulation of the approach. We then apply these ideas to decision problems in the belief space, which can be simplified by considering a sparse approximation of their initial belief. The scalable belief sparsification algorithm we provide yields solutions that are guaranteed to be consistent with the original problem. We demonstrate the benefits of the approach on a realistic active-SLAM problem, significantly reducing computation time with no loss in solution quality. This work is both fundamental and practical and holds numerous possible extensions.
Affiliation(s)
- Khen Elimelech
- Robotics and Autonomous Systems Program, Technion—Israel Institute of Technology, Haifa
- Vadim Indelman
- Department of Aerospace Engineering, Technion—Israel Institute of Technology, Haifa
|
23
|
Fast Loop Closure Selection Method with Spatiotemporal Consistency for Multi-Robot Map Fusion. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12115291] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
This paper presents a robust method based on graph topology to find the topologically correct and consistent subset of inter-robot relative pose measurements for multi-robot map fusion. The absence of a good prior on relative pose makes it challenging to distinguish inliers from outliers, and wrong inter-robot loop closures used to optimize the pose graph can seriously corrupt the fused global map. Existing works mainly rely on consistency in the spatial dimension to select inter-robot measurements, which does not always hold. In this paper, we propose a fast inter-robot loop closure selection method that integrates the consistency and topology relationships of inter-robot measurements, which conform both to the continuity of similar scenes and to spatiotemporal consistency. First, a clustering method integrating the topological correctness of inter-robot loop closures splits the entire measurement set into multiple clusters. Then, our method decomposes the traditional high-dimensional consistency matrix into sub-matrix blocks corresponding to the overlapping trajectory regions. Finally, we define a weight function to find the topologically correct and consistent subset with maximum cardinality, convert it to the maximum clique problem from graph theory, and solve it. We evaluate the performance of our method in simulation and in a real-world experiment. Compared to state-of-the-art methods, the results show that our method achieves competitive accuracy while reducing computation time by 75%.
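The selection step described above reduces to a maximum-clique search over a pairwise-consistency graph: keep the largest set of loop closures that all agree with each other. A brute-force sketch (the `consistent` predicate is a hypothetical stand-in for the paper's pairwise-consistency test; real systems use a fast max-clique solver) is:

```python
import itertools

def max_consistent_subset(candidates, consistent):
    """Brute-force maximum clique over a pairwise-consistency graph.
    `candidates` is a list of measurement ids and `consistent(i, j)` says
    whether two measurements agree; illustrative only, exponential cost."""
    for r in range(len(candidates), 0, -1):           # largest sets first
        for subset in itertools.combinations(candidates, r):
            if all(consistent(a, b)
                   for a, b in itertools.combinations(subset, 2)):
                return list(subset)                    # maximum clique
    return []
```

Because outliers rarely agree with each other or with the inliers, the maximum clique is overwhelmingly likely to contain only correct loop closures.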
|
24
|
Real-Time Artificial Intelligence Based Visual Simultaneous Localization and Mapping in Dynamic Environments – a Review. J INTELL ROBOT SYST 2022. [DOI: 10.1007/s10846-022-01643-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
|
25
|
Liu X, Li Z, Ishii M, Hager GD, Taylor RH, Unberath M. SAGE: SLAM with Appearance and Geometry Prior for Endoscopy. IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION : ICRA : [PROCEEDINGS]. IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION 2022; 2022:5587-5593. [PMID: 36937551 PMCID: PMC10018746 DOI: 10.1109/icra46639.2022.9812257] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
In endoscopy, many applications (e.g., surgical navigation) would benefit from a real-time method that can simultaneously track the endoscope and reconstruct the dense 3D geometry of the observed anatomy from monocular endoscopic video. To this end, we develop a Simultaneous Localization and Mapping system combining learning-based appearance priors, optimizable geometry priors, and factor graph optimization. The appearance and geometry priors are explicitly learned in an end-to-end differentiable training pipeline to master the task of pair-wise image alignment, one of the core components of the SLAM system. In our experiments, the proposed SLAM system robustly handles the challenges of texture scarceness and illumination variation commonly seen in endoscopy. The system generalizes well to unseen endoscopes and subjects and performs favorably compared with a state-of-the-art feature-based SLAM system. The code repository is available at https://github.com/lppllppl920/SAGE-SLAM.git.
Affiliation(s)
- Xingtong Liu
- Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287 USA
- Zhaoshuo Li
- Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287 USA
- Masaru Ishii
- Johns Hopkins Medical Institutions, Baltimore, MD 21224 USA
- Gregory D Hager
- Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287 USA
- Russell H Taylor
- Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287 USA
- Mathias Unberath
- Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287 USA
|
26
|
He M, Rajkumar RR. LaneMatch: A Practical Real-Time Localization Method Via Lane-Matching. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3147012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Affiliation(s)
- Mengwen He
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Ragunathan Raj Rajkumar
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
|
27
|
McConnell J, Chen F, Englot B. Overhead Image Factors for Underwater Sonar-Based SLAM. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3154048] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
28
|
Arun A, Ayyalasomayajula R, Hunter W, Bharadia D. P2SLAM: Bearing Based WiFi SLAM for Indoor Robots. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3144796] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
29
|
Wang Q, Zhang J, Liu Y, Zhang X. High-Precision and Fast LiDAR Odometry and Mapping Algorithm. JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS 2022. [DOI: 10.20965/jaciii.2022.p0206] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
LiDAR SLAM technology is an important method for the accurate navigation of autonomous vehicles and is a prerequisite for their safe driving in the unstructured road environment of complex parks. This paper proposes a fast LiDAR point cloud registration algorithm that realizes fast and accurate localization and mapping through a combination of the normal distribution transform (NDT) and point-to-line iterative closest point (PLICP). First, the NDT registration algorithm is applied for coarse registration of point clouds between adjacent frames, yielding a rough estimate of the vehicle pose. Then, the PLICP registration algorithm corrects the coarse result, completing precise registration and achieving an accurate pose estimate. Finally, registration results are accumulated over time and the point cloud map is continuously updated. Through numerous experiments, we compared the proposed algorithm with PLICP: the average number of registration iterations between adjacent frames was reduced by 6.046, the average registration time between adjacent frames decreased by 43.05156 ms, and the efficiency of the registration calculation increased by approximately 51.7%. On the KITTI dataset, the computational efficiency of NDT-ICP was approximately 60% higher than that of LeGO-LOAM. The proposed method realizes accurate localization and mapping with vehicle LiDAR in a complex park environment and was applied to a Small Cyclone autonomous vehicle. The results indicate that the proposed algorithm is reliable and effective.
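The coarse-then-fine structure of the pipeline above can be sketched in 2D. This is an illustrative stand-in, not the paper's implementation: a centroid shift plays the role of the NDT coarse stage, and plain point-to-point ICP plays the role of the point-to-line fine stage; all names are ours.

```python
import numpy as np

def icp_refine(src, dst, iters=30):
    """Fine stage: 2D point-to-point ICP (stand-in for point-to-line ICP).
    Returns R, t such that src @ R.T + t approximately equals dst."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        cur = src @ R.T + t
        # brute-force nearest neighbours; real systems use a KD-tree
        nn = dst[((cur[:, None] - dst[None]) ** 2).sum(-1).argmin(1)]
        mu_c, mu_n = cur.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_c).T @ (nn - mu_n))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:          # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        R, t = dR @ R, dR @ t + mu_n - dR @ mu_c
    return R, t

def coarse_to_fine(src, dst):
    """Coarse stage (centroid shift, standing in for NDT) followed by ICP
    refinement, mirroring the two-stage NDT -> PLICP pipeline."""
    t0 = dst.mean(0) - src.mean(0)
    R, t = icp_refine(src + t0, dst)
    return R, R @ t0 + t                   # compose the two transforms
```

The coarse stage's only job is to land the fine stage inside its convergence basin, which is exactly why the two-stage design cuts the iteration count.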
|
30
|
Improved-UWB/LiDAR-SLAM Tightly Coupled Positioning System with NLOS Identification Using a LiDAR Point Cloud in GNSS-Denied Environments. REMOTE SENSING 2022. [DOI: 10.3390/rs14061380] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Reliable absolute positioning is indispensable in long-term positioning systems. Although simultaneous localization and mapping based on light detection and ranging (LiDAR-SLAM) is effective in global navigation satellite system (GNSS)-denied environments, it can provide only local positioning results, with error divergence over distance. Ultrawideband (UWB) technology is an effective alternative; however, non-line-of-sight (NLOS) propagation in complex indoor environments severely affects the precision of UWB positioning, and LiDAR-SLAM typically provides more robust results under such conditions. For robust and high-precision positioning, we propose an improved-UWB/LiDAR-SLAM tightly coupled (TC) integrated algorithm. This method is the first to combine a LiDAR point cloud map generated via LiDAR-SLAM with position information from UWB anchors to distinguish between line-of-sight (LOS) and NLOS measurements through obstacle detection and NLOS identification (NI) in real time. Additionally, to alleviate positioning error accumulation in long-term SLAM, an improved-UWB/LiDAR-SLAM TC positioning model is constructed using UWB LOS measurements and LiDAR-SLAM positioning information. Parameter solving using a robust extended Kalman filter (REKF) to suppress the effect of UWB gross errors improves the robustness and positioning performance of the integrated system. Experimental results show that the proposed NI method using the LiDAR point cloud can efficiently and accurately identify UWB NLOS errors to improve the performance of UWB ranging and positioning in real scenarios. The TC integrated method combining NI and REKF achieves better positioning effectiveness and robustness than other comparative methods and satisfactory control of sensor errors with a root-mean-square error of 0.094 m, realizing subdecimeter indoor positioning.
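The robust-EKF idea used above, down-weighting suspected NLOS ranges via the normalized innovation, can be sketched as follows. The chi-square-style threshold and the simple covariance-inflation rule are illustrative choices, not the paper's exact REKF weight function, and the function name is ours.

```python
import numpy as np

def robust_ekf_update(x, P, z, H, R, k=3.0):
    """EKF measurement update with a simple robust step: if the normalized
    innovation exceeds k, inflate the measurement covariance so a suspected
    gross (e.g., NLOS) measurement is down-weighted instead of trusted."""
    v = z - H @ x                               # innovation
    S = H @ P @ H.T + R
    d2 = float(v @ np.linalg.inv(S) @ v)        # squared Mahalanobis distance
    if d2 > k ** 2:                             # suspected gross error
        R = R * (d2 / k ** 2)                   # inflate -> smaller gain
        S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ v
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Compared with a plain Kalman update, the robust update pulls the state far less toward an outlying measurement while behaving identically on nominal ones.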
|
31
|
Chen Y, Zhao L, Zhang Y, Huang S, Dissanayake G. Anchor Selection for SLAM Based on Graph Topology and Submodular Optimization. IEEE T ROBOT 2022. [DOI: 10.1109/tro.2021.3078333] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
32
|
Taguchi S, Deguchi H, Hirose N, Kidono K. Fast Bayesian graph update for SLAM. Adv Robot 2022. [DOI: 10.1080/01691864.2021.2013939] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/01/2022]
Affiliation(s)
- Shun Taguchi
- Toyota Central R&D Labs., Inc., Nagakute, Aichi, Japan
|
33
|
Guadagnino T, Giammarino LD, Grisetti G. HiPE: Hierarchical Initialization for Pose Graphs. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2021.3125046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
34
|
Xiao H, Han Y, Zhao J, Cui J, Xiong L, Yu Z. LIO-Vehicle: A Tightly-Coupled Vehicle Dynamics Extension of LiDAR Inertial Odometry. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2021.3126336] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
|
36
|
Rosinol A, Violette A, Abate M, Hughes N, Chang Y, Shi J, Gupta A, Carlone L. Kimera: From SLAM to spatial perception with 3D dynamic scene graphs. Int J Rob Res 2021. [DOI: 10.1177/02783649211056674] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Humans are able to form a complex mental model of the environment they move in. This mental model captures geometric and semantic aspects of the scene, describes the environment at multiple levels of abstraction (e.g., objects, rooms, buildings), and includes static and dynamic entities and their relations (e.g., a person is in a room at a given time). In contrast, current robots' internal representations still provide a partial and fragmented understanding of the environment, either in the form of a sparse or dense set of geometric primitives (e.g., points, lines, planes, and voxels) or as a collection of objects. This article attempts to reduce the gap between robot and human perception by introducing a novel representation, a 3D dynamic scene graph (DSG), that seamlessly captures metric and semantic aspects of a dynamic environment. A DSG is a layered graph where nodes represent spatial concepts at different levels of abstraction and edges represent spatiotemporal relations among nodes. Our second contribution is Kimera, the first fully automatic method to build a DSG from visual–inertial data. Kimera includes accurate algorithms for visual–inertial simultaneous localization and mapping (SLAM), metric–semantic 3D reconstruction, object localization, human pose and shape estimation, and scene parsing. Our third contribution is a comprehensive evaluation of Kimera on real-life datasets and photo-realistic simulations, including a newly released dataset, uHumans2, which simulates a collection of crowded indoor and outdoor scenes. Our evaluation shows that Kimera achieves competitive performance in visual–inertial SLAM, estimates an accurate 3D metric–semantic mesh model in real time, and builds a DSG of a complex indoor environment with tens of objects and humans in minutes. Our final contribution is to showcase how to use a DSG for real-time hierarchical semantic path-planning. The core modules in Kimera have been released open source.
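The layered-graph structure described above can be sketched as a tiny data structure. This loosely mirrors the DSG idea (named layers, intra- and inter-layer edges); the class and method names are ours, not Kimera's API.

```python
from collections import defaultdict

class DynamicSceneGraph:
    """Minimal layered scene-graph sketch: nodes live in named layers
    (e.g., buildings > rooms > objects/agents) and undirected edges may
    link nodes within or across layers."""

    def __init__(self, layers):
        self.layers = {name: {} for name in layers}
        self.edges = defaultdict(set)

    def add_node(self, layer, node_id, attrs=None):
        self.layers[layer][node_id] = attrs or {}

    def add_edge(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def neighbors_in_layer(self, node_id, layer):
        """Query one abstraction level, e.g., 'which objects are in this room?'"""
        return [n for n in self.edges[node_id] if n in self.layers[layer]]
```

A query such as "who is in the kitchen?" then becomes a one-hop, single-layer neighborhood lookup rather than a search over raw geometry.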
Affiliation(s)
- Antoni Rosinol
- Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, USA
- Andrew Violette
- Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, USA
- Marcus Abate
- Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, USA
- Nathan Hughes
- Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, USA
- Yun Chang
- Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, USA
- Jingnan Shi
- Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, USA
- Arjun Gupta
- Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, USA
- Luca Carlone
- Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, USA
|
37
|
Sung C, Jeon S, Lim H, Myung H. What if there was no revisit? Large-scale graph-based SLAM with traffic sign detection in an HD map using LiDAR inertial odometry. INTEL SERV ROBOT 2021. [DOI: 10.1007/s11370-021-00395-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
38
|
Vasilopoulos V, Pavlakos G, Schmeckpeper K, Daniilidis K, Koditschek DE. Reactive navigation in partially familiar planar environments using semantic perceptual feedback. Int J Rob Res 2021. [DOI: 10.1177/02783649211048931] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
This article solves the planar navigation problem by recourse to an online reactive scheme that exploits recent advances in simultaneous localization and mapping (SLAM) and visual object recognition to recast prior geometric knowledge in terms of an offline catalog of familiar objects. The resulting vector field planner guarantees convergence to an arbitrarily specified goal, avoiding collisions along the way with fixed but arbitrarily placed instances from the catalog as well as completely unknown fixed obstacles so long as they are strongly convex and well separated. We illustrate the generic robustness properties of such deterministic reactive planners as well as the relatively modest computational cost of this algorithm by supplementing an extensive numerical study with physical implementation on both a wheeled and legged platform in different settings.
Affiliation(s)
- Vasileios Vasilopoulos
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA
- Georgios Pavlakos
- Department of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, CA, USA
- Karl Schmeckpeper
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
- Kostas Daniilidis
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
- Daniel E. Koditschek
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA
|
39
|
Chen W, Wang Y, Chen H, Liu Y. EIL‐SLAM: Depth‐enhanced edge‐based infrared‐LiDAR SLAM. J FIELD ROBOT 2021. [DOI: 10.1002/rob.22040] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Affiliation(s)
- Wenqiang Chen
- School of Mechanical Engineering and Automation, Harbin Institute of Technology Shenzhen, Shenzhen, Guangdong, China
- Yu Wang
- School of Mechanical Engineering and Automation, Harbin Institute of Technology Shenzhen, Shenzhen, Guangdong, China
- Haoyao Chen
- School of Mechanical Engineering and Automation, Harbin Institute of Technology Shenzhen, Shenzhen, Guangdong, China
- Yunhui Liu
- Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong
|
40
|
Albee K, Oestreich C, Specht C, Terán Espinoza A, Todd J, Hokaj I, Lampariello R, Linares R. A Robust Observation, Planning, and Control Pipeline for Autonomous Rendezvous with Tumbling Targets. Front Robot AI 2021; 8:641338. [PMID: 34604314 PMCID: PMC8484313 DOI: 10.3389/frobt.2021.641338] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2020] [Accepted: 07/12/2021] [Indexed: 11/28/2022] Open
Abstract
Accumulating space debris edges the space domain ever closer to cascading Kessler syndrome, a chain reaction of debris generation that could dramatically inhibit the practical use of space. Meanwhile, a growing number of retired satellites, particularly in higher orbits like geostationary orbit, remain nearly functional except for minor but critical malfunctions or fuel depletion. Servicing these ailing satellites and cleaning up “high-value” space debris remains a formidable challenge, but active interception of these targets with autonomous repair and deorbit spacecraft is inching closer toward reality as shown through a variety of rendezvous demonstration missions. However, some practical challenges are still unsolved and undemonstrated. Devoid of station-keeping ability, space debris and fuel-depleted satellites often enter uncontrolled tumbles on-orbit. In order to perform on-orbit servicing or active debris removal, docking spacecraft (the “Chaser”) must account for the tumbling motion of these targets (the “Target”), which is oftentimes not known a priori. Accounting for the tumbling dynamics of the Target, the Chaser spacecraft must have an algorithmic approach to identifying the state of the Target’s tumble, then use this information to produce useful motion planning and control. Furthermore, careful consideration of the inherent uncertainty of any maneuvers must be accounted for in order to provide guarantees on system performance. This study proposes the complete pipeline of rendezvous with such a Target, starting from a standoff estimation point to a mating point fixed in the rotating Target’s body frame. A novel visual estimation algorithm is applied using a 3D time-of-flight camera to perform remote standoff estimation of the Target’s rotational state and its principal axes of rotation. 
A novel motion planning algorithm is employed, using offline simulation of potential Target tumble types to produce a look-up table that is parsed on-orbit using the estimation data. This nonlinear-programming-based algorithm accounts for known Target geometry and important practical constraints such as field-of-view requirements, producing a motion plan in the Target's rotating body frame. Meanwhile, an uncertainty characterization method is demonstrated which propagates the uncertainty in the Target's tumble to provide disturbance bounds on the motion plan's reference trajectory in the inertial frame. Finally, this uncertainty bound is provided to a robust tube model predictive controller, which provides tube-based guarantees on the system's ability to follow the reference trajectory translationally. The combination and interfaces of these methods are shown, and some practical implications of their use in a planned demonstration on NASA's Astrobee free-flyer are discussed. Simulation results for each component individually, and a complete case study of the full pipeline, are presented as the study prepares to move toward demonstration on the International Space Station.
Affiliation(s)
- Keenan Albee
- Space Systems Laboratory (SSL) and Astrodynamics, Space Robotics and Controls Lab (ARCLab), Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, United States
- Charles Oestreich
- Space Systems Laboratory (SSL) and Astrodynamics, Space Robotics and Controls Lab (ARCLab), Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, United States
- Guidance & Control Group, The Charles Stark Draper Laboratory, Inc., Cambridge, MA, United States
- Caroline Specht
- Autonomy and Teleoperation, Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Oberpfaffenhofen, Germany
- Antonio Terán Espinoza
- Space Systems Laboratory (SSL) and Astrodynamics, Space Robotics and Controls Lab (ARCLab), Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, United States
- Jessica Todd
- Human Systems Laboratory (HSL), Department of Aeronautics and Astronautics and Department of Applied Ocean Physics and Engineering, Massachusetts Institute of Technology/Woods Hole Oceanographic Institution, Cambridge, MA, United States
- Ian Hokaj
- Space Systems Laboratory (SSL) and Astrodynamics, Space Robotics and Controls Lab (ARCLab), Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, United States
- Roberto Lampariello
- Autonomy and Teleoperation, Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Oberpfaffenhofen, Germany
- Richard Linares
- Space Systems Laboratory (SSL) and Astrodynamics, Space Robotics and Controls Lab (ARCLab), Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, United States
|
41
|
Kim JH, Hong S, Ji G, Jeon S, Hwangbo J, Oh JH, Park HW. Legged Robot State Estimation With Dynamic Contact Event Information. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3093876] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
42
|
|
43
|
Zhang X, He Z, Ma Z, Jun P, Yang K. VIAE-Net: An End-to-End Altitude Estimation through Monocular Vision and Inertial Feature Fusion Neural Networks for UAV Autonomous Landing. SENSORS (BASEL, SWITZERLAND) 2021; 21:6302. [PMID: 34577508 PMCID: PMC8472930 DOI: 10.3390/s21186302] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 09/17/2021] [Accepted: 09/18/2021] [Indexed: 11/21/2022]
Abstract
Altitude estimation is one of the fundamental tasks of unmanned aerial vehicle (UAV) automatic navigation: it aims to accurately and robustly estimate the relative altitude between the UAV and specific areas. However, most methods rely on auxiliary signal reception or expensive equipment, which are not always available or applicable owing to signal interference, cost, or power-consumption limitations in real application scenarios. In addition, fixed-wing UAVs have more complex kinematic models than vertical take-off and landing UAVs. An altitude estimation method that can be applied robustly to fixed-wing UAVs in GPS-denied environments is therefore needed. In this paper, we present a high-precision altitude estimation method that combines visual information from a monocular camera with pose information from the inertial measurement unit (IMU) through a novel end-to-end deep neural network architecture. Our method has numerous advantages over existing approaches. First, we utilize visual-inertial information and physics-based reasoning to build an ideal altitude model that provides general applicability and data efficiency for neural network learning. A further advantage is a novel feature fusion module that simplifies the tedious manual calibration and synchronization of the camera and IMU required by standard visual or visual-inertial methods to obtain the data association for altitude estimation modeling. Finally, the proposed method was evaluated and validated using real flight data obtained during a fixed-wing UAV landing phase. The results show that the average estimation error of our method is less than 3% of the actual altitude, which vastly improves altitude estimation accuracy compared with other visual and visual-inertial based methods.
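As a toy illustration of the kind of physics-based altitude relation the abstract alludes to (the fusion network itself is not reproduced here; the flat-ground pinhole model, focal length, and pitch angle below are all assumptions):

```python
import math

def altitude_from_ground_point(f_px, y_px, pitch_rad, ground_range_m):
    """Flat-ground pinhole altitude model (an illustrative assumption):
    a ground point imaged y_px below the optical centre adds a depression
    angle atan(y/f) to the IMU-measured pitch; the altitude then follows
    from the range to the point by simple trigonometry."""
    depression = pitch_rad + math.atan2(y_px, f_px)
    return ground_range_m * math.tan(depression)

# Camera pitched 45 degrees down, ground point on the optical axis,
# 100 m of horizontal range: the altitude equals the range.
h = altitude_from_ground_point(f_px=500.0, y_px=0.0,
                               pitch_rad=math.radians(45.0),
                               ground_range_m=100.0)
```

Coupling a geometric prior of this kind with learned features is one way such a network can stay data-efficient, since the model supplies structure the network would otherwise have to learn from scratch.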
Collapse
Affiliation(s)
- Xupei Zhang
- Xi’an Microelectronics Technology Institute, Xi’an 710065, China; (X.Z.); (Z.H.)
| | - Zhanzhuang He
- Xi’an Microelectronics Technology Institute, Xi’an 710065, China; (X.Z.); (Z.H.)
| | - Zhong Ma
- Xi’an Microelectronics Technology Institute, Xi’an 710065, China; (X.Z.); (Z.H.)
| | - Peng Jun
- Sichuan Tengden Technology Co., Ltd., Chengdu 610037, China; (P.J.); (K.Y.)
| | - Kun Yang
- Sichuan Tengden Technology Co., Ltd., Chengdu 610037, China; (P.J.); (K.Y.)
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
| |
Collapse
|
44
|
Abstract
The inspection of public infrastructure such as viaducts and bridges is crucial for their proper maintenance, given the heavy use many of them receive. Current inspection techniques are largely manual and very costly, require highly qualified personnel, and involve many risks. This article presents a novel solution for the detailed inspection of viaducts using aerial robotic platforms. The system provides a highly automated visual inspection platform that does not rely on GPS and can even fly underneath the infrastructure. Unlike commercially available solutions, our system automatically references the inspection to a global coordinate system usable throughout the lifespan of the infrastructure. In addition, the system includes a second aerial platform with a robotic arm to make contact inspections of detected defects, thus providing information that cannot be obtained from images alone. Both aerial robotic platforms allow flexibility in the choice of camera or contact measurement sensors as the situation requires. The system was validated by performing inspection flights on real viaducts.
Collapse
|
45
|
Li K, Li M, Hanebeck UD. Towards High-Performance Solid-State-LiDAR-Inertial Odometry and Mapping. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3070251] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
46
|
Autonomous vehicle self-localization in urban environments based on 3D curvature feature points – Monte Carlo localization. ROBOTICA 2021. [DOI: 10.1017/s0263574721000862] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
This paper proposes a map-based localization system for autonomous vehicle self-localization in urban environments, composed of a pose-graph mapping method and a 3D curvature feature points – Monte Carlo Localization algorithm (3DCF-MCL). The advantage of 3DCF-MCL is that it combines the high accuracy of 3D feature-point registration with the robustness of a particle filter. Experimental results show that 3DCF-MCL provides accurate localization for autonomous vehicles using the 3D point cloud map generated by our mapping method, and comparisons with other map-based localization algorithms demonstrate that 3DCF-MCL outperforms them.
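The particle-filter half of such a system can be sketched in one dimension. This is an illustrative toy only: the paper's 3DCF-MCL matches 3D curvature feature points against a point cloud map, whereas here a particle filter simply fuses odometry with noisy range measurements to a single landmark at a known map position.

```python
import numpy as np

# Toy 1-D Monte Carlo localization: predict with odometry, weight by a
# Gaussian range likelihood, then systematically resample.
rng = np.random.default_rng(1)
landmark = 10.0                     # known map feature position
true_x = 0.0                        # true vehicle position
particles = rng.uniform(-5.0, 5.0, 500)

for _ in range(15):
    true_x += 0.5                                      # vehicle moves forward
    particles += 0.5 + rng.normal(0.0, 0.05, 500)      # motion (prediction) update
    z = abs(landmark - true_x) + rng.normal(0.0, 0.1)  # noisy range measurement
    # measurement update: Gaussian likelihood of each particle's predicted range
    weights = np.exp(-0.5 * ((np.abs(landmark - particles) - z) / 0.1) ** 2) + 1e-300
    weights /= weights.sum()
    # systematic resampling
    cum = np.cumsum(weights)
    cum[-1] = 1.0                   # guard against floating-point shortfall
    idx = np.searchsorted(cum, (rng.random() + np.arange(500)) / 500)
    particles = particles[idx]

estimate = particles.mean()
```

After a handful of updates the particle mean tracks the true position closely; the robustness the abstract credits to the particle filter comes from maintaining this full multi-hypothesis belief rather than a single estimate.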
Collapse
|
47
|
A Robust Framework for Simultaneous Localization and Mapping with Multiple Non-Repetitive Scanning Lidars. REMOTE SENSING 2021. [DOI: 10.3390/rs13102015] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
With the ability to provide long-range, highly accurate 3D surrounding measurements at a lower device cost, non-repetitive scanning Livox lidars have attracted considerable interest in the last few years and have seen huge growth in use in the fields of robotics and autonomous vehicles. Owing to their restricted FoV, however, they are prone to degeneration in feature-poor scenes and have difficulty detecting loops. In this paper, we present a robust multi-lidar fusion framework for self-localization and mapping problems, allowing different numbers of Livox lidars and suitable for various platforms. First, an automatic calibration procedure is introduced for multiple lidars: based on the assumption of a rigid geometric structure, the transformation between two lidars can be obtained through map alignment. Second, the raw data from different lidars are time-synchronized and sent to respective feature extraction processes. Instead of using all feature candidates to estimate lidar odometry, only the most informative features are selected to perform scan registration; dynamic objects are removed in the meantime, and a novel place descriptor is integrated for enhanced loop detection. The results show that our proposed system achieved better results than single Livox lidar methods and outperformed recent mechanical lidar methods in challenging scenarios. Moreover, the performance in feature-poor and large-motion scenarios has also been verified, with acceptable accuracy in both.
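Lidar odometry pipelines of this kind typically rank points by a local curvature proxy before registration. The sketch below is a LOAM-style stand-in for the informative-feature selection the abstract mentions (the paper's exact criterion is not reproduced; the window size and counts are assumptions):

```python
import numpy as np

def select_features(points, k_edge=2, k_plane=2, window=2):
    """points: (N, 3) array ordered along a scan line. The curvature proxy
    for each point is the norm of the summed difference vectors to its
    neighbours; high values suggest edges, low values planar surfaces."""
    n = len(points)
    curv = np.full(n, np.nan)
    for i in range(window, n - window):
        diff = (points[i - window:i + window + 1] - points[i]).sum(axis=0)
        curv[i] = np.linalg.norm(diff)
    valid = np.arange(window, n - window)
    order = valid[np.argsort(curv[valid])]
    # highest-curvature indices as edge features, lowest as planar features
    return order[-k_edge:], order[:k_plane]

# A straight wall with one sharp corner: the corner (index 5) should
# rank as an edge feature, the flat stretches as planar ones.
line = np.array([[x, 0.0, 0.0] for x in range(6)] +
                [[5.0, y, 0.0] for y in range(1, 6)])
edges, planes = select_features(line)
```

Registering only these ranked features keeps the scan-matching problem small while retaining the geometry that constrains the pose best, which is why feature selection helps in the restricted-FoV setting.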
Collapse
|
48
|
Das A, Elfring J, Dubbelman G. Real-Time Vehicle Positioning and Mapping Using Graph Optimization. SENSORS 2021; 21:s21082815. [PMID: 33923735 PMCID: PMC8072526 DOI: 10.3390/s21082815] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 04/07/2021] [Accepted: 04/09/2021] [Indexed: 11/16/2022]
Abstract
In this work, we propose and evaluate a pose-graph optimization-based real-time multi-sensor fusion framework for vehicle positioning using low-cost automotive-grade sensors. Pose-graphs can model multiple absolute and relative vehicle positioning sensor measurements and can be optimized using nonlinear techniques. We model pose-graphs using measurements from a precise stereo camera-based visual odometry system, a robust odometry system using the in-vehicle velocity and yaw-rate sensors, and an automotive-grade GNSS receiver. Our evaluation is based on a dataset with 180 km of vehicle trajectories recorded in highway, urban, and rural areas, accompanied by postprocessed Real-Time Kinematic GNSS as ground truth. We compare the architecture's performance with (i) vehicle odometry and GNSS fusion and (ii) stereo visual odometry, vehicle odometry, and GNSS fusion, for both offline and real-time optimization strategies. The results exhibit a 20.86% reduction in the localization error's standard deviation and a significant reduction in outliers when compared with automotive-grade GNSS receivers.
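The core of such a framework is stacking relative and absolute factors into one least-squares problem. Below is a deliberately minimal 1-D sketch (the paper fuses full vehicle poses with nonlinear optimization; the weights, measurements, and linear 1-D model here are illustrative assumptions):

```python
import numpy as np

# 1-D pose graph: odometry factors constrain consecutive pose differences,
# GNSS-like factors anchor individual poses; both are stacked into a
# weighted linear least-squares problem J x = r.
n = 4                                  # number of poses along the trajectory
odometry = [1.0, 1.0, 1.0]             # relative measurements x[i+1] - x[i]
gnss = {0: 0.1, 3: 2.7}                # noisy absolute fixes
w_odom, w_gnss = 1.0, 0.5              # information weights (assumed)

rows, rhs = [], []
for i, d in enumerate(odometry):       # odometry factor: x[i+1] - x[i] = d
    row = np.zeros(n)
    row[i], row[i + 1] = -w_odom, w_odom
    rows.append(row)
    rhs.append(w_odom * d)
for i, z in gnss.items():              # absolute factor: x[i] = z
    row = np.zeros(n)
    row[i] = w_gnss
    rows.append(row)
    rhs.append(w_gnss * z)

J, r = np.array(rows), np.array(rhs)
x, *_ = np.linalg.lstsq(J, r, rcond=None)  # least-squares pose estimates
```

The solution spreads the disagreement between the odometry chain and the two absolute fixes across all poses in proportion to the weights, which is exactly the smoothing behaviour that suppresses GNSS outliers in the full system.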
Collapse
Affiliation(s)
- Anweshan Das
- Signal Processing Systems Group, Department of Electrical Engineering, University of Eindhoven, 5600 MB Eindhoven, The Netherlands;
- Correspondence:
| | - Jos Elfring
- Control Systems Technology Group, Department of Mechanical Engineering, University of Eindhoven, 5600 MB Eindhoven, The Netherlands;
- Product Unit Autonomous Driving, TomTom, 1011 AC Amsterdam, The Netherlands
| | - Gijs Dubbelman
- Signal Processing Systems Group, Department of Electrical Engineering, University of Eindhoven, 5600 MB Eindhoven, The Netherlands;
| |
Collapse
|
49
|
Vehicle Odometry with Camera-Lidar-IMU Information Fusion and Factor-Graph Optimization. J INTELL ROBOT SYST 2021. [DOI: 10.1007/s10846-021-01329-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
50
|
Elimelech K, Indelman V. Efficient Modification of the Upper Triangular Square Root Matrix on Variable Reordering. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2020.3048663] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|