1
Zhang Q, Li R, Sun J, Wei L, Huang J, Tan Y. Dynamic 3D Point-Cloud-Driven Autonomous Hierarchical Path Planning for Quadruped Robots. Biomimetics (Basel) 2024; 9:259. PMID: 38786469; PMCID: PMC11117888; DOI: 10.3390/biomimetics9050259.
Abstract
Aiming at effectively generating safe and reliable motion paths for quadruped robots, a hierarchical path planning approach driven by dynamic 3D point clouds is proposed in this article. The path planning model consists of two layers: a global path planning layer and a local path planning layer. At the global layer, a new method is proposed for calculating the terrain potential field based on point cloud height segmentation, and a variable step size is employed to improve path smoothness. At the local layer, a real-time prediction method for potential collision areas and a strategy for temporary target point selection are developed. Quadruped robot experiments were carried out in a complex outdoor environment. The experimental results verified that, for global path planning, the smoothness of the path is improved and the complexity of the traversed ground is reduced; the effective step size is increased by a maximum of 13.4 times, and the number of iterations is decreased by up to 1/6, compared with the traditional fixed-step-size planning algorithm. For local path planning, the path length is shortened by 20%, and more efficient dynamic obstacle avoidance and more stable velocity planning are achieved using the improved dynamic window approach (DWA).
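The improved DWA itself is not specified in this abstract; as background, the classic dynamic window approach it builds on samples velocity pairs reachable within one control period, forward-simulates each, and scores the resulting trajectories. A minimal sketch follows, where the robot limits, collision margin, and cost weights are illustrative assumptions, not the authors' values:

```python
import math

def dwa_velocity(x, y, theta, v, w, goal, obstacles,
                 v_max=1.0, a_max=0.5, w_max=1.0, dw_acc=2.0,
                 dt=0.1, horizon=1.0, samples=11):
    """Classic dynamic window approach: sample (v, w) pairs reachable
    given the acceleration limits, roll each command forward over a
    short horizon, and keep the lowest-cost collision-free command."""
    best, best_cost = (0.0, 0.0), float("inf")
    for i in range(samples):
        for j in range(samples):
            # Dynamic window: velocities reachable within one period.
            vc = max(0.0, v - a_max * dt) + i * (2 * a_max * dt) / (samples - 1)
            wc = (w - dw_acc * dt) + j * (2 * dw_acc * dt) / (samples - 1)
            if vc > v_max or abs(wc) > w_max:
                continue
            # Forward-simulate the candidate command.
            px, py, pth = x, y, theta
            t, clearance = 0.0, float("inf")
            while t < horizon:
                pth += wc * dt
                px += vc * math.cos(pth) * dt
                py += vc * math.sin(pth) * dt
                for ox, oy in obstacles:
                    clearance = min(clearance, math.hypot(px - ox, py - oy))
                t += dt
            if clearance < 0.2:  # collision margin (assumed)
                continue
            # Weighted cost: reach the goal, stay clear, keep speed up.
            goal_cost = math.hypot(goal[0] - px, goal[1] - py)
            cost = 1.0 * goal_cost - 0.3 * clearance - 0.1 * vc
            if cost < best_cost:
                best_cost, best = cost, (vc, wc)
    return best
```

For example, `dwa_velocity(0, 0, 0, 0.5, 0.0, (5, 0), [(2, 2)])` selects a forward velocity steering roughly toward the goal while keeping clear of the obstacle.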
Affiliation(s)
- Qi Zhang
- School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
- Ruiya Li
- School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
- Robotics and Intelligent Manufacturing Engineering Research Center of Hubei Province, Wuhan 430070, China
- Jubiao Sun
- School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
- Li Wei
- School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
- Jun Huang
- School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
- Yuegang Tan
- School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
2
Aijazi AK, Checchin P. Non-Repetitive Scanning LiDAR Sensor for Robust 3D Point Cloud Registration in Localization and Mapping Applications. Sensors (Basel) 2024; 24:378. PMID: 38257470; PMCID: PMC10818435; DOI: 10.3390/s24020378.
Abstract
Three-dimensional point cloud registration is a fundamental task for localization and mapping in autonomous navigation applications. Over the years, registration algorithms have evolved; nevertheless, several challenges still remain. Recently, non-repetitive scanning LiDAR sensors have emerged as a promising 3D data acquisition tool. However, whether this type of sensor can support robust point cloud registration still needs to be ascertained. In this paper, we explore the feasibility of one such LiDAR sensor, with a Spirograph-type non-repetitive scanning pattern, for robust 3D point cloud registration. We first characterize the data of this unique sensor; then, utilizing these results, we propose a new 3D point cloud registration method that exploits the unique scanning pattern of the sensor to register successive 3D scans. The characteristic equations of the scanning pattern, determined during the characterization phase, are used to reconstruct a perfect scan at the target distance. The real scan is then compared with this reconstructed scan to extract objects in the scene. The displacement of these extracted objects with respect to the center of the scanning pattern is compared across successive scans to determine the transformations that are then used to register these scans. The proposed method is evaluated on two different real-world datasets and compared with other state-of-the-art registration methods. After analysis, the performance (localization and mapping results) of the proposed method is further improved by adding constraints such as loop closure and by employing a Curve Fitting Derivative Filter (CFDT) to better estimate the trajectory. The results clearly demonstrate the suitability of the sensor for such applications. The proposed method is comparable with other methods in accuracy but surpasses them in processing time.
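As background to the characterization step above, a Spirograph-type pattern can be modeled with hypotrochoid equations (a small circle rolling inside a larger one). The sketch below generates such a 2D pattern; the parameters R, r, d, and dt are illustrative assumptions, whereas a real sensor's characteristic equations come from the characterization phase described in the abstract:

```python
import math

def spirograph_scan(n_points=1000, R=1.0, r=0.37, d=0.8, dt=0.05):
    """Generate a Spirograph-like (hypotrochoid) scan pattern in the
    sensor's 2D image plane. When (R - r) / r is irrational, the
    curve never exactly repeats, which is the non-repetitive
    property exploited by such sensors."""
    pts = []
    for k in range(n_points):
        t = k * dt
        # Hypotrochoid: circle of radius r rolling inside radius R,
        # tracing a point at distance d from the rolling circle's center.
        x = (R - r) * math.cos(t) + d * math.cos((R - r) / r * t)
        y = (R - r) * math.sin(t) - d * math.sin((R - r) / r * t)
        pts.append((x, y))
    return pts
```

Every generated point stays within the radius (R - r) + d of the pattern center, which is what allows a "perfect scan" at a target distance to be reconstructed analytically and compared against the real scan.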
Affiliation(s)
- Ahmad K. Aijazi
- Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
3
You B, Zhong G, Chen C, Li J, Ma E. A Simultaneous Localization and Mapping System Using the Iterative Error State Kalman Filter Judgment Algorithm for Global Navigation Satellite System. Sensors (Basel) 2023; 23:6000. PMID: 37447850; DOI: 10.3390/s23136000.
Abstract
Outdoor autonomous mobile robots heavily rely on GPS data for localization. However, GPS data can be erroneous and signals can be interrupted in highly urbanized areas or areas with incomplete satellite coverage, leading to localization deviations. In this paper, we propose a SLAM (Simultaneous Localization and Mapping) system that combines the IESKF (Iterated Extended Kalman Filter) and a factor graph to address these issues. We perform IESKF filtering on LiDAR and inertial measurement unit (IMU) data at the front-end to achieve a more accurate estimation of local pose and incorporate the resulting laser inertial odometry into the back-end factor graph. Furthermore, we introduce a GPS signal filtering method based on GPS state and confidence to ensure that abnormal GPS data is not used in the back-end processing. In the back-end factor graph, we incorporate loop closure factors, IMU preintegration factors, and processed GPS factors. We conducted comparative experiments using the publicly available KITTI dataset and our own experimental platform to compare the proposed SLAM system with two commonly used SLAM systems: the filter-based SLAM system (FAST-LIO) and the graph optimization-based SLAM system (LIO-SAM). The experimental results demonstrate that the proposed SLAM system outperforms the other systems in terms of localization accuracy, especially in cases of GPS signal interruption.
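The GPS filtering idea described above (reject fixes whose state or confidence is poor before they become factors in the back-end graph) can be sketched as a simple gate. The status encoding and covariance threshold below are hypothetical illustrations, not the paper's actual criteria:

```python
def accept_gps(fix_status, position_covariance,
               max_horiz_var=4.0, min_status=2):
    """Gate a GNSS fix before adding it as a factor: require both an
    adequate fix state and sufficient confidence (small horizontal
    variance). The status codes (e.g. 2 = differential/float,
    3 = RTK fixed) and the 4 m^2 threshold are assumed values."""
    if fix_status < min_status:
        return False  # no reliable fix: never use this measurement
    # Row-major 3x3 covariance: [0] = var(east), [4] = var(north).
    horiz_var = max(position_covariance[0], position_covariance[4])
    return horiz_var <= max_horiz_var
```

A fix that passes the gate would be inserted as a GPS factor alongside the loop-closure and IMU preintegration factors; one that fails is simply dropped, so interrupted or degraded signals cannot pull the optimized trajectory off course.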
Affiliation(s)
- Bo You
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, Harbin University of Science and Technology, Harbin 150080, China
- Key Laboratory of Intelligent Technology for Cutting and Manufacturing, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China
- Guangjin Zhong
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, Harbin University of Science and Technology, Harbin 150080, China
- Chen Chen
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, Harbin University of Science and Technology, Harbin 150080, China
- Key Laboratory of Intelligent Technology for Cutting and Manufacturing, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China
- Jiayu Li
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, Harbin University of Science and Technology, Harbin 150080, China
- Key Laboratory of Intelligent Technology for Cutting and Manufacturing, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China
- Ersi Ma
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, Harbin University of Science and Technology, Harbin 150080, China
4
LiDAR-Based Sensor Fusion SLAM and Localization for Autonomous Driving Vehicles in Complex Scenarios. J Imaging 2023; 9:52. PMID: 36826971; PMCID: PMC9961341; DOI: 10.3390/jimaging9020052.
Abstract
LiDAR-based simultaneous localization and mapping (SLAM) and online localization methods are widely used in autonomous driving and are key parts of intelligent vehicles. However, current SLAM algorithms suffer from map drift, and localization algorithms based on a single sensor adapt poorly to complex scenarios. A SLAM and online localization method based on multi-sensor fusion is proposed and integrated into a general framework in this paper. In the mapping process, the front-end applies constraints consisting of normal distributions transform (NDT) registration, loop closure detection, and real-time kinematic (RTK) global navigation satellite system (GNSS) positions, while the back-end applies a pose graph optimization algorithm, to achieve an optimized map without drift. In the localization process, an error state Kalman filter (ESKF) fuses the LiDAR-based localization position with the vehicle states to realize more robust and precise localization. The open-source KITTI dataset and field tests are used to evaluate the proposed method. The test results show that the method achieves 5-10 cm mapping accuracy and 20-30 cm localization accuracy, and that it realizes online autonomous driving in complex scenarios.
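As a minimal illustration of the ESKF correction used in the localization process, the sketch below performs one scalar error-state update: the LiDAR localization position observes the error between the measurement and the nominal state, and the Kalman gain injects the estimated error back into the nominal state. This is a one-dimensional sketch of the general idea, not the paper's full vehicle-state filter:

```python
def eskf_update(x_nom, P, z_lidar, R_meas):
    """One error-state correction step in 1D: the measurement z
    observes the error dx = z - x_nom (H = 1 in this scalar sketch);
    the gain folds the estimated error back into the nominal state,
    after which the error state is reset to zero."""
    # Innovation: measured minus predicted (nominal) position.
    dz = z_lidar - x_nom
    # Kalman gain for the error state.
    K = P / (P + R_meas)
    dx = K * dz              # estimated error
    x_nom = x_nom + dx       # inject error into nominal state
    P = (1.0 - K) * P        # covariance update (error reset keeps P)
    return x_nom, P
```

With equal prior and measurement variances the corrected state lands halfway between prediction and measurement, and the variance halves, which is the expected Kalman behavior.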
5
Nickels K, Gassaway J, Bries M, Anthony D, Fiorani GW. Persistent Mapping of Sensor Data for Medium-Term Autonomy. Sensors (Basel) 2022; 22:5427. PMID: 35891114; PMCID: PMC9325316; DOI: 10.3390/s22145427.
Abstract
For vehicles to operate in unmapped areas with some degree of autonomy, it would be useful to aggregate and store processed sensor data so that it can be used later. In this paper, a tool that records and optimizes the placement of costmap data on a persistent map is presented. The optimization takes several factors into account, including local vehicle odometry, GPS signals when available, local map consistency, deformation of map regions, and proprioceptive GPS offset error. Results illustrating the creation of maps from previously unseen regions (a 100 m × 880 m test track and a 1.2 km dirt trail) are presented, with and without GPS signals available during the creation of the maps. Finally, two examples of the use of these maps are given. First, a path is planned along roads that have been seen exactly once during the mapping phase. Secondly, the map is used for vehicle localization in the absence of GPS signals.
Affiliation(s)
- Kevin Nickels
- Department of Engineering Science, Trinity University, San Antonio, TX 78259, USA
- Jason Gassaway
- Southwest Research Institute, 6220 Culebra Road, San Antonio, TX 78228, USA
- Matthew Bries
- RC Mowers USA, 2146 E Deerfield Ave, Suamico, WI 54173, USA
- David Anthony
- Southwest Research Institute, 6220 Culebra Road, San Antonio, TX 78228, USA
- Graham W. Fiorani
- US Army CCDC Ground Vehicle Systems Center, 6305 E Eleven Mile Rd, Warren, MI 48092, USA
6
An Efficient LiDAR Point Cloud Map Coding Scheme Based on Segmentation and Frame-Inserting Network. Sensors (Basel) 2022; 22:5108. PMID: 35890793; PMCID: PMC9323153; DOI: 10.3390/s22145108.
Abstract
In this article, we present an efficient coding scheme for LiDAR point cloud maps. As a point cloud map consists of numerous single scans spliced together, by recording the time stamp and quaternion matrix of each scan during map building, we cast the point cloud map compression into the point cloud sequence compression problem. The coding architecture includes two techniques: intra-coding and inter-coding. For intra-frames, a segmentation-based intra-prediction technique is developed. For inter-frames, an interpolation-based inter-frame coding network is explored to remove temporal redundancy by generating virtual point clouds based on the decoded frames. We only need to code the difference between the original LiDAR data and the intra/inter-predicted point cloud data. The point cloud map can be reconstructed according to the decoded point cloud sequence and quaternion matrices. Experiments on the KITTI dataset show that the proposed coding scheme can largely eliminate the temporal and spatial redundancies. The point cloud map can be encoded to 1/24 of its original size with 2 mm-level precision. Our algorithm also obtains better coding performance compared with the octree and Google Draco algorithms.
7
Marked-LIEO: Visual Marker-Aided LiDAR/IMU/Encoder Integrated Odometry. Sensors (Basel) 2022; 22:4749. PMID: 35808241; PMCID: PMC9269198; DOI: 10.3390/s22134749.
Abstract
In this paper, we propose Marked-LIEO, a visual marker-aided LiDAR/IMU/encoder integrated odometry, to achieve pose estimation of mobile robots in indoor long-corridor environments. In the first stage, we design pre-integration models for the encoder and the IMU to estimate the pose; this estimate provides the prediction for the LiDAR odometry in the second stage. In the second stage, we design a low-frequency visual marker odometry, which is jointly optimized with the LiDAR odometry to obtain the final pose estimation. To address wheel slipping and LiDAR degradation, we design an algorithm that adaptively adjusts the optimization weights of the encoder odometry and the LiDAR odometry according to the yaw angle and the LiDAR degradation distance, respectively. Finally, we realize multi-sensor fusion localization through joint optimization of encoder, IMU, LiDAR, and camera measurements. Against the loss of GNSS information and LiDAR degradation in indoor corridors, this method introduces the state prediction information of the encoder and IMU and the absolute observations of visual markers to achieve accurate poses, which has been verified by experiments in the Gazebo simulation environment and in a real environment.
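The adaptive weighting idea above can be sketched as a small function that down-weights the encoder when the yaw rate suggests wheel slip and down-weights LiDAR when the degradation distance indicates a feature-poor corridor. The functional forms and thresholds below are illustrative assumptions, not the paper's tuned scheme:

```python
def fusion_weights(yaw_rate, degen_distance,
                   yaw_thresh=0.5, degen_thresh=10.0):
    """Adaptively reweight encoder odometry vs LiDAR odometry.
    High |yaw_rate| hints at wheel slip, so trust the encoder less;
    a short degradation distance hints at a degenerate (feature-poor)
    scene, so trust LiDAR less. Thresholds are assumed values."""
    w_enc = 1.0 / (1.0 + (abs(yaw_rate) / yaw_thresh) ** 2)
    w_lidar = min(1.0, degen_distance / degen_thresh)
    s = w_enc + w_lidar
    # Normalize so the two weights form a convex combination.
    return w_enc / s, w_lidar / s
```

In a joint optimization these weights would scale the residuals of the encoder and LiDAR odometry factors rather than replace them outright.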
8
Liu Y, Zhang W, Li F, Zuo Z, Huang Q. Real-Time Lidar Odometry and Mapping with Loop Closure. Sensors (Basel) 2022; 22:4373. PMID: 35746155; PMCID: PMC9228722; DOI: 10.3390/s22124373.
Abstract
Real-time performance and global consistency are extremely important in Simultaneous Localization and Mapping (SLAM) problems. Classic lidar-based SLAM systems often consist of front-end odometry and back-end pose optimization. However, due to expensive computation, it is often difficult to achieve loop-closure detection without compromising the real-time performance of the odometry. We propose a SLAM system in which scan-to-submap-based local lidar odometry is separated from global pose optimization based on submap construction and loop-closure detection. In our work, extracted edge and surface feature points are inserted into two consecutive feature submaps and added to the pose graph prepared for loop-closure detection and global pose optimization. In addition, a submap is added to the pose graph for global data association when it is marked as finished. In particular, a method to filter out false loops is proposed to accelerate the construction of constraints in the pose graph. The proposed method is evaluated on public datasets and achieves competitive performance, with a pose estimation frequency of over 15 Hz in local lidar odometry and low drift in global consistency.
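One common way to filter out false loops, sketched below, is to reject candidates before expensive scan matching: a true loop candidate must be spatially close to the current pose yet far away along the trajectory, so that consecutive scans are never mistaken for loop closures. The thresholds are illustrative, not the paper's tuned values:

```python
import math

def filter_loop_candidates(poses, cur_idx, search_radius=5.0,
                           min_index_gap=50, max_candidates=3):
    """Cheap pre-filter for loop-closure candidates: keep earlier
    poses within search_radius of the current pose, but only if they
    are at least min_index_gap steps back in the trajectory, and
    bound matching cost by keeping the closest few."""
    cx, cy = poses[cur_idx]
    candidates = []
    for i, (x, y) in enumerate(poses[:cur_idx]):
        if cur_idx - i < min_index_gap:
            continue  # too recent in the trajectory: not a true loop
        if math.hypot(x - cx, y - cy) <= search_radius:
            candidates.append(i)
    # Keep only the spatially closest few candidates.
    candidates.sort(key=lambda i: math.hypot(poses[i][0] - cx,
                                             poses[i][1] - cy))
    return candidates[:max_candidates]
```

Only the surviving candidates would then be verified by scan matching and, if consistent, added as loop constraints to the pose graph.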
Affiliation(s)
- Yonghui Liu
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Weimin Zhang (corresponding author)
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing Institute of Technology, Beijing 100081, China
- Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing 100081, China
- Fangxing Li
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing Institute of Technology, Beijing 100081, China
- Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing 100081, China
- Zhengqing Zuo
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Qiang Huang
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing Institute of Technology, Beijing 100081, China
- Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing 100081, China
9
3D Global Localization in the Underground Mine Environment Using Mobile LiDAR Mapping and Point Cloud Registration. Sensors (Basel) 2022; 22:2873. PMID: 35458856; PMCID: PMC9029966; DOI: 10.3390/s22082873.
Abstract
This study proposes a 3D global localization method that implements mobile LiDAR mapping and point cloud registration to recognize the locations of objects in an underground mine. An initial global point cloud map was built for an entire underground mine area using mobile LiDAR, and a local LiDAR scan (local point cloud) was generated at each point where underground positioning was required. We calculated fast point feature histogram (FPFH) descriptors for the global and local point clouds to extract point features. The matching areas between the global and local point clouds were searched and aligned using random sample consensus (RANSAC) and iterative closest point (ICP) registration. The object's location in the global coordinate system was measured using the LiDAR sensor trajectory. Field experiments were performed at the Gwan-in underground mine using three mobile LiDAR systems. The local point cloud datasets formed for the six areas of the underground mine precisely matched the global point cloud, with a low average error of approximately 0.13 m, regardless of the type of mobile LiDAR system used. In addition, the LiDAR sensor trajectory was aligned to the global coordinate system to confirm the change in a dynamic object's position over time.
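The core refinement step of the pipeline above can be illustrated with a bare-bones point-to-point ICP: match every source point to its nearest target point, then solve the best rigid transform in closed form via SVD (Kabsch), and iterate. This is a minimal stand-in for the full FPFH + RANSAC + ICP pipeline (libraries such as Open3D provide complete implementations of all three stages):

```python
import numpy as np

def icp_align(source, target, iters=20):
    """Minimal point-to-point ICP with brute-force nearest-neighbour
    correspondences and a closed-form (Kabsch/SVD) rigid update.
    Returns the accumulated rotation, translation, and aligned points."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    dim = src.shape[1]
    R_total, t_total = np.eye(dim), np.zeros(dim)
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force).
        d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
        matched = tgt[d2.argmin(axis=1)]
        # Closed-form rigid alignment of src onto matched points.
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.eye(dim)
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        # Compose the incremental transform into the running total.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

In the mine-localization setting, RANSAC over FPFH matches would supply the coarse initial alignment between the local scan and the global map, and an ICP step like this one would refine it to centimeter level.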