1
Ma Y, Wang B, Lin H, Liu C, Hu M, Song Q. A continuation method for image registration based on dynamic adaptive kernel. Neural Netw 2023; 165:774-785. [PMID: 37418860 DOI: 10.1016/j.neunet.2023.06.025]
Abstract
Image registration is a fundamental problem in computer vision and robotics. Recently, learning-based image registration methods have made great progress. However, these methods are sensitive to abnormal transformations and insufficiently robust, which leads to more mismatched points in real environments. In this paper, we propose a new registration framework based on ensemble learning and a dynamic adaptive kernel. Specifically, we first use a dynamic adaptive kernel to extract deep features at the coarse level to guide fine-level registration. Then we add an adaptive feature pyramid network, based on the ensemble learning principle, to perform fine-level feature extraction. Through receptive fields at different scales, not only the local geometric information of each point but also its low-level texture information at the pixel level is considered. Fine features are obtained adaptively according to the actual registration environment, reducing the model's sensitivity to abnormal transformations. We use the global receptive field provided by the transformer to obtain feature descriptors based on these two levels. In addition, we train the network with a cosine loss defined directly on the correspondences and balance the samples, achieving feature-point registration based on those correspondences. Extensive experiments on object-level and scene-level datasets show that the proposed method outperforms existing state-of-the-art techniques by a large margin. More critically, it has the best generalization ability in unknown scenes with different sensor modalities.
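The "cosine loss defined directly on the correspondences" can be illustrated as follows; this is a hypothetical reconstruction from the abstract (descriptor shapes and the exact loss form are assumptions, not the authors' implementation):

```python
import numpy as np

def cosine_correspondence_loss(desc_a, desc_b):
    """Mean (1 - cosine similarity) over matched descriptor pairs.

    desc_a, desc_b: (N, D) arrays of descriptors for N corresponding points.
    The loss is 0 when every pair of descriptors points the same way.
    """
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    cos_sim = np.sum(a * b, axis=1)        # per-pair cosine similarity
    return float(np.mean(1.0 - cos_sim))

# Identical descriptors give (numerically) zero loss.
d = np.random.default_rng(0).normal(size=(8, 32))
print(cosine_correspondence_loss(d, d))  # ≈ 0.0
```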
Affiliation(s)
- Yuandong Ma
- Beijing University of Posts and Telecommunications, Beijing 10000, China.
- Boyuan Wang
- Beijing University of Posts and Telecommunications, Beijing 10000, China.
- Hezheng Lin
- Beijing University of Posts and Telecommunications, Beijing 10000, China; Beijing Yixiao Technology Co., Ltd., Beijing 10000, China.
- Chun Liu
- Beijing University of Posts and Telecommunications, Beijing 10000, China.
- Mengjie Hu
- Beijing University of Posts and Telecommunications, Beijing 10000, China.
- Qing Song
- Beijing University of Posts and Telecommunications, Beijing 10000, China.
2
Fu K, Luo J, Luo X, Liu S, Zhang C, Wang M. Robust Point Cloud Registration Framework Based on Deep Graph Matching. IEEE Trans Pattern Anal Mach Intell 2023; 45:6183-6195. [PMID: 36067105 DOI: 10.1109/tpami.2022.3204713]
Abstract
3D point cloud registration is a fundamental problem in computer vision and robotics. Recently, learning-based point cloud registration methods have made great progress. However, these methods are sensitive to outliers, which lead to more incorrect correspondences. In this paper, we propose a novel deep graph matching-based framework for point cloud registration. Specifically, we first transform point clouds into graphs and extract deep features for each point. Then, we develop a module based on deep graph matching to calculate a soft correspondence matrix. By using graph matching, not only the local geometry of each point but also its structure and topology in a larger range are considered in establishing correspondences, so that more correct correspondences are found. We train the network with a loss directly defined on the correspondences, and in the test stage the soft correspondences are transformed into hard one-to-one correspondences so that registration can be performed by a correspondence-based solver. Furthermore, we introduce a transformer-based method to generate edges for graph construction, which further improves the quality of the correspondences. Extensive experiments on object-level and scene-level benchmark datasets show that the proposed method achieves state-of-the-art performance.
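The "correspondence-based solver" used in the test stage is commonly the closed-form SVD (Kabsch) solution for the least-squares rigid transform; a minimal sketch (standard technique, not the authors' code):

```python
import numpy as np

def solve_rigid_transform(src, tgt):
    """Least-squares rigid transform (R, t) such that tgt ≈ R @ src + t.

    src, tgt: (N, 3) arrays of one-to-one corresponding points.
    """
    c_src, c_tgt = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - c_src).T @ (tgt - c_tgt)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = c_tgt - R @ c_src
    return R, t

# Recover a known rotation about z plus a translation.
rng = np.random.default_rng(1)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
tgt = src @ R_true.T + t_true
R, t = solve_rigid_transform(src, tgt)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # → True True
```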
3
Qi L, Wu F, Ge Z, Sun Y. DeepMatch: Toward Lightweight in Point Cloud Registration. Front Neurorobot 2022; 16:891158. [PMID: 35923220 PMCID: PMC9339710 DOI: 10.3389/fnbot.2022.891158]
Abstract
Point cloud registration solves for the rigid-body transformation that aligns a source point cloud to a target point cloud. Iterative Closest Point (ICP) and other traditional algorithms require a long registration time and are prone to falling into local optima. Learning-based algorithms such as Deep Closest Point (DCP) perform better than traditional algorithms and escape local optima. However, they are still not perfectly robust and rely on complex model designs because the extracted local features are susceptible to noise. In this study, we propose a lightweight point cloud registration algorithm, DeepMatch. DeepMatch extracts a per-point feature that encodes a spatial structure composed of the point itself, the center point of the point cloud, and the point farthest from it. Because of the superiority of this per-point feature, the computing resources and time DeepMatch requires to complete training are less than one-tenth of those of other learning-based algorithms with similar performance. In addition, experiments show that our algorithm achieves state-of-the-art (SOTA) performance on clean data, data with Gaussian noise, and unseen-category datasets. On unseen categories, compared to the previous best learning-based point cloud registration algorithms, the registration error of DeepMatch is reduced by two orders of magnitude, matching its performance on the categories seen in training, which shows that DeepMatch generalizes in point cloud registration tasks. Finally, only DeepMatch achieves 100% recall on all three test sets.
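The per-point spatial feature described above (each point, the cloud's center point, and that point's farthest point) can be sketched as follows; the exact construction in DeepMatch may differ:

```python
import numpy as np

def deepmatch_style_features(points):
    """Per-point spatial feature: [point, cloud centroid, farthest point].

    points: (N, 3) array. Returns an (N, 9) feature array. A sketch of the
    idea described in the abstract, not the authors' exact construction.
    """
    centroid = points.mean(axis=0)
    # Pairwise squared distances; pick the farthest point for each row.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    farthest = points[d2.argmax(axis=1)]
    return np.concatenate(
        [points, np.broadcast_to(centroid, points.shape), farthest], axis=1
    )

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
feats = deepmatch_style_features(pts)
print(feats.shape)  # → (3, 9)
```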
Affiliation(s)
- Lizhe Qi
- Intelligent Industrial Robot and Intelligent Manufacturing Laboratory, Ministry of Education's Engineering Research Center of AI and Robotics, Academy for Engineering and Technology, Fudan University, Shanghai, China
- Intelligent Industrial Robot and Intelligent Manufacturing Laboratory, Shanghai Engineering Research Center of AI and Robotics, Academy for Engineering and Technology, Fudan University, Shanghai, China
- Fuwang Wu
- Intelligent Industrial Robot and Intelligent Manufacturing Laboratory, Ministry of Education's Engineering Research Center of AI and Robotics, Academy for Engineering and Technology, Fudan University, Shanghai, China
- Intelligent Industrial Robot and Intelligent Manufacturing Laboratory, Shanghai Engineering Research Center of AI and Robotics, Academy for Engineering and Technology, Fudan University, Shanghai, China
- Zuhao Ge
- Intelligent Industrial Robot and Intelligent Manufacturing Laboratory, Ministry of Education's Engineering Research Center of AI and Robotics, Academy for Engineering and Technology, Fudan University, Shanghai, China
- Intelligent Industrial Robot and Intelligent Manufacturing Laboratory, Shanghai Engineering Research Center of AI and Robotics, Academy for Engineering and Technology, Fudan University, Shanghai, China
- Yuquan Sun
- Intelligent Industrial Robot and Intelligent Manufacturing Laboratory, Ministry of Education's Engineering Research Center of AI and Robotics, Academy for Engineering and Technology, Fudan University, Shanghai, China
- Intelligent Industrial Robot and Intelligent Manufacturing Laboratory, Shanghai Engineering Research Center of AI and Robotics, Academy for Engineering and Technology, Fudan University, Shanghai, China
4
A Robot Pose Estimation Optimized Visual SLAM Algorithm Based on CO-HDC Instance Segmentation Network for Dynamic Scenes. Remote Sens 2022. [DOI: 10.3390/rs14092114]
Abstract
In order to improve the accuracy of visual SLAM algorithms in dynamic scenes, instance segmentation is widely used to eliminate dynamic feature points. However, existing segmentation techniques have low accuracy, especially at object contours, and the computational cost of instance segmentation is large, limiting the speed of visual SLAM based on it. Therefore, this paper proposes a contour optimization hybrid dilated convolutional neural network (CO-HDC) algorithm, which keeps computation lightweight while improving the accuracy of contour segmentation. Firstly, a hybrid dilated convolutional neural network (HDC) is used to increase the receptive field, defined as the size of the input region that produces a feature. Secondly, the contour quality evaluation (CQE) algorithm is proposed to enhance contours, retaining the highest-quality contour and solving the problem of distinguishing dynamic from static feature points at the contour. Finally, to match the mapping speed of visual SLAM, the Beetle Antennae Search Douglas-Peucker (BAS-DP) algorithm is proposed to make contour extraction lightweight. The experimental results demonstrate that the proposed visual SLAM based on the CO-HDC algorithm performs well in pose estimation and map construction on the TUM dataset. Compared with ORB-SLAM2, the root mean squared error (RMSE) of the proposed method's absolute trajectory error is about 30 times smaller, at only 0.02 m.
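The reported metric, RMSE of absolute trajectory error, can be computed as follows (a minimal sketch assuming the estimated and ground-truth trajectories are already time-matched and expressed in a common frame):

```python
import numpy as np

def ate_rmse(est_positions, gt_positions):
    """RMSE of absolute trajectory error over matched camera positions.

    est_positions, gt_positions: (N, 3) camera positions at matched
    timestamps, assumed already aligned to a common coordinate frame.
    """
    err = np.linalg.norm(est_positions - gt_positions, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

gt = np.zeros((4, 3))
est = np.array([[0.02, 0.0, 0.0]] * 4)   # constant 2 cm offset per pose
print(ate_rmse(est, gt))  # ≈ 0.02
```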
5
Han L, Gu S, Zhong D, Quan S, Fang L. Real-Time Globally Consistent Dense 3D Reconstruction With Online Texturing. IEEE Trans Pattern Anal Mach Intell 2022; 44:1519-1533. [PMID: 32877330 DOI: 10.1109/tpami.2020.3021023]
Abstract
High-quality reconstruction of 3D geometry and texture plays a vital role in providing an immersive perception of the real world. Additionally, online computation enables the practical use of 3D reconstruction for interaction. We present an RGBD-based globally consistent dense 3D reconstruction approach, where high-quality (i.e., at the spatial resolution of the RGB image) texture patches are mapped onto high-resolution ([Formula: see text]) geometric models online. The whole pipeline relies solely on the CPU of a portable device. For real-time geometric reconstruction with online texturing, we propose to solve the texture optimization problem with a simplified incremental MRF solver within the geometric reconstruction pipeline, using a sparse voxel sampling strategy. An efficient reference-based color adjustment scheme is also proposed to achieve consistent texture patch colors under inconsistent luminance. Quantitative and qualitative experiments demonstrate that our online scheme achieves a realistic visualization of the environment with more abundant detail, while requiring fairly compact memory and much lower computational complexity than existing solutions.
6
Fu Q, Yu H, Wang X, Yang Z, He Y, Zhang H, Mian A. Fast ORB-SLAM Without Keypoint Descriptors. IEEE Trans Image Process 2022; 31:1433-1446. [PMID: 34951846 DOI: 10.1109/tip.2021.3136710]
Abstract
Indirect methods for visual SLAM are gaining popularity due to their robustness to environmental variations. ORB-SLAM2 (Mur-Artal and Tardós, 2017) is a benchmark method in this domain; however, it spends significant time computing descriptors that never get reused unless a frame is selected as a keyframe. To overcome this problem, we present FastORB-SLAM, which is lightweight and efficient because it tracks keypoints between adjacent frames without computing descriptors. To achieve this, a two-stage descriptor-independent keypoint matching method based on sparse optical flow is proposed. In the first stage, we predict initial keypoint correspondences via a simple but effective motion model and then robustly establish the correspondences via pyramid-based sparse optical flow tracking. In the second stage, we leverage motion smoothness and epipolar geometry constraints to refine the correspondences. In particular, our method computes descriptors only for keyframes. We test FastORB-SLAM on the TUM and ICL-NUIM RGB-D datasets and compare its accuracy and efficiency to nine existing RGB-D SLAM methods. Qualitative and quantitative results show that our method achieves state-of-the-art accuracy and is about twice as fast as ORB-SLAM2.
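The second-stage epipolar-geometry check can be sketched as a distance-to-epipolar-line filter; the fundamental matrix F and the one-pixel threshold below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def epipolar_filter(pts1, pts2, F, thresh=1.0):
    """Keep matches whose second-frame point lies within `thresh` pixels of
    the epipolar line l = F @ x1 induced by the first-frame point.

    pts1, pts2: (N, 2) matched pixel coordinates; F: 3x3 fundamental matrix.
    Returns an (N,) boolean inlier mask.
    """
    h1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    h2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines = h1 @ F.T                                  # epipolar lines in image 2
    num = np.abs(np.sum(lines * h2, axis=1))          # |l . x2|
    den = np.linalg.norm(lines[:, :2], axis=1)        # line normalization
    return num / den < thresh

# F for a pure x-translation: epipolar lines are horizontal (y2 = y1).
F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
pts1 = np.array([[10.0, 20.0], [30.0, 40.0]])
pts2 = np.array([[15.0, 20.0], [35.0, 43.0]])  # second match drifts 3 px in y
print(epipolar_filter(pts1, pts2, F))  # → [ True False]
```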
7
SCRnet: A Spatial Consistency Guided Network Using Contrastive Learning for Point Cloud Registration. Symmetry (Basel) 2022. [DOI: 10.3390/sym14010140]
Abstract
Point cloud registration finds a rigid transformation from a source point cloud to a target point cloud. The main challenge is finding correct correspondences in complex scenes that may contain heavy noise and repetitive structures. Many existing methods use outlier rejection to help the network obtain more accurate correspondences, but they often ignore the spatial consistency between keypoints. To address this issue, we propose a spatial consistency guided network using contrastive learning for point cloud registration (SCRnet), whose overall pipeline is symmetric. SCRnet consists of four blocks: a feature extraction block, a confidence estimation block, a contrastive learning block, and a registration block. Firstly, we use mini-PointNet to extract coarse local and global features. Secondly, we propose the confidence estimation block, which formulates outlier rejection as a confidence estimation problem over keypoint correspondences. In addition, local spatial features are encoded into the confidence estimation block, so that correspondences possess local spatial consistency. Moreover, we propose the contrastive learning block, constructing positive point pairs and hard negative point pairs and using a Point-Pair-INfoNCE contrastive loss, which further removes hard outliers through global spatial consistency. Finally, the proposed registration block selects a set of matching points with high spatial consistency and uses these matching sets to calculate multiple transformations; the best transformation is then identified by initial alignment and the Iterative Closest Point (ICP) algorithm. Extensive experiments conducted on the KITTI and nuScenes datasets demonstrate the high accuracy and strong robustness of SCRnet on the point cloud registration task.
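The spatial consistency cue that SCRnet exploits rests on the fact that rigid motion preserves pairwise distances; a minimal consistency-scoring sketch (not the network itself, and the tolerance is an assumed value):

```python
import numpy as np

def spatial_consistency_scores(src, tgt, eps=0.1):
    """Score each correspondence by the fraction of other correspondences it
    is pairwise-consistent with: under a rigid transform,
    |d(src_i, src_j) - d(tgt_i, tgt_j)| should be small for inlier pairs.

    src, tgt: (N, 3) corresponding points. Returns (N,) scores in [0, 1].
    """
    d_src = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    d_tgt = np.linalg.norm(tgt[:, None] - tgt[None, :], axis=-1)
    consistent = np.abs(d_src - d_tgt) < eps
    np.fill_diagonal(consistent, False)            # ignore self-pairs
    return consistent.sum(axis=1) / (len(src) - 1)

rng = np.random.default_rng(2)
src = rng.normal(size=(20, 3))
tgt = src + np.array([1.0, 0.0, 0.0])   # pure translation: all inliers
tgt[0] += 5.0                           # corrupt one correspondence
scores = spatial_consistency_scores(src, tgt)
print(scores.argmin())  # → 0 (the corrupted correspondence scores lowest)
```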
8
Local feature extraction network with high correspondences for 3D point cloud registration. Appl Intell 2022. [DOI: 10.1007/s10489-021-03055-1]
9
SAP-Net: A Simple and Robust 3D Point Cloud Registration Network Based on Local Shape Features. Sensors 2021; 21:7177. [PMID: 34770483 PMCID: PMC8587363 DOI: 10.3390/s21217177]
Abstract
Point cloud registration is a key step in the reconstruction of 3D data models. The traditional ICP registration algorithm depends on the initial position of the point cloud and may otherwise become trapped in local optima. In addition, registration methods based on PointNet feature learning cannot directly or effectively extract local features. To solve these two problems, this paper proposes SAP-Net, inspired by CorsNet and PointNet++, as an optimized CorsNet. More specifically, SAP-Net first uses the set abstraction layer from PointNet++ as the feature extraction layer and then combines the global features with the initial template point cloud. Finally, PointNet is used as the transform prediction layer to directly obtain the six parameters required for point cloud registration, which define the rotation matrix and the translation vector. Experiments on the ModelNet40 dataset and real data show that SAP-Net not only outperforms ICP and CorsNet on both seen and unseen categories of point clouds but also has stronger robustness.
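One common way to turn six predicted parameters into the rotation matrix and translation vector is via Euler angles; the Rz·Ry·Rx convention below is an assumption, since the abstract does not specify one:

```python
import numpy as np

def params_to_transform(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 homogeneous transform from three Euler angles (radians,
    composed as Rz @ Ry @ Rx) and a translation vector."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

# Rotate (1,0,0) by 90 degrees about z, then shift by (1,0,0).
T = params_to_transform(0.0, 0.0, np.pi / 2, 1.0, 0.0, 0.0)
p = T @ np.array([1.0, 0.0, 0.0, 1.0])
print(np.round(p[:3], 6))  # → [1. 1. 0.]
```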
10
Min Z, Liu J, Liu L, Meng MQH. Generalized Coherent Point Drift With Multi-Variate Gaussian Distribution and Watson Distribution. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3093011]
11
Shu Z, Cao S, Jiang Q, Xu Z, Tang J, Zhou Q. Pairwise Registration Algorithm for Large-Scale Planar Point Cloud Used in Flatness Measurement. Sensors 2021; 21:4860. [PMID: 34300603 PMCID: PMC8309750 DOI: 10.3390/s21144860]
Abstract
In this paper, an optimized three-dimensional (3D) pairwise point cloud registration algorithm is proposed, which is used for flatness measurement based on a laser profilometer. The objective is to achieve a fast and accurate six-degrees-of-freedom (6-DoF) pose estimation of a large-scale planar point cloud to ensure that the flatness measurement is precise. To that end, the proposed algorithm extracts the boundary of the point cloud to obtain more effective feature descriptors of the keypoints. Then, it eliminates the invalid keypoints by neighborhood evaluation to obtain the initial matching point pairs. Thereafter, clustering combined with the geometric consistency constraints of correspondences is conducted to realize coarse registration. Finally, the iterative closest point (ICP) algorithm is used to complete fine registration based on the boundary point cloud. The experimental results demonstrate that the proposed algorithm is superior to the current algorithms in terms of boundary extraction and registration performance.
12
Xu L, Su Z, Han L, Yu T, Liu Y, Fang L. UnstructuredFusion: Realtime 4D Geometry and Texture Reconstruction Using Commercial RGBD Cameras. IEEE Trans Pattern Anal Mach Intell 2020; 42:2508-2522. [PMID: 31071018 DOI: 10.1109/tpami.2019.2915229]
Abstract
High-quality 4D geometry and texture reconstruction of human activities usually requires multiview perception via a highly structured multi-camera setup, where both the specifically designed cameras and the tedious pre-calibration restrict the popularity of professional multi-camera systems for daily applications. In this paper, we propose UnstructuredFusion, a practical realtime markerless human performance capture method using unstructured commercial RGBD cameras. With a flexible hardware setup of simply three unstructured RGBD cameras without any careful pre-calibration, the challenging 4D reconstruction from multiple asynchronous videos is solved through three novel technical contributions: online multi-camera calibration, skeleton-warping-based non-rigid tracking, and temporal-blending-based atlas texturing. The overall insight lies in the solid global constraints of the human body and human motion, which are modeled by the skeleton and the skeleton warping, respectively. Extensive experiments, such as allocating three cameras flexibly in a handheld way, demonstrate that UnstructuredFusion achieves high-quality 4D geometry and texture reconstruction without tiresome pre-calibration, liberating it from the cumbersome hardware and software restrictions of conventional structured multi-camera systems while eliminating the inherent occlusion issues of a single-camera setup.
13
Han L, Zheng T, Zhu Y, Xu L, Fang L. Live Semantic 3D Perception for Immersive Augmented Reality. IEEE Trans Vis Comput Graph 2020; 26:2012-2022. [PMID: 32070983 DOI: 10.1109/tvcg.2020.2973477]
Abstract
Semantic understanding of 3D environments is critical both for unmanned systems and for human-involved virtual/augmented reality (VR/AR) immersive experiences. Spatially-sparse convolution, taking advantage of the intrinsic sparsity of 3D point cloud data, makes high-resolution 3D convolutional neural networks tractable, with state-of-the-art results on 3D semantic segmentation problems. However, the exhaustive computations limit the practical use of semantic 3D perception for VR/AR applications on portable devices. In this paper, we identify that the efficiency bottleneck lies in the unorganized memory access of the sparse convolution steps, i.e., the points are stored independently based on a predefined dictionary, which is inefficient given the limited memory bandwidth of parallel computing devices (GPUs). With the insight that points lie on continuous 2D surfaces in 3D space, a chunk-based sparse convolution scheme is proposed to reuse neighboring points within each spatially organized chunk. An efficient multi-layer adaptive fusion module is further proposed, employing the spatial consistency cue of 3D data to further reduce the computational burden. Quantitative experiments on public datasets demonstrate that our approach runs 11× faster than previous approaches with competitive accuracy. By implementing semantic and geometric 3D reconstruction simultaneously on a portable tablet device, we demonstrate a foundation platform for immersive AR applications.
14
A Novel RGB-D SLAM Algorithm Based on Cloud Robotics. Sensors 2019; 19:5288. [PMID: 31805628 PMCID: PMC6928679 DOI: 10.3390/s19235288]
Abstract
In this paper, we present a novel red-green-blue-depth simultaneous localization and mapping (RGB-D SLAM) algorithm based on cloud robotics, which combines RGB-D SLAM with the cloud robot and offloads the back-end process of the RGB-D SLAM algorithm to the cloud. This paper analyzes the front and back parts of the original RGB-D SLAM algorithm and improves the algorithm from three aspects: feature extraction, point cloud registration, and pose optimization. Experiments show the superiority of the improved algorithm. In addition, taking advantage of the cloud robotics, the RGB-D SLAM algorithm is combined with the cloud robot and the back-end part of the computationally intensive algorithm is offloaded to the cloud. Experimental validation is provided, which compares the cloud robotic-based RGB-D SLAM algorithm with the local RGB-D SLAM algorithm. The results of the experiments demonstrate the superiority of our framework. The combination of cloud robotics and RGB-D SLAM can not only improve the efficiency of SLAM but also reduce the robot’s price and size.
15
Real-Time RGB-D Simultaneous Localization and Mapping Guided by Terrestrial LiDAR Point Cloud for Indoor 3-D Reconstruction and Camera Pose Estimation. Appl Sci (Basel) 2019. [DOI: 10.3390/app9163264]
Abstract
In recent years, low-cost and lightweight RGB and depth (RGB-D) sensors, such as the Microsoft Kinect, have made rich image and depth data available, making them very popular in the field of simultaneous localization and mapping (SLAM), which is increasingly used in robotics, self-driving vehicles, and augmented reality. RGB-D SLAM constructs 3D environmental models of natural landscapes while simultaneously estimating camera poses. However, in environments with highly variable illumination and motion blur, long-distance tracking can result in large cumulative errors and scale drift. To address this problem in practical applications, in this study we propose a novel multithreaded RGB-D SLAM framework that incorporates a highly accurate prior terrestrial Light Detection and Ranging (LiDAR) point cloud, which can mitigate cumulative errors and improve the system's robustness in large-scale and challenging scenarios. First, we employed deep learning to achieve automatic system initialization and motion recovery when tracking is lost. Next, we used the terrestrial LiDAR point cloud as prior data about the landscape, applied a point-to-surface iterative closest point (ICP) algorithm to realize accurate camera pose control from the previously obtained LiDAR point cloud data, and finally expanded its control range in local map construction. Furthermore, an innovative double-window segment-based map optimization method is proposed to ensure consistency, better real-time performance, and high accuracy of map construction. The proposed method was tested for long-distance tracking and closed loops in two different large indoor scenarios. The experimental results indicated that the standard deviation of the 3D map construction is 10 cm over a mapping distance of 100 m, compared with the LiDAR ground truth. Further, the relative cumulative error of the camera in the closed-loop experiments is 0.09%, far lower than that of a typical SLAM algorithm (3.4%). Therefore, the proposed method was demonstrated to be more robust than the ORB-SLAM2 algorithm in complex indoor environments.
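The point-to-surface ICP step above minimizes, per correspondence, the distance of a camera-frame point from the tangent plane at its matched LiDAR point; a sketch of that residual (the paper's exact formulation may differ):

```python
import numpy as np

def point_to_plane_residuals(src, tgt, tgt_normals):
    """Point-to-plane residual r_i = n_i . (src_i - tgt_i): the signed
    distance of each source point from the tangent plane at its matched
    target (LiDAR) point.

    src, tgt, tgt_normals: (N, 3) arrays; normals are assumed unit length.
    """
    return np.sum(tgt_normals * (src - tgt), axis=1)

# A flat floor (normals along +z): only the height error contributes,
# while in-plane sliding of the match is ignored.
tgt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
src = np.array([[0.3, 0.2, 0.05], [1.4, -0.1, -0.02]])
res = point_to_plane_residuals(src, tgt, normals)
print(res)  # → [ 0.05 -0.02]
```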