1
LRPL-VIO: A Lightweight and Robust Visual-Inertial Odometry with Point and Line Features. Sensors (Basel) 2024; 24:1322. PMID: 38400480; PMCID: PMC10892506; DOI: 10.3390/s24041322.
Abstract
Visual-inertial odometry (VIO) algorithms that fuse multiple feature types, such as points and lines, can improve performance in challenging scenes, but at the cost of a severe increase in running time. In this paper, we propose a novel lightweight point-line visual-inertial odometry algorithm, called LRPL-VIO, to solve this problem. Firstly, a fast line matching method is proposed based on the assumption that the photometric values of line endpoints and midpoints are invariant between consecutive frames, which greatly reduces the time consumption of the front end. Then, an efficient filter-based state estimation framework is designed to fuse point, line, and inertial information. Fresh line-feature measurements with good tracking quality are selected for state estimation using a unique feature selection scheme, which improves the efficiency of the proposed algorithm. Finally, validation experiments conducted on public datasets and in real-world tests show that LRPL-VIO outperforms other state-of-the-art algorithms, especially in terms of speed and robustness.
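The endpoint/midpoint photometric-invariance assumption behind the fast line matching can be illustrated with a minimal sketch. The function names, the use of exactly three sample points, and the per-point tolerance `tol` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def photometric_line_match(prev_img, curr_img, line_prev, line_curr, tol=10.0):
    """Accept a candidate line match if the intensities sampled at the two
    endpoints and the midpoint are nearly unchanged between frames."""
    def sample(img, pts):
        # Nearest-neighbour intensity lookup at (x, y) sample points.
        pts = np.asarray(pts)
        xs = np.clip(np.round(pts[:, 0]).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(pts[:, 1]).astype(int), 0, img.shape[0] - 1)
        return img[ys, xs].astype(float)

    def keypoints(line):
        p0, p1 = np.asarray(line, dtype=float)
        return [p0, (p0 + p1) / 2.0, p1]   # endpoints plus midpoint

    diff = np.abs(sample(prev_img, keypoints(line_prev))
                  - sample(curr_img, keypoints(line_curr)))
    return bool(np.all(diff < tol))

# Toy check: a constant-intensity image trivially satisfies the assumption.
img = np.full((40, 60), 128, dtype=np.uint8)
print(photometric_line_match(img, img, [(5, 5), (50, 30)], [(6, 5), (51, 30)]))
```

Checking only three photometric samples per line is what makes this test far cheaper than descriptor-based line matching.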
2
Design and experiments with a SLAM system for low-density canopy environments in greenhouses based on an improved Cartographer framework. Frontiers in Plant Science 2024; 15:1276799. PMID: 38362453; PMCID: PMC10867628; DOI: 10.3389/fpls.2024.1276799.
Abstract
To address the problem that the low-density canopy of greenhouse crops affects the robustness and accuracy of simultaneous localization and mapping (SLAM) algorithms, a greenhouse map construction method for agricultural robots based on multiline LiDAR was investigated. Based on the Cartographer framework, this paper proposes a map construction and localization method based on spatial downsampling. Taking suspended tomato plants planted in greenhouses as the research object, an adaptive filtering point cloud projection (AF-PCP) SLAM algorithm was designed. Using a wheel odometer, 16-line LiDAR point cloud data based on adaptive vertical projections were linearly interpolated to construct a map and perform high-precision pose estimation in a greenhouse with a low-density canopy environment. Experiments were carried out in canopy environments with leaf area densities (LADs) of 2.945-5.301 m²/m³. The results showed that the AF-PCP SLAM algorithm increased the average mapping area of the crop rows by 155.7% compared with the Cartographer algorithm. The mean error and coefficient of variation of the crop row length were 0.019 m and 0.217%, respectively, 77.9% and 87.5% lower than those of the Cartographer algorithm. The average maximum void length was 0.124 m, 72.8% lower than that of the Cartographer algorithm. Localization experiments were carried out at speeds of 0.2 m/s, 0.4 m/s, and 0.6 m/s. The average relative localization errors at these speeds were 0.026 m, 0.029 m, and 0.046 m, respectively, and the standard deviation was less than 0.06 m. Compared with a dead-reckoning (track deduction) algorithm, the proposed algorithm reduced the average localization error by 79.9%. These results show that the proposed framework can map and localize robots precisely even in low-density canopy environments in greenhouses, demonstrating its capability and highlighting its promise for the autonomous navigation of agricultural robots.
3
Multi-Sensor Fusion Simultaneous Localization Mapping Based on Deep Reinforcement Learning and Multi-Model Adaptive Estimation. Sensors (Basel) 2023; 24:48. PMID: 38202911; DOI: 10.3390/s24010048.
Abstract
In this study, we designed a multi-sensor fusion technique based on deep reinforcement learning (DRL) mechanisms and multi-model adaptive estimation (MMAE) for simultaneous localization and mapping (SLAM). The LiDAR-based point-to-line iterative closest point (PLICP) and RGB-D camera-based ORBSLAM2 methods were utilized to estimate the localization of mobile robots. Residual-value anomaly detection was combined with a Proximal Policy Optimization (PPO)-based DRL model to optimally adjust the weights among the different localization algorithms. Two kinds of indoor simulation environments were established using the Gazebo simulator to validate the localization performance of the proposed multi-model adaptive estimation method. The experimental results confirmed that the proposed method can effectively fuse the localization information from multiple sensors and enable mobile robots to achieve higher localization accuracy than the traditional PLICP and ORBSLAM2. The proposed method was also found to increase the localization stability of mobile robots in complex environments.
4
LFVB-BioSLAM: A Bionic SLAM System with a Light-Weight LiDAR Front End and a Bio-Inspired Visual Back End. Biomimetics (Basel) 2023; 8:410. PMID: 37754161; PMCID: PMC10526866; DOI: 10.3390/biomimetics8050410.
Abstract
Simultaneous localization and mapping (SLAM) is one of the crucial techniques applied in autonomous robot navigation. The majority of present popular SLAM algorithms are built within probabilistic optimization frameworks, achieving high accuracy performance at the expense of high power consumption and latency. In contrast to robots, animals are born with the capability to efficiently and robustly navigate in nature, and bionic SLAM algorithms have received increasing attention recently. Current bionic SLAM algorithms, including RatSLAM, with relatively low accuracy and robustness, tend to fail in certain challenging environments. In order to design a bionic SLAM system with a novel framework and relatively high practicality, and to facilitate the development of bionic SLAM research, in this paper we present LFVB-BioSLAM, a bionic SLAM system with a light-weight LiDAR-based front end and a bio-inspired vision-based back end. We adopt a range flow-based LiDAR odometry as the front end of the SLAM system, providing the odometry estimation for the back end, and we propose a biologically-inspired back end processing algorithm based on the monocular RGB camera, performing loop closure detection and path integration. Our method is verified through real-world experiments, and the results show that LFVB-BioSLAM outperforms RatSLAM, a vision-based bionic SLAM algorithm, and RF2O, a laser-based horizontal planar odometry algorithm, in terms of accuracy and robustness.
5
A Robust Semi-Direct 3D SLAM for Mobile Robot Based on Dense Optical Flow in Dynamic Scenes. Biomimetics (Basel) 2023; 8:371. PMID: 37622976; PMCID: PMC10452154; DOI: 10.3390/biomimetics8040371.
Abstract
Dynamic objects introduce a large amount of accumulated error into the pose estimation of mobile robots in dynamic scenes and can result in maps that are inconsistent with the surrounding environment. Along these lines, this paper presents a robust semi-direct 3D simultaneous localization and mapping (SLAM) algorithm for mobile robots based on dense optical flow. First, a preliminary estimate of the robot's pose is obtained using the sparse direct method, and a homography matrix is used to compensate the current frame image, reducing the image deformation caused by rotation during the robot's motion. Then, the dense optical flow field of two adjacent frames is calculated, the dynamic regions in the scene are segmented based on a dynamic threshold, and the local map points that project into the dynamic regions are eliminated. On this basis, the robot's pose is optimized by minimizing the reprojection error. Moreover, a high-performance keyframe selection strategy is developed, and keyframes are inserted when the robot's pose is successfully tracked; feature points are then extracted and matched against the keyframes for subsequent optimization and mapping. Considering that the direct method is subject to tracking failure in practical application scenarios, the feature points and map points of keyframes are employed for robot relocation. Finally, all keyframes and map points are used as optimization variables for global bundle adjustment (BA), so as to construct a globally consistent 3D dense octree map. A series of simulations and experiments demonstrates the superior performance of the proposed algorithm.
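The dynamic-region step described above, thresholding the dense optical flow field and discarding local map points that project into dynamic regions, can be sketched roughly as follows. The flow representation, the threshold value, and the function names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def dynamic_mask(flow, thresh):
    """Mark pixels whose optical-flow magnitude exceeds a dynamic threshold.

    `flow` is an (H, W, 2) array of per-pixel (dx, dy) displacements between
    two adjacent frames, e.g. from a dense optical flow estimator.
    """
    mag = np.linalg.norm(flow, axis=2)
    return mag > thresh

def filter_points(points_uv, mask):
    """Discard projected map points that land inside the dynamic region."""
    pts = np.asarray(points_uv, dtype=int)
    keep = ~mask[pts[:, 1], pts[:, 0]]   # mask is indexed (row, col)
    return pts[keep]

# Toy flow field: static everywhere except a moving 10x10 patch.
flow = np.zeros((60, 80, 2))
flow[20:30, 40:50] = (3.0, 0.0)          # simulated dynamic object
mask = dynamic_mask(flow, thresh=1.0)
pts = filter_points([(10, 10), (45, 25)], mask)  # second point is dynamic
print(len(pts))   # 1: only the static point survives
```

In the full pipeline the surviving points would then feed the reprojection-error minimization; here the sketch only shows the masking logic.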
6
Integrating Sparse Learning-Based Feature Detectors into Simultaneous Localization and Mapping-A Benchmark Study. Sensors (Basel) 2023; 23:2286. PMID: 36850884; PMCID: PMC9961729; DOI: 10.3390/s23042286.
Abstract
Simultaneous localization and mapping (SLAM) is one of the cornerstones of autonomous navigation systems in robotics and the automotive industry. Visual SLAM (V-SLAM), which relies on image features, such as keypoints and descriptors, to estimate the pose transformation between consecutive frames, is a highly efficient and effective approach for gathering environmental information. With the rise of representation learning, feature detectors based on deep neural networks (DNNs) have emerged as an alternative to handcrafted solutions. This work examines the integration of sparse learned features into a state-of-the-art SLAM framework and benchmarks handcrafted and learning-based approaches through in-depth experiments. Specifically, we replace the ORB detector and BRIEF descriptor of the ORBSLAM3 pipeline with those provided by SuperPoint, a DNN model that jointly computes keypoints and descriptors. Experiments on three publicly available datasets from different application domains were conducted to evaluate the pose estimation performance and resource usage of both solutions.
7
Semantic-Structure-Aware Multi-Level Information Fusion for Robust Global Orientation Optimization of Autonomous Mobile Robots. Sensors (Basel) 2023; 23:1125. PMID: 36772164; PMCID: PMC9920800; DOI: 10.3390/s23031125.
Abstract
Multi-camera-based simultaneous localization and mapping (SLAM) has been widely applied in various mobile robots under uncertain or unknown environments to accomplish tasks autonomously. However, conventional purely data-driven feature extraction methods cannot utilize the rich semantic information in the environment, which makes the performance of the SLAM system susceptible to various interferences. In this work, we present a semantic-aware multi-level information fusion scheme for robust global orientation estimation. Specifically, a visual semantic perception system based on a synthesized surround view image is proposed for the multi-eye surround vision systems widely used in mobile robots, and is used to obtain the visual semantic information required for SLAM tasks. The original multi-eye image is first transformed into the synthesized surround view image, and the passable space is extracted with the help of a semantic segmentation network model as a mask for feature extraction; moreover, hybrid edge information is extracted to effectively eliminate distorted edges by exploiting the distortion characteristics of the reverse perspective projection process. The hybrid semantic information is then used for robust global orientation estimation, yielding better localization performance. Experiments on an intelligent vehicle used for automated valet parking in both indoor and outdoor scenes showed that the proposed hybrid multi-level information fusion method achieved at least a 10% improvement over other edge segmentation methods, with an average orientation estimation error between 1 and 2 degrees and a trajectory drift much smaller than those of other methods.
8
STV-SC: Segmentation and Temporal Verification Enhanced Scan Context for Place Recognition in Unstructured Environment. Sensors (Basel) 2022; 22:8604. PMID: 36433200; PMCID: PMC9694967; DOI: 10.3390/s22228604.
Abstract
Place recognition is an essential part of simultaneous localization and mapping (SLAM). LiDAR-based place recognition relies almost exclusively on geometric information. However, geometric information may become unreliable in environments dominated by unstructured objects. In this paper, we explore the role of segmentation in extracting key structured information. We propose STV-SC, a novel segmentation and temporal verification enhanced place recognition method for unstructured environments. It contains a range image-based 3D point segmentation algorithm and a three-stage loop detection process, consisting of a two-stage candidate loop search and a one-stage segmentation and temporal verification (STV) step. The STV process utilizes the time-continuous nature of SLAM to determine whether an occasional mismatch has occurred. We quantitatively demonstrate that the STV process can reject false detections caused by unstructured objects and effectively extract structured objects to avoid outliers. Comparison with state-of-the-art algorithms on public datasets shows that STV-SC can run online and achieves improved performance in unstructured environments (at the same precision, its recall rate is 1.4∼16% higher than that of Scan Context). Therefore, our algorithm can effectively avoid the mismatching that the original algorithm exhibits in unstructured environments and improves the environmental adaptability of mobile agents.
9
Visual SLAM for Dynamic Environments Based on Object Detection and Optical Flow for Dynamic Object Removal. Sensors (Basel) 2022; 22:7553. PMID: 36236652; PMCID: PMC9571647; DOI: 10.3390/s22197553.
Abstract
In dynamic indoor environments, a Visual Simultaneous Localization and Mapping (vSLAM) system must account for moving objects, because they can affect the stability of its visual odometry and the accuracy of its position estimation. vSLAM can perform localization while simultaneously creating a map of the environment using only a sequence of images as input. This paper proposes a vSLAM system based on ORB-SLAM3 and YOLOR. Combined with an object detection model (YOLOX) applied to the extracted feature points, the proposed system achieves 2-4% better accuracy than VPS-SLAM and DS-SLAM. Static feature points, such as signs and benches, were used to calculate the camera position, and dynamic moving objects were eliminated in the tracking thread. A custom dataset of indoor and outdoor RGB-D images of train stations, including dynamic objects and high densities of people, together with ground truth data, sequence data, and video recordings, was used to validate and evaluate the proposed method. The results show that ORB-SLAM3 with YOLOR for object detection achieves 89.54% accuracy in dynamic indoor environments, compared with previous systems such as VPS-SLAM.
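The core masking idea, dropping feature points that fall inside detected dynamic-object boxes before pose estimation, can be sketched as follows. The box format and function names are assumptions for illustration; the actual system operates on ORB-SLAM3 keypoints and detector outputs:

```python
import numpy as np

def remove_dynamic_keypoints(keypoints, boxes):
    """Drop keypoints that fall inside any detected dynamic-object box.

    `keypoints` is an (N, 2) array of (u, v) pixel positions; `boxes` is a
    list of (x1, y1, x2, y2) detections (e.g. people on a station platform).
    """
    kp = np.asarray(keypoints, dtype=float)
    keep = np.ones(len(kp), dtype=bool)
    for x1, y1, x2, y2 in boxes:
        inside = ((kp[:, 0] >= x1) & (kp[:, 0] <= x2) &
                  (kp[:, 1] >= y1) & (kp[:, 1] <= y2))
        keep &= ~inside        # discard points inside this detection
    return kp[keep]

static = remove_dynamic_keypoints([(5, 5), (50, 40), (90, 90)],
                                  boxes=[(40, 30, 60, 50)])
print(len(static))  # 2: the keypoint at (50, 40) is discarded
```

Only the surviving static points (signs, benches, and similar structures) would then be used to compute the camera pose.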
10
Loop Closure Detection Based on Residual Network and Capsule Network for Mobile Robot. Sensors (Basel) 2022; 22:7137. PMID: 36236235; PMCID: PMC9573234; DOI: 10.3390/s22197137.
Abstract
Loop closure detection based on a residual network (ResNet) and a capsule network (CapsNet) is proposed to address the problems of low accuracy and poor robustness for mobile robot simultaneous localization and mapping (SLAM) in complex scenes. First, the residual network of a feature coding strategy is introduced to extract the shallow geometric features and deep semantic features of images, reduce the amount of image noise information, accelerate the convergence speed of the model, and solve the problems of gradient disappearance and network degradation of deep neural networks. Then, the dynamic routing mechanism of the capsule network is optimized through the entropy peak density, and a vector is used to represent the spatial position relationship between features, which can improve the ability of image feature extraction and expression to optimize the overall performance of networks. Finally, the optimized residual network and capsule network are fused to retain the differences and correlations between features, and the global feature descriptors and feature vectors are combined to calculate the similarity of image features for loop closure detection. The experimental results show that the proposed method can achieve loop closure detection for mobile robots in complex scenes, such as view changes, illumination changes, and dynamic objects, and improve the accuracy and robustness of mobile robot SLAM.
11
Expectation-Maximization-Based Simultaneous Localization and Mapping for Millimeter-Wave Communication Systems. Sensors (Basel) 2022; 22:6941. PMID: 36146290; PMCID: PMC9505293; DOI: 10.3390/s22186941.
Abstract
In this paper, we propose a novel expectation-maximization-based simultaneous localization and mapping (SLAM) algorithm for millimeter-wave (mmW) communication systems. By fully exploiting the geometric relationship among the access point (AP) positions, the angle difference of arrival (ADOA) from the APs, and the mobile terminal (MT) position, and regarding the MT positions as the latent variable of the AP positions, the proposed algorithm first reformulates the SLAM problem as maximum likelihood joint estimation over both the AP and MT positions in a latent variable model. It then employs a feasible stochastic approximation expectation-maximization (EM) method to estimate the AP positions. Specifically, stochastic Monte Carlo approximation is employed to obtain the intractable expectation of the MT positions' posterior probability in the E-step, and gradient descent-based optimization is used as a viable substitute for estimating the high-dimensional AP positions in the M-step. Further, the algorithm estimates the MT positions and constructs the indoor map based on the estimated AP topology. Owing to the efficient processing capability of the stochastic approximation EM method and taking full advantage of the abundant spatial information in the crowd-sourced ADOA data, the proposed method can achieve better positioning and mapping performance than existing geometry-based mmW SLAM methods, which usually have to trade off computational complexity against estimation performance. The simulation results confirm the effectiveness of the proposed algorithm.
12
Real-Time Lidar Odometry and Mapping with Loop Closure. Sensors (Basel) 2022; 22:4373. PMID: 35746155; PMCID: PMC9228722; DOI: 10.3390/s22124373.
Abstract
Real-time performance and global consistency are extremely important in Simultaneous Localization and Mapping (SLAM) problems. Classic lidar-based SLAM systems often consist of front-end odometry and back-end pose optimization. However, due to expensive computation, it is often difficult to achieve loop-closure detection without compromising the real-time performance of the odometry. We propose a SLAM system in which scan-to-submap-based local lidar odometry is decoupled from global pose optimization based on submap construction and loop-closure detection. In our work, extracted edge and surface feature points are inserted into two consecutive feature submaps and added to the pose graph prepared for loop-closure detection and global pose optimization. In addition, a submap is added to the pose graph for global data association when it is marked as finished. In particular, a method to filter out false loops is proposed to accelerate the construction of constraints in the pose graph. The proposed method is evaluated on public datasets and achieves competitive performance, with a pose estimation frequency over 15 Hz in local lidar odometry and low drift in global consistency.
13
Stereo Visual Odometry Pose Correction through Unsupervised Deep Learning. Sensors (Basel) 2021; 21:4735. PMID: 34300475; PMCID: PMC8309519; DOI: 10.3390/s21144735.
Abstract
Visual simultaneous localization and mapping (VSLAM) plays a vital role in the field of positioning and navigation. At the heart of VSLAM is visual odometry (VO), which uses continuous images to estimate the camera's ego-motion. However, due to the many assumptions of the classical VO system, robots can hardly operate in challenging environments. To address this challenge, we combine the multiview geometry constraints of the classical stereo VO system with the robustness of deep learning to present an unsupervised pose correction network for the classical stereo VO system. The network regresses a pose correction that compensates for the positioning error caused by violations of the modeling assumptions, making classical stereo VO positioning more accurate. The pose correction network does not rely on a dataset with ground truth poses for training, and it simultaneously generates a depth map and an explainability mask. Extensive experiments on the KITTI dataset show that the pose correction network can significantly improve the positioning accuracy of the classical stereo VO system. Notably, the corrected system's average absolute trajectory error, average translational relative pose error, and average translational root-mean-square drift over lengths of 100-800 m on the KITTI dataset are 13.77 cm, 0.038 m, and 1.08%, respectively. Therefore, the improved stereo VO system has almost reached the state of the art.
14
Computationally Efficient Cooperative Dynamic Range-Only SLAM Based on Sum of Gaussian Filter. Sensors (Basel) 2020; 20:3306. PMID: 32532014; PMCID: PMC7308995; DOI: 10.3390/s20113306.
Abstract
A cooperative dynamic range-only simultaneous localization and mapping (CDRO-SLAM) algorithm based on the sum of Gaussian (SoG) filter was recently introduced. The main characteristics of the CDRO-SLAM are (i) the integration of inter-node ranges as well as usual direct robot-node ranges to improve the convergence rate and localization accuracy and (ii) the tracking of any moving nodes under dynamic environments by resetting and updating the SoG variables. In this paper, an efficient implementation of the CDRO-SLAM (eCDRO-SLAM) is proposed to mitigate the high computational burden of the CDRO-SLAM due to the inter-node measurements. Furthermore, a thorough computational analysis is presented, which reveals that the computational efficiency of the eCDRO-SLAM is significantly improved over the CDRO-SLAM. The performance of the proposed eCDRO-SLAM is compared with those of several conventional RO-SLAM algorithms and the results show that the proposed efficient algorithm has a faster convergence rate and a similar map estimation error regardless of the map size. Accordingly, the proposed eCDRO-SLAM can be utilized in various RO-SLAM applications.
15
Hybrid Indoor Localization Using IMU Sensors and Smartphone Camera. Sensors (Basel) 2019; 19:5084. PMID: 31766352; PMCID: PMC6929196; DOI: 10.3390/s19235084.
Abstract
Smartphone camera or inertial measurement unit (IMU) sensor-based systems can be independently used to provide accurate indoor positioning results. However, the accuracy of an IMU-based localization system depends on the magnitude of the sensor errors caused by external electromagnetic noise or sensor drift, while smartphone camera-based positioning systems depend on the experimental floor map and the camera poses. The challenge in smartphone camera-based localization is that its accuracy depends on how rapidly the user's direction changes. In order to minimize the positioning errors of both approaches, we propose hybrid systems that combine camera-based and IMU sensor-based approaches for indoor localization. In this paper, an indoor experiment scenario is designed to analyse the performance of the IMU-based, smartphone camera-based, and proposed hybrid indoor localization systems. The experiment results demonstrate the effectiveness of the proposed hybrid systems, which exhibit significantly better position accuracy than the IMU-based and smartphone camera-based systems alone; performance is analysed in terms of mean, maximum, and minimum localization error and the standard deviation of the error. The proposed hybrid system combining ORB-SLAM (Oriented FAST and Rotated BRIEF SLAM) with the IMU sensor shows a mean localization error of 0.1398 m, and the hybrid system combining UcoSLAM (SLAM by fusion of keypoints and squared planar markers) with the IMU sensor shows a mean localization error of 0.0690 m.
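One simple way to combine a camera-based fix with an IMU-based fix is inverse-variance weighting, shown here only as a hedged sketch: the per-axis fusion rule, the variance values, and the function name are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def fuse_positions(p_cam, var_cam, p_imu, var_imu):
    """Inverse-variance weighted fusion of two position estimates.

    The estimate with the smaller variance (here the camera fix) dominates
    the fused result; a scalar variance per source is assumed for brevity.
    """
    w_cam = 1.0 / var_cam
    w_imu = 1.0 / var_imu
    fused = (w_cam * np.asarray(p_cam) + w_imu * np.asarray(p_imu)) / (w_cam + w_imu)
    return fused

# Camera fix (low variance) vs. drifting IMU fix (higher variance), in metres.
fused = fuse_positions([2.0, 1.0], 0.01, [2.4, 1.4], 0.04)
print(fused)  # lands closer to the lower-variance camera estimate
```

A full hybrid system would update these weights over time as IMU drift accumulates and camera tracking quality varies.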
16
Robust and Efficient CPU-Based RGB-D Scene Reconstruction. Sensors (Basel) 2018; 18:3652. PMID: 30373281; PMCID: PMC6263609; DOI: 10.3390/s18113652.
Abstract
3D scene reconstruction is an important topic in computer vision. A complete scene is reconstructed from views acquired along the camera trajectory, each view containing a small part of the scene. Textureless scenes are a well-known Gordian knot of camera tracking, and obtaining accurate 3D models quickly is a major challenge for existing systems. For robotics applications, we propose a robust CPU-based approach to reconstruct indoor scenes efficiently with a consumer RGB-D camera. The proposed approach bridges feature-based camera tracking and volumetric-based data integration, and performs well in terms of both robustness and efficiency. The key points of our approach are: (i) a robust and fast camera tracking method combining points and edges, which improves tracking stability in textureless scenes; (ii) an efficient data fusion strategy that selects camera views and integrates RGB-D images on multiple scales, which enhances the efficiency of volumetric integration; and (iii) a novel RGB-D scene reconstruction system that can be quickly implemented on a standard CPU. Experimental results demonstrate that our approach reconstructs scenes with higher robustness and efficiency than state-of-the-art reconstruction systems.
17
Detection and Compensation of Degeneracy Cases for IMU-Kinect Integrated Continuous SLAM with Plane Features. Sensors (Basel) 2018; 18:935. PMID: 29565287; PMCID: PMC5948940; DOI: 10.3390/s18040935.
Abstract
In a group of general geometric primitives, plane-based features are widely used for indoor localization because of their robustness against noise. However, a lack of linearly independent planes can make the estimation ill-posed, which in turn causes a degenerate state in which not all states can be estimated. To solve this problem, this paper first proposes a degeneracy detection method, followed by a compensation method that fixes the orientations by incorporating inertial measurement unit (IMU) information. Experiments were conducted using an IMU-Kinect v2 integrated sensor system that is prone to degenerate cases owing to its narrow field of view. The results showed that the proposed framework can enhance map accuracy through successful detection and compensation of degenerate orientations.
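The degeneracy in question, too few linearly independent planes, can be detected by checking whether the observed plane normals span 3D space, for example via the smallest singular value of the stacked normal matrix. This is a minimal sketch; the threshold value and function name are illustrative assumptions, not the paper's method:

```python
import numpy as np

def is_degenerate(plane_normals, sigma_min=1e-3):
    """Detect a degenerate configuration of observed planes.

    Stacks unit plane normals into a matrix and checks its smallest singular
    value: if the normals do not span 3D space (fewer than three linearly
    independent planes), translation along the missing direction is
    unobservable and the pose estimate degenerates.
    """
    N = np.asarray(plane_normals, dtype=float)
    s = np.linalg.svd(N, compute_uv=False)
    return bool(len(s) < 3 or s[-1] < sigma_min)

# Two parallel walls + floor: only two independent directions -> degenerate.
print(is_degenerate([[1, 0, 0], [-1, 0, 0], [0, 0, 1]]))   # True
# Floor + two perpendicular walls: fully constrained.
print(is_degenerate([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))    # False
```

Once a degenerate direction is flagged, the compensation step can substitute IMU orientation information for the unobservable component.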
18
An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation. Sensors (Basel) 2016; 16:1148. PMID: 27455279; PMCID: PMC4970190; DOI: 10.3390/s16071148.
Abstract
The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, foreground objects of different shapes are separated much faster and more precisely than by other segmentation methods. Tests were made with side-scan sonar (SSS) and forward-looking sonar (FLS) images, in comparison with four other TSMs: the traditional Otsu method, the local TSM, the iterative TSM, and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four. From the segmentations, the centroids of the main extracted regions are computed to represent point landmarks for navigation, e.g., with an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of the feature segments detected by the improved Otsu TSM as point landmarks; using them with the AEKF yields more accurate and robust estimates of the robot pose and the landmark positions than using those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM achieves reliable detection of cycles in the map and consistent map updates on loop closure, as shown in simulated experiments.
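For reference, the traditional Otsu method that this paper improves on chooses the threshold maximizing the between-class variance of the grayscale histogram. The following is a minimal NumPy sketch of that baseline (not the paper's improved TSM):

```python
import numpy as np

def otsu_threshold(gray):
    """Classic Otsu: pick the level maximizing between-class variance.

    Operates on an 8-bit grayscale image, e.g. a sonar intensity image.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                # class-0 probability up to each level
    mu = np.cumsum(prob * np.arange(256))  # class-0 cumulative mean
    mu_t = mu[-1]                          # global mean intensity
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)       # empty classes contribute nothing
    return int(np.argmax(sigma_b))

# Bimodal toy image: dark background (value 30) with a bright object (200).
img = np.full((32, 32), 30, dtype=np.uint8)
img[8:24, 8:24] = 200
t = otsu_threshold(img)
print(t)  # a level separating the two modes
binary = img > t  # foreground mask for subsequent contour detection
```

The improved TSM in the paper modifies this selection step for sonar imagery; the segmented foreground regions then supply the centroid landmarks used by the AEKF-SLAM.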