1
Zhang Y, Chu L, Mao Y, Yu X, Wang J, Guo C. A Vision/Inertial Navigation/Global Navigation Satellite Integrated System for Relative and Absolute Localization in Land Vehicles. Sensors (Basel) 2024; 24:3079. [PMID: 38793933] [PMCID: PMC11125282] [DOI: 10.3390/s24103079]
Abstract
This paper presents an enhanced ground-vehicle localization method designed to address the challenges of state estimation for autonomous vehicles operating in diverse environments, focusing on precise estimation of position and orientation in both local and global coordinate systems. The proposed approach fuses local estimates generated by existing visual-inertial odometry (VIO) methods with global position information obtained from the Global Navigation Satellite System (GNSS). The fusion is performed by pose-graph optimization, yielding precise local estimation and drift-free global position estimation. Because autonomous driving scenarios involve complications such as visual-inertial navigation system (VINS) failures and GNSS signal blockage in urban canyons, either of which can disrupt localization, we introduce an adaptive fusion mechanism that switches seamlessly between three modes: VINS only, GNSS only, and normal fusion. The effectiveness of the proposed algorithm is demonstrated through rigorous testing in the Carla simulation environment and in challenging UrbanNav scenarios. Qualitative and quantitative evaluations show that the method is robust and accurate.
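A minimal sketch of the adaptive mode switching described above, in Python; the health indicators and thresholds are illustrative assumptions, not the paper's actual criteria:

```python
import numpy as np

VINS_ONLY, GNSS_ONLY, FUSION = range(3)

def select_mode(vins_ok: bool, num_sats: int, gnss_cov: np.ndarray,
                max_cov_trace: float = 4.0, min_sats: int = 6) -> int:
    """Pick a localization mode from hypothetical sensor-health indicators."""
    gnss_ok = num_sats >= min_sats and np.trace(gnss_cov) < max_cov_trace
    if vins_ok and gnss_ok:
        return FUSION      # normal pose-graph fusion of VIO and GNSS
    if vins_ok:
        return VINS_ONLY   # e.g., urban canyon: GNSS blocked or degraded
    if gnss_ok:
        return GNSS_ONLY   # e.g., VINS failure from texture loss
    raise RuntimeError("no usable localization source")
```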
Affiliation(s)
- Yao Zhang
- National Key Laboratory of Automotive Chassis Integration and Bionics, Jilin University, Changchun 130025, China
- Liang Chu
- National Key Laboratory of Automotive Chassis Integration and Bionics, Jilin University, Changchun 130025, China
- Yabin Mao
- National Key Laboratory of Automotive Chassis Integration and Bionics, Jilin University, Changchun 130025, China
- Xintong Yu
- China FAW Group Co., Ltd., Changchun 130000, China
- Jiawei Wang
- National Key Laboratory of Automotive Chassis Integration and Bionics, Jilin University, Changchun 130025, China
- Chong Guo
- National Key Laboratory of Automotive Chassis Integration and Bionics, Jilin University, Changchun 130025, China
- Changsha Automobile Innovation Research Institute, Changsha 410005, China
2
Long B, Goodin S, Kachergis G, Marchman VA, Radwan SF, Sparks RZ, Xiang V, Zhuang C, Hsu O, Newman B, Yamins DLK, Frank MC. The BabyView camera: Designing a new head-mounted camera to capture children's early social and visual environments. Behav Res Methods 2024; 56:3523-3534. [PMID: 37656342] [DOI: 10.3758/s13428-023-02206-1]
Abstract
Head-mounted cameras have been used in developmental psychology research for more than a decade to provide a rich and comprehensive view of what infants see during their everyday experiences. However, variation between these devices has limited the field's ability to compare results across studies and labs. Further, the video data captured by these cameras to date have been relatively low-resolution, limiting how well machine learning algorithms can operate over these rich video data. Here, we provide a well-tested and easily constructed design for a head-mounted camera assembly, the BabyView, developed in collaboration with Daylight Design, LLC, a professional product design firm. The BabyView collects high-resolution video, accelerometer, and gyroscope data from children approximately 6-30 months of age via a GoPro camera custom mounted on a soft child-safety helmet. The BabyView also captures a large, portrait-oriented vertical field of view that encompasses children's interactions both with objects and with their social partners. We detail our protocols for video data management and for handling sensitive data from home environments, and we provide customizable materials for onboarding families with the BabyView. We hope that these materials will encourage wide adoption of the BabyView, allowing the field to collect high-resolution data that can link children's everyday environments with their learning outcomes.
Affiliation(s)
- Bria Long
- Department of Psychology, Stanford University, Stanford, CA, USA
- George Kachergis
- Department of Psychology, Stanford University, Stanford, CA, USA
- Samaher F Radwan
- Department of Psychology, Stanford University, Stanford, CA, USA
- Graduate School of Education, Stanford University, Stanford, CA, USA
- Robert Z Sparks
- Department of Psychology, Stanford University, Stanford, CA, USA
- Violet Xiang
- Department of Psychology, Stanford University, Stanford, CA, USA
- Chengxu Zhuang
- Department of Psychology, Stanford University, Stanford, CA, USA
- Oliver Hsu
- Daylight Design, LLC, San Francisco, CA, USA
- Daniel L K Yamins
- Department of Psychology, Stanford University, Stanford, CA, USA
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Michael C Frank
- Department of Psychology, Stanford University, Stanford, CA, USA
3
Cai Y, Ou Y, Qin T. Improving SLAM Techniques with Integrated Multi-Sensor Fusion for 3D Reconstruction. Sensors (Basel) 2024; 24:2033. [PMID: 38610245] [PMCID: PMC11014387] [DOI: 10.3390/s24072033]
Abstract
Simultaneous Localization and Mapping (SLAM) poses distinct challenges, especially in settings with variable elements, which demand the integration of multiple sensors to ensure robustness. This study addresses these issues by integrating advanced technologies like LiDAR-inertial odometry (LIO), visual-inertial odometry (VIO), and sophisticated Inertial Measurement Unit (IMU) preintegration methods. These integrations enhance the robustness and reliability of the SLAM process for precise mapping of complex environments. Additionally, incorporating an object-detection network aids in identifying and excluding transient objects such as pedestrians and vehicles, essential for maintaining the integrity and accuracy of environmental mapping. The object-detection network features a lightweight design and swift performance, enabling real-time analysis without significant resource utilization. Our approach focuses on harmoniously blending these techniques to yield superior mapping outcomes in complex scenarios. The effectiveness of our proposed methods is substantiated through experimental evaluation, demonstrating their capability to produce more reliable and precise maps in environments with variable elements. The results indicate improvements in autonomous navigation and mapping, providing a practical solution for SLAM in challenging and dynamic settings.
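To illustrate the detection-aided exclusion of transient objects, the sketch below drops LiDAR points whose image projections fall inside 2D detection boxes; the pinhole projection and box format are simplifying assumptions, not the paper's implementation:

```python
import numpy as np

def filter_dynamic_points(points_cam, K, boxes):
    """Remove 3D points (camera frame, (N, 3)) projecting into 2D boxes
    of detected transient objects (pedestrians, vehicles).
    K: (3, 3) pinhole intrinsics; boxes: iterable of (x0, y0, x1, y1)."""
    front = points_cam[:, 2] > 0.1            # keep points in front of camera
    uv = (K @ points_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]               # perspective division
    dynamic = np.zeros(len(points_cam), dtype=bool)
    for x0, y0, x1, y1 in boxes:
        dynamic |= ((uv[:, 0] >= x0) & (uv[:, 0] <= x1) &
                    (uv[:, 1] >= y0) & (uv[:, 1] <= y1))
    return points_cam[front & ~dynamic]
```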
Affiliation(s)
- Yiyi Cai
- School of Electronic and Information Engineering, South China University of Technology, Guangzhou 510641, China
- The Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, China
- School of Computer and Electronic Information, Guangxi University, Nanning 530000, China
- Yang Ou
- The Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, China
- School of Computer and Electronic Information, Guangxi University, Nanning 530000, China
- Tuanfa Qin
- School of Electronic and Information Engineering, South China University of Technology, Guangzhou 510641, China
- The Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, China
- School of Computer and Electronic Information, Guangxi University, Nanning 530000, China
4
Tang Y, Zhao C, Wang J, Zhang C, Sun Q, Zheng WX, Du W, Qian F, Kurths J. Perception and Navigation in Autonomous Systems in the Era of Learning: A Survey. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:9604-9624. [PMID: 35482692] [DOI: 10.1109/tnnls.2022.3167688]
Abstract
Autonomous systems possess the features of inferring their own state, understanding their surroundings, and performing autonomous navigation. With the application of learning systems such as deep learning and reinforcement learning, vision-based self-state estimation, environment perception, and navigation in autonomous systems have advanced rapidly, and many new learning-based algorithms have surfaced for autonomous visual perception and navigation. In this review, we focus on the applications of learning-based monocular approaches to ego-motion perception, environment perception, and navigation in autonomous systems, in contrast to previous reviews that discussed traditional methods. First, we delineate the shortcomings of existing classical visual simultaneous localization and mapping (vSLAM) solutions, which demonstrate the necessity of integrating deep learning techniques. Second, we review deep learning-based methods for visual environmental perception and understanding, including monocular depth estimation, monocular ego-motion prediction, image enhancement, object detection, semantic segmentation, and their combinations with traditional vSLAM frameworks. Then, we focus on visual navigation based on learning systems, mainly reinforcement learning and deep reinforcement learning. Finally, we examine several challenges and promising directions identified in related research on learning systems in the era of computer science and robotics.
5
Chen J, Wang H, Yang S. Tightly Coupled LiDAR-Inertial Odometry and Mapping for Underground Environments. Sensors (Basel) 2023; 23:6834. [PMID: 37571617] [PMCID: PMC10422614] [DOI: 10.3390/s23156834]
Abstract
The demand for autonomous exploration and mapping of underground environments has increased significantly in recent years. However, accurately localizing and mapping robots in subterranean settings presents notable challenges. This paper presents a tightly coupled LiDAR-inertial odometry system that combines the NanoGICP point cloud registration method with IMU preintegration via incremental smoothing and mapping. Specifically, dust-particle noise is first filtered out of the point cloud, which is then separated into ground and non-ground points (for ground vehicles). To maintain accuracy in environments with spatial variations, an adaptive voxel filter is employed, which reduces computation time while preserving accuracy. The estimated motion derived from IMU preintegration is used to correct point cloud distortion and to provide an initial estimate for LiDAR odometry. Subsequently, a scan-to-map point cloud registration is executed with NanoGICP to obtain a more refined pose estimate, and the resulting LiDAR odometry is employed to estimate the bias of the IMU. We comprehensively evaluated our system on established subterranean datasets collected by two separate teams using different platforms during the DARPA Subterranean (SubT) Challenge. The experimental results demonstrate that our system reduced the root-mean-square error (RMSE) by as much as 50-60%.
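One way to picture the adaptive voxel filter: coarsen the grid until the downsampled cloud is small enough to process in time. The candidate leaf sizes and target count below are illustrative assumptions:

```python
import numpy as np

def adaptive_voxel_filter(points, target_count=5000,
                          leaf_sizes=(0.1, 0.2, 0.4, 0.8)):
    """Downsample (N, 3) points, keeping one point per occupied voxel and
    growing the leaf size until the result fits the computation budget."""
    for leaf in leaf_sizes:
        keys = np.floor(points / leaf).astype(np.int64)
        _, idx = np.unique(keys, axis=0, return_index=True)
        if len(idx) <= target_count:
            break
    return points[np.sort(idx)], leaf
```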
Affiliation(s)
- Shan Yang
- School of Resources and Safety Engineering, Central South University, Changsha 410083, China
6
Karfakis PT, Couceiro MS, Portugal D. NR5G-SAM: A SLAM Framework for Field Robot Applications Based on 5G New Radio. Sensors (Basel) 2023; 23:5354. [PMID: 37300084] [DOI: 10.3390/s23115354]
Abstract
Robot localization is a crucial task in robotic systems and a prerequisite for navigation. In outdoor environments, Global Navigation Satellite Systems (GNSS) have aided this task, alongside laser and visual sensing. Despite their use in the field, GNSS suffers from limited availability in dense urban and rural environments. Light Detection and Ranging (LiDAR), inertial, and visual methods are also prone to drift and can be susceptible to outliers due to environmental changes and illumination conditions. In this work, we propose a cellular Simultaneous Localization and Mapping (SLAM) framework based on 5G New Radio (NR) signals and inertial measurements for mobile robot localization with several gNodeB stations. The method outputs the pose of the robot along with a radio signal map based on Received Signal Strength Indicator (RSSI) measurements for correction purposes. We then benchmark against LiDAR-Inertial Odometry Smoothing and Mapping (LIO-SAM), a state-of-the-art LiDAR SLAM method, comparing performance via a simulator ground-truth reference. Two experimental setups using the sub-6 GHz and mmWave frequency bands are presented and discussed, with transmission based on downlink (DL) signals. Our results show that 5G positioning can be utilized for radio SLAM, providing increased robustness in outdoor environments and demonstrating its potential to assist robot localization as an additional absolute source of information when LiDAR methods fail and GNSS data are unreliable.
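For intuition on how RSSI supports radio SLAM, a common log-distance path-loss model can be inverted to a coarse range; the reference power and path-loss exponent below are environment-dependent assumptions, not values from the paper:

```python
def rssi_to_range(rssi_dbm, rssi0_dbm=-40.0, n=2.7, d0=1.0):
    """Invert RSSI(d) = RSSI(d0) - 10*n*log10(d/d0) to a range estimate
    (meters) from a gNodeB; useful as a range-like factor in radio SLAM."""
    return d0 * 10.0 ** ((rssi0_dbm - rssi_dbm) / (10.0 * n))
```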
Affiliation(s)
- Panagiotis T Karfakis
- Ingeniarius Ltd., R. Nossa Sra. Conceição 146, 4445-147 Alfena, Portugal
- Institute of Systems and Robotics, University of Coimbra, 3030-290 Coimbra, Portugal
- Micael S Couceiro
- Ingeniarius Ltd., R. Nossa Sra. Conceição 146, 4445-147 Alfena, Portugal
- Institute of Systems and Robotics, University of Coimbra, 3030-290 Coimbra, Portugal
- David Portugal
- Institute of Systems and Robotics, University of Coimbra, 3030-290 Coimbra, Portugal
7
Bavle H, Sanchez-Lopez JL, Cimarelli C, Tourani A, Voos H. From SLAM to Situational Awareness: Challenges and Survey. Sensors (Basel) 2023; 23:4849. [PMID: 37430762] [DOI: 10.3390/s23104849]
Abstract
The capability of a mobile robot to efficiently and safely perform complex missions is limited by its knowledge of the environment, namely the situation. Advanced reasoning, decision-making, and execution skills enable an intelligent agent to act autonomously in unknown environments. Situational Awareness (SA) is a fundamental human capability that has been studied deeply in fields such as psychology, the military, aerospace, and education, yet it has received little attention in robotics, which has focused on single compartmentalized concepts such as sensing, spatial perception, sensor fusion, state estimation, and Simultaneous Localization and Mapping (SLAM). Hence, the present research aims to connect this broad multidisciplinary knowledge to pave the way for a complete SA system for mobile robotics, which we deem paramount for autonomy. To this aim, we define the principal components of a robotic SA system and their areas of competence. Accordingly, this paper investigates each aspect of SA, surveys the state-of-the-art robotics algorithms that cover it, and discusses their current limitations. Notably, essential aspects of SA remain immature, since current algorithmic development restricts their performance to specific environments. Nevertheless, Artificial Intelligence (AI), particularly Deep Learning (DL), has brought new methods to bridge the gap that keeps these fields from real-world deployment. Furthermore, we identify an opportunity to interconnect the vastly fragmented space of robotic comprehension algorithms through the Situational Graph (S-Graph), a generalization of the well-known scene graph. We conclude by shaping our vision for the future of robotic situational awareness and discussing promising recent research directions.
Affiliation(s)
- Hriday Bavle
- Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Jose Luis Sanchez-Lopez
- Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Claudio Cimarelli
- Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Ali Tourani
- Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Holger Voos
- Interdisciplinary Center for Security Reliability and Trust (SnT), University of Luxembourg, 1855 Luxembourg, Luxembourg
- Department of Engineering, Faculty of Science, Technology, and Medicine (FSTM), University of Luxembourg, 1359 Luxembourg, Luxembourg
8
Hao Y, Liu J, Liu Y, Liu X, Meng Z, Xing F. Global Visual-Inertial Localization for Autonomous Vehicles with Pre-Built Map. Sensors (Basel) 2023; 23:4510. [PMID: 37177714] [PMCID: PMC10181573] [DOI: 10.3390/s23094510]
Abstract
Accurate, robust, and drift-free global pose estimation is a fundamental problem for autonomous vehicles. In this work, we propose a drift-free, map-based localization method that estimates the global pose of an autonomous vehicle by integrating visual-inertial odometry with global localization against a pre-built map. In contrast to previous work on visual-inertial localization, the pre-built map provides global information that eliminates drift and assists in obtaining the global pose. Additionally, to ensure that the local odometry frame and the global map frame are aligned accurately, we augment the transformation between these two frames into the state vector and estimate it online via global pose-graph optimization. Extensive evaluations on public datasets and real-world experiments demonstrate the effectiveness of the proposed method, which provides accurate global pose estimates in different scenarios. Compared against a mainstream map-based localization method, the proposed approach is more accurate and consistent.
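The frame-alignment idea reduces to composing the (online-estimated) odometry-to-map transform with the local VIO pose; a minimal sketch with 4x4 homogeneous matrices:

```python
import numpy as np

def global_pose(T_map_odom: np.ndarray, T_odom_body: np.ndarray) -> np.ndarray:
    """Global pose of the body: compose the odometry-to-map alignment
    (part of the optimized state in the paper) with the drifting local
    VIO pose. Both inputs and the result are 4x4 homogeneous transforms."""
    return T_map_odom @ T_odom_body

# Usage: T_map_body = global_pose(T_map_odom, T_odom_body)
```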
Affiliation(s)
- Yun Hao
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Jiacheng Liu
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Yuzhen Liu
- Robotics X, Tencent, Shenzhen 518057, China
- Xinyuan Liu
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Ziyang Meng
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Fei Xing
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
9
Ferguson JM, Ertop TE, Herrell SD, Webster RJ. Unified Robot and Inertial Sensor Self-Calibration. Robotica 2023; 41:1590-1616. [PMID: 37732333] [PMCID: PMC10508886] [DOI: 10.1017/s0263574723000012]
Abstract
Robots and inertial measurement units (IMUs) are typically calibrated independently: IMUs in purpose-built, expensive automated test rigs, and robot poses with highly accurate (and thus expensive) tracking systems. In this paper, we present a quick, easy, and inexpensive approach to calibrate both simultaneously, simply by attaching the IMU anywhere on the robot's end effector and moving the robot continuously through space, without any external measurement systems. We accomplish this using continuous-time batch estimation, providing statistically optimal solutions. Under Gaussian assumptions, we show that this becomes a nonlinear least-squares problem and analyze the structure of the associated Jacobian. Our methods are validated both numerically and experimentally and compared to standard individual robot and IMU calibration methods.
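The measurement model at the heart of such a calibration predicts what the IMU should sense given the robot-driven motion; a minimal accelerometer sketch under common assumptions (known gravity vector, additive bias, noise omitted for clarity):

```python
import numpy as np

def predicted_accel(R_wi, a_w, g_w=np.array([0.0, 0.0, -9.81]),
                    bias=np.zeros(3)):
    """Specific force an ideal accelerometer would report: the world-frame
    linear acceleration of the IMU origin minus gravity, rotated into the
    sensor frame, plus bias. R_wi: world-from-IMU rotation (3x3)."""
    return R_wi.T @ (a_w - g_w) + bias
```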
Affiliation(s)
- James M. Ferguson
- Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
- Tayfun Efe Ertop
- Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
- S. Duke Herrell
- Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Robert J. Webster
- Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
10
Wu J, Wang M, Fourati H, Li H, Zhu Y, Zhang C, Jiang Y, Hu X, Liu M. Generalized n-Dimensional Rigid Registration: Theory and Applications. IEEE Transactions on Cybernetics 2023; 53:927-940. [PMID: 35507617] [DOI: 10.1109/tcyb.2022.3168938]
Abstract
The generalized rigid registration problem in high-dimensional Euclidean spaces is studied. The loss function is minimized using an equivalent error formulation based on the Cayley formula. The closed-form linear least-squares solution to this problem is derived, which also generates the registration covariances, i.e., the uncertainty information of rotation and translation, providing accurate probabilistic descriptions. Simulation results confirm the correctness of the proposed method and demonstrate its computational efficiency compared with previous algorithms based on singular value decomposition (SVD) and linear matrix inequalities (LMIs). The proposed scheme is then applied to an interpolation problem on the special Euclidean group SE(n) with covariance-preserving functionality. Finally, experiments on covariance-aided LiDAR mapping show practical superiority in robotic navigation.
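For reference, the closed-form SVD (Kabsch/Umeyama) solution the paper benchmarks against fits a rotation and translation in any dimension n:

```python
import numpy as np

def rigid_register_svd(src, dst):
    """Least-squares rigid registration so that dst ≈ R @ src + t.
    src, dst: matched point sets of shape (N, n)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # n x n cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # enforce det(R) = +1
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```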
11
Lyu Y, Nguyen T, Liu L, Cao M, Yuan S, Nguyen TH, Xie L. SPINS: A structure priors aided inertial navigation system. J Field Robot 2023. [DOI: 10.1002/rob.22161]
Affiliation(s)
- Yang Lyu
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Thien-Minh Nguyen
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Liu Liu
- College of Engineering and Computer Science, Australian National University, Canberra, Australia
- Muqing Cao
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Shenghai Yuan
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Thien Hoang Nguyen
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Lihua Xie
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
12
Li K, Li J, Wang A, Luo H, Li X, Yang Z. A Resilient Method for Visual-Inertial Fusion Based on Covariance Tuning. Sensors (Basel) 2022; 22:9836. [PMID: 36560205] [PMCID: PMC9781031] [DOI: 10.3390/s22249836]
Abstract
To improve the localization and pose precision of visual-inertial simultaneous localization and mapping (viSLAM) in complex scenarios, it is necessary to tune the weights of the visual and inertial inputs during sensor fusion. To this end, we propose a resilient viSLAM algorithm based on covariance tuning. During back-end optimization, the unit-weight root-mean-square error (RMSE) of the visual reprojection and IMU preintegration residuals in each optimization is computed to construct a covariance tuning function, producing a new covariance matrix. This is used to perform another round of nonlinear optimization, effectively improving pose and localization precision without closed-loop detection. In validation experiments, our algorithm outperformed the OKVIS, R-VIO, and VINS-Mono open-source viSLAM frameworks in pose and localization precision at all difficulty levels of the EuRoC dataset.
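A rough analogue of the covariance-tuning step (the paper's exact tuning function is not reproduced here): rescale each residual block's covariance by its a-posteriori unit-weight RMSE before re-optimizing:

```python
import numpy as np

def tune_covariance(cov, residuals, sigma0=1.0):
    """Inflate the covariance of a noisy residual block (visual reprojection
    or IMU preintegration) when its unit-weight RMSE exceeds the a-priori
    sigma0, thereby down-weighting it in the next optimization round."""
    rmse = np.sqrt(np.mean(np.square(residuals)))
    scale = max(rmse / sigma0, 1e-6)
    return cov * scale ** 2
```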
13
IBISCape: A Simulated Benchmark for multi-modal SLAM Systems Evaluation in Large-scale Dynamic Environments. J Intell Robot Syst 2022. [DOI: 10.1007/s10846-022-01753-7]
14
Li Y, Yang S, Xiu X, Miao Z. A Spatiotemporal Calibration Algorithm for IMU-LiDAR Navigation System Based on Similarity of Motion Trajectories. Sensors (Basel) 2022; 22:7637. [PMID: 36236759] [PMCID: PMC9570820] [DOI: 10.3390/s22197637]
Abstract
The fusion of light detection and ranging (LiDAR) and inertial measurement unit (IMU) sensing information can effectively improve the environment modeling and localization accuracy of navigation systems. To spatiotemporally unify the data collected by the IMU and the LiDAR, a two-step, coarse-to-fine spatiotemporal calibration method is proposed. The method comprises two parts. (1) Continuous-time trajectories of the IMU attitude motion are modeled with B-spline basis functions, and the motion of the LiDAR is estimated with the normal distributions transform (NDT) point cloud registration algorithm. Taking the Hausdorff distance between the local trajectories as the cost function and combining it with the hand-eye calibration method, the initial spatiotemporal relationship between the two sensors' coordinate systems is solved, and the IMU measurements are then used to correct LiDAR distortion. (2) From the IMU preintegration and the point, line, and plane features of the LiDAR point cloud, a nonlinear optimization objective function is constructed. Combined with the corrected LiDAR data and the initial spatiotemporal calibration values, the target is optimized within a nonlinear graph optimization framework. The rationality, accuracy, and robustness of the proposed algorithm are verified by simulation analysis and real test experiments. The results show that the calibration accuracy of the spatial coordinate-system relationship was better than 0.08° (3σ) and 5 mm (3σ), respectively, and the time-deviation calibration accuracy was better than 0.1 ms, with strong environmental adaptability. This meets the high-precision calibration requirements for multisensor spatiotemporal parameters in field robot navigation systems.
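The trajectory-similarity cost in the coarse step is a Hausdorff distance between short local trajectory segments; a direct O(N*M) sketch:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two trajectories sampled as
    point sets A (N, 3) and B (M, 3); fine for short local segments."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```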
15
Mahlknecht F, Gehrig D, Nash J, Rockenbauer FM, Morrell B, Delaune J, Scaramuzza D. Exploring Event Camera-Based Odometry for Planetary Robots. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3187826]
Affiliation(s)
- Florian Mahlknecht
- Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA
- Daniel Gehrig
- Robotics and Perception Group, University of Zurich, Zurich, Switzerland
- Jeremy Nash
- Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA
- Benjamin Morrell
- Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA
- Jeff Delaune
- Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA
- Davide Scaramuzza
- Robotics and Perception Group, University of Zurich, Zurich, Switzerland
16
Ghaffari M, Zhang R, Zhu M, Lin CE, Lin TY, Teng S, Li T, Liu T, Song J. Progress in symmetry preserving robot perception and control through geometry and learning. Front Robot AI 2022; 9:969380. [PMID: 36185972] [PMCID: PMC9515513] [DOI: 10.3389/frobt.2022.969380]
Abstract
This article reports on recent progress in robot perception and control methods developed by taking the symmetry of the problem into account. Inspired by existing mathematical tools for studying the symmetry structures of geometric spaces, geometric sensor registration, state estimation, and control methods provide indispensable insights into problem formulations and the generalization of robotics algorithms to challenging unknown environments. When combined with computational methods for learning hard-to-measure quantities, symmetry-preserving methods deliver strong performance. The article supports this claim with experimental results of robot perception, state estimation, and control in real-world scenarios.
Affiliation(s)
- Maani Ghaffari
- Computational Autonomy and Robotics Laboratory (CURLY), University of Michigan, Ann Arbor, MI, United States
17
HVIOnet: A deep learning based hybrid visual-inertial odometry approach for unmanned aerial system position estimation. Neural Netw 2022; 155:461-474. [PMID: 36152378] [DOI: 10.1016/j.neunet.2022.09.001]
Abstract
Sensor fusion is used to solve the localization problem in autonomous mobile robotics by integrating complementary data acquired from various sensors. In this study, we adopt Visual-Inertial Odometry (VIO), a low-cost sensor fusion method that integrates inertial data with images within a Deep Learning (DL) framework to predict the position of an Unmanned Aerial System (UAS). The developed system has three steps. The first step extracts features from images acquired by a platform camera and uses a Convolutional Neural Network (CNN) to project them to a visual feature manifold. Next, temporal features are extracted from the platform's Inertial Measurement Unit (IMU) data using a Bidirectional Long Short-Term Memory (BiLSTM) network and projected to an inertial feature manifold. The final step estimates the UAS position by fusing the visual and inertial feature manifolds via a BiLSTM-based architecture. The proposed approach is tested with the public EuRoC (European Robotics Challenge) dataset and with simulation data generated within the Robot Operating System (ROS). On the EuRoC dataset, the proposed approach achieves position estimates comparable to popular previous VIO methods. In addition, on the simulation dataset, the UAS position is estimated with a root-mean-square error (RMSE) of 0.167. These results show that the proposed deep architecture is useful for UAS position estimation.
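A toy PyTorch analogue of the described pipeline (CNN image features, a BiLSTM over IMU windows, and a second BiLSTM fusing both); layer sizes and the fusion layout are placeholders, not the published architecture:

```python
import torch
import torch.nn as nn

class TinyVIOFusion(nn.Module):
    def __init__(self, feat=64):
        super().__init__()
        self.cnn = nn.Sequential(                 # visual feature manifold
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 16, feat))
        self.imu_rnn = nn.LSTM(6, feat // 2, bidirectional=True,
                               batch_first=True)  # inertial feature manifold
        self.fuse_rnn = nn.LSTM(2 * feat, feat, bidirectional=True,
                                batch_first=True)
        self.head = nn.Linear(2 * feat, 3)        # x, y, z position

    def forward(self, img, imu):                  # img: (B,1,H,W), imu: (B,T,6)
        v = self.cnn(img)
        h, _ = self.imu_rnn(imu)
        v_seq = v[:, None, :].expand(-1, imu.shape[1], -1)
        f, _ = self.fuse_rnn(torch.cat([v_seq, h], dim=-1))
        return self.head(f[:, -1])                # position at window end
```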
18
Ye H, Kwan KC, Fu H. 3D Curve Creation on and Around Physical Objects With Mobile AR. IEEE Transactions on Visualization and Computer Graphics 2022; 28:2809-2821. [PMID: 33400650] [DOI: 10.1109/tvcg.2020.3049006]
Abstract
The recent advance in motion tracking (e.g., Visual Inertial Odometry) allows the use of a mobile phone as a 3D pen, thus significantly benefiting various mobile Augmented Reality (AR) applications based on 3D curve creation. However, when creating 3D curves on and around physical objects with mobile AR, tracking might be less robust or even lost due to camera occlusion or textureless scenes. This motivates us to study how to achieve natural interaction with minimum tracking errors during close interaction between a mobile phone and physical objects. To this end, we contribute an elicitation study on input point and phone grip, and a quantitative study on tracking errors. Based on the results, we present a system for direct 3D drawing with an AR-enabled mobile phone as a 3D pen, and interactive correction of 3D curves with tracking errors in mobile AR. We demonstrate the usefulness and effectiveness of our system for two applications: in-situ 3D drawing, and direct 3D measurement.
19
Cheng J, Jin Y, Zhai Z, Liu X, Zhou K. Research on Positioning Method in Underground Complex Environments Based on Fusion of Binocular Vision and IMU. Sensors (Basel) 2022; 22:5711. [PMID: 35957268] [PMCID: PMC9371209] [DOI: 10.3390/s22155711]
Abstract
To address the failure of traditional visual SLAM localization caused by dynamic target interference and weak textures in underground complexes, an effective robot localization scheme was designed in this paper. First, the Harris algorithm, with its stronger corner detection ability, was used to improve the ORB (oriented FAST and rotated BRIEF) front end of traditional visual SLAM. Second, the non-uniform rational B-splines algorithm was used to transform the discrete inertial measurement unit (IMU) data into second-order differentiable continuous data, and the visual sensor data were fused with the IMU data. Finally, experimental results on the KITTI dataset, the EuRoC dataset, and a simulated real scene showed that the method has stronger robustness and better localization accuracy, with small hardware size and low power consumption.
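One plausible reading of the Harris-improved front end, sketched with OpenCV: detect Harris corners, then compute ORB descriptors at those locations. Parameter values are illustrative:

```python
import cv2

def harris_orb_features(gray, max_corners=500):
    """Harris corner detection feeding ORB description, approximating the
    replacement of ORB's FAST detector with Harris."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7,
                                  useHarrisDetector=True, k=0.04)
    if pts is None:
        return [], None
    kps = [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in pts]
    return cv2.ORB_create().compute(gray, kps)   # (keypoints, descriptors)
```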
Affiliation(s)
- Jie Cheng
- School of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou 310018, China
- Yinglian Jin
- College of Modern Science and Technology, China Jiliang University, Hangzhou 310018, China
- Zhen Zhai
- School of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou 310018, China
- Xiaolong Liu
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21211, USA
- Kun Zhou
- School of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou 310018, China
20
FastFusion: Real-Time Indoor Scene Reconstruction with Fast Sensor Motion. Remote Sensing 2022. [DOI: 10.3390/rs14153551]
Abstract
Real-time 3D scene reconstruction has attracted a great deal of attention in augmented reality, virtual reality, and robotics. Previous works usually assumed slow sensor motion to avoid large interframe differences and strong image blur, but this limits the applicability of the techniques in real cases. In this study, we propose an end-to-end 3D reconstruction system that combines color, depth, and inertial measurements to achieve robust reconstruction under fast sensor motion. We employ an extended Kalman filter (EKF) to fuse RGB-D-IMU data and jointly optimize feature correspondences, camera poses, and scene geometry using an iterative method. A novel geometry-aware patch deformation technique is proposed to adapt to changes of patch features in the image domain, leading to highly accurate feature tracking under fast sensor motion. In addition, we maintain the global consistency of the reconstructed model by achieving loop closure with submap-based depth image encoding and 3D map deformation. The experiments reveal that our patch deformation method improves the accuracy of feature tracking, that our improved loop detection method is more efficient than the original method, and that our system achieves superior 3D reconstruction results compared with state-of-the-art solutions in handling fast camera motion.
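A minimal EKF skeleton of the kind used for such RGB-D-IMU fusion; the process and measurement models (f, h) and their Jacobians (F, H) are application-specific stand-ins:

```python
import numpy as np

class EKF:
    def __init__(self, x0, P0):
        self.x, self.P = x0, P0

    def predict(self, f, F, Q):
        """Propagate with the process model (e.g., IMU kinematics)."""
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + Q

    def update(self, z, h, H, R):
        """Correct with a measurement (e.g., tracked patch positions)."""
        y = z - h(self.x)                          # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```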
21
Robust stereo inertial odometry based on self-supervised feature points. Appl Intell 2022. [DOI: 10.1007/s10489-022-03278-w]
22
Rahman S, Quattrini Li A, Rekleitis I. SVIn2: A multi-sensor fusion-based underwater SLAM system. Int J Rob Res 2022. [DOI: 10.1177/02783649221110259]
Abstract
This paper presents SVIn2, a novel tightly-coupled keyframe-based Simultaneous Localization and Mapping (SLAM) system that fuses scanning profiling sonar, visual, inertial, and water-pressure information in a nonlinear optimization framework for small- and large-scale challenging underwater environments. The developed real-time system features robust initialization, loop closing, and relocalization capabilities, which make it reliable in the presence of the haze, blurriness, low light, and lighting variations typically observed in underwater scenarios. Over the last decade, visual-inertial odometry and SLAM systems have shown excellent performance for mobile robots in indoor and outdoor environments, but they often fail underwater due to the inherent difficulties of such environments. Our approach combats the weaknesses of previous approaches by utilizing additional sensors and exploiting their complementary characteristics. In particular, we use (1) acoustic range information for improved reconstruction and localization, thanks to its reliable distance measurements, and (2) depth information from a water-pressure sensor for robust initialization, scale refinement, and limiting drift in the tightly-coupled integration. The developed software, released as open source, has been used to test and validate the proposed system on both benchmark datasets and numerous real-world underwater scenarios, including datasets collected with a custom-made underwater sensor suite and the autonomous underwater vehicle Aqua2. SVIn2 demonstrated outstanding accuracy and robustness on those datasets and enabled other robotic tasks, for example, planning for underwater robots in the presence of obstacles.
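The water-pressure-to-depth conversion that supports initialization and scale is plain hydrostatics; the constants below are typical seawater values, not necessarily the paper's:

```python
def pressure_to_depth(p_pa, p_atm=101325.0, rho=1025.0, g=9.80665):
    """Depth (m) from absolute pressure (Pa): p = p_atm + rho * g * depth."""
    return (p_pa - p_atm) / (rho * g)
```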
Affiliation(s)
- Sharmin Rahman
- Computer Science and Engineering Department, University of South Carolina, Columbia, SC, USA
- Ioannis Rekleitis
- Computer Science and Engineering Department, University of South Carolina, Columbia, SC, USA
23
Li T, Pei L, Xiang Y, Yu W, Truong TK. P3-VINS: Tightly-Coupled PPP/INS/Visual SLAM Based on Optimization Approach. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3180441]
Affiliation(s)
- Tao Li
- Shanghai Key Laboratory of Navigation and Location Based Services, Shanghai Jiao Tong University, Shanghai, China
- Ling Pei
- Shanghai Key Laboratory of Navigation and Location Based Services, Shanghai Jiao Tong University, Shanghai, China
- Yan Xiang
- Shanghai Key Laboratory of Navigation and Location Based Services, Shanghai Jiao Tong University, Shanghai, China
- Wenxian Yu
- Shanghai Key Laboratory of Navigation and Location Based Services, Shanghai Jiao Tong University, Shanghai, China
- Trieu-Kien Truong
- Shanghai Key Laboratory of Navigation and Location Based Services, Shanghai Jiao Tong University, Shanghai, China
24
Hu C, Zhu S, Liang Y, Song W. Tightly-Coupled Visual-Inertial-Pressure Fusion Using Forward and Backward IMU Preintegration. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3177847]
Affiliation(s)
- Chao Hu
- Ocean College, Zhejiang University, Zhoushan, Zhejiang, China
- Shiqiang Zhu
- Ocean College, Zhejiang University, Zhoushan, Zhejiang, China
- Wei Song
- Ocean College, Zhejiang University, Zhoushan, Zhejiang, China
25
Rong JX, Zhang L, Huang H, Zhang FL. IMU-Assisted Online Video Background Identification. IEEE Transactions on Image Processing 2022; 31:4336-4351. [PMID: 35727783] [DOI: 10.1109/tip.2022.3183442]
Abstract
Distinguishing between dynamic foreground objects and a mostly static background is a fundamental problem in many computer vision and computer graphics tasks. This paper presents a novel online video background identification method assisted by an inertial measurement unit (IMU). Based on the fact that the background motion of a video essentially reflects the 3D camera motion, we leverage IMU data to realize robust camera motion estimation for identifying background feature points by investigating only a few historical frames. We observe that the displacement of the 2D projection of a scene point caused by camera rotation is depth-invariant, and that rotation estimation from IMU data can be quite accurate. We thus propose to analyze 2D feature points by decomposing their 2D motion into two components: a rotation projection and a translation projection. In our method, after establishing the 3D camera rotations, we generate the depth-relevant 2D feature-point movement induced by the camera's 3D translation. Then, by examining the disparity between the inter-frame offset and the projection of the estimated 3D camera motion, we identify the background feature points. In experiments, our online method runs at 30 FPS with only one frame of latency and outperforms state-of-the-art background identification and other relevant methods. Our method directly leads to better camera motion estimation, which benefits many applications such as online video stabilization, SLAM, and image stitching.
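The depth-invariance of rotation-induced image motion follows from the infinite homography H = K R K^-1; a sketch that predicts the rotational component of feature motion so the residual, depth-dependent translational part can be examined (camera conventions assumed):

```python
import numpy as np

def rotation_flow(pts, K, R):
    """Pixel displacement induced purely by camera rotation R (3x3),
    independent of scene depth. pts: (N, 2) pixel coordinates;
    K: (3, 3) intrinsics."""
    H = K @ R @ np.linalg.inv(K)                   # infinite homography
    ph = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous pixels
    q = (H @ ph.T).T
    return q[:, :2] / q[:, 2:3] - pts              # per-feature displacement
```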
26
Zhai Y, Zhang S. A Novel LiDAR–IMU–Odometer Coupling Framework for Two-Wheeled Inverted Pendulum (TWIP) Robot Localization and Mapping with Nonholonomic Constraint Factors. Sensors (Basel) 2022; 22:4778. [PMID: 35808273] [PMCID: PMC9268906] [DOI: 10.3390/s22134778]
Abstract
This paper proposes a method for the localization and mapping of a two-wheeled inverted pendulum (TWIP) robot on approximately flat ground using a LiDAR-IMU-odometer system. When the TWIP is in motion, it is constrained by the ground and suffers motion disturbances caused by rough terrain or shaking. Combining the motion characteristics of the TWIP, this paper proposes a LiDAR-IMU-odometer localization framework that formulates a factor graph with five types of factors, thereby coupling relative and absolute measurements from different sensors (including ground constraints) into the system. Moreover, we analyze the constraint dimension of each factor according to the motion characteristics of the TWIP and propose a new nonholonomic constraint factor for the odometry preintegration constraint and the ground constraint factor, so that they can be added naturally to the factor graph with the robot state node on SE(3); we also calculate the uncertainty of each constraint. Using such nonholonomic constraint factors, a complete LiDAR-IMU-odometer motion estimation system for the TWIP is developed via smoothing and mapping. Indoor and outdoor experiments show that our method achieves better accuracy for two-wheeled inverted pendulum robots.
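A simplified analogue of a nonholonomic constraint factor: on flat ground a wheeled base should have (near) zero body-frame lateral and vertical velocity, giving a small residual whose weight expresses how strictly the constraint holds. The residual form and weighting are assumptions:

```python
import numpy as np

def nonholonomic_residual(v_body):
    """Residual penalizing lateral (y) and vertical (z) body velocities."""
    return np.array([v_body[1], v_body[2]])

def nonholonomic_cost(v_body, sigma=0.05):
    """Whitened squared cost; a larger sigma on rough terrain softens it."""
    r = nonholonomic_residual(v_body) / sigma
    return float(r @ r)
```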
27
Positioning of Quadruped Robot Based on Tightly Coupled LiDAR Vision Inertial Odometer. Remote Sensing 2022. [DOI: 10.3390/rs14122945]
Abstract
Quadruped robots, an important class of unmanned ground platforms, have broad potential for applications in education, service, industry, the military, and other fields. Their independent positioning plays a key role in completing assigned tasks in complex environments. However, positioning based on global navigation satellite systems (GNSS) may suffer jamming, and quadruped robots may not operate properly in environments sheltered by buildings. In this paper, a tightly coupled LiDAR-vision-inertial odometer (LVIO) is proposed to address the positioning inaccuracy of quadruped robots, whose leg and foot structures alone provide poor odometry information. With this optimization method, the point cloud data obtained by 3D LiDAR, the image features obtained by binocular vision, and the IMU inertial data are combined to improve precise indoor and outdoor positioning of a quadruped robot. This method reduces the errors caused by the uniform-motion model in laser odometry, as well as the error-matching in dynamic scenes caused by image blur during rapid robot movements; at the same time, it alleviates the impact of drift on inertial measurements. Finally, a physical platform was built with a quadruped robot in the laboratory for verification. The experimental results show that the designed LVIO achieves high-precision, robust positioning in four groups of experiments, both indoors and outdoors, verifying the feasibility and effectiveness of the proposed method.
28
Abstract
The demand for intelligent unmanned platforms that achieve autonomous navigation and positioning in large-scale environments has increased steadily, and LiDAR-based Simultaneous Localization and Mapping (SLAM) is the mainstream research scheme. However, LiDAR-based SLAM systems degrade in extreme environments with high dynamics or sparse features, affecting localization and mapping performance. In recent years, a large number of LiDAR-based multi-sensor fusion SLAM works have emerged in pursuit of more stable and robust systems. This work highlights the development process of LiDAR-based multi-sensor fusion SLAM and the latest research. After summarizing the basic idea of SLAM and the necessity of multi-sensor fusion, this paper introduces the basic principles and recent work of multi-sensor fusion in detail from four aspects, organized by the types of fused sensors and the data-coupling methods. Meanwhile, we review several SLAM datasets and compare the performance of five open-source algorithms on the UrbanNav dataset. Finally, the development trends and popular research directions of SLAM based on 3D LiDAR multi-sensor fusion are discussed and summarized.
29
Cremona J, Comelli R, Pire T. Experimental evaluation of Visual-Inertial Odometry systems for arable farming. J Field Robot 2022. [DOI: 10.1002/rob.22099]
Affiliation(s)
- Javier Cremona
- CIFASIS, French Argentine International Center for Information and Systems Sciences (CONICET-UNR), Rosario, Argentina
- Román Comelli
- CIFASIS, French Argentine International Center for Information and Systems Sciences (CONICET-UNR), Rosario, Argentina
- Taihú Pire
- CIFASIS, French Argentine International Center for Information and Systems Sciences (CONICET-UNR), Rosario, Argentina
30
GNSS-RTK Adaptively Integrated with LiDAR/IMU Odometry for Continuously Global Positioning in Urban Canyons. Applied Sciences 2022. [DOI: 10.3390/app12105193]
Abstract
Global Navigation Satellite System Real-Time Kinematic (GNSS-RTK) is an indispensable source of absolute positioning for autonomous systems. Unfortunately, GNSS-RTK performance degrades significantly in urban canyons due to the notorious multipath and non-line-of-sight (NLOS) effects. By contrast, LiDAR/inertial odometry (LIO) can provide locally accurate pose estimates in structured urban scenarios but is subject to drift over time. Considering their complementarity, this paper proposes GNSS-RTK adaptively integrated with LIO, aiming to realize continuous and accurate global positioning for autonomous systems in urban scenarios. As one of the main contributions, this paper proposes to assess the quality of the GNSS-RTK solution based on the point cloud map incrementally generated by LIO: a smaller mean elevation-angle mask of the surrounding point cloud indicates a relatively open area, so the corresponding GNSS-RTK solution is likely reliable. Global factor-graph optimization is performed to fuse reliable GNSS-RTK solutions with LIO. Evaluations are performed on datasets collected in typical urban canyons of Hong Kong. With the proposed GNSS-RTK selection strategy, the performance of the GNSS-RTK/LIO integration improves significantly, reducing the absolute translation error by more than 50% compared with the conventional integration method that uses all GNSS-RTK solutions.
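The GNSS-RTK gating idea can be sketched as a mean elevation angle over the surrounding LiDAR points; the thresholds below are illustrative, not the paper's values:

```python
import numpy as np

def mean_elevation_deg(points, min_range=2.0):
    """Mean elevation angle (deg) of local-frame points beyond min_range;
    small values suggest an open sky view around the vehicle."""
    r_xy = np.linalg.norm(points[:, :2], axis=1)
    keep = r_xy > min_range
    if not keep.any():
        return 90.0
    return np.degrees(np.arctan2(points[keep, 2], r_xy[keep]).mean())

def gnss_rtk_reliable(points, max_mean_elev_deg=15.0):
    return mean_elevation_deg(points) < max_mean_elev_deg
```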
31
Yang M, Sun X, Jia F, Rushworth A, Dong X, Zhang S, Fang Z, Yang G, Liu B. Sensors and Sensor Fusion Methodologies for Indoor Odometry: A Review. Polymers (Basel) 2022; 14:2019. [PMID: 35631899] [PMCID: PMC9143447] [DOI: 10.3390/polym14102019]
Abstract
Although Global Navigation Satellite Systems (GNSSs) generally provide adequate accuracy for outdoor localization, this is not the case for indoor environments, due to signal obstruction. Therefore, a self-contained localization scheme is beneficial under such circumstances. Modern sensors and algorithms endow moving robots with the capability to perceive their environment and enable the deployment of novel localization schemes, such as odometry and Simultaneous Localization and Mapping (SLAM); the former focuses on incremental localization, while the latter concurrently maintains an interpretable map of the environment. In this context, this paper conducts a comprehensive review of sensor modalities for indoor odometry, including Inertial Measurement Units (IMUs), Light Detection and Ranging (LiDAR), radio detection and ranging (radar), and cameras, as well as the applications of polymers in these sensors. Furthermore, the algorithms and fusion frameworks for pose estimation and odometry with these sensors are analyzed and discussed. This paper thus traces the pathway of indoor odometry from principle to application. Finally, some future prospects are discussed.
Affiliation(s)
- Mengshen Yang
- Department of Mechanical, Materials and Manufacturing Engineering, Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China
- Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201, China
- Zhejiang Key Laboratory of Robotics and Intelligent Manufacturing Equipment Technology, Ningbo 315201, China
- Xu Sun
- Department of Mechanical, Materials and Manufacturing Engineering, Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China
- Nottingham Ningbo China Beacons of Excellence Research and Innovation Institute, University of Nottingham Ningbo China, Ningbo 315100, China
- Correspondence: (X.S.); (A.R.); (G.Y.)
- Fuhua Jia
- Department of Mechanical, Materials and Manufacturing Engineering, Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China
- Adam Rushworth
- Department of Mechanical, Materials and Manufacturing Engineering, Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China
- Correspondence: (X.S.); (A.R.); (G.Y.)
- Xin Dong
- Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, Nottingham NG7 2RD, UK
- Sheng Zhang
- Ningbo Research Institute, Zhejiang University, Ningbo 315100, China
- Zaojun Fang
- Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201, China
- Zhejiang Key Laboratory of Robotics and Intelligent Manufacturing Equipment Technology, Ningbo 315201, China
- Guilin Yang
- Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201, China
- Zhejiang Key Laboratory of Robotics and Intelligent Manufacturing Equipment Technology, Ningbo 315201, China
- Correspondence: (X.S.); (A.R.); (G.Y.)
- Bingjian Liu
- Department of Mechanical, Materials and Manufacturing Engineering, Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China
|
Zhao Z, Zhang Y, Long L, Lu Z, Shi J. Efficient and adaptive lidar–visual–inertial odometry for agricultural unmanned ground vehicle. INT J ADV ROBOT SYST 2022. [DOI: 10.1177/17298806221094925] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The localization accuracy of agricultural unmanned ground vehicles directly affects their navigation accuracy. However, due to the changeable environment and sparse features of agricultural scenes, it is challenging for these vehicles to localize precisely with a single sensor in global positioning system-denied areas. In this article, we present an efficient and adaptive sensor-fusion odometry framework based on simultaneous localization and mapping to handle the localization of agricultural unmanned ground vehicles without the assistance of a global positioning system. The framework leverages three kinds of sub-odometry (lidar, visual, and inertial) and automatically combines them depending on the environment to provide accurate pose estimation in real time, trading off the robustness and accuracy of the estimate. The efficiency and adaptability derive mainly from the novel surfel-based iterative closest point (ICP) method we propose for lidar odometry, which uses a variable surfel radius range and adaptive ICP initialization to improve pose estimation accuracy in different environments. We test our system in various working zones of agricultural unmanned ground vehicles and on other open datasets, and the results show that the proposed method outperforms state-of-the-art methods in accuracy, efficiency, and robustness.
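The paper's surfel variant is not reproduced here, but a minimal classic point-to-point ICP sketch shows the skeleton that their adaptive initialization and variable surfel radius would refine (all parameters illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, dst, T_init=np.eye(4), iters=20, tol=1e-6):
    """Classic point-to-point ICP (not the paper's surfel variant). The
    paper's adaptations would replace the uniform nearest-neighbour match
    with surfel neighbourhoods of adaptive radius, and would seed T_init
    from the fused visual/inertial prediction."""
    T = T_init.copy()
    tree = cKDTree(dst)
    prev_err = np.inf
    for _ in range(iters):
        src_h = (T[:3, :3] @ src.T).T + T[:3, 3]
        dists, idx = tree.query(src_h)
        err = dists.mean()
        if abs(prev_err - err) < tol:          # iterative termination threshold
            break
        prev_err = err
        # Closed-form rigid alignment (Kabsch) of the matched pairs.
        p, q = src_h, dst[idx]
        mu_p, mu_q = p.mean(0), q.mean(0)
        H = (p - mu_p).T @ (q - mu_q)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_q - R @ mu_p
        dT = np.eye(4); dT[:3, :3] = R; dT[:3, 3] = t
        T = dT @ T
    return T
```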
Collapse
Affiliation(s)
- Zixu Zhao
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
| | - Yucheng Zhang
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
| | - Long Long
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
| | - Zaiwang Lu
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
| | - Jinglin Shi
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
33
|
Zhao M, Zhou D, Song X, Chen X, Zhang L. DiT-SLAM: Real-Time Dense Visual-Inertial SLAM with Implicit Depth Representation and Tightly-Coupled Graph Optimization. SENSORS 2022; 22:s22093389. [PMID: 35591079 PMCID: PMC9102487 DOI: 10.3390/s22093389] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/19/2022] [Revised: 04/26/2022] [Accepted: 04/27/2022] [Indexed: 02/04/2023]
Abstract
Recently, generating dense maps in real time has become a hot research topic in the mobile robotics community, since dense maps provide more informative and continuous features than sparse maps. Implicit depth representations (e.g., depth codes) derived from deep neural networks have been employed in visual-only and visual-inertial simultaneous localization and mapping (SLAM) systems, which achieve promising performance on both camera motion and local dense geometry estimation from monocular images. However, existing visual-inertial SLAM systems combined with depth codes are either built on a filter-based SLAM framework, which can only update poses and maps within a relatively small local time window, or based on a loosely-coupled framework in which the prior geometric constraints from the depth estimation network are not exploited for state estimation. To address these drawbacks, we propose DiT-SLAM, a novel real-time Dense visual-inertial SLAM with implicit depth representation and Tightly-coupled graph optimization. Most importantly, the poses, sparse maps, and low-dimensional depth codes are optimized in a tightly-coupled graph that considers the visual, inertial, and depth residuals simultaneously. Meanwhile, we propose a lightweight monocular depth estimation and completion network, combining attention mechanisms with a conditional variational auto-encoder (CVAE), to predict uncertainty-aware dense depth maps from lower-dimensional codes. Furthermore, a robust point sampling strategy incorporating the spatial distribution of 2D feature points is proposed to provide geometric constraints in the tightly-coupled optimization, especially for textureless or featureless cases in indoor environments. We evaluate our system on open benchmarks. The proposed methods achieve better performance on both dense depth estimation and trajectory estimation than the baseline and other systems.
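The essence of tight coupling is that one solver updates poses and depth codes against all residual types at once. A toy sketch (the residual functions are quadratic placeholders, not DiT-SLAM's reprojection, preintegration, or depth-code residuals) shows that structure:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative only: r_visual / r_inertial / r_depth stand in for the
# reprojection, preintegration, and depth-code residuals the paper
# optimizes jointly; here they are toy placeholders with made-up targets.
def r_visual(pose):   return pose[:3] - np.array([1.0, 0.0, 0.5])
def r_inertial(pose): return 0.5 * (pose[3:6] - np.array([0.0, 0.1, 0.0]))
def r_depth(code):    return 0.1 * code            # prior pulling the code to 0

def stacked(x):
    pose, code = x[:6], x[6:]
    # Tightly coupled: a single cost over poses and depth codes, so depth
    # evidence reshapes the trajectory estimate and vice versa.
    return np.concatenate([r_visual(pose), r_inertial(pose), r_depth(code)])

x0 = np.zeros(6 + 8)                               # 6-DoF pose + 8-D depth code
sol = least_squares(stacked, x0)
print(sol.x[:6], sol.x[6:])
```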
Collapse
Affiliation(s)
- Mingle Zhao
- Institute of Remote Sensing and Geographic Information System, Peking University, Beijing 100871, China; (M.Z.); (X.C.)
- Robotics and Autonomous Driving Laboratory, Baidu Research, Beijing 100085, China; (X.S.); (L.Z.)
| | - Dingfu Zhou
- Robotics and Autonomous Driving Laboratory, Baidu Research, Beijing 100085, China; (X.S.); (L.Z.)
- National Engineering Laboratory of Deep Learning Technology and Application, Beijing 100085, China
- Correspondence:
| | - Xibin Song
- Robotics and Autonomous Driving Laboratory, Baidu Research, Beijing 100085, China; (X.S.); (L.Z.)
- National Engineering Laboratory of Deep Learning Technology and Application, Beijing 100085, China
| | - Xiuwan Chen
- Institute of Remote Sensing and Geographic Information System, Peking University, Beijing 100871, China; (M.Z.); (X.C.)
| | - Liangjun Zhang
- Robotics and Autonomous Driving Laboratory, Baidu Research, Beijing 100085, China; (X.S.); (L.Z.)
- National Engineering Laboratory of Deep Learning Technology and Application, Beijing 100085, China
| |
Collapse
|
34
|
LiDAR-Inertial-GNSS Fusion Positioning System in Urban Environment: Local Accurate Registration and Global Drift-Free. REMOTE SENSING 2022. [DOI: 10.3390/rs14092104] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
To address the insufficient accuracy and accumulated error of point cloud registration in LiDAR-inertial odometry (LIO) in urban environments, we propose a LiDAR-inertial-GNSS fusion positioning algorithm based on voxelized accurate registration. Firstly, a voxelized point cloud downsampling method based on curvature segmentation is proposed: points are roughly classified by a curvature threshold, and voxelized downsampling is performed using a HashMap instead of a random sample consensus algorithm. Secondly, a point cloud registration model is constructed from the nearest neighbors of each point and its neighborhood point set, and an iterative termination threshold is set to reduce the probability of converging to a local optimum. The registration speed for a single point cloud frame improves by an order of magnitude. Finally, we propose a LIO-GNSS fusion positioning model based on graph optimization that uses GNSS observations weighted by confidence to globally correct local drift. The experimental results show that the average root mean square error of the absolute trajectory error of our algorithm is 1.58 m in a large-scale outdoor environment, an improvement of approximately 83.5% over similar algorithms. This demonstrates that our algorithm realizes more continuous and accurate position and attitude estimation and map reconstruction in urban environments.
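A minimal sketch of hash-map voxel downsampling, in the spirit of the abstract's HashMap-based step (Python's dict plays the hash map; the curvature pre-classification is omitted and the voxel size is illustrative):

```python
import numpy as np

def voxel_downsample(points, voxel=0.2):
    """Hash-map voxel downsampling: bucket points by integer voxel index
    and keep one centroid per occupied voxel. The paper additionally
    pre-classifies points by a curvature threshold; that step is omitted."""
    buckets = {}
    keys = np.floor(points / voxel).astype(np.int64)
    for key, p in zip(map(tuple, keys), points):
        s, n = buckets.get(key, (np.zeros(3), 0))
        buckets[key] = (s + p, n + 1)
    return np.array([s / n for s, n in buckets.values()])

cloud = np.random.rand(10000, 3) * 10.0            # synthetic 10 m cube of points
print(voxel_downsample(cloud).shape)               # far fewer points than 10000
```

Bucketing by voxel key is O(N) per frame, which is where the order-of-magnitude speedup over sampling-based reduction comes from.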
Collapse
|
35
|
An Evaluation of MEMS-IMU Performance on the Absolute Trajectory Error of Visual-Inertial Navigation System. MICROMACHINES 2022; 13:mi13040602. [PMID: 35457906 PMCID: PMC9024873 DOI: 10.3390/mi13040602] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Revised: 04/04/2022] [Accepted: 04/09/2022] [Indexed: 02/01/2023]
Abstract
Nowadays, accurate and robust localization is a prerequisite for high autonomy in robots and emerging applications, and multiple sensors are increasingly fused to meet these requirements. Much related work, such as visual-inertial odometry (VIO), exploits the complementary sensing capabilities of IMUs and cameras. However, few studies examine the impact of IMUs of different performance grades on the accuracy of sensor fusion. In practical scenarios, especially large-scale hardware deployment, the question arises of how to choose an IMU appropriately. In this paper, we selected six representative IMUs, ranging from consumer grade to tactical grade, for evaluation. We analyzed the absolute trajectory error of a Visual-Inertial System (VINS-Fusion) with each IMU across different scenarios. IMU assistance improves the accuracy of multi-sensor fusion, but across the eight experimental scenarios the differences between MEMS-IMU grades are modest; even consumer-grade IMUs can deliver excellent results. In addition, IMUs with low noise are more versatile and stable across scenarios. The results chart a route for the development of Inertial Navigation System (INS) fusion with visual odometry and, at the same time, provide a guideline for IMU selection.
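The paper's comparison metric, absolute trajectory error (ATE), is standard: rigidly align the estimated trajectory to ground truth, then take the RMSE of the residual position errors. A minimal sketch (Umeyama-style alignment; the synthetic trajectories are illustrative):

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute trajectory error: rigidly align the estimated positions to
    ground truth (rotation + translation), then take the RMSE of the
    residual distances. est, gt: (N, 3) arrays of time-matched positions."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    aligned = (R @ est.T).T + t
    return np.sqrt(((aligned - gt) ** 2).sum(1).mean())

gt = np.cumsum(np.random.randn(500, 3) * 0.01, axis=0)   # synthetic trajectory
est = gt + np.random.randn(500, 3) * 0.05                # noisy estimate of it
print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```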
Collapse
|
36
|
Liu Y, Zhao C, Ren M. An Enhanced Hybrid Visual-Inertial Odometry System for Indoor Mobile Robot. SENSORS (BASEL, SWITZERLAND) 2022; 22:2930. [PMID: 35458915 PMCID: PMC9024916 DOI: 10.3390/s22082930] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Revised: 04/02/2022] [Accepted: 04/07/2022] [Indexed: 06/14/2023]
Abstract
As mobile robots become widely used, accurate localization is critical to the overall system. Compared with single-sensor positioning systems, multi-sensor fusion provides better performance, accuracy, and robustness. At present, camera and IMU (Inertial Measurement Unit) fusion positioning is extensively studied, and many representative Visual-Inertial Odometry (VIO) systems have been produced. The Multi-State Constraint Kalman Filter (MSCKF), one of the tightly coupled filtering methods, is characterized by high accuracy and low computational load among typical VIO methods. In the standard framework, however, IMU information is not reused after state prediction and covariance propagation. In this article, we propose a framework that introduces the IMU pre-integration result into the MSCKF as an observation to improve positioning accuracy. Additionally, the system uses the Helmert variance component estimation (HVCE) method to adjust the weighting between feature points and pre-integration, further improving accuracy. This article also uses the wheel odometer information of the mobile robot to perform zero-velocity detection, zero-velocity updates, and pre-integration updates, enhancing the positioning accuracy of the system. Finally, experiments in the Gazebo simulation environment, on public datasets, and in real scenarios show that the proposed algorithm achieves better accuracy than existing mainstream algorithms while maintaining real-time performance.
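A minimal sketch of the zero-velocity idea: detect a stationary interval from a short sensor window, then apply the zero-velocity pseudo-measurement as a Kalman update on the velocity state. The detector thresholds and the measurement noise value are illustrative, not the paper's:

```python
import numpy as np

def is_stationary(gyro_win, accel_win, g=9.81,
                  gyro_thresh=0.01, accel_thresh=0.05):
    """Simple stance detector over a short window of (N, 3) samples: declare
    the body stationary when angular rate is near zero and specific force is
    near gravity. Thresholds here are illustrative placeholders."""
    gyro_ok = np.linalg.norm(gyro_win, axis=1).mean() < gyro_thresh
    accel_ok = abs(np.linalg.norm(accel_win, axis=1).mean() - g) < accel_thresh
    return gyro_ok and accel_ok

def zupt_update(v, P, R_zupt=1e-4):
    """Zero-velocity update as a Kalman pseudo-measurement z = 0 of the
    velocity state v, with P the 3x3 velocity covariance block."""
    H = np.eye(3)
    S = H @ P @ H.T + R_zupt * np.eye(3)
    K = P @ H.T @ np.linalg.inv(S)
    v = v + K @ (np.zeros(3) - v)          # innovation: measured velocity is 0
    P = (np.eye(3) - K @ H) @ P
    return v, P
```

In the paper's system the detection comes from wheel odometry rather than inertial thresholds, but the update step has the same structure.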
Collapse
|
37
|
A SLAM System with Direct Velocity Estimation for Mechanical and Solid-State LiDARs. REMOTE SENSING 2022. [DOI: 10.3390/rs14071741] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Simultaneous localization and mapping (SLAM) is essential for intelligent robots operating in unknown environments. However, existing algorithms are typically developed for specific types of solid-state LiDAR, so their feature representations transfer poorly to new sensors. Moreover, LiDAR-based SLAM methods are limited by the distortions caused by LiDAR ego-motion. To address these issues, this paper presents a versatile and velocity-aware LiDAR-based odometry and mapping (VLOM) system. A spherical projection-based feature extraction module processes the raw point clouds generated by various LiDARs, avoiding time-consuming adaptation to their irregular scan patterns. The extracted features are grouped into higher-level clusters to filter out small objects and reduce false matches during feature association. Furthermore, bundle adjustment is adopted to jointly estimate the poses and velocities of multiple scans, effectively improving velocity estimation accuracy and compensating for point cloud distortion. Experiments on publicly available datasets demonstrate the superiority of VLOM over other state-of-the-art LiDAR-based SLAM systems in terms of accuracy and robustness. Additionally, the satisfactory performance of VLOM on RS-LiDAR-M1, a newly released solid-state LiDAR, shows its applicability to a wide range of LiDARs.
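A minimal sketch of the spherical projection idea: map an unordered point cloud onto a 2D range image indexed by azimuth and elevation, so downstream feature extraction no longer depends on the sensor's scan pattern. The field of view and grid size below are illustrative, not VLOM's values:

```python
import numpy as np

def spherical_projection(points, h=64, w=1024,
                         fov_up=np.deg2rad(15.0), fov_down=np.deg2rad(-25.0)):
    """Project an unordered (N, 3) point cloud onto an (h, w) range image
    indexed by azimuth (columns) and elevation (rows)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                       # azimuth in (-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = (fov_up - pitch) / (fov_up - fov_down) * h
    v = np.clip(v, 0, h - 1).astype(int)
    img = np.full((h, w), np.inf)
    np.minimum.at(img, (v, u), r)                # keep nearest return per pixel
    return img
```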
Collapse
|
38
|
Lim H, Jeon J, Myung H. UV-SLAM: Unconstrained Line-Based SLAM Using Vanishing Points for Structural Mapping. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3140816] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
39
|
Zhang L, Wisth D, Camurri M, Fallon M. Balancing the Budget: Feature Selection and Tracking for Multi-Camera Visual-Inertial Odometry. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2021.3137910] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
|
40
|
Dexheimer E, Peluse P, Chen J, Pritts J, Kaess M. Information-Theoretic Online Multi-Camera Extrinsic Calibration. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3145061] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
41
|
Kim Y, Yu B, Lee EM, Kim JH, Park HW, Myung H. STEP: State Estimator for Legged Robots Using a Preintegrated Foot Velocity Factor. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3150844] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
42
|
Song Y, Zhang Z, Wu J, Wang Y, Zhao L, Huang S. A Right Invariant Extended Kalman Filter for Object Based SLAM. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2021.3139370] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
43
|
Sola J, Vallve J, Casals J, Deray J, Fourmy M, Atchuthan D, Corominas-Murtra A, Andrade-Cetto J. WOLF: A Modular Estimation Framework for Robotics Based on Factor Graphs. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3151404] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Affiliation(s)
- Joan Sola
- Institut de Robòtica i Informàtica Industrial (IRI), CSIC-UPC, Barcelona, Spain
| | - Joan Vallve
- Institut de Robòtica i Informàtica Industrial (IRI), CSIC-UPC, Barcelona, Spain
| | - Joaquim Casals
- Institut de Robòtica i Informàtica Industrial (IRI), CSIC-UPC, Barcelona, Spain
| | - Jeremie Deray
- Institut de Robòtica i Informàtica Industrial (IRI), CSIC-UPC, Barcelona, Spain
| | | | | | | | - Juan Andrade-Cetto
- Institut de Robòtica i Informàtica Industrial (IRI), CSIC-UPC, Barcelona, Spain
| |
Collapse
|
44
|
Warburg F, Hernandez-Juarez D, Tarrio J, Vakhitov A, Bonde U, Alcantarilla PF. Self-Supervised Depth Completion for Active Stereo. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3145512] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
45
|
Nguyen TM, Cao M, Yuan S, Lyu Y, Nguyen TH, Xie L. VIRAL-Fusion: A Visual-Inertial-Ranging-Lidar Sensor Fusion Approach. IEEE T ROBOT 2022. [DOI: 10.1109/tro.2021.3094157] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
46
|
Cioffi G, Cieslewski T, Scaramuzza D. Continuous-Time Vs. Discrete-Time Vision-Based SLAM: A Comparative Study. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3143303] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
47
|
Visual-Inertial Cross Fusion: A Fast and Accurate State Estimation Framework for Micro Flapping Wing Rotors. DRONES 2022. [DOI: 10.3390/drones6040090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Real-time and drift-free state estimation is essential for the flight control of Micro Aerial Vehicles (MAVs). Due to the vibration caused by the particular flapping motion and the stringent constraints on scale, weight, and power, state estimation divergence remains an open challenge for the long-term stable flight of flapping wing platforms. Unlike on conventional MAVs, directly adopting mature state estimation strategies, such as inertial or vision-based methods, rarely yields satisfactory sensing performance on flapping wing platforms. Inertial sensors offer a high sampling frequency but suffer from flapping-induced oscillation and drift. External visual sensors, such as motion capture systems, provide accurate feedback but at a relatively low sampling rate and with severe delay. This work proposes a novel state estimation framework that combines the merits of both to address the key sensing challenges of a special flapping wing platform, the micro flapping wing rotor (FWR). In particular, a cross-fusion scheme, which integrates two alternately updated Extended Kalman Filters through a convex combination, is proposed to tightly fuse onboard inertial and external visual information. This design leverages both the high sampling rate of the inertial feedback and the accuracy of the external vision-based feedback. To address the sensing delay of the visual feedback, a ring buffer caches historical states for online drift compensation. Experimental validations were conducted on two micro FWRs with different actuation and control principles; both achieve real-time, drift-free state estimation.
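The ring-buffer mechanism can be sketched minimally: cache recent high-rate states so that when a delayed external measurement arrives, it is compared against the state from its capture time, not the current one. The `StateBuffer` class, rates, and delay below are all illustrative:

```python
from collections import deque

class StateBuffer:
    """Fixed-length ring buffer of (timestamp, state) pairs. When a delayed
    external measurement arrives, the cached state nearest its capture time
    gives the innovation used to correct the current drift estimate."""
    def __init__(self, maxlen=200):
        self.buf = deque(maxlen=maxlen)

    def push(self, t, state):
        self.buf.append((t, state))

    def state_at(self, t_meas):
        # Nearest-timestamp lookup; interpolation would be a refinement.
        return min(self.buf, key=lambda ts: abs(ts[0] - t_meas))[1]

buf = StateBuffer()
for k in range(100):
    buf.push(k * 0.002, {"pos": k * 0.001})     # 500 Hz inertial states
z_mocap, t_capture = 0.071, 0.140               # measurement arriving late
drift = z_mocap - buf.state_at(t_capture)["pos"]
print(f"estimated drift: {drift:.4f}")
```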
Collapse
|
48
|
IMU-Aided Registration of MLS Point Clouds Using Inertial Trajectory Error Model and Least Squares Optimization. REMOTE SENSING 2022. [DOI: 10.3390/rs14061365] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Mobile laser scanning (MLS) point cloud registration plays a critical role in mobile 3D mapping and inspection, but conventional registration methods designed for terrestrial laser scanning (TLS) are not suitable for MLS. To cope with this challenge, we use an inertial measurement unit (IMU) to assist registration and propose an MLS point cloud registration method based on an inertial trajectory error model. First, we propose an error model of the inertial trajectory over a short time period to construct constraints between trajectory points at different times. On this basis, a relationship between the point cloud registration error and the inertial trajectory error is established, and the trajectory error parameters are estimated by minimizing the point cloud registration error with least squares optimization. Finally, a reliable and concise inertial-assisted MLS registration algorithm is realized. We carried out experiments in three different scenarios: indoor, outdoor, and integrated indoor-outdoor, evaluating the overall performance, accuracy, and efficiency of the proposed method. Compared with the ICP method, the accuracy and speed of the proposed method improved by 2 and 2.8 times, respectively, verifying its effectiveness and reliability. Furthermore, the experimental results demonstrate the value of our method for constructing a reliable and scalable mobile 3D mapping system suitable for complex scenes.
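A toy version of the underlying idea, assuming the short-horizon inertial error can be modeled as a low-order polynomial in time, e(t) = a + b*t per axis, with (a, b) estimated by least squares from registration residuals. The paper's error model and estimator are more elaborate; this only shows the structure, with synthetic data:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 50)                        # trajectory timestamps (s)
residuals = 0.02 + 0.15 * t + 0.005 * np.random.randn(t.size)  # synthetic misfit (m)

A = np.column_stack([np.ones_like(t), t])            # design matrix for a + b*t
(a, b), *_ = np.linalg.lstsq(A, residuals, rcond=None)
corrected = residuals - (a + b * t)                  # de-drifted registration error
print(f"bias a = {a:.3f} m, drift rate b = {b:.3f} m/s")
```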
Collapse
|
49
|
LiDAR-Visual-Inertial Odometry Based on Optimized Visual Point-Line Features. REMOTE SENSING 2022. [DOI: 10.3390/rs14030622] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
This study presents a LiDAR-Visual-Inertial Odometry (LVIO) system based on optimized visual point-line features, which effectively compensates for the limitations of a single sensor in real-time localization and mapping. Firstly, an improved line feature extraction in scale space and a constrained matching strategy based on the least squares method are proposed to provide richer visual features for the LVIO front end. Secondly, multi-frame LiDAR point clouds are projected into the visual frame to associate depth with visual features. Thirdly, the initial estimates from Visual-Inertial Odometry (VIO) are used to improve the accuracy of LiDAR scan matching. Finally, a factor graph based on a Bayesian network is proposed to build the LVIO fusion system, in which GNSS and loop-closure factors globally constrain the LVIO. Evaluations on indoor and outdoor datasets show that the proposed algorithm is superior to other state-of-the-art algorithms in real-time efficiency, positioning accuracy, and mapping quality. Specifically, the average RMSE of the absolute trajectory error is 0.075 m indoors and 3.77 m outdoors. These results show that the proposed algorithm effectively mitigates line feature mismatching and the accumulated error of local sensors in mobile platform positioning.
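The depth-association step can be sketched minimally: project camera-frame LiDAR points into the image and give each 2D feature the depth of its nearest projection. The function name, pinhole model, and the 3-pixel gating radius are assumptions for illustration; real systems add multi-frame accumulation and consistency checks:

```python
import numpy as np
from scipy.spatial import cKDTree

def associate_depth(features_uv, lidar_pts_cam, K):
    """Assign each (M, 2) visual feature a depth from the nearest projected
    LiDAR point. lidar_pts_cam: (N, 3) points already in the camera frame;
    K: 3x3 pinhole intrinsic matrix."""
    in_front = lidar_pts_cam[:, 2] > 0.1              # drop points behind camera
    pts = lidar_pts_cam[in_front]
    proj = (K @ pts.T).T
    uv = proj[:, :2] / proj[:, 2:3]                   # perspective division
    tree = cKDTree(uv)
    dist, idx = tree.query(features_uv)
    depth = pts[idx, 2]
    depth[dist > 3.0] = np.nan                        # no LiDAR support nearby
    return depth
```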
Collapse
|
50
|
A Comprehensive Survey of the Recent Studies with UAV for Precision Agriculture in Open Fields and Greenhouses. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12031047] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
The growing world population makes it necessary to confront challenges such as climate change and to produce food efficiently and quickly. Minimizing cost, maximizing income, preventing environmental pollution, and saving water and energy must all be taken into account in this process. The use of information and communication technologies (ICTs) in agriculture to meet all of these criteria serves the purpose of precision agriculture. As unmanned aerial vehicles (UAVs) can easily obtain real-time data, they have great potential to address and optimize solutions to the problems faced by agriculture. Despite limitations such as battery life, payload, and weather conditions, UAVs will be used increasingly in agriculture in the future because of the valuable data they obtain and their efficient applications. In the literature, UAVs carry out tasks such as spraying, monitoring, yield estimation, and weed detection. In recent years, articles related to agricultural UAVs have been presented in journals with high impact factors. Most precision agriculture applications with UAVs occur in outdoor environments where GPS access is available, which provides more reliable control of the UAV in both manual and autonomous flights. On the other hand, there are almost no UAV-based applications in greenhouses, where all-season crop production is possible. This paper highlights this gap, provides a comprehensive review of the use of UAVs for agricultural tasks, and emphasizes the importance of simultaneous localization and mapping (SLAM) for UAV operation in greenhouses.
Collapse
|