1
Ferguson JM, Ertop TE, Herrell SD, Webster RJ. Unified Robot and Inertial Sensor Self-Calibration. Robotica 2023; 41:1590-1616. [PMID: 37732333 PMCID: PMC10508886 DOI: 10.1017/s0263574723000012]
Abstract
Robots and inertial measurement units (IMUs) are typically calibrated independently. IMUs are placed in purpose-built, expensive automated test rigs. Robot poses are typically measured using highly accurate (and thus expensive) tracking systems. In this paper, we present a quick, easy, and inexpensive new approach to calibrate both simultaneously, simply by attaching the IMU anywhere on the robot's end effector and moving the robot continuously through space. Our approach provides a fast and inexpensive alternative to both robot and IMU calibration, without any external measurement systems. We accomplish this using continuous-time batch estimation, providing statistically optimal solutions. Under Gaussian assumptions, we show that this becomes a nonlinear least squares problem and analyze the structure of the associated Jacobian. Our methods are validated both numerically and experimentally and compared to standard individual robot and IMU calibration methods.
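Under the stated Gaussian assumptions, the batch calibration reduces to stacking measurement residuals and solving a nonlinear least-squares problem. The following minimal sketch illustrates that structure on a toy 1-D problem, recovering an assumed accelerometer scale and bias from a known sinusoidal motion; it is not the authors' continuous-time implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the batch estimator: the end effector follows
# x(t) = 0.1 sin(2*pi*t), so the true acceleration is known analytically,
# and the IMU is assumed to corrupt it with a scale factor and a bias.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 500)
true_accel = -0.1 * (2 * np.pi) ** 2 * np.sin(2 * np.pi * t)
meas = 1.02 * true_accel + 0.05 + rng.normal(0.0, 0.01, t.size)

def residuals(theta):
    scale, bias = theta
    return scale * true_accel + bias - meas  # stacked Gaussian residuals

sol = least_squares(residuals, x0=[1.0, 0.0])  # trust-region NLS solve
print(sol.x)  # approximately [1.02, 0.05]
```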
Affiliation(s)
- James M. Ferguson, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
- Tayfun Efe Ertop, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
- S. Duke Herrell, Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Robert J. Webster, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
2
Yang Z, Li Y, Lin J, Sun Y, Zhu J. Tightly-coupled fusion of iGPS measurements in optimization-based visual SLAM. Opt Express 2023; 31:5910-5926. [PMID: 36823861 DOI: 10.1364/oe.481848]
Abstract
Monocular visual Simultaneous Localization and Mapping (SLAM) can achieve accurate and robust pose estimation with excellent perceptual ability. However, image error accumulated over time causes excessive trajectory drift in GPS-denied indoor environments that lack global positioning constraints. In this paper, we propose a novel optimization-based SLAM that fuses rich visual features with indoor GPS (iGPS) measurements, obtained from a workshop Measurement Position System (wMPS), to tackle the trajectory drift of visual SLAM. We first calibrate the spatial shift and temporal offset of the two sensor types using multi-view alignment and pose-optimization bundle adjustment (BA) algorithms, respectively. Then, we initialize camera poses and map points in a unified world frame using iGPS-aided monocular initialization and PnP algorithms. Finally, we tightly couple iGPS measurements and visual observations in a pose-optimization strategy for high-accuracy global localization and mapping. In experiments, public datasets and self-collected sequences are used to evaluate the performance of our approach. The proposed system improves the absolute trajectory error from the current state of the art of 19.16 mm (ORB-SLAM3) to 5.87 mm on the public dataset, and from 31.20 mm to 5.85 mm in the real-world experiment. The proposed system also shows good robustness in the evaluations.
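The tightly-coupled step amounts to a joint cost over visual reprojection and iGPS position residuals. The sketch below shows one assumed, simplified residual construction for a single camera pose; the function names and the weight `w_igps` are illustrative, not taken from the paper.

```python
import numpy as np

# Assumed tightly-coupled residual for one keyframe: reprojection terms
# for its observed map points plus a weighted iGPS term on the camera centre.
def project(K, R, t, X):
    x = K @ (R @ X + t)              # pinhole projection of world point X
    return x[:2] / x[2]

def fused_residual(K, R, t, points3d, obs2d, igps_pos, w_igps=10.0):
    r_vis = [project(K, R, t, X) - u for X, u in zip(points3d, obs2d)]
    cam_center = -R.T @ t            # camera centre in the world/iGPS frame
    r_igps = w_igps * (cam_center - igps_pos)
    return np.concatenate(r_vis + [r_igps])  # stacked for the BA solver
```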
3
Liu Z, Shi D, Li R, Yang S. ESVIO: Event-Based Stereo Visual-Inertial Odometry. Sensors (Basel) 2023; 23:1998. [PMID: 36850602 PMCID: PMC9961954 DOI: 10.3390/s23041998]
Abstract
Emerging event cameras are bio-inspired sensors that output pixel-level brightness changes at extremely high rates, and event-based visual-inertial odometry (VIO) is widely studied and used in autonomous robots. In this paper, we propose an event-based stereo VIO system, ESVIO. First, we present a novel direct event-based VIO method that fuses event depth, Time-Surface images, and pre-integrated inertial measurements to estimate the camera motion and inertial measurement unit (IMU) biases in a sliding-window nonlinear optimization framework, effectively improving state-estimation accuracy and robustness. Second, we design an event-inertial semi-joint initialization method, consisting of event-only initialization followed by event-inertial initial optimization, to rapidly and accurately solve for the initialization parameters of the VIO system, further improving state-estimation accuracy. Based on these two methods, we implement the ESVIO system and evaluate its effectiveness and robustness on various public datasets. The experimental results show that ESVIO achieves good accuracy and robustness compared with other state-of-the-art event-based VIO and stereo visual odometry (VO) systems, with no compromise in real-time performance.
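A Time-Surface of the kind fused here can be pictured as an exponentially decayed map of each pixel's most recent event time. The sketch below assumes a simple `(t, x, y, polarity)` event layout and decay constant; it is an illustration, not ESVIO's implementation.

```python
import numpy as np

# Each pixel stores exp(-(t_now - t_last)/tau): bright where events fired
# recently, zero where no event has been seen.
def time_surface(events, shape, t_now, tau=0.03):
    """events: iterable of (t, x, y, polarity); tau: decay constant in seconds."""
    last_t = np.full(shape, -np.inf)
    for t, x, y, _ in events:
        last_t[y, x] = t             # keep the most recent event per pixel
    return np.exp((last_t - t_now) / tau)

ts = time_surface([(0.010, 5, 3, 1), (0.020, 7, 4, -1)], (8, 10), t_now=0.025)
```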
Affiliation(s)
- Zhe Liu, College of Computer, National University of Defense Technology, Changsha 410005, China
- Dianxi Shi, Artificial Intelligence Research Center (AIRC), Defense Innovation Institute, Beijing 100166, China
- Ruihao Li, Artificial Intelligence Research Center (AIRC), Defense Innovation Institute, Beijing 100166, China
- Shaowu Yang, College of Computer, National University of Defense Technology, Changsha 410005, China
4
Aslan MF, Durdu A, Yusefi A, Yilmaz A. HVIOnet: A deep learning based hybrid visual-inertial odometry approach for unmanned aerial system position estimation. Neural Netw 2022; 155:461-474. [PMID: 36152378 DOI: 10.1016/j.neunet.2022.09.001]
Abstract
Sensor fusion solves the localization problem in autonomous mobile robotics by integrating complementary data acquired from various sensors. In this study, we adopt Visual-Inertial Odometry (VIO), a low-cost sensor fusion method that integrates inertial data with images, within a Deep Learning (DL) framework to predict the position of an Unmanned Aerial System (UAS). The developed system has three steps. The first step extracts features from images acquired by the platform camera and projects them to a visual feature manifold using a Convolutional Neural Network (CNN). Next, temporal features are extracted from the platform's Inertial Measurement Unit (IMU) data using a Bidirectional Long Short-Term Memory (BiLSTM) network and projected to an inertial feature manifold. The final step estimates the UAS position by fusing the visual and inertial feature manifolds via a BiLSTM-based architecture. The proposed approach is tested on the public EuRoC (European Robotics Challenge) dataset and on simulation data generated within the Robot Operating System (ROS). On the EuRoC dataset, the proposed approach achieves position estimates comparable to previous popular VIO methods. In the simulation experiment, the UAS position is estimated with a Root Mean Square Error (RMSE) of 0.167. These results show that the proposed deep architecture is useful for UAS position estimation.
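The three steps map onto a compact network sketch: a CNN for the visual manifold, a BiLSTM for the inertial manifold, and a second BiLSTM that fuses both and regresses position. The PyTorch module below is schematic; every layer size is an assumption, not the paper's configuration.

```python
import torch
import torch.nn as nn

class HVIONetSketch(nn.Module):
    def __init__(self, feat=128):
        super().__init__()
        # visual branch: image -> feature vector on the visual manifold
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat))
        # inertial branch: 6-channel gyro+accel sequence -> inertial features
        self.imu_lstm = nn.LSTM(6, feat // 2, batch_first=True,
                                bidirectional=True)
        # fusion branch: concatenated features -> 3-D position
        self.fuse_lstm = nn.LSTM(2 * feat, feat, batch_first=True,
                                 bidirectional=True)
        self.head = nn.Linear(2 * feat, 3)

    def forward(self, img, imu_seq):
        v = self.cnn(img)                      # (B, feat)
        i, _ = self.imu_lstm(imu_seq)          # (B, T, feat)
        i = i[:, -1]                           # last timestep summary
        f, _ = self.fuse_lstm(torch.cat([v, i], dim=1).unsqueeze(1))
        return self.head(f[:, -1])             # (B, 3) position estimate
```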
Affiliation(s)
- Muhammet Fatih Aslan, Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, Karaman, Turkey
- Akif Durdu, Robotics Automation Control Laboratory (RAC-LAB), Electrical and Electronics Engineering, Konya Technical University, Konya, Turkey
- Abdullah Yusefi, Research and Development, MPG Machinery Production Group Inc. Co., Konya, Turkey
- Alper Yilmaz, Photogrammetric Computer Vision Laboratory, Ohio State University, Columbus, USA
5
Cheng J, Jin Y, Zhai Z, Liu X, Zhou K. Research on Positioning Method in Underground Complex Environments Based on Fusion of Binocular Vision and IMU. Sensors (Basel) 2022; 22:5711. [PMID: 35957268 PMCID: PMC9371209 DOI: 10.3390/s22155711]
Abstract
To address the failure of traditional visual SLAM localization caused by dynamic target interference and weak texture in underground complexes, an effective robot localization scheme is designed in this paper. First, the Harris algorithm, with its stronger corner detection ability, is used to further improve the ORB (oriented FAST and rotated BRIEF) feature extraction of traditional visual SLAM. Second, a non-uniform rational B-spline (NURBS) algorithm transforms the discrete inertial measurement unit (IMU) data into second-order differentiable continuous data, and the visual sensor data are fused with the IMU data. Finally, experimental results on the KITTI dataset, the EuRoC dataset, and a simulated real scene show that the method has stronger robustness and better localization accuracy, with small hardware size and low power consumption.
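The spline step can be illustrated with an ordinary cubic B-spline (the paper uses NURBS): fit the discrete gyro samples once, then query rates and derivatives at arbitrary camera timestamps. The sampling rates and signal below are made up for the sketch.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

t_imu = np.linspace(0.0, 1.0, 200)               # assumed 200 Hz IMU
gyro_z = 0.5 * np.sin(2 * np.pi * t_imu)         # one gyro axis, rad/s
spline = make_interp_spline(t_imu, gyro_z, k=3)  # cubic: C^2 continuous

t_cam = np.linspace(0.0, 1.0, 30)                # assumed 30 Hz camera
rate_at_cam = spline(t_cam)                      # interpolated gyro rate
rate_deriv = spline.derivative(1)(t_cam)         # angular acceleration
```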
Affiliation(s)
- Jie Cheng, School of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou 310018, China
- Yinglian Jin, College of Modern Science and Technology, China Jiliang University, Hangzhou 310018, China
- Zhen Zhai, School of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou 310018, China
- Xiaolong Liu, Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21211, USA
- Kun Zhou, School of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou 310018, China
6
Stumberg LV, Cremers D. DM-VIO: Delayed Marginalization Visual-Inertial Odometry. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2021.3140129]
7
Abstract
With the aim of improving the positioning accuracy of monocular visual-inertial simultaneous localization and mapping (VI-SLAM) systems, an improved initialization method with faster convergence is proposed. The approach comprises three parts. First, in the initial stage, the pure-vision measurement model of ORB-SLAM is employed to make all the variables observable. Second, the measurement rates of the IMU and camera are aligned via IMU pre-integration. Third, an improved iterative method is put forward to estimate the initial IMU parameters more quickly. The estimation is divided into several simpler sub-problems: gravity estimation with direction refinement, gyroscope bias estimation, and accelerometer bias and scale estimation. Experimental results on a self-built robot platform show that our method speeds up initialization convergence while improving the positioning accuracy of the entire VI-SLAM system.
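One such sub-problem, gyroscope bias estimation, becomes linear under a small-rotation approximation: the mismatch between camera-derived and gyro-integrated rotation vectors is roughly proportional to the bias times the interval length. The sketch below solves that toy linear system; it is an assumption-laden illustration, not the paper's iterative method.

```python
import numpy as np

# Small-angle model: dphi_gyro_i ~ dphi_cam_i + dt_i * b, so the stacked
# differences give an overdetermined linear system for the bias b.
def estimate_gyro_bias(dphi_cam, dphi_gyro, dts):
    """dphi_*: (N, 3) per-interval rotation vectors; dts: (N,) durations [s]."""
    A = np.concatenate([dt * np.eye(3) for dt in dts])          # (3N, 3)
    r = (np.asarray(dphi_gyro) - np.asarray(dphi_cam)).reshape(-1)
    b, *_ = np.linalg.lstsq(A, r, rcond=None)
    return b  # least-squares gyroscope bias estimate
```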
8
Campos C, Elvira R, Rodriguez JJG, Montiel JM, Tardos JD. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM. IEEE Trans Robot 2021. [DOI: 10.1109/tro.2021.3075644]
9
Zuniga-Noel D, Moreno FA, Gonzalez-Jimenez J. An Analytical Solution to the IMU Initialization Problem for Visual-Inertial Systems. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3091407]
10
Renormalization for Initialization of Rolling Shutter Visual-Inertial Odometry. Int J Comput Vis 2021. [DOI: 10.1007/s11263-021-01462-y]
11
Huang W, Wan W, Liu H. Optimization-Based Online Initialization and Calibration of Monocular Visual-Inertial Odometry Considering Spatial-Temporal Constraints. Sensors (Basel) 2021; 21:2673. [PMID: 33920218 PMCID: PMC8070556 DOI: 10.3390/s21082673]
Abstract
Online system-state initialization and simultaneous spatial-temporal calibration are critical for monocular Visual-Inertial Odometry (VIO), since these parameters are either poorly provided or unknown. Although impressive performance has been achieved, most existing methods are designed for filter-based VIOs; for optimization-based VIOs, few online spatial-temporal calibration methods exist in the literature. In this paper, we propose an optimization-based online initialization and spatial-temporal calibration method for VIO. The method needs no prior knowledge about the spatial or temporal configuration. It estimates the initial states of metric scale, velocity, gravity, and Inertial Measurement Unit (IMU) biases, and calibrates the coordinate transformation and time offset between the camera and IMU sensors. The method works as follows. First, it uses a time-offset model and two short-term motion interpolation algorithms to align and interpolate the camera and IMU measurements. The aligned and interpolated results are then sent to an incremental estimator that estimates the initial states and the spatial-temporal parameters. A bundle adjustment is additionally included to improve the accuracy of the estimates. Experiments on both synthetic and public datasets show that both the initial states and the spatial-temporal parameters are well estimated, and the method outperforms the contemporary methods used for comparison.
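The alignment step can be pictured with a constant time-offset model, t_imu = t_cam + t_d, and plain linear interpolation of the IMU stream at the offset-corrected camera timestamps. This is a simplified sketch of that idea, not the paper's two interpolation algorithms.

```python
import numpy as np

def imu_at_camera_times(t_imu, imu_vals, t_cam, t_d):
    """Interpolate IMU samples at offset-corrected camera timestamps.
    imu_vals: (N, 6) stacked gyro/accel; t_d: camera-to-IMU offset [s]."""
    t_query = np.asarray(t_cam) + t_d
    return np.stack([np.interp(t_query, t_imu, imu_vals[:, k])
                     for k in range(imu_vals.shape[1])], axis=1)

# A coarse t_d can then be found by scanning candidate offsets and
# maximizing correlation between camera-derived and interpolated gyro rates.
```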
Affiliation(s)
- Weibo Huang, Key Laboratory of Machine Perception, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Weiwei Wan (corresponding author), School of Engineering Science, Osaka University, Osaka 5608531, Japan
- Hong Liu (corresponding author), Key Laboratory of Machine Perception, Peking University Shenzhen Graduate School, Shenzhen 518055, China
12
Evangelidis G, Micusik B. Revisiting Visual-Inertial Structure-From-Motion for Odometry and SLAM Initialization. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3057564]
13
An Efficient Approach to Initialization of Visual-Inertial Navigation System using Closed-Form Solution for Autonomous Robots. J Intell Robot Syst 2021. [DOI: 10.1007/s10846-021-01313-5]
14
Huang W, Liu H, Wan W. An Online Initialization and Self-Calibration Method for Stereo Visual-Inertial Odometry. IEEE Trans Robot 2020. [DOI: 10.1109/tro.2019.2959161]
15
Martinelli A. Cooperative Visual-Inertial Odometry: Analysis of Singularities, Degeneracies and Minimal Cases. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2965063]
16
Martinelli A, Renzaglia A, Oliva A. Cooperative visual-inertial sensor fusion: fundamental equations and state determination in closed-form. Auton Robots 2020. [DOI: 10.1007/s10514-019-09841-8]
17
Bjørne E, Brekke EF, Bryne TH, Delaune J, Johansen TA. Globally stable velocity estimation using normalized velocity measurement. Int J Rob Res 2019. [DOI: 10.1177/0278364919887436]
Abstract
The problem of estimating velocity from a monocular camera and calibrated inertial measurement unit (IMU) measurements is revisited. For the presented setup, it is assumed that normalized velocity measurements are available from the camera. Applying results from nonlinear observer theory, we present velocity estimators with proven global stability under defined conditions, without the need to observe features across several camera frames. Several nonlinear methods are compared with each other and against an extended Kalman filter (EKF), and the robustness of the nonlinear methods relative to the EKF is demonstrated in simulations and experiments.
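The measurement model driving these estimators is the velocity direction only, y = v/||v||, so speed must be recovered through the IMU dynamics. The sketch below shows that model plus one crude Euler correction step; the gain and update law are illustrative assumptions, not the paper's proven globally stable observer.

```python
import numpy as np

def normalized_velocity(v, eps=1e-9):
    v = np.asarray(v, dtype=float)
    return v / max(np.linalg.norm(v), eps)   # y = v / ||v||

def observer_step(v_hat, a_nav, y_meas, dt, k=2.0):
    """One Euler step: propagate with navigation-frame acceleration a_nav
    (rotated specific force plus gravity), then pull the estimate's
    direction toward the measured unit vector y_meas."""
    innov = np.asarray(y_meas) - normalized_velocity(v_hat)
    return v_hat + dt * (np.asarray(a_nav) + k * np.linalg.norm(v_hat) * innov)
```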
Affiliation(s)
- Elias Bjørne, Center for Autonomous Marine Operations and Systems (NTNU-AMOS) and Department of Engineering Cybernetics, Norwegian University of Science and Technology (NTNU), Norway
- Edmund F. Brekke, Center for Autonomous Marine Operations and Systems (NTNU-AMOS) and Department of Engineering Cybernetics, Norwegian University of Science and Technology (NTNU), Norway
- Torleiv H. Bryne, Center for Autonomous Marine Operations and Systems (NTNU-AMOS) and Department of Engineering Cybernetics, Norwegian University of Science and Technology (NTNU), Norway
- Jeff Delaune, Computer Vision Group, Jet Propulsion Laboratory, NASA/California Institute of Technology, USA
- Tor Arne Johansen, Center for Autonomous Marine Operations and Systems (NTNU-AMOS) and Department of Engineering Cybernetics, Norwegian University of Science and Technology (NTNU), Norway
18
Martinelli A, Oliva A, Mourrain B. Cooperative Visual-Inertial Sensor Fusion: The Analytic Solution. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2891025]
19
Infrared-Inertial Navigation for Commercial Aircraft Precision Landing in Low Visibility and GPS-Denied Environments. Sensors (Basel) 2019; 19:408. [PMID: 30669520 PMCID: PMC6359318 DOI: 10.3390/s19020408]
Abstract
This paper proposes a novel infrared-inertial navigation method for the precise landing of commercial aircraft in low-visibility, Global Positioning System (GPS)-denied environments. Within a Square-root Unscented Kalman Filter (SR_UKF), inertial measurement unit (IMU) data, forward-looking infrared (FLIR) images, and airport geo-information are integrated to estimate the position, velocity, and attitude of the aircraft during landing. The homography between the synthetic image and the real image, which encodes the camera pose deviation, serves as the vision measurement. To accurately extract real runway features, the current runway-detection result is used as prior knowledge for detection in the next frame. To avoid ambiguity among the possible homography decompositions, the homography is converted directly to a vector and fed to the SR_UKF. Moreover, the proposed navigation system is proven observable by nonlinear observability analysis. Finally, an aircraft was equipped with vision and inertial sensors to collect flight data for algorithm verification. The experimental results demonstrate that the proposed method can support precise landing of commercial aircraft in low-visibility, GPS-denied environments.
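The homography measurement can be sketched with standard tools: match runway features between the synthetic (predicted-pose) image and the real FLIR frame, estimate H, and flatten it to a 9-vector for the filter, avoiding decomposition entirely. The pipeline below is an assumed reconstruction, not the paper's code.

```python
import numpy as np
import cv2

def homography_measurement(pts_synthetic, pts_real):
    """pts_*: (N, 2) matched runway feature coordinates, N >= 4."""
    H, _ = cv2.findHomography(np.float32(pts_synthetic),
                              np.float32(pts_real), cv2.RANSAC, 3.0)
    H = H / H[2, 2]          # fix scale so the vectorization is unique
    return H.reshape(-1)     # 9-vector fed to the SR_UKF as the measurement
```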
20
Affiliation(s)
- S. Suzuki, Department of Mechanical Engineering and Robotics, Shinshu University, Ueda-shi, Nagano, Japan
21
Qin T, Li P, Shen S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Trans Robot 2018. [DOI: 10.1109/tro.2018.2853729]
22
Liu T, Shen S. Spline-Based Initialization of Monocular Visual–Inertial State Estimators at High Altitude. IEEE Robot Autom Lett 2017. [DOI: 10.1109/lra.2017.2724770]
23
Lin Y, Gao F, Qin T, Gao W, Liu T, Wu W, Yang Z, Shen S. Autonomous aerial navigation using monocular visual-inertial fusion. J Field Robot 2017. [DOI: 10.1002/rob.21732]
Affiliation(s)
- Yi Lin, Hong Kong University of Science and Technology
- Fei Gao, Hong Kong University of Science and Technology
- Tong Qin, Hong Kong University of Science and Technology
- Tianbo Liu, Hong Kong University of Science and Technology
- William Wu, Hong Kong University of Science and Technology