1. Yang Z, Li Y, Lin J, Sun Y, Zhu J. Tightly-coupled fusion of iGPS measurements in optimization-based visual SLAM. Optics Express 2023;31:5910-5926. [PMID: 36823861] [DOI: 10.1364/oe.481848]
Abstract
Monocular visual Simultaneous Localization and Mapping (SLAM) can achieve accurate and robust pose estimation with excellent perceptual ability. However, image error accumulated over time causes excessive trajectory drift in GPS-denied indoor environments that lack global positioning constraints. In this paper, we propose a novel optimization-based SLAM that fuses rich visual features with indoor GPS (iGPS) measurements, obtained by a workshop Measurement Position System (wMPS), to tackle the trajectory drift associated with visual SLAM. We first calibrate the spatial shift and temporal offset between the two sensor types using multi-view alignment and pose-optimization bundle adjustment (BA) algorithms, respectively. Then, we initialize camera poses and map points in a unified world frame via iGPS-aided monocular initialization and PnP algorithms. Finally, we tightly couple the iGPS measurements with the visual observations in a pose optimization strategy for high-accuracy global localization and mapping. Public datasets and self-collected sequences are used to evaluate the approach. The proposed system reduces the absolute trajectory error from the current state of the art, 19.16 mm (ORB-SLAM3), to 5.87 mm on the public dataset and from 31.20 mm to 5.85 mm in the real-world experiment, while also showing good robustness in the evaluations.
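To make the tightly-coupled formulation concrete, here is a minimal least-squares sketch that stacks pinhole reprojection residuals with a weighted iGPS position residual for a single camera pose. The intrinsics, the axis-angle pose parameterization, and every name are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a tightly-coupled visual/iGPS cost for one camera pose,
# parameterized as [rx, ry, rz, tx, ty, tz] (axis-angle + translation).
# The intrinsics, weights, and simple pinhole model are illustrative.
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed intrinsics

def residuals(pose, pts_w, obs_px, igps_pos, w_igps=10.0):
    R = Rotation.from_rotvec(pose[:3]).as_matrix()   # world -> camera
    t = pose[3:]
    pc = (R @ pts_w.T).T + t                         # map points into camera frame
    proj = (K @ pc.T).T
    proj = proj[:, :2] / proj[:, 2:3]                # pinhole projection (pixels)
    r_vis = (proj - obs_px).ravel()                  # reprojection residuals
    cam_center_w = -R.T @ t                          # camera center in world frame
    r_igps = w_igps * (cam_center_w - igps_pos)      # iGPS position residual
    return np.concatenate([r_vis, r_igps])

# usage sketch:
# sol = least_squares(residuals, np.zeros(6), args=(pts_w, obs_px, igps_pos))
```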
2. Li Y, Yang S, Xiu X, Miao Z. A Spatiotemporal Calibration Algorithm for IMU-LiDAR Navigation System Based on Similarity of Motion Trajectories. Sensors 2022;22:7637. [PMID: 36236759] [PMCID: PMC9570820] [DOI: 10.3390/s22197637]
Abstract
The fusion of light detection and ranging (LiDAR) and inertial measurement unit (IMU) data can effectively improve the environment modeling and localization accuracy of navigation systems. To achieve spatiotemporal unification of the data collected by the IMU and the LiDAR, a two-step coarse-to-fine spatiotemporal calibration method is proposed. The method comprises two stages. (1) Continuous-time trajectories of IMU attitude motion are modeled with B-spline basis functions, while LiDAR motion is estimated with the normal distributions transform (NDT) point cloud registration algorithm. Taking the Hausdorff distance between the local trajectories as the cost function, combined with hand-eye calibration, an initial value of the spatiotemporal relationship between the two sensors' coordinate systems is solved, and the IMU measurements are then used to correct LiDAR motion distortion. (2) A nonlinear optimization objective function is constructed from the IMU preintegration and from the point, line, and plane features of the LiDAR point cloud. Combined with the corrected LiDAR data and the initial spatiotemporal calibration, the objective is optimized in a nonlinear graph optimization framework. The rationality, accuracy, and robustness of the proposed algorithm are verified through simulation analysis and real-world experiments. The results show that the calibration accuracy of the spatial relationship between the coordinate systems was better than 0.08° (3σ) and 5 mm (3σ), respectively, and the time offset calibration accuracy was better than 0.1 ms, with strong environmental adaptability. This meets the high-precision multisensor spatiotemporal calibration requirements of field robot navigation systems.
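The coarse hand-eye initialization admits a compact sketch: relative IMU rotations A_i and relative LiDAR rotations B_i over the same interval satisfy A_i X = X B_i, so their rotation vectors are related by the unknown extrinsic rotation, which a Kabsch/SVD fit recovers. The sketch below assumes already time-aligned rotation pairs and skips the paper's Hausdorff-distance alignment and graph-based refinement.

```python
# Coarse hand-eye rotation sketch: paired relative rotations satisfy
# A_i X = X B_i, hence rotvec(A_i) = X . rotvec(B_i); X is recovered by a
# Kabsch/SVD fit on the paired rotation vectors. Needs at least two pairs
# with non-parallel rotation axes.
import numpy as np
from scipy.spatial.transform import Rotation

def handeye_rotation(A_rots, B_rots):
    a = np.array([Rotation.from_matrix(A).as_rotvec() for A in A_rots])  # IMU
    b = np.array([Rotation.from_matrix(B).as_rotvec() for B in B_rots])  # LiDAR
    H = b.T @ a                                   # 3x3 cross-covariance of pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # rotation mapping LiDAR to IMU
```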
3. Eckenhoff K, Geneva P, Huang G. MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera Visual-Inertial Navigation System. IEEE Trans. Robot. 2021. [DOI: 10.1109/tro.2021.3049445]
4. Persic J, Petrovic L, Markovic I, Petrovic I. Spatiotemporal Multisensor Calibration via Gaussian Processes Moving Target Tracking. IEEE Trans. Robot. 2021. [DOI: 10.1109/tro.2021.3061364]
5. Zhang M, Zuo X, Chen Y, Liu Y, Li M. Pose Estimation for Ground Robots: On Manifold Representation, Integration, Reparameterization, and Optimization. IEEE Trans. Robot. 2021. [DOI: 10.1109/tro.2020.3043970]
6. Huang L, Wen S, Yan Z, Song H, Su S, Guan W. Single LED positioning scheme based on angle sensors in robotics. Applied Optics 2021;60:6275-6287. [PMID: 34613294] [DOI: 10.1364/ao.425744]
Abstract
Indoor robot localization is one of the most active areas of robotics research. Visible light positioning (VLP) is a promising indoor localization method, as it provides high positioning accuracy and leverages existing lighting infrastructure. Accurate positioning performance is mostly demonstrated by VLP systems that observe multiple LEDs, but such a strict requirement on LED count makes the system prone to failure in real environments. In this paper, we propose a single-LED VLP system based on an image sensor aided by angle-sensor estimation, which relaxes the minimum number of simultaneously captured LEDs from several to one. To improve the robustness and accuracy of positioning while the robot pose changes continuously, two visual-inertial message synchronization methods are proposed and used to obtain well-matched positioning data packets. Several single-LED VLP configurations, based on different sensor selections and synchronization methods, are compared in a real environment. Real-world experiments verify the effectiveness of the proposed odometer- and image-sensor-based single-LED VLP system, as well as its robustness under LED shortage, handover situations, and background non-signal light interference. The results show that the proposed system provides an average accuracy of 2.47 cm with an average computation time of around 0.184 s on low-cost embedded platforms.
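The single-LED geometry can be sketched as follows: with the camera attitude supplied by the angle sensors and the LED's world position known, the camera center lies on the back-projected ray through the LED's pixel, and one extra scalar constraint fixes the remaining scale. The fixed camera-height constraint used below is a stand-in assumption (the paper fuses an odometer instead), and all names are illustrative.

```python
# Single-LED positioning sketch: back-project the LED pixel, rotate the ray
# into the world frame using the angle-sensor attitude, and resolve scale
# with a known camera mounting height (an assumption made here for brevity).
import numpy as np

def locate_camera(px, K, R_wc, led_w, cam_height):
    d_c = np.linalg.inv(K) @ np.array([px[0], px[1], 1.0])  # ray, camera frame
    d_c /= np.linalg.norm(d_c)
    d_w = R_wc @ d_c                                        # ray, world frame
    s = (led_w[2] - cam_height) / d_w[2]                    # led_w = c + s * d_w
    return led_w - s * d_w                                  # camera center c
```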
7. Wang D, Xie F, Yang J, Lu R, Zhu T, Liu Y. Industry robotic motion and pose recognition method based on camera pose estimation and neural network. Int. J. Adv. Robot. Syst. 2021. [DOI: 10.1177/17298814211018549]
Abstract
To control industrial robots and verify that they are operating correctly, an efficient way to judge robot motion is important. In this article, an industrial robot motion and pose recognition method based on camera pose estimation and a neural network is proposed. First, motion recognition based on the neural network is developed to estimate and optimize the robot's motion using only a monocular camera. Second, the motion recognition pipeline, including key-frame recording and pose adjustment, is proposed and analyzed to recover the robot's pose more accurately. Finally, a KUKA industrial robot is used to test the proposed method, and the results demonstrate that it recognizes the robot's motion and pose accurately and efficiently without an inertial measurement unit (IMU) or other sensors. Under the same algorithm, the error of the proposed method is lower than that of the traditional IMU-based approach, with the added merit of reduced cumulative error.
Affiliation(s)
- Ding Wang: School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing, China; Nanjing Zhongke Yuchen Laser Technology Co., Ltd., Nanjing, China
- Fei Xie: School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing, China; Nanjing Zhongke Yuchen Laser Technology Co., Ltd., Nanjing, China; Jiangsu Key Laboratory of 3D Printing Equipment and Manufacturing, Nanjing Normal University, Nanjing, China
- Jiquan Yang: School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing, China; Nanjing Zhongke Yuchen Laser Technology Co., Ltd., Nanjing, China; Jiangsu Key Laboratory of 3D Printing Equipment and Manufacturing, Nanjing Normal University, Nanjing, China
- Rongjian Lu: Nanjing Zhongke Yuchen Laser Technology Co., Ltd., Nanjing, China; Automation Department, School of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing, China
- Tengfei Zhu: School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing, China; Nanjing Zhongke Yuchen Laser Technology Co., Ltd., Nanjing, China
- Yijian Liu: School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing, China; Nanjing Zhongke Yuchen Laser Technology Co., Ltd., Nanjing, China; Jiangsu Key Laboratory of 3D Printing Equipment and Manufacturing, Nanjing Normal University, Nanjing, China
8. Huang W, Wan W, Liu H. Optimization-Based Online Initialization and Calibration of Monocular Visual-Inertial Odometry Considering Spatial-Temporal Constraints. Sensors 2021;21:2673. [PMID: 33920218] [PMCID: PMC8070556] [DOI: 10.3390/s21082673]
Abstract
Online system-state initialization and simultaneous spatial-temporal calibration are critical for monocular Visual-Inertial Odometry (VIO), since these parameters are either poorly provided or unknown. Although impressive performance has been achieved, most existing methods are designed for filter-based VIO; for optimization-based VIO, few online spatial-temporal calibration methods exist in the literature. In this paper, we propose an optimization-based online initialization and spatial-temporal calibration method for VIO. The method needs no prior knowledge of the spatial or temporal configuration. It estimates the initial states of metric scale, velocity, gravity, and Inertial Measurement Unit (IMU) biases, and calibrates the coordinate transformation and time offset between the camera and the IMU. The method works as follows. First, it uses a time-offset model and two short-term motion interpolation algorithms to align and interpolate the camera and IMU measurements. The aligned and interpolated results are then fed to an incremental estimator that estimates the initial states and the spatial-temporal parameters. Finally, a bundle adjustment step further improves the accuracy of the estimates. Experiments on both synthetic and public datasets show that the initial states and the spatial-temporal parameters are well estimated, and the method outperforms the contemporary methods used for comparison.
Affiliation(s)
- Weibo Huang: Key Laboratory of Machine Perception, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Weiwei Wan (corresponding author): School of Engineering Science, Osaka University, Osaka 5608531, Japan
- Hong Liu (corresponding author): Key Laboratory of Machine Perception, Peking University Shenzhen Graduate School, Shenzhen 518055, China
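The alignment-and-interpolation step can be illustrated with plain linear interpolation of IMU samples at camera timestamps shifted by a candidate offset td; the paper's two short-term motion interpolation algorithms are more elaborate, so this is only a schematic with assumed names.

```python
# Schematic of the alignment step: interpolate IMU samples at camera
# timestamps shifted by a candidate time offset td, so both sensors refer
# to a common instant. t_imu must be sorted; names are illustrative.
import numpy as np

def imu_at_camera_times(t_imu, imu_meas, t_cam, td):
    # t_imu: (N,) seconds; imu_meas: (N, 6) gyro + accel rows; t_cam: (M,)
    t_query = t_cam + td                     # camera clock mapped to IMU clock
    out = np.empty((len(t_query), imu_meas.shape[1]))
    for j in range(imu_meas.shape[1]):       # channel-wise linear interpolation
        out[:, j] = np.interp(t_query, t_imu, imu_meas[:, j])
    return out
```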
9. Qiu K, Qin T, Pan J, Liu S, Shen S. Real-Time Temporal and Rotational Calibration of Heterogeneous Sensors Using Motion Correlation Analysis. IEEE Trans. Robot. 2021. [DOI: 10.1109/tro.2020.3033698]
10. Latency Compensated Visual-Inertial Odometry for Agile Autonomous Flight. Sensors 2020;20:2209. [PMID: 32295132] [PMCID: PMC7218848] [DOI: 10.3390/s20082209]
Abstract
In visual-inertial odometry (VIO), inertial measurement unit (IMU) dead reckoning acts as the dynamic model for a flight vehicle, while camera vision extracts information about the surrounding environment and detects features or points of interest. With these sensors, the most widely used algorithm for estimating vehicle and feature states in VIO is the extended Kalman filter (EKF). The standard EKF design does not inherently account for time offsets between the timestamps of the IMU and vision data; in practice, sensor-related delays arising under realistic conditions are at least partially unknown parameters, and a lack of compensation for them often seriously degrades the accuracy of VIO and similar systems. To compensate for the uncertainty in these unknown time delays, this study incorporates delay-parameter estimation into feature initialization and state estimation. Moreover, computing the cross-covariance and estimating the delays through online temporal calibration corrects the residual, Jacobian, and covariance. Results on a flight dataset validate the improved accuracy of VIO with the latency-compensated filtering framework. The insights and methods proposed here are useful in any estimation problem (e.g., multi-sensor fusion) where compensating for partially unknown time delays can enhance performance.
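The delay-compensation idea lends itself to a first-order sketch: a camera measurement captured td seconds late is predicted forward along the tracked feature's pixel velocity, and the delay's Jacobian column is that velocity, negated. This is a generic illustration with assumed names and sign conventions, not the paper's filter.

```python
# First-order latency compensation for one visual measurement: predict the
# pixel at the delayed instant using the tracked feature's pixel velocity;
# the delay td is a state, and its Jacobian column is the negated velocity.
import numpy as np

def delayed_residual(z_meas, z_pred, px_vel, td):
    r = z_meas - (z_pred + td * px_vel)  # residual at the compensated time
    H_td = -px_vel                       # d(residual)/d(td), fills td's column
    return r, H_td
```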
11. Zhang M, Xu X, Chen Y, Li M. A Lightweight and Accurate Localization Algorithm Using Multiple Inertial Measurement Units. IEEE Robot. Autom. Lett. 2020. [DOI: 10.1109/lra.2020.2969146]
12.
13. Yang Y, Geneva P, Eckenhoff K, Huang G. Degenerate Motion Analysis for Aided INS With Online Spatial and Temporal Sensor Calibration. IEEE Robot. Autom. Lett. 2019. [DOI: 10.1109/lra.2019.2893803]
14. Eckenhoff K, Yang Y, Geneva P, Huang G. Tightly-Coupled Visual-Inertial Localization and 3-D Rigid-Body Target Tracking. IEEE Robot. Autom. Lett. 2019. [DOI: 10.1109/lra.2019.2896472]
15. Lee CR, Yoon JH, Yoon KJ. Calibration and Noise Identification of a Rolling Shutter Camera and a Low-Cost Inertial Measurement Unit. Sensors 2018;18:2345. [PMID: 30029509] [PMCID: PMC6069048] [DOI: 10.3390/s18072345]
Abstract
A low-cost inertial measurement unit (IMU) and a rolling shutter camera form a common device configuration for the localization of a mobile platform, thanks to their complementary properties and low cost. This paper proposes a new calibration method that jointly estimates the calibration and noise parameters of a low-cost IMU and a rolling shutter camera, since accurate sensor calibration is critical for effective sensor fusion. Based on graybox system identification, the proposed method estimates the unknown noise density so that the calibration error and its covariance can be minimized with an unscented Kalman filter. The estimated calibration parameters are then refined in a batch manner using the estimated noise density. Experimental results on synthetic and real data demonstrate the accuracy and stability of the proposed method and show that it yields consistent results even when the IMU noise density is unknown. Furthermore, a real experiment on a commercial smartphone validates the performance of the proposed calibration method on off-the-shelf devices.
Affiliation(s)
- Chang-Ryeol Lee: School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology (GIST), Gwangju 61005, Korea
- Ju Hong Yoon: Korea Electronics Technology Institute (KETI), Seongnam-si 13509, Korea
- Kuk-Jin Yoon: Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Korea
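As a small stand-in for the noise-identification idea, the sketch below estimates a gyro's continuous-time white-noise density from a stationary recording via the standard discrete-to-continuous conversion; the paper's graybox/UKF machinery is far richer and is not reproduced here.

```python
# Stand-in for the noise-identification step: estimate a gyro's white-noise
# density from a stationary recording using the standard discrete-to-
# continuous conversion sigma_c = sigma_d * sqrt(dt).
import numpy as np

def gyro_noise_density(gyro, rate_hz):
    # gyro: (N, 3) stationary gyroscope samples in rad/s
    sigma_d = np.std(gyro - gyro.mean(axis=0), axis=0)  # discrete sample std
    return sigma_d * np.sqrt(1.0 / rate_hz)             # rad/s/sqrt(Hz)
```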
16. Burri M, Bloesch M, Taylor Z, Siegwart R, Nieto J. A framework for maximum likelihood parameter identification applied on MAVs. J. Field Robot. 2017. [DOI: 10.1002/rob.21729]
Affiliation(s)
- Juan Nieto: Autonomous Systems Lab, ETH Zurich, Switzerland
17. Forster C, Carlone L, Dellaert F, Scaramuzza D. On-Manifold Preintegration for Real-Time Visual-Inertial Odometry. IEEE Trans. Robot. 2017. [DOI: 10.1109/tro.2016.2597321]
18. Rehder J, Siegwart R, Furgale P. A General Approach to Spatiotemporal Calibration in Multisensor Systems. IEEE Trans. Robot. 2016. [DOI: 10.1109/tro.2016.2529645]
19. Burri M, Nikolic J, Gohl P, Schneider T, Rehder J, Omari S, Achtelik MW, Siegwart R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016. [DOI: 10.1177/0278364915620033]
Abstract
This paper presents visual-inertial datasets collected on-board a micro aerial vehicle. The datasets contain synchronized stereo images, IMU measurements and accurate ground truth. The first batch of datasets facilitates the design and evaluation of visual-inertial localization algorithms on real flight data. It was collected in an industrial environment and contains millimeter accurate position ground truth from a laser tracking system. The second batch of datasets is aimed at precise 3D environment reconstruction and was recorded in a room equipped with a motion capture system. The datasets contain 6D pose ground truth and a detailed 3D scan of the environment. Eleven datasets are provided in total, ranging from slow flights under good visual conditions to dynamic flights with motion blur and poor illumination, enabling researchers to thoroughly test and evaluate their algorithms. All datasets contain raw sensor measurements, spatio-temporally aligned sensor data and ground truth, extrinsic and intrinsic calibrations and datasets for custom calibrations.
Affiliation(s)
- Michael Burri: Autonomous Systems Laboratory, ETH Zürich, Switzerland
- Pascal Gohl: Autonomous Systems Laboratory, ETH Zürich, Switzerland
- Joern Rehder: Autonomous Systems Laboratory, ETH Zürich, Switzerland
- Sammy Omari: Autonomous Systems Laboratory, ETH Zürich, Switzerland
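For readers who want to try these sequences, a minimal reader for the published ASL folder layout is sketched below; the column layout (nanosecond timestamps, then gyro in rad/s and accel in m/s^2) follows the distributed CSV files, but treat these details as assumptions to verify against your copy of the data.

```python
# Minimal EuRoC IMU reader, assuming the published ASL layout:
# <sequence>/mav0/imu0/data.csv with a header row and columns
# timestamp [ns], w_x, w_y, w_z [rad/s], a_x, a_y, a_z [m/s^2].
import csv
import numpy as np

def load_euroc_imu(path):
    rows = []
    with open(path) as f:
        reader = csv.reader(f)
        next(reader)                          # skip the header line
        for r in reader:
            rows.append([float(x) for x in r])
    data = np.array(rows)
    t = data[:, 0] * 1e-9                     # nanoseconds -> seconds
    return t, data[:, 1:4], data[:, 4:7]      # time, gyro, accel

# t, gyro, accel = load_euroc_imu("MH_01_easy/mav0/imu0/data.csv")
```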
20. Maye J, Sommer H, Agamennoni G, Siegwart R, Furgale P. Online self-calibration for robotic systems. Int. J. Robot. Res. 2015. [DOI: 10.1177/0278364915596232]
Abstract
We present a generic algorithm for self-calibration of robotic systems that utilizes two key innovations. First, it uses an information-theoretic measure to automatically identify and store novel measurement sequences. This keeps the computation tractable by discarding redundant information and allows the system to build a sparse but complete calibration dataset from data collected at different times. Second, as the full observability of the calibration parameters may not be guaranteed for an arbitrary measurement sequence, the algorithm detects and locks unobservable directions in parameter space using a combination of rank-revealing QR and singular value decompositions of the Fisher information matrix. The result is an algorithm that listens to an incoming sensor stream, builds a minimal set of data for estimating the calibration parameters, and updates parameters as they become observable, leaving the others locked at their initial guess. We validate our approach through an extensive set of simulated and real-world experiments.
Affiliation(s)
- Jérôme Maye: Autonomous Systems Lab, ETH Zurich, Switzerland
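The observability-locking idea admits a compact numerical sketch: the SVD of the stacked residual Jacobian shares its right singular vectors with the eigenvectors of the Fisher information matrix, so a Gauss-Newton step can be restricted to directions whose singular values clear a threshold, leaving unobservable directions at their initial guess. The threshold and names are illustrative, and the paper's rank-revealing QR stage is omitted.

```python
# Observability-locking sketch: the SVD of the stacked Jacobian J shares
# its right singular vectors with the eigenvectors of the Fisher
# information J^T J, so the Gauss-Newton step can be confined to
# directions whose singular values clear a threshold; the remaining
# (unobservable) directions stay at their initial guess.
import numpy as np

def observable_update(J, res, tol=1e-6):
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > tol * s[0]                          # observable directions only
    step = Vt[keep].T @ ((U[:, keep].T @ -res) / s[keep])
    return step                                    # apply as x += step
```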
21. Gui J, Gu D, Wang S, Hu H. A review of visual inertial odometry from filtering and optimisation perspectives. Adv. Robot. 2015. [DOI: 10.1080/01691864.2015.1057616]