1
Ferguson JM, Ertop TE, Herrell SD, Webster RJ. Unified Robot and Inertial Sensor Self-Calibration. Robotica 2023; 41:1590-1616. [PMID: 37732333] [PMCID: PMC10508886] [DOI: 10.1017/s0263574723000012]
Abstract
Robots and inertial measurement units (IMUs) are typically calibrated independently. IMUs are placed in purpose-built, expensive automated test rigs. Robot poses are typically measured using highly accurate (and thus expensive) tracking systems. In this paper, we present a quick, easy, and inexpensive new approach to calibrate both simultaneously, simply by attaching the IMU anywhere on the robot's end effector and moving the robot continuously through space. Our approach provides a fast and inexpensive alternative to both robot and IMU calibration, without any external measurement systems. We accomplish this using continuous-time batch estimation, providing statistically optimal solutions. Under Gaussian assumptions, we show that this becomes a nonlinear least squares problem and analyze the structure of the associated Jacobian. Our methods are validated both numerically and experimentally and compared to standard individual robot and IMU calibration methods.
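To make the structure concrete, here is a minimal Gauss-Newton sketch of the kind of nonlinear least-squares problem such a Gaussian batch formulation reduces to; the residual model, parameters, and data below are toy stand-ins, not the paper's robot/IMU calibration model.

```python
# Sketch: Gauss-Newton iteration for a nonlinear least-squares problem,
# the form continuous-time batch estimation takes under Gaussian noise.
# The residual here fits a*sin(w*t); the paper's residuals instead couple
# robot kinematics with IMU measurements.
import numpy as np

def residuals(theta, t, meas):
    a, w = theta
    return meas - a * np.sin(w * t)

def gauss_newton(theta, t, meas, iters=15):
    for _ in range(iters):
        r = residuals(theta, t, meas)
        eps = 1e-6
        # Numerical Jacobian of the stacked residual vector w.r.t. theta
        # (the paper analyzes the structure of this Jacobian analytically)
        J = np.stack([(residuals(theta + eps * e, t, meas) - r) / eps
                      for e in np.eye(theta.size)], axis=1)
        # Normal equations: J^T J * step = -J^T r
        theta = theta + np.linalg.solve(J.T @ J, -J.T @ r)
    return theta

t = np.linspace(0.0, 10.0, 200)
truth = np.array([2.0, 1.3])
meas = truth[0] * np.sin(truth[1] * t) \
       + 0.01 * np.random.default_rng(0).standard_normal(t.size)
print(gauss_newton(np.array([1.8, 1.25]), t, meas))  # -> approx [2.0, 1.3]
```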
Affiliation(s)
- James M. Ferguson, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
- Tayfun Efe Ertop, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
- S. Duke Herrell, Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Robert J. Webster, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
2
Monocular Visual-Inertial Sensing of Unknown Rotating Objects: Observability Analyses and Case Study for Metric 3D Reconstructing of Space Debris. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3143291]
3
Kinnari J, Verdoja F, Kyrki V. Season-invariant GNSS-denied visual localization for UAVs. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3191038]
Affiliation(s)
- Ville Kyrki, School of Electrical Engineering, Aalto University, Finland
4
Sun K, Schlotfeldt B, Pappas GJ, Kumar V. Stochastic Motion Planning Under Partial Observability for Mobile Robots With Continuous Range Measurements. IEEE Trans Robot 2021. [DOI: 10.1109/tro.2020.3042129]
5
Evangelidis G, Micusik B. Revisiting Visual-Inertial Structure-From-Motion for Odometry and SLAM Initialization. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3057564]
6
|
An Efficient Approach to Initialization of Visual-Inertial Navigation System using Closed-Form Solution for Autonomous Robots. J INTELL ROBOT SYST 2021. [DOI: 10.1007/s10846-021-01313-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
7
Koch DP, Wheeler DO, Beard RW, McLain TW, Brink KM. Relative multiplicative extended Kalman filter for observable GPS-denied navigation. Int J Rob Res 2020. [DOI: 10.1177/0278364920903094]
Abstract
This work presents a multiplicative extended Kalman filter (MEKF) for estimating the relative state of a multirotor vehicle operating in a GPS-denied environment. The filter fuses data from an inertial measurement unit and altimeter with relative-pose updates from a keyframe-based visual odometry or laser scan-matching algorithm. Because the global position and heading states of the vehicle are unobservable in the absence of global measurements such as GPS, the filter in this article estimates the state with respect to a local frame that is colocated with the odometry keyframe. As a result, the odometry update provides nearly direct measurements of the relative vehicle pose, making those states observable. Recent publications have rigorously documented the theoretical advantages of such an observable parameterization, including improved consistency, accuracy, and system robustness, and have demonstrated the effectiveness of such an approach during prolonged multirotor flight tests. This article complements this prior work by providing a complete, self-contained, tutorial derivation of the relative MEKF, which has been thoroughly motivated but only briefly described to date. This article presents several improvements and extensions to the filter while clearly defining all quaternion conventions and properties used, including several new useful properties relating to error quaternions and their Euler-angle decomposition. Finally, this article derives the filter both for traditional dynamics defined with respect to an inertial frame, and for robocentric dynamics defined with respect to the vehicle’s body frame, and provides insights into the subtle differences that arise between the two formulations.
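To make the "multiplicative" in MEKF concrete, here is a minimal sketch of the attitude-correction step, assuming Hamilton scalar-first quaternions and a small-angle error state (the paper fixes its own conventions explicitly):

```python
# Sketch of the multiplicative attitude update at the heart of an MEKF:
# the filter estimates a small rotation error dtheta and applies it
# multiplicatively to the reference quaternion, then resets the error.
import numpy as np

def quat_mul(q, p):
    # Hamilton product, scalar-first convention (an assumption here)
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def mekf_attitude_correction(q_ref, dtheta):
    # Small-angle error quaternion: dq ~ [1, dtheta/2]
    dq = np.concatenate(([1.0], 0.5 * np.asarray(dtheta)))
    dq /= np.linalg.norm(dq)
    q = quat_mul(q_ref, dq)        # multiplicative, not additive, update
    return q / np.linalg.norm(q)   # renormalize; error state resets to 0

q = np.array([1.0, 0.0, 0.0, 0.0])
print(mekf_attitude_correction(q, [0.01, -0.02, 0.005]))
```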
Affiliation(s)
- Daniel P Koch, Department of Mechanical Engineering, Brigham Young University, Provo, UT, USA
- David O Wheeler, Department of Electrical and Computer Engineering, Brigham Young University, Provo, UT, USA
- Randal W Beard, Department of Electrical and Computer Engineering, Brigham Young University, Provo, UT, USA
- Timothy W McLain, Department of Mechanical Engineering, Brigham Young University, Provo, UT, USA
9
Stančin S, Tomažič S. Computationally Efficient 3D Orientation Tracking Using Gyroscope Measurements. Sensors (Basel) 2020; 20:2240. [PMID: 32326632] [PMCID: PMC7218895] [DOI: 10.3390/s20082240]
Abstract
Computationally efficient 3D orientation (3DO) tracking using gyroscope angular velocity measurements enables a short execution time and low energy consumption for the computing device. These are essential requirements in today's wearable device environments, which are characterized by limited resources and demands for high energy autonomy. We show that the computational efficiency of 3DO tracking is significantly improved by correctly interpreting each triplet of gyroscope measurements as simultaneous (using the rotation vector called the Simultaneous Orthogonal Rotation Angle, or SORA) rather than as sequential (using Euler angles) rotation. For an example rotation of 90°, depending on the change in the rotation axis, using Euler angles requires 35 to 78 times more measurement steps for comparable levels of accuracy, implying a higher sampling frequency and computational complexity. In general, the higher the demanded 3DO accuracy, the higher the computational advantage of using the SORA. Furthermore, we demonstrate that 12 to 14 times faster execution is achieved by adapting the SORA-based 3DO tracking to the architecture of the executing low-power ARM Cortex® M0+ microcontroller using only integer arithmetic, lookup tables, and the small-angle approximation. Finally, we show that the computational efficiency is further improved by choosing the appropriate 3DO computational method. Using rotation matrices is 1.85 times faster than using rotation quaternions when 3DO calculations are performed for each measurement step. On the other hand, using rotation quaternions is 1.75 times faster when only the final 3DO result of several consecutive rotations is needed. We conclude that by adopting the presented practices, the clock frequency of a processor computing the 3DO can be significantly reduced. This substantially prolongs the energy autonomy of the device and enhances its usability in day-to-day measurement scenarios.
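A minimal numerical sketch of the contrast drawn above, with generic variable names (not the paper's code): the same gyroscope triplet read once as a simultaneous rotation vector (SORA) and once as sequential Euler rotations.

```python
# SORA reading: one gyro triplet (wx, wy, wz) over dt is ONE rotation about
# axis w/|w| by angle |w|*dt (Rodrigues' formula), not three sequential
# rotations; the two interpretations agree only to first order in dt.
import numpy as np

def rotvec_to_matrix(phi):
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.eye(3)
    k = phi / angle
    K = np.array([[0., -k[2], k[1]],
                  [k[2], 0., -k[0]],
                  [-k[1], k[0], 0.]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def sequential_euler(phi):
    # Misreading: apply the x, y, z components one after another
    rx, ry, rz = (rotvec_to_matrix(a * e) for a, e in zip(phi, np.eye(3)))
    return rz @ ry @ rx

omega = np.array([0.3, 0.5, -0.2])         # rad/s, one gyro sample
for dt in (0.001, 0.1):
    d = np.max(np.abs(rotvec_to_matrix(omega * dt)
                      - sequential_euler(omega * dt)))
    print(dt, d)   # discrepancy grows with step size, hence the higher
                   # sampling rate the sequential reading needs
```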
10
|
Martinelli A, Renzaglia A, Oliva A. Cooperative visual-inertial sensor fusion: fundamental equations and state determination in closed-form. Auton Robots 2020. [DOI: 10.1007/s10514-019-09841-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
11
|
Yang Y, Huang G. Observability Analysis of Aided INS With Heterogeneous Features of Points, Lines, and Planes. IEEE T ROBOT 2019. [DOI: 10.1109/tro.2019.2927835] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
12
Spielvogel AR, Whitcomb LL. Adaptive bias and attitude observer on the special orthogonal group for true-north gyrocompass systems: Theory and preliminary results. Int J Rob Res 2019. [DOI: 10.1177/0278364919881689]
Abstract
This article reports an adaptive sensor bias observer and attitude observer operating directly on SO(3) for true-north gyrocompass systems that utilize six-degree-of-freedom inertial measurement units (IMUs) with three-axis accelerometers and three-axis angular rate gyroscopes (without magnetometers). Most present-day low-cost robotic vehicles employ attitude estimation systems that use microelectromechanical system (MEMS) magnetometers, angular rate gyros, and accelerometers to estimate magnetic attitude (roll, pitch, and magnetic heading) with limited heading accuracy. Present-day MEMS gyros are not sensitive enough to dynamically detect the Earth's rotation, and thus cannot be used to estimate true-north geodetic heading. Relying on magnetic compasses can be problematic for vehicles that operate in environments with magnetic anomalies and those requiring high-accuracy navigation, as the limited accuracy ([Formula: see text] error) of magnetic compasses is typically the largest error source in underwater vehicle navigation systems. Moreover, magnetic compasses need to undergo time-consuming recalibration for hard-iron and soft-iron errors every time a vehicle is reconfigured with a new instrument or other payload, as very frequently occurs on oceanographic marine vehicles. In contrast, the gyrocompass system reported herein utilizes fiber optic gyroscope (FOG) IMU angular rate gyro and MEMS accelerometer measurements (without magnetometers) to dynamically estimate the instrument's time-varying true-north attitude (roll, pitch, and geodetic heading) in real time while the instrument is subject to a priori unknown rotations. This gyrocompass system is immune to magnetic anomalies and does not require recalibration every time a new payload is added to or removed from the vehicle. Stability proofs for the reported bias and attitude observers, preliminary simulations, and a full-scale vehicle trial are reported that suggest the viability of the true-north gyrocompass system to provide dynamic real-time true-north heading, pitch, and roll utilizing a comparatively low-cost FOG IMU.
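For intuition, a toy sketch of the gyrocompassing principle this system exploits: a bias-compensated, levelled FOG senses the Earth rate, whose horizontal component points true north. The latitude, yaw, and NED frame below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

EARTH_RATE = 7.292115e-5           # rad/s
lat = np.deg2rad(39.3)             # hypothetical latitude

# Earth rotation expressed in a local North-East-Down frame
w_ned = EARTH_RATE * np.array([np.cos(lat), 0.0, -np.sin(lat)])

def heading_from_gyro(w_level):
    # w_level: bias-compensated gyro output rotated into the level frame;
    # true north lies along the horizontal Earth-rate component.
    return np.arctan2(-w_level[1], w_level[0])

# Instrument yawed 30 deg from north: rotate the NED Earth rate into its frame
psi = np.deg2rad(30.0)
C_bn = np.array([[np.cos(psi),  np.sin(psi), 0.0],
                 [-np.sin(psi), np.cos(psi), 0.0],
                 [0.0,          0.0,         1.0]])
print(np.rad2deg(heading_from_gyro(C_bn @ w_ned)))   # -> ~30.0
```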
Affiliation(s)
- Andrew R Spielvogel, Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Louis L Whitcomb, Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
13
|
Bai H, Taylor CN. Control-enabled Observability and Sensitivity Functions in Visual-Inertial Odometry. J INTELL ROBOT SYST 2019. [DOI: 10.1007/s10846-018-0808-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
15
Scale Estimation and Correction of the Monocular Simultaneous Localization and Mapping (SLAM) Based on Fusion of 1D Laser Range Finder and Vision Data. Sensors (Basel) 2018; 18:1948. [PMID: 29914114] [PMCID: PMC6021903] [DOI: 10.3390/s18061948]
Abstract
This article presents a new sensor fusion method for visual simultaneous localization and mapping (SLAM) through integration of a monocular camera and a 1D laser range finder. Such a fusion method provides scale estimation and drift correction; it is not limited by physical size in the way a stereo camera is constrained by its baseline, and it overcomes the limited depth range associated with SLAM for RGBD cameras. We first present the analytical feasibility of estimating the absolute scale through the fusion of 1D distance information and image information. Next, the analytical derivation of the laser-vision fusion is described in detail based on the local dense reconstruction of the image sequences. We also correct the scale drift of the monocular SLAM using the laser distance information, which is independent of the drift error. Finally, application of this approach to both indoor and outdoor scenes is verified on the Technical University of Munich RGBD dataset and on self-collected data. We compare the scale estimation and drift correction of the proposed method against monocular SLAM and RGBD SLAM.
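A minimal sketch of the core scale idea, on synthetic data with illustrative names (the paper's estimator is more elaborate): each frame pairs one metric laser depth with the corresponding up-to-scale SLAM depth, and the scale follows by least squares.

```python
import numpy as np

def estimate_scale(laser_depths, slam_depths):
    # Least-squares scale s minimizing ||laser - s * slam||^2
    slam, laser = np.asarray(slam_depths), np.asarray(laser_depths)
    return float(slam @ laser / (slam @ slam))

rng = np.random.default_rng(0)
true_scale = 2.37
slam = rng.uniform(0.5, 3.0, 50)                            # up-to-scale depths
laser = true_scale * slam + 0.02 * rng.standard_normal(50)  # metric, noisy
print(estimate_scale(laser, slam))                          # -> approx 2.37
```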
16
Chirarattananon P. A direct optic flow-based strategy for inverse flight altitude estimation with monocular vision and IMU measurements. Bioinspir Biomim 2018; 13:036004. [PMID: 29256435] [DOI: 10.1088/1748-3190/aaa2be]
Abstract
With tiny and limited nervous systems, insects demonstrate a remarkable ability to fly through complex environments. Optic flow has been identified as playing a crucial role in regulating flight conditions and navigation in flies and bees. In robotics, optic flow has been widely studied thanks to its low computational requirements. However, with only monocular visual information, optic flow is inherently devoid of the scale factor required for estimating absolute distance. In this paper, we propose a strategy for estimating the flight altitude of a flying robot with a ventral camera by combining optic flow with measurements from an inertial measurement unit (IMU). Instead of using the prevalent feature-based approach for calculating optic flow, we implement a direct method that evaluates the flow information via image gradients. We show that the direct approach notably simplifies the computation compared to the feature-based method. When combined with an extended Kalman filter for fusion with the IMU measurements, the flight altitude can be estimated in real time. We carried out extensive flight tests in different settings. Across 31 hovering and vertical flights near an altitude of 40 cm, we achieved an RMS altitude-estimation error of 2.51 cm. Further analysis of factors that affect the quality of the flow and the distance estimate is also provided.
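A toy 1D illustration (not the paper's filter) of why fusing the scale-free flow with an IMU recovers metric altitude: the ventral flow ratio is w = v/z, and differentiating gives w_dot = a/z - w^2, so z = a/(w_dot + w^2) wherever the acceleration a is nonzero.

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 2.0, dt)
a = 0.8 * np.sin(2 * np.pi * t)       # vertical acceleration (m/s^2)
v = np.cumsum(a) * dt + 0.1           # vertical speed, nonzero start
z = np.cumsum(v) * dt + 0.40          # true altitude, starts at 0.4 m
w = v / z                             # ideal (noise-free) ventral flow ratio
w_dot = np.gradient(w, dt)
z_est = a / (w_dot + w**2)            # pointwise altitude from w, w_dot, a
k = 250                               # sample where a is far from zero
print(z[k], z_est[k])                 # the two agree closely
```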
Affiliation(s)
- Pakpong Chirarattananon, Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Hong Kong SAR, People's Republic of China
19
Realtime Edge Based Visual Inertial Odometry for MAV Teleoperation in Indoor Environments. J Intell Robot Syst 2017. [DOI: 10.1007/s10846-017-0670-y]
20
Spica R, Robuffo Giordano P, Chaumette F. Coupling active depth estimation and visual servoing via a large projection operator. Int J Rob Res 2017. [DOI: 10.1177/0278364917728327]
21
Li P, Garratt M, Lambert A, Lin S. Inertial-Aided Metric States and Surface Normal Estimation using a Monocular Camera. J Intell Robot Syst 2017. [DOI: 10.1007/s10846-017-0506-9]
22
Wang C, Li K, Liang G, Chen H, Huang S, Wu X. A Heterogeneous Sensing System-Based Method for Unmanned Aerial Vehicle Indoor Positioning. Sensors (Basel) 2017; 17:1842. [PMID: 28796184] [PMCID: PMC5579552] [DOI: 10.3390/s17081842]
Abstract
The indoor environment has brought new challenges for micro Unmanned Aerial Vehicles (UAVs) in executing tasks with high positioning accuracy. Conventional GPS-based positioning methods are unreliable indoors, although the confined space makes it possible to apply new technologies. In this paper, we propose a novel indoor self-positioning system for UAVs based on a heterogeneous sensing system, which integrates data from a structured light scanner, ultra-wideband (UWB), and an inertial navigation system (INS). We built the structured light scanner, composed of a low-cost structured light source and camera, ourselves to improve positioning accuracy in a specified area. We applied adaptive Kalman filtering to fuse the data from the INS and UWB while the vehicle was moving, and Gauss filtering to fuse the data from the UWB and the structured light scanner in the hovering state. The results of our simulations and experiments demonstrate that the proposed strategy significantly improves positioning accuracy both in motion and in the hovering state, as compared to using a single sensor.
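A one-axis sketch of the in-motion fusion pattern described above (INS prediction, UWB position correction), with a crude stand-in for the adaptive step; models and tuning values are illustrative, not the paper's.

```python
import numpy as np

dt = 0.02
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([0.5 * dt**2, dt])         # accelerometer input matrix
H = np.array([[1.0, 0.0]])              # UWB measures position
Q, R = 1e-4 * np.eye(2), np.array([[0.05**2]])

x, P = np.zeros(2), np.eye(2)
true_p, true_v = 0.0, 0.3
rng = np.random.default_rng(0)
for k in range(200):
    acc = 0.1 * np.sin(0.1 * k)                      # accelerometer reading
    true_v += acc * dt; true_p += true_v * dt
    x = F @ x + B * acc                              # INS-driven prediction
    P = F @ P @ F.T + Q
    z = true_p + 0.05 * rng.standard_normal()        # UWB position fix
    y = z - H @ x                                    # innovation
    S = H @ P @ H.T + R
    # Crude adaptation: inflate R when the innovation is inconsistent with S
    R_k = R * max(1.0, float(y**2 / S))
    S = H @ P @ H.T + R_k
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
print(true_p, x[0])   # filtered position tracks the truth
```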
Affiliation(s)
- Can Wang, Guangdong Provincial Key Laboratory of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Kang Li, School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan 430000, China
- Guoyuan Liang, Guangdong Provincial Key Laboratory of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Haoyao Chen, School of Mechanical Engineering and Automation, Harbin Institute of Technology Shenzhen Graduate School, Shenzhen 518055, China
- Sheng Huang, Guangdong Provincial Key Laboratory of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xinyu Wu, Guangdong Provincial Key Laboratory of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong 999077, China
23
Forster C, Carlone L, Dellaert F, Scaramuzza D. On-Manifold Preintegration for Real-Time Visual-Inertial Odometry. IEEE Trans Robot 2017. [DOI: 10.1109/tro.2016.2597321]
24
Kaiser J, Martinelli A, Fontana F, Scaramuzza D. Simultaneous State Initialization and Gyroscope Bias Calibration in Visual Inertial Aided Navigation. IEEE Robot Autom Lett 2017. [DOI: 10.1109/lra.2016.2521413]
25
Mafrica S, Servel A, Ruffier F. Minimalistic optic flow sensors applied to indoor and outdoor visual guidance and odometry on a car-like robot. Bioinspir Biomim 2016; 11:066007. [PMID: 27831937] [DOI: 10.1088/1748-3190/11/6/066007]
Abstract
Here we present a novel bio-inspired optic flow (OF) sensor and its application to visual guidance and odometry on a low-cost car-like robot called BioCarBot. The minimalistic OF sensor was robust to high-dynamic-range lighting conditions and to the various visual patterns encountered, thanks to its M2APIX auto-adaptive pixels and the new cross-correlation OF algorithm implemented. The low-cost car-like robot estimated its velocity and steering angle, and therefore its position and orientation, via an extended Kalman filter (EKF) using only two downward-facing OF sensors and the Ackerman steering model. Indoor and outdoor experiments were carried out in which the robot was driven in closed loop based on the velocity and steering-angle estimates. The experimental results show that our novel OF sensor can deliver high-frequency measurements ([Formula: see text]) over a wide OF range (1.5-[Formula: see text]) and across a 7-decade range of light levels. The OF resolution was constant and could be adjusted as required (up to [Formula: see text]), and the OF precision obtained was relatively high (standard deviation of [Formula: see text] with an average OF of [Formula: see text], under the most demanding lighting conditions). An EKF-based algorithm gave the robot's position and orientation with relatively high accuracy (maximum errors outdoors at a very low light level: [Formula: see text] and [Formula: see text] over about [Formula: see text] and [Formula: see text]) despite the low-resolution control systems of the steering servo and the DC motor, as well as a simplified model identification and calibration. Finally, the minimalistic OF-based odometry results were compared to those obtained using measurements from an inertial measurement unit (IMU) and a motor speed sensor.
Affiliation(s)
- Stefano Mafrica, PSA Peugeot Citroën, 78140 Vélizy-Villacoublay, France; Aix-Marseille Univ., CNRS, ISM, Inst. Movement Sci., Marseille, France
26
Abstract
We propose a novel stereo visual IMU-assisted (inertial measurement unit) technique that extends the use of the KLT tracker (Kanade–Lucas–Tomasi) to large inter-frame motions. The constrained and coherent inter-frame motion acquired from the IMU is applied to detected features through a homogeneous transform using 3D geometry and stereoscopy properties. This efficiently predicts the projection of the optical flow in subsequent images. Accurate adaptive tracking windows limit the tracking areas, resulting in minimal loss of features, and also prevent the tracking of dynamic objects. This new feature-tracking approach is adopted as part of a fast and robust visual odometry algorithm based on the double dogleg trust-region method. Comparisons with gyro-aided KLT and variant approaches show that our technique is able to maintain minimal feature loss and low computational cost even on image sequences presenting significant scale change. A visual odometry solution based on this IMU-assisted KLT gives more accurate results than an INS/GPS solution for trajectory generation in certain contexts.
27
|
Waldmann J, da Silva RIG, Chagas RAJ. Observability analysis of inertial navigation errors from optical flow subspace constraint. Inf Sci (N Y) 2016. [DOI: 10.1016/j.ins.2015.08.017] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
28
|
Briod A, Zufferey JC, Floreano D. A method for ego-motion estimation in micro-hovering platforms flying in very cluttered environments. Auton Robots 2015. [DOI: 10.1007/s10514-015-9494-4] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
29
|
Planar-Based Visual Inertial Navigation: Observability Analysis and Motion Estimation. J INTELL ROBOT SYST 2015. [DOI: 10.1007/s10846-015-0257-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
30
Affiliation(s)
- Ji Zhang, The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Sanjiv Singh, The Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
31
Grabe V, Bülthoff HH, Scaramuzza D, Giordano PR. Nonlinear ego-motion estimation from optical flow for online control of a quadrotor UAV. Int J Rob Res 2015. [DOI: 10.1177/0278364915578646]
Abstract
For the control of unmanned aerial vehicles (UAVs) in GPS-denied environments, cameras have been widely exploited as the main sensory modality for addressing the UAV state estimation problem. However, the use of visual information for ego-motion estimation presents several theoretical and practical difficulties, such as data association, occlusions, and lack of direct metric information when exploiting monocular cameras. In this paper, we address these issues by considering a quadrotor UAV equipped with an onboard monocular camera and an inertial measurement unit (IMU). First, we propose a robust ego-motion estimation algorithm for recovering the UAV scaled linear velocity and angular velocity from optical flow by exploiting the so-called continuous homography constraint in the presence of planar scenes. Then, we address the problem of retrieving the (unknown) metric scale by fusing the visual information with measurements from the onboard IMU. To this end, two different estimation strategies are proposed and critically compared: a first exploiting the classical extended Kalman filter (EKF) formulation, and a second one based on a novel nonlinear estimation framework. The main advantage of the latter scheme lies in the possibility of imposing a desired transient response to the estimation error when the camera moves with a constant acceleration norm with respect to the observed plane. We indeed show that, when compared against the EKF on the same trajectory and sensory data, the nonlinear scheme yields considerably superior performance in terms of convergence rate and predictability of the estimation. The paper is then concluded by an extensive experimental validation, including an onboard closed-loop control of a real quadrotor UAV meant to demonstrate the robustness of our approach in real-world conditions.
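A synthetic sketch of the continuous homography constraint underpinning the ego-motion step: a calibrated image point x (third coordinate 1) on a plane n^T X = d has flow u = -(I - x e3^T)([omega]_x + (v/d) n^T) x. For brevity the sketch assumes the plane normal n is known (a downward camera over flat ground), a simplification relative to the paper.

```python
import numpy as np

def skew(w):
    return np.array([[0., -w[2], w[1]],
                     [w[2], 0., -w[0]],
                     [-w[1], w[0], 0.]])

rng = np.random.default_rng(1)
omega = np.array([0.1, -0.2, 0.05])     # angular velocity, from the gyros
v_over_d = np.array([0.4, 0.1, -0.3])   # unknown scaled linear velocity
n = np.array([0.0, 0.0, 1.0])           # plane normal, assumed known here
e3 = np.array([0.0, 0.0, 1.0])

A_rows, b_rows = [], []
for _ in range(10):
    x = np.array([*rng.uniform(-0.5, 0.5, 2), 1.0])      # calibrated point
    Pr = np.eye(3) - np.outer(x, e3)
    u = -Pr @ (skew(omega) + np.outer(v_over_d, n)) @ x  # its ideal flow
    # Rearranged: u + Pr [omega]_x x = -(n . x) Pr (v/d); stack and solve
    A_rows.append(-(n @ x) * Pr)
    b_rows.append(u + Pr @ skew(omega) @ x)
w_hat = np.linalg.lstsq(np.vstack(A_rows), np.hstack(b_rows), rcond=None)[0]
print(w_hat)   # -> approx [0.4, 0.1, -0.3], i.e. v/d up to measurement noise
```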
Affiliation(s)
- Volker Grabe, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Robotics and Perception Group, University of Zurich, Zurich, Switzerland
- Heinrich H. Bülthoff, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
- Davide Scaramuzza, Robotics and Perception Group, University of Zurich, Zurich, Switzerland
32
Vision-aided Estimation of Attitude, Velocity, and Inertial Measurement Bias for UAV Stabilization. J Intell Robot Syst 2015. [DOI: 10.1007/s10846-015-0206-2]
33
|
Tkocz M, Janschek K. Towards Consistent State and Covariance Initialization for Monocular SLAM Filters. J INTELL ROBOT SYST 2015. [DOI: 10.1007/s10846-015-0185-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
34
Unmanned Aerial Vehicle Navigation Using Wide-Field Optical Flow and Inertial Sensors. J Robot 2015. [DOI: 10.1155/2015/251379]
Abstract
This paper offers a set of novel navigation techniques that rely on the use of inertial sensors and wide-field optical flow information. The aircraft ground velocity and attitude states are estimated with an Unscented Information Filter (UIF) and are evaluated with respect to two sets of experimental flight data collected from an Unmanned Aerial Vehicle (UAV). Two different formulations are proposed: a full-state formulation including velocity and attitude, and a simplified formulation that assumes the lateral and vertical velocities of the aircraft are negligible. An additional state is also considered within each formulation to recover the image distance, which can be measured using a laser rangefinder. The results demonstrate that the full-state formulation is able to estimate the aircraft ground velocity to within 1.3 m/s of a GPS receiver solution used as reference "truth" and to maintain attitude-angle errors within a 1.4-degree standard deviation for both sets of flight data.
35
Spica R, Giordano PR, Chaumette F. Active Structure From Motion: Application to Point, Sphere, and Cylinder. IEEE Trans Robot 2014. [DOI: 10.1109/tro.2014.2365652]
36
Jia C, Evans BL. Online camera-gyroscope autocalibration for cell phones. IEEE Trans Image Process 2014; 23:5070-5081. [PMID: 25265608] [DOI: 10.1109/tip.2014.2360120]
Abstract
The gyroscope plays a key role in estimating 3D camera rotation for various vision applications on cell phones, including video stabilization and feature tracking. Successful fusion of gyroscope and camera data requires that the camera, the gyroscope, and their relative pose be calibrated. In addition, the timestamps of gyroscope readings and video frames are usually not well synchronized. Previous work performed camera-gyroscope calibration and synchronization offline, after the entire video sequence had been captured and with restrictions on the camera motion, which is unnecessarily restrictive for everyday users running apps that directly use the gyroscope. In this paper, we propose an online method that estimates all the necessary parameters while a user is capturing video. Our contributions are: 1) simultaneous online camera self-calibration and camera-gyroscope calibration based on an implicit extended Kalman filter, and 2) generalization of the multiple-view coplanarity constraint on camera rotation under a rolling-shutter camera model for cell phones. The proposed method is able to estimate the needed calibration and synchronization parameters online for all kinds of camera motion and can be embedded in gyro-aided applications such as video stabilization and feature tracking. Both Monte Carlo simulations and cell phone experiments show that the proposed online calibration and synchronization method converges quickly to the ground-truth values.
37
Jiang W, Wang L, Niu X, Zhang Q, Zhang H, Tang M, Hu X. High-precision image aided inertial navigation with known features: observability analysis and performance evaluation. Sensors (Basel) 2014; 14:19371-19401. [PMID: 25330046] [PMCID: PMC4239938] [DOI: 10.3390/s141019371]
Abstract
A high-precision image-aided inertial navigation system (INS) is proposed as an alternative to carrier-phase-based differential Global Navigation Satellite Systems (CDGNSSs) when satellite-based navigation systems are unavailable. In this paper, the image/INS integrated algorithm is modeled by a tightly-coupled iterative extended Kalman filter (IEKF). Tightly-coupled integration ensures that the integrated system is reliable even if few known feature points (i.e., fewer than three) are observed in the images. A new global observability analysis of this tightly-coupled integration is presented to guarantee that the system is observable under the necessary conditions. The analysis conclusions were verified by simulations and field tests. The field tests also indicate that high-precision integrated solutions for position (centimeter-level) and attitude (half-degree-level) can be achieved in a global reference frame.
Affiliation(s)
- Weiping Jiang, GNSS Research Center, Wuhan University, Wuhan 430079, China
- Li Wang, GNSS Research Center, Wuhan University, Wuhan 430079, China
- Xiaoji Niu, GNSS Research Center, Wuhan University, Wuhan 430079, China
- Quan Zhang, GNSS Research Center, Wuhan University, Wuhan 430079, China
- Hui Zhang, GNSS Research Center, Wuhan University, Wuhan 430079, China
- Min Tang, School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
- Xiangyun Hu, School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
38
Abstract
In this paper, we focus on the problem of pose estimation using measurements from an inertial measurement unit and a rolling-shutter (RS) camera. The challenges posed by RS image capture are typically addressed by using approximate, low-dimensional representations of the camera motion. However, when the motion contains significant accelerations (common in small-scale systems) these representations can lead to loss of accuracy. By contrast, we here describe a different approach, which exploits the inertial measurements to avoid any assumptions on the nature of the trajectory. Instead of parameterizing the trajectory, our approach parameterizes the errors in the trajectory estimates by a low-dimensional model. A key advantage of this approach is that, by using prior knowledge about the estimation errors, it is possible to obtain upper bounds on the modeling inaccuracies incurred by different choices of the parameterization’s dimension. These bounds can provide guarantees for the performance of the method, and facilitate addressing the accuracy–efficiency tradeoff. This RS formulation is used in an extended-Kalman-filter estimator for localization in unknown environments. Our results demonstrate that the resulting algorithm outperforms prior work, in terms of accuracy and computational cost. Moreover, we demonstrate that the algorithm makes it possible to use low-cost consumer devices (i.e. smartphones) for high-precision navigation on multiple platforms.
Affiliation(s)
- Mingyang Li, Department of Electrical Engineering, University of California at Riverside, CA, USA
39
Abstract
When fusing visual and inertial measurements for motion estimation, each measurement’s sampling time must be precisely known. This requires knowledge of the time offset that inevitably exists between the two sensors’ data streams. The first contribution of this work is an online approach for estimating this time offset, by treating it as an additional state variable to be estimated along with all other variables of interest (inertial measurement unit (IMU) pose and velocity, biases, camera-to-IMU transformation, feature positions). We show that this approach can be employed in pose-tracking with mapped features, in simultaneous localization and mapping, and in visual–inertial odometry. The second main contribution of this paper is an analysis of the identifiability of the time offset between the visual and inertial sensors. We show that the offset is locally identifiable, except in a small number of degenerate motion cases, which we characterize in detail. These degenerate cases are either (i) cases known to cause loss of observability even when no time offset exists, or (ii) cases that are unlikely to occur in practice. Our simulation and experimental results validate these theoretical findings, and demonstrate that the proposed approach yields high-precision, consistent estimates, in scenarios involving either known or unknown features, with both constant and time-varying offsets.
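A toy scalar sketch of the first contribution: the offset t_d becomes a filter state, and because the camera prediction is evaluated at the shifted time, its Jacobian with respect to t_d is the trajectory's velocity. The 1D trajectory and tuning below are illustrative only; note that the update vanishes exactly when the velocity term does, echoing the degenerate cases above.

```python
import numpy as np

dt, td_true = 0.01, 0.035
p = np.sin                        # known 1D "pose" trajectory, illustrative
td_hat, P, R = 0.0, 1e-2, 1e-6    # offset estimate, its variance, meas noise
for k in range(1, 400):
    t = k * dt
    z = p(t + td_true)            # camera measurement with a stale stamp
    H = np.cos(t + td_hat)        # dz/d(td): the velocity term; when it is
                                  # ~0 the offset is momentarily unobservable
    K = P * H / (H * P * H + R)   # scalar Kalman gain
    td_hat += K * (z - p(t + td_hat))
    P = (1.0 - K * H) * P + 1e-10
print(td_hat)                     # -> approx 0.035
```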
Affiliation(s)
- Mingyang Li, Department of Electrical Engineering, University of California, Riverside, USA
40
Hesch JA, Kottas DG, Bowman SL, Roumeliotis SI. Consistency Analysis and Improvement of Vision-aided Inertial Navigation. IEEE Trans Robot 2014. [DOI: 10.1109/tro.2013.2277549]
41
Hesch JA, Kottas DG, Bowman SL, Roumeliotis SI. Camera-IMU-based localization: Observability analysis and consistency improvement. Int J Rob Res 2013. [DOI: 10.1177/0278364913509675]
Abstract
This work investigates the relationship between system observability properties and estimator inconsistency for a Vision-aided Inertial Navigation System (VINS). In particular, first we introduce a new methodology for determining the unobservable directions of nonlinear systems by factorizing the observability matrix according to the observable and unobservable modes. Subsequently, we apply this method to the VINS nonlinear model and determine its unobservable directions analytically. We leverage our analysis to improve the accuracy and consistency of linearized estimators applied to VINS. Our key findings are evaluated through extensive simulations and experimental validation on real-world data, demonstrating the superior accuracy and consistency of the proposed VINS framework compared to standard approaches.
Affiliation(s)
- Dimitrios G Kottas, Department of Computer Science and Engineering, University of Minnesota, Minneapolis, USA
- Sean L Bowman, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
- Stergios I Roumeliotis, Department of Computer Science and Engineering, University of Minnesota, Minneapolis, USA
43
Abstract
In this paper, we focus on the problem of motion tracking in unknown environments using visual and inertial sensors. We term this estimation task visual–inertial odometry (VIO), in analogy to the well-known visual-odometry problem. We present a detailed study of extended Kalman filter (EKF)-based VIO algorithms, by comparing both their theoretical properties and empirical performance. We show that an EKF formulation where the state vector comprises a sliding window of poses (the multi-state-constraint Kalman filter (MSCKF)) attains better accuracy, consistency, and computational efficiency than the simultaneous localization and mapping (SLAM) formulation of the EKF, in which the state vector contains the current pose and the features seen by the camera. Moreover, we prove that both types of EKF approaches are inconsistent, due to the way in which Jacobians are computed. Specifically, we show that the observability properties of the EKF’s linearized system models do not match those of the underlying system, which causes the filters to underestimate the uncertainty in the state estimates. Based on our analysis, we propose a novel, real-time EKF-based VIO algorithm, which achieves consistent estimation by (i) ensuring the correct observability properties of its linearized system model, and (ii) performing online estimation of the camera-to-inertial measurement unit (IMU) calibration parameters. This algorithm, which we term MSCKF 2.0, is shown to achieve accuracy and consistency higher than even an iterative, sliding-window fixed-lag smoother, in both Monte Carlo simulations and real-world testing.
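A structural sketch (state bookkeeping only, no vision or covariance algebra; dimensions and names are illustrative) of what makes the sliding-window formulation scale so well: poses are cloned into a bounded window, and features never enter the state.

```python
import numpy as np

class MsckfState:
    """Sliding-window state: current IMU state plus a bounded window of
    cloned camera poses; features are marginalized into constraints
    between clones (nullspace trick) instead of being estimated."""
    def __init__(self, imu_dim=16, pose_dim=7, max_clones=10):
        self.x_imu = np.zeros(imu_dim)   # pose, velocity, biases, calib
        self.pose_dim, self.max_clones = pose_dim, max_clones
        self.clones = []                 # window of past camera poses

    def on_new_image(self):
        # Clone the current pose (a full filter also augments covariance)
        self.clones.append(self.x_imu[:self.pose_dim].copy())
        if len(self.clones) > self.max_clones:
            self.clones.pop(0)           # marginalize the oldest pose

s = MsckfState()
for _ in range(15):
    s.on_new_image()
print(len(s.clones))   # bounded at 10: cost independent of map size
```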