1. Ferguson JM, Rucker DC, Webster RJ. Unified Shape and External Load State Estimation for Continuum Robots. IEEE Trans Robot 2024; 40:1813-1827. PMID: 39464302; PMCID: PMC11500828; DOI: 10.1109/tro.2024.3360950.
Abstract
Continuum robots navigate narrow, winding passageways while safely and compliantly interacting with their environments. Sensing the robot's shape under these conditions is often done indirectly, using a few coarsely distributed (e.g. strain or position) sensors combined with the robot's mechanics-based model. More recently, given high-fidelity shape data, external interaction loads along the robot have been estimated by solving an inverse problem on the mechanics model of the robot. In this paper, we argue that since shape and force are fundamentally coupled, they should be estimated simultaneously in a statistically principled approach. We accomplish this by applying continuous-time batch estimation directly to the arclength domain. A general continuum robot model serves as a statistical prior which is fused with discrete, noisy measurements taken along the robot's backbone. The result is a continuous posterior containing both shape and load functions of arclength, as well as their uncertainties. We first test the approach with a Cosserat rod, i.e. the underlying modeling framework that is the basis for a variety of continuum robots. We verify our approach numerically using distributed loads with various sensor combinations. Next, we experimentally validate shape and external load errors for highly concentrated force distributions (point loads). Finally, we apply the approach to a tendon-actuated continuum robot demonstrating applicability to more complex actuated robots.
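
The core idea above, a mechanics-based prior fused with a few discrete, noisy measurements along the backbone to produce a continuous posterior with uncertainty, can be illustrated with a toy batch least-squares problem over arclength. The sketch below is not the paper's Cosserat-rod formulation: a simple curvature-smoothness prior stands in for the rod mechanics, a 1-D deflection profile stands in for the full shape-and-load state, and all weights, noise levels, and dimensions are assumed for the demo.

```python
# Toy arclength-domain batch estimation: smoothness prior + sparse noisy measurements.
import numpy as np

N, L = 101, 0.2                       # nodes along the backbone, rod length [m]
s = np.linspace(0.0, L, N)            # arclength grid
ds = s[1] - s[0]

# Prior: penalize curvature of the 1-D deflection profile y(s) (stand-in for rod mechanics).
D2 = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N) + np.diag(np.ones(N - 1), -1))[1:-1] / ds**2
w_prior = 1e-2                        # prior weight (assumed; trades smoothness vs. data fit)

# Discrete, noisy position measurements at a few stations along the rod.
meas_idx = np.array([20, 50, 80, 100])
sigma = 1e-3                          # measurement noise std [m] (assumed)
rng = np.random.default_rng(0)
y_true = 0.05 * (s / L) ** 2          # ground-truth deflection used only for the demo
z = y_true[meas_idx] + sigma * rng.normal(size=meas_idx.size)

# Stack prior and measurement terms into one weighted least-squares problem.
H = np.zeros((meas_idx.size, N))
H[np.arange(meas_idx.size), meas_idx] = 1.0
A = np.vstack([np.sqrt(w_prior) * D2, H / sigma])
b = np.concatenate([np.zeros(D2.shape[0]), z / sigma])

info = A.T @ A                        # information (inverse covariance) matrix
y_hat = np.linalg.solve(info, A.T @ b)
post_std = np.sqrt(np.diag(np.linalg.inv(info)))   # per-node posterior std
print(y_hat[meas_idx], post_std[meas_idx])
```

The diagonal of the inverse information matrix gives a per-node posterior variance, a discrete analogue of the uncertainty functions of arclength the abstract describes.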

2. Ferguson JM, Ertop TE, Herrell SD, Webster RJ. Unified Robot and Inertial Sensor Self-Calibration. Robotica 2023; 41:1590-1616. PMID: 37732333; PMCID: PMC10508886; DOI: 10.1017/s0263574723000012.
Abstract
Robots and inertial measurement units (IMUs) are typically calibrated independently. IMUs are placed in purpose-built, expensive automated test rigs. Robot poses are typically measured using highly accurate (and thus expensive) tracking systems. In this paper, we present a quick, easy, and inexpensive new approach to calibrate both simultaneously, simply by attaching the IMU anywhere on the robot's end effector and moving the robot continuously through space. Our approach provides a fast and inexpensive alternative to both robot and IMU calibration, without any external measurement systems. We accomplish this using continuous-time batch estimation, providing statistically optimal solutions. Under Gaussian assumptions, we show that this becomes a nonlinear least squares problem and analyze the structure of the associated Jacobian. Our methods are validated both numerically and experimentally and compared to standard individual robot and IMU calibration methods.
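
As a rough illustration of how such a calibration reduces to nonlinear least squares under Gaussian assumptions, the sketch below estimates an unknown IMU mounting rotation and accelerometer bias from static robot poses using only gravity. It is a toy stand-in, not the paper's continuous-time formulation: the forward kinematics are assumed exact, only the accelerometer is used, and all noise levels are made up.

```python
# Toy robot/IMU self-calibration posed as nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

g_w = np.array([0.0, 0.0, -9.81])                    # gravity in the world frame

# Known end-effector orientations from the robot's forward kinematics (assumed exact).
R_we = R.random(20, random_state=1).as_matrix()      # shape (20, 3, 3)

def predict(R_ei, b):
    # Specific force the accelerometer should read at each static pose.
    return np.einsum('ij,kjl,l->ki', R_ei.T, R_we.transpose(0, 2, 1), -g_w) + b

# Ground truth, used only to synthesize noisy measurements for the demo.
R_ei_true = R.from_rotvec([0.1, -0.2, 0.3]).as_matrix()   # flange-to-IMU mounting rotation
b_true = np.array([0.05, -0.02, 0.01])                    # accelerometer bias
rng = np.random.default_rng(1)
acc_meas = predict(R_ei_true, b_true) + 0.01 * rng.normal(size=(20, 3))

def residuals(x):
    """Stacked accelerometer residuals; x = [rotation vector (3), bias (3)]."""
    return (predict(R.from_rotvec(x[:3]).as_matrix(), x[3:]) - acc_meas).ravel()

sol = least_squares(residuals, x0=np.zeros(6))       # Gauss-Newton-style solve
J = sol.jac                                          # the kind of Jacobian whose structure the paper analyzes
cov = np.linalg.inv(J.T @ J) * np.var(sol.fun)       # rough parameter covariance
print(sol.x, np.sqrt(np.diag(cov)))
```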
Affiliation(s)
- James M. Ferguson, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
- Tayfun Efe Ertop, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
- S. Duke Herrell, Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Robert J. Webster, Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA

3. Wang Y, Yang J, Peng X, Wu P, Gao L, Huang K, Chen J, Kneip L. Visual Odometry with an Event Camera Using Continuous Ray Warping and Volumetric Contrast Maximization. Sensors (Basel) 2022; 22:5687. PMID: 35957244; PMCID: PMC9370870; DOI: 10.3390/s22155687.
Abstract
We present a new solution to tracking and mapping with an event camera. The motion of the camera contains both rotational and translational displacements in the plane, and the displacements occur in an arbitrarily structured environment. As a result, the image matching may no longer be represented by a low-dimensional homographic warping, which complicates the application of the commonly used Image of Warped Events (IWE). We introduce a new solution to this problem by performing contrast maximization in 3D. The 3D location of the rays cast for each event is smoothly varied as a function of a continuous-time motion parametrization, and the optimal parameters are found by maximizing the contrast in a volumetric ray density field. Our method thus performs joint optimization over motion and structure. The practical validity of our approach is supported by an application to AGV motion estimation and 3D reconstruction with a single vehicle-mounted event camera. The method approaches the performance obtained with regular cameras and outperforms them in challenging visual conditions.
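
A hedged sketch of the volumetric contrast-maximization idea described above: each event is back-projected as a ray, the ray samples are warped into a reference frame under a candidate motion, and the variance of the resulting ray-density volume serves as the score. The pinhole intrinsics, constant planar-motion model, voxel grid, and the 1-D grid search used in place of the paper's joint continuous-time optimization are all assumptions.

```python
# Toy volumetric contrast maximization for a planar (AGV-like) camera motion.
import numpy as np

fx = fy = 200.0; cx = cy = 120.0                 # assumed pinhole intrinsics
depths = np.linspace(0.5, 3.0, 16)               # depth samples along each ray [m]

def ray_density_contrast(events, omega, v):
    """events: (N, 3) rows of (x_pix, y_pix, t); omega: yaw rate; v: (v_x, v_z)."""
    x, y, t = events.T
    # Back-project each event into a viewing ray in its own camera frame.
    d = np.stack([(x - cx) / fx, (y - cy) / fy, np.ones_like(x)], axis=1)
    pts = d[:, None, :] * depths[None, :, None]  # (N, D, 3) samples along the rays
    # Warp the samples into the reference frame under constant planar motion.
    c, s = np.cos(omega * t), np.sin(omega * t)
    Xr = c[:, None] * pts[..., 0] - s[:, None] * pts[..., 2] + v[0] * t[:, None]
    Zr = s[:, None] * pts[..., 0] + c[:, None] * pts[..., 2] + v[1] * t[:, None]
    Yr = pts[..., 1]
    # Accumulate ray samples into a voxel grid and score its contrast (variance).
    grid, _ = np.histogramdd(
        np.stack([Xr.ravel(), Yr.ravel(), Zr.ravel()], axis=1),
        bins=(48, 32, 48), range=[(-2, 2), (-1, 1), (0, 4)])
    return grid.var()

# Stand-in event data and a 1-D grid search over yaw rate (the paper optimizes jointly).
rng = np.random.default_rng(3)
events = np.column_stack([rng.uniform(0, 240, 5000),
                          rng.uniform(0, 240, 5000),
                          rng.uniform(0, 0.05, 5000)])
best = max(np.linspace(-1.0, 1.0, 21),
           key=lambda w: ray_density_contrast(events, w, (0.0, 0.0)))
print("yaw rate with maximal contrast:", best)
```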
Affiliation(s)
- Yifu Wang, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Jiaqi Yang, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Xin Peng, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Peng Wu, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Ling Gao, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Kun Huang, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Jiaben Chen, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Laurent Kneip, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China; Shanghai Engineering Research Center of Intelligent Vision and Imaging, ShanghaiTech University, Shanghai 201210, China

4. Eckenhoff K, Geneva P, Huang G. MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera Visual-Inertial Navigation System. IEEE Trans Robot 2021. DOI: 10.1109/tro.2021.3049445.

5. Huang W, Wan W, Liu H. Optimization-Based Online Initialization and Calibration of Monocular Visual-Inertial Odometry Considering Spatial-Temporal Constraints. Sensors (Basel) 2021; 21:2673. PMID: 33920218; PMCID: PMC8070556; DOI: 10.3390/s21082673.
Abstract
Online system state initialization and simultaneous spatial-temporal calibration are critical for monocular Visual-Inertial Odometry (VIO), since these parameters are often poorly provided or entirely unknown. Although impressive performance has been achieved, most existing methods are designed for filter-based VIOs; for optimization-based VIOs, few online spatial-temporal calibration methods exist in the literature. In this paper, we propose an optimization-based online initialization and spatial-temporal calibration method for VIO. The method requires no prior knowledge of the spatial or temporal configuration. It estimates the initial metric scale, velocity, gravity, and Inertial Measurement Unit (IMU) biases, and calibrates the coordinate transformation and time offset between the camera and IMU sensors. The method works as follows. First, it uses a time offset model and two short-term motion interpolation algorithms to align and interpolate the camera and IMU measurement data. Then, the aligned and interpolated results are sent to an incremental estimator to estimate the initial states and the spatial-temporal parameters. After that, a bundle adjustment is included to improve the accuracy of the estimated results. Experiments on both synthetic and public datasets are performed to examine the performance of the proposed method. The results show that both the initial states and the spatial-temporal parameters are well estimated, and the method outperforms the contemporary methods used for comparison.
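
To make the temporal part of such a calibration concrete, the sketch below aligns a synthetic gyro signal with a camera-derived rotation-rate signal by interpolating the IMU data at time-shifted camera stamps and searching for the best offset. This is a much simpler stand-in than the paper's incremental estimator and bundle adjustment; the signals, noise levels, and brute-force search are assumptions for illustration only.

```python
# Toy camera-IMU time-offset estimation via interpolation at shifted timestamps.
import numpy as np

rng = np.random.default_rng(2)
t_imu = np.arange(0.0, 10.0, 0.005)                   # 200 Hz gyro timestamps
true_offset = 0.023                                   # seconds, ground truth for the demo
rate = np.abs(np.sin(2 * np.pi * 0.4 * t_imu)) + 0.1  # |angular rate| seen by the gyro
gyro_mag = rate + 0.02 * rng.normal(size=t_imu.size)

t_cam = np.arange(0.2, 9.8, 1 / 30)                   # 30 Hz camera timestamps
# Rotation-rate magnitude recovered from consecutive camera frames (synthetic).
cam_mag = np.abs(np.sin(2 * np.pi * 0.4 * (t_cam - true_offset))) + 0.1 \
          + 0.02 * rng.normal(size=t_cam.size)

def misfit(td):
    """Interpolate the gyro at camera times shifted by td and compare the two signals."""
    pred = np.interp(t_cam - td, t_imu, gyro_mag)
    return np.sum((pred - cam_mag) ** 2)

candidates = np.linspace(-0.1, 0.1, 2001)
td_hat = candidates[np.argmin([misfit(td) for td in candidates])]
print(f"estimated time offset: {td_hat * 1000:.1f} ms (true {true_offset * 1000:.1f} ms)")
```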
Affiliation(s)
- Weibo Huang, Key Laboratory of Machine Perception, Peking University Shenzhen Graduate School, Shenzhen 518055, China
- Weiwei Wan (corresponding author), School of Engineering Science, Osaka University, Osaka 5608531, Japan
- Hong Liu (corresponding author), Key Laboratory of Machine Perception, Peking University Shenzhen Graduate School, Shenzhen 518055, China

6. Burnett K, Schoellig AP, Barfoot TD. Do We Need to Compensate for Motion Distortion and Doppler Effects in Spinning Radar Navigation? IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2021.3052439.

7. Pacholska M, Dümbgen F, Scholefield A. Relax and Recover: Guaranteed Range-Only Continuous Localization. IEEE Robot Autom Lett 2020. DOI: 10.1109/lra.2020.2970952.

8. Ovrén H, Forssén PE. Trajectory representation and landmark projection for continuous-time structure from motion. Int J Rob Res 2019. DOI: 10.1177/0278364919839765.
Abstract
This paper revisits the problem of continuous-time structure from motion and introduces a number of extensions that improve convergence and efficiency. The formulation with a C²-continuous spline for the trajectory naturally incorporates inertial measurements as derivatives of the sought trajectory. We analyze the behavior of split spline interpolation on SO(3) and on ℝ³, and of a joint spline on SE(3), and show that the latter implicitly couples the direction of translation and rotation. Such an assumption can make good sense for a camera mounted on a robot arm, but not for hand-held or body-mounted cameras. Our experiments in the Spline Fusion framework show that a split spline on SO(3) × ℝ³ is preferable over an SE(3) spline in all tested cases. Finally, we investigate the problem of landmark reprojection on rolling-shutter cameras and show that the tested reprojection methods give similar quality, whereas their computational load varies by a factor of two.
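
The split-versus-joint distinction can be illustrated with a single interpolation step between two poses, a simplification of the full spline formulation in the paper. In the sketch below the split scheme interpolates rotation on SO(3) and translation in ℝ³ independently, while the joint scheme follows the SE(3) geodesic via the matrix exponential, so its translation is coupled to the rotation; the specific poses and library calls are assumptions for the demo.

```python
# Split (SO(3) + R^3) vs. joint (SE(3)) interpolation between two poses.
import numpy as np
from scipy.linalg import expm, logm
from scipy.spatial.transform import Rotation as R, Slerp

# End poses: identity, and a 90-degree yaw with a 1 m translation along x.
rots = R.from_euler('z', [0.0, 90.0], degrees=True)
p0, p1 = np.zeros(3), np.array([1.0, 0.0, 0.0])
T1 = np.eye(4); T1[:3, :3] = rots[1].as_matrix(); T1[:3, 3] = p1

u = 0.5  # interpolation fraction

# Split scheme: SLERP on SO(3), independent linear interpolation on R^3.
R_split = Slerp([0.0, 1.0], rots)(u).as_matrix()
p_split = (1 - u) * p0 + u * p1

# Joint scheme: geodesic on SE(3) via the matrix exponential/logarithm.
T_joint = expm(u * logm(T1))          # the start pose is the identity here
R_joint, p_joint = np.real(T_joint[:3, :3]), np.real(T_joint[:3, 3])

print("split translation:", p_split)  # [0.5, 0, 0]: unaffected by the rotation
print("joint translation:", p_joint)  # swings along an arc, coupled to the rotation
print("rotation angle at u=0.5, split vs joint:",
      np.degrees(np.arccos((np.trace(R_split) - 1) / 2)),
      np.degrees(np.arccos((np.trace(R_joint) - 1) / 2)))
```

Both schemes return the same intermediate rotation, but only the joint SE(3) interpolation bends the translation along an arc, which is the coupling the abstract refers to.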
Affiliation(s)
- Hannes Ovrén, Linköping University, Sweden; Swedish Defence Research Agency, Sweden

9. Mukadam M, Dong J, Yan X, Dellaert F, Boots B. Continuous-time Gaussian process motion planning via probabilistic inference. Int J Rob Res 2018. DOI: 10.1177/0278364918790369.
Abstract
We introduce a novel formulation of motion planning, for continuous-time trajectories, as probabilistic inference. We first show how smooth continuous-time trajectories can be represented by a small number of states using sparse Gaussian process (GP) models. We next develop an efficient gradient-based optimization algorithm that exploits this sparsity and GP interpolation. We call this algorithm the Gaussian Process Motion Planner (GPMP). We then detail how motion planning problems can be formulated as probabilistic inference on a factor graph. This forms the basis for GPMP2, a very efficient algorithm that combines GP representations of trajectories with fast, structure-exploiting inference via numerical optimization. Finally, we extend GPMP2 to an incremental algorithm, iGPMP2, that can efficiently replan when conditions change. We benchmark our algorithms against several sampling-based and trajectory optimization-based motion planning algorithms on planning problems in multiple environments. Our evaluation reveals that GPMP2 is several times faster than previous algorithms while retaining robustness. We also benchmark iGPMP2 on replanning problems, and show that it can find successful solutions in a fraction of the time required by GPMP2 to replan from scratch.
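
A toy sketch of the factor-graph view described above: trajectory states under a constant-velocity Gaussian-process prior, endpoint factors, and a hinge-loss obstacle factor, solved as one sparse nonlinear least-squares problem. This is not the authors' GPMP2 implementation; the obstacle, factor weights, and discretization are all assumed, and a generic solver stands in for the structure-exploiting inference the paper describes.

```python
# Toy GPMP2-style planning: GP prior factors + endpoint factors + obstacle factors.
import numpy as np
from scipy.optimize import least_squares

N, dt = 20, 0.5
start, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
obs_c, obs_r, eps = np.array([2.5, 0.15]), 0.8, 0.3       # circular obstacle + clearance

# Constant-velocity (white-noise-on-acceleration) state transition for [x, y, vx, vy].
Phi = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])

def residuals(xi):
    X = xi.reshape(N, 4)
    r = []
    # GP prior factors between consecutive states.
    for k in range(N - 1):
        r.append(10.0 * (X[k + 1] - Phi @ X[k]))
    # Endpoint factors pinning the start and goal positions.
    r.append(100.0 * (X[0, :2] - start))
    r.append(100.0 * (X[-1, :2] - goal))
    # Obstacle factors: hinge loss on signed distance to the inflated obstacle.
    d = np.linalg.norm(X[:, :2] - obs_c, axis=1) - (obs_r + eps)
    r.append(5.0 * np.minimum(d, 0.0))
    return np.concatenate(r)

# Initialize with a straight line from start to goal (a common choice in GPMP).
init = np.zeros((N, 4))
init[:, :2] = np.linspace(start, goal, N)
init[:, 2:] = (goal - start) / ((N - 1) * dt)
sol = least_squares(residuals, init.ravel())
path = sol.x.reshape(N, 4)[:, :2]
print(path.round(2))
```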
Affiliation(s)
- Mustafa Mukadam, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA, USA
- Jing Dong, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA, USA
- Xinyan Yan, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA, USA
- Frank Dellaert, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA, USA
- Byron Boots, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA, USA