1. Li H, Wang K, Yang K, Cheng R, Wang C, Fei L. Unconstrained self-calibration of stereo camera on visually impaired assistance devices. Appl Opt 2019; 58:6377-6387. PMID: 31503785. DOI: 10.1364/ao.58.006377.
Abstract
Stereo cameras are widely used in wearable visually impaired assistance devices (VIADs). However, inevitable vibration, shock, and mechanical stress can misalign the camera pair and cause a sharp decline in the quality of the acquired depth map, which significantly degrades the performance of VIADs. In this paper, we propose an epipolar-constraint-based unconstrained self-calibration method that requires neither user involvement nor a specific environment, while achieving a rotation accuracy of 0.83 mrad and a translation accuracy of 0.42 mm. Several approaches are proposed to address image-matching issues, including blurred-image removal and mismatched key-point removal. Based on correctly matched key-point pairs, a planar quadric-distribution approach is proposed to ensure the quality and consistency of the final key-point group. These collection approaches ensure the reliability of the key-point pairs, which is the most important factor in achieving high accuracy with minimal constraints. A comprehensive set of experiments demonstrates the high robustness of the proposed methods, which are well suited to VIADs. We also present a field test with blindfolded users to validate the flexibility and applicability of the approach.
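The paper's full pipeline (blur rejection, mismatch filtering, planar key-point selection) is not reproduced here; the sketch below only illustrates the underlying epipolar-constraint step, recovering the relative rotation and translation direction of a stereo pair from already-matched key points with OpenCV. The array names and the intrinsic matrix K are assumptions for illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def relative_pose_from_matches(pts_left, pts_right, K):
    """Estimate relative rotation R and unit translation t of a stereo pair
    from matched key points via the epipolar constraint (illustrative sketch).

    pts_left, pts_right : (N, 2) float32 arrays of matched pixel coordinates
    K                   : (3, 3) camera intrinsic matrix (assumed shared)
    """
    # Essential matrix via RANSAC enforces the epipolar constraint x2^T E x1 = 0
    E, inlier_mask = cv2.findEssentialMat(
        pts_left, pts_right, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)

    # Decompose E and keep the (R, t) that places triangulated points in front
    # of both cameras; t is recovered only up to scale (unit norm).
    _, R, t, _ = cv2.recoverPose(E, pts_left, pts_right, K, mask=inlier_mask)
    return R, t, inlier_mask
```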
2.
Abstract
A multi-camera dense RGB-D SLAM (simultaneous localization and mapping) system has the potential both to speed up scene reconstruction and to improve localization accuracy, thanks to multiple mounted sensors and an enlarged effective field of view. To effectively tap the potential of the system, two issues must be addressed: first, how to calibrate a system whose sensors usually share little or no common field of view, so as to maximize the effective field of view; second, how to fuse the location information from the different sensors. In this work, a three-Kinect system is reported. For system calibration, two methods are proposed: one, based on an improved hand-eye calibration method, is suitable for systems with an inertial measurement unit (IMU); the other is for pure visual SLAM without any auxiliary sensors. In the RGB-D SLAM stage, we extend and improve a state-of-the-art single-camera RGB-D SLAM method to the multi-camera system. We track the cameras' poses independently and, at each moment, select the one with the minimal pose error as the reference pose to correct the other cameras' poses. To optimize the initially estimated poses, we improve the deformation graph by adding a device-number attribute to distinguish surfels built by different cameras and perform deformations according to the device number. We verify the accuracy of our extrinsic calibration methods in the experiment section and show satisfactory reconstructed models from our multi-camera dense RGB-D SLAM. The RMSE (root-mean-square error) of the lengths measured in our reconstructed model is 1.55 cm, similar to state-of-the-art single-camera RGB-D SLAM systems.
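The authors' improved, IMU-aided hand-eye calibration is not reproduced here; as a hedged illustration of the classical AX = XB hand-eye formulation it builds on, the sketch below calls OpenCV's calibrateHandEye on per-frame pose pairs. The variable names and the assumption that the two pose streams are already time-aligned are illustrative only.

```python
import cv2
import numpy as np

def hand_eye_extrinsics(R_ref2world, t_ref2world, R_target2cam, t_target2cam):
    """Classical AX = XB hand-eye calibration (illustrative sketch).

    R_ref2world, t_ref2world   : lists of 3x3 / 3x1 poses of a reference frame
                                 (e.g. one camera or an IMU) in the world frame
    R_target2cam, t_target2cam : lists of 3x3 / 3x1 poses of a calibration
                                 target seen by the second camera
    Returns the fixed transform from the second camera to the reference frame.
    """
    R_cam2ref, t_cam2ref = cv2.calibrateHandEye(
        R_ref2world, t_ref2world, R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    return R_cam2ref, t_cam2ref
```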
Affiliation(s)
- Xinrui Meng: National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
- Wei Gao: National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
- Zhanyi Hu: National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
3.

4.
Abstract
This paper solves the classical problem of simultaneous localization and mapping (SLAM) in a fashion that avoids linearized approximations altogether. Based on the creation of virtual synthetic measurements, the algorithm uses a linear time-varying Kalman observer, bypassing the errors and approximations introduced by the linearization process in traditional extended Kalman filter (EKF) SLAM. Convergence rates of the algorithm are established using contraction analysis. Different combinations of sensor information can be exploited, such as bearing measurements, range measurements, optical flow, or time-to-contact. SLAM-DUNK, a more advanced version of the algorithm in global coordinates, exploits the conditional-independence property of the SLAM problem, decoupling the covariance matrices between different landmarks and reducing computational complexity to O(n). As illustrated in simulations, the proposed algorithm can solve SLAM problems in both 2D and 3D scenarios with guaranteed convergence rates in a fully nonlinear context.
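The construction of the virtual synthetic measurements is the paper's contribution and is not reproduced here; the sketch below only shows the generic discrete-time, linear time-varying Kalman predict/update step that such an observer runs, with all matrices (A_k, C_k, Q, R) assumed given.

```python
import numpy as np

def ltv_kalman_step(x, P, A_k, C_k, Q, R, y):
    """One predict/update step of a linear time-varying Kalman filter.

    Generic LTV filter form only; how the virtual measurements y (and hence
    A_k, C_k) are built for SLAM is specific to the paper and not shown.
    """
    # Predict with the time-varying state matrix A_k
    x_pred = A_k @ x
    P_pred = A_k @ P @ A_k.T + Q

    # Update with the time-varying measurement matrix C_k
    S = C_k @ P_pred @ C_k.T + R           # innovation covariance
    K = P_pred @ C_k.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C_k @ x_pred)
    P_new = (np.eye(len(x)) - K @ C_k) @ P_pred
    return x_new, P_new
```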
Affiliation(s)
- Feng Tan: Nonlinear Systems Laboratory, Massachusetts Institute of Technology, USA
- Winfried Lohmiller: Nonlinear Systems Laboratory, Massachusetts Institute of Technology, USA
5. Parikh A, Cheng TH, Chen HY, Dixon WE. A Switched Systems Framework for Guaranteed Convergence of Image-Based Observers With Intermittent Measurements. IEEE Trans Robot 2017. DOI: 10.1109/tro.2016.2627024.
6. Rezaei M, Ozgoli S. Lucid Workspace for Stereo Vision. J Intell Robot Syst 2015; 78:223-237. DOI: 10.1007/s10846-014-0083-0.
7.
Abstract
A novel real-time pose estimation system is presented for solving the visual simultaneous localization and mapping problem using a rigid set of central cameras arranged so that there is no overlap in their fields of view. A new parameterization of point-feature position using a spherical-coordinate update is formulated; it isolates the system parameters that depend on global scale, allowing the shape parameters of the system to converge even while the scale remains uncertain. Furthermore, an initialization scheme is proposed from which the optimization converges accurately using only the measurements from the cameras at the first time step. The algorithm is implemented and verified in experiments with a camera cluster constructed from multiple perspective cameras mounted on a multirotor aerial vehicle and augmented with tracking markers to collect high-precision ground-truth motion measurements from an optical indoor positioning system. The accuracy and performance of the proposed pose estimation system are confirmed for various motion profiles in both indoor and challenging outdoor environments, despite the lack of overlap in the camera fields of view.
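As a hedged illustration of why a spherical parameterization separates shape from scale (the paper's actual update equations are not reproduced), the sketch below converts a feature stored as azimuth, elevation, and range into Cartesian coordinates; scaling every range by a common factor rescales the reconstruction without touching the angular (shape) parameters.

```python
import numpy as np

def spherical_to_cartesian(azimuth, elevation, r):
    """Convert a point-feature position from spherical parameters to Cartesian.

    azimuth, elevation : angular "shape" parameters (radians), scale-free
    r                  : range, the only parameter carrying global scale
    """
    return np.array([
        r * np.cos(elevation) * np.cos(azimuth),
        r * np.cos(elevation) * np.sin(azimuth),
        r * np.sin(elevation),
    ])

# Multiplying r by a common factor s rescales the whole map (s * p) while the
# azimuth/elevation shape parameters are untouched, which is why shape can
# converge even when the global scale remains uncertain.
p = spherical_to_cartesian(0.3, 0.1, 2.0)
assert np.allclose(spherical_to_cartesian(0.3, 0.1, 2.0 * 1.7), 1.7 * p)
```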
Affiliation(s)
- Michael J. Tribou: Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON, Canada
- Adam Harmat: Department of Mechanical Engineering, McGill University, Montreal, QC, Canada
- David W.L. Wang: Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada
- Inna Sharf: Department of Mechanical Engineering, McGill University, Montreal, QC, Canada
- Steven L. Waslander: Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON, Canada
8.
9. Martinez-Gomez J, Fernandez-Caballero A, Garcia-Varea I, Rodriguez L, Romero-Gonzalez C. A Taxonomy of Vision Systems for Ground Mobile Robots. Int J Adv Robot Syst 2014. DOI: 10.5772/58900. Open access.
Abstract
This paper introduces a taxonomy of vision systems for ground mobile robots. In the last five years, a significant number of relevant papers have contributed to this subject. First, a thorough review of these papers is presented to discuss and classify both past and current approaches in the field. As a result, a global picture of the state of the art of the last five years is obtained. Moreover, the study of the articles is used to put forward a comprehensive taxonomy based on the most up-to-date research in ground mobile robotics. In this sense, the paper aims to be especially helpful to both budding and experienced researchers in the areas of vision systems and ground mobile robots. The taxonomy is devised from a novel perspective, namely to answer the main questions posed when designing robotic vision systems: why?, what for?, what with?, how?, and where? The answers are derived from the most relevant techniques described in the recent literature, leading in a natural way to a series of classifications that are discussed and contextualized. The article offers a global picture of the state of the art in the area and identifies some promising research lines.
Affiliation(s)
- Jesus Martinez-Gomez: Universidad de Castilla-La Mancha, Departamento de Sistemas Informaticos, Albacete, Spain
- Ismael Garcia-Varea: Universidad de Castilla-La Mancha, Departamento de Sistemas Informaticos, Albacete, Spain
- Luis Rodriguez: Universidad de Castilla-La Mancha, Departamento de Sistemas Informaticos, Albacete, Spain
10. Brunner C, Peynot T, Vidal-Calleja T, Underwood J. Selective Combination of Visual and Thermal Imaging for Resilient Localization in Adverse Conditions: Day and Night, Smoke and Fire. J Field Robot 2013. DOI: 10.1002/rob.21464.
Affiliation(s)
- Christopher Brunner: Australian Centre for Field Robotics, The University of Sydney, NSW 2006, Australia
- Thierry Peynot: Australian Centre for Field Robotics, The University of Sydney, NSW 2006, Australia
- Teresa Vidal-Calleja: Centre for Autonomous Systems, Faculty of Engineering and IT, University of Technology Sydney, NSW 2007, Australia
- James Underwood: Australian Centre for Field Robotics, The University of Sydney, NSW 2006, Australia
11. Solà J, Vidal-Calleja T, Civera J, Montiel JMM. Impact of Landmark Parametrization on Monocular EKF-SLAM with Points and Lines. Int J Comput Vis 2012; 97:339-368. DOI: 10.1007/s11263-011-0492-5.
12. Piniés P, Paz LM, Gálvez-López D, Tardós JD. CI-Graph simultaneous localization and mapping for three-dimensional reconstruction of large and complex environments using a multicamera system. J Field Robot 2010. DOI: 10.1002/rob.20355.