1
Stranner M, Fleck P, Schmalstieg D, Arth C. Instant Segmentation and Fitting of Excavations in Subsurface Utility Engineering. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2319-2329. [PMID: 38437110] [DOI: 10.1109/tvcg.2024.3372064]
Abstract
Using augmented reality for subsurface utility engineering (SUE) has benefited from recent advances in sensing hardware, enabling the first practical and commercial applications. However, this progress has uncovered a latent problem: the insufficient quality of existing SUE data in terms of completeness and accuracy. In this work, we present a novel approach to automate the alignment of existing SUE databases with measurements taken during excavation works, with the potential to correct the deviation between as-planned and as-built documentation, which remains a major challenge in traditional on-site workflows. Our segmentation algorithm identifies infrastructure in a live capture of an excavation on site. Our fitting approach correlates the inferred position and orientation with the existing digital plan and registers the as-planned model to the as-built state. Our approach is the first to circumvent tedious postprocessing, as it corrects data online and on site. In our experiments, we show results of the proposed method on both synthetic data and a set of real excavations.
2
Wei L, Huo J. Camera pose estimation algorithm involving weighted measurement uncertainty of feature points based on rotation parameters. Applied Optics 2023; 62:2200-2206. [PMID: 37132857] [DOI: 10.1364/ao.484055]
Abstract
To solve the perspective-n-point problem in visual measurement, we present a camera pose estimation algorithm involving weighted measurement uncertainty based on rotation parameters. The method does not involve the depth factor, and the objective function is converted into a least-squares cost function containing three rotation parameters. Furthermore, the noise uncertainty model yields a more accurate pose estimate, which can be computed directly without initial values. Experimental results demonstrate the high accuracy and good robustness of the proposed method. Within a 1.5 m × 1.5 m × 1.5 m measurement volume, the maximum estimation errors of rotation and translation are below 0.04° and 0.2%, respectively.
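As a generic illustration of the uncertainty-weighting idea in this abstract (not the authors' PnP algorithm; the function and values are hypothetical), residuals can be weighted by inverse measurement variance, so that precisely measured feature points dominate the estimate:

```python
# Minimal sketch of uncertainty weighting in least squares: the estimate
# x* = argmin sum_i w_i (x - z_i)^2 with w_i = 1/sigma_i^2 is the
# inverse-variance weighted mean of the observations.

def weighted_estimate(observations, sigmas):
    """Fuse scalar observations z_i whose std devs are sigma_i."""
    weights = [1.0 / (s * s) for s in sigmas]
    return sum(w * z for w, z in zip(weights, observations)) / sum(weights)

# A precise observation (sigma = 0.1) dominates a noisy one (sigma = 1.0),
# so the fused estimate stays close to 1.0 rather than the midpoint 1.5.
est = weighted_estimate([1.0, 2.0], [0.1, 1.0])
```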
3
Hitchcox T, Forbes JR. Mind the Gap: Norm-Aware Adaptive Robust Loss for Multivariate Least-Squares Problems. IEEE Robotics and Automation Letters 2022. [DOI: 10.1109/lra.2022.3179424]
Affiliation(s)
- Thomas Hitchcox
- Department of Mechanical Engineering, McGill University, Montreal, QC, Canada
4
Fei X. A High Precision Conical Target Pose Measurement Method Using Monocular Vision. Pattern Recognition and Image Analysis 2021. [DOI: 10.1134/s1054661821030081]
5
Allab A, Vazquez C, Cresson T, Guise JD. Calibration of Stereo Radiography System for Radiostereometric Analysis Application. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2020; 2019:4859-4862. [PMID: 31946949] [DOI: 10.1109/embc.2019.8857531]
Abstract
This paper describes a new alternative to the conventional radiography systems currently used for radiostereometric analysis studies. Instead of using two non-calibrated X-ray sources with a cumbersome calibration cage, we propose to use the biplanar radiography EOS system. Its fixed configuration provides a preliminary calibration and a much simpler acquisition protocol. A flexible and accurate calibration method is presented to optimize the EOS default calibration using a simple object and a self-calibration method. To validate our system, we calculate the 3D reconstruction error of a known object. Results showed an accuracy of 70±11 μm and 0.05±0.02° for translation and rotation, respectively, and an average epipolar error of 23±3 μm.
6
Cui J, Min C, Feng D. Research on pose estimation for stereo vision measurement system by an improved method: uncertainty weighted stereopsis pose solution method based on projection vector. Optics Express 2020; 28:5470-5491. [PMID: 32121767] [DOI: 10.1364/oe.377707]
Abstract
We present UWSPSM, an uncertainty-weighted stereopsis pose solution method based on projection vectors, which solves the pose estimation problem for feature-point-based stereo vision measurement systems. First, we use a covariance matrix to represent the directional uncertainty of the feature points and a projection matrix to integrate this uncertainty into stereo-vision pose estimation. The optimal translation vector is then solved from the projection vectors of the feature points, and the depth is updated from the same projection vectors. In the absolute orientation solution stage, a singular value decomposition algorithm computes the relative attitude matrix, and the two stages are iterated until the result converges. Finally, the convergence of the proposed algorithm is proved theoretically via the global convergence theorem. When extended to stereo vision, the fixed relative-pose constraint between the cameras is introduced into the estimation, so that only one pose parameter is optimized for the two captured images in each iteration; binding the two cameras together as one improves accuracy and efficiency while enhancing measurement reliability. Experimental results show that the proposed pose estimation algorithm converges quickly, achieves high precision and good robustness, and tolerates different degrees of error uncertainty, giving it useful practical application prospects.
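The absolute-orientation step above, recovering the rotation that best aligns matched, centered point sets, reduces in 2D to a closed form; the following generic sketch (not the UWSPSM implementation) illustrates the least-squares rotation estimate that the SVD computes in 3D:

```python
import math

def best_rotation_2d(src, dst):
    """Closed-form 2D analogue of the SVD attitude step: the angle that
    best rotates the centered points src onto dst in the least-squares sense."""
    cx = sum(p[0] for p in src) / len(src)
    cy = sum(p[1] for p in src) / len(src)
    dx = sum(p[0] for p in dst) / len(dst)
    dy = sum(p[1] for p in dst) / len(dst)
    s = [(x - cx, y - cy) for x, y in src]   # center both point sets
    d = [(x - dx, y - dy) for x, y in dst]
    num = sum(a * y - b * x for (a, b), (x, y) in zip(s, d))  # cross terms
    den = sum(a * x + b * y for (a, b), (x, y) in zip(s, d))  # dot terms
    return math.atan2(num, den)
```

Rotating a point set by a known angle and feeding both sets to `best_rotation_2d` recovers that angle exactly (up to floating-point error).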
7
Abstract
In this paper, we present a novel formulation of the inverse kinematics (IK) problem with generic constraints as a mixed-integer convex optimization program. The proposed approach can solve the IK problem globally with generic task space constraints: a major improvement over existing approaches, which either solve the problem only in a local neighborhood of the user's initial guess through nonlinear non-convex optimization, or address only a limited set of kinematic constraints. Specifically, we propose a mixed-integer convex relaxation of the non-convex SO(3) rotation constraints, and apply this relaxation to the IK problem. Our formulation can detect if an instance of the IK problem is globally infeasible, or produce an approximate solution when it is feasible. We show results on a seven-joint arm grasping objects in a cluttered environment, an 18-degree-of-freedom quadruped standing on stepping stones, and a parallel Stewart platform. Moreover, we show that our approach can find a collision-free path for a gripper in a cluttered environment, or certify that such a path does not exist. We also compare our approach against the analytical approach for a six-joint manipulator. The open-source code is available at http://drake.mit.edu.
Affiliation(s)
- Hongkai Dai
- Toyota Research Institute, Los Altos, CA, USA
- Gregory Izatt
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Russ Tedrake
- Toyota Research Institute, Los Altos, CA, USA
- Massachusetts Institute of Technology, Cambridge, MA, USA
8
Assad A, Deep K. A Hybrid Harmony Search and Simulated Annealing algorithm for continuous optimization. Information Sciences 2018. [DOI: 10.1016/j.ins.2018.03.042]
9
Cheng L, Chen S, Liu X, Xu H, Wu Y, Li M, Chen Y. Registration of Laser Scanning Point Clouds: A Review. Sensors 2018; 18:1641. [PMID: 29883397] [PMCID: PMC5981425] [DOI: 10.3390/s18051641]
Abstract
The integration of multi-platform, multi-angle, and multi-temporal LiDAR data has become important for geospatial data applications. This paper presents a comprehensive review of LiDAR data registration in the fields of photogrammetry and remote sensing. At present, a coarse-to-fine strategy is commonly used for LiDAR point cloud registration: a coarse registration method first achieves a good initial position, which a fine registration method then refines. Following this coarse-to-fine framework, the paper reviews current registration methods and their methodologies, and identifies important differences between them. The lack of standard data and unified evaluation systems is identified as a factor limiting objective comparison of different methods. The paper also describes the most commonly used point cloud registration error analysis methods. Finally, avenues for future work on LiDAR data registration in terms of applications, data, and technology are discussed. In particular, there is a need to address registration of multi-angle and multi-scale data from various newly available types of LiDAR hardware, which will play an important role in diverse applications such as forest resource surveys, urban energy use, cultural heritage protection, and unmanned vehicles.
Affiliation(s)
- Liang Cheng
- Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210093, China.
- Collaborative Innovation Center for the South Sea Studies, Nanjing University, Nanjing 210093, China.
- Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing University, Nanjing 210093, China.
- School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing 210093, China.
- Song Chen
- Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210093, China.
- Collaborative Innovation Center for the South Sea Studies, Nanjing University, Nanjing 210093, China.
- School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing 210093, China.
- Xiaoqiang Liu
- Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210093, China.
- Collaborative Innovation Center for the South Sea Studies, Nanjing University, Nanjing 210093, China.
- School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing 210093, China.
- Hao Xu
- Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210093, China.
- Collaborative Innovation Center for the South Sea Studies, Nanjing University, Nanjing 210093, China.
- School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing 210093, China.
- Yang Wu
- Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210093, China.
- Collaborative Innovation Center for the South Sea Studies, Nanjing University, Nanjing 210093, China.
- School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing 210093, China.
- Manchun Li
- Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210093, China.
- Collaborative Innovation Center for the South Sea Studies, Nanjing University, Nanjing 210093, China.
- Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing University, Nanjing 210093, China.
- School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing 210093, China.
- Yanming Chen
- Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210093, China.
- Collaborative Innovation Center for the South Sea Studies, Nanjing University, Nanjing 210093, China.
- Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing University, Nanjing 210093, China.
- School of Geographic and Oceanographic Sciences, Nanjing University, Nanjing 210093, China.
10
Abstract
For a variety of head and neck cancers, specifically those of the oropharynx, larynx, and hypopharynx, minimally invasive trans-oral approaches have been developed to reduce perioperative and long-term morbidity. However, in trans-oral surgical approaches anatomical deformation due to instrumentation, specifically placement of laryngoscopes and retractors, present a significant challenge for surgeons relying on preoperative imaging to resect tumors to negative margins. Quantifying the deformation due to instrumentation is needed in order to develop predictive models of operative deformation. In order to study this deformation, we used a CT/MR-compatible laryngoscopy system in concert with intraoperative CT imaging. 3D models of preoperative and intraoperative anatomy were generated. Mandible and hyoid displacements as well as tongue deformations were quantified for eight patients undergoing diagnostic laryngoscopy. Across patients, we found on average 1.3 cm of displacement for these anatomic structures due to laryngoscope insertion. On average, the maximum displacement for certain tongue regions exceeded 4 cm. The anatomical deformations quantified here can serve as a reference for describing how the upper aerodigestive tract anatomy changes during instrumentation and may be helpful in developing predictive models of intraoperative upper aerodigestive tract deformation.
11
Fan Z, Chen G, Wang J, Liao H. Spatial Position Measurement System for Surgical Navigation Using 3-D Image Marker-Based Tracking Tools With Compact Volume. IEEE Transactions on Biomedical Engineering 2018; 65:378-389. [DOI: 10.1109/tbme.2017.2771356]
12
Hajeer MY, Mao Z, Millett DT, Ayoub AF, Siebert JP. A New Three-Dimensional Method of Assessing Facial Volumetric Changes after Orthognathic Treatment. The Cleft Palate-Craniofacial Journal 2017; 42:113-120. [PMID: 15748101] [DOI: 10.1597/03-132.1]
Abstract
Objective: To validate a new method of facial volumetric assessment that is dependent on the use of stereophotogrammetric models and a software-based Facial Analysis Tool. Design: The method was validated in vitro with three-dimensional (3D) models of a lifelike plastic female dummy head and in vivo with a male subject's head. Methods: Thirty facial silicone explants were added in the nasal and perioral regions of each head, and their volumes were obtained by three different algorithms. These were compared with the actual values obtained by a "water displacement" method. Results: The least mean error was found with the "tetrahedron formation" method, followed by the "projection" method and the "back-plane construction" method. The error with the tetrahedron formation method was 0.071 cm3 (95% confidence interval [CI]: −0.074 to 0.2161 cm3) with the in vitro models and 0.314 cm3 (95% CI: −0.080 to 0.708 cm3) with the in vivo models. The increased volumetric assessment error observed in vivo was attributed to the registration procedure and possible changes in facial expression. Conclusions: These results encourage the use of this method in the 3D assessment of orthognathic surgical outcome, provided a standardized facial expression is used for image acquisition.
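The "tetrahedron formation" volume algorithm named in the results can be sketched generically: the volume of a closed, outward-wound triangle mesh is the sum of the signed tetrahedra formed by each face and the origin (a minimal illustration, not the Facial Analysis Tool's implementation):

```python
def mesh_volume(faces):
    """Signed volume of a closed triangle mesh: sum over faces of the
    signed tetrahedron spanned by the origin and each outward-wound face,
    i.e. (1/6) * sum of scalar triple products a . (b x c)."""
    vol = 0.0
    for (ax, ay, az), (bx, by, bz), (cx, cy, cz) in faces:
        vol += (ax * (by * cz - bz * cy)
                - ay * (bx * cz - bz * cx)
                + az * (bx * cy - by * cx))
    return vol / 6.0
```

For a unit right tetrahedron with one vertex at the origin, only the face opposite the origin contributes, giving the expected volume 1/6.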
13
Frank JA, Krishnamoorthy SP, Kapila V. Toward Mobile Mixed-Reality Interaction With Multi-Robot Systems. IEEE Robotics and Automation Letters 2017. [DOI: 10.1109/lra.2017.2714128]
14
Affiliation(s)
- Assif Assad
- Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, India
- Kusum Deep
- Department of Mathematics, Indian Institute of Technology Roorkee, Roorkee, India
15
Roberts R, Barajas M, Rodriguez-Leal E, Gordillo JL. Haptic feedback and visual servoing of teleoperated unmanned aerial vehicle for obstacle awareness and avoidance. International Journal of Advanced Robotic Systems 2017. [DOI: 10.1177/1729881417716365]
Abstract
Obstacle avoidance represents a fundamental challenge for unmanned aerial vehicle navigation. This is particularly relevant for low-altitude flight, which is highly prone to collisions that cause property damage or even compromise human safety. Autonomous navigation algorithms address this problem and are applied in various tasks. However, this approach is usually overshadowed by unreliable results in uncertain environments. In contrast, human pilots are able to maneuver vehicles in complex situations in which an algorithm would not offer reliable performance. This article explores a novel configuration of assisted flying and implements an experimental setup to prove its efficacy. The user controls an unmanned aerial vehicle with a force feedback device, while an assisted navigation algorithm can simultaneously manipulate this apparatus to divert the unmanned aerial vehicle from its path. Experiments confirm the authors' hypothesis that the unmanned aerial vehicle is deviated or maintains its course at the operator's will. Unlike conventional controllers that dictate roll, pitch, and yaw, this implementation uses direct mapping between the position represented by the haptic device and the unmanned aerial vehicle. This configuration applies feedback before the unmanned aerial vehicle has reached the position referenced by the haptic device, providing valuable time for the user to make the necessary path correction.
Affiliation(s)
- Ricardo Roberts
- Laboratorio de Robótica del Área Noreste y Centro de México, Tecnológico de Monterrey, Monterrey, Mexico
- Manlio Barajas
- Laboratorio de Robótica del Área Noreste y Centro de México, Tecnológico de Monterrey, Monterrey, Mexico
- Ernesto Rodriguez-Leal
- Laboratorio de Robótica del Área Noreste y Centro de México, Tecnológico de Monterrey, Monterrey, Mexico
- José Luis Gordillo
- Laboratorio de Robótica del Área Noreste y Centro de México, Tecnológico de Monterrey, Monterrey, Mexico
16
Svarm L, Enqvist O, Kahl F, Oskarsson M. City-Scale Localization for Cameras with Known Vertical Direction. IEEE Transactions on Pattern Analysis and Machine Intelligence 2017; 39:1455-1461. [PMID: 27514034] [DOI: 10.1109/tpami.2016.2598331]
Abstract
We consider the problem of localizing a novel image in a large 3D model, given that the gravitational vector is known. In principle, this is just an instance of camera pose estimation, but the scale of the problem introduces some interesting challenges. Most importantly, it makes the correspondence problem very difficult so there will often be a significant number of outliers to handle. To tackle this problem, we use recent theoretical as well as technical advances. Many modern cameras and phones have gravitational sensors that allow us to reduce the search space. Further, there are new techniques to efficiently and reliably deal with extreme rates of outliers. We extend these methods to camera pose estimation by using accurate approximations and fast polynomial solvers. Experimental results are given demonstrating that it is possible to reliably estimate the camera pose despite cases with more than 99 percent outlier correspondences in city-scale models with several millions of 3D points.
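Handling extreme outlier rates typically relies on hypothesize-and-verify schemes; the following minimal RANSAC-style sketch for a scalar model (a generic illustration, not the paper's polynomial solvers) shows the consensus-counting principle:

```python
import random

def ransac_value(samples, threshold=0.5, iterations=200, seed=0):
    """Hypothesize-and-verify: pick a sample as the hypothesis, count
    inliers within the threshold, and keep the hypothesis with the
    largest consensus set. Outliers rarely agree with each other, so
    the inlier cluster wins even at high outlier rates."""
    rng = random.Random(seed)
    best_value, best_inliers = None, -1
    for _ in range(iterations):
        hypothesis = rng.choice(samples)
        inliers = sum(1 for s in samples if abs(s - hypothesis) < threshold)
        if inliers > best_inliers:
            best_value, best_inliers = hypothesis, inliers
    return best_value
```

With a cluster of measurements near 10 and several gross outliers, the returned value lands inside the cluster.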
17
A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor. Sensors 2016; 16:2139. [PMID: 27983714] [PMCID: PMC5191119] [DOI: 10.3390/s16122139]
Abstract
This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for the post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single-pass image segmentation and Features from Accelerated Segment Test (FAST) corner detection were used in the FPGA to extract the predefined markers with known geometries. The coplanar POSIT algorithm was implemented on the Nios II soft-core processor, supplied with floating-point hardware to accelerate floating-point operations. Trigonometric functions were approximated using Taylor series and cubic approximation with Lagrange polynomials, and an inverse square root method was implemented to approximate square root computations. Real-time performance was achieved, with pixel streams processed on the fly without any need to buffer the input frame for further processing.
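The numerical shortcuts described above, Taylor-series trigonometry and an inverse-square-root approximation, can be sketched in Python as generic illustrations (not the FPGA/Nios II implementation):

```python
import math
import struct

def taylor_sin(x):
    """5th-order Taylor approximation of sin(x), useful on processors
    without hardware trigonometric functions (accurate for small |x|)."""
    return x - x**3 / 6.0 + x**5 / 120.0

def fast_inv_sqrt(x):
    """Bit-level approximation of 1/sqrt(x): reinterpret the float's
    bits as an integer, apply the magic-constant initial guess, then
    refine with one Newton-Raphson step."""
    i = struct.unpack('>i', struct.pack('>f', x))[0]
    i = 0x5f3759df - (i >> 1)                       # initial guess
    y = struct.unpack('>f', struct.pack('>i', i))[0]
    return y * (1.5 - 0.5 * x * y * y)              # Newton refinement
```

Both approximations stay within a fraction of a percent of the library functions over typical input ranges.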
18
Robust and Accurate Algorithm for Wearable Stereoscopic Augmented Reality with Three Indistinguishable Markers. Electronics 2016. [DOI: 10.3390/electronics5030059]
19
Abstract
The intent of this paper is to demonstrate how the accuracy of 3D position tracking can be improved by considering rover locomotion in rough terrain as a holistic problem. Although the selection of good sensors is crucial to accurately track the rover's position, it is not the only aspect to consider. Indeed, the use of an unadapted locomotion concept severely affects the signal to noise ratio of the sensors, which leads to poor motion estimates. In this work, a mechanical structure allowing smooth motion across obstacles with limited wheel slip is used. In particular, this enables the use of odometry and inertial sensors to improve the position estimation in rough terrain. A method for computing 3D motion increments based on the wheel encoders and chassis state sensors is developed. Because it accounts for the kinematics of the rover, this method provides better results than the standard approach. To further improve the accuracy of the position tracking and the rover's climbing performance, a controller minimizing wheel slip has been developed. The algorithm runs online and can be adapted to any kind of passive wheeled rover. Finally, sensor fusion using 3D-Odometry, inertial sensors and visual motion estimation based on stereovision is presented. The experimental results demonstrate how each sensor contributes to increase the accuracy and robustness of the 3D position estimation.
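The encoder-based motion-increment idea can be illustrated with a simplified planar (differential-drive) analogue; the 3D-Odometry method described above additionally incorporates chassis state sensors, which this generic sketch (with assumed wheel geometry) omits:

```python
import math

def odometry_increment(x, y, heading, d_left, d_right, wheel_base):
    """Planar dead-reckoning update from wheel-encoder increments:
    the chassis advances by the mean wheel travel and turns by the
    differential travel divided by the wheel base."""
    d_center = 0.5 * (d_left + d_right)        # forward travel of chassis
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Integrate along the mid-arc heading for a second-order update.
    x += d_center * math.cos(heading + 0.5 * d_theta)
    y += d_center * math.sin(heading + 0.5 * d_theta)
    return x, y, heading + d_theta
```

Equal wheel increments produce straight-line motion with no heading change; unequal increments turn the pose accordingly.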
Affiliation(s)
- Pierre Lamon
- Eidgenössische Technische Hochschule (ETH), 8092 Zürich, Switzerland
20
Hygounenc E, Jung IK, Souères P, Lacroix S. The Autonomous Blimp Project of LAAS-CNRS: Achievements in Flight Control and Terrain Mapping. International Journal of Robotics Research 2016. [DOI: 10.1177/0278364904042200]
Abstract
In this paper we provide a progress report on the LAAS-CNRS project on autonomous blimp robot development, in the context of field robotics. Hardware developments aimed at designing a generic and versatile experimental platform are first presented. On this basis, the flight control and terrain mapping issues, which constitute the main thrust of the research work, are presented in two parts. The first part, devoted to the automatic control study, is based on a rigorous modeling of the airship dynamics. Considering the decoupling of the lateral and longitudinal dynamics, several flight phases are identified for which appropriate control strategies are proposed. The description focuses on lateral steady navigation. In the second part of the paper, we present work on terrain mapping with low-altitude stereovision. A simultaneous localization and map building approach based on an extended Kalman filter is described, with details on the identification of the various errors involved in the process. Experimental results show that positioning in three-dimensional space with centimeter accuracy can be achieved, making it possible to build high-resolution digital elevation maps.
Affiliation(s)
- Il-Kyun Jung
- LAAS/CNRS, 7, av. du Colonel Roche 31077 Toulouse Cedex 4, France
- Philippe Souères
- LAAS/CNRS, 7, av. du Colonel Roche 31077 Toulouse Cedex 4, France
- Simon Lacroix
- LAAS/CNRS, 7, av. du Colonel Roche 31077 Toulouse Cedex 4, France
21
Abstract
A new practical, high-performance mobile robot localization technique is described that is motivated by the fact that many man-made environments contain substantially flat, visually textured surfaces of persistent appearance. While the tracking of image regions is much studied in computer vision, appearance is still a largely unexploited localization resource in commercially relevant applications. We show how prior appearance models can be used to enable highly repeatable mobile robot guidance that, unlike commercial alternatives, is both infrastructure-free and free-ranging. Very large-scale mosaics are constructed and used to localize a mobile robot operating in the modeled environment. Straightforward techniques from vision-based localization and mosaicking are used to produce a field-relevant AGV guidance system based only on vision and odometry. The feasibility, design, implementation, and precommercial field qualification of such a guidance system are described.
Affiliation(s)
- Alonzo Kelly
- Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213-3890, USA
22
Miraldo P, Araujo H, Gonçalves N. Pose Estimation for General Cameras Using Lines. IEEE Transactions on Cybernetics 2015; 45:2156-2164. [PMID: 25576587] [DOI: 10.1109/tcyb.2014.2366378]
Abstract
In this paper, we address the problem of pose estimation under the framework of generalized camera models. We propose a solution based on the knowledge of the coordinates of 3-D straight lines (expressed in the world coordinate frame) and their corresponding image pixels. Previous approaches used the knowledge of the coordinates of 3-D points (zero-dimensional elements) and their corresponding images (also zero-dimensional elements). In this paper, pixels belonging to the images of 3-D lines are used instead, so there is no need to establish correspondences between pixels and 3-D points, nor to identify individual pixels; correspondences are established between 3-D lines and their images. Using correspondences between pixels (belonging to the images of the 3-D lines) and 3-D lines eases the correspondence problem compared with using world and image points, which is one of the contributions of this paper. The approach is both evaluated and validated using synthetic data as well as real images.
23
Fang Y, Xiaoyong Z, Zhiwu H, Yu W, Wang Y. A Switched Extend Kalman-Filter for Visual Servoing Applied in Nonholonomic Robot with the FOV Constraint. Journal of Advanced Computational Intelligence and Intelligent Informatics 2015. [DOI: 10.20965/jaciii.2015.p0185]
Abstract
In this paper, a switched Kalman filter (KF) is used to predict the state of feature points that leave the field of view (FOV), one of the most common constraints in visual servoing. By using the predicted state to compensate for the unobserved real state of the feature points, nonholonomic robots can conduct visual servoing tasks efficiently. Simulation and experimental results verify the effectiveness of the proposed approach.
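The compensation idea, propagating a Kalman prediction while a feature is outside the FOV and resuming measurement updates when it returns, can be sketched with a minimal scalar filter (a generic illustration, not the paper's switched EKF):

```python
def kf_predict(x, v, p, q):
    """Constant-velocity prediction: propagate the position estimate
    and grow its variance by the process noise q. While a feature is
    outside the FOV, only this step runs."""
    return x + v, p + q

def kf_update(x, p, z, r):
    """Measurement update with observation z and noise variance r;
    skipped whenever the feature is not visible."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1 - k) * p  # corrected state, reduced variance
```

Prediction alone keeps a usable state estimate during FOV loss; once a measurement arrives, the update pulls the estimate toward it and shrinks the variance.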
24
Miraldo P, Araujo H. Direct solution to the minimal generalized pose. IEEE Transactions on Cybernetics 2015; 45:418-429. [PMID: 25014983] [DOI: 10.1109/tcyb.2014.2326970]
Abstract
Pose estimation is a relevant problem for imaging systems whose applications range from augmented reality to robotics. In this paper we propose a novel solution for the minimal pose problem, within the framework of generalized camera models and using a planar homography. Within this framework and considering only the geometric elements of the generalized camera models, an imaging system can be modeled by a set of mappings associating image pixels to 3-D straight lines. This mapping is defined in a 3-D world coordinate system. Pose estimation performs the computation of the rigid transformation between the original 3-D world coordinate system and the one in which the camera was calibrated. Using synthetic data, we compare the proposed minimal-based method with the state-of-the-art methods in terms of numerical errors, number of solutions and processing time. From the experiments, we conclude that the proposed method performs better, especially because there is a smaller variation in numerical errors, while results are similar in terms of number of solutions and computation time. To further evaluate the proposed approach we tested our method with real data. One of the relevant contributions of this paper is theoretical. When compared to the state-of-the-art approaches, we propose a completely new parametrization of the problem that can be solved in four simple steps. In addition, our approach does not require any predefined transformation of the dataset, which yields a simpler solution for the problem.
|
25
|
Baleia J, Santana P, Barata J. On Exploiting Haptic Cues for Self-Supervised Learning of Depth-Based Robot Navigation Affordances. J INTELL ROBOT SYST 2015. [DOI: 10.1007/s10846-015-0184-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
26
|
Human-PnP: Ergonomic AR Interaction Paradigm for Manual Placement of Rigid Bodies. AUGMENTED ENVIRONMENTS FOR COMPUTER-ASSISTED INTERVENTIONS 2015. [DOI: 10.1007/978-3-319-24601-7_6] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
27
|
Kelly J, Roy N, Sukhatme GS. Determining the Time Delay Between Inertial and Visual Sensor Measurements. IEEE T ROBOT 2014. [DOI: 10.1109/tro.2014.2343073] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
28
|
Hoermann S, Borges PVK. Vehicle Localization and Classification Using Off-Board Vision and 3-D Models. IEEE T ROBOT 2014. [DOI: 10.1109/tro.2013.2291613] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
29
|
Schmid K, Lutz P, Tomić T, Mair E, Hirschmüller H. Autonomous Vision-based Micro Air Vehicle for Indoor and Outdoor Navigation. J FIELD ROBOT 2014. [DOI: 10.1002/rob.21506] [Citation(s) in RCA: 78] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Affiliation(s)
- Korbinian Schmid
- Perception and Cognition, Robotics and Mechatronics Center (RMC); German Aerospace Center (DLR); Oberpfaffenhofen Germany
- Philipp Lutz
- Autonomy and Teleoperation, Robotics and Mechatronics Center (RMC); German Aerospace Center (DLR); Oberpfaffenhofen Germany
- Teodor Tomić
- Analysis and Control of Advanced Robotic Systems, Robotics and Mechatronics Center (RMC); German Aerospace Center (DLR); Oberpfaffenhofen Germany
- Elmar Mair
- Perception and Cognition, Robotics and Mechatronics Center (RMC); German Aerospace Center (DLR); Oberpfaffenhofen Germany
- Heiko Hirschmüller
- Perception and Cognition, Robotics and Mechatronics Center (RMC); German Aerospace Center (DLR); Oberpfaffenhofen Germany
|
30
|
Assa A, Janabi-Sharifi F. A robust vision-based sensor fusion approach for real-time pose estimation. IEEE TRANSACTIONS ON CYBERNETICS 2014; 44:217-227. [PMID: 23757545 DOI: 10.1109/tcyb.2013.2252339] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
|
31
|
Lv D, Sun JF, Li Q, Wang Q. 3D pose estimation of ground rigid target based on ladar range image. APPLIED OPTICS 2013; 52:8073-8081. [PMID: 24513760 DOI: 10.1364/ao.52.008073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/06/2013] [Accepted: 10/18/2013] [Indexed: 06/03/2023]
Abstract
In laser radar (ladar) target recognition, accurate estimation of the target pose can effectively simplify the recognition process. To achieve 3D pose estimation of rigid objects on the ground while keeping the algorithm simple, a novel pose estimation method is proposed in this paper. The approach exploits the fact that most rigid objects on the ground have large planar areas (a horizontal top surface and vertical sides) and combines this with the 3D geometric characteristics of ladar range images: the plane normals of the targets are adopted as the positive axis directions of the model coordinate system, from which the 3D pose angles are estimated. Simulation experiments were performed with six military vehicle models, and performance under self-occlusion, occlusion, and noise was investigated. The results show that the estimation errors are less than 2° under self-occlusion. For the Leclerc tank model, as long as the upper and side planes of the target are not completely occluded, the pose angles can be estimated with an error of less than 2.5° even at 80% occlusion. Moreover, the proposed method is robust to noise and effective.
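As a rough illustration of the plane-normal idea (not the paper's algorithm), the normal of a planar patch in a range image can be recovered as the least-variance direction of the centered 3D points, and a pose angle read off against the vertical axis. The synthetic patch, tilt, and noise level below are assumptions:

```python
import numpy as np

# Fit a plane to 3D points sampled from a tilted planar patch; the
# singular vector with the smallest singular value of the centered point
# cloud is the plane normal, which can then serve as one axis of the
# model coordinate frame (the role the abstract assigns to the planes).
rng = np.random.default_rng(0)
theta = np.deg2rad(10.0)                 # assumed tilt about the x-axis
R_tilt = np.array([[1, 0, 0],
                   [0, np.cos(theta), -np.sin(theta)],
                   [0, np.sin(theta),  np.cos(theta)]])
pts = rng.uniform(-1, 1, size=(200, 3)); pts[:, 2] = 0.0   # flat patch
pts = pts @ R_tilt.T + 0.005 * rng.normal(size=(200, 3))   # tilt + noise

centered = pts - pts.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
normal = Vt[-1]                          # least-variance direction
if normal[2] < 0:                        # orient upward for a "top" plane
    normal = -normal

# Angle between the recovered normal and the vertical axis.
tilt = np.degrees(np.arccos(np.clip(normal @ np.array([0, 0, 1.0]), -1, 1)))
print(f"recovered tilt: {tilt:.2f} deg")
```

With a 10° tilt injected, the recovered angle comes back within a fraction of a degree despite the noise, which is consistent in spirit with the sub-2° errors the abstract reports.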
|
32
|
Prankl J, Zillich M, Vincze M. Interactive object modelling based on piecewise planar surface patches. COMPUTER VISION AND IMAGE UNDERSTANDING : CVIU 2013; 117:718-731. [PMID: 24511219 PMCID: PMC3916791 DOI: 10.1016/j.cviu.2013.01.010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/17/2012] [Accepted: 01/23/2013] [Indexed: 06/03/2023]
Abstract
Detecting elements such as planes in 3D is essential to describe objects for applications such as robotics and augmented reality. While plane estimation is well studied, table-top scenes exhibit a large number of planes, and methods often lock onto a dominant plane or do not estimate 3D object structure but only homographies of individual planes. In this paper, we introduce the minimum description length (MDL) principle to the problem of incrementally detecting multiple planar patches in a scene using tracked interest points in image sequences. Planar patches are reconstructed and stored in a keyframe-based graph structure. In case different motions occur, separate object hypotheses are modelled from currently visible patches and patches seen in previous frames. We evaluate our approach on a standard data set published by the Visual Geometry Group at the University of Oxford [24] and on our own data set containing table-top scenes. Results indicate that our approach significantly improves over state-of-the-art algorithms.
|
33
|
|
34
|
Computationally Efficient Iterative Pose Estimation for Space Robot Based on Vision. JOURNAL OF ROBOTICS 2013. [DOI: 10.1155/2013/692838] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
In pose estimation for space robots, photogrammetry has been used to determine the relative pose between an object and a camera. The calculation of the projection from two-dimensional measured data to three-dimensional models is of utmost importance in this vision-based estimation; however, the process is usually time consuming, especially in the outer space environment with its limited hardware performance. This paper proposes a computationally efficient iterative algorithm for vision-based pose estimation. In this method, an error function is designed to estimate the object-space collinearity error, and the error is minimized iteratively over the rotation matrix based on absolute orientation information. Experimental results show that this approach achieves accuracy comparable to SVD-based methods, while the computational time is greatly reduced owing to the use of the absolute orientation method.
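The absolute orientation step such iterations build on has a well-known closed form (the SVD solution in the style of Arun et al.); the sketch below shows only that step, not the paper's full iterative algorithm, and the synthetic data stands in for measured points:

```python
import numpy as np

# Closed-form absolute orientation: given matched 3D point sets P (model)
# and Q (observed), recover the rigid transform (R, t) minimizing
# sum ||R p_i + t - q_i||^2 via SVD of the cross-covariance matrix.
def absolute_orientation(P, Q):
    """P, Q: (N, 3) matched point sets. Returns rotation R and translation t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t

# Sanity check with a known rotation about z and a known translation.
rng = np.random.default_rng(1)
P = rng.normal(size=(50, 3))
ang = np.deg2rad(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true

R_est, t_est = absolute_orientation(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

The determinant correction `D` guards against the reflection case, which is what keeps the recovered matrix a proper rotation.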
|
35
|
Chang WC, Wu CH. Hand-Eye Coordination for Robotic Assembly Tasks. INTERNATIONAL JOURNAL OF AUTOMATION AND SMART TECHNOLOGY 2012. [DOI: 10.5875/ausmt.v2i4.162] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
|
36
|
Sanromà G, Alquézar R, Serratosa F, Herrera B. Smooth point-set registration using neighboring constraints. Pattern Recognit Lett 2012. [DOI: 10.1016/j.patrec.2012.04.008] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
37
|
Ababsa FE, Mallem M. Hybrid three-dimensional camera pose estimation using particle filter sensor fusion. Adv Robot 2012. [DOI: 10.1163/156855307779293689] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
38
|
|
39
|
Affiliation(s)
- Fabien Dionnet
- INRIA Rennes-Bretagne Atlantique, IRISA, Lagadic, 35000 Rennes, France
- Eric Marchand
- INRIA Rennes-Bretagne Atlantique, IRISA, Lagadic, 35000 Rennes, France
|
40
|
Suthakorn J, Chirikjian GS. A new inverse kinematics algorithm for binary manipulators with many actuators. Adv Robot 2012. [DOI: 10.1163/15685530152116245] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Affiliation(s)
- Jackrit Suthakorn
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Gregory S. Chirikjian
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
|
41
|
Zhang X, Zhang Z, Li Y, Zhu X, Yu Q, Ou J. Robust camera pose estimation from unknown or known line correspondences. APPLIED OPTICS 2012; 51:936-948. [PMID: 22410898 DOI: 10.1364/ao.51.000936] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/17/2011] [Accepted: 12/17/2011] [Indexed: 05/31/2023]
Abstract
We address the model-to-image registration problem with line features in the following two ways. (a) We present a robust solution to simultaneously recover the camera pose and the three-dimensional-to-two-dimensional line correspondences. With weak pose priors, our approach progressively verifies the pose guesses with a Kalman filter by using a subset of recursively found match hypotheses. Experiments show our method is robust to occlusions and clutter. (b) We propose a new line feature based pose estimation algorithm, which iteratively optimizes the objective function in the object space. Experiments show that the algorithm has strong robustness to noise and outliers and that it can attain very accurate results efficiently.
Affiliation(s)
- Xiaohu Zhang
- College of Aerospace and Materials Engineering, National University of Defense Technology, Changsha 410073, Hunan, China
|
42
|
Stelzer A, Hirschmüller H, Görner M. Stereo-vision-based navigation of a six-legged walking robot in unknown rough terrain. Int J Rob Res 2012. [DOI: 10.1177/0278364911435161] [Citation(s) in RCA: 85] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In this paper we present a visual navigation algorithm for the six-legged walking robot DLR Crawler in rough terrain. The algorithm is based on stereo images from which depth images are computed using the semi-global matching (SGM) method. Further, a visual odometry is calculated along with an error measure. Pose estimates are obtained by fusing inertial data with relative leg odometry and visual odometry measurements using an indirect information filter. The visual odometry error measure is used in the filtering process to put lower weights on erroneous visual odometry data, hence, improving the robustness of pose estimation. From the estimated poses and the depth images, a dense digital terrain map is created by applying the locus method. The traversability of the terrain is estimated by a plane fitting approach and paths are planned using a D* Lite planner taking the traversability of the terrain and the current motion capabilities of the robot into account. Motion commands and the traversability measures of the upcoming terrain are sent to the walking layer of the robot so that it can choose an appropriate gait for the terrain. Experimental results show the accuracy of the navigation algorithm and its robustness against visual disturbances.
Affiliation(s)
- Annett Stelzer
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics, Oberpfaffenhofen, Germany
- Heiko Hirschmüller
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics, Oberpfaffenhofen, Germany
- Martin Görner
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics, Oberpfaffenhofen, Germany
|
43
|
Steger C. Least-squares estimation of anisotropic similarity transformations from corresponding 2D point sets. Pattern Recognit Lett 2012. [DOI: 10.1016/j.patrec.2011.10.015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
44
|
Smart Localization Using a New Sensor Association Framework for Outdoor Augmented Reality Systems. JOURNAL OF ROBOTICS 2012. [DOI: 10.1155/2012/634758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Augmented Reality (AR) aims at enhancing the real world by adding elements that are not naturally perceptible, such as computer-generated images, virtual objects, text, symbols, graphics, sounds, and smells. The quality of the real/virtual registration depends mainly on the accuracy of the 3D camera pose estimation. In this paper, we present an original real-time localization system for outdoor AR which combines three heterogeneous sensors: a camera, a GPS receiver, and an inertial sensor. The proposed system is subdivided into two modules: the main module is vision based and estimates the user's location using a markerless tracking method; when visual tracking fails, the system switches automatically to a secondary localization module composed of the GPS and the inertial sensor.
|
45
|
Meshoul S, Batouche M. Combining extremal optimization with singular value decomposition for effective point matching. INT J PATTERN RECOGN 2011. [DOI: 10.1142/s0218001403002782] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Feature point matching is a key step for most problems in computer vision. It is an ill-posed problem and suffers from combinatorial complexity which becomes even more critical with the increase in data and the presence of outliers. The work covered in this paper describes a new framework to solve this problem in order to achieve robust registration of two feature point sets assumed to be available. This framework combines the use of extremal optimization heuristic with a clever startup routine which exploits some properties of singular value decomposition. The role of the latter is to produce an interesting matching configuration whereas the role of the former is to refine the initial matching by generating hypothetical matches and outliers using a far-from-equilibrium based stochastic rule. Experiments on a wide range of real data have shown the effectiveness of the proposed method and its ability to achieve reliable feature point matching.
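A startup routine in this SVD spirit (in the style of Scott and Longuet-Higgins; the Gaussian proximity kernel, σ, and the toy point sets below are assumptions, not necessarily the authors' exact routine) flattens the singular values of a proximity matrix and reads matches off mutual maxima of the result:

```python
import numpy as np

# SVD-based matching startup: build a Gaussian proximity matrix between
# the two point sets, take its SVD, replace the singular values with 1,
# and accept (i, j) as a match where the resulting "orientation" matrix
# is a maximum of both its row and its column.
def svd_matching(A, B, sigma=1.0):
    """A: (m, d), B: (n, d). Returns a list of (i, j) matched index pairs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * sigma ** 2))
    U, _, Vt = np.linalg.svd(G)
    m, n = G.shape
    k = min(m, n)
    P = U[:, :k] @ Vt[:k, :]               # singular values flattened to 1
    return [(i, j) for i in range(m) for j in range(n)
            if P[i, j] == P[i, :].max() and P[i, j] == P[:, j].max()]

# Two copies of the same point set, the second one shuffled.
A = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
perm = [2, 0, 3, 1]
B = A[perm]                                # B[j] corresponds to A[perm[j]]

matches = svd_matching(A, B)
print(matches)                             # recovers the permutation
```

On clean data this recovers the permutation exactly; in the framework above, such a configuration would then be refined by the extremal optimization heuristic to handle outliers.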
Affiliation(s)
- Souham Meshoul
- Computer Vision Group, LIRE Laboratory, Mentouri University of Constantine, Constantine, 25000, Algeria
- Mohamed Batouche
- Computer Vision Group, LIRE Laboratory, Mentouri University of Constantine, Constantine, 25000, Algeria
|
46
|
Cheriet F, Dansereau J, Petit Y, Aubin CÉ, Labelle H, De Guise JA. Towards the self-calibration of a multiview radiographic imaging system for the 3D reconstruction of the human spine and rib cage. INT J PATTERN RECOGN 2011. [DOI: 10.1142/s0218001499000434] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
The main objective of this study was to develop a 3D reconstruction technique of the spine and rib cage of idiopathic scoliotic patients using the self-calibration of the imaging system. The proposed approach computes the intrinsic and extrinsic parameters of the radiographic setup with respect to the global coordinate system used at Ste-Justine Hospital. Our approach determines an optimal estimate of the geometrical parameters of the imaging system from a nonlinear minimization of the mean square distance between the observed and analytical projections of a set of matched points identified on a pair of radiographic views. The accuracy of the optimal estimate for the intrinsic parameters was significantly improved when geometric knowledge such as the known length of detectable straight bars is incorporated as a set of equality constraints in the optimization process. Furthermore, in order to retrieve the 3D structure of interest in the global coordinate system, a reference plane including the origin of the global coordinate system is specified. Computer simulations were performed to evaluate the self-calibration procedure and to determine the minimum knowledge required to obtain an accurate 3D reconstruction for clinical applications. An in vitro validation on real images of a dry cadaveric human spine showed that the method is feasible and reaches the expected accuracy.
Affiliation(s)
- F. Cheriet
- LIS3D, Research Center, Sainte-Justine Hospital, 3175 Côte Sainte-Catherine Road, Montreal, Quebec, H3T 1C5, Canada
- École Polytechnique, P.O. Box 6079, Station Centre-Ville, Montreal, Quebec, H3C 3A7, Canada
- J. Dansereau
- LIS3D, Research Center, Sainte-Justine Hospital, 3175 Côte Sainte-Catherine Road, Montreal, Quebec, H3T 1C5, Canada
- École Polytechnique, P.O. Box 6079, Station Centre-Ville, Montreal, Quebec, H3C 3A7, Canada
- Y. Petit
- LIS3D, Research Center, Sainte-Justine Hospital, 3175 Côte Sainte-Catherine Road, Montreal, Quebec, H3T 1C5, Canada
- École Polytechnique, P.O. Box 6079, Station Centre-Ville, Montreal, Quebec, H3C 3A7, Canada
- C.-É. Aubin
- LIS3D, Research Center, Sainte-Justine Hospital, 3175 Côte Sainte-Catherine Road, Montreal, Quebec, H3T 1C5, Canada
- École Polytechnique, P.O. Box 6079, Station Centre-Ville, Montreal, Quebec, H3C 3A7, Canada
- H. Labelle
- LIS3D, Research Center, Sainte-Justine Hospital, 3175 Côte Sainte-Catherine Road, Montreal, Quebec, H3T 1C5, Canada
- J. A. De Guise
- LIS3D, Research Center, Sainte-Justine Hospital, 3175 Côte Sainte-Catherine Road, Montreal, Quebec, H3T 1C5, Canada
- École de Technologie Supérieure, 1100 Notre Dame Ouest Road, Montreal, Quebec, H3C 1K3, Canada
|
47
|
Bellchambers GD, Manby FR. An approximate density-functional method using the Harris-Foulkes functional. J Chem Phys 2011; 135:084105. [DOI: 10.1063/1.3625433] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
|
48
|
Janabi-Sharifi F, Marey M. A Kalman-Filter-Based Method for Pose Estimation in Visual Servoing. IEEE T ROBOT 2010. [DOI: 10.1109/tro.2010.2061290] [Citation(s) in RCA: 127] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
49
|
Tamadazte B, Marchand E, Dembélé S, Le Fort-Piat N. CAD Model-based Tracking and 3D Visual-based Control for MEMS Microassembly. Int J Rob Res 2010. [DOI: 10.1177/0278364910376033] [Citation(s) in RCA: 67] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
This paper investigates sequential robotic microassembly for the construction of 3D micro-electro-mechanical system (MEMS) structures using a 3D visual servoing approach. Previous solutions proposed in the literature for this kind of problem are based on 2D visual control because of the lack of precise and robust 3D measurements of the work scene. In this paper, the relevance of the proposed real-time 3D visual tracking method and 3D vision-based control law is demonstrated. The 3D poses of the MEMS parts are supplied in real time by a computer-aided design (CAD) model-based tracking algorithm. This algorithm is sufficiently accurate and robust to enable a precise regulation toward zero of the 3D error using the proposed pose-based visual servoing approach. Experiments on a microrobotic setup have been carried out to achieve assemblies of two or more 400 μm × 400 μm × 100 μm silicon micro-objects by their respective 97 μm × 97 μm × 100 μm notches, with an assembly clearance of 1 μm to 5 μm. The different microassembly processes are performed with a mean error of 0.3 μm in position and 0.35 × 10⁻² rad in orientation.
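The pose-based visual servoing law underlying such regulation is classical: with the pose error e stacked from the translation error and the axis-angle orientation error, the command v = -λe drives e exponentially to zero. A toy closed-loop sketch follows; the gain, time step, and initial error are arbitrary assumptions, not values from the paper:

```python
import numpy as np

# Classical pose-based visual servoing in its simplest form. The pose
# error e = [t_err | theta*u] (translation error and axis-angle
# orientation error) is regulated to zero by the proportional law
# v = -lambda * e, integrated here with a first-order Euler step.
lam, dt = 0.5, 0.05                      # gain and time step (assumed)
e = np.array([0.10, -0.05, 0.30,         # translation error (arbitrary units)
              0.0, 0.0, np.deg2rad(20)]) # axis-angle orientation error
for _ in range(400):
    v = -lam * e                         # camera velocity screw (vx..wz)
    e = e + dt * v                       # closed-loop integration
print(np.linalg.norm(e))                 # error decays toward zero
```

The exponential decay of e is what "precise regulation toward zero of the 3D error" refers to; in practice the error is recomputed each frame from the model-based tracker rather than integrated open loop as in this sketch.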
Affiliation(s)
- B. Tamadazte
- Femto-St Institute, UMR CNRS 6174 - UFC / ENSMM / UTBM, Automatic Control and Micro-Mechatronic Systems Department, Besançon, France
- E. Marchand
- INRIA Rennes-Bretagne Atlantique, IRISA, Lagadic, France
- S. Dembélé
- Femto-St Institute, UMR CNRS 6174 - UFC / ENSMM / UTBM, Automatic Control and Micro-Mechatronic Systems Department, Besançon, France
- N. Le Fort-Piat
- Femto-St Institute, UMR CNRS 6174 - UFC / ENSMM / UTBM, Automatic Control and Micro-Mechatronic Systems Department, Besançon, France
|
50
|
Chen C, Schonfeld D. A particle filtering framework for joint video tracking and pose estimation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2010; 19:1625-1634. [PMID: 20215081 DOI: 10.1109/tip.2010.2043009] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
A method is introduced to track an object's motion and estimate its pose directly from 2-D image sequences. The scale-invariant feature transform (SIFT) is used to extract corresponding feature points from the image sequences. We demonstrate that pose estimation from the corresponding feature points can be formulated as a solution to Sylvester's equation. We show that the proposed solution of Sylvester's equation is equivalent to the classical SVD method for 3D-3D pose estimation. However, whereas the classical SVD method cannot be used for pose estimation directly from 2-D image sequences, our method based on Sylvester's equation provides a new approach to pose estimation. Smooth video tracking and pose estimation are obtained by using the solution to Sylvester's equation within the importance sampling density of a particle filtering framework. Computer simulation experiments conducted on synthetic data and real-world videos demonstrate the effectiveness of our method in both robustness and speed compared with similar object tracking and pose estimation methods.
Affiliation(s)
- Chong Chen
- Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA.
|