1
Abstract
Research and development of autonomous mobile robotic solutions that can perform active agricultural tasks (pruning, harvesting, mowing) have been growing. Robots are now used for a variety of tasks, such as planting, harvesting, environmental monitoring, and the supply of water and nutrients. To do so, robots need to be able to perform online localization and, if desired, mapping. The most common approach to localization in agricultural applications relies on standalone Global Navigation Satellite System-based systems. However, in many agricultural and forest environments, satellite signals are unavailable or inaccurate, which leads to the need for advanced solutions that do not depend on these signals. Approaches like simultaneous localization and mapping and visual odometry are the most promising ways to increase localization reliability and availability. This work leads to the main conclusion that few methods can simultaneously achieve the desired goals of scalability, availability, and accuracy, due to the challenges imposed by these harsh environments. In the near future, novel contributions to this field are expected to help achieve these goals, through the development of more advanced techniques based on 3D localization and on semantic and topological mapping. In this context, this work proposes an analysis of the current state of the art of localization and mapping approaches in agriculture and forest environments. Additionally, an overview of the datasets available for developing and testing these approaches is given. Finally, a critical analysis of this research field is performed, characterizing the literature with a variety of metrics.
2
A Distributed Radio Beacon/IMU/Altimeter Integrated Localization Scheme with Uncertain Initial Beacon Locations for Lunar Pinpoint Landing. SENSORS 2020; 20:s20195643. [PMID: 33023169 PMCID: PMC7583795 DOI: 10.3390/s20195643]
Abstract
As a growing number of exploration missions have successfully landed on the Moon in recent decades, ground infrastructures, such as radio beacons, have attracted a great deal of attention in the design of navigation systems. None of the available studies regarding integrating beacon measurements for pinpoint landing have considered uncertain initial beacon locations, which are quite common in practice. In this paper, we propose a radio beacon/inertial measurement unit (IMU)/altimeter localization scheme that is sufficiently robust regarding uncertain initial beacon locations. This scheme was designed based on the sparse extended information filter (SEIF) to locate the lander and update the beacon configuration at the same time. Then, an adaptive iterated sparse extended hybrid filter (AISEHF) was devised by modifying the prediction and update stage of SEIF with a hybrid-form propagation and a damping iteration algorithm, respectively. The simulation results indicated that the proposed method effectively reduced the error in the position estimations caused by uncertain beacon locations and made an effective trade-off between the estimation accuracy and the computational efficiency. Thus, this method is a potential candidate for future lunar exploration activities.
3
Barfoot TD, Forbes JR, Yoon DJ. Exactly sparse Gaussian variational inference with application to derivative-free batch nonlinear state estimation. Int J Rob Res 2020. [DOI: 10.1177/0278364920937608]
Abstract
We present a Gaussian variational inference (GVI) technique that can be applied to large-scale nonlinear batch state estimation problems. The main contribution is to show how to fit both the mean and (inverse) covariance of a Gaussian to the posterior efficiently, by exploiting factorization of the joint likelihood of the state and data, as is common in practical problems. This is different than maximum a posteriori (MAP) estimation, which seeks the point estimate for the state that maximizes the posterior (i.e., the mode). The proposed exactly sparse Gaussian variational inference (ESGVI) technique stores the inverse covariance matrix, which is typically very sparse (e.g., block-tridiagonal for classic state estimation). We show that the only blocks of the (dense) covariance matrix that are required during the calculations correspond to the non-zero blocks of the inverse covariance matrix, and further show how to calculate these blocks efficiently in the general GVI problem. ESGVI operates iteratively, and while we can use analytical derivatives at each iteration, Gaussian cubature can be substituted, thereby producing an efficient derivative-free batch formulation. ESGVI simplifies to precisely the Rauch–Tung–Striebel (RTS) smoother in the batch linear estimation case, but goes beyond the ‘extended’ RTS smoother in the nonlinear case because it finds the best-fit Gaussian (mean and covariance), not the MAP point estimate. We demonstrate the technique on controlled simulation problems and a batch nonlinear simultaneous localization and mapping problem with an experimental dataset.
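In the batch linear case, ESGVI reduces exactly to the RTS smoother. A minimal scalar sketch of that baseline (a forward Kalman pass followed by the RTS backward pass; the toy model, variable names, and noise values are illustrative, not from the paper):

```python
# Scalar Rauch-Tung-Striebel (RTS) smoother: Kalman forward pass,
# then a backward pass that conditions each state on ALL measurements.
# Toy model (illustrative): x_k = a*x_{k-1} + w_k,  y_k = x_k + v_k.

def rts_smooth(ys, a=1.0, q=0.1, r=1.0, x0=0.0, p0=1.0):
    # Forward Kalman filter: store filtered and predicted moments.
    xf, pf, xp, pp = [], [], [], []
    x, p = x0, p0
    for y in ys:
        x_pred, p_pred = a * x, a * p * a + q      # predict
        k = p_pred / (p_pred + r)                  # Kalman gain (H = 1)
        x = x_pred + k * (y - x_pred)              # update mean
        p = (1 - k) * p_pred                       # update variance
        xf.append(x); pf.append(p); xp.append(x_pred); pp.append(p_pred)

    # Backward RTS pass: xs_k = xf_k + C_k (xs_{k+1} - xp_{k+1}).
    xs, ps = xf[:], pf[:]
    for k in range(len(ys) - 2, -1, -1):
        c = pf[k] * a / pp[k + 1]                  # smoother gain
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
        ps[k] = pf[k] + c * c * (ps[k + 1] - pp[k + 1])
    return xs, ps
```

The backward pass conditions each state on all measurements, which is what the variational fit also achieves in the linear case.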
Affiliation(s)
- James R Forbes
- Department of Mechanical Engineering, McGill University, Canada
- David J Yoon
- Institute for Aerospace Studies, University of Toronto, Canada
4
Jiang Z, Zhu J, Lin Z, Li Z, Guo R. 3D mapping of outdoor environments by scan matching and motion averaging. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.09.022]
5
A Decorrelated Distributed EKF-SLAM System for the Autonomous Navigation of Mobile Robots. J INTELL ROBOT SYST 2019. [DOI: 10.1007/s10846-019-01069-z]
6
GNSS/INS/LiDAR-SLAM Integrated Navigation System Based on Graph Optimization. REMOTE SENSING 2019. [DOI: 10.3390/rs11091009]
Abstract
A Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS)/Light Detection and Ranging (LiDAR)-Simultaneous Localization and Mapping (SLAM) integrated navigation system based on graph optimization is proposed and implemented in this paper. The navigation results are obtained by graph-optimized fusion of the GNSS position, the Inertial Measurement Unit (IMU) preintegration result, and the relative pose from 3D probability map matching. The sliding window method is adopted to ensure that the computational load of the graph optimization does not increase with time. Land vehicle tests were conducted, and the results show that the proposed GNSS/INS/LiDAR-SLAM integrated navigation system can effectively improve the positioning accuracy compared to GNSS/INS and other current GNSS/INS/LiDAR methods. During simulated one-minute GNSS outages, compared to the GNSS/INS integrated navigation system, the root mean square (RMS) of the position errors in the North and East directions of the proposed system is reduced by approximately 82.2% and 79.6%, respectively, while the position error in the vertical direction and the attitude errors are equivalent. Compared to the benchmark method of GNSS/INS/LiDAR-Google Cartographer, the RMS of the position errors in the North, East and vertical directions decreases by approximately 66.2%, 63.1% and 75.1%, respectively, and the RMS of the roll, pitch and yaw errors is reduced by approximately 89.5%, 92.9% and 88.5%, respectively. Furthermore, the relative position error during the GNSS outage periods is reduced to 0.26% of the travel distance for the proposed method. Therefore, the proposed GNSS/INS/LiDAR-SLAM integrated navigation system can effectively fuse the information of GNSS, IMU and LiDAR and can significantly mitigate the navigation error, especially in cases of GNSS signal attenuation or interruption.
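The graph-optimization back-end described in this abstract can be illustrated with a deliberately tiny 1D pose-graph least-squares problem; the factors, weights, and numbers below are illustrative, not from the paper:

```python
import numpy as np

# Tiny 1D pose-graph: poses x0..x3, relative (odometry/scan-match) factors
# x_{i+1} - x_i = z_i and two strong absolute (GNSS-like) factors
# x_0 = 0, x_3 = 3.3. For linear factors one Gauss-Newton step is exact.
def solve_pose_graph(rel, abs_factors, n):
    H = np.zeros((n, n)); b = np.zeros(n)   # information matrix / vector
    for i, z, w in rel:                     # factor: x_{i+1} - x_i = z
        J = np.zeros(n); J[i + 1], J[i] = 1.0, -1.0
        H += w * np.outer(J, J); b += w * z * J
    for i, z, w in abs_factors:             # factor: x_i = z
        H[i, i] += w; b[i] += w * z
    return np.linalg.solve(H, b)            # normal equations

x = solve_pose_graph(rel=[(0, 1.0, 1.0), (1, 1.0, 1.0), (2, 1.0, 1.0)],
                     abs_factors=[(0, 0.0, 100.0), (3, 3.3, 100.0)],
                     n=4)
```

The strong absolute factors play the role of GNSS fixes: they anchor the endpoints, and the least-squares solve spreads the leftover odometry inconsistency evenly along the chain.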
7
Abstract
This paper solves the classical problem of simultaneous localization and mapping (SLAM) in a fashion that avoids linearized approximations altogether. Based on the creation of virtual synthetic measurements, the algorithm uses a linear time-varying Kalman observer, bypassing errors and approximations brought by the linearization process in traditional extended Kalman filtering SLAM. Convergence rates of the algorithm are established using contraction analysis. Different combinations of sensor information can be exploited, such as bearing measurements, range measurements, optical flow, or time-to-contact. SLAM-DUNK, a more advanced version of the algorithm in global coordinates, exploits the conditional independence property of the SLAM problem, decoupling the covariance matrices between different landmarks and reducing computational complexity to O(n). As illustrated in simulations, the proposed algorithm can solve SLAM problems in both 2D and 3D scenarios with guaranteed convergence rates in a full nonlinear context.
Affiliation(s)
- Feng Tan
- Nonlinear Systems Laboratory, Massachusetts Institute of Technology, USA
- Winfried Lohmiller
- Nonlinear Systems Laboratory, Massachusetts Institute of Technology, USA
8
9
10
Mu B, Paull L, Agha-Mohammadi AA, Leonard JJ, How JP. Two-Stage Focused Inference for Resource-Constrained Minimal Collision Navigation. IEEE T ROBOT 2017. [DOI: 10.1109/tro.2016.2623344]
11
Barrios P, Adams M, Leung K, Inostroza F, Naqvi G, Orchard ME. Metrics for Evaluating Feature-Based Mapping Performance. IEEE T ROBOT 2017. [DOI: 10.1109/tro.2016.2627027]
12
A Hybrid Bayesian-Frequentist Approach to SLAM. J INTELL ROBOT SYST 2016. [DOI: 10.1007/s10846-015-0319-7]
13
Visual EKF-SLAM from Heterogeneous Landmarks. SENSORS 2016; 16:s16040489. [PMID: 27070602 PMCID: PMC4851003 DOI: 10.3390/s16040489]
Abstract
Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping from vision performs both the spatial and temporal fusion of these data on a map as a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets the features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprise: the management of a heterogeneous EKF-SLAM map composed of both point and line landmarks, with a comparison between landmark parametrizations and an evaluation of how this heterogeneity improves the accuracy of camera localization; the development of a front-end active-search process for line landmarks integrated into SLAM; and the experimentation methodology.
14
15
He B, Liu Y, Dong D, Shen Y, Yan T, Nian R. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles. SENSORS 2015; 15:19852-79. [PMID: 26287194 PMCID: PMC4570400 DOI: 10.3390/s150819852]
Abstract
In this paper, a novel iterative sparse extended information filter (ISEIF) is proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with adaptively iterated methods to reduce linearization errors. The scalability advantage of SEIF is kept, while its consistency and accuracy are improved. Simulations and practical experiments were carried out with both a land vehicle benchmark and an autonomous underwater vehicle. Comparisons between the iterative SEIF (ISEIF), the standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates than SEIF and preserves the scalability advantage over the EKF as well.
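The iterated measurement update that distinguishes an iterated filter from a standard extended one can be sketched, in scalar EKF form, as a Gauss-Newton relinearization loop; the measurement model and all numbers here are illustrative, not the paper's:

```python
# Iterated EKF-style measurement update: relinearize h(.) at the current
# iterate instead of only at the prior mean, reducing linearization error.
# Scalar state x, measurement y = h(x) + v with h(x) = x**2 (illustrative).
def iterated_update(x0, p0, y, r, h, dh, iters=10):
    x = x0
    for _ in range(iters):
        H = dh(x)                              # Jacobian at current iterate
        k = p0 * H / (H * p0 * H + r)          # gain with the prior covariance
        x_new = x0 + k * (y - h(x) - H * (x0 - x))
        if abs(x_new - x) < 1e-12:             # converged
            x = x_new
            break
        x = x_new
    p = (1 - k * dh(x)) * p0                   # covariance at final iterate
    return x, p

# With a near-noiseless measurement y = 4 and h(x) = x^2, the iterate
# should move from the prior mean 1.5 to approximately 2.
x, p = iterated_update(x0=1.5, p0=0.5, y=4.0, r=1e-6,
                       h=lambda v: v * v, dh=lambda v: 2 * v)
```

A single (non-iterated) EKF update would stop after the first pass, keeping the Jacobian evaluated at the prior mean; the loop above repeats the update until the linearization point agrees with the posterior estimate.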
Affiliation(s)
- Bo He
- School of Information Science and Engineering, Ocean University of China, 238 Songling Road, Qingdao 266100, China.
- Yang Liu
- School of Information Science and Engineering, Ocean University of China, 238 Songling Road, Qingdao 266100, China.
- Diya Dong
- School of Information Science and Engineering, Ocean University of China, 238 Songling Road, Qingdao 266100, China.
- Yue Shen
- School of Information Science and Engineering, Ocean University of China, 238 Songling Road, Qingdao 266100, China.
- Tianhong Yan
- School of Mechanical and Electrical Engineering, China Jiliang University, 258 Xueyuan Street, Xiasha High-Edu Park, Hangzhou 310018, China.
- Rui Nian
- School of Information Science and Engineering, Ocean University of China, 238 Songling Road, Qingdao 266100, China.
16
Abstract
In this work, we investigate a quaternion-based formulation of 3D Simultaneous Localization and Mapping with an Extended Kalman Filter (EKF-SLAM) using relative pose measurements. We introduce a discrete-time derivation that avoids the normalization problem that often arises when using unit quaternions in a Kalman filter, and we study its observability properties. The consistency of the estimation errors with the corresponding covariance matrices is also evaluated. The approach is further tested on real data from the Rawseeds dataset and is applied within a delayed-state EKF architecture for estimating a dense 3D map of an unknown environment. The contribution is motivated by the possibility of abstracting multi-sensorial information in terms of relative pose measurements and by its straightforward extension to the multi-robot case.
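The normalization problem mentioned above arises because an additive Kalman-style correction does not keep a quaternion on the unit sphere. A minimal illustration (Hamilton convention; the correction value is arbitrary):

```python
import math

# Hamilton-convention quaternion product q = q1 (x) q2, with q = (w, x, y, z).
def qmul(q1, q2):
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qnormalize(q):
    # Re-project onto the unit sphere after an additive EKF-style update.
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

# An additive state correction pushes q off the unit sphere,
# so the filter must renormalize (the step a quaternion-aware
# derivation is designed to avoid).
q = (1.0, 0.0, 0.0, 0.0)
dq = (0.0, 0.05, 0.0, 0.0)   # illustrative additive correction
q_upd = qnormalize(tuple(a + b for a, b in zip(q, dq)))
```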
17
18
19
Walter MR, Hemachandra S, Homberg B, Tellex S, Teller S. A framework for learning semantic maps from grounded natural language descriptions. Int J Rob Res 2014. [DOI: 10.1177/0278364914537359]
Abstract
This paper describes a framework that enables robots to efficiently learn human-centric models of their environment from natural language descriptions. Typical semantic mapping approaches are limited to augmenting metric maps with higher-level properties of the robot’s surroundings (e.g. place type, object locations) that can be inferred from the robot’s sensor data, but do not use this information to improve the metric map. The novelty of our algorithm lies in fusing high-level knowledge that people can uniquely provide through speech with metric information from the robot’s low-level sensor streams. Our method jointly estimates a hybrid metric, topological, and semantic representation of the environment. This semantic graph provides a common framework in which we integrate information that the user communicates (e.g. labels and spatial relations) with metric observations from low-level sensors. Our algorithm efficiently maintains a factored distribution over semantic graphs based upon the stream of natural language and low-level sensor information. We detail the means by which the framework incorporates knowledge conveyed by the user’s descriptions, including the ability to reason over expressions that reference yet unknown regions in the environment. We evaluate the algorithm’s ability to learn human-centric maps of several different environments and analyze the knowledge inferred from language and the utility of the learned maps. The results demonstrate that the incorporation of information from free-form descriptions increases the metric, topological, and semantic accuracy of the recovered environment model.
Affiliation(s)
- Matthew R. Walter
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Sachithra Hemachandra
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Bianca Homberg
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Stefanie Tellex
- Department of Computer Science, Brown University, Providence, RI, USA
- Seth Teller
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
20
21
Song Y, Li Q, Kang Y, Yan D. Effective cubature FastSLAM: SLAM with Rao-Blackwellized particle filter and cubature rule for Gaussian weighted integral. Adv Robot 2013. [DOI: 10.1080/01691864.2013.826406]
22
Abstract
This paper addresses the visual odometry problem from a machine learning perspective. Optical flow information from a single camera is used as input for a multiple-output Gaussian process (MOGP) framework, that estimates linear and angular camera velocities. This approach has several benefits. (1) It substitutes the need for conventional camera calibration, by introducing a semi-parametric model that is able to capture nuances that a strictly parametric geometric model struggles with. (2) It is able to recover absolute scale if a range sensor (e.g. a laser scanner) is used for ground-truth, provided that training and testing data share a certain similarity. (3) It is naturally able to provide measurement uncertainties. We extend the standard MOGP framework to include the ability to infer joint estimates (full covariance matrices) for both translation and rotation, taking advantage of the fact that all estimates are correlated since they are derived from the same vehicle. We also modify the common zero mean assumption of a Gaussian process to accommodate a standard geometric model of the camera, thus providing an initial estimate that is then further refined by the non-parametric model. Both Gaussian process hyperparameters and camera parameters are trained simultaneously, so there is still no need for traditional camera calibration, although if these values are known they can be used to speed up training. This approach has been tested in a wide variety of situations, both 2D in urban and off-road environments (two degrees of freedom) and 3D with unmanned aerial vehicles (six degrees of freedom), with results that are comparable to standard state-of-the-art visual odometry algorithms and even more traditional methods, such as wheel encoders and laser-based Iterative Closest Point. We also test its limits to generalize over environment changes by varying training and testing conditions independently, and also by changing cameras between training and testing.
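A single-output, RBF-kernel version of the Gaussian-process regression underlying this approach can be sketched as follows; in the paper's setting the inputs would be optical-flow features and the targets camera velocities, but the toy data, function names, and hyperparameter values here are illustrative:

```python
import numpy as np

# Gaussian-process regression: predictive mean and variance under an
# RBF (squared-exponential) kernel, zero prior mean, noisy observations.
def rbf(a, b, ell=1.0, sf=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(X, y, Xs, noise=1e-2):
    K = rbf(X, X) + noise * np.eye(len(X))              # train covariance
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha                                 # predictive mean
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))  # predictive variance
    return mean, var

# Toy 1D regression problem standing in for flow-to-velocity mapping.
X = np.linspace(0.0, 5.0, 25)
y = np.sin(X)
mean, var = gp_predict(X, y, Xs=np.array([2.5]))
```

The predictive variance is the point where this framework "is naturally able to provide measurement uncertainties": far from the training data the variance grows back toward the prior, flagging unreliable estimates.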
Affiliation(s)
- Vitor Guizilini
- Australian Centre for Field Robotics, School of Information Technologies, University of Sydney, Australia
- Fabio Ramos
- Australian Centre for Field Robotics, School of Information Technologies, University of Sydney, Australia
23
Gutmann JS, Eade E, Fong P, Munich ME. Vector Field SLAM—Localization by Learning the Spatial Variation of Continuous Signals. IEEE T ROBOT 2012. [DOI: 10.1109/tro.2011.2177691]
24
Dong H, Tang J, Chen W, Nagano A, Luo Z. Novel Information Matrix Sparsification Approach for Practical Implementation of Simultaneous Localization and Mapping. Adv Robot 2012. [DOI: 10.1163/016918610x493624]
Affiliation(s)
- Haiwei Dong
- Department of Computer Science and Systems Engineering, Graduate School of Engineering, Kobe University, 1-1 Rokkodai, Nada-ku, Kobe 657-8501, Japan
- Jun Tang
- New Huadu Cooperation Limited, No. 100, Century Avenue, Pudong New Area, Shanghai 200120, P. R. China
- Weidong Chen
- Department of Automation, School of Electronic, Information and Electrical Engineering, Shanghai Jiaotong University, 800 Dongchuan Road, Shanghai 200240, P. R. China
- Akinori Nagano
- Department of Computer Science and Systems Engineering, Graduate School of Engineering, Kobe University, 1-1 Rokkodai, Nada-ku, Kobe 657-8501, Japan
- Zhiwei Luo
- Department of Computer Science and Systems Engineering, Graduate School of Engineering, Kobe University, 1-1 Rokkodai, Nada-ku, Kobe 657-8501, Japan
25
Pathirana PN, Savkin AV, Ekanayake SW, Bauer NJ. A Robust Solution to the Stereo-Vision-Based Simultaneous Localization and Mapping Problem with Steady and Moving Landmarks. Adv Robot 2012. [DOI: 10.1163/016918611x563292]
Affiliation(s)
- Andrey V. Savkin
- School of Engineering Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia
- Nicholas J. Bauer
- School of Engineering and IT, Deakin University, VIC 3217, Australia
26
Saeedi S, Paull L, Trentini M, Li H. Neural network-based multiple robot simultaneous localization and mapping. IEEE TRANSACTIONS ON NEURAL NETWORKS 2011; 22:2376-87. [PMID: 22156983 DOI: 10.1109/tnn.2011.2176541]
Abstract
In this paper, a decentralized platform for simultaneous localization and mapping (SLAM) with multiple robots is developed. Each robot performs single-robot view-based SLAM using an extended Kalman filter to fuse data from two encoders and a laser ranger. To extend this approach to multiple-robot SLAM, a novel occupancy grid map fusion algorithm is proposed. Map fusion is achieved through a multistep process that includes image preprocessing, map learning (clustering) using neural networks, relative orientation extraction using norm histogram cross-correlation and a Radon transform, relative translation extraction using matching norm vectors, and verification of the results. The proposed map learning method is based on the self-organizing map. In the learning phase, the obstacles of the map are learned by clustering the occupied cells of the map. The learning is an unsupervised process that can be done on the fly, without the need for output training patterns. The clusters represent the spatial form of the map and make further analyses of the map easier and faster. Clusters can also be interpreted as features extracted from the occupancy grid map, so the map fusion problem becomes a task of matching features. Results of experiments performed in a real environment with multiple robots prove the effectiveness of the proposed solution.
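The relative-orientation step, cross-correlating orientation histograms, can be sketched as a circular cross-correlation whose peak lag is the candidate rotation; the histogram contents below are illustrative:

```python
# Relative orientation by circular cross-correlation of two orientation
# histograms: the peak lag is the rotation (in histogram bins) that best
# aligns map B with map A. Pure-Python sketch with made-up data.
def circular_xcorr_peak(h_a, h_b):
    n = len(h_a)
    best_lag, best_score = 0, float("-inf")
    for lag in range(n):
        score = sum(h_a[i] * h_b[(i - lag) % n] for i in range(n))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

h_a = [0, 1, 5, 1, 0, 0, 0, 0]   # dominant direction at bin 2
h_b = [0, 0, 0, 0, 0, 1, 5, 1]   # same shape rotated forward by 4 bins
lag = circular_xcorr_peak(h_a, h_b)
```

With, say, 8 bins over 360 degrees, a peak at lag 4 corresponds to a 180-degree relative rotation between the two maps.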
Affiliation(s)
- Sajad Saeedi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 9P8, Canada.
27
He B, Zhang H, Li C, Zhang S, Liang Y, Yan T. Autonomous navigation for autonomous underwater vehicles based on information filters and active sensing. SENSORS 2011; 11:10958-80. [PMID: 22346682 PMCID: PMC3274324 DOI: 10.3390/s111110958]
Abstract
This paper addresses an autonomous navigation method for the autonomous underwater vehicle (AUV) C-Ranger applying information-filter-based simultaneous localization and mapping (SLAM), and its sea trial experiments in Tuandao Bay (Shandong Province, P.R. China). Weak links in the information matrix of an extended information filter (EIF) can be pruned to achieve an efficient approach, the sparse EIF algorithm (SEIF-SLAM). All of the basic update formulae can be implemented in constant time, irrespective of the size of the map; hence, the computational complexity is significantly reduced. A mechanical scanning imaging sonar is chosen as the active sensing device for the underwater vehicle, and a compensation method based on feedback of the AUV pose is presented to overcome distortion of the acoustic images due to vehicle motion. In order to verify the feasibility of the proposed navigation methods for the C-Ranger, a sea trial was conducted in Tuandao Bay. Experimental results and analysis show that the proposed SEIF-SLAM-based navigation approach improves the accuracy of the navigation compared with the conventional method; moreover, the algorithm has a low computational cost compared with EKF-SLAM.
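The link-pruning idea behind SEIF can be illustrated with a crude thresholding rule on normalized off-diagonal strengths; this is only an illustration of information-matrix sparsification, not the paper's exact pruning rule, and all numbers are made up:

```python
import numpy as np

# SEIF-style sparsification idea: prune weak off-diagonal links in the
# information matrix so that updates touch only a bounded set of entries.
# Crude thresholding illustration (NOT the paper's rule).
def prune_weak_links(info, rel_thresh=0.05):
    d = np.sqrt(np.diag(info))
    strength = np.abs(info) / np.outer(d, d)   # correlation-like strength
    pruned = info.copy()
    mask = (strength < rel_thresh) & ~np.eye(len(info), dtype=bool)
    pruned[mask] = 0.0                          # drop weak links only
    return pruned

# Illustrative 3x3 information matrix with one strong and two weak links.
info = np.array([[4.00, 0.80, 0.01],
                 [0.80, 2.00, 0.02],
                 [0.01, 0.02, 1.00]])
sparse_info = prune_weak_links(info)
```

A principled SEIF keeps the pruned matrix consistent with the original distribution (rather than simply zeroing entries), but the payoff is the same: a sparse matrix whose constant-time updates do not grow with map size.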
Affiliation(s)
- Bo He
- School of Information Science and Engineering, Ocean University of China, 238 Songling Road, Qingdao 266100, China
- Authors to whom correspondence should be addressed (B.H., T.Y.); Tel.: +86-532-6678-2339 (B.H.); Fax: +86-532-6678-2339 (B.H.)
- Hongjin Zhang
- School of Information Science and Engineering, Ocean University of China, 238 Songling Road, Qingdao 266100, China
- Chao Li
- School of Information Science and Engineering, Ocean University of China, 238 Songling Road, Qingdao 266100, China
- Shujing Zhang
- School of Information Science and Engineering, Ocean University of China, 238 Songling Road, Qingdao 266100, China
- Yan Liang
- School of Information Science and Engineering, Ocean University of China, 238 Songling Road, Qingdao 266100, China
- Tianhong Yan
- School of Mechanical & Electrical Engineering, China Jiliang University, 258 Xueyuan Street, Xiasha High-Edu Park, Hangzhou 310018, China
- State Key Lab of Digital Manufacturing and Equipments Technology, Huazhong University of Science and Technology, Luoyu Road, Wuhan 430074, China
28
Li Y, Olson EB. A general purpose feature extractor for light detection and ranging data. SENSORS 2010; 10:10356-75. [PMID: 22163474 PMCID: PMC3230992 DOI: 10.3390/s101110356]
Abstract
Feature extraction is a central step in processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in different environments can be poor. We describe a general-purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of the environment, and it produces principled uncertainty estimates and corner descriptors at the same time. We present results in software simulation and on standard datasets, including the 2D Victoria Park and Intel Research Center datasets and the 3D MIT DARPA Urban Challenge dataset.
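The Kanade-Tomasi criterion the authors adapt scores a neighborhood by the smaller eigenvalue of its gradient structure tensor. A minimal single-scale 2D sketch on a synthetic image (not the paper's LIDAR pipeline; the window size and test image are illustrative):

```python
import numpy as np

# Kanade-Tomasi corner score: the smaller eigenvalue of the structure
# tensor M = [[sum Ix^2, sum IxIy], [sum IxIy, sum Iy^2]] over a window.
# A high score means gradients vary in two directions, i.e. a corner.
def kt_score(img, y, x, w=2):
    gy, gx = np.gradient(img.astype(float))    # gradients along rows, cols
    ys, xs = slice(y - w, y + w + 1), slice(x - w, x + w + 1)
    ixx = (gx[ys, xs] ** 2).sum()
    iyy = (gy[ys, xs] ** 2).sum()
    ixy = (gx[ys, xs] * gy[ys, xs]).sum()
    M = np.array([[ixx, ixy], [ixy, iyy]])
    return np.linalg.eigvalsh(M)[0]            # smaller eigenvalue

img = np.zeros((16, 16))
img[8:, 8:] = 1.0                      # a single bright quadrant
corner_score = kt_score(img, 8, 8)     # at the corner of the quadrant
edge_score = kt_score(img, 12, 8)      # along a straight edge
```

A straight edge has one dominant gradient direction, so its smaller eigenvalue is near zero, while a true corner scores high in both directions; the paper applies the same criterion at multiple spatial scales.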
Affiliation(s)
- Yangming Li
- Department of Computer Science and Engineering, University of Michigan, 2260 Hayward St, Ann Arbor, MI 48109, USA.
29
30
Iterated D-SLAM map joining: evaluating its performance in terms of consistency, accuracy and efficiency. Auton Robots 2009. [DOI: 10.1007/s10514-009-9153-8]
31
Dunbabin M, Corke P, Vasilescu I, Rus D. Experiments with Cooperative Control of Underwater Robots. Int J Rob Res 2009. [DOI: 10.1177/0278364908098456]
Abstract
In this paper we describe cooperative control algorithms for robots and sensor nodes in an underwater environment. Cooperative navigation is defined as the ability of a coupled system of autonomous robots to pool their resources to achieve long-distance navigation and a larger controllability space. Other types of useful cooperation in underwater environments include: exchange of information such as data download and retasking; cooperative localization and tracking; and physical connection (docking) for tasks such as deployment of underwater sensor networks, collection of nodes and rescue of damaged robots. We present experimental results obtained with an underwater system that consists of two very different robots and a number of sensor network modules. We present the hardware and software architecture of this underwater system. We then describe various interactions between the robots and sensor nodes and between the two robots, including cooperative navigation. Finally, we describe our experiments with this underwater system and present data.
Affiliation(s)
- Daniela Rus
- Massachusetts Institute of Technology, Cambridge, MA, USA
32
Huang S, Wang Z, Dissanayake G. Sparse Local Submap Joining Filter for Building Large-Scale Maps. IEEE T ROBOT 2008. [DOI: 10.1109/tro.2008.2003259]
33
34
Pinies P, Tardos J. Large-Scale SLAM Building Conditionally Independent Local Maps: Application to Monocular Vision. IEEE T ROBOT 2008. [DOI: 10.1109/tro.2008.2004636]