1. Kagioulis E, Knight J, Graham P, Nowotny T, Philippides A. Adaptive Route Memory Sequences for Insect-Inspired Visual Route Navigation. Biomimetics (Basel) 2024;9:731. [PMID: 39727735] [DOI: 10.3390/biomimetics9120731]
Abstract
Visual navigation is a key capability for robots and animals. Inspired by the navigational prowess of social insects, a family of insect-inspired, familiarity-based route navigation algorithms has been developed; these use stored panoramic images collected during a training route to derive directional information during route recapitulation. However, unlike the ants that inspire them, these algorithms ignore the sequence in which the training images are acquired, so all temporal information and correlation is lost. In this paper, the benefits of incorporating sequence information into familiarity-based algorithms are tested. Instead of comparing a test view to all the training route images, a window of memories is used to restrict the number of comparisons that need to be made. Because ants are able to navigate visually when odometric information is removed, the window position is updated via visual matching information only, not odometry. The performance of an algorithm without sequence information is compared to that of window methods with different fixed lengths, as well as a method that adapts the window size dynamically. All algorithms were benchmarked on a simulation of an environment used for ant navigation experiments; the results show that sequence information can boost performance and reduce computation. A detailed analysis of successes and failures highlights the interaction between the length of the route memory sequence and the environment type, and shows the benefits of an adaptive method.
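The windowed-memory idea reduces to a small routine. The sketch below is a hypothetical Python illustration, not the authors' code: familiarity is scored by the mean absolute difference over horizontally rotated panoramas (assumed float images), comparisons are restricted to a window of stored memories, and the window is recentred purely from the best visual match, with no odometry. All names and the fixed `half_width` are assumptions; an adaptive variant would grow or shrink the window from recent match quality.

```python
import numpy as np

def rotate(view, shift):
    """Rotate a panoramic image horizontally by `shift` pixel columns."""
    return np.roll(view, shift, axis=1)

def best_heading(test_view, memory):
    """Return (mismatch, shift) for the rotation that best matches a stored view."""
    scores = [(np.abs(rotate(test_view, s) - memory).mean(), s)
              for s in range(test_view.shape[1])]
    return min(scores)

def window_recall(test_view, route_memories, centre, half_width):
    """Compare the test view only against memories inside the window, then
    recentre the window on the best-matching memory (visual update only)."""
    lo = max(0, centre - half_width)
    hi = min(len(route_memories), centre + half_width + 1)
    results = [best_heading(test_view, route_memories[i]) + (i,)
               for i in range(lo, hi)]
    mismatch, shift, best_idx = min(results)
    return shift, best_idx  # steer by `shift`; new window centre is `best_idx`
```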
Affiliation(s)
- Efstathios Kagioulis, Sussex AI, School of Engineering and Informatics, University of Sussex, Brighton BN1 9QJ, UK
- James Knight, Sussex AI, School of Engineering and Informatics, University of Sussex, Brighton BN1 9QJ, UK
- Paul Graham, Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9QG, UK
- Thomas Nowotny, Sussex AI, School of Engineering and Informatics, University of Sussex, Brighton BN1 9QJ, UK
- Andrew Philippides, Sussex AI, School of Engineering and Informatics, University of Sussex, Brighton BN1 9QJ, UK
2. van Dijk T, De Wagter C, de Croon GCHE. Visual route following for tiny autonomous robots. Sci Robot 2024;9:eadk0310. [PMID: 39018372] [DOI: 10.1126/scirobotics.adk0310]
Abstract
Navigation is an essential capability for autonomous robots. In particular, visual navigation has been a major research topic in robotics because cameras are lightweight, power-efficient sensors that provide rich information about the environment. However, the main challenge of visual navigation is that it requires substantial computational power and memory for visual processing and storage of the results. So far, this has precluded its use on small, extremely resource-constrained robots such as lightweight drones. Inspired by the parsimony of natural intelligence, we propose an insect-inspired approach to visual navigation that is specifically aimed at extremely resource-restricted robots. It is a route-following approach in which a robot's outbound trajectory is stored as a collection of highly compressed panoramic images together with their spatial relationships as measured with odometry. During the inbound journey, the robot uses a combination of odometry and visual homing to return to the stored locations, with visual homing preventing the buildup of odometric drift. The main advance of the proposed strategy is that the number of stored compressed images is minimized by spacing them as far apart as the accuracy of odometry allows. To demonstrate the suitability for small systems, we implemented the strategy on a tiny 56-gram drone. The drone could successfully follow routes up to 100 meters with a trajectory representation that consumed less than 20 bytes per meter. The presented method forms a substantial step toward the autonomous visual navigation of tiny robots, facilitating their more widespread application.
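The key storage rule, spacing snapshots as far apart as odometric accuracy allows, can be sketched in a few lines. This is a minimal illustration under assumed parameters (`drift_rate`, `catchment`), not the published implementation.

```python
import numpy as np

def record_route(samples, drift_rate=0.1, catchment=2.0):
    """Store snapshots spaced as far apart as odometric accuracy allows.

    samples: iterable of ((x, y) odometry fix, compressed panoramic view).
    drift_rate: assumed odometry error growth (m of drift per m travelled).
    catchment: assumed radius (m) within which visual homing still converges.
    """
    route, travelled, prev = [], 0.0, None
    for pose, view in samples:
        if prev is not None:
            travelled += float(np.hypot(pose[0] - prev[0], pose[1] - prev[1]))
        prev = pose
        # Snapshot whenever the predicted drift since the last stored view
        # approaches what visual homing can absorb on the way back.
        if not route or travelled * drift_rate >= catchment:
            route.append((pose, view))
            travelled = 0.0
    return route
```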
Affiliation(s)
- Tom van Dijk, Control and Operations Department, Faculty of Aerospace Engineering, Delft University of Technology, Delft, Netherlands
- Christophe De Wagter, Control and Operations Department, Faculty of Aerospace Engineering, Delft University of Technology, Delft, Netherlands
- Guido C H E de Croon, Control and Operations Department, Faculty of Aerospace Engineering, Delft University of Technology, Delft, Netherlands
3. Yu Z, Sadati SMH, Perera S, Hauser H, Childs PRN, Nanayakkara T. Tapered whisker reservoir computing for real-time terrain identification-based navigation. Sci Rep 2023;13:5213. [PMID: 36997577] [PMCID: PMC10063629] [DOI: 10.1038/s41598-023-31994-x]
Abstract
This paper proposes a new method for real-time terrain-recognition-based navigation for mobile robots. Mobile robots performing tasks in unstructured environments must adapt their trajectories in real time to achieve safe and efficient navigation over complex terrain. However, current methods largely depend on cameras and inertial measurement units (IMUs), which demand high computational resources for real-time applications. Here, a real-time terrain identification-based navigation method is proposed using an on-board tapered-whisker-based reservoir computing system. The nonlinear dynamic response of the tapered whisker was investigated in analytical and finite element analysis frameworks to demonstrate its reservoir computing capabilities. Numerical simulations and experiments were cross-checked to verify that whisker sensors can separate different frequency signals directly in the time domain, that the proposed system is computationally advantageous, and that different whisker axis locations and motion velocities provide distinct dynamical response information. Terrain surface-following experiments demonstrated that the system can accurately identify changes in the terrain in real time and adjust its trajectory to stay on a specific terrain.
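In software, the whisker's role as a fixed nonlinear dynamical system can be mimicked by an echo state network: a random, untrained reservoir driven by the vibration signal, with only a linear readout trained to label terrains. The sketch below is an assumed software analogue of that idea, not the paper's physical reservoir.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: plays the role the tapered whisker's passive
# dynamics play in the paper (nothing inside the reservoir is trained).
N = 200
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def reservoir_state(signal):
    """Drive the reservoir with a 1-D vibration signal; return the final state."""
    x = np.zeros(N)
    for u in signal:
        x = np.tanh(W_in[:, 0] * u + W @ x)
    return x

def train_readout(signals, labels, n_classes, ridge=1e-3):
    """Ridge-regression readout mapping reservoir states to terrain classes."""
    X = np.stack([reservoir_state(s) for s in signals])
    Y = np.eye(n_classes)[labels]
    return np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)

def classify(signal, W_out):
    return int(np.argmax(reservoir_state(signal) @ W_out))
```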
Affiliation(s)
- Zhenhua Yu, Dyson School of Design Engineering, Imperial College London, London SW7 2DB, UK
- S M Hadi Sadati, Department of Surgical and Interventional Engineering, King's College London, London WC2R 2LS, UK
- Shehara Perera, Dyson School of Design Engineering, Imperial College London, London SW7 2DB, UK
- Helmut Hauser, Bristol Robotics Laboratory and SoftLab, University of Bristol, Bristol BS8 1TH, UK
- Peter R N Childs, Dyson School of Design Engineering, Imperial College London, London SW7 2DB, UK
4. Mahdavian M, Yin K, Chen M. Robust Visual Teach and Repeat for UGVs Using 3D Semantic Maps. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3189165]
Affiliation(s)
- Mohammad Mahdavian, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada
- KangKang Yin, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada
- Mo Chen, School of Computing Science, Simon Fraser University, Burnaby, BC, Canada
5. Rozsypálek Z, Broughton G, Linder P, Rouček T, Blaha J, Mentzl L, Kusumam K, Krajník T. Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation. Sensors (Basel) 2022;22:2975. [PMID: 35458959] [PMCID: PMC9030179] [DOI: 10.3390/s22082975]
Abstract
Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model’s robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R.
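The quantity the model ultimately supplies is a horizontal displacement between a taught and a live image. A minimal stand-in, with simple column statistics in place of the paper's trained fully convolutional network, cross-correlates per-column descriptors over candidate shifts:

```python
import numpy as np

def column_descriptors(image):
    """Collapse an H x W (x C) image into one value per column.
    In the paper a trained fully convolutional network supplies dense,
    change-robust descriptors; column means are a crude stand-in."""
    desc = image.reshape(image.shape[0], image.shape[1], -1).mean(axis=(0, 2))
    return (desc - desc.mean()) / (desc.std() + 1e-8)

def horizontal_displacement(taught, live, max_shift=64):
    """Return the pixel shift of `live` relative to `taught` that maximises
    correlation of the column descriptors; steering undoes this shift."""
    a, b = column_descriptors(taught), column_descriptors(live)
    scores = [(np.dot(a, np.roll(b, s)), s)
              for s in range(-max_shift, max_shift + 1)]
    return max(scores)[1]
```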
Affiliation(s)
- Zdeněk Rozsypálek (corresponding author), Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
- George Broughton, Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
- Pavel Linder, Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
- Tomáš Rouček, Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
- Jan Blaha, Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
- Leonard Mentzl, Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
- Keerthy Kusumam, Department of Computer Science, University of Nottingham, Jubilee Campus, 7301 Wollaton Rd, Lenton, Nottingham NG8 1BB, UK
- Tomáš Krajník, Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
6. Rouček T, Amjadi AS, Rozsypálek Z, Broughton G, Blaha J, Kusumam K, Krajník T. Self-Supervised Robust Feature Matching Pipeline for Teach and Repeat Navigation. Sensors (Basel) 2022;22:2836. [PMID: 35458823] [PMCID: PMC9032253] [DOI: 10.3390/s22082836]
Abstract
The performance of deep neural networks and the low cost of computational hardware have made computer vision a popular choice in many robotic systems. An attractive feature of deep-learned methods is their ability to cope with appearance changes caused by day-night cycles and seasonal variations. However, training deep neural networks typically relies on large numbers of hand-annotated images, which requires significant effort for data collection and annotation. We present a method that allows autonomous, self-supervised training of a neural network in visual teach-and-repeat (VT&R) tasks, where a mobile robot has to traverse a previously taught path repeatedly. Our method is based on a fusion of two image registration schemes: one based on a Siamese neural network and another on point-feature matching. As the robot traverses the taught paths, it uses the results of feature-based matching to train the neural network, which, in turn, provides coarse registration estimates to the feature matcher. We show that as the neural network gets trained, the accuracy and robustness of the navigation increase, making the robot capable of dealing with significant changes in the environment. This method can significantly reduce the data annotation effort when designing new robotic systems or introducing robots into new environments. Moreover, the method provides annotated datasets that can be deployed in other navigation systems. To promote the reproducibility of this research, we provide our datasets, code and trained models online.
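The mutual-training loop can be caricatured in a few lines: the point-feature matcher, guided by the network's coarse estimate, produces displacement labels that in turn retrain the network. The sketch below is schematic; `feature_matcher` and `net` are assumed interfaces, not the published API.

```python
def self_supervised_round(pairs, feature_matcher, net):
    """One round of the mutual training idea (schematic, not the authors' code).

    pairs: (taught_image, live_image) tuples gathered while the robot
        re-traverses a taught path.
    feature_matcher(t, l, coarse): precise displacement via point features,
        searching only near the coarse estimate; None when unconfident.
    net: model exposing predict(t, l) -> coarse displacement and fit(dataset).
    """
    dataset = []
    for taught, live in pairs:
        coarse = net.predict(taught, live)          # network narrows the search
        label = feature_matcher(taught, live, coarse)
        if label is not None:                       # matcher confident enough
            dataset.append((taught, live, label))   # becomes a training example
    net.fit(dataset)                                # network improves next round
    return len(dataset)
```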
Affiliation(s)
- Tomáš Rouček, Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
- Arash Sadeghi Amjadi, Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
- Zdeněk Rozsypálek, Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
- George Broughton, Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
- Jan Blaha, Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
- Keerthy Kusumam, Department of Computer Science, University of Nottingham, Jubilee Campus, 7301 Wollaton Rd, Lenton, Nottingham NG8 1BB, UK
- Tomáš Krajník, Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
7. Mattamala M, Chebrolu N, Fallon M. An Efficient Locally Reactive Controller for Safe Navigation in Visual Teach and Repeat Missions. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3143196]
8. Wei P, Liang R, Michelmore A, Kong Z. Vision-Based 2D Navigation of Unmanned Aerial Vehicles in Riverine Environments with Imitation Learning. J Intell Robot Syst 2022. [DOI: 10.1007/s10846-022-01593-5]
Abstract
Many researchers have studied how to enable unmanned aerial vehicles (UAVs) to navigate autonomously in complex, natural environments. In this paper, we develop an imitation learning framework and use it to train navigation policies for a UAV flying inside complex, GPS-denied riverine environments. The UAV relies on a forward-pointing camera to perform reactive maneuvers and navigate in 2D space by adapting its heading. We compare the performance of a linear regression-based controller, an end-to-end neural network controller and a variational autoencoder (VAE)-based controller trained with a data aggregation method in simulation environments. The results show that the VAE-based controller outperforms the other two controllers in both training and testing environments, navigating the UAV over longer travel distances with a lower rate of pilot intervention.
9. Wang Y, Xue T, Li Q. A Robust Image-Sequence-Based Framework for Visual Place Recognition in Changing Environments. IEEE Trans Cybern 2022;52:152-163. [PMID: 32203043] [DOI: 10.1109/tcyb.2020.2977128]
Abstract
This article proposes a robust image-sequence-based framework to deal with two challenges of visual place recognition in changing environments: 1) viewpoint variations and 2) environmental condition variations. Our framework includes two main parts. The first part calculates the distance between two images from a reference image sequence and a query image sequence. In this part, we remove the deep features of non-overlapping content in the two images and use the remaining deep features to calculate the distance. As the deep features of non-overlapping content are caused by viewpoint variations, removing them improves the robustness to viewpoint variations. Building on the first part, the second part calculates the distances of all pairs of images from a reference image sequence and a query image sequence to obtain a distance matrix. Afterward, we design two convolutional operators to retrieve the distance submatrix with the minimum diagonal distribution. The minimum diagonal distribution contains more environmental information, which is insensitive to environmental condition variations. The experimental results suggest that our framework exhibits better performance than several state-of-the-art methods. Moreover, the analysis of runtime shows that our framework has the potential to satisfy real-time demands.
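The sequence-matching step admits a compact sketch: build the pairwise distance matrix, then pick the diagonal with the lowest mean distance, a crude stand-in for the paper's convolutional retrieval of the minimum diagonal distribution. Descriptor extraction is assumed to have happened upstream; in practice one would also require a minimum diagonal length.

```python
import numpy as np

def distance_matrix(ref_seq, query_seq):
    """Pairwise descriptor distances between a reference and a query sequence."""
    R = np.stack(ref_seq)    # shape (m, d)
    Q = np.stack(query_seq)  # shape (n, d)
    return np.linalg.norm(R[:, None, :] - Q[None, :, :], axis=2)

def best_diagonal(D):
    """Offset whose diagonal of D has the smallest mean distance.
    A matched pair of sequences shows up as a low-cost diagonal band."""
    m, n = D.shape
    costs = [(np.mean(np.diagonal(D, k)), k) for k in range(-(m - 1), n)]
    return min(costs)[1]
```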
10. Burnett K, Wu Y, Yoon DJ, Schoellig AP, Barfoot TD. Are We Ready for Radar to Replace Lidar in All-Weather Mapping and Localization? IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3192885]
Affiliation(s)
- Keenan Burnett, University of Toronto Institute for Aerospace Studies (UTIAS), Toronto, Ontario, Canada
- Yuchen Wu, University of Toronto Institute for Aerospace Studies (UTIAS), Toronto, Ontario, Canada
- David J. Yoon, University of Toronto Institute for Aerospace Studies (UTIAS), Toronto, Ontario, Canada
- Angela P. Schoellig, University of Toronto Institute for Aerospace Studies (UTIAS), Toronto, Ontario, Canada
- Timothy D. Barfoot, University of Toronto Institute for Aerospace Studies (UTIAS), Toronto, Ontario, Canada
11. Magnenat S, Colas F. A Bayesian tracker for synthesizing mobile robot behaviour from demonstration. Auton Robots 2021. [DOI: 10.1007/s10514-021-10019-4]
12. Xu M, Fischer T, Sünderhauf N, Milford M. Probabilistic Appearance-Invariant Topometric Localization With New Place Awareness. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3096745]
13. Bista SR, Ward B, Corke P. Image-Based Indoor Topological Navigation with Collision Avoidance for Resource-Constrained Mobile Robots. J Intell Robot Syst 2021. [DOI: 10.1007/s10846-021-01390-6]
14. Meng X, Xiang Y, Fox D. Learning Composable Behavior Embeddings for Long-Horizon Visual Navigation. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3060649]
15. Xu M, Sünderhauf N, Milford M. Probabilistic Visual Place Recognition for Hierarchical Localization. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2020.3040134]
18. Alberto Jordan M. Progressive Underwater Exploration with a Corridor-Based Navigation System. Underwater Work 2021. [DOI: 10.5772/intechopen.90934]
Abstract
The present work focuses on the exploration of underwater environments by autonomous underwater vehicles (AUVs) using vision-based navigation. An approach called Corridor SLAM (C-SLAM) was developed for this purpose. It implements a global exploration strategy that first creates a trunk corridor on the seabed and then branches out as far as possible in different directions to enlarge the explored region. The system guarantees the safe return of the vehicle to the starting point by accounting for a metric of corridor lengths tied to the vehicle's energy autonomy. Experimental trials in a basin with underwater scenarios demonstrated the feasibility of the approach.
19. Seçkin AÇ. A Natural Navigation Method for Following Path Memories from 2D Maps. Arab J Sci Eng 2020. [DOI: 10.1007/s13369-020-04784-0]
20. De Martini D, Gadd M, Newman P. kRadar++: Coarse-to-Fine FMCW Scanning Radar Localisation. Sensors (Basel) 2020;20:E6002. [PMID: 33105910] [PMCID: PMC7660181] [DOI: 10.3390/s20216002]
Abstract
This paper presents a novel two-stage system which integrates topological localisation candidates from a radar-only place recognition system with precise pose estimation using spectral landmark-based techniques. We prove that the recently available, seminal radar place recognition (RPR) and scan-matching subsystems are complementary, in a style reminiscent of the mapping and localisation systems underpinning visual teach-and-repeat (VTR), which have been demonstrated robustly over the last decade. Offline experiments are conducted on the most extensive radar-focused urban autonomy dataset available to the community, with performance comparing favourably with, and even rivalling, alternative state-of-the-art radar localisation systems. Specifically, we show the long-term durability of the approach, and of the sensing technology itself, for autonomous navigation. We suggest a range of sensible methods for tuning the system, all of which are suitable for online operation. For both tuning regimes, we achieve, over the course of a month of localisation trials against a single static map, high recall at high precision and much-reduced variance in erroneous metric pose estimation. As such, this work is a necessary first step towards a radar teach-and-repeat (RTR) system and the enablement of autonomy across extreme changes in appearance and inclement conditions.
Collapse
Affiliation(s)
- Daniele De Martini
- Department of Engineering Science, Oxford Robotics Institute, University of Oxford, Oxford OX1 3PJ, UK;
| | - Matthew Gadd
- Department of Engineering Science, Oxford Robotics Institute, University of Oxford, Oxford OX1 3PJ, UK;
| | | |
Collapse
21. Gao F, Wang L, Zhou B, Zhou X, Pan J, Shen S. Teach-Repeat-Replan: A Complete and Robust System for Aggressive Flight in Complex Environments. IEEE Trans Robot 2020. [DOI: 10.1109/tro.2020.2993215]
22. Hausler S, Chen Z, Hasselmo ME, Milford M. Bio-inspired multi-scale fusion. Biol Cybern 2020;114:209-229. [PMID: 32322978] [DOI: 10.1007/s00422-020-00831-z]
Abstract
We reveal how implementing the homogeneous, multi-scale mapping frameworks observed in the mammalian brain's mapping systems radically improves the performance of a range of current robotic localization techniques. Roboticists have developed a range of predominantly single- or dual-scale heterogeneous mapping approaches (typically locally metric and globally topological) that starkly contrast with neural encoding of space in mammalian brains: a multi-scale map underpinned by spatially responsive cells like the grid cells found in the rodent entorhinal cortex. Yet the full benefits of a homogeneous multi-scale mapping framework remain unknown in both robotics and biology: in robotics because of the focus on single- or two-scale systems and limits in the scalability and open-field nature of current test environments and benchmark datasets; in biology because of technical limitations when recording from rodents during movement over large areas. New global spatial databases with visual information varying over several orders of magnitude in scale enable us to investigate this question for the first time in real-world environments. In particular, we investigate and answer the following questions: why have multi-scale representations, how many scales should there be, what should the size ratio between consecutive scales be and how does the absolute scale size affect performance? We answer these questions by developing and evaluating a homogeneous, multi-scale mapping framework mimicking aspects of the rodent multi-scale map, but using current robotic place recognition techniques at each scale. Results in large-scale real-world environments demonstrate multi-faceted and significant benefits for mapping and localization performance and identify the key factors that determine performance.
23. Loquercio A, Kaufmann E, Ranftl R, Dosovitskiy A, Koltun V, Scaramuzza D. Deep Drone Racing: From Simulation to Reality With Domain Randomization. IEEE Trans Robot 2020. [DOI: 10.1109/tro.2019.2942989]
25. Zhang K, Yang Y, Fu M, Wang M. Traversability Assessment and Trajectory Planning of Unmanned Ground Vehicles with Suspension Systems on Rough Terrain. Sensors (Basel) 2019;19:4372. [PMID: 31658645] [PMCID: PMC6833019] [DOI: 10.3390/s19204372]
Abstract
This paper presents a traversability assessment method and a trajectory planning method. They are key features for the navigation of an unmanned ground vehicle (UGV) in a non-planar environment. In this work, a 3D light detection and ranging (LiDAR) sensor is used to obtain geometric information about a rough terrain surface. For a given SE(2) pose of the vehicle and a specific vehicle model, the SE(3) pose of the vehicle is estimated from the LiDAR points, and a traversability is then computed. The traversability tells the vehicle the effects of its interaction with the rough terrain. Note that the traversability is computed on demand during trajectory planning, so there is no explicit terrain discretization. The proposed trajectory planner finds an initial path through the non-holonomic A*, a modified form of the conventional A* planner. A path is a sequence of poses without timestamps. The initial path is then optimized in terms of traversability using the method of Lagrange multipliers. The optimization accounts for the model of the vehicle's suspension system; therefore, the optimized trajectory is dynamically feasible and the trajectory tracking error is small. The proposed methods were tested in both simulation and real-world experiments. The simulation experiments were conducted in Gazebo, a simulator that uses a physics engine to compute the vehicle motion, across various non-planar environments. The results indicate that the proposed methods can accurately estimate the SE(3) pose of the vehicle, and that the trajectory cost of the proposed planner is lower than that of other state-of-the-art trajectory planners.
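An on-demand traversability query in this spirit can be approximated by fitting a plane to the LiDAR points under a candidate footprint and scoring slope and roughness. This is a deliberately crude stand-in for the paper's SE(3) pose estimate with a suspension model; the threshold is an assumption.

```python
import numpy as np

def traversability(points, max_slope_deg=25.0):
    """Score terrain under a footprint from LiDAR points (N x 3 array).

    Fits z = ax + by + c by least squares; slope comes from the plane
    normal, roughness from the residuals. Returns a cost in [0, inf),
    np.inf when the slope exceeds the vehicle limit.
    """
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, _ = coeffs
    normal = np.array([-a, -b, 1.0])
    normal /= np.linalg.norm(normal)
    slope = np.degrees(np.arccos(normal[2]))       # tilt from vertical
    roughness = np.std(points[:, 2] - A @ coeffs)  # residual spread
    if slope > max_slope_deg:
        return np.inf                              # untraversable
    return slope / max_slope_deg + roughness
```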
Affiliation(s)
- Kai Zhang, School of Automation, Beijing Institute of Technology, Beijing 100081, China
- Yi Yang, School of Automation, Beijing Institute of Technology, Beijing 100081, China
- Mengyin Fu, School of Automation, Beijing Institute of Technology, Beijing 100081, China; School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China
- Meiling Wang, School of Automation, Beijing Institute of Technology, Beijing 100081, China
26. Tinchev G, Penate-Sanchez A, Fallon M. Learning to See the Wood for the Trees: Deep Laser Localization in Urban and Natural Environments on a CPU. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2895264]
27. Gao F, Wang L, Wang K, Wu W, Zhou B, Han L, Shen S. Optimal Trajectory Generation for Quadrotor Teach-and-Repeat. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2895110]
28. Loop Closure Detection Based on Multi-Scale Deep Feature Fusion. Appl Sci (Basel) 2019. [DOI: 10.3390/app9061120]
Abstract
Loop closure detection plays a very important role in mobile robot navigation. It is useful for achieving accurate navigation in complex environments and for reducing the cumulative error of the robot's pose estimation. Current mainstream methods are based on the visual bag-of-words model, but traditional image features are sensitive to illumination changes. This paper proposes a loop closure detection algorithm based on multi-scale deep feature fusion, which uses a convolutional neural network (CNN) to extract higher-level, more abstract features. To handle input images of different sizes and enrich the receptive fields of the feature extractor, multi-scale spatial pyramid pooling (SPP) is used to fuse the features. In addition, considering that each feature contributes differently to loop closure detection, the paper defines a distinguishability weight for features and uses it in the similarity measurement, which reduces the probability of false positives. The experimental results show that the algorithm achieves higher precision and recall than mainstream methods and is more robust to illumination changes.
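The distinguishability weighting can be illustrated with an assumed variance-based proxy: feature dimensions that barely vary across the stored database cannot tell places apart, so they receive low weight in the similarity measure. This is a sketch of the idea, not the paper's exact definition.

```python
import numpy as np

def distinguishability_weights(database):
    """Weight each feature dimension by its variance across stored images;
    near-constant dimensions cannot distinguish places. (Variance is an
    assumed proxy for the paper's distinguishability measure.)"""
    F = np.stack(database)  # (num_images, feature_dim)
    w = F.var(axis=0)
    return w / (w.sum() + 1e-12)

def weighted_similarity(f1, f2, w):
    """Weighted cosine similarity between two feature vectors."""
    num = np.sum(w * f1 * f2)
    den = np.sqrt(np.sum(w * f1 * f1) * np.sum(w * f2 * f2)) + 1e-12
    return num / den
```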
29. Warren M, Greeff M, Patel B, Collier J, Schoellig AP, Barfoot TD. There's No Place Like Home: Visual Teach and Repeat for Emergency Return of Multirotor UAVs During GPS Failure. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2018.2883408]
30. MacTavish K, Paton M, Barfoot TD. Selective memory: Recalling relevant experience for long-term visual localization. J Field Robot 2018. [DOI: 10.1002/rob.21838]
Affiliation(s)
- Kirk MacTavish, Institute for Aerospace Studies, Faculty of Applied Science & Engineering, University of Toronto, Toronto, Ontario, Canada
- Michael Paton, Institute for Aerospace Studies, Faculty of Applied Science & Engineering, University of Toronto, Toronto, Ontario, Canada
- Timothy D. Barfoot, Institute for Aerospace Studies, Faculty of Applied Science & Engineering, University of Toronto, Toronto, Ontario, Canada
31. Kunze L, Hawes N, Duckett T, Hanheide M, Krajník T. Artificial Intelligence for Long-Term Robot Autonomy: A Survey. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2860628]
32. Peretroukhin V, Clement L, Kelly J. Inferring sun direction to improve visual odometry: A deep learning approach. Int J Rob Res 2018. [DOI: 10.1177/0278364917749732]
Abstract
We present a method to incorporate global orientation information from the sun into a visual odometry pipeline using only the existing image stream, in which the sun is typically not visible. We leverage recent advances in Bayesian convolutional neural networks (BCNNs) to train and implement a sun detection model (dubbed Sun-BCNN) that infers a 3D sun direction vector from a single RGB image. Crucially, our method also computes a principled uncertainty associated with each prediction, using a Monte Carlo dropout scheme. We incorporate this uncertainty into a sliding window stereo visual odometry pipeline where accurate uncertainty estimates are critical for optimal data fusion. We evaluate our method on 21.6 km of urban driving data from the KITTI odometry benchmark where it achieves a median error of approximately 12° and yields improvements of up to 42% in translational average root mean squared error (ARMSE) and 32% in rotational ARMSE compared with standard visual odometry. We further evaluate our method on an additional 10 km of visual navigation data from the Devon Island Rover Navigation dataset, achieving a median error of less than 8° and yielding similar improvements in estimation error. In addition to reporting on the accuracy of Sun-BCNN and its impact on visual odometry, we analyze the sensitivity of our model to cloud cover, investigate the possibility of model transfer between urban and planetary analogue environments, and examine the impact of different methods for computing the mean and covariance of a norm-constrained vector on the accuracy and consistency of the estimated sun directions. Finally, we release Sun-BCNN as open-source software.
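The Monte Carlo dropout scheme itself is simple to state: run several stochastic forward passes with dropout left on at test time and treat the spread of the predictions as uncertainty. Below is a sketch for a direction-regressing network; the network is abstracted away as an assumed `stochastic_forward` callable, and unit-vector averaging is used for directions.

```python
import numpy as np

def mc_dropout_direction(stochastic_forward, image, n_samples=50):
    """Monte Carlo dropout for a direction-regressing network.

    stochastic_forward(image) must run the model with dropout ENABLED and
    return one 3-D direction estimate per call (the network is not shown).
    Returns the mean unit vector and a scalar dispersion (higher = less sure).
    """
    samples = np.stack([stochastic_forward(image) for _ in range(n_samples)])
    samples /= np.linalg.norm(samples, axis=1, keepdims=True)
    mean = samples.mean(axis=0)
    resultant = np.linalg.norm(mean)  # close to 1 when the passes agree
    return mean / resultant, 1.0 - resultant
```

The returned dispersion can then gate how strongly the sun observation is weighted when fused into the sliding-window visual odometry.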
Affiliation(s)
- Valentin Peretroukhin, Space & Terrestrial Autonomous Robotic Systems (STARS) Laboratory, University of Toronto Institute for Aerospace Studies (UTIAS), Canada
- Lee Clement, Space & Terrestrial Autonomous Robotic Systems (STARS) Laboratory, University of Toronto Institute for Aerospace Studies (UTIAS), Canada
- Jonathan Kelly, Space & Terrestrial Autonomous Robotic Systems (STARS) Laboratory, University of Toronto Institute for Aerospace Studies (UTIAS), Canada
33. Tang L, Wang Y, Ding X, Yin H, Xiong R, Huang S. Topological local-metric framework for mobile robots navigation: a long term perspective. Auton Robots 2018. [DOI: 10.1007/s10514-018-9724-7]
34. Stelzer A, Vayugundla M, Mair E, Suppa M, Burgard W. Towards efficient and scalable visual homing. Int J Rob Res 2018. [DOI: 10.1177/0278364918761115]
Affiliation(s)
- Annett Stelzer, German Aerospace Center (DLR), Institute of Robotics and Mechatronics, Oberpfaffenhofen, Germany
- Mallikarjuna Vayugundla, German Aerospace Center (DLR), Institute of Robotics and Mechatronics, Oberpfaffenhofen, Germany
- Elmar Mair, German Aerospace Center (DLR), Institute of Robotics and Mechatronics, Oberpfaffenhofen, Germany
- Wolfram Burgard, Department of Computer Science, University of Freiburg, Freiburg, Germany
35. A Node-Based Method for SLAM Navigation in Self-Similar Underwater Environments: A Case Study. Robotics 2017. [DOI: 10.3390/robotics6040029]
36. Meier K, Chung S, Hutchinson S. Visual-inertial curve simultaneous localization and mapping: Creating a sparse structured world without feature points. J Field Robot 2017. [DOI: 10.1002/rob.21759]
Affiliation(s)
- Kevin Meier, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801
- Soon-Jo Chung, California Institute of Technology, 1200 East California Boulevard, MC 105-50, Pasadena, California 91125
- Seth Hutchinson, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801
37. Gurău C, Rao D, Tong CH, Posner I. Learn from experience: Probabilistic prediction of perception performance to avoid failure. Int J Rob Res 2017. [DOI: 10.1177/0278364917730603]
Abstract
Despite significant advances in machine learning and perception over the past few decades, perception algorithms can still be unreliable when deployed in challenging, time-varying environments. When these systems are used for autonomous decision-making, such as in self-driving vehicles, the impact of their mistakes can be catastrophic. As such, it is important to characterize the performance of the system and predict when and where it may fail in order to take appropriate action. While similar in spirit to the idea of introspection, this work introduces a new paradigm for predicting the likely performance of a robot's perception system based on past experience in the same workspace. In particular, we propose two models that probabilistically predict perception performance from observations gathered over time. While both approaches are place-specific, the second approach additionally considers appearance similarity when incorporating past observations. We evaluate our method in a classical decision-making scenario in which the robot must choose when and where to drive autonomously, using 60 km of driving data from an urban environment. Results demonstrate that both approaches lead to fewer false decisions (in terms of incorrectly offering or denying autonomy) for two different detector models, and show that leveraging visual appearance within a state-of-the-art navigation framework increases the accuracy of our performance predictions.
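A minimal place-specific model in this spirit is a Beta-Bernoulli belief over detector success per place, updated after each traversal and queried before offering autonomy. This is only the skeleton of the idea under assumed names; the paper's models are richer (the second is additionally appearance-aware).

```python
from collections import defaultdict

class PerceptionPerformanceModel:
    """Place-indexed Beta-Bernoulli belief over perception success."""

    def __init__(self, prior_a=1.0, prior_b=1.0):
        # Beta(prior_a, prior_b) prior on the success rate of each place.
        self.counts = defaultdict(lambda: [prior_a, prior_b])

    def update(self, place_id, success):
        """Record one traversal outcome (success in {0, 1}) at a place."""
        a, b = self.counts[place_id]
        self.counts[place_id] = [a + success, b + (1 - success)]

    def offer_autonomy(self, place_id, required=0.9):
        """Offer autonomy only where the posterior mean success rate is high."""
        a, b = self.counts[place_id]
        return a / (a + b) >= required
```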
Affiliation(s)
- Corina Gurău, Oxford Robotics Institute, Oxford University, UK
- Dushyant Rao, Oxford Robotics Institute, Oxford University, UK
38. Krüsi P, Furgale P, Bosse M, Siegwart R. Driving on Point Clouds: Motion Planning, Trajectory Optimization, and Terrain Assessment in Generic Nonplanar Environments. J Field Robot 2016. [DOI: 10.1002/rob.21700]
Affiliation(s)
- Philipp Krüsi, Autonomous Systems Lab, ETH Zurich, 8092 Zurich, Switzerland
- Paul Furgale, Autonomous Systems Lab, ETH Zurich, 8092 Zurich, Switzerland
- Michael Bosse, Autonomous Systems Lab, ETH Zurich, 8092 Zurich, Switzerland
39. Paton M, Pomerleau F, MacTavish K, Ostafew CJ, Barfoot TD. Expanding the Limits of Vision-based Localization for Long-term Route-following Autonomy. J Field Robot 2016. [DOI: 10.1002/rob.21669]
Affiliation(s)
- Michael Paton, Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada
- François Pomerleau, Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada
- Kirk MacTavish, Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada
- Chris J. Ostafew, Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada
- Timothy D. Barfoot, Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada
40. Ostafew CJ, Schoellig AP, Barfoot TD. Robust Constrained Learning-based NMPC enabling reliable mobile robot path tracking. Int J Rob Res 2016. [DOI: 10.1177/0278364916645661]
Abstract
This paper presents a Robust Constrained Learning-based Nonlinear Model Predictive Control (RC-LB-NMPC) algorithm for path-tracking in off-road terrain. For mobile robots, constraints may represent solid obstacles or localization limits. As a result, constraint satisfaction is required for safety. Constraint satisfaction is typically guaranteed through the use of accurate, a priori models or robust control. However, accurate models are generally not available for off-road operation. Furthermore, robust controllers are often conservative, since model uncertainty is not updated online. In this work our goal is to use learning to generate low-uncertainty, non-parametric models in situ. Based on these models, the predictive controller computes both linear and angular velocities in real-time, such that the robot drives at or near its capabilities while respecting path and localization constraints. Localization for the controller is provided by an on-board, vision-based mapping and navigation system enabling operation in large-scale, off-road environments. The paper presents experimental results, including over 5 km of travel by a 900 kg skid-steered robot at speeds of up to 2.0 m/s. The result is a robust, learning controller that provides safe, conservative control during initial trials when model uncertainty is high and converges to high-performance, optimal control during later trials when model uncertainty is reduced with experience.
41. Clement L, Kelly J, Barfoot TD. Robust Monocular Visual Teach and Repeat Aided by Local Ground Planarity and Color-constant Imagery. J Field Robot 2016. [DOI: 10.1002/rob.21655]
Affiliation(s)
- Lee Clement, Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada M3H 5T6
- Jonathan Kelly, Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada M3H 5T6
- Timothy D. Barfoot, Institute for Aerospace Studies, University of Toronto, Toronto, ON, Canada M3H 5T6
42. Lowry S, Milford MJ. Supervised and Unsupervised Linear Learning Techniques for Visual Place Recognition in Changing Environments. IEEE Trans Robot 2016. [DOI: 10.1109/tro.2016.2545711]
43. Nguyen T, Mann GKI, Gosine RG, Vardy A. Appearance-Based Visual-Teach-And-Repeat Navigation Technique for Micro Aerial Vehicle. J Intell Robot Syst 2016. [DOI: 10.1007/s10846-015-0320-1]
44. Lowry S, Sünderhauf N, Newman P, Leonard JJ, Cox D, Corke P, Milford MJ. Visual Place Recognition: A Survey. IEEE Trans Robot 2016. [DOI: 10.1109/tro.2015.2496823]
45. Vysotska O, Stachniss C. Lazy Data Association For Image Sequences Matching Under Substantial Appearance Changes. IEEE Robot Autom Lett 2016. [DOI: 10.1109/lra.2015.2512936]
46. Mallios A, Ridao P, Ribas D, Carreras M, Camilli R. Toward Autonomous Exploration in Confined Underwater Environments. J Field Robot 2015. [DOI: 10.1002/rob.21640]
Affiliation(s)
- Angelos Mallios, Applied Ocean Physics & Engineering, Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543
- Pere Ridao, Computer Vision and Robotics Institute, Universitat de Girona, 17003 Girona, Spain
- David Ribas, Computer Vision and Robotics Institute, Universitat de Girona, 17003 Girona, Spain
- Marc Carreras, Computer Vision and Robotics Institute, Universitat de Girona, 17003 Girona, Spain
- Richard Camilli, Applied Ocean Physics & Engineering, Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543
47. Mühlfellner P, Bürki M, Bosse M, Derendarz W, Philippsen R, Furgale P. Summary Maps for Lifelong Visual Localization. J Field Robot 2015. [DOI: 10.1002/rob.21595]
Affiliation(s)
- Peter Mühlfellner, Department for Driver Assistance and Integrated Safety, Volkswagen AG, Letter Box 011/1777, Wolfsburg, Germany, and Halmstad University
- Mathias Bürki, Autonomous Systems Lab, ETH Zürich, Leonhardstrasse 21, Zürich, Switzerland
- Michael Bosse, Autonomous Systems Lab, ETH Zürich, Leonhardstrasse 21, Zürich, Switzerland
- Wojciech Derendarz, Department for Driver Assistance and Integrated Safety, Volkswagen AG, Letter Box 011/1777, Wolfsburg, Germany
- Roland Philippsen, Intelligent Systems Lab, Halmstad University, Kristian IV's väg 3, Halmstad, Sweden
- Paul Furgale, Autonomous Systems Lab, ETH Zürich, Leonhardstrasse 21, Zürich, Switzerland
48. Ostafew CJ, Schoellig AP, Barfoot TD, Collier J. Learning-based Nonlinear Model Predictive Control to Improve Vision-based Mobile Robot Path Tracking. J Field Robot 2015. [DOI: 10.1002/rob.21587]
Affiliation(s)
- Chris J. Ostafew, Institute for Aerospace Studies, University of Toronto, Toronto, Ontario, Canada
- Angela P. Schoellig, Institute for Aerospace Studies, University of Toronto, Toronto, Ontario, Canada
- Timothy D. Barfoot, Institute for Aerospace Studies, University of Toronto, Toronto, Ontario, Canada
- Jack Collier, Defence Research and Development Canada, Suffield, Alberta, Canada
49. Tribou MJ, Harmat A, Wang DW, Sharf I, Waslander SL. Multi-camera parallel tracking and mapping with non-overlapping fields of view. Int J Rob Res 2015. [DOI: 10.1177/0278364915571429]
Abstract
A novel real-time pose estimation system is presented for solving the visual simultaneous localization and mapping problem using a rigid set of central cameras arranged such that there is no overlap in their fields-of-view. A new parameterization for point feature position using a spherical coordinate update is formulated which isolates system parameters dependent on global scale, allowing the shape parameters of the system to converge despite the scale remaining uncertain. Furthermore, an initialization scheme is proposed from which the optimization will converge accurately using only the measurements from the cameras at the first time step. The algorithm is implemented and verified in experiments with a camera cluster constructed using multiple perspective cameras mounted on a multirotor aerial vehicle and augmented with tracking markers to collect high-precision ground-truth motion measurements from an optical indoor positioning system. The accuracy and performance of the proposed pose estimation system are confirmed for various motion profiles in both indoor and challenging outdoor environments, despite no overlap in the camera fields-of-view.
Affiliation(s)
- Michael J. Tribou, Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON, Canada
- Adam Harmat, Department of Mechanical Engineering, McGill University, Montreal, QC, Canada
- David W.L. Wang, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada
- Inna Sharf, Department of Mechanical Engineering, McGill University, Montreal, QC, Canada
- Steven L. Waslander, Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON, Canada
50. De Cristóforis P, Nitsche M, Krajník T, Pire T, Mejail M. Hybrid vision-based navigation for mobile robots in mixed indoor/outdoor environments. Pattern Recognit Lett 2015. [DOI: 10.1016/j.patrec.2014.10.010]