1
Abstract
Data analysis methods have scarcely kept pace with the rapid increase in Earth observations, spurring the development of novel algorithms, storage methods, and computational techniques. For scientists interested in Mars, the problem is always the same: there is simultaneously never enough of the right data and an overwhelming amount of data in total. Finding sufficient data needles in a haystack to test a hypothesis requires hours of manual data screening, and more needles and hay are added constantly. To date, the vast majority of Martian research has been focused on either one-off local/regional studies or on hugely time-consuming manual global studies. Machine learning in its numerous forms can be helpful for such future work. Machine learning has the potential to help map and classify a large variety of both features and properties on the surface of Mars and to aid in the planning and execution of future missions. Here, we outline the current extent of machine learning as applied to Mars, summarize why machine learning should be an important tool for planetary geomorphology in particular, and suggest numerous research avenues and funding priorities for future efforts. We conclude that: (1) moving toward methods that require less human input (i.e., self- or semi-supervised) is an important paradigm shift for Martian applications, (2) new robust methods using generative adversarial networks to generate synthetic high-resolution digital terrain models represent an exciting new avenue for Martian geomorphologists, (3) more effort and money must be directed toward developing standardized datasets and benchmark tests, and (4) the community needs a large-scale, generalized, and programmatically accessible geographic information system (GIS).
2
Ou J, Huang P, Zhou J, Zhao Y, Lin L. Automatic Extrinsic Calibration of 3D LIDAR and Multi-Cameras Based on Graph Optimization. Sensors 2022; 22:2221. PMID: 35336392; PMCID: PMC8954836; DOI: 10.3390/s22062221.
Abstract
In recent years, multi-sensor fusion technology has made enormous progress in 3D reconstruction, surveying and mapping, autonomous driving, and other related fields, and extrinsic calibration is a necessary condition for multi-sensor fusion applications. This paper proposes a 3D LIDAR-to-camera automatic calibration framework based on graph optimization. The system automatically identifies the position of the calibration pattern, builds a set of virtual feature point clouds, and can calibrate the LIDAR and multiple cameras simultaneously. To test this framework, a multi-sensor system was formed from a mobile robot equipped with a LIDAR and monocular and binocular cameras, and the pairwise calibration of the LIDAR with the two cameras was evaluated quantitatively and qualitatively. The results show that this method produces more accurate calibration results than the state of the art: the average error on the camera normalization plane is 0.161 mm, outperforming existing calibration methods. Because graph optimization also refines the original point cloud while optimizing the extrinsic parameters between the sensors, the method can effectively correct errors introduced during data collection and is therefore robust to bad data.
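The core problem above can be illustrated with a much simpler classical building block: once 3D-3D correspondences between the LiDAR frame and a camera frame are available (e.g., from a detected calibration pattern), the rigid extrinsic transform can be recovered in closed form with the Kabsch algorithm. This is a minimal sketch with synthetic data, not the paper's graph-optimization pipeline, which additionally refines the point cloud jointly with the extrinsics:

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) mapping points P (LiDAR frame) onto Q (camera frame)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.5])
Q = P @ R_true.T + t_true
R_est, t_est = kabsch(P, Q)
```

A closed-form solve like this assumes noise-free correspondences; a graph-optimization formulation can instead weight residuals and re-estimate the feature points themselves.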
Affiliation(s)
- Jinshun Ou, Panling Huang (corresponding author), Jun Zhou, Yifan Zhao, Lebin Lin
- School of Mechanical Engineering, Shandong University, Jinan 250061, China
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture, Ministry of Education, Jinan 250061, China
3
Robotic Mapping Approach under Illumination-Variant Environments at Planetary Construction Sites. Remote Sensing 2022. DOI: 10.3390/rs14041027.
Abstract
In planetary construction, the semiautonomous teleoperation of robots is expected to perform complex tasks for site preparation and infrastructure emplacement. A highly detailed 3D map is essential for construction planning and management. However, the planetary surface imposes mapping restrictions due to its rugged and homogeneous terrain. Additionally, changes in illumination cause the mapping result (a 3D point-cloud map) to have inconsistent color properties that hamper understanding of the topographic properties of a worksite. This paper therefore proposes a robotic construction-mapping approach that is robust to illumination-variant environments. The approach leverages a deep learning-based low-light image enhancement (LLIE) method to improve the mapping capability of a visual simultaneous localization and mapping (SLAM)-based robotic mapping method. In the experiment, the robotic mapping system in an emulated planetary worksite collected terrain images during the daytime, from noon to late afternoon. Two sets of point-cloud maps, created from the original and the enhanced terrain images, were compared. The results showed that the LLIE method significantly enhanced brightness while preserving the inherent colors of the original terrain images, which in turn increased the visibility and overall accuracy of the point-cloud map.
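The paper uses a learned LLIE network; for intuition, the effect it targets can be sketched with a classical gamma-correction baseline, which brightens dark pixels far more than bright ones. Everything here (the function name, the gamma value, the patches) is illustrative, not the paper's method:

```python
import numpy as np

def enhance_low_light(img, gamma=0.4):
    """Classical gamma-correction baseline: brightens dark regions while
    leaving already well-exposed pixels nearly unchanged (img in [0, 1])."""
    return np.clip(img, 0.0, 1.0) ** gamma

dark = np.full((4, 4), 0.1)    # underexposed patch: mean lifted to ~0.40
bright = np.full((4, 4), 0.9)  # well-exposed patch: mean stays near 0.9
lifted = enhance_low_light(dark)
```

A learned LLIE model goes further by adapting the enhancement spatially and preserving color ratios, which a single global gamma cannot do.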
4
Visual SLAM-Based Robotic Mapping Method for Planetary Construction. Sensors 2021; 21:7715. PMID: 34833786; PMCID: PMC8621460; DOI: 10.3390/s21227715.
Abstract
With the recent discovery of water ice and lava tubes on the Moon and Mars, along with the development of in-situ resource utilization (ISRU) technology, recent planetary exploration has focused on rover (or lander)-based surface missions toward base construction for long-term human exploration and habitation. However, 3D terrain maps, mostly based on orbiters' terrain images, have insufficient resolution for construction purposes. This paper therefore introduces a visual simultaneous localization and mapping (SLAM)-based robotic mapping method employing a stereo camera system on a rover. The method uses S-PTAM as a base framework and combines it with a disparity map from self-supervised deep learning to enhance mapping capability in the homogeneous and unstructured environments of planetary terrains. The overall performance of the proposed method was evaluated in an emulated planetary terrain, and the results validated its potential.
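The stereo camera system underlying such a method recovers depth from disparity by triangulation, Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. A minimal sketch with a hypothetical rover rig (all numbers are illustrative):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Triangulate metric depth from stereo disparity: Z = f * B / d.
    Smaller disparity means the point is farther from the rig."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rover stereo rig: 700 px focal length, 12 cm baseline.
Z = disparity_to_depth(disparity_px=14.0, focal_px=700.0, baseline_m=0.12)  # ~6.0 m
```

This also shows why homogeneous planetary terrain is hard: the learned disparity map exists precisely because matching d per pixel is unreliable on texture-poor surfaces.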
5
Abstract
This paper presents a new algorithm for lidar data assimilation relying on a new forward model. Current mapping algorithms suffer from multiple shortcomings, which can be traced to the lack of a clear forward model. To address these issues, we provide a mathematical framework in which we show how the use of coarse model parameters results in a new data assimilation problem. Understanding this new problem proves essential to deriving sound inference algorithms. We introduce a model parameter specifically tailored to lidar data assimilation, which closely relates to the local mean free path. Using this new model parameter, we derive its associated forward model and provide the resulting mapping algorithm. We further discuss how our proposed algorithm relates to usual occupancy grid mapping. Finally, we present an example with real lidar measurements.
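For context, the "usual occupancy grid mapping" the authors relate their algorithm to follows a standard log-odds update along each beam: traversed cells become more likely free, the cell containing the return more likely occupied. A minimal 1D sketch with assumed log-odds increments (not the paper's new forward model):

```python
import numpy as np

L_FREE, L_OCC = -0.4, 0.85   # log-odds increments (assumed inverse sensor model)

def integrate_ray(logodds, cells, hit):
    """Standard occupancy-grid update for one beam: cells the beam passes
    through accumulate free evidence; the return cell accumulates occupied evidence."""
    for c in cells:
        logodds[c] += L_FREE
    logodds[hit] += L_OCC
    return logodds

def prob(logodds):
    """Convert log-odds back to occupancy probability."""
    return 1.0 / (1.0 + np.exp(-logodds))

grid = np.zeros(10)          # 1D grid, prior p = 0.5 everywhere
grid = integrate_ray(grid, cells=range(0, 6), hit=6)
```

The log-odds form makes repeated beam integration a simple addition per cell, which is what makes the grid update cheap.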
6
INS Error Estimation Based on an ANFIS and Its Application in Complex and Covert Surroundings. ISPRS International Journal of Geo-Information 2021. DOI: 10.3390/ijgi10060388.
Abstract
Inertial navigation is a crucial part of vehicle navigation systems in complex and covert surroundings. To address the low accuracy of vehicle inertial navigation in such surroundings, we propose an inertial navigation error estimation based on an adaptive neuro-fuzzy inference system (ANFIS) that quickly and accurately outputs the position error of a vehicle end-to-end. The new system was tested using both single-sequence and multi-sequence vehicle data from the KITTI dataset, and the results were compared with an inertial navigation system (INS) position-solution method, an artificial neural network (ANN) method, and a long short-term memory (LSTM) method. The tests indicated that the accumulated position errors in the single-sequence and multi-sequence experiments decreased from 9.83% and 4.14% to 0.45% and 0.61%, respectively, when using the ANFIS, significantly less than with the other three approaches. This suggests that an ANFIS can considerably improve the positioning accuracy of inertial navigation, which is significant for vehicle inertial navigation in complex and covert surroundings.
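The inference core that an ANFIS tunes by learning is Sugeno-style fuzzy inference: membership functions produce rule firing strengths, which weight linear consequents. A minimal two-rule sketch; the rules, membership parameters, and the residual-to-error mapping are purely illustrative, not the paper's trained system:

```python
import numpy as np

def gauss_mf(x, c, s):
    """Gaussian membership function centered at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_infer(x, rules):
    """First-order Sugeno inference: weighted average of linear consequents
    a*x + b, weighted by each rule's firing strength. ANFIS learns the
    membership parameters (c, s) and consequent parameters (a, b)."""
    w = np.array([gauss_mf(x, c, s) for (c, s, _, _) in rules])
    y = np.array([a * x + b for (_, _, a, b) in rules])
    return float((w * y).sum() / w.sum())

# Two hypothetical rules mapping a velocity residual to a position-error estimate:
rules = [(0.0, 1.0, 0.1, 0.0),   # "residual small" -> small error
         (5.0, 1.0, 0.5, 1.0)]   # "residual large" -> larger error
```

In a full ANFIS, these parameters are fitted by hybrid least-squares/backpropagation rather than set by hand.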
7
Abstract
By moving a commercial 2D LiDAR, 3D maps of the environment can be built from the 2D LiDAR's data and its movements. Compared to a commercial 3D LiDAR, a moving 2D LiDAR is more economical. A series of problems must be solved for a moving 2D LiDAR to perform better, among them accuracy and real-time performance. Solving them requires estimating the movements of the 2D LiDAR and identifying and removing moving objects in the environment; more specifically, it involves calibrating the installation error between the 2D LiDAR and the moving unit, estimating the motion of the moving unit, and identifying moving objects at low scanning frequencies. Since actual applications are mostly dynamic, with a moving 2D LiDAR operating among multiple moving objects, we believe that accurately constructing 3D maps in dynamic environments will be an important future research topic for moving 2D LiDAR; how to handle moving objects in a dynamic environment with a moving 2D LiDAR has not been solved by previous research.
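The basic geometry behind such systems is simple: each 2D scan is formed in the sensor's scan plane and then rotated by the platform's measured angle into 3D. A minimal sketch assuming the platform rotates the scan plane about the sensor's x-axis (the axis convention and function name are assumptions for illustration):

```python
import numpy as np

def scan_to_3d(ranges, scan_angles, platform_angle):
    """Project one 2D scan into 3D given the rotating platform's angle.
    Points are formed in the scan plane (z = 0), then rotated about x."""
    x = ranges * np.cos(scan_angles)
    y = ranges * np.sin(scan_angles)
    z = np.zeros_like(ranges)
    c, s = np.cos(platform_angle), np.sin(platform_angle)
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0,   c,  -s],
                   [0.0,   s,   c]])
    return np.stack([x, y, z], axis=1) @ Rx.T

# Two points at 2 m: one along the rotation axis, one perpendicular to it.
pts = scan_to_3d(np.array([2.0, 2.0]),
                 np.array([0.0, np.pi / 2]),
                 platform_angle=np.pi / 2)
```

The calibration and motion-estimation problems the paragraph lists arise exactly because `platform_angle` and the sensor-to-platform mounting are never known perfectly.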
8
Yang SP, Seo YH, Kim JB, Kim H, Jeong KH. Optical MEMS devices for compact 3D surface imaging cameras. Micro and Nano Systems Letters 2019. DOI: 10.1186/s40486-019-0087-4.
9
Bai C, Guo J. Uncertainty-Based Vibration/Gyro Composite Planetary Terrain Mapping. Sensors 2019; 19:2681. PMID: 31200583; PMCID: PMC6631722; DOI: 10.3390/s19122681.
Abstract
Accurate perception of the detected terrain is a precondition for a planetary rover to perform its mission. However, terrain measurement based on vision and LIDAR is subject to environmental changes such as strong illumination and dust storms. In this paper, considering the influence of uncertainty in the detection process, a vibration/gyro-coupled terrain estimation method based on multipoint ranging information is proposed. The terrain update model is derived by analyzing the measurement uncertainty and motion uncertainty. Using a Clearpath Jackal unmanned vehicle, terrain-mapping accuracy tests were completed in a ROS (Robot Operating System) simulation environment, an indoor OptiTrack-assisted environment, and an outdoor soil environment. The results show that the proposed algorithm has a high reconstruction ability for terrain of a given scale: the reconstruction accuracy in the above test environments is within 1 cm, 2 cm, and 6 cm, respectively.
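The uncertainty-weighted update at the heart of such terrain estimation can be sketched as a per-cell Kalman-style fusion of height measurements: each cell keeps an elevation and a variance, and new range-derived heights are blended in proportion to their relative uncertainty. The variances below are illustrative, not the paper's derived measurement/motion model:

```python
def fuse_elevation(h, var, z, var_z):
    """One-cell Kalman-style fusion of a new height measurement z (variance
    var_z) into the current elevation estimate h (variance var). The gain k
    trusts the measurement more when the current estimate is uncertain."""
    k = var / (var + var_z)
    return h + k * (z - h), (1.0 - k) * var

h, v = 0.0, 1.0                           # uninformed prior for one grid cell
h, v = fuse_elevation(h, v, z=0.5, var_z=1.0)   # h -> 0.25, variance -> 0.5
```

Repeating the update with further measurements drives the cell variance down monotonically, which is why fused maps stabilize even from noisy multipoint ranging.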
Affiliation(s)
- Chengchao Bai, Jifeng Guo
- School of Astronautics, Harbin Institute of Technology, Harbin 150001, China
10
Morales J, Plaza-Leiva V, Mandow A, Gomez-Ruiz JA, Serón J, García-Cerezo A. Analysis of 3D Scan Measurement Distribution with Application to a Multi-Beam Lidar on a Rotating Platform. Sensors 2018; 18:395. PMID: 29385705; PMCID: PMC5856095; DOI: 10.3390/s18020395.
Abstract
Multi-beam lidar (MBL) rangefinders are becoming increasingly compact, light, and accessible 3D sensors, but they offer limited vertical resolution and field of view. The addition of a degree of freedom to build a rotating multi-beam lidar (RMBL) has the potential to become a common solution for affordable, rapid, full-3D, high-resolution scans. However, the overlapping of multiple beams caused by rotation yields scanning patterns that are more complex than in a rotating single-beam lidar (RSBL). In this paper, we propose a simulation-based methodology for analyzing 3D scanning patterns, which is applied to investigate the scan measurement distribution produced by the RMBL configuration. The novel contributions include: (i) the adaptation of a recent spherical reformulation of Ripley's K function to assess 3D sensor data distribution on a hollow-sphere simulation; (ii) a comparison, both qualitative and quantitative, between scan patterns produced by an ideal RMBL based on a Velodyne VLP-16 (Puck) and those of other 3D scan alternatives (i.e., rotating 2D lidar and MBL); and (iii) a new RMBL implementation consisting of a portable tilting platform for VLP-16 scanners, presented as a case study for measurement distribution analysis as well as for the discussion of actual scans from representative environments. Results indicate that, despite the particular sampling patterns given by an RMBL, its homogeneity even improves on that of an equivalent RSBL.
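A naive estimator of Ripley's K on the unit sphere, of the kind adapted here for beam-direction distributions, counts neighbour pairs within an angular radius; for a uniform (completely random) pattern it approximates the spherical-cap area 2π(1 − cos r). A minimal sketch, not the authors' reformulation:

```python
import numpy as np

def spherical_ripley_k(dirs, r):
    """Naive Ripley's K on the unit sphere: mean count of other directions
    within angular distance r of each direction, scaled so a uniform
    pattern gives roughly the cap area 2*pi*(1 - cos(r))."""
    n = len(dirs)
    ang = np.arccos(np.clip(dirs @ dirs.T, -1.0, 1.0))  # pairwise angles
    np.fill_diagonal(ang, np.inf)                       # exclude self-pairs
    pairs = np.count_nonzero(ang <= r)
    return 4.0 * np.pi * pairs / (n * (n - 1))

# Uniform random unit directions (complete spatial randomness on the sphere).
rng = np.random.default_rng(2)
v = rng.normal(size=(100, 3))
dirs = v / np.linalg.norm(v, axis=1, keepdims=True)
k_half = spherical_ripley_k(dirs, 0.5)
k_full = spherical_ripley_k(dirs, np.pi)   # every pair counted -> 4*pi
```

Comparing such a curve between an RMBL and an RSBL scan is one way to quantify which distributes measurements more homogeneously.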
Affiliation(s)
- Jesús Morales, Victoria Plaza-Leiva, Anthony Mandow, Javier Serón, Alfonso García-Cerezo
- Robotics and Mechatronics Lab, Andalucía Tech, Universidad de Málaga, 29071 Málaga, Spain
11
Gonzalez R, Iagnemma K. Slippage estimation and compensation for planetary exploration rovers. State of the art and future challenges. Journal of Field Robotics 2017. DOI: 10.1002/rob.21761.
Affiliation(s)
- Ramon Gonzalez, Karl Iagnemma
- Robotic Mobility Group, Massachusetts Institute of Technology, Cambridge, Massachusetts
12
Plaza-Leiva V, Gomez-Ruiz JA, Mandow A, García-Cerezo A. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning. Sensors 2017; 17:594. PMID: 28294963; PMCID: PMC5375880; DOI: 10.3390/s17030594.
Abstract
Improving the effectiveness of spatial shape feature classification from 3D lidar data is highly relevant because it is widely used as a fundamental step toward higher-level scene-understanding challenges in autonomous vehicles and terrestrial robots. In this context, computing neighborhoods for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation, in which points in each non-overlapping voxel of a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature-vector definitions based on principal component analysis for scatter, tubular, and planar shapes. Moreover, the feasibility of the approach is evaluated by implementing a neural network (NN) method previously proposed by the authors, as well as three other supervised learning classifiers found in scene-processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing-time measurements confirm the benefits of the NN classifier and the feasibility of the voxel-based neighborhood.
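PCA-based shape features of this kind are commonly built from the sorted eigenvalues of a support region's covariance matrix; a minimal sketch of one common scatter/linear/planar definition (the paper's five feature-vector variants may differ in detail):

```python
import numpy as np

def shape_features(points):
    """Eigenvalue-based shape descriptors for one voxel's support region:
    scatter (sphericity), linear (tubularity), planar (flatness),
    computed from the sorted covariance eigenvalues l1 >= l2 >= l3."""
    cov = np.cov(points.T)
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]
    l1, l2, l3 = ev / ev.sum()
    return {"scatter": l3 / l1,
            "linear": (l1 - l2) / l1,
            "planar": (l2 - l3) / l1}

# A near-flat patch should score high on the planar feature.
rng = np.random.default_rng(1)
plane = rng.uniform(size=(200, 3)) * np.array([1.0, 1.0, 0.001])
f = shape_features(plane)
```

The three features sum to one by construction, so they behave like soft class proportions for scatter-, tube-, and plane-like neighborhoods.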
Affiliation(s)
- Victoria Plaza-Leiva, Jose Antonio Gomez-Ruiz, Anthony Mandow, Alfonso García-Cerezo
- Grupo de Investigación de Ingeniería de Sistemas y Automática, Andalucía Tech, Universidad de Málaga, 29071 Málaga, Spain