1
Cruz Ulloa C, Sánchez L, Del Cerro J, Barrientos A. Deep Learning Vision System for Quadruped Robot Gait Pattern Regulation. Biomimetics (Basel) 2023;8:289. PMID: 37504177; PMCID: PMC10807447; DOI: 10.3390/biomimetics8030289.
Abstract
Robots with bio-inspired locomotion systems, such as quadruped robots, have recently attracted significant scientific interest, especially those designed to tackle missions in unstructured terrains, such as search-and-rescue robotics. At the same time, artificial intelligence systems have made it possible to improve and adapt the locomotion capabilities of these robots to specific terrains, imitating the natural behavior of quadruped animals. The main contribution of this work is a method for adjusting adaptive gait patterns to overcome unstructured terrains with the ARTU-R (A1 Rescue Task UPM Robot) quadruped robot, based on a central pattern generator (CPG) and on the automatic identification of the terrain and characterization of its obstacles (number, size, position, and whether they can be surmounted) through convolutional neural networks for pattern regulation. To develop this method, a study of dog gait patterns was carried out, with validation and adjustment through simulation of the robot model in ROS-Gazebo and subsequent transfer to the real robot. Outdoor tests were carried out to evaluate and validate the efficiency of the proposed method in terms of its success rate in overcoming stretches of unstructured terrain, as well as the kinematic and dynamic variables of the robot. The main results show that the proposed method achieves over 93% efficiency in terrain characterization (terrain identification, segmentation, and obstacle characterization) and over 91% success in overcoming unstructured terrains. The method was also compared against the main state-of-the-art developments and benchmark models.
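As a concrete illustration of how a CPG output can be regulated by a terrain score, the following minimal Python sketch integrates a single Hopf oscillator and scales its frequency and amplitude with a terrain_difficulty value standing in for the CNN-based terrain characterization; this is an assumption-laden sketch, not the ARTU-R implementation.

```python
import numpy as np

def hopf_cpg(duration=5.0, dt=0.001, omega_base=2.0 * np.pi, terrain_difficulty=0.0):
    """Integrate one Hopf oscillator; returns its x-component as a gait phase signal."""
    # Heuristic regulation rule: rougher terrain -> slower, smaller steps.
    omega = omega_base * (1.0 - 0.5 * terrain_difficulty)
    amp = 1.0 - 0.4 * terrain_difficulty     # target limit-cycle radius
    mu = amp ** 2                            # Hopf oscillator converges to radius sqrt(mu)
    x, y = amp, 0.0
    signal = []
    for _ in range(int(duration / dt)):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        x, y = x + dx * dt, y + dy * dt
        signal.append(x)
    return np.asarray(signal)

flat_gait = hopf_cpg(terrain_difficulty=0.0)   # nominal gait on flat ground
rough_gait = hopf_cpg(terrain_difficulty=0.8)  # slower, lower-amplitude gait on rubble
```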
Affiliation(s)
- Antonio Barrientos
- Centro de Automática y Robótica (CSIC-UPM), Universidad Politécnica de Madrid—Consejo Superior de Investigaciones Científicas, 28006 Madrid, Spain
2
Haddeler G, Chuah MY(M), You Y, Chan J, Adiwahono AH, Yau WY, Chew CM. Traversability analysis with vision and terrain probing for safe legged robot navigation. Front Robot AI 2022;9:887910. PMID: 36071857; PMCID: PMC9441904; DOI: 10.3389/frobt.2022.887910.
Abstract
Inspired by human behavior when traveling over unknown terrain, this study proposes the use of probing strategies and integrates them into a traversability analysis framework to address safe navigation on unknown rough terrain. Our framework incorporates collapsibility information into our existing traversability analysis, as vision and geometric information alone can be misled by unpredictable non-rigid terrain such as soft soil, bush areas, or water puddles. With the new traversability analysis framework, our robot obtains a more comprehensive assessment of unpredictable terrain, which is critical for its safety in outdoor environments. The pipeline first identifies the terrain's geometric and semantic properties using an RGB-D camera and selects probing locations on questionable terrain. These regions are probed with a force sensor to determine the risk of the terrain collapsing when the robot steps on it. This risk is formulated as a collapsibility metric, which estimates the ground collapsibility of an unpredictable region. Thereafter, the collapsibility metric, together with geometric and semantic spatial data, is combined and analyzed to produce global and local traversability grid maps. These traversability grid maps tell the robot whether it is safe to step over different regions of the map. The grid maps are then used to generate optimal paths along which the robot can safely navigate to its goal. Our approach has been successfully verified on a quadrupedal robot in both simulation and real-world experiments.
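A minimal sketch of the map-fusion step described above, assuming per-cell risk layers in [0, 1] and made-up weights and threshold; it is not the authors' pipeline, only an illustration of how a collapsibility layer can be merged with geometric and semantic risk into a traversability grid.

```python
import numpy as np

def fuse_traversability(geometric_risk, semantic_risk, collapsibility,
                        weights=(0.4, 0.3, 0.3), risk_threshold=0.6):
    # Weighted sum of the three per-cell risk layers (weights are illustrative only).
    risk = (weights[0] * geometric_risk +
            weights[1] * semantic_risk +
            weights[2] * collapsibility)
    traversable = risk < risk_threshold          # boolean mask handed to the path planner
    return risk, traversable

rng = np.random.default_rng(0)
geo, sem, col = (rng.random((50, 50)) for _ in range(3))   # stand-in 50x50 grid layers
risk_map, safe_mask = fuse_traversability(geo, sem, col)
print(safe_mask.mean())                                    # fraction of cells deemed safe
```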
Affiliation(s)
- Garen Haddeler
- Department of Mechanical Engineering, National University of Singapore (NUS), Singapore, Singapore
- Institute for Infocomm Research (IR), A*STAR, Singapore, Singapore
- Meng Yee (Michael) Chuah
- Institute for Infocomm Research (IR), A*STAR, Singapore, Singapore
- Correspondence: Meng Yee (Michael) Chuah
- Yangwei You
- Institute for Infocomm Research (IR), A*STAR, Singapore, Singapore
- Jianle Chan
- Institute for Infocomm Research (IR), A*STAR, Singapore, Singapore
- Wei Yun Yau
- Institute for Infocomm Research (IR), A*STAR, Singapore, Singapore
- Chee-Meng Chew
- Department of Mechanical Engineering, National University of Singapore (NUS), Singapore, Singapore
3
Wang M, Ye L, Sun X. Adaptive online terrain classification method for mobile robot based on vibration signals. Int J Adv Robot Syst 2021. DOI: 10.1177/17298814211062035.
Abstract
To improve the accuracy of terrain classification during mobile robot operation, an adaptive online terrain classification method based on vibration signals is proposed. First, time-domain features and combined features from the time, frequency, and time–frequency domains are extracted from the original vibration signal. These serve as inputs to the random forest algorithm to generate classification models of different feature dimensions. Then, by comparing the current speed of the mobile robot with its critical speed, a classification model of the appropriate dimension is adaptively selected for online classification. Offline and online experiments are conducted on four different terrains. The experimental results show that the proposed method can effectively avoid the self-vibration interference caused by an increase in the robot's moving speed and achieve higher terrain classification accuracy.
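The model-switching idea can be illustrated with a short, hedged sketch: two random forests trained on feature sets of different dimensions, with the choice made online from the robot's speed. The feature definitions, the critical-speed rule, and the synthetic data below are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def time_features(window):
    return np.array([window.mean(), window.std(), window.max(), window.min()])

def combined_features(window):
    spectrum = np.abs(np.fft.rfft(window))
    freq = np.array([spectrum.mean(), spectrum.std(), float(spectrum.argmax())])
    return np.concatenate([time_features(window), freq])

rng = np.random.default_rng(1)
windows = rng.normal(size=(200, 256))           # synthetic vibration windows
labels = rng.integers(0, 4, size=200)           # four terrain classes

rf_time = RandomForestClassifier(n_estimators=100).fit(
    np.vstack([time_features(w) for w in windows]), labels)
rf_combined = RandomForestClassifier(n_estimators=100).fit(
    np.vstack([combined_features(w) for w in windows]), labels)

def classify(window, speed, critical_speed=0.5):
    # Illustrative rule: above the critical speed, self-vibration corrupts the richer
    # feature set, so fall back to the low-dimensional time-domain model.
    if speed > critical_speed:
        return rf_time.predict(time_features(window).reshape(1, -1))[0]
    return rf_combined.predict(combined_features(window).reshape(1, -1))[0]

print(classify(windows[0], speed=0.8))
```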
Affiliation(s)
- Mingming Wang
- Department of Electrical and Electronic Engineering, Shijiazhuang Tiedao University, Shijiazhuang, China
- Liming Ye
- Department of Electrical and Electronic Engineering, Shijiazhuang Tiedao University, Shijiazhuang, China
- Xiaoyun Sun
- Department of Electrical and Electronic Engineering, Shijiazhuang Tiedao University, Shijiazhuang, China
4
Li J, Chen Z, Chen J, Lin Q. Diversity-Sensitive Generative Adversarial Network for Terrain Mapping Under Limited Human Intervention. IEEE Trans Cybern 2021;51:6029-6040. PMID: 32011273; DOI: 10.1109/tcyb.2019.2962086.
Abstract
In a collaborative air-ground robotic system, the large-scale terrain mapping using aerial images is important for the ground robot to plan a globally optimal path. However, it is a challenging task in a novel and dynamic field without historical human supervision. To alleviate the reliance on human intervention, this article presents a novel framework that integrates active learning and generative adversarial networks (GANs) to effectively exploit small human-labeled data for terrain mapping. In order to model the diverse terrain patterns, this article designs two novel diversity-sensitive GAN models which can capture fine-grained terrain classes among aerial image patches. The proposed approaches are tested in two real-world scenarios using our collaborative air-ground robotic platform. The empirical results show that our methods can outperform their counterparts in the predictive accuracy of terrain classification, visual quality of terrain mapping, and average length of the planned ground path. In practice, the proposed terrain mapping framework is especially valuable when the budget in time or labor cost is very limited.
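The active-learning loop that exploits a small human-labeled budget can be sketched as follows; the diversity-sensitive GAN feature extractor is omitted and replaced by raw patch features, and the classifier, budget, and least-confident query rule are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
patches = rng.normal(size=(500, 32))               # stand-in for aerial image patch features
true_labels = (patches[:, 0] > 0).astype(int)      # synthetic ground truth (two terrain classes)

# Seed set with a few "human-labeled" patches from each class.
labeled = list(np.where(true_labels == 0)[0][:5]) + list(np.where(true_labels == 1)[0][:5])

for _ in range(5):                                 # five rounds of simulated human labeling
    clf = LogisticRegression(max_iter=1000).fit(patches[labeled], true_labels[labeled])
    proba = clf.predict_proba(patches)
    uncertainty = 1.0 - proba.max(axis=1)          # least-confident sampling
    uncertainty[labeled] = -1.0                    # never re-query labeled patches
    query = int(np.argmax(uncertainty))
    labeled.append(query)                          # the "human" provides this label

print("terrain classification accuracy:", clf.score(patches, true_labels))
```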
5
Zuo J, Zhang Y. A hybrid evolutionary learning classification for robot ground pattern recognition. J Intell Fuzzy Syst 2021. DOI: 10.3233/jifs-202940.
Abstract
In intelligent robot engineering, whether for humanoid, bionic, or vehicle robots, locomotion modes such as standing, moving, and walking, together with the recognition of the environment in which the robot operates, have long been a central and difficult research topic. To address this problem, the Naive Bayes Classifier (NBC), Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Decision Tree (DT), Random Forest (RF), and eXtreme Gradient Boosting (XGBoost) were evaluated experimentally. Each of the six individual classifiers performs well on a particular type of ground, but their overall performance is poor. Therefore, the paper proposes a Novel Hybrid Evolutionary Learning (NHEL) method, which combines the individual classifiers by weighted voting and adopts an improved genetic algorithm (GA) to obtain the optimal weights. The crossover and mutation rates change adaptively according to the fitness function and the number of generations, and the conjugate gradient (CG) method is applied to enhance the GA. By combining the global search capability of the GA with the fast local search of CG, convergence is accelerated and search precision is improved. The experimental results show that the proposed model performs significantly better than the individual machine learning classifiers and standard ensemble classifiers.
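A hedged sketch of the weighted-voting structure is given below, using a subset of the base classifiers named above; for brevity the GA/CG weight search is replaced by a plain random search, so this illustrates the ensemble form rather than NHEL itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

models = [GaussianNB(), SVC(probability=True), KNeighborsClassifier(), RandomForestClassifier()]
probas = [m.fit(X_tr, y_tr).predict_proba(X_val) for m in models]

def ensemble_accuracy(weights):
    fused = sum(w * p for w, p in zip(weights, probas))   # weighted soft voting
    return (fused.argmax(axis=1) == y_val).mean()

rng = np.random.default_rng(0)
best_w, best_acc = None, -1.0
for _ in range(200):                      # crude stand-in for the GA + CG weight search
    w = rng.random(len(models))
    w /= w.sum()
    acc = ensemble_accuracy(w)
    if acc > best_acc:
        best_w, best_acc = w, acc
print(best_w, best_acc)
```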
Affiliation(s)
- Jiankai Zuo
- Department of Computer Science and Technology, and Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University, Shanghai
- Yaying Zhang
- Department of Computer Science and Technology, and Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University, Shanghai
6
Wang Z, Liu H, Xu X, Sun F. Multi-modal broad learning for material recognition. Cogn Comput Syst 2021. DOI: 10.1049/ccs2.12004.
Affiliation(s)
- Zhaoxin Wang
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- State Key Laboratory of Intelligent Technology and Systems, Beijing National Research Center for Information Science and Technology, Beijing, China
- Huaping Liu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- State Key Laboratory of Intelligent Technology and Systems, Beijing National Research Center for Information Science and Technology, Beijing, China
- Xinying Xu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- State Key Laboratory of Intelligent Technology and Systems, Beijing National Research Center for Information Science and Technology, Beijing, China
- Fuchun Sun
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- State Key Laboratory of Intelligent Technology and Systems, Beijing National Research Center for Information Science and Technology, Beijing, China
7
Li Q, Kroemer O, Su Z, Veiga FF, Kaboli M, Ritter HJ. A Review of Tactile Information: Perception and Action Through Touch. IEEE Trans Robot 2020. DOI: 10.1109/tro.2020.3003230.
8
Cheng C, Chang J, Lv W, Wu Y, Li K, Li Z, Yuan C, Ma S. Frequency-Temporal Disagreement Adaptation for Robotic Terrain Classification via Vibration in a Dynamic Environment. Sensors (Basel) 2020;20:6550. PMID: 33207829; PMCID: PMC7697547; DOI: 10.3390/s20226550.
Abstract
Accurate real-time terrain classification is of great importance to an autonomous robot working in the field, because with its aid the robot can avoid non-geometric hazards, adjust its control scheme, or improve its localization accuracy. In this paper, we investigate vibration-based terrain classification (VTC) in a dynamic environment and propose a novel learning framework, named DyVTC, which handles online-collected unlabeled data subject to concept drift. In the DyVTC framework, exterior disagreement (ex-disagreement) and interior disagreement (in-disagreement) are introduced based on feature diversity and intrinsic temporal correlation, respectively. This disagreement mechanism is used to design a pseudo-labeling algorithm that is effective at extracting and labeling key samples; consequently, classification accuracy can be recovered through incremental learning in a changing environment. Since two sets of features are extracted from the frequency and time domains to generate the disagreements, we also call the proposed method feature-temporal disagreement adaptation (FTDA). Real-world experiments show that the proposed DyVTC reaches an accuracy of 89.5% in a dynamic environment, whereas traditional time-domain and frequency-domain terrain classification methods reach only 48.8% and 71.5%, respectively.
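The disagreement-driven pseudo-labeling can be sketched, under assumptions, with two incrementally updated classifiers built on time-domain and frequency-domain features; windows are pseudo-labeled only when both models agree confidently. This illustrates the idea, not the DyVTC implementation, and the features, thresholds, and data are made up.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def time_feat(w):
    return np.array([w.mean(), w.std(), np.abs(np.diff(w)).mean()])

def freq_feat(w):
    s = np.abs(np.fft.rfft(w))
    return np.array([s.mean(), s.std(), float(s.argmax())])

rng = np.random.default_rng(3)
classes = np.array([0, 1, 2, 3])
train = rng.normal(size=(200, 256)); y_train = rng.integers(0, 4, 200)
stream = rng.normal(size=(50, 256))                       # unlabeled windows from the field

clf_t = SGDClassifier(loss="log_loss").partial_fit(
    np.vstack([time_feat(w) for w in train]), y_train, classes=classes)
clf_f = SGDClassifier(loss="log_loss").partial_fit(
    np.vstack([freq_feat(w) for w in train]), y_train, classes=classes)

for w in stream:
    ft, ff = time_feat(w).reshape(1, -1), freq_feat(w).reshape(1, -1)
    pt, pf = clf_t.predict_proba(ft)[0], clf_f.predict_proba(ff)[0]
    label_t, label_f = int(pt.argmax()), int(pf.argmax())
    if label_t == label_f and min(pt.max(), pf.max()) > 0.8:   # low disagreement
        pseudo = np.array([label_t])
        clf_t.partial_fit(ft, pseudo)                          # incremental adaptation
        clf_f.partial_fit(ff, pseudo)
```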
Affiliation(s)
- Chen Cheng
- Department of Automation, University of Science and Technology of China, Hefei 230027, China
- School of Information Engineering, Anhui Institute of International Business, Hefei 231131, China
- Ji Chang
- Department of Automation, University of Science and Technology of China, Hefei 230027, China
- Wenjun Lv
- Department of Automation, University of Science and Technology of China, Hefei 230027, China
- Correspondence: Wenjun Lv
- Yuping Wu
- Key Laboratory of Industrial Computer Control Engineering of Hebei Province, Yanshan University, Qinhuangdao 066004, China
- Kun Li
- Department of Automation, University of Science and Technology of China, Hefei 230027, China
- Department of Research and Development, Anhui Etown Information Technology Co., Ltd, Hefei 230011, China
- Zerui Li
- Department of Automation, University of Science and Technology of China, Hefei 230027, China
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- Chenhui Yuan
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- School of Computer Science and Technology, Anhui University, Hefei 230601, China
- Saifei Ma
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- School of Computer Science and Technology, Anhui University, Hefei 230601, China
9
Abstract
The efficient multi-modal fusion of data streams from different sensors is a crucial ability that a robotic perception system should exhibit to ensure robustness against disturbances. However, as the volume and dimensionality of sensory feedback increase, it becomes difficult to manually design a multi-modal data fusion system that can handle heterogeneous data. Multi-modal machine learning is an emerging field whose research has focused mainly on analyzing vision and audio information, although, from a robotics perspective, the haptic sensations experienced during interaction with an environment are essential for successfully executing useful tasks. In our work, we compared four learning-based multi-modal fusion methods on three publicly available datasets containing haptic signals, images, and robot poses. We considered three tasks involving such data, namely grasp outcome classification, texture recognition, and, most challenging, multi-label classification of haptic adjectives based on haptic and visual data. The experiments focused not only on verifying the performance of each method but mainly on its robustness against data degradation, an aspect of multi-modal fusion that is rarely considered in the literature even though such degradation of sensory feedback can occur while a robot interacts with its environment. Additionally, we verified the usefulness of data augmentation for increasing the robustness of the aforementioned data fusion methods.
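One of the simplest fusion strategies compared in such studies, feature-level concatenation, can be sketched as follows with synthetic haptic and visual features; zeroing one modality mimics sensor degradation. This is an assumption-based illustration, not the paper's code or its four specific fusion methods.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 400
haptic = rng.normal(size=(n, 24))                     # e.g. force/tactile descriptors
vision = rng.normal(size=(n, 64))                     # e.g. CNN image embeddings
labels = rng.integers(0, 5, size=n)                   # e.g. texture classes

X = np.hstack([haptic, vision])                       # feature-level (early) fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)

degraded = X_te.copy()
degraded[:, :24] = 0.0                                # simulate losing the haptic channel
print("clean:", clf.score(X_te, y_te), "degraded:", clf.score(degraded, y_te))
```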
10
Almeida L, Santos V, Ferreira J. Learning-Based Analysis of a New Wearable 3D Force System Data to Classify the Underlying Surface of a Walking Robot. Int J Hum Robot 2020. DOI: 10.1142/s0219843620500115.
Abstract
Biped humanoid robots that operate in real-world environments need to be able to physically recognize different floors to best adapt their gait. In this work, we describe the preparation of a dataset of contact forces obtained with eight tactile force sensors for determining the underlying surface of a walking robot. The data are acquired for four floors with different coefficients of friction and for different robot gaits and speeds. To classify the floors, the data are used as input for two common computational intelligence techniques (CITs): an artificial neural network (ANN) and an extreme learning machine (ELM). After optimizing the parameters of both CITs, a good mapping between inputs and targets is achieved, with classification accuracies of about 99%.
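The ELM half of the comparison is simple enough to sketch directly: random hidden weights and a closed-form least-squares solution for the output weights. The synthetic eight-channel data below is a stand-in for the force dataset, so the reported ~99% accuracy is not reproduced here.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # fixed random weights
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                             # hidden activations
        T = np.eye(n_classes)[y]                                     # one-hot targets
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)            # output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 8))            # stand-in for the eight tactile force channels
y = rng.integers(0, 4, size=300)         # four floor types
print((ELM().fit(X, y).predict(X) == y).mean())
```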
Affiliation(s)
- Luís Almeida
- Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro, Portugal
- Vítor Santos
- Institute of Electronics and Informatics Engineering of Aveiro (IEETA), Department of Mechanical Engineering, University of Aveiro, Portugal
- João Ferreira
- Institute of Systems and Robotics (ISR), Department of Electrical Engineering, Superior Institute of Engineering of Coimbra, Portugal
12
Kolvenbach H, Bartschi C, Wellhausen L, Grandia R, Hutter M. Haptic Inspection of Planetary Soils With Legged Robots. IEEE Robot Autom Lett 2019. DOI: 10.1109/lra.2019.2896732.
13
Comparative Study of Different Methods in Vibration-Based Terrain Classification for Wheeled Robots with Shock Absorbers. Sensors (Basel) 2019;19:1137. PMID: 30845726; PMCID: PMC6427223; DOI: 10.3390/s19051137.
Abstract
Autonomous robots that operate in the field can enhance their safety and efficiency through accurate terrain classification, which can be realized by means of the vibration signals generated by robot-terrain interaction. In this paper, we explore vibration-based terrain classification (VTC), in particular for a wheeled robot with shock absorbers. Because the vibration sensors are usually mounted on the main body of the robot, the vibration signals are damped significantly, which makes the signals collected on different terrains more difficult to discriminate; hence, existing VTC methods may degrade when applied to a robot with shock absorbers. The contributions are two-fold: (1) several experiments are conducted to assess the performance of existing feature-engineering and feature-learning classification methods; and (2) building on the long short-term memory (LSTM) network, we propose a one-dimensional convolutional LSTM (1DCL)-based VTC method to learn both spatial and temporal characteristics of the damped vibration signals. The experimental results demonstrate that (1) the feature-engineering methods, which are effective for VTC on a robot without shock absorbers, are less accurate in our setting, and the feature-learning methods are better choices; and (2) the 1DCL-based VTC method outperforms the conventional methods with an accuracy of 80.18%, which exceeds the second-best method (LSTM) by 8.23%.
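A rough approximation of the 1DCL idea is sketched below as a Conv1d front end followed by an LSTM; this captures the spatial-then-temporal processing but is not the authors' convolutional LSTM cell, and the window length, channel counts, and number of classes are assumptions.

```python
import torch
import torch.nn as nn

class VibrationNet(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, 1, samples)
        z = self.conv(x)                  # (batch, 32, steps)
        z = z.transpose(1, 2)             # (batch, steps, 32) for the LSTM
        _, (h, _) = self.lstm(z)
        return self.head(h[-1])           # logits from the last hidden state

model = VibrationNet()
logits = model(torch.randn(4, 1, 512))    # four damped-vibration windows
print(logits.shape)                       # torch.Size([4, 6])
```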
14
Belter D, Wietrzykowski J, Skrzypczyński P. Employing Natural Terrain Semantics in Motion Planning for a Multi-Legged Robot. J Intell Robot Syst 2018. DOI: 10.1007/s10846-018-0865-x.
15
Luneckas M, Luneckas T, Udris D. Leg placement algorithm for foot impact force minimization. Int J Adv Robot Syst 2018. DOI: 10.1177/1729881417751512.
Abstract
Walking is a rather complicated task for autonomous robots. Sustaining dynamic stability, adopting different gaits, and calculating correct foot placement are necessary for traversing irregular terrain and various environments and for completing a range of assignments. In addition, certain assignments require robots to walk on fragile surfaces without damaging them, while in other circumstances careless walking can damage the robot itself through impacts with the terrain. Foot placement and leg motion speed must therefore be controlled to avoid breaking the surface or even the sensors on the robot's feet. In this article, a simple leg placement algorithm is proposed that controls a hexapod robot's leg speed. The dependence of impact force on leg motion speed and step height was first measured using a piezoelectric sensor. Then, using the leg placement algorithm, we show that the impact force between the robot's foot and the surface can be reduced. With this algorithm, the foot impact force can be minimized to almost 0 N.
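The speed-limiting principle can be illustrated with a hypothetical sketch: assuming a roughly linear force-versus-speed relation (coefficients made up, not the paper's measurements), the leg is slowed near expected contact so the predicted impact force stays under a chosen limit.

```python
K_FORCE = 40.0        # N per (m/s), assumed slope of the force-vs-speed fit
F_OFFSET = 0.5        # N, assumed offset of the fit
F_MAX = 3.0           # N, maximum tolerated impact on a fragile surface

def max_safe_speed():
    # Invert the assumed linear model F = K_FORCE * v + F_OFFSET at F = F_MAX.
    return (F_MAX - F_OFFSET) / K_FORCE

def descent_speed(height_above_surface, nominal_speed=0.3, slow_zone=0.02):
    """Ramp the leg from nominal speed down to the safe speed inside the slow zone."""
    if height_above_surface >= slow_zone:
        return nominal_speed
    blend = max(height_above_surface, 0.0) / slow_zone
    return max_safe_speed() + blend * (nominal_speed - max_safe_speed())

for h in (0.05, 0.02, 0.01, 0.0):
    print(f"h = {h:.2f} m -> v = {descent_speed(h):.3f} m/s")
```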
Affiliation(s)
- Tomas Luneckas
- Vilniaus Gedimino Technikos Universitetas, Vilnius, Lithuania
- Dainius Udris
- Vilniaus Gedimino Technikos Universitetas, Vilnius, Lithuania
16
Gonzalez R, Apostolopoulos D, Iagnemma K. Slippage and immobilization detection for planetary exploration rovers via machine learning and proprioceptive sensing. J Field Robot 2017. DOI: 10.1002/rob.21736.
17
Otsu K, Ono M, Fuchs TJ, Baldwin I, Kubota T. Autonomous Terrain Classification With Co- and Self-Training Approach. IEEE Robot Autom Lett 2016. DOI: 10.1109/lra.2016.2525040.