1. Li L, Xu H, Li Z, Zhong B, Lou Z, Wang L. 3D Heterogeneous Sensing System for Multimode Parallel Signal No-Spatiotemporal Misalignment Recognition. Adv Mater 2025; 37:e2414054. PMID: 39663744; DOI: 10.1002/adma.202414054.
Abstract
The spatiotemporal error caused by planar tiled structure design and the waste of communication resources brought on by single-channel transmission are two challenges facing the development of multifunctional intelligent sensors with high-density integration. A homo-spatiotemporal multisensory parallel transmission system (HMPTs) is developed to realize multisignal no-spatiotemporal-misalignment recognition and efficient parallel transmission. First, this system optimizes the distribution of multifunctional sensors, completes the 3D vertical heterogeneous layout of four sensors, and achieves multi-information detection of materials at a single location with no spatiotemporal deviation. Additionally, the system couples and transmits multiple sensory signals, delivering a fourfold increase in transmission efficiency and one-third of the power consumption compared to a single-channel transmission system. Finally, this system is used for the recognition of mixed materials and for human-computer interaction to realize the assignment of materials in VR, demonstrating the high accuracy and transmission efficiency of HMPTs as well as its feasibility in practical applications. This is a pioneering effort to enhance machine perception accuracy, improve signal transmission effectiveness, and advance human-machine-object triadic integration.
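As a rough illustration of the parallel-transmission idea only (not the authors' coupling scheme), the sketch below frequency-division-multiplexes four slowly varying sensor signals onto a single channel and recovers one of them by demodulation and low-pass filtering; the carrier frequencies, sampling rate, and signals are all assumed.

```python
# Hypothetical illustration: frequency-division multiplexing of four sensor signals
# onto one channel, then recovery of one channel. Not the paper's actual scheme.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000                                             # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
carriers = [100, 200, 300, 400]                       # one carrier per sensor channel
signals = [np.sin(2 * np.pi * f0 * t) for f0 in (1, 2, 3, 5)]   # four sensor outputs

# Couple: amplitude-modulate each signal onto its carrier and sum into one channel.
channel = sum(s * np.cos(2 * np.pi * fc * t) for s, fc in zip(signals, carriers))

def recover(ch, fc):
    # Demodulate with the matching carrier, then low-pass filter at 10 Hz.
    b, a = butter(4, 10 / (fs / 2))
    return 2 * filtfilt(b, a, ch * np.cos(2 * np.pi * fc * t))

print("max recovery error:", np.max(np.abs(recover(channel, carriers[0]) - signals[0])))
```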
Affiliation(s)
- Linlin Li
- State Key Laboratory for Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, 100083, China
- Center of Materials Science and Optoelectronic Engineering, University of Chinese Academy of Sciences, Beijing, 100049, China
- Hao Xu
- State Key Laboratory for Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, 100083, China
- Center of Materials Science and Optoelectronic Engineering, University of Chinese Academy of Sciences, Beijing, 100049, China
- Zhexin Li
- State Key Laboratory for Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, 100083, China
- Center of Materials Science and Optoelectronic Engineering, University of Chinese Academy of Sciences, Beijing, 100049, China
- Bowen Zhong
- State Key Laboratory for Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, 100083, China
- Center of Materials Science and Optoelectronic Engineering, University of Chinese Academy of Sciences, Beijing, 100049, China
- Zheng Lou
- State Key Laboratory for Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, 100083, China
- Center of Materials Science and Optoelectronic Engineering, University of Chinese Academy of Sciences, Beijing, 100049, China
- Lili Wang
- State Key Laboratory for Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, 100083, China
- Center of Materials Science and Optoelectronic Engineering, University of Chinese Academy of Sciences, Beijing, 100049, China
2. Zhang K, Yang Z. Load recognition of connecting-shaft rotor system under complex working conditions. Heliyon 2024; 10:e39956. PMID: 39583816; PMCID: PMC11582418; DOI: 10.1016/j.heliyon.2024.e39956.
Abstract
A method for qualitatively recognizing the load of the rolling equipment's connecting-shaft rotor system is proposed in this paper, motivated by the complexity of rolling production conditions and the limitations of single-source response signals. The method fuses vibration and motor-current information. First, singular value decomposition and wavelet packet analysis are used to preprocess the two types of response signals. Then, a Bayesian estimation method applied at the feature-fusion level achieves qualitative recognition and analysis of rotor-system load types. Corresponding load experiments are completed on a load-recognition test platform based on vibration and motor-current signals. The results show that the fusion-based load recognition method can recognize the type of load excitation with an accuracy of 91.7%, higher than methods based on any single-source response signal. Therefore, the feasibility of the proposed method is verified.
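A minimal sketch of the feature-level Bayesian fusion idea above, assuming wavelet-packet-style feature vectors have already been extracted from the vibration and motor-current signals; the dimensions, the three load classes, and the synthetic data are illustrative, not the authors' implementation.

```python
# Feature-level fusion sketch: concatenate vibration and current features, then
# classify the load type with a Gaussian naive Bayes model. Synthetic placeholder data.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
X_vib = rng.normal(size=(n, 8))        # vibration features (e.g., band energies)
X_cur = rng.normal(size=(n, 8))        # motor-current features
y = rng.integers(0, 3, size=n)         # three hypothetical load-excitation types

X_fused = np.hstack([X_vib, X_cur])    # feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(X_fused, y, test_size=0.3, random_state=0)

clf = GaussianNB().fit(X_tr, y_tr)     # Bayesian estimation over fused features
print("fused-feature accuracy:", clf.score(X_te, y_te))
```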
Affiliation(s)
- Kun Zhang
- College of Mechanical Engineering, Taiyuan University of Technology, Taiyuan, 030024, China
- Zhaojian Yang
- College of Mechanical Engineering, Taiyuan University of Technology, Taiyuan, 030024, China
3. Kong Y, Cheng G, Zhang M, Zhao Y, Meng W, Tian X, Sun B, Yang F, Wei D. Highly efficient recognition of similar objects based on ionic robotic tactile sensors. Sci Bull (Beijing) 2024; 69:2089-2098. PMID: 38777681; DOI: 10.1016/j.scib.2024.04.060.
Abstract
Tactile sensing provides robots with the ability to perform object recognition, fine manipulation, natural interaction, etc. However, in real scenarios, robotic tactile recognition of similar objects still suffers from low efficiency and accuracy, resulting from a lack of high-performance sensors and intelligent recognition algorithms. In this paper, a flexible sensor combining a pyramidal microstructure with a gradient conformal ionic gel coating is demonstrated, exhibiting an excellent signal-to-noise ratio (48 dB), a low detection limit (1 Pa), high sensitivity (92.96 kPa⁻¹), a fast response time (55 ms), and outstanding stability over 15,000 compression-release cycles. Furthermore, a Pressure-Slip Dual-Branch Convolutional Neural Network (PSNet) architecture is proposed to separately extract hardness and texture features and perform feature fusion. In tactile experiments on different kinds of leaves, a recognition rate of 97.16% is achieved, surpassing that of human hand recognition (72.5%). These results show great potential for broad application in bionic robots, intelligent prostheses, and precise human-computer interaction.
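A minimal PyTorch sketch of a dual-branch CNN with feature fusion, in the spirit of the pressure/slip architecture described above; the 1-D input windows, layer sizes, and ten-class output are placeholders rather than the published PSNet design.

```python
# Dual-branch CNN sketch: one branch for pressure (hardness-related) features, one
# for slip (texture-related) features, fused before classification. Sizes are assumed.
import torch
import torch.nn as nn

class DualBranchNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            )
        self.pressure_branch = branch()   # hardness-related features
        self.slip_branch = branch()       # texture-related features
        self.classifier = nn.Sequential(  # feature fusion + classification
            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, pressure, slip):
        f = torch.cat([self.pressure_branch(pressure), self.slip_branch(slip)], dim=1)
        return self.classifier(f)

model = DualBranchNet()
p = torch.randn(4, 1, 256)   # batch of pressure windows
s = torch.randn(4, 1, 256)   # batch of slip windows
print(model(p, s).shape)     # -> torch.Size([4, 10])
```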
Affiliation(s)
- Yongkang Kong
- Chongqing Key Laboratory of Generic Technology and System of Service Robots, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China; School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Guanyin Cheng
- Chongqing Key Laboratory of Generic Technology and System of Service Robots, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- Mengqin Zhang
- Chongqing Key Laboratory of Generic Technology and System of Service Robots, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- Yongting Zhao
- Chongqing Key Laboratory of Generic Technology and System of Service Robots, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- Wujun Meng
- Chongqing Key Laboratory of Generic Technology and System of Service Robots, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China; School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Xin Tian
- Chongqing Key Laboratory of Generic Technology and System of Service Robots, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- Bihao Sun
- Chongqing Key Laboratory of Generic Technology and System of Service Robots, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China
- Fuping Yang
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Dapeng Wei
- Chongqing Key Laboratory of Generic Technology and System of Service Robots, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714, China; School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
4. Liu Y, Wei C, Yoon SC, Ni X, Wang W, Liu Y, Wang D, Wang X, Guo X. Development of Multimodal Fusion Technology for Tomato Maturity Assessment. Sensors (Basel) 2024; 24:2467. PMID: 38676084; PMCID: PMC11054974; DOI: 10.3390/s24082467.
Abstract
The maturity of fruits and vegetables such as tomatoes significantly impacts indicators of their quality, such as taste, nutritional value, and shelf life, making maturity determination vital in agricultural production and the food processing industry. Tomatoes mature from the inside out, leading to uneven ripening between the interior and exterior, which makes it very challenging to judge maturity using a single modality. In this paper, we propose a deep learning-assisted multimodal data fusion technique combining color imaging, spectroscopy, and haptic sensing for the maturity assessment of tomatoes. The method uses feature fusion to integrate feature information from the image, near-infrared spectral, and haptic modalities into a unified feature set and then classifies the maturity of tomatoes through deep learning. Each modality independently extracts features, capturing the tomatoes' exterior color from color images, internal and surface spectral features linked to chemical composition in the visible and near-infrared range (350 nm to 1100 nm), and physical firmness from haptic sensing. By combining preprocessed and extracted features from the three modalities, data fusion creates a comprehensive representation of the information as a feature vector in a feature space suitable for tomato maturity assessment. A fully connected neural network is then constructed to process these fused data. This model achieves 99.4% accuracy in tomato maturity classification, surpassing single-modal methods (color imaging: 94.2%; spectroscopy: 87.8%; haptics: 87.2%). For internal and external maturity unevenness, the classification accuracy reaches 94.4%, demonstrating effective results. A comparative analysis of multimodal fusion against single-modal methods validates the stability and applicability of the multimodal fusion technique. These findings demonstrate the key benefits of multimodal fusion in improving the accuracy of tomato ripeness classification and provide a strong theoretical and practical basis for applying multimodal fusion to classify the quality and maturity of other fruits and vegetables. Utilizing deep learning (a fully connected neural network) to process multimodal data provides a new and efficient non-destructive approach for the large-scale classification of agricultural and food products.
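A minimal sketch of the early (feature-level) fusion step described above, assuming per-modality feature vectors have already been extracted; the feature dimensions, the four maturity stages, and the synthetic data are placeholders.

```python
# Multimodal feature-level fusion sketch: concatenate colour, spectral, and haptic
# feature vectors and classify maturity with a small fully connected network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
f_color = rng.normal(size=(n, 12))    # e.g., colour-histogram statistics
f_spec = rng.normal(size=(n, 64))     # e.g., downsampled 350-1100 nm reflectance
f_hapt = rng.normal(size=(n, 4))      # e.g., firmness-related descriptors
y = rng.integers(0, 4, size=n)        # four hypothetical maturity stages

X = np.hstack([f_color, f_spec, f_hapt])             # early (feature-level) fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=1)
mlp.fit(X_tr, y_tr)
print("fused-modal accuracy:", mlp.score(X_te, y_te))
```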
Affiliation(s)
- Yang Liu
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
- Chaojie Wei
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
- Seung-Chul Yoon
- Quality & Safety Assessment Research Unit, U. S. National Poultry Research Center, USDA-ARS, 950 College Station Rd., Athens, GA 30605, USA
- Xinzhi Ni
- Crop Genetics and Breeding Research Unit, United States Department of Agriculture Agricultural Research Service, 2747 Davis Road, Tifton, GA 31793, USA
- Wei Wang
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
- Yizhe Liu
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
- Daren Wang
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
- Xiaorong Wang
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
- Xiaohuan Guo
- Beijing Key Laboratory of Optimization Design for Modern Agricultural Equipment, College of Engineering, China Agricultural University, Beijing 100083, China
5. Mandil W, Rajendran V, Nazari K, Ghalamzan-Esfahani A. Tactile-Sensing Technologies: Trends, Challenges and Outlook in Agri-Food Manipulation. Sensors (Basel) 2023; 23:7362. PMID: 37687818; PMCID: PMC10490130; DOI: 10.3390/s23177362.
Abstract
Tactile sensing plays a pivotal role in achieving precise physical manipulation tasks and extracting vital physical features. This comprehensive review paper presents an in-depth overview of the growing research on tactile-sensing technologies, encompassing state-of-the-art techniques, future prospects, and current limitations. The paper focuses on tactile hardware, algorithmic complexities, and the distinct features offered by each sensor. This paper has a special emphasis on agri-food manipulation and relevant tactile-sensing technologies. It highlights key areas in agri-food manipulation, including robotic harvesting, food item manipulation, and feature evaluation, such as fruit ripeness assessment, along with the emerging field of kitchen robotics. Through this interdisciplinary exploration, we aim to inspire researchers, engineers, and practitioners to harness the power of tactile-sensing technology for transformative advancements in agri-food robotics. By providing a comprehensive understanding of the current landscape and future prospects, this review paper serves as a valuable resource for driving progress in the field of tactile sensing and its application in agri-food systems.
Affiliation(s)
- Willow Mandil
- School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK
- Vishnu Rajendran
- Lincoln Institute for Agri-Food Technology, University of Lincoln, Lincoln LN6 7TS, UK
- Kiyanoush Nazari
- School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK
6. Borhani-Darian P, Li H, Wu P, Closas P. Deep Learning of GNSS Acquisition. Sensors (Basel) 2023; 23:1566. PMID: 36772605; PMCID: PMC9920026; DOI: 10.3390/s23031566.
Abstract
Signal acquisition is a crucial step in Global Navigation Satellite System (GNSS) receivers, which is typically solved by maximizing the so-called Cross-Ambiguity Function (CAF) as a hypothesis testing problem. This article proposes to use deep learning models to perform such acquisition, whereby the CAF is fed to a data-driven classifier that outputs binary class posteriors. The class posteriors are used to compute a Bayesian hypothesis test to statistically decide the presence or absence of a GNSS signal. The versatility and computational affordability of the proposed method are addressed by splitting the CAF into smaller overlapping sections, which are fed to a bank of parallel classifiers whose probabilistic results are optimally fused to provide a so-called probability ratio map from which acquisition is decided. Additionally, the article shows how noncoherent integration schemes are enabled through optimal data fusion, with the goal of increasing the resulting classifier accuracy. The article provides simulation results showing that the proposed data-driven method outperforms current CAF maximization strategies, enabling enhanced acquisition at medium-to-high carrier-to-noise density ratios.
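The decision step can be sketched as follows: each overlapping CAF section yields a binary class posterior from its classifier; the posteriors are turned into likelihood ratios (assuming independent sections and a known training prior), fused in the log domain, and thresholded. The numbers below are illustrative only, not taken from the paper.

```python
# Fusing per-section classifier posteriors into a single detection statistic.
import numpy as np

posteriors = np.array([0.62, 0.18, 0.91, 0.40, 0.75])  # P(signal | section_i), assumed
prior = 0.5                                            # assumed class prior used in training
eps = 1e-12

# Convert posterior odds to likelihood ratios, then fuse by summing log-LRs.
post_odds = posteriors / np.clip(1.0 - posteriors, eps, None)
prior_odds = prior / (1.0 - prior)
log_lr = np.log(post_odds / prior_odds)

fused_log_lr = log_lr.sum()          # probability-ratio map collapsed to one statistic
threshold = 0.0                      # log-domain decision threshold (illustrative)
print("signal present" if fused_log_lr > threshold else "signal absent")
```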
7. Huang J, Rosendo A. Variable Stiffness Object Recognition with a CNN-Bayes Classifier on a Soft Gripper. Soft Robot 2022; 9:1220-1231. PMID: 35275780; DOI: 10.1089/soro.2021.0105.
Abstract
Soft grippers significantly widen the palpation capabilities of robots, ranging from soft to hard materials without the assistance of cameras. From a medical perspective, detecting the size and shape of hard inclusions concealed within soft three-dimensional (3D) objects is meaningful for the early detection of cancer through palpation. This article proposes a framework for variable-stiffness object recognition using tactile information collected by force-sensitive resistors on a three-finger soft gripper. A 15 × 50 spatiotemporal tactile image is generated for each 3D palpation process and then fed into a convolutional neural network (CNN) for object identification. The training set consists of tactile images generated from different grasping orientations. We developed our own CNN architecture, named SoftTactNet, and compared its performance with several state-of-the-art CNNs on the image dataset produced by our experiments. The results show that the proposed method excels in distinguishing the 3D shapes and sizes of objects enclosed by thick soft foam. The average recognition rate is significantly improved using a Naive Bayes classifier, reaching 97% recognition accuracy. The detection of the shapes and sizes of hard objects underneath soft tissues is extremely important for the early detection of breast and testicular cancer, a field where soft robots can shine with inexpensive and ubiquitous devices.
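A minimal sketch of assembling a 15 × 50 spatiotemporal tactile image per palpation and classifying the flattened images with a Naive Bayes model, as a simplified stand-in for the CNN/Bayes pipeline above; the taxel count, sequence length, five object classes, and data are assumptions.

```python
# Spatiotemporal tactile images (taxels x time steps) classified with Naive Bayes.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
n_grasps, n_taxels, n_steps = 200, 15, 50     # 15 x 50 spatiotemporal tactile image
y = rng.integers(0, 5, size=n_grasps)         # five hypothetical enclosed-object classes

# Each palpation produces one (n_taxels, n_steps) image; flatten it for the classifier.
tactile_images = rng.normal(size=(n_grasps, n_taxels, n_steps))
X = tactile_images.reshape(n_grasps, -1)

clf = GaussianNB().fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```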
Affiliation(s)
- Jingyi Huang
- School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Andre Rosendo
- School of Information Science and Technology, ShanghaiTech University, Shanghai, China
8. Pastor F, Lin-Yang DH, Gómez-de-Gabriel JM, García-Cerezo AJ. Dataset with Tactile and Kinesthetic Information from a Human Forearm and Its Application to Deep Learning. Sensors (Basel) 2022; 22:8752. PMID: 36433347; PMCID: PMC9696784; DOI: 10.3390/s22228752.
Abstract
There are physical Human-Robot Interaction (pHRI) applications where the robot has to grab the human body, such as rescue or assistive robotics. Being able to precisely estimate the grasping location when grabbing a human limb is crucial for safe manipulation of the human. Computer vision methods provide pre-grasp information but are subject to strong constraints imposed by field environments. Force-based compliant control, after grasping, limits the amount of applied strength. On the other hand, valuable tactile and proprioceptive information can be obtained from the pHRI gripper, which can be used to better characterize the human and the contact state between the human and the robot. This paper presents a novel dataset of tactile and kinesthetic data obtained from a robot gripper that grabs a human forearm. The dataset is collected with a three-fingered gripper with two underactuated fingers and a fixed finger with a high-resolution tactile sensor. A palpation procedure is performed to record the shape of the forearm and to recognize the bones and muscles in different sections. Moreover, an application that uses the dataset is included. In particular, a fusion approach is used to estimate the actual grasped forearm section from both kinesthetic and tactile information with a regression deep-learning neural network. First, tactile and kinesthetic data are trained separately with Long Short-Term Memory (LSTM) neural networks, since the data are sequential. Then, the outputs are fed to a fusion neural network to enhance the estimation. The experiments conducted show good results when training both sources separately, with superior performance when the fusion approach is considered.
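A minimal PyTorch sketch of the two-stream idea above: separate LSTMs encode the tactile and kinesthetic sequences, and their final hidden states feed a small fusion network that regresses the grasped forearm section. Input sizes, hidden sizes, and sequence lengths are illustrative assumptions.

```python
# Two-stream LSTM encoders plus a fusion regressor for the grasped-section estimate.
import torch
import torch.nn as nn

class FusionRegressor(nn.Module):
    def __init__(self, tactile_dim=28, kines_dim=6, hidden=32):
        super().__init__()
        self.tactile_lstm = nn.LSTM(tactile_dim, hidden, batch_first=True)
        self.kines_lstm = nn.LSTM(kines_dim, hidden, batch_first=True)
        self.fusion = nn.Sequential(
            nn.Linear(2 * hidden, 32), nn.ReLU(), nn.Linear(32, 1)  # section estimate
        )

    def forward(self, tactile_seq, kines_seq):
        _, (h_t, _) = self.tactile_lstm(tactile_seq)   # final hidden state per stream
        _, (h_k, _) = self.kines_lstm(kines_seq)
        fused = torch.cat([h_t[-1], h_k[-1]], dim=1)
        return self.fusion(fused).squeeze(-1)

model = FusionRegressor()
tactile = torch.randn(8, 40, 28)    # batch, time steps, flattened taxel frame
kines = torch.randn(8, 40, 6)       # batch, time steps, finger-joint readings
print(model(tactile, kines).shape)  # -> torch.Size([8])
```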
9. Kirby E, Zenha R, Jamone L. Comparing Single Touch to Dynamic Exploratory Procedures for Robotic Tactile Object Recognition. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3151261.
10. Zhang P, Yu G, Shan D, Chen Z, Wang X. Identifying the Strength Level of Objects' Tactile Attributes Using a Multi-Scale Convolutional Neural Network. Sensors (Basel) 2022; 22:1908. PMID: 35271055; PMCID: PMC8914820; DOI: 10.3390/s22051908.
Abstract
Most existing research focuses on binary tactile attributes of objects and ignores identifying the strength level of those attributes. To address this, this paper establishes a tactile dataset of the strength levels of objects' elasticity and hardness attributes to make up for the lack of relevant data, and proposes a multi-scale convolutional neural network to identify the strength level of object attributes. The network recognizes different attributes and distinguishes strength levels within the same attribute by fusing the original features, i.e., the single-channel and multi-channel features of the data. A variety of evaluation methods were used to compare multiple models in terms of the strength levels of elasticity and hardness. The results show that our network achieves significantly higher accuracy. Among samples predicted as positive, a higher proportion are truly positive, i.e., the precision is better; among the truly positive samples, more are correctly predicted, i.e., the recall is better; and the recognition rate across all classes is higher in terms of F1-score. Over the whole sample, the multi-scale convolutional neural network attains a higher recognition rate, and its ability to recognize each strength level is more stable.
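A minimal PyTorch sketch of a multi-scale 1-D convolutional block in the spirit of the network above: the same tactile sequence is filtered at several kernel sizes and the resulting feature maps are concatenated before classification into strength levels; the channel counts, kernel sizes, and three-level output are placeholders.

```python
# Multi-scale 1-D convolution: parallel branches with different kernel sizes, fused
# by channel concatenation, then a small classification head over strength levels.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch=1, out_ch=16, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, x):
        # Concatenate features extracted at several temporal scales.
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

class StrengthLevelNet(nn.Module):
    def __init__(self, n_levels=3):
        super().__init__()
        self.multi_scale = MultiScaleBlock()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(48, n_levels)
        )

    def forward(self, x):
        return self.head(self.multi_scale(x))

x = torch.randn(4, 1, 200)          # batch of single-channel tactile sequences
print(StrengthLevelNet()(x).shape)  # -> torch.Size([4, 3])
```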
Affiliation(s)
- Peng Zhang
- School of Electrical Engineering and Automation, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
- Guoqi Yu
- School of Mechanical Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
- Dongri Shan
- School of Mechanical Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
- Zhenxue Chen
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- Xiaofang Wang
- School of Electrical Engineering and Automation, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
11. Multi-Sensor Perception Strategy to Enhance Autonomy of Robotic Operation for Uncertain Peg-in-Hole Task. Sensors (Basel) 2021; 21:3818. PMID: 34073035; PMCID: PMC8198270; DOI: 10.3390/s21113818.
Abstract
The peg-in-hole task with uncertain object features is a typical case of robotic operation in real-world unstructured environments. It is nontrivial to realize object perception and operational decisions autonomously under the visual occlusion and real-time constraints typical of such tasks. In this paper, a Bayesian-network-based strategy is presented to seamlessly combine multiple heterogeneous sensory data, as humans do. In the proposed strategy, an interactive exploration method implemented with hybrid Monte Carlo sampling algorithms and particle filtering is designed to identify the initial estimates of the features, and a memory adjustment method and an inertial thinking method are introduced to correct the target position and shape features of the object, respectively. Based on Dempster-Shafer evidence theory (D-S theory), a fusion decision strategy is designed using probabilistic models of forces and positions, which guides the robot motion after each acquisition of the estimated object features. It also enables the robot to judge whether the desired operation target has been achieved or the feature estimate needs to be updated. Meanwhile, a pliability model is introduced to repeatedly perform exploration, planning, and execution steps, reducing interaction forces and the number of exploration rounds. The effectiveness of the strategy is validated in simulations and in a physical robot task.
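Dempster's rule of combination, the D-S fusion step named above, can be sketched for two mass functions over a tiny frame of discernment; the frame ("aligned" vs. "misaligned") and the mass values are illustrative placeholders, not the paper's probabilistic force and position models.

```python
# Dempster's rule of combination for two mass functions over a two-element frame.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    # Normalise by the non-conflicting mass (1 - K).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, M = frozenset({"aligned"}), frozenset({"misaligned"})
theta = A | M                                  # total ignorance
m_force = {A: 0.6, M: 0.1, theta: 0.3}         # evidence from a force model (assumed)
m_position = {A: 0.5, M: 0.2, theta: 0.3}      # evidence from a position model (assumed)

print(dempster_combine(m_force, m_position))
```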