1. Chen Z, Chen H, Ouyang Y, Cao C, Gao W, Hu Q, Jin H, Zhang S. A high-resolution and whole-body dataset of hand-object contact areas based on 3D scanning method. Sci Data 2025; 12:451. PMID: 40102456; PMCID: PMC11920513; DOI: 10.1038/s41597-025-04770-x
Abstract
Hand contact data, which reflect the intricate behaviours of human hands during object operation, hold significant potential for analysing hand operation patterns to guide the design of hand-related sensors and robots, and for predicting object properties. However, these applications are hindered by the low resolution and incomplete coverage of existing hand contact data. Leveraging a non-contact, high-precision 3D scanning method for surface capture, this work constructs a high-resolution, whole-body hand contact dataset named Ti3D-contact. The dataset, with an average resolution of 0.72 mm, contains 1872 sets of texture images and 3D models. The contact areas produced during hand operation are painted over the whole surface of gloves, which are then captured by a 3D scanner as the high-resolution raw hand contact data. Reliability validation of Ti3D-contact is conducted, and hand movement classification with 95% precision is achieved using the acquired dataset. Its high resolution and whole-body capture give the dataset promising applications in hand posture recognition and hand movement prediction.
Affiliation(s)
- Zelin Chen, Hanlu Chen, Yiming Ouyang, Chenhao Cao, Wei Gao, Hu Jin, Shiwu Zhang: Institute of Humanoid Robots, Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui, 230027, China
- Qiqiang Hu: Department of Biomedical Engineering, City University of Hong Kong, Kowloon, Hong Kong
2. Li J, Zhong J, Wang N. A multimodal human-robot sign language interaction framework applied in social robots. Front Neurosci 2023; 17:1168888. PMID: 37113147; PMCID: PMC10126358; DOI: 10.3389/fnins.2023.1168888
Abstract
Deaf-mutes face many difficulties in daily interactions with hearing people through spoken language, and sign language is their primary means of expression and communication. Breaking the communication barrier between the deaf-mute and hearing communities is therefore significant for facilitating their integration into society. To help them integrate into social life, we propose a multimodal Chinese sign language (CSL) gesture interaction framework based on social robots. CSL gesture information, covering both static and dynamic gestures, is captured from two sensors of different modalities: a wearable Myo armband collects surface electromyography (sEMG) signals from the human arm, and a Leap Motion sensor collects 3D hand vectors. The two modalities of gesture data are preprocessed and fused before being sent to the classifier, improving recognition accuracy and reducing the processing time of the network. Since the inputs of the proposed framework are temporal gesture sequences, a long short-term memory (LSTM) recurrent neural network is used to classify them. Comparative experiments performed on a NAO robot show that our method effectively improves CSL gesture recognition accuracy, with potential applications in a variety of gesture interaction scenarios beyond social robots.
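The fusion-then-LSTM pipeline described in this abstract can be sketched in a few lines of numpy. This is an illustrative forward pass only, not the authors' code: the frame dimensions (8 sEMG channels from the Myo, a 15-dimensional Leap Motion hand vector), the hidden size, the class count, and the random untrained weights are all assumptions made for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(x_seq, Wx, Wh, b):
    """Single-layer LSTM over a (T, D) sequence; returns the final hidden state."""
    H = Wh.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in x_seq:
        gates = Wx @ x + Wh @ h + b                  # (4H,) pre-activations
        i, f, g, o = np.split(gates, 4)              # input, forget, cell, output
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g                            # cell-state update
        h = o * np.tanh(c)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Fused input: per-frame sEMG channels concatenated with Leap Motion hand vectors.
rng = np.random.default_rng(0)
T, D_EMG, D_LEAP, HID, N_CLASSES = 30, 8, 15, 16, 5
frames = np.concatenate([rng.standard_normal((T, D_EMG)),    # Myo armband channels
                         rng.standard_normal((T, D_LEAP))],  # Leap Motion vectors
                        axis=1)

Wx = 0.1 * rng.standard_normal((4 * HID, D_EMG + D_LEAP))
Wh = 0.1 * rng.standard_normal((4 * HID, HID))
b = np.zeros(4 * HID)
W_out = 0.1 * rng.standard_normal((N_CLASSES, HID))

h_final = lstm_forward(frames, Wx, Wh, b)
class_probs = softmax(W_out @ h_final)               # one probability per gesture class
```

In the actual framework the weights would be trained on labelled CSL gesture sequences; the sketch only shows how fused multimodal frames flow through the recurrent classifier.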
Affiliation(s)
- Jie Li: School of Artificial Intelligence, Chongqing Technology and Business University, Chongqing, China
- Junpei Zhong: Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Ning Wang (corresponding author): Bristol Robotics Laboratory, University of the West of England, Bristol, United Kingdom
3. Using Adaptive Directed Acyclic Graph for Human In-Hand Motion Identification with Hybrid Surface Electromyography and Kinect. Symmetry (Basel) 2022. DOI: 10.3390/sym14102093
Abstract
Multi-fingered dexterous robotic hands are increasingly used to achieve complex, human-like manipulation tasks on various occasions. This paper proposes a hybrid surface electromyography (sEMG) and Kinect-based human in-hand motion (HIM) capture system architecture that recognizes complex human motions by observing the state information between an object and the human hand, then transfers the manipulation skills to a bionic multi-fingered robotic hand to realize dexterous in-hand manipulation. First, an Adaptive Directed Acyclic Graph (ADAG) algorithm for recognizing HIMs is proposed and optimized based on a comparison of multi-class support vector machines. Second, ten representative complex in-hand motions are demonstrated by ten subjects, and sEMG and Kinect signals are obtained through a multi-modal data acquisition platform; combined with the proposed algorithm framework, a series of data preprocessing algorithms is then applied. There is statistical symmetry among similar types of sEMG signals and images, and asymmetry among different types. A detailed analysis and an in-depth discussion are given of the ADAG results for recognizing HIMs, and of the motion recognition rates across different perceptrons, subjects, multi-class SVM methods, and machine learning methods. The experimental results confirm the feasibility of the proposed method, with a recognition rate of 95.10%.
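A directed-acyclic-graph multi-class decision of the kind the ADAG builds on can be illustrated briefly. This is not the authors' algorithm: the pairwise deciders here are nearest-centroid stand-ins for the binary SVMs, and the candidate ordering is fixed rather than adaptively reordered.

```python
import numpy as np

def ddag_predict(x, centroids):
    """Decision-DAG multi-class prediction: K - 1 pairwise evaluations for K classes.

    centroids: dict mapping class label -> class centroid, a stand-in for the
    binary SVM that would separate each pair of classes."""
    candidates = list(centroids.keys())
    while len(candidates) > 1:
        a, b = candidates[0], candidates[-1]
        # Evaluate the (a, b) pairwise decider and eliminate the losing class.
        if np.linalg.norm(x - centroids[a]) <= np.linalg.norm(x - centroids[b]):
            candidates.pop()          # b loses
        else:
            candidates.pop(0)         # a loses
    return candidates[0]
```

Each node of the DAG eliminates one class, which is why the evaluation cost stays linear in the number of classes even though a decider exists for every pair.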
4. Development of robotic hand tactile sensing system for distributed contact force sensing in robotic dexterous multimodal grasping. International Journal of Intelligent Robotics and Applications 2022. DOI: 10.1007/s41315-022-00260-0
5. Padilla-Magaña JF, Peña-Pitarch E, Sánchez-Suarez I, Ticó-Falguera N. Hand Motion Analysis during the Execution of the Action Research Arm Test Using Multiple Sensors. Sensors 2022; 22(9):3276. PMID: 35590966; PMCID: PMC9105674; DOI: 10.3390/s22093276
Abstract
The Action Research Arm Test (ARAT) is a standardized outcome measure that can be improved by integrating sensors for hand motion analysis. The purpose of this study is to measure the flexion angle of the finger joints and the fingertip forces during the performance of three subscales (Grasp, Grip, and Pinch) of the ARAT, using a data glove (CyberGlove II®) and five force-sensing resistors (FSRs) simultaneously. An experimental study was carried out with 25 healthy right-handed subjects. The results showed that the mean flexion angles of the finger joints required to perform the 16 activities were: thumb (carpometacarpal (CMC) joint 28.56°, metacarpophalangeal (MCP) joint 26.84°, interphalangeal (IP) joint 13.23°), index (MCP 46.18°, proximal interphalangeal (PIP) joint 38.89°), middle (MCP 47.5°, PIP 42.62°), ring (MCP 44.09°, PIP 39.22°), and little (MCP 31.50°, PIP 22.10°). The average fingertip force exerted in the Grasp subscale was 8.2 N, in the Grip subscale 6.61 N, and in the Pinch subscale 3.89 N. These results suggest that integrating multiple sensors during the performance of the ARAT has clinical relevance, allowing therapists and other health professionals to perform a more sensitive, objective, and quantitative assessment of hand function.
Affiliation(s)
- Jesus Fernando Padilla-Magaña (corresponding author; Tel.: +34-671251375): Escola Politècnica Superior d’Enginyeria de Manresa (EPSEM), Polytechnic University of Catalonia, 08242 Manresa, Barcelona, Spain; Department of Manufacturing Technologies, Polytechnic University of Uruapan Michoacán, Uruapan 60210, Michoacán, Mexico
- Esteban Peña-Pitarch: Escola Politècnica Superior d’Enginyeria de Manresa (EPSEM), Polytechnic University of Catalonia, 08242 Manresa, Barcelona, Spain
- Isahi Sánchez-Suarez: Department of Manufacturing Technologies, Polytechnic University of Uruapan Michoacán, Uruapan 60210, Michoacán, Mexico
- Neus Ticó-Falguera: Physical Medicine and Rehabilitation Service, Althaia Xarxa Assistencial de Manresa, 08243 Manresa, Barcelona, Spain
6. Carfì A, Patten T, Kuang Y, Hammoud A, Alameh M, Maiettini E, Weinberg AI, Faria D, Mastrogiovanni F, Alenyà G, Natale L, Perdereau V, Vincze M, Billard A. Hand-Object Interaction: From Human Demonstrations to Robot Manipulation. Front Robot AI 2021; 8:714023. PMID: 34660702; PMCID: PMC8517111; DOI: 10.3389/frobt.2021.714023
Abstract
Human-object interaction is of great relevance for robots operating in human environments. However, state-of-the-art robotic hands are far from replicating human skills. It is therefore essential to study how humans use their hands in order to develop similar robotic capabilities. This article presents a deep dive into hand-object interaction and human demonstrations, highlighting the main challenges in this research area and suggesting desirable future developments. To this end, the article presents a general definition of the hand-object interaction problem together with a concise review of each of the main subproblems involved, namely sensing, perception, and learning. Furthermore, the article discusses the interplay between these subproblems and describes how their interaction in learning from demonstration contributes to the success of robot manipulation. In this way, the article provides a broad overview of the interdisciplinary approaches necessary for a robotic system to learn new manipulation skills by observing human behavior in the real world.
Affiliation(s)
- Alessandro Carfì, Mohamad Alameh, Fulvio Mastrogiovanni: Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
- Timothy Patten, Markus Vincze: Vision for Robotics Laboratory, Institut für Automatisierungs- und Regelungstechnik, Technische Universität Wien, Vienna, Austria
- Yingyi Kuang, Abraham Itzhak Weinberg, Diego Faria: Robotics, Vision and Intelligent Systems, College of Engineering and Physical Sciences, Aston University, Birmingham, United Kingdom
- Ali Hammoud, Véronique Perdereau: Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, Paris, France
- Elisa Maiettini, Lorenzo Natale: Humanoid Sensing and Perception, Istituto Italiano di Tecnologia, Genoa, Italy
- Guillem Alenyà: Institut de Robòtica i Informàtica Industrial, CSIC-UPC, Barcelona, Spain
- Aude Billard: Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
7.
Abstract
A prototype portable device that allows for simultaneous hand and finger motion tracking and precise force measurement has been developed. Wireless microelectromechanical systems based on inertial and force sensors are suitable for tracking bodily measurements; in particular, they can be used for hand interaction with computer applications. Our interest is to design a multimodal wireless hand grip device that measures and evaluates this activity for ludic or medical rehabilitation purposes. The accuracy and reliability of the proposed device have been evaluated against two commercial dynamometers (Takei model 5101 TKK, Constant 14192-709E). We introduce a testing application that provides visual feedback of all device signals. The combination of interaction forces and movements makes it possible to simulate the dynamic characteristics of handling a virtual object with the fingers and palm in rehabilitation applications or serious games. The combination of these technologies with open and portable software is very useful in the design of applications for assistance and rehabilitation, which is the main objective of the device.
8. Khan SM, Khan AA, Farooq O. EMG based classification for pick and place task. Biomed Phys Eng Express 2021; 7. PMID: 33882462; DOI: 10.1088/2057-1976/abfa81
Abstract
A hand amputee is deprived of many activities of daily living. To help the hand amputee, it is important to learn the patterns of muscle activity. Several elements of tasks involve the forearm along with the wrist and hand; one very important task performed by the hand is the pick and place activity. A pick and place action is a compilation of different finger motions for grasping objects at different force levels, and it may be better understood by studying the electromyography (EMG) signals of the forearm muscles. Electromyography is the technique of acquiring electrical muscle activity, used in the pattern recognition pipelines of assistive devices. Accordingly, this study considered the classification of EMG signals involved in the pick and place action under variable grip spans and weights. A low-level force measuring gripper, able to accommodate changes in weight and object span, was designed and developed to simulate the task. The grip span varied from 6 cm to 9 cm and the maximum weight used was 750 g. Pattern recognition classification was performed to differentiate the phases of the pick and place activity, the grip force, and the angular deviation of the metacarpophalangeal (MCP) joint. The classifiers used were decision tree (DT), support vector machine (SVM), and k-nearest neighbour (k-NN), based on feature sets of the EMG signals. The analyses showed that k-NN performed best in classifying the phases of the activity and the relative deviation of the MCP joint, with average classification accuracies of 82% and 91%, respectively, while the SVM performed best in classifying force with a particular feature set. The findings would be helpful in designing assistive devices for hand amputees.
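The classification stage described here rests on standard time-domain EMG features fed to a classifier such as k-NN. The following is a generic sketch of that pattern; the paper's exact feature sets, window lengths, and classifier parameters are not reproduced, and the feature names below are the conventional ones, not necessarily the authors'.

```python
import numpy as np

def td_features(window):
    """Classic time-domain EMG features for one analysis window."""
    mav = np.mean(np.abs(window))                        # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))                 # waveform length
    zc = np.sum(window[:-1] * window[1:] < 0)            # zero crossings
    d = np.diff(window)
    ssc = np.sum(d[:-1] * d[1:] < 0)                     # slope sign changes
    return np.array([mav, wl, zc, ssc], dtype=float)

def knn_predict(x, X_train, y_train, k=3):
    """k-nearest-neighbour majority vote in feature space."""
    dist = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dist)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```

In a real pipeline each window would come from a segmented EMG recording of one phase of the pick and place task, and the labels would be the phase, force level, or MCP deviation being classified.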
Affiliation(s)
- Salman Mohd Khan, Abid Ali Khan, Omar Farooq: Department of Mechanical Engineering, Aligarh Muslim University, Aligarh, India
9. Hua J, Zeng L, Li G, Ju Z. Learning for a Robot: Deep Reinforcement Learning, Imitation Learning, Transfer Learning. Sensors 2021; 21(4):1278. PMID: 33670109; PMCID: PMC7916895; DOI: 10.3390/s21041278
Abstract
Dexterous manipulation is an important part of realizing robot intelligence, but manipulators can currently only perform simple tasks such as sorting and packing in structured environments. In view of this problem, this paper presents a state-of-the-art survey on intelligent robots with the capability of autonomous deciding and learning. The paper first reviews the main achievements and research on robots, which were mainly based on breakthroughs in automatic control and hardware in mechanics. With the evolution of artificial intelligence, much research has made further progress in adaptive and robust control. The survey reveals that the latest research in deep learning and reinforcement learning has paved the way for robots to perform highly complex tasks. Furthermore, deep reinforcement learning, imitation learning, and transfer learning in robot control are discussed in detail. Finally, major achievements based on these methods are summarized and analyzed thoroughly, and future research challenges are proposed.
Affiliation(s)
- Jiang Hua, Liangcai Zeng, Gongfa Li: Key Laboratory of Metallurgical Equipment and Control Technology, Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China
- Zhaojie Ju (corresponding author): School of Computing, University of Portsmouth, Portsmouth, UK
10. Hachaj T, Piekarczyk M. Evaluation of Pattern Recognition Methods for Head Gesture-Based Interface of a Virtual Reality Helmet Equipped with a Single IMU Sensor. Sensors 2019; 19(24):5408. PMID: 31817991; PMCID: PMC6960875; DOI: 10.3390/s19245408
Abstract
The motivation of this paper is to examine the effectiveness of state-of-the-art and newly proposed motion capture pattern recognition methods in the task of head gesture classification. The head gestures are designed for a user interface that utilizes a virtual reality helmet equipped with an inertial measurement unit (IMU) featuring a 6-axis accelerometer and gyroscope. We validate a classifier that uses Principal Component Analysis (PCA)-based features with various numbers of dimensions, a two-stage PCA-based method, a feedforward artificial neural network, and a random forest. Moreover, we propose a Dynamic Time Warping (DTW) classifier trained with an extension of the DTW Barycenter Averaging (DBA) algorithm that utilizes quaternion averaging, and a bagged variation of this method (DTWb) in which many DTW classifiers vote. The evaluation was performed on 975 head gesture recordings in seven classes acquired from 12 persons. The highest recognition rate in a leave-one-out test was obtained for DTWb: 0.975, which is 0.026 better than the best of the state-of-the-art methods to which we compared our approach. Among the most important applications of the proposed method is improving quality of life for people who are disabled below the neck, for example by supporting an assistive autonomous power chair with a head gesture interface, or remote-controlled interfaces in robotics.
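The core of a DTW-based gesture classifier is short enough to sketch. This is a plain 1-NN DTW over scalar sequences, a simplified stand-in only: the paper's method works on quaternion IMU streams, averages templates with a DBA extension, and bags many DTW classifiers for voting.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, and match moves through the grid.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_1nn(query, templates):
    """Classify a query sequence by its nearest template under DTW.

    templates: dict mapping gesture label -> list of reference sequences."""
    best_label, best_d = None, np.inf
    for label, refs in templates.items():
        for ref in refs:
            d = dtw_distance(query, ref)
            if d < best_d:
                best_label, best_d = label, d
    return best_label
```

Template averaging (DBA) and bagging are refinements layered on top of this basic elastic distance.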
11. Khan SM, Khan AA, Farooq O. Selection of Features and Classifiers for EMG-EEG-Based Upper Limb Assistive Devices - A Review. IEEE Rev Biomed Eng 2019; 13:248-260. PMID: 31689209; DOI: 10.1109/rbme.2019.2950897
Abstract
Bio-signals are distinctive factors in the design of human-machine interfaces, and are especially useful for prostheses, orthoses, and exoskeletons. Despite progress in pattern-recognition-based devices, their acceptance is still questionable. One reason is the lack of information identifying the possible combinations of features and classifiers. Besides, there is also a need for optimal selection of various sensors for sensations such as touch, force, and texture, along with EMG/EEG. This article reviews the two bio-signal techniques, electromyography and electroencephalography. The features and classifiers used in data processing for upper limb assistive devices are summarised here: various features and feature sets are surveyed, and different classifiers are discussed on the basis of classification rate. The review covers the last 10-12 years of published research in this area. The article also outlines the influence of combining EMG and EEG modalities with other sensors on classification, and considers other bio-signals used in upper limb devices as well as future prospects.
12. Using Ontology as a Strategy for Modeling the Interface Between the Cognitive and Robotic Systems. J Intell Robot Syst 2019. DOI: 10.1007/s10846-019-01076-0
13. Dong W, Wang Y, Zhou Y, Bai Y, Ju Z, Guo J, Gu G, Bai K, Ouyang G, Chen S, Zhang Q, Huang Y. Soft human–machine interfaces: design, sensing and stimulation. International Journal of Intelligent Robotics and Applications 2018. DOI: 10.1007/s41315-018-0060-z
14. Tang K, Li P, Wang C, Wang Y, Chen X. Real-Time Hand Position Sensing Technology Based on Human Body Electrostatics. Sensors 2018; 18(6):1677. PMID: 29882881; PMCID: PMC6021916; DOI: 10.3390/s18061677
Abstract
Non-contact human-computer interaction (HCI) based on hand gestures has been widely investigated. Here, we present a novel method to locate the real-time position of the hand using the electrostatics of the human body. This method has many advantages, including a delay of less than one millisecond and low cost, and it requires neither a camera nor wearable devices. A formula is first derived for the sensing-array signals of five spherical electrodes. Next, a solving algorithm for the real-time measured hand position is introduced, and solving equations for the three-dimensional coordinates of the hand position are obtained. A non-contact real-time hand position sensing system was built for verification experiments, and the principle error of the algorithm and the systematic noise were analyzed. The results show that this technology can determine the dynamic parameters of hand movements with good robustness, meeting the requirements of complicated HCI.
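The abstract does not reproduce the solving equations, but the flavour of such a solver can be illustrated under an assumed idealized model in which each electrode signal falls off as the inverse square of the distance to the hand. Pairing electrodes cancels the unknown gain, and introducing the auxiliary unknown rho = |p|^2 makes each pair equation linear, so five electrodes give enough equations for a least-squares solution. Everything below (the signal model, the electrode layout, the function name) is an illustrative assumption, not the authors' formula.

```python
import numpy as np

def locate_hand(electrodes, signals):
    """Estimate hand position p from signals s_i assumed proportional to 1/|p - e_i|^2.

    For any electrode pair, s_i*|p - e_i|^2 = s_j*|p - e_j|^2 eliminates the gain.
    Expanding with rho = |p|^2 gives, per pair:
        (s_i - s_j)*rho - 2*(s_i*e_i - s_j*e_j) . p = s_j*|e_j|^2 - s_i*|e_i|^2,
    which is linear in the unknowns (x, y, z, rho). The electrodes must not all
    lie on one common sphere, otherwise the system becomes rank-deficient."""
    e = np.asarray(electrodes, dtype=float)   # (N, 3) electrode positions
    s = np.asarray(signals, dtype=float)      # (N,) measured amplitudes
    A, rhs = [], []
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            A.append(np.append(-2.0 * (s[i] * e[i] - s[j] * e[j]), s[i] - s[j]))
            rhs.append(s[j] * (e[j] @ e[j]) - s[i] * (e[i] @ e[i]))
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)
    return sol[:3]                            # (x, y, z); sol[3] is the auxiliary rho
```

With synthetic signals that follow the assumed model exactly, the solver recovers the simulated position; real electrode data would require the calibrated formula derived in the paper.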
Affiliation(s)
- Kai Tang, Pengfei Li, Chuang Wang, Yifei Wang, Xi Chen: State Key Laboratory of Mechatronics Engineering and Control, Beijing Institute of Technology, Beijing 100081, China