1
Yun J, Jiang D, Huang L, Tao B, Liao S, Liu Y, Liu X, Li G, Chen D, Chen B. Grasping detection of dual manipulators based on Markov decision process with neural network. Neural Netw 2024; 169:778-792. [PMID: 38000180; DOI: 10.1016/j.neunet.2023.09.016] [Received: 11/14/2022; Revised: 09/03/2023; Accepted: 09/07/2023; Indexed: 11/26/2023]
Abstract
With the development of artificial intelligence, robots are widely used in many fields, and grasping detection has become a focus of intelligent robot research. This paper proposes a dual-manipulator grasping detection model based on the Markov decision process to achieve stable grasping of multiple objects in complex scenes. Under the Markov decision process formulation, a cross-entropy convolutional neural network and a fully convolutional neural network are used to parameterize the grasping detection models of the two manipulators, a two-finger gripper and a vacuum sucker, for multiple unknown objects. A data set generated in a simulated environment is used to train the two grasping detection networks. By comparing the grasping quality of the best grasp output by each detection network for the two grasping methods, the network with the better detection performance for the two-finger and vacuum-sucker methods is determined, and the dual-manipulator grasping detection model is constructed. Robot grasping experiments show that the proposed dual-manipulator grasping detection method achieves a 90.6% success rate, much higher than the other experimental groups, verifying the feasibility and superiority of the dual-manipulator grasping detection method based on the Markov decision process.
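The selection rule the abstract describes can be illustrated with a minimal sketch: two detection networks, one per end effector, each predict a pixel-wise grasp quality map from a depth image, and the end effector whose best candidate scores highest is chosen. The architecture, the names QualityFCN and select_grasp, and the input shapes below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class QualityFCN(nn.Module):
    """Tiny fully convolutional head: depth image -> per-pixel grasp quality in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, depth):
        return self.net(depth)

def select_grasp(depth, two_finger_net, suction_net):
    """Pick the end effector and pixel whose predicted grasp quality is highest."""
    with torch.no_grad():
        q_finger = two_finger_net(depth)
        q_suction = suction_net(depth)
    best_finger, best_suction = q_finger.max(), q_suction.max()
    if best_finger >= best_suction:
        pixel = torch.nonzero(q_finger == best_finger)[0][-2:].tolist()
        return "two_finger", pixel, best_finger.item()
    pixel = torch.nonzero(q_suction == best_suction)[0][-2:].tolist()
    return "suction", pixel, best_suction.item()

if __name__ == "__main__":
    depth = torch.rand(1, 1, 64, 64)  # stand-in for a real depth image
    print(select_grasp(depth, QualityFCN(), QualityFCN()))

In the paper the two networks are trained on simulated grasp data and the comparison of their best predicted qualities decides which manipulator acts; here untrained networks only demonstrate the control flow.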
Affiliation(s)
- Juntong Yun
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China; Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan 430081, China
- Du Jiang
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan 430081, China; Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan 430081, China; Hubei Longzhong Laboratory, Xiangyang 441000, Hubei, China
- Li Huang
- College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430081, China; Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan University of Science and Technology, Wuhan 430081, China
- Bo Tao
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China; Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan 430081, China; Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan 430081, China
- Shangchun Liao
- Hubei Longzhong Laboratory, Xiangyang 441000, Hubei, China
- Ying Liu
- Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan 430081, China; Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan 430081, China
- Xin Liu
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China; Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan 430081, China
- Gongfa Li
- Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China; Research Center for Biomimetic Robot and Intelligent Measurement and Control, Wuhan University of Science and Technology, Wuhan 430081, China; Hubei Longzhong Laboratory, Xiangyang 441000, Hubei, China
- Disi Chen
- Robotics and Machine Vision, Bristol Robotics Lab, University of the West of England, United Kingdom
- Baojia Chen
- Hubei Key Laboratory of Hydroelectric Machinery Design & Maintenance, China Three Gorges University, Yichang 443002, China
2
Wang R, Wu XJ, Xu T, Hu C, Kittler J. U-SPDNet: An SPD manifold learning-based neural network for visual classification. Neural Netw 2023; 161:382-396. [PMID: 36780861; DOI: 10.1016/j.neunet.2022.11.030] [Received: 06/15/2022; Revised: 11/07/2022; Accepted: 11/27/2022; Indexed: 12/15/2022]
Abstract
With the development of neural networking techniques, several architectures for symmetric positive definite (SPD) matrix learning have recently been put forward in the computer vision and pattern recognition (CV&PR) community for mining fine-grained geometric features. However, the degradation of structural information during multi-stage feature transformation limits their capacity. To cope with this issue, this paper develops a U-shaped neural network on the SPD manifolds (U-SPDNet) for visual classification. The designed U-SPDNet contains two subsystems: one is a shrinking path (encoder) made up of a prevailing SPD manifold neural network (SPDNet (Huang and Van Gool, 2017)) for capturing compact representations from the input data; the other is a symmetric expanding path (decoder) that upsamples the encoded features and is trained with a reconstruction error term. With this design, the degradation problem is gradually alleviated during training. To enhance the representational capacity of U-SPDNet, we also append skip connections from encoder to decoder, realized by manifold-valued geometric operations, namely the Riemannian barycenter and Riemannian optimization. On the MDSD, Virus, FPHA, and UAV-Human datasets, the accuracy achieved by our method is 6.92%, 8.67%, 1.57%, and 1.08% higher than SPDNet, respectively, confirming its effectiveness.
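As a rough illustration of the SPD operations the abstract refers to, the sketch below implements SPDNet-style encoder layers (BiMap and ReEig, following Huang and Van Gool, 2017) and a log-Euclidean mean used here as a simple stand-in for the Riemannian barycenter in the skip connections. The U-shaped decoder, the Stiefel-constrained Riemannian optimization, and the reconstruction loss are omitted; all names and shapes are assumptions for illustration.

import torch

def bimap(X, W):
    """BiMap layer: project an SPD matrix X to a lower dimension via W X W^T."""
    return W @ X @ W.transpose(-1, -2)

def reeig(X, eps=1e-4):
    """ReEig layer: clamp eigenvalues from below so the output stays positive definite."""
    vals, vecs = torch.linalg.eigh(X)
    return vecs @ torch.diag_embed(vals.clamp(min=eps)) @ vecs.transpose(-1, -2)

def logm(X):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    vals, vecs = torch.linalg.eigh(X)
    return vecs @ torch.diag_embed(vals.clamp(min=1e-8).log()) @ vecs.transpose(-1, -2)

def expm(S):
    """Matrix exponential of a symmetric matrix via its eigendecomposition."""
    vals, vecs = torch.linalg.eigh(S)
    return vecs @ torch.diag_embed(vals.exp()) @ vecs.transpose(-1, -2)

def log_euclidean_mean(mats):
    """Log-Euclidean mean of SPD matrices, a simple proxy for the Riemannian barycenter."""
    return expm(torch.stack([logm(M) for M in mats]).mean(dim=0))

if __name__ == "__main__":
    A = torch.randn(8, 8)
    X = A @ A.T + 8 * torch.eye(8)               # synthetic SPD input
    W = torch.linalg.qr(torch.randn(8, 4)).Q.T   # 4 x 8 projection with orthonormal rows
    Y = reeig(bimap(X, W))                       # one encoder stage: 8x8 SPD -> 4x4 SPD
    skip = log_euclidean_mean([X, X])            # skip-connection fusion of two SPD features
    print(Y.shape, torch.allclose(skip, X, atol=1e-3))

Replacing the log-Euclidean mean with the true affine-invariant barycenter (e.g. via a Karcher-flow iteration) and adding the mirrored expanding path would bring the sketch closer to the U-SPDNet design described above.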
Affiliation(s)
- Rui Wang
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi 214122, China
- Xiao-Jun Wu
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi 214122, China
- Tianyang Xu
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi 214122, China
- Cong Hu
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi 214122, China
- Josef Kittler
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, Guildford GU2 7XH, UK
3
Zou J, Zhang Y, Liu H, Ma L. Monogenic features based single sample face recognition by kernel sparse representation on multiple Riemannian manifolds. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.06.113] [Indexed: 11/29/2022]