1. Shi Y, Ding C, Li S, Li B, Sun X. New RNN Algorithms for Different Time-Variant Matrix Inequalities Solving Under Discrete-Time Framework. IEEE Transactions on Neural Networks and Learning Systems 2025;36:5244-5257. [PMID: 38625777] [DOI: 10.1109/tnnls.2024.3382199]
Abstract
A series of discrete time-variant matrix inequalities is generally regarded as one of the challenging problems in science and engineering. Because these are discrete time-variant problems, existing solving schemes generally rely on theoretical support from the continuous-time framework, and there is no independent solving scheme under the discrete-time framework. This theoretical deficiency greatly limits both the theoretical study and the practical application of discrete time-variant matrix inequalities. In this article, new discrete-time recurrent neural network (RNN) algorithms are proposed, analyzed, and investigated for solving different time-variant matrix inequalities under the discrete-time framework, including the discrete time-variant matrix-vector inequality (MVI), generalized matrix inequality (GMI), generalized-Sylvester matrix inequality (GSMI), and complicated-Sylvester matrix inequality (CSMI); all solving processes are based on the direct-discretization idea. Specifically, the four discrete time-variant matrix inequalities are first presented as the target problems of this research. Second, to solve these problems, we propose corresponding discrete-time RNN (DT-RNN) algorithms (termed the DT-RNN-MVI, DT-RNN-GMI, DT-RNN-GSMI, and DT-RNN-CSMI algorithms), which differ from the traditional DT-RNN design approach in that a second-order Taylor expansion is applied to derive them; this derivation avoids any reliance on the continuous-time framework. Theoretical analyses then establish the convergence and precision of the DT-RNN algorithms, and abundant numerical experiments further confirm their excellent properties.
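To fix ideas, the direct-discretization viewpoint described above can be illustrated on the simplest of the four problems, a time-variant matrix-vector inequality A(t)x(t) ≤ b(t): an elementwise squared slack variable converts the inequality into an equality, and the resulting error is driven toward zero step by step. The sketch below is a minimal forward-Euler version of this idea, not the second-order-Taylor-expansion algorithm of the paper; the function name, gains, and finite-difference derivatives are illustrative assumptions.

```python
import numpy as np

def dt_rnn_mvi_sketch_step(A, b, A_next, b_next, x, s, h=0.01, gamma=10.0):
    """One illustrative discrete step for the inequality A(t) x(t) <= b(t).

    The inequality is rewritten as the equality
        E = A x + s*s - b = 0          (elementwise squared slack s)
    and E is forced toward zero in zeroing-dynamics style.  This is a
    simplified forward-Euler sketch, not the second-order Taylor-expansion
    DT-RNN-MVI algorithm of the cited paper.
    """
    m, n = A.shape
    E = A @ x + s * s - b                     # current inequality-violation error
    J = np.hstack((A, 2.0 * np.diag(s)))      # Jacobian of E w.r.t. y = [x; s]
    dA = (A_next - A) / h                     # finite-difference time derivatives
    db = (b_next - b) / h
    rhs = -gamma * E - dA @ x + db            # from dE/dt = -gamma * E
    y_dot = np.linalg.pinv(J) @ rhs           # minimum-norm rate of [x; s]
    return x + h * y_dot[:n], s + h * y_dot[n:]
```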
2. Xiao L, Song W, Jia L, Li X. ZNN for time-variant nonlinear inequality systems: A finite-time solution. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.05.067]
3. Improved ZND model for solving dynamic linear complex matrix equation and its application. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07581-y]
4. Double Features Zeroing Neural Network Model for Solving the Pseudoninverse of a Complex-Valued Time-Varying Matrix. Mathematics 2022. [DOI: 10.3390/math10122122]
Abstract
The solution of a complex-valued matrix pseudoinverse is a key step in various science and engineering fields. Owing to its importance, researchers have put forward many related algorithms. As research has developed, the time-varying matrix pseudoinverse has received more attention than the time-invariant one, and the zeroing neural network (ZNN) is known to be an efficient method for calculating the pseudoinverse of a complex-valued time-varying matrix. However, the initial ZNN (IZNN) and its extensions lack a mechanism that addresses convergence and robustness together; most existing ZNN models study convergence or robustness separately. To improve both features (i.e., convergence and robustness) of the ZNN simultaneously when solving the complex-valued time-varying pseudoinverse, this paper puts forward a double-feature ZNN (DFZNN) model that adopts a specially designed time-varying parameter and a novel nonlinear activation function. Moreover, two types of complex-valued nonlinear activation are investigated. Global convergence, predefined-time convergence, and robustness are proven in theory, and the upper bound of the predefined convergence time is formulated exactly. Numerical simulations verify the theoretical results: compared with existing complex-valued ZNN models, the DFZNN model has a shorter predefined convergence time in the noise-free case and enhanced robustness under different noise conditions. Both the theoretical and empirical results show that the DFZNN model is better suited to solving the time-varying complex-valued matrix pseudoinverse. Finally, the proposed DFZNN model is used to track the trajectory of a manipulator, further verifying its reliability.
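For context, the basic zeroing-dynamics route to a time-varying right pseudoinverse, which the DFZNN refines with a time-varying parameter and a novel activation, can be sketched as follows. The error function, constant gain, and linear activation below are a minimal placeholder, assuming A(t) has full row rank; they are not the paper's DFZNN design.

```python
import numpy as np

def znn_pinv_sketch_step(A, dA, X, h=1e-3, gamma=20.0, phi=lambda E: E):
    """Forward-Euler step of a zeroing-dynamics flow toward the right
    pseudoinverse A^+(t) = A^H (A A^H)^(-1) of a full-row-rank A(t).

    Error function:  E = X A A^H - A^H   (zero exactly at X = A^+).
    Design formula:  dE/dt = -gamma * phi(E).
    The cited DFZNN uses a specially designed time-varying gamma and a
    nonlinear complex-valued phi; the constant/linear choices here are
    placeholders only.
    """
    AH = A.conj().T
    G = A @ AH                                  # Gram matrix (invertible here)
    dG = dA @ AH + A @ dA.conj().T              # d(A A^H)/dt
    E = X @ G - AH
    dX = (-gamma * phi(E) - X @ dG + dA.conj().T) @ np.linalg.inv(G)
    return X + h * dX
```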
5. Shi Y, Jin L, Li S, Li J, Qiang J, Gerontitis DK. Novel Discrete-Time Recurrent Neural Networks Handling Discrete-Form Time-Variant Multi-Augmented Sylvester Matrix Problems and Manipulator Application. IEEE Transactions on Neural Networks and Learning Systems 2022;33:587-599. [PMID: 33074831] [DOI: 10.1109/tnnls.2020.3028136]
Abstract
In this article, the discrete-form time-variant multi-augmented Sylvester matrix problems, including the discrete-form time-variant multi-augmented Sylvester matrix equation (MASME) and the discrete-form time-variant multi-augmented Sylvester matrix inequality (MASMI), are formulated first. To solve these problems, the multi-augmented Sylvester matrix problems are transformed, in the continuous time-variant environment and with the aid of Kronecker-product and vectorization techniques, into simple linear matrix problems that can be solved by the proposed discrete-time recurrent neural network (RNN) models. Second, theoretical analyses and comparisons of the computational performance of recently developed discretization formulas are presented. Based on these theoretical results, a five-instant discretization formula with superior properties is leveraged to establish the corresponding discrete-time RNN (DTRNN) models for solving the discrete-form time-variant MASME and MASMI, respectively. Note that these DTRNN models are zero-stable, consistent, and convergent with satisfactory precision. Furthermore, illustrative numerical experiments are given to substantiate the excellent performance of the proposed DTRNN models for solving discrete-form time-variant multi-augmented Sylvester matrix problems. In addition, an application to a robot manipulator further extends the theoretical research and demonstrates the physical realizability of RNN methods.
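The Kronecker-product/vectorization step mentioned above is standard and worth spelling out: using vec(AXB) = (Bᵀ ⊗ A) vec(X), a Sylvester-type matrix problem collapses into an ordinary linear system in vec(X). The snippet below shows this conversion for the plain Sylvester equation AX + XB = C; the multi-augmented variants of the paper are handled analogously, and the function name is ours.

```python
import numpy as np

def sylvester_to_linear(A, B, C):
    """Convert A X + X B = C (A: n x n, B: m x m, C: n x m) into M z = c
    with z = vec(X), using vec(A X B) = (B^T kron A) vec(X)."""
    n, m = A.shape[0], B.shape[1]
    M = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    c = C.flatten(order="F")                  # column-stacking vectorization
    return M, c

# Usage sketch:  z = np.linalg.solve(M, c);  X = z.reshape((n, m), order="F")
```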
6. Liu B, Fu D, Qi Y, Huang H, Jin L. Noise-tolerant gradient-oriented neurodynamic model for solving the Sylvester equation. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107514]
7. Kong Y, Hu T, Lei J, Han R. A Finite-Time Convergent Neural Network for Solving Time-Varying Linear Equations with Inequality Constraints Applied to Redundant Manipulator. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10623-6]
8. Design, analysis and verification of recurrent neural dynamics for handling time-variant augmented Sylvester linear system. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.10.036]
9. Kong Y, Jiang Y, Zhou J, Wu H. A time controlling neural network for time-varying QP solving with application to kinematics of mobile manipulators. Int J Intell Syst 2021. [DOI: 10.1002/int.22304]
10. Guo D, Lin X. Li-Function Activated Zhang Neural Network for Online Solution of Time-Varying Linear Matrix Inequality. Neural Process Lett 2020. [DOI: 10.1007/s11063-020-10291-y]
11. Zeng Y, Xiao L, Li K, Li J, Li K, Jian Z. Design and analysis of three nonlinearly activated ZNN models for solving time-varying linear matrix inequalities in finite time. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.01.070]
12. Zhang Y, Ling Y, Li S, Yang M, Tan N. Discrete-time zeroing neural network for solving time-varying Sylvester-transpose matrix inequation via exp-aided conversion. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.12.053]
13. Chen D, Li S, Wu Q, Liao L. Simultaneous identification, tracking control and disturbance rejection of uncertain nonlinear dynamics systems: A unified neural approach. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.11.031]
14. Xu F, Li Z, Nie Z, Shao H, Guo D. Zeroing Neural Network for Solving Time-Varying Linear Equation and Inequality Systems. IEEE Transactions on Neural Networks and Learning Systems 2019;30:2346-2357. [PMID: 30582557] [DOI: 10.1109/tnnls.2018.2884543]
Abstract
A typical recurrent neural network called the zeroing neural network (ZNN) was developed for time-varying problem solving in a previous study. Many applications give rise to time-varying linear equation and inequality systems that must be solved in real time. This paper provides a ZNN model for determining the solution of time-varying linear equation and inequality systems. By introducing a nonnegative slack variable, the time-varying linear equation and inequality systems are transformed into a mixed nonlinear system. The ZNN model is established via the definition of an indefinite error function and the use of an exponential decay formula. Theoretical results indicate the convergence property of the proposed ZNN model. Comparative simulation results demonstrate the effectiveness and superiority of the ZNN model for time-varying linear equation and inequality systems. Furthermore, the proposed ZNN model is applied to robot manipulators, thus showing its applicability.
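The slack-variable conversion described in the abstract can be summarized as follows; the notation is ours and is only meant to make the construction concrete.

```latex
% Time-varying system:  A(t) x(t) = b(t),   C(t) x(t) <= d(t).
% Elementwise squared slack turns the inequality into an equality:
%     C(t) x(t) + \lambda(t) \odot \lambda(t) - d(t) = 0.
% Stacked error function and ZNN design formula (Phi an activation array):
\[
e(t) =
\begin{bmatrix}
A(t)\,x(t) - b(t)\\[2pt]
C(t)\,x(t) + \lambda(t)\odot\lambda(t) - d(t)
\end{bmatrix},
\qquad
\dot e(t) = -\gamma\,\Phi\bigl(e(t)\bigr),\quad \gamma > 0 .
\]
```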
15. Zhang Y, Gong H, Yang M, Li J, Yang X. Stepsize Range and Optimal Value for Taylor-Zhang Discretization Formula Applied to Zeroing Neurodynamics Illustrated via Future Equality-Constrained Quadratic Programming. IEEE Transactions on Neural Networks and Learning Systems 2019;30:959-966. [PMID: 30137015] [DOI: 10.1109/tnnls.2018.2861404]
Abstract
In this brief, future equality-constrained quadratic programming (FECQP) is studied. Via a zeroing-neurodynamics method, a continuous-time zeroing neurodynamics (CTZN) model is presented. By using the Taylor-Zhang discretization formula to discretize the CTZN model, a Taylor-Zhang discrete-time zeroing neurodynamics (TZ-DTZN) model is presented to perform FECQP. Furthermore, we focus on the critical parameter of the TZ-DTZN model, i.e., the stepsize. Through theoretical analyses, we obtain an effective range of the stepsize that guarantees the stability of the TZ-DTZN model. In addition, we further discuss the optimal value of the stepsize, which gives the TZ-DTZN model optimal stability (i.e., the best stability with the fastest convergence). Finally, numerical experiments and application experiments for motion generation of a robot manipulator are conducted to verify the high precision of the TZ-DTZN model and the effective range and optimal value of the stepsize for FECQP.
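To make the setting concrete, a continuous-time zeroing-neurodynamics model for equality-constrained QP and its plain Euler discretization can be sketched as below. The cited work instead applies the Taylor-Zhang discretization formula and analyzes the admissible stepsize range, so the step, gain, and names below are illustrative assumptions only.

```python
import numpy as np

def fecqp_znn_euler_step(P, q, A, b, dP, dq, dA, db, y, h=0.01, gamma=10.0):
    """Euler-discretized zeroing-neurodynamics step for
        min 0.5 x^T P x + q^T x   s.t.   A x = b   (all data time-varying).

    KKT system: M(t) y(t) = u(t), with y = [x; lam], M = [[P, A^T], [A, 0]],
    u = [-q; b].  The zeroing design dE/dt = -gamma*E on E = M y - u gives
    M dy/dt = -dM/dt y + du/dt - gamma (M y - u); one Euler step follows.
    """
    m = A.shape[0]
    Z = np.zeros((m, m))
    M = np.block([[P, A.T], [A, Z]])
    dM = np.block([[dP, dA.T], [dA, Z]])
    u = np.concatenate([-q, b])
    du = np.concatenate([-dq, db])
    y_dot = np.linalg.solve(M, -dM @ y + du - gamma * (M @ y - u))
    return y + h * y_dot
```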
16. Chen D, Zhang Y. Robust Zeroing Neural-Dynamics and Its Time-Varying Disturbances Suppression Model Applied to Mobile Robot Manipulators. IEEE Transactions on Neural Networks and Learning Systems 2018;29:4385-4397. [PMID: 29990177] [DOI: 10.1109/tnnls.2017.2764529]
Abstract
This paper proposes a novel robust zeroing neural-dynamics (RZND) approach, together with its associated model, for solving the inverse kinematics problem of mobile robot manipulators. Unlike existing works that assume neural network models are free of external disturbances, four common forms of time-varying disturbances suppressed by the proposed RZND model are investigated. In addition, detailed theoretical analyses of the anti-disturbance performance prove the effectiveness and robustness of the proposed RZND model under these time-varying disturbances: the RZND model converges toward the exact solution of the inverse kinematics problem of mobile robot manipulators with bounded or zero-oriented steady-state position error. Moreover, simulation studies, comprehensive comparisons with existing neural network models (e.g., the conventional Zhang neural network model and the gradient-based recurrent neural network model), and extensive tests with the four forms of time-varying disturbances substantiate the efficacy, robustness, and superiority of the proposed RZND approach and its disturbance-suppression model for solving the inverse kinematics problem of mobile robot manipulators.
17. Xiang Q, Liao B, Xiao L, Lin L, Li S. Discrete-time noise-tolerant Zhang neural network for dynamic matrix pseudoinversion. Soft Comput 2018. [DOI: 10.1007/s00500-018-3119-8]
18. Bi-criteria minimization with MWVN–INAM type for motion planning and control of redundant robot manipulators. Robotica 2018. [DOI: 10.1017/s0263574717000625]
Abstract
This study proposes and investigates a new type of bi-criteria minimization (BCM) for the motion planning and control of redundant robot manipulators, addressing the discontinuity problem in the infinity-norm acceleration minimization (INAM) scheme and guaranteeing that the final joint velocity of the motion is approximately zero. This new type is based on the combination of the minimum weighted velocity norm (MWVN) and INAM criteria and is thus called the MWVN–INAM–BCM scheme. In formulating such a scheme, joint-angle, joint-velocity, and joint-acceleration limits are incorporated. The proposed MWVN–INAM–BCM scheme is reformulated as a quadratic programming problem solved at the joint-acceleration level. Simulation results based on the PUMA560 robot manipulator validate the efficacy and applicability of the proposed scheme in robotic redundancy resolution. In addition, the physical realizability of the proposed scheme is verified in a practical application based on a six-link planar robot manipulator.
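For readers unfamiliar with acceleration-level redundancy resolution, the kind of QP the scheme is reformulated into has the generic shape below (our notation; the exact MWVN–INAM objective and the conversion of the joint limits are given in the paper):

```latex
\[
\begin{aligned}
\min_{\ddot q}\quad & \tfrac{1}{2}\,\ddot q^{\mathsf T} W\,\ddot q + c^{\mathsf T}\ddot q\\
\text{s.t.}\quad & J(q)\,\ddot q = \ddot r_d - \dot J(q,\dot q)\,\dot q,\\
& \ddot q^{-} \le \ddot q \le \ddot q^{+},
\end{aligned}
\]
% J is the manipulator Jacobian, r_d the desired end-effector trajectory,
% and the bounds encode joint-angle, joint-velocity, and joint-acceleration
% limits after conversion to the acceleration level.
```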
19. Signum-function array activated ZNN with easier circuit implementation and finite-time convergence for linear systems solving. Inform Process Lett 2017. [DOI: 10.1016/j.ipl.2017.04.008]
20. Jin L, Zhang Y, Li S. Integration-Enhanced Zhang Neural Network for Real-Time-Varying Matrix Inversion in the Presence of Various Kinds of Noises. IEEE Transactions on Neural Networks and Learning Systems 2016;27:2615-2627. [PMID: 26625426] [DOI: 10.1109/tnnls.2015.2497715]
Abstract
Matrix inversion often arises in science and engineering. Many models for matrix inversion assume that the solving process is free of noise or that denoising has been performed before the computation. However, time is precious for real-time-varying matrix inversion in practice, and any preprocessing for noise reduction may consume extra time, possibly violating the requirement of real-time computation. Therefore, a new model for time-varying matrix inversion that can handle the noises simultaneously is urgently needed. In this paper, an integration-enhanced Zhang neural network (IEZNN) model is first proposed and investigated for real-time-varying matrix inversion. Then, the conventional ZNN model and the gradient neural network model are presented and employed for comparison. In addition, theoretical analyses show that the proposed IEZNN model has the global exponential convergence property. Moreover, in the presence of various kinds of noises, the proposed IEZNN model is proven to have improved performance. That is, the proposed IEZNN model converges to the theoretical solution of the time-varying matrix inversion problem no matter how large the matrix-form constant noise is, and the residual errors of the proposed IEZNN model can be made arbitrarily small for time-varying and random noises. Finally, three illustrative simulation examples, including an application to the inverse kinematic motion planning of a robot manipulator, are provided and analyzed to substantiate the efficacy and superiority of the proposed IEZNN model for real-time-varying matrix inversion.
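The "integration-enhanced" idea can be stated compactly. With the standard matrix-inversion error function E(t) = A(t)X(t) − I, the design formula adds an integral feedback term to the conventional exponential-decay design; the form below is a summary sketch consistent with the abstract, and the exact noise-perturbed dynamics are given in the paper.

```latex
\[
\dot E(t) \;=\; -\,\gamma\,E(t)\;-\;\lambda\int_{0}^{t} E(\tau)\,\mathrm{d}\tau,
\qquad \gamma,\ \lambda > 0 ,
\]
% versus the conventional ZNN design \dot E = -\gamma E.  The integral term
% plays the role of the "I" part of a PI controller, which is why constant
% noise can be rejected and time-varying/random noise attenuated.
```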
21. Di Marco M, Forti M, Nistri P, Pancioni L. Nonsmooth Neural Network for Convex Time-Dependent Constraint Satisfaction Problems. IEEE Transactions on Neural Networks and Learning Systems 2016;27:295-307. [PMID: 25769174] [DOI: 10.1109/tnnls.2015.2404773]
Abstract
This paper introduces a nonsmooth (NS) neural network that is able to operate in a time-dependent (TD) context and is potentially useful for solving some classes of NS-TD problems. The proposed network is named nonsmooth time-dependent network (NTN) and is an extension to a TD setting of a previous NS neural network for programming problems. Suppose C(t), t ≥ 0, is a nonempty TD convex feasibility set defined by TD inequality constraints. The constraints are in general NS (nondifferentiable) functions of the state variables and time. NTN is described by the subdifferential with respect to the state variables of an NS-TD barrier function and a vector field corresponding to the unconstrained dynamics. This paper shows that for suitable values of the penalty parameter, the NTN dynamics displays two main phases. In the first phase, any solution of NTN not starting in C(0) at t = 0 is able to reach the moving set C(·) in finite time t_h, whereas in the second phase, the solution tracks the moving set, i.e., it stays within C(t) for all subsequent times t ≥ t_h. NTN is thus able to find an exact feasible solution in finite time and also to provide an exact feasible solution for subsequent times. This new and peculiar dynamics displayed by NTN is potentially useful for addressing some significant TD signal processing tasks. As an illustration, this paper discusses a number of examples where NTN is applied to the solution of NS-TD convex feasibility problems.
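Schematically, the dynamics described above can be written as a differential inclusion; the notation here (constraint functions g_i, penalty σ) is ours, introduced only to fix ideas, with C(t) = {x : g_i(x,t) ≤ 0, i = 1,…,m}.

```latex
\[
\dot x(t) \;\in\; \phi\bigl(x(t),t\bigr)\;-\;\sigma\,\partial_x B\bigl(x(t),t\bigr),
\qquad
B(x,t) \;=\; \sum_{i=1}^{m} \max\bigl\{0,\;g_i(x,t)\bigr\},
\]
% where \phi is the vector field of the unconstrained dynamics, \partial_x
% the subdifferential in x, and \sigma > 0 the penalty parameter whose size
% determines whether the finite-time reaching phase and the subsequent
% tracking phase described above occur.
```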
22. Liao B, Zhang Y, Jin L. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators. IEEE Transactions on Neural Networks and Learning Systems 2016;27:225-237. [PMID: 26058059] [DOI: 10.1109/tnnls.2015.2435014]
Abstract
In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN) and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed the Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called the Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, the Euler-type discrete-time ZNN models, and Newton iteration follow the patterns O(h^3), O(h^2), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including application examples, are carried out, whose results further substantiate the theoretical findings and the efficacy of the Taylor-type discrete-time ZNN models. Finally, comparisons with a Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
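The residual-order pattern quoted above traces back to the truncation error of the underlying finite-difference formula. As an illustration, the Euler forward difference has O(h) truncation error, while a Taylor-type forward-difference formula built from x_{k+1}, x_{k-1}, and x_{k-2} reaches O(h²); the particular formula shown below is one such O(h²)-accurate combination and may differ in coefficients from the paper's exact formula.

```latex
% Euler forward difference, truncation error O(h):
\[ \dot x_k \;\approx\; \frac{x_{k+1}-x_k}{h} \]
% A Taylor-type forward-difference formula, truncation error O(h^2)
% (verified by Taylor-expanding x_{k+1}, x_{k-1}, x_{k-2} about x_k):
\[ \dot x_k \;\approx\; \frac{2x_{k+1}-3x_k+2x_{k-1}-x_{k-2}}{2h} \]
```

Discretizing the continuous ZNN dynamics with the O(h)-accurate formula yields O(h²) steady-state residuals, whereas an O(h²)-accurate Taylor-type formula yields O(h³), matching the pattern reported in the abstract.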
23. Liao B, Xiao L, Jin J, Ding L, Liu M. Novel Complex-Valued Neural Network for Dynamic Complex-Valued Matrix Inversion. Journal of Advanced Computational Intelligence and Intelligent Informatics 2016. [DOI: 10.20965/jaciii.2016.p0132]
Abstract
The static matrix inverse has been studied for many years. In this paper, we aim to compute a dynamic complex-valued matrix inverse. Specifically, based on an artful combination of the conventional gradient neural network and the recently proposed Zhang neural network, a novel complex-valued neural network model is presented and investigated for computing the dynamic complex-valued matrix inverse in real time. A hardware implementation structure is also offered. Moreover, both theoretical analysis and simulation results substantiate the effectiveness and advantages of the proposed recurrent neural network model for dynamic complex-valued matrix inversion.
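The Zhang-neural-network half of the combination mentioned above is the classical zeroing dynamics for a time-varying inverse; a minimal real/complex sketch is given below, with the gradient-network part and the hardware structure omitted. Function name, gain, and step size are illustrative assumptions.

```python
import numpy as np

def znn_inverse_sketch_step(A, dA, X, h=1e-3, gamma=50.0):
    """Forward-Euler step of the classical zeroing (Zhang) neural dynamics
    for the inverse of a nonsingular time-varying (possibly complex) A(t).

    Error function E = A X - I and design formula dE/dt = -gamma*E give
        A dX/dt = -dA X - gamma (A X - I).
    """
    n = A.shape[0]
    E = A @ X - np.eye(n, dtype=A.dtype)
    dX = np.linalg.solve(A, -dA @ X - gamma * E)
    return X + h * dX
```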
24. Zhang Z, Li Z, Zhang Y, Luo Y, Li Y. Neural-Dynamic-Method-Based Dual-Arm CMG Scheme With Time-Varying Constraints Applied to Humanoid Robots. IEEE Transactions on Neural Networks and Learning Systems 2015;26:3251-3262. [PMID: 26340789] [DOI: 10.1109/tnnls.2015.2469147]
Abstract
We propose a dual-arm cyclic-motion-generation (DACMG) scheme based on a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, following a neural-dynamic design method, a cyclic-motion performance index is first constructed and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of the two arms and the time-varying joint limits. It can not only generate cyclic motion of the two arms of a humanoid robot but also control the arms to move to a desired position, while accounting for physical-limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and accuracy of the TVC-DACMG scheme and the neural network solver.
25. Guo D, Zhang Y. Li-function activated ZNN with finite-time convergence applied to redundant-manipulator kinematic control via time-varying Jacobian matrix pseudoinversion. Appl Soft Comput 2014. [DOI: 10.1016/j.asoc.2014.06.045]