1. Huang H, Zeng Z. An Accelerated Approach on Adaptive Gradient Neural Network for Solving Time-Dependent Linear Equations: A State-Triggered Perspective. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:5070-5081. [PMID: 38483798] [DOI: 10.1109/tnnls.2024.3371008]
Abstract
To improve acceleration performance, a hybrid state-triggered discretization (HSTD) is proposed for the adaptive gradient neural network (AGNN) used to solve time-dependent linear equations (TDLEs). Unlike existing approaches that rely on an activation function or a time-varying coefficient for acceleration, the proposed HSTD is designed from a control-theory perspective. It comprises two essential components: adaptive sampling interval state-triggered discretization (ASISTD) and adaptive coefficient state-triggered discretization (ACSTD). The former fills the gap in acceleration methods concerning variable sampling periods, while the latter exploits the underlying evolutionary dynamics of the Lyapunov function to determine coefficients greedily. Finally, in comparison with commonly used discretization methods, the acceleration performance and computational advantages of the proposed HSTD are substantiated through numerical simulations and robotic applications.
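As background for this entry: the gradient neural dynamics that such accelerated schemes build on follow a standard energy-descent design. The sketch below is a generic gradient neural network for a time-dependent linear equation, not the paper's AGNN/HSTD construction; the energy function and the gain symbol are introduced here purely for illustration.

```latex
% Generic gradient neural network (GNN) for A(t) x(t) = b(t); illustrative only.
\varepsilon(x,t) = \tfrac{1}{2}\,\lVert A(t)\,x(t) - b(t) \rVert_2^2,
\qquad
\dot{x}(t) = -\gamma\,\frac{\partial \varepsilon}{\partial x}
           = -\gamma\, A^{\mathsf{T}}(t)\bigl(A(t)\,x(t) - b(t)\bigr),
\qquad \gamma > 0.
% Adaptive variants replace the constant gain \gamma with a state- or time-dependent coefficient.
```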
2. Song F, Zhou Y, Xu C, Sun Z. A novel discrete zeroing neural network for online solving time-varying nonlinear optimization problems. Front Neurorobot 2024; 18:1446508. [PMID: 39165272] [PMCID: PMC11333311] [DOI: 10.3389/fnbot.2024.1446508]
Abstract
To reduce transportation time, a discrete zeroing neural network (DZNN) method is proposed to solve the shortest path planning problem with a single starting point and a single target point. The shortest path planning problem is reformulated as an optimization problem, and a discrete nonlinear function related to the energy function is established so that the lowest-energy state corresponds to the optimal path solution. Theoretical analyses demonstrate that the discrete ZNN model (DZNNM) exhibits zero stability, effectiveness, and real-time performance in handling time-varying nonlinear optimization problems (TVNOPs). Simulations with various parameters confirm the efficiency and real-time performance of the developed DZNNM for TVNOPs, indicating its suitability and superiority for solving the shortest path planning problem in real time.
Affiliation(s)
- Feifan Song: School of Finance, Changchun Finance College, Changchun, China
- Changxian Xu: Department of Mechanical and Electrical Engineering, Changchun University of Technology, Changchun, China
- Zhongbo Sun: Department of Control Engineering, Changchun University of Technology, Changchun, China
3. Liao B, Han L, Cao X, Li S, Li J. Double integral-enhanced zeroing neural network with linear noise rejection for time-varying matrix inverse. CAAI Transactions on Intelligence Technology 2023. [DOI: 10.1049/cit2.12161]
Affiliation(s)
- Bolin Liao: College of Computer Science and Engineering, Jishou University, Jishou, China
- Luyang Han: College of Computer Science and Engineering, Jishou University, Jishou, China
- Xinwei Cao: School of Management, Shanghai University, Shanghai, China
- Shuai Li: School of Engineering, Swansea University, Swansea, UK
- Jianfeng Li: College of Computer Science and Engineering, Jishou University, Jishou, China
4. Yang M, Zhang Y, Tan N, Hu H. Explicit Linear Left-and-Right 5-Step Formulas With Zeroing Neural Network for Time-Varying Applications. IEEE Transactions on Cybernetics 2023; 53:1133-1143. [PMID: 34464284] [DOI: 10.1109/tcyb.2021.3104138]
Abstract
In this article, differing from conventional time-discretization (simply called discretization) formulas, explicit linear left-and-right 5-step (ELLR5S) formulas with sixth-order precision are proposed. The general sixth-order ELLR5S formula with four variable parameters is developed first, and constraints on these four parameters are derived to guarantee the zero stability, consistency, and convergence of the formula. Then, by choosing specific parameter values within these constraints, eight specific sixth-order ELLR5S formulas are developed. The general sixth-order ELLR5S formula is further utilized to generate discrete zeroing neural network (DZNN) models for solving time-varying linear and nonlinear systems. For comparison, three conventional discretization formulas are also utilized. Theoretical analyses are presented to characterize the performance of the ELLR5S formulas and DZNN models. Furthermore, extensive experiments, including three practical applications, namely angle-of-arrival (AoA) localization and the control of two redundant manipulators (a PUMA560 manipulator and a Kinova manipulator), are conducted. The synthesized results substantiate the efficacy and superiority of the sixth-order ELLR5S formulas as well as the corresponding DZNN models.
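For orientation, the sketch below shows how a discretization formula turns the continuous zeroing-neural-network design into a discrete model, using only the simplest Euler-type formula rather than the paper's sixth-order ELLR5S formulas; the matrices A(t) and b(t) are invented for demonstration.

```python
# Minimal sketch: Euler-type discrete zeroing neural network (DZNN) for A(t) x(t) = b(t).
# Illustrative only -- the cited paper uses sixth-order ELLR5S formulas, not the Euler formula,
# and the example A(t), b(t) below are invented for demonstration.
import numpy as np

gamma, tau, T = 10.0, 0.001, 5.0          # ZNN gain, sampling gap, time horizon

def A(t):   # time-varying coefficient matrix (assumed nonsingular)
    return np.array([[3.0 + np.sin(t), 0.5], [0.5, 3.0 + np.cos(t)]])

def b(t):   # time-varying right-hand side
    return np.array([np.cos(2 * t), np.sin(2 * t)])

def A_dot(t, h=1e-6):   # numerical time derivatives, kept simple for the sketch
    return (A(t + h) - A(t - h)) / (2 * h)

def b_dot(t, h=1e-6):
    return (b(t + h) - b(t - h)) / (2 * h)

x = np.linalg.solve(A(0.0), b(0.0))        # start from the exact initial solution
for k in range(int(T / tau)):
    t = k * tau
    e = A(t) @ x - b(t)                    # ZNN error function e(t) = A(t) x(t) - b(t)
    # Continuous ZNN design: A x_dot = b_dot - A_dot x - gamma * e
    x_dot = np.linalg.solve(A(t), b_dot(t) - A_dot(t) @ x - gamma * e)
    x = x + tau * x_dot                    # Euler-type one-step-ahead update

t_end = int(T / tau) * tau
print("residual at t = %.2f: %.3e" % (t_end, np.linalg.norm(A(t_end) @ x - b(t_end))))
```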
5. Vural NM, Ilhan F, Yilmaz SF, Ergut S, Kozat SS. Achieving Online Regression Performance of LSTMs With Simple RNNs. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:7632-7643. [PMID: 34138720] [DOI: 10.1109/tnnls.2021.3086029]
Abstract
Recurrent neural networks (RNNs) are widely used for online regression due to their ability to generalize nonlinear temporal dependencies. As an RNN model, long short-term memory networks (LSTMs) are commonly preferred in practice, as they can learn long-term dependencies while avoiding the vanishing gradient problem. However, due to their large number of parameters, training LSTMs requires considerably more time than training simple RNNs (SRNNs). In this article, we efficiently achieve the online regression performance of LSTMs with SRNNs. To this end, we introduce a first-order training algorithm with linear time complexity in the number of parameters. We show that when SRNNs are trained with our algorithm, they provide regression performance very similar to that of LSTMs in two to three times shorter training time. We support our experimental results with strong theoretical analysis by deriving regret bounds on the convergence rate of our algorithm. Through an extensive set of experiments, we verify our theoretical work and demonstrate significant performance improvements of our algorithm with respect to LSTMs and other state-of-the-art learning models.
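The following is a minimal, generic online-training loop for a simple RNN regressor, included only to illustrate the setting; it is not the cited paper's algorithm (which carries regret guarantees), and all data and hyperparameters are made up.

```python
# Minimal sketch of online regression with a simple RNN (SRNN) trained by per-sample SGD.
# Illustrative only -- NOT the cited paper's algorithm; data and hyperparameters are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h, lr = 3, 8, 0.05
W_x = rng.normal(scale=0.3, size=(n_h, n_in))
W_h = rng.normal(scale=0.3, size=(n_h, n_h))
w_o = rng.normal(scale=0.3, size=n_h)
h = np.zeros(n_h)

for t in range(2000):
    x_t = rng.normal(size=n_in)                     # incoming feature vector
    y_t = 0.8 * x_t[0] - 0.5 * x_t[1]               # synthetic regression target
    h_new = np.tanh(W_x @ x_t + W_h @ h)
    y_hat = w_o @ h_new
    err = y_hat - y_t
    # One-step (truncated) gradients: past hidden states are treated as constants.
    g_o = err * h_new
    g_z = err * w_o * (1.0 - h_new ** 2)
    W_x -= lr * np.outer(g_z, x_t)
    W_h -= lr * np.outer(g_z, h)
    w_o -= lr * g_o
    h = h_new
    if (t + 1) % 500 == 0:
        print("step %d, squared error %.4f" % (t + 1, err ** 2))
```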
6. A novel form-finding method via noise-tolerant neurodynamic model for symmetric tensegrity structure. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-08039-x]
7. Wang K, Liu T, Zhang Y, Tan N. Discrete-time future nonlinear neural optimization with equality constraint based on ten-instant ZTD formula. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.03.010]
8. Wang G, Hao Z, Zhang B, Fang L, Mao D. A robust Newton iterative algorithm for acoustic location based on solving linear matrix equations in the presence of various noises. Appl Intell 2022. [DOI: 10.1007/s10489-022-03483-7]
9. Sun Z, Zhao L, Liu K, Jin L, Yu J, Li C. An advanced form-finding of tensegrity structures aided with noise-tolerant zeroing neural network. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06745-6]
10. Luo G, Yang Z, Zhang Q. Identification of autonomous nonlinear dynamical system based on discrete-time multiscale wavelet neural network. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06142-z]
11. Zhang Y, Ling Y, Yang M, Yang S, Zhang Z. Inverse-Free Discrete ZNN Models Solving for Future Matrix Pseudoinverse via Combination of Extrapolation and ZeaD Formulas. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:2663-2675. [PMID: 32745006] [DOI: 10.1109/tnnls.2020.3007509]
Abstract
The time-varying matrix pseudoinverse (TVMP) problem has been investigated by many researchers in recent years, but a new class of matrices, termed Zhang matrices, has been identified that cannot be handled by some conventional models, e.g., the Getz-Marsden dynamic model. On the other hand, future matrix pseudoinverse (FMP), as a more challenging and intractable discrete-time problem, deserves more attention due to its significant role in engineering applications such as redundant manipulator control. Based on the zeroing neural network (ZNN), this article concentrates on designing new discrete ZNN models for computing the FMPs of all full-rank matrices, including Zhang matrices. First, an inverse-free continuous ZNN model for computing the TVMP is derived. Subsequently, Zhang et al. discretization (ZeaD) formulas and equidistant extrapolation formulas are used to discretize the continuous ZNN model into two discrete ZNN models with different truncation errors for computing FMPs. Numerical experiments are conducted for five conventional discrete models and the two new discrete ZNN models. Distinct numerical results substantiate the effectiveness and superiority of the newly proposed models. Finally, one of the newly proposed models is implemented on simulated and physical robot manipulators to show its practicability.
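For reference, equidistant extrapolation predicts the next sample of a time-varying matrix from its most recent samples. The low-order formulas below are standard examples with sampling gap τ and are not necessarily the specific ones used in the paper.

```latex
% Standard equidistant extrapolation formulas for predicting A(t_{k+1}) from past samples
% (illustrative; the cited paper's specific choices may differ). With sampling gap \tau:
\hat{A}_{k+1} = 2A_k - A_{k-1} + O(\tau^2),
\qquad
\hat{A}_{k+1} = 3A_k - 3A_{k-1} + A_{k-2} + O(\tau^3).
```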
12. Li J, Shi Y, Xuan H. Unified Model Solving Nine Types of Time-Varying Problems in the Frame of Zeroing Neural Network. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:1896-1905. [PMID: 32484780] [DOI: 10.1109/tnnls.2020.2995396]
Abstract
Many time-varying problems have been solved using the zeroing neural network proposed by Zhang et al. In this article, nine types of time-varying problems, namely the time-varying nonlinear equation system, time-varying linear equation system, time-varying convex nonlinear optimization under linear equalities, unconstrained time-varying convex nonlinear optimization, time-varying convex quadratic programming under linear equalities, unconstrained time-varying convex quadratic programming, time-varying nonlinear inequality system, time-varying linear inequality system, and time-varying division, are investigated to better understand the essence of the zeroing neural network. Discrete-form time-varying problems are studied by considering the unknown nature of the future and the requirement of real-time computation for time-varying problems. A unified model is proposed in the frame of the zeroing neural network to solve these time-varying problems uniformly on the basis of their connections and a newly developed discretization formula. Theoretical analyses and numerical experiments, including the tracking control of a PUMA560 robot manipulator, verify the effectiveness and precision of the proposed unified model.
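The common thread across these problem types is the ZNN design recipe: pick a problem-specific error function whose zero coincides with the desired solution and force it to decay. A minimal generic sketch (not the paper's unified discrete model) follows.

```latex
% Generic zeroing neural network (ZNN) design formula (illustrative).
% Choose a problem-specific error function E(t) whose zero coincides with the desired solution,
% e.g. E(t) = A(t)x(t) - b(t) for a linear system or E(t) = \nabla_x f(x(t), t) for minimization,
% then enforce decay of the error:
\dot{E}(t) = -\gamma\,\Phi\bigl(E(t)\bigr), \qquad \gamma > 0,
% where \Phi(\cdot) is a monotonically increasing, odd activation mapping (identity in the simplest case).
% Expanding \dot{E}(t) for the chosen E yields the (possibly implicit) dynamics of the state x(t).
```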
13. A Vary-Parameter Convergence-Accelerated Recurrent Neural Network for Online Solving Dynamic Matrix Pseudoinverse and its Robot Application. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10440-x]
14. Design, analysis and verification of recurrent neural dynamics for handling time-variant augmented Sylvester linear system. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.10.036]
15. Shi T, Tian Y, Sun Z, Liu K, Jin L, Yu J. Noise-tolerant neural algorithm for online solving Yang-Baxter-type matrix equation in the presence of noises: A control-based method. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.10.110]
16. Jin J. An Improved Finite Time Convergence Recurrent Neural Network with Application to Time-Varying Linear Complex Matrix Equation Solution. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10426-9]
17. A better robustness and fast convergence zeroing neural network for solving dynamic nonlinear equations. Neural Comput Appl 2021. [DOI: 10.1007/s00521-020-05617-9]
18. Xiao L, Dai J, Lu R, Li S, Li J, Wang S. Design and Comprehensive Analysis of a Noise-Tolerant ZNN Model With Limited-Time Convergence for Time-Dependent Nonlinear Minimization. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:5339-5348. [PMID: 32031952] [DOI: 10.1109/tnnls.2020.2966294]
Abstract
The zeroing neural network (ZNN) is a powerful tool for addressing mathematical and optimization problems that arise broadly in science and engineering. Convergence and robustness are always co-pursued in ZNN design. However, no previous ZNN for time-dependent nonlinear minimization simultaneously achieves limited-time convergence and inherent noise suppression. In this article, to satisfy these two requirements, a limited-time robust neural network (LTRNN) is devised and presented to solve time-dependent nonlinear minimization under various external disturbances. Different from previous ZNN models for this problem, which offer either limited-time convergence or noise suppression, the proposed LTRNN model possesses both characteristics simultaneously. Besides, rigorous theoretical analyses are given to prove the superior performance of the LTRNN model when adopted to solve time-dependent nonlinear minimization under external disturbances. Comparative results also substantiate the effectiveness and advantages of the LTRNN via solving a time-dependent nonlinear minimization problem.
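For intuition, one common way in the ZNN literature to combine these two properties is to pair a sign-power activation (for limited/finite-time convergence) with integral feedback (for rejecting constant disturbances). The sketch below is a generic combination of that kind, not the LTRNN design, and all symbols are introduced here purely for illustration.

```latex
% Generic noise-tolerant, finite-time ZNN-style error dynamics (illustrative only, not the LTRNN model).
% e(t) is the error to be zeroed; the sign-power activation accelerates convergence near zero,
% while the integral term rejects constant (and attenuates bounded) additive disturbances n(t):
\dot{e}(t) = -\gamma_1\,\Phi\bigl(e(t)\bigr) - \gamma_2 \int_0^t \Phi\bigl(e(\sigma)\bigr)\,\mathrm{d}\sigma + n(t),
\qquad
\Phi(e) = \lvert e \rvert^{\rho}\,\mathrm{sign}(e) + e, \quad 0 < \rho < 1,\ \gamma_1, \gamma_2 > 0.
```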
19. Improved recurrent neural networks for solving Moore-Penrose inverse of real-time full-rank matrix. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.08.026]
20. Zhang J, Jin L, Cheng L. RNN for Perturbed Manipulability Optimization of Manipulators Based on a Distributed Scheme: A Game-Theoretic Perspective. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:5116-5126. [PMID: 32011266] [DOI: 10.1109/tnnls.2020.2963998]
Abstract
To leverage the unique advantages of redundant manipulators, avoiding singularities during motion planning and control should be treated as a fundamental issue. In this article, a distributed scheme is proposed to improve the manipulability of redundant manipulators in a group. To this end, the manipulability index is incorporated into the cooperative control of multiple manipulators in a distributed network and is used to guide the manipulators to adjust to the optimal spatial position. Moreover, from the perspective of game theory, this article formulates the problem as a Nash equilibrium problem. Then, a neural network with anti-noise ability is constructed to seek and approximate the optimal strategy profile of the Nash equilibrium problem with time-varying parameters. Theoretical analyses show that the neural network model has superior global convergence and noise immunity. Finally, simulation results demonstrate that the neural network is effective for real-time cooperative motion generation of multiple redundant manipulators under perturbations in distributed networks.
21. Xiao L, Jia L, Dai J, Tan Z. Design and Application of A Robust Zeroing Neural Network to Kinematical Resolution of Redundant Manipulators Under Various External Disturbances. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.07.040]
22. Discrete-time nonlinear optimization via zeroing neural dynamics based on explicit linear multi-step methods for tracking control of robot manipulators. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.05.093]
23. Prescribed-time convergent and noise-tolerant Z-type neural dynamics for calculating time-dependent quadratic programming. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05356-x]
24. Tan Z, Li W, Xiao L, Hu Y. New Varying-Parameter ZNN Models With Finite-Time Convergence and Noise Suppression for Time-Varying Matrix Moore-Penrose Inversion. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:2980-2992. [PMID: 31536017] [DOI: 10.1109/tnnls.2019.2934734]
Abstract
This article aims to compute the Moore-Penrose inverse of time-varying full-rank matrices in real time in the presence of various noises. For this purpose, two varying-parameter zeroing neural networks (VPZNNs) are proposed. Specifically, the VPZNN-R and VPZNN-L models, which are based on a new design formula, are designed to solve the right and left Moore-Penrose inversion problems of time-varying full-rank matrices, respectively. The two VPZNN models are activated by two novel varying-parameter nonlinear activation functions. Detailed theoretical derivations are presented to show the desired finite-time convergence and outstanding robustness of the proposed VPZNN models under various kinds of noises. In addition, existing neural models, such as the original ZNN (OZNN) and the integration-enhanced ZNN (IEZNN), are compared with the VPZNN models. Simulation observations verify the advantages of the VPZNN models over the OZNN and IEZNN models in terms of convergence and robustness. The potential of the VPZNN models for robotic applications is then illustrated by an example of robot path tracking.
25. Xiao X, Fu D, Wang G, Liao S, Qi Y, Huang H, Jin L. Two neural dynamics approaches for computing system of time-varying nonlinear equations. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.02.011]
27. Chen D, Li S, Wu Q, Liao L. Simultaneous identification, tracking control and disturbance rejection of uncertain nonlinear dynamics systems: A unified neural approach. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.11.031]
28. Gong W, Chen D, Li S. Active Sensing of Robot Arms Based on Zeroing Neural Networks: A Biological-Heuristic Optimization Model. IEEE Access 2020; 8:25976-25989. [DOI: 10.1109/access.2020.2971020]
29. Noise-suppressing zeroing neural network for online solving time-varying nonlinear optimization problem: a control-based approach. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04639-2]
30. Jin J, Zhao L, Li M, Yu F, Xi Z. Improved zeroing neural networks for finite time solving nonlinear equations. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04622-x]
31. Step-width theoretics and numerics of four-point general DTZN model for future minimization using Jury stability criterion. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.04.054]
32. Zhang Y, Qi Z, Qiu B, Yang M, Xiao M. Zeroing Neural Dynamics and Models for Various Time-Varying Problems Solving with ZLSF Models as Minimization-Type and Euler-Type Special Cases [Research Frontier]. IEEE Comput Intell Mag 2019. [DOI: 10.1109/mci.2019.2919397]
33. Li J, Zhang Y, Mao M. Five-instant type discrete-time ZND solving discrete time-varying linear system, division and quadratic programming. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.11.064]
34. Qiu B, Zhang Y, Yang Z. New Discrete-Time ZNN Models for Least-Squares Solution of Dynamic Linear Equation System With Time-Varying Rank-Deficient Coefficient. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:5767-5776. [PMID: 29993872] [DOI: 10.1109/tnnls.2018.2805810]
Abstract
In this brief, a new one-step-ahead numerical differentiation rule, called the six-instant g-cube finite difference (6IgCFD) formula, is proposed for first-order derivative approximation with higher precision than existing finite difference formulas (i.e., Euler and Taylor types). Subsequently, by exploiting the proposed 6IgCFD formula to discretize the continuous-time Zhang neural network model, two new-type discrete-time ZNN (DTZNN) models, namely the new-type DTZNNK and DTZNNU models, are designed and generalized to compute, in real time, the least-squares solution of a dynamic linear equation system with a time-varying rank-deficient coefficient, which is quite different from existing ZNN-related studies on solving continuous-time and discrete-time (dynamic or static) linear equation systems with full-rank coefficients. Specifically, the corresponding dynamic normal equation system, whose solution exactly corresponds to the least-squares solution of the dynamic linear equation system, is introduced to solve such a rank-deficient least-squares problem efficiently and accurately. Theoretical analyses show that the maximal steady-state residual errors of the two new-type DTZNN models follow an $O(g^4)$ pattern, where $g$ denotes the sampling gap. Comparative numerical experimental results further substantiate the superior computational performance of the new-type DTZNN models in solving the rank-deficient least-squares problem of dynamic linear equation systems.
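To make the precision comparison concrete, the short script below numerically checks the truncation orders of the Euler forward difference (first order) and a Taylor-type one-step-ahead formula (second order) commonly used in this literature; the paper's 6IgCFD coefficients are not reproduced here.

```python
# Numerical check of truncation orders for one-step-ahead difference formulas (illustrative).
# Euler forward: x'(t_k) ~ (x_{k+1} - x_k)/g, error O(g).
# Taylor-type one-step-ahead formula: x'(t_k) ~ (2x_{k+1} - 3x_k + 2x_{k-1} - x_{k-2})/(2g), error O(g^2).
# The cited paper's 6IgCFD coefficients are NOT reproduced here.
import numpy as np

f = np.sin            # test function
df = np.cos           # exact derivative
t = 1.0

for g in (1e-2, 5e-3, 2.5e-3):
    euler = (f(t + g) - f(t)) / g
    taylor = (2 * f(t + g) - 3 * f(t) + 2 * f(t - g) - f(t - 2 * g)) / (2 * g)
    print("g = %.4f | Euler error = %.3e | Taylor-type error = %.3e"
          % (g, abs(euler - df(t)), abs(taylor - df(t))))
# Halving g roughly halves the Euler error (first order) and quarters the Taylor-type error (second order).
```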
36. Chen D, Zhang Y. Robust Zeroing Neural-Dynamics and Its Time-Varying Disturbances Suppression Model Applied to Mobile Robot Manipulators. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:4385-4397. [PMID: 29990177] [DOI: 10.1109/tnnls.2017.2764529]
Abstract
This paper proposes a novel robust zeroing neural-dynamics (RZND) approach, together with its associated model, for solving the inverse kinematics problem of mobile robot manipulators. Unlike existing works that assume neural network models are free of external disturbances, this paper investigates four common forms of time-varying disturbances that the proposed RZND model suppresses. In addition, detailed theoretical analyses of the antidisturbance performance are presented to prove the effectiveness and robustness of the proposed RZND model with time-varying disturbances suppressed. That is, the RZND model converges toward the exact solution of the inverse kinematics problem of mobile robot manipulators with bounded or zero-oriented steady-state position error. Moreover, simulation studies and comprehensive comparisons with existing neural network models (e.g., the conventional Zhang neural network model and the gradient-based recurrent neural network model), together with extensive tests under the four common forms of time-varying disturbances, substantiate the efficacy, robustness, and superiority of the proposed RZND approach and its disturbance-suppression model for solving the inverse kinematics problem of mobile robot manipulators.
37. Guo D, Yan L, Nie Z. Design, Analysis, and Representation of Novel Five-Step DTZD Algorithm for Time-Varying Nonlinear Optimization. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:4248-4260. [PMID: 29990090] [DOI: 10.1109/tnnls.2017.2761443]
Abstract
Continuous-time and discrete-time forms of Zhang dynamics (ZD) for time-varying nonlinear optimization have been developed recently. In this paper, a novel discrete-time ZD (DTZD) algorithm is proposed and investigated based on the previous research. Specifically, the DTZD algorithm for time-varying nonlinear optimization is developed by adopting a new Taylor-type difference rule. This algorithm is a five-step iteration process, and thus, is referred to as the five-step DTZD algorithm in this paper. Theoretical analysis and results of the proposed five-step DTZD algorithm are presented to highlight its excellent computational performance. The geometric representation of the proposed algorithm for time-varying nonlinear optimization is also provided. Comparative numerical results are illustrated with four examples to substantiate the efficacy and superiority of the proposed five-step DTZD algorithm for time-varying nonlinear optimization compared with the previous DTZD algorithms.
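The continuous-time Zhang dynamics that such discrete algorithms sample can be sketched as follows; this is the generic design only, and the paper's five-step Taylor-type coefficients are not reproduced here.

```latex
% Continuous-time Zhang dynamics (ZD) for time-varying nonlinear minimization of f(x(t), t)
% (generic sketch; the cited paper's five-step discretization coefficients are not reproduced).
% Zero the gradient: define e(t) = \nabla_x f(x(t), t) and impose \dot{e}(t) = -\gamma e(t):
H\bigl(x(t),t\bigr)\,\dot{x}(t) + \frac{\partial}{\partial t}\nabla_x f\bigl(x(t),t\bigr)
  = -\gamma\,\nabla_x f\bigl(x(t),t\bigr)
\;\;\Longrightarrow\;\;
\dot{x}(t) = -H^{-1}\bigl(x(t),t\bigr)\Bigl(\gamma\,\nabla_x f\bigl(x(t),t\bigr)
  + \tfrac{\partial}{\partial t}\nabla_x f\bigl(x(t),t\bigr)\Bigr),
% where H = \nabla_x^2 f is the Hessian (assumed invertible) and \gamma > 0.
```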
38. Xiao L, Liao B, Li S, Chen K. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw 2018; 98:102-113. [DOI: 10.1016/j.neunet.2017.11.011]
39. Shi Y, Zhang Y. Discrete time-variant nonlinear optimization and system solving via integral-type error function and twice ZND formula with noises suppressed. Soft Comput 2018. [DOI: 10.1007/s00500-018-3020-5]
40. Liao B, Xiang Q. Robustness Analyses and Optimal Sampling Gap of Recurrent Neural Network for Dynamic Matrix Pseudoinversion. Journal of Advanced Computational Intelligence and Intelligent Informatics 2017. [DOI: 10.20965/jaciii.2017.p0778]
Abstract
This study analyses the robustness and convergence characteristics of a neural network. First, a special class of recurrent neural network (RNN), termed a continuous-time Zhang neural network (CTZNN) model, is presented and investigated for dynamic matrix pseudoinversion. Theoretical analysis of the CTZNN model demonstrates that it has good robustness against various types of noise. In addition, considering the requirements of digital implementation and online computation, the optimal sampling gap for a discrete-time Zhang neural network (DTZNN) model under noisy environments is proposed. Finally, experimental results are presented, which further substantiate the theoretical analyses and demonstrate the effectiveness of the proposed ZNN models for computing a dynamic matrix pseudoinverse under noisy environments.
41. Jin L, Zhang Y, Li S. Integration-Enhanced Zhang Neural Network for Real-Time-Varying Matrix Inversion in the Presence of Various Kinds of Noises. IEEE Transactions on Neural Networks and Learning Systems 2016; 27:2615-2627. [PMID: 26625426] [DOI: 10.1109/tnnls.2015.2497715]
Abstract
Matrix inversion often arises in the fields of science and engineering. Many models for matrix inversion usually assume that the solving process is free of noises or that the denoising has been conducted before the computation. However, time is precious for the real-time-varying matrix inversion in practice, and any preprocessing for noise reduction may consume extra time, possibly violating the requirement of real-time computation. Therefore, a new model for time-varying matrix inversion that is able to handle simultaneously the noises is urgently needed. In this paper, an integration-enhanced Zhang neural network (IEZNN) model is first proposed and investigated for real-time-varying matrix inversion. Then, the conventional ZNN model and the gradient neural network model are presented and employed for comparison. In addition, theoretical analyses show that the proposed IEZNN model has the global exponential convergence property. Moreover, in the presence of various kinds of noises, the proposed IEZNN model is proven to have an improved performance. That is, the proposed IEZNN model converges to the theoretical solution of the time-varying matrix inversion problem no matter how large the matrix-form constant noise is, and the residual errors of the proposed IEZNN model can be arbitrarily small for time-varying noises and random noises. Finally, three illustrative simulation examples, including an application to the inverse kinematic motion planning of a robot manipulator, are provided and analyzed to substantiate the efficacy and superiority of the proposed IEZNN model for real-time-varying matrix inversion.
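The design difference between the conventional ZNN and the integration-enhanced model described here amounts to adding integral feedback to the error dynamics. A minimal sketch for time-varying matrix inversion, with notation introduced here for illustration, follows.

```latex
% Error function for time-varying matrix inversion: E(t) = A(t) X(t) - I.
% Conventional ZNN design (illustrative):
\dot{E}(t) = -\gamma\, E(t), \qquad \gamma > 0.
% Integration-enhanced design with an additional integral feedback term, which drives E(t) to zero
% even under constant additive noise (sketch of the design principle, notation introduced here):
\dot{E}(t) = -\gamma\, E(t) - \lambda \int_0^{t} E(\sigma)\,\mathrm{d}\sigma, \qquad \lambda > 0.
```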