1
Zhang Z, Sun X, Li X, Liu Y. An adaptive variable-parameter dynamic learning network for solving constrained time-varying QP problem. Neural Netw 2025; 184:106968. [PMID: 39671983] [DOI: 10.1016/j.neunet.2024.106968]
Abstract
To efficiently solve the time-varying convex quadratic programming (TVCQP) problem under equality constraints, an adaptive variable-parameter dynamic learning network (AVDLN) is proposed and analyzed. Unlike the existing varying-parameter and fixed-parameter convergent-differential neural networks (VPCDNN and FPCDNN), the proposed AVDLN integrates the error signals into the time-varying parameter term. To this end, the TVCQP problem is first transformed into a time-varying matrix equation. Second, an adaptive time-varying design formula is devised for the error function, and the error function is integrated into the time-varying parameter. The AVDLN is then constructed from this adaptive time-varying design formula. Moreover, the convergence and robustness of the AVDLN are proved by Lyapunov stability analysis, and mathematical analysis demonstrates that the AVDLN possesses a smaller upper bound on the convergence error and a faster error convergence rate than the FPCDNN and VPCDNN. Finally, the validity of the AVDLN is demonstrated by simulations, and comparative results show that the proposed AVDLN has a faster convergence speed and smaller error fluctuations.
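The error-adaptive gain idea summarized above can be illustrated with a minimal scalar sketch (this is not the paper's AVDLN; the dynamics, gains, and step size below are illustrative assumptions): a fixed-parameter error dynamics e' = -γe is compared with a variant whose gain grows with the error, e' = -(γ + |e|)e, and the adaptive variant reaches a smaller residual in the same integration time.

```python
# Sketch (not the paper's AVDLN): fixed-gain vs error-adaptive-gain
# error dynamics, integrated by forward Euler. All constants
# (gamma, e0, dt, steps) are illustrative choices.

def simulate(adaptive, gamma=1.0, e0=2.0, dt=1e-3, steps=5000):
    e = e0
    for _ in range(steps):
        gain = gamma + abs(e) if adaptive else gamma  # error-adaptive gain
        e += dt * (-gain * e)                         # Euler step
    return e

e_fixed = simulate(adaptive=False)
e_adapt = simulate(adaptive=True)
```

Because the adaptive gain is never smaller than the fixed one, its error decays at least as fast at every step; the gap is largest while the error is still large.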
Affiliation(s)
- Zhijun Zhang
- School of Automation Science and Engineering, South China University of Technology, China; Key Laboratory of Autonomous Systems and Network Control, Ministry of Education, China; Jiangxi Thousand Talents Plan, Nanchang University, Nanchang, China; College of Computer Science and Engineering, Jishou University, Jishou, China; Guangdong Artificial Intelligence and Digital Economy Laboratory (Pazhou Lab), Guangzhou, China; Shaanxi Provincial Key Laboratory of Industrial Automation, School of Mechanical Engineering, Shaanxi University of Technology, Hanzhong, China; School of Information Science and Engineering, Changsha Normal University, Changsha, China; School of Automation Science and Engineering, and also with the Institute of Artificial Intelligence and Automation, Guangdong University of Petrochemical Technology, Maoming, China; Key Laboratory of Large-Model Embodied-Intelligent Humanoid Robot (2024KSYS004), China.
- Xiangliang Sun
- School of Automation Science and Engineering, South China University of Technology, China.
- Xingru Li
- School of Automation Science and Engineering, South China University of Technology, China.
- Yiqi Liu
- School of Automation Science and Engineering, South China University of Technology, China.
2
Chen J, Pan Y, Zhang Y, Li S, Tan N. Inverse-free zeroing neural network for time-variant nonlinear optimization with manipulator applications. Neural Netw 2024; 178:106462. [PMID: 38901094] [DOI: 10.1016/j.neunet.2024.106462]
Abstract
In this paper, the problem of time-variant optimization subject to a nonlinear equation constraint is studied. To solve this challenging problem, neural-network-based methods, such as the zeroing neural network and the gradient neural network, are commonly adopted for their performance on nonlinear problems. However, the traditional zeroing neural network algorithm requires computing a matrix inverse during the solving process, which is a complicated and time-consuming operation. Although the gradient neural network algorithm does not require a matrix inverse, its accuracy is not high enough. Therefore, a novel inverse-free zeroing neural network algorithm is proposed in this paper. The proposed algorithm avoids not only the matrix inverse but also matrix multiplication, greatly reducing the computational complexity. In addition, detailed theoretical analyses of the convergence of the proposed algorithm are provided to guarantee its capability in solving time-variant optimization problems. Numerical simulations and comparative experiments with traditional zeroing neural network and gradient neural network algorithms substantiate the accuracy and superiority of the novel inverse-free algorithm. To further validate its performance in practical applications, path tracking tasks on three manipulators (Universal Robot 5, Franka Emika Panda, and Kinova JACO2) are conducted, and the results verify the applicability of the proposed algorithm.
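As a rough illustration of solving a time-variant linear system without a matrix inverse, the following sketch uses the gradient-flow baseline the abstract contrasts with (not the paper's inverse-free algorithm, which additionally avoids matrix multiplication); the matrices, gain, and step size are illustrative assumptions.

```python
# Sketch: drive the residual A(t) x - b(t) to zero with the
# gradient flow x' = -lam * A(t)^T (A(t) x - b(t)),
# which needs no matrix inverse. Not the paper's algorithm;
# A, b, lam, dt are illustrative.
import math

def A(t):  # a well-conditioned time-varying 2x2 matrix
    return [[3.0, math.sin(t)], [math.sin(t), 3.0]]

def b(t):
    return [math.cos(t), 1.0]

def step(x, t, lam=50.0, dt=1e-3):
    a, bb = A(t), b(t)
    r = [a[0][0]*x[0] + a[0][1]*x[1] - bb[0],   # residual A x - b
         a[1][0]*x[0] + a[1][1]*x[1] - bb[1]]
    g = [a[0][0]*r[0] + a[1][0]*r[1],           # gradient A^T r
         a[0][1]*r[0] + a[1][1]*r[1]]
    return [x[0] - dt*lam*g[0], x[1] - dt*lam*g[1]]

x, t, dt = [0.0, 0.0], 0.0, 1e-3
for _ in range(5000):
    x = step(x, t)
    t += dt
a, bb = A(t), b(t)
res = [a[0][0]*x[0] + a[0][1]*x[1] - bb[0],
       a[1][0]*x[0] + a[1][1]*x[1] - bb[1]]
err = max(abs(res[0]), abs(res[1]))
```

For a slowly varying, well-conditioned A(t), the residual settles to a small tracking lag proportional to the target's rate of change divided by the gain.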
Affiliation(s)
- Jielong Chen
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China.
- Yan Pan
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China.
- Yunong Zhang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China.
- Shuai Li
- Faculty of Information Technology and Electrical Engineering, University of Oulu, Oulu 905706, Finland; VTT-Technology Research Center of Finland, Oulu 905706, Finland.
- Ning Tan
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China.
3
Peng B, Duan J, Chen J, Li SE, Xie G, Zhang C, Guan Y, Mu Y, Sun E. Model-based chance-constrained reinforcement learning via separated proportional-integral Lagrangian. IEEE Trans Neural Netw Learn Syst 2024; 35:466-478. [PMID: 35635820] [DOI: 10.1109/tnnls.2022.3175595]
Abstract
Safety is essential for reinforcement learning (RL) applied in the real world. Adding chance constraints (or probabilistic constraints) is a suitable way to enhance RL safety under uncertainty. Existing chance-constrained RL methods, such as penalty methods and Lagrangian methods, either exhibit periodic oscillations or learn an over-conservative or unsafe policy. In this article, we address these shortcomings by proposing a separated proportional-integral Lagrangian (SPIL) algorithm. We first view the constrained policy optimization process from a feedback control perspective, regarding the penalty weight as the control input and the safe probability as the control output. On this basis, the penalty method is formulated as a proportional controller and the Lagrangian method as an integral controller. We then unify them into a proportional-integral Lagrangian method that combines their merits, with an integral separation technique that limits the integral value to a reasonable range. To accelerate training, the gradient of the safe probability is computed in a model-based manner. The convergence of the overall algorithm is analyzed. We demonstrate that our method reduces the oscillations and conservatism of the RL policy in a car-following simulation. To prove its practicality, we also apply our method to a real-world mobile robot navigation task, where the robot successfully avoids a moving obstacle with highly uncertain or even aggressive behavior.
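A minimal sketch of the proportional-integral multiplier update with integral separation described above (the toy "policy response" model and all gains here are invented for illustration and are not the paper's SPIL implementation):

```python
# Sketch: PI update of the penalty weight from the gap between a
# target safe probability and the measured one, with the integral
# clamped ("integral separation") to avoid windup. The saturating
# "policy response" below is a toy stand-in for policy optimization.

def pi_lagrangian(target=0.95, kp=2.0, ki=0.5, i_min=0.0, i_max=10.0,
                  steps=200):
    integral, lam, safe_prob = 0.0, 0.0, 0.5  # start from an unsafe policy
    for _ in range(steps):
        gap = target - safe_prob              # constraint-violation signal
        # integral separation: clamp the integral term to [i_min, i_max]
        integral = min(max(integral + gap, i_min), i_max)
        lam = max(kp * gap + ki * integral, 0.0)  # PI penalty weight
        # toy response: a larger penalty makes the policy safer (saturating)
        safe_prob += 0.1 * (lam / (1.0 + lam)) * (1.0 - safe_prob)
    return safe_prob, lam

p, lam = pi_lagrangian()
```

The proportional term reacts quickly to violations, while the clamped integral supplies a persistent but bounded penalty, which is the mechanism the abstract credits for avoiding both oscillation and over-conservatism.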
4
Wang D, Liu XW. A varying-parameter fixed-time gradient-based dynamic network for convex optimization. Neural Netw 2023; 167:798-809. [PMID: 37738715] [DOI: 10.1016/j.neunet.2023.08.047]
Abstract
We focus on the fixed-time convergence and robustness of gradient-based dynamic networks for solving convex optimization problems. Most existing gradient-based dynamic networks with fixed-time convergence have limited ability to resist noise interference. To improve convergence, we design a new activation function and propose a gradient-based dynamic network with fixed-time convergence. The proposed network has a smaller upper bound on the convergence time than existing fixed-time convergent dynamic networks, and a time-varying scaling parameter is employed to speed up convergence. Our gradient-based dynamic network is proved to be robust against bounded noises and able to resist the interference of unbounded noises. Numerical tests illustrate the effectiveness and superiority of the proposed network.
Affiliation(s)
- Dan Wang
- School of Artificial Intelligence, Hebei University of Technology, Tianjin, 300401, China.
- Xin-Wei Liu
- Institute of Mathematics, Hebei University of Technology, Tianjin, 300401, China.
5
Wu W, Zhang Y. Novel adaptive zeroing neural dynamics schemes for temporally-varying linear equation handling applied to arm path following and target motion positioning. Neural Netw 2023; 165:435-450. [PMID: 37331233] [DOI: 10.1016/j.neunet.2023.05.056]
Abstract
While the handling of temporally-varying linear equations (TVLEs) has received extensive attention, most methods focus on trading off computational precision against convergence rate. Different from previous studies, this paper proposes two complete adaptive zeroing neural dynamics (ZND) schemes, comprising a novel adaptive continuous ZND (ACZND) model, two general variable time discretization techniques, and two resultant adaptive discrete ZND (ADZND) algorithms, to essentially eliminate this conflict. Specifically, an error-related varying-parameter ACZND model with global and exponential convergence is first designed. To further adapt to digital hardware, two novel variable time discretization techniques are proposed to discretize the ACZND model into two ADZND algorithms. The convergence rate and precision of the ADZND algorithms are proved via rigorous mathematical analyses. Comparisons with traditional discrete ZND (TDZND) algorithms show the superiority of the ADZND algorithms in convergence rate and computational precision both theoretically and experimentally. Finally, simulative experiments, including numerical experiments on solving a specific TVLE and four application experiments on arm path following and target motion positioning, substantiate the efficacy, superiority, and practicability of the ADZND algorithms.
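A textbook-style baseline of the continuous-to-discrete ZND pipeline the abstract refers to, for a scalar time-varying linear equation a(t)x = b(t) (plain forward-Euler discretization only, not the paper's adaptive variable-time techniques; a, b, γ, and the step size are illustrative assumptions):

```python
# Sketch: continuous ZND for the zeroed error e(t) = a(t) x - b(t),
# i.e. e' = -gamma e, discretized by forward Euler. Not the paper's
# ADZND; coefficients and constants are illustrative.
import math

def a(t):  # time-varying coefficient, bounded away from zero
    return 2.0 + math.sin(t)

def b(t):
    return math.cos(t)

dt, gamma = 1e-3, 10.0
x, t = 0.0, 0.0
for _ in range(20000):
    e = a(t) * x - b(t)                  # current zeroed error
    da, db = math.cos(t), -math.sin(t)   # a'(t), b'(t) in closed form
    # from e' = -gamma e:  x' = (b' - a' x - gamma e) / a
    x += dt * (db - da * x - gamma * e) / a(t)
    t += dt
err = abs(a(t) * x - b(t))
```

With a fixed step size the residual settles at a level set by the discretization error, which is exactly the precision-versus-step-size trade-off that variable time discretization aims to break.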
Affiliation(s)
- Wenqi Wu
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China; Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, Guangzhou 510006, China.
- Yunong Zhang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China; Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, Guangzhou 510006, China.
6
Ju X, Hu D, Li C, He X, Feng G. A novel fixed-time converging neurodynamic approach to mixed variational inequalities and applications. IEEE Trans Cybern 2022; 52:12942-12953. [PMID: 34347618] [DOI: 10.1109/tcyb.2021.3093076]
Abstract
This article proposes a novel fixed-time converging forward-backward-forward neurodynamic network (FXFNN) to deal with mixed variational inequalities (MVIs). A distinctive feature of the FXFNN is its fast and fixed-time convergence, in contrast to the conventional forward-backward-forward neurodynamic network and the projected neurodynamic network. It is shown that the solution of the proposed FXFNN exists uniquely and converges to the unique solution of the corresponding MVI in fixed time under some mild conditions. It is also shown that the fixed-time convergence result obtained for the FXFNN is independent of initial conditions, unlike most existing asymptotic and exponential convergence results. Furthermore, the proposed FXFNN is applied to solving sparse recovery problems, variational inequalities, nonlinear complementarity problems, and min-max problems. Finally, numerical and experimental examples are presented to validate the effectiveness of the proposed neurodynamic network.
7
Zuo Q, Li K, Xiao L, Li K. Robust finite-time zeroing neural networks with fixed and varying parameters for solving dynamic generalized Lyapunov equation. IEEE Trans Neural Netw Learn Syst 2022; 33:7695-7705. [PMID: 34143744] [DOI: 10.1109/tnnls.2021.3086500]
Abstract
To solve the dynamic generalized Lyapunov equation, two robust finite-time zeroing neural network (RFTZNN) models with stationary and nonstationary parameters are constructed using an improved sign-bi-power (SBP) activation function (AF). Taking differentiation errors and model implementation errors into account, two corresponding perturbed RFTZNN models are derived to facilitate the robustness analyses of the two RFTZNN models. Theoretical analysis gives quantitatively estimated upper bounds for the convergence time (UBs-CT) of the two models, implying the superior convergence of the varying-parameter RFTZNN (VP-RFTZNN) over the fixed-parameter RFTZNN (FP-RFTZNN). When the coefficient matrices and perturbation matrices are uniformly bounded, the residual error of the FP-RFTZNN is bounded, whereas that of the VP-RFTZNN monotonically decreases at a super-exponential rate after a finite time and eventually converges to 0. When these matrices are bounded but not uniformly, the residual error of the FP-RFTZNN is no longer bounded, but that of the VP-RFTZNN still converges. These superiorities of the VP-RFTZNN are illustrated by abundant comparative experiments, and its application value is further proved by a robotic application.
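The sign-bi-power activation mentioned above can be sketched as follows (a scalar toy comparison with parameters chosen for illustration; the RFTZNN models themselves use an improved SBP variant on matrix-valued errors):

```python
# Sketch: the basic sign-bi-power activation
#   phi(e) = |e|^p sign(e) + |e|^(1/p) sign(e),  0 < p < 1.
# Under e' = -gamma phi(e) the error reaches the numerical-zero
# regime in finite time, while the linear activation phi(e) = e
# only decays exponentially. Constants are illustrative.

def sbp(e, p=0.5):
    s = 1.0 if e >= 0 else -1.0
    return s * (abs(e) ** p + abs(e) ** (1.0 / p))

def run(phi, e0=1.0, gamma=5.0, dt=1e-4, steps=20000):
    e = e0
    for _ in range(steps):
        e -= dt * gamma * phi(e)  # Euler step of e' = -gamma phi(e)
    return abs(e)

e_sbp = run(sbp)          # finite-time regime: residual near zero
e_lin = run(lambda e: e)  # exponential decay only
```

The |e|^p term dominates near zero (giving finite-time convergence), while the |e|^(1/p) term dominates far from zero (accelerating the initial transient).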
8
Hyperbolic tangent variant-parameter robust ZNN schemes for solving time-varying control equations and tracking of mobile robot. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.08.066]
|
9
Wang D, Liu XW. A gradient-type noise-tolerant finite-time neural network for convex optimization. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.01.018]
|
10
Lu W, Leung CS, Sum J, Xiao Y. DNN-kWTA with bounded random offset voltage drifts in threshold logic units. IEEE Trans Neural Netw Learn Syst 2022; 33:3184-3192. [PMID: 33513113] [DOI: 10.1109/tnnls.2021.3050493]
Abstract
The dual neural network-based k-winner-take-all (DNN-kWTA) is an analog neural model that is used to identify the k largest numbers from n inputs. Since threshold logic units (TLUs) are key elements in the model, offset voltage drifts in TLUs may affect the operational correctness of a DNN-kWTA network. Previous studies assume that drifts in TLUs follow some particular distributions. This brief considers that only the drift range, given by [-Δ, Δ], is available. We consider two drift cases: time-invariant and time-varying. For the time-invariant case, we show that the state of a DNN-kWTA network converges. The sufficient condition for a network to operate correctly is given. Furthermore, for uniformly distributed inputs, we prove that the probability that a DNN-kWTA network operates properly is greater than (1-2Δ)^n. The aforementioned results are generalized to the time-varying case. In addition, for the time-invariant case, we derive a method to compute the exact convergence time for a given data set. For uniformly distributed inputs, we further derive the mean and variance of the convergence time. The convergence time results give us an idea of the operational speed of the DNN-kWTA model. Finally, simulation experiments have been conducted to validate these theoretical results.
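The role of the drift bound Δ can be illustrated with a direct top-k selection standing in for the analog DNN-kWTA dynamics (a sketch under the assumption that the gap between the k-th and (k+1)-th largest inputs exceeds 2Δ; the inputs and Δ below are illustrative):

```python
# Sketch: if the k-th and (k+1)-th largest inputs differ by more
# than 2*delta, per-unit offsets bounded by [-delta, delta] cannot
# change the k-WTA winner set. Direct sorting stands in for the
# analog network; values are illustrative.
import random

def kwta(values, k):
    order = sorted(range(len(values)), key=lambda i: values[i],
                   reverse=True)
    return set(order[:k])

random.seed(0)
delta, k = 0.05, 3
inputs = [0.95, 0.80, 0.70, 0.55, 0.30, 0.10]  # gap 0.70-0.55 > 2*delta
true_winners = kwta(inputs, k)
ok = all(kwta([v + random.uniform(-delta, delta) for v in inputs], k)
         == true_winners
         for _ in range(1000))
```

Any drifted winner value stays above 0.70 - Δ = 0.65 while any drifted loser stays below 0.55 + Δ = 0.60, so the winner set is invariant under all offsets in the range.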
11
Luo J, Yang H. A robust zeroing neural network model activated by the special nonlinear function for solving time-variant linear system in predefined-time. Neural Process Lett 2022. [DOI: 10.1007/s11063-021-10726-0]
|
12
Distributed k-winners-take-all via multiple neural networks with inertia. Neural Netw 2022; 151:385-397. [DOI: 10.1016/j.neunet.2022.04.005]
|
13
Zhao X, Zong Q, Tian B, You M. Finite-time dynamic allocation and control in multiagent coordination for target tracking. IEEE Trans Cybern 2022; 52:1872-1880. [PMID: 32603302] [DOI: 10.1109/tcyb.2020.2998152]
Abstract
A new finite-time dynamic allocation and control scheme is developed in this article for multiple agents tracking a moving target. Based on a competitive mechanism, the dynamic allocation is achieved by k-winners-take-all (k-WTA), which is realized by a novel finite-time dual neural network with an adaptive-gain activation function. A finite-time disturbance-compensation-based control law is then proposed for agents to conduct the capturing task or return to the specified vigilance point. The finite-time stability of the system is guaranteed through Lyapunov analysis. Finally, the efficiency of the proposed scheme is illustrated by simulations, including a situation in which the target moves faster than the trackers.
14
Li W, Han L, Xiao X, Liao B, Peng C. A gradient-based neural network accelerated for vision-based control of an RCM-constrained surgical endoscope robot. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06465-x]
|
15
Ma M, Yang J. A novel finite-time q-power recurrent neural network and its application to uncertain portfolio model. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.07.036]
|
16
Peng B, Jin L, Shang M. Multi-robot competitive tracking based on k-WTA neural network with one single neuron. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.07.020]
|
17
Kong Y, Hu T, Lei J, Han R. A finite-time convergent neural network for solving time-varying linear equations with inequality constraints applied to redundant manipulator. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10623-6]
|
18
Dai J, Jia L, Xiao L. Design and analysis of two prescribed-time and robust ZNN models with application to time-variant Stein matrix equation. IEEE Trans Neural Netw Learn Syst 2021; 32:1668-1677. [PMID: 32340965] [DOI: 10.1109/tnnls.2020.2986275]
Abstract
The zeroing neural network (ZNN) activated by nonlinear activation functions plays an important role in many fields. However, the conventional ZNN can only achieve finite-time convergence, which greatly limits its application in noisy environments. Generally, the finite convergence time depends on the original state of the ZNN, but the original state is often unknown in advance. In addition, the applied nonlinear activation functions cannot tolerate external disturbances when meeting with different noises. Motivated by these observations, two prescribed-time and robust ZNN (PTR-ZNN) models activated by two nonlinear activation functions are put forward to address the time-variant Stein matrix equation. The proposed PTR-ZNN models own two remarkable advantages simultaneously: 1) prescribed-time convergence that does not rely on original states and 2) superior noise tolerance against time-variant bounded vanishing and nonvanishing noises. Furthermore, detailed theoretical analysis is provided to guarantee the prescribed-time convergence and noise-tolerance performance, with upper bounds of the steady-state residual errors calculated. Finally, simulative comparison results indicate the effectiveness and superiority of the two proposed PTR-ZNN models for time-variant Stein matrix equation solving.
19
Performance analysis of nonlinear activated zeroing neural networks for time-varying matrix pseudoinversion with application. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2020.106735]
|
20
Kong Y, Jiang Y, Zhou J, Wu H. A time controlling neural network for time-varying QP solving with application to kinematics of mobile manipulators. Int J Intell Syst 2021. [DOI: 10.1002/int.22304]
Affiliation(s)
- Ying Kong
- Department of Information and Electronic Engineering, Zhejiang University of Science and Technology, Zhejiang, China
- Yunliang Jiang
- Department of Information Engineering, Huzhou University, Huzhou, China
- Junwen Zhou
- Department of Information and Electronic Engineering, Zhejiang University of Science and Technology, Zhejiang, China
- Huifeng Wu
- Department of Intelligent and Software Technology, Hangzhou Dianzi University, Hangzhou, China
21
Xiao L, Dai J, Lu R, Li S, Li J, Wang S. Design and comprehensive analysis of a noise-tolerant ZNN model with limited-time convergence for time-dependent nonlinear minimization. IEEE Trans Neural Netw Learn Syst 2020; 31:5339-5348. [PMID: 32031952] [DOI: 10.1109/tnnls.2020.2966294]
Abstract
Zeroing neural network (ZNN) is a powerful tool for addressing mathematical and optimization problems that arise broadly in science and engineering. Convergence and robustness are always co-pursued in ZNN. However, no existing ZNN for time-dependent nonlinear minimization simultaneously achieves limited-time convergence and inherent noise suppression. In this article, to satisfy these two requirements, a limited-time robust neural network (LTRNN) is devised and presented to solve time-dependent nonlinear minimization under various external disturbances. Different from previous ZNN models for this problem, which possess either limited-time convergence or noise suppression, the proposed LTRNN model possesses both characteristics simultaneously. Besides, rigorous theoretical analyses are given to prove the superior performance of the LTRNN model when adopted to solve time-dependent nonlinear minimization under external disturbances. Comparative results also substantiate the effectiveness and advantages of the LTRNN via solving a time-dependent nonlinear minimization problem.
22
Li W, Chiu PWY, Li Z. An accelerated finite-time convergent neural network for visual servoing of a flexible surgical endoscope with physical and RCM constraints. IEEE Trans Neural Netw Learn Syst 2020; 31:5272-5284. [PMID: 32011270] [DOI: 10.1109/tnnls.2020.2965553]
Abstract
This article designs and analyzes a recurrent neural network (RNN) for the visual servoing of a flexible surgical endoscope, based on a commercially available UR5 robot with a flexible endoscope attached as the end-effector. Most existing visual servo control frameworks for robotic endoscopes or robot arms consider neither the physical limits of the robot nor the remote center of motion (RCM) constraints (i.e., the fulcrum effect). To tackle this issue, this article first conducts kinematic modeling of the flexible robotic endoscope to achieve automation by visual servo control. The kinematic modeling results in a quadratic programming (QP) framework with physical limits and RCM constraints involved, making the UR5 robot applicable to the surgical field. To solve the QP problem and accomplish the visual task, an RNN activated by a sign-bi-power activation function (AF) is proposed. The motivation for using the sign-bi-power AF is to enable the RNN to exhibit accelerated finite-time convergence, which is preferred in time-critical applications. Theoretically, the finite-time convergence of the RNN is rigorously proved using Lyapunov theory. Compared with previous AFs applied to RNNs, theoretical analysis shows that the RNN activated by the sign-bi-power AF delivers an accelerated convergence speed. Comparative validations show that the proposed finite-time convergent neural network effectively achieves visual servoing of the flexible endoscope with physical limits and RCM constraints handled simultaneously.
23
24
Prescribed-time convergent and noise-tolerant Z-type neural dynamics for calculating time-dependent quadratic programming. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05356-x]
|
25
Tan Z, Li W, Xiao L, Hu Y. New varying-parameter ZNN models with finite-time convergence and noise suppression for time-varying matrix Moore-Penrose inversion. IEEE Trans Neural Netw Learn Syst 2020; 31:2980-2992. [PMID: 31536017] [DOI: 10.1109/tnnls.2019.2934734]
Abstract
This article aims to compute the Moore-Penrose inverse of time-varying full-rank matrices in real time in the presence of various noises. For this purpose, two varying-parameter zeroing neural networks (VPZNNs) are proposed. Specifically, the VPZNN-R and VPZNN-L models, which are based on a new design formula, are designed to solve the right and left Moore-Penrose inversion problems of time-varying full-rank matrices, respectively. The two VPZNN models are activated by two novel varying-parameter nonlinear activation functions. Detailed theoretical derivations are presented to show the desired finite-time convergence and outstanding robustness of the proposed VPZNN models under various kinds of noises. In addition, existing neural models, such as the original ZNN (OZNN) and the integration-enhanced ZNN (IEZNN), are compared with the VPZNN models. Simulation observations verify the advantages of the VPZNN models over the OZNN and IEZNN models in terms of convergence and robustness. The potential of the VPZNN models for robotic applications is then illustrated by an example of robot path tracking.
26
Xiao L, Li K, Duan M. Computing time-varying quadratic optimization with finite-time convergence and noise tolerance: a unified framework for zeroing neural network. IEEE Trans Neural Netw Learn Syst 2019; 30:3360-3369. [PMID: 30716052] [DOI: 10.1109/tnnls.2019.2891252]
Abstract
Zeroing neural network (ZNN), as a powerful calculating tool, is extensively applied in various computation and optimization fields. Convergence and noise-tolerance performance are always pursued and investigated in the ZNN field. Up to now, there have been no unified ZNN models that simultaneously achieve finite-time convergence and inherent noise tolerance for computing time-varying quadratic optimization problems, although this superior property is highly demanded in practical applications. In this paper, to compute time-varying quadratic optimization with finite-time convergence in the presence of various additive noises, a new framework for ZNN is designed to fill this gap in a unified manner. Specifically, different from previous design formulas possessing either finite-time convergence or noise tolerance, a new design formula with both properties is proposed (and thus called the unified design formula). On the basis of the unified design formula, a unified ZNN (UZNN) is proposed and investigated for computing time-varying quadratic optimization problems in the presence of various additive noises. In addition, theoretical analyses of the unified design formula and the UZNN model are given to guarantee the finite-time convergence and inherent noise tolerance. Computer simulation results verify the superior property of the UZNN model for computing time-varying quadratic optimization problems, as compared with previously proposed ZNN models.
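The proportional-plus-integral mechanism behind such noise-tolerant design formulas can be sketched on a scalar error with a linear activation (this is the generic integration-enhanced idea, not the paper's unified formula; all constants are illustrative):

```python
# Sketch: e' = -gamma*e - lam*Integral(e) + noise rejects a constant
# additive noise, whereas the purely proportional rule
# e' = -gamma*e + noise settles at the bias noise/gamma.
# Scalar toy with linear activation; constants are illustrative.

def run(lam, gamma=10.0, noise=0.5, dt=1e-3, steps=20000):
    e, integ = 1.0, 0.0
    for _ in range(steps):
        integ += dt * e                           # accumulated error
        e += dt * (-gamma * e - lam * integ + noise)
    return abs(e)

e_prop = run(lam=0.0)   # proportional only: settles near noise/gamma
e_pi = run(lam=20.0)    # with integral term: constant noise rejected
```

At equilibrium the integral term grows until it exactly cancels the constant noise, so the error itself is driven to zero rather than to a noise-dependent bias.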
27
Su L, Chang CJ, Lynch N. Spike-based winner-take-all computation: fundamental limits and order-optimal circuits. Neural Comput 2019; 31:2523-2561. [PMID: 31614103] [DOI: 10.1162/neco_a_01242]
Abstract
Winner-take-all (WTA) refers to the neural operation that selects a (typically small) group of neurons from a large neuron pool. It is conjectured to underlie many of the brain's fundamental computational abilities. However, not much is known about the robustness of a spike-based WTA network to the inherent randomness of the input spike trains. In this work, we consider a spike-based k-WTA model in which n randomly generated input spike trains compete with each other based on their underlying firing rates, and k winners are to be selected. We divide time evenly into slots of length 1 ms and model the n input spike trains as n independent Bernoulli processes. We analytically characterize the minimum waiting time needed to reach a target minimax decision accuracy (success probability). We first derive an information-theoretic lower bound on the waiting time, showing that to guarantee a (minimax) decision error ≤δ (where δ∈(0,1)), the waiting time of any WTA circuit is at least [Formula: see text], where R⊆(0,1) is a finite set of rates and TR is a difficulty parameter of a WTA task with respect to the set R for independent input spike trains; TR is independent of δ, n, and k. We then design a simple WTA circuit whose waiting time is [Formula: see text], provided that the local memory of each output neuron is sufficiently long. For any fixed δ, this decision time is order-optimal (i.e., it matches the lower bound up to a multiplicative constant factor) in terms of its scaling in n, k, and TR.
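The Bernoulli spike-train model in this abstract is easy to probe empirically. The sketch below (a hypothetical count-based decoder, not the paper's order-optimal circuit) draws n Bernoulli spike trains in 1 ms slots, declares the k highest-count trains the winners, and Monte-Carlo-estimates the success probability for short versus long waiting times.

```python
import numpy as np

rng = np.random.default_rng(0)

def kwta_by_counts(rates, k, T, rng):
    """One trial: n Bernoulli spike trains observed for T one-millisecond slots;
    the k trains with the highest spike counts are declared winners."""
    spikes = rng.random((T, len(rates))) < rates     # T slots x n trains
    counts = spikes.sum(axis=0)
    return set(np.argsort(counts)[-k:])

def success_prob(rates, k, T, trials=500, rng=rng):
    """Monte-Carlo estimate of P(the k highest-rate trains are selected)."""
    truth = set(np.argsort(rates)[-k:])
    wins = sum(kwta_by_counts(rates, k, T, rng) == truth for _ in range(trials))
    return wins / trials

rates = np.array([0.6, 0.6, 0.3, 0.3, 0.3])   # two clear winners, k = 2
p_short = success_prob(rates, k=2, T=5)        # 5 ms of observation
p_long  = success_prob(rates, k=2, T=200)      # 200 ms of observation
```

With 200 slots the rate gap dominates the Bernoulli fluctuations and the decision is essentially always correct, matching the abstract's theme that a longer waiting time buys decision accuracy.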
Affiliation(s)
- Lili Su
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA 02142, U.S.A.
- Chia-Jung Chang
- Brain and Cognitive Sciences, MIT, Cambridge, MA 02142, U.S.A.
- Nancy Lynch
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA 02142, U.S.A.
|
28
|
Jia W, Qin S, Xue X. A generalized neural network for distributed nonsmooth optimization with inequality constraint. Neural Netw 2019; 119:46-56. [PMID: 31376637] [DOI: 10.1016/j.neunet.2019.07.019]
Abstract
In this paper, a generalized neural network with a novel auxiliary function is proposed to solve a distributed non-differentiable optimization problem over a multi-agent network. The constructed auxiliary function ensures that the state solution of the proposed neural network is bounded and enters the inequality constraint set in finite time. Furthermore, the proposed neural network is shown to reach consensus and ultimately converge to the optimal solution under several mild assumptions. Compared with existing methods, the neural network proposed in this paper has a simple structure with few state variables and does not depend on the projection-operator method for constrained distributed optimization. Finally, two numerical simulations and an application in a power system are presented to show the characteristics and practicability of the presented neural network.
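A toy version of the distributed setting can be sketched with a consensus-plus-subgradient iteration (a discrete-time caricature, not the paper's continuous-time neural network, and without its inequality-constraint machinery): each agent mixes its state with its neighbors' and steps along a local subgradient of its nonsmooth cost.

```python
import numpy as np

# Ring network of 4 agents, each holding a private nonsmooth cost f_i(x) = |x - a_i|.
# Global problem: minimize sum_i |x - a_i|  ->  any median of the a_i (here [2, 3]).
a = np.array([1.0, 2.0, 3.0, 4.0])
W = np.array([[.50, .25, .00, .25],    # doubly stochastic mixing matrix for the ring
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])

def subgrad(x, ai):
    """A subgradient of |x - a_i| (any value in [-1, 1] is valid at x = a_i)."""
    return np.sign(x - ai)

x = np.array([10.0, -5.0, 0.0, 7.0])    # arbitrary initial agent states
for k in range(1, 4001):
    step = 1.0 / k                      # diminishing step size, standard for subgradient methods
    x = W @ x - step * np.array([subgrad(x[i], a[i]) for i in range(4)])

# The agent states reach consensus near a minimizer in the median interval [2, 3].
```

The mixing matrix drives the states together geometrically, while the diminishing subgradient steps pull the common value into the set of minimizers; this is the discrete analogue of the consensus-plus-descent behaviour the abstract proves for the continuous network.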
Affiliation(s)
- Wenwen Jia
- Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
- Sitian Qin
- Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
- Xiaoping Xue
- Department of Mathematics, Harbin Institute of Technology, Harbin, PR China.
|
29
|
Improved Zhang neural network with finite-time convergence for time-varying linear system of equations solving. Inform Process Lett 2019. [DOI: 10.1016/j.ipl.2019.03.012]
|
30
|
A new noise-tolerant and predefined-time ZNN model for time-dependent matrix inversion. Neural Netw 2019; 117:124-134. [PMID: 31158644] [DOI: 10.1016/j.neunet.2019.05.005]
Abstract
In this work, a new zeroing neural network (ZNN) using a versatile activation function (VAF) is presented for solving time-dependent matrix inversion. Unlike existing ZNN models, the proposed model not only converges to zero within a predefined finite time but also tolerates several types of noise when solving the time-dependent matrix inversion, and is thus called the new noise-tolerant ZNN (NNTZNN) model. In addition, the convergence and robustness of this model are mathematically analyzed in detail. Two comparative numerical simulations with different dimensions are used to test the efficiency and superiority of the NNTZNN model over previous ZNN models using other activation functions. Furthermore, two practical application examples (a mobile manipulator and a real Kinova JACO2 robot manipulator) are presented to validate the applicability and physical feasibility of the NNTZNN model in a noisy environment. Both simulation and experimental results demonstrate the effectiveness and noise-tolerance ability of the NNTZNN model.
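The ZNN mechanism underlying the model can be sketched for time-dependent matrix inversion as follows; this sketch uses only the classic linear design formula, so the versatile activation function and the predefined-time bound of the NNTZNN model are deliberately not reproduced.

```python
import numpy as np

def znn_inverse(A, dA, T=1.0, dt=1e-4, gamma=100.0):
    """Zeroing-neural-network flow tracking X(t) = A(t)^{-1}.

    Error function E = A X - I; the classic linear ZNN design dE/dt = -gamma*E
    gives A dX/dt = -dA X - gamma (A X - I), integrated here by explicit Euler.
    """
    X = np.linalg.inv(A(0.0))           # start at the exact inverse, then track it
    for k in range(int(T / dt)):
        t = k * dt
        dX = np.linalg.solve(A(t), -dA(t) @ X - gamma * (A(t) @ X - np.eye(2)))
        X += dX * dt
    return X

# A smoothly varying, always-invertible 2x2 matrix and its analytic derivative:
A  = lambda t: np.array([[2 + np.sin(t), 0.5], [0.5, 2 + np.cos(t)]])
dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
X_end = znn_inverse(A, dA)              # should satisfy A(1) X_end ~ I
```

The residual A(t)X - I decays exponentially at rate gamma in the continuous flow, so the Euler-integrated state tracks the time-varying inverse to within discretization error.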
|
31
|
Park GM, Choi JW, Kim JH. Developmental Resonance Network. IEEE Trans Neural Netw Learn Syst 2019; 30:1278-1284. [PMID: 30176610] [DOI: 10.1109/tnnls.2018.2863738]
Abstract
Adaptive resonance theory (ART) networks deal only with normalized input data, so the raw input data must be normalized under the assumption that their upper and lower bounds are known in advance. Without such an assumption, ART networks cannot be utilized. To solve this problem and improve the learning performance, inspired by ART networks, we propose a developmental resonance network (DRN) employing the new techniques of a global weight and of node connection and grouping processes. The proposed DRN learns a global weight converging to the unknown range of the input data and clusters properly by grouping similar nodes into one. These techniques enable DRN to learn raw input data without a normalization process while retaining stability, plasticity, and memory-usage efficiency without node proliferation. Simulation results verify that our DRN, applied to an unsupervised clustering problem, can cluster raw data properly without a prior normalization process.
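The idea of learning the input range online instead of assuming known bounds can be caricatured in a few lines. The sketch below is a loose ART-flavoured reduction under assumed parameters: a running min/max pair stands in for DRN's global weight, a simple vigilance test decides between resonance and node creation, and the published node-grouping step is omitted.

```python
import numpy as np

def drn_like_cluster(X, vigilance=0.75):
    """Online clustering of RAW (unnormalized) data with an adaptively learned range.

    Each sample is rescaled by the (lo, hi) range seen so far, then matched against
    prototype nodes; a sample outside the vigilance radius spawns a new node.
    Illustrative reduction only, not the published DRN.
    """
    lo = X[0].copy(); hi = X[0].copy()
    protos = [X[0].astype(float).copy()]
    labels = []
    for x in X:
        lo = np.minimum(lo, x); hi = np.maximum(hi, x)
        span = np.maximum(hi - lo, 1e-9)
        z  = (x - lo) / span                              # online normalization
        d  = [np.linalg.norm(z - (p - lo) / span) for p in protos]
        j  = int(np.argmin(d))
        if d[j] <= 1.0 - vigilance:                       # close enough: resonate
            protos[j] += 0.5 * (x - protos[j])            # move the prototype toward x
            labels.append(j)
        else:                                             # novelty: grow a new node
            protos.append(x.astype(float).copy())
            labels.append(len(protos) - 1)
    return np.array(labels), protos

# Two well-separated raw blobs with no prior normalization:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)), rng.normal([100, 100], 0.5, (50, 2))])
labels, protos = drn_like_cluster(X)
```

Because the range estimate expands when the second blob arrives, the first blob's nodes collapse to one corner of the normalized space and the distant blob cleanly receives its own node, with no bounds supplied in advance.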
|
32
|
Terminal computing for Sylvester equations solving with application to intelligent control of redundant manipulators. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.01.024]
|
33
|
|
34
|
Finite-time leaderless consensus of uncertain multi-agent systems against time-varying actuator faults. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.10.020]
|
35
|
Chu X, Peng Z, Wen G, Rahmani A. Distributed fixed-time formation tracking of multi-robot systems with nonholonomic constraints. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.06.044]
|
36
|
Lv X, Xiao L, Tan Z, Yang Z. Wsbp function activated Zhang dynamic with finite-time convergence applied to Lyapunov equation. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.06.057]
|
37
|
Li S, Zhou M, Luo X. Modified Primal-Dual Neural Networks for Motion Control of Redundant Manipulators With Dynamic Rejection of Harmonic Noises. IEEE Trans Neural Netw Learn Syst 2018; 29:4791-4801. [PMID: 29990144] [DOI: 10.1109/tnnls.2017.2770172]
Abstract
In recent decades, primal-dual neural networks, as a special type of recurrent neural network, have achieved great success in real-time manipulator control. However, noise is usually ignored when neural controllers are designed based on them, and thus they may fail to perform well in the presence of intensive noise. Harmonic noise widely exists in real applications and can severely affect control accuracy. This work proposes a novel primal-dual neural network design that directly takes noise control into account. By exploiting the fact that the unknown amplitude and phase information of a harmonic signal can be eliminated from its dynamics, our deliberately designed neural controller achieves accurate tracking of reference trajectories in a noisy environment. Theoretical analysis and extensive simulations show that the proposed controller stabilizes the control system polluted by harmonic noise and drives the position tracking error to zero. Comparisons show that our proposed solution consistently and significantly outperforms the existing primal-dual neural solutions, as well as the feedforward and adaptive neural solutions, for redundancy resolution of manipulators.
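The key observation, that a harmonic with known frequency but unknown amplitude and phase satisfies a known linear differential equation, is the internal-model principle. The scalar sketch below (an illustrative reduction under assumed gains, not the paper's primal-dual manipulator controller) embeds an oscillator at the disturbance frequency in the controller and compares it with a plain proportional loop.

```python
import numpy as np

# Plant: integrator dy/dt = u + d, disturbed by d(t) = 1.7*sin(w t + 0.8),
# whose frequency w is known but whose amplitude and phase are not.
w, k = 3.0, 6.0
g1, g2 = -46.0, 3.0            # places all closed-loop poles of the augmented system at s = -2
dt, steps = 2e-4, 100_000      # 20 s of explicit-Euler simulation

def run(internal_model: bool):
    """Return the peak |y| over the last 3 s, with or without the embedded oscillator."""
    y, x1, x2 = 1.0, 0.0, 0.0  # plant output and the controller's oscillator states
    peak = 0.0
    for n in range(steps):
        t = n * dt
        d = 1.7 * np.sin(w * t + 0.8)
        u = -k * y - (g1 * x1 + g2 * x2 if internal_model else 0.0)
        y += dt * (u + d)
        x1 += dt * x2                       # oscillator: x1'' + w^2 x1 = y
        x2 += dt * (-w * w * x1 + y)
        if n >= steps - 15_000:
            peak = max(peak, abs(y))
    return peak

err_im   = run(True)     # with the internal model: harmonic rejected
err_prop = run(False)    # proportional control only: residual sinusoidal error persists
```

The augmented characteristic polynomial s^3 + k s^2 + (w^2 + g2)s + (k w^2 + g1) equals (s + 2)^3 for the gains above, so the loop is stable, and the oscillator's infinite gain at frequency w annihilates the steady-state error that the proportional loop leaves behind (amplitude roughly 1.7/sqrt(k^2 + w^2) ~ 0.25).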
|
38
|
Xiao L, Zhang Z, Zhang Z, Li W, Li S. Design, verification and robotic application of a novel recurrent neural network for computing dynamic Sylvester equation. Neural Netw 2018; 105:185-196. [DOI: 10.1016/j.neunet.2018.05.008]
|
39
|
Neural network for nonsmooth pseudoconvex optimization with general convex constraints. Neural Netw 2018; 101:1-14. [DOI: 10.1016/j.neunet.2018.01.008]
|
40
|
Feng R, Leung CS, Sum J. Robustness Analysis on Dual Neural Network-based k-WTA With Input Noise. IEEE Trans Neural Netw Learn Syst 2018; 29:1082-1094. [PMID: 28186910] [DOI: 10.1109/tnnls.2016.2645602]
Abstract
This paper studies the effects of uniform input noise and Gaussian input noise on the dual neural network-based WTA (DNN-WTA) model. We show that the state of the network, under either type of input noise, converges to one of the equilibrium points. We then derive a formula to check whether the network produces the correct outputs. Furthermore, for uniformly distributed inputs, two lower bounds (one for each type of input noise) on the probability that the network produces the correct outputs are presented. Moreover, given the minimum separation among the inputs, we derive the condition under which the network produces the correct outputs. Finally, experimental results are presented to verify the theoretical results. Since random drift in the comparators can be regarded as input noise, our results also apply to the random-drift situation.
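The role of the minimum input separation can be illustrated directly: with additive Gaussian input noise, a k-WTA decision stays correct as long as the noise does not reorder inputs across the k-th boundary. The sketch below (a plain Monte-Carlo check on noisy top-k selection, not the DNN-WTA dynamics or the paper's analytical bounds) estimates the correct-output probability for noise that is small versus large relative to the separation.

```python
import numpy as np

rng = np.random.default_rng(2)

def kwta_correct_prob(u, k, sigma, trials=2000, rng=rng):
    """Monte-Carlo estimate of P(top-k under additive Gaussian input noise
    matches the noise-free top-k)."""
    truth = set(np.argsort(u)[-k:])
    hits = 0
    for _ in range(trials):
        noisy = u + rng.normal(0.0, sigma, size=u.shape)
        hits += set(np.argsort(noisy)[-k:]) == truth
    return hits / trials

u = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # minimum separation 0.2 between adjacent inputs
p_small = kwta_correct_prob(u, k=2, sigma=0.01)   # noise << separation
p_large = kwta_correct_prob(u, k=2, sigma=0.5)    # noise comparable to the input spread
```

When the noise standard deviation is far below the 0.2 separation the output is essentially always correct, whereas noise on the order of the input spread frequently flips the ranking at the k-th boundary, mirroring the separation condition analyzed in the paper.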
|
41
|
Simplified neural network for generalized least absolute deviation. Neural Comput Appl 2018. [DOI: 10.1007/s00521-017-3060-2]
|
42
|
Xiao L, Liao B, Li S, Chen K. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw 2018; 98:102-113. [DOI: 10.1016/j.neunet.2017.11.011]
|
43
|
Jin L, Li S, Wang H, Zhang Z. Nonconvex projection activated zeroing neurodynamic models for time-varying matrix pseudoinversion with accelerated finite-time convergence. Appl Soft Comput 2018. [DOI: 10.1016/j.asoc.2017.09.016]
|
44
|
|
45
|
|
46
|
Jin L, Li S. Nonconvex function activated zeroing neural network models for dynamic quadratic programming subject to equality and inequality constraints. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.05.017]
|
47
|
Mirza MA, Li S, Jin L. Simultaneous learning and control of parallel Stewart platforms with unknown parameters. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.05.026]
|
48
|
Xiao L, Zhang Y, Liao B, Zhang Z, Ding L, Jin L. A Velocity-Level Bi-Criteria Optimization Scheme for Coordinated Path Tracking of Dual Robot Manipulators Using Recurrent Neural Network. Front Neurorobot 2017; 11:47. [PMID: 28928651] [PMCID: PMC5591439] [DOI: 10.3389/fnbot.2017.00047]
Abstract
A dual-robot system is a robotic device composed of two robot arms. To eliminate joint-angle drift and prevent the occurrence of high joint velocities, a velocity-level bi-criteria optimization scheme, which includes two criteria (the minimum velocity norm and the repetitive motion), is proposed and investigated for coordinated path tracking of dual robot manipulators. Specifically, to realize the coordinated path tracking, two subschemes are first presented for the left and right manipulators. These two subschemes are then reformulated as two general quadratic programs (QPs), which can in turn be formulated as one unified QP. A recurrent neural network (RNN) is thus presented to solve the unified QP problem effectively. Finally, computer simulation results based on a dual three-link planar manipulator further validate the feasibility and efficacy of the velocity-level optimization scheme for coordinated path tracking using the recurrent neural network.
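Velocity-level schemes of this kind reduce, at each instant, to an equality-constrained QP in the joint velocities. The sketch below (a one-step closed-form KKT solve under assumed link lengths and gains, rather than the paper's recurrent-network solver of the unified QP) blends a minimum-velocity-norm term with a repetitive-motion pull toward an initial configuration for a single three-link planar arm.

```python
import numpy as np

def jacobian_3link(q, L=(1.0, 1.0, 1.0)):
    """Position Jacobian (2x3) of a planar 3-link arm with link lengths L."""
    s = np.cumsum(q)                       # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -sum(L[j] * np.sin(s[j]) for j in range(i, 3))
        J[1, i] =  sum(L[j] * np.cos(s[j]) for j in range(i, 3))
    return J

def bicriteria_qdot(q, v, q0, alpha=0.5, lam=2.0):
    """One velocity-level bi-criteria step, subject to J(q) qdot = v.

    Objective: alpha*0.5*||qdot||^2 + (1-alpha)*0.5*||qdot + lam*(q - q0)||^2,
    i.e. minimum velocity norm blended with a repetitive-motion pull toward q0.
    This collapses to 0.5*||qdot||^2 + c'qdot with c = (1-alpha)*lam*(q - q0),
    solved exactly via the KKT system of the equality-constrained QP.
    """
    J = jacobian_3link(q)
    c = (1.0 - alpha) * lam * (q - q0)
    KKT = np.block([[np.eye(3), J.T], [J, np.zeros((2, 2))]])
    rhs = np.concatenate([-c, v])
    return np.linalg.solve(KKT, rhs)[:3]

q  = np.array([0.3, 0.4, 0.5]); q0 = np.array([0.2, 0.2, 0.2])
v  = np.array([0.1, -0.05])                # commanded end-effector velocity
qd = bicriteria_qdot(q, v, q0)             # satisfies J(q) qd = v exactly
```

With alpha = 1 the repetitive-motion term vanishes and the KKT solution reduces to the familiar pseudoinverse (minimum-velocity-norm) resolution, which is a convenient sanity check on the formulation.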
Affiliation(s)
- Lin Xiao
- College of Information Science and Engineering, Jishou University, Jishou, China
- Yongsheng Zhang
- College of Information Science and Engineering, Jishou University, Jishou, China
- Bolin Liao
- College of Information Science and Engineering, Jishou University, Jishou, China
- Zhijun Zhang
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, China
- Lei Ding
- College of Information Science and Engineering, Jishou University, Jishou, China
- Long Jin
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
|
49
|
Liao B, Xiang Q. Robustness Analyses and Optimal Sampling Gap of Recurrent Neural Network for Dynamic Matrix Pseudoinversion. J Adv Comput Intell Intell Inform 2017. [DOI: 10.20965/jaciii.2017.p0778]
Abstract
This study analyses the robustness and convergence characteristics of a neural network. First, a special class of recurrent neural network (RNN), termed a continuous-time Zhang neural network (CTZNN) model, is presented and investigated for dynamic matrix pseudoinversion. Theoretical analysis of the CTZNN model demonstrates that it has good robustness against various types of noise. In addition, considering the requirements of digital implementation and online computation, the optimal sampling gap for a discrete-time Zhang neural network (DTZNN) model under noisy environments is proposed. Finally, experimental results are presented, which further substantiate the theoretical analyses and demonstrate the effectiveness of the proposed ZNN models for computing a dynamic matrix pseudoinverse under noisy environments.
|
50
|
Ye Q, Lou X, Sheng L. Generalized predictive control of a class of MIMO models via a projection neural network. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.12.067]
|