1.
Zheng B, Li C, Zhang Z, Yu J, Liu PX. An Arbitrarily Predefined-Time Convergent RNN for Dynamic LMVE With Its Applications in UR3 Robotic Arm Control and Multiagent Systems. IEEE TRANSACTIONS ON CYBERNETICS 2025; 55:1789-1800. [PMID: 40031852 DOI: 10.1109/tcyb.2025.3539275] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/05/2025]
Abstract
Zeroing neural network (ZNN), as a special type of recurrent neural network (RNN), is very competitive in solving time-varying linear matrix-vector equations. Recently, various ZNNs with predefined-time convergence (PTC) capabilities have been reported. Such ZNNs with PTC capabilities can achieve the predefined convergence time via explicitly presetting multiple parameters related to the upper bounds of their convergence time. However, obtaining suitable and robust values for these parameters through reasonable adjustments is a challenging task in many engineering applications. To address this problem, we propose a novel arbitrarily predefined-time convergent RNN (APTC-RNN) with a novel nonlinear piecewise activation-function (NPAF). Unlike most existing ZNNs with PTC capabilities, the proposed APTC-RNN, due to its NPAF, can achieve arbitrarily PTC (APTC) without adjusting any upper bound parameters. Furthermore, due to the piecewise computation form of the NPAF, the proposed APTC-RNN can provide a lower computational cost compared to most existing RNNs. The stability and APTC capability of the proposed APTC-RNN are proven by rigorous theoretical analysis and mathematical derivation. Numerical simulations show that APTC-RNN has faster and more accurate PTC capability than three state-of-the-art RNNs, while having less computational time. Finally, the practicality of the APTC-RNN is verified by applying it to the UR3 robotic arm and multiagent systems.
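For orientation, the zeroing-neurodynamic design principle that this family of models builds on can be summarized as follows; this is a generic sketch of the standard ZNN recipe for a time-varying linear system A(t)x(t) = b(t), not the specific APTC-RNN dynamics of the cited paper:

\[
e(t) = A(t)x(t) - b(t), \qquad \dot e(t) = -\gamma\,\Phi\bigl(e(t)\bigr)
\;\Longrightarrow\;
A(t)\dot x(t) = -\dot A(t)x(t) + \dot b(t) - \gamma\,\Phi\bigl(A(t)x(t) - b(t)\bigr),
\]

where γ > 0 is a design gain and Φ(·) is an elementwise odd, monotonically increasing activation function; the choice of Φ (here, the proposed NPAF) is what shapes the convergence-time behavior.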
2.
Huang H, Zeng Z. An Accelerated Approach on Adaptive Gradient Neural Network for Solving Time-Dependent Linear Equations: A State-Triggered Perspective. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2025; 36:5070-5081. [PMID: 38483798 DOI: 10.1109/tnnls.2024.3371008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/06/2025]
Abstract
To improve the acceleration performance, a hybrid state-triggered discretization (HSTD) is proposed for the adaptive gradient neural network (AGNN) for solving time-dependent linear equations (TDLEs). Unlike the existing approaches that use an activation function or a time-varying coefficient for acceleration, the proposed HSTD is uniquely designed from a control theory perspective. It comprises two essential components: adaptive sampling interval state-triggered discretization (ASISTD) and adaptive coefficient state-triggered discretization (ACSTD). The former addresses the gap in acceleration methods related to the variable sampling period, while the latter considers the underlying evolutionary dynamics of the Lyapunov function to determine coefficients greedily. Finally, compared with commonly used discretization methods, the acceleration performance and computational advantages of the proposed HSTD are substantiated by the numerical simulations and applications to robotics.
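As a point of reference for the discretization discussed above, the underlying gradient-neural-network iteration for A(t)x(t) = b(t) in its simplest fixed-step Euler form might look like the sketch below; the sampling interval h and gain gamma are illustrative constants, not the state-triggered quantities of the cited HSTD scheme:

import numpy as np

def gnn_step(x, A, b, gamma=10.0, h=1e-3):
    """One Euler step of a gradient neural network for A x = b.

    Continuous dynamics: x_dot = -gamma * A^T (A x - b).
    gamma and h are illustrative constants, not tuned values."""
    grad = A.T @ (A @ x - b)      # gradient of 0.5*||A x - b||^2
    return x - h * gamma * grad

# toy usage with a system sampled at one instant
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = np.zeros(2)
for _ in range(20000):
    x = gnn_step(x, A, b)
print(x)  # approaches the solution of A x = b, here [1, 1]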
3.
Zheng L, Yu W, Xu Z, Zhang Z, Deng F. Design, Analysis, and Application of a Discrete Error Redefinition Neural Network for Time-Varying Quadratic Programming. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:13646-13657. [PMID: 37224359 DOI: 10.1109/tnnls.2023.3270381] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
Time-varying quadratic programming (TV-QP) is widely used in artificial intelligence, robotics, and many other fields. To solve this important problem, a novel discrete error redefinition neural network (D-ERNN) is proposed. By redefining the error monitoring function and discretization, the proposed neural network is superior to some traditional neural networks in terms of convergence speed, robustness, and overshoot. Compared with the continuous ERNN, the proposed discrete neural network is more suitable for computer implementation. Unlike works on continuous neural networks, this article also analyzes and proves how to select the parameters and step size of the proposed neural network to ensure its reliability. Moreover, how to achieve the discretization of the ERNN is presented and discussed. The convergence of the proposed neural network without disturbance is proven, and bounded time-varying disturbances can be resisted in theory. Furthermore, the comparison results with other related neural networks show that the proposed D-ERNN has a faster convergence speed, better antidisturbance ability, and lower overshoot.
4.
Luo Y, Li X, Li Z, Xie J, Zhang Z, Li X. A Novel Swarm-Exploring Neurodynamic Network for Obtaining Global Optimal Solutions to Nonconvex Nonlinear Programming Problems. IEEE TRANSACTIONS ON CYBERNETICS 2024; 54:5866-5876. [PMID: 39088499 DOI: 10.1109/tcyb.2024.3398585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/03/2024]
Abstract
A swarm-exploring neurodynamic network (SENN) based on a two-timescale model is proposed in this study for solving nonconvex nonlinear programming problems. First, by using a convergent-differential neural network (CDNN) as a local quadratic programming (QP) solver and combining it with a two-timescale model design method, a two-timescale convergent-differential (TTCD) model is exploited, and its stability is analyzed and described in detail. Second, swarm exploration neurodynamics are incorporated into the TTCD model to obtain an SENN with global search capabilities. Finally, the feasibility of the proposed SENN is demonstrated via simulation, and the superiority of the SENN is exhibited through a comparison with existing collaborative neurodynamics methods. The advantage of the SENN is that it requires only a single recurrent neural network (RNN), whereas the compared collaborative neurodynamic approach (CNA) involves multiple RNN runs.
5.
Li H, Liao B, Li J, Li S. A Survey on Biomimetic and Intelligent Algorithms with Applications. Biomimetics (Basel) 2024; 9:453. [PMID: 39194432 DOI: 10.3390/biomimetics9080453] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2024] [Revised: 07/12/2024] [Accepted: 07/22/2024] [Indexed: 08/29/2024] Open
Abstract
The question "How does it work?" has motivated many scientists. Through the study of natural phenomena and behaviors, many intelligent algorithms have been proposed to solve various optimization problems. This paper aims to offer an informative guide for researchers who are interested in tackling optimization problems with such algorithms. First, a special neural network, the zeroing neural network (ZNN), is comprehensively discussed; it is especially intended for solving time-varying optimization problems, and its origin, basic principles, operating mechanism, model variants, and applications are covered. This paper also presents a new classification method based on the performance index of ZNNs. Then, two classic bio-inspired algorithms, the genetic algorithm and the particle swarm algorithm, are outlined as representatives, including their origin, design process, basic principles, and applications. Finally, to emphasize the applicability of intelligent algorithms, three practical domains are introduced: gene feature extraction, intelligent communication, and image processing.
Affiliation(s)
- Hao Li
- College of Computer Science and Engineering, Jishou University, Jishou 416000, China
- School of Communication and Electronic Engineering, Jishou University, Jishou 416000, China
- Bolin Liao
- College of Computer Science and Engineering, Jishou University, Jishou 416000, China
- Jianfeng Li
- College of Computer Science and Engineering, Jishou University, Jishou 416000, China
- Shuai Li
- College of Computer Science and Engineering, Jishou University, Jishou 416000, China
6.
Zhou J, Ning J, Xiang Z, Yin P. ICDW-YOLO: An Efficient Timber Construction Crack Detection Algorithm. SENSORS (BASEL, SWITZERLAND) 2024; 24:4333. [PMID: 39001112 PMCID: PMC11244569 DOI: 10.3390/s24134333] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/11/2024] [Revised: 06/29/2024] [Accepted: 07/02/2024] [Indexed: 07/16/2024]
Abstract
A robust wood material crack detection algorithm, sensitive to small targets, is indispensable for production and building protection. However, the precise identification and localization of cracks in wooden materials present challenges owing to significant scale variations among cracks and the irregular quality of existing data. In response, we propose a crack detection algorithm tailored to wooden materials, leveraging advancements in the YOLOv8 model, named ICDW-YOLO (improved crack detection for wooden material-YOLO). The ICDW-YOLO model introduces novel designs for the neck network and layer structure, along with an anchor algorithm, which features a dual-layer attention mechanism and dynamic gradient gain characteristics to optimize and enhance the original model. Initially, a new layer structure was crafted using GSConv and GS bottleneck, improving the model's recognition accuracy by maximizing the preservation of hidden channel connections. Subsequently, enhancements to the network are achieved through the gather-distribute mechanism, aimed at augmenting the fusion capability of multi-scale features and introducing a higher-resolution input layer to enhance small target recognition. Empirical results obtained from a customized wooden material crack detection dataset demonstrate the efficacy of the proposed ICDW-YOLO algorithm in effectively detecting targets. Without significant augmentation in model complexity, the mAP50-95 metric attains 79.018%, marking a 1.869% improvement over YOLOv8. Further validation of our algorithm's effectiveness is conducted through experiments on fire and smoke detection datasets, aerial remote sensing image datasets, and the coco128 dataset. The results showcase that ICDW-YOLO achieves a mAP50 of 69.226% and a mAP50-95 of 44.210%, indicating robust generalization and competitiveness vis-à-vis state-of-the-art detectors.
Affiliation(s)
- Jieyang Zhou
- College of Computer Science and Engineering, Jishou University, Jishou 416000, China; (J.Z.); (Z.X.)
- Jing Ning
- School of Communication and Electronic Engineering, Jishou University, Jishou 416000, China
- Zhiyang Xiang
- College of Computer Science and Engineering, Jishou University, Jishou 416000, China; (J.Z.); (Z.X.)
- Pengfei Yin
- College of Computer Science and Engineering, Jishou University, Jishou 416000, China; (J.Z.); (Z.X.)
7.
Wu W, Zhang Y. Zeroing Neural Network With Coefficient Functions and Adjustable Parameters for Solving Time-Variant Sylvester Equation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:6757-6766. [PMID: 36256719 DOI: 10.1109/tnnls.2022.3212869] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
To solve the time-variant Sylvester equation, in 2013, Li et al. proposed the zeroing neural network with sign-bi-power function (ZNN-SBPF) model by constructing a nonlinear activation function. In this article, to further improve the convergence rate, the zeroing neural network with coefficient functions and adjustable parameters (ZNN-CFAP) model is proposed as a variant of the zeroing neural network (ZNN) model. On the basis of the introduced coefficient functions, an appropriate ZNN-CFAP model can be chosen according to the error function. The high convergence rate of the ZNN-CFAP model can be achieved by choosing appropriate adjustable parameters. Moreover, the finite-time convergence property and convergence time upper bound of the ZNN-CFAP model are proved in theory. Computer simulations and numerical experiments are performed to illustrate the efficacy and validity of the ZNN-CFAP model in time-variant Sylvester equation solving. Comparative experiments among the ZNN-CFAP, ZNN-SBPF, and ZNN with linear function (ZNN-LF) models further substantiate the superiority of the ZNN-CFAP model in terms of convergence rate. Finally, the proposed ZNN-CFAP model is successfully applied to the tracking control of a robot manipulator to verify its practicability.
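For concreteness, a minimal zeroing-neural-network integration loop for a time-varying Sylvester equation A(t)X + XB(t) = C(t) is sketched below with a plain linear activation and hand-picked example coefficients; it illustrates the general ZNN mechanism only, not the coefficient-function and adjustable-parameter design of the cited ZNN-CFAP model:

import numpy as np

def znn_sylvester_step(X, t, h=1e-3, gamma=100.0):
    """One Euler step of a ZNN for the time-varying Sylvester
    equation A(t) X + X B(t) = C(t), with linear activation
    (E_dot = -gamma * E). Coefficients are assumed for illustration."""
    A  = np.array([[3.0 + np.sin(t), 0.5], [0.5, 3.0 + np.cos(t)]])
    B  = np.array([[2.0, 0.0], [0.0, 2.0 + 0.5 * np.sin(t)]])
    C  = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    dA = np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
    dB = np.array([[0.0, 0.0], [0.0, 0.5 * np.cos(t)]])
    dC = np.array([[-np.sin(t), np.cos(t)], [-np.cos(t), -np.sin(t)]])

    E = A @ X + X @ B - C                    # residual to be zeroed
    rhs = -gamma * E - dA @ X - X @ dB + dC  # from d/dt(E) = -gamma*E
    # solve A*Xdot + Xdot*B = rhs via Kronecker vectorization
    n = X.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
    Xdot = np.linalg.solve(M, rhs.reshape(-1, order="F")).reshape(n, n, order="F")
    return X + h * Xdot

X, t = np.zeros((2, 2)), 0.0
for _ in range(5000):
    X = znn_sylvester_step(X, t)
    t += 1e-3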
8.
Zhang Z, Chen B, Luo Y. A Deep Ensemble Dynamic Learning Network for Corona Virus Disease 2019 Diagnosis. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:3912-3926. [PMID: 36054386 DOI: 10.1109/tnnls.2022.3201198] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Coronavirus disease 2019 (COVID-19) is a severe ongoing worldwide pandemic. Intelligently recognizing chest X-ray radiography images to automatically distinguish COVID-19 from other types of pneumonia and from normal cases offers clinicians substantial convenience in the diagnostic process. In this article, a deep ensemble dynamic learning network is proposed. After a chain of image preprocessing steps and the division of the image dataset, convolution blocks and the final average pooling layer are pretrained as a feature extractor. For classifying the extracted feature samples, a two-stage bagging dynamic learning network is trained based on neural dynamic learning and bagging algorithms, which diagnoses the presence and type of pneumonia successively. Experimental results show that the proposed deep ensemble dynamic learning network attains 98.7179% diagnosis accuracy, indicating a better diagnostic effect than existing state-of-the-art models on the open image dataset. Such accurate diagnosis provides convincing evidence for further detection and treatment.
9.
Wang D, Liu XW. A varying-parameter fixed-time gradient-based dynamic network for convex optimization. Neural Netw 2023; 167:798-809. [PMID: 37738715 DOI: 10.1016/j.neunet.2023.08.047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Revised: 07/05/2023] [Accepted: 08/28/2023] [Indexed: 09/24/2023]
Abstract
We focus on the fixed-time convergence and robustness of gradient-based dynamic networks for solving convex optimization. Most of the existing gradient-based dynamic networks with fixed-time convergence have limited ability to resist noise interference. To improve the convergence of gradient-based dynamic networks, we design a new activation function and propose a gradient-based dynamic network with fixed-time convergence. The proposed dynamic network has a smaller upper bound on the convergence time than the existing dynamic networks with fixed-time convergence. A time-varying scaling parameter is employed to speed up the convergence. Our gradient-based dynamic network is proved to be robust against bounded noise and is able to resist the interference of unbounded noise. The numerical tests illustrate the effectiveness and superiority of the proposed network.
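A generic form of the dynamics discussed here, combining a time-varying scaling parameter with the two-power activation commonly used to obtain fixed-time bounds, is (a sketch of the general pattern under standard assumptions, not the exact model or activation of the cited paper):

\[
\dot x(t) = -\gamma(t)\,\Phi\bigl(\nabla f(x(t))\bigr), \qquad
\Phi(u) = \lvert u\rvert^{p}\operatorname{sign}(u) + \lvert u\rvert^{q}\operatorname{sign}(u), \quad 0 < p < 1 < q,
\]

with Φ applied elementwise and γ(t) > 0 a nondecreasing scaling function.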
Affiliation(s)
- Dan Wang
- School of Artificial Intelligence, Hebei University of Technology, Tianjin, 300401, China.
- Xin-Wei Liu
- Institute of Mathematics, Hebei University of Technology, Tianjin, 300401, China.
10.
Liang Y, Liu J, Xu D. Stochastic momentum methods for non-convex learning without bounded assumptions. Neural Netw 2023; 165:830-845. [PMID: 37418864 DOI: 10.1016/j.neunet.2023.06.021] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 05/26/2023] [Accepted: 06/19/2023] [Indexed: 07/09/2023]
Abstract
Stochastic momentum methods are widely used to solve stochastic optimization problems in machine learning. However, most of the existing theoretical analyses rely on either bounded assumptions or strong stepsize conditions. In this paper, we focus on a class of non-convex objective functions satisfying the Polyak-Łojasiewicz (PL) condition and present a unified convergence rate analysis for stochastic momentum methods without any bounded assumptions, which covers stochastic heavy ball (SHB) and stochastic Nesterov accelerated gradient (SNAG). Our analysis achieves the more challenging last-iterate convergence rate of function values under the relaxed growth (RG) condition, which is a weaker assumption than those used in related work. Specifically, we attain the sub-linear rate for stochastic momentum methods with diminishing stepsizes, and the linear convergence rate for constant stepsizes if the strong growth (SG) condition holds. We also examine the iteration complexity for obtaining an ϵ-accurate solution of the last-iterate. Moreover, we provide a more flexible stepsize scheme for stochastic momentum methods in three points: (i) relaxing the last-iterate convergence stepsize from square summable to zero limitation; (ii) extending the minimum-iterate convergence rate stepsize to the non-monotonic case; (iii) expanding the last-iterate convergence rate stepsize to a more general form. Finally, we conduct numerical experiments on benchmark datasets to validate our theoretical findings.
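For reference, the stochastic heavy-ball (SHB) update analyzed in this line of work has the generic form sketched below; the constant stepsize and momentum values are illustrative only, whereas the cited analysis covers diminishing and more general stepsize schedules:

import numpy as np

def shb_update(x, x_prev, grad, alpha=0.1, beta=0.9):
    """Stochastic heavy-ball step: x_{k+1} = x_k - alpha*g_k + beta*(x_k - x_{k-1})."""
    return x - alpha * grad + beta * (x - x_prev)

# toy usage on f(x) = 0.5*||x||^2, whose stochastic gradient is x + noise
rng = np.random.default_rng(0)
x_prev = np.ones(3)
x = np.ones(3)
for _ in range(1000):
    grad = x + 0.01 * rng.standard_normal(3)   # noisy gradient of 0.5*||x||^2
    x, x_prev = shb_update(x, x_prev, grad), x
print(x)  # hovers near the minimizer, up to the noise level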
Affiliation(s)
- Yuqing Liang
- Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China
- Jinlan Liu
- Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China
- Dongpo Xu
- Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China.
11.
Xiao L, He Y, Wang Y, Dai J, Wang R, Tang W. A Segmented Variable-Parameter ZNN for Dynamic Quadratic Minimization With Improved Convergence and Robustness. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:2413-2424. [PMID: 34464280 DOI: 10.1109/tnnls.2021.3106640] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
As a category of the recurrent neural network (RNN), zeroing neural network (ZNN) can effectively handle time-variant optimization issues. Compared with the fixed-parameter ZNN that needs to be adjusted frequently to achieve good performance, the conventional variable-parameter ZNN (VPZNN) does not require frequent adjustment, but its variable parameter will tend to infinity as time grows. Besides, the existing noise-tolerant ZNN model is not good enough to deal with time-varying noise. Therefore, a new-type segmented VPZNN (SVPZNN) for handling the dynamic quadratic minimization issue (DQMI) is presented in this work. Unlike the previous ZNNs, the SVPZNN includes an integral term and a nonlinear activation function, in addition to two specially constructed time-varying piecewise parameters. This structure keeps the time-varying parameters stable and makes the model have strong noise tolerance capability. Besides, theoretical analysis on SVPZNN is proposed to determine the upper bound of convergence time in the absence or presence of noise interference. Numerical simulations verify that SVPZNN has shorter convergence time and better robustness than existing ZNN models when handling DQMI.
12.
Zhang Z, Yang S, Zheng L. A Punishment Mechanism-Combined Recurrent Neural Network to Solve Motion-Planning Problem of Redundant Robot Manipulators. IEEE TRANSACTIONS ON CYBERNETICS 2023; 53:2177-2185. [PMID: 34623289 DOI: 10.1109/tcyb.2021.3111204] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
In order to make redundant robot manipulators (RRMs) track the complex time-varying trajectory, the motion-planning problem of RRMs can be converted into a constrained time-varying quadratic programming (TVQP) problem. By using a new punishment mechanism-combined recurrent neural network (PMRNN) proposed in this article with reference to the varying-gain neural-dynamic design (VG-NDD) formula, the TVQP problem-based motion-planning scheme can be solved and the optimal angles and velocities of joints of RRMs can also be obtained in the working space. Then, the convergence performance of the PMRNN model in solving the TVQP problem is analyzed theoretically in detail. This novel method has been substantiated to have a faster calculation speed and better accuracy than the traditional method. In addition, the PMRNN model has also been successfully applied to an actual RRM to complete an end-effector trajectory tracking task.
13.
Chen W, Jin J, Gerontitis D, Qiu L, Zhu J. Improved Recurrent Neural Networks for Text Classification and Dynamic Sylvester Equation Solving. Neural Process Lett 2023. [DOI: 10.1007/s11063-023-11176-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
14.
Xiao X, Jiang C, Mei Q, Zhang Y. Noise‐tolerate and adaptive coefficient zeroing neural network for solving dynamic matrix square root. CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY 2023. [DOI: 10.1049/cit2.12183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023] Open
Affiliation(s)
- Xiuchun Xiao
- School of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang, China
- Chengze Jiang
- School of Cyber Science and Engineering, Southeast University, Nanjing, China
- Qixiang Mei
- School of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang, China
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
15.
A predefined-time and anti-noise varying-parameter ZNN model for solving time-varying complex Stein equations. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
16.
He Y, Xiao L, Sun F, Wang Y. A variable-parameter ZNN with predefined-time convergence for dynamic complex-valued Lyapunov equation and its application to AOA positioning. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109703] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
17.
Gunasekaran N, Thoiyab NM, Zhu Q, Cao J, Muruganantham P. New Global Asymptotic Robust Stability of Dynamical Delayed Neural Networks via Intervalized Interconnection Matrices. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:11794-11804. [PMID: 34097631 DOI: 10.1109/tcyb.2021.3079423] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
This article identifies a new upper bound norm for the intervalized interconnection matrices pertaining to delayed dynamical neural networks under the parameter uncertainties. By formulating the appropriate Lyapunov functional and slope-bounded activation functions, the derived new upper bound norms provide new sufficient conditions corresponding to the equilibrium point of the globally asymptotic robust stability with respect to the delayed neural networks. The new upper bound norm also yields the optimized minimum results as compared with some existing methods. Numerical examples are given to demonstrate the effectiveness of the proposed results obtained through the new upper bound norm method.
18.
Xiao L, Jia L, Wang Y, Dai J, Liao Q, Zhu Q. Performance Analysis and Applications of Finite-Time ZNN Models With Constant/Fuzzy Parameters for TVQPEI. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:6665-6676. [PMID: 34081588 DOI: 10.1109/tnnls.2021.3082950] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Motivated by the extensive applications of the time-variant quadratic programming with equality and inequality constraints (TVQPEI) problem and the effectiveness of the zeroing neural network (ZNN) in addressing time-variant problems, this article proposes a novel finite-time ZNN (FT-ZNN) model with a combined activation function, aimed at providing a superior and efficient neurodynamic method for solving the TVQPEI problem. The remarkable properties of the FT-ZNN model, faster finite-time convergence and preferable robustness, are analyzed in detail, where the robustness discussion takes two kinds of noise (bounded constant noise and bounded time-variant noise) into account. Moreover, several theorems are proposed that compute the convergence time of the undisturbed FT-ZNN model and the time for the disturbed FT-ZNN model to approach the upper bound of the residual error. Besides, to enhance the performance of the FT-ZNN model, a fuzzy finite-time ZNN (FFT-ZNN), which possesses a fuzzy parameter, is further presented for solving the TVQPEI problem. A simulative example of the FT-ZNN and FFT-ZNN models solving the TVQPEI problem is given, and the experimental results conform to the theoretical analysis. In addition, the designed FT-ZNN model is effectively applied to the repetitive motion of a three-link redundant robot and to image fusion to show its potential practical value.
19.
Zhu Q, Tan M. A novel activation function based recurrent neural networks and their applications on sentiment classification and dynamic problems solving. Front Neurorobot 2022; 16:1022887. [PMID: 36213146 PMCID: PMC9539977 DOI: 10.3389/fnbot.2022.1022887] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2022] [Accepted: 08/31/2022] [Indexed: 12/02/2022] Open
Abstract
In this paper, a nonlinear activation function (NAF) is proposed and used to construct three recurrent neural network (RNN) models (the simple RNN (SRNN) model, the long short-term memory (LSTM) model, and the gated recurrent unit (GRU) model) for sentiment classification. The Internet Movie Database (IMDB) sentiment classification experiment results demonstrate that the three RNN models using the NAF achieve better accuracy and lower loss values than commonly used activation functions (AFs) such as ReLU and SELU. Moreover, for dynamic problem solving, a fixed-time convergent recurrent neural network (FTCRNN) model with the NAF is constructed. Additionally, the fixed-time convergence property of the FTCRNN model is strictly validated and the upper-bound convergence time formula of the FTCRNN model is obtained. Furthermore, the numerical simulation results of dynamic Sylvester equation (DSE) solving using the FTCRNN model indicate that its neural state solutions quickly converge to the theoretical solutions of DSE problems whether or not noise is present. Ultimately, the FTCRNN model is also utilized for trajectory tracking of a robot manipulator and for computing electric circuit currents, and the corresponding results further validate its accuracy, robustness, and widespread applicability.
Affiliation(s)
- Qingyi Zhu
- School of Electronics and Internet of Things, Sichuan Vocational College of Information Technology, Guangyuan, China
- Mingtao Tan
- School of Computer and Electrical Engineering, Hunan University of Arts and Science, Changde, China
- Correspondence: Mingtao Tan
20.
Dai J, Luo L, Xiao L, Jia L, Li X. An intelligent fuzzy robustness ZNN model with fixed‐time convergence for time‐variant Stein matrix equation. INT J INTELL SYST 2022. [DOI: 10.1002/int.23058] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Jianhua Dai
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, MOE-LCSM, Hunan Normal University, Changsha, China
- Liu Luo
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, MOE-LCSM, Hunan Normal University, Changsha, China
- Lin Xiao
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, MOE-LCSM, Hunan Normal University, Changsha, China
- Lei Jia
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, MOE-LCSM, Hunan Normal University, Changsha, China
- Xiaopeng Li
- Hunan Provincial Key Laboratory of Intelligent Computing and Language Information Processing, MOE-LCSM, Hunan Normal University, Changsha, China
21.
Zhang Z, Li Z, Yang S. A Barrier Varying-Parameter Dynamic Learning Network for Solving Time-Varying Quadratic Programming Problems With Multiple Constraints. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:8781-8792. [PMID: 33635808 DOI: 10.1109/tcyb.2021.3051261] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Many scientific research and engineering problems can be converted to time-varying quadratic programming (TVQP) problems with constraints. Thus, TVQP problem solving plays an important role in practical applications. Many existing neural networks, such as the gradient neural network (GNN) or zeroing neural network (ZNN), were designed to solve TVQP problems, but their convergence rate is limited. The recent varying-parameter convergent-differential neural network (VP-CDNN) can accelerate the convergence rate, but it can only solve the equality-constrained problem. To remedy this deficiency, a novel barrier varying-parameter dynamic learning network (BVDLN) is proposed and designed, which can solve the equality-, inequality-, and bound-constrained problem. Specifically, the constrained TVQP problem is first converted into a matrix equation. Second, based on the modified Karush-Kuhn-Tucker (KKT) conditions and the varying-parameter neural dynamic design method, the BVDLN model is constructed. The superiority of the proposed BVDLN model is that it can solve TVQP problems with multiple constraints and achieves a superexponential convergence rate. Comparative simulation experiments verify that the proposed BVDLN is more effective and more accurate. Finally, the proposed BVDLN is applied to a robot motion-planning problem, which verifies the applicability of the proposed model.
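The conversion of the constrained TVQP into a matrix equation mentioned above follows the usual KKT pattern. For the equality-constrained core, minimize (1/2)xᵀH(t)x + p(t)ᵀx subject to A(t)x = b(t), stationarity and feasibility give the time-varying linear system below (a standard sketch; the cited model additionally treats inequality and bound constraints through its barrier construction):

\[
\begin{bmatrix} H(t) & A(t)^{\mathsf T} \\ A(t) & 0 \end{bmatrix}
\begin{bmatrix} x(t) \\ \lambda(t) \end{bmatrix}
=
\begin{bmatrix} -p(t) \\ b(t) \end{bmatrix},
\]

which a neurodynamic model can then drive to hold for all t.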
22.
Yang M, Zhang Y, Tan N, Mao M, Hu H. 7-Instant Discrete-Time Synthesis Model Solving Future Different-Level Linear Matrix System via Equivalency of Zeroing Neural Network. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:8366-8375. [PMID: 33544686 DOI: 10.1109/tcyb.2021.3051035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Differing from the common linear matrix equation, the future different-level linear matrix system is considered, which is much more interesting and challenging. Because of its complicated structure and future-computation characteristic, traditional methods for static and same-level systems may not be effective on this occasion. For solving this difficult future different-level linear matrix system, the continuous different-level linear matrix system is first considered. On the basis of the zeroing neural network (ZNN), the physical mathematical equivalency is thus proposed, which is called ZNN equivalency (ZE), and it is compared with the traditional concept of mathematical equivalence. Then, on the basis of ZE, the continuous-time synthesis (CTS) model is further developed. To satisfy the future-computation requirement of the future different-level linear matrix system, the 7-instant discrete-time synthesis (DTS) model is further attained by utilizing the high-precision 7-instant Zhang et al. discretization (ZeaD) formula. For a comparison, three different DTS models using three conventional ZeaD formulas are also presented. Meanwhile, the efficacy of the 7-instant DTS model is testified by the theoretical analyses. Finally, experimental results verify the brilliant performance of the 7-instant DTS model in solving the future different-level linear matrix system.
23.
Double Features Zeroing Neural Network Model for Solving the Pseudoninverse of a Complex-Valued Time-Varying Matrix. MATHEMATICS 2022. [DOI: 10.3390/math10122122] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
The solution of a complex-valued matrix pseudoinverse is one of the key steps in various science and engineering fields. Owing to its important role, researchers have put forward many related algorithms, and the time-varying matrix pseudoinverse has received more attention than the time-invariant one; the zeroing neural network (ZNN) is an efficient method for calculating the pseudoinverse of a complex-valued time-varying matrix. However, the initial ZNN (IZNN) and its extensions lack a mechanism for addressing convergence and robustness simultaneously; most existing research on ZNN models studies only one of the two. In order to simultaneously improve these double features (i.e., convergence and robustness) in solving a complex-valued time-varying pseudoinverse, this paper puts forward a double features ZNN (DFZNN) model by adopting a specially designed time-varying parameter and a novel nonlinear activation function. Moreover, two nonlinear activation types for complex numbers are investigated. The global convergence, predefined-time convergence, and robustness are proven in theory, and the upper bound of the predefined convergence time is formulated exactly. The results of the numerical simulation verify the theoretical proof: in contrast to existing complex-valued ZNN models, the DFZNN model has a shorter predefined convergence time in the noise-free state and enhanced robustness in different noise states. Both the theoretical and the empirical results show that the DFZNN model has better ability in solving a time-varying complex-valued matrix pseudoinverse. Finally, the proposed DFZNN model is used to track the trajectory of a manipulator, which further verifies the reliability of the model.
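As background for the formulation used here, one common way to recast the time-varying pseudoinverse as a zeroing problem, assuming A(t) has full column rank so that A⁺ = (AᴴA)⁻¹Aᴴ, is (a generic sketch rather than the specific DFZNN design):

\[
E(t) = A^{H}(t)A(t)X(t) - A^{H}(t), \qquad
\dot E(t) = -\gamma(t)\,\Phi\bigl(E(t)\bigr),
\]

so that driving E(t) to zero forces X(t) toward the pseudoinverse A⁺(t).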
24.
A review on varying-parameter convergence differential neural network. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.03.026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
25.
Ren X, Zhang P, Zhang Z. Bicriteria Velocity Minimization Approach of Self-Motion for Redundant Robot Manipulators With Varying-Gain Recurrent Neural Network. IEEE Trans Cogn Dev Syst 2022. [DOI: 10.1109/tcds.2021.3054999] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Xiaohui Ren
- School of Electrical Engineering, Shaanxi University of Technology, Hanzhong, China
- Pengchao Zhang
- Key Laboratory of Industrial Automation of Shaanxi Province, Shaanxi University of Technology, Hanzhong, China
- Zhijun Zhang
- Key Laboratory of Industrial Automation of Shaanxi Province, Shaanxi University of Technology, Hanzhong, China
26.
Xiao L, Huang W, Jia L, Li X. Two discrete ZNN models for solving time-varying augmented complex Sylvester equation. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.11.012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
27.
Xiao L, He Y, Dai J, Liu X, Liao B, Tan H. A Variable-Parameter Noise-Tolerant Zeroing Neural Network for Time-Variant Matrix Inversion With Guaranteed Robustness. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:1535-1545. [PMID: 33361003 DOI: 10.1109/tnnls.2020.3042761] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Matrix inversion frequently occurs in science, engineering, and related fields. Numerous matrix inversion schemes are based on the premise that the solution procedure is ideal and noise-free. However, external interference is generally ubiquitous and unavoidable in practice. Therefore, an integrated-enhanced zeroing neural network (IEZNN) model has been proposed to handle the time-variant matrix inversion issue under noise interference. However, the IEZNN model can only deal with small time-variant noise interference; with slightly larger noise interference, it may not converge to the theoretical solution exactly. Therefore, a variable-parameter noise-tolerant zeroing neural network (VPNTZNN) model is proposed to overcome these shortcomings. Moreover, the excellent convergence and robustness of the VPNTZNN model are rigorously analyzed and proven. Finally, compared with the original zeroing neural network (OZNN) model and the IEZNN model for matrix inversion, numerical simulations and a practical application reveal that the proposed VPNTZNN model has the best robustness under the same external noise interference.
28.
Qi Y, Jin L, Luo X, Zhou M. Recurrent Neural Dynamics Models for Perturbed Nonstationary Quadratic Programs: A Control-Theoretical Perspective. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:1216-1227. [PMID: 33449881 DOI: 10.1109/tnnls.2020.3041364] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Recent decades have witnessed a trend that control-theoretical techniques are widely leveraged in various areas, e.g., design and analysis of computational models. Computational methods can be modeled as a controller and searching the equilibrium point of a dynamical system is identical to solving an algebraic equation. Thus, absorbing mature technologies in control theory and integrating it with neural dynamics models can lead to new achievements. This work makes progress along this direction by applying control-theoretical techniques to construct new recurrent neural dynamics for manipulating a perturbed nonstationary quadratic program (QP) with time-varying parameters considered. Specifically, to break the limitations of existing continuous-time models in handling nonstationary problems, a discrete recurrent neural dynamics model is proposed to robustly deal with noise. This work shows how iterative computational methods for solving nonstationary QP can be revisited, designed, and analyzed in a control framework. A modified Newton iteration model and an improved gradient-based neural dynamics are established by referring to the superior structural technology of the presented recurrent neural dynamics, where the chief breakthrough is their excellent convergence and robustness over the traditional models. Numerical experiments are conducted to show the eminence of the proposed models in solving perturbed nonstationary QP.
29.
Zhang Z, Sun J, Chen T. A new dynamically convergent differential neural network for brain signal recognition. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103130] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
30.
Zhang Z, Zheng L, Qiu T. A gain-adjustment neural network based time-varying underdetermined linear equation solving method. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.05.096] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
31.
Zhang Z, Chen B, Sun J, Luo Y. A bagging dynamic deep learning network for diagnosing COVID-19. Sci Rep 2021; 11:16280. [PMID: 34381079 PMCID: PMC8358001 DOI: 10.1038/s41598-021-95537-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Accepted: 07/26/2021] [Indexed: 01/19/2023] Open
Abstract
COVID-19 is a serious ongoing worldwide pandemic. Using X-ray chest radiography images for automatically diagnosing COVID-19 is an effective and convenient means of providing diagnostic assistance to clinicians in practice. This paper proposes a bagging dynamic deep learning network (B-DDLN) for diagnosing COVID-19 by intelligently recognizing its symptoms in X-ray chest radiography images. After a series of preprocessing steps for images, we pre-train convolution blocks as a feature extractor. For the extracted features, a bagging dynamic learning network classifier is trained based on neural dynamic learning algorithm and bagging algorithm. B-DDLN connects the feature extractor and bagging classifier in series. Experimental results verify that the proposed B-DDLN achieves 98.8889% testing accuracy, which shows the best diagnosis performance among the existing state-of-the-art methods on the open image set. It also provides evidence for further detection and treatment.
Affiliation(s)
- Zhijun Zhang
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510640, China.
- Guangdong Artificial Intelligence and Digital Economy Laboratory (Pazhou Lab), Guangzhou, 510335, China.
- School of Automation Science and Engineering, East China Jiaotong University, Nanchang, 330052, China.
- Shaanxi Provincial Key Laboratory of Industrial Automation, School of Mechanical Engineering, Shaanxi University of Technology, Hanzhong, 723001, China.
- School of Information Technology and Management, Hunan University of Finance and Economics, Changsha, 410205, China.
- Bozhao Chen
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510640, China
- Jiansheng Sun
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510640, China
- Yamei Luo
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510640, China
32.
Zhang Z, Zheng L, Yang H, Qu X. Design and Analysis of a Novel Integral Recurrent Neural Network for Solving Time-Varying Sylvester Equation. IEEE TRANSACTIONS ON CYBERNETICS 2021; 51:4312-4326. [PMID: 31545759 DOI: 10.1109/tcyb.2019.2939350] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
To solve a general time-varying Sylvester equation, a novel integral recurrent neural network (IRNN) is designed and analyzed. This kind of recurrent neural network is based on an error-integral design equation and does not need training in advance. The IRNN can achieve global convergence and strong robustness if odd monotonically increasing activation functions [i.e., the linear, bipolar-sigmoid, power, or sigmoid-power activation functions (SP-AFs)] are applied. Specifically, if linear or bipolar-sigmoid activation functions are applied, the IRNN possesses exponential convergence performance. The IRNN has the finite-time convergence property when the power activation function is used. To obtain faster convergence and the finite-time convergence property, an SP-AF is designed. Furthermore, by using the discretization method, the discrete IRNN model and its convergence analysis are also presented. A practical application to a robot manipulator and computer simulation results using different activation functions and design parameters have verified the effectiveness, stability, and reliability of the proposed IRNN.
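The error-integral design equation underlying this class of models can be written generically as follows (one common integral-enhanced form, stated only as a sketch for a Sylvester residual such as E(t) = A(t)X(t) + X(t)B(t) − C(t); the cited IRNN fixes its own activation functions and design parameters):

\[
\dot E(t) = -\gamma_1\,\Phi\bigl(E(t)\bigr) - \gamma_2 \int_0^{t} \Phi\bigl(E(\tau)\bigr)\,\mathrm{d}\tau, \qquad \gamma_1, \gamma_2 > 0,
\]

where the integral term is what confers robustness to constant disturbances.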
33.
Zhang Z, Yang S, Zheng L. A Penalty Strategy Combined Varying-Parameter Recurrent Neural Network for Solving Time-Varying Multi-Type Constrained Quadratic Programming Problems. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:2993-3004. [PMID: 32726282 DOI: 10.1109/tnnls.2020.3009201] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
To obtain the optimal solution to the time-varying quadratic programming (TVQP) problem with equality and multitype inequality constraints, a penalty strategy combined varying-parameter recurrent neural network (PS-VP-RNN) for solving TVQP problems is proposed and analyzed. By using a novel penalty function designed in this article, the inequality constraint of the TVQP can be transformed into a penalty term that is added to the objective function of TVQP problems. Then, based on the design method of VP-RNN, a PS-VP-RNN is designed and analyzed for solving the TVQP with the penalty term. One of the greatest advantages of PS-VP-RNN is that it can not only solve the TVQP with equality constraints but can also solve the TVQP with inequality and bounded constraints. The global convergence theorem of PS-VP-RNN is presented and proved. Finally, three numerical simulation experiments with different forms of inequality and bounded constraints verify the effectiveness and accuracy of PS-VP-RNN in solving the TVQP problems.
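The penalty idea described above, converting inequality constraints g_i(x,t) ≤ 0 into an extra cost term, can be sketched in its simplest quadratic form as follows (the cited paper designs its own, different penalty function):

\[
\min_{x}\; f(x,t) + \rho \sum_{i}\bigl[\max\{0,\; g_i(x,t)\}\bigr]^{2}, \qquad \rho > 0,
\]

so that only violated constraints contribute to the objective being minimized by the neurodynamic model.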
34.
Zheng L, Zhang Z. Convergence and Robustness Analysis of Novel Adaptive Multilayer Neural Dynamics-Based Controllers of Multirotor UAVs. IEEE TRANSACTIONS ON CYBERNETICS 2021; 51:3710-3723. [PMID: 31295138 DOI: 10.1109/tcyb.2019.2923642] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Because of their simple structure and strong flexibility, multirotor unmanned aerial vehicles (UAVs) have attracted considerable attention in scientific research and engineering fields during the past decades. In this paper, a novel adaptive multilayer neural dynamic (AMND)-based controller design method is proposed for designing the attitude-angle (the roll angle ϕ, the pitch angle θ, and the yaw angle ψ), height (z), and position (x and y) controllers of a general multirotor UAV model. Global convergence and strong robustness of the proposed AMND-based method and controllers are analyzed and proved theoretically. By incorporating the adaptive control method into the general multilayer neural dynamic-based controller design method, multirotor UAVs with unknown disturbances can complete time-varying trajectory tracking tasks. AMND-based controllers with self-tuning rates can estimate the unknown disturbances and solve the model uncertainty problems. Both the theoretical theorems and the simulation results illustrate that the proposed design method and its controllers, with strong anti-interference properties, can achieve time-varying trajectory tracking control stably, reliably, and effectively. Moreover, a practical experiment using a mini multirotor UAV illustrates the practicability of the AMND-based method.
35.
Zhang Z, Chen B, Xu S, Chen G, Xie J. A novel voting convergent difference neural network for diagnosing breast cancer. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.01.083] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
36.
Zhang X, Chen L, Li S, Stanimirović P, Zhang J, Jin L. Design and analysis of recurrent neural network models with non‐linear activation functions for solving time‐varying quadratic programming problems. CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY 2021. [DOI: 10.1049/cit2.12019] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Affiliation(s)
- Xiaoyan Zhang
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Liangming Chen
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Shuai Li
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Jiliang Zhang
- Department of Electronic and Electrical Engineering, The University of Sheffield, Sheffield, UK
- Long Jin
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
37.
A Vary-Parameter Convergence-Accelerated Recurrent Neural Network for Online Solving Dynamic Matrix Pseudoinverse and its Robot Application. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10440-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
38.
Xiao L, Dai J, Lu R, Li S, Li J, Wang S. Design and Comprehensive Analysis of a Noise-Tolerant ZNN Model With Limited-Time Convergence for Time-Dependent Nonlinear Minimization. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:5339-5348. [PMID: 32031952 DOI: 10.1109/tnnls.2020.2966294] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The zeroing neural network (ZNN) is a powerful tool for addressing the mathematical and optimization problems that arise broadly in science and engineering. Convergence and robustness are always co-pursued in ZNN design. However, there exists no related work on the ZNN for time-dependent nonlinear minimization that simultaneously achieves limited-time convergence and inherent noise suppression. In this article, to satisfy these two requirements, a limited-time robust neural network (LTRNN) is devised and presented to solve time-dependent nonlinear minimization under various external disturbances. Different from previous ZNN models for this problem, which offer either limited-time convergence or noise suppression, the proposed LTRNN model simultaneously possesses both characteristics. Besides, rigorous theoretical analyses are given to prove the superior performance of the LTRNN model when adopted to solve time-dependent nonlinear minimization under external disturbances. Comparative results also substantiate the effectiveness and advantages of the LTRNN via solving a time-dependent nonlinear minimization problem.
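For context, a standard way to pose time-dependent nonlinear minimization min_x f(x,t) as a zeroing problem is to drive the gradient to zero; under the assumption that the Hessian is invertible along the trajectory, this gives (a generic sketch, whereas the cited LTRNN adds a specific activation and a noise-suppressing term):

\[
e(t) = \nabla_x f\bigl(x(t),t\bigr), \qquad \dot e(t) = -\gamma\,\Phi\bigl(e(t)\bigr)
\;\Longrightarrow\;
\nabla_x^{2} f\,\dot x(t) = -\gamma\,\Phi\bigl(\nabla_x f\bigr) - \partial_t \nabla_x f .
\]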
39.
Hu Z, Li K, Li K, Li J, Xiao L. Zeroing neural network with comprehensive performance and its applications to time-varying Lyapunov equation and perturbed robotic tracking. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.08.037] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
40.
41.
Xiao L, Jia L, Dai J, Tan Z. Design and Application of A Robust Zeroing Neural Network to Kinematical Resolution of Redundant Manipulators Under Various External Disturbances. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.07.040] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
42.
Model-free motion control of continuum robots based on a zeroing neurodynamic approach. Neural Netw 2020; 133:21-31. [PMID: 33099245 DOI: 10.1016/j.neunet.2020.10.005] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Revised: 09/23/2020] [Accepted: 10/11/2020] [Indexed: 10/23/2022]
Abstract
As a result of their inherent flexibility and structural compliance, continuum robots have great potential in practical applications and are attracting more and more attention. However, these characteristics make it difficult to acquire accurate kinematics of continuum robots owing to uncertainties, deformation, and external loads. This paper introduces a method based on a zeroing neurodynamic approach to solve the trajectory tracking problem of continuum robots. The proposed method can achieve control of a bellows-driven continuum robot relying only on the actuator input and sensory output information, without knowing any information about the kinematic model. This approach reduces the computational load and can guarantee real-time control. The convergence, stability, and robustness of the proposed approach are proved by theoretical analyses. The effectiveness of the proposed method is verified by simulation studies including tracking performance, comparisons with three other methods, and robustness tests.
43.
Tan Z, Li W, Xiao L, Hu Y. New Varying-Parameter ZNN Models With Finite-Time Convergence and Noise Suppression for Time-Varying Matrix Moore-Penrose Inversion. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:2980-2992. [PMID: 31536017 DOI: 10.1109/tnnls.2019.2934734] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
This article aims to solve the Moore-Penrose inverse of time-varying full-rank matrices in the presence of various noises in real time. For this purpose, two varying-parameter zeroing neural networks (VPZNNs) are proposed. Specifically, VPZNN-R and VPZNN-L models, which are based on a new design formula, are designed to solve the right and left Moore-Penrose inversion problems of time-varying full-rank matrices, respectively. The two VPZNN models are activated by two novel varying-parameter nonlinear activation functions. Detailed theoretical derivations are presented to show the desired finite-time convergence and outstanding robustness of the proposed VPZNN models under various kinds of noises. In addition, existing neural models, such as the original ZNN (OZNN) and the integration-enhanced ZNN (IEZNN), are compared with the VPZNN models. Simulation observations verify the advantages of the VPZNN models over the OZNN and IEZNN models in terms of convergence and robustness. The potential of the VPZNN models for robotic applications is then illustrated by an example of robot path tracking.
44.
Li W, Xiao L, Liao B. A Finite-Time Convergent and Noise-Rejection Recurrent Neural Network and Its Discretization for Dynamic Nonlinear Equations Solving. IEEE TRANSACTIONS ON CYBERNETICS 2020; 50:3195-3207. [PMID: 31021811 DOI: 10.1109/tcyb.2019.2906263] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The so-called zeroing neural network (ZNN) is an effective recurrent neural network for solving dynamic problems, including dynamic nonlinear equations. There exist numerous unperturbed ZNN models that can converge to the theoretical solution of solvable nonlinear equations in infinitely long or finite time. However, when these ZNN models are perturbed by external disturbances, the convergence performance deteriorates dramatically. To overcome this issue, this paper for the first time proposes a finite-time convergent ZNN with noise-rejection capability to endure disturbances and solve dynamic nonlinear equations in finite time. In theory, the finite-time convergence and noise-rejection properties of the finite-time convergent and noise-rejection ZNN (FTNRZNN) are rigorously proved. For potential digital hardware realization, the discrete form of the FTNRZNN model is established based on a recently developed five-step finite-difference rule to guarantee high computational accuracy. The numerical results demonstrate that the discrete-time FTNRZNN can reject constant external noises. When perturbed by dynamic bounded or unbounded linear noises, the discrete-time FTNRZNN achieves the smallest steady-state errors in comparison with those generated by other discrete-time ZNN models that have no or limited ability to handle these noises. Discrete models of the FTNRZNN and the other ZNNs are comparatively applied to redundancy resolution of a robotic arm, with the superior positioning accuracy of the FTNRZNN verified.
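To make the discretization step concrete, the simplest one-step (forward-Euler) discrete ZNN for a scalar dynamic nonlinear equation f(x,t) = 0 is sketched below; the example function, gain, and step size are assumptions for illustration, and the cited FTNRZNN instead uses a five-step finite-difference rule and a noise-rejecting design:

import numpy as np

def euler_znn_step(x, t, h=1e-3, gamma=50.0):
    """One forward-Euler step of a ZNN for f(x, t) = x^2 - sin(t) - 2 = 0.

    The example f is assumed for illustration; a linear activation is used."""
    f  = x**2 - np.sin(t) - 2.0     # e(t) = f(x, t)
    fx = 2.0 * x                     # df/dx
    ft = -np.cos(t)                  # df/dt
    xdot = (-gamma * f - ft) / fx    # from fx*xdot + ft = -gamma*f
    return x + h * xdot

x, t = 2.0, 0.0                      # start near the positive root
for _ in range(5000):
    x = euler_znn_step(x, t)
    t += 1e-3
print(x, np.sqrt(2.0 + np.sin(t)))   # tracked solution vs. analytic root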
45.
Zeng Y, Xiao L, Li K, Li J, Li K, Jian Z. Design and analysis of three nonlinearly activated ZNN models for solving time-varying linear matrix inequalities in finite time. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.01.070] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
46.
Yang M, Zhang Y, Hu H. Discrete ZNN models of Adams-Bashforth (AB) type solving various future problems with motion control of mobile manipulator. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.11.039] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
47.
Zhang H, Wan L. Zeroing neural network methods for solving the Yang-Baxter-like matrix equation. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.11.101] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
48.
Zuo Q, Xiao L, Li K. Comprehensive design and analysis of time-varying delayed zeroing neural network and its application to matrix inversion. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.101] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
49.
Zhou M, Chen J, Stanimirović PS, Katsikis VN, Ma H. Complex Varying-Parameter Zhang Neural Networks for Computing Core and Core-EP Inverse. Neural Process Lett 2019. [DOI: 10.1007/s11063-019-10141-6] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
50.
A hybrid deep convolutional and recurrent neural network for complex activity recognition using multimodal sensors. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.06.051] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]