1
Qi Z, Ning Y, Xiao L, Wang Z, He Y. Efficient Predefined-Time Adaptive Neural Networks for Computing Time-Varying Tensor Moore-Penrose Inverse. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:3659-3670. [PMID: 38289838] [DOI: 10.1109/tnnls.2024.3354936] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
This article proposes predefined-time adaptive neural network (PTANN) and event-triggered PTANN (ET-PTANN) models to efficiently compute the time-varying tensor Moore-Penrose (MP) inverse. The PTANN model incorporates a novel adaptive parameter and activation function, enabling it to achieve strongly predefined-time convergence. Unlike traditional time-varying parameters that increase over time, the adaptive parameter is proportional to the error norm, thereby allocating computational resources more effectively and improving efficiency. To further enhance efficiency, the ET-PTANN model combines an event trigger with the evolution formula, which adjusts the step size and reduces the computation frequency compared with the PTANN model. Mathematical derivations establish the upper bound of the convergence time for the proposed neural network models and the minimum execution interval of the event trigger. A simulation example demonstrates that the PTANN and ET-PTANN models outperform other related neural network models in terms of computational efficiency and convergence rate. Finally, the practicality of the PTANN and ET-PTANN models is demonstrated through their application to mobile sound source localization.
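As background on the zeroing-dynamics construction such models build on, the following is a minimal sketch (not the article's PTANN) for the right Moore-Penrose inverse of a small time-varying matrix, with a gain simply taken proportional to the error norm. The tensor formulation, the predefined-time activation function, the exact adaptive law, and the event trigger are not reproduced; the matrix A(t) and the law gamma(t) = gamma0 + k*||E(t)||_F are illustrative assumptions only.

```python
import numpy as np

def A(t):
    # Hypothetical full-row-rank time-varying matrix (2x3), for illustration only.
    return np.array([[2 + np.sin(t), 0.5,           np.cos(t)],
                     [0.3,           2 + np.cos(t), np.sin(t)]])

def A_dot(t, h=1e-6):
    # Numerical time derivative of A(t).
    return (A(t + h) - A(t - h)) / (2 * h)

def adaptive_gain(E, gamma0=5.0, k=20.0):
    # Assumed error-norm-proportional gain (stand-in for the paper's adaptive parameter).
    return gamma0 + k * np.linalg.norm(E, 'fro')

# Zeroing dynamics on E(t) = X(t) A(t) A(t)^T - A(t)^T, whose zero is the right MP inverse
# X = A^T (A A^T)^{-1} when A(t) has full row rank.
dt, T = 1e-4, 2.0
t = 0.0
X = np.zeros((3, 2))                     # neural state, converges toward A(t)^+
while t < T:
    At, Adt = A(t), A_dot(t)
    M = At @ At.T                        # A A^T (invertible under full row rank)
    M_dot = Adt @ At.T + At @ Adt.T
    E = X @ M - At.T
    gamma = adaptive_gain(E)
    # From d/dt(X M - A^T) = -gamma * E  =>  X_dot M = A_dot^T - X M_dot - gamma * E
    X_dot = (Adt.T - X @ M_dot - gamma * E) @ np.linalg.inv(M)
    X = X + dt * X_dot
    t += dt

print("residual ||A X - I||_F:", np.linalg.norm(A(T) @ X - np.eye(2)))
```

With a linear activation, the error E(t) = X(t)A(t)Aᵀ(t) - Aᵀ(t) decays at a rate set by gamma(t), so a larger error automatically recruits a larger gain, which is the intuition behind an error-norm-proportional adaptive parameter.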
2
Yan J, Jin L, Luo X, Li S. Modified RNN for Solving Comprehensive Sylvester Equation With TDOA Application. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:12553-12563. [PMID: 37037242] [DOI: 10.1109/tnnls.2023.3263565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
The augmented Sylvester equation is a comprehensive formulation of great significance whose special cases (e.g., the Lyapunov, Sylvester, and Stein equations) are frequently encountered in various fields. Research that simultaneously eliminates the lagging error and handles noise in the nonstationary complex-valued setting remains rare. Therefore, this article focuses on solving a nonstationary complex-valued augmented Sylvester equation (NCASE) in real time and proposes two modified recurrent neural network (RNN) models. The first, termed the RNN-GV model, combines gradient search with velocity compensation. Its superiority over traditional algorithms, including the complex-valued gradient-based RNN (GRNN) model, lies in completely eliminating the lagging error when applied to the nonstationary problem. The second, the complex-valued integration-enhanced RNN-GV with nonlinear acceleration (IERNN-GVN) model, is proposed to adapt to noisy environments and to accelerate convergence. The convergence and robustness of both models are proved via theoretical analysis. Simulation results on an illustrative example and an application to moving-source localization agree with the theoretical analysis and illustrate the excellent performance of the proposed models.
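To make the GRNN baseline mentioned above concrete, here is a minimal complex-valued gradient flow for the stationary Sylvester equation AX + XB = C. It is a sketch only: the coefficients are constant and randomly chosen, and the velocity-compensation and integration-enhancement terms that distinguish the RNN-GV and IERNN-GVN models are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Hypothetical constant complex-valued coefficients (the paper treats nonstationary ones).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def residual(X):
    return A @ X + X @ B - C

# GRNN flow: X_dot = -gamma * gradient of (1/2)||A X + X B - C||_F^2,
# i.e. X_dot = -gamma * (A^H R + R B^H) with R = A X + X B - C.
gamma, dt, steps = 5.0, 1e-3, 50000
X = np.zeros((n, n), dtype=complex)
for _ in range(steps):
    R = residual(X)
    X = X - dt * gamma * (A.conj().T @ R + R @ B.conj().T)

print("final residual norm:", np.linalg.norm(residual(X)))
```

Because this flow simply descends the squared residual, it exhibits the lagging error on nonstationary problems, which is precisely the limitation the RNN-GV model is designed to remove.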
3
Zhang Y, Zhang J, Weng J. Dynamic Moore-Penrose Inversion With Unknown Derivatives: Gradient Neural Network Approach. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:10919-10929. [PMID: 35536807] [DOI: 10.1109/tnnls.2022.3171715] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5]
Abstract
Finding dynamic Moore-Penrose inverses (DMPIs) in real time is challenging because the inverse itself is time-varying. Traditional numerical methods for the static Moore-Penrose inverse are inefficient for calculating DMPIs and are restricted by serial processing. The current state-of-the-art method for finding DMPIs, the zeroing neural network (ZNN) method, requires the time derivative of the associated matrix to be available throughout the solution process. In practice, however, the time derivative of the associated dynamic matrix may not be available in real time or may be corrupted by noise introduced by differentiators. In this article, we propose a novel gradient-based neural network (GNN) method for computing DMPIs that does not need the time derivative of the associated dynamic matrix. In particular, the neural state matrix of the proposed GNN converges to the theoretical DMPI in finite time. The finite-time convergence is retained simply by setting a large design parameter when additive noises are present in the implementation of the GNN model. Simulation results demonstrate the efficacy and superiority of the proposed GNN method.
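To make the contrast concrete for a full-row-rank A(t), standard constructions of the two families can be written with the error E(t) = X(t)A(t)Aᵀ(t) - Aᵀ(t), whose unique zero is the right Moore-Penrose inverse; the article's specific model, activation functions, and noise analysis are not reproduced here.

\[
\text{ZNN:}\quad \dot{X}\,(AA^{\mathsf T}) = \dot{A}^{\mathsf T} - X\,\frac{d}{dt}(AA^{\mathsf T}) - \gamma\,\Phi(E) \qquad (\text{requires } \dot{A}),
\]
\[
\text{GNN:}\quad \dot{X} = -\gamma\,\Phi(E)\,A A^{\mathsf T} \qquad (\text{requires no } \dot{A}),
\]

where the GNN flow, for the identity activation, is the gradient descent flow of \( \tfrac12\|XAA^{\mathsf T} - A^{\mathsf T}\|_F^2 \).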
4
Prescribed-Time Convergent Adaptive ZNN for Time-Varying Matrix Inversion under Harmonic Noise. Electronics 2022. [DOI: 10.3390/electronics11101636] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0]
Abstract
Harmonic noises are widespread in industrial settings and degrade the computational accuracy of neural network models. The existing original adaptive zeroing neural network (OAZNN) model can effectively suppress harmonic noises. Nevertheless, the OAZNN model only achieves exponential convergence, so its convergence speed is strongly affected by the initial state. To tackle this issue, this work combines the dynamic characteristics of harmonic signals with a prescribed-time convergent activation function and proposes a prescribed-time convergent adaptive ZNN (PTCAZNN) for solving the time-varying matrix inversion problem (TVMIP) under harmonic noise. Because the nonlinear activation function itself rejects noise and the adaptive term also compensates for its influence, the PTCAZNN model realizes double noise suppression. More importantly, theoretical analysis of the PTCAZNN model's prescribed-time convergence and robustness is provided. Finally, comparative simulations that vary conditions such as the frequency of single harmonic noise, the frequency of multi-harmonic noise, the initial value, and the matrix dimension further confirm the effectiveness and superiority of the PTCAZNN model.
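For orientation, the zeroing-dynamics design formula underlying such models, written for time-varying matrix inversion with an additive harmonic disturbance, has the standard form below; the paper's specific adaptive parameter and prescribed-time activation function are not reproduced.

\[
E(t) = A(t)X(t) - I,\qquad \dot{E}(t) = -\gamma(t)\,\Phi\bigl(E(t)\bigr) + N(t),\qquad N(t) = \sum_i a_i\sin(\omega_i t + \varphi_i),
\]

so the model must drive E(t) to zero despite the persistent sinusoidal term N(t); the double noise suppression refers to the activation Φ and the adaptive term each contributing to rejecting N(t).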
5
Veerasamy V, Abdul Wahab NI, Ramachandran R, Kamel S, Othman ML, Hizam H, Farade R. Power flow solution using a novel generalized linear Hopfield network based on Moore–Penrose pseudoinverse. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05843-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
6
Improved recurrent neural networks for solving Moore-Penrose inverse of real-time full-rank matrix. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.08.026] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8]
7
Tan Z, Li W, Xiao L, Hu Y. New Varying-Parameter ZNN Models With Finite-Time Convergence and Noise Suppression for Time-Varying Matrix Moore-Penrose Inversion. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:2980-2992. [PMID: 31536017] [DOI: 10.1109/tnnls.2019.2934734] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8]
Abstract
This article aims to solve the Moore-Penrose inverse of time-varying full-rank matrices in the presence of various noises in real time. For this purpose, two varying-parameter zeroing neural networks (VPZNNs) are proposed. Specifically, VPZNN-R and VPZNN-L models, which are based on a new design formula, are designed to solve the right and left Moore-Penrose inversion problems of time-varying full-rank matrices, respectively. The two VPZNN models are activated by two novel varying-parameter nonlinear activation functions. Detailed theoretical derivations are presented to show the desired finite-time convergence and outstanding robustness of the proposed VPZNN models under various kinds of noises. In addition, existing neural models, such as the original ZNN (OZNN) and the integration-enhanced ZNN (IEZNN), are compared with the VPZNN models. Simulation observations verify the advantages of the VPZNN models over the OZNN and IEZNN models in terms of convergence and robustness. The potential of the VPZNN models for robotic applications is then illustrated by an example of robot path tracking.
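For reference, the standard error functions on which zeroing-dynamics models for the right and left Moore-Penrose inverses are built are shown below; the article's varying parameters and novel activation functions are not reproduced here.

\[
E_R(t) = X(t)A(t)A^{\mathsf T}(t) - A^{\mathsf T}(t)\ \ (\text{full row rank}),\qquad
E_L(t) = A^{\mathsf T}(t)A(t)X(t) - A^{\mathsf T}(t)\ \ (\text{full column rank}),
\]
\[
\dot{E}(t) = -\gamma(t)\,\Phi\bigl(E(t)\bigr),
\]

whose unique zeros are X = Aᵀ(AAᵀ)⁻¹ and X = (AᵀA)⁻¹Aᵀ, respectively.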
8
Cao C, Hou Q, Gulliver TA, Lan Q. A passive detection algorithm for low-altitude small target based on a wavelet neural network. Soft Comput 2019. [DOI: 10.1007/s00500-019-04574-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2]
9
A recurrent neural network applied to optimal motion control of mobile robots with physical constraints. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.105880] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2]
10
Zhang Y, Gong H, Yang M, Li J, Yang X. Stepsize Range and Optimal Value for Taylor-Zhang Discretization Formula Applied to Zeroing Neurodynamics Illustrated via Future Equality-Constrained Quadratic Programming. IEEE Transactions on Neural Networks and Learning Systems 2019; 30:959-966. [PMID: 30137015] [DOI: 10.1109/tnnls.2018.2861404] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7]
Abstract
In this brief, future equality-constrained quadratic programming (FECQP) is studied. Via a zeroing neurodynamics method, a continuous-time zeroing neurodynamics (CTZN) model is presented. By using the Taylor-Zhang discretization formula to discretize the CTZN model, a Taylor-Zhang discrete-time zeroing neurodynamics (TZ-DTZN) model is obtained to perform FECQP. Furthermore, we focus on the critical parameter of the TZ-DTZN model, i.e., the stepsize. Theoretical analyses yield an effective range of the stepsize that guarantees the stability of the TZ-DTZN model. In addition, we discuss the optimal value of the stepsize, which gives the TZ-DTZN model the best stability with the fastest convergence. Finally, numerical experiments and application experiments on motion generation of a robot manipulator are conducted to verify the high precision of the TZ-DTZN model and the effective range and optimal value of the stepsize for FECQP.
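As a concrete illustration of the role of the stepsize, one Taylor-type (ZeaD) one-step-ahead formula of the kind such discretizations rely on is shown below for a generic zeroing dynamics \(\dot{x}(t) = \varphi(x(t), t)\); it is given only as an example of the construction, not as a verbatim reproduction of the brief's formula.

\[
\dot{x}_k \approx \frac{2x_{k+1} - 3x_k + 2x_{k-1} - x_{k-2}}{2g}
\quad\Longrightarrow\quad
x_{k+1} = g\,\varphi(x_k, t_k) + \tfrac{3}{2}x_k - x_{k-1} + \tfrac{1}{2}x_{k-2},
\]

where g > 0 is the stepsize (sampling gap): too small a g wastes computation, while too large a g destabilizes the recursion, which is why an effective range and an optimal value of g matter.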
11
Improved Gradient Neural Networks for Solving Moore–Penrose Inverse of Full-Rank Matrix. Neural Process Lett 2019. [DOI: 10.1007/s11063-019-09983-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7]
12
Qiu B, Zhang Y, Yang Z. New Discrete-Time ZNN Models for Least-Squares Solution of Dynamic Linear Equation System With Time-Varying Rank-Deficient Coefficient. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:5767-5776. [PMID: 29993872] [DOI: 10.1109/tnnls.2018.2805810] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1]
Abstract
In this brief, a new one-step-ahead numerical differentiation rule, termed the six-instant cube finite difference (6I CFD) formula, is proposed for first-order derivative approximation with higher precision than existing finite difference formulas (i.e., Euler and Taylor types). Subsequently, by exploiting the proposed 6I CFD formula to discretize the continuous-time Zhang neural network model, two new-type discrete-time ZNN (DTZNN) models, namely the new-type DTZNNK and DTZNNU models, are designed and generalized to compute, in real time, the least-squares solution of a dynamic linear equation system with a time-varying rank-deficient coefficient, which differs from existing ZNN-related studies that address continuous-time and discrete-time (dynamic or static) linear equation systems with full-rank coefficients. Specifically, the corresponding dynamic normal equation system, whose solution coincides with the least-squares solution of the dynamic linear equation system, is introduced to solve such a rank-deficient least-squares problem efficiently and accurately. Theoretical analyses show that the maximal steady-state residual errors of the two new-type DTZNN models follow a high-order pattern in the sampling gap. Comparative numerical experimental results further substantiate the superior computational performance of the new-type DTZNN models in solving the rank-deficient least-squares problem of dynamic linear equation systems.
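For reference, the normal-equation reformulation that underlies the least-squares treatment is the standard one:

\[
\min_{x}\ \tfrac12\,\|A(t)\,x - b(t)\|_2^2
\quad\Longleftrightarrow\quad
A^{\mathsf T}(t)A(t)\,x(t) = A^{\mathsf T}(t)\,b(t),
\]

and this normal equation system remains consistent even when A(t) is rank deficient, its solution set being exactly the set of least-squares solutions of the original system.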
13
Shen W, Huang F, Zhang X, Zhu Y, Chen X, Akbarjon N. On-line chemical oxygen demand estimation models for the photoelectrocatalytic oxidation advanced treatment of papermaking wastewater. Water Science and Technology 2018; 78:310-319. [PMID: 30101766] [DOI: 10.2166/wst.2018.299] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3]
Abstract
Chemical oxygen demand (COD), an important indicator of the amount of oxidizable pollutants in wastewater, is often analyzed off-line because of the expensive sensors required for on-line analysis, but off-line analysis is time-consuming. An on-line COD estimation method was therefore developed with photoelectrocatalytic (PEC) technology. Based on on-line data for the oxidation-reduction potential (ORP), dissolved oxygen (DO) and pH of the wastewater, four different artificial neural network methods were applied to develop working models for COD estimation. Six batches of sequencing batch reactor (SBR) effluent from a paper mill were treated with PEC oxidation for 90 minutes, and 546 data points were collected from the on-line measurements of ORP, DO and pH and the off-line COD analysis. After training and validation with 75% and 25% of the data, respectively, and evaluation with four statistical criteria (R², RMSE, MAE and MAPE), the results indicated that the radial basis neural network (RBNN) model achieved the highest precision. Subsequently, applying the RBNN model to a new batch of SBR effluent from the paper mill showed that it was acceptable for COD estimation during the PEC advanced treatment of papermaking wastewater, suggesting its applicability in the future.
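As an illustration of the RBNN idea (a radial basis network with a linear read-out), here is a self-contained sketch; the synthetic (ORP, DO, pH) to COD data, the number of centers, and the kernel width are placeholders, not the authors' data or trained model. Fitting the read-out weights uses the Moore-Penrose pseudoinverse, i.e., a linear least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder data: rows are (ORP, DO, pH) readings; targets are synthetic COD values.
X = rng.uniform([200.0, 1.0, 6.0], [600.0, 9.0, 9.0], size=(300, 3))
y = 0.4 * X[:, 0] - 15.0 * X[:, 1] + 20.0 * X[:, 2] + rng.normal(0, 5, 300)

def rbf_design_matrix(X, centers, width):
    # Gaussian radial basis activations for every (sample, center) pair.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Pick centers (here: a random subset of training points) and a kernel width.
centers = X[rng.choice(len(X), size=20, replace=False)]
width = np.mean(np.linalg.norm(X - X.mean(axis=0), axis=1))

Phi = rbf_design_matrix(X, centers, width)
# Linear read-out weights via the Moore-Penrose pseudoinverse (least-squares fit).
w = np.linalg.pinv(Phi) @ y

pred = Phi @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"training RMSE: {rmse:.2f}, R^2: {r2:.3f}")
```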
Affiliation(s)
- Wenhao Shen, State Key Laboratory of Pulp and Paper Engineering, South China University of Technology, Guangzhou 510640, China
- Feini Huang, State Key Laboratory of Pulp and Paper Engineering, South China University of Technology, Guangzhou 510640, China
- Xuewen Zhang, State Key Laboratory of Pulp and Paper Engineering, South China University of Technology, Guangzhou 510640, China
- Yuefei Zhu, State Key Laboratory of Pulp and Paper Engineering, South China University of Technology, Guangzhou 510640, China
- Xiaoquan Chen, State Key Laboratory of Pulp and Paper Engineering, South China University of Technology, Guangzhou 510640, China
- Nishonov Akbarjon, State Key Laboratory of Pulp and Paper Engineering, South China University of Technology, Guangzhou 510640, China
14
Wang H, Liu PX, Li S, Wang D. Adaptive Neural Output-Feedback Control for a Class of Nonlower Triangular Nonlinear Systems With Unmodeled Dynamics. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:3658-3668. [PMID: 28866601] [DOI: 10.1109/tnnls.2017.2716947] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6]
Abstract
This paper presents the development of an adaptive neural controller for a class of nonlinear systems with unmodeled dynamics and unmeasurable states. An observer is designed to estimate the system states. The structure consistency of the virtual control signals and the variable partition technique are combined to overcome the difficulties arising from the nonlower triangular form. An adaptive neural output-feedback controller is developed based on the backstepping technique and the universal approximation property of radial basis function (RBF) neural networks. By Lyapunov stability analysis, semiglobal uniform ultimate boundedness of all signals in the closed-loop system is guaranteed. Simulation results show that the controlled system converges quickly and all signals remain bounded. This paper is novel in at least two aspects: 1) an output-feedback control strategy is developed for a class of nonlower triangular nonlinear systems with unmodeled dynamics; and 2) the nonlinear disturbances and their bounds are functions of all states, which is a more general setting than in existing results.
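For readers unfamiliar with the approximation tool named above, the standard RBF network approximation used in such backstepping designs is shown below; the paper's particular observer and adaptive laws are not reproduced.

\[
f(Z) = W^{*\mathsf T} S(Z) + \varepsilon(Z),\qquad
S_i(Z) = \exp\!\Bigl(-\frac{\|Z - c_i\|^2}{\eta_i^2}\Bigr),\qquad
|\varepsilon(Z)| \le \bar{\varepsilon},
\]

where W* is an ideal weight vector, c_i and η_i are the centers and widths of the Gaussian basis functions, and the bounded reconstruction error ε(Z) is what the Lyapunov analysis must absorb.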
15
Rehman SU, Tu S, Rehman OU, Huang Y, Magurawalage CMS, Chang CC. Optimization of CNN through Novel Training Strategy for Visual Classification Problems. Entropy 2018; 20:e20040290. [PMID: 33265381] [PMCID: PMC7512808] [DOI: 10.3390/e20040290] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4]
Abstract
The convolutional neural network (CNN) has achieved state-of-the-art performance in many computer vision applications, e.g., classification, recognition, and detection. However, the global optimization of CNN training remains a problem, and fast classification and training play a key role in the development of CNNs. We hypothesize that the smoother and better optimized the training of a CNN is, the more efficient the end result becomes. Therefore, in this paper, we implement a modified resilient backpropagation (MRPROP) algorithm to improve the convergence and efficiency of CNN training. In particular, a tolerant band is introduced to avoid network overtraining and is combined with the global-best concept in the weight-update criterion, allowing the CNN training algorithm to optimize its weights more swiftly and precisely. For comparison, we present and analyze four training algorithms for CNNs alongside MRPROP: resilient backpropagation (RPROP), Levenberg–Marquardt (LM), conjugate gradient (CG), and gradient descent with momentum (GDM). Experimental results showcase the merit of the proposed approach on a public face and skin dataset.
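To make the RPROP baseline concrete, below is a minimal numpy sketch of the classic per-weight, sign-based update that MRPROP modifies; the tolerant band and the global-best coupling described in the paper are only noted in comments and are not implemented here.

```python
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=50.0):
    """One classic RPROP update; MRPROP additionally applies a tolerant band
    and a global-best term (not implemented in this sketch)."""
    sign_change = grad * prev_grad
    # Grow the per-weight step where the gradient sign is unchanged, shrink where it flipped.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)   # skip the update right after a sign flip
    w = w - np.sign(grad) * step
    return w, grad, step

# Toy usage: minimize f(w) = ||w - target||^2 with RPROP.
rng = np.random.default_rng(0)
target = rng.standard_normal(5)
w = np.zeros(5)
prev_grad = np.zeros(5)
step = np.full(5, 0.1)
for _ in range(100):
    grad = 2.0 * (w - target)
    w, prev_grad, step = rprop_update(w, grad, prev_grad, step)
print("distance to target:", np.linalg.norm(w - target))
```

Because only the sign of the gradient is used, the update is insensitive to the gradient magnitude, which is what makes resilient backpropagation attractive for deep or badly scaled error surfaces.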
Affiliation(s)
- Sadaqat ur Rehman, Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Shanshan Tu (corresponding author), Faculty of Information Technology, Beijing University of Technology, Beijing 100022, China
- Obaid ur Rehman, Department of Electrical Engineering, Sarhad University of Science and IT, Peshawar 25000, Pakistan
- Yongfeng Huang, Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Chin-Chen Chang, Department of Information Engineering and Computer Science, Feng Chia University, Taichung City 407, Taiwan
16
17
Jin L, Li S, Wang H, Zhang Z. Nonconvex projection activated zeroing neurodynamic models for time-varying matrix pseudoinversion with accelerated finite-time convergence. Appl Soft Comput 2018. [DOI: 10.1016/j.asoc.2017.09.016] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4]
18
A Simplified Architecture of the Zhang Neural Network for Toeplitz Linear Systems Solving. Neural Process Lett 2017. [DOI: 10.1007/s11063-017-9656-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
19
Guo D, Zhang Y, Xiao Z, Mao M, Liu J. Common nature of learning between BP-type and Hopfield-type neural networks. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2015.04.032] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3]
20
Wen S, Zeng Z, Huang T, Meng Q, Yao W. Lag Synchronization of Switched Neural Networks via Neural Activation Function and Applications in Image Encryption. IEEE Transactions on Neural Networks and Learning Systems 2015; 26:1493-1502. [PMID: 25594985] [DOI: 10.1109/tnnls.2014.2387355] [Citation(s) in RCA: 135] [Impact Index Per Article: 13.5]
Abstract
This paper investigates global exponential lag synchronization of a class of switched neural networks with time-varying delays via the neural activation function, with applications in image encryption. The controller depends on the output of the system because, in packaged circuits, it is difficult to measure the inner states; it is therefore critical to design the controller based on the neuron activation function. Comparison with existing results shows that the results derived here improve and generalize those in the previous literature. Several examples are given to illustrate the effectiveness of the approach and its potential applications in image encryption.
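For reference, lag synchronization of a drive trajectory x(t) and a response trajectory y(t) with lag τ > 0 means

\[
\lim_{t\to\infty}\bigl\|y(t) - x(t-\tau)\bigr\| = 0,
\]

and global exponential lag synchronization additionally requires this error to decay exponentially from any initial condition.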
21
Xiao L, Lu R. Finite-time solution to nonlinear equation using recurrent neural dynamics with a specially-constructed activation function. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2014.09.047] [Citation(s) in RCA: 84] [Impact Index Per Article: 8.4]
22
Guo D, Zhang Y. Li-function activated ZNN with finite-time convergence applied to redundant-manipulator kinematic control via time-varying Jacobian matrix pseudoinversion. Appl Soft Comput 2014. [DOI: 10.1016/j.asoc.2014.06.045] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.6]