1. Xiao L, Huang W, Li X, Sun F, Liao Q, Jia L, Li J, Liu S. ZNNs With a Varying-Parameter Design Formula for Dynamic Sylvester Quaternion Matrix Equation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:9981-9991. [PMID: 35412991 DOI: 10.1109/tnnls.2022.3163293]
Abstract
This article studies how to solve the dynamic Sylvester quaternion matrix equation (DSQME) using the neural dynamic method. To solve the DSQME, the complex representation method is first adopted to derive the equivalent dynamic Sylvester complex matrix equation (DSCME) from the DSQME. It is proven that the solution to the DSCME is in essence the same as that of the DSQME. Then, a state-of-the-art neural dynamic method is presented to generate a general dynamic-varying parameter zeroing neural network (DVPZNN) model, whose global stability is guaranteed by Lyapunov theory. Specifically, when the linear activation function is utilized in the DVPZNN model, the corresponding model [termed linear DVPZNN (LDVPZNN)] achieves finite-time convergence, and the convergence time range is calculated theoretically. When the nonlinear power-sigmoid activation function is utilized, the corresponding model [termed power-sigmoid DVPZNN (PSDVPZNN)] achieves better convergence than the LDVPZNN model, which is proven in detail. Finally, three examples compare the solution performance of different neural models on the DSQME and the equivalent DSCME, and the results verify the correctness of the theories and the superiority of the two proposed DVPZNN models.

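As background for this and several ZNN entries below: the zeroing-neural-network design formula defines an error function E(t) and forces it to obey dE/dt = -gamma(t)*phi(E(t)). The sketch below is a minimal illustration on a toy time-varying linear system A(t)x(t) = b(t), not the paper's quaternion Sylvester setting; the particular gamma(t) schedule and tanh activation are placeholder choices, not the DVPZNN design.

```python
import numpy as np

# Toy time-varying linear system A(t) x(t) = b(t); the ZNN zeroes the
# error E(t) = A(t) x(t) - b(t) by enforcing dE/dt = -gamma(t) * phi(E).
A  = lambda t: np.array([[2 + np.sin(t), 0.5], [0.5, 2 + np.cos(t)]])
dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
b  = lambda t: np.array([np.sin(t), np.cos(t)])
db = lambda t: np.array([np.cos(t), -np.sin(t)])

gamma = lambda t: 5.0 + t   # placeholder time-varying design parameter
phi   = np.tanh             # placeholder monotone odd activation

dt, steps, x = 1e-3, 10_000, np.zeros(2)
for k in range(steps):
    t = k * dt
    E = A(t) @ x - b(t)
    # dE/dt = dA x + A dx - db = -gamma(t) phi(E)  =>  solve for dx/dt
    dx = np.linalg.solve(A(t), -dA(t) @ x + db(t) - gamma(t) * phi(E))
    x += dt * dx

t = steps * dt
print(np.linalg.norm(A(t) @ x - b(t)))  # residual driven toward zero
```
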
2. Lu W, Leung CS, Sum J, Xiao Y. DNN-kWTA With Bounded Random Offset Voltage Drifts in Threshold Logic Units. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:3184-3192. [PMID: 33513113 DOI: 10.1109/tnnls.2021.3050493]
Abstract
The dual neural network-based k-winner-take-all (DNN-kWTA) is an analog neural model used to identify the k largest of n inputs. Since threshold logic units (TLUs) are key elements in the model, offset voltage drifts in TLUs may affect the operational correctness of a DNN-kWTA network. Previous studies assume that drifts in TLUs follow particular distributions. This brief considers that only the drift range, given by [-Δ, Δ], is available. We consider two drift cases: time-invariant and time-varying. For the time-invariant case, we show that the state of a DNN-kWTA network converges, and a sufficient condition for the network to operate correctly is given. Furthermore, for uniformly distributed inputs, we prove that the probability that a DNN-kWTA network operates properly is greater than (1-2Δ)^n. These results are then generalized to the time-varying case. In addition, for the time-invariant case, we derive a method to compute the exact convergence time for a given data set. For uniformly distributed inputs, we further derive the mean and variance of the convergence time. The convergence-time results give an idea of the operational speed of the DNN-kWTA model. Finally, simulation experiments validate the theoretical results.

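The DNN-kWTA model referenced here (and in entries 12 and 27) is usually written with a single state variable y(t) driven by n threshold logic units. The following is a rough simulation sketch of that commonly cited form — the exact equations and constants are assumptions, not taken from this brief — with ideal TLUs; the paper's offset drifts would shift each unit's threshold away from zero.

```python
import numpy as np

def dnn_kwta(x, k, eps=1e-2, dt=1e-6, steps=100_000):
    # Commonly cited single-state dynamics (assumed form):
    # eps * dy/dt = sum_i h(x_i - y) - k, outputs o_i = h(x_i - y),
    # with h an ideal threshold logic unit.
    h = lambda u: (u >= 0).astype(float)
    y = 0.0
    for _ in range(steps):
        y += dt / eps * (h(x - y).sum() - k)
    return h(x - y)  # 1 for the k winners, 0 for the losers

x = np.array([0.12, 0.87, 0.45, 0.91, 0.30])
print(dnn_kwta(x, k=2))  # expect winners at the two largest inputs
```
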
3. Jin L, Liang S, Luo X, Zhou M. Distributed and Time-Delayed k-Winner-Take-All Network for Competitive Coordination of Multiple Robots. IEEE TRANSACTIONS ON CYBERNETICS 2022; PP:641-652. [PMID: 35533157 DOI: 10.1109/tcyb.2022.3159367]
Abstract
In this article, a distributed and time-delayed k-winner-take-all (DT-kWTA) network is established and analyzed for competitively coordinated task assignment in a multirobot system. It is designed from three aspects. First, a network is built on a k-winner-take-all (kWTA) competitive algorithm that selects the k maximum values from the inputs. Second, a distributed control strategy is used to improve the network in terms of communication load and computational burden. Third, the time delays prevalent in causal systems (especially in networks) are taken into account in the proposed network. This work combines a distributed kWTA competition network with time delay for the first time, enabling it to handle realistic applications better than previous work. In addition, it theoretically derives the maximum delay the network allows and proves the convergence and robustness of the network. The results are applied to a multirobot system to coordinate its robots competitively in completing a given task.

4. Zhang Y, Li S, Weng J. Distributed Estimation of Algebraic Connectivity. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:3047-3056. [PMID: 33027023 DOI: 10.1109/tcyb.2020.3022653]
Abstract
The measurement of algebraic connectivity plays an important role in many graph-theoretic investigations, such as cooperative control of multiagent systems. In general, the measurement is considered to be centralized. In this article, a distributed model is proposed to estimate the algebraic connectivity (i.e., the second smallest eigenvalue of the corresponding Laplacian matrix) via distributed estimation with high-pass consensus filters. The global asymptotic convergence of the proposed model is theoretically guaranteed. Numerical examples verify the theoretical results and the superiority of the proposed distributed model.

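As a point of reference, the quantity being estimated can be computed centrally in a few lines; the paper's contribution is doing this in a distributed way. A minimal centralized check on a small hypothetical graph:

```python
import numpy as np

# Centralized reference computation of algebraic connectivity: the second
# smallest eigenvalue of the graph Laplacian L = D - A, which the paper
# estimates in a distributed fashion.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # adjacency of a small connected graph
L = np.diag(A.sum(axis=1)) - A
eigvals = np.sort(np.linalg.eigvalsh(L))
print(eigvals[1])  # algebraic connectivity; > 0 iff the graph is connected
```
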
5. Xiao L, He Y, Dai J, Liu X, Liao B, Tan H. A Variable-Parameter Noise-Tolerant Zeroing Neural Network for Time-Variant Matrix Inversion With Guaranteed Robustness. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:1535-1545. [PMID: 33361003 DOI: 10.1109/tnnls.2020.3042761]
Abstract
Matrix inversion frequently arises in science, engineering, and related fields. Numerous matrix inversion schemes are based on the premise that the solution procedure is ideal and noise-free. However, external interference is generally ubiquitous and unavoidable in practice. An integrated-enhanced zeroing neural network (IEZNN) model has therefore been proposed to handle the time-variant matrix inversion problem under noise. However, the IEZNN model can only deal with small time-variant noise; under slightly larger noise, it may not converge exactly to the theoretical solution. Therefore, a variable-parameter noise-tolerant zeroing neural network (VPNTZNN) model is proposed to overcome these shortcomings. The excellent convergence and robustness of the VPNTZNN model are rigorously analyzed and proven. Finally, compared with the original zeroing neural network (OZNN) model and the IEZNN model for matrix inversion, numerical simulations and a practical application reveal that the proposed VPNTZNN model is the most robust under the same external noise.

6. Yu Y, Wang X, Zhong S, Yang N, Tashi N. Extended Robust Exponential Stability of Fuzzy Switched Memristive Inertial Neural Networks With Time-Varying Delays on Mode-Dependent Destabilizing Impulsive Control Protocol. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:308-321. [PMID: 32217485 DOI: 10.1109/tnnls.2020.2978542]
Abstract
This article investigates the robust exponential stability of fuzzy switched memristive inertial neural networks (FSMINNs) with time-varying delays under a mode-dependent destabilizing impulsive control protocol. The memristive model presented here is treated as a switched system rather than through the theory of differential inclusions and set-valued maps. To optimize the robust exponential stabilization process and reduce the time cost, hybrid mode-dependent destabilizing impulsive and adaptive feedback controllers are applied simultaneously to stabilize the FSMINNs. In the new model, multiple impulsive effects may occur between two switching instants, and multiple switching effects may likewise occur between two impulsive instants. Based on switched analysis techniques, the Takagi-Sugeno (T-S) fuzzy method, and the average dwell time, extended robust exponential stability conditions are derived. Finally, a simulation illustrates the effectiveness of the results.

7. Xiao L, Dai J, Lu R, Li S, Li J, Wang S. Design and Comprehensive Analysis of a Noise-Tolerant ZNN Model With Limited-Time Convergence for Time-Dependent Nonlinear Minimization. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:5339-5348. [PMID: 32031952 DOI: 10.1109/tnnls.2020.2966294]
Abstract
The zeroing neural network (ZNN) is a powerful tool for the mathematical and optimization problems that arise broadly in science and engineering. Convergence and robustness are always co-pursued in ZNN design. However, no previous ZNN for time-dependent nonlinear minimization achieves limited-time convergence and inherent noise suppression simultaneously. In this article, to satisfy these two requirements, a limited-time robust neural network (LTRNN) is devised and presented to solve time-dependent nonlinear minimization under various external disturbances. Unlike previous ZNN models for this problem, which offer either limited-time convergence or noise suppression, the proposed LTRNN model possesses both characteristics. Rigorous theoretical analyses prove the superior performance of the LTRNN model when solving time-dependent nonlinear minimization under external disturbances. Comparative results also substantiate the effectiveness and advantages of the LTRNN on a time-dependent nonlinear minimization problem.

8. Xiao L, Jia L, Dai J, Tan Z. Design and Application of a Robust Zeroing Neural Network to Kinematical Resolution of Redundant Manipulators Under Various External Disturbances. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.07.040]

10. Qiu B, Zhang Y, Yang Z. New Discrete-Time ZNN Models for Least-Squares Solution of Dynamic Linear Equation System With Time-Varying Rank-Deficient Coefficient. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:5767-5776. [PMID: 29993872 DOI: 10.1109/tnnls.2018.2805810]
Abstract
In this brief, a new one-step-ahead numerical differentiation rule, called the six-instant g-cube finite difference (6IgCFD) formula, is proposed for first-order derivative approximation with higher precision than existing finite difference formulas (i.e., Euler and Taylor types). Subsequently, by exploiting the proposed 6IgCFD formula to discretize the continuous-time Zhang neural network model, two new-type discrete-time ZNN (DTZNN) models, namely the DTZNNK and DTZNNU models, are designed and generalized to compute, in real time, the least-squares solution of a dynamic linear equation system with time-varying rank-deficient coefficient. This is quite different from existing ZNN-related studies, which solve continuous-time and discrete-time (dynamic or static) linear equation systems with full-rank coefficients. Specifically, the corresponding dynamic normal equation system, whose solution exactly corresponds to the least-squares solution of the dynamic linear equation system, is introduced to solve such a rank-deficient least-squares problem efficiently and accurately. Theoretical analyses show that the maximal steady-state residual errors of the two new-type DTZNN models have an O(g^4) pattern, where g denotes the sampling gap. Comparative numerical experiments further substantiate the superior computational performance of the new-type DTZNN models in solving the rank-deficient least-squares problem of dynamic linear equation systems.

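The normal-equation route mentioned in the abstract can be illustrated statically: for a rank-deficient A, any least-squares solution of Ax = b satisfies the normal equations A^T A x = A^T b. A small numpy check (the random matrix and sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ np.diag([1.0, 1.0, 0.0])  # rank-deficient coefficient
b = rng.standard_normal(4)

x_pinv = np.linalg.pinv(A) @ b                            # minimum-norm LS solution
x_ne, *_ = np.linalg.lstsq(A.T @ A, A.T @ b, rcond=None)  # via the normal equations

print(np.allclose(A.T @ A @ x_pinv, A.T @ b))  # LS solution satisfies the normal eqs
print(np.allclose(x_pinv, x_ne))               # both routes agree here
```
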
11. Li S, Wang H, Rafique MU. A Novel Recurrent Neural Network for Manipulator Control With Improved Noise Tolerance. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:1908-1918. [PMID: 28422689 DOI: 10.1109/tnnls.2017.2672989]
Abstract
In this paper, we propose a novel recurrent neural network to resolve manipulator redundancy for efficient kinematic control in the presence of polynomial-type noises. Leveraging the high-order derivative properties of polynomial noises, a deliberately devised neural network is proposed to eliminate the impact of noise and recover accurate tracking of desired trajectories in the workspace. Rigorous analysis shows that the proposed neural law stabilizes the system dynamics and that the position tracking error converges to zero in the presence of noise. Extensive simulations verify the theoretical results. Numerical comparisons show that existing dual neural solutions lose stability when exposed to large constant or time-varying noises. In contrast, the proposed approach works well, with a tracking error comparable to noise-free situations.

12. Feng R, Leung CS, Sum J. Robustness Analysis on Dual Neural Network-Based kWTA With Input Noise. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:1082-1094. [PMID: 28186910 DOI: 10.1109/tnnls.2016.2645602]
Abstract
This paper studies the effects of uniform input noise and Gaussian input noise on the dual neural network-based kWTA (DNN-kWTA) model. We show that the state of the network, under either type of input noise, converges to one of the equilibrium points. We then derive a formula to check whether the network produces correct outputs. Furthermore, for uniformly distributed inputs, two lower bounds (one for each type of input noise) on the probability that the network produces the correct outputs are presented. In addition, when the minimum separation among the inputs is given, we derive the condition under which the network produces the correct outputs. Finally, experimental results verify the theoretical results. Since random drift in the comparators can be considered input noise, our results also apply to the random drift situation.

14. Mirza MA, Li S, Jin L. Simultaneous learning and control of parallel Stewart platforms with unknown parameters. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.05.026]

16. Jin L, Zhang Y, Li S. Integration-Enhanced Zhang Neural Network for Real-Time-Varying Matrix Inversion in the Presence of Various Kinds of Noises. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2016; 27:2615-2627. [PMID: 26625426 DOI: 10.1109/tnnls.2015.2497715]
Abstract
Matrix inversion often arises in science and engineering. Many models for matrix inversion assume that the solving process is free of noise or that denoising has been conducted before computation. However, time is precious for real-time-varying matrix inversion in practice, and any preprocessing for noise reduction may consume extra time, possibly violating the requirement of real-time computation. Therefore, a new model for time-varying matrix inversion that can handle noise simultaneously is urgently needed. In this paper, an integration-enhanced Zhang neural network (IEZNN) model is first proposed and investigated for real-time-varying matrix inversion. The conventional ZNN model and the gradient neural network model are then presented and employed for comparison. Theoretical analyses show that the proposed IEZNN model has the global exponential convergence property. Moreover, in the presence of various kinds of noise, the IEZNN model is proven to have improved performance: it converges to the theoretical solution of the time-varying matrix inversion problem no matter how large the matrix-form constant noise is, and its residual errors can be made arbitrarily small under time-varying and random noises. Finally, three illustrative simulation examples, including an application to the inverse kinematic motion planning of a robot manipulator, substantiate the efficacy and superiority of the proposed IEZNN model for real-time-varying matrix inversion.

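For context, the integration-enhanced design formula generally associated with the IEZNN augments the basic ZNN law with an integral feedback term, giving it a PI-like structure that rejects constant noise. In commonly cited form (a sketch, not a verbatim quotation from the paper), with E(t) = A(t)X(t) - I the inversion error:

\[
\dot{E}(t) = -\gamma\,E(t) - \lambda \int_0^t E(\tau)\,\mathrm{d}\tau, \qquad \gamma,\lambda > 0.
\]

The integral term accumulates any constant disturbance and cancels it at steady state, which is why the residual vanishes regardless of the constant noise magnitude.
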
17. Neural network-based discrete-time Z-type model of high accuracy in noisy environments for solving dynamic system of linear equations. Neural Comput Appl 2016. [DOI: 10.1007/s00521-016-2640-x]

18. Mao M, Li J, Jin L, Li S, Zhang Y. Enhanced discrete-time Zhang neural network for time-variant matrix inversion in the presence of bias noises. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.05.010]

20. Peláez FJR, Aguiar-Furucho MA, Andina D. Intrinsic Plasticity for Natural Competition in Koniocortex-Like Neural Networks. Int J Neural Syst 2016; 26:1650040. [PMID: 27255800 DOI: 10.1142/s0129065716500404]
Abstract
In this paper, we use the neural property known as intrinsic plasticity to develop neural network models that resemble the koniocortex, the fourth layer of sensory cortices. These models evolved from a very basic two-layered neural network to a complex associative koniocortex network. In the initial network, intrinsic plasticity governs the shifting of the activation function, while synaptic plasticity governs the modification of synaptic weights. In this first version, competition is forced: the most activated neuron is arbitrarily set to one and the others to zero. In the second, competition occurs naturally through inhibition between second-layer neurons. In the third version, whose architecture is similar to the koniocortex, competition also occurs naturally owing to the interplay between inhibitory interneurons and synaptic and intrinsic plasticity. Based on this basic koniocortex-like network, a more complex associative neural network was developed that copes with incomplete patterns and operates much like a learning vector quantization network. We also discuss the biological plausibility of the networks and their role in a more complex thalamocortical model.

21. A winner-take-all approach to emotional neural networks with universal approximation property. Inf Sci (N Y) 2016. [DOI: 10.1016/j.ins.2016.01.055]

22. Li S, You ZH, Guo H, Luo X, Zhao ZQ. Inverse-Free Extreme Learning Machine With Optimal Information Updating. IEEE TRANSACTIONS ON CYBERNETICS 2016; 46:1229-1241. [PMID: 26054082 DOI: 10.1109/tcyb.2015.2434841]
Abstract
The extreme learning machine (ELM) has drawn intensive research attention due to its effectiveness in solving many machine learning problems. However, the matrix inversion involved in the algorithm is computationally prohibitive and limits the wide application of ELM in many scenarios. To overcome this problem, in this paper we propose an inverse-free ELM that incrementally increases the number of hidden nodes and updates the connection weights progressively and optimally. Theoretical analysis proves that the training error decreases monotonically under the proposed updating procedure and that every updating step is optimal. Extensive numerical experiments show the effectiveness and accuracy of the proposed algorithm.

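For readers unfamiliar with ELM, the baseline formulation that this paper makes inverse-free trains only the output weights of a randomly initialized hidden layer by least squares. A minimal sketch (the data, layer sizes, and activation are arbitrary illustrative choices, not the paper's incremental update):

```python
import numpy as np

# Baseline ELM for regression: random hidden layer, output weights by least squares.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))        # toy inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]     # toy target

n_hidden = 50
W = rng.standard_normal((2, n_hidden))  # random input weights (never trained)
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                  # hidden-layer output matrix

# beta = pinv(H) @ y is the classical solution; lstsq avoids forming an explicit inverse
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
print(np.mean((H @ beta - y) ** 2))     # small training MSE
```
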
24. Naz S, Umar AI, Ahmad R, Ahmed SB, Shirazi SH, Siddiqi I, Razzak MI. Offline cursive Urdu-Nastaliq script recognition using multidimensional recurrent neural networks. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.11.030]

25. Zou X, Gong D, Wang L, Chen Z. A novel method to solve inverse variational inequality problems based on neural networks. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.08.073]

26. Stanimirović PS, Zivković IS, Wei Y. Recurrent Neural Network for Computing the Drazin Inverse. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2015; 26:2830-2843. [PMID: 25706892 DOI: 10.1109/tnnls.2015.2397551]
Abstract
This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. The RNN is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved, attaining computational advantages over existing sequential algorithms in real-time applications. The RNN defined in this paper is convenient for implementation in an electronic circuit. The number of neurons in the network equals the number of elements in the output matrix, which represents the Drazin inverse. The proposed RNN differs from existing ones for Drazin inverse computation in its network architecture and dynamics. The conditions that ensure the stability of the defined RNN, as well as its convergence toward the Drazin inverse, are considered. In addition, illustrative examples and applications to practical engineering problems are discussed to show the efficacy of the proposed neural network.

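For reference, the Drazin inverse A^D computed by this network is the unique matrix X satisfying

\[
A^{k+1}X = A^{k}, \qquad XAX = X, \qquad AX = XA,
\]

where k = ind(A) is the smallest nonnegative integer with rank(A^{k+1}) = rank(A^k); it coincides with the ordinary inverse when A is nonsingular.
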
27. Feng R, Leung CS, Sum J, Xiao Y. Properties and Performance of Imperfect Dual Neural Network-Based kWTA Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2015; 26:2188-2193. [PMID: 25376043 DOI: 10.1109/tnnls.2014.2358851]
Abstract
The dual neural network (DNN)-based k-winner-take-all (kWTA) model is an effective approach for finding the k largest of n inputs. Its major assumption is that the threshold logic units (TLUs) can be implemented perfectly. However, when differential bipolar pairs are used to implement TLUs, the transfer function of the TLUs is a logistic function. This brief studies the properties of the DNN-kWTA model in this imperfect situation. We prove that, given any initial state, the network settles at the unique equilibrium point. Besides, the energy function of the model is revealed. Based on the energy function, we propose an efficient method to study the model's performance when the inputs have continuous distribution functions. Furthermore, for uniformly distributed inputs, we derive a formula to estimate the probability that the model produces the correct outputs. Finally, for the case where the minimum separation Δmin of the inputs is given, we prove that if the gain of the activation function is greater than (1/(4Δmin)) max(ln 2n, 2 ln((1-ϵ)/ϵ)), then the network produces the correct outputs, with winner outputs greater than 1-ϵ and loser outputs less than ϵ, where ϵ is a threshold less than 0.5.

28. Wang W, Song Y, Xue Y, Jin H, Hou J, Zhao M. An optimal vibration control strategy for a vehicle's active suspension based on improved cultural algorithm. Appl Soft Comput 2015. [DOI: 10.1016/j.asoc.2014.11.047]

29. Xiao L, Lu R. Finite-time solution to nonlinear equation using recurrent neural dynamics with a specially-constructed activation function. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2014.09.047]

31. Guo D, Zhang Y. Li-function activated ZNN with finite-time convergence applied to redundant-manipulator kinematic control via time-varying Jacobian matrix pseudoinversion. Appl Soft Comput 2014. [DOI: 10.1016/j.asoc.2014.06.045]

32. Finite time dual neural networks with a tunable activation function for solving quadratic programming problems and its application. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2014.06.018]

33. Miao P, Shen Y, Huang Y, Wang YW. Solving time-varying quadratic programs based on finite-time Zhang neural networks and their application to robot tracking. Neural Comput Appl 2014. [DOI: 10.1007/s00521-014-1744-4]

34. Li S, Li Y. Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation. IEEE TRANSACTIONS ON CYBERNETICS 2014; 44:1397-1407. [PMID: 24184789 DOI: 10.1109/tcyb.2013.2285166]
Abstract
The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation defined over the complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used, with O(n³) time complexity. When applied to the time-varying Sylvester equation, however, their computational burden grows rapidly as the sampling period decreases and cannot satisfy continuous real-time calculation requirements. For the special case of the Sylvester equation defined over the real numbers, gradient-based recurrent neural networks can solve the time-varying equation in real time but always leave an estimation error, whereas the recurrent neural network recently proposed by Zhang et al. [called the Zhang neural network (ZNN)] converges to the solution ideally. Advances in complex-valued neural networks suggest extending the existing real-valued ZNN for the time-varying real-valued Sylvester equation to its counterpart in the complex domain. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation is investigated, and the global convergence of the network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing; the upper bound of the convergence time is derived analytically. Simulations evaluate and compare the performance of the network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.

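The sign-bi-power activation mentioned in the abstract combines a sub-linear and a super-linear odd power term; the form below, with the 1/2 weights and r = 0.5 as illustrative choices, is a commonly cited version and not necessarily this paper's exact parameterization:

```python
import numpy as np

def sign_bi_power(e, r=0.5):
    # Commonly cited form (0 < r < 1):
    # phi(e) = 0.5*|e|^r*sign(e) + 0.5*|e|^(1/r)*sign(e).
    # The |e|^r term dominates near zero, which is what yields
    # finite-time convergence of the resulting ZNN.
    s = np.sign(e)
    return 0.5 * np.abs(e) ** r * s + 0.5 * np.abs(e) ** (1.0 / r) * s

print(sign_bi_power(np.array([-2.0, -0.01, 0.0, 0.01, 2.0])))
```
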
36. Zeng Z, Zheng WX. Multistability of two kinds of recurrent neural networks with activation functions symmetrical about the origin on the phase plane. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2013; 24:1749-1762. [PMID: 24808609 DOI: 10.1109/tnnls.2013.2262638]
Abstract
In this paper, we investigate the multistability of two kinds of recurrent neural networks with time-varying delays and activation functions symmetrical about the origin on the phase plane. One kind of activation function has zero slope at the origin on the phase plane, while the other has nonzero slope at the origin. We derive sufficient conditions under which these two kinds of n-dimensional recurrent neural networks are guaranteed to have (2m+1)^n equilibrium points, with (m+1)^n of them being locally exponentially stable. These new conditions improve and extend the existing multistability results for recurrent neural networks. Finally, the validity and performance of the theoretical results are demonstrated through two numerical examples.

37. Xiao L, Zhang Y. From Different Zhang Functions to Various ZNN Models Accelerated to Finite-Time Convergence for Time-Varying Linear Matrix Equation. Neural Process Lett 2013. [DOI: 10.1007/s11063-013-9306-9]

38. Ding S, Zhao H, Zhang Y, Xu X, Nie R. Extreme learning machine: algorithm, theory and applications. Artif Intell Rev 2013. [DOI: 10.1007/s10462-013-9405-z]