1. Zhao Y, Liao X, He X. Novel projection neurodynamic approaches for constrained convex optimization. Neural Netw 2022;150:336-349. DOI: 10.1016/j.neunet.2022.03.011
2. Zhao Y, He X, Huang T, Huang J, Li P. A smoothing neural network for minimization l1-lp in sparse signal reconstruction with measurement noises. Neural Netw 2020;122:40-53. DOI: 10.1016/j.neunet.2019.10.006
3. Distributed Neuro-Dynamic Algorithm for Price-Based Game in Energy Consumption System. Neural Process Lett 2020. DOI: 10.1007/s11063-019-10102-z
4. Moghaddas M, Tohidi G. A neurodynamic scheme to bi-level revenue-based centralized resource allocation models. Journal of Intelligent & Fuzzy Systems 2019. DOI: 10.3233/jifs-182953
Affiliation(s)
- Mohammad Moghaddas: Department of Mathematics, Central Tehran Branch, Islamic Azad University, Tehran, Iran
- Ghasem Tohidi: Department of Mathematics, Central Tehran Branch, Islamic Azad University, Tehran, Iran
5. Liu Y, Zhang D, Lou J, Lu J, Cao J. Stability Analysis of Quaternion-Valued Neural Networks: Decomposition and Direct Approaches. IEEE Transactions on Neural Networks and Learning Systems 2018;29:4201-4211. PMID: 29989971. DOI: 10.1109/tnnls.2017.2755697
Abstract
In this paper, we investigate the global stability of quaternion-valued neural networks (QVNNs) with time-varying delays. On one hand, in order to avoid the noncommutativity of quaternion multiplication, the QVNN is decomposed into four real-valued systems based on the Hamilton rules $ij=-ji=k$, $jk=-kj=i$, $ki=-ik=j$, $i^2=j^2=k^2=ijk=-1$. With the Lyapunov function method, some criteria are presented to ensure the global $\mu$-stability and power stability of the delayed QVNN. On the other hand, by considering the noncommutativity of quaternion multiplication and time-varying delays, the QVNN is investigated directly by the techniques of the Lyapunov-Krasovskii functional and the linear matrix inequality (LMI), where quaternion self-conjugate matrices and quaternion positive definite matrices are used. Some new sufficient conditions in the form of quaternion-valued LMIs are established for the global $\mu$-stability and exponential stability of the considered QVNN. Besides, some assumptions are presented for the two different methods, which can help to choose quaternion-valued activation functions. Finally, two numerical examples are given to show the feasibility and the effectiveness of the main results.
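The decomposition approach above hinges on the Hamilton multiplication rules quoted in the abstract. As a quick self-contained check (standard quaternion arithmetic, not code from the paper), the identities can be verified directly:

```python
# Verify the Hamilton rules the real-valued decomposition relies on.
# Quaternions are represented as (w, x, y, z) tuples; qmul is the
# standard quaternion product.
def qmul(p, q):
    """Multiply two quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

i = (0.0, 1.0, 0.0, 0.0)
j = (0.0, 0.0, 1.0, 0.0)
k = (0.0, 0.0, 0.0, 1.0)
one = (1.0, 0.0, 0.0, 0.0)
neg = lambda q: tuple(-c for c in q)

assert qmul(i, j) == k and qmul(j, i) == neg(k)            # ij = -ji = k
assert qmul(j, k) == i and qmul(k, j) == neg(i)            # jk = -kj = i
assert qmul(k, i) == j and qmul(i, k) == neg(j)            # ki = -ik = j
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == neg(one)  # i^2 = j^2 = k^2 = -1
assert qmul(qmul(i, j), k) == neg(one)                     # ijk = -1
```

The noncommutativity visible here (e.g. ij = k but ji = -k) is exactly what forces either the four-system decomposition or the direct quaternion-valued LMI treatment described in the abstract.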
6. Zhao Y, He X, Huang T, Han Q. Analog circuits for solving a class of variational inequality problems. Neurocomputing 2018. DOI: 10.1016/j.neucom.2018.03.016
7.
8. Ebadi M, Hosseini A, Hosseini M. A projection type steepest descent neural network for solving a class of nonsmooth optimization problems. Neurocomputing 2017. DOI: 10.1016/j.neucom.2017.01.010
9. He X, Huang T, Yu J, Li C, Li C. An Inertial Projection Neural Network for Solving Variational Inequalities. IEEE Transactions on Cybernetics 2017;47:809-814. PMID: 26887026. DOI: 10.1109/tcyb.2016.2523541
Abstract
Recently, the projection neural network (PNN) was proposed for solving monotone variational inequalities (VIs) and related convex optimization problems. In this paper, by incorporating an inertial term into first-order PNNs, an inertial PNN (IPNN) is proposed for solving VIs. Under certain conditions, the IPNN is proved to be stable and can be applied to solve a broader class of constrained optimization problems related to VIs. Compared with existing neural networks (NNs), the presence of the inertial term allows us to overcome some drawbacks of many NNs constructed on the basis of the steepest descent method, and the model is more convenient for exploring different Karush-Kuhn-Tucker optimal solutions of nonconvex optimization problems. Finally, simulation results on three numerical examples show the effectiveness and performance of the proposed NN.
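For readers unfamiliar with the first-order PNN that the inertial model extends, a minimal sketch follows. The dynamics dx/dt = P_Ω(x − αF(x)) − x is the standard PNN form; the box constraint set [0,1]^n, the affine map F(x) = x − b, and all parameter values are illustrative assumptions, not the paper's examples:

```python
# Minimal first-order projection neural network (PNN) sketch, integrated
# with forward Euler. The VI here is a toy: F(x) = x - b over the box
# [0,1]^n, whose solution is the projection of b onto the box.
def clip01(v):
    """Project a point onto the box [0, 1]^n componentwise."""
    return [min(1.0, max(0.0, c)) for c in v]

def pnn_solve(F, x0, alpha=0.5, dt=0.05, steps=4000):
    """Euler-integrate dx/dt = P_Omega(x - alpha*F(x)) - x."""
    x = list(x0)
    for _ in range(steps):
        p = clip01([xi - alpha * fi for xi, fi in zip(x, F(x))])
        x = [xi + dt * (pi - xi) for xi, pi in zip(x, p)]
    return x

b = [0.3, 1.7, -0.4]
F = lambda x: [xi - bi for xi, bi in zip(x, b)]
x_star = pnn_solve(F, x0=[0.5, 0.5, 0.5])
# x_star approaches clip01(b) = [0.3, 1.0, 0.0]
```

The inertial variant in the paper adds a second-order (momentum-like) term to these dynamics; the sketch shows only the first-order baseline being extended.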
10. Chedjou JC, Kyamakya K. Benchmarking a recurrent neural network based efficient shortest path problem (SPP) solver concept under difficult dynamic parameter settings conditions. Neurocomputing 2016. DOI: 10.1016/j.neucom.2016.02.068
11. Wang Y, Cheng L, Hou ZG, Yu J, Tan M. Optimal Formation of Multirobot Systems Based on a Recurrent Neural Network. IEEE Transactions on Neural Networks and Learning Systems 2016;27:322-333. PMID: 26316224. DOI: 10.1109/tnnls.2015.2464314
Abstract
The optimal formation problem of multirobot systems is solved by a recurrent neural network in this paper. The desired formation is described by shape theory, which generates a set of feasible formations that share the same relative relations among robots. An optimal formation is one chosen from the feasible formation set that has the minimum distance to the initial formation of the multirobot system. The formation problem is thereby transformed into an optimization problem, and the orientation, scale, and admissible range of the formation can also be included as constraints. Furthermore, if all robots are identical, their positions in the system are exchangeable, so each robot does not necessarily move to one specific position in the formation. In this case, the optimal formation problem becomes a combinatorial optimization problem whose optimal solution is very hard to obtain. Inspired by the penalty method, this combinatorial optimization problem can be approximately transformed into a convex optimization problem. Due to the involvement of the Euclidean norm in the distance, the objective functions of these optimization problems are nonsmooth. To solve these nonsmooth optimization problems efficiently, a recurrent neural network approach is employed, owing to its parallel computation ability. Finally, some simulations and experiments are given to validate the effectiveness and efficiency of the proposed optimal formation approach.
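The nonsmoothness the abstract mentions comes from Euclidean norms in the distance objective. As a toy stand-in (not the paper's recurrent network), subgradient descent on the scalar analogue f(x) = Σ|x − aᵢ| shows that such nondifferentiable objectives remain tractable; the minimizer of f is the median of the aᵢ, and the data points and step-size schedule below are made up for illustration:

```python
# Subgradient descent on the nonsmooth objective f(x) = sum_i |x - a_i|,
# whose minimizer is the median of the a_i. This illustrates why
# subgradient-based (or neurodynamic) methods are used where gradients
# do not exist everywhere.
def subgradient_median(a, x0=0.0, c=0.1, iters=20000):
    """Minimize sum_i |x - a_i| with diminishing steps c/sqrt(k+1)."""
    sign = lambda v: (v > 0) - (v < 0)
    x = x0
    for k in range(iters):
        g = sum(sign(x - ai) for ai in a)   # a valid subgradient of f at x
        x -= (c / (k + 1) ** 0.5) * g       # diminishing step size
    return x

x_star = subgradient_median([1.0, 2.0, 10.0])
# x_star settles near the median, 2.0
```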
12. Di Marco M, Forti M, Nistri P, Pancioni L. Nonsmooth Neural Network for Convex Time-Dependent Constraint Satisfaction Problems. IEEE Transactions on Neural Networks and Learning Systems 2016;27:295-307. PMID: 25769174. DOI: 10.1109/tnnls.2015.2404773
Abstract
This paper introduces a nonsmooth (NS) neural network that is able to operate in a time-dependent (TD) context and is potentially useful for solving some classes of NS-TD problems. The proposed network, named the nonsmooth time-dependent network (NTN), extends to a TD setting a previous NS neural network for programming problems. Suppose C(t), t ≥ 0, is a nonempty TD convex feasibility set defined by TD inequality constraints. The constraints are in general NS (nondifferentiable) functions of the state variables and time. NTN is described by the subdifferential, with respect to the state variables, of an NS-TD barrier function and a vector field corresponding to the unconstrained dynamics. This paper shows that for suitable values of the penalty parameter, the NTN dynamics displays two main phases. In the first phase, any solution of NTN not starting in C(0) at t = 0 is able to reach the moving set C(·) in finite time t_h, whereas in the second phase, the solution tracks the moving set, i.e., it stays within C(t) for all subsequent times t ≥ t_h. NTN is thus able to find an exact feasible solution in finite time and also to provide an exact feasible solution for subsequent times. This new and peculiar dynamics displayed by NTN is potentially useful for addressing some significant TD signal processing tasks. As an illustration, this paper discusses a number of examples where NTN is applied to the solution of NS-TD convex feasibility problems.
13. Qiao C, Jing WF, Fang J, Wang YP. The general critical analysis for continuous-time UPPAM recurrent neural networks. Neurocomputing 2016;175:40-46. PMID: 26858512. DOI: 10.1016/j.neucom.2015.09.103
Abstract
The uniformly pseudo-projection-anti-monotone (UPPAM) neural network model, which can be considered a unified model of continuous-time neural networks (CNNs), includes almost all known individual CNN models. Recently, studies on the critical dynamic behaviors of CNNs have drawn special attention due to their importance in both theory and applications. In this paper, we present an analysis of the UPPAM network under general critical conditions. It is shown that the UPPAM network possesses global convergence and asymptotic stability under the general critical conditions if the network satisfies one quasi-symmetric requirement on the connection matrices, which is easy to verify and apply. The general critical dynamics have rarely been studied before, and this work is an attempt to obtain a meaningful assurance of general critical convergence and stability of CNNs. Since the UPPAM network is a unified model for CNNs, the results obtained here generalize and extend the existing critical conclusions for individual CNN models, let alone the non-critical cases. Moreover, the easily verified conditions for general critical convergence and stability can further promote the applications of CNNs.
Affiliation(s)
- Chen Qiao: School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, P.R. China; Department of Biomedical Engineering, Tulane University, New Orleans, LA 70118, USA
- Wen-Feng Jing: School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, P.R. China
- Jian Fang: School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, P.R. China; Department of Biomedical Engineering, Tulane University, New Orleans, LA 70118, USA
- Yu-Ping Wang: Department of Biomedical Engineering, Tulane University, New Orleans, LA 70118, USA; Center of Genomics and Bioinformatics, Tulane University, New Orleans, LA 70112, USA
14. A new delayed projection neural network for solving quadratic programming problems with equality and inequality constraints. Neurocomputing 2015. DOI: 10.1016/j.neucom.2015.05.006
15. Pan Y, Zhou Q, Lu Q, Wu C. New dissipativity condition of stochastic fuzzy neural networks with discrete and distributed time-varying delays. Neurocomputing 2015. DOI: 10.1016/j.neucom.2015.03.045
16. Xu C, Li P. Dynamics in Four-Neuron Bidirectional Associative Memory Networks with Inertia and Multiple Delays. Cognit Comput 2015. DOI: 10.1007/s12559-015-9344-x
17.
18.
19.
20. Finite time dual neural networks with a tunable activation function for solving quadratic programming problems and its application. Neurocomputing 2014. DOI: 10.1016/j.neucom.2014.06.018
21.
22.
23.
24. Xiao J, Zeng Z, Shen W. Global asymptotic stability of delayed neural networks with discontinuous neuron activations. Neurocomputing 2013. DOI: 10.1016/j.neucom.2013.02.021
25. Ge J, Xu J. Stability switches and fold-Hopf bifurcations in an inertial four-neuron network model with coupling delay. Neurocomputing 2013. DOI: 10.1016/j.neucom.2012.08.048
26.
27. Zhang H, Huang B, Gong D, Wang Z. New results for neutral-type delayed projection neural network to solve linear variational inequalities. Neural Comput Appl 2012. DOI: 10.1007/s00521-012-1141-9
28.
29. Hu X, Wang J. Solving the assignment problem using continuous-time and discrete-time improved dual networks. IEEE Transactions on Neural Networks and Learning Systems 2012;23:821-827. PMID: 24806130. DOI: 10.1109/tnnls.2012.2187798
Abstract
The assignment problem is an archetypal combinatorial optimization problem. In this brief, we present a continuous-time version and a discrete-time version of the improved dual neural network (IDNN) for solving the assignment problem. Compared with most assignment networks in the literature, the two versions of the IDNN are advantageous in circuit implementation due to their simple structures. Both of them are theoretically guaranteed to be globally convergent to a solution of the assignment problem, provided the solution is unique.
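The assignment problem the IDNN targets can be stated compactly: given an n×n cost matrix, pick a permutation assigning each row to a distinct column at minimum total cost. The brute-force enumeration below is an independent baseline (the cost matrix is made up, and this is not the dual-network method itself); its exponential blow-up is precisely what motivates dedicated solvers:

```python
# Brute-force baseline for the assignment problem: enumerate all n!
# permutations and keep the cheapest. Tractable only for tiny n.
from itertools import permutations

def assign_brute_force(cost):
    """Return (best_perm, best_cost): row i is assigned to column best_perm[i]."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return best, sum(cost[i][best[i]] for i in range(n))

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
perm, total = assign_brute_force(cost)
# unique optimum here: rows 0,1,2 -> columns 1,0,2 with total cost 1+2+2 = 5
```

Note the example has a unique optimal assignment, matching the uniqueness condition under which the IDNN's global convergence is guaranteed.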
30. Huang B, Zhang H, Gong D, Wang Z. A new result for projection neural networks to solve linear variational inequalities and related optimization problems. Neural Comput Appl 2012. DOI: 10.1007/s00521-012-0918-1
31. Yang Y, Gao Y. A new neural network for solving nonlinear convex programs with linear constraints. Neurocomputing 2011. DOI: 10.1016/j.neucom.2011.04.026
32. Cheng L, Hou ZG, Lin Y, Tan M, Zhang WC, Wu FX. Recurrent Neural Network for Non-Smooth Convex Optimization Problems With Application to the Identification of Genetic Regulatory Networks. IEEE Transactions on Neural Networks 2011;22:714-726. PMID: 21427022. DOI: 10.1109/tnn.2011.2109735
33. Liu W, Fu C, Hu H. Global exponential stability of a class of Hopfield neural networks with delays. Neural Comput Appl 2010. DOI: 10.1007/s00521-010-0470-9