1. Yang X, Ju X, Shi P, Wen G. Two Novel Noise-Suppression Projection Neural Networks With Fixed-Time Convergence for Variational Inequalities and Applications. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:1707-1718. PMID: 37819816. DOI: 10.1109/tnnls.2023.3321761.
Abstract
This article proposes two novel projection neural networks (PNNs) with fixed-time convergence for solving variational inequality problems (VIPs). The remarkable features of the proposed PNNs are fixed-time convergence and more accurate upper bounds on the settling time for arbitrary initial conditions. The robustness of the proposed PNNs under bounded noise is further studied. In addition, the proposed PNNs are applied to absolute value equations (AVEs), noncooperative games, and sparse signal reconstruction problems (SSRPs). The upper bounds on the settling time of the proposed PNNs are tighter than those of existing neural networks. The effectiveness and advantages of the proposed PNNs are confirmed by numerical examples.
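For a concrete picture of the problem class, the sketch below integrates the classic projection-network dynamics dx/dt = P_Ω(x − F(x)) − x for a monotone VIP on a box; the paper's contribution is to add fixed-time (power-law) gain terms around this vector field, which are not reproduced here. The affine map F, the box bounds, and the step-size choices are illustrative assumptions.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def pnn_vip(F, x0, lo, hi, dt=1e-3, steps=20000):
    """Forward-Euler integration of the baseline projection network
    dx/dt = P(x - F(x)) - x; its equilibria solve the VIP on the box."""
    x = x0.astype(float)
    for _ in range(steps):
        x += dt * (project_box(x - F(x), lo, hi) - x)
    return x

# Toy monotone VIP: F(x) = Ax + b with A positive definite, on [0, 1]^2.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -1.0])
x_star = pnn_vip(lambda x: A @ x + b, np.zeros(2), 0.0, 1.0)
print(x_star)   # approaches A^{-1}(1,1) = (2/11, 3/11), interior to the box
```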
2. Talebi F, Nazemi A, Ataabadi AA. Mean-AVaR in credibilistic portfolio management via an artificial neural network scheme. J Exp Theor Artif Intell 2022. DOI: 10.1080/0952813x.2022.2153271.
Affiliation(s)
- Fatemeh Talebi, Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
- Alireza Nazemi, Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
- Abdolmajid Abdolbaghi Ataabadi, Department of Management, Faculty of Industrial Engineering and Management, Shahrood University of Technology, Shahrood, Iran
3. Li X, Wang J, Kwong S. Hash Bit Selection Based on Collaborative Neurodynamic Optimization. IEEE Transactions on Cybernetics 2022; 52:11144-11155. PMID: 34415845. DOI: 10.1109/tcyb.2021.3102941.
Abstract
Hash bit selection determines an optimal subset of hash bits from a candidate bit pool. It is formulated as a zero-one quadratic programming problem subject to binary and cardinality constraints. In this article, the problem is equivalently reformulated as a global optimization problem. A collaborative neurodynamic optimization (CNO) approach is applied, in which a group of neurodynamic models is iteratively reinitialized with particle swarm optimization. Lévy mutation is used in the CNO to avoid premature convergence by ensuring the diversity of initial states. A theoretical proof shows that the CNO with the Lévy mutation operator converges almost surely to global optima. Experimental results substantiate the efficacy and superiority of the CNO-based hash bit selection method over existing methods on three benchmarks.
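As a rough illustration of the CNO loop described above (not the authors' hash-bit-selection model), the sketch below alternates local neurodynamic-style descent with a PSO-style reinitialization of starting states plus a Lévy mutation. The test function, the gradient-descent stand-in for a neurodynamic model, and all hyperparameters are assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def levy_step(size, alpha=1.5):
    """Heavy-tailed step (Mantegna's approximation of a Levy flight)."""
    sigma = (math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
             / (math.gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / alpha)

def local_descent(f, grad, x, lr=0.01, iters=200):
    """Stand-in for one neurodynamic model: descend to a nearby local minimum."""
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

def cno(f, grad, dim=2, n_models=8, rounds=30, w=0.7, c1=1.5, c2=1.5):
    X = rng.uniform(-5, 5, (n_models, dim))   # initial neuronal states
    V = np.zeros_like(X)
    pbest = X.copy()
    for _ in range(rounds):
        X = np.array([local_descent(f, grad, x) for x in X])   # parallel local solves
        better = np.array([f(x) for x in X]) < np.array([f(p) for p in pbest])
        pbest[better] = X[better]
        gbest = pbest[np.argmin([f(p) for p in pbest])]
        # PSO-style reinitialization of the models' starting states
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V
        X[rng.integers(n_models)] += levy_step(dim)   # Levy mutation for diversity
    return gbest

rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
rastrigin_grad = lambda x: 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)
print(cno(rastrigin, rastrigin_grad))   # near the global minimum at the origin
```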
4. Zhao Y, Liao X, He X. Fixed-Time Stable Neurodynamic Flow to Sparse Signal Recovery via Nonconvex L1-β2-Norm. Neural Comput 2022; 34:1727-1755. PMID: 35798330. DOI: 10.1162/neco_a_01508.
Abstract
This letter develops a novel fixed-time stable neurodynamic flow (FTSNF), implemented as a dynamical system, for solving the nonconvex, nonsmooth model L1-β2, β∈[0,1], to recover sparse signals. FTSNF is composed of many neuron-like elements running in parallel; it is very efficient and has provable fixed-time convergence. First, a closed-form solution of the proximal operator of the model L1-β2, β∈[0,1], is presented based on the classic soft thresholding of the L1-norm. Next, the proposed FTSNF is proven to have the fixed-time convergence property without additional assumptions on the convexity and strong monotonicity of the objective function. In addition, we show that FTSNF can be transformed into other proximal neurodynamic flows that have exponential and finite-time convergence properties. Simulation results on sparse signal recovery verify the effectiveness and superiority of the proposed FTSNF.
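A hedged sketch of the building blocks: soft thresholding for the L1 part, and a soft-threshold-then-rescale closed form for the L1−βL2 prox in its main regime (the small-signal boundary cases known from the literature are omitted), plugged into a plain Euler-discretized proximal flow. The fixed-time gain terms of FTSNF itself are not reproduced, and problem sizes and parameters are assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||.||_1 (classic soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_l1_beta_l2(x, lam, beta):
    """Main-regime closed form of the prox of lam*(||.||_1 - beta*||.||_2):
    soft-threshold, then rescale the norm upward; boundary cases omitted."""
    z = soft_threshold(x, lam)
    nz = np.linalg.norm(z)
    return z if nz == 0.0 else z * (nz + beta * lam) / nz

def proximal_flow(A, b, lam=0.02, beta=0.5, dt=0.2, steps=2000):
    """Euler integration of du/dt = -u + prox(u - grad(0.5*||Au-b||^2))."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                  # gradient Lipschitz constant
    for _ in range(steps):
        g = x - A.T @ (A @ x - b) / L
        x += dt * (prox_l1_beta_l2(g, lam / L, beta) - x)
    return x

rng = np.random.default_rng(1)
m, n, k = 40, 100, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x_hat = proximal_flow(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))              # small reconstruction error
```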
Affiliation(s)
- You Zhao, Key Laboratory of Dependable Services Computing in Cyber Physical Society-Ministry of Education, College of Computer Science, Chongqing University, Chongqing 400044, China
- Xiaofeng Liao, Key Laboratory of Dependable Services Computing in Cyber Physical Society-Ministry of Education, College of Computer Science, Chongqing University, Chongqing 400044, China
- Xing He, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronics and Information Engineering, Southwest University, Chongqing 400715, China
5. Liu S, Jiang H, Zhang L, Mei X. A neurodynamic optimization approach for complex-variables programming problem. Neural Netw 2020; 129:280-287. PMID: 32569856. DOI: 10.1016/j.neunet.2020.06.012.
Abstract
In this paper, a neural network model based on a differential inclusion is designed for solving complex-variables convex programming, and a chain rule for real-valued functions of complex variables is established. The model does not require choosing penalty parameters when applied to practical problems, which makes it easier to design. It is shown that the state reaches the feasible region in finite time, and the convergence of the state to an optimal solution is proved. Some typical examples demonstrate the effectiveness of the designed model.
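The chain rule mentioned here is in the spirit of Wirtinger calculus. The toy below, an assumption-laden illustration rather than the paper's network, descends a real-valued quadratic of a complex vector along the conjugate-coordinate (Wirtinger) gradient.

```python
import numpy as np

# For a real-valued f of a complex vector, steepest descent follows the
# conjugate (Wirtinger) gradient 2*df/dz_bar.  With
# f(z) = z^H A z - 2 Re(b^H z) and A Hermitian positive definite,
# that gradient is 2*(A z - b), so the flow settles at z = A^{-1} b.
rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = M.conj().T @ M + n * np.eye(n)       # Hermitian positive definite
b = rng.normal(size=n) + 1j * rng.normal(size=n)

z = np.zeros(n, dtype=complex)
lr = 0.5 / np.linalg.norm(A, 2)          # stable step for this quadratic
for _ in range(5000):
    z -= lr * 2 * (A @ z - b)            # Wirtinger gradient step
print(np.linalg.norm(z - np.linalg.solve(A, b)))   # ~ 0
```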
Affiliation(s)
- Shuxin Liu, Institute of Operations Research and Control Theory, School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China; College of Mathematics and Physics, Xinjiang Agricultural University, Urumqi 830052, PR China
- Haijun Jiang, College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, PR China
- Liwei Zhang, Institute of Operations Research and Control Theory, School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China
- Xuehui Mei, College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, PR China
6. Li X, Wang J, Kwong S. A Discrete-Time Neurodynamic Approach to Sparsity-Constrained Nonnegative Matrix Factorization. Neural Comput 2020; 32:1531-1562. PMID: 32521214. DOI: 10.1162/neco_a_01294.
Abstract
Sparsity is a desirable property in many nonnegative matrix factorization (NMF) applications. Although some level of sparseness in NMF solutions can be achieved by regularization, the resulting sparsity depends strongly on a regularization parameter that has to be chosen in an ad hoc way. In this letter, we formulate sparse NMF as a mixed-integer optimization problem with sparsity imposed as binary constraints. A discrete-time projection neural network is developed for solving the formulated problem, and sufficient conditions for its stability and convergence are analytically characterized using Lyapunov's method. Experimental results on sparse feature extraction substantiate the superiority of this approach for extracting highly sparse features.
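To make the cardinality constraint concrete, the sketch below alternates projected-gradient updates, with each column of H projected onto the set of nonnegative vectors with at most k nonzeros by hard top-k truncation. This is an illustrative stand-in for the letter's discrete-time projection network; the sizes, rank, and step sizes are assumptions.

```python
import numpy as np

def project_nonneg_topk(M, k):
    """Project each column onto {v >= 0, ||v||_0 <= k}: clip negatives,
    then zero out all but the k largest entries per column."""
    M = np.maximum(M, 0.0)
    drop = np.argsort(M, axis=0)[:-k, :]      # rows of the smallest entries
    np.put_along_axis(M, drop, 0.0, axis=0)
    return M

def sparse_nmf(X, r, k, iters=500, seed=0):
    """Alternating projected-gradient sketch for X ~ W H with
    column-sparse H (illustration, not the letter's exact model)."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], r))
    H = rng.random((r, X.shape[1]))
    for _ in range(iters):
        R = W @ H - X
        W = np.maximum(W - R @ H.T / (np.linalg.norm(H, 2) ** 2 + 1e-12), 0.0)
        R = W @ H - X
        H = project_nonneg_topk(H - W.T @ R / (np.linalg.norm(W, 2) ** 2 + 1e-12), k)
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(30, 20)))
W, H = sparse_nmf(X, r=5, k=2)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))   # relative fit
print((H > 0).sum(axis=0).max())                       # at most 2 nonzeros per column
```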
Affiliation(s)
- Xinqi Li, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong, and Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China
- Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong, and Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China
- Sam Kwong, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong, and Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China
7. Xu C, Chai Y, Qin S, Wang Z, Feng J. A neurodynamic approach to nonsmooth constrained pseudoconvex optimization problem. Neural Netw 2020; 124:180-192. DOI: 10.1016/j.neunet.2019.12.015.
8. Mohammadi S, Nazemi A. On portfolio management with value at risk and uncertain returns via an artificial neural network scheme. Cogn Syst Res 2020. DOI: 10.1016/j.cogsys.2019.09.024.
9. On the Flexible Dynamics Analysis for the Unified Discrete-Time RNNs. Neural Process Lett 2018. DOI: 10.1007/s11063-018-9959-5.
10. Xiang W, Li F, Wang J, Tang B. Quantum weighted gated recurrent unit neural network and its application in performance degradation trend prediction of rotating machinery. Neurocomputing 2018. DOI: 10.1016/j.neucom.2018.06.012.
11. A nonnegative matrix factorization algorithm based on a discrete-time projection neural network. Neural Netw 2018; 103:63-71. PMID: 29642020. DOI: 10.1016/j.neunet.2018.03.003.
Abstract
This paper presents an algorithm for nonnegative matrix factorization based on a biconvex optimization formulation. First, a discrete-time projection neural network is introduced. An upper bound of its step size is derived to guarantee the stability of the neural network. Then, an algorithm is proposed based on the discrete-time projection neural network and a backtracking step-size adaptation. The proposed algorithm is proven to be able to reduce the objective function value iteratively until attaining a partial optimum of the formulated biconvex optimization problem. Experimental results based on various data sets are presented to substantiate the efficacy of the algorithm.
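The step-size safeguard described above can be illustrated generically: shrink the step until a descent condition holds, then take the projected step (projection here is clipping to the nonnegative orthant). This is a minimal sketch of the backtracking idea, not the paper's exact algorithm; the test problem is an assumption.

```python
import numpy as np

def backtracking_projected_step(f, grad, x, step=1.0, shrink=0.5, tries=30):
    """One projected-gradient step whose step size is halved until the
    objective stops increasing; returns the new point and accepted step."""
    fx, g = f(x), grad(x)
    for _ in range(tries):
        x_new = np.maximum(x - step * g, 0.0)    # projection onto x >= 0
        if f(x_new) <= fx:
            return x_new, step
        step *= shrink
    return x, step

# Usage on one factor of an NMF subproblem: fix H, update W.
rng = np.random.default_rng(0)
X, H = rng.random((20, 15)), rng.random((4, 15))
W = rng.random((20, 4))
f = lambda W: 0.5 * np.linalg.norm(W @ H - X) ** 2
grad = lambda W: (W @ H - X) @ H.T
for _ in range(100):
    W, _ = backtracking_projected_step(f, grad, W)
print(f(W))   # objective decreases monotonically by construction
```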
12. Zhao M, Su H, Wang M, Wang L, Chen MZ. A weighted adaptive-velocity self-organizing model and its high-speed performance. Neurocomputing 2016. DOI: 10.1016/j.neucom.2016.08.008.
13. Boundedness and convergence analysis of weight elimination for cyclic training of neural networks. Neural Netw 2016; 82:49-61. PMID: 27472447. DOI: 10.1016/j.neunet.2016.06.005.
Abstract
Weight elimination offers a simple and efficient improvement to training algorithms for feedforward neural networks. It is a general regularization technique in terms of its flexible scaling parameter; in particular, it reduces to weight-decay regularization for a large scaling parameter. Many applications of this technique and its improvements have been reported, but there is little research on its convergence behavior. In this paper, we theoretically analyze weight elimination for the cyclic learning method and determine conditions for the uniform boundedness of the weight sequence and for weak and strong convergence. Based on the assumed network parameters, the optimal choice of the scaling parameter can also be determined. Two illustrative simulations support the theoretical explorations.
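For reference, a sketch of the regularizer in its standard Weigend-style form with scaling parameter w0 (assuming this is the form the analysis concerns), together with the gradient term added to backpropagation.

```python
import numpy as np

def we_penalty(w, lam, w0):
    """Weight-elimination term lam * sum (w/w0)^2 / (1 + (w/w0)^2).
    For |w| << w0 it acts like weight decay; for |w| >> w0 the cost
    saturates near lam, so large weights are not shrunk further."""
    s = (w / w0) ** 2
    return lam * np.sum(s / (1.0 + s))

def we_gradient(w, lam, w0):
    """Term added to the data-loss gradient during (cyclic) training."""
    return lam * 2.0 * w * w0 ** 2 / (w0 ** 2 + w ** 2) ** 2
```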
14. Qin S, Cheng Q, Chen G. Global exponential stability of uncertain neural networks with discontinuous Lurie-type activation and mixed delays. Neurocomputing 2016. DOI: 10.1016/j.neucom.2015.07.147.
15. A winner-take-all approach to emotional neural networks with universal approximation property. Inf Sci (N Y) 2016. DOI: 10.1016/j.ins.2016.01.055.
16. Liao B, Zhang Y, Jin L. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators. IEEE Transactions on Neural Networks and Learning Systems 2016; 27:225-237. PMID: 26058059. DOI: 10.1109/tnnls.2015.2435014.
Abstract
In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN) and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and ZNNU models) are then proposed and discussed for online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, the Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including application examples, are carried out; the results further substantiate the theoretical findings and the efficacy of the Taylor-type discrete-time ZNN models. Finally, comparisons with a Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
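A minimal continuous-design/Euler-discretization sketch of a ZNN tracking the time-varying KKT system M(t)y = v(t) of an equality-constrained QP; the paper's point is that replacing this Euler step with its Taylor-type formula improves the steady-state residual from O(h²) to O(h³). The particular P(t), A, b(t), gain, and the finite-difference derivative estimates are illustrative assumptions.

```python
import numpy as np

def M(t):
    """Time-varying KKT matrix [P(t) A^T; A 0] of the QP."""
    P = np.array([[2.0 + np.sin(t), 0.0], [0.0, 2.0 + np.cos(t)]])
    A = np.array([[1.0, 1.0]])
    return np.block([[P, A.T], [A, np.zeros((1, 1))]])

def v(t):
    """Stacked right-hand side [-q(t); b(t)]."""
    return np.array([np.cos(t), np.sin(t), 1.0])

h, gamma = 1e-3, 5.0
t, y = 0.0, np.linalg.solve(M(0.0), v(0.0))
while t < 10.0:
    dM = (M(t + h) - M(t)) / h            # finite-difference dM/dt
    dv = (v(t + h) - v(t)) / h            # finite-difference dv/dt
    err = M(t) @ y - v(t)                 # ZNN error, driven by de/dt = -gamma*e
    y = y + h * np.linalg.solve(M(t), dv - dM @ y - gamma * err)
    t += h
print(np.abs(M(t) @ y - v(t)).max())      # small tracking residual
```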
17. Liu Q, Wang J. A Projection Neural Network for Constrained Quadratic Minimax Optimization. IEEE Transactions on Neural Networks and Learning Systems 2015; 26:2891-2900. PMID: 25966485. DOI: 10.1109/tnnls.2015.2425301.
Abstract
This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some existing neural networks for quadratic minimax optimization, the proposed neural network is capable of solving more general constrained quadratic minimax optimization problems, and it does not include any design parameter. Moreover, the neural network has lower model complexity: its number of state variables equals the dimension of the optimization problem. Simulation results on numerical examples demonstrate the effectiveness and characteristics of the proposed neural network.
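The sketch below runs plain projection dynamics du/dt = P_Ω(u − F(u)) − u on a convex-concave quadratic saddle problem over a box, illustrating the parameter-free flavor of such models. The matrices, box, and step size are assumptions, and the paper's LMI conditions are not checked here.

```python
import numpy as np

A = np.array([[3.0, 0.5], [0.5, 2.0]])   # x-block, positive definite
C = np.array([[2.0, 0.0], [0.0, 1.0]])   # y-block, positive definite
B = np.array([[1.0, -1.0], [0.5, 1.0]])  # coupling
a, c = np.array([-1.0, 1.0]), np.array([1.0, 0.0])

def F(u):
    """Saddle-point map (grad_x f, -grad_y f) of the quadratic
    f(x,y) = 0.5 x'Ax + x'By + a'x - 0.5 y'Cy - c'y."""
    x, y = u[:2], u[2:]
    return np.concatenate([A @ x + B @ y + a, -(B.T @ x - C @ y - c)])

u, dt = np.zeros(4), 1e-3
for _ in range(50000):
    u += dt * (np.clip(u - F(u), -1.0, 1.0) - u)   # projection onto the box
print(u[:2], u[2:])   # approximate saddle point (x*, y*)
```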
18. Qin S, Xue X. A two-layer recurrent neural network for nonsmooth convex optimization problems. IEEE Transactions on Neural Networks and Learning Systems 2015; 26:1149-1160. PMID: 25051563. DOI: 10.1109/tnnls.2014.2334364.
Abstract
In this paper, a two-layer recurrent neural network is proposed to solve nonsmooth convex optimization problems subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has low model complexity and avoids penalty parameters. It is proved that, from any initial point, the state of the proposed neural network reaches the equality-feasible region in finite time and stays there thereafter; moreover, the state trajectory is unique if the initial point lies in the equality-feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that every equilibrium point of the proposed neural network is stable in the sense of Lyapunov and that, from any initial point, the state converges to an equilibrium point. Finally, as applications, the proposed neural network is used to solve nonlinear convex programs with linear constraints and L1-norm minimization problems.
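As an illustration of the L1-norm application (not the paper's two-layer model), the sketch below starts from a feasible point of Ax = b and follows the projected subgradient flow dx/dt = −P_null(A) sign(x), which stays feasible while decreasing the L1-norm. Sizes, data, and the fixed Euler step are assumptions; a fixed step only chatters to within O(dt) of a minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 15, 30
A = rng.normal(size=(m, n))
b = A @ np.concatenate([rng.normal(size=3), np.zeros(n - 3)])  # sparse ground truth

AAT_inv = np.linalg.inv(A @ A.T)
P_null = np.eye(n) - A.T @ AAT_inv @ A    # projector onto null(A)
x = A.T @ AAT_inv @ b                     # least-norm feasible starting point
dt = 1e-3
for _ in range(100000):
    x -= dt * P_null @ np.sign(x)         # feasible descent on ||x||_1
print(np.linalg.norm(A @ x - b))          # ~ 0: still feasible
print(np.abs(x).sum())                    # decreased L1 objective
```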
19. Qin S, Fan D, Wu G, Zhao L. Neural network for constrained nonsmooth optimization using Tikhonov regularization. Neural Netw 2015; 63:272-281. DOI: 10.1016/j.neunet.2014.12.007.
20. Li G, Yan Z, Wang J. A one-layer recurrent neural network for constrained nonconvex optimization. Neural Netw 2015; 61:10-21. DOI: 10.1016/j.neunet.2014.09.009.
21. Solving portfolio selection models with uncertain returns using an artificial neural network scheme. Appl Intell 2014. DOI: 10.1007/s10489-014-0616-z.
22. He X, Yu J, Huang T, Li C, Li C. Neural network for solving Nash equilibrium problem in application of multiuser power control. Neural Netw 2014; 57:73-78. DOI: 10.1016/j.neunet.2014.06.002.
23. He X, Li C, Huang T, Li C, Huang J. A recurrent neural network for solving bilevel linear programming problem. IEEE Transactions on Neural Networks and Learning Systems 2014; 25:824-830. PMID: 24807959. DOI: 10.1109/tnnls.2013.2280905.
Abstract
In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with existing NNs for the BLPP, the model has the smallest number of state variables and a simple structure. Using nonsmooth analysis, the theory of differential inclusions, and a Lyapunov-like method, the equilibrium point sequence of the proposed NN is shown to converge approximately to an optimal solution of the BLPP under certain conditions. Finally, numerical simulations of a supply chain distribution model show the excellent performance of the proposed recurrent NN.
24. He X, Li C, Huang T, Li C. Neural network for solving convex quadratic bilevel programming problems. Neural Netw 2013; 51:17-25. PMID: 24333480. DOI: 10.1016/j.neunet.2013.11.015.
Abstract
In this paper, using the idea of successive approximation, we propose a neural network, modeled by a nonautonomous differential inclusion, to solve convex quadratic bilevel programming problems (CQBPPs). Different from existing neural networks for CQBPPs, the model has the smallest number of state variables and a simple structure. Based on the theory of nonsmooth analysis, differential inclusions, and a Lyapunov-like method, the limit equilibrium point sequence of the proposed neural network converges approximately to an optimal solution of the CQBPP under certain conditions. Finally, simulation results on two numerical examples and a portfolio selection problem show the effectiveness and performance of the proposed neural network.
Affiliation(s)
- Xing He, School of Electronics and Information Engineering, Southwest University, Chongqing 400715, PR China
- Chuandong Li, School of Electronics and Information Engineering, Southwest University, Chongqing 400715, PR China
- Chaojie Li, School of Science, Information Technology and Engineering, University of Ballarat, Mt Helen, VIC 3350, Australia
26. Huang B, Zhang H, Gong D, Wang Z. A new result for projection neural networks to solve linear variational inequalities and related optimization problems. Neural Comput Appl 2012. DOI: 10.1007/s00521-012-0918-1.
27. Liu Q, Guo Z, Wang J. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization. Neural Netw 2012; 26:99-109. DOI: 10.1016/j.neunet.2011.09.001.
28. Wang J, Wu W, Zurada JM. Deterministic convergence of conjugate gradient method for feedforward neural networks. Neurocomputing 2011. DOI: 10.1016/j.neucom.2011.03.016.