1. Xia Y, Ye T, Huang L. Analysis and Application of Matrix-Form Neural Networks for Fast Matrix-Variable Convex Optimization. IEEE Trans Neural Netw Learn Syst 2025;36:2259-2273. [PMID: 38157471] [DOI: 10.1109/tnnls.2023.3340730]
Abstract
Matrix-variable optimization is a generalization of vector-variable optimization and has many important applications. To reduce computation time and storage requirements, this article presents two matrix-form recurrent neural networks (RNNs), one a continuous-time model and the other a discrete-time model, for solving matrix-variable optimization problems with linear constraints. The two proposed matrix-form RNNs have low complexity and are suitable for parallel implementation in terms of matrix state space. The proposed continuous-time matrix-form RNN significantly generalizes existing continuous-time vector-form RNNs. The proposed discrete-time matrix-form RNN can be effectively used in blind image restoration, where it greatly reduces storage requirements and computational cost. Theoretically, the two proposed matrix-form RNNs are guaranteed to be globally convergent to the optimal solution under mild conditions. Computed results show that the proposed matrix-form RNN-based algorithm is superior to related vector-form and matrix-form RNN-based algorithms in terms of computation time.
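For intuition, the continuous-time idea can be sketched as Euler-discretized projection dynamics on a toy matrix-variable least-squares problem with a nonnegativity constraint. Everything below (the problem instance, step sizes, and the simple orthant projection) is an illustrative assumption, not the model analyzed in the paper:

```python
import numpy as np

# Sketch: Euler-discretized projection dynamics
#   dX/dt = P_Omega(X - alpha * grad f(X)) - X
# for the toy matrix-variable problem min ||A X - B||_F^2 s.t. X >= 0.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 1.0],
              [1.0, 1.0, 4.0],
              [1.0, 0.0, 2.0]])
X_true = np.array([[1.0, 0.0],
                   [0.0, 2.0],
                   [3.0, 1.0]])
B = A @ X_true                              # consistent right-hand side

project = lambda X: np.maximum(X, 0.0)      # projection onto the nonnegative orthant
alpha = 1.0 / np.linalg.norm(A, 2) ** 2     # safe gradient step (1 / ||A||_2^2)
h = 0.5                                     # Euler step for the dynamics

X = np.zeros((3, 2))                        # the state stays a matrix; no vectorization
for _ in range(5000):
    grad = A.T @ (A @ X - B)
    X = X + h * (project(X - alpha * grad) - X)

residual = np.linalg.norm(A @ X - B)
```

Because the state is kept in matrix form, the update uses only matrix products of the original dimensions, which is the storage advantage the abstract alludes to.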
2. Yang X, Ju X, Shi P, Wen G. Two Novel Noise-Suppression Projection Neural Networks With Fixed-Time Convergence for Variational Inequalities and Applications. IEEE Trans Neural Netw Learn Syst 2025;36:1707-1718. [PMID: 37819816] [DOI: 10.1109/tnnls.2023.3321761]
Abstract
This article proposes two novel projection neural networks (PNNs) with fixed-time (FXT) convergence for variational inequality problems (VIPs). The remarkable features of the proposed PNNs are FXT convergence and more accurate upper bounds on the settling time for arbitrary initial conditions. The robustness of the proposed PNNs under bounded noise is further studied. In addition, the proposed PNNs are applied to absolute value equations (AVEs), noncooperative games, and sparse signal reconstruction problems (SSRPs). The upper bounds on the settling time for the proposed PNNs are tighter than the bounds in existing neural networks. The effectiveness and advantages of the proposed PNNs are confirmed by numerical examples.
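The projection-neurodynamic idea behind such models (without the fixed-time gains, which are the paper's actual contribution) can be sketched on a small linear, strongly monotone VI; the operator, box constraint, and step sizes below are made up:

```python
import numpy as np

# Sketch: plain projection neurodynamics for VI(F, Omega):
# find x* in Omega with (x - x*)^T F(x*) >= 0 for all x in Omega.
M = np.array([[3.0, 1.0], [1.0, 2.0]])   # strongly monotone linear operator
q = np.array([-4.0, 1.0])
F = lambda x: M @ x + q
proj = lambda x: np.clip(x, 0.0, 2.0)    # Omega = [0, 2]^2

x, alpha, h = np.zeros(2), 0.2, 0.5
for _ in range(2000):
    # Euler step of dx/dt = P_Omega(x - alpha*F(x)) - x
    x = x + h * (proj(x - alpha * F(x)) - x)

# natural residual: zero exactly at a VI solution
residual = np.linalg.norm(x - proj(x - alpha * F(x)))
```

The fixed-time variants in the paper replace the constant rate with state-dependent gains so that the settling time admits an initial-condition-independent bound; the plain model above only converges exponentially.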
3. Upadhyay A, Pandey R. A proximal neurodynamic model for a system of non-linear inverse mixed variational inequalities. Neural Netw 2024;176:106323. [PMID: 38653123] [DOI: 10.1016/j.neunet.2024.106323]
Abstract
In this article, we introduce a system of non-linear inverse mixed variational inequalities (SNIMVIs). We propose a proximal neurodynamic model (PNDM) for solving SNIMVIs, leveraging proximal mappings. The uniqueness of the continuous solution of the PNDM is proved under a Lipschitz continuity assumption. Moreover, we establish the global asymptotic stability of equilibrium points of the PNDM under Lipschitz continuity and strong monotonicity. Additionally, an iterative algorithm involving proximal mappings for solving the SNIMVIs is presented. Finally, we provide illustrative examples to support our main findings, together with an example in which the SNIMVIs violate the strong monotonicity condition and the trajectories of the corresponding PNDM diverge.
Affiliation(s)
- Anjali Upadhyay: Department of Mathematics, University of Delhi, Delhi, India.
- Rahul Pandey: Mahant Avaidyanath Govt. Degree College, Jungle Kaudiya, Gorakhpur, U.P., India.
4. Gao X, Liao LZ. Novel Continuous- and Discrete-Time Neural Networks for Solving Quadratic Minimax Problems With Linear Equality Constraints. IEEE Trans Neural Netw Learn Syst 2024;35:9814-9828. [PMID: 37022226] [DOI: 10.1109/tnnls.2023.3236695]
Abstract
This article presents two novel continuous- and discrete-time neural networks (NNs) for solving quadratic minimax problems with linear equality constraints. These two NNs are established based on the saddle-point conditions of the underlying function. For both NNs, a proper Lyapunov function is constructed so that they are stable in the sense of Lyapunov and converge to a saddle point from any starting point under some mild conditions. Compared with existing NNs for solving quadratic minimax problems, the proposed NNs require weaker stability conditions. The validity and transient behavior of the proposed models are illustrated by simulation results.
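The saddle-point dynamics underlying such models can be sketched on an unconstrained quadratic minimax toy problem (the paper's networks additionally handle linear equality constraints; all data below are made up):

```python
import numpy as np

# Sketch: continuous-time saddle-point dynamics for
#   L(x, y) = 0.5 x^T A x + x^T B y - 0.5 y^T C y,
# i.e. gradient descent in x and gradient ascent in y, Euler-discretized.
A = np.array([[2.0, 0.5], [0.5, 1.5]])   # positive definite (convex in x)
C = np.array([[1.0, 0.2], [0.2, 2.0]])   # positive definite (concave in y)
B = np.array([[1.0, -1.0], [0.0, 1.0]])

x, y, h = np.ones(2), np.ones(2), 0.05
for _ in range(4000):
    gx = A @ x + B @ y                    # grad_x L (descend)
    gy = B.T @ x - C @ y                  # grad_y L (ascend)
    x, y = x - h * gx, y + h * gy

# both gradients vanish at the saddle point (here the origin)
saddle_residual = np.linalg.norm(A @ x + B @ y) + np.linalg.norm(B.T @ x - C @ y)
```

With A and C positive definite the linearized dynamics are Hurwitz, which is the continuous-time stability mechanism the abstract's Lyapunov argument formalizes.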
5. Zheng J, Ju X, Zhang N, Xu D. A novel predefined-time neurodynamic approach for mixed variational inequality problems and applications. Neural Netw 2024;174:106247. [PMID: 38518707] [DOI: 10.1016/j.neunet.2024.106247]
Abstract
In this paper, we propose a novel neurodynamic approach with predefined-time stability for mixed variational inequality problems. Our approach introduces an adjustable time parameter, thereby enhancing flexibility and applicability compared with conventional fixed-time stability methods. Under certain conditions, the proposed approach converges to a unique solution within a predefined time, which sets it apart from fixed-time and finite-time stability approaches. Furthermore, our approach extends to a wide range of mathematical optimization problems, including variational inequalities, nonlinear complementarity problems, sparse signal recovery problems, and Nash equilibrium seeking problems in noncooperative games. We provide numerical simulations to validate the theoretical derivation and to showcase the effectiveness and feasibility of the proposed method.
Affiliation(s)
- Jinlan Zheng: Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China
- Xingxing Ju: College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China; Shaanxi Key Laboratory of Information Communication Network and Security, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121, China
- Naimin Zhang: College of Mathematics and Physics, Wenzhou University, Wenzhou 325035, China
- Dongpo Xu: Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China.
6. Zhang Z, Song Y, Zheng L, Luo Y. A Jump-Gain Integral Recurrent Neural Network for Solving Noise-Disturbed Time-Variant Nonlinear Inequality Problems. IEEE Trans Neural Netw Learn Syst 2024;35:5793-5806. [PMID: 37022813] [DOI: 10.1109/tnnls.2023.3241207]
Abstract
Nonlinear inequalities are widely used in science and engineering areas, attracting the attention of many researchers. In this article, a novel jump-gain integral recurrent (JGIR) neural network is proposed to solve noise-disturbed time-variant nonlinear inequality problems. To do so, an integral error function is first designed. Then, a neural dynamic method is adopted and the corresponding dynamic differential equation is obtained. Third, a jump gain is exploited and applied to the dynamic differential equation. Fourth, the derivatives of errors are substituted into the jump-gain dynamic differential equation, and the corresponding JGIR neural network is set up. Global convergence and robustness theorems are proposed and proved theoretically. Computer simulations verify that the proposed JGIR neural network can solve noise-disturbed time-variant nonlinear inequality problems effectively. Compared with some advanced methods, such as modified zeroing neural network (ZNN), noise-tolerant ZNN, and varying-parameter convergent-differential neural network, the proposed JGIR method has smaller computational errors, faster convergence speed, and no overshoot when disturbance exists. In addition, physical experiments on manipulator control have verified the effectiveness and superiority of the proposed JGIR neural network.
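The baseline zeroing-dynamics idea that the JGIR model builds on (before adding the integral error term and the jump gain) can be sketched on a scalar time-variant tracking problem; the target signal and gains below are made up:

```python
import numpy as np

# Sketch: zeroing-neurodynamics tracking of a time-variant target r(t).
# Define the error e = x - r(t) and impose de/dt = -gamma*e, which gives
#   dx/dt = dr/dt - gamma*(x - r(t)),
# here integrated with a plain Euler step.
gamma, h = 10.0, 1e-3
x = 0.0                                   # matches r(0) = sin(0), so no transient
for t in np.arange(0.0, 5.0, h):
    target, d_target = np.sin(t), np.cos(t)
    e = x - target
    x = x + h * (d_target - gamma * e)

final_error = abs(x - np.sin(5.0))
```

Feeding the target's derivative forward is what lets zeroing-type models track time-variant problems with only a small residual error; the paper's jump gain and integral term are aimed at suppressing the effect of additive noise on this loop.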
7. Ju X, Li C, Che H, He X, Feng G. A Proximal Neurodynamic Network With Fixed-Time Convergence for Equilibrium Problems and Its Applications. IEEE Trans Neural Netw Learn Syst 2023;34:7500-7514. [PMID: 35143401] [DOI: 10.1109/tnnls.2022.3144148]
Abstract
This article proposes a novel fixed-time converging proximal neurodynamic network (FXPNN) via a proximal operator to deal with equilibrium problems (EPs). A distinctive feature of the proposed FXPNN is its better transient performance in comparison to most existing proximal neurodynamic networks. It is shown that the FXPNN converges to the solution of the corresponding EP in fixed time under some mild conditions. It is also shown that the settling time of the FXPNN is independent of initial conditions and that the fixed-time interval can be prescribed, unlike existing results with asymptotic or exponential convergence. Moreover, the proposed FXPNN is applied to solve composition optimization problems (COPs), l1-regularized least-squares problems, mixed variational inequalities (MVIs), and variational inequalities (VIs). It is further shown, in the case of solving COPs, that fixed-time convergence can be established via the Polyak-Lojasiewicz condition, which is a relaxation of the more demanding convexity condition. Finally, numerical examples are presented to validate the effectiveness and advantages of the proposed neurodynamic network.
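For the l1-regularized least-squares application, the plain proximal-neurodynamic loop (without the fixed-time gain that is the paper's contribution) reduces to soft-thresholded gradient dynamics; the sparse test problem below is made up:

```python
import numpy as np

# Sketch: proximal dynamics dx/dt = prox_{alpha*lam*||.||_1}(x - alpha*grad f(x)) - x
# for min 0.5*||Ax - b||^2 + lam*||x||_1, Euler-discretized.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -1.5]              # 2-sparse ground truth
b = A @ x_true
lam = 0.1
alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 1/L

soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)  # prox of t*||.||_1

x = np.zeros(8)
for _ in range(3000):
    grad = A.T @ (A @ x - b)
    x = x + 0.8 * (soft(x - alpha * grad, alpha * lam) - x)

support = set(np.flatnonzero(np.abs(x) > 0.5))
residual = np.linalg.norm(A @ x - b)
```

The equilibrium of these dynamics is exactly the lasso solution, which recovers the true support here with a small lam-induced bias in the coefficients.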
8. Xia Y, Wang J, Lu Z, Huang L. Two Recurrent Neural Networks With Reduced Model Complexity for Constrained l₁-Norm Optimization. IEEE Trans Neural Netw Learn Syst 2023;34:6173-6185. [PMID: 34986103] [DOI: 10.1109/tnnls.2021.3133836]
Abstract
Because of the robustness and sparsity performance of least absolute deviation (LAD or l1) optimization, developing effective solution methods has become an important topic. Recurrent neural networks (RNNs) are reported to be capable of effectively solving constrained l1-norm optimization problems, but their convergence speed is limited. To accelerate convergence, this article introduces two RNNs, in the form of continuous- and discrete-time systems, for solving l1-norm optimization problems with linear equality and inequality constraints. The RNNs are theoretically proven to be globally convergent to optimal solutions without any additional condition. With reduced model complexity, the two RNNs can significantly expedite constrained l1-norm optimization. Numerical simulation results show that the two RNNs require much less computational time than related RNNs and numerical optimization algorithms for linearly constrained l1-norm optimization.
9. Liu J, Liao X, Dong JS, Mansoori A. A subgradient-based neurodynamic algorithm to constrained nonsmooth nonconvex interval-valued optimization. Neural Netw 2023;160:259-273. [PMID: 36709530] [DOI: 10.1016/j.neunet.2023.01.012]
Abstract
In this paper, a subgradient-based neurodynamic algorithm is presented to solve the nonsmooth nonconvex interval-valued optimization problem with both partial order and linear equality constraints, where the interval-valued objective function is nonconvex and the interval-valued partial order constraint functions are convex. The designed neurodynamic system is constructed from a differential inclusion with an upper semicontinuous right-hand side, and its computational load is reduced by avoiding penalty-parameter estimation and complex matrix inversion. Based on nonsmooth analysis and the extension theorem for solutions of differential inclusions, we establish the global existence and boundedness of the state solution of the neurodynamic system, as well as its asymptotic convergence to the feasible region and to the set of LU-critical points of the interval-valued nonconvex optimization problem. Several numerical experiments, together with applications to emergency supplies distribution and nondeterministic fractional continuous static games, illustrate the applicability of the proposed neurodynamic algorithm.
Affiliation(s)
- Jingxin Liu: College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China; School of Computing, National University of Singapore, Singapore 117417, Singapore.
- Xiaofeng Liao: Key Laboratory of Dependable Services Computing in Cyber-Physical Society (Chongqing) Ministry of Education, College of Computer, Chongqing University, Chongqing 400044, China.
- Jin-Song Dong: School of Computing, National University of Singapore, Singapore 117417, Singapore.
- Amin Mansoori: Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran; International UNESCO Center for Health Related Basic Sciences and Human Nutrition, Mashhad University of Medical Sciences, Mashhad 9177948974, Iran.
10. Hu J, Peng Y, He L, Zeng C. A Neurodynamic Approach for Solving E-Convex Interval-Valued Programming. Neural Process Lett 2023. [DOI: 10.1007/s11063-023-11154-y]
11. Ju X, Hu D, Li C, He X, Feng G. A Novel Fixed-Time Converging Neurodynamic Approach to Mixed Variational Inequalities and Applications. IEEE Trans Cybern 2022;52:12942-12953. [PMID: 34347618] [DOI: 10.1109/tcyb.2021.3093076]
Abstract
This article proposes a novel fixed-time converging forward-backward-forward neurodynamic network (FXFNN) to deal with mixed variational inequalities (MVIs). A distinctive feature of the FXFNN is its fast, fixed-time convergence, in contrast to conventional forward-backward-forward neurodynamic networks and projected neurodynamic networks. It is shown that the solution of the proposed FXFNN exists, is unique, and converges to the unique solution of the corresponding MVIs in fixed time under some mild conditions. It is also shown that the fixed-time convergence result obtained for the FXFNN is independent of initial conditions, unlike most existing asymptotic and exponential convergence results. Furthermore, the proposed FXFNN is applied to solving sparse recovery problems, variational inequalities, nonlinear complementarity problems, and min-max problems. Finally, numerical and experimental examples are presented to validate the effectiveness of the proposed neurodynamic network.
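The forward-backward-forward structure (without the fixed-time gain) can be sketched on a mixed VI whose nonsmooth part is the indicator of a box, so the backward step is a projection; the skew operator and all constants below are made up:

```python
import numpy as np

# Sketch: Tseng-style forward-backward-forward dynamics for
#   0 in F(x) + dg(x),  g = indicator of [-1, 1]^2  (so prox_g = clip).
# FBF handles merely monotone F (here skew, not strongly monotone),
# where plain forward-backward projection dynamics can fail.
M = np.array([[0.0, 2.0], [-2.0, 0.0]])  # skew-symmetric: monotone, Lipschitz
q = np.array([1.0, -1.0])
F = lambda x: M @ x + q
prox = lambda x: np.clip(x, -1.0, 1.0)

x, alpha, h = np.zeros(2), 0.2, 0.5      # alpha < 1/Lipschitz(F) = 0.5
for _ in range(4000):
    y = prox(x - alpha * F(x))                       # forward-backward step
    x = x + h * (y - x + alpha * (F(x) - F(y)))      # extra forward correction

residual = np.linalg.norm(x - prox(x - alpha * F(x)))
```

The second forward evaluation is what stabilizes the rotation induced by the skew part; for this instance the solution is the interior point (-0.5, -0.5).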
12. Zheng J, Chen J, Ju X. Fixed-time stability of projection neurodynamic network for solving pseudomonotone variational inequalities. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.07.034]
13. Zhang S, Xia Y, Xia Y, Wang J. Matrix-Form Neural Networks for Complex-Variable Basis Pursuit Problem With Application to Sparse Signal Reconstruction. IEEE Trans Cybern 2022;52:7049-7059. [PMID: 33471773] [DOI: 10.1109/tcyb.2020.3042519]
Abstract
In this article, a continuous-time complex-valued projection neural network (CCPNN) in a matrix state space is first proposed for a general complex-variable basis pursuit problem. The proposed CCPNN is proved to be stable in the sense of Lyapunov and to be globally convergent to the optimal solution under the condition that the sensing matrix is not row full rank. Furthermore, an improved discrete-time complex projection neural network (IDCPNN) is proposed by discretizing the CCPNN model. The proposed IDCPNN employs a two-step stopping strategy to reduce the computational cost and is theoretically guaranteed to be globally convergent to the optimal solution. Finally, the proposed IDCPNN is applied to the reconstruction of sparse signals based on compressed sensing. Computed results show that the proposed IDCPNN is superior to related complex-valued neural networks and conventional basis pursuit algorithms in terms of solution quality and computation time.
14. Hu D, He X, Ju X. A modified projection neural network with fixed-time convergence. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.03.023]
15. Eshaghnezhad M, Effati S, Mansoori A. A compact MLCP-based projection recurrent neural network model to solve shortest path problem. J Exp Theor Artif Intell 2022. [DOI: 10.1080/0952813x.2022.2067247]
Affiliation(s)
- Sohrab Effati: Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran
- Amin Mansoori: Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran
16. Ju X, Che H, Li C, He X, Feng G. Exponential convergence of a proximal projection neural network for mixed variational inequalities and applications. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.04.059]
17. Solving Mixed Variational Inequalities Via a Proximal Neurodynamic Network with Applications. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10628-1]
18. Mohammadi M. A Compact Neural Network for Fused Lasso Signal Approximator. IEEE Trans Cybern 2021;51:4327-4336. [PMID: 31329147] [DOI: 10.1109/tcyb.2019.2925707]
Abstract
The fused lasso signal approximator (FLSA) is a vital optimization problem with extensive applications in signal processing and biomedical engineering. However, the optimization problem is difficult to solve since it is both nonsmooth and nonseparable. Existing numerical solutions involve several auxiliary variables in order to deal with the nondifferentiable penalty, so the resulting algorithms are both time- and memory-inefficient. This paper proposes a compact neural network to solve the FLSA. The neural network has a one-layer structure with the number of neurons proportional to the dimension of the given signal, thanks to the utilization of consecutive projections. The proposed neural network is stable in the Lyapunov sense and is guaranteed to converge globally to the optimal solution of the FLSA. Experiments on several applications from signal processing and biomedical engineering confirm the reasonable performance of the proposed neural network.
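The FLSA objective itself can be computed with a small reference solver: the prox of the total-variation term via projected gradient on its dual, followed by soft-thresholding (a known decomposition for the fused lasso). This is not the paper's network; the signal and penalty weights below are made up:

```python
import numpy as np

# Reference solver for the FLSA
#   min_x 0.5*||x - y||^2 + lam1*||x||_1 + lam2*sum_i |x[i+1] - x[i]|
# Step 1: TV prox via its dual  min_u ||y - D^T u||^2  s.t.  |u_i| <= lam2,
# solved by projected gradient. Step 2: soft-threshold the result by lam1.
y = np.array([0.1, 0.0, 3.0, 3.1, 2.9, 0.0, -0.1, 0.05])
lam1, lam2 = 0.2, 1.0

n = y.size
D = np.diff(np.eye(n), axis=0)            # (n-1) x n first-difference matrix
u = np.zeros(n - 1)                       # dual variable for the TV term
step = 1.0 / np.linalg.norm(D, 2) ** 2
for _ in range(2000):
    grad = D @ (D.T @ u - y)
    u = np.clip(u - step * grad, -lam2, lam2)
tv = y - D.T @ u                          # prox of the TV penalty at y

x = np.sign(tv) * np.maximum(np.abs(tv) - lam1, 0.0)   # then soft-threshold
```

The output is piecewise constant: the middle plateau around 3.0 survives (shrunk by the penalties) while the near-zero blocks are flattened, which is the behavior the paper's one-layer network targets with far less memory.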
19. Zhang Z, Yang S, Zheng L. A Penalty Strategy Combined Varying-Parameter Recurrent Neural Network for Solving Time-Varying Multi-Type Constrained Quadratic Programming Problems. IEEE Trans Neural Netw Learn Syst 2021;32:2993-3004. [PMID: 32726282] [DOI: 10.1109/tnnls.2020.3009201]
Abstract
To obtain the optimal solution of the time-varying quadratic programming (TVQP) problem with equality and multitype inequality constraints, a penalty strategy combined varying-parameter recurrent neural network (PS-VP-RNN) is proposed and analyzed. Using a novel penalty function designed in this article, the inequality constraints of the TVQP can be transformed into a penalty term that is added to the objective function. Then, based on the design method of VP-RNNs, a PS-VP-RNN is designed and analyzed for solving the TVQP with the penalty term. A key advantage of PS-VP-RNN is that it can solve the TVQP not only with equality constraints but also with inequality and bound constraints. The global convergence theorem of PS-VP-RNN is presented and proved. Finally, three numerical simulation experiments with different forms of inequality and bound constraints verify the effectiveness and accuracy of PS-VP-RNN in solving TVQP problems.
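The penalty idea (inequality constraint folded into the objective, then plain gradient dynamics) can be sketched on a static toy QP; the paper's network is time-varying with varying parameters, and everything below, including the quadratic penalty form, is an illustrative assumption:

```python
import numpy as np

# Sketch: min 0.5*x^T Q x + c^T x  s.t.  x <= u, with the inequality replaced
# by the smooth penalty (rho/2)*||max(x - u, 0)||^2, solved by Euler-discretized
# gradient dynamics on the penalized objective.
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([-4.0, -2.0])
u = np.array([1.0, 5.0])                  # x[0] <= 1 will be active
rho, h = 1e3, 1e-4                        # penalty weight and Euler step

x = np.zeros(2)
for _ in range(200000):
    grad = Q @ x + c + rho * np.maximum(x - u, 0.0)
    x = x - h * grad

# Unconstrained minimizer is (2, 2); the penalty pushes x[0] to ~1 + O(1/rho).
```

The O(1/rho) constraint violation is the classic cost of a quadratic penalty; the paper's varying-parameter design is aimed at driving the error of the time-varying problem to zero rather than to a penalty-limited offset.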
20. Two Matrix-Type Projection Neural Networks for Matrix-Valued Optimization with Application to Image Restoration. Neural Process Lett 2021. [DOI: 10.1007/s11063-019-10086-w]
21. Ju X, Li C, He X, Feng G. A proximal neurodynamic model for solving inverse mixed variational inequalities. Neural Netw 2021;138:1-9. [PMID: 33610091] [DOI: 10.1016/j.neunet.2021.01.012]
Abstract
This paper proposes a proximal neurodynamic model (PNDM) for solving inverse mixed variational inequalities (IMVIs) based on the proximal operator. It is shown that the PNDM has a unique continuous solution under the condition of Lipschitz continuity (L-continuity). It is also shown that the equilibrium point of the proposed PNDM is asymptotically stable or exponentially stable under some mild conditions. Finally, three numerical examples are presented to illustrate effectiveness of the proposed PNDM.
Affiliation(s)
- Xingxing Ju: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
- Chuandong Li: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
- Xing He: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
- Gang Feng: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong.
22. Mohammadi M. A new discrete-time neural network for quadratic programming with general linear constraints. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2019.11.028]
23. Xia Y, Wang J, Guo W. Two Projection Neural Networks With Reduced Model Complexity for Nonlinear Programming. IEEE Trans Neural Netw Learn Syst 2020;31:2020-2029. [PMID: 31425123] [DOI: 10.1109/tnnls.2019.2927639]
Abstract
Recent reports show that projection neural networks with a low-dimensional state space can markedly enhance computation speed. This paper proposes two projection neural networks with reduced model dimension and complexity (RDPNNs) for solving nonlinear programming (NP) problems. Compared with existing projection neural networks for NP, the proposed two RDPNNs have a lower-dimensional state space and lower model complexity. Under the condition that the Hessian matrix of the associated Lagrangian function is positive semi-definite and positive definite at each Karush-Kuhn-Tucker point, the two RDPNNs are proven to be globally stable in the sense of Lyapunov and to converge globally to a point satisfying the reduced optimality condition of the NP problem. Therefore, the two RDPNNs are theoretically guaranteed to solve convex NP problems and a class of nonconvex NP problems. Computed results show that the two RDPNNs have a faster computation speed than existing projection neural networks for solving NP problems.
24. Mansoori A, Eshaghnezhad M, Effati S. Recurrent Neural Network Model: A New Strategy to Solve Fuzzy Matrix Games. IEEE Trans Neural Netw Learn Syst 2019;30:2538-2547. [PMID: 30624230] [DOI: 10.1109/tnnls.2018.2885825]
Abstract
This paper investigates fuzzy constrained matrix game (MG) problems using recurrent neural networks (RNNs). To the best of our knowledge, this is the first attempt to solve fuzzy game problems using RNN models. For this purpose, a fuzzy game problem is reformulated into a weighting problem, and the Karush-Kuhn-Tucker (KKT) optimality conditions are derived for the weighting problem. The KKT conditions are then used to propose the RNN model, whose Lyapunov stability and global convergence are also established. Finally, three illustrative examples demonstrate the effectiveness of this approach, and the obtained results are compared with those of previous approaches for solving fuzzy constrained MGs.
25. Shojaeifard A, Amroudi AN, Mansoori A, Erfanian M. Projection Recurrent Neural Network Model: A New Strategy to Solve Weapon-Target Assignment Problem. Neural Process Lett 2019. [DOI: 10.1007/s11063-019-10068-y]
26. Qiu B, Zhang Y. Two New Discrete-Time Neurodynamic Algorithms Applied to Online Future Matrix Inversion With Nonsingular or Sometimes-Singular Coefficient. IEEE Trans Cybern 2019;49:2032-2045. [PMID: 29993939] [DOI: 10.1109/tcyb.2018.2818747]
Abstract
In this paper, a high-precision general discretization formula using six time instants is first proposed to approximate the first-order derivative. This formula is then used to discretize two continuous-time neurodynamic models, both derived by applying neural-network-based neurodynamic approaches (i.e., zeroing neurodynamics and gradient neurodynamics). Originating from the general six-instant discretization (6ID) formula, a specific 6ID formula is further presented. Subsequently, two new discrete-time neurodynamic algorithms, i.e., the 6ID-type discrete-time zeroing neurodynamic (DTZN) algorithm and the 6ID-type discrete-time gradient neurodynamic (DTGN) algorithm, are proposed and investigated for online future matrix inversion (OFMI). In addition to the usual nonsingular situation of the coefficient, this paper investigates the sometimes-singular situation of the coefficient for OFMI. Finally, two illustrative numerical examples, including an application to the inverse-kinematic control of a PUMA560 robot manipulator, show the respective characteristics and advantages of the proposed 6ID-type DTZN and DTGN algorithms for OFMI when the coefficient matrix to be inverted is always nonsingular or sometimes singular during time evolution.
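The underlying continuous-time zeroing model for time-varying matrix inversion can be sketched with a plain Euler step (the paper's contribution is the more accurate six-instant discretization formula, not reproduced here; the coefficient matrix A(t) and gains are made up):

```python
import numpy as np

# Sketch: zeroing neurodynamics for time-varying matrix inversion.
# With error E(t) = A(t) X(t) - I and design dE/dt = -gamma*E, one obtains
#   dX/dt = -X A'(t) X - gamma * X (A(t) X - I)   (using X ~ A^{-1}),
# integrated here with a simple Euler step.
gamma, h = 50.0, 1e-4
I = np.eye(2)
A = lambda t: np.array([[2.0 + np.sin(t), 0.5], [0.5, 2.0 + np.cos(t)]])
dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])

X = np.linalg.inv(A(0.0))                 # start at the true inverse
for k in range(int(2.0 / h)):
    t = k * h
    X = X + h * (-X @ dA(t) @ X - gamma * X @ (A(t) @ X - I))

err = np.linalg.norm(A(2.0) @ X - I)
```

The derivative feedforward term -X A'(t) X is what keeps the tracking error small as A(t) drifts; higher-order discretization formulas like the paper's 6ID scheme shrink the residual error of the Euler step used here.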
27. An efficient neurodynamic model to solve nonlinear programming problems with fuzzy parameters. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.01.012]
29. Mohammadi M, Mansoori A. A Projection Neural Network for Identifying Copy Number Variants. IEEE J Biomed Health Inform 2018;23:2182-2188. [PMID: 30235154] [DOI: 10.1109/jbhi.2018.2871619]
Abstract
The identification of copy number variations (CNVs) helps the diagnosis of many diseases. One major hurdle in CNV discovery is that the boundaries of normal and aberrant regions cannot be distinguished from the raw data, since various types of noise contaminate them. To tackle this challenge, total variation regularization is widely used in optimization problems to approximate the noise-free data from corrupted observations. The resulting minimization is challenging because the regularizer is non-differentiable. In this paper, we propose a projection neural network to solve the non-smooth problem. The proposed neural network has a simple one-layer structure and is theoretically assured of global exponential convergence to the solution of the total-variation-regularized problem. Experiments on several real and simulated datasets illustrate the reasonable performance of the proposed neural network and show that its performance is comparable with those of more sophisticated algorithms.