1. Upadhyay A, Pandey R. A proximal neurodynamic model for a system of non-linear inverse mixed variational inequalities. Neural Netw 2024; 176:106323. [PMID: 38653123] [DOI: 10.1016/j.neunet.2024.106323]
Abstract
In this article, we introduce a system of non-linear inverse mixed variational inequalities (SNIMVIs). We propose a proximal neurodynamic model (PNDM) for solving SNIMVIs, leveraging proximal mappings. The uniqueness of the continuous solution of the PNDM is proved under a Lipschitz continuity assumption. Moreover, we establish the global asymptotic stability of the equilibrium points of the PNDM under Lipschitz continuity and strong monotonicity. Additionally, an iterative algorithm involving proximal mappings for solving the SNIMVIs is presented. Finally, we provide illustrative examples to support our main findings, together with an example in which the SNIMVIs violate the strong monotonicity condition and the trajectories of the corresponding PNDM diverge.
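A minimal sketch of the proximal-neurodynamic idea, assuming the generic flow dx/dt = λ(prox_ρf(x − ρF(x)) − x) for a single (forward) mixed variational inequality rather than the authors' SNIMVI system; the operator F, the choice f = ‖·‖₁, and all parameters are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal mapping of t*||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy strongly monotone operator F(x) = Ax + b (all data here is assumed).
rng = np.random.default_rng(0)
M = 0.3 * rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)
b = rng.standard_normal(5)
F = lambda x: A @ x + b

lam, rho, dt = 1.0, 0.3, 0.01   # convergence gain, prox step, Euler step
x = np.zeros(5)
for _ in range(5000):           # forward-Euler integration of the ODE
    x += dt * lam * (soft_threshold(x - rho * F(x), rho) - x)

# At equilibrium, x* = prox_{rho*f}(x* - rho*F(x*)), i.e. x* solves the mixed VI.
print("fixed-point residual:", np.linalg.norm(soft_threshold(x - rho * F(x), rho) - x))
```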
Affiliations
- Anjali Upadhyay: Department of Mathematics, University of Delhi, Delhi, India.
- Rahul Pandey: Mahant Avaidyanath Govt. Degree College, Jungle Kaudiya, Gorakhpur, U.P., India.
2. Gao X, Liao LZ. Novel Continuous- and Discrete-Time Neural Networks for Solving Quadratic Minimax Problems With Linear Equality Constraints. IEEE Trans Neural Netw Learn Syst 2024; 35:9814-9828. [PMID: 37022226] [DOI: 10.1109/tnnls.2023.3236695]
Abstract
This article presents two novel continuous- and discrete-time neural networks (NNs) for solving quadratic minimax problems with linear equality constraints. The two NNs are established based on the saddle-point conditions of the underlying function. For each NN, a proper Lyapunov function is constructed so that it is stable in the sense of Lyapunov and converges to a saddle point from any starting point under some mild conditions. Compared with the existing NNs for solving quadratic minimax problems, the proposed NNs require weaker stability conditions. The validity and transient behavior of the proposed models are illustrated by simulation results.
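A rough sketch of saddle-point dynamics of the kind such networks implement, assuming the generic primal-dual gradient flow for min_x max_y 0.5xᵀQx + xᵀSy − 0.5yᵀRy subject to Ax = b; this is not the article's exact models, and all data are illustrative:

```python
import numpy as np

# Quadratic minimax: min_x max_y 0.5 x'Qx + x'Sy - 0.5 y'Ry  s.t. Ax = b.
# Saddle-point gradient flow with a Lagrange multiplier z (toy data).
rng = np.random.default_rng(1)
n, m, p = 4, 3, 2
Q = 2.0 * np.eye(n)
R = 2.0 * np.eye(m)
S = rng.standard_normal((n, m))
A = rng.standard_normal((p, n))
b = rng.standard_normal(p)

x, y, z = np.zeros(n), np.zeros(m), np.zeros(p)
dt = 2e-3
for _ in range(100000):
    dx = -(Q @ x + S @ y + A.T @ z)   # descend in x (plus constraint force)
    dy =  (S.T @ x - R @ y)           # ascend in y
    dz =  (A @ x - b)                 # drive the equality residual to zero
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz

print("constraint residual:", np.linalg.norm(A @ x - b))
print("stationarity:", np.linalg.norm(Q @ x + S @ y + A.T @ z),
      np.linalg.norm(S.T @ x - R @ y))
```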
3. Liu J, Liao X, Dong JS. A Recurrent Neural Network Approach for Constrained Distributed Fuzzy Convex Optimization. IEEE Trans Neural Netw Learn Syst 2024; 35:9743-9757. [PMID: 37022084] [DOI: 10.1109/tnnls.2023.3236607]
Abstract
This article investigates a class of constrained distributed fuzzy convex optimization problems, where the objective function is the sum of local fuzzy convex objective functions and the constraints include partial-order relations and closed convex set constraints. In an undirected connected communication network, each node knows only its own objective function and constraints, and the local objective function and partial-order relation functions may be nonsmooth. To solve this problem, a recurrent neural network approach based on the differential inclusion framework is proposed. The network model is constructed using the penalty-function idea, which eliminates the need to estimate penalty parameters in advance. Through theoretical analysis, it is proven that the state solution of the network enters the feasible region in finite time, never escapes it again, and finally reaches consensus at an optimal solution of the distributed fuzzy optimization problem. Furthermore, the stability and global convergence of the network do not depend on the selection of the initial state. A numerical example and an intelligent-ship output power optimization problem are given to illustrate the feasibility and effectiveness of the proposed approach.
4. Xia Z, Liu Y, Wang J, Wang J. Two-timescale recurrent neural networks for distributed minimax optimization. Neural Netw 2023; 165:527-539. [PMID: 37348433] [DOI: 10.1016/j.neunet.2023.06.003]
Abstract
In this paper, we present two-timescale neurodynamic optimization approaches to distributed minimax optimization. We propose four multilayer recurrent neural networks for solving four different types of generally nonlinear convex-concave minimax problems subject to linear equality and nonlinear inequality constraints. We derive sufficient conditions to guarantee the stability and optimality of the neural networks. We demonstrate the viability and efficiency of the proposed neural networks in two specific paradigms for Nash-equilibrium seeking in a zero-sum game and distributed constrained nonlinear optimization.
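A toy sketch of the two-timescale idea, assuming plain gradient descent-ascent with a fast minimization step and a slow maximization step on an unconstrained quadratic saddle problem; the article's networks handle constrained, generally nonlinear convex-concave cases:

```python
# Two-timescale dynamics for min_x max_y phi(x,y) = x^2 + 3xy - y^2 (toy saddle
# problem; convex in x, concave in y, saddle point at the origin).
grad_x = lambda x, y: 2 * x + 3 * y
grad_y = lambda x, y: 3 * x - 2 * y

x, y = 1.0, -1.0
eta_fast, eta_slow = 1e-2, 1e-4   # x evolves ~100x faster than y
for _ in range(200000):
    x -= eta_fast * grad_x(x, y)  # fast minimization layer tracks argmin_x
    y += eta_slow * grad_y(x, y)  # slow maximization layer drifts toward argmax_y

print(f"approximate saddle point: x={x:.4g}, y={y:.4g}")  # -> (0, 0)
```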
Affiliations
- Zicong Xia: School of Mathematical Sciences, Zhejiang Normal University, Jinhua 321004, China.
- Yang Liu: Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua 321004, China; School of Mathematical Sciences, Zhejiang Normal University, Jinhua 321004, China.
- Jiasen Wang: Future Network Research Center, Purple Mountain Laboratories, Nanjing 211111, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Hong Kong.
5. Qiu Q, Su H. Sampling-Based Event-Triggered Exponential Synchronization for Reaction-Diffusion Neural Networks. IEEE Trans Neural Netw Learn Syst 2023; 34:1209-1217. [PMID: 34432640] [DOI: 10.1109/tnnls.2021.3105126]
Abstract
In this article, the exponential synchronization control problem for reaction-diffusion neural networks (RDNNs) under Dirichlet boundary conditions is addressed via a sampling-based event-triggered scheme. Based on the sampled state information, the event-triggered control protocol is updated only when the triggering condition is met, which effectively reduces the communication burden and saves energy. In addition, the proposed control algorithm is combined with sampled-data control, which effectively avoids the Zeno phenomenon. By constructing a proper Lyapunov-Krasovskii functional and using some important inequalities, a sufficient condition is obtained for RDNNs to achieve exponential synchronization. Finally, simulation results are presented to demonstrate the validity of the algorithm.
6. Zhou W, Zhang HT, Wang J. Sparse Bayesian Learning Based on Collaborative Neurodynamic Optimization. IEEE Trans Cybern 2022; 52:13669-13683. [PMID: 34260368] [DOI: 10.1109/tcyb.2021.3090204]
Abstract
Regression in the sparse Bayesian learning (SBL) framework is usually formulated as a global optimization problem with a nonconvex objective function and solved in a majorization-minimization framework, where the solution quality and consistency depend heavily on the initial values supplied to the algorithm. In view of these shortcomings, this article presents an SBL algorithm based on collaborative neurodynamic optimization (CNO) for searching for globally optimal solutions of the global optimization problem. The CNO system consists of a population of recurrent neural networks (RNNs), each convergent to a local optimum of the global optimization problem. Reinitialized repetitively via particle swarm optimization with exchanged local-optima information, the RNNs iteratively improve their search performance until reaching global convergence. The proposed CNO-based SBL algorithm is almost surely convergent to a globally optimal solution of the formulated global optimization problem. Two applications with experimental results, on sparse signal reconstruction and partial differential equation identification, are elaborated to substantiate the superiority and efficacy of the proposed method in terms of solution optimality and consistency.
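A skeleton of the CNO loop under stated assumptions: each "RNN" is emulated here by plain gradient descent to a nearby local minimum, and a particle-swarm rule repositions the population between rounds; the objective and all constants are illustrative:

```python
import numpy as np

f = lambda x: x**2 + 3 * np.sin(3 * x)            # multimodal 1-D objective (toy)
df = lambda x: 2 * x + 9 * np.cos(3 * x)

rng = np.random.default_rng(2)
pos = rng.uniform(-5, 5, size=8)                  # population of 8 "RNNs"
vel = np.zeros_like(pos)
pbest = pos.copy()

for _ in range(30):                               # CNO rounds
    for _ in range(2000):                         # local neurodynamic descent
        pos -= 1e-3 * df(pos)
    pbest = np.where(f(pos) < f(pbest), pos, pbest)
    gbest = pbest[np.argmin(f(pbest))]            # best local optimum so far
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.6 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel                               # PSO-style reinitialization

print("best x:", gbest, "f:", f(gbest))
```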
7. Leung MF, Wang J, Li D. Decentralized Robust Portfolio Optimization Based on Cooperative-Competitive Multiagent Systems. IEEE Trans Cybern 2022; 52:12785-12794. [PMID: 34260366] [DOI: 10.1109/tcyb.2021.3088884]
Abstract
This article addresses decentralized robust portfolio optimization based on multiagent systems. Decentralized robust portfolio optimization is first formulated as two distributed minimax optimization problems in a Markowitz return-risk framework. Cooperative-competitive multiagent systems are developed and applied to solve the formulated problems. The multiagent systems are shown to reach consensus on the expected stock prices and convergence in investment allocations through both intergroup and intragroup interactions. Experimental results of the multiagent systems with stock data from four major markets are elaborated to substantiate the efficacy of multiagent systems for decentralized robust portfolio optimization.
8. Zhong J, Feng Y, Tang S, Xiong J, Dai X, Zhang N. A collaborative neurodynamic optimization algorithm to traveling salesman problem. Complex Intell Syst 2022. [DOI: 10.1007/s40747-022-00884-6]
Abstract
This paper proposes a collaborative neurodynamic optimization (CNO) method for solving the traveling salesman problem (TSP). First, a Hopfield neural network (HNN) with n × n neurons is constructed for the n cities. Second, to ensure the convergence of the continuous HNN (CHNN), TSP is reformulated to satisfy the convergence condition of CHNN and then solved by CHNN. Finally, a population of CHNNs is used to search for local optimal solutions of TSP, and the globally optimal solution is obtained using particle swarm optimization. Experimental results show the effectiveness of the CNO approach for solving TSP.
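An illustrative Hopfield-Tank-style energy for TSP (the classical encoding, not the paper's convergence-guaranteeing reformulation); the weights A and B and the decoding step are assumptions, and the decoded order may need repair runs:

```python
import numpy as np

# v[i, k] ~ "city i is visited at step k"; the energy penalizes rows/columns
# that do not sum to one and adds the tour length between consecutive steps.
rng = np.random.default_rng(3)
n = 6
cities = rng.random((n, 2))
D = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)
A, B = 10.0, 1.0                   # constraint vs. tour-length weights (assumed)

def grad(v):
    g_row = 2 * A * (v.sum(axis=1, keepdims=True) - 1)   # each city once
    g_col = 2 * A * (v.sum(axis=0, keepdims=True) - 1)   # one city per step
    neigh = np.roll(v, -1, axis=1) + np.roll(v, 1, axis=1)
    return g_row + g_col + B * (D @ neigh)               # dE/dv in closed form

v = 1.0 / n + 0.1 * rng.random((n, n))    # soft assignment near uniform
for _ in range(3000):                     # projected gradient descent on E
    v = np.clip(v - 0.01 * grad(v), 0.0, 1.0)

print("step -> city:", np.argmax(v, axis=0))  # decode the visiting order
```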
9. Sun M, Li X, Zhong G. Semi-global fixed/predefined-time RNN models with comprehensive comparisons for time-variant neural computing. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07820-2]
10. Zhang H, Zeng Z. Stability and Synchronization of Nonautonomous Reaction-Diffusion Neural Networks With General Time-Varying Delays. IEEE Trans Neural Netw Learn Syst 2022; 33:5804-5817. [PMID: 33861715] [DOI: 10.1109/tnnls.2021.3071404]
Abstract
This article investigates the stability and synchronization of nonautonomous reaction-diffusion neural networks with general time-varying delays. Compared with the existing works concerning reaction-diffusion neural networks, the main innovation of this article is that the network coefficients are time-varying and the delays are general (meaning that fewer constraints are imposed on the delays; for example, the commonly used conditions of differentiability and boundedness are no longer needed). By Green's formula and some analytical techniques, some easily checkable criteria on stability and synchronization for the underlying neural networks are established. The obtained results not only improve some existing ones but also contain novel results that have not yet been reported. The effectiveness and superiority of the established criteria are verified by three numerical examples.
11. Li X, Wang J, Kwong S. Hash Bit Selection Based on Collaborative Neurodynamic Optimization. IEEE Trans Cybern 2022; 52:11144-11155. [PMID: 34415845] [DOI: 10.1109/tcyb.2021.3102941]
Abstract
Hash bit selection determines an optimal subset of hash bits from a candidate bit pool. It is formulated as a zero-one quadratic programming problem subject to binary and cardinality constraints. In this article, the problem is equivalently reformulated as a global optimization problem. A collaborative neurodynamic optimization (CNO) approach is applied to solve the problem, using a group of neurodynamic models that are iteratively reinitialized with particle swarm optimization. Lévy mutation is used in the CNO to avoid premature convergence by ensuring initial-state diversity. A theoretical proof is given to show that the CNO with the Lévy mutation operator is almost surely convergent to global optima. Experimental results are discussed to substantiate the efficacy and superiority of the CNO-based hash bit selection method over existing methods on three benchmarks.
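A schematic of the zero-one quadratic formulation with a cardinality penalty, using single-bit-flip local search as a stand-in for one neurodynamic model (in CNO, many such searches would run from swarm-chosen initial states); the redundancy and quality data are invented for illustration:

```python
import numpy as np

# Pick k of n candidate bits minimizing pairwise redundancy R[i,j] minus
# per-bit quality q[i]; the penalty term folds the cardinality constraint
# into the objective, mirroring the QUBO shape described in the abstract.
rng = np.random.default_rng(4)
n, k, mu = 12, 4, 10.0
R = rng.random((n, n)); R = (R + R.T) / 2; np.fill_diagonal(R, 0)
q = rng.random(n)

def cost(x):
    return x @ R @ x - q @ x + mu * (x.sum() - k) ** 2

x = (rng.random(n) < k / n).astype(float)     # random binary start
improved = True
while improved:                                # greedy single-bit flips
    improved = False
    for i in range(n):
        y = x.copy(); y[i] = 1 - y[i]
        if cost(y) < cost(x):
            x, improved = y, True

print("selected bits:", np.flatnonzero(x), "cost:", cost(x))
```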
12. Optimal discrete-time sliding-mode control based on recurrent neural network: a singular value approach. Soft Comput 2022. [DOI: 10.1007/s00500-022-07486-x]
13. Gao Y, Wei W, Wang X, Wang D, Li Y, Yu Q. Trajectory tracking of multi-legged robot based on model predictive and sliding mode control. Inf Sci 2022. [DOI: 10.1016/j.ins.2022.05.069]
14. Che H, Wang J, Cichocki A. Sparse signal reconstruction via collaborative neurodynamic optimization. Neural Netw 2022; 154:255-269. [PMID: 35908375] [DOI: 10.1016/j.neunet.2022.07.018]
Abstract
In this paper, we formulate a mixed-integer problem for sparse signal reconstruction and reformulate it as a global optimization problem with a surrogate objective function subject to underdetermined linear equations. We propose a sparse signal reconstruction method based on collaborative neurodynamic optimization with multiple recurrent neural networks for scattered searches and a particle swarm optimization rule for repeated repositioning. We elaborate on experimental results to demonstrate the superior performance of the proposed approach over ten state-of-the-art algorithms for sparse signal reconstruction.
Affiliations
- Hangjun Che: College of Electronic and Information Engineering and Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing 400715, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.
- Andrzej Cichocki: Skolkovo Institute of Science and Technology, Moscow 143026, Russia.
15. Su H, Qiu Q, Chen X, Zeng Z. Distributed Adaptive Containment Control for Coupled Reaction-Diffusion Neural Networks With Directed Topology. IEEE Trans Cybern 2022; 52:6320-6330. [PMID: 33284762] [DOI: 10.1109/tcyb.2020.3034634]
Abstract
In this article, we consider the problem of distributed adaptive leader-follower coordination of partial differential systems (i.e., reaction-diffusion neural networks, RDNNs) with directed communication topology in the case of multiple leaders. Unlike dynamical networks with ordinary differential dynamics, the design of adaptive protocols is more difficult because of the spatial variables and nonlinear terms in the model. Under directed networks, a novel adaptive control protocol is proposed to solve the containment control problem of RDNNs. By constructing a proper Lyapunov functional and adopting some important prior knowledge, the stability of containment for coupled RDNNs is theoretically proved. Furthermore, a corollary on leader-follower synchronization with a single leader for coupled RDNNs with directed communication topology is given. Finally, two numerical examples are provided to illustrate the obtained theoretical results.
16. Wang G, Yu D, Zhou P. Neural network interpolation operators optimized by Lagrange polynomial. Neural Netw 2022; 153:179-191. [PMID: 35728337] [DOI: 10.1016/j.neunet.2022.06.007]
Abstract
In this paper, we introduce a new type of interpolation operators by using Lagrange polynomials of degree r, which can be regarded as feedforward neural networks with four layers. The approximation rate of the new operators can be estimated by the (r+1)-th modulus of smoothness of the objective functions. By adding some smooth assumptions on the activation function, we establish two important inequalities of the derivatives of the operators. With these two inequalities, by using the K-functional and Berens-Lorentz lemma in approximation theory, we establish the converse theorem of approximation. We also give the Voronovskaja-type asymptotic estimation of the operators for smooth functions. Furthermore, we extend our operators to the multivariate case, and investigate their approximation properties for multivariate functions. Finally, some numerical examples are given to demonstrate the validity of the theoretical results obtained and the superiority of the operators.
Affiliations
- Guoshun Wang: School of Mathematics, Hangzhou Normal University, Hangzhou, Zhejiang 310036, China.
- Dansheng Yu: School of Mathematics, Hangzhou Normal University, Hangzhou, Zhejiang 310036, China.
- Ping Zhou: Department of Mathematics and Statistics, St. Francis Xavier University, Antigonish, NS B2G 2W5, Canada.
17. Xiao L, He Y, Dai J, Liu X, Liao B, Tan H. A Variable-Parameter Noise-Tolerant Zeroing Neural Network for Time-Variant Matrix Inversion With Guaranteed Robustness. IEEE Trans Neural Netw Learn Syst 2022; 33:1535-1545. [PMID: 33361003] [DOI: 10.1109/tnnls.2020.3042761]
Abstract
Matrix inversion frequently occurs in science, engineering, and related fields. Numerous matrix inversion schemes are based on the premise that the solution procedure is ideal and noise-free. However, external interference is generally ubiquitous and unavoidable in practice. An integrated-enhanced zeroing neural network (IEZNN) model has therefore been proposed to handle the time-variant matrix inversion problem under noise. However, the IEZNN model can only deal with small time-variant noise interference; with slightly larger noise, it may not converge to the theoretical solution exactly. Therefore, a variable-parameter noise-tolerant zeroing neural network (VPNTZNN) model is proposed to overcome these shortcomings. Moreover, the excellent convergence and robustness of the VPNTZNN model are rigorously analyzed and proven. Finally, compared with the original zeroing neural network (OZNN) model and the IEZNN model for matrix inversion, numerical simulations and a practical application reveal that the proposed VPNTZNN model has the best robustness under the same external noise interference.
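For orientation, a generic zeroing-neural-network flow for time-variant inversion; the article's VPNTZNN adds varying parameters and noise-tolerant activations on top of dynamics like these. With the error E(t) = A(t)X(t) − I and the design formula dE/dt = −γE, one implementable model is dX/dt = −X Ȧ X − γX(AX − I); the matrix A(t) and the gain γ below are assumed:

```python
import numpy as np

A  = lambda t: np.array([[3 + np.sin(t), np.cos(t)],
                         [-np.cos(t), 3 + np.sin(t)]])
dA = lambda t: np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])

gamma, dt, T = 50.0, 1e-4, 5.0
X = 0.3 * np.eye(2)                     # arbitrary initial state
t = 0.0
while t < T:                            # forward-Euler integration
    E = A(t) @ X - np.eye(2)            # time-variant inversion error
    X = X + dt * (-X @ dA(t) @ X - gamma * X @ E)
    t += dt

print("tracking error:", np.linalg.norm(A(T) @ X - np.eye(2)))
```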
18. Zhao Y, Liao X, He X. Novel projection neurodynamic approaches for constrained convex optimization. Neural Netw 2022; 150:336-349. [DOI: 10.1016/j.neunet.2022.03.011]
19. Wang J, Wang J, Han QL. Multivehicle Task Assignment Based on Collaborative Neurodynamic Optimization With Discrete Hopfield Networks. IEEE Trans Neural Netw Learn Syst 2021; 32:5274-5286. [PMID: 34077371] [DOI: 10.1109/tnnls.2021.3082528]
Abstract
This article presents a collaborative neurodynamic optimization (CNO) approach to multivehicle task assignments (TAs). The original combinatorial quadratic optimization problem for TA is reformulated as a quadratic unconstrained binary optimization (QUBO) problem with a quadratic utility function and a penalty function for handling load capacity and cooperation constraints. In the framework of CNO with a population of discrete Hopfield networks (DHNs), a TA algorithm is proposed for solving the formulated QUBO problem. Superior experimental results in four typical multivehicle operation scenarios are reported to substantiate the efficacy of the proposed neurodynamics-based TA approach.
20. Leung MF, Wang J. Cardinality-constrained portfolio selection based on collaborative neurodynamic optimization. Neural Netw 2021; 145:68-79. [PMID: 34735892] [DOI: 10.1016/j.neunet.2021.10.007]
Abstract
Portfolio optimization is one of the most important investment strategies in financial markets. It is practically desirable for investors, especially high-frequency traders, to consider cardinality constraints in portfolio selection to avoid odd lots and excessive costs such as transaction fees. In this paper, a collaborative neurodynamic optimization approach is presented for cardinality-constrained portfolio selection. The expected return and investment risk in the Markowitz framework are scalarized as a weighted Chebyshev function, and the cardinality constraint is equivalently represented as an upper bound using introduced binary variables. Cardinality-constrained portfolio selection is then formulated as a mixed-integer optimization problem and solved by means of collaborative neurodynamic optimization with multiple recurrent neural networks repeatedly repositioned using a particle swarm optimization rule. The distribution of the resulting Pareto-optimal solutions is also iteratively refined by optimizing the weights in the scalarized objective functions based on particle swarm optimization. Experimental results with stock data from four major world markets are discussed to substantiate the superior performance of the collaborative neurodynamic approach over several exact and metaheuristic methods.
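A sketch of the weighted Chebyshev scalarization with a cardinality cap, using random search over cardinality-feasible portfolios as a stand-in for the mixed-integer CNO solver; the returns, covariance, ideal points, and weights are invented:

```python
import numpy as np

# For portfolio x, risk = x'Sx and return = r'x are combined as
#   g(x) = max(w1 * (x'Sx - risk_ideal), w2 * (ret_ideal - r'x)),
# and at most 'card' of the n assets may carry nonzero weight.
rng = np.random.default_rng(5)
n, card = 8, 3
r = rng.uniform(0.02, 0.12, n)                    # expected returns (assumed)
M = 0.05 * rng.standard_normal((n, n))
S = M @ M.T + 0.01 * np.eye(n)                    # covariance (assumed)
w1, w2 = 0.5, 0.5
risk_ideal, ret_ideal = 0.0, r.max()

def chebyshev(x):
    return max(w1 * (x @ S @ x - risk_ideal), w2 * (ret_ideal - r @ x))

best, best_val = None, np.inf
for _ in range(20000):
    idx = rng.choice(n, size=card, replace=False)  # pick 'card' assets
    x = np.zeros(n)
    x[idx] = rng.dirichlet(np.ones(card))          # weights sum to 1
    if chebyshev(x) < best_val:
        best, best_val = x, chebyshev(x)

print("best portfolio:", np.round(best, 3), "objective:", best_val)
```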
Affiliations
- Man-Fai Leung: School of Science and Technology, Hong Kong Metropolitan University, Kowloon, Hong Kong.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.
21. Li K, Liu Q, Zeng Z. Quantized event-triggered communication based multi-agent system for distributed resource allocation optimization. Inf Sci 2021. [DOI: 10.1016/j.ins.2021.07.022]
22. Solving Mixed Variational Inequalities Via a Proximal Neurodynamic Network with Applications. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10628-1]
23. Huang W, Song Q, Zhao Z, Liu Y, Alsaadi FE. Robust stability for a class of fractional-order complex-valued projective neural networks with neutral-type delays and uncertain parameters. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.04.046]
24. Leung MF, Wang J. Minimax and Biobjective Portfolio Selection Based on Collaborative Neurodynamic Optimization. IEEE Trans Neural Netw Learn Syst 2021; 32:2825-2836. [PMID: 31902773] [DOI: 10.1109/tnnls.2019.2957105]
Abstract
Portfolio selection is one of the important issues in financial investments. This article is concerned with portfolio selection based on collaborative neurodynamic optimization. The classic Markowitz mean-variance (MV) framework and its variant mean conditional value-at-risk (CVaR) are formulated as minimax and biobjective portfolio selection problems. Neurodynamic approaches are then applied for solving these optimization problems. For each of the problems, multiple neural networks work collaboratively to characterize the efficient frontier by means of particle swarm optimization (PSO)-based weight optimization. Experimental results with stock data from four major markets show the performance and characteristics of the collaborative neurodynamic approaches to the portfolio optimization problems.
25. Zhang H, Zeng Z. Synchronization of recurrent neural networks with unbounded delays and time-varying coefficients via generalized differential inequalities. Neural Netw 2021; 143:161-170. [PMID: 34146896] [DOI: 10.1016/j.neunet.2021.05.022]
Abstract
In this paper, we revisit drive-response synchronization for a class of recurrent neural networks with unbounded delays and time-varying coefficients. Contrary to what is usual in the literature on time-varying neural networks, the signs of the self-feedback coefficients are permitted to be indefinite, or the time-varying coefficients may be unbounded. A generalized scalar delay differential inequality considering an indefinite self-feedback coefficient and unbounded delay simultaneously is established, which covers the existing result with bounded delay; the applicability of the sufficient conditions is discussed. Some novel criteria for network synchronization are then derived by constructing different candidate functions. These results improve the existing ones in several respects. A differential inequality in vector form is also derived to obtain a more refined synchronization criterion that removes some strong assumptions. Three examples are presented to verify the effectiveness and show the superiority of our theoretical results.
Affiliations
- Hao Zhang: School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China; Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan 430074, China.
- Zhigang Zeng: School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China; Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan 430074, China.
26. Wang Y, Wang J, Che H. Two-timescale neurodynamic approaches to supervised feature selection based on alternative problem formulations. Neural Netw 2021; 142:180-191. [PMID: 34020085] [DOI: 10.1016/j.neunet.2021.04.038]
Abstract
Feature selection is a crucial step in data processing and machine learning. While many greedy and sequential feature selection approaches are available, a holistic neurodynamic approach to supervised feature selection was recently developed via fractional programming, minimizing feature redundancy and maximizing relevance simultaneously. Since the gradient of the fractional objective function is also fractional, alternative problem formulations are desirable to obviate the fractional complexity. In this paper, the fractional programming formulation is equivalently reformulated as bilevel and bilinear programming problems without using any fractional function. Two two-timescale projection neural networks are adapted for solving the reformulated problems. Experimental results on six benchmark datasets are elaborated to demonstrate the global convergence and high classification performance of the proposed neurodynamic approaches in comparison with six mainstream feature selection approaches.
Affiliations
- Yadi Wang: Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng 475004, China; Institute of Data and Knowledge Engineering, School of Computer and Information Engineering, Henan University, Kaifeng 475004, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong; Shenzhen Research Institute, City University of Hong Kong, Shenzhen, Guangdong, China.
- Hangjun Che: College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China; Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing 400715, China.
27. Yang B, Hao M, Han M, Zhao X, Zong G. Exponential Stability of Discrete-Time Neural Networks With Large Delay. IEEE Trans Cybern 2021; 51:2824-2834. [PMID: 31329569] [DOI: 10.1109/tcyb.2019.2923244]
Abstract
We study the exponential stability of discrete-time neural networks (NNs) with a time-varying delay that contains a few intermittent large delays (LDs). By modeling the considered discrete-time NN as a discrete-time switched NN that contains two subsystems, one of which may be unstable over the LD periods (LDPs), switching techniques are employed to analyze the problem. Delay-dependent exponential stability conditions, which check the frequency and length of the LDs allowed for guaranteeing exponential stability, are derived by applying a novel Lyapunov-Krasovskii functional (LKF) with LDP-based terms, a Wirtinger-based summation inequality, and the reciprocally convex combination technique. Based on these conditions, associated evaluation algorithms are developed. Finally, two numerical examples are provided to demonstrate the effectiveness of the proposed method.
28. Multi-periodicity of switched neural networks with time delays and periodic external inputs under stochastic disturbances. Neural Netw 2021; 141:107-119. [PMID: 33887601] [DOI: 10.1016/j.neunet.2021.03.039]
Abstract
This paper presents new theoretical results on the multi-periodicity of recurrent neural networks with time delays evoked by periodic inputs under stochastic disturbances and state-dependent switching. Based on the geometric properties of the activation function and the switching threshold, the neuronal state space is partitioned into 5^n regions, of which 3^n are shown to be positively invariant with probability one. Furthermore, by using Itô's formula, the Lyapunov functional method, and the contraction mapping theorem, two criteria are proposed to ascertain the existence and mean-square exponential stability of a periodic orbit in every positive invariant set. As a result, the number of mean-square exponentially stable periodic orbits increases to 3^n from the 2^n of a neural network without switching. Two illustrative examples are elaborated to substantiate the efficacy and characteristics of the theoretical results.
29. Ju X, Li C, He X, Feng G. A proximal neurodynamic model for solving inverse mixed variational inequalities. Neural Netw 2021; 138:1-9. [PMID: 33610091] [DOI: 10.1016/j.neunet.2021.01.012]
Abstract
This paper proposes a proximal neurodynamic model (PNDM) for solving inverse mixed variational inequalities (IMVIs) based on the proximal operator. It is shown that the PNDM has a unique continuous solution under the condition of Lipschitz continuity (L-continuity). It is also shown that the equilibrium point of the proposed PNDM is asymptotically stable or exponentially stable under some mild conditions. Finally, three numerical examples are presented to illustrate the effectiveness of the proposed PNDM.
Affiliations
- Xingxing Ju: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
- Chuandong Li: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
- Xing He: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
- Gang Feng: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong.
30. Wang Y, Li X, Wang J. A neurodynamic optimization approach to supervised feature selection via fractional programming. Neural Netw 2021; 136:194-206. [PMID: 33497995] [DOI: 10.1016/j.neunet.2021.01.004]
Abstract
Feature selection is an important issue in machine learning and data mining. Most existing feature selection methods are greedy in nature and thus prone to sub-optimality. Though some global feature selection methods based on unsupervised redundancy minimization can potentiate clustering performance improvements, their efficacy for classification may be limited. In this paper, a neurodynamics-based holistic feature selection approach is proposed via feature redundancy minimization and relevance maximization. An information-theoretic similarity coefficient matrix is defined based on multi-information and entropy to measure feature redundancy with respect to class labels. Supervised feature selection is formulated as a fractional programming problem based on the similarity coefficients. A neurodynamic approach based on two one-layer recurrent neural networks is developed for solving the formulated feature selection problem. Experimental results with eight benchmark datasets are discussed to demonstrate the global convergence of the neural networks and the superiority of the proposed neurodynamic approach to several existing feature selection methods in terms of classification accuracy, precision, recall, and F-measure.
Affiliations
- Yadi Wang: Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng 475004, China; Institute of Data and Knowledge Engineering, School of Computer and Information Engineering, Henan University, Kaifeng 475004, China; School of Computer Science and Engineering, Southeast University, Nanjing 211189, China.
- Xiaoping Li: School of Computer Science and Engineering, Southeast University, Nanjing 211189, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 211189, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.
31. Mu G, Li L, Li X. Quasi-bipartite synchronization of signed delayed neural networks under impulsive effects. Neural Netw 2020; 129:31-42. [DOI: 10.1016/j.neunet.2020.05.012]
32. Tan Z, Li W, Xiao L, Hu Y. New Varying-Parameter ZNN Models With Finite-Time Convergence and Noise Suppression for Time-Varying Matrix Moore-Penrose Inversion. IEEE Trans Neural Netw Learn Syst 2020; 31:2980-2992. [PMID: 31536017] [DOI: 10.1109/tnnls.2019.2934734]
Abstract
This article aims to compute the Moore-Penrose inverse of time-varying full-rank matrices in the presence of various noises in real time. For this purpose, two varying-parameter zeroing neural networks (VPZNNs) are proposed. Specifically, the VPZNN-R and VPZNN-L models, based on a new design formula, are designed to solve the right and left Moore-Penrose inversion problems of time-varying full-rank matrices, respectively. The two VPZNN models are activated by two novel varying-parameter nonlinear activation functions. Detailed theoretical derivations are presented to show the desired finite-time convergence and outstanding robustness of the proposed VPZNN models under various kinds of noises. In addition, existing neural models, such as the original ZNN (OZNN) and the integration-enhanced ZNN (IEZNN), are compared with the VPZNN models. Simulation observations verify the advantages of the VPZNN models over the OZNN and IEZNN models in terms of convergence and robustness. The potential of the VPZNN models for robotic applications is then illustrated by an example of robot path tracking.
33. Xiao L, Li K, Duan M. Computing Time-Varying Quadratic Optimization With Finite-Time Convergence and Noise Tolerance: A Unified Framework for Zeroing Neural Network. IEEE Trans Neural Netw Learn Syst 2019; 30:3360-3369. [PMID: 30716052] [DOI: 10.1109/tnnls.2019.2891252]
Abstract
Zeroing neural network (ZNN), as a powerful calculating tool, is extensively applied in various computation and optimization fields. Convergence and noise-tolerance performance are always pursued and investigated in the ZNN field. Up to now, there have been no unified ZNN models that simultaneously achieve finite-time convergence and inherent noise tolerance for computing time-varying quadratic optimization problems, although this superior property is highly demanded in practical applications. In this paper, for computing time-varying quadratic optimization with finite-time convergence in the presence of various additive noises, a new framework for ZNN is designed to fill this gap in a unified manner. Specifically, different from previous design formulas that possess either finite-time convergence or noise-tolerance performance, a new design formula with both finite-time convergence and noise tolerance is proposed in a unified framework (and is thus called the unified design formula). Then, on the basis of the unified design formula, a unified ZNN (UZNN) is proposed and investigated in the unified framework of ZNN for computing time-varying quadratic optimization problems in the presence of various additive noises. In addition, theoretical analyses of the unified design formula and the UZNN model are given to guarantee finite-time convergence and inherent noise tolerance. Computer simulation results verify the superior property of the UZNN model for computing time-varying quadratic optimization problems, as compared with previously proposed ZNN models.
34. A collaborative neurodynamic approach to global and combinatorial optimization. Neural Netw 2019; 114:15-27. [DOI: 10.1016/j.neunet.2019.02.002]
35. Tang Y, Deng Z, Hong Y. Optimal Output Consensus of High-Order Multiagent Systems With Embedded Technique. IEEE Trans Cybern 2019; 49:1768-1779. [PMID: 29994166] [DOI: 10.1109/tcyb.2018.2813431]
Abstract
In this paper, we study an optimal output consensus problem for a multiagent network with agents in the form of multi-input multioutput minimum-phase dynamics. Optimal output consensus can be taken as an extended version of the existing output consensus problem for higher-order agents with an optimization requirement, where the output variables of agents are driven to achieve a consensus on the optimal solution of a global cost function. To solve this problem, we first construct an optimal signal generator, and then propose an embedded control scheme by embedding the generator in the feedback loop. We give two kinds of algorithms based on different available information along with both state feedback and output feedback, and prove that these algorithms with the embedded technique can guarantee the solvability of the problem for high-order multiagent systems under standard assumptions.
36. Zhang Y, Gong H, Yang M, Li J, Yang X. Stepsize Range and Optimal Value for Taylor-Zhang Discretization Formula Applied to Zeroing Neurodynamics Illustrated via Future Equality-Constrained Quadratic Programming. IEEE Trans Neural Netw Learn Syst 2019; 30:959-966. [PMID: 30137015] [DOI: 10.1109/tnnls.2018.2861404]
Abstract
In this brief, future equality-constrained quadratic programming (FECQP) is studied. Via a zeroing neurodynamics method, a continuous-time zeroing neurodynamics (CTZN) model is presented. By using Taylor-Zhang discretization formula to discretize the CTZN model, a Taylor-Zhang discrete-time zeroing neurodynamics (TZ-DTZN) model is presented to perform FECQP. Furthermore, we focus on the critical parameter of the TZ-DTZN model, i.e., stepsize. By theoretical analyses, we obtain an effective range of the stepsize, which guarantees the stability of the TZ-DTZN model. In addition, we further discuss the optimal value of the stepsize, which makes the TZ-DTZN model possess the optimal stability (i.e., the best stability with the fastest convergence). Finally, numerical experiments and application experiments for motion generation of a robot manipulator are conducted to verify the high precision of the TZ-DTZN model and the effective range and optimal value of the stepsize for FECQP.
37. Li C, Gao X. One-layer neural network for solving least absolute deviation problem with box and equality constraints. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.11.037]
38. Xu B, Liu Q, Huang T. A Discrete-Time Projection Neural Network for Sparse Signal Reconstruction With Application to Face Recognition. IEEE Trans Neural Netw Learn Syst 2019; 30:151-162. [PMID: 29994338] [DOI: 10.1109/tnnls.2018.2836933]
Abstract
This paper deals with sparse signal reconstruction by designing a discrete-time projection neural network. Sparse signal reconstruction can be converted into an L1-minimization problem, which can in turn be recast as the unconstrained basis pursuit denoising problem. To solve the L1-minimization problem, an iterative algorithm is proposed based on the discrete-time projection neural network, and the global convergence of the algorithm is analyzed using the Lyapunov method. Experiments on sparse signal reconstruction and several popular face datasets are conducted to illustrate the effectiveness and performance of the proposed algorithm. The experimental results show that the proposed algorithm is not only robust to different levels of sparsity and amplitude of signals and to noise pixels but also insensitive to diverse values of the scalar weight. Moreover, the step size of the proposed algorithm is close to 1/2, so a fast convergence rate is potentially attainable. Furthermore, the proposed algorithm achieves better classification performance than some other algorithms for face recognition.
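One simple instance of a discrete-time iteration for the L1 route described above, assuming iterative soft thresholding (ISTA) for basis pursuit denoising rather than the paper's exact projection network; all problem data are synthetic:

```python
import numpy as np

# Basis pursuit denoising: min_x 0.5*||Ax - y||^2 + lam*||x||_1,
# solved by gradient steps on the smooth part followed by soft thresholding.
rng = np.random.default_rng(6)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2            # step <= 1/L for convergence
x = np.zeros(n)
for _ in range(2000):
    g = A.T @ (A @ x - y)                          # gradient of the smooth part
    v = x - step * g
    x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft threshold

print("reconstruction error:", np.linalg.norm(x - x_true))
```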
39. Yang S, Liu Q, Wang J. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization. IEEE Trans Neural Netw Learn Syst 2018; 29:981-992. [PMID: 28166509] [DOI: 10.1109/tnnls.2017.2652478]
Abstract
This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.
40. Simplified neural network for generalized least absolute deviation. Neural Comput Appl 2018. [DOI: 10.1007/s00521-017-3060-2]
41. Ma Y, Ma N, Chen L, Zheng Y, Han Y. Exponential stability for the neutral-type singular neural network with time-varying delays. Int J Mach Learn Cybern 2017. [DOI: 10.1007/s13042-017-0764-7]
42. Sheng Y, Shen Y, Zhu M. Delay-Dependent Global Exponential Stability for Delayed Recurrent Neural Networks. IEEE Trans Neural Netw Learn Syst 2017; 28:2974-2984. [PMID: 27705864] [DOI: 10.1109/tnnls.2016.2608879]
Abstract
This paper deals with the global exponential stability of delayed recurrent neural networks (DRNNs). By constructing an augmented Lyapunov-Krasovskii functional and adopting the reciprocally convex combination approach and the Wirtinger-based integral inequality, delay-dependent global exponential stability criteria are derived in terms of linear matrix inequalities. Meanwhile, a general and effective method for global exponential stability analysis of DRNNs is given through a lemma, by which the exponential convergence rate can be estimated. With this lemma, some global asymptotic stability criteria of DRNNs obtained in previous studies can be generalized to global exponential stability criteria. Finally, a frequently utilized numerical example is carried out to illustrate the effectiveness and merits of the proposed theoretical results.
43. Sheng Y, Zhang H, Zeng Z. Synchronization of Reaction-Diffusion Neural Networks With Dirichlet Boundary Conditions and Infinite Delays. IEEE Trans Cybern 2017; 47:3005-3017. [PMID: 28436913] [DOI: 10.1109/tcyb.2017.2691733]
Abstract
This paper is concerned with synchronization for a class of reaction-diffusion neural networks with Dirichlet boundary conditions and infinite discrete time-varying delays. By utilizing theories of partial differential equations, Green's formula, inequality techniques, and the concept of comparison, algebraic criteria are presented to guarantee master-slave synchronization of the underlying reaction-diffusion neural networks via a designed controller. Additionally, sufficient conditions on exponential synchronization of reaction-diffusion neural networks with finite time-varying delays are established. The proposed criteria herein enhance and generalize some published ones. Three numerical examples are presented to substantiate the validity and merits of the obtained theoretical results.
44. Gao X, Li C. A new neural network for convex quadratic minimax problems with box and equality constraints. Comput Chem Eng 2017. [DOI: 10.1016/j.compchemeng.2017.03.022]
45. Le X, Wang J. A Two-Time-Scale Neurodynamic Approach to Constrained Minimax Optimization. IEEE Trans Neural Netw Learn Syst 2017; 28:620-629. [PMID: 28212073] [DOI: 10.1109/tnnls.2016.2538288]
Abstract
This paper presents a two-time-scale neurodynamic approach to constrained minimax optimization using two coupled neural networks. One of the recurrent neural networks is used for minimizing the objective function and the other for maximization. It is shown that the coupled neurodynamic systems operating on two different time scales work well for minimax optimization. The effectiveness and characteristics of the proposed approach are illustrated using several examples. Furthermore, the proposed approach is applied to H∞ model predictive control.
46. He X, Huang T, Yu J, Li C, Li C. An Inertial Projection Neural Network for Solving Variational Inequalities. IEEE Trans Cybern 2017; 47:809-814. [PMID: 26887026] [DOI: 10.1109/tcyb.2016.2523541]
Abstract
Recently, the projection neural network (PNN) was proposed for solving monotone variational inequalities (VIs) and related convex optimization problems. In this paper, by incorporating an inertial term into first-order PNNs, an inertial PNN (IPNN) is proposed for solving VIs. Under certain conditions, the IPNN is proved to be stable and can be applied to solve a broader class of constrained optimization problems related to VIs. Compared with existing neural networks (NNs), the presence of the inertial term allows us to overcome some drawbacks of the many NNs that are constructed based on the steepest descent method, and this model is more convenient for exploring different Karush-Kuhn-Tucker optimal solutions of nonconvex optimization problems. Finally, simulation results on three numerical examples show the effectiveness and performance of the proposed NN.
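A schematic inertial projection flow for a VI over a box set, assuming the second-order form ẍ + αẋ = λ(P_Ω(x − ρF(x)) − x); the operator F, the set Ω, and all parameters are illustrative, not the article's exact model:

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)
b = rng.standard_normal(4)
F = lambda x: A @ x + b                        # strongly monotone operator (toy)
proj = lambda x: np.clip(x, -1.0, 1.0)         # P_Omega for the box [-1, 1]^4

alpha, lam, rho, dt = 2.0, 5.0, 0.1, 1e-3      # damping, gain, prox step, Euler step
x, v = np.zeros(4), np.zeros(4)                # state and velocity
for _ in range(50000):
    a = -alpha * v + lam * (proj(x - rho * F(x)) - x)  # inertial dynamics
    v += dt * a
    x += dt * v

# At rest, x = P_Omega(x - rho*F(x)): the VI fixed-point condition.
print("VI residual:", np.linalg.norm(proj(x - rho * F(x)) - x))
```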
47. A novel neural network for solving convex quadratic programming problems subject to equality and inequality constraints. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.05.032]
48. Che H, Li C, He X, Huang T. A recurrent neural network for adaptive beamforming and array correction. Neural Netw 2016; 80:110-117. [DOI: 10.1016/j.neunet.2016.04.010]
49. Zhou B, Liao X, Huang T, Wang H, Chen G. Distributed multi-agent optimization with inequality constraints and random projections. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.02.064]
50. Xu J, Li C, He X, Huang T. Recurrent neural network for solving model predictive control problem in application of four-tank benchmark. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.01.020]