1. Liu N, Jia W, Qin S. A smooth gradient approximation neural network for general constrained nonsmooth nonconvex optimization problems. Neural Netw 2025;184:107121. PMID: 39798354; DOI: 10.1016/j.neunet.2024.107121.
Abstract
Nonsmooth nonconvex optimization problems are pivotal in engineering practice because many real-world systems and models are inherently nonsmooth and nonconvex. The nonsmoothness and nonconvexity of the objective and constraint functions pose great challenges to the design and convergence analysis of optimization algorithms. This paper presents a smooth gradient approximation neural network for such optimization problems, in which a smooth approximation technique with a time-varying control parameter handles nonsmooth, nonregular objective functions. In addition, a hard comparator function is introduced to ensure that the state solution of the proposed neural network remains within the nonconvex inequality constraint sets. Any accumulation point of the state solution is proved to be a stationary point of the nonconvex optimization under consideration. Furthermore, the neural network can find optimal solutions of some generalized convex optimization problems. Compared with related neural networks, the constructed network has weaker convergence conditions and a simpler algorithm structure. Simulation results and an application to condition-number optimization verify the practical applicability of the presented algorithm. (A toy sketch of the smoothing idea follows this entry.)
Affiliations
- Na Liu: School of Mathematical Sciences, Tianjin Normal University, Tianjin, China; Institute of Mathematics and Interdisciplinary Sciences, Tianjin Normal University, Tianjin, China.
- Wenwen Jia: Department of Mathematics, Southeast University, Nanjing, China.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai, China.
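As a loose illustration of the smoothing idea in this abstract, the sketch below (a hypothetical toy, not the paper's construction) minimizes f(x) = |x1| + |x2| by following the gradient flow of a Huber-like smoothed surrogate whose smoothing parameter decays over time; the function names and the schedule mu(t) = 1/(1+t) are illustrative assumptions.

```python
import numpy as np

# Toy sketch: smooth the nonsmooth objective f(x) = sum_i |x_i| with a
# time-varying parameter mu(t), then integrate the smoothed gradient flow.
# Huber-like smoothing (an assumption, not the paper's exact function).

def smooth_abs(x, mu):
    # ~|x| when |x| > mu, quadratic (hence differentiable) near zero
    return np.where(np.abs(x) > mu, np.abs(x) - mu / 2.0, x**2 / (2.0 * mu))

def grad_smooth_abs(x, mu):
    return np.where(np.abs(x) > mu, np.sign(x), x / mu)

x = np.array([2.0, -1.5])
dt = 1e-3
for k in range(20000):
    t = k * dt
    mu = 1.0 / (1.0 + t)                 # decaying smoothing parameter
    x = x - dt * grad_smooth_abs(x, mu)  # Euler step of the gradient flow
print(x)  # tends toward the nonsmooth minimizer at the origin
```

As mu shrinks, the smoothed gradient approaches a subgradient of |x|, which is what lets the flow settle at the nonsmooth minimizer.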
2. Luan L, Wen X, Xue Y, Qin S. Adaptive penalty-based neurodynamic approach for nonsmooth interval-valued optimization problem. Neural Netw 2024;176:106337. PMID: 38688071; DOI: 10.1016/j.neunet.2024.106337.
Abstract
The complex and diverse practical background motivates this paper to explore a new neurodynamic approach (NA) for nonsmooth interval-valued optimization problems (IVOPs) constrained by an interval partial order and more general sets. On the one hand, to deal with the uncertainty of interval-valued information, the LU-optimality condition of IVOPs is established in a deterministic form. On the other hand, based on the penalty method and an adaptive controller, the interval partial order constraint and the set constraint are penalized by a single adaptive parameter, which keeps the states feasible while yielding a lower-dimensional solution space and avoiding the estimation of exact penalty parameters. Through nonsmooth analysis and Lyapunov theory, the proposed adaptive penalty-based neurodynamic approach (APNA) is proven to converge to an LU-solution of the considered IVOPs. Finally, the feasibility of the proposed APNA is illustrated by numerical simulations and an investment decision-making problem.
Affiliations
- Linhua Luan: Department of Mathematics, Harbin Institute of Technology, Weihai, China.
- Xingnan Wen: Department of Mathematics, Harbin Institute of Technology, Weihai, China.
- Yuhan Xue: School of Economics and Management, Harbin Institute of Technology, Harbin, China.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai, China.
3. Wei L, Jin L, Luo X. A Robust Coevolutionary Neural-Based Optimization Algorithm for Constrained Nonconvex Optimization. IEEE Trans Neural Netw Learn Syst 2024;35:7778-7791. PMID: 36399592; DOI: 10.1109/tnnls.2022.3220806.
Abstract
For nonconvex optimization problems, a routine assumption is that there is no perturbation when executing the solution task. Nevertheless, dealing with perturbations in advance may increase the burden on the system and take extra time. To remedy this weakness, we propose a robust coevolutionary neural-based optimization algorithm with inherent robustness, based on hybridizing particle swarm optimization with a class of robust neural dynamics (RND). In this framework, every neural agent guided by the RND takes the place of a particle, mutually searches for the optimal solution, and stabilizes itself against different perturbations. Theoretical analysis ensures that the proposed algorithm is globally convergent with probability one. The effectiveness and robustness of the proposed approach are demonstrated through illustrative examples and comparisons with existing methods. We further apply the proposed algorithm to source localization and to manipulability optimization of a redundant manipulator, simultaneously handling internal and exogenous perturbations with satisfactory performance.
4. Wu W, Zhang Y. Zeroing Neural Network With Coefficient Functions and Adjustable Parameters for Solving Time-Variant Sylvester Equation. IEEE Trans Neural Netw Learn Syst 2024;35:6757-6766. PMID: 36256719; DOI: 10.1109/tnnls.2022.3212869.
Abstract
To solve the time-variant Sylvester equation, Li et al. proposed in 2013 the zeroing neural network with sign-bi-power function (ZNN-SBPF) model by constructing a nonlinear activation function. In this article, to further improve the convergence rate, the zeroing neural network with coefficient functions and adjustable parameters (ZNN-CFAP) model is proposed as a variation of the zeroing neural network (ZNN) model. On the basis of the introduced coefficient functions, an appropriate ZNN-CFAP model can be chosen according to the error function, and a high convergence rate can be achieved by choosing appropriate adjustable parameters. Moreover, the finite-time convergence property and an upper bound on the convergence time of the ZNN-CFAP model are proved in theory. Computer simulations and numerical experiments illustrate the efficacy and validity of the ZNN-CFAP model in solving the time-variant Sylvester equation. Comparative experiments among the ZNN-CFAP, ZNN-SBPF, and ZNN with linear function (ZNN-LF) models further substantiate the superiority of the ZNN-CFAP model in terms of convergence rate. Finally, the proposed ZNN-CFAP model is successfully applied to the tracking control of a robot manipulator to verify its practicability. (A minimal ZNN sketch follows this entry.)
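For readers new to ZNN designs, here is a hedged numerical sketch (not the ZNN-CFAP model itself: a plain design formula with constant gain gamma and linear activation; the matrices A, B, C below are made-up smooth test data): impose dE/dt = -gamma*E on the error E = A(t)X + XB(t) - C(t) and solve a Kronecker-vectorized linear system for Xdot at each step.

```python
import numpy as np

# Hedged ZNN sketch for the time-variant Sylvester equation
# A(t) X + X B(t) = C(t), using the design formula dE/dt = -gamma * E
# on the error E = A X + X B - C.

def A(t): return np.array([[2 + np.sin(t), 0.0], [0.0, 2 + np.cos(t)]])
def B(t): return (1 + 0.5 * np.sin(t)) * np.eye(2)
def C(t): return np.array([[np.sin(t), np.cos(t)], [-np.cos(t), np.sin(t)]])

def ddt(f, t, h=1e-6):  # numerical time derivative of a matrix function
    return (f(t + h) - f(t - h)) / (2 * h)

gamma, dt, X = 10.0, 1e-3, np.zeros((2, 2))
for k in range(5000):
    t = k * dt
    E = A(t) @ X + X @ B(t) - C(t)
    rhs = ddt(C, t) - ddt(A, t) @ X - X @ ddt(B, t) - gamma * E
    # vec(A Xdot + Xdot B) = (I kron A + B^T kron I) vec(Xdot), column-major
    M = np.kron(np.eye(2), A(t)) + np.kron(B(t).T, np.eye(2))
    Xdot = np.linalg.solve(M, rhs.flatten('F')).reshape(2, 2, order='F')
    X = X + dt * Xdot

t = 5000 * dt
print(np.linalg.norm(A(t) @ X + X @ B(t) - C(t)))  # residual becomes small
```

The coefficient-function idea in the paper replaces the constant gain with error-dependent functions; the skeleton above stays the same.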
5. Wu D, Zhang Y. Zhang equivalency of inequation-to-inequation type for constraints of redundant manipulators. Heliyon 2024;10:e23570. PMID: 38173488; PMCID: PMC10761789; DOI: 10.1016/j.heliyon.2023.e23570.
Abstract
In solving specific problems, physical laws and mathematical theorems directly express the connections between variables as equations or inequations. At times it can be extremely hard, or not viable, to solve these equations/inequations directly. The principle of equivalence (PE) is a pragmatic method commonly applied across multiple fields: it transforms the initial equations/inequations into simplified equivalent ones that are more manageable to solve, allowing researchers to achieve their objectives. Problem solving in many fields benefits from the use of PE. Recently, the Zhang equivalency (ZE) framework has surfaced as a promising approach to time-dependent optimization problems. This ZE framework (ZEF) consolidates constraints at different tiers, demonstrating its capacity for solving time-dependent optimization problems. To broaden the application of the ZEF to time-dependent optimization, specifically to motion planning for redundant manipulators, the authors systematically investigate the ZEF of the inequation-to-inequation type (ZEF-I2I). The study concentrates on transforming constraints (i.e., joint constraints and obstacle avoidance depicted in different tiers) into consolidated constraints, backed by rigorous mathematical derivations. The effectiveness and applicability of the ZEF-I2I are verified through two optimization motion planning schemes, which consolidate constraints in the velocity tier and the acceleration tier and are required to accomplish repetitive motion planning within constraints. The presented schemes are then reformulated as two time-dependent quadratic programming problems. Simulative experiments on a six-joint redundant manipulator confirm the outstanding effectiveness of the presented ZEF-I2I in achieving motion planning within constraints.
Affiliations
- Dongqing Wu: School of Computational Science, Zhongkai University of Agriculture and Engineering, Guangzhou 51220, Guangdong, China; Research Institute of Sun Yat-sen University in Shenzhen, Sun Yat-sen University, Shenzhen 518057, Guangdong, China; School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, Guangdong, China.
- Yunong Zhang: Research Institute of Sun Yat-sen University in Shenzhen, Sun Yat-sen University, Shenzhen 518057, Guangdong, China; School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, Guangdong, China.
6. Liu J, Liao X. A Projection Neural Network to Nonsmooth Constrained Pseudoconvex Optimization. IEEE Trans Neural Netw Learn Syst 2023;34:2001-2015. PMID: 34464277; DOI: 10.1109/tnnls.2021.3105732.
Abstract
In this article, a single-layer projection neural network based on a penalty function and differential inclusion is proposed to solve nonsmooth pseudoconvex optimization problems with linear equality and convex inequality constraints; bound constraints in the inequality constraints, such as box and sphere types, are handled by a projection operator. By introducing a Tikhonov-like regularization method, the proposed neural network no longer needs to calculate exact penalty parameters. Under mild assumptions, nonsmooth analysis shows that the state solution of the proposed neural network is bounded and globally existent, enters the constrained feasible region in finite time, and never escapes from this region again; finally, the state solution converges to an optimal solution of the considered optimization problem. Compared with some other existing neural networks based on subgradients, this algorithm eliminates the dependence on the selection of the initial point, and the model has a simple structure and low computational load. Three numerical experiments and two application examples illustrate the global convergence and effectiveness of the proposed neural network. (The box and sphere projections are sketched after this entry.)
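The box and sphere projections mentioned in the abstract are simple closed-form maps; a minimal sketch (illustrative function names, shown standalone rather than inside the network dynamics) is:

```python
import numpy as np

# Closed-form projections onto a box and a Euclidean ball, the building
# blocks such projection neural networks evaluate along their dynamics.

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def project_ball(x, center, radius):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

x = np.array([3.0, -2.0])
print(project_box(x, -1.0, 1.0))          # -> [ 1. -1.]
print(project_ball(x, np.zeros(2), 1.0))  # -> x scaled onto the unit sphere
```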
7. Liu J, Liao X, Dong JS, Mansoori A. A subgradient-based neurodynamic algorithm to constrained nonsmooth nonconvex interval-valued optimization. Neural Netw 2023;160:259-273. PMID: 36709530; DOI: 10.1016/j.neunet.2023.01.012.
Abstract
In this paper, a subgradient-based neurodynamic algorithm is presented to solve nonsmooth nonconvex interval-valued optimization problems with both partial order and linear equality constraints, where the interval-valued objective function is nonconvex and the interval-valued partial order constraint functions are convex. The designed neurodynamic system is constructed as a differential inclusion with an upper semicontinuous right-hand side, whose computational load is reduced by avoiding penalty parameter estimation and complex matrix inversion. Based on nonsmooth analysis and the extension theorem for solutions of differential inclusions, the global existence and boundedness of the state solution are obtained, as well as its asymptotic convergence to the feasible region and to the set of LU-critical points of the interval-valued nonconvex optimization problem. Several numerical experiments and applications to emergency supplies distribution and nondeterministic fractional continuous static games illustrate the applicability of the proposed neurodynamic algorithm.
Affiliations
- Jingxin Liu: College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China; School of Computing, National University of Singapore, Singapore 117417, Singapore.
- Xiaofeng Liao: Key Laboratory of Dependable Services Computing in Cyber-Physical Society (Chongqing), Ministry of Education, College of Computer, Chongqing University, Chongqing 400044, China.
- Jin-Song Dong: School of Computing, National University of Singapore, Singapore 117417, Singapore.
- Amin Mansoori: Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran; International UNESCO Center for Health Related Basic Sciences and Human Nutrition, Mashhad University of Medical Sciences, Mashhad 9177948974, Iran.
8. Chen X, Luo X, Jin L, Li S, Liu M. Growing Echo State Network With an Inverse-Free Weight Update Strategy. IEEE Trans Cybern 2023;53:753-764. PMID: 35316203; DOI: 10.1109/tcyb.2022.3155901.
Abstract
The echo state network (ESN) draws widespread attention and is applied in many scenarios. The most typical approach to training an ESN involves a matrix inversion of high computational complexity, and in the modern big data era this heavy computational burden must be addressed. To reduce the computational load, an inverse-free ESN (IFESN) is proposed for the first time in this article. Besides, an incremental IFESN is constructed to determine the network topology, with a theoretical proof of the training error's monotone decline property. Simulations and experiments are conducted on several numerical and real-world time-series benchmarks; the results indicate that the proposed model is superior to some existing models and possesses excellent practical application potential. The source code is publicly available at https://github.com/LongJin-lab/the-supplementary-file-for-CYB-E-2021-04-0944. (A conventional ridge-readout ESN is sketched after this entry for contrast.)
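For contrast with the inverse-free strategy above, a conventional ESN readout is trained by ridge regression, which is where the matrix inversion (here, a linear solve) enters. The following self-contained toy (random reservoir, sine-prediction task, all sizes and constants made up) shows that baseline, not the IFESN update itself:

```python
import numpy as np

# Conventional ESN: drive a fixed random reservoir with the input, then
# fit the linear readout by ridge regression (the inversion-bearing step).

rng = np.random.default_rng(0)
n_res, T = 100, 500
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

u = np.sin(0.1 * np.arange(T))[:, None]           # toy input series
y = np.roll(u, -1, axis=0)                        # predict the next value
X = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)              # reservoir update
    X[t] = x

lam = 1e-6                                        # ridge regularization
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
print(np.mean((X @ W_out - y) ** 2))              # small training MSE
```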
9. Yang M, Zhang Y, Tan N, Hu H. Explicit Linear Left-and-Right 5-Step Formulas With Zeroing Neural Network for Time-Varying Applications. IEEE Trans Cybern 2023;53:1133-1143. PMID: 34464284; DOI: 10.1109/tcyb.2021.3104138.
Abstract
In this article, differing from conventional time-discretization (simply called discretization) formulas, explicit linear left-and-right 5-step (ELLR5S) formulas with sixth-order precision are proposed. The general sixth-order ELLR5S formula with four variable parameters is developed first, and constraints on these four parameters are displayed to guarantee the zero stability, consistency, and convergence of the formula. Then, by choosing specific parameter values within the constraints, eight specific sixth-order ELLR5S formulas are developed. The general sixth-order ELLR5S formula is further utilized to generate discrete zeroing neural network (DZNN) models for solving time-varying linear and nonlinear systems. For comparison, three conventional discretization formulas are also utilized. Theoretical analyses show the performance of the ELLR5S formulas and DZNN models. Furthermore, abundant experiments, including three practical applications, that is, angle-of-arrival (AoA) localization and the control of two redundant manipulators (PUMA560 and Kinova), are conducted. The results substantiate the efficacy and superiority of the sixth-order ELLR5S formulas as well as the corresponding DZNN models.
10. Hu J, Peng Y, He L, Zeng C. A Neurodynamic Approach for Solving E-Convex Interval-Valued Programming. Neural Process Lett 2023. DOI: 10.1007/s11063-023-11154-y.
11. Zhou W, Zhang HT, Wang J. Sparse Bayesian Learning Based on Collaborative Neurodynamic Optimization. IEEE Trans Cybern 2022;52:13669-13683. PMID: 34260368; DOI: 10.1109/tcyb.2021.3090204.
Abstract
Regression in a sparse Bayesian learning (SBL) framework is usually formulated as a global optimization problem with a nonconvex objective function and solved in a majorization-minimization framework, where the solution quality and consistency depend heavily on the initial values of the algorithm used. In view of these shortcomings, this article presents an SBL algorithm based on collaborative neurodynamic optimization (CNO) for searching global optimal solutions to the formulated global optimization problem. The CNO system consists of a population of recurrent neural networks (RNNs), each convergent to a local optimum of the global optimization problem. Reinitialized repetitively via particle swarm optimization with exchanged local optima information, the RNNs iteratively improve their searching performance until reaching global convergence. The proposed CNO-based SBL algorithm is almost surely convergent to a global optimal solution of the formulated global optimization problem. Two applications with experimental results on sparse signal reconstruction and partial differential equation identification are elaborated to substantiate the superiority and efficacy of the proposed method in terms of solution optimality and consistency.
12. Qiu B, Li XD, Yang S. A novel discrete-time neurodynamic algorithm for future constrained quadratic programming with wheeled mobile robot control. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07757-6.
13. Liu N, Su Z, Chai Y, Qin S. Feedback Neural Network for Constrained Bi-objective Convex Optimization. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.09.120.
14. Qi Y, Jin L, Luo X, Shi Y, Liu M. Robust k-WTA Network Generation, Analysis, and Applications to Multiagent Coordination. IEEE Trans Cybern 2022;52:8515-8527. PMID: 34133299; DOI: 10.1109/tcyb.2021.3079457.
Abstract
In this article, a robust k-winner-take-all (k-WTA) neural network employing saturation-allowed activation functions is designed and investigated to perform the k-WTA operation, and is shown to possess enhanced robustness to disturbance compared with existing k-WTA neural networks. Global convergence and robustness of the proposed k-WTA neural network are demonstrated through analysis and simulations. An application studied in detail is competitive multiagent coordination and dynamic task allocation, in which k active agents (among m) are allocated to execute a tracking task while the remaining m-k agents stay static. This is implemented by adopting a distributed k-WTA network with limited communication, aided by a consensus filter. Simulation results demonstrating the system's efficacy and feasibility are presented. (The k-WTA input-output map is sketched after this entry.)
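As a quick reference, the static input-output map that a k-WTA network realizes can be sketched in a few lines (a direct sort-based evaluation, not the neural dynamics analyzed in the paper):

```python
import numpy as np

# Direct evaluation of the k-winner-take-all map: output 1 for the k
# largest inputs and 0 for the rest (ties resolved by sort order).

def kwta(u, k):
    out = np.zeros_like(u)
    out[np.argsort(u)[-k:]] = 1.0
    return out

u = np.array([0.3, 1.2, -0.5, 0.9, 0.1])
print(kwta(u, 2))  # -> [0. 1. 0. 1. 0.]; winners are 1.2 and 0.9
```

The value of the neurodynamic formulation is that it computes this map in a distributed, disturbance-tolerant way rather than by centralized sorting.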
15. Wang D, Liu XW. A gradient-type noise-tolerant finite-time neural network for convex optimization. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.01.018.
16. Wang K, Liu T, Zhang Y, Tan N. Discrete-time future nonlinear neural optimization with equality constraint based on ten-instant ZTD formula. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.03.010.
17. Wu D, Zhang Y. Discrete-time ZNN-based noise-handling ten-instant algorithm solving Yang-Baxter-like matrix equation with disturbances. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.02.068.
18. A projection-based continuous-time algorithm for distributed optimization over multi-agent systems. Complex Intell Syst 2022. DOI: 10.1007/s40747-020-00265-x.
Abstract
Multi-agent systems are widely studied due to their ability to solve complex tasks in many fields, especially in deep reinforcement learning. Recently, the distributed optimization problem over multi-agent systems has drawn much attention because of its extensive applications. This paper presents a projection-based continuous-time algorithm for solving convex distributed optimization problems with equality and inequality constraints over multi-agent systems. The distinguishing feature of such problems is that each agent, with a private local cost function and constraints, can only communicate with its neighbors, while all agents aim to cooperatively minimize the sum of the local cost functions. With the aid of the penalty method, the states of the proposed algorithm enter the equality constraint set in fixed time and ultimately converge to an optimal solution of the objective problem. In contrast to some existing approaches, the continuous-time algorithm has fewer state variables, and the verification of consensus is included in the proof of convergence. Finally, two simulations are given to show the viability of the algorithm. (A toy consensus-plus-gradient flow is sketched after this entry.)
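A minimal continuous-time sketch of the distributed setting (standard saddle-point consensus dynamics over a ring of four agents with quadratic local costs; a generic illustration, not the paper's projection-based algorithm):

```python
import numpy as np

# Each agent i holds f_i(x) = (x - a_i)^2 / 2 and talks only to its ring
# neighbors. Dynamics: xdot = -L x - L v - grad f(x), vdot = L x.
# At equilibrium L x = 0 (consensus) and the local gradients sum to zero,
# so every state reaches the global minimizer mean(a) = 2.5.

a = np.array([1.0, 2.0, 3.0, 4.0])
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])   # ring-graph Laplacian
x = np.zeros(4)
v = np.zeros(4)
dt = 1e-2
for _ in range(100000):
    x_new = x + dt * (-L @ x - L @ v - (x - a))  # consensus + gradient
    v = v + dt * (L @ x)                          # dual (disagreement) state
    x = x_new
print(x)  # all entries approach 2.5
```

The auxiliary variable v plays the role the penalty term plays in the paper: it forces exact consensus at equilibrium rather than an averaged compromise.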
19. A One-Layer Recurrent Neural Network for Interval-Valued Optimization Problem with Linear Constraints. Neural Process Lett 2022. DOI: 10.1007/s11063-021-10681-w.
20. Wang J, Wang J. Two-Timescale Multilayer Recurrent Neural Networks for Nonlinear Programming. IEEE Trans Neural Netw Learn Syst 2022;33:37-47. PMID: 33108292; DOI: 10.1109/tnnls.2020.3027471.
Abstract
This article presents a neurodynamic approach to nonlinear programming. Motivated by the idea of sequential quadratic programming, a class of two-timescale multilayer recurrent neural networks is presented, with the neuronal dynamics in the output layer operating at a larger timescale than those in the hidden layers: the transient states in the hidden layer(s) undergo faster dynamics than those in the output layer. Sufficient conditions are derived for the convergence of the two-timescale multilayer recurrent neural networks to local optima of nonlinear programming problems. Simulation results of collaborative neurodynamic optimization based on the two-timescale neurodynamic approach, on global optimization problems with nonconvex objective functions or constraints, are discussed to substantiate the efficacy of the approach.
21. Liu N, Wang J, Qin S. A one-layer recurrent neural network for nonsmooth pseudoconvex optimization with quasiconvex inequality and affine equality constraints. Neural Netw 2021;147:1-9. PMID: 34953297; DOI: 10.1016/j.neunet.2021.12.001.
Abstract
As two important types of generalized convex functions, pseudoconvex and quasiconvex functions appear in many practical optimization problems. The lack of convexity poses some difficulties in solving pseudoconvex optimization with quasiconvex constraint functions. In this paper, we propose a one-layer recurrent neural network for solving such problems. We prove that the state of the proposed neural network is convergent from the feasible region to an optimal solution of the given optimization problem. We show that the proposed neural network has several advantages over the existing neural networks for pseudoconvex optimization. Specifically, the proposed neural network is applicable to optimization problems with quasiconvex inequality constraints as well as affine equality constraints. In addition, parameter matrix inversion is avoided and some assumptions on the objective function and inequality constraints in existing results are relaxed. We demonstrate the superior performance and characteristics of the proposed neural network with simulation results in three numerical examples.
Affiliations
- Na Liu: Department of Automation, Tsinghua University, Beijing 100084, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Hong Kong.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai 264209, China.
22. Pose control of constrained redundant arm using recurrent neural networks and one-iteration computing algorithm. Appl Soft Comput 2021. DOI: 10.1016/j.asoc.2021.108007.
23. Liu B, Fu D, Qi Y, Huang H, Jin L. Noise-tolerant gradient-oriented neurodynamic model for solving the Sylvester equation. Appl Soft Comput 2021. DOI: 10.1016/j.asoc.2021.107514.
24. Smoothing neural network for L0 regularized optimization problem with general convex constraints. Neural Netw 2021;143:678-689. PMID: 34403868; DOI: 10.1016/j.neunet.2021.08.001.
Abstract
In this paper, we propose a neural network modeled by a differential inclusion to solve a class of discontinuous and nonconvex sparse regression problems with general convex constraints, whose objective function is the sum of a convex but not necessarily differentiable loss function and L0 regularization. We construct a smoothing relaxation of the L0 regularization and propose a neural network to solve the considered problem. We prove that the solution of the proposed neural network, from any initial point satisfying the linear equality constraints, exists globally, is bounded, and reaches the feasible region in finite time, remaining there thereafter. Moreover, the solution of the proposed neural network is its slow solution, and any accumulation point of it is a Clarke stationary point of the proposed nonconvex smoothing approximation problem. In the box-constrained case, all accumulation points of the solution share a unified lower-bound property and a common support set; except for a special case, any accumulation point is a local minimizer of the considered problem. In particular, the proposed neural network has a simpler structure than most existing neural networks for solving locally Lipschitz continuous but nonsmooth nonconvex problems. Finally, we give some numerical experiments to show the efficiency of the proposed neural network. (One common smoothing of the L0 penalty is sketched after this entry.)
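One common smoothing relaxation of the L0 penalty (shown here for intuition; the paper's specific smoothing function may differ) replaces the 0-1 indicator |x|_0 with theta(x, mu) = x^2/(x^2 + mu), which tends to the indicator as mu -> 0+:

```python
import numpy as np

# theta(x, mu) = x^2 / (x^2 + mu): a smooth surrogate of the L0 penalty.
# Each nonzero coordinate's value tends to 1 (and zero stays 0) as mu -> 0+.

def theta(x, mu):
    return x**2 / (x**2 + mu)

x = np.array([0.0, 0.01, 0.5, 3.0])
for mu in (1.0, 1e-2, 1e-4, 1e-8):
    print(mu, np.round(theta(x, mu), 3))
# the printed rows approach [0., 1., 1., 1.], the support indicator of x
```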
25. Real-domain QR decomposition models employing zeroing neural network and time-discretization formulas for time-varying matrices. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.03.014.
26. Mohammadi M. A Compact Neural Network for Fused Lasso Signal Approximator. IEEE Trans Cybern 2021;51:4327-4336. PMID: 31329147; DOI: 10.1109/tcyb.2019.2925707.
Abstract
The fused lasso signal approximator (FLSA) is a vital optimization problem with extensive applications in signal processing and biomedical engineering. However, the problem is difficult to solve since it is both nonsmooth and nonseparable. Existing numerical solutions involve several auxiliary variables to deal with the nondifferentiable penalty, so the resulting algorithms are both time- and memory-inefficient. This paper proposes a compact neural network to solve the FLSA. The neural network has a one-layer structure with the number of neurons proportional to the dimension of the given signal, thanks to the utilization of consecutive projections. The proposed neural network is stable in the Lyapunov sense and is guaranteed to converge globally to the optimal solution of the FLSA. Experiments on several applications from signal processing and biomedical engineering confirm the reasonable performance of the proposed neural network.
27. Wen X, Wang Y, Qin S. A nonautonomous-differential-inclusion neurodynamic approach for nonsmooth distributed optimization on multi-agent systems. Neural Comput Appl 2021. DOI: 10.1007/s00521-021-06026-2.
28. Ma L, Bian W. A Novel Multiagent Neurodynamic Approach to Constrained Distributed Convex Optimization. IEEE Trans Cybern 2021;51:1322-1333. PMID: 30892259; DOI: 10.1109/tcyb.2019.2895885.
Abstract
This paper considers a class of distributed convex optimization problems with constraints and gives a novel multiagent neurodynamic approach in continuous-time form. The considered distributed optimization is to search for a minimizer of a summation of nonsmooth convex functions held by agents with local general constraints. The proposed approach solves each agent's objective function individually, and the state solutions of all agents reach consensus asymptotically under mild assumptions. In particular, the existence and boundedness of the global state solution to the dynamical system are guaranteed. Moreover, the state solution reaches the feasible region of the equivalent optimization problem asymptotically, and the output of each agent converges to the optimal solution set of the primal distributed problem. In contrast to existing distributed methods, the proposed approach is more convenient for generally constrained distributed problems and has low structural complexity, which can reduce the required communication bandwidth. Finally, the proposed neurodynamic approach is applied to two numerical examples and a class of power-system optimal load-sharing problems to support the theoretical results and its efficiency.
29. Zhang Y, Ming L, Huang H, Chen J, Li Z. Time-varying Schur decomposition via Zhang neural dynamics. Neurocomputing 2021. DOI: 10.1016/j.neucom.2020.07.115.
30. Li W, Xiao L, Liao B. A Finite-Time Convergent and Noise-Rejection Recurrent Neural Network and Its Discretization for Dynamic Nonlinear Equations Solving. IEEE Trans Cybern 2020;50:3195-3207. PMID: 31021811; DOI: 10.1109/tcyb.2019.2906263.
Abstract
The so-called zeroing neural network (ZNN) is an effective recurrent neural network for solving dynamic problems, including dynamic nonlinear equations. Numerous unperturbed ZNN models can converge to the theoretical solution of solvable nonlinear equations in infinitely long or finite time; however, when these models are perturbed by external disturbances, their convergence performance deteriorates dramatically. To overcome this issue, this paper for the first time proposes a finite-time convergent ZNN with noise-rejection capability to endure disturbances and solve dynamic nonlinear equations in finite time. In theory, the finite-time convergence and noise-rejection properties of the finite-time convergent and noise-rejection ZNN (FTNRZNN) are rigorously proved. For potential digital hardware realization, the discrete form of the FTNRZNN model is established based on a recently developed five-step finite difference rule to guarantee high computational accuracy. The numerical results demonstrate that the discrete-time FTNRZNN can reject constant external noises; when perturbed by dynamic bounded or unbounded linear noises, it achieves the smallest steady-state errors in comparison with other discrete-time ZNN models that have no or limited ability to handle these noises. Discrete models of the FTNRZNN and the other ZNNs are comparatively applied to redundancy resolution of a robotic arm, with the superior positioning accuracy of the FTNRZNN verified. (A scalar zeroing dynamic is sketched after this entry.)
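A scalar zeroing dynamic makes the design formula concrete (hedged: constant gain, linear activation, and a made-up equation f(x,t) = x^2 - (2 + sin t) = 0; the FTNRZNN's activation and noise-rejection terms are richer than this):

```python
import numpy as np

# Zeroing design: require df/dt = -gamma * f along the trajectory, i.e.
# f_x * xdot + f_t = -gamma * f, and solve for xdot explicitly.

gamma, dt, x = 20.0, 1e-4, 1.0
steps = int(10.0 / dt)
for k in range(steps):
    t = k * dt
    f = x**2 - (2.0 + np.sin(t))      # residual to be zeroed
    f_x = 2.0 * x                     # partial derivative in x
    f_t = -np.cos(t)                  # partial derivative in t
    x = x + dt * (-(gamma * f + f_t) / f_x)

print(x, np.sqrt(2.0 + np.sin(10.0)))  # x tracks the time-varying root
```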
31. Liu S, Jiang H, Zhang L, Mei X. A neurodynamic optimization approach for complex-variables programming problem. Neural Netw 2020;129:280-287. PMID: 32569856; DOI: 10.1016/j.neunet.2020.06.012.
Abstract
A neural network model based on differential inclusion is designed for solving complex-variables convex programming, and the chain rule for real-valued functions of complex variables is established in this paper. The model does not need penalty parameters when applied to practical problems, which makes it easier to design. It is shown that its state reaches the feasible region in finite time, and the convergence of its state to an optimal solution is proved. Some typical examples show the effectiveness of the designed model. (A Wirtinger-style gradient step is sketched after this entry.)
Affiliations
- Shuxin Liu: Institute of Operations Research and Control Theory, School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China; College of Mathematics and Physics, Xinjiang Agricultural University, Urumqi 830052, PR China.
- Haijun Jiang: College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, PR China.
- Liwei Zhang: Institute of Operations Research and Control Theory, School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China.
- Xuehui Mei: College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, PR China.
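The chain rule mentioned above is, in spirit, the Wirtinger calculus for real-valued functions of complex variables. A hedged one-variable sketch of the resulting gradient step (toy cost |z - c|^2 with made-up constants, not the paper's model):

```python
# For real-valued f(z), descent follows the conjugate Wirtinger derivative:
# here f(z) = |z - c|^2 gives df/d(conj z) = z - c.

c = 1.0 + 2.0j
z = 0.0 + 0.0j
lr = 0.1
for _ in range(200):
    z = z - lr * (z - c)   # z <- z - lr * df/d(conj z)
print(z)                   # converges to c = (1+2j)
```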
32. Xia Y, Wang J, Guo W. Two Projection Neural Networks With Reduced Model Complexity for Nonlinear Programming. IEEE Trans Neural Netw Learn Syst 2020;31:2020-2029. PMID: 31425123; DOI: 10.1109/tnnls.2019.2927639.
Abstract
Recent reports show that projection neural networks with a low-dimensional state space can noticeably enhance computation speed. This paper proposes two projection neural networks with reduced model dimension and complexity (RDPNNs) for solving nonlinear programming (NP) problems. Compared with existing projection neural networks for NP, the two proposed RDPNNs have a low-dimensional state space and low model complexity. Under the condition that the Hessian matrix of the associated Lagrangian function is positive semi-definite, and positive definite at each Karush-Kuhn-Tucker point, the two RDPNNs are proven to be globally stable in the sense of Lyapunov and to converge globally to a point satisfying the reduced optimality condition of NP. Therefore, the two RDPNNs are theoretically guaranteed to solve convex NP problems and a class of nonconvex NP problems. Computed results show that the two RDPNNs have a faster computation speed than existing projection neural networks for NP problems.
33. Yu X, Wu L, Xu C, Hu Y, Ma C. A Novel Neural Network for Solving Nonsmooth Nonconvex Optimization Problems. IEEE Trans Neural Netw Learn Syst 2020;31:1475-1488. PMID: 31265412; DOI: 10.1109/tnnls.2019.2920408.
Abstract
In this paper, a novel recurrent neural network (RNN) is presented to deal with a kind of nonsmooth nonconvex optimization problem in which the objective function may be nonsmooth and nonconvex and the constraints include linear equations and convex inequalities. Under suitable assumptions, from an arbitrary initial state, each solution of the proposed RNN exists globally, is bounded, and enters the feasible region within a limited time. Moreover, the solution with an arbitrary initial state converges to the critical point set of the optimization problem. In particular, the RNN does not need: 1) a bounded feasible region; 2) the computation of an exact penalty parameter; or 3) an initial state chosen from a given bounded set. Numerical experiments are provided to show the effectiveness and advantages of the RNN.
34. Xu C, Chai Y, Qin S, Wang Z, Feng J. A neurodynamic approach to nonsmooth constrained pseudoconvex optimization problem. Neural Netw 2020;124:180-192. DOI: 10.1016/j.neunet.2019.12.015.
35. Jiang X, Qin S, Xue X. A penalty-like neurodynamic approach to constrained nonsmooth distributed convex optimization. Neurocomputing 2020. DOI: 10.1016/j.neucom.2019.10.050.
36. Chen D, Li S, Wu Q, Luo X. Super-twisting ZNN for coordinated motion control of multiple robot manipulators with external disturbances suppression. Neurocomputing 2020. DOI: 10.1016/j.neucom.2019.08.085.
37. Liu N, Qin S. A Novel Neurodynamic Approach to Constrained Complex-Variable Pseudoconvex Optimization. IEEE Trans Cybern 2019;49:3946-3956. PMID: 30059329; DOI: 10.1109/tcyb.2018.2855724.
Abstract
Complex-variable pseudoconvex optimization has been widely used in numerous scientific and engineering optimization problems. A neurodynamic approach is proposed in this paper for complex-variable pseudoconvex optimization problems subject to bound and linear equality constraints. An efficient penalty function is introduced to guarantee the boundedness of the state of the presented neural network and to make the state enter the feasible region of the considered optimization in finite time and stay there thereafter. The state is also shown to converge to an optimal point of the considered optimization. Compared with other neurodynamic approaches, the presented neural network does not need any penalty parameters and has lower model complexity. Furthermore, some additional assumptions of other existing related neural networks are removed in this paper, such as the assumption that the objective function is lower bounded over the equality constraint set. Finally, some numerical examples and an application to beamforming formulation are provided.
38. Zhang Z, Zheng L. A Complex Varying-Parameter Convergent-Differential Neural-Network for Solving Online Time-Varying Complex Sylvester Equation. IEEE Trans Cybern 2019;49:3627-3639. PMID: 29994668; DOI: 10.1109/tcyb.2018.2841970.
Abstract
A novel recurrent neural network, named the complex varying-parameter convergent-differential neural network (CVP-CDNN), is proposed in this paper for solving the time-varying complex Sylvester equation. Two kinds of CVP-CDNNs (Type I and Type II) are illustrated and proved to be effective. The proposed CVP-CDNNs can achieve super-exponential performance if the linear activation function is used. Several activation functions are examined in search of better performance, and the finite-time convergence of the CVP-CDNN with the sign-bi-power activation function is verified; its convergence time is shorter than that of the complex fixed-parameter convergent-differential neural network (CFP-CDNN). Moreover, compared with the traditional CFP-CDNN, the better convergence performance of the novel CVP-CDNN is verified by computer simulation comparisons.
39. A neurodynamic approach to compute the generalized eigenvalues of symmetric positive matrix pair. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.06.016.
40. Jia W, Qin S, Xue X. A generalized neural network for distributed nonsmooth optimization with inequality constraint. Neural Netw 2019;119:46-56. PMID: 31376637; DOI: 10.1016/j.neunet.2019.07.019.
Abstract
In this paper, a generalized neural network with a novel auxiliary function is proposed to solve distributed nondifferentiable optimization over a multi-agent network. The constructed auxiliary function ensures that the state solution of the proposed neural network is bounded and enters the inequality constraint set in finite time. Furthermore, the proposed neural network is demonstrated to reach consensus and ultimately converge to the optimal solution under several mild assumptions. Compared with existing methods, the proposed neural network has a simple structure with a small number of state variables and does not depend on projection operators for constrained distributed optimization. Finally, two numerical simulations and an application in power systems show the characteristics and practicability of the presented neural network.
Affiliations
- Wenwen Jia: Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
- Xiaoping Xue: Department of Mathematics, Harbin Institute of Technology, Harbin, PR China.
41. Mansoori A, Eshaghnezhad M, Effati S. Recurrent Neural Network Model: A New Strategy to Solve Fuzzy Matrix Games. IEEE Trans Neural Netw Learn Syst 2019;30:2538-2547. PMID: 30624230; DOI: 10.1109/tnnls.2018.2885825.
Abstract
This paper investigates fuzzy constrained matrix game (MG) problems using the concepts of recurrent neural networks (RNNs). To the best of our knowledge, this paper is the first attempt to find a solution for fuzzy game problems using RNN models. For this purpose, a fuzzy game problem is reformulated into a weighting problem, for which the Karush-Kuhn-Tucker (KKT) optimality conditions are provided; the KKT conditions are then used to propose the RNN model. Moreover, the Lyapunov stability and global convergence of the RNN model are confirmed. Finally, three illustrative examples demonstrate the effectiveness of this approach, and the obtained results are compared with those of previous approaches for solving fuzzy constrained MGs. (A classical crisp matrix game solved by linear programming is sketched after this entry for context.)
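For background, a classical (crisp) zero-sum matrix game can be solved by linear programming; the sketch below (the standard textbook LP reduction via SciPy's linprog, with a made-up 2x2 payoff matrix) computes the game value and the row player's optimal mixed strategy, the crisp analogue of what the RNN computes for fuzzy payoffs:

```python
import numpy as np
from scipy.optimize import linprog

# Shift payoffs strictly positive, then use the standard LP reduction:
# minimize sum(u) s.t. B^T u >= 1, u >= 0; value = 1/sum(u), x = u*value.

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])            # row player's payoff matrix
shift = 1.0 - A.min()                 # ensure strictly positive entries
B = A + shift
m, n = B.shape

res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n),
              bounds=[(0, None)] * m)
value = 1.0 / res.x.sum() - shift     # undo the shift
x_row = res.x / res.x.sum()           # optimal mixed strategy
print(value, x_row)                   # -> 1.5 and [0.5, 0.5]
```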
42. Yu F, Liu L, Xiao L, Li K, Cai S. A robust and fixed-time zeroing neural dynamics for computing time-variant nonlinear equation using a novel nonlinear activation function. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.03.053.
43. Kocaoğlu A. An Efficient SMO Algorithm for Solving Non-smooth Problem Arising in ε-Insensitive Support Vector Regression. Neural Process Lett 2019. DOI: 10.1007/s11063-018-09975-3.
44. Liu N, Qin S. A neurodynamic approach to nonlinear optimization problems with affine equality and convex inequality constraints. Neural Netw 2019;109:147-158. DOI: 10.1016/j.neunet.2018.10.010.
45. Chu J, Gu H, Su Y, Jing P. Towards a sparse low-rank regression model for memorability prediction of images. Neurocomputing 2018. DOI: 10.1016/j.neucom.2018.09.052.
46. Zhang Z, Zheng L, Weng J, Mao Y, Lu W, Xiao L. A New Varying-Parameter Recurrent Neural-Network for Online Solution of Time-Varying Sylvester Equation. IEEE Trans Cybern 2018;48:3135-3148. PMID: 29994381; DOI: 10.1109/tcyb.2017.2760883.
Abstract
Solving the Sylvester equation is a common algebraic problem in mathematics and control theory. Different from traditional fixed-parameter recurrent neural networks, such as gradient-based recurrent neural networks or Zhang neural networks, a novel varying-parameter recurrent neural network, called the varying-parameter convergent-differential neural network (VP-CDNN), is proposed in this paper for obtaining the online solution of the time-varying Sylvester equation. As time passes, this varying-parameter neural network can achieve super-exponential performance. Computer simulation comparisons between fixed-parameter neural networks and the proposed VP-CDNN using different kinds of activation functions demonstrate that the VP-CDNN has better convergence and robustness properties.
47. Neural network for nonsmooth pseudoconvex optimization with general convex constraints. Neural Netw 2018;101:1-14. DOI: 10.1016/j.neunet.2018.01.008.
48. Eshaghnezhad M, Effati S, Mansoori A. A Neurodynamic Model to Solve Nonlinear Pseudo-Monotone Projection Equation and Its Applications. IEEE Trans Cybern 2017;47:3050-3062. PMID: 27705876; DOI: 10.1109/tcyb.2016.2611529.
Abstract
In this paper, a neurodynamic model is given to solve the nonlinear pseudo-monotone projection equation. Under pseudo-monotonicity and Lipschitz continuity conditions, the projection neurodynamic model is proved to be stable in the sense of Lyapunov, globally convergent, globally asymptotically stable, and globally exponentially stable. We also show that the new neurodynamic model is effective for solving nonconvex optimization problems. Moreover, since monotonicity is a special case of pseudo-monotonicity, a co-coercive mapping is Lipschitz continuous and monotone, and a strongly pseudo-monotone mapping is pseudo-monotone, the neurodynamic model can be applied to broader classes of constrained optimization problems related to variational inequalities, pseudo-convex optimization, linear and nonlinear complementarity problems, and linear and convex quadratic programming. Finally, several illustrative examples demonstrate the effectiveness and efficiency of the new neurodynamic model.