1. Liu N, Jia W, Qin S. A smooth gradient approximation neural network for general constrained nonsmooth nonconvex optimization problems. Neural Netw 2025;184:107121. [PMID: 39798354] [DOI: 10.1016/j.neunet.2024.107121]
Abstract
Nonsmooth nonconvex optimization problems are pivotal in engineering practice due to the inherent nonsmooth and nonconvex characteristics of many real-world complex systems and models. The nonsmoothness and nonconvexity of the objective and constraint functions bring great challenges to the design and convergence analysis of optimization algorithms. This paper presents a smooth gradient approximation neural network for such optimization problems, in which a smooth approximation technique with a time-varying control parameter is introduced for handling nonsmooth nonregular objective functions. In addition, a hard comparator function is introduced to ensure that the state solution of the proposed neural network remains within the nonconvex inequality constraint sets. Any accumulation point of the state solution of the proposed neural network is proved to be a stationary point of the nonconvex optimization problem under consideration. Furthermore, the neural network demonstrates the ability to find optimal solutions for some generalized convex optimization problems. Compared with related neural networks, the constructed neural network has weaker convergence conditions and a simpler algorithm structure. Simulation results and an application to condition-number optimization verify the practical applicability of the presented algorithm.
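
As a minimal illustration of the smoothing idea in this abstract (assuming a Huber-like approximation of |x| and an exponentially decaying control parameter; the paper's smoothing function, dynamics, and constraint handling are more general):

```python
import numpy as np

def smooth_abs_grad(x, mu):
    # Gradient of a Huber-like smoothing of |x|: sign(x) outside the
    # mu-band, linear inside, so it is defined everywhere.
    return np.sign(x) if abs(x) >= mu else x / mu

x, dt = 3.0, 1e-2
for k in range(4000):
    mu = max(1e-6, np.exp(-1e-2 * k))  # time-varying control parameter -> 0
    x -= dt * smooth_abs_grad(x, mu)   # gradient flow on the smoothed objective
print(x)  # settles near 0, the minimizer of the nonsmooth |x|
```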

Affiliation(s)
- Na Liu: School of Mathematical Sciences, Tianjin Normal University, Tianjin, China; Institute of Mathematics and Interdisciplinary Sciences, Tianjin Normal University, Tianjin, China.
- Wenwen Jia: Department of Mathematics, Southeast University, Nanjing, China.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai, China.

2. Luan L, Wen X, Xue Y, Qin S. Adaptive penalty-based neurodynamic approach for nonsmooth interval-valued optimization problem. Neural Netw 2024;176:106337. [PMID: 38688071] [DOI: 10.1016/j.neunet.2024.106337]
Abstract
The complex and diverse practical backgrounds of real-world problems drive this paper to explore a new neurodynamic approach (NA) for solving nonsmooth interval-valued optimization problems (IVOPs) constrained by interval partial order and more general sets. On the one hand, to deal with the uncertainty of interval-valued information, the LU-optimality condition of IVOPs is established through a deterministic form. On the other hand, based on the penalty method and an adaptive controller, the interval partial order constraint and the set constraint are penalized through a single adaptive parameter, which ensures the feasibility of the states while yielding a lower-dimensional solution space and avoiding the estimation of exact penalty parameters. Through nonsmooth analysis and Lyapunov theory, the proposed adaptive penalty-based neurodynamic approach (APNA) is proven to converge to an LU-solution of the considered IVOPs. Finally, the feasibility of the proposed APNA is illustrated by numerical simulations and an investment decision-making problem.
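
For orientation, an adaptive-penalty dynamic of the general kind sketched in this abstract can be written as follows (schematic form with assumed notation: P is a penalty function for the constraints, sigma the single adaptive parameter; the paper's concrete construction differs):

```latex
\dot{x}(t) \in -\partial f(x(t)) - \sigma(t)\,\partial P(x(t)), \qquad
\dot{\sigma}(t) = \gamma\, P(x(t)), \quad \gamma > 0,
```

so that sigma keeps growing while the state is infeasible (P > 0) and freezes once feasibility is reached, without any a priori estimate of an exact penalty parameter.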

Affiliation(s)
- Linhua Luan: Department of Mathematics, Harbin Institute of Technology, Weihai, China.
- Xingnan Wen: Department of Mathematics, Harbin Institute of Technology, Weihai, China.
- Yuhan Xue: School of Economics and Management, Harbin Institute of Technology, Harbin, China.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai, China.

3. Wang Y, Wang W, Pal NR. Supervised Feature Selection via Collaborative Neurodynamic Optimization. IEEE Trans Neural Netw Learn Syst 2024;35:6878-6892. [PMID: 36306292] [DOI: 10.1109/tnnls.2022.3213167]
Abstract
As a crucial part of machine learning and pattern recognition, feature selection aims at selecting a subset of the most informative features from the set of all available features. In this article, supervised feature selection is first formulated as a mixed-integer optimization problem with an objective function of weighted feature redundancy and relevancy subject to a cardinality constraint on the number of selected features. It is equivalently reformulated as a bound-constrained mixed-integer optimization problem by augmenting the objective function with a penalty function for realizing the cardinality constraint. With additional bilinear and linear equality constraints for realizing the integrality constraints, it is further reformulated as a bound-constrained biconvex optimization problem with two more penalty terms. Two collaborative neurodynamic optimization (CNO) approaches are proposed for solving the formulated and reformulated feature selection problems. One of the proposed CNO approaches uses a population of discrete-time recurrent neural networks (RNNs), and the other uses a pair of continuous-time projection networks operating concurrently on two timescales. Experimental results on 13 benchmark datasets are elaborated to substantiate the superiority of the CNO approaches over several mainstream methods in terms of average classification accuracy with three commonly used classifiers.

4. Xia Z, Liu Y, Wang J. An event-triggered collaborative neurodynamic approach to distributed global optimization. Neural Netw 2024;169:181-190. [PMID: 37890367] [DOI: 10.1016/j.neunet.2023.10.022]
Abstract
In this paper, we propose an event-triggered collaborative neurodynamic approach to distributed global optimization in the presence of nonconvexity. We design a projection neural network group consisting of multiple projection neural networks coupled via a communication network. We prove the convergence of the projection neural network group to Karush-Kuhn-Tucker points of a given global optimization problem. To reduce communication bandwidth consumption, we adopt an event-triggered mechanism to liaise with other neural networks in the group, with Zeno behavior being precluded. We employ multiple projection neural network groups for scattered searches and re-initialize their states using a meta-heuristic rule in the collaborative neurodynamic optimization framework. In addition, we apply the collaborative neurodynamic approach to distributed optimal chiller loading in a heating, ventilation, and air conditioning system.
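
A toy version of such an event-triggered broadcast rule, with assumed names and an assumed threshold schedule (not the paper's design); the strictly positive threshold floor is what precludes Zeno-like behavior in simulation:

```python
import numpy as np

def maybe_broadcast(x_i, x_last_sent, k, c0=1.0, decay=0.99, floor=1e-6):
    # Re-transmit only when the state drifts far enough from the value the
    # neighbors last received; otherwise save the communication.
    threshold = max(floor, c0 * decay**k)  # shrinking but strictly positive
    if np.linalg.norm(x_i - x_last_sent) > threshold:
        return x_i.copy(), True            # event: broadcast fresh state
    return x_last_sent, False              # no event: neighbors reuse old value
```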

Affiliation(s)
- Zicong Xia: School of Mathematical Sciences, Zhejiang Normal University, Jinhua 321004, China; School of Mathematics, Southeast University, Nanjing 210096, China.
- Yang Liu: School of Mathematical Sciences, Zhejiang Normal University, Jinhua 321004, China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua 321004, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.

5. Liu J, Liao X. A Projection Neural Network to Nonsmooth Constrained Pseudoconvex Optimization. IEEE Trans Neural Netw Learn Syst 2023;34:2001-2015. [PMID: 34464277] [DOI: 10.1109/tnnls.2021.3105732]
Abstract
In this article, a single-layer projection neural network based on a penalty function and differential inclusion is proposed to solve nonsmooth pseudoconvex optimization problems with linear equality and convex inequality constraints, where bound constraints in the inequality constraints, such as box and sphere types, are handled by a projection operator. By introducing a Tikhonov-like regularization method, the proposed neural network no longer needs to calculate exact penalty parameters. Under mild assumptions, it is proved by nonsmooth analysis that the state solution of the proposed neural network is always bounded and globally existent, enters the constrained feasible region in finite time, and never escapes from this region again. Finally, the state solution converges to an optimal solution of the considered optimization problem. Compared with some other existing neural networks based on subgradients, this algorithm eliminates the dependence on the selection of the initial point, and the model has a simple structure and low computational load. Three numerical experiments and two application examples illustrate the global convergence and effectiveness of the proposed neural network.
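
For reference, a textbook projection neural network of the family this abstract builds on has the dynamic x' = P_Omega(x - grad f(x)) - x; a minimal sketch with a box constraint set (illustrative, not the paper's penalty/differential-inclusion model):

```python
import numpy as np

def run_projection_nn(grad_f, x0, lo, hi, dt=1e-2, T=20.0):
    # Forward-Euler integration of x' = P_box(x - grad_f(x)) - x.
    x = x0.copy()
    for _ in range(int(T / dt)):
        x += dt * (np.clip(x - grad_f(x), lo, hi) - x)
    return x

# Example: minimize f(x) = ||x - c||^2 / 2 over the box [0, 1]^2.
c = np.array([1.5, -0.3])
print(run_projection_nn(lambda x: x - c, np.zeros(2), 0.0, 1.0))
# -> approximately [1.0, 0.0], the projection of c onto the box
```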

6. Hu J, Peng Y, He L, Zeng C. A Neurodynamic Approach for Solving E-Convex Interval-Valued Programming. Neural Process Lett 2023. [DOI: 10.1007/s11063-023-11154-y]

7. Talebi F, Nazemi A, Ataabadi AA. Mean-AVaR in credibilistic portfolio management via an artificial neural network scheme. J Exp Theor Artif Intell 2022. [DOI: 10.1080/0952813x.2022.2153271]

Affiliation(s)
- Fatemeh Talebi: Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
- Alireza Nazemi: Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
- Abdolmajid Abdolbaghi Ataabadi: Department of Management, Faculty of Industrial Engineering and Management, Shahrood University of Technology, Shahrood, Iran

8. Leung MF, Wang J, Li D. Decentralized Robust Portfolio Optimization Based on Cooperative-Competitive Multiagent Systems. IEEE Trans Cybern 2022;52:12785-12794. [PMID: 34260366] [DOI: 10.1109/tcyb.2021.3088884]
Abstract
This article addresses decentralized robust portfolio optimization based on multiagent systems. Decentralized robust portfolio optimization is first formulated as two distributed minimax optimization problems in a Markowitz return-risk framework. Cooperative-competitive multiagent systems are developed and applied for solving the formulated problems. The multiagent systems are shown to be able to reach consensuses in the expected stock prices and convergence in investment allocations through both intergroup and intragroup interactions. Experimental results of the multiagent systems with stock data from four major markets are elaborated to substantiate the efficacy of multiagent systems for decentralized robust portfolio optimization.

9. Wang J, Gan X. Neurodynamics-driven portfolio optimization with targeted performance criteria. Neural Netw 2022;157:404-421. [DOI: 10.1016/j.neunet.2022.10.018]

10. Liu N, Su Z, Chai Y, Qin S. Feedback Neural Network for Constrained Bi-objective Convex Optimization. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.120]

11. Che H, Wang J, Cichocki A. Sparse signal reconstruction via collaborative neurodynamic optimization. Neural Netw 2022;154:255-269. [PMID: 35908375] [DOI: 10.1016/j.neunet.2022.07.018]
Abstract
In this paper, we formulate a mixed-integer problem for sparse signal reconstruction and reformulate it as a global optimization problem with a surrogate objective function subject to underdetermined linear equations. We propose a sparse signal reconstruction method based on collaborative neurodynamic optimization with multiple recurrent neural networks for scattered searches and a particle swarm optimization rule for repeated repositioning. We elaborate on experimental results to demonstrate that the proposed approach outperforms ten state-of-the-art algorithms for sparse signal reconstruction.
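
A compact sketch of the collaborative neurodynamic optimization (CNO) loop described here, assuming simple gradient-flow networks and standard PSO coefficients (the paper's networks and surrogate objective are its own):

```python
import numpy as np

def cno_minimize(f, grad, dim, n_nets=5, rounds=20, steps=500, dt=1e-2,
                 rng=np.random.default_rng(0)):
    x = rng.standard_normal((n_nets, dim))   # scattered initial states
    v = np.zeros_like(x)
    pbest, gbest = x.copy(), x[0].copy()
    for _ in range(rounds):
        for i in range(n_nets):              # run each network to near-convergence
            for _ in range(steps):
                x[i] -= dt * grad(x[i])
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i].copy()
            if f(x[i]) < f(gbest):
                gbest = x[i].copy()
        r1, r2 = rng.random((2, n_nets, dim))  # PSO rule repositions the states
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
    return gbest
```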

Affiliation(s)
- Hangjun Che: College of Electronic and Information Engineering and Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing 400715, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.
- Andrzej Cichocki: Skolkovo Institute of Science and Technology, Moscow 143026, Russia.

12. Leung MF, Wang J, Che H. Cardinality-constrained portfolio selection based on two-timescale duplex neurodynamic optimization. Neural Netw 2022;153:399-410. [DOI: 10.1016/j.neunet.2022.06.023]

13. A One-Layer Recurrent Neural Network for Interval-Valued Optimization Problem with Linear Constraints. Neural Process Lett 2022. [DOI: 10.1007/s11063-021-10681-w]

14. Wang J, Wang J. Two-Timescale Multilayer Recurrent Neural Networks for Nonlinear Programming. IEEE Trans Neural Netw Learn Syst 2022;33:37-47. [PMID: 33108292] [DOI: 10.1109/tnnls.2020.3027471]
Abstract
This article presents a neurodynamic approach to nonlinear programming. Motivated by the idea of sequential quadratic programming, a class of two-timescale multilayer recurrent neural networks is presented, with neuronal dynamics in their output layer operating on a longer timescale than in their hidden layers. In the two-timescale multilayer recurrent neural networks, the transient states in the hidden layer(s) undergo faster dynamics than those in the output layer. Sufficient conditions are derived on the convergence of the two-timescale multilayer recurrent neural networks to local optima of nonlinear programming problems. Simulation results of collaborative neurodynamic optimization based on the two-timescale neurodynamic approach on global optimization problems with nonconvex objective functions or constraints are discussed to substantiate the efficacy of the two-timescale neurodynamic approach.
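
The two-timescale idea can be sketched as a singularly perturbed pair of dynamics (illustrative, with assumed names; the paper's layer dynamics are more specific):

```python
def two_timescale(x0, y0, f_slow, g_fast, dt=1e-3, eps=1e-2, T=10.0):
    # eps < 1 makes y (hidden-layer-like) evolve 1/eps times faster than the
    # x (output-layer-like) variable, so y tracks its x-dependent equilibrium.
    x, y = x0, y0
    for _ in range(int(T / dt)):
        y += (dt / eps) * g_fast(x, y)   # fast transient dynamics
        x += dt * f_slow(x, y)           # slow dynamics using the tracked y
    return x, y
```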

15. Liu N, Wang J, Qin S. A one-layer recurrent neural network for nonsmooth pseudoconvex optimization with quasiconvex inequality and affine equality constraints. Neural Netw 2021;147:1-9. [PMID: 34953297] [DOI: 10.1016/j.neunet.2021.12.001]
Abstract
As two important types of generalized convex functions, pseudoconvex and quasiconvex functions appear in many practical optimization problems. The lack of convexity poses some difficulties in solving pseudoconvex optimization with quasiconvex constraint functions. In this paper, we propose a one-layer recurrent neural network for solving such problems. We prove that the state of the proposed neural network is convergent from the feasible region to an optimal solution of the given optimization problem. We show that the proposed neural network has several advantages over the existing neural networks for pseudoconvex optimization. Specifically, the proposed neural network is applicable to optimization problems with quasiconvex inequality constraints as well as affine equality constraints. In addition, parameter matrix inversion is avoided and some assumptions on the objective function and inequality constraints in existing results are relaxed. We demonstrate the superior performance and characteristics of the proposed neural network with simulation results in three numerical examples.

Affiliation(s)
- Na Liu: Department of Automation, Tsinghua University, Beijing, 100084, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Hong Kong.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai, 264209, China.

16. Leung MF, Wang J. Cardinality-constrained portfolio selection based on collaborative neurodynamic optimization. Neural Netw 2021;145:68-79. [PMID: 34735892] [DOI: 10.1016/j.neunet.2021.10.007]
Abstract
Portfolio optimization is one of the most important investment strategies in financial markets. It is practically desirable for investors, especially high-frequency traders, to consider cardinality constraints in portfolio selection to avoid odd lots and excessive costs such as transaction fees. In this paper, a collaborative neurodynamic optimization approach is presented for cardinality-constrained portfolio selection. The expected return and investment risk in the Markowitz framework are scalarized as a weighted Chebyshev function, and the cardinality constraint is equivalently represented using introduced binary variables with an upper bound. Cardinality-constrained portfolio selection is then formulated as a mixed-integer optimization problem and solved by means of collaborative neurodynamic optimization with multiple recurrent neural networks repeatedly repositioned using a particle swarm optimization rule. The distribution of the resulting Pareto-optimal solutions is also iteratively refined by optimizing the weights in the scalarized objective functions based on particle swarm optimization. Experimental results with stock data from four major world markets are discussed to substantiate the superior performance of the collaborative neurodynamic approach over several exact and metaheuristic methods.
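
For context, a generic weighted Chebyshev scalarization of the return-risk pair has the form below, with r(x) the expected return, rho(x) the risk, and starred quantities the ideal values (a textbook form; the paper's exact scalarization may differ):

```latex
\min_{x \in \Omega} \; \max\Big\{\, w_1\big(r^{*} - r(x)\big),\; w_2\big(\rho(x) - \rho^{*}\big) \Big\},
\qquad w_1, w_2 \ge 0,\; w_1 + w_2 = 1.
```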

Affiliation(s)
- Man-Fai Leung: School of Science and Technology, Hong Kong Metropolitan University, Kowloon, Hong Kong
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.

17. Smoothing neural network for L0 regularized optimization problem with general convex constraints. Neural Netw 2021;143:678-689. [PMID: 34403868] [DOI: 10.1016/j.neunet.2021.08.001]
Abstract
In this paper, we propose a neural network modeled by a differential inclusion to solve a class of discontinuous and nonconvex sparse regression problems with general convex constraints, whose objective function is the sum of a convex but not necessarily differentiable loss function and L0 regularization. We construct a smoothing relaxation function of L0 regularization and propose a neural network to solve the considered problem. We prove that the solution of the proposed neural network from any initial point satisfying the linear equality constraints is globally existent and bounded, reaches the feasible region in finite time, and remains there thereafter. Moreover, the solution of the proposed neural network is its slow solution, and any accumulation point of it is a Clarke stationary point of the proposed nonconvex smoothing approximation problem. In the box-constrained case, all accumulation points of the solution share a unified lower-bound property and have a common support set. Except for a special case, any accumulation point of the solution is a local minimizer of the considered problem. In particular, the proposed neural network has a simpler structure than most existing neural networks for solving locally Lipschitz continuous but nonsmooth nonconvex problems. Finally, we give some numerical experiments to show the efficiency of the proposed neural network.

18. Leung MF, Wang J. Minimax and Biobjective Portfolio Selection Based on Collaborative Neurodynamic Optimization. IEEE Trans Neural Netw Learn Syst 2021;32:2825-2836. [PMID: 31902773] [DOI: 10.1109/tnnls.2019.2957105]
Abstract
Portfolio selection is one of the important issues in financial investments. This article is concerned with portfolio selection based on collaborative neurodynamic optimization. The classic Markowitz mean-variance (MV) framework and its variant mean conditional value-at-risk (CVaR) are formulated as minimax and biobjective portfolio selection problems. Neurodynamic approaches are then applied for solving these optimization problems. For each of the problems, multiple neural networks work collaboratively to characterize the efficient frontier by means of particle swarm optimization (PSO)-based weight optimization. Experimental results with stock data from four major markets show the performance and characteristics of the collaborative neurodynamic approaches to the portfolio optimization problems.

19. Wen X, Luan L, Qin S. A continuous-time neurodynamic approach and its discretization for distributed convex optimization over multi-agent systems. Neural Netw 2021;143:52-65. [PMID: 34087529] [DOI: 10.1016/j.neunet.2021.05.020]
Abstract
The distributed optimization problem (DOP) over multi-agent systems, which can be described as minimizing the sum of agents' local objective functions, has recently attracted widespread attention owing to its applications in diverse domains. In this paper, inspired by the penalty method and the subgradient descent method, a continuous-time neurodynamic approach is proposed for solving a DOP with inequality and set constraints. The state of the continuous-time neurodynamic approach exists globally and converges to an optimal solution of the considered DOP. Comparisons reveal that the proposed neurodynamic approach not only resolves more general convex DOPs but also has a lower-dimensional solution space. Additionally, a discretization of the neurodynamic approach is introduced for the convenience of implementation in practice. The iteration sequence of the discrete-time method also converges to an optimal solution of the DOP from any initial point. The effectiveness of the neurodynamic approach is verified by simulation examples and an application to an L1-norm minimization problem.
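
One iteration of a discretized scheme of this general kind (consensus mixing plus local subgradient and penalty steps; the matrix W, step sizes, and penalty subgradients are illustrative assumptions, not the paper's scheme):

```python
import numpy as np

def distributed_step(X, local_subgrads, penalty_subgrads, W, alpha, beta):
    # X: (n_agents, dim) stacked states; W: doubly stochastic mixing matrix.
    mixed = W @ X                                             # agree with neighbors
    G = np.array([g(x) for g, x in zip(local_subgrads, X)])   # local objective subgradients
    P = np.array([p(x) for p, x in zip(penalty_subgrads, X)]) # constraint penalties
    return mixed - alpha * G - beta * P
```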

Affiliation(s)
- Xingnan Wen: Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
- Linhua Luan: Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.

20. Wang Y, Wang J, Che H. Two-timescale neurodynamic approaches to supervised feature selection based on alternative problem formulations. Neural Netw 2021;142:180-191. [PMID: 34020085] [DOI: 10.1016/j.neunet.2021.04.038]
Abstract
Feature selection is a crucial step in data processing and machine learning. While many greedy and sequential feature selection approaches are available, a holistic neurodynamic approach to supervised feature selection was recently developed via fractional programming by minimizing feature redundancy and maximizing relevance simultaneously. Since the gradient of the fractional objective function is also fractional, alternative problem formulations are desirable to obviate the fractional complexity. In this paper, the fractional programming problem formulation is equivalently reformulated as bilevel and bilinear programming problems without using any fractional function. Two two-timescale projection neural networks are adapted for solving the reformulated problems. Experimental results on six benchmark datasets are elaborated to demonstrate the global convergence and high classification performance of the proposed neurodynamic approaches in comparison with six mainstream feature selection approaches.

Affiliation(s)
- Yadi Wang: Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng, 475004, China; Institute of Data and Knowledge Engineering, School of Computer and Information Engineering, Henan University, Kaifeng, 475004, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong; Shenzhen Research Institute, City University of Hong Kong, Shenzhen, Guangdong, China.
- Hangjun Che: College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China; Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing 400715, China.

21. Wang Y, Li X, Wang J. A neurodynamic optimization approach to supervised feature selection via fractional programming. Neural Netw 2021;136:194-206. [PMID: 33497995] [DOI: 10.1016/j.neunet.2021.01.004]
Abstract
Feature selection is an important issue in machine learning and data mining. Most existing feature selection methods are greedy in nature and thus prone to sub-optimality. Though some global feature selection methods based on unsupervised redundancy minimization can potentiate clustering performance improvements, their efficacy for classification may be limited. In this paper, a neurodynamics-based holistic feature selection approach is proposed via feature redundancy minimization and relevance maximization. An information-theoretic similarity coefficient matrix is defined based on multi-information and entropy to measure feature redundancy with respect to class labels. Supervised feature selection is formulated as a fractional programming problem based on the similarity coefficients. A neurodynamic approach based on two one-layer recurrent neural networks is developed for solving the formulated feature selection problem. Experimental results with eight benchmark datasets are discussed to demonstrate the global convergence of the neural networks and the superiority of the proposed neurodynamic approach to several existing feature selection methods in terms of classification accuracy, precision, recall, and F-measure.

Affiliation(s)
- Yadi Wang: Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng, 475004, China; Institute of Data and Knowledge Engineering, School of Computer and Information Engineering, Henan University, Kaifeng, 475004, China; School of Computer Science and Engineering, Southeast University, Nanjing, 211189, China.
- Xiaoping Li: School of Computer Science and Engineering, Southeast University, Nanjing, 211189, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, 211189, China.
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.

22. Liu S, Jiang H, Zhang L, Mei X. A neurodynamic optimization approach for complex-variables programming problem. Neural Netw 2020;129:280-287. [PMID: 32569856] [DOI: 10.1016/j.neunet.2020.06.012]
Abstract
A neural network model based on a differential inclusion is designed for solving complex-variables convex programming, and a chain rule for real-valued functions of complex variables is established in this paper. The model does not need to choose penalty parameters when applied to practical problems, which makes it easier to design. It is shown that its state reaches the feasible region in finite time. Furthermore, the convergence of its state to an optimal solution is proved. Some typical examples are presented to show the effectiveness of the designed model.

Affiliation(s)
- Shuxin Liu: Institute of Operations Research and Control Theory, School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China; College of Mathematics and Physics, Xinjiang Agricultural University, Urumqi 830052, PR China
- Haijun Jiang: College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, PR China.
- Liwei Zhang: Institute of Operations Research and Control Theory, School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China
- Xuehui Mei: College of Mathematics and System Sciences, Xinjiang University, Urumqi 830046, PR China

23. Li X, Wang J, Kwong S. A Discrete-Time Neurodynamic Approach to Sparsity-Constrained Nonnegative Matrix Factorization. Neural Comput 2020;32:1531-1562. [PMID: 32521214] [DOI: 10.1162/neco_a_01294]
Abstract
Sparsity is a desirable property in many nonnegative matrix factorization (NMF) applications. Although some level of sparseness of NMF solutions can be achieved by using regularization, the resulting sparsity depends highly on the regularization parameter, which is valued in an ad hoc way. In this letter, we formulate sparse NMF as a mixed-integer optimization problem with sparsity as binary constraints. A discrete-time projection neural network is developed for solving the formulated problem. Sufficient conditions for its stability and convergence are analytically characterized by using Lyapunov's method. Experimental results on sparse feature extraction are discussed to substantiate the superiority of this approach to extracting highly sparse features.

Affiliation(s)
- Xinqi Li: Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong, and Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China
- Jun Wang: Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong, and Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China
- Sam Kwong: Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong, and Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China

24. Xu C, Chai Y, Qin S, Wang Z, Feng J. A neurodynamic approach to nonsmooth constrained pseudoconvex optimization problem. Neural Netw 2020;124:180-192. [DOI: 10.1016/j.neunet.2019.12.015]

25. Zhu Y, Yu W, Wen G, Chen G. Projected Primal-Dual Dynamics for Distributed Constrained Nonsmooth Convex Optimization. IEEE Trans Cybern 2020;50:1776-1782. [PMID: 30530351] [DOI: 10.1109/tcyb.2018.2883095]
Abstract
A distributed nonsmooth convex optimization problem subject to a general type of constraint, including equality and inequality as well as bound constraints, is studied in this paper for a multiagent network with a fixed and connected communication topology. To collectively solve such a complex optimization problem, primal-dual dynamics with a projection operation are investigated under optimality conditions. For the nonsmooth convex optimization problem, a framework based on the LaSalle invariance principle from nonsmooth analysis is established, under which the asymptotic stability of the primal-dual dynamics at an optimal solution is guaranteed. For the case where inequality and bound constraints are not involved and the objective function is twice differentiable and strongly convex, the global exponential convergence of the primal-dual dynamics is established. Finally, two simulations are provided to verify and visualize the theoretical results.
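
A generic projected primal-dual dynamic for min f(x) subject to g(x) <= 0 and Ax = b reads as follows (smooth schematic form for orientation; the paper treats the nonsmooth case with subdifferentials and handles bound constraints through the projection P_X):

```latex
\begin{aligned}
\dot{x} &= P_X\!\big(x - \nabla f(x) - \nabla g(x)^{\top}\lambda - A^{\top}\mu\big) - x,\\
\dot{\lambda} &= \big[\lambda + g(x)\big]_{+} - \lambda,\\
\dot{\mu} &= Ax - b.
\end{aligned}
```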

26. A consensus algorithm based on collective neurodynamic system for distributed optimization with linear and bound constraints. Neural Netw 2020;122:144-151. [DOI: 10.1016/j.neunet.2019.10.008]

27. Mohammadi S, Nazemi A. On portfolio management with value at risk and uncertain returns via an artificial neural network scheme. Cogn Syst Res 2020. [DOI: 10.1016/j.cogsys.2019.09.024]

28. Liu N, Qin S. A Novel Neurodynamic Approach to Constrained Complex-Variable Pseudoconvex Optimization. IEEE Trans Cybern 2019;49:3946-3956. [PMID: 30059329] [DOI: 10.1109/tcyb.2018.2855724]
Abstract
Complex-variable pseudoconvex optimization has been widely used in numerous scientific and engineering optimization problems. A neurodynamic approach is proposed in this paper for complex-variable pseudoconvex optimization problems subject to bound and linear equality constraints. An efficient penalty function is introduced to guarantee the boundedness of the state of the presented neural network and to make the state enter the feasible region of the considered optimization problem in finite time and stay there thereafter. The state is also shown to be convergent to an optimal point of the considered optimization problem. Compared with other neurodynamic approaches, the presented neural network does not need any penalty parameters and has lower model complexity. Furthermore, some additional assumptions in other existing related neural networks, such as the assumption that the objective function is lower bounded over the equality constraint set, are also removed in this paper. Finally, some numerical examples and an application to a beamforming formulation are provided.

29. Moghaddas M, Tohidi G. A neurodynamic scheme to bi-level revenue-based centralized resource allocation models. J Intell Fuzzy Syst 2019. [DOI: 10.3233/jifs-182953]

Affiliation(s)
- Mohammad Moghaddas: Department of Mathematics, Central Tehran Branch, Islamic Azad University, Tehran, Iran
- Ghasem Tohidi: Department of Mathematics, Central Tehran Branch, Islamic Azad University, Tehran, Iran

30. Jia W, Qin S, Xue X. A generalized neural network for distributed nonsmooth optimization with inequality constraint. Neural Netw 2019;119:46-56. [PMID: 31376637] [DOI: 10.1016/j.neunet.2019.07.019]
Abstract
In this paper, a generalized neural network with a novel auxiliary function is proposed to solve a distributed nondifferentiable optimization problem over a multi-agent network. The constructed auxiliary function can ensure that the state solution of the proposed neural network is bounded and enters the inequality constraint set in finite time. Furthermore, the proposed neural network is demonstrated to reach consensus and ultimately converge to the optimal solution under several mild assumptions. Compared with the existing methods, the neural network proposed in this paper has a simple structure with a small number of state variables and does not depend on the projection operator method for constrained distributed optimization. Finally, two numerical simulations and an application in a power system are delineated to show the characteristics and practicability of the presented neural network.

Affiliation(s)
- Wenwen Jia: Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai, PR China.
- Xiaoping Xue: Department of Mathematics, Harbin Institute of Technology, Harbin, PR China.

31. A varying-gain recurrent neural-network with super exponential convergence rate for solving nonlinear time-varying systems. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.04.002]

33. Liu N, Qin S. A neurodynamic approach to nonlinear optimization problems with affine equality and convex inequality constraints. Neural Netw 2019;109:147-158. [DOI: 10.1016/j.neunet.2018.10.010]

34. Le X, Chen S, Yan Z, Xi J. A Neurodynamic Approach to Distributed Optimization With Globally Coupled Constraints. IEEE Trans Cybern 2018;48:3149-3158. [PMID: 29053459] [DOI: 10.1109/tcyb.2017.2760908]
Abstract
In this paper, a distributed neurodynamic approach is proposed for constrained convex optimization. The objective function is a sum of local convex subproblems, whereas the constraints of these subproblems are coupled. Each local objective function is minimized individually with the proposed neurodynamic optimization approach. Through information exchange between connected neighbors only, all nodes can reach consensus on the Lagrange multipliers of all global equality and inequality constraints, and the decision variables converge to the global optimum in a distributed manner. Simulation results of two power system cases are discussed to substantiate the effectiveness and characteristics of the proposed approach.

35. Neural network for nonsmooth pseudoconvex optimization with general convex constraints. Neural Netw 2018;101:1-14. [DOI: 10.1016/j.neunet.2018.01.008]

36. Maratos N, Moraitis M. Some results on the Sign recurrent neural network for unconstrained minimization. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.09.036]

37. Yang S, Liu Q, Wang J. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization. IEEE Trans Neural Netw Learn Syst 2018;29:981-992. [PMID: 28166509] [DOI: 10.1109/tnnls.2017.2652478]
Abstract
This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

38. Li S, Zhang Y, Jin L. Kinematic Control of Redundant Manipulators Using Neural Networks. IEEE Trans Neural Netw Learn Syst 2017;28:2243-2254. [PMID: 27352398] [DOI: 10.1109/tnnls.2016.2574363]
Abstract
Redundancy resolution is a critical problem in the control of robotic manipulators. Recurrent neural networks (RNNs), as inherently parallel processing models for time-sequence processing, are potentially applicable for the motion control of manipulators. However, the development of neural models for high-accuracy and real-time control is a challenging problem. This paper identifies two limitations of the existing RNN solutions for manipulator control, i.e., position error accumulation and the convex restriction on the projection set, and overcomes them by proposing two modified neural network models. Our method allows nonconvex sets for projection operations, and control error does not accumulate over time in the presence of noise. Unlike most works in which RNNs are used to process time sequences, the proposed approach is model-based and training-free, which makes it possible to achieve fast tracking of reference signals with superior robustness and accuracy. Theoretical analysis reveals the global stability of a system under the control of the proposed neural networks. Simulation results confirm the effectiveness of the proposed control method in both the position regulation and tracking control of redundant PUMA 560 manipulators.

39. Qin S, Yang X, Xue X, Song J. A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Problems With Equality and Inequality Constraints. IEEE Trans Cybern 2017;47:3063-3074. [PMID: 27244757] [DOI: 10.1109/tcyb.2016.2567449]
Abstract
The pseudoconvex optimization problem, as an important class of nonconvex optimization problems, plays an important role in scientific and engineering applications. In this paper, a recurrent one-layer neural network is proposed for solving pseudoconvex optimization problems with equality and inequality constraints. It is proved that from any initial state, the state of the proposed neural network reaches the feasible region in finite time and stays there thereafter. It is also proved that the state of the proposed neural network is convergent to an optimal solution of the related problem. Compared with related existing recurrent neural networks for pseudoconvex optimization problems, the proposed neural network does not need penalty parameters and has better convergence. Meanwhile, the proposed neural network is used to solve three nonsmooth optimization problems, with detailed comparisons against known related results. In the end, some numerical examples are provided to illustrate the effectiveness and performance of the proposed neural network.

40. Wang T, He X, Huang T, Li C, Zhang W. Collective neurodynamic optimization for economic emission dispatch problem considering valve point effect in microgrid. Neural Netw 2017;93:126-136. [DOI: 10.1016/j.neunet.2017.05.004]

41. Le X, Yan Z, Xi J. A Collective Neurodynamic System for Distributed Optimization with Applications in Model Predictive Control. IEEE Trans Emerg Top Comput Intell 2017. [DOI: 10.1109/tetci.2017.2716377]

42. Le X, Wang J. A Two-Time-Scale Neurodynamic Approach to Constrained Minimax Optimization. IEEE Trans Neural Netw Learn Syst 2017;28:620-629. [PMID: 28212073] [DOI: 10.1109/tnnls.2016.2538288]
Abstract
This paper presents a two-time-scale neurodynamic approach to constrained minimax optimization using two coupled neural networks. One of the recurrent neural networks is used for minimizing the objective function and another is used for maximization. It is shown that the coupled neurodynamic systems operating in two different time scales work well for minimax optimization. The effectiveness and characteristics of the proposed approach are illustrated using several examples. Furthermore, the proposed approach is applied for H∞ model predictive control.

44. Fan Q, Wu W, Zurada JM. Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks. SpringerPlus 2016;5:295. [PMID: 27066332] [PMCID: PMC4783325] [DOI: 10.1186/s40064-016-1931-0]
Abstract
This paper presents new theoretical results on the backpropagation algorithm with smoothing [Formula: see text] regularization and adaptive momentum for feedforward neural networks with a single hidden layer: we show that the gradient of the error function goes to zero and the weight sequence converges to a fixed point as n (the number of iteration steps) tends to infinity. Our results are more general since we do not require the error function to be quadratic or uniformly convex, and the conditions on the neuronal activation functions are relaxed. Moreover, compared with existing algorithms, our novel algorithm can produce a sparser network structure: it forces weights to become smaller during training so that they can eventually be removed after training, which simplifies the network structure and lowers operation time. Finally, two numerical experiments are presented to show the characteristics of the main results in detail.
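
A rough sketch of a batch-gradient step with a smoothed sparsity penalty and an adaptive momentum factor (the regularizer's exact form is elided in the abstract above; the |w|^(1/2)-type penalty, smoothing, and momentum rule below are assumptions for illustration only):

```python
import numpy as np

def smoothed_penalty_grad(w, mu=1e-3):
    # Gradient of a smoothed |w|^(1/2)-type penalty; mu keeps it finite at 0.
    return 0.5 * np.sign(w) / np.sqrt(np.abs(w) + mu)

def train_step(w, v, grad_loss, lr=1e-2, lam=1e-4, beta_max=0.9):
    g = grad_loss(w) + lam * smoothed_penalty_grad(w)
    beta = min(beta_max, 1.0 / (1.0 + np.linalg.norm(g)))  # adaptive momentum factor
    v = beta * v - lr * g                                  # momentum update
    return w + v, v
```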

Affiliation(s)
- Qinwei Fan: School of Science, Xi’an Polytechnic University, Xi’an 710048, People’s Republic of China; School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, People’s Republic of China; Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292, USA
- Wei Wu: School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, People’s Republic of China
- Jacek M. Zurada: Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292, USA; Spoleczna Akademia Nauk, 90-011 Lodz, Poland

45. Pwasong A, Sathasivam S. A new hybrid quadratic regression and cascade forward backpropagation neural network. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.12.034]

46. Li C, Yu X, Huang T, Chen G, He X. A Generalized Hopfield Network for Nonsmooth Constrained Convex Optimization: Lie Derivative Approach. IEEE Trans Neural Netw Learn Syst 2016;27:308-321. [PMID: 26595931] [DOI: 10.1109/tnnls.2015.2496658]
Abstract
This paper proposes a generalized Hopfield network for solving general constrained convex optimization problems. First, the existence and the uniqueness of solutions to the generalized Hopfield network in the Filippov sense are proved. Then, the Lie derivative is introduced to analyze the stability of the network using a differential inclusion. The optimality of the solution to the nonsmooth constrained optimization problems is shown to be guaranteed by the enhanced Fritz John conditions. The convergence rate of the generalized Hopfield network can be estimated by the second-order derivative of the energy function. The effectiveness of the proposed network is evaluated on several typical nonsmooth optimization problems and used to solve the hierarchical and distributed model predictive control four-tank benchmark.

47. Guo Z, Baruah SK. A Neurodynamic Approach for Real-Time Scheduling via Maximizing Piecewise Linear Utility. IEEE Trans Neural Netw Learn Syst 2016;27:238-248. [PMID: 26336153] [DOI: 10.1109/tnnls.2015.2466612]
Abstract
In this paper, we study a set of real-time scheduling problems whose objectives can be expressed as piecewise linear utility functions. This model has very wide applications in scheduling-related problems, such as mixed criticality, response time minimization, and tardiness analysis. Approximation schemes and matrix vectorization techniques are applied to transform scheduling problems into linear constraint optimization with a piecewise linear and concave objective; thus, a neural network-based optimization method can be adopted to solve such scheduling problems efficiently. This neural network model has a parallel structure and can also be implemented on circuits, on which the convergence time can be kept short enough to meet real-time requirements. Examples are provided to illustrate how to solve the optimization problem and form a schedule. An approximation ratio bound of 0.5 is further provided. Experimental studies on a large number of randomly generated sets suggest that our algorithm is optimal when the set is nonoverloaded and outperforms existing typical scheduling strategies when there is overload. Moreover, the number of steps for finding an approximate solution remains at the same level when the size of the problem (number of jobs within a set) increases.

49. Zou X, Gong D, Wang L, Chen Z. A novel method to solve inverse variational inequality problems based on neural networks. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.08.073]

50. Liu D, Wang L, Pan Y, Ma H. Mean square exponential stability for discrete-time stochastic fuzzy neural networks with mixed time-varying delay. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.06.045]