1. Zheng J, Ju X, Zhang N, Xu D. A novel predefined-time neurodynamic approach for mixed variational inequality problems and applications. Neural Netw 2024;174:106247. [PMID: 38518707] [DOI: 10.1016/j.neunet.2024.106247]
Abstract
In this paper, we propose a novel neurodynamic approach with predefined-time stability that solves mixed variational inequality problems. Our approach introduces an adjustable time parameter, thereby enhancing flexibility and applicability compared with conventional fixed-time stability methods. Under certain conditions, the proposed approach converges to the unique solution within a predefined time, which sets it apart from fixed-time and finite-time stability approaches. Furthermore, our approach extends to a wide range of mathematical optimization problems, including variational inequalities, nonlinear complementarity problems, sparse signal recovery problems, and Nash equilibrium seeking problems in noncooperative games. We provide numerical simulations to validate the theoretical derivation and showcase the effectiveness and feasibility of our proposed method.
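
A minimal sketch of this family of dynamics, not the authors' exact model: a forward-backward flow whose gain grows as t approaches a prescribed time T_pre. The toy operator F, the l1 term standing in for the nonsmooth part g, and the gain schedule are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n = 20
P = rng.standard_normal((n, n))
A = P.T @ P / n + np.eye(n)            # strongly monotone linear operator
q = rng.standard_normal(n)
F = lambda x: A @ x + q                # smooth part of the MVI
gam, mu = 0.1, 0.1                     # forward-backward step and l1 weight

def residual(x):
    # zero exactly at the solution of the MVI with g = mu * ||.||_1
    return soft_threshold(x - gam * F(x), gam * mu) - x

T_pre, dt, t = 1.0, 1e-4, 0.0          # prescribed settling time (illustrative)
x = rng.standard_normal(n)
while t < 0.995 * T_pre:
    rho = 1.0 + 2.0 / (T_pre - t)      # time-varying gain blows up near T_pre
    x = x + min(dt * rho, 0.5) * residual(x)
    t += dt
print("fixed-point residual:", np.linalg.norm(residual(x)))
```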
Affiliation(s)
- Jinlan Zheng
- Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China
- Xingxing Ju
- College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China; Shaanxi Key Laboratory of Information Communication Network and Security, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121, China
- Naimin Zhang
- College of Mathematics and Physics, Wenzhou University, Wenzhou 325035, China
- Dongpo Xu
- Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun 130024, China.

2. Wang Y, Wang W, Pal NR. Supervised Feature Selection via Collaborative Neurodynamic Optimization. IEEE Trans Neural Netw Learn Syst 2024;35:6878-6892. [PMID: 36306292] [DOI: 10.1109/tnnls.2022.3213167]
Abstract
As a crucial part of machine learning and pattern recognition, feature selection aims to select a subset of the most informative features from the set of all available features. In this article, supervised feature selection is first formulated as a mixed-integer optimization problem with an objective function of weighted feature redundancy and relevancy, subject to a cardinality constraint on the number of selected features. It is equivalently reformulated as a bound-constrained mixed-integer optimization problem by augmenting the objective function with a penalty function that realizes the cardinality constraint. With additional bilinear and linear equality constraints for realizing the integrality constraints, it is further reformulated as a bound-constrained biconvex optimization problem with two more penalty terms. Two collaborative neurodynamic optimization (CNO) approaches are proposed for solving the formulated and reformulated feature selection problems. One of the proposed CNO approaches uses a population of discrete-time recurrent neural networks (RNNs), and the other uses a pair of continuous-time projection networks operating concurrently on two timescales. Experimental results on 13 benchmark datasets are elaborated to substantiate the superiority of the CNO approaches to several mainstream methods in terms of average classification accuracy with three commonly used classifiers.
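
The penalized formulation can be sketched as follows, with several stand-ins as assumptions: correlation magnitudes play the role of the redundancy/relevance measures, a quadratic penalty realizes the cardinality constraint, and brute-force enumeration replaces the neurodynamic solver at this toy scale.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 10, 200, 3                       # features, samples, cardinality
X = rng.standard_normal((m, n))
y = X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + 0.1 * rng.standard_normal(m)

C = np.corrcoef(np.column_stack([X, y]), rowvar=False)
r = np.abs(C[:n, n])                       # relevance: |corr(feature, label)|
Q = np.abs(C[:n, :n]) - np.eye(n)          # redundancy between feature pairs

rho, beta = 10.0, 1.0
def penalized(x):
    # weighted redundancy minus relevance, plus a penalty for sum(x) != k
    return x @ Q @ x - beta * (r @ x) + rho * (x.sum() - k) ** 2

best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=n)),
           key=penalized)
print("selected features:", np.flatnonzero(best))   # ideally {0, 1, 2}
```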

3. Xia Y, Wang J, Lu Z, Huang L. Two Recurrent Neural Networks With Reduced Model Complexity for Constrained l₁-Norm Optimization. IEEE Trans Neural Netw Learn Syst 2023;34:6173-6185. [PMID: 34986103] [DOI: 10.1109/tnnls.2021.3133836]
Abstract
Because of the robustness and sparsity performance of least absolute deviation (LAD, or l₁) optimization, developing effective solution methods has become an important topic. Recurrent neural networks (RNNs) are reported to be capable of effectively solving constrained l₁-norm optimization problems, but their convergence speed is limited. To accelerate the convergence, this article introduces two RNNs, in the form of continuous- and discrete-time systems, for solving l₁-norm optimization problems with linear equality and inequality constraints. The RNNs are theoretically proven to be globally convergent to optimal solutions without any condition. With reduced model complexity, the two RNNs can significantly expedite constrained l₁-norm optimization. Numerical simulation results show that the two RNNs require much less computational time than related RNNs and numerical optimization algorithms for linearly constrained l₁-norm optimization.

4. Dawn S, Bandyopadhyay S. IV-GNN: interval valued data handling using graph neural network. Appl Intell 2023;53:5697-5713. [PMID: 36845996] [PMCID: PMC9940678] [DOI: 10.1007/s10489-022-03780-1]
Abstract
Interval-valued data is an effective way to represent complex information in which uncertainty and inaccuracy are involved in the data space and are worth taking into account. Interval analysis together with neural networks has proven to work well on Euclidean data. However, in real-life scenarios, data follow a much more complex structure and are often represented as graphs, which are non-Euclidean in nature. Graph Neural Networks (GNNs) are a powerful tool for handling graph-like data with a countable feature space, so there is a research gap between interval-valued data handling approaches and existing GNN models. No model in the GNN literature can handle a graph with interval-valued features and, on the other hand, a Multilayer Perceptron (MLP) based on interval mathematics cannot process the same due to the non-Euclidean structure behind a graph. This article proposes an Interval-Valued Graph Neural Network, a novel GNN model where, for the first time, we relax the restriction of the feature space being countable without compromising the time complexity of the best-performing GNN model in the literature. Our model is much more general than existing models, as any countable set is always a subset of the universal set ℝⁿ, which is uncountable. Here, to deal with interval-valued feature vectors, we propose a new aggregation scheme of intervals and show its expressive power to capture different interval structures. We validate our theoretical findings about our model for the graph classification task by comparing its performance with those of the state-of-the-art models on several benchmark and synthetic network datasets.
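
The paper's aggregation scheme is not reproduced here; the sketch below is a generic interval message-passing layer built from standard interval arithmetic (interval sums over neighbours, the exact interval image under a linear map, and an endpointwise ReLU). The toy graph and random weights are assumptions.

```python
import numpy as np

def interval_matmul(lo, hi, W):
    # exact componentwise image of the box [lo, hi] under x -> x @ W
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return lo @ Wp + hi @ Wn, hi @ Wp + lo @ Wn

def iv_gnn_layer(A, lo, hi, W):
    # aggregate neighbour intervals by interval sum (plus a self-loop),
    # then transform and apply a monotone activation endpointwise
    agg_lo, agg_hi = A @ lo + lo, A @ hi + hi
    out_lo, out_hi = interval_matmul(agg_lo, agg_hi, W)
    return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # path graph, 3 nodes
rng = np.random.default_rng(2)
lo = rng.standard_normal((3, 4))
hi = lo + rng.uniform(0.0, 0.5, (3, 4))                 # valid intervals
W = rng.standard_normal((4, 2))
out_lo, out_hi = iv_gnn_layer(A, lo, hi, W)
assert np.all(out_lo <= out_hi)                          # intervals preserved
print(out_lo, out_hi, sep="\n")
```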
Affiliation(s)
- Sucheta Dawn
- Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India
- Sanghamitra Bandyopadhyay
- Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India

5. Ju X, Hu D, Li C, He X, Feng G. A Novel Fixed-Time Converging Neurodynamic Approach to Mixed Variational Inequalities and Applications. IEEE Trans Cybern 2022;52:12942-12953. [PMID: 34347618] [DOI: 10.1109/tcyb.2021.3093076]
Abstract
This article proposes a novel fixed-time converging forward-backward-forward neurodynamic network (FXFNN) to deal with mixed variational inequalities (MVIs). A distinctive feature of the FXFNN is its fast, fixed-time convergence, in contrast to conventional forward-backward-forward and projected neurodynamic networks. It is shown that the solution of the proposed FXFNN exists uniquely and converges to the unique solution of the corresponding MVI in fixed time under some mild conditions. It is also shown that the fixed-time convergence result obtained for the FXFNN is independent of initial conditions, unlike most of the existing asymptotic and exponential convergence results. Furthermore, the proposed FXFNN is applied to solving sparse recovery problems, variational inequalities, nonlinear complementarity problems, and min-max problems. Finally, numerical and experimental examples are presented to validate the effectiveness of the proposed neurodynamic network.
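
For orientation, here is the classical forward-backward-forward (Tseng) vector field that this network family builds on, without the fixed-time gain; the monotone but non-symmetric toy operator and the l1 proximal term are assumptions.

```python
import numpy as np

def soft(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

M = np.array([[0.1, 1.0], [-1.0, 0.1]])   # monotone, far from symmetric
q = np.array([1.0, -0.5])
F = lambda x: M @ x + q
gam, mu = 0.3, 0.2                        # step size with gam * ||M|| < 1

def fbf_direction(x):
    # Tseng's forward-backward-forward step, used here as the vector field
    y = soft(x - gam * F(x), gam * mu)
    return y - x + gam * (F(x) - F(y))

x = np.zeros(2)
for _ in range(20000):
    x = x + 0.05 * fbf_direction(x)       # explicit Euler on the flow

y = soft(x - gam * F(x), gam * mu)
print("x* ~", x, " residual:", np.linalg.norm(y - x))
```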

6. Zhao X, Zong Q, Tian B, You M. Finite-Time Dynamic Allocation and Control in Multiagent Coordination for Target Tracking. IEEE Trans Cybern 2022;52:1872-1880. [PMID: 32603302] [DOI: 10.1109/tcyb.2020.2998152]
Abstract
A new finite-time dynamic allocation and control scheme is developed in this article for multiple agents tracking a moving target. The dynamic allocation is achieved in a competitive manner by k-winners-take-all (k-WTA) selection, which can be realized by a novel finite-time dual neural network with an adaptive-gain activation function. Then, a finite-time disturbance compensation-based control law is proposed for agents to conduct the capturing task or return to the specified vigilance point. The finite-time stability of the system is guaranteed through Lyapunov analysis. Finally, the efficiency of the proposed scheme is illustrated by simulations in which the target moves faster than the trackers.
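
A minimal dual-network-style k-WTA sketch, not the paper's adaptive-gain model: a single dual variable shifts a common threshold until exactly k activations saturate at 1; the gain, step sizes, and test vector are illustrative assumptions.

```python
import numpy as np

def k_wta(u, k, alpha=100.0, dt=1e-3, eps=1e-2, steps=1000):
    # x_i saturates in [0, 1]; the dual variable y shifts a common threshold
    # until exactly k units remain at 1 (a winner-take-all competition)
    y = 0.0
    x = np.zeros_like(u)
    for _ in range(steps):
        x = np.clip(alpha * (u - y), 0.0, 1.0)
        y += (dt / eps) * (x.sum() - k)   # drive sum(x) toward k
    return x

u = np.array([0.3, 0.9, 0.1, 0.7, 0.5])
print(k_wta(u, 2))   # ~[0, 1, 0, 1, 0]: the two largest inputs win
```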

7. Wang J, Wang J, Han QL. Multivehicle Task Assignment Based on Collaborative Neurodynamic Optimization With Discrete Hopfield Networks. IEEE Trans Neural Netw Learn Syst 2021;32:5274-5286. [PMID: 34077371] [DOI: 10.1109/tnnls.2021.3082528]
Abstract
This article presents a collaborative neurodynamic optimization (CNO) approach to multivehicle task assignments (TAs). The original combinatorial quadratic optimization problem for TA is reformulated as a quadratic unconstrained binary optimization (QUBO) problem with a quadratic utility function and a penalty function for handling load capacity and cooperation constraints. In the framework of CNO with a population of discrete Hopfield networks (DHNs), a TA algorithm is proposed for solving the formulated QUBO problem. Superior experimental results in four typical multivehicle operation scenarios are reported to substantiate the efficacy of the proposed neurodynamics-based TA approach.
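
The QUBO ingredient can be sketched with an asynchronous discrete Hopfield network that descends the energy E(x) = x'Qx + c'x; the toy two-vehicle, three-task instance, the penalty weight, and the single-network setting (the paper runs a collaborating population to escape local minima) are assumptions.

```python
import numpy as np

def hopfield_qubo(Q, c, x, rng, sweeps=50):
    # asynchronous discrete Hopfield updates: each flip never increases
    # E(x) = x'Qx + c'x, so the state settles at a (local) minimum
    for _ in range(sweeps):
        for i in rng.permutation(len(x)):
            delta = Q[i, i] + 2.0 * (Q[i] @ x - Q[i, i] * x[i]) + c[i]
            x[i] = 1.0 if delta < 0 else 0.0
    return x

V, T, rho = 2, 3, 10.0                       # vehicles, tasks, penalty weight
cost = np.array([[1.0, 4.0, 2.0],
                 [3.0, 1.0, 5.0]]).ravel()   # x index = v * T + t
Q = np.zeros((V * T, V * T))
for t in range(T):                           # rho * (sum_v x_vt - 1)^2
    idx = [v * T + t for v in range(V)]
    for i in idx:
        for j in idx:
            Q[i, j] += rho
c = cost - 2.0 * rho * np.ones(V * T)        # linear part of the penalty

rng = np.random.default_rng(3)
x = hopfield_qubo(Q, c, rng.integers(0, 2, V * T).astype(float), rng)
print(x.reshape(V, T))   # exactly one vehicle per task (possibly a local min)
```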

8. Wang Y, Wang J, Che H. Two-timescale neurodynamic approaches to supervised feature selection based on alternative problem formulations. Neural Netw 2021;142:180-191. [PMID: 34020085] [DOI: 10.1016/j.neunet.2021.04.038]
Abstract
Feature selection is a crucial step in data processing and machine learning. While many greedy and sequential feature selection approaches are available, a holistic neurodynamic approach to supervised feature selection was recently developed via fractional programming, by minimizing feature redundancy and maximizing relevance simultaneously. Since the gradient of the fractional objective function is also fractional, alternative problem formulations are desirable to obviate the fractional complexity. In this paper, the fractional programming formulation is equivalently recast as bilevel and bilinear programming problems without using any fractional function. Two two-timescale projection neural networks are adapted for solving the reformulated problems. Experimental results on six benchmark datasets are elaborated to demonstrate the global convergence and high classification performance of the proposed neurodynamic approaches in comparison with six mainstream feature selection approaches.
Affiliation(s)
- Yadi Wang
- Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng, 475004, China; Institute of Data and Knowledge Engineering, School of Computer and Information Engineering, Henan University, Kaifeng, 475004, China.
- Jun Wang
- Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong; Shenzhen Research Institute, City University of Hong Kong, Shenzhen, Guangdong, China.
- Hangjun Che
- College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China; Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing 400715, China.

9. Wang Y, Li X, Wang J. A neurodynamic optimization approach to supervised feature selection via fractional programming. Neural Netw 2021;136:194-206. [PMID: 33497995] [DOI: 10.1016/j.neunet.2021.01.004]
Abstract
Feature selection is an important issue in machine learning and data mining. Most existing feature selection methods are greedy in nature and thus prone to suboptimality. Though some global feature selection methods based on unsupervised redundancy minimization can potentiate clustering performance improvements, their efficacy for classification may be limited. In this paper, a neurodynamics-based holistic feature selection approach is proposed via feature redundancy minimization and relevance maximization. An information-theoretic similarity coefficient matrix is defined based on multi-information and entropy to measure feature redundancy with respect to class labels. Supervised feature selection is formulated as a fractional programming problem based on the similarity coefficients. A neurodynamic approach based on two one-layer recurrent neural networks is developed for solving the formulated feature selection problem. Experimental results with eight benchmark datasets are discussed to demonstrate the global convergence of the neural networks and the superiority of the proposed neurodynamic approach to several existing feature selection methods in terms of classification accuracy, precision, recall, and F-measure.
Affiliation(s)
- Yadi Wang
- Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng, 475004, China; Institute of Data and Knowledge Engineering, School of Computer and Information Engineering, Henan University, Kaifeng, 475004, China; School of Computer Science and Engineering, Southeast University, Nanjing, 211189, China.
- Xiaoping Li
- School of Computer Science and Engineering, Southeast University, Nanjing, 211189, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, 211189, China.
- Jun Wang
- Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.

10. Li W, Bian W, Xue X. Projected Neural Network for a Class of Non-Lipschitz Optimization Problems With Linear Constraints. IEEE Trans Neural Netw Learn Syst 2020;31:3361-3373. [PMID: 31689212] [DOI: 10.1109/tnnls.2019.2944388]
Abstract
In this article, we consider a class of nonsmooth, nonconvex, and non-Lipschitz optimization problems, which have wide applications in sparse optimization. We generalize the Clarke stationary point and define a kind of generalized stationary point of the problems with stronger optimality properties. Based on the smoothing method, we propose a projected neural network for solving this kind of optimization problem. Under the condition that the level set of the objective function in the feasible region is bounded, we prove that the solution of the proposed neural network exists globally and is bounded. The uniqueness of the solution of the proposed network is also analyzed. When the feasible region is bounded, any accumulation point of the proposed neural network is a generalized stationary point of the optimization model. Under some suitable conditions, any solution of the proposed neural network is asymptotically convergent to a stationary point. In particular, we give a detailed analysis of the proposed network for solving a special class of non-Lipschitz optimization problems, which indicates a lower-bound property and a unified identification of the nonzero elements of all accumulation points. Finally, some numerical results are presented to show the efficiency of the proposed neural network for solving some kinds of sparse optimization models.
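
A toy sketch of the smoothing idea: the non-Lipschitz penalty |x_i|^p with p = 1/2 is replaced by (x_i^2 + mu^2)^(p/2), a projected gradient flow is run, and mu is driven toward zero. The least-squares data term, the box feasible set standing in for general linear constraints, and all step sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, p, lam = 30, 60, 0.5, 0.1
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[[3, 17, 42]] = [0.8, 0.5, 0.9]
b = A @ x_true + 0.01 * rng.standard_normal(m)

def grad_smoothed(x, mu):
    # gradient of ||Ax - b||^2 + lam * sum_i (x_i^2 + mu^2)^(p/2)
    pen = lam * p * x * (x * x + mu * mu) ** (p / 2.0 - 1.0)
    return 2.0 * A.T @ (A @ x - b) + pen

proj = lambda z: np.clip(z, 0.0, 1.0)        # feasible set: the box [0,1]^n
alpha = 2e-3                                 # inner gradient step (~1/L)

x, mu = np.full(n, 0.5), 1.0
for _ in range(20000):
    x = x + 0.5 * (proj(x - alpha * grad_smoothed(x, mu)) - x)
    mu = max(1e-6, 0.9995 * mu)              # shrink the smoothing parameter
print("support found:", np.flatnonzero(x > 0.05))
```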

11. Xu C, Chai Y, Qin S, Wang Z, Feng J. A neurodynamic approach to nonsmooth constrained pseudoconvex optimization problem. Neural Netw 2020;124:180-192. [DOI: 10.1016/j.neunet.2019.12.015]

12. Zhu Y, Yu W, Wen G, Chen G. Projected Primal-Dual Dynamics for Distributed Constrained Nonsmooth Convex Optimization. IEEE Trans Cybern 2020;50:1776-1782. [PMID: 30530351] [DOI: 10.1109/tcyb.2018.2883095]
Abstract
A distributed nonsmooth convex optimization problem subject to a general type of constraint, including equality, inequality, and bound constraints, is studied in this paper for a multiagent network with a fixed and connected communication topology. To collectively solve such a complex optimization problem, primal-dual dynamics with a projection operation are investigated under optimality conditions. For the nonsmooth convex optimization problem, a framework based on LaSalle's invariance principle from nonsmooth analysis is established, whereby the asymptotic stability of the primal-dual dynamics at an optimal solution is guaranteed. For the case where inequality and bound constraints are not involved and the objective function is twice differentiable and strongly convex, global exponential convergence of the primal-dual dynamics is established. Finally, two simulations are provided to verify and visualize the theoretical results.
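
A minimal centralized sketch of projected primal-dual dynamics, with the distributed multiagent structure omitted: projected gradient descent in the primal and gradient ascent in the dual, on an assumed toy quadratic objective with one equality constraint and box bounds.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
c = rng.standard_normal(n)
A = np.ones((1, n)); b = np.array([3.0])        # constraint: sum(x) = 3

grad = lambda x: x - c                          # f(x) = 0.5 * ||x - c||^2
proj = lambda z: np.clip(z, 0.0, 1.0)           # box part of the feasible set

x, lam, dt = np.full(n, 0.5), np.zeros(1), 1e-2
for _ in range(50000):
    x_dot = proj(x - grad(x) - A.T @ lam) - x   # projected primal descent
    lam_dot = A @ x - b                         # dual ascent on the equality
    x, lam = x + dt * x_dot, lam + dt * lam_dot

print("x* =", x.round(3), " sum(x) =", x.sum().round(4))
```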

13. Kong J, Huang J, Yu H, Deng H, Gong J, Chen H. RNN-based default logic for route planning in urban environments. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.02.012]

14. Di Marco M, Forti M, Grazzini M, Pancioni L. Multistability of delayed neural networks with hard-limiter saturation nonlinearities. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.03.006]

15. Maratos N, Moraitis M. Some results on the Sign recurrent neural network for unconstrained minimization. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.09.036]

16. Yang S, Liu Q, Wang J. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization. IEEE Trans Neural Netw Learn Syst 2018;29:981-992. [PMID: 28166509] [DOI: 10.1109/tnnls.2017.2652478]
Abstract
This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for a discretized approximation of the Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

17. Yan Z, Fan J, Wang J. A Collective Neurodynamic Approach to Constrained Global Optimization. IEEE Trans Neural Netw Learn Syst 2017;28:1206-1215. [PMID: 27046909] [DOI: 10.1109/tnnls.2016.2524619]
Abstract
Global optimization is a long-lasting research topic in the field of optimization, posing many challenging theoretical and computational issues. This paper presents a novel collective neurodynamic method for solving constrained global optimization problems. First, a one-layer recurrent neural network (RNN) is presented for searching for the Karush-Kuhn-Tucker points of the optimization problem under study. Next, a collective neurodynamic optimization approach is developed by emulating the paradigm of brainstorming. Multiple RNNs are exploited cooperatively to search for the global optimal solutions in a framework of particle swarm optimization. Each RNN carries out a precise local search and converges to a candidate solution according to its own neurodynamics. The neuronal state of each neural network is repetitively reset by exchanging historical information of each individual network and the entire group. Wavelet mutation is performed to avoid prematurity, add diversity, and promote global convergence. It is proved in the framework of stochastic optimization that the proposed collective neurodynamic approach is capable of computing the global optimal solutions with probability one, provided that a sufficiently large number of neural networks are utilized. The essence of the collective neurodynamic optimization approach lies in its potential to solve constrained global optimization problems in real time. The effectiveness and characteristics of the proposed approach are illustrated using benchmark optimization problems.
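
A compact sketch of the scheme under several simplifications: each "RNN" is approximated by a projected gradient flow, the resets follow a plain PSO rule, and Gaussian noise stands in for wavelet mutation; the Rastrigin test function is also an assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
f = lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)
gradf = lambda x: 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

N, n, lo, hi = 12, 2, -5.0, 5.0               # population of 12 "networks"
X = rng.uniform(lo, hi, (N, n))
pbest = X.copy()

for rnd in range(30):
    for _ in range(2000):                     # local search: gradient flow
        X = np.clip(X - 0.002 * gradf(X), lo, hi)
    better = f(X) < f(pbest)
    pbest[better] = X[better]                 # personal bests
    gbest = pbest[np.argmin(f(pbest))]        # group best
    r1, r2 = rng.uniform(size=(2, N, n))
    X = X + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)  # PSO-style reset
    X = np.clip(X + 0.1 * rng.standard_normal((N, n)), lo, hi)  # mutation

print("best value:", f(gbest).round(6), "at", gbest.round(3))
```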

18. Ebadi M, Hosseini A, Hosseini M. A projection type steepest descent neural network for solving a class of nonsmooth optimization problems. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.01.010]

19. Di Marco M, Forti M, Nistri P, Pancioni L. Discontinuous Neural Networks for Finite-Time Solution of Time-Dependent Linear Equations. IEEE Trans Cybern 2016;46:2509-2520. [PMID: 26441464] [DOI: 10.1109/tcyb.2015.2479118]
Abstract
This paper considers a class of nonsmooth neural networks with discontinuous hard-limiter (signum) neuron activations for solving time-dependent (TD) systems of algebraic linear equations (ALEs). The networks are defined by the subdifferential, with respect to the state variables, of an energy function given by the L₁ norm of the error between the state and the TD-ALE solution. It is shown that when the penalty parameter exceeds a quantitatively estimated threshold, the networks are able to reach in finite time, and exactly track thereafter, the target solution of the TD-ALE. Furthermore, this paper discusses the tightness of the estimated threshold and also points out key differences in the role played by this threshold with respect to networks for solving time-invariant ALEs. It is also shown that these convergence results are robust with respect to small perturbations of the neuron interconnection matrices. The dynamics of the proposed networks are rigorously studied using tools from nonsmooth analysis, the concept of the subdifferential of convex functions, and that of Filippov solutions of dynamical systems with discontinuous nonlinearities.
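
The flow below is a minimal sign-activation sketch of this idea: x' = -gamma * A' * sign(Ax - b(t)) descends the L₁ error in finite time once gamma exceeds the tracking threshold. The matrix, the sinusoidal right-hand side, and the gain are assumptions, and explicit Euler introduces small chattering around the exact-tracking manifold.

```python
import numpy as np

A = np.array([[2.0, 0.5], [0.3, 1.5]])
b = lambda t: np.array([np.sin(t), np.cos(t)])
gamma, dt = 20.0, 1e-4                 # gain chosen well above ||db/dt||

x = np.array([3.0, -2.0])
errs = []
for k in range(int(8.0 / dt)):
    t = k * dt
    x = x - dt * gamma * A.T @ np.sign(A @ x - b(t))
    errs.append(np.linalg.norm(A @ x - b(t)))

print("error at t=1:", errs[int(1.0 / dt)])   # finite-time drop to ~0
print("error at t=8:", errs[-1])              # exact tracking thereafter
```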

20. Xiao L. A nonlinearly-activated neurodynamic model and its finite-time solution to equality-constrained quadratic optimization with nonstationary coefficients. Appl Soft Comput 2016. [DOI: 10.1016/j.asoc.2015.11.023]

21. Li C, Yu X, Huang T, Chen G, He X. A Generalized Hopfield Network for Nonsmooth Constrained Convex Optimization: Lie Derivative Approach. IEEE Trans Neural Netw Learn Syst 2016;27:308-321. [PMID: 26595931] [DOI: 10.1109/tnnls.2015.2496658]
Abstract
This paper proposes a generalized Hopfield network for solving general constrained convex optimization problems. First, the existence and uniqueness of solutions to the generalized Hopfield network in the Filippov sense are proved. Then, the Lie derivative is introduced to analyze the stability of the network using a differential inclusion. The optimality of the solution to the nonsmooth constrained optimization problems is shown to be guaranteed by the enhanced Fritz John conditions. The convergence rate of the generalized Hopfield network can be estimated by the second-order derivative of the energy function. The effectiveness of the proposed network is evaluated on several typical nonsmooth optimization problems and used to solve the four-tank benchmark in hierarchical and distributed model predictive control.

22. Guo Z, Baruah SK. A Neurodynamic Approach for Real-Time Scheduling via Maximizing Piecewise Linear Utility. IEEE Trans Neural Netw Learn Syst 2016;27:238-248. [PMID: 26336153] [DOI: 10.1109/tnnls.2015.2466612]
Abstract
In this paper, we study a set of real-time scheduling problems whose objectives can be expressed as piecewise linear utility functions. This model has very wide applications in scheduling-related problems, such as mixed criticality, response time minimization, and tardiness analysis. Approximation schemes and matrix vectorization techniques are applied to transform scheduling problems into linear constraint optimization with a piecewise linear and concave objective; thus, a neural-network-based optimization method can be adopted to solve such scheduling problems efficiently. This neural network model has a parallel structure and can also be implemented in circuitry, where the convergence time can be strictly limited to meet real-time requirements. Examples are provided to illustrate how to solve the optimization problem and form a schedule. An approximation ratio bound of 0.5 is further provided. Experimental studies on a large number of randomly generated sets suggest that our algorithm is optimal when the set is not overloaded, and outperforms existing typical scheduling strategies when there is overload. Moreover, the number of steps for finding an approximate solution remains at the same level when the size of the problem (the number of jobs within a set) increases.

23. Qiao C, Jing WF, Fang J, Wang YP. The general critical analysis for continuous-time UPPAM recurrent neural networks. Neurocomputing 2016;175:40-46. [PMID: 26858512] [DOI: 10.1016/j.neucom.2015.09.103]
Abstract
The uniformly pseudo-projection-anti-monotone (UPPAM) neural network model, which can be considered a unified model of continuous-time neural networks (CNNs), includes almost all known CNN instances. Recently, studies on the critical dynamic behaviors of CNNs have drawn special attention due to their importance in both theory and applications. In this paper, we present an analysis of the UPPAM network under general critical conditions. It is shown that the UPPAM network possesses global convergence and asymptotic stability under the general critical conditions if the network satisfies a quasi-symmetry requirement on the connection matrices, which is easy to verify and apply. General critical dynamics have rarely been studied before, and this work is an attempt to gain a meaningful assurance of general critical convergence and stability of CNNs. Since the UPPAM network is the unified model for CNNs, the results obtained here generalize and extend the existing critical conclusions for individual CNN models, not to mention the non-critical cases. Moreover, the easily verified conditions for general critical convergence and stability can further promote the applications of CNNs.
Affiliation(s)
- Chen Qiao
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, P.R. China; Department of Biomedical Engineering, Tulane University, New Orleans, LA, 70118, USA
- Wen-Feng Jing
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, P.R. China
- Jian Fang
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, P.R. China; Department of Biomedical Engineering, Tulane University, New Orleans, LA, 70118, USA
- Yu-Ping Wang
- Department of Biomedical Engineering, Tulane University, New Orleans, LA, 70118, USA; Center of Genomics and Bioinformatics, Tulane University, New Orleans, LA, 70112, USA

24. Zou X, Gong D, Wang L, Chen Z. A novel method to solve inverse variational inequality problems based on neural networks. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.08.073]

25. Stanimirović PS, Zivković IS, Wei Y. Recurrent Neural Network for Computing the Drazin Inverse. IEEE Trans Neural Netw Learn Syst 2015;26:2830-2843. [PMID: 25706892] [DOI: 10.1109/tnnls.2015.2397551]
Abstract
This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. The RNN is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved, and computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient to implement in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN, as well as its convergence toward the Drazin inverse, are considered. In addition, illustrative examples and applications to practical engineering problems are discussed to show the efficacy of the proposed neural network.

26. Qin S, Xue X. A two-layer recurrent neural network for nonsmooth convex optimization problems. IEEE Trans Neural Netw Learn Syst 2015;26:1149-1160. [PMID: 25051563] [DOI: 10.1109/tnnls.2014.2334364]
Abstract
In this paper, a two-layer recurrent neural network is proposed to solve the nonsmooth convex optimization problem subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush-Kuhn-Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov. Moreover, from any initial point, the state is proved to be convergent to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L₁-norm minimization problems.

27. Gu S, Cui R. An efficient algorithm for the subset sum problem based on finite-time convergent recurrent neural network. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2013.12.063]

28. Guo Z, Wang J, Yan Z. Passivity and passification of memristor-based recurrent neural networks with time-varying delays. IEEE Trans Neural Netw Learn Syst 2014;25:2099-2109. [PMID: 25330432] [DOI: 10.1109/tnnls.2014.2305440]
Abstract
This paper presents new theoretical results on the passivity and passification of a class of memristor-based recurrent neural networks (MRNNs) with time-varying delays. The usual assumptions on the boundedness and Lipschitz continuity of neuronal activation functions are relaxed. By constructing appropriate Lyapunov-Krasovskii functionals and using the characteristic function technique, passivity conditions are cast in the form of linear matrix inequalities (LMIs), which can be checked numerically using an LMI toolbox. Based on these conditions, two procedures for designing passification controllers are proposed, which guarantee that MRNNs with time-varying delays are passive. Finally, two illustrative examples are presented to show the characteristics of the main results in detail.

29. Finite time dual neural networks with a tunable activation function for solving quadratic programming problems and its application. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2014.06.018]

30. Yan Z, Wang J, Li G. A collective neurodynamic optimization approach to bound-constrained nonconvex optimization. Neural Netw 2014;55:20-29. [DOI: 10.1016/j.neunet.2014.03.006]

31. From different ZFs to different ZNN models accelerated via Li activation functions to finite-time convergence for time-varying matrix pseudoinversion. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2013.12.001]

32. Li Q, Liu Y, Zhu L. Neural network for nonsmooth pseudoconvex optimization with general constraints. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2013.10.008]

33. Bian W, Chen X. Neural network for nonsmooth, nonconvex constrained minimization via smooth approximation. IEEE Trans Neural Netw Learn Syst 2014;25:545-556. [PMID: 24807450] [DOI: 10.1109/tnnls.2013.2278427]
Abstract
A neural network based on smoothing approximation is presented for a class of nonsmooth, nonconvex constrained optimization problems, where the objective function is nonsmooth and nonconvex, the equality constraint functions are linear, and the inequality constraint functions are nonsmooth and convex. This approach can find a Clarke stationary point of the optimization problem by following a continuous path defined by a solution of an ordinary differential equation. Global convergence is guaranteed if either the feasible set is bounded or the objective function is level bounded. In particular, the proposed network does not require: 1) the initial point to be feasible; 2) a prior penalty parameter to be chosen exactly; or 3) a differential inclusion to be solved. Numerical experiments and comparisons with some existing algorithms are presented to illustrate the theoretical results and show the efficiency of the proposed network.

35. Li G, Yan Z, Wang J. A one-layer recurrent neural network for constrained nonsmooth invex optimization. Neural Netw 2014;50:79-89. [DOI: 10.1016/j.neunet.2013.11.007]

37. Liu Q, Wang J. A one-layer projection neural network for nonsmooth optimization subject to linear equalities and bound constraints. IEEE Trans Neural Netw Learn Syst 2013;24:812-824. [PMID: 24808430] [DOI: 10.1109/tnnls.2013.2244908]
Abstract
This paper presents a one-layer projection neural network for solving nonsmooth optimization problems with generalized convex objective functions, subject to linear equalities and bound constraints. The proposed neural network is designed based on two projection operators: one for the linear equality constraints and one for the bound constraints. The objective function in the optimization problem can be any nonsmooth function that is not restricted to be convex but is required to be convex (pseudoconvex) on a set defined by the constraints. Compared with existing recurrent neural networks for nonsmooth optimization, the proposed model does not have any design parameter, which makes it more convenient to design and implement. It is proved that the output variables of the proposed neural network are globally convergent to the optimal solutions, provided that the objective function is at least pseudoconvex. Simulation results of numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
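
A reduced sketch of the equality-projection ingredient, not the paper's exact model (the bound-constraint projection is omitted): from a feasible start, a subgradient flow of ||x||_1 projected onto null(A) keeps Ax = b invariant while descending the objective. The random problem data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 10, 30
A = rng.standard_normal((m, n))
x0 = np.zeros(n); x0[rng.choice(n, 4, replace=False)] = rng.standard_normal(4)
b = A @ x0                                 # consistent right-hand side

AAt_inv = np.linalg.inv(A @ A.T)
P_null = np.eye(n) - A.T @ AAt_inv @ A     # projector onto null(A)

x = A.T @ AAt_inv @ b                      # feasible start (least-norm point)
dt = 1e-3
for _ in range(200000):
    x = x - dt * P_null @ np.sign(x)       # subgradient flow within {Ax = b}

print("||x||_1:", np.abs(x).sum().round(4),
      " feasibility:", np.linalg.norm(A @ x - b).round(8))
```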

39. Zhang Y, Guo D, Li Z. Common nature of learning between back-propagation and Hopfield-type neural networks for generalized matrix inversion with simplified models. IEEE Trans Neural Netw Learn Syst 2013;24:579-592. [PMID: 24808379] [DOI: 10.1109/tnnls.2013.2238555]
Abstract
In this paper, two simple-structure neural networks based on the error back-propagation (BP) algorithm (i.e., BP-type neural networks, BPNNs) are proposed, developed, and investigated for online generalized matrix inversion. Specifically, the BPNN-L and BPNN-R models are proposed and investigated for left and right generalized matrix inversion, respectively. In addition, for the same problem-solving task, two discrete-time Hopfield-type neural networks (HNNs) are developed and investigated: mirroring the classification of the BPNN-L and BPNN-R models, the HNN-L and HNN-R models correspond to left and right generalized matrix inversion, respectively. Comparing the BPNN weight-updating formula with the HNN state-transition equation for the specific (i.e., left or right) generalized matrix inversion, we show that the two derived learning expressions turn out to be mathematically identical, although the BP and Hopfield-type neural networks differ considerably in network architecture, physical meaning, and training patterns. Numerical results with different illustrative examples further demonstrate the efficacy of the presented BPNNs and HNNs for online generalized matrix inversion and, more importantly, their common nature of learning.

40. A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application. Neural Netw 2013;39:27-39. [DOI: 10.1016/j.neunet.2012.12.009]

41. Li S, Liu B, Li Y. Selective positive-negative feedback produces the winner-take-all competition in recurrent neural networks. IEEE Trans Neural Netw Learn Syst 2013;24:301-309. [PMID: 24808283] [DOI: 10.1109/tnnls.2012.2230451]
Abstract
The winner-take-all (WTA) competition is widely observed in both inanimate and biological media and society. Many mathematical models have been proposed to describe the phenomena discovered in different fields. These models are capable of demonstrating the WTA competition; however, they are often very complicated due to compromises with experimental realities in particular fields, and it is often difficult to explain the underlying mechanism of such a competition from the perspective of feedback based on those sophisticated models. In this paper, we make steps in that direction and present a simple model, which produces the WTA competition by taking advantage of selective positive-negative feedback through the interaction of neurons via the p-norm. Compared to existing models, this model has an explicit explanation of the competition mechanism. The ultimate convergence behavior of this model is proven analytically. The convergence rate is discussed, and simulations are conducted in both static and dynamic competition scenarios. Both theoretical and numerical results validate the effectiveness of the dynamic equation in describing the nonlinear phenomena of WTA competition.

42. Zhang H, Huang B, Gong D, Wang Z. New results for neutral-type delayed projection neural network to solve linear variational inequalities. Neural Comput Appl 2012. [DOI: 10.1007/s00521-012-1141-9]

43. Balavoine A, Romberg J, Rozell CJ. Convergence and rate analysis of neural networks for sparse approximation. IEEE Trans Neural Netw Learn Syst 2012;23:1377-1389. [PMID: 24199030] [PMCID: PMC3816963] [DOI: 10.1109/tnnls.2012.2202400]
Abstract
We present an analysis of the Locally Competitive Algorithm (LCA), a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few nonzero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing. However, the LCA has lacked an analysis of its convergence properties, and previous results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate. We support our analysis with several illustrative simulations.
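
A sketch of the standard soft-threshold LCA dynamics analyzed in the paper, tau * u' = Phi' s - u - (Phi'Phi - I) a with a = T_lambda(u); the random dictionary, sparsity level, and integration parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
m, n, lam, tau, dt = 40, 100, 0.1, 1.0, 1e-2
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # ~unit-norm dictionary atoms
a_true = np.zeros(n); a_true[[5, 40, 77]] = [1.0, -0.8, 0.6]
s = Phi @ a_true                                  # signal to approximate

soft = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.zeros(n)
G = Phi.T @ Phi - np.eye(n)                       # lateral inhibition weights
drive = Phi.T @ s
for _ in range(5000):
    a = soft(u)                                   # thresholded activations
    u = u + (dt / tau) * (drive - u - G @ a)      # LCA state equation

print("active units:", np.flatnonzero(soft(u) != 0))  # ideally {5, 40, 77}
```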

44. Huang B, Zhang H, Gong D, Wang Z. A new result for projection neural networks to solve linear variational inequalities and related optimization problems. Neural Comput Appl 2012. [DOI: 10.1007/s00521-012-0918-1]

45. Bian W, Chen X. Smoothing neural network for constrained non-Lipschitz optimization with applications. IEEE Trans Neural Netw Learn Syst 2012;23:399-411. [PMID: 24808547] [DOI: 10.1109/tnnls.2011.2181867]
Abstract
In this paper, a smoothing neural network (SNN) is proposed for a class of constrained non-Lipschitz optimization problems, where the objective function is the sum of a nonsmooth, nonconvex function and a non-Lipschitz function, and the feasible set is a closed convex subset of ℝⁿ. Using smoothing approximation techniques, the proposed neural network is modeled by a differential equation, which can be implemented easily. Under a level-bounded condition on the objective function in the feasible set, we prove the global existence and uniform boundedness of the solutions of the SNN with any initial point in the feasible set. The uniqueness of the solution of the SNN is provided under the Lipschitz property of the smoothing functions. We show that any accumulation point of the solutions of the SNN is a stationary point of the optimization problem. Numerical results including image restoration, blind source separation, variable selection, and condition number minimization are presented to illustrate the theoretical results and show the efficiency of the SNN. Comparisons with some existing algorithms show the advantages of the SNN.

46. Guo Z, Liu Q, Wang J. A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Subject to Linear Equality Constraints. IEEE Trans Neural Netw 2011;22:1892-1900. [DOI: 10.1109/tnn.2011.2169682]

47. Han M, Fan J, Wang J. A Dynamic Feedforward Neural Network Based on Gaussian Particle Swarm Optimization and its Application for Predictive Control. IEEE Trans Neural Netw 2011;22:1457-1468. [DOI: 10.1109/tnn.2011.2162341]