1. Xia Y, Ye T, Huang L. Analysis and Application of Matrix-Form Neural Networks for Fast Matrix-Variable Convex Optimization. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:2259-2273. PMID: 38157471. DOI: 10.1109/tnnls.2023.3340730.
Abstract
Matrix-variable optimization is a generalization of vector-variable optimization and has many important applications. To reduce computation time and storage requirements, this article presents two matrix-form recurrent neural networks (RNNs), one a continuous-time model and the other a discrete-time model, for solving matrix-variable optimization problems with linear constraints. The two proposed matrix-form RNNs have low complexity and are suitable for parallel implementation in terms of matrix state space. The proposed continuous-time matrix-form RNN significantly generalizes existing continuous-time vector-form RNNs. The proposed discrete-time matrix-form RNN can be effectively used in blind image restoration, where it greatly reduces the storage requirement and computational cost. Theoretically, both matrix-form RNNs are guaranteed to converge globally to the optimal solution under mild conditions. Computed results show that the proposed matrix-form RNN-based algorithm outperforms related vector-form and matrix-form RNN-based algorithms in terms of computation time.
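Continuous-time projection dynamics of this family, iterated directly in matrix space, can be sketched as follows (a minimal generic model, not the authors' exact network; `matrix_rnn_step`, the toy objective, and all parameters are illustrative):

```python
import numpy as np

def matrix_rnn_step(X, grad_f, project, step=0.1):
    """One Euler step of a projection-type matrix-form neurodynamic model:
    dX/dt = project(X - grad_f(X)) - X, evaluated entirely in matrix space."""
    return X + step * (project(X - grad_f(X)) - X)

# Toy instance: min ||X - B||_F^2 / 2 subject to X >= 0 (elementwise).
B = np.array([[1.0, -2.0], [0.5, -0.1]])
grad_f = lambda X: X - B                 # gradient of the objective
project = lambda X: np.maximum(X, 0.0)   # projection onto the constraint set

X = np.zeros_like(B)
for _ in range(200):
    X = matrix_rnn_step(X, grad_f, project)
# the state settles at the projection max(B, 0)
```

Because the state is a matrix rather than a stacked vector, storage and the projection both stay in the natural matrix shape, which is the practical point the abstract emphasizes.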
2. Upadhyay A, Pandey R. A proximal neurodynamic model for a system of non-linear inverse mixed variational inequalities. Neural Networks 2024; 176:106323. PMID: 38653123. DOI: 10.1016/j.neunet.2024.106323.
Abstract
In this article, we introduce a system of non-linear inverse mixed variational inequalities (SNIMVIs). We propose a proximal neurodynamic model (PNDM) for solving SNIMVIs, leveraging proximal mappings. The uniqueness of the continuous solution of the PNDM is proved under a Lipschitz continuity assumption. Moreover, we establish the global asymptotic stability of the equilibrium points of the PNDM under Lipschitz continuity and strong monotonicity. Additionally, an iterative algorithm involving proximal mappings for solving the SNIMVIs is presented. Finally, we provide illustrative examples to support our main findings, including an example in which the SNIMVIs violate the strong monotonicity condition and the trajectories of the corresponding PNDM diverge.
Affiliation(s)
- Anjali Upadhyay: Department of Mathematics, University of Delhi, Delhi, India.
- Rahul Pandey: Mahant Avaidyanath Govt. Degree College, Jungle Kaudiya, Gorakhpur, U.P., India.
3. Gao X, Liao LZ. Novel Continuous- and Discrete-Time Neural Networks for Solving Quadratic Minimax Problems With Linear Equality Constraints. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:9814-9828. PMID: 37022226. DOI: 10.1109/tnnls.2023.3236695.
Abstract
This article presents two novel continuous- and discrete-time neural networks (NNs) for solving quadratic minimax problems with linear equality constraints. These two NNs are established based on the conditions of the saddle point of the underlying function. For the two NNs, a proper Lyapunov function is constructed so that they are stable in the sense of Lyapunov, and will converge to some saddle point(s) for any starting point under some mild conditions. Compared with the existing NNs for solving quadratic minimax problems, the proposed NNs require weaker stability conditions. The validity and transient behavior of the proposed models are illustrated by some simulation results.
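The saddle-point condition behind such networks can be illustrated with a bare-bones descent-ascent flow (a generic sketch without the linear equality constraints of the paper; `saddle_flow`, the quadratic data, and the step size are illustrative):

```python
import numpy as np

def saddle_flow(A, B, C, x0, y0, step=0.05, iters=4000):
    """Euler-discretized descent-ascent flow for the quadratic saddle function
        L(x, y) = x'Ax/2 + x'Cy - y'By/2:
        dx/dt = -(Ax + Cy),   dy/dt = C'x - By."""
    x = np.asarray(x0, float).copy()
    y = np.asarray(y0, float).copy()
    for _ in range(iters):
        gx = A @ x + C @ y      # gradient of L in x (descend)
        gy = C.T @ x - B @ y    # gradient of L in y (ascend)
        x, y = x - step * gx, y + step * gy
    return x, y

A = 2.0 * np.eye(2)
B = np.array([[2.0]])
C = np.array([[1.0], [1.0]])
x, y = saddle_flow(A, B, C, [1.0, -1.0], [2.0])
# with A and B positive definite, the unique saddle point is the origin
```

With A and B positive definite the flow matrix has its field of values strictly in the left half-plane, so the trajectory spirals into the saddle point for any starting point, mirroring the Lyapunov argument sketched in the abstract.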
4. Wu D, Zhang Y. Zhang equivalency of inequation-to-inequation type for constraints of redundant manipulators. Heliyon 2024; 10:e23570. PMID: 38173488. PMCID: PMC10761789. DOI: 10.1016/j.heliyon.2023.e23570.
Abstract
In solving specific problems, physical laws and mathematical theorems directly express the connections between variables as equations or inequations. At times it can be extremely hard, or not viable, to solve these equations/inequations directly. The principle of equivalence (PE) is a pragmatic method commonly applied across multiple fields: it transforms the initial equations/inequations into simplified equivalent ones that are more manageable to solve, allowing researchers to achieve their objectives. The problem-solving process in many fields benefits from the use of the PE. Recently, the Zhang equivalency (ZE) framework has emerged as a promising approach for time-dependent optimization problems. This ZE framework (ZEF) consolidates constraints expressed at different tiers, demonstrating its capacity for solving time-dependent optimization problems. To broaden the application of the ZEF to time-dependent optimization, specifically motion planning for redundant manipulators, the authors systematically investigate the inequation-to-inequation type of the framework (ZEF-I2I). The study concentrates on transforming constraints depicted at different tiers (i.e., joint constraints and obstacle avoidance) into consolidated constraints, backed by rigorous mathematical derivations. The effectiveness and applicability of the ZEF-I2I are verified through two optimization motion-planning schemes, which consolidate constraints in the velocity tier and the acceleration tier, respectively, while accomplishing repetitive motion planning within constraints. The presented schemes are then reformulated as two time-dependent quadratic programming problems. Simulative experiments on a six-joint redundant manipulator confirm the effectiveness of the presented ZEF-I2I in achieving motion planning within constraints.
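The velocity-tier resolution underlying such schemes can be illustrated in its simplest unconstrained form (a pseudoinverse sketch, not the ZEF-I2I scheme itself; `velocity_tier`, the Jacobian, and the velocities are illustrative):

```python
import numpy as np

def velocity_tier(J, x_dot, q_dot_ref):
    """Resolve the velocity-tier task constraint J q_dot = x_dot while staying
    as close as possible to a reference joint velocity (pseudoinverse form):
        q_dot = q_dot_ref + pinv(J) (x_dot - J q_dot_ref)."""
    return q_dot_ref + np.linalg.pinv(J) @ (x_dot - J @ q_dot_ref)

# A 3-joint arm tracking a 2-D end-effector velocity (one redundant degree).
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.3]])
x_dot = np.array([0.4, -0.1])
q_dot = velocity_tier(J, x_dot, np.zeros(3))
```

The ZEF-I2I schemes go further by folding joint limits and obstacle avoidance into inequality constraints of a time-dependent QP; the snippet only shows the equality-constrained core that any such scheme must satisfy.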
Affiliation(s)
- Dongqing Wu: School of Computational Science, Zhongkai University of Agriculture and Engineering, Guangzhou 51220, Guangdong, China; Research Institute of Sun Yat-sen University in Shenzhen, Sun Yat-sen University, Shenzhen 518057, Guangdong, China; School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, Guangdong, China
- Yunong Zhang: Research Institute of Sun Yat-sen University in Shenzhen, Sun Yat-sen University, Shenzhen 518057, Guangdong, China; School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, Guangdong, China
5. Ju X, Li C, Che H, He X, Feng G. A Proximal Neurodynamic Network With Fixed-Time Convergence for Equilibrium Problems and Its Applications. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:7500-7514. PMID: 35143401. DOI: 10.1109/tnnls.2022.3144148.
Abstract
This article proposes a novel fixed-time converging proximal neurodynamic network (FXPNN) via a proximal operator to deal with equilibrium problems (EPs). A distinctive feature of the proposed FXPNN is its better transient performance in comparison to most existing proximal neurodynamic networks. It is shown that the FXPNN converges to the solution of the corresponding EP in fixed time under some mild conditions. It is also shown that the settling time of the FXPNN is independent of the initial conditions and that the fixed-time interval can be prescribed, unlike existing results with asymptotic or exponential convergence. Moreover, the proposed FXPNN is applied to solve composition optimization problems (COPs), l1-regularized least-squares problems, mixed variational inequalities (MVIs), and variational inequalities (VIs). It is further shown, in the case of solving COPs, that fixed-time convergence can be established via the Polyak-Łojasiewicz condition, which is a relaxation of the more demanding convexity condition. Finally, numerical examples are presented to validate the effectiveness and advantages of the proposed neurodynamic network.
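The fixed-time mechanism (settling time bounded independently of the initial condition) can be seen in a scalar toy system (a textbook fixed-time flow, not the FXPNN itself; `fixed_time_flow` and all parameters are illustrative):

```python
import numpy as np

def fixed_time_flow(x0, alpha=1.0, beta=1.0, a=0.5, b=1.5, dt=1e-3, T=5.0):
    """Euler simulation of the scalar fixed-time dynamics
        dx/dt = -alpha*sign(x)*|x|**a - beta*sign(x)*|x|**b,  0 < a < 1 < b.
    The settling time is bounded by 1/(alpha*(1-a)) + 1/(beta*(b-1)) = 4.0
    here, independently of the initial condition x0."""
    x = float(x0)
    for _ in range(int(T / dt)):
        x -= dt * (alpha * np.sign(x) * abs(x) ** a
                   + beta * np.sign(x) * abs(x) ** b)
    return x

# initial conditions four orders of magnitude apart settle within the same bound
near = fixed_time_flow(0.01)
far = fixed_time_flow(100.0)
```

The superlinear term dominates far from the origin and the sublinear term near it, which is why the settling-time bound does not grow with the starting point; the FXPNN combines the same idea with a proximal operator.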
6. Xia Z, Liu Y, Qiu J, Ruan Q, Cao J. An RNN-Based Algorithm for Decentralized-Partial-Consensus Constrained Optimization. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:534-542. PMID: 34464262. DOI: 10.1109/tnnls.2021.3098668.
Abstract
This technical note proposes a decentralized-partial-consensus optimization (DPCO) problem with inequality constraints. A partial-consensus matrix originating from the Laplacian matrix is constructed to tackle the partial-consensus constraints. A continuous-time algorithm based on multiple interconnected recurrent neural networks (RNNs) is derived to solve the optimization problem. In addition, the convergence of the continuous-time algorithm is proved based on nonsmooth analysis and Lyapunov theory. Finally, several examples demonstrate the effectiveness of the main results.
7. Ju X, Hu D, Li C, He X, Feng G. A Novel Fixed-Time Converging Neurodynamic Approach to Mixed Variational Inequalities and Applications. IEEE Transactions on Cybernetics 2022; 52:12942-12953. PMID: 34347618. DOI: 10.1109/tcyb.2021.3093076.
Abstract
This article proposes a novel fixed-time converging forward-backward-forward neurodynamic network (FXFNN) to deal with mixed variational inequalities (MVIs). A distinctive feature of the FXFNN is its fast, fixed-time convergence, in contrast to the conventional forward-backward-forward neurodynamic network and projected neurodynamic network. It is shown that the solution of the proposed FXFNN exists, is unique, and converges to the unique solution of the corresponding MVI in fixed time under some mild conditions. It is also shown that the fixed-time convergence result obtained for the FXFNN is independent of the initial conditions, unlike most existing asymptotic and exponential convergence results. Furthermore, the proposed FXFNN is applied to solving sparse recovery problems, variational inequalities, nonlinear complementarity problems, and min-max problems. Finally, numerical and experimental examples are presented to validate the effectiveness of the proposed neurodynamic network.
8. Zhong J, Feng Y, Tang S, Xiong J, Dai X, Zhang N. A collaborative neurodynamic optimization algorithm to traveling salesman problem. Complex & Intelligent Systems 2022. DOI: 10.1007/s40747-022-00884-6.
Abstract
This paper proposes a collaborative neurodynamic optimization (CNO) method to solve the traveling salesman problem (TSP). First, we construct a Hopfield neural network (HNN) with n × n neurons for the n cities. Second, to ensure the convergence of the continuous HNN (CHNN), we reformulate the TSP to satisfy the convergence condition of the CHNN and solve the TSP with it. Finally, a population of CHNNs is used to search for local optimal solutions of the TSP, and the globally optimal solution is obtained using particle swarm optimization. Experimental results show the effectiveness of the CNO approach for solving the TSP.
9. Li X, Wang J, Kwong S. Hash Bit Selection via Collaborative Neurodynamic Optimization With Discrete Hopfield Networks. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:5116-5124. PMID: 33835923. DOI: 10.1109/tnnls.2021.3068500.
Abstract
Hash bit selection (HBS) aims to find the most discriminative and informative hash bits from a hash pool generated by using different hashing algorithms. It is usually formulated as a binary quadratic programming problem with an information-theoretic objective function and a string-length constraint. In this article, it is equivalently reformulated in the form of a quadratic unconstrained binary optimization problem by augmenting the objective function with a penalty function. The reformulated problem is solved via collaborative neurodynamic optimization (CNO) with a population of classic discrete Hopfield networks. The two most important hyperparameters of the CNO approach are determined based on Monte Carlo test results. Experimental results on three benchmark data sets are elaborated to substantiate the superiority of the collaborative neurodynamic approach to several existing methods for HBS.
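The penalty reformulation described above (constrained binary quadratic program to unconstrained QUBO) can be sketched on a tiny instance; exhaustive search stands in for the discrete Hopfield dynamics, and `qubo_from_constrained`, the matrix values, and the penalty weight are illustrative:

```python
import itertools
import numpy as np

def qubo_from_constrained(Q_obj, k, rho):
    """Fold the string-length constraint sum(x) = k into the objective as a
    quadratic penalty rho*(sum(x) - k)^2, yielding an unconstrained QUBO
        x'Q x + lin'x + const."""
    n = Q_obj.shape[0]
    Q = Q_obj + rho * np.ones((n, n))      # quadratic part of the penalty
    lin = -2.0 * rho * k * np.ones(n)      # linear part of the penalty
    return Q, lin, rho * k * k             # constant offset

def qubo_value(Q, lin, const, x):
    return x @ Q @ x + lin @ x + const

Q_obj = np.array([[0.0, 2.0, -1.0],
                  [2.0, 0.0, 1.0],
                  [-1.0, 1.0, 0.0]])
Q, lin, const = qubo_from_constrained(Q_obj, k=2, rho=10.0)
best = min((np.array(x) for x in itertools.product([0, 1], repeat=3)),
           key=lambda x: qubo_value(Q, lin, const, x))
# with a sufficiently large rho, the minimizer selects exactly k = 2 bits
```

Choosing the penalty weight large enough that every infeasible string costs more than any feasible one is exactly the kind of hyperparameter the abstract says was tuned via Monte Carlo tests.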
10. Optimal discrete-time sliding-mode control based on recurrent neural network: a singular value approach. Soft Computing 2022. DOI: 10.1007/s00500-022-07486-x.
11. Resilient Penalty Function Method for Distributed Constrained Optimization under Byzantine Attack. Information Sciences 2022. DOI: 10.1016/j.ins.2022.02.055.
12. A projection-based continuous-time algorithm for distributed optimization over multi-agent systems. Complex & Intelligent Systems 2022. DOI: 10.1007/s40747-020-00265-x.
Abstract
Multi-agent systems are widely studied due to their ability to solve complex tasks in many fields, especially in deep reinforcement learning. Recently, distributed optimization over multi-agent systems has drawn much attention because of its extensive applications. This paper presents a projection-based continuous-time algorithm for solving convex distributed optimization problems with equality and inequality constraints over multi-agent systems. The distinguishing feature of such problems is that each agent, with a private local cost function and constraints, can communicate only with its neighbors, while all agents aim to cooperatively minimize the sum of the local cost functions. With the aid of a penalty method, the states of the proposed algorithm enter the equality-constraint set in fixed time and ultimately converge to an optimal solution of the objective problem. In contrast to some existing approaches, the continuous-time algorithm has fewer state variables, and the proof of convergence also covers consensus. Finally, two simulations are given to show the viability of the algorithm.
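A minimal consensus-plus-gradient flow of this type can be sketched for scalar agents on a path graph (a generic PI-consensus flow, not the paper's exact penalty-based algorithm; `distributed_flow`, the graph, and the data are illustrative):

```python
import numpy as np

def distributed_flow(a, L, step=0.02, iters=8000):
    """Euler-discretized PI-consensus gradient flow for
        min  sum_i (x_i - a_i)^2 / 2   s.t.  x_1 = ... = x_n:
        dx/dt = -(x - a) - L x - L v,   dv/dt = L x,
    where L is the graph Laplacian, so each agent uses only neighbor states."""
    n = len(a)
    x, v = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x_next = x + step * (-(x - a) - L @ x - L @ v)
        v = v + step * (L @ x)   # integral state enforcing exact consensus
        x = x_next
    return x

a = np.array([1.0, 2.0, 6.0])                 # private data of each agent
L = np.array([[1.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])              # Laplacian of a path graph
x = distributed_flow(a, L)
# all agents agree on the global minimizer, the average of a
```

At equilibrium Lx = 0 forces consensus and the integral state v absorbs the disagreement among local gradients, so the common value is the minimizer of the sum, not of any single agent's cost.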
13. Wang J, Wang J. Two-Timescale Multilayer Recurrent Neural Networks for Nonlinear Programming. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:37-47. PMID: 33108292. DOI: 10.1109/tnnls.2020.3027471.
Abstract
This article presents a neurodynamic approach to nonlinear programming. Motivated by the idea of sequential quadratic programming, a class of two-timescale multilayer recurrent neural networks is presented, with the neuronal dynamics in the output layer operating on a slower timescale than those in the hidden layers. In these two-timescale multilayer recurrent neural networks, the transient states in the hidden layer(s) undergo faster dynamics than those in the output layer. Sufficient conditions are derived for the convergence of the networks to local optima of nonlinear programming problems. Simulation results of collaborative neurodynamic optimization based on the two-timescale neurodynamic approach, on global optimization problems with nonconvex objective functions or constraints, are discussed to substantiate the efficacy of the approach.
14. Xu C, Liu Q. An inertial neural network approach for robust time-of-arrival localization considering clock asynchronization. Neural Networks 2021; 146:98-106. PMID: 34852299. DOI: 10.1016/j.neunet.2021.11.012.
Abstract
This paper presents an inertial neural network for solving the source localization optimization problem with an l1-norm objective function based on the time-of-arrival (TOA) localization technique. The convergence and stability of the inertial neural network are analyzed by the Lyapunov function method. An inertial neural network iterative approach is further used to find a better solution among the solutions obtained with different inertial parameters. Furthermore, clock asynchronization is considered in the TOA l1-norm model for more general real applications, and the corresponding inertial neural network iterative approach is addressed. Both numerical simulations and real data are considered in the experiments. In the simulation experiments, the noise contains uncorrelated zero-mean Gaussian noise and uniformly distributed outliers. In the real experiments, the data are obtained with ultra-wideband (UWB) hardware modules. With or without clock asynchronization, the results show that the proposed approach always finds a more accurate source position than some of the existing algorithms, which implies that the proposed approach is more effective than the compared ones.
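The robustness of the l1 TOA cost to a corrupted range can be shown with plain subgradient descent (a simple stand-in for the inertial network; `toa_l1`, the anchor layout, and the step-size rule are illustrative):

```python
import numpy as np

def toa_l1(anchors, dists, x0, step=0.05, iters=3000):
    """Subgradient descent with a diminishing step on the robust l1 TOA cost
        f(x) = sum_i | ||x - s_i|| - d_i |."""
    x = np.asarray(x0, float).copy()
    for k in range(iters):
        g = np.zeros_like(x)
        for s, d in zip(anchors, dists):
            r = np.linalg.norm(x - s)
            if r > 1e-12:
                g += np.sign(r - d) * (x - s) / r   # subgradient of one term
        x -= step / np.sqrt(k + 1.0) * g
    return x

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
true_pos = np.array([1.0, 1.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)
dists[3] += 3.0                      # one grossly corrupted range (outlier)
est = toa_l1(anchors, dists, x0=[2.0, 2.0])
# the l1 cost keeps its sharp minimum at the true source despite the outlier
```

Because each residual enters through its absolute value, a single large outlier contributes a bounded subgradient rather than pulling the estimate proportionally to its error, which is the property the l1 model exploits.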
Affiliation(s)
- Chentao Xu: School of Cyber Science and Engineering, Frontiers Science Center for Mobile Information Communication and Security, Southeast University, Nanjing 210096, China; Purple Mountain Laboratories, Nanjing 211111, China.
- Qingshan Liu: School of Mathematics, Frontiers Science Center for Mobile Information Communication and Security, Southeast University, Nanjing 210096, China; Purple Mountain Laboratories, Nanjing 211111, China.
15. Wang J, Wang J, Han QL. Multivehicle Task Assignment Based on Collaborative Neurodynamic Optimization With Discrete Hopfield Networks. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:5274-5286. PMID: 34077371. DOI: 10.1109/tnnls.2021.3082528.
Abstract
This article presents a collaborative neurodynamic optimization (CNO) approach to multivehicle task assignments (TAs). The original combinatorial quadratic optimization problem for TA is reformulated as a quadratic unconstrained binary optimization (QUBO) problem with a quadratic utility function and a penalty function for handling load capacity and cooperation constraints. In the framework of CNO with a population of discrete Hopfield networks (DHNs), a TA algorithm is proposed for solving the formulated QUBO problem. Superior experimental results in four typical multivehicle operation scenarios are reported to substantiate the efficacy of the proposed neurodynamics-based TA approach.
16. Leung MF, Wang J. Cardinality-constrained portfolio selection based on collaborative neurodynamic optimization. Neural Networks 2021; 145:68-79. PMID: 34735892. DOI: 10.1016/j.neunet.2021.10.007.
Abstract
Portfolio optimization is one of the most important investment strategies in financial markets. It is practically desirable for investors, especially high-frequency traders, to consider cardinality constraints in portfolio selection to avoid odd lots and excessive costs such as transaction fees. In this paper, a collaborative neurodynamic optimization approach is presented for cardinality-constrained portfolio selection. The expected return and investment risk in the Markowitz framework are scalarized as a weighted Chebyshev function, and the cardinality constraints are equivalently represented as an upper bound using introduced binary variables. Cardinality-constrained portfolio selection is then formulated as a mixed-integer optimization problem and solved by means of collaborative neurodynamic optimization with multiple recurrent neural networks repeatedly repositioned using a particle swarm optimization rule. The distribution of the resulting Pareto-optimal solutions is also iteratively refined by optimizing the weights in the scalarized objective functions based on particle swarm optimization. Experimental results with stock data from four major world markets are discussed to substantiate the superior performance of the collaborative neurodynamic approach over several exact and metaheuristic methods.
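The mixed-integer structure (binary support selection on the outside, a convex QP on the inside) can be sketched with exhaustive support enumeration standing in for the collaborative neurodynamic search; `card_portfolio`, the single-weight scalarization, and the toy data are illustrative:

```python
import itertools
import numpy as np

def card_portfolio(mu, Sigma, k, lam=1.0):
    """Search over k-asset supports; on each support solve the
    equality-constrained mean-variance QP
        min  lam * w'Sigma w - mu'w   s.t.  1'w = 1
    in closed form through its KKT system."""
    n = len(mu)
    best_w, best_val = None, np.inf
    for S in map(list, itertools.combinations(range(n), k)):
        Sig, m = Sigma[np.ix_(S, S)], mu[S]
        ones = np.ones((k, 1))
        K = np.block([[2.0 * lam * Sig, ones],
                      [ones.T, np.zeros((1, 1))]])   # KKT matrix
        sol = np.linalg.solve(K, np.concatenate([m, [1.0]]))
        w = np.zeros(n)
        w[S] = sol[:k]
        val = lam * w @ Sigma @ w - mu @ w
        if val < best_val:
            best_w, best_val = w, val
    return best_w, best_val

mu = np.array([0.10, 0.20, 0.30, 0.05])
Sigma = np.eye(4)
w, _ = card_portfolio(mu, Sigma, k=2)
# the best 2-asset support picks the two highest-return assets here
```

Enumeration is exponential in n, which is precisely why the paper replaces it with a population of recurrent networks repositioned by particle swarm optimization.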
Affiliation(s)
- Man-Fai Leung: School of Science and Technology, Hong Kong Metropolitan University, Kowloon, Hong Kong
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.
17. Ju X, Che H, Li C, He X, Feng G. Exponential convergence of a proximal projection neural network for mixed variational inequalities and applications. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.04.059.
18. Two Matrix-Type Projection Neural Networks for Matrix-Valued Optimization with Application to Image Restoration. Neural Processing Letters 2021. DOI: 10.1007/s11063-019-10086-w.
19. Ju X, Li C, He X, Feng G. A proximal neurodynamic model for solving inverse mixed variational inequalities. Neural Networks 2021; 138:1-9. PMID: 33610091. DOI: 10.1016/j.neunet.2021.01.012.
Abstract
This paper proposes a proximal neurodynamic model (PNDM) for solving inverse mixed variational inequalities (IMVIs) based on the proximal operator. It is shown that the PNDM has a unique continuous solution under the condition of Lipschitz continuity (L-continuity). It is also shown that the equilibrium point of the proposed PNDM is asymptotically stable or exponentially stable under some mild conditions. Finally, three numerical examples are presented to illustrate the effectiveness of the proposed PNDM.
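A generic proximal neurodynamic flow of this family can be sketched with a soft-thresholding proximal operator (a sketch of the general PNDM form, not the paper's inverse-MVI model; `pndm`, the operator F, and the parameters are illustrative):

```python
import numpy as np

def soft(z, t):
    """Proximal operator of t*||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def pndm(F, prox, x0, lam=1.0, step=0.05, iters=2000):
    """Euler discretization of the proximal neurodynamic flow
        dx/dt = lam * (prox(x - F(x)) - x);
    equilibria satisfy the fixed-point condition x = prox(x - F(x))."""
    x = np.asarray(x0, float).copy()
    for _ in range(iters):
        x += step * lam * (prox(x - F(x)) - x)
    return x

b = np.array([2.0, -0.5, 0.1])
F = lambda x: x - b                      # strongly monotone toy operator
x_star = pndm(F, lambda z: soft(z, 1.0), np.zeros(3))
# the equilibrium is the soft threshold of b
```

The Lipschitz and monotonicity assumptions in the abstract are exactly what make this fixed-point condition well posed and the flow stable.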
Affiliation(s)
- Xingxing Ju: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
- Chuandong Li: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
- Xing He: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China.
- Gang Feng: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong.
20.
Abstract
This paper investigates the problem of finite-time stability (FTS) for a class of delayed genetic regulatory networks with reaction-diffusion terms. In order to fully utilize the system information, a linear parameterization method is proposed. First, by applying Lagrange's mean-value theorem, the linear parameterization method transforms the nonlinear system into a linear one with time-varying bounded uncertain terms. Second, a new generalized convex combination lemma is proposed to handle the relationship between the bounded uncertainties and their bounds. Third, sufficient conditions are established to ensure FTS by resorting to Lyapunov-Krasovskii theory, the convex combination technique, Jensen's inequality, and linear matrix inequalities. Finally, simulations verify the validity of the theoretical results.
21. Neurodynamical classifiers with low model complexity. Neural Networks 2020; 132:405-415. PMID: 33011671. DOI: 10.1016/j.neunet.2020.08.013.
Abstract
The recently proposed Minimal Complexity Machine (MCM) finds a hyperplane classifier by minimizing an upper bound on the Vapnik-Chervonenkis (VC) dimension. The VC dimension measures the capacity or model complexity of a learning machine. Vapnik's risk formula indicates that models with smaller VC dimension are expected to show improved generalization. On many benchmark datasets, the MCM generalizes better than SVMs and uses far fewer support vectors than the number used by SVMs. In this paper, we describe a neural network that converges to the MCM solution. We employ the MCM neurodynamical system as the final layer of a neural network architecture. Our approach also optimizes the weights of all layers in order to minimize the objective, which is a combination of a bound on the VC dimension and the classification error. We illustrate the use of this model for robust binary and multi-class classification. Numerical experiments on benchmark datasets from the UCI repository show that the proposed approach is scalable and accurate, and learns models with improved accuracies and fewer support vectors.
22. Relaxed Inertial Tseng's Type Method for Solving the Inclusion Problem with Application to Image Restoration. Mathematics 2020. DOI: 10.3390/math8050818.
Abstract
A relaxed inertial Tseng-type method for solving the inclusion problem involving a maximally monotone mapping and a monotone mapping is proposed in this article. The study modifies Tseng's forward-backward-forward splitting method by using both a relaxation parameter and an inertial extrapolation step. The proposed method follows from an explicit time discretization of a dynamical system. Weak convergence of the iterates generated by the method involving monotone operators is established. Moreover, the iterative scheme uses a variable step size, given by a simple updating rule, that does not depend on the Lipschitz constant of the underlying operator. Furthermore, the proposed algorithm is modified to derive a scheme for solving a split feasibility problem. The proposed schemes are applied to the image deblurring problem to illustrate their applicability in comparison with existing state-of-the-art methods.
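The underlying forward-backward-forward iteration can be sketched on a toy inclusion 0 ∈ A(x) + B(x) with A the l1 subdifferential and B a strongly monotone affine map (the basic Tseng scheme without the relaxation and inertial steps of the paper; names and parameters are illustrative):

```python
import numpy as np

def soft(z, t):
    """Resolvent of t * subdifferential of ||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def tseng_fbf(B, resolvent, x0, lam=0.5, iters=500):
    """Tseng's forward-backward-forward splitting for 0 in A(x) + B(x):
        y   = J_{lam A}(x - lam * B(x))   # forward-backward
        x+  = y + lam * (B(x) - B(y))     # correcting forward step
    with lam below 1/L for L the Lipschitz constant of B."""
    x = np.asarray(x0, float).copy()
    for _ in range(iters):
        y = resolvent(x - lam * B(x), lam)
        x = y + lam * (B(x) - B(y))
    return x

b = np.array([3.0, -0.2])
B = lambda x: x - b                      # gradient of the smooth part
x_star = tseng_fbf(B, lambda z, t: soft(z, t), np.zeros(2), lam=0.5)
# the fixed point solves min ||x||_1 + ||x - b||^2 / 2, i.e. soft(b, 1)
```

The extra forward evaluation is what lets the method handle merely monotone B; the paper's relaxation and inertial terms are accelerations layered on this same skeleton.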
23. Lu J, Yan Z, Han J, Zhang G. Data-Driven Decision-Making (D3M): Framework, Methodology, and Directions. IEEE Transactions on Emerging Topics in Computational Intelligence 2019. DOI: 10.1109/tetci.2019.2915813.
24. A combined neurodynamic approach to optimize the real-time price-based demand response management problem using mixed zero-one programming. Neural Computing and Applications 2019. DOI: 10.1007/s00521-019-04283-w.
27. Lv Y, Wan Z. A solving method based on neural network for a class of multi-leader–follower games. Neural Computing and Applications 2018. DOI: 10.1007/s00521-016-2648-2.
28. Adaptive consensus control of output-constrained second-order nonlinear systems via neurodynamic optimization. Neurocomputing 2018. DOI: 10.1016/j.neucom.2017.12.052.
29. A neural dynamic system for solving convex nonlinear optimization problems with hybrid constraints. Neural Computing and Applications 2018. DOI: 10.1007/s00521-018-3422-4.
30. Gao X, Liao LZ. A Novel Neural Network for Generally Constrained Variational Inequalities. IEEE Transactions on Neural Networks and Learning Systems 2017; 28:2062-2075. PMID: 27323376. DOI: 10.1109/tnnls.2016.2570257.
Abstract
This paper presents a novel neural network for solving generally constrained variational inequality problems by constructing a system of double projection equations. By defining proper convex energy functions, the proposed neural network is proved to be stable in the sense of Lyapunov and converges to an exact solution of the original problem for any starting point under the weaker cocoercivity condition or the monotonicity condition of the gradient mapping on the linear equation set. Furthermore, two sufficient conditions are provided to ensure the stability of the proposed neural network for a special case. The proposed model overcomes some shortcomings of existing continuous-time neural networks for constrained variational inequality, and its stability only requires some monotonicity conditions of the underlying mapping and the concavity of nonlinear inequality constraints on the equation set. The validity and transient behavior of the proposed neural network are demonstrated by some simulation results.
Collapse
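Projection dynamics of the kind described in this abstract can be prototyped in a few lines. The sketch below is a minimal illustration, not the paper's model: it uses a single-projection flow for a box-constrained variational inequality (the paper constructs a system of double projection equations to handle general constraints), and the mapping F, the box bounds, and all parameter values are assumed here for demonstration.

```python
import numpy as np

# Toy VI(F, Omega): find x* in Omega with (x - x*)^T F(x*) >= 0 for all x in Omega.
M = np.array([[3.0, 1.0],
              [-1.0, 2.0]])   # positive definite (not symmetric), so F is monotone
q = np.array([-3.0, 1.0])
lo, hi = 0.0, 2.0             # Omega = [0, 2]^2

def F(x):
    return M @ x + q

def proj(x):
    # projection onto the box Omega
    return np.clip(x, lo, hi)

alpha, h = 0.2, 0.05          # step size of the inner projection; Euler step
x = np.array([2.0, 2.0])
for _ in range(2_000):
    # dx/dt = P_Omega(x - alpha * F(x)) - x ; equilibria solve the VI
    x += h * (proj(x - alpha * F(x)) - x)

print(x)  # approaches (1, 0), where F(x*) = 0
```

At an equilibrium, x = P_Omega(x - alpha * F(x)), which is a standard fixed-point characterization of a VI solution; for this particular M and q the solution (1, 0) happens to satisfy F(x*) = 0.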
|
31
|
Le X, Wang J. A Two-Time-Scale Neurodynamic Approach to Constrained Minimax Optimization. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2017; 28:620-629. [PMID: 28212073 DOI: 10.1109/tnnls.2016.2538288] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
This paper presents a two-time-scale neurodynamic approach to constrained minimax optimization using two coupled neural networks. One of the recurrent neural networks is used for minimizing the objective function, and the other is used for maximizing it. It is shown that the coupled neurodynamic systems operating on two different time scales work well for minimax optimization. The effectiveness and characteristics of the proposed approach are illustrated using several examples. Furthermore, the proposed approach is applied to H∞ model predictive control.
Collapse
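The core idea of coupled neurodynamic systems on two time scales can be illustrated with a toy saddle-point problem. This is a hedged sketch under assumptions of our own (the objective f, the time constants, and the Euler discretization are illustrative, not the paper's model): a fast gradient flow descends in x while a slower flow ascends in y.

```python
# Toy two-time-scale gradient flows for min_x max_y f(x, y)
# with f(x, y) = x**2 - y**2 + x*y, whose unique saddle point is (0, 0).
def grad_x(x, y):   # df/dx
    return 2.0 * x + y

def grad_y(x, y):   # df/dy
    return -2.0 * y + x

tau_fast, tau_slow = 0.01, 0.1   # fast minimizing net, slower maximizing net
h, steps = 1e-3, 20_000          # forward-Euler step and horizon
x, y = 1.5, -1.0                 # arbitrary initial state

for _ in range(steps):
    x += h * (-grad_x(x, y) / tau_fast)  # descent in x (minimization)
    y += h * (+grad_y(x, y) / tau_slow)  # ascent in y (maximization)

print(x, y)  # both approach the saddle point at 0
```

The separation tau_fast << tau_slow lets the minimizing subsystem approximately equilibrate between updates of the maximizing one, which is the intuition behind the two-time-scale design.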
|
32
|
A novel neural network for solving convex quadratic programming problems subject to equality and inequality constraints. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.05.032] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
33
|
Che H, Li C, He X, Huang T. A recurrent neural network for adaptive beamforming and array correction. Neural Netw 2016; 80:110-7. [DOI: 10.1016/j.neunet.2016.04.010] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2015] [Revised: 03/08/2016] [Accepted: 04/22/2016] [Indexed: 10/21/2022]
|
34
|
Wang Y, Cheng L, Hou ZG, Yu J, Tan M. Optimal Formation of Multirobot Systems Based on a Recurrent Neural Network. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2016; 27:322-333. [PMID: 26316224 DOI: 10.1109/tnnls.2015.2464314] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
The optimal formation problem of multirobot systems is solved by a recurrent neural network in this paper. The desired formation is described by shape theory, which generates a set of feasible formations that share the same relative relation among robots. An optimal formation is one chosen from this feasible set that has the minimum distance to the initial formation of the multirobot system. The formation problem is thereby transformed into an optimization problem. In addition, the orientation, scale, and admissible range of the formation can be incorporated as constraints. Furthermore, if all robots are identical, their positions in the system are exchangeable, so each robot need not move to one specific position in the formation. In this case, the optimal formation problem becomes a combinatorial optimization problem, whose optimal solution is very hard to obtain. Inspired by the penalty method, this combinatorial optimization problem can be approximately transformed into a convex optimization problem. Because the distance involves the Euclidean norm, the objective function of these optimization problems is nonsmooth. To solve these nonsmooth optimization problems efficiently, a recurrent neural network approach is employed, owing to its parallel computation ability. Finally, some simulations and experiments are given to validate the effectiveness and efficiency of the proposed optimal formation approach.
Collapse
|
35
|
Li C, Yu X, Huang T, Chen G, He X. A Generalized Hopfield Network for Nonsmooth Constrained Convex Optimization: Lie Derivative Approach. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2016; 27:308-321. [PMID: 26595931 DOI: 10.1109/tnnls.2015.2496658] [Citation(s) in RCA: 70] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
This paper proposes a generalized Hopfield network for solving general constrained convex optimization problems. First, the existence and uniqueness of solutions to the generalized Hopfield network in the Filippov sense are proved. Then, the Lie derivative is introduced to analyze the stability of the network using a differential inclusion. The optimality of the solution to the nonsmooth constrained optimization problems is shown to be guaranteed by the enhanced Fritz John conditions. The convergence rate of the generalized Hopfield network can be estimated by the second-order derivative of the energy function. The effectiveness of the proposed network is evaluated on several typical nonsmooth optimization problems and on the four-tank benchmark for hierarchical and distributed model predictive control.
Collapse
|
36
|
Di Marco M, Forti M, Nistri P, Pancioni L. Nonsmooth Neural Network for Convex Time-Dependent Constraint Satisfaction Problems. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2016; 27:295-307. [PMID: 25769174 DOI: 10.1109/tnnls.2015.2404773] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
This paper introduces a nonsmooth (NS) neural network that is able to operate in a time-dependent (TD) context and is potentially useful for solving some classes of NS-TD problems. The proposed network is named nonsmooth time-dependent network (NTN) and is an extension to a TD setting of a previous NS neural network for programming problems. Suppose C(t), t ≥ 0, is a nonempty TD convex feasibility set defined by TD inequality constraints. The constraints are in general NS (nondifferentiable) functions of the state variables and time. NTN is described by the subdifferential with respect to the state variables of an NS-TD barrier function and a vector field corresponding to the unconstrained dynamics. This paper shows that for suitable values of the penalty parameter, the NTN dynamics displays two main phases. In the first phase, any solution of NTN not starting in C(0) at t = 0 is able to reach the moving set C(·) in finite time t_h, whereas in the second phase, the solution tracks the moving set, i.e., it stays within C(t) for all subsequent times t ≥ t_h. NTN is thus able to find an exact feasible solution in finite time and also to provide an exact feasible solution for subsequent times. This new and peculiar dynamics displayed by NTN is potentially useful for addressing some significant TD signal processing tasks. As an illustration, this paper discusses a number of examples where NTN is applied to the solution of NS-TD convex feasibility problems.
Collapse
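The two-phase behavior described in this abstract, finite-time reaching followed by exact tracking of a moving set, can be reproduced with a much simpler signed (subgradient-like) flow. The following toy illustration is not the NTN model itself; the moving set C(t), the gain k, and the discretization are all assumptions made here for demonstration.

```python
import math

# Moving feasible set: C(t) = [c(t) - r, c(t) + r] with c(t) = sin(t).
r, k = 0.2, 2.0          # tube radius; the gain k must exceed max |c'(t)| = 1
c = math.sin

def rhs(x, t):
    # outside C(t): move toward the set at fixed speed k; inside: no correction
    e = x - c(t)
    if abs(e) > r:
        return -k * (1.0 if e > 0 else -1.0)
    return 0.0

h = 1e-3
x, t = 2.0, 0.0          # x(0) = 2 lies outside C(0) = [-0.2, 0.2]
trace = []
for _ in range(10_000):  # integrate on [0, 10] with forward Euler
    x += h * rhs(x, t)
    t += h
    trace.append((t, x))

# After a finite reaching phase, x stays inside a discretization-thickened C(t).
late = [abs(xx - c(tt)) for tt, xx in trace if tt > 3.0]
print(max(late))  # remains close to the tube radius r
```

Because the correction speed k strictly exceeds the speed at which the set moves, the gap closes at a rate of at least k - 1 per unit time, which is the mechanism behind reaching in finite time rather than merely asymptotically.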
|
37
|
Hosseini A. A non-penalty recurrent neural network for solving a class of constrained optimization problems. Neural Netw 2016; 73:10-25. [DOI: 10.1016/j.neunet.2015.09.013] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2015] [Revised: 08/12/2015] [Accepted: 09/29/2015] [Indexed: 11/29/2022]
|
38
|
Che H, Li C, He X, Huang T. An intelligent method of swarm neural networks for equalities-constrained nonconvex optimization. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2015.04.033] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
39
|
Le X, Wang J. Neurodynamics-Based Robust Pole Assignment for High-Order Descriptor Systems. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2015; 26:2962-2971. [PMID: 26357408 DOI: 10.1109/tnnls.2015.2461553] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
In this paper, a neurodynamic optimization approach is proposed for synthesizing high-order descriptor linear systems with state feedback control via robust pole assignment. With a new robustness measure serving as the objective function, the robust eigenstructure assignment problem is formulated as a pseudoconvex optimization problem. A neurodynamic optimization approach is applied and shown to be capable of maximizing the robust stability margin for high-order singular systems with guaranteed optimality and exact pole assignment. Two numerical examples and a vehicle vibration control application are discussed to substantiate the efficacy of the proposed approach.
Collapse
|
40
|
Le X, Wang J. Robust pole assignment for synthesizing feedback control systems using recurrent neural networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2014; 25:383-393. [PMID: 24807036 DOI: 10.1109/tnnls.2013.2275732] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
This paper presents a neurodynamic optimization approach to robust pole assignment for synthesizing linear control systems via state and output feedback. The problem is formulated as a pseudoconvex optimization problem with a robustness measure, the spectral condition number, as the objective function and linear matrix equality constraints for exact pole assignment. Two coupled recurrent neural networks are applied to solve the formulated problem in real time. In contrast to existing approaches, the exponential convergence of the proposed neurodynamics to global optimal solutions can be guaranteed even with lower model complexity in terms of the number of variables. Simulation results of the proposed neurodynamic approach on 11 benchmark problems are reported to demonstrate its superiority.
Collapse
|
41
|
Li J, Li C, Wu Z, Huang J. A feedback neural network for solving convex quadratic bi-level programming problems. Neural Comput Appl 2013. [DOI: 10.1007/s00521-013-1530-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
42
|
Nazemi A, Tahmasbi N. A computational intelligence method for solving a class of portfolio optimization problems. Soft comput 2013. [DOI: 10.1007/s00500-013-1186-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
43
|
Liu Q, Wang J. A one-layer projection neural network for nonsmooth optimization subject to linear equalities and bound constraints. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2013; 24:812-824. [PMID: 24808430 DOI: 10.1109/tnnls.2013.2244908] [Citation(s) in RCA: 106] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
This paper presents a one-layer projection neural network for solving nonsmooth optimization problems with generalized convex objective functions subject to linear equalities and bound constraints. The proposed neural network is designed based on two projection operators: one for the linear equality constraints and one for the bound constraints. The objective function can be any nonsmooth function that is not required to be convex globally but must be convex (or pseudoconvex) on the set defined by the constraints. Compared with existing recurrent neural networks for nonsmooth optimization, the proposed model does not have any design parameter, which makes it more convenient to design and implement. It is proved that the output variables of the proposed neural network are globally convergent to the optimal solutions provided that the objective function is at least pseudoconvex. Simulation results of numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
Collapse
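A projection flow of this general type is easy to sketch for the bound-constrained case. The code below is a minimal illustration under our own assumptions, specialized to bound constraints only (the paper additionally handles linear equalities, which this sketch omits); the nonsmooth objective and all constants are chosen here for demonstration.

```python
import numpy as np

# Toy nonsmooth problem: minimize f(x) = |x1 - 2| + |x2 + 1| over the box [0, 1]^2.
# The minimizer over the box is x* = (1, 0).
lo, hi = 0.0, 1.0

def subgrad(x):
    # a subgradient of the nonsmooth objective f
    return np.sign(x - np.array([2.0, -1.0]))

def proj(x):
    # projection onto the box constraints
    return np.clip(x, lo, hi)

h = 0.05
x = np.array([0.5, 0.5])
for _ in range(500):
    # projection flow: dx/dt = -x + P_Omega(x - g(x)), g(x) a subgradient of f
    x += h * (-x + proj(x - subgrad(x)))

print(x)  # converges to (1, 0)
```

Note that the flow has no tunable penalty or gain parameter beyond the integration step, which mirrors the design-parameter-free property highlighted in the abstract.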
|
44
|
Bahar Yaakob S, Watada J, Fulcher J. Structural learning of the Boltzmann machine and its application to life cycle management. Neurocomputing 2011. [DOI: 10.1016/j.neucom.2011.02.018] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
45
|
|
46
|
|
47
|
Hu X, Zhang B. An Alternative Recurrent Neural Network for Solving Variational Inequalities and Related Optimization Problems. IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, PART B (CYBERNETICS) 2009; 39:1640-5. [DOI: 10.1109/tsmcb.2009.2025700] [Citation(s) in RCA: 30] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
48
|
Gao XB, Liao LZ. A New Projection-Based Neural Network for Constrained Variational Inequalities. IEEE TRANSACTIONS ON NEURAL NETWORKS 2009; 20:373-88. [DOI: 10.1109/tnn.2008.2006263] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
49
|
Qiao C, Xu Z. A critical global convergence analysis of recurrent neural networks with general projection mappings. Neurocomputing 2009. [DOI: 10.1016/j.neucom.2008.06.006] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
50
|
Barbarosou M, Maratos N. A Nonfeasible Gradient Projection Recurrent Neural Network for Equality-Constrained Optimization Problems. IEEE TRANSACTIONS ON NEURAL NETWORKS 2008; 19:1665-77. [DOI: 10.1109/tnn.2008.2000993] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|