1. Wu D, Lisser A. Parallel Solution of Nonlinear Projection Equations in a Multitask Learning Framework. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:3490-3503. PMID: 38261500; DOI: 10.1109/tnnls.2024.3350335.
Abstract: Nonlinear projection equations (NPEs) provide a unified framework for addressing various constrained nonlinear optimization and engineering problems. However, traditional numerical integration methods are inefficient for solving multiple NPEs, because they solve each NPE iteratively and independently. In this article, we propose a novel approach based on multitask learning (MTL) for solving multiple NPEs. The solution procedure is as follows. First, we model each NPE as a system of ordinary differential equations (ODEs) using neurodynamic optimization. Second, we approximate the solution of each ODE system with a physics-informed neural network (PINN). Third, we use a multibranch MTL framework in which each branch corresponds to a PINN model, which allows multiple NPEs to be solved in parallel by training a single neural network model. Experimental results show that our approach has superior computational performance, especially when the number of NPEs to be solved is large.
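The entry above reduces each NPE to a neurodynamic ODE before handing it to a PINN. As background, the classic projection neural network gives a minimal instance of that first step; the sketch below integrates the ODE with forward Euler. The mapping F, the box constraint, and all step sizes are illustrative assumptions, not the cited paper's setup:

```python
import numpy as np

# Minimal sketch of the neurodynamic approach to a nonlinear projection
# equation (NPE): find x with x = P_Omega(x - alpha*F(x)).
# The classic projection neural network integrates the ODE
#     dx/dt = P_Omega(x - alpha*F(x)) - x
# here with forward Euler. F, Omega, and all constants are illustrative.

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)      # positive definite => strongly monotone F
b = rng.standard_normal(n)

def F(x):
    return A @ x + b

def proj(x):
    # Projection onto the box Omega = [0, 1]^n
    return np.clip(x, 0.0, 1.0)

alpha, dt, steps = 0.1, 0.05, 5000
x = np.zeros(n)
for _ in range(steps):
    x = x + dt * (proj(x - alpha * F(x)) - x)

# At an equilibrium of the ODE, x satisfies the projection equation,
# so this residual should be near zero.
residual = np.linalg.norm(x - proj(x - alpha * F(x)))
```

Solving many such NPEs with a single multibranch network, as the entry proposes, replaces this per-problem integration loop with one shared training run.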
2. Li Y, Xia Z, Liu Y, Wang J. A collaborative neurodynamic approach with two-timescale projection neural networks designed via majorization-minimization for global optimization and distributed global optimization. Neural Netw 2024; 179:106525. PMID: 39042949; DOI: 10.1016/j.neunet.2024.106525.
Abstract: In this paper, two two-timescale projection neural networks are proposed based on the majorization-minimization principle for nonconvex optimization and distributed nonconvex optimization. They are proved to be globally convergent to Karush-Kuhn-Tucker points. A collaborative neurodynamic approach leverages multiple two-timescale projection neural networks, repeatedly re-initialized using a meta-heuristic rule, for global optimization and distributed global optimization. Two numerical examples are elaborated to demonstrate the efficacy of the proposed approaches.
Affiliations:
- Yangxia Li: School of Mathematical Sciences, Zhejiang Normal University, Jinhua, Zhejiang, 321004, China
- Zicong Xia: School of Mathematics, Southeast University, Nanjing, Jiangsu, 210096, China
- Yang Liu: School of Mathematical Sciences, Zhejiang Normal University, Jinhua, Zhejiang, 321004, China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, Zhejiang, 321004, China; School of Automation and Electrical Engineering, Linyi University, Linyi, Shandong, 276000, China
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong
3. Luan L, Wen X, Xue Y, Qin S. Adaptive penalty-based neurodynamic approach for nonsmooth interval-valued optimization problem. Neural Netw 2024; 176:106337. PMID: 38688071; DOI: 10.1016/j.neunet.2024.106337.
Abstract: Motivated by complex and diverse practical applications, this paper explores a new neurodynamic approach (NA) for solving nonsmooth interval-valued optimization problems (IVOPs) constrained by an interval partial order and more general sets. On the one hand, to deal with the uncertainty of interval-valued information, the LU-optimality condition of IVOPs is established in a deterministic form. On the other hand, based on the penalty method and an adaptive controller, the interval partial order constraint and the set constraint are penalized through a single adaptive parameter, which ensures the feasibility of states while keeping the dimension of the solution space lower and avoiding the estimation of exact penalty parameters. Through nonsmooth analysis and Lyapunov theory, the proposed adaptive penalty-based neurodynamic approach (APNA) is proven to converge to an LU-solution of the considered IVOPs. Finally, the feasibility of the proposed APNA is illustrated by numerical simulations and an investment decision-making problem.
Affiliations:
- Linhua Luan: Department of Mathematics, Harbin Institute of Technology, Weihai, China
- Xingnan Wen: Department of Mathematics, Harbin Institute of Technology, Weihai, China
- Yuhan Xue: School of Economics and Management, Harbin Institute of Technology, Harbin, China
- Sitian Qin: Department of Mathematics, Harbin Institute of Technology, Weihai, China
4. Huang B, Liu Y, Jiang YL, Wang J. Two-timescale projection neural networks in collaborative neurodynamic approaches to global optimization and distributed optimization. Neural Netw 2024; 169:83-91. PMID: 37864998; DOI: 10.1016/j.neunet.2023.10.011.
Abstract: In this paper, we propose a two-timescale projection neural network (PNN) for solving optimization problems with nonconvex functions. We prove the convergence of the PNN with sufficiently different timescales to a local optimal solution. We develop a collaborative neurodynamic approach with multiple such PNNs to search for global optimal solutions. In addition, we develop a collaborative neurodynamic approach with multiple PNNs connected via a directed graph for distributed global optimization. We elaborate on four numerical examples to illustrate the characteristics of the approaches.
Affiliations:
- Banghua Huang: School of Mathematical Sciences, Zhejiang Normal University, Jinhua, Zhejiang, 321004, China
- Yang Liu: School of Mathematical Sciences, Zhejiang Normal University, Jinhua, Zhejiang, 321004, China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, Zhejiang, 321004, China
- Yun-Liang Jiang: School of Computer Science and Technology, Zhejiang Normal University, Jinhua, Zhejiang, 321004, China; School of Information Engineering, Huzhou University, Huzhou, Zhejiang, 313000, China
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong
5. Ju X, Li C, Che H, He X, Feng G. A Proximal Neurodynamic Network With Fixed-Time Convergence for Equilibrium Problems and Its Applications. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:7500-7514. PMID: 35143401; DOI: 10.1109/tnnls.2022.3144148.
Abstract: This article proposes a novel fixed-time converging proximal neurodynamic network (FXPNN) via a proximal operator to deal with equilibrium problems (EPs). A distinctive feature of the proposed FXPNN is its better transient performance in comparison to most existing proximal neurodynamic networks. It is shown that the FXPNN converges to the solution of the corresponding EP in fixed time under some mild conditions. It is also shown that the settling time of the FXPNN is independent of initial conditions and that the fixed-time interval can be prescribed, unlike existing results with asymptotic or exponential convergence. Moreover, the proposed FXPNN is applied to solve composite optimization problems (COPs), ℓ1-regularized least-squares problems, mixed variational inequalities (MVIs), and variational inequalities (VIs). It is further shown, in the case of solving COPs, that fixed-time convergence can be established via the Polyak-Lojasiewicz condition, which is a relaxation of the more demanding convexity condition. Finally, numerical examples are presented to validate the effectiveness and advantages of the proposed neurodynamic network.
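The entry above applies its network to ℓ1-regularized least squares via a proximal operator. For that problem the proximal operator is soft thresholding; the discrete-time sketch below shows the idea under assumed data (the matrix A, the sparsity pattern, the regularization weight, and the step size are all illustrative, not from the cited paper):

```python
import numpy as np

# Soft thresholding: the proximal operator of lam*||x||_1, the nonsmooth
# ingredient that proximal networks use for l1-regularized least squares.
# All problem data below is illustrative.

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -1.5]            # sparse ground truth
y = A @ x_true                           # noiseless measurements

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the smooth part

# Proximal-gradient iteration, a simple discretization of a proximal dynamic
x = np.zeros(8)
for _ in range(2000):
    x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)

support = set(np.nonzero(np.abs(x) > 0.5)[0])  # recovered support
```

With noiseless data and a small regularization weight, the iteration settles near the sparse ground truth up to a small shrinkage bias, recovering the support {1, 5}.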
6. Chen Z, Wang J, Han QL. Event-Triggered Cardinality-Constrained Cooling and Electrical Load Dispatch Based on Collaborative Neurodynamic Optimization. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:5464-5475. PMID: 35358052; DOI: 10.1109/tnnls.2022.3160645.
Abstract: This article addresses event-triggered optimal load dispatching based on collaborative neurodynamic optimization. Two cardinality-constrained global optimization problems are formulated and two event-triggering functions are defined for event-triggered load dispatching in thermal energy and electric power systems. An event-triggered dispatching method is developed in the collaborative neurodynamic optimization framework with multiple projection neural networks and a meta-heuristic updating rule. Experimental results are elaborated to demonstrate the efficacy and superiority of the approach against many existing methods for optimal load dispatching in air conditioning systems and electric power generation systems.
7. Xia Z, Liu Y, Wang J, Wang J. Two-timescale recurrent neural networks for distributed minimax optimization. Neural Netw 2023; 165:527-539. PMID: 37348433; DOI: 10.1016/j.neunet.2023.06.003.
Abstract: In this paper, we present two-timescale neurodynamic optimization approaches to distributed minimax optimization. We propose four multilayer recurrent neural networks for solving four different types of generally nonlinear convex-concave minimax problems subject to linear equality and nonlinear inequality constraints. We derive sufficient conditions to guarantee the stability and optimality of the neural networks. We demonstrate the viability and efficiency of the proposed neural networks in two specific paradigms for Nash-equilibrium seeking in a zero-sum game and distributed constrained nonlinear optimization.
Affiliations:
- Zicong Xia: School of Mathematical Sciences, Zhejiang Normal University, Jinhua 321004, China
- Yang Liu: Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua 321004, China; School of Mathematical Sciences, Zhejiang Normal University, Jinhua 321004, China
- Jiasen Wang: Future Network Research Center, Purple Mountain Laboratories, Nanjing 211111, China
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Hong Kong
8. Mohammadi M, Atashin AA, Tamburri DA. From ℓ1 subgradient to projection: A compact neural network for ℓ1-regularized logistic regression. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.01.021.
9. Wang Y, Wang J. Neurodynamics-driven holistic approaches to semi-supervised feature selection. Neural Netw 2022; 157:377-386. DOI: 10.1016/j.neunet.2022.10.029.
10. Xu C, Wang M, Chi G, Liu Q. An inertial neural network approach for loco-manipulation trajectory tracking of mobile robot with redundant manipulator. Neural Netw 2022; 155:215-223. DOI: 10.1016/j.neunet.2022.08.012.
11. Zhang S, Xia Y, Xia Y, Wang J. Matrix-Form Neural Networks for Complex-Variable Basis Pursuit Problem With Application to Sparse Signal Reconstruction. IEEE Transactions on Cybernetics 2022; 52:7049-7059. PMID: 33471773; DOI: 10.1109/tcyb.2020.3042519.
Abstract: In this article, a continuous-time complex-valued projection neural network (CCPNN) in a matrix state space is first proposed for a general complex-variable basis pursuit problem. The proposed CCPNN is proved to be stable in the sense of Lyapunov and to be globally convergent to the optimal solution under the condition that the sensing matrix is not of full row rank. Furthermore, an improved discrete-time complex projection neural network (IDCPNN) is proposed by discretizing the CCPNN model. The proposed IDCPNN incorporates a two-step stopping strategy to reduce the computational cost. The proposed IDCPNN is theoretically guaranteed to be globally convergent to the optimal solution. Finally, the proposed IDCPNN is applied to the reconstruction of sparse signals based on compressed sensing. Computed results show that the proposed IDCPNN is superior to related complex-valued neural networks and conventional basis pursuit algorithms in terms of solution quality and computation time.
12. Hu D, He X, Ju X. A modified projection neural network with fixed-time convergence. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.03.023.
13. Luan L, Wen X, Qin S. Distributed neurodynamic approaches to nonsmooth optimization problems with inequality and set constraints. Complex Intell Syst 2022. DOI: 10.1007/s40747-022-00770-1.
Abstract: In this paper, neurodynamic approaches are proposed for solving nonsmooth distributed optimization problems under inequality and set constraints, that is, to find the solution that minimizes the sum of local cost functions. A continuous-time neurodynamic approach is designed whose state solution exists globally and converges to an optimal solution of the corresponding distributed optimization problem. Then, a neurodynamic approach with an event-triggered mechanism is considered for the purpose of saving communication costs, and its convergence and Zeno-free property are proved. Moreover, to facilitate practical application, a discrete-time neurodynamic approach is proposed to solve nonsmooth distributed optimization problems under inequality and set constraints. It is rigorously proved that the iterative sequence generated by the discrete-time neurodynamic approach converges to the optimal solution set of the distributed optimization problem. Finally, numerical examples are solved to demonstrate the effectiveness of the proposed neurodynamic approaches, and the neurodynamic approach is further applied to the ill-conditioned least absolute deviation problem and the load-sharing optimization problem.
14. Finite Time Stability of Caputo–Katugampola Fractional Order Time Delay Projection Neural Networks. Neural Process Lett 2022. DOI: 10.1007/s11063-022-10838-1.
15. Zhang Y, Liu H. A new projection neural network for linear and convex quadratic second-order cone programming. Journal of Intelligent & Fuzzy Systems 2022. DOI: 10.3233/jifs-210164.
Abstract: A new projection neural network approach is presented for linear and convex quadratic second-order cone programming. In the method, the optimality conditions of linear and convex quadratic second-order cone programming are shown to be equivalent to cone projection equations. A Lyapunov function is given based on the G-norm distance function. Based on the cone projection function, the descent direction of the Lyapunov function is used to design the new projection neural network. For the proposed neural network, we give the Lyapunov stability analysis and prove the global convergence. Finally, some numerical examples and two kinds of grasping-force optimization problems are used to test the efficiency of the proposed neural network. The simulation results show that the proposed neural network is efficient for solving some linear and convex quadratic second-order cone programming problems. In particular, the proposed neural network can overcome the oscillating trajectories of the existing projection neural network on some linear second-order cone programming examples and the min-max grasping-force optimization problem.
Affiliations:
- Yaling Zhang: School of Mathematics and Statistics, Xidian University, Xi'an, China; School of Computer Science, Xi'an Science and Technology University, Xi'an, China
- Hongwei Liu: School of Mathematics and Statistics, Xidian University, Xi'an, China
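The cone projection function on which the entry above builds has a standard closed form for the second-order (Lorentz) cone; a minimal sketch follows, where the test point is an illustrative assumption:

```python
import numpy as np

# Metric projection onto the second-order cone
#   K = {(t, u) in R x R^{n-1} : ||u||_2 <= t},
# the cone projection function underlying cone projection equations.

def proj_soc(z):
    t, u = z[0], z[1:]
    nu = np.linalg.norm(u)
    if nu <= t:                 # already inside the cone
        return z.copy()
    if nu <= -t:                # inside the polar cone: projects to the origin
        return np.zeros_like(z)
    c = 0.5 * (t + nu)          # otherwise: nearest point on the boundary
    return np.concatenate(([c], (c / nu) * u))

z = np.array([0.0, 3.0, 4.0])   # ||u|| = 5 > t = 0, outside the cone
p = proj_soc(z)                  # lands on the cone boundary
```

For z = (0, 3, 4) the formula gives p = (2.5, 1.5, 2.0), whose u-part has norm exactly equal to its t-part, i.e., a boundary point of K.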
16. Qi Y, Jin L, Luo X, Zhou M. Recurrent Neural Dynamics Models for Perturbed Nonstationary Quadratic Programs: A Control-Theoretical Perspective. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:1216-1227. PMID: 33449881; DOI: 10.1109/tnnls.2020.3041364.
Abstract: Recent decades have witnessed a trend in which control-theoretical techniques are widely leveraged in various areas, e.g., the design and analysis of computational models. A computational method can be modeled as a controller, and searching for the equilibrium point of a dynamical system is identical to solving an algebraic equation. Thus, absorbing mature technologies from control theory and integrating them with neural dynamics models can lead to new achievements. This work makes progress along this direction by applying control-theoretical techniques to construct new recurrent neural dynamics for handling a perturbed nonstationary quadratic program (QP) with time-varying parameters. Specifically, to break the limitations of existing continuous-time models in handling nonstationary problems, a discrete recurrent neural dynamics model is proposed to deal robustly with noise. This work shows how iterative computational methods for solving nonstationary QPs can be revisited, designed, and analyzed in a control framework. A modified Newton iteration model and an improved gradient-based neural dynamics are established by referring to the superior structural technology of the presented recurrent neural dynamics, whose chief breakthrough is their excellent convergence and robustness over traditional models. Numerical experiments are conducted to show the eminence of the proposed models in solving perturbed nonstationary QPs.
17. Di Marco M, Forti M, Pancioni L, Innocenti G, Tesi A. Memristor Neural Networks for Linear and Quadratic Programming Problems. IEEE Transactions on Cybernetics 2022; 52:1822-1835. PMID: 32559170; DOI: 10.1109/tcyb.2020.2997686.
Abstract: This article introduces a new class of memristor neural networks (NNs) for solving, in real-time, quadratic programming (QP) and linear programming (LP) problems. The networks, which are called memristor programming NNs (MPNNs), use a set of filamentary-type memristors with sharp memristance transitions for constraint satisfaction and an additional set of memristors with smooth memristance transitions for memorizing the result of a computation. The nonlinear dynamics and global optimization capabilities of MPNNs for QP and LP problems are thoroughly investigated via a recently introduced technique called the flux-charge analysis method. One main feature of MPNNs is that the processing is performed in the flux-charge domain rather than in the conventional voltage-current domain. This enables exploiting the unconventional features of memristors to obtain advantages over the traditional NNs for QP and LP problems operating in the voltage-current domain. One advantage is that operating in the flux-charge domain allows for reduced power consumption, since in an MPNN, voltages, currents, and, hence, power vanish when the quick analog transient is over. Moreover, an MPNN works in accordance with the fundamental principle of in-memory computing, that is, the nonlinearity of the memristor is used in the dynamic computation, but the same memristor is also used to memorize in a nonvolatile way the result of a computation.
18. A finite-time projection neural network to solve the joint optimal dispatching problem of CHP and wind power. Neural Comput Appl 2022. DOI: 10.1007/s00521-021-06867-x.
19. Wang J, Wang J. Two-Timescale Multilayer Recurrent Neural Networks for Nonlinear Programming. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:37-47. PMID: 33108292; DOI: 10.1109/tnnls.2020.3027471.
Abstract: This article presents a neurodynamic approach to nonlinear programming. Motivated by the idea of sequential quadratic programming, a class of two-timescale multilayer recurrent neural networks is presented, with neuronal dynamics in the output layer operating at a larger timescale than in the hidden layers. In the two-timescale multilayer recurrent neural networks, the transient states in the hidden layer(s) undergo faster dynamics than those in the output layer. Sufficient conditions are derived for the convergence of the two-timescale multilayer recurrent neural networks to local optima of nonlinear programming problems. Simulation results of collaborative neurodynamic optimization based on the two-timescale neurodynamic approach on global optimization problems with nonconvex objective functions or constraints are discussed to substantiate the efficacy of the approach.
20. Xu C, Liu Q. An inertial neural network approach for robust time-of-arrival localization considering clock asynchronization. Neural Netw 2021; 146:98-106. PMID: 34852299; DOI: 10.1016/j.neunet.2021.11.012.
Abstract: This paper presents an inertial neural network to solve the source localization optimization problem with an ℓ1-norm objective function based on the time-of-arrival (TOA) localization technique. The convergence and stability of the inertial neural network are analyzed by the Lyapunov function method. An inertial neural network iterative approach is further used to find a better solution among the solutions obtained with different inertial parameters. Furthermore, clock asynchronization is incorporated into the TOA ℓ1-norm model for more general real applications, and the corresponding inertial neural network iterative approach is addressed. Both numerical simulations and real data are considered in the experiments. In the simulations, the noise contains uncorrelated zero-mean Gaussian noise and uniformly distributed outliers. In the real experiments, the data are obtained using ultra-wideband (UWB) hardware modules. With or without clock asynchronization, the results show that the proposed approach always finds a more accurate source position than several existing algorithms, which implies that the proposed approach is more effective than the compared ones.
Affiliations:
- Chentao Xu: School of Cyber Science and Engineering, Frontiers Science Center for Mobile Information Communication and Security, Southeast University, Nanjing 210096, China; Purple Mountain Laboratories, Nanjing 211111, China
- Qingshan Liu: School of Mathematics, Frontiers Science Center for Mobile Information Communication and Security, Southeast University, Nanjing 210096, China; Purple Mountain Laboratories, Nanjing 211111, China
21. An Optimization Technique for Solving a Class of Ridge Fuzzy Regression Problems. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10538-2.
22. Solving Mixed Variational Inequalities Via a Proximal Neurodynamic Network with Applications. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10628-1.
23. Mohammadi M. A Compact Neural Network for Fused Lasso Signal Approximator. IEEE Transactions on Cybernetics 2021; 51:4327-4336. PMID: 31329147; DOI: 10.1109/tcyb.2019.2925707.
Abstract: The fused lasso signal approximator (FLSA) is a vital optimization problem with extensive applications in signal processing and biomedical engineering. However, the optimization problem is difficult to solve since it is both nonsmooth and nonseparable. Existing numerical solutions involve several auxiliary variables to deal with the nondifferentiable penalty, so the resulting algorithms are both time- and memory-inefficient. This paper proposes a compact neural network to solve the FLSA. The neural network has a one-layer structure with the number of neurons proportional to the dimension of the given signal, thanks to the utilization of consecutive projections. The proposed neural network is stable in the Lyapunov sense and is guaranteed to converge globally to the optimal solution of the FLSA. Experiments on several applications from signal processing and biomedical engineering confirm the reasonable performance of the proposed neural network.
24. Leung MF, Wang J. Minimax and Biobjective Portfolio Selection Based on Collaborative Neurodynamic Optimization. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:2825-2836. PMID: 31902773; DOI: 10.1109/tnnls.2019.2957105.
Abstract: Portfolio selection is one of the important issues in financial investments. This article is concerned with portfolio selection based on collaborative neurodynamic optimization. The classic Markowitz mean-variance (MV) framework and its variant, mean conditional value-at-risk (CVaR), are formulated as minimax and biobjective portfolio selection problems. Neurodynamic approaches are then applied for solving these optimization problems. For each of the problems, multiple neural networks work collaboratively to characterize the efficient frontier by means of particle swarm optimization (PSO)-based weight optimization. Experimental results with stock data from four major markets show the performance and characteristics of the collaborative neurodynamic approaches to the portfolio optimization problems.
25. Sun J, Fu W, Alcantara JH, Chen JS. A Neural Network Based on the Metric Projector for Solving SOCCVI Problem. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:2886-2900. PMID: 32755866; DOI: 10.1109/tnnls.2020.3008661.
Abstract: We propose an efficient neural network for solving the second-order cone constrained variational inequality (SOCCVI). The network is constructed using the Karush-Kuhn-Tucker (KKT) conditions of the variational inequality (VI), which are used to recast the SOCCVI as a system of equations by applying a smoothing function for the metric projection mapping to deal with the complementarity condition. Aside from standard stability results, we explore second-order sufficient conditions to obtain exponential stability. In particular, we prove the nonsingularity of the Jacobian of the KKT system based on the second-order sufficient condition and constraint nondegeneracy. Finally, we present some numerical experiments illustrating the efficiency of the neural network in solving SOCCVI problems. Our numerical simulations reveal that, in general, the new neural network outperforms the other neural networks in the SOCCVI literature in terms of stability and the convergence rates of trajectories to the SOCCVI solution.
26. Two Matrix-Type Projection Neural Networks for Matrix-Valued Optimization with Application to Image Restoration. Neural Process Lett 2021. DOI: 10.1007/s11063-019-10086-w.
27. Wang Y, Wang J, Che H. Two-timescale neurodynamic approaches to supervised feature selection based on alternative problem formulations. Neural Netw 2021; 142:180-191. PMID: 34020085; DOI: 10.1016/j.neunet.2021.04.038.
Abstract: Feature selection is a crucial step in data processing and machine learning. While many greedy and sequential feature selection approaches are available, a holistic neurodynamic approach to supervised feature selection was recently developed via fractional programming, minimizing feature redundancy while maximizing relevance. Since the gradient of the fractional objective function is also fractional, alternative problem formulations are desirable to obviate the fractional complexity. In this paper, the fractional programming formulation is equivalently reformulated as bilevel and bilinear programming problems without using any fractional function. Two two-timescale projection neural networks are adapted for solving the reformulated problems. Experimental results on six benchmark datasets are elaborated to demonstrate the global convergence and high classification performance of the proposed neurodynamic approaches in comparison with six mainstream feature selection approaches.
Affiliations:
- Yadi Wang: Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng, 475004, China; Institute of Data and Knowledge Engineering, School of Computer and Information Engineering, Henan University, Kaifeng, 475004, China
- Jun Wang: Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong; Shenzhen Research Institute, City University of Hong Kong, Shenzhen, Guangdong, China
- Hangjun Che: College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China; Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing 400715, China
28
|
Multi-periodicity of switched neural networks with time delays and periodic external inputs under stochastic disturbances. Neural Netw 2021; 141:107-119. [PMID: 33887601 DOI: 10.1016/j.neunet.2021.03.039] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2020] [Revised: 03/11/2021] [Accepted: 03/29/2021] [Indexed: 11/21/2022]
Abstract
This paper presents new theoretical results on the multi-periodicity of recurrent neural networks with time delays evoked by periodic inputs under stochastic disturbances and state-dependent switching. Based on the geometric properties of the activation function and the switching threshold, the neuronal state space is partitioned into 5^n regions, 3^n of which are shown to be positively invariant with probability one. Furthermore, by using Itô's formula, the Lyapunov functional method, and the contraction mapping theorem, two criteria are proposed to ascertain the existence and mean-square exponential stability of a periodic orbit in every positively invariant set. As a result, the number of mean-square exponentially stable periodic orbits increases to 3^n from the 2^n of a neural network without switching. Two illustrative examples are elaborated to substantiate the efficacy and characteristics of the theoretical results.
Collapse
|
29
|
Wang Y, Li X, Wang J. A neurodynamic optimization approach to supervised feature selection via fractional programming. Neural Netw 2021; 136:194-206. [PMID: 33497995 DOI: 10.1016/j.neunet.2021.01.004] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 12/04/2020] [Accepted: 01/07/2021] [Indexed: 11/25/2022]
Abstract
Feature selection is an important issue in machine learning and data mining. Most existing feature selection methods are greedy in nature and thus prone to sub-optimality. Although some global feature selection methods based on unsupervised redundancy minimization can improve clustering performance, their efficacy for classification may be limited. In this paper, a neurodynamics-based holistic feature selection approach is proposed via feature redundancy minimization and relevance maximization. An information-theoretic similarity coefficient matrix is defined based on multi-information and entropy to measure feature redundancy with respect to class labels. Supervised feature selection is formulated as a fractional programming problem based on the similarity coefficients. A neurodynamic approach based on two one-layer recurrent neural networks is developed to solve the formulated feature selection problem. Experimental results on eight benchmark datasets demonstrate the global convergence of the neural networks and the superiority of the proposed neurodynamic approach to several existing feature selection methods in terms of classification accuracy, precision, recall, and F-measure.
Collapse
Affiliation(s)
- Yadi Wang
- Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng, 475004, China; Institute of Data and Knowledge Engineering, School of Computer and Information Engineering, Henan University, Kaifeng, 475004, China; School of Computer Science and Engineering, Southeast University, Nanjing, 211189, China.
| | - Xiaoping Li
- School of Computer Science and Engineering, Southeast University, Nanjing, 211189, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, 211189, China.
| | - Jun Wang
- Department of Computer Science and School of Data Science, City University of Hong Kong, Kowloon, Hong Kong.
| |
Collapse
|
30
|
A novel generalization of the natural residual function and a neural network approach for the NCP. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.06.059] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
31
|
Jia J, Zeng Z. LMI-based criterion for global Mittag-Leffler lag quasi-synchronization of fractional-order memristor-based neural networks via linear feedback pinning control. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.05.074] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
32
|
Yu X, Wu L, Xu C, Hu Y, Ma C. A Novel Neural Network for Solving Nonsmooth Nonconvex Optimization Problems. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:1475-1488. [PMID: 31265412 DOI: 10.1109/tnnls.2019.2920408] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
In this paper, a novel recurrent neural network (RNN) is presented to deal with a class of nonsmooth nonconvex optimization problems in which the objective function may be nonsmooth and nonconvex and the constraints include linear equations and convex inequalities. Under suitable assumptions, from an arbitrary initial state, each solution to the proposed RNN exists globally, is bounded, and enters the feasible region within finite time. Moreover, the solution of the RNN from an arbitrary initial state converges to the critical point set of the optimization problem. In particular, the RNN does not require: 1) a bounded feasible region; 2) the computation of an exact penalty parameter; or 3) an initial state chosen from a given bounded set. Numerical experiments are provided to show the effectiveness and advantages of the RNN.
Collapse
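Several of the entries above study neurodynamic flows whose equilibria solve a projection equation. The common template, which this listing repeatedly cites, is the dynamics dx/dt = -x + P(x - ∇f(x)); a minimal Euler-integration sketch on a toy box-constrained problem (the solver name, step sizes, and toy problem are illustrative assumptions, not any single paper's model):

```python
import numpy as np

def neurodynamic_solve(grad, proj, x0, dt=0.01, steps=20000):
    """Euler-integrate the projection neural network
       dx/dt = -x + P(x - grad(x)),
    whose equilibria satisfy the projection equation x = P(x - grad(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + proj(x - grad(x)))
    return x

# Toy problem: minimize 0.5*||x - c||^2 over the box [0, 1]^3.
c = np.array([1.5, -0.3, 0.4])
grad = lambda x: x - c          # gradient of the quadratic objective
proj = lambda x: np.clip(x, 0.0, 1.0)  # projection onto the box
x_star = neurodynamic_solve(grad, proj, np.zeros(3))
# x_star approaches clip(c, 0, 1) = [1.0, 0.0, 0.4]
```

For this strongly convex toy problem the flow converges exponentially; the papers above extend such guarantees to nonsmooth, nonconvex, and constrained settings where the analysis is far more delicate.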
|
33
|
Zhu Y, Yu W, Wen G, Chen G. Projected Primal-Dual Dynamics for Distributed Constrained Nonsmooth Convex Optimization. IEEE TRANSACTIONS ON CYBERNETICS 2020; 50:1776-1782. [PMID: 30530351 DOI: 10.1109/tcyb.2018.2883095] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
A distributed nonsmooth convex optimization problem subject to a general type of constraint, including equality, inequality, and bound constraints, is studied in this paper for a multiagent network with a fixed and connected communication topology. To collectively solve such a complex optimization problem, primal-dual dynamics with a projection operation are investigated under optimality conditions. For the nonsmooth convex optimization problem, a framework based on LaSalle's invariance principle from nonsmooth analysis is established, where the asymptotic stability of the primal-dual dynamics at an optimal solution is guaranteed. For the case where inequality and bound constraints are not involved and the objective function is twice differentiable and strongly convex, the global exponential convergence of the primal-dual dynamics is established. Finally, two simulations are provided to verify and visualize the theoretical results.
Collapse
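The exponentially convergent case in the abstract above (equality constraints only, strongly convex objective) reduces to the classical saddle-point flow on the Lagrangian. A minimal Euler-integration sketch on a toy equality-constrained quadratic program (function name, step sizes, and the toy problem are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def primal_dual_solve(Q, b, A, d, dt=0.01, steps=10000):
    """Euler-integrate the primal-dual dynamics for
       min 0.5*x'Qx - b'x  s.t.  Ax = d
    (no inequality or bound constraints, so no projection is active):
       dx/dt   = -(Q x - b + A^T lam)
       dlam/dt = A x - d."""
    x = np.zeros(Q.shape[0])
    lam = np.zeros(A.shape[0])
    for _ in range(steps):
        x_dot = -(Q @ x - b + A.T @ lam)
        lam_dot = A @ x - d
        x, lam = x + dt * x_dot, lam + dt * lam_dot
    return x, lam

# Toy problem: min 0.5*(x1^2 + x2^2)  s.t.  x1 + x2 = 2
Q = np.eye(2); b = np.zeros(2)
A = np.array([[1.0, 1.0]]); d = np.array([2.0])
x, lam = primal_dual_solve(Q, b, A, d)
# x converges to (1, 1); the multiplier lam converges to -1
```

With Q positive definite the linearized flow has eigenvalues with strictly negative real parts, which is the mechanism behind the global exponential convergence claimed in the abstract.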
|
34
|
Distributed Neuro-Dynamic Algorithm for Price-Based Game in Energy Consumption System. Neural Process Lett 2020. [DOI: 10.1007/s11063-019-10102-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
35
|
|
36
|
Moghaddas M, Tohidi G. A neurodynamic scheme to bi-level revenue-based centralized resource allocation models. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2019. [DOI: 10.3233/jifs-182953] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Mohammad Moghaddas
- Department of Mathematics, Central Tehran Branch, Islamic Azad University, Tehran, Iran
| | - Ghasem Tohidi
- Department of Mathematics, Central Tehran Branch, Islamic Azad University, Tehran, Iran
| |
Collapse
|
37
|
Alcantara JH, Chen JS. Neural networks based on three classes of NCP-functions for solving nonlinear complementarity problems. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.05.078] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
38
|
Nazemi A, Sabeghi A. A new neural network framework for solving convex second-order cone constrained variational inequality problems with an application in multi-finger robot hands. J EXP THEOR ARTIF IN 2019. [DOI: 10.1080/0952813x.2019.1647559] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Alireza Nazemi
- Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
| | - Atiye Sabeghi
- Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
| |
Collapse
|
39
|
A combined neurodynamic approach to optimize the real-time price-based demand response management problem using mixed zero-one programming. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04283-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
40
|
A collaborative neurodynamic approach to global and combinatorial optimization. Neural Netw 2019; 114:15-27. [DOI: 10.1016/j.neunet.2019.02.002] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2018] [Revised: 12/04/2018] [Accepted: 02/04/2019] [Indexed: 11/17/2022]
|
41
|
Liu C, Li C, Li W. Computationally efficient MPC for path following of underactuated marine vessels using projection neural network. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04273-y] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
42
|
Li Z, Yuan W, Zhao S, Yu Z, Kang Y, Chen CLP. Brain-Actuated Control of Dual-Arm Robot Manipulation With Relative Motion. IEEE Trans Cogn Dev Syst 2019. [DOI: 10.1109/tcds.2017.2770168] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
43
|
Fang X, He X, Huang J. A strategy to optimize the multi-energy system in microgrid based on neurodynamic algorithm. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2018.06.053] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
44
|
Le X, Chen S, Yan Z, Xi J. A Neurodynamic Approach to Distributed Optimization With Globally Coupled Constraints. IEEE TRANSACTIONS ON CYBERNETICS 2018; 48:3149-3158. [PMID: 29053459 DOI: 10.1109/tcyb.2017.2760908] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
In this paper, a distributed neurodynamic approach is proposed for constrained convex optimization. The objective function is a sum of local convex subproblems, whereas the constraints of these subproblems are coupled. Each local objective function is minimized individually with the proposed neurodynamic optimization approach. Through information exchange between connected neighbors only, all nodes can reach consensus on the Lagrange multipliers of all global equality and inequality constraints, and the decision variables converge to the global optimum in a distributed manner. Simulation results of two power system cases are discussed to substantiate the effectiveness and characteristics of the proposed approach.
Collapse
|
45
|
Leung MF, Wang J. A Collaborative Neurodynamic Approach to Multiobjective Optimization. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:5738-5748. [PMID: 29994099 DOI: 10.1109/tnnls.2018.2806481] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
There are two ultimate goals in multiobjective optimization: the primary goal is to obtain a set of Pareto-optimal solutions, while the secondary goal is to obtain evenly distributed solutions that characterize the efficient frontier. In this paper, a collaborative neurodynamic approach to multiobjective optimization is presented to attain both Pareto optimality and solution diversity. The multiple objectives are first scalarized using a weighted Chebyshev function. Multiple projection neural networks are employed to search for Pareto-optimal solutions, with a particle swarm optimization (PSO) algorithm used for reinitialization. To diversify the Pareto-optimal solutions, a holistic approach is proposed that maximizes the hypervolume (HV), again using a PSO algorithm. The experimental results show that the proposed approach outperforms three other state-of-the-art multiobjective algorithms (i.e., HMOEA/D, MOEA/DD, and NSGA-III) most of the time on 37 benchmark datasets in terms of HV and inverted generational distance.
Collapse
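The weighted Chebyshev scalarization mentioned in the abstract above is a standard construction that turns a vector of objectives into a single value relative to an ideal point; a minimal sketch of just that step (the function name and numbers are illustrative — the paper's collaborative PSO machinery is omitted entirely):

```python
import numpy as np

def chebyshev_scalarize(objectives, weights, ideal):
    """Weighted Chebyshev scalarization of a multiobjective problem:
    returns max_i w_i * |f_i(x) - z*_i|, where z* is the ideal point."""
    f = np.asarray(objectives, dtype=float)
    return float(np.max(np.asarray(weights) * np.abs(f - np.asarray(ideal))))

# Two objective values at some candidate x, ideal point z* = (0, 0):
val = chebyshev_scalarize([2.0, 1.0], weights=[0.5, 1.0], ideal=[0.0, 0.0])
# max(0.5*2, 1.0*1) = 1.0
```

Unlike a weighted sum, minimizing this scalarization over different weight vectors can reach Pareto-optimal points on nonconvex parts of the efficient frontier, which is why it pairs naturally with a population of neural networks, one per weight vector.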
|
46
|
Mohammadi M, Mansoori A. A Projection Neural Network for Identifying Copy Number Variants. IEEE J Biomed Health Inform 2018; 23:2182-2188. [PMID: 30235154 DOI: 10.1109/jbhi.2018.2871619] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The identification of copy number variations (CNVs) aids the diagnosis of many diseases. One major hurdle in discovering CNVs is that the boundaries of normal and aberrant regions cannot be distinguished from the raw data, since various types of noise contaminate them. To tackle this challenge, total variation regularization is commonly used in the optimization problem to approximate the noise-free data from corrupted observations. Minimization with such regularization is challenging since the regularizer is non-differentiable. In this paper, we propose a projection neural network to solve this non-smooth problem. The proposed neural network has a simple one-layer structure and is theoretically guaranteed to converge globally and exponentially to the solution of the total-variation-regularized problem. Experiments on several real and simulated datasets illustrate the reasonable performance of the proposed neural network and show that its performance is comparable with those of more sophisticated algorithms.
Collapse
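To make the total-variation objective in the abstract above concrete, here is a sketch that minimizes a smoothed 1-D TV-regularized objective by plain gradient descent. This is a generic differentiable stand-in, not the paper's projection neural network; the function name, smoothing parameter, and toy signal are all illustrative assumptions:

```python
import numpy as np

def tv_denoise(y, lam=0.2, eps=1e-2, lr=0.05, steps=5000):
    """Gradient descent on the smoothed total-variation objective
       0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps),
    a differentiable surrogate for the nonsmooth TV-regularized problem."""
    x = np.asarray(y, dtype=float).copy()
    for _ in range(steps):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)   # smoothed "sign" of each jump
        g = x - y                       # gradient of the data-fit term
        g[:-1] -= lam * w               # d/dx_i   of the i-th TV term
        g[1:] += lam * w                # d/dx_{i+1} of the i-th TV term
        x -= lr * g
    return x

# A noisy two-level signal: TV denoising flattens within-segment noise
# while preserving the jump, mimicking CNV segment boundaries.
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(20), np.ones(20)]) + 0.05 * rng.standard_normal(40)
x = tv_denoise(y)
```

The piecewise-constant prior encoded by TV is what lets the segment boundaries survive while the noise is suppressed; the cited paper replaces this generic descent with a one-layer network carrying an exponential convergence guarantee.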
|
47
|
Zhao Y, He X, Huang T, Han Q. Analog circuits for solving a class of variational inequality problems. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.03.016] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
48
|
Yang S, Liu Q, Wang J. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:981-992. [PMID: 28166509 DOI: 10.1109/tnnls.2017.2652478] [Citation(s) in RCA: 66] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.
Collapse
|
49
|
Dai X, Li C, He X, Li C. Nonnegative matrix factorization algorithms based on the inertial projection neural network. Neural Comput Appl 2018. [DOI: 10.1007/s00521-017-3337-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
50
|
|