1. Zhang M, He X. A continuous-time neurodynamic approach in matrix form for rank minimization. Neural Netw 2024; 172:106128. [PMID: 38242008] [DOI: 10.1016/j.neunet.2024.106128]
Abstract
This article proposes a continuous-time neurodynamic approach for solving the rank minimization problem under affine constraints. In contrast to traditional neurodynamic approaches, the proposed approach extends the variables from vector form to matrix form. First, a continuous-time neurodynamic approach with variables in matrix form is developed by combining the optimal rank-r projection with the gradient. Then, the optimality of the proposed approach is rigorously analyzed by showing that the objective function satisfies the functional property known as (2r,4r)-restricted strong convexity and smoothness ((2r,4r)-RSCS). Furthermore, the convergence and stability of the proposed approach are rigorously analyzed by constructing appropriate Lyapunov functions and invoking the relevant restricted isometry property (RIP) condition associated with the affine transformation. Finally, experiments on low-rank matrix recovery under affine transformations and on low-rank real-image completion demonstrate the effectiveness of the approach and its superiority over the vector-based approach.
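The abstract describes the core update as a gradient step on the affine data-fit term followed by an optimal rank-r projection. The continuous-time matrix dynamics themselves are not given here; as a rough discrete-time analogue of that idea only, the sketch below runs a singular-value-projection iteration for min 0.5‖A vec(X) − b‖² subject to rank(X) ≤ r. The function names, step-size choice, and toy problem sizes are illustrative assumptions, not the authors' model.

```python
import numpy as np

def rank_r_projection(X, r):
    """Best rank-r approximation of X via a truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def svp_recover(A, b, shape, r, iters=300):
    """Illustrative singular-value-projection iteration for
    min 0.5*||A @ vec(X) - b||^2 subject to rank(X) <= r."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step from the spectral norm of A
    X = np.zeros(shape)
    for _ in range(iters):
        grad = (A.T @ (A @ X.ravel() - b)).reshape(shape)  # gradient of the data-fit term
        X = rank_r_projection(X - step * grad, r)          # gradient step, then rank-r projection
    return X

# toy example: recover a random rank-2 matrix from random affine measurements
rng = np.random.default_rng(0)
m, n, r = 12, 10, 2
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((200, m * n)) / np.sqrt(200)
b = A @ X_true.ravel()
X_hat = svp_recover(A, b, (m, n), r)
print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```

Here the truncated SVD plays the role of the "optimal rank r projection" in the abstract, and the least-squares residual supplies the gradient term.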
Affiliations: Meng Zhang, Xing He - Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Southwest University, 400715, Chongqing, China.
2. Xiao L, He Y, Wang Y, Dai J, Wang R, Tang W. A Segmented Variable-Parameter ZNN for Dynamic Quadratic Minimization With Improved Convergence and Robustness. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:2413-2424. [PMID: 34464280] [DOI: 10.1109/tnnls.2021.3106640]
Abstract
As a category of recurrent neural network (RNN), the zeroing neural network (ZNN) can effectively handle time-variant optimization problems. Compared with the fixed-parameter ZNN, which must be tuned frequently to achieve good performance, the conventional variable-parameter ZNN (VPZNN) does not require frequent tuning, but its variable parameter tends to infinity as time grows. In addition, the existing noise-tolerant ZNN model does not cope well with time-varying noise. Therefore, a new segmented VPZNN (SVPZNN) for handling the dynamic quadratic minimization issue (DQMI) is presented in this work. Unlike previous ZNNs, the SVPZNN includes an integral term and a nonlinear activation function, in addition to two specially constructed time-varying piecewise parameters. This structure keeps the time-varying parameters stable and gives the model strong noise-tolerance capability. Moreover, a theoretical analysis of the SVPZNN is provided to determine the upper bound of the convergence time in the absence or presence of noise interference. Numerical simulations verify that the SVPZNN has a shorter convergence time and better robustness than existing ZNN models when handling the DQMI.
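The SVPZNN itself involves segmented time-varying parameters, an integral term, and a nonlinear activation that are not specified in the abstract. For orientation only, the sketch below applies the baseline fixed-parameter ZNN design formula ė(t) = −γe(t) to a toy time-varying quadratic minimization and integrates it with Euler's method; the matrices A(t), b(t), the gain γ, and the step dt are made-up illustrations.

```python
import numpy as np

# Baseline fixed-parameter ZNN sketch for the time-varying quadratic minimization
#   min_x 0.5 x' A(t) x + b(t)' x,  with optimality residual e(t) = A(t) x + b(t).
# ZNN design formula with a linear activation: de/dt = -gamma * e, which unfolds to
#   A(t) dx/dt = -dA/dt x - db/dt - gamma * (A(t) x + b(t)).

A  = lambda t: np.array([[2 + np.sin(t), 0.0], [0.0, 2 + np.cos(t)]])
dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
b  = lambda t: np.array([np.cos(t), np.sin(t)])
db = lambda t: np.array([-np.sin(t), np.cos(t)])

gamma, dt, T = 10.0, 1e-3, 10.0
x = np.zeros(2)                          # arbitrary initial state
for k in range(int(T / dt)):
    t = k * dt
    e = A(t) @ x + b(t)                  # residual the ZNN drives to zero
    xdot = np.linalg.solve(A(t), -dA(t) @ x - db(t) - gamma * e)
    x = x + dt * xdot                    # simple Euler integration

print("tracking error at t = T:", np.linalg.norm(A(T) @ x + b(T)))
```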
3. He X, Wen H, Huang T. A Fixed-Time Projection Neural Network for Solving L₁-Minimization Problem. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:7818-7828. [PMID: 34166204] [DOI: 10.1109/tnnls.2021.3088535]
Abstract
In this article, a new projection neural network (PNN) for solving the L1-minimization problem is proposed, based on the classic PNN and the sliding mode control technique. The proposed network can be used for sparse signal reconstruction and image reconstruction. First, a sign function is introduced into the PNN model to design a fixed-time PNN (FPNN). Then, under the condition that the projection matrix satisfies the restricted isometry property (RIP), the stability and fixed-time convergence of the proposed FPNN are proved by the Lyapunov method. Finally, experimental results on signal simulation and image reconstruction show the effectiveness and superiority of the proposed FPNN compared with existing PNNs.
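The fixed-time FPNN adds a sign-function term to the classic PNN; that exact model is not reproduced here. As a minimal neurodynamic point of reference for the same L1-flavored problem, the sketch below Euler-integrates a classic soft-threshold (locally competitive) dynamics for the convex surrogate min 0.5‖Ax − y‖² + λ‖x‖₁; the parameters, problem sizes, and function names are arbitrary assumptions.

```python
import numpy as np

def soft(u, lam):
    """Soft-thresholding activation."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def l1_neurodynamics(A, y, lam=0.01, tau=0.1, dt=0.01, iters=5000):
    """Euler-discretized soft-threshold neurodynamics for
    min_x 0.5*||A x - y||^2 + lam*||x||_1 (a convex surrogate of L1-minimization):
        tau * du/dt = A'y - u - (A'A - I) soft(u, lam),  with output x = soft(u, lam)."""
    n = A.shape[1]
    G = A.T @ A - np.eye(n)          # lateral-inhibition coupling
    drive = A.T @ y
    u = np.zeros(n)
    for _ in range(iters):
        x = soft(u, lam)
        u = u + (dt / tau) * (drive - u - G @ x)
    return soft(u, lam)

# toy sparse-recovery example
rng = np.random.default_rng(1)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k) + 2.0
y = A @ x_true
x_hat = l1_neurodynamics(A, y)
print("true support:       ", np.sort(support))
print("largest recovered at:", np.sort(np.argsort(-np.abs(x_hat))[:k]))
```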
4. Li X, Wang J, Kwong S. Hash Bit Selection Based on Collaborative Neurodynamic Optimization. IEEE Transactions on Cybernetics 2022; 52:11144-11155. [PMID: 34415845] [DOI: 10.1109/tcyb.2021.3102941]
Abstract
Hash bit selection determines an optimal subset of hash bits from a candidate bit pool. It is formulated as a zero-one quadratic programming problem subject to binary and cardinality constraints. In this article, the problem is equivalently reformulated as a global optimization problem. A collaborative neurodynamic optimization (CNO) approach is applied to solve it by using a group of neurodynamic models that are iteratively reinitialized with particle swarm optimization. Lévy mutation is used in the CNO to avoid premature convergence by ensuring initial-state diversity. A theoretical proof shows that the CNO with the Lévy mutation operator is almost surely convergent to global optima. Experimental results are discussed to substantiate the efficacy and superiority of the CNO-based hash bit selection method over existing methods on three benchmarks.
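The CNO solver itself (multiple neurodynamic models reinitialized by PSO with Lévy mutation) is not sketched here; the snippet below only makes the underlying problem shape concrete: a zero-one quadratic program with a cardinality constraint, solved by brute force on a toy instance. The matrix Q and the sizes are made up for illustration.

```python
import numpy as np
from itertools import combinations

# Illustrative zero-one QP with a cardinality constraint (formulation only):
#   min_x  x' Q x   subject to   x in {0,1}^n,  sum(x) = k
rng = np.random.default_rng(5)
n, k = 10, 3
B = rng.standard_normal((n, n))
Q = B @ B.T                     # e.g., a redundancy/similarity matrix between candidate bits

best_val, best_subset = np.inf, None
for subset in combinations(range(n), k):   # enumerate all cardinality-k selections
    x = np.zeros(n)
    x[list(subset)] = 1.0
    val = x @ Q @ x
    if val < best_val:
        best_val, best_subset = val, subset
print("selected bits:", best_subset, " objective:", round(best_val, 3))
```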
5. Wang X, Park JH, Yang H, Zhong S. A New Settling-time Estimation Protocol to Finite-time Synchronization of Impulsive Memristor-Based Neural Networks. IEEE Transactions on Cybernetics 2022; 52:4312-4322. [PMID: 33055055] [DOI: 10.1109/tcyb.2020.3025932]
Abstract
In this article, finite-time synchronization and finite-time adaptive synchronization of impulsive memristive neural networks (IMNNs) with discontinuous activation functions (DAFs) and hybrid impulsive effects are investigated, where stabilizing impulses (SIs), inactive impulses (IIs), and destabilizing impulses (DIs) are considered, respectively. Unlike several earlier works, a broader range of impulsive effects is analyzed without using the known average impulsive interval strategy (AIIS). In light of the theories of differential inclusions and set-valued maps, as well as impulsive control, new sufficient criteria on the estimated settling time for synchronization of the related IMNNs are established using two types of switching control approaches, which exploit information not only from the SIs, DIs, and DAFs but also from the impulse sequences. Two simulation experiments are presented to demonstrate the efficiency of the proposed results.
6. Zhao Y, Liao X, He X. Novel projection neurodynamic approaches for constrained convex optimization. Neural Netw 2022; 150:336-349. [DOI: 10.1016/j.neunet.2022.03.011]
7. A finite-time projection neural network to solve the joint optimal dispatching problem of CHP and wind power. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06867-x]
8. Xiao H, Zhu Q, Karimi HR. Stability of stochastic delay switched neural networks with all unstable subsystems: A multiple discretized Lyapunov-Krasovskii functionals method. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2021.09.027]
9. Global dynamics and learning algorithm of non-autonomous neural networks with time-varying delays. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.03.093]
10. Wu Z, Karimi HR, Dang C. A Deterministic Annealing Neural Network Algorithm for the Minimum Concave Cost Transportation Problem. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:4354-4366. [PMID: 31869806] [DOI: 10.1109/tnnls.2019.2955137]
Abstract
In this article, a deterministic annealing neural network algorithm is proposed to solve the minimum concave cost transportation problem. Specifically, the algorithm is derived from two neural network models and Lagrange-barrier functions. The Lagrange function is used to handle the linear equality constraints, and the barrier function is used to force the solution toward a global or near-global optimal solution. In both neural network models, two descent directions are constructed, and an iterative procedure for the optimization of the neural network is proposed. As a result, two corresponding Lyapunov functions are naturally obtained from these descent directions. Furthermore, the proposed neural network models are proved to be completely stable and to converge to the stable equilibrium state; therefore, the proposed algorithm converges. Finally, computer simulations on several test problems are carried out, and the results indicate that the proposed algorithm always generates global or near-global optimal solutions.
11. Li X, Wang J, Kwong S. A Discrete-Time Neurodynamic Approach to Sparsity-Constrained Nonnegative Matrix Factorization. Neural Comput 2020; 32:1531-1562. [PMID: 32521214] [DOI: 10.1162/neco_a_01294]
Abstract
Sparsity is a desirable property in many nonnegative matrix factorization (NMF) applications. Although some level of sparseness of NMF solutions can be achieved by using regularization, the resulting sparsity depends strongly on a regularization parameter that must be set in an ad hoc way. In this letter, we formulate sparse NMF as a mixed-integer optimization problem with sparsity imposed as binary constraints. A discrete-time projection neural network is developed for solving the formulated problem. Sufficient conditions for its stability and convergence are analytically characterized by using Lyapunov's method. Experimental results on sparse feature extraction are discussed to substantiate the superiority of this approach in extracting highly sparse features.
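The letter handles sparsity through binary constraints inside a discrete-time projection neural network; that formulation is not reproduced here. The snippet below only illustrates the constraint set involved: the Euclidean projection of a factor column onto nonnegative vectors with at most k nonzero entries (clamp the negatives, then keep the k largest). The function name is an illustrative assumption.

```python
import numpy as np

def project_sparse_nonneg(v, k):
    """Euclidean projection of v onto {u : u >= 0, ||u||_0 <= k}:
    clamp negative entries to zero, then keep only the k largest."""
    v = np.maximum(np.asarray(v, dtype=float), 0.0)
    keep = np.argsort(v)[-k:]          # indices of the k largest (nonnegative) entries
    out = np.zeros_like(v)
    out[keep] = v[keep]
    return out

# enforce at most 2 active entries in a factor column
print(project_sparse_nonneg([0.3, -1.0, 0.7, 0.1, 0.05], k=2))
# keeps only the two largest nonnegative entries, 0.7 and 0.3
```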
Affiliations: Xinqi Li, Jun Wang, Sam Kwong - Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong, and Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China
12. A projection-based recurrent neural network and its application in solving convex quadratic bilevel optimization problems. Neural Comput Appl 2020. [DOI: 10.1007/s00521-019-04391-7]
13. Wang H, Liu PX, Bao J, Xie XJ, Li S. Adaptive Neural Output-Feedback Decentralized Control for Large-Scale Nonlinear Systems With Stochastic Disturbances. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:972-983. [PMID: 31265406] [DOI: 10.1109/tnnls.2019.2912082]
Abstract
This paper addresses the problem of adaptive neural output-feedback decentralized control for a class of strongly interconnected nonlinear systems subject to stochastic disturbances. A state observer is designed to approximate the unmeasurable state signals. Using the approximation capability of radial basis function neural networks (NNs) and a classic adaptive control strategy, an observer-based adaptive backstepping decentralized controller is developed. In the control design process, NNs are applied to model the uncertain nonlinear functions, and adaptive control and backstepping are combined to construct the controller. The developed control scheme guarantees that all signals in the closed-loop systems are semiglobally uniformly ultimately bounded in the fourth moment. Simulation results demonstrate the effectiveness of the presented control scheme.
14. Liang X, He X, Huang T. Distributed Neuro-Dynamic Optimization for Multi-Objective Power Management Problem in Micro-Grid. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.05.096]
15. Moghaddas M, Tohidi G. A neurodynamic scheme to bi-level revenue-based centralized resource allocation models. Journal of Intelligent & Fuzzy Systems 2019. [DOI: 10.3233/jifs-182953]
Affiliations: Mohammad Moghaddas, Ghasem Tohidi - Department of Mathematics, Central Tehran Branch, Islamic Azad University, Tehran, Iran
17. Wu Z, Karimi HR, Dang C. An approximation algorithm for graph partitioning via deterministic annealing neural network. Neural Netw 2019; 117:191-200. [PMID: 31174047] [DOI: 10.1016/j.neunet.2019.05.010]
Abstract
Graph partitioning, a classical NP-hard combinatorial optimization problem, is widely applied to industrial and management problems. In this study, an approximate solution of the graph partitioning problem is obtained using a deterministic annealing neural network algorithm. The algorithm is a continuation method that attempts to obtain a high-quality solution by following a path of minimum points of a barrier problem as the barrier parameter is reduced from a sufficiently large positive number to 0. For any positive value of the barrier parameter, a minimum solution of the barrier problem can be found by the algorithm in a feasible descent direction. With a globally convergent iterative procedure, the feasible descent direction is obtained by updating the Lagrange multipliers. A distinctive feature of the algorithm is that the upper and lower bounds on the variables are automatically satisfied provided that the step length takes a value between 0 and 1. Four well-known algorithms are compared with the proposed one on 100 test samples. Simulation results show the effectiveness of the proposed algorithm.
Affiliations: Zhengtian Wu - School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China, and Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy; Hamid Reza Karimi - Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy; Chuangyin Dang - Department of Systems Engineering and Engineering Management, City University of Hong Kong, Hong Kong
18. Xu B, Liu Q, Huang T. A Discrete-Time Projection Neural Network for Sparse Signal Reconstruction With Application to Face Recognition. IEEE Transactions on Neural Networks and Learning Systems 2019; 30:151-162. [PMID: 29994338] [DOI: 10.1109/tnnls.2018.2836933]
Abstract
This paper deals with sparse signal reconstruction by designing a discrete-time projection neural network. Sparse signal reconstruction can be converted into an L1-minimization problem, which can in turn be transformed into the unconstrained basis pursuit denoising problem. To solve the L1-minimization problem, an iterative algorithm is proposed based on the discrete-time projection neural network, and the global convergence of the algorithm is analyzed using the Lyapunov method. Experiments on sparse signal reconstruction and several popular face data sets are organized to illustrate the effectiveness and performance of the proposed algorithm. The experimental results show that the proposed algorithm is not only robust to different levels of sparsity, signal amplitudes, and noise pixels but also insensitive to diverse values of the scalar weight. Moreover, the step size of the proposed algorithm is close to 1/2, so a fast convergence rate is attainable. Furthermore, the proposed algorithm achieves better classification performance than several other algorithms for face recognition.
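The paper's discrete-time projection neural network and its step-size bound are not reproduced here; as a minimal baseline for the same unconstrained basis pursuit denoising objective mentioned in the abstract, the sketch below is plain iterative soft-thresholding (ISTA) with the usual 1/L step, on a made-up compressed-sensing example.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Iterative soft-thresholding for the unconstrained basis pursuit
    denoising problem  min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth-term gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)              # gradient of the smooth term
        z = x - step * g                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # proximal (soft-threshold) step
    return x

rng = np.random.default_rng(3)
n, m, k = 256, 80, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista(A, y, lam=0.02)
print("recovered support size:", np.count_nonzero(np.abs(x_hat) > 0.1))
```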
19. Zhu L, Wang J, He X, Zhao Y. An inertial projection neural network for sparse signal reconstruction via l1−2 minimization. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.06.050]
20. Leung MF, Wang J. A Collaborative Neurodynamic Approach to Multiobjective Optimization. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:5738-5748. [PMID: 29994099] [DOI: 10.1109/tnnls.2018.2806481]
Abstract
There are two ultimate goals in multiobjective optimization. The primary goal is to obtain a set of Pareto-optimal solutions, while the secondary goal is to obtain evenly distributed solutions that characterize the efficient frontier. In this paper, a collaborative neurodynamic approach to multiobjective optimization is presented to attain both Pareto optimality and solution diversity. The multiple objectives are first scalarized using a weighted Chebyshev function. Multiple projection neural networks are employed to search for Pareto-optimal solutions with the help of a particle swarm optimization (PSO) algorithm for reinitialization. To diversify the Pareto-optimal solutions, a holistic approach is proposed that maximizes the hypervolume (HV), again using a PSO algorithm. The experimental results show that the proposed approach outperforms three other state-of-the-art multiobjective algorithms (i.e., HMOEA/D, MOEA/DD, and NSGA-III) most of the time on 37 benchmark datasets in terms of HV and inverted generational distance.
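The collaborative PNN-plus-PSO machinery is not sketched here; the snippet below only illustrates the weighted Chebyshev scalarization step the abstract builds on, max_i w_i|f_i(x) − z_i*|, and how sweeping the weight vector picks out different Pareto-optimal trade-offs on a toy two-objective problem. The objectives and the grid search are made up for illustration.

```python
import numpy as np

def chebyshev_scalarization(fvals, weights, ideal):
    """Weighted Chebyshev scalarization of a vector of objective values:
    max_i w_i * |f_i(x) - z_i*|, where z* is the ideal (utopia) point."""
    return np.max(weights * np.abs(np.asarray(fvals) - np.asarray(ideal)))

# two objectives of a scalar decision variable x in [0, 2]
f = lambda x: np.array([x ** 2, (x - 2.0) ** 2])
ideal = np.array([0.0, 0.0])

# sweeping the weight vector traces different Pareto-optimal trade-offs
for w1 in (0.1, 0.5, 0.9):
    w = np.array([w1, 1.0 - w1])
    grid = np.linspace(0.0, 2.0, 2001)
    scores = [chebyshev_scalarization(f(x), w, ideal) for x in grid]
    best = grid[int(np.argmin(scores))]
    print(f"weights {w}: x* = {best:.3f}, objectives = {f(best).round(3)}")
```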
21. Kan X, Liang J, Liu Y, Alsaadi FE. Robust H∞ state estimation for BAM neural networks with randomly occurring uncertainties and sensor saturations. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.05.062]
22. Guo D, Xu F, Yan L, Nie Z, Shao H. A New Noise-Tolerant Obstacle Avoidance Scheme for Motion Planning of Redundant Robot Manipulators. Front Neurorobot 2018; 12:51. [PMID: 30210328] [PMCID: PMC6124349] [DOI: 10.3389/fnbot.2018.00051]
Abstract
Avoiding obstacles is a challenging issue in the research of redundant robot manipulators. In addition, noise from truncation, rounding, and model uncertainty is an important factor that greatly affects obstacle avoidance schemes. In this paper, based on the neural dynamics design formula, a new scheme with a pseudoinverse-type formulation is proposed for obstacle avoidance of redundant robot manipulators in a noisy environment. The scheme has the capability of suppressing constant and bounded time-varying noises, and it is thus termed the noise-tolerant obstacle avoidance (NTOA) scheme in this paper. Theoretical results are also given to show the excellent properties of the proposed NTOA scheme, particularly in noisy situations. Based on a PA10 robot manipulator with point and window-shaped obstacles, computer simulation results are presented to further substantiate the efficacy and superiority of the proposed NTOA scheme for motion planning of redundant robot manipulators.
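The NTOA scheme adds a neural-dynamics noise-suppression term and obstacle-specific handling that are not reproduced here. For context only, the sketch below shows the underlying pseudoinverse-type redundancy-resolution formulation such schemes refine: q̇ = J⁺ẋ + (I − J⁺J)z, where the null-space term z can encode a secondary objective such as steering away from an obstacle. The Jacobian and velocities are arbitrary numbers.

```python
import numpy as np

def redundancy_resolution(J, xdot_desired, z=None):
    """Pseudoinverse-type resolution of joint velocities for a redundant arm:
    qdot = J^+ xdot + (I - J^+ J) z, with z an optional secondary-objective
    joint velocity pushed through the null space of J."""
    J_pinv = np.linalg.pinv(J)
    n = J.shape[1]
    qdot = J_pinv @ xdot_desired
    if z is not None:
        qdot += (np.eye(n) - J_pinv @ J) @ z   # null-space component: no effect on the end-effector
    return qdot

# toy 3-joint planar arm tracking a 2-D end-effector velocity
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.4]])
xdot = np.array([0.1, -0.05])
z = np.array([0.0, 0.0, 0.3])                  # e.g., a joint motion that moves away from an obstacle
qdot = redundancy_resolution(J, xdot, z)
print("qdot =", qdot, " end-effector check:", J @ qdot)
```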
Affiliations: Dongsheng Guo, Feng Xu, Laicheng Yan, Zhuoyun Nie, Hui Shao - College of Information Science and Engineering, Huaqiao University, Xiamen, China
23. Zhao Y, He X, Huang T, Han Q. Analog circuits for solving a class of variational inequality problems. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.03.016]
24. Huang Z, Yang C, Zhou X, Gui W. A Novel Cognitively Inspired State Transition Algorithm for Solving the Linear Bi-Level Programming Problem. Cognit Comput 2018. [DOI: 10.1007/s12559-018-9561-1]
25. A nonnegative matrix factorization algorithm based on a discrete-time projection neural network. Neural Netw 2018; 103:63-71. [PMID: 29642020] [DOI: 10.1016/j.neunet.2018.03.003]
Abstract
This paper presents an algorithm for nonnegative matrix factorization based on a biconvex optimization formulation. First, a discrete-time projection neural network is introduced. An upper bound of its step size is derived to guarantee the stability of the neural network. Then, an algorithm is proposed based on the discrete-time projection neural network and a backtracking step-size adaptation. The proposed algorithm is proven to be able to reduce the objective function value iteratively until attaining a partial optimum of the formulated biconvex optimization problem. Experimental results based on various data sets are presented to substantiate the efficacy of the algorithm.
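The exact projection-neural-network update and step-size bound are in the paper; the sketch below is only a generic alternating projected-gradient NMF loop with a simple backtracking (shrink-until-non-increase) step-size adaptation, to make the idea of reducing the objective iteratively toward a partial optimum concrete. Function names and the decrease test are simplified assumptions.

```python
import numpy as np

def nmf_pgd_backtracking(X, r, outer_iters=100, beta=0.5, rng=None):
    """Alternating projected-gradient NMF sketch with a backtracking step size:
    shrink the step until the nonnegative-projected update does not increase
    0.5*||X - W H||_F^2."""
    rng = np.random.default_rng(rng)
    m, n = X.shape
    W = np.abs(rng.standard_normal((m, r)))
    H = np.abs(rng.standard_normal((r, n)))

    def loss(W, H):
        return 0.5 * np.linalg.norm(X - W @ H) ** 2

    def update(M, grad, partial_loss):
        step = 1.0
        for _ in range(30):                              # backtracking loop
            M_new = np.maximum(M - step * grad, 0.0)     # projected gradient trial step
            if partial_loss(M_new) <= partial_loss(M):
                return M_new
            step *= beta
        return M                                         # give up: keep the current factor

    for _ in range(outer_iters):
        W = update(W, (W @ H - X) @ H.T, lambda Wc: loss(Wc, H))
        H = update(H, W.T @ (W @ H - X), lambda Hc: loss(W, Hc))
    return W, H

X = np.abs(np.random.default_rng(4).standard_normal((40, 25)))
W, H = nmf_pgd_backtracking(X, r=6)
print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```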
27. Xie X, Wen S, Zeng Z, Huang T. Memristor-based circuit implementation of pulse-coupled neural network with dynamical threshold generators. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.01.024]
28. Zhao Y, He X, Huang T, Huang J. Smoothing inertial projection neural network for minimization Lp−q in sparse signal reconstruction. Neural Netw 2018; 99:31-41. [DOI: 10.1016/j.neunet.2017.12.008]
29. Global Dissipativity of Inertial Neural Networks with Proportional Delay via New Generalized Halanay Inequalities. Neural Process Lett 2018. [DOI: 10.1007/s11063-018-9788-6]
30. Dai X, Li C, He X, Li C. Nonnegative matrix factorization algorithms based on the inertial projection neural network. Neural Comput Appl 2018. [DOI: 10.1007/s00521-017-3337-5]
31. Ma Y, Ma N, Chen L, Zheng Y, Han Y. Exponential stability for the neutral-type singular neural network with time-varying delays. Int J Mach Learn Cyb 2017. [DOI: 10.1007/s13042-017-0764-7]
33. Qin S, Le X, Wang J. A Neurodynamic Optimization Approach to Bilevel Quadratic Programming. IEEE Transactions on Neural Networks and Learning Systems 2017; 28:2580-2591. [PMID: 28113639] [DOI: 10.1109/tnnls.2016.2595489]
Abstract
This paper presents a neurodynamic optimization approach to bilevel quadratic programming (BQP). Based on the Karush-Kuhn-Tucker (KKT) theorem, the BQP problem is reduced to a single-level mathematical program with complementarity constraints (MPCC). It is proved that the global solution of the MPCC is the minimal one among the optimal solutions of multiple convex optimization subproblems. A recurrent neural network is developed for solving these convex optimization subproblems. From any initial state, the state of the proposed neural network converges to an equilibrium point of the neural network, which is exactly the optimal solution of the convex optimization subproblem. Compared with existing recurrent neural networks for BQP, the proposed neural network is guaranteed to deliver the exact optimal solution to any convex BQP problem. Moreover, it is proved that the proposed neural network for bilevel linear programming converges to an equilibrium point in finite time. Finally, three numerical examples are elaborated to substantiate the efficacy of the proposed approach.
34. Wen S, Zeng Z, Chen MZQ, Huang T. Synchronization of Switched Neural Networks With Communication Delays via the Event-Triggered Control. IEEE Transactions on Neural Networks and Learning Systems 2017; 28:2334-2343. [PMID: 27429449] [DOI: 10.1109/tnnls.2016.2580609]
Abstract
This paper addresses the synchronization of switched delayed neural networks with communication delays via event-triggered control. For synchronizing coupled switched neural networks, we propose a novel event-triggered control law that can greatly reduce the number of control updates for synchronization tasks involving embedded microprocessors with limited on-board resources. The control signals are driven by properly defined events, which depend on the measurement errors and the currently sampled states. Using a delay-system method, a novel model of the synchronization error system with delays is proposed that treats the communication delays and the event-triggered control in a unified framework for coupled switched neural networks. Criteria are derived for event-triggered synchronization analysis and control synthesis of switched neural networks via the Lyapunov-Krasovskii functional method and the free-weighting-matrix approach. A numerical example is elaborated to illustrate the effectiveness of the derived results.
35. Zhang W, Huang T, Li C, Yang J. Robust Stability of Inertial BAM Neural Networks with Time Delays and Uncertainties via Impulsive Effect. Neural Process Lett 2017. [DOI: 10.1007/s11063-017-9713-4]
36. Wang T, He X, Huang T, Li C, Zhang W. Collective neurodynamic optimization for economic emission dispatch problem considering valve point effect in microgrid. Neural Netw 2017; 93:126-136. [DOI: 10.1016/j.neunet.2017.05.004]
37. Feng J, Qin S, Shi F, Zhao X. A recurrent neural network with finite-time convergence for convex quadratic bilevel programming problems. Neural Comput Appl 2017. [DOI: 10.1007/s00521-017-2926-7]
38. Sun M, He X, Wang T, Tan J, Xia D. Circuit implementation of digitally programmable transconductance amplifier in analog simulation of reaction-diffusion neural model. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.07.062]
40. He X, Huang T, Yu J, Li C, Li C. An Inertial Projection Neural Network for Solving Variational Inequalities. IEEE Transactions on Cybernetics 2017; 47:809-814. [PMID: 26887026] [DOI: 10.1109/tcyb.2016.2523541]
Abstract
Recently, the projection neural network (PNN) was proposed for solving monotone variational inequalities (VIs) and related convex optimization problems. In this paper, by incorporating an inertial term into first-order PNNs, an inertial PNN (IPNN) is proposed for solving VIs. Under certain conditions, the IPNN is proved to be stable and can be applied to solve a broader class of constrained optimization problems related to VIs. Compared with existing neural networks (NNs), the presence of the inertial term allows us to overcome some drawbacks of many NNs that are constructed based on the steepest descent method, and the model is more convenient for exploring different Karush-Kuhn-Tucker optimal solutions of nonconvex optimization problems. Finally, simulation results on three numerical examples show the effectiveness and performance of the proposed NN.
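The IPNN is a continuous-time model; as a rough discrete analogue of how the inertial term enters, the sketch below runs an inertial (momentum) projection iteration x_{k+1} = P_Ω(y_k − αF(y_k)) with y_k = x_k + β(x_k − x_{k−1}) on a small monotone affine VI over a box. The step sizes and the example data are illustrative assumptions.

```python
import numpy as np

def inertial_projection_vi(F, project, x0, alpha=0.05, beta=0.3, iters=2000):
    """Discrete inertial projection iteration for the variational inequality
    VI(F, Omega): find x* in Omega with F(x*)'(x - x*) >= 0 for all x in Omega.
        y_k     = x_k + beta * (x_k - x_{k-1})      # inertial (momentum) term
        x_{k+1} = P_Omega(y_k - alpha * F(y_k))     # projection step
    """
    x_prev, x = np.asarray(x0, float), np.asarray(x0, float)
    for _ in range(iters):
        y = x + beta * (x - x_prev)
        x_prev, x = x, project(y - alpha * F(y))
    return x

# example: monotone affine VI on the box [0, 2]^2
M = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-4.0, 1.0])
F = lambda x: M @ x + q
project = lambda x: np.clip(x, 0.0, 2.0)
x_star = inertial_projection_vi(F, project, x0=np.zeros(2))
print("x* =", x_star, " F(x*) =", F(x_star))
```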
41. New Criteria on Exponential Lag Synchronization of Switched Neural Networks with Time-Varying Delays. Neural Process Lett 2017. [DOI: 10.1007/s11063-017-9599-1]
42. Zhang P, Li C, Huang T, Chen L, Chen Y. Forgetting memristor based neuromorphic system for pattern training and recognition. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.10.012]
43. Tan J, Li C. Finite-Time Stability of Neural Networks with Impulse Effects and Time-Varying Delay. Neural Process Lett 2016. [DOI: 10.1007/s11063-016-9570-6]
44. Guo D, Nie Z, Yan L. Theoretical analysis, numerical verification and geometrical representation of new three-step DTZD algorithm for time-varying nonlinear equations solving. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.06.032]
46. Che H, Li C, He X, Huang T. A recurrent neural network for adaptive beamforming and array correction. Neural Netw 2016; 80:110-117. [DOI: 10.1016/j.neunet.2016.04.010]
48. Zhou B, Liao X, Huang T, Wang H, Chen G. Distributed multi-agent optimization with inequality constraints and random projections. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.02.064]
49. Xu J, Li C, He X, Huang T. Recurrent neural network for solving model predictive control problem in application of four-tank benchmark. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.01.020]