1. Gao X, Liao LZ. Novel Continuous- and Discrete-Time Neural Networks for Solving Quadratic Minimax Problems With Linear Equality Constraints. IEEE Transactions on Neural Networks and Learning Systems 2024;35:9814-9828. PMID: 37022226. DOI: 10.1109/tnnls.2023.3236695.
Abstract
This article presents two novel continuous- and discrete-time neural networks (NNs) for solving quadratic minimax problems with linear equality constraints. The two NNs are established based on the saddle-point conditions of the underlying function. For both NNs, a suitable Lyapunov function is constructed to show that they are stable in the sense of Lyapunov and converge to a saddle point from any starting point under mild conditions. Compared with existing NNs for solving quadratic minimax problems, the proposed NNs require weaker stability conditions. The validity and transient behavior of the proposed models are illustrated by simulation results.
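The saddle-point idea behind such networks can be illustrated with a generic primal-dual gradient flow (descent in the minimizing variable, ascent in the maximizing variable and the multiplier), Euler-discretized. The toy problem below is hypothetical and is not the model from the cited paper; its saddle point can be verified by hand.

```python
import numpy as np

# Hypothetical quadratic minimax instance with one linear equality constraint:
#   min_x max_y  0.5*||x||^2 + y*(x1 - x2) - 0.5*y^2   s.t.  x1 + x2 = 2
# Hand calculation gives the saddle point x* = (1, 1), y* = 0 and
# Lagrange multiplier lam* = -1.

a = np.array([1.0, -1.0])   # couples x and y
e = np.array([1.0, 1.0])    # equality constraint e.x = 2

x = np.array([5.0, -3.0])   # arbitrary starting point
y, lam = 2.0, 0.0
eta = 0.05                  # Euler step size

for _ in range(20_000):
    gx = x + y * a + lam * e        # dL/dx (descent direction)
    gy = a @ x - y                  # dL/dy (ascent direction)
    gl = e @ x - 2.0                # dL/dlam (ascent direction)
    x = x - eta * gx
    y = y + eta * gy
    lam = lam + eta * gl

# x approaches (1, 1), y approaches 0, lam approaches -1
```

The linearized flow here has eigenvalues with strictly positive real part, so the simultaneous Euler update converges for a small enough step.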
2. Xia Y, Wang J, Lu Z, Huang L. Two Recurrent Neural Networks With Reduced Model Complexity for Constrained l₁-Norm Optimization. IEEE Transactions on Neural Networks and Learning Systems 2023;34:6173-6185. PMID: 34986103. DOI: 10.1109/tnnls.2021.3133836.
Abstract
Because of the robustness and sparsity properties of least absolute deviation (LAD, or l1-norm) optimization, developing effective solution methods is an important topic. Recurrent neural networks (RNNs) are reported to be capable of effectively solving constrained l1-norm optimization problems, but their convergence speed is limited. To accelerate convergence, this article introduces two RNNs, in the form of continuous- and discrete-time systems, for solving l1-norm optimization problems with linear equality and inequality constraints. The RNNs are theoretically proven to be globally convergent to optimal solutions without any additional condition. With reduced model complexity, the two RNNs can significantly expedite constrained l1-norm optimization. Numerical simulation results show that the two RNNs require much less computational time than related RNNs and numerical optimization algorithms for linearly constrained l1-norm optimization.
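For orientation, a classical (non-neural) baseline for unconstrained LAD, min ||Ax - b||₁, is iteratively reweighted least squares (IRLS); the sketch below is this textbook iteration, not the cited RNN models. The test case exploits the fact that the one-column LAD problem reduces to the median.

```python
import numpy as np

def lad_irls(A, b, iters=100, eps=1e-8):
    """IRLS for min ||Ax - b||_1: each step solves a weighted least-squares
    problem with weights 1/|residual|, the standard smooth l1 surrogate."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # least-squares start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)
        Aw = A * w[:, None]                          # row-weighted design
        x = np.linalg.solve(A.T @ Aw, A.T @ (w * b))
    return x

# One-column LAD is the median problem: min_x sum_i |x - b_i|.
A = np.ones((3, 1))
b = np.array([1.0, 2.0, 10.0])
x_med = lad_irls(A, b)    # converges toward the median, 2
```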
3. Mohammadi M, Atashin AA, Tamburri DA. From ℓ1 subgradient to projection: A compact neural network for ℓ1-regularized logistic regression. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.01.021.
4. Sang H, Nie H, Zhao J. Event-triggered asynchronous synchronization control for switched generalized neural networks with time-varying delay. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.07.021.
5. Zhang S, Xia Y, Xia Y, Wang J. Matrix-Form Neural Networks for Complex-Variable Basis Pursuit Problem With Application to Sparse Signal Reconstruction. IEEE Transactions on Cybernetics 2022;52:7049-7059. PMID: 33471773. DOI: 10.1109/tcyb.2020.3042519.
Abstract
In this article, a continuous-time complex-valued projection neural network (CCPNN) in a matrix state space is first proposed for the general complex-variable basis pursuit problem. The CCPNN is proved to be stable in the sense of Lyapunov and globally convergent to the optimal solution under the condition that the sensing matrix is not of full row rank. Furthermore, an improved discrete-time complex projection neural network (IDCPNN) is obtained by discretizing the CCPNN model. The IDCPNN incorporates a two-step stopping strategy to reduce the computational cost and is theoretically guaranteed to be globally convergent to the optimal solution. Finally, the IDCPNN is applied to the reconstruction of sparse signals based on compressed sensing. Computed results show that the IDCPNN is superior to related complex-valued neural networks and conventional basis pursuit algorithms in terms of solution quality and computation time.
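As a point of reference, the real-valued basis pursuit problem min ||x||₁ s.t. Ax = b can be solved with the standard ADMM splitting (alternating an affine projection with soft-thresholding); this sketch is that generic baseline, not the complex-valued, matrix-form CCPNN/IDCPNN of the cited paper, and the tiny instance is hypothetical.

```python
import numpy as np

def basis_pursuit_admm(A, b, rho=1.0, iters=500):
    """ADMM for min ||x||_1 s.t. Ax = b (Boyd et al.'s splitting)."""
    n = A.shape[1]
    pinvA = np.linalg.pinv(A)
    z = np.zeros(n); u = np.zeros(n)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(iters):
        v = z - u
        x = v - pinvA @ (A @ v - b)    # project onto {x : Ax = b}
        z = soft(x + u, 1.0 / rho)     # l1 proximal step
        u = u + x - z                  # dual (scaled) update
    return z

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
# Feasible set is x = (1-t, 1-t, t); the l1 norm 2|1-t| + |t| is uniquely
# minimized at t = 1, so the basis pursuit solution is (0, 0, 1).
x_hat = basis_pursuit_admm(A, b)
```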
6. A fuzzy adaptive zeroing neural network with superior finite-time convergence for solving time-variant linear matrix equations. Knowledge-Based Systems 2022. DOI: 10.1016/j.knosys.2022.108405.
7. Suo J, Li N, Li Q. Event-triggered H∞ state estimation for discrete-time delayed switched stochastic neural networks with persistent dwell-time switching regularities and sensor saturations. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.01.131.
8. Mohammadi M. A Compact Neural Network for Fused Lasso Signal Approximator. IEEE Transactions on Cybernetics 2021;51:4327-4336. PMID: 31329147. DOI: 10.1109/tcyb.2019.2925707.
Abstract
The fused lasso signal approximator (FLSA) is an important optimization problem with extensive applications in signal processing and biomedical engineering. The problem is difficult to solve because it is both nonsmooth and nonseparable. Existing numerical solutions introduce several auxiliary variables to handle the nondifferentiable penalty, so the resulting algorithms are both time- and memory-inefficient. This paper proposes a compact neural network to solve the FLSA. Thanks to the use of consecutive projections, the network has a one-layer structure with the number of neurons proportional to the dimension of the given signal. The proposed neural network is stable in the Lyapunov sense and is guaranteed to converge globally to the optimal solution of the FLSA. Experiments on several applications from signal processing and biomedical engineering confirm the reasonable performance of the proposed neural network.
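For context, the FLSA objective 0.5||x - y||² + λ₁||x||₁ + λ₂Σ|x_{i+1} - x_i| admits the classical two-stage decomposition due to Friedman et al.: apply the total-variation prox first, then soft-threshold. The sketch below uses that decomposition with a projected-gradient solve of the TV dual; it is not the cited one-layer network, and the toy signal is hypothetical.

```python
import numpy as np

def flsa(y, lam1, lam2, iters=2000):
    """FLSA via TV prox (projected gradient on the box-constrained dual)
    followed by soft-thresholding."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n difference operator
    z = np.zeros(n - 1)                   # dual variable of the TV term
    for _ in range(iters):
        grad = D @ (D.T @ z - y)
        z = np.clip(z - 0.25 * grad, -lam2, lam2)   # step < 2/||DD'||
    x_tv = y - D.T @ z                    # TV-denoised signal
    return np.sign(x_tv) * np.maximum(np.abs(x_tv) - lam1, 0.0)

y = np.array([1.0, 2.0, 3.0])
x1 = flsa(y, lam1=0.0, lam2=10.0)   # strong fusion -> constant mean (2, 2, 2)
x2 = flsa(y, lam1=0.5, lam2=10.0)   # plus shrinkage -> (1.5, 1.5, 1.5)
```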
9. Mohammadi M. A new discrete-time neural network for quadratic programming with general linear constraints. Neurocomputing 2021. DOI: 10.1016/j.neucom.2019.11.028.
10. Neurodynamical classifiers with low model complexity. Neural Networks 2020;132:405-415. PMID: 33011671. DOI: 10.1016/j.neunet.2020.08.013.
Abstract
The recently proposed Minimal Complexity Machine (MCM) finds a hyperplane classifier by minimizing an upper bound on the Vapnik-Chervonenkis (VC) dimension, which measures the capacity or model complexity of a learning machine. Vapnik's risk formula indicates that models with smaller VC dimension are expected to generalize better. On many benchmark datasets, the MCM generalizes better than SVMs while using far fewer support vectors. In this paper, we describe a neural network that converges to the MCM solution. We employ the MCM neurodynamical system as the final layer of a neural network architecture and also optimize the weights of all layers to minimize the objective, which combines a bound on the VC dimension with the classification error. We illustrate the use of this model for robust binary and multi-class classification. Numerical experiments on benchmark datasets from the UCI repository show that the proposed approach is scalable and accurate, and learns models with improved accuracies and fewer support vectors.
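As described in the MCM literature, the hard-margin MCM is a linear program: minimize h subject to 1 ≤ y_i(u·x_i + v) ≤ h. The sketch below solves that LP directly with an off-the-shelf solver on a hypothetical toy dataset; it is not the neurodynamical system of the cited paper, and the exact MCM formulation should be checked against the original MCM reference.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical separable 2-D data; variables stacked as (u1, u2, v, h).
X = np.array([[2.0, 2.0], [3.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
m, d = X.shape

c = np.zeros(d + 2); c[-1] = 1.0                   # objective: min h
# y_i*(u.x_i + v) >= 1   ->  -y_i*x_i.u - y_i*v        <= -1
A1 = np.hstack([-y[:, None] * X, -y[:, None], np.zeros((m, 1))])
# y_i*(u.x_i + v) <= h   ->   y_i*x_i.u + y_i*v - h    <= 0
A2 = np.hstack([y[:, None] * X, y[:, None], -np.ones((m, 1))])

res = linprog(c,
              A_ub=np.vstack([A1, A2]),
              b_ub=np.hstack([-np.ones(m), np.zeros(m)]),
              bounds=[(None, None)] * (d + 1) + [(1.0, None)])
u, v, h = res.x[:d], res.x[d], res.x[-1]
```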
11. Xia Y, Wang J, Guo W. Two Projection Neural Networks With Reduced Model Complexity for Nonlinear Programming. IEEE Transactions on Neural Networks and Learning Systems 2020;31:2020-2029. PMID: 31425123. DOI: 10.1109/tnnls.2019.2927639.
Abstract
Recent reports show that projection neural networks with a low-dimensional state space can markedly enhance computation speed. This paper proposes two projection neural networks with reduced model dimension and complexity (RDPNNs) for solving nonlinear programming (NP) problems. Compared with existing projection neural networks for NP, the two RDPNNs have a low-dimensional state space and low model complexity. Under the condition that the Hessian matrix of the associated Lagrangian function is positive semi-definite and positive definite at each Karush-Kuhn-Tucker point, the two RDPNNs are proven to be globally stable in the sense of Lyapunov and to converge globally to a point satisfying the reduced optimality condition of NP. The two RDPNNs are therefore theoretically guaranteed to solve convex NP problems and a class of nonconvex NP problems. Computed results show that they have a faster computation speed than existing projection neural networks for NP problems.
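The basic fixed-point scheme underlying projection networks is x ← P_Ω(x - α∇f(x)): an optimizer over Ω is exactly a fixed point of this map. The sketch below applies the generic discrete-time iteration to a hypothetical box-constrained QP (not the RDPNN models themselves), where separable calculus gives the minimizer x* = (1, 0).

```python
import numpy as np

# Toy problem: min 0.5*x'Qx + c'x over Omega = [0,1]^2 with Q = I,
# c = (-2, 0.5). Coordinate-wise: x1* = 1 (clipped), x2* = 0 (clipped).
Q = np.eye(2)
c = np.array([-2.0, 0.5])

x = np.array([0.5, 0.5])
alpha = 0.5                              # step size; iteration contracts
for _ in range(200):
    x = np.clip(x - alpha * (Q @ x + c), 0.0, 1.0)   # P_Omega = clip
```

A fixed point x = P_Ω(x - α(Qx + c)) satisfies the variational inequality characterizing the constrained minimizer, which is why the iterate settles at (1, 0).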
12. Basha DK, Venkateswarlu T. Linear Regression Supporting Vector Machine and Hybrid LOG Filter-Based Image Restoration. Journal of Intelligent Systems 2019. DOI: 10.1515/jisys-2018-0492.
Abstract
Image restoration (IR) is a part of image processing that improves the quality of an image affected by noise and blur. In this paper, IR is performed using a linear regression-based support vector machine (LR-SVM). The LR-SVM has two stages, training and testing, each with a distinct windowing process for extracting blocks from the images. The LR-SVM is trained through a block-by-block training sequence, and the extracted block values are used to enhance the classification process of IR. In training, imperfections in the image are easily identified by setting the target vectors to the original images. The noisy image is then given to LR-SVM testing, and the original image is restored from the dictionary. Finally, the image block from the testing stage is enhanced using a hybrid Laplacian of Gaussian (HLOG) filter, whose denoising provides enhanced results on block-by-block values. The proposed approach is named LR-SVM-HLOG. The dataset used in this method is the Berkeley Segmentation Database. The performance of LR-SVM-HLOG was evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index. The PSNR values of the house and pepper images (color images) are 40.82 and 36.56 dB, respectively, which are higher than those of the inter- and intra-block sparse estimation method and block matching and three-dimensional filtering for color images at 20% noise.
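A minimal Laplacian-of-Gaussian (LoG) filter, the "LOG" ingredient of the pipeline, can be sketched in plain NumPy; this illustrates only the edge-sensitive filtering step, not the LR-SVM training stage or the exact hybrid filter of the paper, and the kernel parameters are assumed for illustration.

```python
import numpy as np

def log_kernel(size=7, sigma=1.0):
    """Discrete LoG kernel, shifted to sum to zero so flat regions give
    zero response (a common normalization, assumed here)."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    s2 = xx**2 + yy**2
    k = (s2 - 2 * sigma**2) / sigma**4 * np.exp(-s2 / (2 * sigma**2))
    return k - k.mean()

def convolve2d_valid(img, k):
    """Naive 'valid'-mode 2-D correlation, adequate for a small demo."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

flat = np.full((12, 12), 5.0)
resp_flat = convolve2d_valid(flat, log_kernel())   # ~0 on a constant image
```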
Affiliation(s)
- D. Khalandar Basha
- Department of Electronics and Communication Engineering, Sri Venkateswara University, Tirupati, India
- Department of Electronics and Communication Engineering, Institute of Aeronautical Engineering, Dundigal, Hyderabad, India
- T. Venkateswarlu
- Department of Electronics and Communication Engineering, SVU College of Engineering, Sri Venkateswara University, Tirupati, India
14. Li Y, Gao X. Alternative continuous- and discrete-time neural networks for image restoration. Network (Bristol, England) 2019;30:107-124. PMID: 31662021. DOI: 10.1080/0954898x.2019.1677955.
Abstract
This paper presents alternative continuous- and discrete-time neural networks for real-time image restoration, obtained by introducing new vectors and transforming the optimization conditions into a system of double projection equations. The proposed neural networks are shown to be stable in the sense of Lyapunov and convergent from any starting point. Compared with existing neural networks for image restoration, the proposed models have the fewest neurons, a one-layer structure, and faster convergence, and are suitable for parallel implementation. The validity and transient behaviour of the proposed neural networks are demonstrated by numerical examples.
Affiliation(s)
- Yawei Li
- School of Mathematics and Information Science, Shaanxi Normal University, Xi'an, Shaanxi, P. R. China
- Xingbao Gao
- School of Mathematics and Information Science, Shaanxi Normal University, Xi'an, Shaanxi, P. R. China
15. Li C, Gao X. One-layer neural network for solving least absolute deviation problem with box and equality constraints. Neurocomputing 2019. DOI: 10.1016/j.neucom.2018.11.037.
16. Xia Y, Wang J. Robust Regression Estimation Based on Low-Dimensional Recurrent Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 2018;29:5935-5946. PMID: 29993932. DOI: 10.1109/tnnls.2018.2814824.
Abstract
The robust Huber's M-estimator is widely used in signal and image processing, classification, and regression. From an optimization point of view, Huber's M-estimation problem is often formulated as a large quadratic programming (QP) problem because of its nonsmooth cost function. This paper presents a generalized regression estimator that minimizes a reduced-size QP problem. The generalized regression estimator may be viewed as a significant generalization of several robust regression estimators, including Huber's M-estimator. Its performance is analyzed in terms of robustness and approximation accuracy. Furthermore, two low-dimensional recurrent neural networks (RNNs) with low model complexity and enhanced computational efficiency are introduced for robust estimation. Finally, experimental results on two examples and an application to image restoration substantiate the superior performance of the proposed method over conventional algorithms for robust regression estimation in terms of approximation accuracy and convergence rate.
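The robustness that motivates Huber's M-estimator can be seen with the classical IRLS iteration (a textbook baseline, not the reduced-size QP or RNN formulation of the cited paper): residuals beyond the threshold δ are down-weighted by δ/|r|, so a gross outlier barely moves the fit. The data below are hypothetical.

```python
import numpy as np

def huber_irls(A, b, delta=1.0, iters=100):
    """IRLS for Huber M-estimation of a linear model.
    Weights: w_i = 1 if |r_i| <= delta, else delta/|r_i|."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        w = delta / np.maximum(np.abs(r), delta)   # = min(1, delta/|r|)
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, A.T @ (w * b))
    return x

t = np.arange(10.0)
b = 2.0 * t                          # true slope 2, zero intercept
b[-1] += 50.0                        # gross outlier at the last sample
A = np.column_stack([t, np.ones_like(t)])

slope_ls = np.linalg.lstsq(A, b, rcond=None)[0][0]   # pulled far from 2
slope_h = huber_irls(A, b)[0]                        # stays close to 2
```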
17. Li S, Zhou M, Luo X. Modified Primal-Dual Neural Networks for Motion Control of Redundant Manipulators With Dynamic Rejection of Harmonic Noises. IEEE Transactions on Neural Networks and Learning Systems 2018;29:4791-4801. PMID: 29990144. DOI: 10.1109/tnnls.2017.2770172.
Abstract
In recent decades, primal-dual neural networks, as a special type of recurrent neural network, have achieved great success in real-time manipulator control. However, noise is usually ignored when neural controllers are designed on their basis, so they may fail to perform well in the presence of intensive noise. Harmonic noise is widespread in real applications and can severely affect control accuracy. This work proposes a novel primal-dual neural network design that directly takes noise control into account. By exploiting the fact that the unknown amplitude and phase of a harmonic signal can be eliminated from its dynamics, the designed neural controller achieves accurate tracking of reference trajectories in a noisy environment. Theoretical analysis and extensive simulations show that the proposed controller stabilizes a control system polluted by harmonic noise and drives the position tracking error to zero. Comparisons show that the proposed solution consistently and significantly outperforms existing primal-dual neural solutions, as well as feedforward and adaptive neural solutions, for redundancy resolution of manipulators.
18. Han Z, Leung CS, So HC, Constantinides AG. Augmented Lagrange Programming Neural Network for Localization Using Time-Difference-of-Arrival Measurements. IEEE Transactions on Neural Networks and Learning Systems 2018;29:3879-3884. PMID: 28816681. DOI: 10.1109/tnnls.2017.2731325.
Abstract
A commonly used measurement model for locating a mobile source is time-difference-of-arrival (TDOA). Since each TDOA measurement defines a hyperbola, the nonlinear relationship in the measurements makes it nontrivial to compute the mobile source position. This brief exploits the Lagrange programming neural network (LPNN), which provides a general framework for solving nonlinear constrained optimization problems, for TDOA-based localization. The local stability of the proposed LPNN solution is also analyzed. Simulation results evaluate the localization accuracy of the LPNN scheme against state-of-the-art methods and the optimality benchmark of the Cramér-Rao lower bound.
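The TDOA measurement model itself is easy to state: each measurement is a range difference relative to a reference sensor. The sketch below fits it with a classical Gauss-Newton baseline on a hypothetical noiseless 2-D geometry; the LPNN dynamics and stability analysis of the cited brief are not reproduced.

```python
import numpy as np

# Hypothetical setup: four sensors at the corners, source inside.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
src = np.array([3.0, 4.0])                     # true source (for simulation)

rng = lambda p: np.linalg.norm(p - sensors, axis=1)
d = rng(src)[1:] - rng(src)[0]                 # noiseless TDOA ranges vs sensor 0

p = np.array([5.0, 5.0])                       # initial guess
for _ in range(100):
    r_all = rng(p)
    res = (r_all[1:] - r_all[0]) - d           # residuals of the TDOA model
    # Jacobian rows: unit vector to sensor i minus unit vector to sensor 0.
    J = (p - sensors[1:]) / r_all[1:, None] - (p - sensors[0]) / r_all[0]
    p = p - np.linalg.pinv(J) @ res            # Gauss-Newton step
```

With noiseless data the residual is exactly zero at the true source, so Gauss-Newton converges rapidly from a nearby start.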
19. Lagrange Programming Neural Network for TOA-Based Localization with Clock Asynchronization and Sensor Location Uncertainties. Sensors 2018;18:2293. PMID: 30011959. PMCID: PMC6068907. DOI: 10.3390/s18072293.
Abstract
Source localization based on time-of-arrival (TOA) measurements in the presence of clock asynchronization and sensor position uncertainties is investigated in this paper. Unlike traditional numerical algorithms, a neural circuit, the Lagrange programming neural network (LPNN), is employed to tackle the nonlinear and nonconvex constrained optimization problem of source localization. With an augmented term, two types of neural networks are developed from the original maximum likelihood functions within the general framework provided by the LPNN. The convergence and local stability of the proposed neural networks are analyzed. In addition, the Cramér-Rao lower bound is derived as a benchmark in the presence of clock asynchronization and sensor position uncertainties. Simulation results verify the superior performance of the proposed LPNN over traditional numerical algorithms and its robustness to high levels of measurement noise, clock asynchronization, and sensor position uncertainties.
20. Simplified neural network for generalized least absolute deviation. Neural Computing and Applications 2018. DOI: 10.1007/s00521-017-3060-2.
21. Xiao L, Liao B, Li S, Chen K. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Networks 2018;98:102-113. DOI: 10.1016/j.neunet.2017.11.011.
22. Eshaghnezhad M, Effati S, Mansoori A. A Neurodynamic Model to Solve Nonlinear Pseudo-Monotone Projection Equation and Its Applications. IEEE Transactions on Cybernetics 2017;47:3050-3062. PMID: 27705876. DOI: 10.1109/tcyb.2016.2611529.
Abstract
In this paper, a neurodynamic model is given for solving the nonlinear pseudo-monotone projection equation. Under pseudo-monotonicity and Lipschitz continuity conditions, the projection neurodynamic model is proved to be stable in the sense of Lyapunov, globally convergent, globally asymptotically stable, and globally exponentially stable. We also show that the new neurodynamic model is effective for solving nonconvex optimization problems. Moreover, since monotonicity is a special case of pseudo-monotonicity, a co-coercive mapping is Lipschitz continuous and monotone, and a strongly pseudo-monotone mapping is pseudo-monotone, the neurodynamic model can be applied to a broader class of constrained optimization problems related to variational inequalities, pseudo-convex optimization problems, linear and nonlinear complementarity problems, and linear and convex quadratic programming problems. Finally, several illustrative examples demonstrate the effectiveness and efficiency of the new neurodynamic model.
23. Li S, Zhang Y, Jin L. Kinematic Control of Redundant Manipulators Using Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 2017;28:2243-2254. PMID: 27352398. DOI: 10.1109/tnnls.2016.2574363.
Abstract
Redundancy resolution is a critical problem in the control of robotic manipulators. Recurrent neural networks (RNNs), as inherently parallel processing models for time-sequence processing, are potentially applicable for the motion control of manipulators. However, the development of neural models for high-accuracy and real-time control is a challenging problem. This paper identifies two limitations of the existing RNN solutions for manipulator control, i.e., position error accumulation and the convex restriction on the projection set, and overcomes them by proposing two modified neural network models. Our method allows nonconvex sets for projection operations, and control error does not accumulate over time in the presence of noise. Unlike most works in which RNNs are used to process time sequences, the proposed approach is model-based and training-free, which makes it possible to achieve fast tracking of reference signals with superior robustness and accuracy. Theoretical analysis reveals the global stability of a system under the control of the proposed neural networks. Simulation results confirm the effectiveness of the proposed control method in both the position regulation and tracking control of redundant PUMA 560 manipulators.
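The redundancy-resolution task being controlled here is, at the velocity level, the classical pseudoinverse scheme qdot = J⁺v + (I - J⁺J)z, where the null-space term pursues a secondary objective without disturbing the end-effector. This textbook scheme is only the starting point that the cited paper replaces with RNN controllers; the Jacobian and velocities below are hypothetical numbers, not PUMA 560 kinematics.

```python
import numpy as np

J = np.array([[0.8, 0.2, -0.5, 0.1],    # 2-D task, 4 joints: redundant
              [0.1, 0.9,  0.3, -0.4]])
v = np.array([0.2, -0.1])               # desired end-effector velocity
z = np.array([1.0, -1.0, 0.5, 0.0])     # arbitrary secondary joint motion

Jp = np.linalg.pinv(J)                  # Moore-Penrose pseudoinverse
N = np.eye(4) - Jp @ J                  # projector onto the null space of J
qdot = Jp @ v + N @ z                   # task motion + invisible self-motion

# J @ qdot equals v exactly: the null-space term does not affect the task.
```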
24. Feng R, Leung CS, Constantinides AG, Zeng WJ. Lagrange Programming Neural Network for Nondifferentiable Optimization Problems in Sparse Approximation. IEEE Transactions on Neural Networks and Learning Systems 2017;28:2395-2407. PMID: 27479978. DOI: 10.1109/tnnls.2016.2575860.
Abstract
The major limitation of the Lagrange programming neural network (LPNN) approach is that the objective function and the constraints should be twice differentiable. Since sparse approximation involves nondifferentiable functions, the original LPNN approach is not suitable for recovering sparse signals. This paper proposes a new formulation of the LPNN approach based on the concept of the locally competitive algorithm (LCA). Unlike the classical LCA, which can solve only unconstrained optimization problems, the proposed LPNN approach can solve constrained optimization problems. Two problems in sparse approximation are considered: basis pursuit (BP) and constrained BP denoising (CBPDN). We propose two LPNN models, BP-LPNN and CBPDN-LPNN, to solve them. For both models, we show that the equilibrium points of the models coincide with the optimal solutions of the corresponding problems, and that the equilibrium points are stable. Simulations verify the effectiveness of the two LPNN models.
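The LCA ingredient can be sketched on the unconstrained (BPDN-style) problem min 0.5||b - Φa||² + λ||a||₁, following Rozell et al.'s dynamics with an internal state u and thresholded output a = T_λ(u); the constrained BP-LPNN/CBPDN-LPNN models themselves are not reproduced, and the tiny dictionary is hypothetical. At an equilibrium, u - a = Φᵀ(b - Φa), which is exactly the l1 subgradient optimality condition.

```python
import numpy as np

Phi = np.array([[1.0, 0.0, 0.6],       # unit-norm columns
                [0.0, 1.0, 0.8]])
b = np.array([1.0, 0.2])
lam, dt = 0.1, 0.1

soft = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.zeros(3)
for _ in range(5000):
    a = soft(u)                                   # thresholded output
    # Euler step of the LCA state dynamics:
    u = u + dt * (Phi.T @ b - u - (Phi.T @ Phi - np.eye(3)) @ a)
a = soft(u)

g = Phi.T @ (b - Phi @ a)   # optimality residual: |g_i| <= lam everywhere,
                            # with g_i = lam*sign(a_i) on the support
```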
25. Zhang L, Zhu Y, Zheng WX. State Estimation of Discrete-Time Switched Neural Networks With Multiple Communication Channels. IEEE Transactions on Cybernetics 2017;47:1028-1040. PMID: 27046885. DOI: 10.1109/tcyb.2016.2536748.
Abstract
In this paper, the state estimation problem for a class of discrete-time switched neural networks with modal persistent dwell-time (MPDT) switching and mixed time delays is investigated. The considered switching law not only generalizes the commonly studied dwell-time (DT) and average DT (ADT) switchings, but also attaches mode-dependency to the more general persistent DT (PDT) switching. Multiple communication channels, including one primary channel and multiple redundant channels, are considered to coexist for the state estimation of the underlying switched neural networks. The desired mode-dependent filters are designed such that the resulting filtering error system is exponentially mean-square stable with a guaranteed nonweighted generalized H2 performance index. It is verified that a better filtering performance index can be achieved as the number of channels increases. The potential and effectiveness of the theoretical results are demonstrated via a numerical example.
26. Xia Y, Leung H, Kamel MS. A discrete-time learning algorithm for image restoration using a novel L2-norm noise constrained estimation. Neurocomputing 2016. DOI: 10.1016/j.neucom.2015.06.111.
27. Tao D, Lin X, Jin L, Li X. Principal Component 2-D Long Short-Term Memory for Font Recognition on Single Chinese Characters. IEEE Transactions on Cybernetics 2016;46:756-765. PMID: 25838536. DOI: 10.1109/tcyb.2015.2414920.
Abstract
Chinese character font recognition (CCFR) has received increasing attention as intelligent applications based on optical character recognition become popular. However, traditional CCFR systems do not handle noisy data effectively. By analyzing the basic strokes of Chinese characters in detail, we propose that font recognition on a single Chinese character is a sequence classification problem that can be effectively solved by recurrent neural networks. For robust CCFR, we integrate a principal component convolution layer with the 2-D long short-term memory (2DLSTM) and develop the principal component 2DLSTM (PC-2DLSTM) algorithm. PC-2DLSTM considers two aspects: 1) the principal component convolution layer helps remove noise and obtain complete font information; and 2) the 2DLSTM performs long-range contextual processing along the scan directions, which helps capture the contrast between the character trajectory and the background. Experiments on a frequently used CCFR dataset show the effectiveness of PC-2DLSTM compared with other state-of-the-art font recognition methods.
28. Li C, Yu X, Huang T, Chen G, He X. A Generalized Hopfield Network for Nonsmooth Constrained Convex Optimization: Lie Derivative Approach. IEEE Transactions on Neural Networks and Learning Systems 2016;27:308-321. PMID: 26595931. DOI: 10.1109/tnnls.2015.2496658.
Abstract
This paper proposes a generalized Hopfield network for solving general constrained convex optimization problems. First, the existence and uniqueness of solutions to the generalized Hopfield network in the Filippov sense are proved. Then, the Lie derivative is introduced to analyze the stability of the network using a differential inclusion. The optimality of the solution to nonsmooth constrained optimization problems is shown to be guaranteed by the enhanced Fritz John conditions. The convergence rate of the generalized Hopfield network can be estimated from the second-order derivative of the energy function. The effectiveness of the proposed network is evaluated on several typical nonsmooth optimization problems and on the four-tank benchmark for hierarchical and distributed model predictive control.
29. Xia Y, Wang J. A Bi-Projection Neural Network for Solving Constrained Quadratic Optimization Problems. IEEE Transactions on Neural Networks and Learning Systems 2016;27:214-224. PMID: 26672052. DOI: 10.1109/tnnls.2015.2500618.
Abstract
In this paper, a bi-projection neural network for solving a class of constrained quadratic optimization problems is proposed. It is proved that the proposed neural network is globally stable in the sense of Lyapunov, and the output trajectory of the proposed neural network will converge globally to an optimal solution. Compared with existing projection neural networks (PNNs), the proposed neural network has a very small model size owing to its bi-projection structure. Furthermore, an application to data fusion shows that the proposed neural network is very effective. Numerical results demonstrate that the proposed neural network is much faster than the existing PNNs.
30. Zou X, Gong D, Wang L, Chen Z. A novel method to solve inverse variational inequality problems based on neural networks. Neurocomputing 2016. DOI: 10.1016/j.neucom.2015.08.073.
|
31
|
Zhang L, Zhu Y, Zheng WX. Energy-to-peak state estimation for Markov jump RNNs with time-varying delays via nonsynchronous filter with nonstationary mode transitions. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2015; 26:2346-2356. [PMID: 25576580 DOI: 10.1109/tnnls.2014.2382093] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
In this paper, the problem of energy-to-peak state estimation for a class of discrete-time Markov jump recurrent neural networks (RNNs) with randomly occurring nonlinearities (RONs) and time-varying delays is investigated. A practical phenomenon of nonsynchronous jumps between the RNN modes and the desired mode-dependent filters is considered, and a nonstationary mode transition among the filters is used to model nonsynchronous jumps of different degrees that are also mode dependent. The RONs model a class of sector-like nonlinearities that occur probabilistically according to a Bernoulli sequence. The time-varying delays are assumed to be mode dependent and unknown, but with a priori known lower and upper bounds. Sufficient conditions for the existence of the nonsynchronous filters are obtained such that the filtering error system is stochastically stable and achieves a prescribed energy-to-peak performance index. Extending recent studies of this class of nonsynchronous estimation problems, a monotonic relationship is observed between the attainable filtering performance index and the degree of nonsynchronous jumps. A numerical example is presented to verify the theoretical findings.
|
32
|
Zhu Y, Zhang L, Ning Z, Zhu Z, Shammakh W, Hayat T. H∞ state estimation for discrete-time switching neural networks with persistent dwell-time switching regularities. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2015.03.036] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
33
|
Hou W, Gao X, Tao D, Li X. Blind image quality assessment via deep learning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2015; 26:1275-1286. [PMID: 25122842 DOI: 10.1109/tnnls.2014.2336852] [Citation(s) in RCA: 90] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, many learning-based IQA models have been proposed that analyze the mapping from images to numerical ratings. However, the learned mapping can hardly be accurate enough, because information is lost in the irreversible conversion from linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model that learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness.
|
34
|
Pérez-Ilzarbe MJ. Improvement of the convergence speed of a discrete-time recurrent neural network for quadratic optimization with general linear constraints. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2014.05.015] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
35
|
Perez-Ilzarbe MJ. New discrete-time recurrent neural network proposal for quadratic optimization with general linear constraints. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2013; 24:322-328. [PMID: 24808285 DOI: 10.1109/tnnls.2012.2223484] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
In this brief, the quadratic problem with general linear constraints is reformulated using the Wolfe dual theory, and a very simple discrete-time recurrent neural network is proved to be able to solve it. Conditions that guarantee global convergence of this network to the constrained minimum are developed. The computational complexity of the method is analyzed, and experimental work is presented that shows its high efficiency.
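The dual route taken above can be illustrated, in spirit, by a plain projected dual-gradient (Uzawa-type) iteration for an inequality-constrained QP (a hedged sketch of the general idea, not the specific network proposed in the brief; the strictly convex toy problem and step size are assumptions):

```python
import numpy as np

def dual_qp(Q, c, A, b, alpha=0.2, iters=200):
    """Projected dual-gradient iteration for
    min 0.5 x'Qx + c'x subject to Ax <= b, with Q symmetric positive definite.
    The primal variable is eliminated via the dual stationarity condition
    x(lam) = -Q^{-1}(c + A'lam), and the multipliers are kept
    nonnegative by projection."""
    lam = np.zeros(A.shape[0])
    Qinv = np.linalg.inv(Q)
    for _ in range(iters):
        x = -Qinv @ (c + A.T @ lam)                       # primal recovery
        lam = np.maximum(0.0, lam + alpha * (A @ x - b))  # projected dual ascent
    return x, lam

# Toy problem: min 0.5*||x||^2 subject to x1 + x2 >= 1
# (written as -x1 - x2 <= -1); the minimizer is (0.5, 0.5).
x_star, lam_star = dual_qp(np.eye(2), np.zeros(2),
                           np.array([[-1.0, -1.0]]), np.array([-1.0]))
```

Working in the dual keeps the recurrent state at the number of constraints rather than the number of variables, which is the kind of model-size reduction such dual reformulations are after.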
|