1. Mohammadi M. A Compact Neural Network for Fused Lasso Signal Approximator. IEEE Transactions on Cybernetics 2021; 51:4327-4336. [PMID: 31329147] [DOI: 10.1109/tcyb.2019.2925707]
Abstract: The fused lasso signal approximator (FLSA) is an important optimization problem with extensive applications in signal processing and biomedical engineering. The problem is difficult to solve because it is both nonsmooth and nonseparable. Existing numerical solutions introduce several auxiliary variables to deal with the nondifferentiable penalty, so the resulting algorithms are inefficient in both time and memory. This paper proposes a compact neural network to solve the FLSA. Thanks to the use of consecutive projections, the neural network has a one-layer structure with a number of neurons proportional to the dimension of the given signal. The proposed neural network is stable in the Lyapunov sense and is guaranteed to converge globally to the optimal solution of the FLSA. Experiments on several applications from signal processing and biomedical engineering confirm the good performance of the proposed neural network.
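For reference, the FLSA can be stated as the following convex program (a standard formulation of the problem; the two regularization weights are generic symbols, not values taken from this paper):

```latex
% FLSA: approximate an observed signal y in R^n by beta, penalizing both the
% coefficient magnitudes and the differences between neighboring coefficients.
\min_{\beta \in \mathbb{R}^n}\;
  \frac{1}{2}\,\lVert y - \beta \rVert_2^2
  \;+\; \lambda_1 \sum_{i=1}^{n} \lvert \beta_i \rvert
  \;+\; \lambda_2 \sum_{i=2}^{n} \lvert \beta_i - \beta_{i-1} \rvert
```

Both penalty terms are nonsmooth, and the difference term couples neighboring coordinates, which is why the problem is nonseparable.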
2. Mohammadi M. A new discrete-time neural network for quadratic programming with general linear constraints. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2019.11.028]
3. Neurodynamical classifiers with low model complexity. Neural Networks 2020; 132:405-415. [PMID: 33011671] [DOI: 10.1016/j.neunet.2020.08.013]
Abstract: The recently proposed Minimal Complexity Machine (MCM) finds a hyperplane classifier by minimizing an upper bound on the Vapnik-Chervonenkis (VC) dimension, which measures the capacity, or model complexity, of a learning machine. Vapnik's risk formula indicates that models with a smaller VC dimension are expected to generalize better. On many benchmark datasets, the MCM generalizes better than SVMs while using far fewer support vectors. In this paper, we describe a neural network that converges to the MCM solution. We employ the MCM neurodynamical system as the final layer of a neural network architecture and optimize the weights of all layers to minimize an objective that combines a bound on the VC dimension with the classification error. We illustrate the use of this model for robust binary and multi-class classification. Numerical experiments on benchmark datasets from the UCI repository show that the proposed approach is scalable and learns models with improved accuracy and fewer support vectors.
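As background, the MCM optimization problem is, as I recall Jayadeva's original formulation, the linear program below, in which minimizing h minimizes an upper bound on the VC dimension; treat it as a sketch of the problem the neurodynamical layer targets rather than a quotation from this paper:

```latex
% Hard-margin MCM: a linear program in (w, b, h).
\min_{w,\, b,\, h}\ h
\quad \text{s.t.} \quad
h \;\ge\; y_i \left( w^\top x_i + b \right) \;\ge\; 1, \qquad i = 1, \dots, m
% Soft-margin variant (slacks q_i >= 0, trade-off parameter C):
\min_{w,\, b,\, h,\, q}\ h + C \sum_{i=1}^{m} q_i
\quad \text{s.t.} \quad
h \;\ge\; y_i \left( w^\top x_i + b \right) + q_i \;\ge\; 1, \qquad q_i \ge 0
```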
4. Mohammadi M. A Projection Neural Network for the Generalized Lasso. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:2217-2221. [PMID: 31398133] [DOI: 10.1109/tnnls.2019.2927282]
Abstract: The generalized lasso (GLasso) is an extension of lasso regression in which the l1 penalty (or regularization) is applied to a linearly transformed coefficient vector. Finding the optimal solution of the GLasso is not straightforward since the penalty term is not differentiable. This brief presents a novel one-layer neural network to solve the generalized lasso for a wide range of penalty transformation matrices. The proposed neural network is proven to be stable in the sense of Lyapunov and to converge globally to the optimal solution of the GLasso. It is also shown that the proposed neural solution can handle many optimization problems, including sparse and weighted sparse representations, (weighted) total variation denoising, the fused lasso signal approximator, and trend filtering. Experiments on these problems confirm the excellent performance of the proposed neural network in comparison with competing techniques.
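For reference, the GLasso and the special cases named in the abstract (a standard formulation; X, D, and the regularization weight are generic symbols):

```latex
% Generalized lasso: l1 penalty on a linear transform D of the coefficients.
\min_{\beta \in \mathbb{R}^p}\;
  \frac{1}{2}\,\lVert y - X\beta \rVert_2^2 \;+\; \lambda\,\lVert D\beta \rVert_1
% D = I                        -> ordinary lasso
% D = first-difference matrix  -> total variation denoising / fused lasso term
% D = k-th order difference    -> trend filtering of order k
```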
5. Zhang Y, Gong H, Yang M, Li J, Yang X. Stepsize Range and Optimal Value for Taylor-Zhang Discretization Formula Applied to Zeroing Neurodynamics Illustrated via Future Equality-Constrained Quadratic Programming. IEEE Transactions on Neural Networks and Learning Systems 2019; 30:959-966. [PMID: 30137015] [DOI: 10.1109/tnnls.2018.2861404]
Abstract: In this brief, future equality-constrained quadratic programming (FECQP) is studied. Using a zeroing neurodynamics method, a continuous-time zeroing neurodynamics (CTZN) model is presented. Discretizing the CTZN model with the Taylor-Zhang discretization formula then yields a Taylor-Zhang discrete-time zeroing neurodynamics (TZ-DTZN) model for performing FECQP. We focus on the critical parameter of the TZ-DTZN model, the stepsize. Theoretical analysis yields an effective range of the stepsize that guarantees the stability of the TZ-DTZN model, and we further discuss the optimal value of the stepsize, which gives the TZ-DTZN model the best stability and the fastest convergence. Finally, numerical experiments and an application to motion generation of a robot manipulator verify the high precision of the TZ-DTZN model as well as the effective range and optimal value of the stepsize for FECQP.
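For context, zeroing neurodynamics turns a time-varying problem into an error-zeroing ODE. Below is a minimal sketch of the standard construction for time-varying equality-constrained QP (generic symbols, not this paper's notation):

```latex
% Time-varying QP:  min  (1/2) x^T Q(t) x + p(t)^T x   s.t.  A(t) x = b(t).
% Stack the KKT conditions as W(t) z(t) = u(t):
W(t) = \begin{bmatrix} Q(t) & A(t)^\top \\ A(t) & 0 \end{bmatrix}, \qquad
z = \begin{bmatrix} x \\ \lambda \end{bmatrix}, \qquad
u(t) = \begin{bmatrix} -p(t) \\ b(t) \end{bmatrix}
% Define the error e(t) = W(t) z(t) - u(t) and impose \dot{e} = -\gamma e:
W(t)\,\dot{z} = -\dot{W}(t)\, z + \dot{u}(t)
  - \gamma \left( W(t)\, z - u(t) \right)
% A discrete-time model follows by replacing \dot{z} with a difference rule
% such as the Taylor-Zhang formula, with effective stepsize h = \gamma \tau.
```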
6. Zhang Y, Li S. A Neural Controller for Image-Based Visual Servoing of Manipulators With Physical Constraints. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:5419-5429. [PMID: 29994741] [DOI: 10.1109/tnnls.2018.2802650]
Abstract: The main issues in visual servoing of manipulators are the rapid convergence of feature errors to zero and the safety of joints with respect to their physical limits. To address these two issues, this paper proposes an image-based visual servoing scheme for manipulators with an eye-in-hand configuration. Unlike existing schemes, the proposed one requires neither the pseudoinverse of the image Jacobian matrix nor the inverse of the Jacobian matrix associated with the forward kinematics of the manipulator. Theoretical analysis shows that the proposed scheme guarantees not only the asymptotic convergence of feature errors to zero but also compliance with the joint-angle and joint-velocity limits of the manipulator. Simulation results on a PUMA560 manipulator with a camera mounted on the end effector verify the theoretical conclusions and the efficacy of the proposed scheme.
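For orientation, these are the classical image-based visual servoing relations such schemes build on, together with a generic constrained reformulation that avoids the pseudoinverse (a sketch of the general idea, not this paper's exact controller):

```latex
% Feature error and its dynamics (eye-in-hand; L_s: image Jacobian,
% J(q): manipulator Jacobian mapped to the camera frame):
e = s - s^{*}, \qquad \dot{e} = L_s\, v_c = L_s\, J(q)\,\dot{q}
% Classical law, which requires a pseudoinverse:
\dot{q} = -\lambda\,\bigl( L_s\, J(q) \bigr)^{+} e
% Inversion-free alternative: solve a constrained least-squares problem
\min_{\dot{q}}\ \frac{1}{2}\,\lVert L_s\, J(q)\,\dot{q} + \lambda e \rVert_2^2
\quad \text{s.t.} \quad q^{-} \le q \le q^{+}, \quad
\dot{q}^{-} \le \dot{q} \le \dot{q}^{+}
```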
7. Guo D, Yan L, Nie Z. Design, Analysis, and Representation of Novel Five-Step DTZD Algorithm for Time-Varying Nonlinear Optimization. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:4248-4260. [PMID: 29990090] [DOI: 10.1109/tnnls.2017.2761443]
Abstract: Continuous-time and discrete-time forms of Zhang dynamics (ZD) for time-varying nonlinear optimization have been developed recently. Building on that work, this paper proposes and investigates a novel discrete-time ZD (DTZD) algorithm obtained by adopting a new Taylor-type difference rule. Because the algorithm is a five-step iteration process, it is referred to as the five-step DTZD algorithm. Theoretical analysis and results are presented to highlight its excellent computational performance, and a geometric representation of the algorithm is provided. Comparative numerical results on four examples substantiate the efficacy and superiority of the proposed five-step DTZD algorithm over previous DTZD algorithms.
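To make the DTZD family concrete, here is a minimal Python sketch of the basic one-step (Euler-type) member tracking the minimizer of a time-varying objective; the five-step algorithm replaces the Euler difference with a higher-order Taylor-type rule whose coefficients are given in the paper and are not reproduced here. The toy objective and all parameter values are illustrative assumptions.

```python
import numpy as np

# Euler-type DTZD for online time-varying optimization: track the minimizer
# of f(x, t) = 0.5 * ||x - c(t)||^2 with c(t) = [sin t, cos t] (toy example).
# Zeroing dynamics on the gradient g(x, t):  dg/dt = -gamma * g  leads to
#   x_{k+1} = x_k - H^{-1} (h * g(x_k, t_k) + tau * dg/dt(x_k, t_k)),
# where H is the Hessian, tau the sampling gap, and h = gamma * tau.

tau, gamma = 0.01, 10.0
h = gamma * tau

def grad(x, t):                      # g(x, t) = x - c(t)
    return x - np.array([np.sin(t), np.cos(t)])

def grad_t(x, t):                    # partial of g w.r.t. t, i.e. -c'(t)
    return np.array([-np.cos(t), np.sin(t)])

hessian = np.eye(2)                  # constant Hessian for this toy f

x = np.zeros(2)
for k in range(1000):
    t = k * tau
    # x computed at step k is meant to track the minimizer at t_{k+1}
    x = x - np.linalg.solve(hessian, h * grad(x, t) + tau * grad_t(x, t))

print("tracked x          :", x)
print("minimizer at t=10.0:", np.array([np.sin(10.0), np.cos(10.0)]))
```

The residual error of this Euler-type iteration scales as O(τ²); higher-order Taylor-type rules such as the five-step one improve that pattern.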
8. Gao X, Liao LZ. A Novel Neural Network for Generally Constrained Variational Inequalities. IEEE Transactions on Neural Networks and Learning Systems 2017; 28:2062-2075. [PMID: 27323376] [DOI: 10.1109/tnnls.2016.2570257]
Abstract: This paper presents a novel neural network for solving generally constrained variational inequality problems by constructing a system of double projection equations. By defining proper convex energy functions, the proposed neural network is proved to be stable in the sense of Lyapunov and to converge to an exact solution of the original problem from any starting point, under a weak cocoercivity condition or a monotonicity condition of the gradient mapping on the linear equation set. Furthermore, two sufficient conditions are provided to ensure the stability of the proposed neural network in a special case. The proposed model overcomes some shortcomings of existing continuous-time neural networks for constrained variational inequalities, and its stability requires only monotonicity conditions on the underlying mapping and concavity of the nonlinear inequality constraints on the equation set. The validity and transient behavior of the proposed neural network are demonstrated by simulation results.
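For background, this is the projection reformulation underlying projection-type neural networks for variational inequalities (the standard single-projection version; this paper constructs a double-projection system on top of it):

```latex
% x* solves VI(F, Omega), i.e. (x - x*)^T F(x*) >= 0 for all x in Omega,
% iff it is a fixed point of the projection equation (for any alpha > 0):
x^{*} = P_{\Omega}\!\left( x^{*} - \alpha F(x^{*}) \right)
% Classical continuous-time projection neural network:
\dot{x} = \lambda \left( P_{\Omega}\!\left( x - \alpha F(x) \right) - x \right)
```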
9. Guo D, Nie Z, Yan L. Theoretical analysis, numerical verification and geometrical representation of new three-step DTZD algorithm for time-varying nonlinear equations solving. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.06.032]
10. Tao D, Lin X, Jin L, Li X. Principal Component 2-D Long Short-Term Memory for Font Recognition on Single Chinese Characters. IEEE Transactions on Cybernetics 2016; 46:756-765. [PMID: 25838536] [DOI: 10.1109/tcyb.2015.2414920]
Abstract: Chinese character font recognition (CCFR) has received increasing attention as intelligent applications based on optical character recognition become popular. However, traditional CCFR systems do not handle noisy data effectively. By analyzing the basic strokes of Chinese characters in detail, we cast font recognition on a single Chinese character as a sequence classification problem, which can be effectively solved by recurrent neural networks. For robust CCFR, we integrate a principal component convolution layer with 2-D long short-term memory (2DLSTM) and develop the principal component 2DLSTM (PC-2DLSTM) algorithm. PC-2DLSTM considers two aspects: 1) the principal component convolution layer helps remove noise and extract rational, complete font information and 2) the 2DLSTM performs long-range contextual processing along the scan directions, which helps capture the contrast between the character trajectory and the background. Experiments on a frequently used CCFR dataset demonstrate the effectiveness of PC-2DLSTM compared with other state-of-the-art font recognition methods.
11. Liao B, Zhang Y, Jin L. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators. IEEE Transactions on Neural Networks and Learning Systems 2016; 27:225-237. [PMID: 26058059] [DOI: 10.1109/tnnls.2015.2435014]
Abstract: In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN) and obtain higher computational accuracy. Based on this Taylor-type formula, two Taylor-type discrete-time ZNN models (termed the Taylor-type discrete-time ZNNK and ZNNU models) are proposed and discussed for online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (the Euler-type discrete-time ZNNK and ZNNU models) and Newton iteration are also presented, with interesting links found among them. It is proved that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, the Euler-type discrete-time ZNN models, and Newton iteration follow the patterns O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including application examples, are carried out, and the results substantiate the theoretical findings and the efficacy of the Taylor-type discrete-time ZNN models. Finally, comparisons with a Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming further substantiate the superiority of the proposed Taylor-type discrete-time ZNN models.
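The four-point Taylor-type difference rule used in this line of work takes, to the best of my recollection, the form below; its truncation error is O(τ²), which is what lifts the steady-state residual of the resulting discrete-time ZNN models to O(h³). Treat the exact coefficients as an assumption rather than a quotation from the paper:

```latex
% Taylor-type numerical differentiation rule (tau: sampling gap):
\dot{x}_k \;\approx\; \frac{2 x_{k+1} - 3 x_k + 2 x_{k-1} - x_{k-2}}{2\tau}
```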
12. Zheng M, Mao Z, Li K, Fei M. Quadratic separation framework for stability analysis of a class of systems with time delays. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.04.110]
13. Jin L, Zhang Y. Discrete-Time Zhang Neural Network for Online Time-Varying Nonlinear Optimization With Application to Manipulator Motion Generation. IEEE Transactions on Neural Networks and Learning Systems 2015; 26:1525-1531. [PMID: 25122845] [DOI: 10.1109/tnnls.2014.2342260]
Abstract: In this brief, a discrete-time Zhang neural network (DTZNN) model is first proposed, developed, and investigated for online time-varying nonlinear optimization (OTVNO). Newton iteration is then shown to be derivable from the proposed DTZNN model. To eliminate the explicit matrix-inversion operation, the quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, which effectively approximates the inverse of the Hessian matrix, is introduced, yielding a DTZNN-BFGS model for OTVNO that combines the DTZNN model with the quasi-Newton BFGS method. Theoretical analyses show that, with stepsize h = 1 and/or zero initial error, the maximal residual error of the DTZNN model has an O(τ²) pattern, whereas the maximal residual error of Newton iteration has an O(τ) pattern, with τ denoting the sampling gap; when h ≠ 1 and h ∈ (0,2), the maximal steady-state residual error of the DTZNN model retains the O(τ²) pattern. Finally, an illustrative numerical experiment and an application example on manipulator motion generation substantiate the efficacy of the proposed DTZNN and DTZNN-BFGS models for OTVNO.
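The inversion-free ingredient here is the standard BFGS update of the inverse-Hessian approximation H_k, which avoids explicitly inverting the Hessian at each step (the standard quasi-Newton formula; how it is interleaved with the DTZNN recursion is detailed in the paper):

```latex
% BFGS inverse-Hessian update (g_k: gradient at x_k):
s_k = x_{k+1} - x_k, \qquad y_k = g_{k+1} - g_k, \qquad
\rho_k = \frac{1}{y_k^\top s_k}
H_{k+1} = \left( I - \rho_k\, s_k y_k^\top \right) H_k
          \left( I - \rho_k\, y_k s_k^\top \right) + \rho_k\, s_k s_k^\top
```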
14. Pérez-Ilzarbe MJ. Improvement of the convergence speed of a discrete-time recurrent neural network for quadratic optimization with general linear constraints. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2014.05.015]
15. He X, Li C, Huang T, Li C, Huang J. A recurrent neural network for solving bilevel linear programming problem. IEEE Transactions on Neural Networks and Learning Systems 2014; 25:824-830. [PMID: 24807959] [DOI: 10.1109/tnnls.2013.2280905]
Abstract: In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with existing NNs for the BLPP, the model has the fewest state variables and a simple structure. Using nonsmooth analysis, the theory of differential inclusions, and a Lyapunov-like method, the equilibrium point sequence of the proposed NN is shown to converge approximately to an optimal solution of the BLPP under certain conditions. Finally, numerical simulations of a supply-chain distribution model demonstrate the excellent performance of the proposed recurrent NN.
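For reference, the BLPP has the generic form below, in which the lower-level LP is solved for y after the upper level fixes x (a standard statement of the problem; the penalty-function approach replaces the lower level with its optimality conditions):

```latex
% Bilevel linear program (upper level chooses x, lower level chooses y):
\min_{x}\; c_1^\top x + d_1^\top y
\quad \text{s.t.} \quad A_1 x + B_1 y \le b_1,
\qquad y \in \arg\min_{y'} \left\{\, d_2^\top y' \;:\;
  A_2 x + B_2 y' \le b_2 \,\right\}
```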