1. Zhang Z, He H, Deng X. An FPGA-Implemented Antinoise Fuzzy Recurrent Neural Network for Motion Planning of Redundant Robot Manipulators. IEEE Transactions on Neural Networks and Learning Systems 2024;35:12263-12275. doi:10.1109/TNNLS.2023.3253801. PMID: 37145948.
Abstract
When a robot executes end-effector tasks, internal error noise is always present. To resist the internal error noise of robots, a novel fuzzy recurrent neural network (FRNN) is proposed, designed, and implemented on a field-programmable gate array (FPGA). The implementation is pipeline-based, which guarantees the order of the overall operations. Data processing crosses clock domains, which benefits the acceleration of the computing units. Compared with traditional gradient-based neural networks (NNs) and zeroing neural networks (ZNNs), the proposed FRNN converges faster and is more accurate. Practical experiments on a three-degree-of-freedom (DOF) planar robot manipulator show that the proposed FRNN coprocessor requires 496 lookup-table random access memories (LUTRAMs), 205.5 block random access memories (BRAMs), 41384 lookup tables (LUTs), and 16743 flip-flops (FFs) of the Xilinx XCZU9EG chip.
2. Wen H, Qu Y, He X, Sun S, Yang H, Li T, Zhou F. First/second-order predefined-time convergent ZNN models for time-varying quadratic programming and robotic manipulator application. ISA Transactions 2024;146:42-49. doi:10.1016/j.isatra.2023.12.020. PMID: 38129244.
Abstract
The zeroing neural network (ZNN), an important class of recurrent neural networks, has been widely applied in computation and optimization. In this paper, two ZNN models with predefined-time convergence are proposed for the time-varying quadratic programming (TVQP) problem. First, within the framework of the traditional ZNN model, a first-order predefined-time convergent ZNN (FPTZNN) model is proposed in combination with a predefined-time controller. Unlike existing ZNN models, the proposed model combines the error vector with a sliding-mode control technique. The FPTZNN model is then extended to a second-order predefined-time convergent ZNN (SPTZNN) model. Using the Lyapunov method and the concept of predefined-time stability, it is shown that the proposed FPTZNN and SPTZNN models converge in predefined time, and that their convergence time can be flexibly adjusted through the predefined-time control parameters. Finally, the proposed models are compared with existing ZNN models on the TVQP problem in simulation, and the results verify their effectiveness and superior performance. In addition, the FPTZNN model is applied to a robot motion planning problem and successfully implemented, verifying its practicality.
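To make the ZNN idea in the abstract above concrete, the following is a minimal sketch of the classical exponential ZNN design (not the paper's predefined-time FPTZNN/SPTZNN models), applied to tracking the solution of a time-varying linear system as a stand-in for the TVQP first-order conditions; the matrices A(t) and b(t) below are invented for illustration:

```python
import numpy as np

# Classical ZNN sketch: track x(t) with A(t) x(t) = b(t).
# Define the error e(t) = A x - b, impose de/dt = -gamma * e, and solve for dx/dt.
def znn_track(gamma=20.0, dt=1e-3, T=2.0):
    t = 0.0
    x = np.zeros(2)                       # deliberately wrong initial state
    for _ in range(int(T / dt)):
        A  = np.array([[2 + np.sin(t), 0.3], [0.3, 2 + np.cos(t)]])
        dA = np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])   # dA/dt
        b  = np.array([np.sin(2 * t), np.cos(t)])
        db = np.array([2 * np.cos(2 * t), -np.sin(t)])          # db/dt
        e  = A @ x - b
        # A dx + dA x - db = -gamma e  =>  dx = A^{-1}(-gamma e - dA x + db)
        dx = np.linalg.solve(A, -gamma * e - dA @ x + db)
        x += dt * dx                      # explicit Euler step
        t += dt
    A = np.array([[2 + np.sin(t), 0.3], [0.3, 2 + np.cos(t)]])
    b = np.array([np.sin(2 * t), np.cos(t)])
    return np.linalg.norm(A @ x - b)      # residual after the transient

residual = znn_track()
```

With the exponential design the error only decays like exp(-gamma*t); the predefined-time models in the paper replace the right-hand side -gamma*e with a controller whose settling time is fixed in advance, independent of the initial error.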
Affiliation(s)
- Hongsong Wen: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, 400715, China.
- Youran Qu: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, 400715, China.
- Xing He: Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, 400715, China.
- Shiying Sun: State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
- Hongjun Yang: State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
- Tao Li: Department of Critical Care Medicine, the First Medical Centre, Chinese PLA General Hospital, Beijing 100853, China; Medical Engineering Laboratory of Chinese PLA General Hospital, Beijing 100853, China.
- Feihu Zhou: Department of Critical Care Medicine, the First Medical Centre, Chinese PLA General Hospital, Beijing 100853, China; Medical Engineering Laboratory of Chinese PLA General Hospital, Beijing 100853, China.
3. Talebi F, Nazemi A, Ataabadi AA. Mean-AVaR in credibilistic portfolio management via an artificial neural network scheme. Journal of Experimental & Theoretical Artificial Intelligence 2022. doi:10.1080/0952813X.2022.2153271.
Affiliation(s)
- Fatemeh Talebi: Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran.
- Alireza Nazemi: Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran.
- Abdolmajid Abdolbaghi Ataabadi: Department of Management, Faculty of Industrial Engineering and Management, Shahrood University of Technology, Shahrood, Iran.
4. Sun M, Zhang Y, Wu Y, He X. On a Finitely Activated Terminal RNN Approach to Time-Variant Problem Solving. IEEE Transactions on Neural Networks and Learning Systems 2022;33:7289-7302. doi:10.1109/TNNLS.2021.3084740. PMID: 34106866.
Abstract
This article concerns terminal recurrent neural network (RNN) models for time-variant computing, featuring finite-valued activation functions (AFs) and finite-time convergence of the error variables. Terminal RNNs are models that admit terminal attractors, so the dynamics of each neuron retain finite-time convergence. By theoretically examining asymptotically convergent RNNs, a possible imperfection in solving time-variant problems is pointed out, for which finite-time-convergent models are most desirable. The existing AFs are summarized, and it is found that AFs taking only finitely many values are lacking. A finitely valued terminal RNN is then considered that involves only basic algebraic operations and the taking of roots. The proposed terminal RNN model is used to solve the time-variant problems undertaken, including time-variant quadratic programming and motion planning of redundant manipulators. Numerical results demonstrate the effectiveness of the proposed neural network, whose convergence rate is comparable with that of the existing power-rate RNN.
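As a toy illustration of the terminal-attractor property behind this abstract (a scalar sketch, not the paper's finitely activated model): for the dynamics de/dt = -gamma*|e|^rho*sign(e) with 0 < rho < 1, the error reaches zero exactly at t* = |e0|^(1-rho) / (gamma*(1-rho)), whereas the linear dynamics de/dt = -gamma*e only decays exponentially and never reaches zero:

```python
import math

def terminal_decay(e0=1.0, gamma=2.0, rho=0.5, dt=1e-4):
    """Integrate de/dt = -gamma * |e|^rho * sign(e): a finite-time terminal attractor."""
    t_star = abs(e0) ** (1 - rho) / (gamma * (1 - rho))  # analytic settling time
    e, t = e0, 0.0
    while t < 1.2 * t_star:                              # run slightly past t*
        step = dt * gamma * abs(e) ** rho
        # Clamp to zero when a full Euler step would overshoot, avoiding chatter:
        e = 0.0 if step >= abs(e) else e - math.copysign(step, e)
        t += dt
    return e, t_star

e_final, t_star = terminal_decay()   # with these defaults t* = 1.0 and e hits 0 before 1.2*t*
```

For e0 = 1, gamma = 2, rho = 0.5 the closed-form trajectory is e(t) = (1 - t)^2, which touches zero at t = 1; the simulation reproduces this, while any linear (asymptotic) design would still carry a nonzero residual at that time.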
5. Di Marco M, Forti M, Pancioni L, Innocenti G, Tesi A. Memristor Neural Networks for Linear and Quadratic Programming Problems. IEEE Transactions on Cybernetics 2022;52:1822-1835. doi:10.1109/TCYB.2020.2997686. PMID: 32559170.
Abstract
This article introduces a new class of memristor neural networks (NNs) for solving, in real time, quadratic programming (QP) and linear programming (LP) problems. The networks, called memristor programming NNs (MPNNs), use a set of filamentary-type memristors with sharp memristance transitions for constraint satisfaction and an additional set of memristors with smooth memristance transitions for memorizing the result of a computation. The nonlinear dynamics and global optimization capabilities of MPNNs for QP and LP problems are thoroughly investigated via a recently introduced technique called the flux-charge analysis method. One main feature of MPNNs is that the processing is performed in the flux-charge domain rather than in the conventional voltage-current domain. This enables exploiting the unconventional features of memristors to obtain advantages over traditional NNs for QP and LP problems operating in the voltage-current domain. One advantage is reduced power consumption: in an MPNN, voltages, currents, and hence power vanish once the quick analog transient is over. Moreover, an MPNN works in accordance with the fundamental principle of in-memory computing: the nonlinearity of the memristor is used in the dynamic computation, and the same memristor also memorizes, in a nonvolatile way, the result of that computation.
6. Li W, Han L, Xiao X, Liao B, Peng C. A gradient-based neural network accelerated for vision-based control of an RCM-constrained surgical endoscope robot. Neural Computing and Applications 2022. doi:10.1007/s00521-021-06465-x.
7. Zhang Z, Zheng L, Qiu T. A gain-adjustment neural network based time-varying underdetermined linear equation solving method. Neurocomputing 2021. doi:10.1016/j.neucom.2021.05.096.
8. Zhang Z, Kong LD, Zheng L. Power-Type Varying-Parameter RNN for Solving TVQP Problems: Design, Analysis, and Applications. IEEE Transactions on Neural Networks and Learning Systems 2019;30:2419-2433. doi:10.1109/TNNLS.2018.2885042. PMID: 30596590.
Abstract
Many practical problems can be solved by formulating them as time-varying quadratic programming (TVQP) problems. In this paper, a novel power-type varying-parameter recurrent neural network (VPNN) is proposed and analyzed to effectively solve the resulting TVQP problems, as well as the original practical problems. For clarity, we introduce this model from three aspects: design, analysis, and applications. Specifically, the motivation and method for designing this neural network model to solve online TVQP problems subject to time-varying linear equality/inequality constraints are described in detail. The theoretical analysis confirms that, when activated by six commonly used activation functions, the VPNN achieves a superexponential convergence rate. In contrast to the traditional zeroing neural network with fixed design parameters, the proposed VPNN has better convergence performance. Comparative simulations with state-of-the-art methods confirm the advantages of the VPNN. Furthermore, the application of the VPNN to a robot motion planning problem verifies the feasibility, applicability, and efficiency of the proposed method.
9. Xiao L, Zhang Y, Li K, Liao B, Tan Z. A novel recurrent neural network and its finite-time solution to time-varying complex matrix inversion. Neurocomputing 2019. doi:10.1016/j.neucom.2018.11.071.
10. Qin S, Le X, Wang J. A Neurodynamic Optimization Approach to Bilevel Quadratic Programming. IEEE Transactions on Neural Networks and Learning Systems 2017;28:2580-2591. doi:10.1109/TNNLS.2016.2595489. PMID: 28113639.
Abstract
This paper presents a neurodynamic optimization approach to bilevel quadratic programming (BQP). Based on the Karush-Kuhn-Tucker (KKT) theorem, the BQP problem is reduced to a one-level mathematical program with complementarity constraints (MPCC). It is proved that the global solution of the MPCC is the minimal one among the optimal solutions of multiple convex optimization subproblems. A recurrent neural network is developed for solving these convex optimization subproblems. From any initial state, the state of the proposed neural network converges to an equilibrium point of the network, which is exactly the optimal solution of the convex optimization subproblem. Compared with existing recurrent neural networks for BQP, the proposed neural network is guaranteed to deliver the exact optimal solutions of any convex BQP problems. Moreover, it is proved that the proposed neural network for bilevel linear programming converges to an equilibrium point in finite time. Finally, three numerical examples are elaborated to substantiate the efficacy of the proposed approach.
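Several entries in this list, including the one above, rely on recurrent networks that solve convex QPs by projection dynamics. The following is a hedged, generic sketch of that idea (a discrete iteration of the projection dynamics x <- P_Omega(x - alpha*(Qx + c)) over a box constraint set, not the exact model of any cited paper), whose fixed point is the constrained minimizer of (1/2)x'Qx + c'x:

```python
import numpy as np

def projection_nn(Q, c, lo, hi, alpha=0.1, steps=5000):
    """Discretized projection neural network for min (1/2) x'Qx + c'x over the box [lo, hi]^n.
    Each step moves along the negative gradient and projects back onto the feasible box;
    for convex Q the iteration converges to the constrained minimizer."""
    x = np.zeros_like(c, dtype=float)
    for _ in range(steps):
        x = np.clip(x - alpha * (Q @ x + c), lo, hi)   # P_Omega(x - alpha * grad f(x))
    return x

# Illustrative problem: unconstrained minimizer is (0.5, 1.5); the box [0,1]^2
# clips the second coordinate, so the constrained minimizer is (0.5, 1.0).
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-1.0, -3.0])
x_star = projection_nn(Q, c, lo=0.0, hi=1.0)
```

The step size alpha must be small enough relative to the largest eigenvalue of Q for the iteration to contract; continuous-time versions of the same dynamics are what the cited neurodynamic models analyze with Lyapunov arguments.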
12. Pérez-Ilzarbe MJ. Improvement of the convergence speed of a discrete-time recurrent neural network for quadratic optimization with general linear constraints. Neurocomputing 2014. doi:10.1016/j.neucom.2014.05.015.
13. Costantini G, Perfetti R, Todisco M. Recurrent neural network for approximate nonnegative matrix factorization. Neurocomputing 2014. doi:10.1016/j.neucom.2014.02.007.
14. Li J, Li C, Wu Z, Huang J. A feedback neural network for solving convex quadratic bi-level programming problems. Neural Computing and Applications 2013. doi:10.1007/s00521-013-1530-8.
15. Nazemi A, Tahmasbi N. A computational intelligence method for solving a class of portfolio optimization problems. Soft Computing 2013. doi:10.1007/s00500-013-1186-4.
16. Perez-Ilzarbe MJ. New discrete-time recurrent neural network proposal for quadratic optimization with general linear constraints. IEEE Transactions on Neural Networks and Learning Systems 2013;24:322-328. doi:10.1109/TNNLS.2012.2223484. PMID: 24808285.
Abstract
In this brief, the quadratic problem with general linear constraints is reformulated using the Wolfe dual theory, and a very simple discrete-time recurrent neural network is proved to be able to solve it. Conditions that guarantee global convergence of this network to the constrained minimum are developed. The computational complexity of the method is analyzed, and experimental work is presented that shows its high efficiency.
17. Liu Q, Cao J, Chen G. A Novel Recurrent Neural Network with Finite-Time Convergence for Linear Programming. Neural Computation 2010;22:2962-2978. doi:10.1162/neco_a_00029.
Abstract
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
Affiliation(s)
- Qingshan Liu: School of Automation, Southeast University, Nanjing 210096, China.
- Jinde Cao: Department of Mathematics, Southeast University, Nanjing 210096, China.
- Guanrong Chen: Department of Electronic Engineering, City University of Hong Kong, Hong Kong SAR, China.
19. Barbarosou M, Maratos N. A Nonfeasible Gradient Projection Recurrent Neural Network for Equality-Constrained Optimization Problems. IEEE Transactions on Neural Networks 2008;19:1665-1677. doi:10.1109/TNN.2008.2000993.
20. Costantini G, Perfetti R, Todisco M. Quasi-Lagrangian Neural Network for Convex Quadratic Optimization. IEEE Transactions on Neural Networks 2008;19:1804-1809. doi:10.1109/TNN.2008.2001183. PMID: 18842483.
Affiliation(s)
- Giovanni Costantini: Department of Electronic Engineering, University of Rome Tor Vergata, Rome, Italy.
21. Cooperative recurrent modular neural networks for constrained optimization: a survey of models and applications. Cognitive Neurodynamics 2008;3:47-81. doi:10.1007/s11571-008-9036-2. PMID: 19003467.
Abstract
Constrained optimization problems arise in a wide variety of scientific and engineering applications. Since single recurrent neural networks have shown some limitations when applied to constrained optimization problems in real-time engineering applications, cooperative recurrent neural network approaches have been developed to overcome these drawbacks. This paper surveys in detail the work on cooperative recurrent neural networks for solving constrained optimization problems and their engineering applications, and identifies the outstanding models from the viewpoints of both convergence to the optimal solution and model complexity. We provide examples and comparisons to show the advantages of these models in the given applications.
22. Hu X, Wang J. Design of General Projection Neural Networks for Solving Monotone Linear Variational Inequalities and Linear and Quadratic Optimization Problems. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 2007;37:1414-1421. doi:10.1109/TSMCB.2007.903706. PMID: 17926722.
23. Hou ZG, Gupta MM, Nikiforuk PN, Tan M, Cheng L. A recurrent neural network for hierarchical control of interconnected dynamic systems. IEEE Transactions on Neural Networks 2007;18:466-481. doi:10.1109/TNN.2006.885040. PMID: 17385632.
Abstract
A recurrent neural network for the optimal control of a group of interconnected dynamic systems is presented in this paper. On the basis of decomposition and coordination strategy for interconnected dynamic systems, the proposed neural network has a two-level hierarchical structure: several local optimization subnetworks at the lower level and one coordination subnetwork at the upper level. A goal-coordination method is used to coordinate the interactions between the subsystems. By nesting the dynamic equations of the subsystems into their corresponding local optimization subnetworks, the number of dimensions of the neural network can be reduced significantly. Furthermore, the subnetworks at both the lower and upper levels can work concurrently. Therefore, the computation efficiency, in comparison with the consecutive executions of numerical algorithms on digital computers, is increased dramatically. The proposed method is extended to the case where the control inputs of the subsystems are bounded. The stability analysis shows that the proposed neural network is asymptotically stable. Finally, an example is presented which demonstrates the satisfactory performance of the neural network.
Affiliation(s)
- Zeng-Guang Hou: Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing 100080, China.
24. Yang Y, Cao J. Solving quadratic programming problems by delayed projection neural network. IEEE Transactions on Neural Networks 2006;17:1630-1634. doi:10.1109/TNN.2006.880579. PMID: 17131675.
Abstract
In this letter, the delayed projection neural network for solving convex quadratic programming problems is proposed. The neural network is proved to be globally exponentially stable and can converge to an optimal solution of the optimization problem. Three examples show the effectiveness of the proposed network.
25. Ferreira LV, Kaszkurewicz E, Bhaya A. Support vector classifiers via gradient systems with discontinuous righthand sides. Neural Networks 2006;19:1612-1623. doi:10.1016/j.neunet.2006.07.004. PMID: 17011165.
Abstract
Gradient dynamical systems with discontinuous righthand sides are designed using Persidskii-type nonsmooth Lyapunov functions to work as support vector machines (SVMs) for the discrimination of nonseparable classes. The gradient systems are obtained from an exact penalty method applied to the constrained quadratic optimization problems, which are formulations of two well known SVMs. Global convergence of the trajectories of the gradient dynamical systems to the solution of the corresponding constrained problems is shown to be independent of the penalty parameters and of the parameters of the SVMs. The proposed gradient systems can be implemented as simple analog circuits as well as using standard software for integration of ODEs, and in order to use efficient integration methods with adaptive stepsize selection, the discontinuous terms are smoothed around a neighborhood of the discontinuity surface by means of the boundary layer technique. The scalability of the proposed gradient systems is also shown by means of an implementation using parallel computers, resulting in smaller processing times when compared with traditional SVM packages.
Affiliation(s)
- Leonardo V Ferreira: Department of Electrical Engineering, NACAD-COPPE/Federal University of Rio de Janeiro, Rio de Janeiro, RJ, Brazil.
26. Leung Y, Chen KZ, Gao XB. A high-performance feedback neural network for solving convex nonlinear programming problems. IEEE Transactions on Neural Networks 2003;14:1469-1477. doi:10.1109/TNN.2003.820852.