1. Peng D, Yang H, Jiang B. Robust Switching Time Optimization for Networked Switched Systems via Model Predictive Control. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:10961-10972. PMID: 37027588. DOI: 10.1109/tnnls.2023.3246041.
Abstract
This article presents a model predictive control (MPC) strategy for finding the optimal switching time sequences of networked switched systems with uncertainties. First, a large-scale MPC problem is formulated from the trajectories predicted under exact discretization. Second, a two-level hierarchical optimization structure, coupled with a local compensation mechanism, is established to solve the MPC problem; this structure is in fact a recurrent neural network consisting of a coordination unit (CU) at the upper level and a series of local optimization units (LOUs), one per subsystem, at the lower level. Finally, a real-time switching time optimization algorithm is designed to compute the optimal switching time sequences.
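The core idea of switching time optimization — pick the instants at which the active mode changes so that a cost accumulated along the simulated trajectory is minimized — can be seen in a toy one-switch version. The sketch below is purely illustrative (the paper solves a far harder networked, uncertain, multi-switch problem with a hierarchical recurrent-network optimizer); the mode dynamics, horizon, and initial state are made-up values.

```python
import numpy as np

# Toy switching-time problem: run mode 1 (dx/dt = -1) until t_s, then
# mode 2 (dx/dt = -0.2*x), and pick t_s minimizing J(t_s) = \int_0^T x^2 dt.
# Plain grid search over an Euler-simulated trajectory.

T, DT, X0 = 2.0, 1e-3, 1.0

def cost(t_s):
    x, J, t = X0, 0.0, 0.0
    while t < T:
        J += x * x * DT                      # accumulate \int x^2 dt
        dx = -1.0 if t < t_s else -0.2 * x   # active mode's vector field
        x += dx * DT                         # explicit Euler step
        t += DT
    return J

grid = np.linspace(0.0, T, 201)
best = grid[int(np.argmin([cost(ts) for ts in grid]))]
# mode 1 drives x from 1 to 0 at t = 1, so the optimum is t_s = 1
print(f"optimal switching time ~ {best:.2f}")
```

Grid search scales poorly with the number of switches, which is exactly why the paper replaces it with a gradient-based hierarchical optimization.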
2. Li W, Bian W, Xue X. Projected Neural Network for a Class of Non-Lipschitz Optimization Problems With Linear Constraints. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:3361-3373. PMID: 31689212. DOI: 10.1109/tnnls.2019.2944388.
Abstract
In this article, we consider a class of nonsmooth, nonconvex, and non-Lipschitz optimization problems with wide applications in sparse optimization. We generalize the Clarke stationary point and define a kind of generalized stationary point of the problems with a stronger optimality property. Based on the smoothing method, we propose a projected neural network for solving this kind of optimization problem. Under the condition that the level set of the objective function in the feasible region is bounded, we prove that the solution of the proposed neural network exists globally and is bounded. The uniqueness of the solution of the proposed network is also analyzed. When the feasible region is bounded, any accumulation point of the proposed neural network is a generalized stationary point of the optimization model. Under some suitable conditions, any solution of the proposed neural network converges asymptotically to one stationary point. In particular, we give a deeper analysis of the proposed network for solving a special class of non-Lipschitz optimization problems, which yields a lower-bound property and a unified identification of the nonzero elements of all accumulation points. Finally, some numerical results are presented to show the efficiency of the proposed neural network in solving several kinds of sparse optimization models.
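The generic projection-network dynamics behind models of this kind, dx/dt = P_Ω(x − ∇f(x)) − x, can be sketched for a smooth f and a box constraint, where the equilibrium is known in closed form. This is only the textbook projection-network template — the cited paper's contribution is handling nonsmooth, non-Lipschitz objectives via smoothing; the objective and box below are made up.

```python
import numpy as np

# Projection neural network sketch: the ODE dx/dt = P_Omega(x - grad f(x)) - x
# settles at points satisfying the projection fixed-point (KKT) condition.
# Here f(x) = 0.5*||x - c||^2 and Omega = [0,1]^3, so the equilibrium is
# simply the projection of c onto the box.

c = np.array([1.6, -0.4, 0.5])
proj = lambda x: np.clip(x, 0.0, 1.0)        # P_Omega onto the box [0,1]^3

x = np.zeros(3)
for _ in range(2000):
    x += 0.05 * (proj(x - (x - c)) - x)      # Euler step of the network ODE

print(x)  # converges to clip(c, 0, 1) = [1, 0, 0.5]
```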
3. Dynamic System Identification and Prediction Using a Self-Evolving Takagi–Sugeno–Kang-Type Fuzzy CMAC Network. Electronics 2020. DOI: 10.3390/electronics9040631.
Abstract
This study proposes a Self-evolving Takagi-Sugeno-Kang-type Fuzzy Cerebellar Model Articulation Controller (STFCMAC) for solving identification and prediction problems. The proposed STFCMAC model uses the hypercube firing strength for generating external loops and internal feedback. A differentiable Gaussian function is used in the fuzzy hypercube cell of the proposed model, and a linear combination function of the model inputs is used as the output of the proposed model. The learning process of the STFCMAC is initiated using an empty hypercube base. Fuzzy hypercube cells are generated through structure learning, and the related parameters are adjusted by a gradient descent algorithm. The proposed STFCMAC network has some advantages that are summarized as follows: (1) the model automatically selects the parameters of the memory structure, (2) it requires few fuzzy hypercube cells, and (3) it performs identification and prediction adaptively and effectively.
4. Sliding Surface in Consensus Problem of Multi-Agent Rigid Manipulators with Neural Network Controller. Energies 2017. DOI: 10.3390/en10122127.
5. Decentralized adaptive optimal stabilization of nonlinear systems with matched interconnections. Soft Computing 2017. DOI: 10.1007/s00500-017-2526-6.
6. Qu Q, Zhang H, Feng T, Jiang H. Decentralized adaptive tracking control scheme for nonlinear large-scale interconnected systems via adaptive dynamic programming. Neurocomputing 2017. DOI: 10.1016/j.neucom.2016.10.058.
7. Wang Y, Cheng L, Hou ZG, Yu J, Tan M. Optimal Formation of Multirobot Systems Based on a Recurrent Neural Network. IEEE Transactions on Neural Networks and Learning Systems 2016; 27:322-333. PMID: 26316224. DOI: 10.1109/tnnls.2015.2464314.
Abstract
The optimal formation problem of multirobot systems is solved by a recurrent neural network in this paper. The desired formation is described by shape theory, which generates a set of feasible formations that share the same relative relations among robots. An optimal formation is a member of this feasible set with the minimum distance to the initial formation of the multirobot system; the formation problem is thus transformed into an optimization problem. In addition, the orientation, scale, and admissible range of the formation can be included as constraints in the optimization problem. Furthermore, if all robots are identical, their positions in the system are exchangeable, so each robot need not move to one specific position in the formation. In this case, the optimal formation problem becomes a combinatorial optimization problem, whose optimal solution is very hard to obtain. Inspired by the penalty method, this combinatorial optimization problem can be approximately transformed into a convex optimization problem. Because the distance involves the Euclidean norm, the objective functions of these optimization problems are nonsmooth. To solve these nonsmooth optimization problems efficiently, a recurrent neural network approach is employed, owing to its parallel computation ability. Finally, simulations and experiments are given to validate the effectiveness and efficiency of the proposed optimal formation approach.
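The "closest feasible formation" step — finding the rotated and translated copy of a desired shape nearest to the robots' current positions — is, in its simplest form, an orthogonal Procrustes problem with a closed-form SVD solution. The sketch below shows only that kernel; the paper additionally handles scaling, position assignment, and constraints via a recurrent network, and all coordinates here are made up.

```python
import numpy as np

# Given a desired shape and current robot positions, find the orthogonally
# transformed, translated copy of the shape closest (in Frobenius norm) to
# the current configuration: minimize ||s0 @ R - c0||_F over orthogonal R,
# solved by the SVD of the cross-covariance (note: R may include a
# reflection; a proper-rotation variant would fix det(R) = +1).

shape = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])    # desired triangle
current = np.array([[2.1, 1.0], [2.0, 2.1], [1.0, 1.5]])  # robot positions

s0 = shape - shape.mean(axis=0)            # center both point sets
c0 = current - current.mean(axis=0)
U, _, Vt = np.linalg.svd(s0.T @ c0)        # cross-covariance SVD
R = U @ Vt                                 # optimal orthogonal transform
target = s0 @ R + current.mean(axis=0)     # nearest formation to 'current'
print(np.round(target, 2))
```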
8. Zhang L, Zhu Y, Zheng WX. Synchronization and State Estimation of a Class of Hierarchical Hybrid Neural Networks With Time-Varying Delays. IEEE Transactions on Neural Networks and Learning Systems 2016; 27:459-470. PMID: 25823045. DOI: 10.1109/tnnls.2015.2412676.
Abstract
This paper addresses the problems of synchronization and state estimation for a class of discrete-time hierarchical hybrid neural networks (NNs) with time-varying delays. The hierarchical hybrid feature consists of higher-level nondeterministic switching and lower-level stochastic switching. The latter describes NNs subject to Markovian mode transitions, whereas the former follows an average dwell-time switching regularity that models the supervisory orchestrating mechanism among these Markov jump NNs. The considered time delays are not only time-varying but also dependent on the mode of the NNs on the lower layer of the hierarchical structure. Despite quantization and random data missing, the synchronization controllers and state estimators are designed such that the resulting error system is exponentially stable with an expected decay rate and has a prescribed H∞ disturbance attenuation level. Two numerical examples are provided to show the validity and potential of the developed results.
9. Qin S, Xue X. A two-layer recurrent neural network for nonsmooth convex optimization problems. IEEE Transactions on Neural Networks and Learning Systems 2015; 26:1149-1160. PMID: 25051563. DOI: 10.1109/tnnls.2014.2334364.
Abstract
In this paper, a two-layer recurrent neural network is proposed to solve nonsmooth convex optimization problems subject to convex inequality and linear equality constraints. Compared with existing neural network models, the proposed neural network has low model complexity and avoids penalty parameters. It is proved that from any initial point, the state of the proposed neural network reaches the equality feasible region in finite time and stays there thereafter. Moreover, the state is unique if the initial point lies in the equality feasible region. The equilibrium point set of the proposed neural network is proved to be equivalent to the Karush–Kuhn–Tucker optimality set of the original optimization problem. It is further proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov, and that from any initial point the state converges to an equilibrium point of the proposed neural network. Finally, as applications, the proposed neural network is used to solve nonlinear convex programming with linear constraints and L1-norm minimization problems.
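The L1-norm minimization application can be imitated discretely with a projected subgradient iteration: take a subgradient of ||x||_1 and project back onto the affine set {x : Ax = b}, so iterates stay exactly feasible — loosely echoing the network's "reach the equality feasible region and stay there" property. This is a generic subgradient scheme, not the paper's two-layer network; the problem data below are made up, with true minimizer x* = (0, 1, 0).

```python
import numpy as np

# min ||x||_1  s.t.  Ax = b, via projected subgradient with diminishing steps.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 1.0])
AAT_inv = np.linalg.inv(A @ A.T)
proj = lambda y: y - A.T @ (AAT_inv @ (A @ y - b))   # projection onto {x: Ax=b}

x = proj(np.zeros(3))                                # feasible starting point
best = x.copy()
for k in range(1, 20001):
    x = proj(x - (0.5 / np.sqrt(k)) * np.sign(x))    # subgradient of ||.||_1
    if np.linalg.norm(x, 1) < np.linalg.norm(best, 1):
        best = x

print(np.round(best, 2), round(np.linalg.norm(best, 1), 2))
```

The subgradient method converges slowly in value (O(1/sqrt(k))), which is part of the motivation for dedicated network dynamics with finite-time feasibility.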
10. Li G, Yan Z, Wang J. A one-layer recurrent neural network for constrained nonconvex optimization. Neural Networks 2015; 61:10-21. DOI: 10.1016/j.neunet.2014.09.009.
11. Liu D, Wang D, Li H. Decentralized stabilization for a class of continuous-time nonlinear interconnected systems using online learning optimal control approach. IEEE Transactions on Neural Networks and Learning Systems 2014; 25:418-428. PMID: 24807039. DOI: 10.1109/tnnls.2013.2280013.
Abstract
In this paper, using a neural-network-based online learning optimal control approach, a novel decentralized control strategy is developed to stabilize a class of continuous-time nonlinear interconnected large-scale systems. First, optimal controllers of the isolated subsystems are designed with cost functions reflecting the bounds of the interconnections. Then, it is proven that the decentralized control strategy of the overall system can be established by adding appropriate feedback gains to the optimal control policies of the isolated subsystems. Next, an online policy iteration algorithm is presented to solve the Hamilton–Jacobi–Bellman equations related to the optimal control problem. By constructing a set of critic neural networks, the cost functions, and with them the control policies, can be obtained approximately. Furthermore, the dynamics of the estimation errors of the critic networks are shown to be uniformly ultimately bounded. Finally, a simulation example is provided to illustrate the effectiveness of the presented decentralized control scheme.
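Policy iteration itself is easy to see in the simplest optimal-control instance: a scalar linear system with quadratic cost, where the HJB equation reduces to a Riccati equation and the "critic" is a single number p in the value function V(x) = p·x². The paper does this with neural-network critics for nonlinear interconnected subsystems; the numbers below are toy values.

```python
import math

# Scalar LQR: dx/dt = a*x + b*u, cost \int (q*x^2 + r*u^2) dt.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

k = 2.0                                   # initial stabilizing gain (a - b*k < 0)
for _ in range(20):
    # policy evaluation: solve the Lyapunov eq. 2*(a - b*k)*p + q + r*k**2 = 0
    p = (q + r * k * k) / (2.0 * (b * k - a))
    # policy improvement: u = -k*x with k = b*p/r
    k = b * p / r

# closed-form Riccati solution for comparison
p_star = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
print(round(p, 6), round(p_star, 6))  # both ~ 1 + sqrt(2) = 2.414214
```

Each evaluation step only requires the current policy to be stabilizing, which is why the paper starts from admissible control laws.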
12. A robust recurrent simultaneous perturbation stochastic approximation training algorithm for recurrent neural networks. Neural Computing and Applications 2013. DOI: 10.1007/s00521-013-1436-5.
13. Bian W, Chen X. Smoothing neural network for constrained non-Lipschitz optimization with applications. IEEE Transactions on Neural Networks and Learning Systems 2012; 23:399-411. PMID: 24808547. DOI: 10.1109/tnnls.2011.2181867.
Abstract
In this paper, a smoothing neural network (SNN) is proposed for a class of constrained non-Lipschitz optimization problems, where the objective function is the sum of a nonsmooth, nonconvex function and a non-Lipschitz function, and the feasible set is a closed convex subset of R^n. Using smoothing approximation techniques, the proposed neural network is modeled by a differential equation, which can be implemented easily. Under a level-boundedness condition on the objective function over the feasible set, we prove the global existence and uniform boundedness of the solutions of the SNN for any initial point in the feasible set. The uniqueness of the solution of the SNN is established under the Lipschitz property of the smoothing functions. We show that any accumulation point of the solutions of the SNN is a stationary point of the optimization problem. Numerical results on image restoration, blind source separation, variable selection, and condition number minimization are presented to illustrate the theoretical results and show the efficiency of the SNN. Comparisons with some existing algorithms show the advantages of the SNN.
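The smoothing idea can be shown in miniature: replace the nonsmooth |x| with the smooth approximation φ_μ(x) = sqrt(x² + μ²) and run plain gradient descent on the smoothed objective while shrinking μ. The example minimizes a tiny lasso-type problem whose nonsmooth term is handled only through the smoothing; the paper's SNN is a projected ODE for a much broader non-Lipschitz class, and the data here are synthetic.

```python
import numpy as np

# Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by smoothing |x_i| ~ sqrt(x_i^2 + mu^2)
# and doing gradient descent, tightening mu in stages.

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.5, 0.0, 0.0, -2.0, 0.0])     # sparse ground truth
y = A @ x_true
lam = 0.1

x = np.zeros(5)
for mu in [1.0, 0.1, 0.01, 1e-4]:                 # gradually tighten smoothing
    for _ in range(2000):
        grad = A.T @ (A @ x - y) + lam * x / np.sqrt(x * x + mu * mu)
        x -= 0.01 * grad                          # gradient step on smoothed f

print(np.round(x, 2))  # close to the sparse x_true, small entries shrunk
```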
14. Li T, Li R, Wang D. Adaptive neural control of nonlinear MIMO systems with unknown time delays. Neurocomputing 2012. DOI: 10.1016/j.neucom.2011.04.043.
15. Decentralized adaptive neural control of nonlinear interconnected large-scale systems with unknown time delays and input saturation. Neurocomputing 2011. DOI: 10.1016/j.neucom.2011.03.005.
16. Cheng L, Hou ZG, Lin Y, Tan M, Zhang WC, Wu FX. Recurrent Neural Network for Non-Smooth Convex Optimization Problems With Application to the Identification of Genetic Regulatory Networks. IEEE Transactions on Neural Networks 2011; 22:714-726. PMID: 21427022. DOI: 10.1109/tnn.2011.2109735.
17. Hou ZG, Cheng L, Tan M. Multicriteria Optimization for Coordination of Redundant Robots Using a Dual Neural Network. IEEE Transactions on Systems, Man, and Cybernetics, Part B 2010; 40:1075-1087. DOI: 10.1109/tsmcb.2009.2034073.
18. Hu X, Sun C, Zhang B. Design of Recurrent Neural Networks for Solving Constrained Least Absolute Deviation Problems. IEEE Transactions on Neural Networks 2010; 21:1073-1086. DOI: 10.1109/tnn.2010.2048123.
Affiliation(s)
- Xiaolin Hu
- State Key Laboratory of Intelligent Technology and Systems, TNList, and the Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China.
19. Hu X, Zhang B. An Alternative Recurrent Neural Network for Solving Variational Inequalities and Related Optimization Problems. IEEE Transactions on Systems, Man, and Cybernetics, Part B 2009; 39:1640-1645. DOI: 10.1109/tsmcb.2009.2025700.
20. Cheng L, Hou ZG, Tan M. A Delayed Projection Neural Network for Solving Linear Variational Inequalities. IEEE Transactions on Neural Networks 2009; 20:915-925. DOI: 10.1109/tnn.2009.2012517.
22. Sun Z, Wang Y. Traffic congestion identification by combining PCA with higher-order Boltzmann machine. Neural Computing and Applications 2009. DOI: 10.1007/s00521-009-0250-6.
23. Zuo W, Cai L. Adaptive-Fourier-neural-network-based control for a class of uncertain nonlinear systems. IEEE Transactions on Neural Networks 2008; 19:1689-1701. PMID: 18842474. DOI: 10.1109/tnn.2008.2001003.
Abstract
An adaptive Fourier neural network (AFNN) control scheme is presented in this paper for the control of a class of uncertain nonlinear systems. Based on Fourier analysis and neural network (NN) theory, the AFNN employs orthogonal complex Fourier exponentials as its activation functions. Because the neurons have a clear physical meaning, determining the AFNN structure and the parameters of the activation functions becomes convenient. One salient feature of the proposed AFNN approach is that all the nonlinearities and uncertainties of the dynamical system are lumped together and compensated online by the AFNN; it can therefore be applied to uncertain nonlinear systems without any a priori knowledge of the system dynamics. A novel learning algorithm, derived from Lyapunov theory, is proposed; it is essentially a frequency-domain method and guarantees asymptotic stability of the closed-loop system. Simulation results for a multiple-input-multiple-output (MIMO) nonlinear system and experimental results for an X-Y positioning table are presented to show the effectiveness of the proposed AFNN controller.
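The appeal of Fourier-exponential neurons is that, once the harmonic frequencies are fixed, the network is linear in its weights, and each weight has a clear physical meaning as a Fourier coefficient. The sketch below recovers a band-limited periodic signal with an offline least-squares fit over the first few harmonics; the paper instead adapts such weights online inside a Lyapunov-based controller, and the signal here is made up.

```python
import numpy as np

# Fit target(t) with complex exponentials e^{2*pi*i*k*t}, k = -4..4, by
# least squares; a band-limited target is reproduced essentially exactly.

t = np.linspace(0.0, 1.0, 200, endpoint=False)
target = 1.0 + 2.0 * np.sin(2 * np.pi * t) - 0.5 * np.cos(6 * np.pi * t)

K = 4
Phi = np.exp(2j * np.pi * np.outer(t, np.arange(-K, K + 1)))   # basis matrix
w, *_ = np.linalg.lstsq(Phi, target.astype(complex), rcond=None)
approx = (Phi @ w).real

print(round(float(np.max(np.abs(approx - target))), 8))  # essentially zero
```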
Affiliation(s)
- Wei Zuo
- Department of Mechanical Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
24. Yuzgec U, Becerikli Y, Turker M. Dynamic Neural-Network-Based Model-Predictive Control of an Industrial Baker's Yeast Drying Process. IEEE Transactions on Neural Networks 2008. DOI: 10.1109/tnn.2008.2000205.
25. Hua C, Guan X. Output Feedback Stabilization for Time-Delay Nonlinear Interconnected Systems Using Neural Networks. IEEE Transactions on Neural Networks 2008; 19:673-688. DOI: 10.1109/tnn.2007.912318.
26. Chen M, Gautama T, Mandic DP. An assessment of qualitative performance of machine learning architectures: modular feedback networks. IEEE Transactions on Neural Networks 2008; 19:183-189. PMID: 18269949. DOI: 10.1109/tnn.2007.902728.
Abstract
A framework for assessing the qualitative performance of machine learning architectures is proposed. For generality, the analysis is provided for the modular nonlinear pipelined recurrent neural network (PRNN) architecture. This is supported by a sensitivity analysis of the prediction performance with respect to changes in the nature of the processed signal, using the recently introduced delay vector variance (DVV) method for phase-space signal characterization. Comprehensive simulations combining quantitative and qualitative analysis on both linear and nonlinear signals suggest that some quantitative prediction performance may need to be traded in order to preserve the nature of the processed signal, especially where the signal's nature is of primary importance (as in biomedical applications).
Affiliation(s)
- Mo Chen
- Department of Electrical and Electronic Engineering, Communications and Signal Processing, Imperial College London, London, UK.