1. Han H, Liu H, Qiao J. Data-Knowledge-Driven Self-Organizing Fuzzy Neural Network. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:2081-2093. PMID: 35802545. DOI: 10.1109/tnnls.2022.3186671.
Abstract
Fuzzy neural networks (FNNs) offer the advantages of knowledge leveraging and adaptive learning and have been widely used in nonlinear system modeling. However, it is difficult for FNNs to obtain an appropriate structure when data are insufficient, which limits their generalization performance. To solve this problem, a data-knowledge-driven self-organizing FNN (DK-SOFNN) with a structure compensation strategy and a parameter reinforcement mechanism is proposed in this article. First, a structure compensation strategy is proposed to mine structural information from empirical knowledge for learning the structure of DK-SOFNN; a complete model structure can then be acquired from sufficient structural information. Second, a parameter reinforcement mechanism is developed to determine the parameter evolution direction that is most suitable for the current model structure; a robust model is then obtained through the interaction between the parameters and the dynamic structure. Finally, the proposed DK-SOFNN is theoretically analyzed for both the fixed-structure and dynamic-structure cases, yielding convergence conditions that can guide practical applications. The merits of DK-SOFNN are demonstrated on several benchmark problems and industrial applications.
2. Kaveh M, Mesgari MS. Application of Meta-Heuristic Algorithms for Training Neural Networks and Deep Learning Architectures: A Comprehensive Review. Neural Process Lett 2022; 55:1-104. PMID: 36339645. PMCID: PMC9628382. DOI: 10.1007/s11063-022-11055-6.
Abstract
The learning process and hyper-parameter optimization of artificial neural networks (ANNs) and deep learning (DL) architectures are considered among the most challenging machine learning problems. Many past studies have used gradient-based backpropagation methods to train DL architectures. However, gradient-based methods have major drawbacks: they can become stuck in local minima of multi-objective cost functions, they are computationally expensive because gradient information must be calculated over thousands of iterations, and they require the cost function to be continuous. Since training ANNs and DL models is an NP-hard optimization problem, optimizing their structure and parameters with meta-heuristic (MH) algorithms has attracted considerable attention. MH algorithms can effectively estimate optimal settings of DL components (hyper-parameters, weights, number of layers, number of neurons, learning rate, etc.). This paper provides a comprehensive review of the optimization of ANNs and DL architectures using MH algorithms: it reviews the latest developments in applying MH algorithms to DL and ANN methods, presents their advantages and disadvantages, and points out research directions for filling the gaps between MH and DL methods. It also notes that evolutionary hybrid architectures still have limited coverage in the literature, and it classifies recent MH algorithms to demonstrate their effectiveness in DL and ANN training across applications. Most researchers tend to develop novel hybrid algorithms by combining MHs to optimize the hyper-parameters of DL models and ANNs; such hybrids help improve algorithm performance and can solve complex optimization problems. In general, a well-performing MH should achieve a suitable trade-off between exploration and exploitation. Hence, this paper summarizes various MH algorithms in terms of convergence behavior, exploration, exploitation, and the ability to avoid local minima. The integration of MHs with DL is expected to accelerate the training process in the coming few years, although relevant publications remain rare.
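Since many entries in this list pair a metaheuristic with neural network training, a minimal sketch may help fix ideas. The toy below (an illustration, not code from any cited paper) uses a plain global-best PSO to fit the weights of a one-hidden-layer network on a tiny regression task; the network size, swarm parameters, and task are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny regression task: y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 40).reshape(-1, 1)
y = np.sin(X)

H = 8                        # hidden neurons (assumption)
DIM = H + H + H + 1          # W1 (1xH), b1 (H), W2 (Hx1), b2 (scalar)

def unpack(theta):
    i = 0
    W1 = theta[i:i + H].reshape(1, H); i += H
    b1 = theta[i:i + H]; i += H
    W2 = theta[i:i + H].reshape(H, 1); i += H
    b2 = theta[i]
    return W1, b1, W2, b2

def mse(theta):
    W1, b1, W2, b2 = unpack(theta)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

# Standard global-best PSO with fixed inertia/cognitive/social weights.
n, iters, w, c1, c2 = 30, 300, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (n, DIM))
vel = np.zeros((n, DIM))
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, DIM)), rng.random((n, DIM))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    g = pbest[pbest_f.argmin()].copy()

print(f"final MSE: {pbest_f.min():.4f}")
```

No gradient of the loss is ever computed, which is exactly the appeal the review attributes to MH trainers for non-differentiable or multimodal cost functions.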
Affiliation(s)
- Mehrdad Kaveh
- Department of Geodesy and Geomatics, K. N. Toosi University of Technology, Tehran, 19967-15433 Iran
- Mohammad Saadi Mesgari
- Department of Geodesy and Geomatics, K. N. Toosi University of Technology, Tehran, 19967-15433 Iran
3. Enhancement of the HILOMOT Algorithm with Modified EM and Modified PSO Algorithms for Nonlinear Systems Identification. Electronics 2022. DOI: 10.3390/electronics11050729.
Abstract
Developing a mathematical model has become an inevitable need in studies across all disciplines, and with advancements in technology there is a growing need to develop complex models. System identification is a popular way of constructing mathematical models of highly complex processes when an analytical model is not feasible. One of the many model architectures used in system identification is the Local Model Network (LMN). The Hierarchical Local Model Tree (HILOMOT) is an iterative LMN training algorithm that uses an axis-oblique split method to divide the input space hierarchically. The split positions of the local models directly influence the accuracy of the entire model, but finding the best split positions presents a nonlinear optimization problem. This paper presents an optimized HILOMOT algorithm with enhanced Expectation-Maximization (EM) and Particle Swarm Optimization (PSO) algorithms, which includes the normalization parameter and utilizes a reduced-parameter vector. Finally, the performance of the improved HILOMOT algorithm is compared with the existing algorithm by modeling the NOx emission of a gas turbine and multiple nonlinear test functions of different orders and structures.
4. Xu X, Ren W. A hybrid model of stacked autoencoder and modified particle swarm optimization for multivariate chaotic time series forecasting. Appl Soft Comput 2022. DOI: 10.1016/j.asoc.2021.108321.
5. Han M, Zhong K, Qiu T, Han B. Interval Type-2 Fuzzy Neural Networks for Chaotic Time Series Prediction: A Concise Overview. IEEE Transactions on Cybernetics 2019; 49:2720-2731. PMID: 29993733. DOI: 10.1109/tcyb.2018.2834356.
Abstract
Chaotic time series are widespread in nature and society (e.g., meteorology, physics, and economics) and usually exhibit seemingly unpredictable behavior due to their inherent nonstationarity and high complexity. Multifarious advanced approaches have been developed to tackle the prediction problem, such as statistical methods, artificial neural networks (ANNs), and support vector machines. Among them, the interval type-2 fuzzy neural network (IT2FNN), a synergistic integration of fuzzy logic systems and ANNs, has received wide attention in the field of chaotic time series prediction. This paper begins with the structural features and advantages of the IT2FNN. Moreover, identification of chaotic characteristics and phase-space reconstruction, both of which matter for prediction, are presented. In addition, we offer a comprehensive review of state-of-the-art applications of IT2FNNs, with an emphasis on chaotic time series prediction, and summarize their main contributions as well as some hardware implementations for computational speedup. Finally, trends and extensions of this field, along with an outlook on future challenges, are revealed. The primary objective of this paper is to serve as a tutorial or reference for interested researchers, giving an overall picture of current developments and helping them identify potential directions for further investigation.
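The phase-space reconstruction mentioned above is typically done by delay embedding; the sketch below is a generic Takens-style embedding, not code tied to the surveyed paper, and the test series and delay values are arbitrary assumptions.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Delay embedding of a scalar series: row t is
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

x = np.sin(0.1 * np.arange(200))          # stand-in scalar series
emb = delay_embed(x, dim=3, tau=5)        # embedding dim and delay are assumptions
print(emb.shape)                          # (190, 3)
```

Each row of `emb` is then a candidate input vector for a predictor such as an IT2FNN, with the target being a future sample of the same series.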
6. Feng S, Chen CLP. Nonlinear system identification using a simplified Fuzzy Broad Learning System: Stability analysis and a comparative study. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.01.073.
7. Jaddi NS, Abdullah S. Kidney-inspired algorithm with reduced functionality treatment for classification and time series prediction. PLoS One 2019; 14:e0208308. PMID: 30608936. PMCID: PMC6319704. DOI: 10.1371/journal.pone.0208308.
Abstract
Optimization of an artificial neural network model through optimization algorithms is a common method for finding optimal solutions to a broad variety of real-world problems. One such optimization algorithm is the kidney-inspired algorithm (KA), which has recently been proposed in the literature. The algorithm mimics the four processes performed by the kidneys: filtration, reabsorption, secretion, and excretion. However, a human with reduced kidney function needs additional treatment to improve kidney performance. In the medical field, the glomerular filtration rate (GFR) test is used to check the health of the kidneys; it estimates the amount of blood that passes through the glomeruli each minute. In this paper, we mimic this kidney function test: the GFR result is used to select a suitable step to add to the basic KA process. This novel imitation is designed for both minimization and maximization problems. In the proposed method, a particular action is performed depending on whether the GFR test result is less than 15, between 15 and 60, or more than 60. These additional processes are applied as required, with the aim of improving exploration of the search space and increasing the likelihood of the KA finding the optimum solution. The proposed method is tested on test functions, and its results are compared with those of the basic KA. Its performance on benchmark classification and time series prediction problems is also examined and compared with that of other methods available in the literature. In addition, the proposed method is applied to a real-world water quality prediction problem. Statistical analysis of all these applications shows that the proposed method improves the optimization outcome.
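The three GFR bands described above amount to a simple threshold dispatch. A hedged sketch follows; the action names and the boundary handling are placeholders, since the paper defines the actual treatment operators applied to the KA.

```python
def gfr_action(gfr):
    """Map a GFR-like score to one of three treatment steps.
    Names and inclusive/exclusive boundaries are illustrative
    assumptions, not the paper's exact operators."""
    if gfr < 15:
        return "strong_treatment"      # kidney failing: aggressive corrective step
    elif gfr <= 60:
        return "moderate_treatment"    # reduced function: milder corrective step
    else:
        return "no_extra_treatment"    # healthy: run the basic KA unchanged

print(gfr_action(10), gfr_action(40), gfr_action(80))
```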
Affiliation(s)
- Najmeh Sadat Jaddi
- Data Mining and Optimization Research Group (DMO), Centre for Artificial Intelligence Technology, Faculty of Information Science and Technology, National University of Malaysia, Bangi, Selangor, Malaysia
- Salwani Abdullah
- Data Mining and Optimization Research Group (DMO), Centre for Artificial Intelligence Technology, Faculty of Information Science and Technology, National University of Malaysia, Bangi, Selangor, Malaysia
8. Han H, Wu X, Zhang L, Tian Y, Qiao J. Self-Organizing RBF Neural Network Using an Adaptive Gradient Multiobjective Particle Swarm Optimization. IEEE Transactions on Cybernetics 2019; 49:69-82. PMID: 29990097. DOI: 10.1109/tcyb.2017.2764744.
Abstract
One of the major obstacles in using radial basis function (RBF) neural networks is convergence toward local minima instead of the global minimum. For this reason, an adaptive gradient multiobjective particle swarm optimization (AGMOPSO) algorithm is designed in this paper to optimize both the structure and the parameters of RBF neural networks. First, the AGMOPSO algorithm, based on a multiobjective gradient method and a self-adaptive flight-parameter mechanism, is developed to improve computational performance. Second, the AGMOPSO-based self-organizing RBF neural network (AGMOPSO-SORBF) can optimize the parameters (centers, widths, and weights) as well as determine the network size. The goal of AGMOPSO-SORBF is to find a trade-off between the accuracy and the complexity of RBF neural networks. Third, the convergence analysis of AGMOPSO-SORBF is detailed to ensure the prerequisite of any successful application. Finally, the merits of the proposed approach are verified on multiple numerical examples. The results indicate that AGMOPSO-SORBF achieves much better generalization capability and a more compact network structure than some existing methods.
9. Han HG, Lu W, Hou Y, Qiao JF. An Adaptive-PSO-Based Self-Organizing RBF Neural Network. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:104-117. PMID: 28113788. DOI: 10.1109/tnnls.2016.2616413.
Abstract
In this paper, a self-organizing radial basis function (SORBF) neural network is designed to improve both accuracy and parsimony with the aid of adaptive particle swarm optimization (APSO). In the proposed APSO algorithm, to avoid being trapped into local optimal values, a nonlinear regressive function is developed to adjust the inertia weight. Furthermore, the APSO algorithm can optimize both the network size and the parameters of an RBF neural network simultaneously. As a result, the proposed APSO-SORBF neural network can effectively generate a network model with a compact structure and high accuracy. Moreover, the analysis of convergence is given to guarantee the successful application of the APSO-SORBF neural network. Finally, multiple numerical examples are presented to illustrate the effectiveness of the proposed APSO-SORBF neural network. The results demonstrate that the proposed method is more competitive in solving nonlinear problems than some other existing SORBF neural networks.
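The nonlinear inertia-weight adjustment described above can be illustrated with a generic decaying schedule; the exponential form below is a stand-in assumption, not the paper's exact regressive function, and the bounds and decay rate are arbitrary.

```python
import numpy as np

def inertia(t, T, w_max=0.9, w_min=0.4):
    """Generic nonlinear inertia schedule: starts near w_max
    (favoring exploration) and decays toward w_min (favoring
    exploitation) as iteration t approaches the budget T."""
    return w_min + (w_max - w_min) * np.exp(-4.0 * t / T)

T = 100
print(round(inertia(0, T), 3), round(inertia(T, T), 3))   # 0.9 0.409
```

In a PSO velocity update `v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)`, substituting `w = inertia(t, T)` makes early iterations wander widely and late iterations refine around the best-known region, which is the trade-off the abstract targets.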
10. Fan J, Wang J. A Collective Neurodynamic Optimization Approach to Nonnegative Matrix Factorization. IEEE Transactions on Neural Networks and Learning Systems 2017; 28:2344-2356. PMID: 27429450. DOI: 10.1109/tnnls.2016.2582381.
Abstract
Nonnegative matrix factorization (NMF) is an advanced method for nonnegative feature extraction, with widespread applications. However, the NMF solution often entails solving a global optimization problem with a nonconvex objective function and nonnegativity constraints. This paper presents a collective neurodynamic optimization (CNO) approach to this challenging problem. The proposed collective neurodynamic system consists of a population of recurrent neural networks (RNNs) at the lower level and a particle swarm optimization (PSO) algorithm with wavelet mutation at the upper level. The RNNs act as search agents carrying out precise local searches according to their neurodynamics and initial conditions. The PSO algorithm coordinates and guides the RNNs, updating their initial states toward the global optimal solution(s), and a wavelet mutation operator is added to enhance the exploration diversity of the PSO. Through iterative interaction and improvement of the RNNs' locally best solutions and the whole population's global best positions, the population-based neurodynamic system is almost surely able to achieve global optimality for the NMF problem; convergence of the group-best state to the global optimal solution with probability one is proved. The experimental results substantiate the efficacy and superiority of the CNO approach on bound-constrained global optimization with several benchmark nonconvex functions and on NMF-based clustering with benchmark data sets, in comparison with state-of-the-art algorithms.
11. Jia ZJ, Song YD. Barrier Function-Based Neural Adaptive Control With Locally Weighted Learning and Finite Neuron Self-Growing Strategy. IEEE Transactions on Neural Networks and Learning Systems 2017; 28:1439-1451. PMID: 28534753. DOI: 10.1109/tnnls.2016.2551294.
Abstract
This paper presents a new approach to constructing neural adaptive control for uncertain nonaffine systems. By integrating locally weighted learning with a barrier Lyapunov function (BLF), a novel control design method is presented that systematically addresses two critical issues in the neural network (NN) control field: how to fulfill the compact-set precondition for NN approximation, and how to use a varying rather than a fixed NN structure to improve the functionality of NN control. A BLF is exploited to ensure that the NN inputs remain bounded during the entire system operation. To account for system nonlinearities, a neuron self-growing strategy is proposed to guide the process of adding new neurons to the system, resulting in a self-adjustable NN structure with better learning capabilities. It is shown that the number of neurons needed to accomplish the control task is finite, and that better performance can be obtained with fewer neurons than with traditional methods. A salient feature of the proposed method is the continuity of the control action everywhere; furthermore, the resulting control action is smooth almost everywhere, except at the few time instants at which new neurons are added. A numerical example illustrates the effectiveness of the proposed approach.
12. Chen M, Tao G. Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone. IEEE Transactions on Cybernetics 2016; 46:1851-1862. PMID: 26340792. DOI: 10.1109/tcyb.2015.2456028.
Abstract
In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, the radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as the compounded disturbances, the corresponding disturbance observers are developed for their estimations. Based on the outputs of the RBFNN and the disturbance observer, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems by using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis and the satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator fault, and unknown external disturbances. Simulation results of a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.
13. A soft computing method to predict sludge volume index based on a recurrent self-organizing neural network. Appl Soft Comput 2016. DOI: 10.1016/j.asoc.2015.09.051.
14. Wang B, Wang L, Yin Y, Xu Y, Zhao W, Tang Y. An Improved Neural Network with Random Weights Using Backtracking Search Algorithm. Neural Process Lett 2015. DOI: 10.1007/s11063-015-9480-z.
15. Jaddi NS, Abdullah S, Hamdan AR. Optimization of neural network model using modified bat-inspired algorithm. Appl Soft Comput 2015. DOI: 10.1016/j.asoc.2015.08.002.
17. Impact of Noise on a Dynamical System: Prediction and Uncertainties from a Swarm-Optimized Neural Network. Computational Intelligence and Neuroscience 2015; 2015:145874. PMID: 26351449. PMCID: PMC4553171. DOI: 10.1155/2015/145874.
Abstract
An artificial neural network (ANN) optimized with particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to the Mackey-Glass chaotic time series for short-term prediction, x(t + 6). The prediction performance was evaluated and compared with other studies available in the literature. We also presented properties of the dynamical system via a study of the chaotic behavior of the predicted time series. Next, the hybrid ANN+PSO algorithm was complemented with a Gaussian stochastic procedure (called stochastic hybrid ANN+PSO) to obtain a new estimator of the predictions, which also allowed us to compute prediction uncertainties for noisy Mackey-Glass chaotic time series. We thus studied the impact of noise for several cases with a white noise level (σN) from 0.01 to 0.1.
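The Mackey-Glass benchmark used above can be generated with a simple Euler discretization of the delay equation dx/dt = a·x(t-τ)/(1 + x(t-τ)^10) - b·x(t), with the standard benchmark parameters a = 0.2, b = 0.1, τ = 17 (chaotic for τ > 16.8); the unit step size and constant initial history below are assumptions of this sketch, not taken from the cited paper.

```python
import numpy as np

def mackey_glass(n, tau=17, a=0.2, b=0.1, dt=1.0, x0=1.2):
    """Euler integration of the Mackey-Glass delay equation.
    Returns n samples after discarding the constant history."""
    x = np.full(n + tau, x0)                     # constant initial history
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + dt * (a * x[t - tau] / (1 + x[t - tau] ** 10)
                                - b * x[t])
    return x[tau:]

series = mackey_glass(500)
print(series.shape)   # (500,)
```

Training pairs for short-term prediction are then built by delay embedding, e.g. predicting series[t + 6] from a few lagged samples, as in the cited study.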
18. Dynamic neural networks for gas turbine engine degradation prediction, health monitoring and prognosis. Neural Comput Appl 2015. DOI: 10.1007/s00521-015-1990-0.
19. Yan Z, Wang J. Nonlinear model predictive control based on collective neurodynamic optimization. IEEE Transactions on Neural Networks and Learning Systems 2015; 26:840-850. PMID: 25608315. DOI: 10.1109/tnnls.2014.2387862.
Abstract
In general, nonlinear model predictive control (NMPC) entails solving a sequential global optimization problem with a nonconvex cost function or constraints. This paper presents a novel collective neurodynamic optimization approach to NMPC without linearization. Utilizing a group of recurrent neural networks (RNNs), the proposed collective neurodynamic optimization approach searches for optimal solutions to global optimization problems by emulating brainstorming. Each RNN is guaranteed to converge to a candidate solution by performing constrained local search. By exchanging information and iteratively improving the starting and restarting points of each RNN using the information of local and global best known solutions in a framework of particle swarm optimization, the group of RNNs is able to reach global optimal solutions to global optimization problems. The essence of the proposed collective neurodynamic optimization approach lies in the integration of capabilities of global search and precise local search. The simulation results of many cases are discussed to substantiate the effectiveness and the characteristics of the proposed approach.
20. Ben Nasr M, Chtourou M. Neural network control of nonlinear dynamic systems using hybrid algorithm. Appl Soft Comput 2014. DOI: 10.1016/j.asoc.2014.07.023.
21. Ling S, San P, Chan K, Leung F, Liu Y. An intelligent swarm based-wavelet neural network for affective mobile phone design. Neurocomputing 2014. DOI: 10.1016/j.neucom.2014.01.054.
22. Chu X, Hu M, Wu T, Weir JD, Lu Q. AHPS2: An optimizer using adaptive heterogeneous particle swarms. Inf Sci (N Y) 2014. DOI: 10.1016/j.ins.2014.04.043.
23. Liu Z, Mao C, Luo J, Zhang Y, Philip Chen C. A three-domain fuzzy wavelet network filter using fuzzy PSO for robotic assisted minimally invasive surgery. Knowl Based Syst 2014. DOI: 10.1016/j.knosys.2014.03.025.
24. Mnasser A, Bouani F, Ksouri M. Neural Networks Predictive Controller Using an Adaptive Control Rate. International Journal of System Dynamics Applications 2014. DOI: 10.4018/ijsda.2014070106.
Abstract
A model predictive control design for nonlinear systems based on artificial neural networks is discussed. Feedforward neural networks are used to describe the unknown nonlinear dynamics of the real system, and the backpropagation algorithm is used offline to train the neural network model. The optimal control actions are computed by solving a nonconvex optimization problem with the gradient method, in which the steepest-descent step is a sensitive factor for convergence. An adaptive variable control rate based on a Lyapunov function candidate is therefore proposed, and the asymptotic convergence of the predictive controller is established; the stability of the closed-loop system based on the neural model is proved. To demonstrate the robustness of the proposed predictive controller under set-point changes and load disturbances, a simulation example is considered. A comparison with the control performance achieved by a Levenberg-Marquardt method is also provided to illustrate the effectiveness of the proposed controller.
Affiliation(s)
- Ahmed Mnasser
- Faculty of Sciences of Tunis, Tunis El Manar University, Tunis, Tunisia
- Faouzi Bouani
- Analysis, Conception and Control of Systems Laboratory, National Engineering School of Tunis, Tunis El Manar University, Tunis, Tunisia
- Mekki Ksouri
- Analysis, Conception and Control of Systems Laboratory, National Engineering School of Tunis, Tunis El Manar University, Tunis, Tunisia
26. Yan Z, Wang J. Robust model predictive control of nonlinear systems with unmodeled dynamics and bounded uncertainties based on neural networks. IEEE Transactions on Neural Networks and Learning Systems 2014; 25:457-469. PMID: 24807443. DOI: 10.1109/tnnls.2013.2275948.
Abstract
This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed into an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue, combined with the unmodeled dynamics, is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with bounded uncertainties: the minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach.
27. Gao H, Kwong S, Yang J, Cao J. Particle swarm optimization based on intermediate disturbance strategy algorithm and its application in multi-threshold image segmentation. Inf Sci (N Y) 2013. DOI: 10.1016/j.ins.2013.07.005.
28. Han HG, Wu XL, Qiao JF. Real-time model predictive control using a self-organizing neural network. IEEE Transactions on Neural Networks and Learning Systems 2013; 24:1425-1436. PMID: 24808579. DOI: 10.1109/tnnls.2013.2261574.
Abstract
In this paper, real-time model predictive control (RT-MPC) based on a self-organizing radial basis function neural network (SORBFNN) is proposed for nonlinear systems. This RT-MPC retains the simplicity of conventional model predictive control design while efficiently handling the computational complexity. First, a SORBFNN with concurrent structure and parameter learning is developed as the predictive model of the nonlinear system; the model performance is significantly improved through the SORBFNN, and the modeling error is uniformly ultimately bounded. Second, a fast gradient method (GM) is adopted for the solution of the optimal control problem; this GM reduces the computational cost and yields a suboptimal solution for the RT-MPC online. The conditions for the stability analysis and the steady-state performance of the closed-loop system are then presented. Finally, numerical simulations reveal that the proposed control gives satisfactory tracking and disturbance-rejection performance, and experimental results demonstrate its effectiveness.
29. Convergence analysis of particle swarm optimizer and its improved algorithm based on velocity differential evolution. Computational Intelligence and Neuroscience 2013; 2013:384125. PMID: 24078806. PMCID: PMC3773995. DOI: 10.1155/2013/384125.
Abstract
This paper presents an analysis of the relationship between particle velocity and convergence of particle swarm optimization. Premature convergence is due to the decrease of particle velocity in the search space, which leads to total implosion and ultimately fitness stagnation of the swarm. An improved algorithm that introduces a velocity differential evolution (DE) strategy into hierarchical particle swarm optimization (H-PSO) is proposed to improve its performance. The DE is employed to regulate the particle velocity, rather than the traditional particle position, when the optimal result has not improved after several iterations. Benchmark functions are used to demonstrate the effectiveness of the proposed method.
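The velocity-DE strategy described above, i.e., applying a DE mutation/crossover step to particle velocities rather than positions when the swarm stagnates, can be sketched generically. This is an illustrative reading of the idea, not the paper's implementation; the DE/rand/1 scheme and the F and CR values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def de_refresh_velocities(vel, F=0.5, CR=0.9):
    """DE/rand/1 step applied to the velocities as a stagnation escape:
    each velocity receives a mutant built from three other particles'
    velocities, kept per dimension with probability CR."""
    n, d = vel.shape
    new = vel.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3,
                                replace=False)
        mutant = vel[r1] + F * (vel[r2] - vel[r3])
        mask = rng.random(d) < CR             # binomial crossover
        new[i, mask] = mutant[mask]
    return new

vel = rng.normal(size=(20, 5))                # stagnant swarm's velocities
vel2 = de_refresh_velocities(vel)
print(vel2.shape)                             # (20, 5)
```

Injecting diversity into the velocities directly counters the velocity collapse that the abstract identifies as the cause of premature convergence, without discarding the positions the swarm has already reached.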
30. Han HG, Wang LD, Qiao JF. Efficient self-organizing multilayer neural network for nonlinear system modeling. Neural Netw 2013; 43:22-32. DOI: 10.1016/j.neunet.2013.01.015.
31. Accurate Prediction of Coronary Artery Disease Using Reliable Diagnosis System. J Med Syst 2012; 36:3353-73. DOI: 10.1007/s10916-012-9828-0.
32. Zhang Y, Chai T, Li Z, Yang C. Modeling and monitoring of dynamic processes. IEEE Transactions on Neural Networks and Learning Systems 2012; 23:277-284. PMID: 24808506. DOI: 10.1109/tnnls.2011.2179669.
Abstract
In this paper, a new online monitoring approach is proposed for handling the dynamic problem in industrial batch processes. Compared to conventional methods, its contributions are as follows: (1) multiple modes are separated correctly, since cross-mode correlations are considered and the common information is extracted; (2) the expensive computing load is avoided, since only the mode-specific information is calculated when a mode is monitored online; and (3) two different subspaces are separated, and the common and specific subspace models are built and analyzed, respectively. The monitoring is carried out in the subspaces, and the corresponding confidence regions are constructed according to their respective models.
33. Zeng Z, Zheng WX. Multistability of neural networks with time-varying delays and concave-convex characteristics. IEEE Transactions on Neural Networks and Learning Systems 2012; 23:293-305. PMID: 24808508. DOI: 10.1109/tnnls.2011.2179311.
Abstract
In this paper, the stability of multiple equilibria of neural networks with time-varying delays and concave-convex characteristics is formulated and studied. Some sufficient conditions are obtained to ensure that an n-neuron neural network with concave-convex characteristics has a fixed point located in an appointed region. By means of an appropriate partition of the n-dimensional state space, when the nonlinear activation functions of an n-neuron neural network are concave or convex in 2k+2m-1 intervals, the network can have (2k+2m-1)^n equilibrium points. This result can be applied to multiobjective optimal control and associative memory. In particular, several succinct criteria are given to ascertain the multistability of cellular neural networks. These stability conditions improve and extend existing stability results in the literature. A numerical example is given to illustrate the theoretical findings via computer simulations.