1
Kafiyan-Safari M, Rouhani M. Adaptive one-pass passive-aggressive radial basis function for classification problems. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.03.047

2

3
A Two-Phase Evolutionary Method to Train RBF Networks. Applied Sciences (Basel) 2022. DOI: 10.3390/app12052439
Abstract
This article proposes a two-phase hybrid method for training RBF neural networks on classification and regression problems. In the first phase, a range for the critical parameters of the RBF network is estimated; in the second phase, a genetic algorithm searches within that range for the best RBF network for the underlying problem. The method is compared against other RBF training methods on a wide series of classification and regression problems from the relevant literature, and the results are reported.

4
Tsoulos IG, Anastasopoulos N, Ntritsos G, Tzallas A. Train RBF networks with a hybrid genetic algorithm. Evolutionary Intelligence 2021. DOI: 10.1007/s12065-021-00654-2

5

6
Prediction of Flight Status of Logistics UAVs Based on an Information Entropy Radial Basis Function Neural Network. Sensors 2021; 21(11):3651. PMID: 34073923; PMCID: PMC8197341; DOI: 10.3390/s21113651
Abstract
To address the short battery life, low payload, and unmeasured load ratio of logistics Unmanned Aerial Vehicles (UAVs), a Radial Basis Function (RBF) neural network was trained on logistics-UAV flight data from the Internet of Things to predict flight status. Because few input samples are available and the RBF network does not converge accurately under these conditions, a dynamic adjustment method for the RBF network structure based on information entropy is proposed. The method calculates the information entropy of hidden-layer and output-layer neurons, quantifying the output information of hidden-layer neurons and the interaction information between hidden-layer and output-layer neurons. The network structure is then designed and optimized by adding hidden neurons or disconnecting unnecessary connections according to the connection strength between neurons, and the steepest-descent learning algorithm corrects the network parameters to ensure convergence accuracy. Regression predictions of logistics-UAV flight status show that the proposed information-entropy-based RBF neural network approximates nonlinear systems well.

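The entropy-based structural adjustment described in the abstract above can be illustrated with a minimal sketch (all names hypothetical; the paper's exact growth and pruning criteria are not reproduced here): compute Gaussian hidden-layer activations, estimate each hidden neuron's output entropy from a histogram, and flag near-constant (low-entropy) neurons as pruning candidates.

```python
import numpy as np

def rbf_hidden(X, centers, width):
    """Gaussian RBF activations: one row per sample, one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def output_entropy(activations, bins=10):
    """Shannon entropy (bits) of each hidden neuron's output distribution,
    estimated from a fixed-range histogram over [0, 1]."""
    ent = []
    for col in activations.T:
        p, _ = np.histogram(col, bins=bins, range=(0.0, 1.0))
        p = p / p.sum()
        p = p[p > 0]
        ent.append(-(p * np.log2(p)).sum())
    return np.array(ent)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
centers = np.array([[0.0, 0.0], [5.0, 5.0]])  # second center lies far from all data
H = rbf_hidden(X, centers, width=0.5)
ent = output_entropy(H)
# The far-away neuron outputs ~0 for every sample, so its output carries
# almost no information (near-zero entropy) -- a pruning candidate.
prune_mask = ent < 0.1
```

This only covers the pruning side; the paper also grows hidden neurons and uses interaction information between layers, which a full implementation would need to add.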
7
Ferdaus MM, Pratama M, Anavatti SG, Garratt MA. Online identification of a rotary wing Unmanned Aerial Vehicle from data streams. Appl Soft Comput 2019. DOI: 10.1016/j.asoc.2018.12.013

8
Pratama M, Lughofer E, Er MJ, Anavatti S, Lim CP. Data driven modelling based on Recurrent Interval-Valued Metacognitive Scaffolding Fuzzy Neural Network. Neurocomputing 2017. DOI: 10.1016/j.neucom.2016.10.093

9
Prieto A, Prieto B, Ortigosa EM, Ros E, Pelayo F, Ortega J, Rojas I. Neural networks: An overview of early research, current frameworks and new challenges. Neurocomputing 2016. DOI: 10.1016/j.neucom.2016.06.014

10
Wen H, Xie W, Pei J. A Structure-Adaptive Hybrid RBF-BP Classifier with an Optimized Learning Strategy. PLoS One 2016; 11:e0164719. PMID: 27792737; PMCID: PMC5085025; DOI: 10.1371/journal.pone.0164719
Abstract
This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP cascades a structure-adaptive RBF network with a BP network: the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space, the adaptive RBF network performs nonlinear kernel mapping, and the BP network performs nonlinear classification. The optimized learning strategy proceeds in three steps. First, a potential function is introduced into the training sample space to adaptively determine the number of initial RBF hidden nodes and their parameters, and a repulsive force between heterogeneous samples is designed to further optimize each generated RBF hidden node; the resulting structure-adaptive RBF network adaptively maps the sample space nonlinearly. Then, the number of adaptively generated RBF hidden nodes determines the number of subsequent BP input nodes, and the overall SAHRBF-BP classifier is built. Finally, different training sample sets are used to train the BP network parameters in SAHRBF-BP. Experiments on different data sets show the superiority of SAHRBF-BP over other algorithms; in particular, on most low-dimensional data sets with large numbers of samples, SAHRBF-BP outperforms other algorithms for training single-hidden-layer feedforward networks (SLFNs).
Affiliation(s)
- Hui Wen, ATR Key Lab of National Defense, Shenzhen University, Shenzhen 518060, China
- Weixin Xie, ATR Key Lab of National Defense, Shenzhen University, Shenzhen 518060, China
- Jihong Pei, ATR Key Lab of National Defense, Shenzhen University, Shenzhen 518060, China

11
Pratama M, Lu J, Lughofer E, Zhang G, Anavatti S. Scaffolding type-2 classifier for incremental learning under concept drifts. Neurocomputing 2016. DOI: 10.1016/j.neucom.2016.01.049

12
Han HG, Zhang L, Hou Y, Qiao JF. Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network. IEEE Transactions on Neural Networks and Learning Systems 2016; 27:402-415. PMID: 26336152; DOI: 10.1109/tnnls.2015.2465174
Abstract
A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.

13
A high performing meta-heuristic for training support vector regression in performance forecasting of supply chain. Neural Comput Appl 2015. DOI: 10.1007/s00521-015-2015-8

14
Weruaga L, Vía J. Sparse multivariate Gaussian mixture regression. IEEE Transactions on Neural Networks and Learning Systems 2015; 26:1098-1108. PMID: 25029490; DOI: 10.1109/tnnls.2014.2334596
Abstract
Fitting a multivariate Gaussian mixture to data is an attractive as well as challenging problem, especially when sparsity in the solution is demanded. Achieving this objective requires the concurrent update of all parameters (weights, centers, and precisions) of all multivariate Gaussian functions during the learning process. Such is the focus of this paper, which presents a novel method founded on the minimization of the error of the generalized logarithmic utility function (GLUF). This choice, which allows us to move smoothly from the mean square error (MSE) criterion to one based on the logarithmic error, yields an optimization problem that resembles a locally convex problem and can be solved with a quasi-Newton method. The GLUF framework also facilitates a comparative study between both extremes, concluding that the classical MSE optimization is not the most adequate for the task. The performance of the proposed technique is demonstrated on simulated as well as realistic scenarios.

15
Yu H, Reiner PD, Xie T, Bartczak T, Wilamowski BM. An incremental design of radial basis function networks. IEEE Transactions on Neural Networks and Learning Systems 2014; 25:1793-1803. PMID: 25203995; DOI: 10.1109/tnnls.2013.2295813
Abstract
This paper proposes an offline algorithm for incrementally constructing and training radial basis function (RBF) networks. In each iteration of the error correction (ErrCor) algorithm, one RBF unit is added to fit and then eliminate the highest peak (or lowest valley) in the error surface. This process is repeated until a desired error level is reached. Experimental results on real-world data sets show that the ErrCor algorithm designs very compact RBF networks compared with the other investigated algorithms; this compactness greatly reduces the computation time of the trained networks. Several benchmark tests, such as the duplicate-patterns test and the two-spiral problem, demonstrate the robustness of the ErrCor algorithm.

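The peak-driven idea behind ErrCor as summarized above can be sketched minimally (illustrative only; the paper additionally tunes widths and weights with second-order updates, which is omitted here): place each new Gaussian unit at the sample with the largest absolute residual, then refit the output weights by least squares.

```python
import numpy as np

def errcor_fit(X, y, width=0.3, max_units=10, tol=1e-3):
    """Greedy incremental 1-D RBF fit: each pass adds one Gaussian unit at the
    highest peak (or lowest valley) of the current error surface, then refits
    all output weights by linear least squares."""
    centers = []
    residual = y.copy()
    w = np.zeros(0)
    for _ in range(max_units):
        if np.abs(residual).max() < tol:
            break
        # new center at the sample with the largest absolute residual
        centers.append(X[np.abs(residual).argmax()])
        C = np.array(centers)
        Phi = np.exp(-((X[:, None] - C[None, :]) ** 2) / (2 * width ** 2))
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        residual = y - Phi @ w
    return np.array(centers), w, residual

X = np.linspace(-1, 1, 100)
y = np.sin(3 * X)
centers, w, residual = errcor_fit(X, y, width=0.3, max_units=8)
```

Because the output layer is linear in the weights, the full refit after each added unit is a single least-squares solve; the second-order refinement of centers and widths used in the paper would shrink the residual further with fewer units.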
16
Annealed cooperative–competitive learning of Mahalanobis-NRBF neural modules for nonlinear and chaotic differential function approximation. Neurocomputing 2014. DOI: 10.1016/j.neucom.2014.01.031

17
Hsu CF. A self-evolving functional-linked wavelet neural network for control applications. Appl Soft Comput 2013. DOI: 10.1016/j.asoc.2013.06.012

18
Han HG, Wu XL, Qiao JF. Real-time model predictive control using a self-organizing neural network. IEEE Transactions on Neural Networks and Learning Systems 2013; 24:1425-1436. PMID: 24808579; DOI: 10.1109/tnnls.2013.2261574
Abstract
In this paper, a real-time model predictive control (RT-MPC) scheme based on a self-organizing radial basis function neural network (SORBFNN) is proposed for nonlinear systems. The RT-MPC parallels standard model predictive control design in its simplicity while dealing efficiently with the computational complexity. First, a SORBFNN with concurrent structure and parameter learning is developed as the predictive model of the nonlinear systems; the model performance is significantly improved through the SORBFNN, and the modeling error is uniformly ultimately bounded. Second, a fast gradient method (GM) is enhanced for solving the optimal control problem; the proposed GM reduces the computational cost and suboptimizes the RT-MPC online. Then, conditions for the stability and steady-state performance of the closed-loop systems are presented. Finally, numerical simulations show that the proposed control gives satisfactory tracking and disturbance-rejection performance, and experimental results demonstrate its effectiveness.

19
Structure optimization of BiLinear Recurrent Neural Networks and its application to Ethernet network traffic prediction. Inf Sci (N Y) 2013. DOI: 10.1016/j.ins.2009.10.005

20
Vuković N, Miljković Z. A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation. Neural Netw 2013; 46:210-26. PMID: 23811384; DOI: 10.1016/j.neunet.2013.06.004
Abstract
A radial basis function (RBF) neural network is constructed from a certain number of RBF neurons, and these networks are among the neural networks most used for modeling various nonlinear problems in engineering. A conventional RBF neuron is usually based on a Gaussian activation function with a single width, which restricts its ability to model complex nonlinear problems. To overcome the limitation of a single scale, this paper presents a neural network with a similar yet different activation function: the hyper basis function (HBF). The HBF allows different scaling of the input dimensions, providing better generalization on complex nonlinear problems in engineering practice. It generalizes the Gaussian neuron by applying a Mahalanobis-like distance between the input training sample and the prototype vector. Compared with the RBF, the HBF neuron has more parameters to optimize, but an HBF network needs fewer neurons to memorize the relationship between the input and output sets while achieving good generalization. Recent results on HBF network performance have shown, however, that an optimal way of constructing this type of network is needed. This paper addresses that issue by modifying a sequential learning algorithm for HBF networks that exploits the concept of a neuron's significance and allows growing and pruning of HBF neurons during the learning process. An extensive experimental study shows that an HBF network trained with the developed learning algorithm achieves lower prediction error and a more compact network.
Affiliation(s)
- Najdan Vuković, University of Belgrade - Faculty of Mechanical Engineering, Innovation Center, Kraljice Marije 16, 11120 Belgrade 35, Serbia

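The distinction drawn above between an RBF neuron (one shared width) and an HBF neuron (Mahalanobis-like distance, a separate scale per input dimension) can be sketched as follows (illustrative only; the growing/pruning significance criterion from the paper is not shown):

```python
import numpy as np

def rbf_neuron(x, c, sigma):
    """Standard Gaussian RBF neuron: a single width shared by all dimensions."""
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

def hbf_neuron(x, c, inv_cov):
    """Hyper basis function neuron: Mahalanobis-like distance via a precision
    matrix, so each input dimension (or direction) gets its own scale."""
    d = x - c
    return np.exp(-0.5 * d @ inv_cov @ d)

x = np.array([1.0, 0.1])
c = np.zeros(2)
# Diagonal precision: dimension 0 is "wide" (small precision),
# dimension 1 is "narrow" (large precision).
inv_cov = np.diag([0.1, 100.0])
a_rbf = rbf_neuron(x, c, sigma=1.0)
a_hbf = hbf_neuron(x, c, inv_cov)
```

With the identity precision matrix the HBF neuron reduces exactly to the unit-width RBF neuron, which is the sense in which the HBF is a generalization.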
21
Hsu CF, Lin CM, Yeh RG. Supervisory adaptive dynamic RBF-based neural-fuzzy control system design for unknown nonlinear systems. Appl Soft Comput 2013. DOI: 10.1016/j.asoc.2012.12.028

22
Han HG, Qiao JF. A structure optimisation algorithm for feedforward neural network construction. Neurocomputing 2013. DOI: 10.1016/j.neucom.2012.07.023

23
Xie T, Yu H, Hewlett J, Rózycki P, Wilamowski B. Fast and efficient second-order method for training radial basis function networks. IEEE Transactions on Neural Networks and Learning Systems 2012; 23:609-619. PMID: 24805044; DOI: 10.1109/tnnls.2012.2185059
Abstract
This paper proposes an improved second-order (ISO) algorithm for training radial basis function (RBF) networks. Besides the traditional parameters (centers, widths, and output weights), the input weights on the connections between the input layer and the hidden layer are also adjusted during training; increasing the number of adjustable dimensions yields more accurate results. Initial centers are chosen from the training patterns, and the other parameters are generated randomly within a limited range. Exploiting the fast convergence and powerful search ability of second-order algorithms, the proposed ISO algorithm normally reaches a smaller training/testing error with far fewer RBF units. During computation, the quasi-Hessian matrix and gradient vector are accumulated as sums of related submatrices and subvectors, respectively; only one Jacobian row is stored and multiplied at a time, instead of storing and multiplying the entire Jacobian matrix. This memory reduction speeds up computation and allows training on problems with an essentially unlimited number of patterns. Several practical discrete and continuous classification problems are used to test the properties of the proposed ISO training algorithm.

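The memory-saving accumulation described in the abstract above (quasi-Hessian and gradient built one Jacobian row at a time, so the full Jacobian is never stored) can be sketched generically (hypothetical helper names; this is the standard row-wise Gauss-Newton accumulation, not the paper's exact update rule):

```python
import numpy as np

def accumulate_normal_equations(J_rows, errors, n_params):
    """Quasi-Hessian Q = J^T J and gradient vector g = J^T e accumulated as
    sums over single Jacobian rows -- only one row is in memory at a time."""
    Q = np.zeros((n_params, n_params))
    g = np.zeros(n_params)
    for j_row, e in zip(J_rows, errors):
        Q += np.outer(j_row, j_row)   # submatrix contributed by this pattern
        g += j_row * e                # subvector contributed by this pattern
    return Q, g

# Check the row-wise accumulation against the full-matrix computation.
rng = np.random.default_rng(0)
J = rng.standard_normal((50, 4))   # 50 training patterns, 4 parameters
e = rng.standard_normal(50)
Q, g = accumulate_normal_equations(J, e, n_params=4)

# A damped (Levenberg-Marquardt-style) system solve using only Q and g;
# the sign of the update depends on the error convention used.
mu = 1e-3
step = np.linalg.solve(Q + mu * np.eye(4), g)
```

Because `Q` and `g` fully determine the second-order step, the per-pattern loop can stream through arbitrarily many patterns with memory independent of the data set size, which is the property the abstract highlights.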
24

25
|
Ya-Jun Qu, Bao-Gang Hu. Generalized Constraint Neural Network Regression Model Subject to Linear Priors. ACTA ACUST UNITED AC 2011; 22:2447-59. [DOI: 10.1109/tnn.2011.2167348] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
26
Apolloni B, Bassis S, Pagani E, Rossi GP, Valerio L. Mobility timing for agent communities, a cue for advanced connectionist systems. IEEE Transactions on Neural Networks 2011; 22:2032-49. PMID: 22049366; DOI: 10.1109/tnn.2011.2168536
Abstract
We introduce a wait-and-chase scheme that models the contact times between moving agents within a connectionist construct. The idea that elementary processors move within a network to get a proper position is borne out both by biological neurons in brain morphogenesis and by agents within social networks. From the former, we take inspiration to devise a medium-term project for new artificial neural network training procedures where mobile neurons exchange data only when they are close to one another in a proper space (are in contact). From the latter, we accumulate experience with mobility tracks. We focus on the preliminary step of characterizing the elapsed time between neuron contacts, which results from a spatial process fitting in the family of random processes with memory, where chasing neurons are stochastically driven by the goal of hitting target neurons. Thus, we add an unprecedented mobility model to the literature in the field, introducing a distribution law of the intercontact times that merges features of both the negative exponential and the Pareto distribution laws. We give a constructive description and implementation of our model, as well as a short analytical form whose parameters are suitably estimated in terms of confidence intervals from experimental data. Numerical experiments show the model and related inference tools to be sufficiently robust to cope with two main requisites for its exploitation in a neural network: the nonindependence of the observed intercontact times and the feasibility of the model inversion problem to infer suitable mobility parameters.
Affiliation(s)
- Bruno Apolloni, Department of Computer Science, University of Milan, Milan 20122, Italy

27
Hoang Xuan Huan, Dang Thi Thu Hien, Huynh Huu Tue. Efficient Algorithm for Training Interpolation RBF Networks With Equally Spaced Nodes. IEEE Transactions on Neural Networks 2011; 22:982-8. DOI: 10.1109/tnn.2011.2120619

28
Han HG, Chen QL, Qiao JF. An efficient self-organizing RBF neural network for water quality prediction. Neural Netw 2011; 24:717-25. PMID: 21612889; DOI: 10.1016/j.neunet.2011.04.006
Abstract
This paper presents a flexible structure Radial Basis Function (RBF) neural network (FS-RBFNN) and its application to water quality prediction. The FS-RBFNN can vary its structure dynamically in order to maintain the prediction accuracy. The hidden neurons in the RBF neural network can be added or removed online based on the neuron activity and mutual information (MI), to achieve the appropriate network complexity and maintain overall computational efficiency. The convergence of the algorithm is analyzed in both the dynamic process phase and the phase following the modification of the structure. The proposed FS-RBFNN has been tested and compared to other algorithms by applying it to the problem of identifying a nonlinear dynamic system. Experimental results show that the FS-RBFNN can be used to design an RBF structure which has fewer hidden neurons; the training time is also much faster. The algorithm is applied for predicting water quality in the wastewater treatment process. The results demonstrate its effectiveness.
Affiliation(s)
- Hong-Gui Han, College of Electronic and Control Engineering, Beijing University of Technology, Beijing, China

29
Mahdi RN, Rouchka EC. Reduced HyperBF Networks: Regularization by Explicit Complexity Reduction and Scaled Rprop-Based Training. IEEE Transactions on Neural Networks 2011; 22:673-86. DOI: 10.1109/tnn.2011.2109736