51. Pedrycz W, Kwak KC. Linguistic models as a framework of user-centric system modeling. IEEE Trans Syst Man Cybern A. 2006. DOI: 10.1109/tsmca.2005.855755.

52. Yang ZR.
Abstract
A novel radial basis function neural network for discriminant analysis is presented in this paper. In contrast to much other research, this work focuses on exploiting the weight structure of radial basis function neural networks using the Bayesian method. The expectation is that the performance of a radial basis function neural network improves when its weight structure is well explored. As the weight structure of a radial basis function neural network is commonly unknown, the Bayesian method is used here to study this a priori structure. Two weight structures are investigated: a single-Gaussian structure and a two-Gaussian structure. An expectation-maximization algorithm is used to estimate the weights. Simulation results showed that the proposed radial basis function neural network with a two-Gaussian weight structure outperformed the other algorithms.
Affiliation(s)
- Zheng Rong Yang, Department of Computer Science, University of Exeter, Devon EX4 4QF, UK

53. Li S, Chen Q, Huang GB. Dynamic temperature modeling of continuous annealing furnace using GGAP-RBF neural network. Neurocomputing. 2006. DOI: 10.1016/j.neucom.2005.01.008.

54.

55. González J, Rojas I, Pomares H, Rojas F, Palomares JM. Multi-objective evolution of fuzzy systems. Soft Comput. 2005. DOI: 10.1007/s00500-005-0003-0.

56. Low KH, Leow WK, Ang MH. An ensemble of cooperative extended Kohonen maps for complex robot motion tasks. Neural Comput. 2005. DOI: 10.1162/0899766053630378.
Abstract
Self-organizing feature maps such as extended Kohonen maps (EKMs) have been very successful at learning sensorimotor control for mobile robot tasks. This letter presents a new ensemble approach, cooperative EKMs with indirect mapping, to achieve complex robot motion. An indirect-mapping EKM self-organizes to map from the sensory input space to the motor control space indirectly via a control parameter space. Quantitative evaluation reveals that indirect mapping can provide finer, smoother, and more efficient motion control than does direct mapping by operating in a continuous, rather than discrete, motor control space. It is also shown to outperform basis function neural networks. Furthermore, training its control parameters with recursive least squares enables faster convergence and better performance compared to gradient descent. The cooperation and competition of multiple self-organized EKMs allow a nonholonomic mobile robot to negotiate unforeseen, concave, closely spaced, and dynamic obstacles. Qualitative and quantitative comparisons with neural network ensembles employing weighted sum reveal that our method can achieve more sophisticated motion tasks even though the weighted-sum ensemble approach also operates in continuous motor control space.
Affiliation(s)
- Kian Hsiang Low, Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213-3890, USA
- Wee Kheng Leow, Department of Computer Science, National University of Singapore, Singapore 117543, Singapore
- Marcelo H. Ang, Department of Mechanical Engineering, National University of Singapore, Singapore 119260, Singapore

57.

58.

59. Huang GB, Saratchandran P, Sundararajan N. A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation. IEEE Trans Neural Netw. 2005;16:57-67. PMID: 15732389. DOI: 10.1109/tnn.2004.836241.
Abstract
This paper presents a new sequential learning algorithm for radial basis function (RBF) networks, referred to as the generalized growing and pruning algorithm for RBF (GGAP-RBF). The paper first introduces the concept of significance for the hidden neurons and then uses it in the learning algorithm to realize parsimonious networks. The growing and pruning strategy of GGAP-RBF is based on linking the required learning accuracy with the significance of the nearest or intentionally added new neuron. The significance of a neuron is a measure of its average information content. The GGAP-RBF algorithm can be used with arbitrary sampling densities for the training samples and is derived from a rigorous statistical point of view. Simulation results for benchmark problems in function approximation show that GGAP-RBF outperforms several other sequential learning algorithms in terms of learning speed, network size, and generalization performance, regardless of the sampling density function of the training data.
Affiliation(s)
- Guang-Bin Huang, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798

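The significance measure behind GGAP-RBF — a hidden unit's contribution to the network output, averaged over the input data — can be sketched as follows. This is an illustrative reconstruction, not the paper's exact derivation: significance is estimated empirically from a data sample rather than analytically from the sampling density, and the names (`neuron_significance`, `prune_insignificant`, `e_min`) are placeholders.

```python
import numpy as np

def neuron_significance(weights, centers, widths, X):
    """Average absolute contribution of each Gaussian hidden unit to the
    network output over the sample X (a stand-in for the statistical
    expectation used in the paper)."""
    # Squared distances between every input row and every center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    contrib = weights * np.exp(-d2 / (2.0 * widths ** 2))  # (n_samples, n_units)
    return np.abs(contrib).mean(axis=0)

def prune_insignificant(weights, centers, widths, X, e_min=1e-2):
    """Drop units whose significance falls below the accuracy target e_min."""
    sig = neuron_significance(weights, centers, widths, X)
    keep = sig >= e_min
    return weights[keep], centers[keep], widths[keep]
```

In the full algorithm the same significance criterion also gates growth (a candidate neuron is added only if it would be significant); the sketch above shows only the pruning side.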
60. Karayiannis NB, Balasubramanian M, Malki HA. Short-term electric power load forecasting based on cosine radial basis function neural networks: an experimental evaluation. Int J Intell Syst. 2005. DOI: 10.1002/int.20084.

61.

62. Huang GB, Saratchandran P, Sundararajan N. An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks. IEEE Trans Syst Man Cybern B. 2004;34:2284-92. PMID: 15619929. DOI: 10.1109/tsmcb.2004.834428.
Abstract
This paper presents a simple sequential growing and pruning algorithm for radial basis function (RBF) networks. The algorithm referred to as growing and pruning (GAP)-RBF uses the concept of "Significance" of a neuron and links it to the learning accuracy. "Significance" of a neuron is defined as its contribution to the network output averaged over all the input data received so far. Using a piecewise-linear approximation for the Gaussian function, a simple and efficient way of computing this significance has been derived for uniformly distributed input data. In the GAP-RBF algorithm, the growing and pruning are based on the significance of the "nearest" neuron. In this paper, the performance of the GAP-RBF learning algorithm is compared with other well-known sequential learning algorithms like RAN, RANEKF, and MRAN on an artificial problem with uniform input distribution and three real-world nonuniform, higher dimensional benchmark problems. The results indicate that the GAP-RBF algorithm can provide comparable generalization performance with a considerably reduced network size and training time.
Affiliation(s)
- Guang-Bin Huang, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798

63. Using K-Winner Machines for domain analysis. Neurocomputing. 2004. DOI: 10.1016/j.neucom.2004.05.003.

64.

65. Polikar R, Udpa L, Udpa S, Honavar V. An incremental learning algorithm with confidence estimation for automated identification of NDE signals. IEEE Trans Ultrason Ferroelectr Freq Control. 2004;51:990-1001. PMID: 15344404. DOI: 10.1109/tuffc.2004.1324403.
Abstract
An incremental learning algorithm is introduced for learning new information from additional data that may later become available, after a classifier has already been trained using a previously available database. The proposed algorithm is capable of incrementally learning new information without forgetting previously acquired knowledge and without requiring access to the original database, even when new data include examples of previously unseen classes. Scenarios requiring such a learning algorithm are encountered often in nondestructive evaluation (NDE) in which large volumes of data are collected in batches over a period of time, and new defect types may become available in subsequent databases. The algorithm, named Learn++, takes advantage of synergistic generalization performance of an ensemble of classifiers in which each classifier is trained with a strategically chosen subset of the training databases that subsequently become available. The ensemble of classifiers then is combined through a weighted majority voting procedure. Learn++ is independent of the specific classifier(s) comprising the ensemble, and hence may be used with any supervised learning algorithm. The voting procedure also allows Learn++ to estimate the confidence in its own decision. We present the algorithm and its promising results on two separate ultrasonic weld inspection applications.
Affiliation(s)
- Robi Polikar, Department of Electrical and Computer Engineering, Rowan University, Glassboro, NJ 08028, USA

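The weighted majority voting step that combines the Learn++ ensemble can be sketched roughly as follows. The function name, the log-weight convention, and the confidence formula are illustrative assumptions, not the paper's exact formulation; the key idea is that each classifier's vote is scaled by a weight earned during training, and the winner's share of the total vote mass doubles as a confidence estimate.

```python
import numpy as np

def weighted_majority_vote(predictions, log_weights, n_classes):
    """Combine ensemble members by weighted majority voting.

    predictions: (n_classifiers, n_samples) integer class labels
    log_weights: (n_classifiers,) per-classifier voting weights
    """
    n_samples = predictions.shape[1]
    votes = np.zeros((n_samples, n_classes))
    # Each classifier adds its weight to the class it predicts.
    for preds, w in zip(predictions, log_weights):
        votes[np.arange(n_samples), preds] += w
    winner = votes.argmax(axis=1)
    # The winner's fraction of the total vote mass serves as a
    # self-estimated confidence, mirroring the idea in the abstract.
    confidence = votes.max(axis=1) / votes.sum(axis=1)
    return winner, confidence
```

Because the combiner only consumes predicted labels and weights, it is independent of the base classifier type, which is the property the abstract emphasizes.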
66.

67. Gonzalez J, Rojas I, Ortega J, Pomares H, Fernandez J, Diaz A. Multiobjective evolutionary optimization of the size, shape, and position parameters of radial basis function networks for function approximation. IEEE Trans Neural Netw. 2003;14:1478-95. DOI: 10.1109/tnn.2003.820657.

68. Randolph-Gips MM, Karayiannis NB. Reformulated radial basis function neural networks with adjustable weighted norms. Int J Intell Syst. 2003. DOI: 10.1002/int.10133.

69. Karayiannis N, Randolph-Gips M. On the construction and training of reformulated radial basis function neural networks. IEEE Trans Neural Netw. 2003;14:835-46. DOI: 10.1109/tnn.2003.813841.

70.

71. Papatla P, Zahedi MF, Zekic-Susac M. Leveraging the strengths of choice models and neural networks: a multiproduct comparative analysis. Decis Sci. 2002. DOI: 10.1111/j.1540-5915.2002.tb01651.x.

72. Baraldi A, Alpaydin E. Constructive feedforward ART clustering networks. II. IEEE Trans Neural Netw. 2002;13:662-77. DOI: 10.1109/tnn.2002.1000131.

73. Er MJ, Wu S, Lu J, Toh HL. Face recognition with radial basis function (RBF) neural networks. IEEE Trans Neural Netw. 2002;13:697-710. DOI: 10.1109/tnn.2002.1000134.

74. Rojas I, Pomares H, Bernier J, Ortega J, Pino B, Pelayo F, Prieto A. Time series analysis using normalized PG-RBF network with regression weights. Neurocomputing. 2002. DOI: 10.1016/s0925-2312(01)00338-1.

75. Gonzalez J, Rojas H, Ortega J, Prieto A. A new clustering technique for function approximation. IEEE Trans Neural Netw. 2002;13:132-42. DOI: 10.1109/72.977289.

76. de Castro LN, Von Zuben FJ. Automatic determination of radial basis functions: an immunity-based approach. Int J Neural Syst. 2001;11:523-35. PMID: 11852437. DOI: 10.1142/s0129065701000941.
Abstract
The appropriate operation of a radial basis function (RBF) neural network depends mainly upon an adequate choice of the parameters of its basis functions. The simplest approach to training an RBF network is to assume fixed radial basis functions defining the activation of the hidden units. Once the RBF parameters are fixed, the optimal set of output weights can be determined straightforwardly by using a linear least squares algorithm, which generally reduces the learning time compared to determining all RBF network parameters by supervised learning. The main drawback of this strategy is the requirement of an efficient algorithm to determine the number, position, and dispersion of the RBFs. The approach proposed here is inspired by models derived from the vertebrate immune system, which will be shown to perform unsupervised cluster analysis. The algorithm is introduced and its performance is compared to that of random and k-means center selection procedures and to other results from the literature. By automatically defining the number of RBF centers, their positions, and their dispersions, the proposed method leads to parsimonious solutions. Simulation results are reported for regression and classification problems.
Affiliation(s)
- L. N. de Castro, Department of Computer Engineering and Industrial Automation, UNICAMP, Campinas, São Paulo, Caixa Postal 6101, CEP 13081-970, Brazil

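The two-stage training strategy this abstract describes — fix the basis functions, then solve the output weights as a linear least-squares problem — can be sketched as follows. This is a minimal illustration with hypothetical names (`rbf_design_matrix`, `fit_output_weights`); the immune-inspired center selection itself is not reproduced here, so the centers are simply taken as given.

```python
import numpy as np

def rbf_design_matrix(X, centers, widths):
    """Gaussian activation of each hidden unit for each input row."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))

def fit_output_weights(X, y, centers, widths):
    """With centers and widths fixed, the output weights are the
    solution of a linear least-squares problem, as the abstract notes."""
    Phi = rbf_design_matrix(X, centers, widths)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(X, centers, widths, w):
    return rbf_design_matrix(X, centers, widths) @ w
```

For example, with centers spread along a sine curve's domain, `fit_output_weights` recovers a close approximation in one linear solve, with no iterative weight training.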
77. Hamker FH. Life-long learning cell structures--continuously learning without catastrophic interference. Neural Netw. 2001;14:551-73. PMID: 11411637. DOI: 10.1016/s0893-6080(01)00018-1.
Abstract
As an extension of on-line learning, life-long learning challenges a system that is exposed to patterns from a changing environment during its entire lifespan. An autonomous system should not only integrate new knowledge on-line into its memory, but also preserve the knowledge learned through previous interactions. Thus, life-long learning implies the fundamental stability-plasticity dilemma, which addresses the problem of learning new patterns without forgetting old prototype patterns. We propose an extension to the known Cell Structures, growing radial-basis-function-like networks, that enables them to learn the number of nodes needed to solve the current task and to dynamically adapt the learning rate of each node separately. As shown in several simulations, the resulting Life-long Learning Cell Structures possess the major characteristics needed to cope with the stability-plasticity dilemma.
Affiliation(s)
- F. H. Hamker, Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA

78. Yen G, Meesad P. An effective neuro-fuzzy paradigm for machinery condition health monitoring. IEEE Trans Syst Man Cybern B. 2001;31:523-36. DOI: 10.1109/3477.938258.

79. Schilling R, Carroll J, Al-Ajlouni A. Approximation of nonlinear systems with radial basis function neural networks. IEEE Trans Neural Netw. 2001;12:1-15. DOI: 10.1109/72.896792.

80. Hoya T, Chambers J. Heuristic pattern correction scheme using adaptively trained generalized regression neural networks. IEEE Trans Neural Netw. 2001;12:91-100. DOI: 10.1109/72.896798.

81. Rojas I, Gonzalez J, Cañas A, Diaz AF, Rojas FJ, Rodriguez M. Short-term prediction of chaotic time series by using RBF network with regression weights. Int J Neural Syst. 2000;10:353-64. PMID: 11195935. DOI: 10.1142/s0129065700000351.
Abstract
We propose a framework for constructing and training a radial basis function (RBF) neural network. The structure of the Gaussian functions is modified using a pseudo-Gaussian function (PG) in which two scaling parameters sigma are introduced; this eliminates the symmetry restriction and provides the neurons in the hidden layer with greater flexibility for function approximation. We propose a modified PG-BF (pseudo-Gaussian basis function) network in which regression weights replace the constant weights in the output layer. For this purpose, a sequential learning algorithm is presented to adapt the structure of the network, making it possible to create new hidden units and to detect and remove inactive ones. A salient feature of the network is that the overall output is computed as the weighted average of the outputs associated with each receptive field. The superior performance of the proposed PG-BF system over the standard RBF network is illustrated on the problem of short-term prediction of chaotic time series.
Affiliation(s)
- I. Rojas, Department of Architecture and Computer Technology, University of Granada, Spain

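The two ingredients of the PG-BF network described above — an asymmetric pseudo-Gaussian activation with separate left and right widths, and a normalized weighted average of per-unit local linear models (the "regression weights") — can be sketched as follows. The function names and the scalar-input restriction are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def pseudo_gaussian(x, c, sigma_left, sigma_right):
    """Asymmetric (pseudo-Gaussian) activation: a different width on each
    side of the center removes the symmetry restriction of a Gaussian."""
    sigma = np.where(x < c, sigma_left, sigma_right)
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def pgbf_output(x, centers, s_left, s_right, a, b):
    """Network output as a normalized weighted average of each receptive
    field's local linear model a_i * x + b_i (the regression weights)."""
    phi = np.array([pseudo_gaussian(x, c, sl, sr)
                    for c, sl, sr in zip(centers, s_left, s_right)])
    local = a * x + b                       # one linear model per unit
    return (phi * local).sum() / phi.sum()  # weighted average of the fields
```

With a single hidden unit the normalization cancels and the output reduces to that unit's local linear model, which makes the weighted-average construction easy to check.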
82. Brouwer RK. Growing of a Fuzzy Recurrent Artificial Neural Network (FRANN) for pattern classification. Int J Neural Syst. 1999;9:335-50. PMID: 10586991. DOI: 10.1142/s0129065799000320.
Abstract
This paper describes a method for growing a recurrent neural network of fuzzy threshold units for the classification of feature vectors. Fuzzy networks seem natural for performing classification, since classification is concerned with set membership and objects generally belong to sets to various degrees. A fuzzy unit in the architecture proposed here determines the degree to which the input vector lies in the fuzzy set associated with that unit. This is in contrast to perceptrons, which determine the correlation between the input vector and a weighting vector. The resulting membership value is compared with a threshold, which is itself interpreted as a membership value. Training of a fuzzy unit is based on an algorithm for linear inequalities similar to Ho-Kashyap recording. These fuzzy threshold units are fully connected in a recurrent network, which grows as it is trained. The advantages of the network and its training method are: (1) the network grows only to the required size, which is generally much smaller than would otherwise be obtained, implying better generalization, smaller storage requirements, and fewer calculations during classification; (2) the training time is extremely short; (3) recurrent networks such as this one are generally readily implemented in hardware; (4) classification accuracy obtained on several standard data sets is better than that obtained by the majority of other standard methods; and (5) the use of fuzzy logic is very intuitive, since class membership is generally fuzzy.
Affiliation(s)
- R. K. Brouwer, Department of Computing Science, University College of the Cariboo (UCC), Kamloops, BC, Canada

83. Karayiannis N. An axiomatic approach to soft learning vector quantization and clustering. IEEE Trans Neural Netw. 1999;10:1153-65. DOI: 10.1109/72.788654.