151
Multilayer Feedforward Neural Network Based on Multi-valued Neurons (MLMVN) and a Backpropagation Learning Algorithm. Soft Comput 2006. DOI: 10.1007/s00500-006-0075-5.
152
Tsai JT, Chou JH, Liu TK. Tuning the structure and parameters of a neural network by using hybrid Taguchi-genetic algorithm. IEEE Trans Neural Netw 2006;17:69-80. PMID: 16526477. DOI: 10.1109/tnn.2005.860885.
Abstract
In this paper, a hybrid Taguchi-genetic algorithm (HTGA) is applied to solve the problem of tuning both the network structure and the parameters of a feedforward neural network. The HTGA approach combines the traditional genetic algorithm (TGA), which has a powerful global exploration capability, with the Taguchi method, which can exploit the optimum offspring. The Taguchi method is inserted between the crossover and mutation operations of a TGA: its systematic reasoning ability is incorporated into the crossover operation to select the better genes for the offspring, and consequently enhance the genetic algorithm. The HTGA approach can therefore be more robust, statistically sound, and quickly convergent. First, the authors evaluate the performance of the presented HTGA approach on some global numerical optimization problems. Then, the approach is applied to three examples: forecasting sunspot numbers, tuning an associative memory, and solving the XOR problem. The numbers of hidden nodes and links of the feedforward neural network are chosen by increasing them from small values until the learning performance is good enough. As a result, a partially connected feedforward neural network can be obtained after tuning, which implies that the implementation cost of the neural network can be reduced. These problems of tuning both network structure and parameters have many parameters and numerous local optima, making them challenging benchmarks for any proposed GA-based approach. The computational experiments show that the presented HTGA approach obtains better results than the existing method recently reported in the literature.
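The Taguchi step between crossover and mutation can be illustrated with a minimal sketch (not the authors' implementation): a hard-coded L4(2^3) orthogonal array selects, gene by gene, the better of the two parents' values according to a smaller-the-better signal-to-noise ratio. The three-gene chromosome and sphere fitness here are illustrative assumptions.

```python
import math

# L4(2^3) orthogonal array: 4 runs over 3 two-level factors.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def sphere(x):
    """Smaller-the-better test fitness."""
    return sum(v * v for v in x)

def taguchi_crossover(pa, pb, fitness=sphere):
    """Build one offspring: level 0 takes gene i from parent pa, level 1 from pb."""
    runs = [[(pa if lv == 0 else pb)[i] for i, lv in enumerate(row)] for row in L4]
    # Smaller-the-better signal-to-noise ratio: eta = -10*log10(y^2).
    eta = [-10.0 * math.log10(fitness(r) ** 2 + 1e-12) for r in runs]
    child = []
    for i in range(3):
        e0 = sum(eta[k] for k, row in enumerate(L4) if row[i] == 0)
        e1 = sum(eta[k] for k, row in enumerate(L4) if row[i] == 1)
        child.append(pa[i] if e0 >= e1 else pb[i])  # pick the higher-SNR level per gene
    return child
```

In the HTGA this offspring construction would sit between the crossover and mutation stages of an otherwise standard GA loop.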
Affiliation(s)
- Jinn-Tsong Tsai
- Department of Medical Information Management, Kaohsiung Medical University, Kaohsiung 807, Taiwan, ROC
153
Abstract
Recursive least squares (RLS) is an efficient approach to neural network training. However, the classical RLS algorithm has no explicit decay in its energy function, which leads to unsatisfactory generalization ability in the trained networks. In this paper, we propose a generalized RLS (GRLS) model that includes a general decay term in the energy function for training feedforward neural networks. In particular, four different weight decay functions are discussed: the quadratic weight decay, the constant weight decay, and the newly proposed multimodal and quartic weight decays. With the GRLS approach, not only is the generalization ability of the trained networks significantly improved, but unnecessary weights are also pruned, yielding a compact network. Furthermore, the computational complexity of GRLS remains the same as that of the standard RLS algorithm. The advantages and tradeoffs of the different decay functions are analyzed and demonstrated with examples. Simulation results show that our approach meets the design goals: improving the generalization ability of the trained network while obtaining a compact network.
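As a rough illustration of combining recursive least squares with a decay term (a simplified sketch, not the paper's GRLS derivation), the quadratic-decay case can be approximated by shrinking the weights after each standard RLS update; the forgetting factor, initialization and decay strength below are assumed values.

```python
import numpy as np

def rls_with_decay(X, d, lam=0.99, delta=100.0, decay=1e-4):
    """Recursive least squares plus a crude quadratic weight-decay step.

    lam   : forgetting factor
    delta : scale of the initial inverse-correlation matrix
    decay : quadratic-decay strength (shrinks weights toward zero each update)
    """
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)                    # inverse correlation matrix estimate
    for x, target in zip(X, d):
        k = P @ x / (lam + x @ P @ x)        # gain vector
        w = w + k * (target - w @ x)         # innovation update
        P = (P - np.outer(k, x @ P)) / lam
        w = (1.0 - decay) * w                # shrinkage from the quadratic decay term
    return w

# Fit a noisy linear model as a smoke test.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
d = X @ true_w + 0.01 * rng.normal(size=200)
w = rls_with_decay(X, d)
```

With a small decay the recovered weights stay close to the ordinary RLS solution while being biased slightly toward zero, which is the pruning effect the abstract describes.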
Affiliation(s)
- Yong Xu
- Centre of Excellence for Research in Computational Intelligence and Applications, School of Computer Science, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK.
154
155
García-Gimeno RM, Hervás-Martínez C, Rodríguez-Pérez R, Zurera-Cosano G. Modelling the growth of Leuconostoc mesenteroides by Artificial Neural Networks. Int J Food Microbiol 2005;105:317-32. PMID: 16054719. DOI: 10.1016/j.ijfoodmicro.2005.04.013.
Abstract
The combined effect of temperature (10.5 to 24.5 °C), pH (5.5 to 7.5), sodium chloride (0.25% to 6.25%) and sodium nitrite (0 to 200 ppm) on the predicted specific growth rate (Gr), lag time (Lag) and maximum population density (yEnd) of Leuconostoc mesenteroides under aerobic and anaerobic conditions was studied using an Artificial Neural Network-based model (ANN), in comparison with Response Surface methodology (RS). For both aerobic and anaerobic conditions, two types of ANN model were elaborated: unidimensional, one for each growth parameter, and multidimensional, in which the three parameters Gr, Lag and yEnd are combined. Although in general no statistically significant differences were observed between the two types of model, we opted for the unidimensional model because it obtained the lowest mean standard error of prediction (SEP) for generalisation. The ANN models developed provided reliable estimates for the three kinetic parameters studied; the SEP values in aerobic conditions ranged from 2.82% for Gr and 6.05% for Lag to 10% for yEnd, a higher degree of accuracy than that of the RS model (Gr: 9.54%; Lag: 8.89%; yEnd: 10.27%). Similar results were observed for anaerobic conditions. During external validation, better accuracy (Af) and bias (Bf) factors were observed for the ANN model than for the RS model. ANN predictive growth models are a valuable tool, enabling swift determination of L. mesenteroides growth parameters.
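The error measures quoted here can be computed as below, assuming the definitions standard in predictive microbiology: SEP as the root-mean-square error expressed as a percentage of the observed mean, and Ross's accuracy factor Af and bias factor Bf.

```python
import math

def sep_percent(obs, pred):
    """Standard error of prediction as % of the observed mean:
    SEP = 100 / mean(obs) * sqrt(sum((obs - pred)^2) / n)."""
    n = len(obs)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    return 100.0 * rmse / (sum(obs) / n)

def accuracy_factor(obs, pred):
    """Accuracy factor: Af = 10**(mean(|log10(pred/obs)|)); Af = 1 is a perfect fit."""
    n = len(obs)
    return 10 ** (sum(abs(math.log10(p / o)) for o, p in zip(obs, pred)) / n)

def bias_factor(obs, pred):
    """Bias factor: Bf = 10**(mean(log10(pred/obs))); Bf > 1 means over-prediction."""
    n = len(obs)
    return 10 ** (sum(math.log10(p / o) for o, p in zip(obs, pred)) / n)
```

These are the quantities behind the SEP, Af and Bf figures compared between the ANN and RS models in the abstract.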
Affiliation(s)
- R M García-Gimeno
- Department of Food Science and Technology, University of Córdoba, Campus Rabanales, Edif. Darwin, 14014 Córdoba, Spain.
156
Nguyen MH, Abbass HA, McKay RI. Stopping criteria for ensemble of evolutionary artificial neural networks. Appl Soft Comput 2005. DOI: 10.1016/j.asoc.2004.12.005.
157
Ferentinos KP. Biological engineering applications of feedforward neural networks designed and parameterized by genetic algorithms. Neural Netw 2005;18:934-50. PMID: 15963690. DOI: 10.1016/j.neunet.2005.03.010.
Abstract
Two neural network (NN) applications in the field of biological engineering are developed, designed and parameterized by an evolutionary method based on genetic algorithms. The developed systems are a fault detection NN model and a predictive modeling NN system. An indirect, 'weak specification' representation was used to encode NN topologies and training parameters into the genes of the genetic algorithm (GA). This approach requires some a priori knowledge of the topology demands of specific applications, so that the infinite search space of the problem is limited to a reasonable degree. Both one-hidden-layer and two-hidden-layer network architectures were explored by the GA. In addition to the network architecture, each gene of the GA encoded the type of activation function in the hidden and output nodes of the NN and the type of minimization algorithm used by backpropagation for training the NN. Both models achieved satisfactory performance, while the GA system proved to be a powerful tool that can successfully replace the problematic trial-and-error approach usually used for these tasks.
158
Chen Y, Yang B, Dong J, Abraham A. Time-series forecasting using flexible neural tree model. Inf Sci (N Y) 2005. DOI: 10.1016/j.ins.2004.10.005.
159
A Preliminary Study on Negative Correlation Learning via Correlation-Corrected Data (NCCD). Neural Process Lett 2005. DOI: 10.1007/s11063-005-1084-6.
160
Tan K, Yu Q, Lee T. A Distributed Evolutionary Classifier for Knowledge Discovery in Data Mining. IEEE Trans Syst Man Cybern C 2005. DOI: 10.1109/tsmcc.2004.841911.
161
Abstract
Evolving gradient-learning artificial neural networks (ANNs) using an evolutionary algorithm (EA) is a popular approach to addressing the local-optima and design problems of ANNs. The typical approach combines the strength of backpropagation (BP) in weight learning with the EA's capability of searching the architecture space. However, BP's gradient descent is highly compute-intensive, which restricts the search coverage of the EA by compelling it to use a small population size. To address this problem, we utilize a mutation-based genetic neural network (MGNN) that replaces BP with the local-adaptation mutation strategy of evolutionary programming (EP) to effect weight learning. The MGNN's mutation enables the network to dynamically evolve its structure and adapt its weights at the same time. Moreover, MGNN's EP-based encoding scheme allows a flexible, less restricted formulation of the fitness function and makes fitness computation fast and efficient. This makes larger population sizes feasible and gives MGNN relatively wide search coverage of the architecture space. MGNN implements a stopping criterion in which overfitness occurrences are monitored through sliding windows to avoid premature learning and overlearning. Statistical analysis of its performance on some well-known classification problems demonstrates its good generalization capability. It also reveals that locally adapting or scheduling the strategy parameters embedded in each individual network may provide a proper balance between the local and global searching capabilities of MGNN.
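The EP-style mutation of weights with self-adaptive step sizes can be sketched as follows. This is a generic (mu + mu) evolutionary-programming loop on a toy sphere fitness, not the MGNN system itself; the population size, number of generations and tau are illustrative assumptions.

```python
import math
import random

def ep_mutate(weights, sigmas, tau=None):
    """Self-adaptive EP mutation: each weight carries its own step size sigma,
    and the sigmas themselves are log-normally perturbed before being applied."""
    n = len(weights)
    tau = tau if tau is not None else 1.0 / math.sqrt(2.0 * math.sqrt(n))
    new_sigmas = [s * math.exp(tau * random.gauss(0, 1)) for s in sigmas]
    new_weights = [w + s * random.gauss(0, 1) for w, s in zip(weights, new_sigmas)]
    return new_weights, new_sigmas

def evolve(fitness, n=5, pop=20, gens=200, seed=1):
    """(mu + mu) EP loop: mutate every parent, keep the best half (minimization)."""
    random.seed(seed)
    popn = [([random.uniform(-2, 2) for _ in range(n)], [0.5] * n) for _ in range(pop)]
    for _ in range(gens):
        children = [ep_mutate(w, s) for w, s in popn]
        popn = sorted(popn + children, key=lambda ind: fitness(ind[0]))[:pop]
    return popn[0]

best_w, best_sigmas = evolve(lambda w: sum(v * v for v in w))
```

In MGNN the same mutation-plus-selection cycle replaces gradient descent as the weight-learning mechanism, with structural genes mutated alongside the weights.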
Affiliation(s)
- Paulito P Palmes
- Laboratory for Neuroinformatics, RIKEN Brain Science Institute, Wako City, Saitama 351-0198, Japan.
162
Abstract
A geometrical interpretation of the multilayer perceptron (MLP) is suggested in this paper. Based upon this interpretation, some general guidelines for selecting the architecture of the MLP, i.e., the number of hidden neurons and hidden layers, are proposed, and the controversial issue of whether the four-layered MLP is superior to the three-layered MLP is carefully examined.
Affiliation(s)
- Cheng Xiang
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore 119260.
163
164
De Falco I, Della Cioppa A, Iazzetta A, Tarantino E. An evolutionary approach for automatically extracting intelligible classification rules. Knowl Inf Syst 2005. DOI: 10.1007/s10115-003-0143-4.
165
Simultaneous Evolution of Neural Network Topologies and Weights for Classification and Regression. Computational Intelligence and Bioinspired Systems 2005. DOI: 10.1007/11494669_8.
166
167
Chan ZSH, Kasabov N. Evolutionary computation for on-line and off-line parameter tuning of evolving fuzzy neural networks. Int J Comput Intell Appl 2004. DOI: 10.1142/s1469026804001331.
Abstract
This work applies Evolutionary Computation to achieve completely self-adapting Evolving Fuzzy Neural Networks (EFuNNs) for operating in both incremental (on-line) and batch (off-line) modes. EFuNNs belong to a class of Evolving Connectionist Systems (ECOS), capable of performing clustering-based, on-line, local area learning and rule extraction. Through Evolutionary Computation, its parameters such as learning rates and membership functions are continuously adjusted to reflect the changes in the dynamics of incoming data. The proposed methods are tested on the Mackey–Glass series and the results demonstrate a substantial improvement in EFuNN's performance.
Affiliation(s)
- Zeke S. H. Chan
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland, New Zealand
- Nikola Kasabov
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland, New Zealand
168
Jaeger H, Haas H. Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication. Science 2004;304:78-80. PMID: 15064413. DOI: 10.1126/science.1091277.
Abstract
We present a method for learning nonlinear systems, echo state networks (ESNs). ESNs employ artificial recurrent neural networks in a way that has recently been proposed independently as a learning mechanism in biological brains. The learning method is computationally efficient and easy to use. On a benchmark task of predicting a chaotic time series, accuracy is improved by a factor of 2400 over previous techniques. The potential for engineering applications is illustrated by equalizing a communication channel, where the signal error rate is improved by two orders of magnitude.
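A minimal echo state network along these lines (an illustrative sketch, not the authors' code; the reservoir size, spectral radius of 0.9, and the sine one-step-ahead task are assumptions) trains only a linear ridge-regression readout on a fixed random reservoir.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- reservoir: fixed random recurrent weights, scaled to spectral radius 0.9 ---
n_res, washout, ridge = 100, 50, 1e-6
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))         # enforce the echo state property
W_in = rng.uniform(-0.5, 0.5, size=n_res)

# --- task: one-step-ahead prediction of a sine wave ---
u = np.sin(0.2 * np.arange(600))
states = np.zeros((len(u) - 1, n_res))
x = np.zeros(n_res)
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[t])              # reservoir update (no leak, for brevity)
    states[t] = x
targets = u[1:]

# --- readout: ridge regression on reservoir states (only the readout is trained) ---
S, y = states[washout:], targets[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
mse = float(np.mean((S @ W_out - y) ** 2))
```

The key ESN idea is visible here: the recurrent weights W are never trained; all learning is a single linear regression on the reservoir states.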
Affiliation(s)
- Herbert Jaeger
- International University Bremen, Bremen D-28759, Germany.
169
Bettayeb F, Rachedi T, Benbartaoui H. An improved automated ultrasonic NDE system by wavelet and neuron networks. Ultrasonics 2004;42:853-858. PMID: 15047396. DOI: 10.1016/j.ultras.2004.01.064.
Abstract
Despite the widespread and increasing use of digitized signals, the ultrasonic testing community has not yet realized the full potential of electronic processing. The performance of an ultrasonic flaw detection method is evaluated by its success in distinguishing flaw echoes from those scattered by microstructures. De-noising of ultrasonic signals is therefore extremely important for correctly identifying smaller defects, because the probability of detection usually decreases as the defect size decreases, while the probability of a false call increases. In this paper, the wavelet transform has been successfully applied to suppress noise and enhance flaw location in the ultrasonic signal, with good defect localization. The result is then passed to an automatic Artificial Neural Network classification and learning algorithm operating on A-scan data. Since there is some uncertainty connected with the testing technique, the system also needs numerical modelling: knowing the technical characteristics of the transducer, we can predict which defects experimental inspection should find. The system performs simulation of ultrasonic wave propagation in the material, providing a very helpful tool for obtaining information and understanding the physical phenomena, which can help in predicting the service life of the component.
Affiliation(s)
- Fairouz Bettayeb
- CSC, Research Center on Welding and Control, Route de Dely Brahin, BP: 64, Chéraga, Algiers, Algeria.
170
Zhang M, Ciesielski V. Neural networks and genetic algorithms for domain independent multiclass object detection. Int J Comput Intell Appl 2004. DOI: 10.1142/s146902680400115x.
Abstract
This paper describes a domain-independent approach to multiple-class rotation-invariant 2D object detection problems. The approach avoids preprocessing, segmentation and specific feature extraction; instead, raw image pixel values are used as inputs to the learning systems. Five object detection methods have been developed and tested: the basic method and four variations expected to improve its accuracy. In the basic method, cutouts of the objects of interest are used to train multilayer feedforward networks using backpropagation; the trained network is then used as a template to sweep the full image and find the objects of interest. The variations are: (1) use of a centred weight initialization method in network training; (2) use of a genetic algorithm to train the network; (3) use of a genetic algorithm, with fitness based on detection rate and false alarm rate, to refine the weights found in the basic approach; and (4) use of the same genetic algorithm to refine the weights found by method 2. These methods have been tested on three detection problems of increasing difficulty: an easy database of circles and squares, a medium-difficulty database of coins, and a very difficult database of retinal pathologies. For detecting the objects in all classes of interest in the easy and medium-difficulty problems, a 100% detection rate with no false alarms was achieved; however, the results on the retinal pathologies were unsatisfactory. The centred weight initialization algorithm improved detection performance over the basic approach on all three databases, and refinement of weights with a genetic algorithm significantly improved detection performance on all three. The goal of domain-independent object recognition was achieved for the detection of relatively small regular objects in larger images with relatively uncluttered backgrounds. Detection performance on irregular objects in complex, highly cluttered backgrounds, such as the retina pictures, has not been achieved to an acceptable level.
Affiliation(s)
- Mengjie Zhang
- School of Mathematical and Computing Sciences, Victoria University of Wellington, P. O. Box 600, Wellington, New Zealand
- Victor Ciesielski
- School of Computer Science and Information Technology, RMIT University, GPO Box 2476v, Melbourne, Victoria, Australia
171
Rask JM, Gonzalez RV, Barr RE. Genetically-designed Neural Networks for Error Reduction in an Optimized Biomechanical Model of the Human Elbow Joint Complex. Comput Methods Biomech Biomed Engin 2004;7:43-50. PMID: 14965879. DOI: 10.1080/10255840310001634269.
Abstract
A real-time dynamic biomechanical model of the human elbow joint has been used as the first step in calculating time-varying joint position from the electromyograms (EMGs) of eight muscles crossing the joint. Since the calculation of position is highly sensitive to errors in the model's torque calculation, a genetic algorithm (GA) neural network (NN) has been developed for automatic error reduction in the dynamic model. Genetic algorithms are used to design many neural network structures during a preliminary trial effort, and each network's performance is then ranked to choose the trained network giving the most accurate result. Experimental results from three subjects have shown model error reduction in 84.2% of the data sets from a subject on which the model had been trained, and 52.6% of the data sets from the subjects on which the model had not been trained. Furthermore, the GA networks reduced the error standard deviation across all subjects, showing that progress in error reduction was made evenly across all data sets.
Affiliation(s)
- John Michael Rask
- Mechanical Engineering Department, The University of Texas at Austin, TX 78712, USA
172
173
174
García-Pedrajas N, Ortiz-Boyer D, Hervás-Martínez C. Cooperative coevolution of generalized multi-layer perceptrons. Neurocomputing 2004. DOI: 10.1016/j.neucom.2003.09.004.
175
Chalup SK, Blair AD. Incremental training of first order recurrent neural networks to predict a context-sensitive language. Neural Netw 2003;16:955-72. PMID: 14692631. DOI: 10.1016/s0893-6080(03)00054-6.
Abstract
In recent years it has been shown that first-order recurrent neural networks trained by gradient descent can learn not only regular but also simple context-free and context-sensitive languages. However, the success rate was generally low and severe instability issues were encountered. The present study examines the hypothesis that a combination of evolutionary hill climbing with incremental learning and a well-balanced training set enables first-order recurrent networks to reliably learn context-free and mildly context-sensitive languages. In particular, we trained the networks to predict symbols in string sequences of the context-sensitive language a^n b^n c^n, n ≥ 1. Comparative experiments with and without incremental learning indicated that incremental learning can accelerate and facilitate training. Furthermore, incrementally trained networks generally produced monotonic trajectories in hidden unit activation space, while the trajectories of non-incrementally trained networks were oscillating. The non-incrementally trained networks were more likely to generalise.
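Generating the training strings and an incremental curriculum for a^n b^n c^n is straightforward; the staging scheme below (stage k contains all strings with n up to k, so short strings are seen before long ones) is one plausible reading of the incremental setup, not necessarily the authors' exact protocol.

```python
def anbncn(n):
    """One string of the context-sensitive language a^n b^n c^n."""
    return "a" * n + "b" * n + "c" * n

def incremental_training_sets(max_n):
    """Curriculum of growing training sets: stage k holds all strings with n <= k,
    mirroring the idea of presenting short strings before longer ones."""
    return [[anbncn(n) for n in range(1, k + 1)] for k in range(1, max_n + 1)]
```

A symbol-prediction network would then be trained stage by stage, carrying its weights forward as longer strings are introduced.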
Affiliation(s)
- Stephan K Chalup
- School of Electrical Engineering and Computer Science, The University of Newcastle, Callaghan, NSW 2308, Australia.
176
Islam M, Yao X, Murase K. A constructive algorithm for training cooperative neural network ensembles. IEEE Trans Neural Netw 2003;14:820-34. DOI: 10.1109/tnn.2003.813832.
177
Lynch R, Willett P. Bayesian classification and feature reduction using uniform Dirichlet priors. IEEE Trans Syst Man Cybern B 2003;33:448-64. DOI: 10.1109/tsmcb.2003.811121.
178
Abstract
This paper presents an approach that uses simulated annealing and tabu search for the simultaneous optimization of neural network architectures and weights. The problem considered is odor recognition in an artificial nose. Both methods produced networks with high classification performance and low complexity. Generalization was improved by using the backpropagation algorithm for fine tuning. The combination of simple, traditional search methods has proven very suitable for generating compact and efficient networks.
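A generic simulated-annealing skeleton with geometric cooling conveys the search procedure. In the paper the state would encode both the architecture and the weights; this sketch only minimizes a toy continuous cost, and the schedule parameters are assumptions.

```python
import math
import random

def simulated_annealing(cost, x0, step=0.3, t0=1.0, alpha=0.995, iters=2000, seed=7):
    """Generic SA: accepts uphill moves with probability exp(-delta/T),
    cooling geometrically (T <- alpha * T) each iteration."""
    random.seed(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [v + random.gauss(0, step) for v in x]   # Gaussian neighbourhood move
        fc = cost(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx               # track best-so-far
        t *= alpha                                      # geometric cooling schedule
    return best, fbest

best, fbest = simulated_annealing(lambda v: sum(u * u for u in v), [2.0, 2.0])
```

Tabu search would replace the Metropolis acceptance rule with a short-term memory of recently visited states; both share this propose-evaluate-accept loop.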
Affiliation(s)
- Akio Yamazaki
- Center of Informatics, Federal University of Pernambuco, Cidade Universitária, P.O. Box 7851, Recife, Pernambuco, 50.732-970, Brazil.
179
Meesad P, Yen G. Combined numerical and linguistic knowledge representation and its application to medical diagnosis. IEEE Trans Syst Man Cybern A 2003. DOI: 10.1109/tsmca.2003.811290.
180
Abstract
One of the major challenges in the medical domain is the extraction of comprehensible knowledge from medical diagnosis data. In this paper, a two-phase hybrid evolutionary classification technique is proposed to extract classification rules that can be used in clinical practice for better understanding and prevention of unwanted medical events. In the first phase, a hybrid evolutionary algorithm (EA) is utilized to confine the search space by evolving a pool of good candidate rules: genetic programming (GP) is applied to evolve nominal attributes for free-structured rules, and a genetic algorithm (GA) is used to optimize the numeric attributes for concise classification rules without the need for discretization. These candidate rules are then used in the second phase to optimize the order and number of rules in the evolution, forming accurate and comprehensible rule sets. The proposed evolutionary classifier (EvoC) is validated on hepatitis and breast cancer datasets obtained from the UCI machine-learning repository. Simulation results show that the evolutionary classifier produces comprehensible rules and good classification accuracy for the medical datasets. Results obtained from t-tests further justify its robustness and invariance to random partitioning of the datasets.
181
Ecological Applications of Adaptive Agents. Ecol Inform 2003. DOI: 10.1007/978-3-662-05150-4_5.
182
Evolutionary Neuroestimation of Fitness Functions. Progress in Artificial Intelligence 2003. DOI: 10.1007/978-3-540-24580-3_15.
183
Leung F, Lam H, Ling S, Tam P. Tuning of the structure and parameters of a neural network using an improved genetic algorithm. IEEE Trans Neural Netw 2003;14:79-88. DOI: 10.1109/tnn.2002.804317.
184
185
Predictive Rules for Phytoplankton Dynamics in Freshwater Lakes Discovered by Evolutionary Algorithms. Ecol Inform 2003. DOI: 10.1007/978-3-662-05150-4_15.
186
Chalup SK. Incremental learning in biological and machine learning systems. Int J Neural Syst 2002;12:447-65. PMID: 12528196. DOI: 10.1142/s0129065702001308.
Abstract
Incremental learning concepts are reviewed in machine learning and neurobiology. They are identified in evolution, neurodevelopment and learning. A timeline of qualitative axon, neuron and synapse development summarizes the review on neurodevelopment. A discussion of experimental results on data incremental learning with recurrent artificial neural networks reveals that incremental learning often seems to be more efficient or powerful than standard learning but can produce unexpected side effects. A characterization of incremental learning is proposed which takes the elaborated biological and machine learning concepts into account.
Affiliation(s)
- Stephan K Chalup
- School of Electrical Engineering and Computer Science, The University of Newcastle, Australia.
187
García-Pedrajas N, Hervás-Martínez C, Muñoz-Pérez J. Multi-objective cooperative coevolution of artificial neural networks (multi-objective cooperative networks). Neural Netw 2002;15:1259-78. PMID: 12425442. DOI: 10.1016/s0893-6080(02)00095-3.
Abstract
In this paper we present a cooperative coevolutionary model for the evolution of neural network topology and weights, called MOBNET. MOBNET evolves subcomponents that must be combined to form a network, instead of whole networks. The problem of assigning credit to the subcomponents is approached as a multi-objective optimization task: the subcomponents in a cooperative coevolutionary model must fulfill different criteria to be useful, and these criteria usually conflict with each other. Evaluating the fitness of an individual based on many criteria that must be optimized together is naturally a multi-criteria optimization problem, so methods from multi-objective optimization offer the most natural way to solve it. In this work we show that, by using several objectives for every subcomponent and evaluating its fitness as a multi-objective optimization problem, the performance of the model is highly competitive. MOBNET is compared with several standard classification methods and with other neural network models on four real-world problems, and it shows the best overall performance of all the classification methods applied. It also produces smaller networks than other models. The basic idea underlying MOBNET extends to a more general model of coevolutionary computation, as none of its features is exclusive to neural network design. Many applications of cooperative coevolution could benefit from the multi-objective optimization approach proposed in this paper.
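The multi-objective credit assignment rests on Pareto dominance; a minimal non-dominated filter (assuming all objectives are minimized) looks like this:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse on every objective, strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the Pareto-optimal subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

In a MOBNET-like setting each subcomponent's objective vector might hold, for example, its contribution to network error and a measure of its complexity; selection then favours the non-dominated subcomponents.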
Affiliation(s)
- N García-Pedrajas
- Department of Computing and Numerical Analysis, University of Córdoba, Spain.
188
Evolving SQL Queries for Data Mining. Lecture Notes in Computer Science 2002. DOI: 10.1007/3-540-45675-9_11.
189
Abstract
This paper presents an evolutionary artificial neural network (EANN) approach, based on the Pareto differential evolution (PDE) algorithm augmented with local search, for the prediction of breast cancer. The approach is named memetic Pareto artificial neural network (MPANN). Artificial neural networks (ANNs) could be used to improve the work of medical practitioners in the diagnosis of breast cancer; their ability to approximate nonlinear functions and capture complex relationships in the data could support the medical domain. We compare our results against an evolutionary programming approach and standard backpropagation (BP), and we show experimentally that MPANN has better generalization and a much lower computational cost.
Affiliation(s)
- Hussein A Abbass
- School of Computer Science, University of New South Wales, Australian Defence Force Academy Campus, Northcott Drive, 2600 Canberra, ACT, Australia.
|
191
|
Paul S, Kumar S. Subsethood-product fuzzy neural inference system (SuPFuNIS). ACTA ACUST UNITED AC 2002; 13:578-99. [DOI: 10.1109/tnn.2002.1000126] [Citation(s) in RCA: 92] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
192
|
Yen G, Lu H. Hierarchical genetic algorithm for near optimal feedforward neural network design. Int J Neural Syst 2002; 12:31-43. [PMID: 11852443 DOI: 10.1142/s0129065702001023] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2001] [Revised: 09/24/2001] [Accepted: 09/24/2001] [Indexed: 11/18/2022]
Abstract
In this paper, we propose a genetic-algorithm-based design procedure for a multilayer feedforward neural network. A hierarchical genetic algorithm is used to evolve both the network's topology and its weighting parameters. Compared with traditional genetic-algorithm-based designs for neural networks, the hierarchical approach addresses several deficiencies, including the feasibility check highlighted in the literature. A multi-objective cost function is used to optimize the performance and the topology of the evolved neural network simultaneously. In predicting the Mackey-Glass chaotic time series, the networks designed by the proposed approach prove competitive with, or even superior to, traditional learning algorithms for multilayer perceptron networks and radial basis function networks. Based upon the chosen cost function, a linear weighted-combination decision-making approach is applied to derive an approximate Pareto-optimal solution set; designing a set of neural networks can therefore be treated as a two-objective optimization problem.
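The linear weighted-combination approach mentioned above scalarizes the two objectives (prediction performance and topology complexity) and sweeps the weight to trace an approximate Pareto set. A minimal sketch, assuming made-up candidate networks and a simple complexity normalization not taken from the paper:

```python
def scalarize(error, complexity, w):
    """Linear weighted combination of two costs; sweeping w in [0, 1]
    traces an approximation of the Pareto front."""
    return w * error + (1.0 - w) * complexity

# Hypothetical candidates: (name, prediction error, number of links).
candidates = [
    ("netA", 0.10, 30),
    ("netB", 0.05, 80),
    ("netC", 0.20, 15),
]

# For each weight setting, keep the candidate with the lowest combined
# cost (links normalized to [0, 1] by dividing by 100 here).
front = {
    min(candidates, key=lambda c: scalarize(c[1], c[2] / 100.0, w))[0]
    for w in (0.1, 0.5, 0.9)
}
# Small w favors the sparse netC; large w favors the accurate netA.
```

A weighted sum only recovers points on the convex part of the Pareto front, which is why it yields an approximated rather than exact Pareto-optimal set.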
Affiliation(s)
- Gary Yen
- Intelligent Systems and Control Laboratory, School of Electrical and Computer Engineering, Stillwater, OK 74078, USA
|
193
|
García-Gimeno RM, Hervás-Martínez C, de S. Improving artificial neural networks with a pruning methodology and genetic algorithms for their application in microbial growth prediction in food. Int J Food Microbiol 2002; 72:19-30. [PMID: 11843410 DOI: 10.1016/s0168-1605(01)00608-0] [Citation(s) in RCA: 74] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
The application of Artificial Neural Networks (ANN) in predictive microbiology is presented in this paper. This technique was used to build a predictive model of the joint effect of NaCl concentration, pH level and storage temperature on the kinetic parameters of the growth curve of Lactobacillus plantarum, using both an ANN and a Response Surface Model (RSM). Sigmoid functions were fitted to the data, and the estimated kinetic parameters were used to build the models, in which the independent variables were the factors mentioned above (NaCl, pH, temperature); in some models, the optical density (OD) values vs. time of the growth curve were also included to reduce the estimation error. Determining the proper size of the ANN was the first step of the estimation, and this study shows the usefulness of an ANN pruning methodology. Pruning removes unnecessary parameters (weights) and nodes during the training process without losing the network's generalization capacity. The best architecture was sought using genetic algorithms (GA) in conjunction with pruning algorithms and regularization methods in which the initial distribution of the network's parameters (weights) is not uniform. The ANN model was compared with the response surface model by means of the Standard Error of Prediction (SEP). The best values obtained were 14.04% SEP for the growth rate and 14.84% for the lag estimation by the best ANN model, much better than those obtained by the RSM (35.63% and 39.30%, respectively). These are very promising results that, in our opinion, open up an extremely important field of research.
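The abstract describes pruning as removing unnecessary weights during training but does not give the removal criterion. As one common illustration, magnitude-based pruning zeroes out weights whose absolute value falls below a threshold; the function name and threshold here are assumptions, not the paper's exact rule.

```python
def prune_weights(weights, threshold=0.01):
    """Magnitude-based pruning: zero out connection weights whose
    absolute value falls below `threshold`, mimicking the removal
    of unnecessary parameters during training."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

pruned = prune_weights([0.5, -0.004, 0.02, 0.0009])
# -> [0.5, 0.0, 0.02, 0.0]
```

In practice such a criterion would be applied repeatedly between training epochs, with regularization encouraging small (and hence prunable) weights, as the combination of pruning and regularization in the abstract suggests.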
|
194
|
Toward an evolvable neuromolecular hardware: a hardware design for a multilevel artificial brain with digital circuits. Neurocomputing 2002. [DOI: 10.1016/s0925-2312(01)00592-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
195
|
Applying LSTM to Time Series Predictable Through Time-Window Approaches. PERSPECTIVES IN NEURAL COMPUTING 2002. [DOI: 10.1007/978-1-4471-0219-9_20] [Citation(s) in RCA: 113] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
196
|
Bobbin J, Recknagel F. Knowledge discovery for prediction and explanation of blue-green algal dynamics in lakes by evolutionary algorithms. Ecol Modell 2001. [DOI: 10.1016/s0304-3800(01)00311-8] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
197
|
Abstract
This paper describes the cascade neural network design algorithm (CNNDA), a new algorithm for designing compact, two-hidden-layer artificial neural networks (ANNs). The algorithm determines an ANN's architecture, with connection weights, automatically. The design strategy used in the CNNDA is intended to optimize both the generalization ability and the training time of ANNs. To improve generalization, the CNNDA uses a combination of constructive and pruning algorithms and bounded fan-ins for the hidden nodes. A new training approach, in which the input weights of a hidden node are temporarily frozen when its output does not change much over a few successive training cycles, is used in the CNNDA to reduce the computational cost and the training time. The CNNDA was tested on several benchmarks, including the cancer, diabetes and character-recognition problems. The experimental results show that the CNNDA can produce compact ANNs with good generalization ability and short training time in comparison with other algorithms.
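The freezing criterion above can be sketched as a simple stability test on a hidden node's recent outputs; the window size, tolerance, and function name are illustrative assumptions, not values from the paper.

```python
def should_freeze(output_history, window=5, tol=1e-3):
    """Freeze a hidden node's input weights when its output has
    varied by less than `tol` over the last `window` training cycles."""
    if len(output_history) < window:
        return False
    recent = output_history[-window:]
    return max(recent) - min(recent) < tol

# A node still moving -> keep training its input weights.
active = should_freeze([0.2, 0.5, 0.4])                      # False
# A node whose output has settled -> temporarily freeze it.
settled = should_freeze([0.2, 0.5, 0.4001, 0.4003, 0.4002,
                         0.4001, 0.4000])                    # True
```

Skipping gradient updates for frozen nodes is what saves computation: only the still-changing part of the network is trained on each cycle, and a frozen node can be unfrozen later if the rest of the network shifts around it.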
Affiliation(s)
- M M Islam
- Department of Human and Artificial Intelligence Systems, Fukui University, Japan
|