1. Ross M, Berberian N, Nikolla A, Chartier S. Dynamic multilayer growth: Parallel vs. sequential approaches. PLoS One 2024; 19:e0301513. [PMID: 38722934] [PMCID: PMC11081283] [DOI: 10.1371/journal.pone.0301513]
Abstract
The decision of when to add a new hidden unit or layer is a fundamental challenge for constructive algorithms. It becomes even more complex in the context of multiple hidden layers. Growing both network width and depth offers a robust framework for leveraging the ability to capture more information from the data and model more complex representations. In the context of multiple hidden layers, should growing units occur sequentially with hidden units only being grown in one layer at a time or in parallel with hidden units growing across multiple layers simultaneously? The effects of growing sequentially or in parallel are investigated using a population dynamics-inspired growing algorithm in a multilayer context. A modified version of the constructive growing algorithm capable of growing in parallel is presented. Sequential and parallel growth methodologies are compared in a three-hidden layer multilayer perceptron on several benchmark classification tasks. Several variants of these approaches are developed for a more in-depth comparison based on the type of hidden layer initialization and the weight update methods employed. Comparisons are then made to another sequential growing approach, Dynamic Node Creation. Growing hidden layers in parallel resulted in comparable or higher performances than sequential approaches. Growing hidden layers in parallel promotes growing narrower deep architectures tailored to the task. Dynamic growth inspired by population dynamics offers the potential to grow the width and depth of deeper neural networks in either a sequential or parallel fashion.
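As a rough illustration of the sequential-versus-parallel distinction drawn here, the sketch below grows hidden-layer widths either one layer at a time or across all layers at once whenever validation loss stops improving. It is a hypothetical outline only, not the population-dynamics growing algorithm from the paper; the `build`, `train`, and `evaluate` callables are assumed to be supplied by the caller.
```python
# Hypothetical sketch of sequential vs. parallel width growth; the caller supplies
# build(widths) -> model, train(model, data), and evaluate(model, data) -> val. loss.

def grow(build, train, evaluate, widths, data, mode="parallel",
         max_rounds=10, tol=1e-3):
    """Grow hidden-layer widths until the validation loss stops improving."""
    model = build(widths)
    train(model, data)
    best = evaluate(model, data)
    layer = 0                                   # only used in sequential mode
    for _ in range(max_rounds):
        if mode == "parallel":                  # add one unit to every hidden layer
            trial = [w + 1 for w in widths]
        else:                                   # sequential: grow one layer at a time
            trial = list(widths)
            trial[layer] += 1
            layer = (layer + 1) % len(widths)
        candidate = build(trial)
        train(candidate, data)
        loss = evaluate(candidate, data)
        if best - loss < tol:                   # no meaningful improvement: stop
            break
        widths, model, best = trial, candidate, loss
    return model, widths
```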
Affiliation(s)
- Matt Ross
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Nareg Berberian
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Albino Nikolla
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Sylvain Chartier
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, ON, Canada
2. Wang G, Qiao J, Bi J, Jia QS, Zhou M. An Adaptive Deep Belief Network With Sparse Restricted Boltzmann Machines. IEEE Transactions on Neural Networks and Learning Systems 2020; 31:4217-4228. [PMID: 31880561] [DOI: 10.1109/tnnls.2019.2952864]
Abstract
A deep belief network (DBN) is an efficient learning model for representing unknown data, especially nonlinear systems. However, it is extremely hard to design a satisfactory DBN with a robust structure because of its traditional dense representation. In addition, backpropagation-based fine-tuning tends to yield poor performance because it is easily trapped in local optima. In this article, we propose a novel DBN model based on adaptive sparse restricted Boltzmann machines (AS-RBM) and partial least squares (PLS) regression fine-tuning, abbreviated as ARP-DBN, to obtain a more robust and accurate model than existing ones. First, an adaptive learning step size is designed to accelerate RBM training, and two regularization terms are introduced into the training process to realize sparse representation. Second, the initial weights derived from AS-RBM are further optimized via layer-by-layer PLS modeling, proceeding from the output layer to the input layer. Third, we present a convergence and stability analysis of the proposed method. Finally, our approach is tested on Mackey-Glass time-series prediction, 2-D function approximation, and unknown system identification. Simulation results demonstrate that it has higher learning accuracy and faster learning speed, and that it can be used to build a more robust model than existing ones.
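As background for the sparsity idea, one widely used way to make RBM hidden representations sparse is to add a penalty that pulls each hidden unit's mean activation toward a small target value. The sketch below shows only that generic contrastive-divergence-plus-penalty gradient; it is not the AS-RBM regularization, the adaptive step size, or the PLS fine-tuning described in this paper, and all names are hypothetical.
```python
import numpy as np

def sparse_rbm_weight_grad(v, h_prob, v_recon, h_recon, target=0.05, lam=0.1):
    """CD-1 style weight gradient with a simple sparsity penalty on hidden units.

    v, v_recon: (batch, n_visible); h_prob, h_recon: (batch, n_hidden).
    Returns the direction to *add* to W (gradient ascent on the log-likelihood
    minus a penalty that nudges each hidden unit's mean activation toward `target`).
    """
    pos = v.T @ h_prob                          # positive-phase statistics
    neg = v_recon.T @ h_recon                   # negative (reconstruction) phase
    cd_grad = (pos - neg) / v.shape[0]
    mean_act = h_prob.mean(axis=0)              # mean activation per hidden unit
    sparsity_grad = np.outer(v.mean(axis=0), mean_act - target)
    return cd_grad - lam * sparsity_grad
```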
4. Prediction of PM2.5 concentration based on multi-source data and self-organizing fuzzy neural network. SN Applied Sciences 2020. [DOI: 10.1007/s42452-020-2380-5]
5. Yang SH, Wang HL, Lo YC, Lai HY, Chen KY, Lan YH, Kao CC, Chou C, Lin SH, Huang JW, Wang CF, Kuo CH, Chen YY. Inhibition of Long-Term Variability in Decoding Forelimb Trajectory Using Evolutionary Neural Networks With Error-Correction Learning. Front Comput Neurosci 2020; 14:22. [PMID: 32296323] [PMCID: PMC7136463] [DOI: 10.3389/fncom.2020.00022]
Abstract
Objective: In brain-machine interfaces (BMIs), the functional mapping between neural activities and kinematic parameters varies over time owing to changes in neural recording conditions. The variability in neural recording conditions might result in unstable long-term decoding performance. Relevant studies trained decoders with several days of training data to make them inherently robust to changes in neural recording conditions. However, these decoders might not be robust to changes in neural recording conditions when only a few days of training data are available. In time-series prediction and feedback control systems, error feedback is commonly adopted to reduce the effects of model uncertainty. This motivated us to introduce an error feedback to a neural decoder for dealing with the variability in neural recording conditions. Approach: We proposed an evolutionary constructive and pruning neural network with error feedback (ECPNN-EF) as a neural decoder. The ECPNN-EF with partially connected topology decoded the instantaneous firing rates of each sorted unit into the forelimb movement of a rat. Furthermore, an error feedback was adopted as an additional input to provide kinematic information and thus compensate for changes in functional mapping. The proposed neural decoder was trained on data collected from a water-reward-related lever-pressing task for a rat. The first 2 days of data were used to train the decoder, and the subsequent 10 days of data were used to test it. Main Results: The ECPNN-EF under different settings was evaluated to better understand the impact of the error feedback and partially connected topology. The experimental results demonstrated that the ECPNN-EF achieved significantly higher daily decoding performance with smaller daily variability when using the error feedback and partially connected topology. Significance: These results suggested that the ECPNN-EF with partially connected topology could cope with both within- and across-day changes in neural recording conditions. The error feedback in the ECPNN-EF compensated for decreases in decoding performance when neural recording conditions changed. This mechanism made the ECPNN-EF robust against changes in functional mappings and thus improved the long-term decoding stability when only a few days of training data were available.
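The error-feedback idea described in the Approach section can be pictured as appending the previous time step's decoding error to the neural features before each prediction. The loop below is a hypothetical simplification of that idea, not the ECPNN-EF network itself; `decoder` and the reference kinematics used to form the error signal are assumed inputs.
```python
import numpy as np

def decode_with_error_feedback(decoder, firing_rates, kinematics_ref):
    """Decode a session while feeding the previous step's error back as an input.

    decoder(x) -> predicted kinematic value; firing_rates: (T, n_units);
    kinematics_ref: (T,) measured kinematics used to form the feedback signal.
    """
    T = firing_rates.shape[0]
    preds = np.zeros(T)
    err = 0.0                                   # no error signal before the first step
    for t in range(T):
        x = np.append(firing_rates[t], err)     # error feedback as one extra input
        preds[t] = decoder(x)
        err = kinematics_ref[t] - preds[t]      # error used at the next time step
    return preds
```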
Affiliation(s)
- Shih-Hung Yang
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, Taiwan
- Han-Lin Wang
- Department of Biomedical Engineering, National Yang Ming University, Taipei, Taiwan
- Yu-Chun Lo
- The Ph.D. Program for Neural Regenerative Medicine, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- Hsin-Yi Lai
- Key Laboratory of Medical Neurobiology of Zhejiang Province, Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou, China
- Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, China
- Kuan-Yu Chen
- Department of Biomedical Engineering, National Yang Ming University, Taipei, Taiwan
- Yu-Hao Lan
- Department of Biomedical Engineering, National Yang Ming University, Taipei, Taiwan
- Ching-Chia Kao
- Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
- Chin Chou
- Department of Regulatory & Quality Sciences, University of Southern California, Los Angeles, CA, United States
- Sheng-Huang Lin
- Buddhist Tzu Chi Medical Foundation, Department of Neurology, Hualien Tzu Chi Hospital, Hualien, Taiwan
- Department of Neurology, School of Medicine, Tzu Chi University, Hualien, Taiwan
- Jyun-We Huang
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, Taiwan
- Ching-Fu Wang
- Department of Biomedical Engineering, National Yang Ming University, Taipei, Taiwan
- Chao-Hung Kuo
- Department of Biomedical Engineering, National Yang Ming University, Taipei, Taiwan
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Department of Neurological Surgery, University of Washington, Seattle, WA, United States
- You-Yin Chen
- Department of Biomedical Engineering, National Yang Ming University, Taipei, Taiwan
- The Ph.D. Program for Neural Regenerative Medicine, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
6. Optimizing Deep Feedforward Neural Network Architecture: A Tabu Search Based Approach. Neural Process Lett 2020. [DOI: 10.1007/s11063-020-10234-7]
7. A hybrid grasshopper and new cat swarm optimization algorithm for feature selection and optimization of multi-layer perceptron. Soft Comput 2020. [DOI: 10.1007/s00500-020-04877-w]
9. Kiliroor CC, Valliyammai C. Social network based filtering of unsolicited messages from e-mails. Journal of Intelligent & Fuzzy Systems 2019. [DOI: 10.3233/jifs-169964]
Affiliation(s)
- Cinu C. Kiliroor
- Department of Computer Technology, Madras Institute of Technology, Anna University, Chennai, India
- C. Valliyammai
- Department of Computer Technology, Madras Institute of Technology, Anna University, Chennai, India
10. Bansal P, Gupta S, Kumar S, Sharma S, Sharma S. MLP-LOA: a metaheuristic approach to design an optimal multilayer perceptron. Soft Comput 2019. [DOI: 10.1007/s00500-019-03773-2]
11. Nayyeri M, Sadoghi Yazdi H, Maskooki A, Rouhani M. Universal Approximation by Using the Correntropy Objective Function. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:4515-4521. [PMID: 29035228] [DOI: 10.1109/tnnls.2017.2753725]
Abstract
Several objective functions have been proposed in the literature to adjust the input parameters of a node in constructive networks. Furthermore, many researchers have focused on the universal approximation capability of the network based on the existing objective functions. In this brief, we use a correntropy measure based on the sigmoid kernel in the objective function to adjust the input parameters of a newly added node in a cascade network. The proposed network is shown to be capable of approximating any continuous nonlinear mapping with probability one in a compact input sample space. Thus, the convergence is guaranteed. The performance of our method was compared with that of eight different objective functions, as well as with an existing one hidden layer feedforward network on several real regression data sets with and without impulsive noise. The experimental results indicate the benefits of using a correntropy measure in reducing the root mean square error and increasing the robustness to noise.
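For orientation, correntropy is an average kernel similarity evaluated on the residuals, which is what makes it insensitive to impulsive noise. The snippet below uses the familiar Gaussian-kernel form purely as an illustration; the brief itself builds the objective from a sigmoid kernel, which is not reproduced here.
```python
import numpy as np

def correntropy(errors, sigma=1.0):
    """Gaussian-kernel correntropy of a residual vector (to be maximized).

    Standard form: (1/N) * sum_i exp(-e_i^2 / (2*sigma^2)). Illustration only;
    the cited brief adjusts new-node parameters with a sigmoid-kernel correntropy.
    """
    errors = np.asarray(errors, dtype=float)
    return np.mean(np.exp(-errors**2 / (2.0 * sigma**2)))

# Example: a single impulsive outlier barely moves correntropy but dominates MSE.
residuals = np.array([0.1, -0.2, 0.05, 8.0])
print(correntropy(residuals), np.mean(residuals**2))
```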
12. Kamada S, Ichimura T, Hara A, Mackin KJ. Adaptive structure learning method of deep belief network using neuron generation–annihilation and layer generation. Neural Comput Appl 2018. [DOI: 10.1007/s00521-018-3622-y]
13. Gadea-Gironés R, Colom-Palero R, Herrero-Bosch V. Optimization of Deep Neural Networks Using SoCs with OpenCL. Sensors 2018; 18(5):1384. [PMID: 29710875] [PMCID: PMC5982427] [DOI: 10.3390/s18051384]
Abstract
In the optimization of deep neural networks (DNNs) via evolutionary algorithms (EAs) and the implementation of the training necessary for the creation of the objective function, there is often a trade-off between efficiency and flexibility. Pure software solutions implemented on general-purpose processors tend to be slow because they do not take advantage of the inherent parallelism of these devices, whereas hardware realizations based on heterogeneous platforms (combining central processing units (CPUs), graphics processing units (GPUs) and/or field-programmable gate arrays (FPGAs)) are designed based on different solutions using methodologies supported by different languages and using very different implementation criteria. This paper first presents a study that demonstrates the need for a heterogeneous (CPU-GPU-FPGA) platform to accelerate the optimization of artificial neural networks (ANNs) using genetic algorithms. Second, the paper presents implementations of the calculations related to the individuals evaluated in such an algorithm on different (CPU- and FPGA-based) platforms, but with the same source files written in OpenCL. The implementation of individuals on remote, low-cost FPGA systems on a chip (SoCs) is found to enable the achievement of good efficiency in terms of performance per watt.
Affiliation(s)
- Rafael Gadea-Gironés
- Department Universitat Politècnica de València, Camino de Vera, s/n, 46022 València, Spain.
- Ricardo Colom-Palero
- Department Universitat Politècnica de València, Camino de Vera, s/n, 46022 València, Spain.
- Vicente Herrero-Bosch
- Department Universitat Politècnica de València, Camino de Vera, s/n, 46022 València, Spain.
14. Zemouri R, Omri N, Morello B, Devalland C, Arnould L, Zerhouni N, Fnaiech F. Constructive Deep Neural Network for Breast Cancer Diagnosis. IFAC-PapersOnLine 2018. [DOI: 10.1016/j.ifacol.2018.11.660]
16. Zheng T, Wang C. Relationship Between Persistent Excitation Levels and RBF Network Structures, With Application to Performance Analysis of Deterministic Learning. IEEE Transactions on Cybernetics 2017; 47:3380-3392. [PMID: 28613194] [DOI: 10.1109/tcyb.2017.2710284]
Abstract
Based on the notion of persistent excitation (PE), a deterministic learning theory has recently been proposed for RBF network-based identification of nonlinear systems. In this paper, we study the relationship between the PE levels, the structures of RBF networks, and the performance of deterministic learning. Specifically, given a state trajectory generated from a nonlinear dynamical system, we investigate how to construct the RBF networks in order to guarantee sufficient PE levels (especially the level of excitation) for deterministic learning. It is revealed that the PE levels decrease with the density of neural centers, as characterized by explicit formulas. As an illustration, these formulas are applied to convergence analysis of deterministic learning. We present exact theoretical conclusions that a finite and definite number of centers can achieve the same performance as global centers. In addition, a tradeoff exists between a relatively high level of excitation and the good approximation capabilities of RBF networks, which indicates that we cannot always obtain better convergence accuracy by increasing the density of centers. These results provide a new perspective for performance analysis of RBF network algorithms based on the notion of PE. Simulation studies are included to illustrate the results.
17. Jaddi NS, Abdullah S, Abdul Malek M. Master-Leader-Slave Cuckoo Search with Parameter Control for ANN Optimization and Its Real-World Application to Water Quality Prediction. PLoS One 2017; 12:e0170372. [PMID: 28125609] [PMCID: PMC5268472] [DOI: 10.1371/journal.pone.0170372]
Abstract
Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results.
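The Pa schedule described above amounts to a decay from a maximum value at the first iteration to a minimum value at the last. A minimal sketch of one such schedule follows; the linear form and the parameter values are assumptions, since the abstract only states that Pa starts high and ends at its minimum.
```python
def pa_schedule(t, t_max, pa_max=0.5, pa_min=0.05):
    """Fraction of nests abandoned at iteration t.

    High early for exploration, low late for exploitation. The linear decay is
    an assumed form; the paper only specifies the maximum-to-minimum behavior.
    """
    return pa_max - (pa_max - pa_min) * (t / float(t_max))
```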
Affiliation(s)
- Najmeh Sadat Jaddi
- Data Mining and Optimization Research Group (DMO), Centre for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
- Salwani Abdullah
- Data Mining and Optimization Research Group (DMO), Centre for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
- Marlinda Abdul Malek
- Civil Engineering Department, College of Engineering, Universiti Tenaga Nasional, Jalan IKRAM-UNITEN, Kajang, Selangor, Malaysia
19. Yang J, Ma J. A structure optimization framework for feed-forward neural networks using sparse representation. Knowl Based Syst 2016. [DOI: 10.1016/j.knosys.2016.06.026]
20. Müller AT, Kaymaz AC, Gabernet G, Posselt G, Wessler S, Hiss JA, Schneider G. Sparse Neural Network Models of Antimicrobial Peptide-Activity Relationships. Mol Inform 2016; 35:606-614. [DOI: 10.1002/minf.201600029]
Affiliation(s)
- Alex T. Müller
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, CH-8093 Zurich, Switzerland
- Aral C. Kaymaz
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, CH-8093 Zurich, Switzerland
- Gisela Gabernet
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, CH-8093 Zurich, Switzerland
- Gernot Posselt
- Department of Molecular Biology, Division of Microbiology, Paris Lodron University of Salzburg, Billrothstr. 11, A-5020 Salzburg, Austria
- Silja Wessler
- Department of Molecular Biology, Division of Microbiology, Paris Lodron University of Salzburg, Billrothstr. 11, A-5020 Salzburg, Austria
- Jan A. Hiss
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, CH-8093 Zurich, Switzerland
- Gisbert Schneider
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, CH-8093 Zurich, Switzerland
21. Wu C, Wang L, Shi Z. Financial Distress Prediction Based on Support Vector Machine with a Modified Kernel Function. Journal of Intelligent Systems 2016. [DOI: 10.1515/jisys-2014-0132]
Abstract
For the financial distress prediction model based on support vector machines, there are no theories concerning how to choose a proper kernel function in a data-dependent way. This paper proposes a method of modifying the kernel function that can effectively enhance classification accuracy. We apply an information-geometric method to modifying a kernel that is based on the structure of the Riemannian geometry induced in the input space by the kernel. A conformal transformation of a kernel from the input space to a higher-dimensional feature space enlarges volume elements locally near support vectors that are situated around the classification boundary and reduces the number of support vectors. This paper takes the Gaussian radial basis function as the internal kernel. Additionally, this paper combines the above method with the theories of standard regularization and non-dimensionalization to construct the new model. In the empirical analysis section, the paper adopts the financial data of Chinese listed companies. It uses five groups of experiments with different parameters to compare the classification accuracy. We conclude that the model with the modified kernel function can effectively reduce the number of support vectors and improve the classification accuracy.
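The conformal modification sketched in this abstract is usually written as k~(x, x') = c(x) c(x') k(x, x'), where c(x) is large near the support vectors so that volume elements are magnified around the decision boundary. The code below follows the commonly cited choice of c as a sum of Gaussian bumps centered on the support vectors; it illustrates the general information-geometric idea under that assumption rather than the exact construction used in this paper.
```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Internal Gaussian RBF kernel k(x, y)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def conformal_factor(x, support_vectors, tau=1.0):
    """c(x): larger near the decision boundary, where the support vectors cluster."""
    sv = np.asarray(support_vectors)
    return np.sum(np.exp(-np.sum((sv - x) ** 2, axis=1) / (2.0 * tau ** 2)))

def modified_kernel(x, y, support_vectors, gamma=1.0, tau=1.0):
    """Conformally transformed kernel k~(x, y) = c(x) * c(y) * k(x, y)."""
    return (conformal_factor(x, support_vectors, tau)
            * conformal_factor(y, support_vectors, tau)
            * rbf_kernel(x, y, gamma))
```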
Affiliation(s)
- Chong Wu
- Harbin Institute of Technology, School of Economics and Management, Department of Management Science and Engineering, West Str. 92, Harbin 150001, P. R. China
- Lu Wang
- Harbin Institute of Technology, School of Economics and Management, Department of Management Science and Engineering, West Str. 92, Harbin 150001, P. R. China
- Zhe Shi
- Harbin Institute of Technology, School of Science, Department of Computational Mathematics, West Str. 92, Harbin 150001, P. R. China
22. Arjona-Román J, Hernández-García R, Navarro-Limón I, Coria-Hernández J, Rosas-Mendoza M, Meléndez-Pérez R. Heat Capacity Prediction During Pork Meat Thawing: Application of Artificial Neural Network. J Food Process Eng 2016. [DOI: 10.1111/jfpe.12399]
Affiliation(s)
- J.L. Arjona-Román
- Facultad de Estudios Superiores Cuautitlán Campo IV, Universidad Nacional Autónoma de México, Km 2.5 Carretera Teoloyucan, Cuautitlán Izcalli, Estado de México, C.P. 54740, Mexico
- R.P. Hernández-García
- Facultad de Estudios Superiores Cuautitlán Campo IV, Universidad Nacional Autónoma de México, Km 2.5 Carretera Teoloyucan, Cuautitlán Izcalli, Estado de México, C.P. 54740, Mexico
- I. Navarro-Limón
- Facultad de Estudios Superiores Cuautitlán Campo IV, Universidad Nacional Autónoma de México, Km 2.5 Carretera Teoloyucan, Cuautitlán Izcalli, Estado de México, C.P. 54740, Mexico
- J. Coria-Hernández
- Facultad de Estudios Superiores Cuautitlán Campo IV, Universidad Nacional Autónoma de México, Km 2.5 Carretera Teoloyucan, Cuautitlán Izcalli, Estado de México, C.P. 54740, Mexico
- M.E. Rosas-Mendoza
- Facultad de Estudios Superiores Cuautitlán Campo IV, Universidad Nacional Autónoma de México, Km 2.5 Carretera Teoloyucan, Cuautitlán Izcalli, Estado de México, C.P. 54740, Mexico
- R. Meléndez-Pérez
- Facultad de Estudios Superiores Cuautitlán Campo IV, Universidad Nacional Autónoma de México, Km 2.5 Carretera Teoloyucan, Cuautitlán Izcalli, Estado de México, C.P. 54740, Mexico
23. Hussain S, Basu A. Multiclass Classification by Adaptive Network of Dendritic Neurons with Binary Synapses Using Structural Plasticity. Front Neurosci 2016; 10:113. [PMID: 27065782] [PMCID: PMC4814530] [DOI: 10.3389/fnins.2016.00113]
Abstract
The development of power-efficient neuromorphic devices presents the challenge of designing spike pattern classification algorithms which can be implemented on low-precision hardware and can also achieve state-of-the-art performance. In our pursuit of meeting this challenge, we present a pattern classification model which uses a sparse connection matrix and exploits the mechanism of nonlinear dendritic processing to achieve high classification accuracy. A rate-based structural learning rule for multiclass classification is proposed which modifies a connectivity matrix of binary synaptic connections by choosing the best "k" out of "d" inputs to make connections on every dendritic branch (k << d). Because learning only modifies connectivity, the model is well suited for implementation in neuromorphic systems using address-event representation (AER). We develop an ensemble method which combines several dendritic classifiers to achieve enhanced generalization over individual classifiers. We have two major findings: (1) Our results demonstrate that an ensemble created with classifiers comprising a moderate number of dendrites performs better than both ensembles of perceptrons and of complex dendritic trees. (2) In order to determine the moderate number of dendrites required for a specific classification problem, a two-step solution is proposed. First, an adaptive approach is proposed which scales the relative size of the dendritic trees of neurons for each class. It works by progressively adding dendrites with a fixed number of synapses to the network, thereby allocating synaptic resources as per the complexity of the given problem. As a second step, theoretical capacity calculations are used to convert each neuronal dendritic tree to its optimal topology where dendrites of each class are assigned different numbers of synapses. The performance of the model is evaluated on classification of handwritten digits from the benchmark MNIST dataset and compared with other spike classifiers. We show that our system can achieve classification accuracy within 1-2% of other reported spike-based classifiers while using far fewer synaptic resources (only 7%) compared to those used by other methods. Further, an ensemble classifier created with adaptively learned sizes can attain an accuracy of 96.4%, which is on par with the best reported performance of spike-based classifiers. Moreover, the proposed method achieves this by using about 20% of the synapses used by other spike algorithms. We also present results of applying our algorithm to classify the MNIST-DVS dataset collected from a real spike-based image sensor and show results comparable to the best reported ones (88.1% accuracy). For VLSI implementations, we show that the reduced synaptic memory can save up to 4X area compared to conventional crossbar topologies. Finally, we also present a biologically realistic spike-based version for calculating the correlations required by the structural learning rule and demonstrate the correspondence between the rate-based and spike-based methods of learning.
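The central structural-plasticity step, choosing the best "k" of "d" candidate inputs for each dendritic branch, can be approximated by a simple correlation-based selection over rate-coded activity. The sketch below is a hypothetical simplification of that idea; it is not the authors' full learning rule or its spike-based version.
```python
import numpy as np

def select_branch_inputs(inputs, target, k):
    """Pick the k of d inputs whose activity correlates best with the target signal.

    inputs: (n_samples, d) presynaptic activity; target: (n_samples,) class signal.
    Returns the indices of the k chosen afferents (binary connectivity, no weights).
    """
    x = inputs - inputs.mean(axis=0)
    t = target - target.mean()
    denom = np.linalg.norm(x, axis=0) * np.linalg.norm(t) + 1e-12
    corr = (x.T @ t) / denom                  # correlation of each input with target
    return np.argsort(corr)[-k:]              # indices of the best k (k << d)
```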
Affiliation(s)
- Arindam Basu
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
24. Qiao J, Li F, Han H, Li W. Constructive algorithm for fully connected cascade feedforward neural networks. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.12.003]
25. Gawehn E, Hiss JA, Schneider G. Deep Learning in Drug Discovery. Mol Inform 2015; 35:3-14. [PMID: 27491648] [DOI: 10.1002/minf.201501008]
Abstract
Artificial neural networks had their first heyday in molecular informatics and drug discovery approximately two decades ago. Currently, we are witnessing renewed interest in adapting advanced neural network architectures for pharmaceutical research by borrowing from the field of "deep learning". Compared with some of the other life sciences, their application in drug discovery is still limited. Here, we provide an overview of this emerging field of molecular informatics, present the basic concepts of prominent deep learning methods and offer motivation to explore these techniques for their usefulness in computer-assisted drug discovery and design. We specifically emphasize deep neural networks, restricted Boltzmann machine networks and convolutional networks.
Affiliation(s)
- Erik Gawehn
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, CH-8093 Zurich, Switzerland
- Jan A Hiss
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, CH-8093 Zurich, Switzerland
- Gisbert Schneider
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, CH-8093 Zurich, Switzerland
26. Jaddi NS, Abdullah S, Hamdan AR. Optimization of neural network model using modified bat-inspired algorithm. Appl Soft Comput 2015. [DOI: 10.1016/j.asoc.2015.08.002]
27. Chen C, Yan X. Optimization of a multilayer neural network by using minimal redundancy maximal relevance-partial mutual information clustering with least square regression. IEEE Transactions on Neural Networks and Learning Systems 2015; 26:1177-1187. [PMID: 25055386] [DOI: 10.1109/tnnls.2014.2334599]
Abstract
In this paper, an optimized multilayer feed-forward network (MLFN) is developed to construct a soft sensor for controlling naphtha dry point. To overcome the two main flaws in the structure and weight of MLFNs, which are trained by a back-propagation learning algorithm, minimal redundancy maximal relevance-partial mutual information clustering (mPMIc) integrated with least square regression (LSR) is proposed to optimize the MLFN. The mPMIc can determine the location of hidden layer nodes using information in the hidden and output layers, as well as remove redundant hidden layer nodes. These selected nodes are highly related to output data, but are minimally correlated with other hidden layer nodes. The weights between the selected hidden layer nodes and output layer are then updated through LSR. When the redundant nodes from the hidden layer are removed, the ideal MLFN structure can be obtained according to the test error results. In actual applications, the naphtha dry point must be controlled accurately because it strongly affects the production yield and the stability of subsequent operational processes. The mPMIc-LSR MLFN with a simple network size performs better than other improved MLFN variants and existing efficient models.
28. Song W, Chen P, Cheol Park S. Application of a staged learning-based resource allocation network to automatic text categorization. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2014.07.017]
29. Chen C, Yan X. Burning Side Reaction Model of the INVISTA Oxidation Process Using a Radial Basis Function Neural Network Integrated with Partial Mutual Information-Least Square Regression. Journal of Chemical Engineering of Japan 2015. [DOI: 10.1252/jcej.14we212]
Affiliation(s)
- Chao Chen
- Key Laboratory of Advanced Control and Optimization for Chemical Processes of Ministry of Education, East China University of Science and Technology
- Xuefeng Yan
- Key Laboratory of Advanced Control and Optimization for Chemical Processes of Ministry of Education, East China University of Science and Technology
30. Fernandes BJT, Cavalcanti GDC, Ren TI. Constructive autoassociative neural network for facial recognition. PLoS One 2014; 9:e115967. [PMID: 25542018] [PMCID: PMC4277427] [DOI: 10.1371/journal.pone.0115967]
Abstract
Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature.
Affiliation(s)
- Tsang I. Ren
- Centro de Informática, Universidade Federal de Pernambuco, Recife-PE, Brazil
31. Unluturk S, Pelvan M, Unluturk MS. The discrimination of raw and UHT milk samples contaminated with penicillin G and ampicillin using image processing neural network and biocrystallization methods. J Food Compost Anal 2013. [DOI: 10.1016/j.jfca.2013.06.007]
32. Lu TC, Yu GR, Juang JC. Quantum-based algorithm for optimizing artificial neural networks. IEEE Transactions on Neural Networks and Learning Systems 2013; 24:1266-1278. [PMID: 24808566] [DOI: 10.1109/tnnls.2013.2249089]
Abstract
This paper presents a quantum-based algorithm for evolving artificial neural networks (ANNs). The aim is to design an ANN with few connections and high classification performance by simultaneously optimizing the network structure and the connection weights. Unlike most previous studies, the proposed algorithm uses quantum bit representation to codify the network. As a result, the connectivity bits do not indicate the actual links but the probability of the existence of the connections, thus alleviating mapping problems and reducing the risk of throwing away a potential candidate. In addition, in the proposed model, each weight space is decomposed into subspaces in terms of quantum bits. Thus, the algorithm performs a region by region exploration, and evolves gradually to find promising subspaces for further exploitation. This is helpful to provide a set of appropriate weights when evolving the network structure and to alleviate the noisy fitness evaluation problem. The proposed model is tested on four benchmark problems, namely breast cancer and iris, heart, and diabetes problems. The experimental results show that the proposed algorithm can produce compact ANN structures with good generalization ability compared to other algorithms.
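The representational trick described here, connectivity bits that encode the probability that a connection exists rather than the connection itself, can be illustrated by sampling binary masks from per-connection probabilities. The sketch below shows only that sampling step under assumed names; the paper's quantum-bit rotations and weight-subspace decomposition are not reproduced.
```python
import numpy as np

def sample_connectivity(theta, rng=None):
    """Sample a binary connection mask from quantum-bit angles.

    Each angle theta[i, j] defines an amplitude pair (cos, sin); the probability
    that connection (i, j) exists is taken as sin(theta)^2, so a candidate
    topology is one random draw from the encoded distribution.
    """
    rng = np.random.default_rng(rng)
    prob = np.sin(theta) ** 2
    return (rng.random(theta.shape) < prob).astype(int)

# Example: angles near pi/2 make a connection very likely, angles near 0 unlikely.
theta = np.array([[np.pi / 2, 0.1], [np.pi / 4, 1.0]])
print(sample_connectivity(theta, rng=0))
```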
33. Han HG, Wang LD, Qiao JF. Efficient self-organizing multilayer neural network for nonlinear system modeling. Neural Netw 2013; 43:22-32. [DOI: 10.1016/j.neunet.2013.01.015]
34. Vuković N, Miljković Z. A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation. Neural Netw 2013; 46:210-26. [PMID: 23811384] [DOI: 10.1016/j.neunet.2013.06.004]
Abstract
A radial basis function (RBF) neural network is constructed from a certain number of RBF neurons, and these networks are among the most widely used neural networks for modeling various nonlinear problems in engineering. A conventional RBF neuron is usually based on a Gaussian activation function with a single width for each activation function. This feature restricts neuron performance when modeling complex nonlinear problems. To accommodate the limitation of a single scale, this paper presents a neural network with a similar yet different activation function, the hyper basis function (HBF). The HBF allows different scaling of the input dimensions to provide a better generalization property when dealing with complex nonlinear problems in engineering practice. The HBF is based on a generalization of the Gaussian neuron that applies a Mahalanobis-like distance as the distance metric between an input training sample and a prototype vector. Compared to the RBF, the HBF neuron has more parameters to optimize, but an HBF neural network needs fewer HBF neurons to memorize the relationship between input and output sets in order to achieve a good generalization property. However, recent research results on HBF neural network performance have shown that an optimal way of constructing this type of neural network is needed; this paper addresses this issue and modifies a sequential learning algorithm for HBF neural networks that exploits the concept of a neuron's significance and allows growing and pruning of HBF neurons during the learning process. An extensive experimental study shows that an HBF neural network, trained with the developed learning algorithm, achieves lower prediction error and a more compact network.
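A hyper basis function neuron generalizes the single-width Gaussian by putting a scaling matrix inside the distance term. The sketch below uses a diagonal scaling, one width per input dimension, which is one common reading of a Mahalanobis-like metric; the paper's neurons and its growing-and-pruning learning algorithm are more general than this.
```python
import numpy as np

def hbf_activation(x, center, widths):
    """HBF neuron output exp(-(x-c)^T S^{-1} (x-c)) with S = diag(widths^2).

    Unlike a standard RBF neuron with one shared width, each input dimension
    gets its own scale, so the receptive field can be elongated per axis.
    """
    d = (x - center) / widths                 # per-dimension scaled differences
    return np.exp(-np.dot(d, d))

# Example: the same offset matters less along a dimension with a larger width.
x = np.array([1.0, 1.0])
print(hbf_activation(x, center=np.zeros(2), widths=np.array([0.5, 2.0])))
```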
Affiliation(s)
- Najdan Vuković
- University of Belgrade - Faculty of Mechanical Engineering, Innovation Center, Kraljice Marije 16, 11120 Belgrade 35, Serbia
35. Windeatt T, Zor C. Ensemble pruning using spectral coefficients. IEEE Transactions on Neural Networks and Learning Systems 2013; 24:673-678. [PMID: 24808387] [DOI: 10.1109/tnnls.2013.2239659]
Abstract
Ensemble pruning aims to increase efficiency by reducing the number of base classifiers, without sacrificing and preferably enhancing performance. In this brief, a novel pruning paradigm is proposed. Two class supervised learning problems are pruned using a combination of first- and second-order Walsh coefficients. A comparison is made with other ordered aggregation pruning methods, using multilayer perceptron base classifiers. The Walsh pruning method is analyzed with the help of a model that shows the relationship between second-order coefficients and added classification error with respect to Bayes error.
36. Han HG, Qiao JF. A structure optimisation algorithm for feedforward neural network construction. Neurocomputing 2013. [DOI: 10.1016/j.neucom.2012.07.023]
37. Yang SH, Chen YP. An evolutionary constructive and pruning algorithm for artificial neural networks and its prediction applications. Neurocomputing 2012. [DOI: 10.1016/j.neucom.2012.01.024]
38. Stuhlsatz A, Lippel J, Zielke T. Feature extraction with deep neural networks by a generalized discriminant analysis. IEEE Transactions on Neural Networks and Learning Systems 2012; 23:596-608. [PMID: 24805043] [DOI: 10.1109/tnnls.2012.2183645]
Abstract
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
39. Ahmed SU, Shahjahan M, Murase K. A Lempel-Ziv Complexity-Based Neural Network Pruning Algorithm. Int J Neural Syst 2011; 21:427-41. [DOI: 10.1142/s0129065711002936]
Abstract
This paper presents a pruning method for artificial neural networks (ANNs) based on the 'Lempel-Ziv complexity' (LZC) measure. We call this method the 'silent pruning algorithm' (SPA). The term 'silent' is used in the sense that SPA prunes ANNs without causing much disturbance during the network training. SPA prunes hidden units during the training process according to their ranks computed from LZC. LZC extracts the number of unique patterns in a time sequence obtained from the output of a hidden unit and a smaller value of LZC indicates higher redundancy of a hidden unit. SPA has a great resemblance to biological brains since it encourages higher complexity during the training process. SPA is similar to, yet different from, existing pruning algorithms. The algorithm has been tested on a number of challenging benchmark problems in machine learning, including cancer, diabetes, heart, card, iris, glass, thyroid, and hepatitis problems. We compared SPA with other pruning algorithms and we found that SPA is better than the 'random deletion algorithm' (RDA) which prunes hidden units randomly. Our experimental results show that SPA can simplify ANNs with good generalization ability.
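The ranking signal used by SPA can be approximated by counting the distinct phrases in a binarized sequence of a hidden unit's outputs, in the spirit of a Lempel-Ziv parse. The sketch below uses a simple LZ78-style phrase count as that proxy; it illustrates the general measure rather than the exact LZC normalization used in the paper, and units with the lowest scores would be the first pruning candidates.
```python
def lz_phrase_count(bits):
    """Count distinct phrases in a greedy LZ78-style parse of a binary string."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:        # new phrase: record it and start over
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def rank_hidden_units(hidden_outputs, threshold=0.5):
    """Rank hidden units by complexity of their binarized output sequences.

    hidden_outputs: list of per-unit output sequences over the training samples.
    Lower counts indicate more redundant units (ranked first for pruning).
    """
    sequences = ["".join("1" if v > threshold else "0" for v in seq)
                 for seq in hidden_outputs]
    complexities = [lz_phrase_count(s) for s in sequences]
    return sorted(range(len(complexities)), key=lambda i: complexities[i])
```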
Affiliation(s)
- Sultan Uddin Ahmed
- Department of Electronics and Communication Engineering, Khulna University of Engineering and Technology, Khulna-9203, Bangladesh
- Md. Shahjahan
- Department of Electrical and Electronic Engineering, Khulna University of Engineering and Technology, Khulna-9203, Bangladesh
- Kazuyuki Murase
- Department of Human and Artificial Intelligence Systems, University of Fukui, Bunkyo 3-9-1, Fukui-910-8705, Japan
40. Oong TH, Isa NAM. Adaptive Evolutionary Artificial Neural Networks for Pattern Classification. IEEE Transactions on Neural Networks 2011; 22:1823-36. [DOI: 10.1109/tnn.2011.2169426]
41. Han HG, Qiao JF. Adaptive dissolved oxygen control based on dynamic structure neural network. Appl Soft Comput 2011. [DOI: 10.1016/j.asoc.2011.02.014]
42. Han HG, Chen QL, Qiao JF. An efficient self-organizing RBF neural network for water quality prediction. Neural Netw 2011; 24:717-25. [PMID: 21612889] [DOI: 10.1016/j.neunet.2011.04.006]
Abstract
This paper presents a flexible structure Radial Basis Function (RBF) neural network (FS-RBFNN) and its application to water quality prediction. The FS-RBFNN can vary its structure dynamically in order to maintain the prediction accuracy. The hidden neurons in the RBF neural network can be added or removed online based on the neuron activity and mutual information (MI), to achieve the appropriate network complexity and maintain overall computational efficiency. The convergence of the algorithm is analyzed in both the dynamic process phase and the phase following the modification of the structure. The proposed FS-RBFNN has been tested and compared to other algorithms by applying it to the problem of identifying a nonlinear dynamic system. Experimental results show that the FS-RBFNN can be used to design an RBF structure which has fewer hidden neurons; the training time is also much faster. The algorithm is applied for predicting water quality in the wastewater treatment process. The results demonstrate its effectiveness.
Affiliation(s)
- Hong-Gui Han
- College of Electronic and Control Engineering, Beijing University of Technology, Beijing, China
43. HCBPM: An Idea toward a Social Learning Environment for Humanoid Robot. Journal of Robotics 2010. [DOI: 10.1155/2010/241785]
Abstract
To advance robotics toward real-world applications, a growing body of research has focused on the development of control systems for humanoid robots in recent years. Several approaches have been proposed to support the learning stage of such controllers, where the robot can learn new behaviors by observing and/or receiving direct guidance from a human or even another robot. These approaches require dynamic learning and memorization techniques, which the robot can use to reform and update its internal systems continuously while learning new behaviors. Against this background, this study investigates a new approach to the development of an incremental learning and memorization model. This approach was inspired by the principles of neuroscience, and the developed model was named “Hierarchical Constructive Backpropagation with Memory” (HCBPM). The validity of the model was tested by teaching a humanoid robot to recognize a group of objects through natural interaction. The experimental results indicate that the proposed model efficiently enhances real-time machine learning in general and can be used to establish an environment suitable for social learning between the robot and the user in particular.
44. Islam M, Sattar M, Amin M, Yao X, Murase K. A New Constructive Algorithm for Architectural and Functional Adaptation of Artificial Neural Networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 2009; 39:1590-605. [DOI: 10.1109/tsmcb.2009.2021849]