1
Wang J, Lu S, Wang SH, Zhang YD. A review on extreme learning machine. Multimedia Tools and Applications 2022; 81:41611-41660. [DOI: 10.1007/s11042-021-11007-7]
Abstract
Extreme learning machine (ELM) is a training algorithm for the single hidden layer feedforward neural network (SLFN) that converges much faster than traditional methods and yields promising performance. In this paper, we present a comprehensive review of ELM. Firstly, we focus on the theoretical analysis, including universal approximation theory and generalization. Then, the various improvements that help ELM work better in terms of stability, efficiency, and accuracy are listed. Because of its outstanding performance, ELM has been successfully applied in many real-time learning tasks for classification, clustering, and regression. Besides, we report the applications of ELM in medical imaging: MRI, CT, and mammography. The controversies surrounding ELM are also discussed. We aim to report these advances and identify future perspectives.
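As a rough illustration of the training procedure this review covers, the sketch below implements the basic ELM step in Python with NumPy: hidden-layer weights and biases are drawn at random and kept fixed, and the output weights are solved in closed form via the Moore-Penrose pseudoinverse. The function names and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

def elm_train(X, T, n_hidden=100, seed=0):
    """Train a basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, n_hidden))   # random input weights (kept fixed)
    b = rng.standard_normal(n_hidden)        # random biases (kept fixed)
    H = np.tanh(X @ W + b)                   # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T             # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Apply the trained ELM to new inputs."""
    return np.tanh(X @ W + b) @ beta
```

Replacing the plain pseudoinverse with a ridge-regularized solve is a common variant when the hidden-layer matrix is ill-conditioned.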
2
Chen W, Chen X, Lin Y. Homogeneous ensemble extreme learning machine autoencoder with mutual representation learning and manifold regularization for medical datasets. Applied Intelligence 2022. [DOI: 10.1007/s10489-022-04284-8]
3
Lu S, Liu S, Wang SH, Zhang YD. Cerebral microbleed detection via convolutional neural network and extreme learning machine. Front Comput Neurosci 2021; 15:738885. [PMID: 34566615] [PMCID: PMC8461250] [DOI: 10.3389/fncom.2021.738885]
Abstract
Aim: Cerebral microbleeds (CMBs) are small, round dots distributed throughout the brain that contribute to stroke, dementia, and death. Early diagnosis is important for treatment. Method: In this paper, a new CMB detection approach was put forward for brain magnetic resonance images. We leveraged a sliding window to obtain training and testing samples from the input brain images. Then, a 13-layer convolutional neural network (CNN) was designed and trained. Finally, we proposed to substitute the last several layers of the CNN with an extreme learning machine (ELM) for detection, and carried out an experiment to decide the optimal number of layers to be substituted. The parameters of the ELM were optimized by a heuristic method, the bat algorithm. The evaluation of our approach was based on hold-out validation, and the final predictions were generated by averaging the performance of five runs. Results: Through the experiments, we found that replacing the last five layers with the ELM yielded the optimal results. Conclusion: A comparison with state-of-the-art algorithms revealed that our method is accurate in CMB detection.
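The 13-layer CNN and the bat-algorithm tuning are specific to this paper; the minimal sketch below only illustrates the general pattern of fitting an ELM head on activations extracted from a truncated CNN, assuming the feature matrix has already been computed. All names and the ridge parameter are illustrative.

```python
import numpy as np

def elm_head_fit(features, labels, n_hidden=500, ridge=1e-3, seed=0):
    """Fit an ELM classifier on pre-extracted CNN activations.
    Targets are one-hot encoded; output weights come from a ridge-regularized solve."""
    rng = np.random.default_rng(seed)
    n, d = features.shape
    classes = np.unique(labels)
    T = (labels[:, None] == classes[None, :]).astype(float)  # one-hot targets
    W = rng.standard_normal((d, n_hidden))                   # fixed random weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(features @ W + b)
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
    return W, b, beta, classes

def elm_head_predict(features, model):
    """Return the predicted class label for each feature row."""
    W, b, beta, classes = model
    scores = np.tanh(features @ W + b) @ beta
    return classes[scores.argmax(axis=1)]
```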
Affiliation(s)
- Siyuan Lu, School of Informatics, University of Leicester, Leicester, United Kingdom
- Shuaiqi Liu, College of Electronic and Information Engineering, Hebei University, Baoding, China
- Shui-Hua Wang, School of Mathematics and Actuarial Science, University of Leicester, Leicester, United Kingdom
- Yu-Dong Zhang, School of Informatics, University of Leicester, Leicester, United Kingdom
4
Luo J, Wong CM, Vong CM. Multinomial Bayesian extreme learning machine for sparse and accurate classification model. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.09.061]
5
Ren LR, Gao YL, Liu JX, Shang J, Zheng CH. Correntropy induced loss based sparse robust graph regularized extreme learning machine for cancer classification. BMC Bioinformatics 2020; 21:445. [PMID: 33028187] [PMCID: PMC7542897] [DOI: 10.1186/s12859-020-03790-1]
Abstract
Background: As a machine learning method with high performance and excellent generalization ability, the extreme learning machine (ELM) is gaining popularity in various studies, and ELM-based methods have been proposed for many different fields. However, robustness to noise and outliers remains the main problem affecting ELM performance. Results: In this paper, an integrated method named correntropy induced loss based sparse robust graph regularized extreme learning machine (CSRGELM) is proposed. The introduction of the correntropy induced loss improves the robustness of ELM and weakens the negative effects of noise and outliers. By using the L2,1-norm to constrain the output weight matrix, we tend to obtain a sparse output weight matrix and thus a simpler single hidden layer feedforward neural network model. By introducing graph regularization to preserve the local structural information of the data, the classification performance of the new method is further improved. In addition, we design an iterative optimization method based on the idea of half-quadratic optimization to solve the non-convex problem of CSRGELM. Conclusions: The classification results on benchmark datasets show that CSRGELM obtains better classification results than other methods. More importantly, we also apply the new method to the classification of cancer samples and achieve good classification performance.
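The full CSRGELM objective (correntropy induced loss, L2,1 sparsity, and graph regularization, solved by half-quadratic iterations) is beyond a short sketch, but the graph-regularization component admits a closed form when paired with a squared loss and a ridge term. The sketch below shows that simplified variant on a k-NN graph; it is an assumption-laden stand-in, not the authors' algorithm, and all names and parameters are illustrative.

```python
import numpy as np

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian of a symmetrized k-NN graph with binary weights."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                          # exclude self-neighbors
    idx = np.argsort(d2, axis=1)[:, :k]                   # k nearest neighbors per row
    W = np.zeros_like(d2)
    rows = np.repeat(np.arange(X.shape[0]), k)
    W[rows, idx.ravel()] = 1.0
    W = np.maximum(W, W.T)                                # symmetrize the graph
    return np.diag(W.sum(1)) - W                          # L = D - W

def graph_reg_elm(X, T, n_hidden=200, lam=1e-2, mu=1e-2, seed=0):
    """ELM output weights with a ridge term plus graph (manifold) regularization:
    beta = (H^T H + lam*I + mu*H^T L H)^-1 H^T T."""
    rng = np.random.default_rng(seed)
    Win = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ Win + b)
    L = knn_laplacian(X)
    A = H.T @ H + lam * np.eye(n_hidden) + mu * H.T @ L @ H
    beta = np.linalg.solve(A, H.T @ T)
    return Win, b, beta
```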
Affiliation(s)
- Liang-Rui Ren, School of Computer Science, Qufu Normal University, Rizhao, 276826, China
- Ying-Lian Gao, Qufu Normal University Library, Qufu Normal University, Rizhao, 276826, China
- Jin-Xing Liu, School of Computer Science, Qufu Normal University, Rizhao, 276826, China
- Junliang Shang, School of Computer Science, Qufu Normal University, Rizhao, 276826, China
- Chun-Hou Zheng, School of Computer Science, Qufu Normal University, Rizhao, 276826, China; College of Computer Science and Technology, Anhui University, Hefei, 230601, China
6
Learning local discriminative representations via extreme learning machine for machine fault diagnosis. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.05.021]
7
Qing Y, Zeng Y, Li Y, Huang GB. Deep and wide feature based extreme learning machine for image classification. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.06.110]
8
Zeng Y, Chen J, Li Y, Qing Y, Huang GB. Clustering via adaptive and locality-constrained graph learning and unsupervised ELM. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.03.045]
9
Lu S, Wang SH, Zhang YD. Detection of abnormal brain in MRI via improved AlexNet and ELM optimized by chaotic bat algorithm. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05082-4]
10
11
Zeng Y, Li Y, Chen J, Jia X, Huang GB. ELM embedded discriminative dictionary learning for image classification. Neural Netw 2019; 123:331-342. [PMID: 31901564] [DOI: 10.1016/j.neunet.2019.11.015]
Abstract
Dictionary learning is a widely adopted approach to image classification. Existing methods focus either on finding a dictionary that produces discriminative sparse representations, or on enforcing priors that best describe the dataset distribution. In many cases, the dataset is small, with large intra-class variability and a nondiscriminative feature space. In this work we propose a simple and effective framework called ELM-DDL to address these issues. Specifically, we represent input features with an Extreme Learning Machine (ELM) with an orthogonal output projection, which enables diverse representations in the nonlinear hidden space and task-specific feature learning in the output space. The embeddings are further regularized via a maximum margin criterion (MMC) to maximize the inter-class variance and minimize the intra-class variance. For dictionary learning, we design a novel weighted class-specific ℓ1,2 norm to regularize the sparse coding vectors, which promotes uniformity of the sparse patterns of samples belonging to the same class and suppresses support overlaps between different classes. We show that such regularization is robust, discriminative, and easy to optimize. The proposed method is combined with a sparse representation classifier (SRC) and evaluated on benchmark datasets. Results show that our approach achieves state-of-the-art performance compared to other dictionary learning methods.
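As a small aid to reading the abstract, the snippet below computes the maximum margin criterion used here to regularize the ELM embeddings: the trace of the between-class scatter minus the trace of the within-class scatter. It is a generic sketch with illustrative names; the dictionary learning and the weighted class-specific ℓ1,2 norm are not shown.

```python
import numpy as np

def mmc_score(Z, y):
    """Maximum margin criterion tr(S_b) - tr(S_w) for embeddings Z with labels y.
    Larger values mean classes are better separated and internally more compact."""
    mu = Z.mean(axis=0)
    sb = 0.0   # trace of between-class scatter
    sw = 0.0   # trace of within-class scatter
    for c in np.unique(y):
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        sb += len(Zc) * np.sum((mc - mu) ** 2)
        sw += np.sum((Zc - mc) ** 2)
    return sb - sw
```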
Affiliation(s)
- Yijie Zeng, School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore
- Yue Li, School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore
- Jichao Chen, School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore
- Xiaofan Jia, School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore
- Guang-Bin Huang, School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore
12
Xie J, Liu S, Dai H. A distributed semi-supervised learning algorithm based on manifold regularization using wavelet neural network. Neural Netw 2019; 118:300-309. [PMID: 31330270] [DOI: 10.1016/j.neunet.2018.10.014]
Abstract
This paper proposes a distributed semi-supervised learning (D-SSL) algorithm for problems in which the training samples are extremely large-scale and located on distributed nodes over a communication network. The training data of each node consist of labeled samples and unlabeled samples whose output values or labels are unknown. The nodes communicate in a distributed way: each node has access only to its own data and can exchange local information only with its neighboring nodes. In some scenarios, these distributed data cannot be processed centrally, so D-SSL problems cannot be solved by traditional centralized semi-supervised learning (SSL) algorithms. The state-of-the-art D-SSL algorithm, Distributed Laplacian Regularized Least Squares (D-LapRLS), is a kernel-based algorithm. It requires estimating the global Euclidean Distance Matrix (EDM) over all samples, which is time-consuming, especially when the training set is large. To solve D-SSL problems and overcome this common drawback of kernel-based D-SSL algorithms, we propose a novel Manifold Regularization (MR) based D-SSL algorithm using a Wavelet Neural Network (WNN) and the Zero-Gradient-Sum (ZGS) distributed optimization strategy. Accordingly, each node is assigned an individual WNN with the same basis functions. To initialize the proposed D-SSL algorithm, we also propose a centralized MR-based SSL algorithm using a WNN. We denote the proposed SSL and D-SSL algorithms as Laplacian WNN (LapWNN) and distributed LapWNN (D-LapWNN), respectively. The D-LapWNN algorithm works in a fully distributed fashion by using the ZGS strategy, whose convergence is guaranteed by the Lyapunov method. During the learning process, each node exchanges only local coefficients with its neighbors rather than raw data, which makes D-LapWNN a privacy-preserving method. Finally, several illustrative simulations are presented to show the efficiency and advantages of the proposed algorithm.
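The distributed ZGS update and the wavelet network itself are specific to this paper; the sketch below only shows a centralized manifold-regularized least-squares step on a fixed basis expansion, the kind of objective LapWNN optimizes before it is distributed. Variable names and regularization weights are illustrative assumptions.

```python
import numpy as np

def laplacian_rls(Phi, y_labeled, labeled_idx, L, lam=1e-2, gam=1e-2):
    """Centralized manifold-regularized least squares on a fixed basis expansion.
    Phi: (n, m) basis outputs for ALL samples (labeled and unlabeled).
    y_labeled: targets of the labeled subset; labeled_idx: their row indices in Phi.
    L: (n, n) graph Laplacian built over all samples.
    Solves min_w ||Phi_l w - y||^2 + lam*||w||^2 + gam * w^T Phi^T L Phi w."""
    Pl = Phi[labeled_idx]                             # basis rows of labeled samples
    A = Pl.T @ Pl + lam * np.eye(Phi.shape[1]) + gam * Phi.T @ L @ Phi
    w = np.linalg.solve(A, Pl.T @ y_labeled)
    return w                                          # predict new data with Phi_new @ w
```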
Affiliation(s)
- Jin Xie, School of Mathematics and Statistics, Xidian University, Xi'an 710071, PR China
- Sanyang Liu, School of Mathematics and Statistics, Xidian University, Xi'an 710071, PR China
- Hao Dai, School of Aerospace Science and Technology, Xidian University, Xi'an 710071, PR China
13
Zhang Y, Wu J, Zhou C, Cai Z, Yang J, Yu PS. Multi-view fusion with extreme learning machine for clustering. ACM Transactions on Intelligent Systems and Technology 2019. [DOI: 10.1145/3340268]
Abstract
Unlabeled, multi-view data present a considerable challenge in many real-world data analysis tasks. These data are worth exploring because they often contain complementary information that improves the quality of the analysis results. Clustering multi-view data is particularly challenging, as revealing the complex data structures across many feature spaces demands discriminative, task-specific features, and performance suffers when too few of these features are present. Extreme learning machines (ELMs) are an emerging form of learning model that has shown outstanding representation ability and superior performance in a range of learning tasks. Motivated by this advancement, we have developed a novel multi-view fusion clustering framework based on an ELM, called MVEC. MVEC learns embeddings from each view of the data via the ELM network, then constructs a single unified embedding according to the correlations and dependencies between the embeddings, automatically weighting the contribution of each. This process exposes the underlying clustering structures embedded within multi-view data with a high degree of accuracy. A simple yet efficient solution is also provided to solve the optimization problem within MVEC. Experiments and comparisons on eight benchmarks from different domains confirm MVEC's clustering accuracy.
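MVEC learns the unified embedding and the view weights jointly; as a much simpler, hedged approximation of the idea, the sketch below computes an ELM-style random-feature embedding per view, concatenates the (here fixed-weight) embeddings, and clusters the result with k-means. Function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def multiview_elm_cluster(views, n_clusters, n_hidden=100, weights=None, seed=0):
    """Cluster multi-view data via per-view ELM-style random embeddings.
    views: list of (n, d_v) arrays, one per view, with the same row order."""
    rng = np.random.default_rng(seed)
    if weights is None:
        weights = np.ones(len(views)) / len(views)     # equal view weights by default
    embeddings = []
    for Xv, wv in zip(views, weights):
        W = rng.standard_normal((Xv.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        embeddings.append(wv * np.tanh(Xv @ W + b))    # weighted view embedding
    Z = np.hstack(embeddings)                          # unified representation
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(Z)
```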
Affiliation(s)
- Jia Wu, Macquarie University, Sydney, NSW, Australia
- Chuan Zhou, Chinese Academy of Sciences, Beijing, China
- Zhihua Cai, China University of Geosciences, Wuhan, Hubei, China
- Jian Yang, Macquarie University, Sydney, NSW, Australia
14
Automatic optic disc detection using low-rank representation based semi-supervised extreme learning machine. International Journal of Machine Learning and Cybernetics 2019. [DOI: 10.1007/s13042-019-00939-0]