1

2
Cervantes J, Garcia-Lamont F, Rodríguez-Mazahua L, Lopez A. A comprehensive survey on support vector machine classification: Applications, challenges and trends. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.118]
3
Van M, Hoang DT, Kang HJ. Bearing Fault Diagnosis Using a Particle Swarm Optimization-Least Squares Wavelet Support Vector Machine Classifier. Sensors (Basel) 2020;20(12):3422. [PMID: 32560493; PMCID: PMC7349084; DOI: 10.3390/s20123422]
Abstract
The bearing is one of the key components of a rotating machine, so monitoring its health condition is of paramount importance. This paper develops a novel particle swarm optimization-least squares wavelet support vector machine (PSO-LSWSVM) classifier for bearing fault diagnosis, designed by combining PSO, a least-squares procedure, and a support vector machine (SVM) based on a new wavelet kernel function. Bearing fault classification is cast as a pattern recognition problem with three stages of data processing. First, an information-rich dataset is built by extracting features from signals decomposed by nonlocal means (NLM) and empirical mode decomposition (EMD). Second, a minimum-redundancy maximum-relevance (mRMR) method selects the subset of features that yields optimal performance. Third, a novel classifier, LSWSVM, is proposed with the aid of PSO to provide higher classification accuracy. The key innovation of this work is a new classifier built on a new type of wavelet kernel, which increases the classification precision of bearing fault diagnosis. The merits of the proposed approach are demonstrated on a benchmark bearing dataset through a comprehensive comparison procedure.
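The paper's exact wavelet kernel is not reproduced in this entry, so as an illustrative stand-in the sketch below uses a standard Morlet-type product wavelet kernel (the scale parameter `a` and function names are assumptions, not the paper's):

```python
import numpy as np

def wavelet_kernel(x, z, a=1.0):
    """Morlet-type product wavelet kernel: a product over input dimensions
    of cos(1.75*d) * exp(-d^2/2), with d the scaled coordinate difference.
    Illustrative stand-in for the paper's wavelet kernel."""
    d = (np.asarray(x, dtype=float) - np.asarray(z, dtype=float)) / a
    return float(np.prod(np.cos(1.75 * d) * np.exp(-d * d / 2.0)))

def gram_matrix(X, a=1.0):
    """Kernel (Gram) matrix for a sample matrix X of shape (n, dim)."""
    n = len(X)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = wavelet_kernel(X[i], X[j], a)
    return K
```

By construction K(x, x) = 1, and such a precomputed Gram matrix can be supplied to any kernel SVM solver that accepts precomputed kernels.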
Affiliation(s)
- Mien Van: Centre for Intelligent and Autonomous Manufacturing Systems, and School of Electronics, Electrical Engineering and Computer Science, Queen’s University Belfast, Belfast BT7 1NN, UK
- Duy Tang Hoang: Department of Electrical Engineering, University of Ulsan, Ulsan 44610, Korea
- Hee Jun Kang: School of Electrical Engineering, University of Ulsan, Ulsan 44610, Korea
4
Guo W, Alham NK, Liu Y, Li M, Qi M. A Resource Aware MapReduce Based Parallel SVM for Large Scale Image Classifications. Neural Process Lett 2016;44:161-84. [DOI: 10.1007/s11063-015-9472-z]
5

6
Lee TL, Chiang JH. A Systems Biology Approach to Solving the Puzzle of Unknown Genomic Gene-Function Association Using Grid-Ready SVM Committee Machines. IEEE Comput Intell Mag 2012. [DOI: 10.1109/mci.2012.2215126]
7
Huang GB, Zhou H, Ding X, Zhang R. Extreme learning machine for regression and multiclass classification. IEEE Trans Syst Man Cybern B Cybern 2011;42:513-29. [PMID: 21984515; DOI: 10.1109/tsmcb.2011.2168604]
Abstract
Due to the simplicity of their implementations, the least-squares support vector machine (LS-SVM) and the proximal support vector machine (PSVM) have been widely used in binary classification applications. The conventional LS-SVM and PSVM cannot be used directly in regression and multiclass classification, although variants have been proposed to handle such cases. This paper shows that both LS-SVM and PSVM can be simplified further and that a unified learning framework covering LS-SVM, PSVM, and other regularization algorithms, referred to as the extreme learning machine (ELM), can be built. ELM works for "generalized" single-hidden-layer feedforward networks (SLFNs), but the hidden layer (also called the feature mapping) in ELM need not be tuned. Such SLFNs include, but are not limited to, SVMs, polynomial networks, and conventional feedforward neural networks. This paper shows the following: 1) ELM provides a unified learning platform with a wide variety of feature mappings and can be applied directly to regression and multiclass classification; 2) from the optimization point of view, ELM has milder optimization constraints than LS-SVM and PSVM; 3) in theory, compared with ELM, LS-SVM and PSVM achieve suboptimal solutions and require higher computational complexity; and 4) in theory, ELM can approximate any continuous target function and classify any disjoint regions. As verified by the simulation results, ELM tends to have better scalability and achieves similar (for regression and binary classification) or much better (for multiclass classification) generalization performance at much faster learning speed (up to thousands of times) than traditional SVM and LS-SVM.
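The untuned-hidden-layer idea above reduces ELM training to a single least-squares (pseudoinverse) solve. A minimal sketch, with function names and hyperparameters as illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def elm_fit(X, T, n_hidden=40, seed=0):
    """Minimal ELM: random, untuned hidden layer; output weights solved
    analytically as beta = pinv(H) @ T (one least-squares solve)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (untuned)
    b = rng.normal(size=n_hidden)                # random hidden biases (untuned)
    H = np.tanh(X @ W + b)                       # hidden-layer feature map
    beta = np.linalg.pinv(H) @ T                 # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Apply the fixed random feature map, then the learned linear readout."""
    return np.tanh(X @ W + b) @ beta
```

For example, regressing y = sin(x) on 200 points with 40 hidden units yields a close fit with no iterative training at all; only the linear readout is learned.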
Affiliation(s)
- Guang-Bin Huang: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
8
Abstract
GPUSVM (Graphics Processing Unit Support Vector Machine) is a Compute Unified Device Architecture (CUDA)-based support vector machine (SVM) package. It is designed to offer end users a fully functional and user-friendly SVM tool that exploits the power of GPUs. The core package includes an efficient cross-validation tool, a fast training tool, and a prediction tool. In this article, we first introduce the background theory of how we build our parallel SVM solver using the CUDA programming model. We then compare our GPUSVM package with the popular state-of-the-art LIBSVM package on several well-known datasets. Preliminary results on our Tesla server show one to two orders of magnitude speedup over LIBSVM in both the training and prediction phases.
9
Abstract
In this paper, a novel heuristic structure optimization methodology for radial basis probabilistic neural networks (RBPNNs) is proposed. First, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to select the initial hidden-layer centers of the RBPNN; then the recursive orthogonal least squares algorithm (ROLSA), combined with particle swarm optimization (PSO), is adopted to further optimize this initial structure. The proposed algorithms are evaluated on eight benchmark classification problems and two real-world applications: a plant species identification task involving 50 species and a palmprint recognition task. Experimental results show that the proposed algorithm is feasible and efficient for structure optimization of the RBPNN. The RBPNN achieves higher recognition rates and better classification efficiency than multilayer perceptron networks (MLPNs) and radial basis function neural networks (RBFNNs) in both tasks. Moreover, in the plant species identification task the generalization performance of the optimized RBPNN is markedly better than that of the optimized RBFNN.
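The PSO component used above is a generic global-best particle swarm optimizer. A minimal, self-contained sketch on a toy objective (the coefficients w, c1, c2, bounds, and the sphere test function are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best PSO: velocities combine inertia (w) with
    cognitive (c1) and social (c2) pulls toward personal/global bests."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # particle positions
    v = np.zeros((n_particles, dim))                  # particle velocities
    pbest = x.copy()                                  # personal best positions
    pbest_val = np.array([f(p) for p in x])           # personal best values
    g = pbest[np.argmin(pbest_val)].copy()            # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                         # common default coefficients
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))    # stochastic pull strengths
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                    # keep particles in bounds
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val                     # update improved personal bests
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())
```

In the paper's setting, f would score a candidate RBPNN structure rather than a toy function; the swarm mechanics are unchanged.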
Affiliation(s)
- De-Shuang Huang: Intelligent Computing Lab, Hefei Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei, Anhui 230031, China
10

11
Abstract
This paper presents a new version of the fuzzy support vector machine (FSVM) developed for product design time estimation. Because the estimation involves finite samples and uncertain data, the input and output variables are described as fuzzy numbers, with a metric defined on the fuzzy number space. The fuzzy v-support vector machine (Fv-SVM) is then proposed by combining fuzzy theory with the v-support vector machine, followed by a time estimation method based on Fv-SVM and its parameter-selection algorithm. Results from applications in injection mold design and software product design confirm the feasibility and validity of the estimation method. Compared with a fuzzy neural network (FNN) model, the Fv-SVM method requires fewer samples and achieves higher estimation precision.
Affiliation(s)
- Hong-Sen Yan: Research Institute of Automation, Southeast University, Nanjing 210096, China

12
Abstract
This paper presents incremental hierarchical discriminant regression (IHDR), which incrementally builds a decision or regression tree for very high-dimensional regression or decision spaces in an online, real-time learning system. Biologically motivated, it is an approximate computational model for the automatic development of associative cortex, with both bottom-up sensory inputs and top-down motor projections. At each internal node of the IHDR tree, information in the output space is used to automatically derive the local subspace spanned by the most discriminating features. Embedded in the tree is a hierarchical probability distribution model used to prune very unlikely cases during search. The number of parameters in the coarse-to-fine approximation is dynamic and data-driven, enabling the IHDR tree to automatically fit data with unknown distribution shapes, for which it is difficult to select the number of parameters up front. The IHDR tree dynamically assigns long-term memory to avoid the loss-of-memory problem typical of global-fitting learning algorithms for neural networks. A major challenge for an incrementally built tree is that the number of samples varies arbitrarily during construction. An incrementally updated probability model, the sample-size-dependent negative-log-likelihood (SDNLL) metric, is used to handle large, small, and unbalanced sample sizes across different internal nodes of the IHDR tree. We report experimental results on four types of data: synthetic data that visualize the behavior of the algorithms, large face image data, continuous video streams from robot navigation, and publicly available datasets that use human-defined features.
Affiliation(s)
- Juyang Weng: Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824, USA

13
Cao LJ, Keerthi SS, Ong CJ, Zhang JQ, Periyathamby U, Fu XJ, Lee HP. Parallel sequential minimal optimization for the training of support vector machines. IEEE Trans Neural Netw 2006;17:1039-49. [PMID: 16856665; DOI: 10.1109/tnn.2006.875989]
Abstract
Sequential minimal optimization (SMO) is a popular algorithm for training support vector machines (SVMs), but it still requires a large amount of computation time to solve large-scale problems. This paper proposes a parallel implementation of SMO for training SVMs, developed using the message passing interface (MPI). Specifically, the parallel SMO first partitions the entire training dataset into smaller subsets and then runs multiple CPU processors simultaneously, each handling one of the partitioned subsets. Experiments show a large speedup on the Adult dataset and the Modified National Institute of Standards and Technology (MNIST) dataset when many processors are used, with satisfactory results on the Web dataset as well.
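The paper's MPI implementation is not reproduced here, but its core partition-and-reduce idea, where each processor scans only its own chunk of the SMO error cache and partial results are combined globally, can be sketched as follows (the thread-based executor and function names are illustrative assumptions; the original uses MPI processes):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def partition(n, k):
    """Split indices 0..n-1 into k nearly equal contiguous chunks."""
    edges = np.linspace(0, n, k + 1).astype(int)
    return [range(edges[i], edges[i + 1]) for i in range(k)]

def local_extrema(F, idx):
    """Per-partition step: min and max of the error cache F over one chunk."""
    sub = F[list(idx)]
    return float(sub.min()), float(sub.max())

def global_violation(F, k=4):
    """Partition-and-reduce, as in parallel SMO: each worker scans its own
    chunk; the reduce step yields the global KKT gap F_max - F_min."""
    parts = partition(len(F), k)
    with ThreadPoolExecutor(max_workers=k) as ex:
        results = list(ex.map(lambda idx: local_extrema(F, idx), parts))
    lo = min(r[0] for r in results)  # global minimum over all chunks
    hi = max(r[1] for r in results)  # global maximum over all chunks
    return hi - lo
```

This sketch covers only the working-set selection pattern; a full parallel SMO would also broadcast the chosen pair and update each partition's error cache after every dual-variable step.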
Affiliation(s)
- L J Cao: Financial Studies, Fudan University, Shanghai, PR China

14
Abstract
In some practical applications of neural networks, a fast response to external events within an extremely short time is demanded. However, the widely used gradient-descent-based learning algorithms cannot satisfy such real-time learning needs in many applications, especially for large-scale problems and/or when higher generalization performance is required. Based on Huang's constructive network model, this paper proposes a simple learning algorithm capable of real-time learning, which automatically selects appropriate values for the neural quantizers and analytically determines the network parameters (weights and bias) in a single step. The performance of the proposed algorithm is systematically investigated on a large batch of benchmark real-world regression and classification problems. The experimental results demonstrate that the algorithm not only produces good generalization performance but also offers real-time learning and prediction capability, and it may thus provide an alternative approach for practical neural network applications where real-time learning and prediction are required.