1. Luo J, Fang SC, Deng Z, Tian Y. Robust kernel-free support vector regression based on optimal margin distribution. Knowl Based Syst 2022. doi:10.1016/j.knosys.2022.109477
2. A state of health estimation method for electric vehicle Li-ion batteries using GA-PSO-SVR. Complex Intell Syst 2022. doi:10.1007/s40747-021-00639-9
Abstract: State of health (SOH) is the ratio of a battery's currently available maximum capacity to its rated capacity. It is an important index for describing the degradation state of a pure electric vehicle battery and a valuable reference for evaluating the health level of a retired battery and estimating the driving range. In this study, the random forest algorithm is first used to find the health factors most important for lithium-ion batteries, based on the dataset released by the National Aeronautics and Space Administration (NASA). A support vector regression (SVR) model is then developed to predict the SOH of a lithium-ion battery, and a hybrid genetic algorithm-particle swarm optimization (GA-PSO) algorithm is put forward to optimize the SVR parameter values, improving both estimation accuracy and convergence speed. Applied to four batteries, the proposed SOH estimation method achieves a root mean square error (RMSE) of 0.40% and a mean absolute percentage error (MAPE) of 0.56%. The method is also compared with genetic algorithm-support vector regression (GA-SVR) and particle swarm optimization-support vector regression (PSO-SVR). The results show that (i) compared with the PSO-SVR method, the proposed method decreases the average RMSE by 0.10% and the average MAPE by 0.17%; and (ii) compared with the GA-SVR method, the number of iterations can be reduced by seven generations.
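To make the pipeline concrete, the sketch below fits an RBF-kernel SVR to synthetic capacity-fade data and scores it with the paper's two metrics. A plain grid search stands in for the GA-PSO hyperparameter optimization; the data, parameter ranges, and degradation model are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch of an SVR-based SOH estimator. Synthetic data and a grid
# search stand in for the NASA dataset and the GA-PSO optimizer of the paper.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
cycles = np.arange(1, 201, dtype=float).reshape(-1, 1)      # cycle index
soh = 1.0 - 0.002 * cycles.ravel() + 0.005 * rng.standard_normal(200)

# GA-PSO tunes (C, gamma, epsilon) in the paper; grid search plays that role here.
grid = {"C": [1, 10, 100], "gamma": [0.001, 0.01, 0.1], "epsilon": [0.001, 0.01]}
model = GridSearchCV(SVR(kernel="rbf"), grid, cv=5).fit(cycles, soh)

pred = model.predict(cycles)
rmse = np.sqrt(np.mean((pred - soh) ** 2))        # root mean square error
mape = np.mean(np.abs((pred - soh) / soh))        # mean absolute percentage error
print(f"RMSE = {rmse:.4f}, MAPE = {mape:.2%}")
```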
4. Gu B, Geng X, Li X, Shi W, Zheng G, Deng C, Huang H. Scalable Kernel Ordinal Regression via Doubly Stochastic Gradients. IEEE Trans Neural Netw Learn Syst 2021;32:3677-3689. PMID: 32857699. doi:10.1109/tnnls.2020.3015937
Abstract: Ordinal regression (OR) is one of the most important machine learning tasks, and the kernel method is a major technique for achieving nonlinear OR. However, traditional kernel OR solvers are inefficient due to the increased complexity introduced by multiple ordinal thresholds as well as the cost of kernel computation. Doubly stochastic gradient (DSG) is a very efficient and scalable kernel learning algorithm that combines random feature approximation with stochastic functional optimization. However, the theory and algorithm of DSG only support optimization within a single reproducing kernel Hilbert space (RKHS), which is unsuitable for OR problems, where the multiple ordinal thresholds usually lead to multiple RKHSs. To address this problem, we construct a kernel whose RKHS can contain the decision function with multiple thresholds. Based on this new kernel, we further propose a novel DSG-like algorithm, DSGOR. In each iteration of DSGOR, we update the decision function as well as the function bias, with an appropriately set learning rate for each. Our theoretical analysis shows that DSGOR achieves an O(1/t) convergence rate, as good as that of DSG, even though it deals with a much harder problem. Extensive experimental results demonstrate that our algorithm is much more efficient than traditional kernel OR solvers, especially on large-scale problems.
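The random feature approximation at the heart of DSG-style solvers can be illustrated with random Fourier features for an RBF kernel. The following sketch (with assumed parameters, not the DSGOR implementation) shows how the explicit map z satisfies z(x)ᵀz(y) ≈ k(x, y).

```python
# Random Fourier features: z(x)^T z(y) ~ exp(-gamma * ||x - y||^2).
# A sketch of the kernel approximation DSG builds on, not DSGOR itself.
import numpy as np

def random_fourier_features(X, n_features=2000, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
Z = random_fourier_features(X)
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # gamma = 1
print(np.abs(Z @ Z.T - K_exact).max())   # small; shrinks as n_features grows
```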
5. Jing S, Wang Y, Yang L. Selective ensemble of uncertain extreme learning machine for pattern classification with missing features. Artif Intell Rev 2020. doi:10.1007/s10462-020-09836-3
6. A survey of robust optimization based machine learning with special reference to support vector machines. Int J Mach Learn Cybern 2019. doi:10.1007/s13042-019-01044-y
7. Maldonado S, López J. Ellipsoidal support vector regression based on second-order cone programming. Neurocomputing 2018. doi:10.1016/j.neucom.2018.04.035
8. Ligeiro R, Vilela Mendes R. Detecting and quantifying ambiguity: a neural network approach. Soft Comput 2018. doi:10.1007/s00500-017-2525-7
9. Lu X, Zou H, Zhou H, Xie L, Huang GB. Robust Extreme Learning Machine With Its Application to Indoor Positioning. IEEE Trans Cybern 2016;46:194-205. PMID: 26684258. doi:10.1109/tcyb.2015.2399420
Abstract: The increasing demand for location-based services has spurred the rapid development of indoor positioning systems (IPSs). However, the performance of IPSs suffers from noisy measurements. In this paper, two kinds of robust extreme learning machines (RELMs), corresponding to the close-to-mean constraint and the small-residual constraint, are proposed to address the issue of noisy measurements in IPSs. Depending on whether the feature mapping in the extreme learning machine is explicit, we provide random-hidden-node and kernelized formulations of RELMs via second-order cone programming, and we discuss the computation of the covariance in feature space. Extensive simulations and real-world indoor localization experiments demonstrate that the proposed algorithms not only improve accuracy and repeatability, but also reduce the deviation and worst-case error of IPSs compared with other baseline algorithms.
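For context, a plain random-hidden-node ELM (the non-robust baseline that the RELM variants constrain) fits in a few lines. The robust close-to-mean and small-residual formulations additionally require a second-order cone solver and are not reproduced here; the activation choice, sizes, and toy data below are assumptions.

```python
# Baseline ELM: random, untrained input weights plus a ridge-regularized
# least-squares solve for the output weights. Not the robust SOCP variants.
import numpy as np

def elm_fit(X, y, n_hidden=100, reg=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden-layer outputs
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

X = np.random.default_rng(1).normal(size=(200, 3))    # e.g., RSS fingerprints
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1              # e.g., a location coordinate
W, b, beta = elm_fit(X, y)
print(np.abs(elm_predict(X, W, b, beta) - y).mean())  # small training error
```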
10. Ding Y, Cheng L, Pedrycz W, Hao K. Global nonlinear kernel prediction for large data set with a particle swarm-optimized interval support vector regression. IEEE Trans Neural Netw Learn Syst 2015;26:2521-2534. PMID: 25974954. doi:10.1109/tnnls.2015.2426182
Abstract: A new global nonlinear predictor with a particle swarm-optimized interval support vector regression (PSO-ISVR) is proposed to address three issues encountered when applying SVR to large data sets: kernel selection, model optimization, and the speed of kernel methods. The prediction model reduces the SVR computing overhead by dividing the input space and adaptively selecting optimized kernel functions, with the optimal SVR parameters obtained by PSO. To quantify the quality of the predictor, its generalization performance and execution speed are investigated based on statistical learning theory. In addition, experiments using synthetic data as well as stock volume-weighted average prices are reported to demonstrate the effectiveness of the developed models. The experimental results show that the proposed PSO-ISVR predictor improves computational efficiency and overall prediction accuracy compared with standard SVR and other regression methods, making PSO-ISVR a useful tool for nonlinear regression analysis of big data.
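The particle swarm component can be sketched generically. The loop below minimizes a toy quadratic with standard inertia and attraction coefficients (assumed values); in PSO-ISVR the same mechanism would search over SVR parameters instead.

```python
# Bare-bones PSO: particles track their personal best and the swarm's global
# best. Here it minimizes a toy quadratic; in PSO-ISVR the objective would be
# the validation error of an SVR as a function of its parameters.
import numpy as np

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda p: float(((p - 2.0) ** 2).sum()), dim=2)
print(best, val)   # converges near [2, 2] with value near 0
```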
11. Wang Y, Dang C, Wang S. Robust Novelty Detection via Worst Case CVaR Minimization. IEEE Trans Neural Netw Learn Syst 2015;26:2098-2110. PMID: 25532212. doi:10.1109/tnnls.2014.2378270
Abstract: Novelty detection models aim to find the minimum-volume set covering a given probability mass. This paper proposes a robust single-class support vector machine (SSVM) for novelty detection based on worst-case conditional value-at-risk (CVaR) minimization. By assuming that every input is subject to an uncertainty with a specified symmetric support, the robust formulation yields a maximization term similar to the regularization term in the classical SSVM. When the uncertainty set is an l1-norm ball, an l∞-norm ball, or a box, training can be reformulated as a linear program; when it is an l2-norm ball or an ellipsoid, training is a tractable second-order cone program. The proposed method also has a desirable consistency property: as the training size goes to infinity, the estimated normal region converges to the true one, provided that the magnitude of the uncertainty set decreases in a systematic way. Experimental results on three data sets clearly demonstrate its superiority over three benchmark models.
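The classical SSVM that this robust formulation generalizes is available off the shelf. The sketch below runs scikit-learn's OneClassSVM on synthetic data (all parameters assumed); the worst-case CVaR variants themselves would require an LP or SOCP solver and are not reproduced.

```python
# Classical single-class SVM baseline for novelty detection: learn a region
# covering most nominal points, then flag points outside it. The paper's robust
# worst-case CVaR variants replace this training problem with an LP or SOCP.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
nominal = rng.normal(0.0, 1.0, (500, 2))    # training data: the "normal" class
queries = rng.uniform(-6.0, 6.0, (50, 2))   # test points, mostly far from nominal

detector = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(nominal)
flags = detector.predict(queries)           # +1 = normal region, -1 = novelty
print((flags == -1).mean())                 # fraction of queries flagged as novel
```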
12. Gu B, Sheng VS, Tay KY, Romano W, Li S. Incremental Support Vector Learning for Ordinal Regression. IEEE Trans Neural Netw Learn Syst 2015;26:1403-1416. PMID: 25134094. doi:10.1109/tnnls.2014.2342533
Abstract: Support vector ordinal regression (SVOR) is a popular method for tackling ordinal regression problems. However, no effective algorithms had previously been proposed for incremental SVOR learning, owing to the complicated formulations of SVOR. Recently, an accurate on-line algorithm was proposed for training ν-support vector classification (ν-SVC), which can handle a quadratic formulation with a pair of equality constraints. In this paper, we first present a modified SVOR formulation based on a sum-of-margins strategy; the formulation has multiple constraints, each combining an equality and an inequality. We then extend the accurate on-line ν-SVC algorithm to this formulation and propose an effective incremental SVOR algorithm that handles the multiple mixed constraints and, more importantly, resolves the conflicts between the equality and inequality constraints. We also provide a finite convergence analysis for the algorithm. Numerical experiments on several benchmark and real-world data sets show that the incremental algorithm converges to the optimal solution in a finite number of steps and is faster than existing batch and incremental SVOR algorithms. Meanwhile, the modified formulation is more accurate than the existing incremental SVOR algorithm, and is as accurate as the sum-of-margins formulation of Shashua and Levin.
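How an SVOR decision function turns a score into a rank is simple to show: with ordered thresholds b_1 ≤ … ≤ b_{r-1}, a sample's rank is one plus the number of thresholds its score exceeds. The sketch below illustrates only this decision rule, with made-up scores and thresholds, not the incremental training procedure.

```python
# SVOR decision rule: rank(x) = 1 + #{j : f(x) > b_j} for ordered thresholds b_j.
# Scores and thresholds here are illustrative; learning them is the hard part.
import numpy as np

def svor_rank(scores, thresholds):
    thresholds = np.sort(thresholds)   # enforce b_1 <= ... <= b_{r-1}
    return 1 + (scores[:, None] > thresholds[None, :]).sum(axis=1)

scores = np.array([-1.2, 0.1, 0.8, 2.5])         # f(x) for four samples
print(svor_rank(scores, np.array([0.0, 1.0])))   # -> [1 2 2 3]
```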
13. Huang G, Song S, Gupta JND, Wu C. Semi-supervised and unsupervised extreme learning machines. IEEE Trans Cybern 2014;44:2405-2417. PMID: 25415946. doi:10.1109/tcyb.2014.2307349
Abstract: Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems, and only a few existing works have used ELMs to explore unlabeled data. In this paper, we extend ELMs to both semi-supervised and unsupervised tasks based on manifold regularization, thus greatly expanding their applicability. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit the learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification and multicluster clustering; and 3) both algorithms are inductive and can handle unseen data directly at test time. Moreover, we show that the supervised, semi-supervised, and unsupervised ELMs can all be placed in a unified framework, providing new perspectives on the mechanism of random feature mapping, the key concept in ELM theory. An empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with state-of-the-art semi-supervised and unsupervised learning algorithms in terms of accuracy and efficiency.
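The manifold-regularization ingredient shared by SS-ELM and US-ELM is the graph Laplacian L = D - W of an affinity graph, which penalizes outputs that differ across strongly connected inputs. A minimal construction (dense RBF affinities, an assumed bandwidth) is sketched below.

```python
# Unnormalized graph Laplacian L = D - W from RBF affinities; in SS-ELM/US-ELM
# the penalty trace(F^T L F) keeps outputs smooth along the data manifold.
import numpy as np

def rbf_graph_laplacian(X, sigma=1.0):
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))   # pairwise affinities
    np.fill_diagonal(W, 0.0)                     # no self-loops
    return np.diag(W.sum(axis=1)) - W

X = np.random.default_rng(0).normal(size=(6, 2))
L = rbf_graph_laplacian(X)
print(np.allclose(L.sum(axis=1), 0.0))           # True: Laplacian rows sum to zero
```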
14. Hu Q, Zhang S, Xie Z, Mi J, Wan J. Noise model based ν-support vector regression with its application to short-term wind speed forecasting. Neural Netw 2014;57:1-11. doi:10.1016/j.neunet.2014.05.003
15. Forghani Y, Sadoghi Yazdi H. Robust Support Vector Machines with Low Test Time. Comput Intell 2014. doi:10.1111/coin.12039
Affiliations: Yahya Forghani, Computer Department, Ferdowsi University of Mashhad, Iran; Hadi Sadoghi Yazdi, Computer Department, Center of Excellence on Soft Computing and Intelligent Information Processing, Ferdowsi University of Mashhad, Iran.
16. Wang K, Zhu W, Zhong P. Robust Support Vector Regression with Generalized Loss Function and Applications. Neural Process Lett 2014. doi:10.1007/s11063-013-9336-3
17. Zhang XX, Jiang Y, Li HX, Li SY. SVR learning-based spatiotemporal fuzzy logic controller for nonlinear spatially distributed dynamic systems. IEEE Trans Neural Netw Learn Syst 2013;24:1635-1647. PMID: 24808600. doi:10.1109/tnnls.2013.2258356
Abstract: A data-driven design methodology based on support vector regression (SVR) learning is developed for three-dimensional fuzzy-logic controllers (3-D FLCs) for nonlinear spatially distributed dynamic systems. First, the spatial information expression and processing as well as the fuzzy linguistic expression and rule inference of a 3-D FLC are integrated into spatial fuzzy basis functions (SFBFs), so that the 3-D FLC can be depicted by a three-layer network structure. By relating the SFBFs of the 3-D FLC directly to the spatial kernel functions of an SVR, an equivalence between the 3-D FLC and the SVR is established, which means the 3-D FLC can be designed with the help of SVR learning. A systematic SVR learning-based 3-D FLC design scheme is then formulated for easy implementation, and the universal approximation capability of the proposed 3-D FLC is established. Finally, the control of a nonlinear catalytic packed-bed reactor is considered as an application to demonstrate the effectiveness of the proposed 3-D FLC.
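The equivalence the paper exploits is visible in the functional form of an RBF-kernel SVR: f(x) = Σᵢ αᵢ k(x, xᵢ) + b is a weighted sum of basis functions centered at the support vectors, which is exactly the shape of a fuzzy-basis-function expansion. The sketch below rebuilds scikit-learn's SVR prediction from its support vectors on toy data; it illustrates the form only, not the 3-D FLC design scheme.

```python
# An RBF-kernel SVR prediction rebuilt by hand as a sum of kernel "basis
# functions" centered at the support vectors, mirroring a fuzzy-basis expansion.
import numpy as np
from sklearn.svm import SVR

X = np.linspace(0.0, 2.0 * np.pi, 50).reshape(-1, 1)
y = np.sin(X).ravel()
svr = SVR(kernel="rbf", C=10.0, gamma=1.0).fit(X, y)

def manual_predict(x):
    # k(x, x_i) = exp(-gamma * (x - x_i)^2), with gamma = 1.0 as fit above
    k = np.exp(-1.0 * (x - svr.support_vectors_.ravel()) ** 2)
    return float(k @ svr.dual_coef_.ravel() + svr.intercept_[0])

print(manual_predict(1.0), svr.predict([[1.0]])[0])   # the two values agree
```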