1. Sha T, Peng Y. Orthogonal semi-supervised regression with adaptive label dragging for cross-session EEG emotion recognition. Journal of King Saud University - Computer and Information Sciences 2023. DOI: 10.1016/j.jksuci.2023.03.014
2. Zhang Y, Yang D, Lam S, Li B, Teng X, Zhang J, Zhou T, Ma Z, Ying TC(M), Cai J. Radiomics-Based Detection of COVID-19 from Chest X-ray Using Interpretable Soft Label-Driven TSK Fuzzy Classifier. Diagnostics (Basel) 2022; 12:2613. PMID: 36359456; PMCID: PMC9689330; DOI: 10.3390/diagnostics12112613
Abstract
The COVID-19 pandemic has posed a significant global public health threat, with an escalating number of new cases and deaths daily. Chest X-ray (CXR) is a fast and highly accessible imaging modality, and the early detection of COVID-related CXR abnormalities potentially allows the early isolation of suspected cases. Recently, a number of CXR-based AI models have been developed for the automated detection of COVID-19. However, most existing models are difficult to interpret because they rely on incomprehensible deep features. To address this, we developed an interpretable TSK fuzzy system for COVID-19 detection using radiomics features extracted from CXR images. There are two main contributions. (1) When TSK fuzzy systems are applied to classification tasks, the commonly used binary label matrix of training samples is transformed into a soft one in order to learn a more discriminative transformation matrix and hence improve classification accuracy. (2) Based on the assumption that samples of the same class should stay as close as possible when transformed into the label space, a class compactness graph is introduced to avoid the overfitting caused by label matrix relaxation. Our proposed model for a multi-categorical classification task (COVID-19 vs. No-Findings vs. Pneumonia) was evaluated using 600 CXR images from publicly available datasets and compared against five state-of-the-art AI models in terms of classification accuracy. Experimental findings showed that our model achieved a classification accuracy of over 83%, better than the state-of-the-art models, while maintaining high interpretability.
Affiliation(s)
- Yuanpeng Zhang: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China; Department of Medical Informatics, Nantong University, Nantong 226007, China
- Dongrong Yang: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Saikit Lam: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Bing Li: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Xinzhi Teng: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jiang Zhang: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Ta Zhou: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Zongrui Ma: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Tin-Cheung (Michael) Ying: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
3. Multi-view multi-manifold learning with local and global structure preservation. Appl Intell 2022. DOI: 10.1007/s10489-022-04101-2
4. Zhu Z, Wang Z, Li D, Du W. Globalized Multiple Balanced Subsets With Collaborative Learning for Imbalanced Data. IEEE Transactions on Cybernetics 2022; 52:2407-2417. PMID: 32609619; DOI: 10.1109/tcyb.2020.3001158
Abstract
The skewed distribution of data makes it difficult to classify minority and majority samples in imbalanced problems. Balanced bagging randomly undersamples the majority class several times and combines the selected majority samples with the minority samples to form several balanced subsets, in which the numbers of minority and majority samples are roughly equal. However, balanced bagging lacks a unified learning framework. Moreover, it does not account for the connections among the subsets or for the global information of the entire data distribution. To this end, this article puts the balanced subsets into an effective learning framework with a criterion function. In this framework, one regularization term, called RS, establishes the connections and realizes collaborative learning across all subsets by requiring consistent outputs for the minority samples in different subsets. In addition, another regularization term, called RW, provides global information to each basic classifier by reducing the difference between the direction of the solution vector in each subset and that in the entire dataset. The proposed learning framework is called globalized multiple balanced subsets with collaborative learning (GMBSCL). The experimental results validate the effectiveness of the proposed GMBSCL.
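To make the balanced-bagging starting point of GMBSCL concrete, the sketch below builds several balanced subsets by random undersampling and trains a simple least-squares base learner on each; the RS and RW coupling terms described above are only indicated in comments, and all function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def balanced_subsets(X, y, minority_label, n_subsets, rng):
    """Randomly undersample the majority class to build several balanced subsets."""
    min_idx = np.where(y == minority_label)[0]
    maj_idx = np.where(y != minority_label)[0]
    subsets = []
    for _ in range(n_subsets):
        picked = rng.choice(maj_idx, size=len(min_idx), replace=False)
        idx = np.concatenate([min_idx, picked])
        subsets.append((X[idx], y[idx]))
    return subsets

def ridge_classifier(X, y, lam=1e-2):
    """Least-squares base learner on one balanced subset (targets +1 / -1)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])            # add a bias column
    t = np.where(y == 1, 1.0, -1.0)
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ t)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (rng.random(300) < 0.1).astype(int)                   # imbalanced labels: roughly 10% minority (label 1)

models = [ridge_classifier(Xs, ys) for Xs, ys in balanced_subsets(X, y, 1, n_subsets=5, rng=rng)]
# GMBSCL would additionally couple these subset solutions with:
#  - an RS-style term forcing consistent outputs on the minority samples across subsets;
#  - an RW-style term pulling each subset's solution direction toward the whole-data solution.
Xb = np.hstack([X, np.ones((len(X), 1))])
scores = np.mean([Xb @ w for w in models], axis=0)        # simple ensemble average of subset outputs
pred = (scores > 0).astype(int)
```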
5. Ni T, Gu X, Zhang C. An Intelligence EEG Signal Recognition Method via Noise Insensitive TSK Fuzzy System Based on Interclass Competitive Learning. Front Neurosci 2020; 14:837. PMID: 33013284; PMCID: PMC7499470; DOI: 10.3389/fnins.2020.00837
Abstract
Epilepsy is a disorder of movement, consciousness, and nervous function caused by the abnormal discharge of neurons in the brain. EEG is currently a very important tool in epilepsy research. In this paper, a novel noise-insensitive Takagi-Sugeno-Kang (TSK) fuzzy system based on interclass competitive learning is proposed for EEG signal recognition. First, a possibilistic clustering method in a Bayesian framework with interclass competitive learning, called PCB-ICL, is presented to determine the antecedent parameters of the fuzzy rules. Inheriting from possibilistic c-means clustering, PCB-ICL is insensitive to noise. PCB-ICL learns the cluster centers of different classes in a competitive relationship: each center is attracted by samples of its own class and pushed away from samples of the other classes. PCB-ICL uses the Metropolis-Hastings method to obtain the optimal clustering results in an alternating iterative strategy, so the learned antecedent parameters have high interpretability. To further promote the noise insensitivity of the rules, an asymmetric expectile term and the Ho-Kashyap procedure are adopted to learn the consequent parameters of the rules. Based on these ideas, a TSK fuzzy system called PCB-ICL-TSK is proposed. Comprehensive experiments on real-world EEG data show that the proposed fuzzy system achieves robust and effective performance for EEG signal recognition.
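For readers unfamiliar with TSK fuzzy systems, the following minimal sketch shows the two-stage structure the abstract refers to: antecedents obtained by clustering and consequents obtained by a least-squares fit. Plain k-means and ridge regression are used purely as stand-ins for the paper's PCB-ICL clustering and expectile-based Ho-Kashyap learning; all names and parameter values are illustrative.

```python
import numpy as np

def tsk_fit(X, y, n_rules=3, lam=1e-3, seed=0):
    """Minimal first-order TSK fit: k-means antecedents + ridge least-squares consequents."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_rules, replace=False)]
    for _ in range(20):                                    # plain k-means (stand-in for PCB-ICL)
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        for r in range(n_rules):
            if np.any(assign == r):
                centers[r] = X[assign == r].mean(0)
    sigma = X.std(0) + 1e-6                                # Gaussian membership widths
    fire = np.exp(-((X[:, None, :] - centers[None]) ** 2 / (2 * sigma ** 2)).sum(-1))
    fire /= fire.sum(1, keepdims=True) + 1e-12             # normalized firing strengths
    Xe = np.hstack([X, np.ones((len(X), 1))])              # first-order consequent inputs
    P = np.hstack([fire[:, [r]] * Xe for r in range(n_rules)])
    theta = np.linalg.solve(P.T @ P + lam * np.eye(P.shape[1]), P.T @ y)
    return centers, sigma, theta

def tsk_predict(X, centers, sigma, theta):
    fire = np.exp(-((X[:, None, :] - centers[None]) ** 2 / (2 * sigma ** 2)).sum(-1))
    fire /= fire.sum(1, keepdims=True) + 1e-12
    Xe = np.hstack([X, np.ones((len(X), 1))])
    P = np.hstack([fire[:, [r]] * Xe for r in range(centers.shape[0])])
    return P @ theta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)           # toy target; EEG recognition would regress on +/-1 labels
centers, sigma, theta = tsk_fit(X, y)
y_hat = tsk_predict(X, centers, sigma, theta)
```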
Affiliation(s)
- Xiaoqing Gu: School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
6. Efficient matrixized classification learning with separated solution process. Neural Comput Appl 2020. DOI: 10.1007/s00521-019-04595-x
7. He K, Peng Y, Liu S, Li J. Regularized Negative Label Relaxation Least Squares Regression for Face Recognition. Neural Process Lett 2020. DOI: 10.1007/s11063-020-10219-6
8. Zhu Z, Wang Z, Li D, Du W, Zhou Y. Multiple Partial Empirical Kernel Learning with Instance Weighting and Boundary Fitting. Neural Netw 2019; 123:26-37. PMID: 31821948; DOI: 10.1016/j.neunet.2019.11.019
Abstract
By dividing the original data set into several subsets, Multiple Partial Empirical Kernel Learning (MPEKL) constructs multiple kernel matrices corresponding to the subsets, and these kernel matrices are decomposed to provide explicit kernel mappings. The instances in the original data set are then mapped into multiple kernel spaces, which provide better performance than a single kernel space. It is known that instances in different locations and with different distributions behave differently. Therefore, this paper defines the weight of each instance according to its location and distribution. According to location, instances can be categorized into intrinsic instances, boundary instances, and noise instances. In general, the boundary instances, as well as the minority instances in an imbalanced data set, are assigned high weights. Meanwhile, a regularization term, which regulates the classification hyperplane to fit the distribution trend of the class boundary, is constructed from the boundary instances. The instance weights and the regularization term are then introduced into MPEKL to form an algorithm named Multiple Partial Empirical Kernel Learning with Instance Weighting and Boundary Fitting (IBMPEKL). Experiments demonstrate the good performance of IBMPEKL and validate the effectiveness of the instance weighting and boundary fitting.
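The empirical kernel mapping that MPEKL applies to each subset can be sketched as follows: the subset's kernel matrix is eigendecomposed and instances are mapped explicitly through the resulting projection. The RBF kernel, the function names, and the parameter values are assumptions for illustration; the instance weighting and boundary-fitting regularizer of IBMPEKL are not implemented here.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def empirical_kernel_map(X_sub, gamma=0.5, tol=1e-10):
    """Decompose the kernel matrix of one subset to get an explicit (empirical) kernel mapping."""
    K = rbf_kernel(X_sub, X_sub, gamma)
    vals, vecs = np.linalg.eigh(K)
    keep = vals > tol                                      # drop near-zero eigenvalues
    proj = vecs[:, keep] / np.sqrt(vals[keep])             # columns of Q Lambda^{-1/2}
    def phi(X):
        return rbf_kernel(X, X_sub, gamma) @ proj          # explicit map into the empirical kernel space
    return phi

# MPEKL-style use: split the data into subsets and map every instance with each subset's mapping.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))
subsets = np.array_split(rng.permutation(len(X)), 3)
phis = [empirical_kernel_map(X[idx]) for idx in subsets]
features_per_space = [phi(X) for phi in phis]              # one feature matrix per empirical kernel space
# IBMPEKL would additionally weight boundary/minority instances and add a boundary-fitting regularizer.
```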
Affiliation(s)
- Zonghai Zhu: Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai 200237, PR China; Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, PR China
- Zhe Wang: Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai 200237, PR China; Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, PR China
- Dongdong Li: Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, PR China
- Wenli Du: Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai 200237, PR China
- Yangming Zhou: Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, PR China
9. Wang Z, Wang B, Zhou Y, Li D, Yin Y. Weight-based multiple empirical kernel learning with neighbor discriminant constraint for heart failure mortality prediction. J Biomed Inform 2019; 101:103340. PMID: 31756495; DOI: 10.1016/j.jbi.2019.103340
Abstract
Heart Failure (HF) is one of the most common causes of hospitalization and is burdened by both short-term (in-hospital) and long-term (6-12 month) mortality. Accurate prediction of HF mortality plays a critical role in evaluating early treatment effects. However, due to the lack of a simple and effective prediction model, mortality prediction for HF is difficult, resulting in a low rate of control. To handle this issue, we propose a Weight-based Multiple Empirical Kernel Learning with Neighbor Discriminant Constraint (WMEKL-NDC) method for HF mortality prediction. In our method, feature selection is first performed by calculating the F-value of each feature to identify the crucial clinical features. Then, different weights are assigned to each empirical kernel space according to the centered kernel alignment criterion. To make use of the discriminant information of the samples, a neighbor discriminant constraint is finally integrated into the multiple empirical kernel learning framework. Extensive experiments were performed on a real clinical dataset containing 10,198 in-patient records collected from Shanghai Shuguang Hospital between March 2009 and April 2016. Experimental results demonstrate that our proposed WMEKL-NDC method achieves highly competitive performance for in-hospital, 30-day, and 1-year HF mortality prediction. Compared with state-of-the-art multiple kernel learning and baseline algorithms, the proposed WMEKL-NDC is more accurate for mortality prediction. Moreover, the top 10 crucial clinical features are identified together with their meanings, which is very useful for assisting clinicians in the treatment of HF.
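The first two steps described in the abstract, F-value feature ranking and centered-kernel-alignment weighting of kernel spaces, can be sketched as below. The synthetic data, RBF kernels, and thresholds are assumptions for illustration, and the neighbor discriminant constraint itself is not implemented.

```python
import numpy as np

def anova_f_values(X, y):
    """Per-feature one-way ANOVA F-value, used to rank clinical features."""
    f = np.zeros(X.shape[1])
    classes = np.unique(y)
    for j in range(X.shape[1]):
        groups = [X[y == c, j] for c in classes]
        grand = X[:, j].mean()
        ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
        ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
        df_b, df_w = len(classes) - 1, len(X) - len(classes)
        f[j] = (ss_between / df_b) / (ss_within / df_w + 1e-12)
    return f

def centered_alignment(K, y):
    """Centered kernel alignment between a kernel matrix and the label kernel y y^T."""
    n = len(y)
    H = np.eye(n) - np.ones((n, n)) / n
    Kc, Yc = H @ K @ H, H @ np.outer(y, y) @ H
    return (Kc * Yc).sum() / (np.linalg.norm(Kc) * np.linalg.norm(Yc) + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = np.where(X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0, 1.0, -1.0)

top = np.argsort(anova_f_values(X, y))[::-1][:10]           # keep the 10 highest-F features
Xs = X[:, top]
gammas = [0.05, 0.2, 1.0]                                    # several candidate RBF kernels
kernels = [np.exp(-g * ((Xs[:, None] - Xs[None]) ** 2).sum(-1)) for g in gammas]
weights = np.array([centered_alignment(K, y) for K in kernels])
weights = np.clip(weights, 0, None) / (np.clip(weights, 0, None).sum() + 1e-12)  # kernel-space weights
```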
Affiliation(s)
- Zhe Wang: Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China; Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Bolu Wang: Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Yangming Zhou: Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Dongdong Li: Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Yichao Yin: Shanghai Shuguang Hospital, Shanghai 200021, China
10. Wang Z, Wang B, Cheng Y, Li D, Zhang J. Cost-sensitive Fuzzy Multiple Kernel Learning for imbalanced problem. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.06.065
12. Fan Q, Wang Z, Gao D. Locality Density-Based Fuzzy Multiple Empirical Kernel Learning. Neural Process Lett 2019. DOI: 10.1007/s11063-018-9881-x
13. Zhu Z, Wang Z, Li D, Du W. Multiple Empirical Kernel Learning with Majority Projection for imbalanced problems. Appl Soft Comput 2019. DOI: 10.1016/j.asoc.2018.11.037
14. Wang Z, Zhu Z. Matrix-pattern-oriented classifier with boundary projection discrimination. Knowl Based Syst 2018. DOI: 10.1016/j.knosys.2017.12.024
15. Zhu Y, Wang Z, Zha H, Gao D. Boundary-Eliminated Pseudoinverse Linear Discriminant for Imbalanced Problems. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:2581-2594. PMID: 28534789; DOI: 10.1109/tnnls.2017.2676239
Abstract
Existing learning models for the classification of imbalanced data sets can be grouped as either boundary-based or nonboundary-based, depending on whether a decision hyperplane is used in the learning process. The focus of this paper is a new model that leverages the advantages of both approaches. Specifically, our new model partitions the input space into three parts by creating two additional boundaries in the training process, and then makes the final decision based on a heuristic measurement between the test sample and a subset of selected training samples. Since the original hyperplane used by the underlying classifier is eliminated, the proposed model is named the boundary-eliminated (BE) model. Additionally, the pseudoinverse linear discriminant (PILD) is adopted for the BE model so as to obtain a novel classifier abbreviated as BEPILD. Experiments validate both the effectiveness and the efficiency of BEPILD, compared with 13 state-of-the-art classification methods, on 31 imbalanced and 7 standard data sets.
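The PILD base learner used inside the BE model amounts to a least-squares discriminant computed with the Moore-Penrose pseudoinverse, as sketched below on assumed synthetic data; the boundary-elimination step is only outlined in comments, and all names are illustrative.

```python
import numpy as np

def pild_fit(X, y):
    """Pseudoinverse linear discriminant: least-squares weights via the Moore-Penrose pseudoinverse."""
    Xb = np.hstack([X, np.ones((len(X), 1))])    # augment with a bias term
    t = np.where(y == 1, 1.0, -1.0)              # +1 for the positive (minority) class, -1 otherwise
    return np.linalg.pinv(Xb) @ t

def pild_score(X, w):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - 0.3 * X[:, 1] > 0.8).astype(int)   # imbalanced positive class
w = pild_fit(X, y)
s = pild_score(X, w)
# BEPILD would now place two extra boundaries around s = 0 (e.g. at +/- some margin), treat the
# region between them as ambiguous, and classify ambiguous test points by a heuristic comparison
# with selected training samples instead of using the original hyperplane.
pred = (s > 0).astype(int)
```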
16. Fang X, Xu Y, Li X, Lai Z, Wong WK, Fang B. Regularized Label Relaxation Linear Regression. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:1006-1018. PMID: 28166507; DOI: 10.1109/tnnls.2017.2648880
Abstract
Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that, during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs a class compactness graph based on manifold learning and uses it as a regularization term to avoid overfitting. The class compactness graph ensures that samples sharing the same label are kept close after they are transformed. Two different algorithms, based on two different norm loss functions, are devised. Both algorithms have compact closed-form solutions in each iteration, so they are easy to implement. Extensive experiments show that these two algorithms outperform state-of-the-art algorithms in terms of classification accuracy and running time.
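The class compactness graph regularizer can be sketched as a same-class adjacency graph whose Laplacian penalizes spreading same-class samples apart after projection, combined with a closed-form regression step. The kernel width, regularization weights, and function names are illustrative assumptions, and the label relaxation update itself is only noted in a comment.

```python
import numpy as np

def class_compactness_laplacian(X, y, sigma=1.0):
    """Graph Laplacian of a compactness graph: edges only between samples of the same class."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2)) * (y[:, None] == y[None, :])
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(1)) - W                        # L = D - W

def relaxed_lr_fit(X, Y, L, lam=1e-2, beta=1e-1):
    """Closed-form W for min ||XW - Y||^2 + lam ||W||^2 + beta tr(W^T X^T L X W)."""
    A = X.T @ X + lam * np.eye(X.shape[1]) + beta * X.T @ L @ X
    return np.linalg.solve(A, X.T @ Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 8))
y = rng.integers(0, 3, size=150)
Y = np.eye(3)[y]                                        # the "strict binary" one-hot label matrix
L = class_compactness_laplacian(X, y)
W = relaxed_lr_fit(X, Y, L)
pred = (X @ W).argmax(1)
# The full method would alternate this W-step with an update of a nonnegative relaxation matrix
# that moves each target entry outward, instead of regressing onto the fixed one-hot Y.
```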
19. Li D, Zhu Y, Wang Z, Chong C, Gao D. Regularized Matrix-Pattern-Oriented Classification Machine with Universum. Neural Process Lett 2016. DOI: 10.1007/s11063-016-9567-1
20. Fan Q, Gao D, Wang Z. Multiple empirical kernel learning with locality preserving constraint. Knowl Based Syst 2016. DOI: 10.1016/j.knosys.2016.05.008
21. Zhu C, Wang Z, Gao D. New design goal of a classifier: Global and local structural risk minimization. Knowl Based Syst 2016. DOI: 10.1016/j.knosys.2016.02.002
23. Zhu C. Improved multi-kernel classification machine with Nyström approximation technique and Universum data. Neurocomputing 2016. DOI: 10.1016/j.neucom.2015.10.102
27. Matrixized Learning Machine with Feature-Clustering Interpolation. Neural Process Lett 2015. DOI: 10.1007/s11063-015-9458-x
29. Wang Z, Lu M, Gao D. Reduced multiple empirical kernel learning machine. Cogn Neurodyn 2015; 9:63-73. PMID: 26052363; DOI: 10.1007/s11571-014-9304-2
Abstract
Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into an application. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is undesirable in real-world applications. Meanwhile, it is known that the kernel mapping in MKL generally takes two forms, implicit kernel mapping and empirical kernel mapping (EKM), where the latter has attracted less attention. In this paper, we focus on MKL with EKM and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gaussian elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and requires less storage space, especially during testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of EKM-based MKL; (3) this paper adopts Gaussian elimination, an off-the-shelf technique, to generate a basis of the original feature space, which is stable and efficient.
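The dot-product-preserving (isomorphism) property described above can be checked with a short sketch: samples are mapped with an empirical kernel mapping, an orthonormal basis of their span is computed, and pairwise inner products are compared before and after re-expressing the samples in that basis. QR factorization is used here as a stand-in for the paper's Gaussian-elimination extraction, and the kernel choice and data sizes are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, gamma=0.3):
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))

# Empirical kernel mapping: Phi has one row per sample in the explicit empirical feature space.
K = rbf(X, X)
vals, vecs = np.linalg.eigh(K)
keep = vals > 1e-10
Phi = K @ (vecs[:, keep] / np.sqrt(vals[keep]))

# Reduced orthonormal subspace: an orthonormal basis of the span of the mapped training samples.
# (The paper extracts spanning vectors with Gaussian elimination; QR is used here as a stand-in.)
Q, _ = np.linalg.qr(Phi.T)               # columns of Q: orthonormal basis of the row space of Phi
coords = Phi @ Q                          # coordinates of each sample in the reduced basis

# Isomorphism check: pairwise dot products are unchanged in the reduced representation.
assert np.allclose(coords @ coords.T, Phi @ Phi.T)
```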
Affiliation(s)
- Zhe Wang: Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, People's Republic of China
- MingZhe Lu: Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, People's Republic of China
- Daqi Gao: Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, People's Republic of China
30. Wang Z, Fan Q, Gao D. Multiple Empirical Kernel Learning with dynamic pairwise constraints. Appl Soft Comput 2015. DOI: 10.1016/j.asoc.2015.01.040
31. Zhu C, Wang Z, Gao D. Globalized and localized canonical correlation analysis with multiple empirical kernel mapping. Neurocomputing 2015. DOI: 10.1016/j.neucom.2014.11.066
33. Fan Q, Wang Z, Gao D, Li D. MPEKDyL: Efficient multi-partial empirical kernel dynamic learning. Knowl Based Syst 2015. DOI: 10.1016/j.knosys.2014.12.024
37. An Efficient and Effective Multiple Empirical Kernel Learning Based on Random Projection. Neural Process Lett 2014. DOI: 10.1007/s11063-014-9385-2
38. Wang Z, Zhu C, Niu Z, Gao D, Feng X. Multi-kernel classification machine with reduced complexity. Knowl Based Syst 2014. DOI: 10.1016/j.knosys.2014.04.012
39. Wang Z, Jie W, Chen S, Gao D. Random projection ensemble learning with multiple empirical kernels. Knowl Based Syst 2013. DOI: 10.1016/j.knosys.2012.08.017
40. Wang Z, Xu J, Chen S, Gao D. Regularized multi-view learning machine based on response surface technique. Neurocomputing 2012. DOI: 10.1016/j.neucom.2012.05.027
41. Xiang S, Nie F, Meng G, Pan C, Zhang C. Discriminative least squares regression for multiclass classification and feature selection. IEEE Transactions on Neural Networks and Learning Systems 2012; 23:1738-54. PMID: 24808069; DOI: 10.1109/tnnls.2012.2212721
Abstract
This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes to move along opposite directions, so that the distances between classes can be enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, and there is no need to train two-class machines that are independent of each other. With its compact form, the model can be naturally extended for feature selection. This goal is achieved via the L2,1 norm of the weight matrix, yielding a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.
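The ε-dragging idea admits a compact alternating optimization: with the dragging matrix fixed, the regression weights have a closed-form ridge solution, and with the weights fixed, the nonnegative dragging amounts have an entrywise closed-form update. The sketch below illustrates this on synthetic data; names and parameter values are illustrative, and the L2,1-norm feature-selection extension is omitted.

```python
import numpy as np

def dlsr_epsilon_dragging(X, y, n_classes, lam=1e-2, n_iter=15):
    """Alternating optimization for LSR with epsilon-dragging targets (a minimal sketch)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])               # absorb the bias into W
    Y = np.eye(n_classes)[y]                                 # one-hot label matrix
    B = np.where(Y == 1, 1.0, -1.0)                          # dragging directions: + for true class, - otherwise
    M = np.zeros_like(Y)                                     # nonnegative dragging amounts
    A = np.linalg.inv(Xb.T @ Xb + lam * np.eye(Xb.shape[1]))
    for _ in range(n_iter):
        T = Y + B * M                                        # dragged regression targets
        W = A @ (Xb.T @ T)                                   # closed-form LSR step
        E = Xb @ W - Y
        M = np.maximum(B * E, 0.0)                           # closed-form update of the dragging matrix
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(180, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)   # three synthetic classes (0, 1, 2)
W = dlsr_epsilon_dragging(X, y, n_classes=3)
pred = (np.hstack([X, np.ones((len(X), 1))]) @ W).argmax(1)
```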
42. Wang Z, Xu J, Gao D, Fu Y. Multiple empirical kernel learning based on local information. Neural Comput Appl 2012. DOI: 10.1007/s00521-012-1161-5
43. Wang Z, Chen S, Liu J, Zhang D. Pattern Representation in Feature Extraction and Classifier Design: Matrix Versus Vector. IEEE Trans Neural Netw 2008; 19:758-69. DOI: 10.1109/tnn.2007.911744
44. Wang Z, Chen S, Sun T. MultiK-MHKS: a novel multiple kernel learning algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence 2008; 30:348-353. PMID: 18084064; DOI: 10.1109/tpami.2007.70786
Abstract
In this paper, we develop a new and effective multiple kernel learning algorithm. First, the input data are mapped into m different feature spaces by m empirical kernels, where each generated feature space is taken as one view of the input space. Then, borrowing the motivating argument from Canonical Correlation Analysis (CCA), which can maximally correlate the m views in the transformed coordinates, we introduce a special term called the Inter-Function Similarity Loss (R_IFSL) into the existing regularization framework so as to guarantee the agreement of the multi-view outputs. In the implementation, we select the Modification of Ho-Kashyap algorithm with Squared approximation of the misclassification errors (MHKS) as the incorporated paradigm, and the experimental results on benchmark data sets demonstrate the feasibility and effectiveness of the proposed algorithm, named MultiK-MHKS.
Affiliation(s)
- Zhe Wang: Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, 29 Yudao St., Nanjing 210016, PR China