51. Jia X, Jing XY, Zhu X, Cai Z, Hu CH. Co-embedding: a semi-supervised multi-view representation learning approach. Neural Comput Appl 2021. DOI: 10.1007/s00521-021-06599-y

52. Identification of drug-target interactions via multi-view graph regularized link propagation model. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.05.100

53. Lu R, Cai Y, Zhu J, Nie F, Yang H. Dimension reduction of multimodal data by auto-weighted local discriminant analysis. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.06.035

54. Deep matrix factorization with knowledge transfer for lifelong clustering and semi-supervised clustering. Inf Sci (N Y) 2021. DOI: 10.1016/j.ins.2021.04.067

55. Wang J, Liang J, Cui J, Liang J. Semi-supervised learning with mixed-order graph convolutional networks. Inf Sci (N Y) 2021. DOI: 10.1016/j.ins.2021.05.057

56. Multi-view clustering based on low-rank representation and adaptive graph learning. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10634-3

57. Guo X, Dang C, Liang J, Wei W, Liang J. Metric learning with clustering-based constraints. Int J Mach Learn Cybern 2021. DOI: 10.1007/s13042-021-01408-3

59. Yang S, Ienco D, Esposito R, Pensa RG. ESA☆: a generic framework for semi-supervised inductive learning. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.03.051

60. Robust and sparse label propagation for graph-based semi-supervised classification. Appl Intell 2021. DOI: 10.1007/s10489-021-02360-z

64. Ren Z, Yang SX, Sun Q, Wang T. Consensus affinity graph learning for multiple kernel clustering. IEEE Trans Cybern 2021; 51:3273-3284. PMID: 32584777. DOI: 10.1109/tcyb.2020.3000947
Abstract
Multiple kernel graph-based clustering (MKGC) has attracted significant attention in recent years, primarily due to the strength of multiple kernel learning (MKL) and the outstanding performance of graph-based clustering. However, many existing MKGC methods design a "fat" model that is costly to compute and can hurt clustering performance, since they cumbersomely learn both an affinity graph and an extra consensus kernel. To tackle this problem, this article proposes a new MKGC method that learns a consensus affinity graph directly. Using self-expressive graph learning and an adaptive local structure learning term, the local manifold structure of the data in kernel space is preserved while learning multiple candidate affinity graphs from a kernel pool. These candidate affinity graphs are then synthesized into a consensus affinity graph via a thin auto-weighted fusion model, in which a self-tuned Laplacian rank constraint and a top-k neighbors sparse strategy are introduced to improve the quality of the consensus affinity graph for accurate clustering. Experimental results on ten benchmark datasets and two synthetic datasets show that the proposed method consistently and significantly outperforms state-of-the-art methods.
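The auto-weighted fusion step described in the abstract above can be sketched in a few lines: each candidate affinity graph receives a weight inversely proportional to its deviation from the current consensus, and the consensus is re-estimated as the weighted average. This is a hedged illustration of the general auto-weighting trick only, not the authors' full model (no Laplacian rank constraint or top-k sparsification here); the function name is hypothetical.

```python
import numpy as np

def autoweighted_fusion(graphs, iters=20, eps=1e-12):
    # Iteratively fuse candidate affinity graphs into a consensus graph.
    # Each view's weight is inversely proportional to its deviation from
    # the current consensus estimate (the usual auto-weighting trick).
    S = np.mean(graphs, axis=0)              # initial consensus: plain average
    for _ in range(iters):
        w = np.array([1.0 / (2.0 * np.linalg.norm(G - S) + eps) for G in graphs])
        w /= w.sum()                         # normalize to a convex combination
        S = np.tensordot(w, np.asarray(graphs, dtype=float), axes=1)
    return S, w
```

Views far from the consensus are automatically down-weighted, so a single noisy candidate graph cannot dominate the fused result.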

65. Li D, Zhong X, Dou Z, Gong M, Ma X. Detecting dynamic community by fusing network embedding and nonnegative matrix factorization. Knowl Based Syst 2021; 221:106961. DOI: 10.1016/j.knosys.2021.106961

67. Lv J, Kang Z, Lu X, Xu Z. Pseudo-supervised deep subspace clustering. IEEE Trans Image Process 2021; 30:5252-5263. PMID: 34033539. DOI: 10.1109/tip.2021.3079800
Abstract
Auto-encoder (AE)-based deep subspace clustering (DSC) methods have achieved impressive performance due to the powerful representations extracted by deep neural networks while prioritizing categorical separability. However, the self-reconstruction loss of an AE ignores rich useful relational information and can lead to indiscriminative representations, which inevitably degrades clustering performance. It is also challenging to learn high-level similarity without feeding in semantic labels. Another unsolved problem facing DSC is the huge memory cost of the n×n similarity matrix incurred by the self-expression layer between the encoder and decoder. To tackle these problems, we use pairwise similarity to weigh the reconstruction loss to capture local structure information, while the similarity is learned by the self-expression layer. Pseudo-graphs and pseudo-labels, which allow benefiting from uncertain knowledge acquired during network training, are further employed to supervise similarity learning. Joint learning and iterative training facilitate obtaining an overall optimal solution. Extensive experiments on benchmark datasets demonstrate the superiority of our approach. By combining with the k-nearest neighbors algorithm, we further show that our method can address the large-scale and out-of-sample problems. The source code is available at: https://github.com/sckangz/SelfsupervisedSC.

70. Peach RL, Arnaudon A, Schmidt JA, Palasciano HA, Bernier NR, Jelfs KE, Yaliraki SN, Barahona M. HCGA: highly comparative graph analysis for network phenotyping. Patterns (N Y) 2021; 2:100227. PMID: 33982022. PMCID: PMC8085611. DOI: 10.1016/j.patter.2021.100227
Abstract
Networks are widely used as mathematical models of complex systems across many scientific disciplines. Decades of work have produced a vast corpus of research characterizing the topological, combinatorial, statistical, and spectral properties of graphs. Each graph property can be thought of as a feature that captures important (and sometimes overlapping) characteristics of a network. In this paper, we introduce HCGA, a framework for highly comparative analysis of graph datasets that computes several thousand graph features from any given network. HCGA also offers a suite of statistical learning and data analysis tools for the automated identification and selection of important, interpretable features underpinning the characterization of graph datasets. We show that HCGA outperforms other methodologies on supervised classification tasks on benchmark datasets while retaining the interpretability of network features. We exemplify HCGA by predicting the charge transfer in organic semiconductors and by clustering a dataset of neuronal morphology images.
Affiliation(s)
- Robert L. Peach: Department of Mathematics, Imperial College London, SW7 2AZ London, UK
- Alexis Arnaudon: Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, 1202 Geneva, Switzerland
- Julia A. Schmidt: Department of Chemistry, Imperial College London, SW7 2AZ London, UK
- Kim E. Jelfs: Department of Chemistry, Imperial College London, SW7 2AZ London, UK
- Mauricio Barahona: Department of Mathematics, Imperial College London, SW7 2AZ London, UK
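In the spirit of the entry above, which represents each graph by a vector of interpretable statistics, a toy featurizer might look as follows. This is an illustrative sketch only, not HCGA itself (which computes thousands of features); the function name and the particular feature set are hypothetical.

```python
import numpy as np

def graph_features(adj):
    # Map one undirected graph (adjacency matrix) to a small vector of
    # interpretable statistics: node count, edge count, density,
    # mean/max degree, and triangle count.
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    deg = A.sum(axis=1)
    m = A.sum() / 2.0                               # undirected edge count
    density = 2.0 * m / (n * (n - 1)) if n > 1 else 0.0
    triangles = np.trace(A @ A @ A) / 6.0           # each triangle counted 6x
    return np.array([n, m, density, deg.mean(), deg.max(), triangles])
```

Stacking such vectors for a collection of graphs yields an ordinary feature matrix that any standard classifier or clustering method can consume, which is the core idea of feature-based network phenotyping.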

72. Xu J, Yu M, Shao L, Zuo W, Meng D, Zhang L, Zhang D. Scaled simplex representation for subspace clustering. IEEE Trans Cybern 2021; 51:1493-1505. PMID: 31634148. DOI: 10.1109/tcyb.2019.2943691
Abstract
The self-expressive property of data points, that is, that each data point can be linearly represented by the other data points in the same subspace, has proven effective in leading subspace clustering (SC) methods. Most self-expressive methods construct a feasible affinity matrix from a coefficient matrix obtained by solving an optimization problem. However, the negative entries in the coefficient matrix are forced to be positive when constructing the affinity matrix via exponentiation, absolute symmetrization, or squaring operations, which damages the inherent correlations among the data. Besides, the affine constraint used in these methods is not flexible enough for practical applications. To overcome these problems, in this article we introduce a scaled simplex representation (SSR) for the SC problem. Specifically, a non-negative constraint makes the coefficient matrix physically meaningful, and the coefficient vector is constrained to sum to a scalar to make it more discriminative. The proposed SSR-based SC (SSRSC) model is reformulated as a linear equality-constrained problem, which is solved efficiently under the alternating direction method of multipliers framework. Experiments on benchmark datasets demonstrate that the proposed SSRSC algorithm is very efficient and outperforms state-of-the-art SC methods in accuracy. The code can be found at https://github.com/csjunxu/SSRSC.
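The scaled simplex idea in the entry above, non-negative self-expressive coefficients that sum to a scalar s, can be illustrated with a small projected-gradient sketch: each point is regressed on the remaining points, and after every gradient step the coefficients are projected back onto the scaled simplex. This is an assumed minimal illustration of the constraint set (using the standard sorting-based simplex projection), not the authors' ADMM solver; all names are hypothetical.

```python
import numpy as np

def project_scaled_simplex(v, s=1.0):
    # Euclidean projection of v onto {c : c >= 0, sum(c) = s}
    # (standard sorting-based simplex projection, scaled by s).
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - s
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1.0) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def self_expressive_codes(X, s=1.0, lr=0.01, iters=500):
    # For each column x_i, find c >= 0 with sum(c) = s minimising
    # 0.5 * ||x_i - A c||^2, where A holds the other columns of X.
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        others = [j for j in range(n) if j != i]
        A = X[:, others]                     # dictionary without x_i itself
        c = np.full(n - 1, s / (n - 1))      # feasible starting point
        for _ in range(iters):
            g = A.T @ (A @ c - X[:, i])      # gradient of the least-squares fit
            c = project_scaled_simplex(c - lr * g, s)
        C[others, i] = c                     # diagonal stays zero
    return C
```

The resulting coefficient matrix can be symmetrized into an affinity matrix without the exponentiation or squaring tricks the abstract criticizes, since its entries are non-negative by construction.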

73. Mi Y, Ren Z, Mukherjee M, Huang Y, Sun Q, Chen L. Diversity and consistency embedding learning for multi-view subspace clustering. Appl Intell 2021. DOI: 10.1007/s10489-020-02126-z

74. Ren Z, Lei H, Sun Q, Yang C. Simultaneous learning coefficient matrix and affinity graph for multiple kernel clustering. Inf Sci (N Y) 2021. DOI: 10.1016/j.ins.2020.08.056

75. Li X, Ren Z, Lei H, Huang Y, Sun Q. Multiple kernel clustering with pure graph learning scheme. Neurocomputing 2021. DOI: 10.1016/j.neucom.2020.10.052

77. Li L, Zhao K, Gan J, Cai S, Liu T, Mu H, Sun R. Robust adaptive semi-supervised classification method based on dynamic graph and self-paced learning. Inf Process Manag 2021. DOI: 10.1016/j.ipm.2020.102433

78. Clothes image caption generation with attribute detection and visual attention model. Pattern Recognit Lett 2021. DOI: 10.1016/j.patrec.2020.12.001

79. Zhang H, Lu G, Zhan M, Zhang B. Semi-supervised classification of graph convolutional networks with Laplacian rank constraints. Neural Process Lett 2021. DOI: 10.1007/s11063-020-10404-7

80. TileGAN: category-oriented attention-based high-quality tiled clothes generation from dressed person. Neural Comput Appl 2020. DOI: 10.1007/s00521-020-04928-1

81. Wen J, Sun H, Fei L, Li J, Zhang Z, Zhang B. Consensus guided incomplete multi-view spectral clustering. Neural Netw 2020; 133:207-219. PMID: 33227665. DOI: 10.1016/j.neunet.2020.10.014
Abstract
Incomplete multi-view clustering, which aims to cluster multi-view data collected from diverse domains when some views are missing, has drawn considerable attention in recent years. In this paper, we propose a novel method, called consensus guided incomplete multi-view spectral clustering (CGIMVSC), to address the incomplete clustering problem. Specifically, CGIMVSC explores the local information within each single view and the semantically consistent information shared by all views in a unified framework, where the local structure is adaptively obtained from the incomplete data rather than pre-constructed via a k-nearest-neighbor approach as in existing methods. Considering the semantic consistency of multiple views, CGIMVSC introduces a co-regularization constraint to minimize the disagreement between the common representation and the individual representations of the different views, so that all views reach a consensus clustering result. Experimental comparisons with state-of-the-art methods on seven datasets validate the effectiveness of the proposed method on incomplete multi-view clustering.
Affiliation(s)
- Jie Wen: PAMI Research Group, Department of Computer and Information Science, University of Macau, Taipa, Macau
- Huijie Sun: Nanchang Institute of Technology, Nanchang 330044, China; Sun Yat-sen University, Guangzhou 510000, China
- Lunke Fei: School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
- Jinxing Li: School of Science and Engineering, Chinese University of Hong Kong (Shenzhen), Shenzhen 518000, China
- Zheng Zhang: Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen 518055, China; Peng Cheng Laboratory, Shenzhen 518055, China
- Bob Zhang: PAMI Research Group, Department of Computer and Information Science, University of Macau, Taipa, Macau

82. Bi P, Xu J, Du X, Li J. Generalized robust graph-Laplacian PCA and underwater image recognition. Neural Comput Appl 2020. DOI: 10.1007/s00521-020-04927-2

83. Yu X, Liu H, Wu Y, Ruan H. Kernel-based low-rank tensorized multiview spectral clustering. Int J Intell Syst 2020. DOI: 10.1002/int.22319
Affiliation(s)
- Xiao Yu: Department of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China; Shandong Key Laboratory of Digital Media Technology, Jinan, Shandong, China
- Hui Liu: Department of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China; Shandong Key Laboratory of Digital Media Technology, Jinan, Shandong, China
- Yan Wu: Medical Center, Stanford University, Palo Alto, California, USA
- Huaijun Ruan: S&T Information Institution, Shandong Academy of Agricultural Sciences, Jinan, Shandong, China

85. Kang Z, Lu X, Lu Y, Peng C, Chen W, Xu Z. Structure learning with similarity preserving. Neural Netw 2020; 129:138-148. DOI: 10.1016/j.neunet.2020.05.030

86. Identification of drug-target interactions via dual Laplacian regularized least squares with multiple kernel fusion. Knowl Based Syst 2020. DOI: 10.1016/j.knosys.2020.106254

87. Kang Z, Lu X, Liang J, Bai K, Xu Z. Relation-guided representation learning. Neural Netw 2020; 131:93-102. PMID: 32763763. DOI: 10.1016/j.neunet.2020.07.014
Abstract
Deep auto-encoders (DAEs) have achieved great success in learning data representations via the powerful representability of neural networks. However, most DAEs focus only on the dominant structures that suffice to reconstruct the data from a latent space, and neglect rich latent structural information. In this work, we propose a new representation learning method that explicitly models and leverages sample relations, which in turn are used as supervision to guide the representation learning. Different from previous work, our framework well preserves the relations between samples. Since the prediction of pairwise relations is itself a fundamental problem, our model adaptively learns them from data, which provides much flexibility to encode the real data manifold. The roles of relation and representation learning are evaluated on the clustering task. Extensive experiments on benchmark data sets demonstrate the superiority of our approach. By seeking to embed samples into a subspace, we further show that our method can address the large-scale and out-of-sample problems. Our source code is publicly available at: https://github.com/nbShawnLu/RGRL.
Affiliation(s)
- Zhao Kang: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan, China; Trusted Cloud Computing and Big Data Key Laboratory of Sichuan Province, China
- Xiao Lu: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan, China
- Jian Liang: Cloud and Smart Industries Group, Tencent, Beijing, China
- Kun Bai: Cloud and Smart Industries Group, Tencent, Beijing, China
- Zenglin Xu: School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China; Center for Artificial Intelligence, Peng Cheng Lab, Shenzhen, China

89. Liu X, Zhu X, Li M, Wang L, Zhu E, Liu T, Kloft M, Shen D, Yin J, Gao W. Multiple kernel k-means with incomplete kernels. IEEE Trans Pattern Anal Mach Intell 2020; 42:1191-1204. PMID: 30640600. PMCID: PMC6626696. DOI: 10.1109/tpami.2019.2892416
Abstract
Multiple kernel clustering (MKC) algorithms optimally combine a group of pre-specified base kernel matrices to improve clustering performance. However, existing MKC algorithms cannot efficiently handle the situation where some rows and columns of the base kernel matrices are absent. This paper proposes two simple yet effective algorithms to address this issue. Different from existing approaches, in which incomplete kernel matrices are first imputed and a standard MKC algorithm is then applied to the imputed matrices, our first algorithm integrates imputation and clustering into a unified learning procedure. Specifically, we perform multiple kernel clustering directly in the presence of incomplete kernel matrices, which are treated as auxiliary variables to be jointly optimized. Our algorithm does not require at least one complete base kernel matrix over all the samples, and it adaptively imputes incomplete kernel matrices and combines them to best serve clustering. Moreover, we further improve this algorithm by encouraging the incomplete kernel matrices to mutually complete each other. A three-step iterative algorithm is designed to solve the resultant optimization problems. We then theoretically study the generalization bound of the proposed algorithms. Extensive experiments on 13 benchmark data sets compare the proposed algorithms with existing imputation-based methods. Our algorithms consistently achieve superior performance, and the improvement becomes more significant as the missing ratio increases, verifying the effectiveness and advantages of joint imputation and clustering.
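As background to the entry above, complete-kernel multiple kernel k-means can be sketched as follows: form a convex combination of the base kernels (here a fixed combination rather than the jointly optimized one, and with no imputation of missing entries), then run kernel k-means using only the combined kernel matrix. A hedged sketch with hypothetical names, not the authors' joint imputation-and-clustering algorithm.

```python
import numpy as np

def combined_kernel(kernels, weights=None):
    # Convex combination of base kernel matrices (uniform by default).
    K = np.asarray(kernels, dtype=float)
    if weights is None:
        weights = np.full(len(K), 1.0 / len(K))
    return np.tensordot(weights, K, axes=1)

def kernel_kmeans(K, k, iters=50, seed=0):
    # Lloyd's iterations in feature space using only the kernel matrix:
    # dist(x_i, center_c)^2 = K_ii - 2*mean_{j in c} K_ij + mean_{j,l in c} K_jl
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(0, k, n)
    for _ in range(iters):
        D = np.empty((n, k))
        for c in range(k):
            idx = np.where(labels == c)[0]
            if len(idx) == 0:
                D[:, c] = np.inf               # empty cluster never wins
                continue
            D[:, c] = (np.diag(K)
                       - 2.0 * K[:, idx].mean(axis=1)
                       + K[np.ix_(idx, idx)].mean())
        new = D.argmin(axis=1)
        if (new == labels).all():
            break
        labels = new
    return labels
```

The paper's point is precisely that this two-stage recipe breaks down when kernel rows/columns are missing, which motivates treating the incomplete kernels as variables to be optimized jointly with the clustering.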

90. Ji L, Chang M, Shen Y, Zhang Q. Recurrent convolutions of binary-constraint cellular neural network for texture recognition. Neurocomputing 2020. DOI: 10.1016/j.neucom.2019.12.119

91. Fu S, Liu W, Tao D, Zhou Y, Nie L. HesGCN: Hessian graph convolutional networks for semi-supervised classification. Inf Sci (N Y) 2020. DOI: 10.1016/j.ins.2019.11.019

92. Huang S, Xu Z, Kang Z, Ren Y. Regularized nonnegative matrix factorization with adaptive local structure learning. Neurocomputing 2020. DOI: 10.1016/j.neucom.2019.11.070

93. Peng C, Chen Y, Kang Z, Chen C, Cheng Q. Robust principal component analysis: a factorization-based approach with linear complexity. Inf Sci (N Y) 2020. DOI: 10.1016/j.ins.2019.09.074

96. Kang Z, Zhao X, Peng C, Zhu H, Zhou JT, Peng X, Chen W, Xu Z. Partition level multiview subspace clustering. Neural Netw 2019; 122:279-288. PMID: 31731045. DOI: 10.1016/j.neunet.2019.10.010
Abstract
Multiview clustering has gained increasing attention recently due to its ability to deal with data from multiple sources (views) and to exploit complementary information between views. Among various methods, multiview subspace clustering methods provide encouraging performance. They mainly integrate the multiview information in the space where the data points lie, so their performance may deteriorate because of noise in individual views or inconsistency between heterogeneous features. For multiview clustering, the basic premise is that there exists a shared partition among all views; the natural space for multiview clustering is therefore the space of partitions. Orthogonal to existing methods, we propose to fuse multiview information at the partition level, following two intuitive assumptions: (i) each partition is a perturbation of the consensus clustering; and (ii) a partition that is close to the consensus clustering should be assigned a large weight. Finally, we propose a unified multiview subspace clustering model that incorporates graph learning from each view, the generation of basic partitions, and the fusion of the consensus partition. These three components are seamlessly integrated and can be iteratively boosted by each other towards an overall optimal solution. Experiments on four benchmark datasets demonstrate the efficacy of our approach against state-of-the-art techniques.
Affiliation(s)
- Zhao Kang: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan 611731, China
- Xinjia Zhao: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan 611731, China
- Chong Peng: College of Computer Science and Technology, Qingdao University, China
- Hongyuan Zhu: Institute for Infocomm Research, A*STAR, Singapore
- Xi Peng: College of Computer Science, Sichuan University, China
- Wenyu Chen: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan 611731, China
- Zenglin Xu: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan 611731, China; Centre for Artificial Intelligence, Peng Cheng Lab, Shenzhen 518055, China

99. Tang C, Bian M, Liu X, Li M, Zhou H, Wang P, Yin H. Unsupervised feature selection via latent representation learning and manifold regularization. Neural Netw 2019; 117:163-178. PMID: 31170576. DOI: 10.1016/j.neunet.2019.04.015
Abstract
With the rapid development of multimedia technology, massive unlabelled, high-dimensional data need to be processed. As a means of dimensionality reduction, unsupervised feature selection has been widely recognized as an important and challenging pre-processing step for many machine learning and data mining tasks. Traditional unsupervised feature selection algorithms usually assume that data instances are identically distributed with no dependency between them. However, data instances are not only associated with high-dimensional features but are also inherently interconnected. Furthermore, the noise inevitably mixed into the data can degrade the performance of methods that perform feature selection in the original data space. Without label information, the connections between data instances can be exploited to help select relevant features. In this work, we propose a robust unsupervised feature selection method that embeds latent representation learning into feature selection. Instead of measuring feature importance in the original data space, feature selection is carried out in a learned latent representation space that is more robust to noise. The latent representation is modelled by non-negative matrix factorization of the affinity matrix, which explicitly reflects the relationships between data instances. Meanwhile, the local manifold structure of the original data space is preserved by a graph-based manifold regularization term in the transformed feature space. An efficient alternating algorithm is developed to optimize the proposed model. Experimental results on eight benchmark datasets demonstrate the effectiveness of the proposed method.
Affiliation(s)
- Chang Tang: School of Computer Science, China University of Geosciences, Wuhan 430074, China
- Meiru Bian: Department of Hematology, The Affiliated Huai'an Hospital of Xuzhou Medical University, Huai'an 223002, China
- Xinwang Liu: School of Computer Science, National University of Defense Technology, Changsha 410073, China
- Miaomiao Li: School of Computer Science, National University of Defense Technology, Changsha 410073, China
- Hua Zhou: Department of Hematology, The Affiliated Huai'an Hospital of Xuzhou Medical University, Huai'an 223002, China
- Pichao Wang: Alibaba Group (U.S.) Inc., Bellevue, WA 98004, USA
- Hailin Yin: Department of Oncology, People's Hospital of Lian'shui County, Huai'an 223300, China
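The latent-representation step in the entry above, non-negative matrix factorization of the affinity matrix, can be sketched with damped multiplicative updates for symmetric NMF. This is an assumed minimal variant (no manifold regularization term and no feature-selection matrix); the function name is hypothetical.

```python
import numpy as np

def latent_representation(A, k, iters=300, seed=0):
    # Symmetric NMF of an affinity matrix: A ~= V V^T with V >= 0.
    # Damped multiplicative updates (half old value, half multiplicative
    # step) keep V non-negative throughout and stabilize the iteration.
    rng = np.random.default_rng(seed)
    V = rng.random((A.shape[0], k))
    for _ in range(iters):
        V = V * (0.5 + 0.5 * (A @ V) / np.maximum(V @ (V.T @ V), 1e-12))
    return V
```

Each row of V is a k-dimensional latent code for one data instance; in the paper's setting, feature importance would then be measured against these codes rather than in the noisy original space.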

100. Tensor robust principal component analysis via non-convex low rank approximation. Appl Sci (Basel) 2019. DOI: 10.3390/app9071411
Abstract
Tensor robust principal component analysis (TRPCA) plays a critical role in handling high-dimensional, multi-way data sets, aiming to recover the low-rank and sparse components both accurately and efficiently. In this paper, different from current approaches, we develop a new t-Gamma tensor quasi-norm as a non-convex regularization to approximate the low-rank component. Compared with various convex regularizations, this new configuration not only captures the tensor rank better but also provides a simplified approach. The optimization is conducted via tensor singular value decomposition, and an efficient augmented Lagrange multiplier algorithm is established. Extensive experimental results demonstrate that the new approach outperforms current state-of-the-art algorithms in terms of accuracy and efficiency.
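The low-rank plus sparse split in the entry above can be illustrated in the simpler matrix setting with convex surrogates: singular value soft-thresholding for the low-rank part and entrywise soft-thresholding for the sparse part. This naive alternating scheme is a sketch under assumed convex penalties, not the paper's non-convex t-Gamma tensor quasi-norm or its augmented Lagrangian algorithm; names and threshold values are hypothetical.

```python
import numpy as np

def soft(X, tau):
    # Entrywise soft-thresholding: proximal operator of tau * ||.||_1.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def rpca_alternating(M, tau_l=1.0, tau_s=0.1, iters=30):
    # Naive alternating scheme for M ~= L + S:
    #   L <- SVT(M - S, tau_l)   (low-rank step)
    #   S <- soft(M - L, tau_s)  (sparse step)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, tau_l)
        S = soft(M - L, tau_s)
    return L, S
```

By construction the final residual M - L - S is entrywise no larger than tau_s, so the split is exact up to the sparse threshold; tensor variants replace the matrix SVD with a tensor singular value decomposition.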