1. Wang B, Chen M, Li X. Robust Subcluster Search and Mergence Clustering. IEEE Transactions on Cybernetics 2024; 54:7616-7628. [PMID: 39231063] [DOI: 10.1109/tcyb.2024.3446764]
Abstract
In recent years, graph-based clustering has shown outstanding performance and has been widely investigated. It segments the data similarity graph into multiple subgraphs that serve as the final clusters. Many methods integrate graph learning and graph segmentation into a unified optimization problem to explore the graph structure. However, existing research 1) attempts to derive the final clusters directly from the learned graph, which relies on a highly tight internal distribution within each cluster and is too strict for real-world data; and 2) generally constructs a holistic full-sample graph, which means that outliers are explicitly involved in graph learning and may corrupt the graph quality. To overcome these limitations, a new clustering model called robust subcluster search and mergence (RSSM) is established in this article. Inspired by positive-incentive noise (Pi-Noise), RSSM assumes that outliers are useful for learning the data structure. Treating a few samples with large errors as outliers, RSSM finds subcentroids by searching for an imbalanced residue distribution. In this way, the subcentroids pull the normal samples together and push the outliers far away. Compared with traditional clusters, the subclusters indicated by the subcentroids are more explicit, with the normal samples tightly connected. After that, a subcluster similarity graph is constructed to guide the mergence of subclusters. In summary, RSSM performs the search and mergence of subclusters simultaneously with the help of outliers, and generates a graph that is better suited for clustering. Experiments on several datasets demonstrate the rationality and superiority of RSSM.
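As a rough illustration of the search-then-merge idea described in this abstract (not the RSSM optimization itself, which couples both steps and exploits outliers), the sketch below over-segments the data into subclusters with ordinary k-means, builds a subcluster similarity graph, and merges the subclusters by spectral clustering on that graph. All function names and parameter values are illustrative assumptions.

```python
# Minimal "search subclusters, then merge" sketch (NOT the RSSM algorithm).
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

def subcluster_then_merge(X, n_clusters, n_subclusters=20, gamma=1.0):
    # Step 1: over-segment the data into many tight subclusters.
    km = KMeans(n_clusters=n_subclusters, n_init=10).fit(X)
    subcentroids = km.cluster_centers_
    # Step 2: build a similarity graph between the subcentroids.
    S = rbf_kernel(subcentroids, gamma=gamma)
    # Step 3: merge subclusters by spectral clustering on the subcentroid graph.
    merge_labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed"
    ).fit_predict(S)
    # Map every sample to the merged label of its subcluster.
    return merge_labels[km.labels_]
```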
2. Zheng Q, Yang X, Wang S, An X, Liu Q. Asymmetric double-winged multi-view clustering network for exploring diverse and consistent information. Neural Netw 2024; 179:106563. [PMID: 39111164] [DOI: 10.1016/j.neunet.2024.106563]
Abstract
In unsupervised scenarios, deep contrastive multi-view clustering (DCMVC) has become an active research topic; it aims to mine the potential relationships between different views. Most existing DCMVC algorithms focus on exploring consistency information in the deep semantic features while ignoring the diverse information carried by the shallow features. To fill this gap, this paper proposes a novel multi-view clustering network, termed CodingNet, that explores diverse and consistent information simultaneously. Specifically, instead of using a conventional auto-encoder, we design an asymmetric network structure to extract shallow and deep features separately. Then, by driving the similarity matrix of the shallow features toward the zero matrix, we ensure diversity in the shallow features, thus offering a better description of the multi-view data. Moreover, we propose a dual contrastive mechanism that maintains consistency for the deep features at both the view-feature and pseudo-label levels. The framework's efficacy is validated through extensive experiments on six widely used benchmark datasets, on which it outperforms most state-of-the-art multi-view clustering algorithms.
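A minimal PyTorch sketch of the two objectives this abstract describes, not taken from the authors' code: a diversity term that pushes a similarity matrix of shallow features toward the zero matrix, and an InfoNCE-style contrastive term that keeps deep features of two views consistent. Treating the similarity as cross-view, the temperature value, and all names are assumptions.

```python
# Hedged illustration of the diversity and consistency terms (not CodingNet itself).
import torch
import torch.nn.functional as F

def diversity_loss(shallow_a, shallow_b):
    # Similarity matrix between shallow features of two views; its squared
    # entries are minimized so the matrix is driven toward the zero matrix.
    a = F.normalize(shallow_a, dim=1)
    b = F.normalize(shallow_b, dim=1)
    sim = a @ b.t()
    return (sim ** 2).mean()

def consistency_loss(deep_a, deep_b, temperature=0.5):
    # InfoNCE: matching samples across views are positives, all others negatives.
    a = F.normalize(deep_a, dim=1)
    b = F.normalize(deep_b, dim=1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)
```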
Affiliation(s)
- Qun Zheng: School of Earth and Space Sciences, CMA-USTC Laboratory of Fengyun Remote Sensing, University of Science and Technology of China, Hefei 230026, China
- Xihong Yang: College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
- Siwei Wang: College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
- Xinru An: School of Earth and Space Sciences, CMA-USTC Laboratory of Fengyun Remote Sensing, University of Science and Technology of China, Hefei 230026, China
- Qi Liu: School of Earth and Space Sciences, CMA-USTC Laboratory of Fengyun Remote Sensing, University of Science and Technology of China, Hefei 230026, China
3. Zhang H, Qian F, Shi P, Du W, Tang Y, Qian J, Gong C, Yang J. Generalized Nonconvex Nonsmooth Low-Rank Matrix Recovery Framework With Feasible Algorithm Designs and Convergence Analysis. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:5342-5353. [PMID: 35737613] [DOI: 10.1109/tnnls.2022.3183970]
Abstract
Decomposing a data matrix into a low-rank matrix plus additive matrices is a commonly used strategy in pattern recognition and machine learning. This article mainly studies the alternating direction method of multipliers (ADMM) with two dual variables, which is used to optimize generalized nonconvex nonsmooth low-rank matrix recovery problems. Furthermore, a minimization framework with a feasible optimization procedure is designed along with the theoretical analysis, in which the variable sequences generated by the proposed ADMM are proved to be bounded. Most importantly, it follows from the Bolzano-Weierstrass theorem that there must exist a subsequence converging to a critical point that satisfies the Karush-Kuhn-Tucker (KKT) conditions. Meanwhile, we further establish the local and global convergence properties of the generated sequence by constructing a potential objective function. In particular, the detailed convergence analysis is regarded as one of the core contributions, besides the algorithm designs and the model generality. Finally, numerical simulations and real-world applications are both provided to verify the consistency of the theoretical results, and we also validate the superiority in performance over several closely related solvers on the tasks of image inpainting and subspace clustering.
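For intuition only, here is a compact ADMM loop for the convex special case of low-rank-plus-additive recovery (nuclear norm plus L1, i.e. robust PCA). The paper's framework targets generalized nonconvex nonsmooth surrogates with two dual variables, so this stand-in only illustrates the alternating proximal structure; the parameter choices are illustrative.

```python
# ADMM sketch for the convex robust-PCA special case (NOT the paper's solver).
import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def shrink(M, tau):
    # Soft thresholding: proximal operator of the L1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def rpca_admm(D, lam=None, mu=1.0, n_iter=200):
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)       # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)    # additive (sparse) update
        Y = Y + mu * (D - L - S)                # dual ascent step
    return L, S
```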
4. Xu Z, Tian S, Abhadiomhen SE, Shen XJ. Robust multiview spectral clustering via cooperative manifold and low rank representation induced. Multimedia Tools and Applications 2023; 82:24445-24464. [DOI: 10.1007/s11042-023-14557-0]
5. Yu W, Wu XJ, Xu T, Chen Z, Kittler J. Scalable Affine Multi-view Subspace Clustering. Neural Process Lett 2023. [DOI: 10.1007/s11063-022-11059-2]
6. Mixed structure low-rank representation for multi-view subspace clustering. Appl Intell 2023. [DOI: 10.1007/s10489-023-04474-y]
7. Guo J, Sun Y, Gao J, Hu Y, Yin B. Multi-Attribute Subspace Clustering via Auto-Weighted Tensor Nuclear Norm Minimization. IEEE Transactions on Image Processing 2022; 31:7191-7205. [PMID: 36355733] [DOI: 10.1109/tip.2022.3220949]
Abstract
Self-expressiveness-based subspace clustering methods have received wide attention for unsupervised learning tasks. However, most existing subspace clustering methods treat the data features as a whole and then focus on only a single self-representation. These approaches ignore the intrinsic multi-attribute information embedded in the original data features and result in a one-attribute self-representation. This paper proposes a novel multi-attribute subspace clustering (MASC) model that understands data from multiple attributes. MASC simultaneously learns multiple subspace representations, one for each specific attribute, by exploiting the intrinsic multi-attribute features drawn from the original data. To better capture the high-order correlation among the multi-attribute representations, we stack them into a tensor with low-rank structure and propose the auto-weighted tensor nuclear norm (AWTNN) as a superior low-rank tensor approximation. In particular, the non-convex AWTNN fully accounts for the differences between singular values through implicit and adaptive weight splitting during the AWTNN optimization procedure. We further develop an efficient algorithm to optimize the non-convex and multi-block MASC model and establish convergence guarantees. A more comprehensive subspace representation can be obtained by aggregating these multi-attribute representations, which can then be used to construct a clustering-friendly affinity matrix. Extensive experiments on eight real-world databases reveal that the proposed MASC exhibits superior performance over other subspace clustering methods.
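As a hedged illustration of the auto-weighted treatment of singular values mentioned above (not the authors' AWTNN solver, which operates on a tensor of multi-attribute representations), the snippet below performs a weighted singular value thresholding step in which larger singular values receive smaller adaptive weights and are therefore penalized less. The weighting rule and epsilon are assumptions.

```python
# Weighted singular value thresholding: one building block behind adaptively
# weighted nuclear-norm proximal steps (illustrative only).
import numpy as np

def weighted_svt(M, tau, eps=1e-6):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = 1.0 / (s + eps)                  # adaptive weights: large singular value -> small weight
    s_thr = np.maximum(s - tau * w, 0)   # each singular value gets its own shrinkage amount
    return U @ np.diag(s_thr) @ Vt
```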
8. Zha Z, Yuan X, Wen B, Zhang J, Zhu C. Nonconvex Structural Sparsity Residual Constraint for Image Restoration. IEEE Transactions on Cybernetics 2022; 52:12440-12453. [PMID: 34161250] [DOI: 10.1109/tcyb.2021.3084931]
Abstract
This article proposes a novel nonconvex structural sparsity residual constraint (NSSRC) model for image restoration, which integrates structural sparse representation (SSR) with a nonconvex sparsity residual constraint (NC-SRC). Although SSR is itself powerful for image restoration, combining local sparsity and nonlocal self-similarity in natural images, in this work we explicitly incorporate the novel NC-SRC prior into SSR. The proposed approach provides more effective sparse modeling of natural images by applying a more flexible sparse representation scheme, leading to high-quality restored images. Moreover, an alternating minimization framework is developed to solve the proposed NSSRC-based image restoration problems. Extensive experimental results on image denoising and image deblocking validate that the proposed NSSRC achieves better results than many popular or state-of-the-art methods on several publicly available datasets.
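A toy sketch of the sparsity residual idea underlying this abstract, not the NSSRC algorithm: rather than shrinking a patch's sparse code toward zero, one shrinks the residual between the code and a nonlocal estimate of it. A convex L1 shrinkage stands in for the paper's nonconvex penalty, and all names are hypothetical.

```python
# Illustrative "shrink the residual, not the code" proximal step.
import numpy as np

def soft(x, tau):
    # Soft thresholding (proximal operator of tau * ||.||_1).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0)

def residual_shrink(alpha, beta, tau):
    # alpha: sparse code of a patch; beta: estimate from similar nonlocal patches.
    # Proximal step for tau * ||alpha - beta||_1, a convex stand-in for the
    # nonconvex residual penalty described in the paper.
    return beta + soft(alpha - beta, tau)
```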
9. Wang Q, Liu R, Chen M, Li X. Robust Rank-Constrained Sparse Learning: A Graph-Based Framework for Single View and Multiview Clustering. IEEE Transactions on Cybernetics 2022; 52:10228-10239. [PMID: 33872170] [DOI: 10.1109/tcyb.2021.3067137]
Abstract
Graph-based clustering aims to partition the data according to a similarity graph, and it has shown impressive performance on various kinds of tasks. The quality of the similarity graph largely determines the clustering results, but it is difficult to produce a high-quality graph, especially when the data contain noise and outliers. To solve this problem, we propose a robust rank-constrained sparse learning (RRCSL) method in this article. The L2,1-norm is adopted in the objective function of sparse representation to learn the optimal graph with robustness. To preserve the data structure, we construct an initial graph and search for the learned graph within its neighborhood. By incorporating a rank constraint, the learned graph can be used directly as the cluster indicator, and the final results are obtained without additional postprocessing. In addition, the proposed method can be applied not only to single-view clustering but can also be extended to multiview clustering. Extensive experiments on synthetic and real-world datasets have demonstrated the superiority and robustness of the proposed framework.
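Two small pieces make this abstract concrete, under a simplified setting rather than the RRCSL objective: the L2,1-norm used for robust sample-wise errors, and the fact that once the learned graph has exactly k connected components (equivalently, its Laplacian has rank n - k), the cluster labels can be read directly off the graph without postprocessing. Function names are illustrative.

```python
# L2,1-norm and "labels from a k-component graph" (illustrative only).
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def l21_norm(E):
    # Sum of the L2 norms of the columns: penalizes whole samples, robust to outliers.
    return np.sqrt((E ** 2).sum(axis=0)).sum()

def labels_from_graph(W):
    # If the learned similarity graph W has exactly k connected components,
    # those components are the clusters; no k-means postprocessing is needed.
    n_comp, labels = connected_components(csr_matrix(W), directed=False)
    return n_comp, labels
```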
10. Auto-weighted low-rank representation for clustering. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109063]
11. Chen H, Wang W, Luo S. Coupled block diagonal regularization for multi-view subspace clustering. Data Min Knowl Discov 2022. [DOI: 10.1007/s10618-022-00852-1]
12. Cai Y, Huang JZ, Yin J. A new method to build the adaptive k-nearest neighbors similarity graph matrix for spectral clustering. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.04.030]
13. Yao L, Lu GF. Double structure scaled simplex representation for multi-view subspace clustering. Neural Netw 2022; 151:168-177. [DOI: 10.1016/j.neunet.2022.03.039]
14.
15. Xu J, Hou Y, Ren D, Liu L, Zhu F, Yu M, Wang H, Shao L. STAR: A Structure and Texture Aware Retinex Model. IEEE Transactions on Image Processing 2020; 29:5022-5037. [PMID: 32167892] [DOI: 10.1109/tip.2020.2974060]
Abstract
Retinex theory was developed mainly to decompose an image into illumination and reflectance components by analyzing local image derivatives. In this theory, larger derivatives are attributed to changes in reflectance, while smaller derivatives arise in the smooth illumination. In this paper, we utilize exponentiated local derivatives (with an exponent γ) of an observed image to generate its structure map and texture map. The structure map is produced by amplifying the derivatives with γ > 1, while the texture map is generated by shrinking them with γ < 1. To this end, we design exponential filters for the local derivatives and demonstrate their capability of extracting accurate structure and texture maps, as influenced by the choice of exponent γ. The extracted structure and texture maps are employed to regularize the illumination and reflectance components in the Retinex decomposition. A novel Structure and Texture Aware Retinex (STAR) model is further proposed for illumination and reflectance decomposition of a single image. We solve the STAR model by an alternating optimization algorithm, in which each sub-problem is transformed into a vectorized least squares regression with a closed-form solution. Comprehensive experiments on commonly tested datasets demonstrate that the proposed STAR model produces better quantitative and qualitative performance than previous competing methods on illumination and reflectance decomposition, low-light image enhancement, and color correction. The code is publicly available at https://github.com/csjunxu/STAR.
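A minimal sketch of the exponentiated local derivative idea (not the full STAR decomposition, which uses these maps to regularize a Retinex model): normalized gradient magnitudes raised to a power gamma > 1 emphasize strong edges (structure), while gamma < 1 flattens them toward texture. The normalization step and helper name are assumptions.

```python
# Exponentiated local derivative map (illustrative sketch, not the STAR filters).
import numpy as np

def exponentiated_derivative_map(img, gamma):
    gy, gx = np.gradient(img.astype(float))   # local derivatives
    mag = np.sqrt(gx ** 2 + gy ** 2)
    mag = mag / (mag.max() + 1e-8)             # normalize to [0, 1]
    return mag ** gamma                        # gamma > 1 -> structure map, gamma < 1 -> texture map
```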