1. Lin J, Tang Y, Wang J, Zhang W. Constrained Maximum Cross-Domain Likelihood for Domain Generalization. IEEE Trans Neural Netw Learn Syst 2025;36:2013-2027. [PMID: 37440378] [DOI: 10.1109/tnnls.2023.3292242]
Abstract
Domain generalization, a topic of growing interest, aims to learn from multiple source domains a model that is expected to perform well on unseen test domains. Great efforts have been made to learn domain-invariant features by aligning distributions across domains. However, existing works are often designed under relaxed conditions that are generally hard to satisfy, and they fail to realize the desired joint distribution alignment. In this article, we propose a novel domain generalization method that originates from an intuitive idea: a domain-invariant classifier can be learned by minimizing the Kullback-Leibler (KL) divergence between posterior distributions from different domains. To enhance the generalizability of the learned classifier, we formalize the optimization objective as an expectation computed over the ground-truth marginal distribution. This formulation has two obvious deficiencies: the side effect of entropy increase in the KL divergence, and the unavailability of the ground-truth marginal distribution. For the former, we introduce a term named maximum in-domain likelihood to maintain the discriminability of the learned domain-invariant representation space. For the latter, we approximate the ground-truth marginal distribution with the source domains under a reasonable convex-hull assumption. Finally, a constrained maximum cross-domain likelihood (CMCL) optimization problem is deduced; solving it naturally aligns the joint distributions. An alternating optimization strategy is carefully designed to approximately solve this optimization problem. Extensive experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home, and miniDomainNet, highlight the superior performance of our method.
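A minimal PyTorch sketch of the idea (hypothetical names, not the authors' released code), assuming the two mini-batches are drawn so that corresponding rows share a class: a symmetric KL term aligns the per-domain posteriors while ordinary cross-entropy plays the role of the maximum in-domain likelihood.

```python
import torch
import torch.nn.functional as F

def cmcl_style_loss(logits_a, logits_b, labels_a, labels_b, beta=1.0):
    """Sketch only: cross-entropy keeps the representation discriminative;
    the symmetric KL term pulls the two domains' posteriors together."""
    ce = F.cross_entropy(logits_a, labels_a) + F.cross_entropy(logits_b, labels_b)
    log_pa = F.log_softmax(logits_a, dim=1)
    log_pb = F.log_softmax(logits_b, dim=1)
    kl = (F.kl_div(log_pa, log_pb.exp(), reduction="batchmean")
          + F.kl_div(log_pb, log_pa.exp(), reduction="batchmean"))
    return ce + beta * kl
```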
2. Tao J, Dan Y, Zhou D. Local domain generalization with low-rank constraint for EEG-based emotion recognition. Front Neurosci 2023;17:1213099. [PMID: 38027525] [PMCID: PMC10662311] [DOI: 10.3389/fnins.2023.1213099]
Abstract
As an important branch of affective computing, emotion recognition based on electroencephalography (EEG) faces a long-standing challenge due to individual diversity. To overcome this challenge, domain adaptation (DA) or domain generalization (i.e., DA without the target domain in the training stage) techniques have been introduced into EEG-based emotion recognition to eliminate the distribution discrepancy between different subjects. Previous DA or domain generalization (DG) methods mainly focus on aligning the global distribution shift between source and target domains, without considering the correlations between the subdomains within the source domain and the target domain of interest. Because ignoring this fine-grained distribution information in the source may still limit DG performance on EEG datasets with multimodal structures, multiple patches (or subdomains) should be reconstructed from the source domain, on which multiple classifiers can be learned collaboratively. Accurately aligning relevant subdomains by excavating multiple distribution patterns within the source domain is expected to further boost the learning performance of DG/DA. We therefore propose a novel DG method for EEG-based emotion recognition, Local Domain Generalization with low-rank constraint (LDG). Specifically, the source domain is first partitioned into multiple local domains, each of which contains one positive sample together with its positive neighbors and its k₂ negative neighbors. Multiple subject-invariant classifiers on different subdomains are then co-learned in a unified framework by minimizing a local regression loss with low-rank regularization, which accounts for the knowledge shared among local domains. In the inference stage, the learned local classifiers are discriminatively selected according to their adaptation importance. Extensive experiments are conducted on two benchmark databases (DEAP and SEED) under two cross-validation evaluation protocols, i.e., cross-subject within-dataset and cross-dataset within-session. The experimental results under 5-fold cross-validation demonstrate the superiority of the proposed method over several state-of-the-art methods.
Affiliation(s)
- Jianwen Tao, Institute of Artificial Intelligence Application, Ningbo Polytechnic, Zhejiang, China
- Yufang Dan, Institute of Artificial Intelligence Application, Ningbo Polytechnic, Zhejiang, China
- Di Zhou, Industrial Technological Institute of Intelligent Manufacturing, Sichuan University of Arts and Science, Dazhou, China
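A toy PyTorch sketch of the co-learning term (W_list, X_list, Y_list are hypothetical placeholders for per-local-domain classifier weights, features, and one-hot labels): local regression losses are summed, and the stacked weights are penalized with the nuclear norm so that the local classifiers share low-rank structure.

```python
import torch

def ldg_style_objective(W_list, X_list, Y_list, lam=0.1):
    # Local regression loss, one term per local domain.
    loss = sum(((X @ W - Y) ** 2).mean() for W, X, Y in zip(W_list, X_list, Y_list))
    # Nuclear norm of the stacked weights encourages shared low-rank structure.
    W_stack = torch.cat(W_list, dim=1)  # (n_features, n_classes * n_local_domains)
    return loss + lam * torch.linalg.matrix_norm(W_stack, ord="nuc")
```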
3. Zhang H, Guo L, Wang J, Ying S, Shi J. Multi-View Feature Transformation Based SVM+ for Computer-Aided Diagnosis of Liver Cancers With Ultrasound Images. IEEE J Biomed Health Inform 2023;27:1512-1523. [PMID: 37018255] [DOI: 10.1109/jbhi.2022.3233717]
Abstract
It is feasible to improve the performance of B-mode ultrasound (BUS)-based computer-aided diagnosis (CAD) for liver cancers by transferring knowledge from contrast-enhanced ultrasound (CEUS) images. In this work, we propose a novel feature-transformation-based support vector machine plus (SVM+) algorithm for this transfer learning task, named FSVM+, which introduces feature transformation into the SVM+ framework. Specifically, the transformation matrix in FSVM+ is learned to minimize the radius of the enclosing ball of all samples, while the SVM+ is used to maximize the margin between the two classes. Moreover, to capture more transferable information from multiple CEUS phase images, a multi-view FSVM+ (MFSVM+) is further developed, which transfers knowledge from CEUS images of three phases, i.e., the arterial phase, portal venous phase, and delayed phase, to the BUS-based CAD model. MFSVM+ assigns an appropriate weight to each CEUS image by calculating the maximum mean discrepancy between a pair of BUS and CEUS images, thereby capturing the relationship between the source and target domains. The experimental results on a bi-modal ultrasound liver cancer dataset demonstrate that MFSVM+ achieves the best classification accuracy of 88.24±1.28%, sensitivity of 88.32±2.88%, and specificity of 88.17±2.91%, suggesting its effectiveness in improving the diagnostic accuracy of BUS-based CAD.
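The MMD weighting is the most self-contained piece; a NumPy sketch of the standard RBF-kernel estimator (the function name and the softmax weighting are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def rbf_mmd(X, Y, sigma=1.0):
    """Biased MMD^2 estimate between feature sets X (BUS) and Y (one CEUS phase)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

# One plausible weighting: phases closer to BUS (smaller MMD) receive larger
# fusion weights, e.g. w = softmax(-[mmd_arterial, mmd_portal, mmd_delayed]).
```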
4. Khan A, Maji P. Multi-Manifold Optimization for Multi-View Subspace Clustering. IEEE Trans Neural Netw Learn Syst 2022;33:3895-3907. [PMID: 33606638] [DOI: 10.1109/tnnls.2021.3054789]
Abstract
The meaningful patterns embedded in high-dimensional multi-view data sets typically have a much more compact representation that often lies close to a low-dimensional manifold. Identifying hidden structures in such data mainly depends on properly modeling the geometry of these low-dimensional manifolds. In this regard, this article presents a manifold-optimization-based integrative clustering algorithm for multi-view data. To identify consensus clusters, the algorithm constructs a joint graph Laplacian that contains denoised cluster information from the individual views. It optimizes a joint clustering objective while reducing the disagreement between the cluster structures conveyed by the joint and individual views. The optimization alternates between the k-means and Stiefel manifolds. The Stiefel manifold helps to model the nonlinearities and differing cluster structures within the individual views, whereas the k-means manifold elucidates the best-fit joint cluster structure of the data. A gradient-based movement is performed separately on the manifold of each view so that individual nonlinearity is preserved while shared cluster information is sought. Convergence of the proposed algorithm is established over the manifold, and an asymptotic convergence bound quantifies theoretically how fast the sequence of iterates generated by the algorithm converges to an optimal solution. Integrative clustering on benchmark and multi-omics cancer data sets demonstrates that the proposed algorithm outperforms state-of-the-art multi-view clustering approaches.
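A NumPy sketch of one Riemannian gradient step on the Stiefel manifold, the kind of per-view movement the alternating scheme performs (a generic textbook step under our own naming, not the paper's exact update):

```python
import numpy as np

def stiefel_step(U, G, lr=0.01):
    """One gradient step on {U : U^T U = I}: project the Euclidean gradient G
    onto the tangent space at U, then retract with a QR factorization."""
    UtG = U.T @ G
    rgrad = G - U @ (UtG + UtG.T) / 2.0   # tangent-space projection
    Q, R = np.linalg.qr(U - lr * rgrad)   # QR retraction back to the manifold
    return Q * np.sign(np.diag(R))        # sign-fix columns for uniqueness
```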
5. Shared Dictionary Learning Via Coupled Adaptations for Cross-Domain Classification. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10967-7]
6. Multi-dictionary induced low-rank representation with multi-manifold regularization. Appl Intell 2022. [DOI: 10.1007/s10489-022-03446-y]
7. ConvNet combined with minimum weighted random search algorithm for improving the domain shift problem of image recognition model. Appl Intell 2021. [DOI: 10.1007/s10489-021-02767-8]
8. Dissanayake T, Fernando T, Denman S, Ghaemmaghami H, Sridharan S, Fookes C. Domain Generalization in Biosignal Classification. IEEE Trans Biomed Eng 2021;68:1978-1989. [PMID: 33338009] [DOI: 10.1109/tbme.2020.3045720]
Abstract
OBJECTIVE: When training machine learning models, we often assume that the training data and evaluation data are sampled from the same distribution. However, this assumption is violated when the model is evaluated on another unseen but similar database, even if that database contains the same classes. This problem is caused by domain shift and can be addressed by two approaches: domain adaptation and domain generalization. Put simply, domain adaptation methods can access data from the unseen domains during training, whereas in domain generalization the unseen data are not available during training. Hence, domain generalization concerns models that perform well on inaccessible, domain-shifted data.
METHOD: Our proposed domain generalization method represents an unseen domain using a set of known basis domains, after which we classify the unseen domain using classifier fusion. To demonstrate our system, we employ a collection of heart-sound databases that contain normal and abnormal sounds (classes).
RESULTS: Our proposed classifier fusion method achieves accuracy gains of up to 16% on four completely unseen domains.
CONCLUSION: Recognizing the complexity induced by the inherent temporal nature of biosignal data, the two-stage method proposed in this study effectively simplifies the whole process of domain generalization while demonstrating good results on both the unseen domains and the adopted basis domains.
SIGNIFICANCE: To the best of our knowledge, this is the first study to investigate domain generalization for biosignal data. Our proposed learning strategy can be used to effectively learn domain-relevant features while being aware of the class differences in the data.
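A minimal sketch of the fusion stage under assumed names (the paper derives basis-domain weights from its two-stage representation; here they are simply passed in):

```python
import numpy as np

def fuse_basis_classifiers(probs_per_basis, domain_weights):
    """probs_per_basis: list of (n_samples, n_classes) posterior arrays,
    one per basis-domain classifier; domain_weights: affinity of the
    unseen domain to each basis domain."""
    probs = np.stack(probs_per_basis)       # (n_basis, n_samples, n_classes)
    w = np.asarray(domain_weights, float)
    w /= w.sum()                            # convex combination over basis domains
    fused = np.tensordot(w, probs, axes=1)  # weighted average of posteriors
    return fused.argmax(axis=1)
```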
9. Chen J, Wu X, Duan L, Chen L. Sequential Instance Refinement for Cross-Domain Object Detection in Images. IEEE Trans Image Process 2021;30:3970-3984. [PMID: 33769933] [DOI: 10.1109/tip.2021.3066904]
Abstract
Cross-domain object detection in images has attracted increasing attention in recent years; it aims to adapt a detection model learned from existing labeled images (source domain) to newly collected unlabeled ones (target domain). Existing methods usually address the cross-domain object detection problem through direct feature alignment between the source and target domains at the image level, the instance level (i.e., region proposals), or both. However, we have observed that directly aligning the features of all object instances from the two domains often results in negative transfer, owing to (1) outlier target instances that contain confusing objects not belonging to any category of the source domain and are thus hard for detectors to capture, and (2) low-relevance source instances that are statistically very different from target instances even though their objects are from the same category. With this in mind, we propose a reinforcement-learning-based method, coined sequential instance refinement, in which two agents learn to progressively refine both source and target instances by taking sequential actions that remove outlier target instances and low-relevance source instances step by step. Extensive experiments on several benchmark datasets demonstrate the superior performance of our method over existing state-of-the-art baselines for cross-domain object detection.
10. Liang Y, Li H, Guo B, Yu Z, Zheng X, Samtani S, Zeng DD. Fusion of heterogeneous attention mechanisms in multi-view convolutional neural network for text classification. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2020.10.021]
11. Zhao W, Xu C, Guan Z, Liu Y. Multiview Concept Learning Via Deep Matrix Factorization. IEEE Trans Neural Netw Learn Syst 2021;32:814-825. [PMID: 32275617] [DOI: 10.1109/tnnls.2020.2979532]
Abstract
Multiview representation learning (MVRL) leverages information from multiple views to obtain a common representation summarizing the consistency and complementarity in multiview data. Most previous matrix-factorization-based MVRL methods are shallow models that neglect complex hierarchical information. The recently proposed deep multiview factorization models cannot explicitly capture consistency and complementarity in multiview data. We present the deep multiview concept learning (DMCL) method, which hierarchically factorizes the multiview data, explicitly models consistent and complementary information, and captures semantic structures at the highest abstraction level. We explore two variants of the DMCL framework, DMCL-L and DMCL-N, with linear and nonlinear transformations between adjacent layers, respectively. We propose two block-coordinate-descent-based optimization methods for DMCL-L and DMCL-N. We verify the effectiveness of DMCL on three real-world data sets for both clustering and classification tasks.
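A toy sketch of the hierarchical factorization X ≈ W1 W2 H that DMCL builds on, optimized here with plain gradient descent for brevity (the paper uses block coordinate descent and adds consistency/complementarity terms omitted here):

```python
import torch

def two_layer_factorization(X, dims=(64, 16), steps=500, lr=0.05):
    n, m = X.shape
    W1 = torch.randn(n, dims[0], requires_grad=True)
    W2 = torch.randn(dims[0], dims[1], requires_grad=True)
    H = torch.randn(dims[1], m, requires_grad=True)   # highest-level concepts
    opt = torch.optim.Adam([W1, W2, H], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((X - W1 @ W2 @ H) ** 2).mean()        # reconstruction error
        loss.backward()
        opt.step()
    return W1, W2, H
```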
13. Zhou R, Chang X, Shi L, Shen YD, Yang Y, Nie F. Person Reidentification via Multi-Feature Fusion With Adaptive Graph Learning. IEEE Trans Neural Netw Learn Syst 2020;31:1592-1601. [PMID: 31283511] [DOI: 10.1109/tnnls.2019.2920905]
Abstract
The goal of person reidentification (Re-ID) is to identify a given pedestrian across a network of nonoverlapping surveillance cameras. Most existing works follow the supervised learning paradigm, which requires pairwise labeled training data for each pair of cameras. However, this limits their scalability to real-world applications where abundant unlabeled data are available. To address this issue, we propose a multi-feature fusion model with adaptive graph learning for unsupervised Re-ID. Our model seeks a consistent graph structure over pedestrians by drawing on the specific information carried by each feature descriptor. Specifically, we incorporate multi-feature dictionary learning and adaptive multi-feature graph learning into a unified learning model, such that the learned dictionaries are discriminative and the subsequent graph structure learning is accurate. An alternating optimization algorithm with proven convergence is developed to solve the final optimization objective. Extensive experiments on four benchmark data sets demonstrate the superiority and effectiveness of the proposed method.
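A NumPy sketch of adaptive graph learning in the "adaptive neighbors" style this line of work commonly builds on (a single-feature simplification under assumed names; the paper learns the graph jointly across multiple features and dictionaries):

```python
import numpy as np

def adaptive_graph(X, k=5):
    """Each sample gets closed-form, simplex-constrained weights over its
    k nearest neighbors; weights shrink linearly with squared distance."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)               # forbid self-loops
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[: k + 1]       # k neighbors plus one spare
        d = d2[i, idx]
        s = (d[k] - d[:k]) / max(k * d[k] - d[:k].sum(), 1e-12)
        S[i, idx[:k]] = s
    return (S + S.T) / 2.0                     # symmetrize for a valid graph
```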
14. Shi Y, Suk HI, Gao Y, Lee SW, Shen D. Leveraging Coupled Interaction for Multimodal Alzheimer's Disease Diagnosis. IEEE Trans Neural Netw Learn Syst 2020;31:186-200. [PMID: 30908241] [DOI: 10.1109/tnnls.2019.2900077]
Abstract
As the world's population ages, accurate computer-aided diagnosis of Alzheimer's disease (AD) in its early stage has come to be regarded as a crucial step in neurodegeneration care. Previous methods extracted low-level features from neuroimaging data and treated computer-aided diagnosis as a classification problem, ignoring latent featurewise relations. However, multiple brain regions are anatomically and functionally interlinked according to the current neuroscience perspective. Thus, it is reasonable to assume that the features extracted from different brain regions are related to each other to some extent. Also, the complementary information between different neuroimaging modalities can benefit multimodal fusion. To this end, we leverage coupled interactions at the feature level and the modality level for diagnosis. First, we propose capturing the feature-level coupled interaction using a coupled feature representation. Then, to model the modality-level coupled interaction, we present two novel methods: 1) coupled boosting (CB), which models the correlation of pairwise coupled diversity over both inconsistently and incorrectly classified samples between different modalities; and 2) the coupled metric ensemble (CME), which learns an informative feature projection from different modalities by integrating the intrarelations and interrelations of the training samples. We systematically evaluated our methods on the Alzheimer's Disease Neuroimaging Initiative data set. Compared with baseline learning-based methods and state-of-the-art methods developed specifically for AD/MCI (mild cognitive impairment) diagnosis, our methods achieved the best performance, with accuracies of 95.0% and 80.7% (CB) and 94.9% and 79.9% (CME) for AD/NC (normal control) and MCI/NC identification, respectively.
15. Visual Cognition-Inspired Multi-View Vehicle Re-Identification via Laplacian-Regularized Correlative Sparse Ranking. Cognit Comput 2019. [DOI: 10.1007/s12559-019-09687-3]
16. Ding Z, Fu Y. Deep Transfer Low-Rank Coding for Cross-Domain Learning. IEEE Trans Neural Netw Learn Syst 2019;30:1768-1779. [PMID: 30371396] [DOI: 10.1109/tnnls.2018.2874567]
Abstract
Transfer learning has attracted great attention for facilitating sparsely labeled or unlabeled target learning by leveraging a previously well-established source domain through knowledge transfer. Recent transfer learning work attempts to build deep architectures that better fight off cross-domain divergences by extracting more effective features. However, generalizability decreases greatly as the domain mismatch enlarges, particularly at the top layers. In this paper, we develop a novel deep transfer low-rank coding scheme based on deep convolutional neural networks, in which we investigate multilayer low-rank coding at the top task-specific layers. Specifically, multilayer common dictionaries shared across the two domains are obtained to bridge the domain gap, so that more enriched domain-invariant knowledge can be captured in a layerwise fashion. With rank minimization on the new codings, our model preserves the global structures across source and target, and thus similar samples from the two domains tend to gather together for effective knowledge transfer. Furthermore, domain-wise and class-wise adaptation terms are integrated to guide the coding optimization in a semisupervised manner, so that the marginal and conditional disparities of the two domains are alleviated. Experimental results on three visual domain adaptation benchmarks verify the effectiveness of our approach in boosting recognition performance on the target domain, in comparison with other state-of-the-art deep transfer learning methods.
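The rank-minimization step reduces to the classic singular value thresholding operator; a NumPy sketch of that standard proximal formula (not the authors' full solver):

```python
import numpy as np

def svt(Z, tau):
    """Closed-form solution of min_A tau*||A||_* + 0.5*||A - Z||_F^2,
    the proximal step behind low-rank coding updates."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt  # soft-threshold singular values
```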
17. Shi W, Gong Y, Tao X, Cheng D, Zheng N. Fine-Grained Image Classification Using Modified DCNNs Trained by Cascaded Softmax and Generalized Large-Margin Losses. IEEE Trans Neural Netw Learn Syst 2019;30:683-694. [PMID: 30047915] [DOI: 10.1109/tnnls.2018.2852721]
Abstract
We develop a fine-grained image classifier using a general deep convolutional neural network (DCNN). We improve the fine-grained image classification accuracy of a DCNN model in two ways. First, to better model the h-level hierarchical label structure of the fine-grained image classes in the given training data set, we introduce h fully connected (fc) layers to replace the top fc layer of a given DCNN model and train them with a cascaded softmax loss. Second, we propose a novel loss function, the generalized large-margin (GLM) loss, to make the given DCNN model explicitly exploit the hierarchical label structure and the similarity regularities of the fine-grained image classes. The GLM loss not only reduces the between-class similarity and within-class variance of the learned features but also makes subclasses belonging to the same coarse class more similar to each other in the feature space than those belonging to different coarse classes. Moreover, the proposed fine-grained image classification framework is model-independent and can be applied to any DCNN structure. Comprehensive experimental evaluations of several general DCNN models (AlexNet, GoogLeNet, and VGG) on three benchmark data sets (Stanford Cars, FGVC-Aircraft, and CUB-200-2011) for the fine-grained image classification task demonstrate the effectiveness of our method.
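A deliberately simplified PyTorch sketch of the large-margin idea for a two-level hierarchy (our own batch-mean formulation, not the paper's exact GLM loss; it assumes the batch contains all three pair types):

```python
import torch
import torch.nn.functional as F

def hierarchical_margin_loss(feats, fine_y, coarse_y, m1=1.0, m2=2.0):
    d = torch.cdist(feats, feats)                    # pairwise feature distances
    same_fine = fine_y[:, None] == fine_y[None, :]
    same_coarse = coarse_y[:, None] == coarse_y[None, :]
    eye = torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    pos = d[same_fine & ~eye].mean()                 # same fine class
    mid = d[same_coarse & ~same_fine].mean()         # same coarse, different fine
    neg = d[~same_coarse].mean()                     # different coarse class
    # Enforce: fine-class pairs < coarse-class pairs < cross-coarse pairs.
    return F.relu(pos - mid + m1) + F.relu(mid - neg + m2)
```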
18. Ding Z, Nasrabadi NM, Fu Y. Semi-supervised Deep Domain Adaptation via Coupled Neural Networks. IEEE Trans Image Process 2018;27:5214-5224. [PMID: 29994676] [DOI: 10.1109/tip.2018.2851067]
Abstract
Domain adaptation is a promising technique for settings with limited or no labeled target data, as it borrows well-labeled knowledge from auxiliary source data. Recently, researchers have exploited multi-layer structures for discriminative feature learning to reduce the domain discrepancy. However, there has been limited research on simultaneously building a deep structure and a discriminative classifier over both the labeled source and the unlabeled target. In this paper, we propose a semi-supervised deep domain adaptation framework in which the multi-layer feature extractor and a multi-class classifier are jointly learned to benefit from each other. Specifically, we develop a novel semi-supervised class-wise adaptation scheme that fights off the conditional distribution mismatch between the two domains by assigning a probabilistic label, i.e., multiple class labels with different probabilities, to each target sample. Furthermore, a multi-class classifier is simultaneously trained on labeled source and unlabeled target samples in a semi-supervised fashion. In this way, the deep structure can alleviate the domain divergence and enhance feature transferability. Experimental evaluations on several standard cross-domain benchmarks verify the superiority of our proposed approach.
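A minimal sketch of the probabilistic-label idea with assumed names (self-training on softened posteriors stands in for the paper's class-wise adaptation; temp and the unweighted sum are illustrative choices):

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(src_logits, src_labels, tgt_logits, temp=2.0):
    sup = F.cross_entropy(src_logits, src_labels)        # labeled source term
    with torch.no_grad():                                # soft class distribution
        soft = F.softmax(tgt_logits / temp, dim=1)       # per target sample
    unsup = -(soft * F.log_softmax(tgt_logits, dim=1)).sum(dim=1).mean()
    return sup + unsup
```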
19. Ding Z, Fu Y. Robust Multiview Data Analysis Through Collective Low-Rank Subspace. IEEE Trans Neural Netw Learn Syst 2018;29:1986-1997. [PMID: 28436903] [DOI: 10.1109/tnnls.2017.2690970]
Abstract
Multiview data are abundant in real-world applications, since various viewpoints and multiple sensors seek to represent data in a better way. Conventional multiview learning methods learn multiple view-specific transformations while assuming that the view information of training and test data is available in advance. However, they fail when no prior knowledge of the probe data's view is available, since the correct view-specific projections cannot be used to extract effective feature representations. In this paper, we develop a collective low-rank subspace (CLRS) algorithm to deal with this problem in multiview data analysis. CLRS reduces the semantic gap across multiple views by seeking a view-free low-rank projection shared by the multiple view-specific transformations. Moreover, we exploit low-rank reconstruction to build a bridge between the view-specific features and the view-free features transformed by CLRS. A supervised cross-view regularizer further couples within-class data across different views to make the learned collective subspace more discriminative. These components make the algorithm flexible in the challenging setting where no prior knowledge of the probe data's view is given. To that end, two different experimental settings on several multiview benchmarks are designed to evaluate the proposed approach. Experimental results verify the effectiveness of the proposed method in comparison with state-of-the-art algorithms.