1. Qian Y, Zou Q, Zhao M, Liu Y, Guo F, Ding Y. scRNMF: An imputation method for single-cell RNA-seq data by robust and non-negative matrix factorization. PLoS Comput Biol 2024; 20:e1012339. PMID: 39116191; PMCID: PMC11338450; DOI: 10.1371/journal.pcbi.1012339.
Abstract
Single-cell RNA sequencing (scRNA-seq) has emerged as a powerful tool in genomics research, enabling the analysis of gene expression at the individual cell level. However, scRNA-seq data often suffer from a high rate of dropouts, where certain genes fail to be detected in specific cells due to technical limitations. These missing data can introduce biases and hinder downstream analysis. To overcome this challenge, the development of effective imputation methods has become crucial in scRNA-seq data analysis. Here, we propose an imputation method based on robust and non-negative matrix factorization (scRNMF). Unlike other matrix factorization algorithms, scRNMF integrates two loss functions: L2 loss and C-loss. The L2 loss function is highly sensitive to outliers, which can introduce substantial errors, so we use the C-loss function when dealing with zero values in the raw data. The primary advantage of the C-loss function is that it imposes a smaller penalty on larger errors, which results in more robust factorization when handling outliers. Various datasets of different sizes and zero rates are used to evaluate the performance of scRNMF against other state-of-the-art methods. Our method demonstrates its power and stability as a tool for imputation of scRNA-seq data.
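The abstract names the two losses but not their exact forms; the sketch below is a minimal illustration of how such a mixed objective could look, assuming the common correntropy-induced form C(e) = 1 - exp(-e^2 / (2 sigma^2)) for the C-loss, applied only to zero entries of the raw matrix, with plain squared error elsewhere. All function and parameter names are hypothetical.

```python
import numpy as np

def mixed_nmf_loss(X, W, H, sigma=1.0):
    """Hypothetical mixed objective: L2 loss on observed (non-zero) entries,
    bounded C-loss on zero entries, assuming C(e) = 1 - exp(-e^2 / (2 sigma^2))."""
    E = X - W @ H                      # residual matrix
    zero_mask = (X == 0)               # dropout candidates
    l2_term = 0.5 * np.sum(E[~zero_mask] ** 2)
    c_term = np.sum(1.0 - np.exp(-E[zero_mask] ** 2 / (2.0 * sigma ** 2)))
    return l2_term + c_term

# toy example
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(50, 30)).astype(float)   # sparse count-like matrix
W = rng.random((50, 5)); H = rng.random((5, 30))
print(mixed_nmf_loss(X, W, H))
```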
Affiliation(s)
- Yuqing Qian: Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China; Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
- Quan Zou: Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China; Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
- Mengyuan Zhao: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yi Liu: Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China; Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
- Fei Guo: School of Computer Science and Engineering, Central South University, Changsha, China
- Yijie Ding: Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
2. Zhang X, Wang Y, Zhu L, Chen H, Li H, Wu L. Robust variable structure discovery based on tilted empirical risk minimization. Appl Intell 2023. DOI: 10.1007/s10489-022-04409-z.
3. Gan J, Li J, Xie Y. Robust SVM for Cost-Sensitive Learning. Neural Process Lett 2022. DOI: 10.1007/s11063-021-10480-3.
4. Zhu Y, Tan M, Wei J. Robust Multi-view Classification with Sample Constraints. Neural Process Lett 2022. DOI: 10.1007/s11063-021-10483-0.
5. Ma B, Zalmai N, Loeliger HA. Smoothed-NUV Priors for Imaging. IEEE Trans Image Process 2022; 31:4663-4678. PMID: 35786555; DOI: 10.1109/tip.2022.3186749.
Abstract
Variations of L1-regularization, including, in particular, total variation regularization, have hugely improved computational imaging. However, sharper edges and fewer staircase artifacts can be achieved with convex-concave regularizers. We present a new class of such regularizers using normal priors with unknown variance (NUV), which include smoothed versions of the logarithm function and smoothed versions of Lp norms with p ≤ 1. All NUV priors allow variational representations that lead to efficient algorithms for image reconstruction by iterative reweighted descent. A preferred such algorithm is iterative reweighted coordinate descent, which has no parameters (in particular, no step size to control) and is empirically robust and efficient. The proposed priors and algorithms are demonstrated with applications to tomography. We also note that the proposed priors come with built-in edge detection, which is demonstrated by an application to image segmentation.
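The NUV machinery itself is not spelled out in the abstract; as a rough illustration of the reweighting idea behind such algorithms, the sketch below runs iteratively reweighted least squares (not the paper's coordinate-descent variant) on a separable denoising problem with a smoothed Lp penalty, p ≤ 1. The penalty form, the closed-form per-coordinate update, and all parameter names are assumptions.

```python
import numpy as np

def irls_denoise(y, lam=0.5, p=0.5, eps=1e-3, iters=50):
    """Iteratively reweighted minimization of
       0.5*||x - y||^2 + lam * sum_i (x_i^2 + eps)^(p/2),  p <= 1,
    a smoothed Lp penalty used here purely as an illustration."""
    x = y.copy()
    for _ in range(iters):
        # quadratic upper bound: penalty is majorized by 0.5 * w_i * x_i^2 + const
        w = p * (x ** 2 + eps) ** (p / 2.0 - 1.0)
        x = y / (1.0 + lam * w)        # exact minimizer of the reweighted quadratic
    return x

y = np.array([0.05, -0.02, 3.0, 0.01, -2.5])  # mostly-small signal with two large entries
print(irls_denoise(y))                        # small entries shrink, large ones survive
```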
6. Xu J, Noo F. Convex optimization algorithms in medical image reconstruction - in the age of AI. Phys Med Biol 2022; 67. PMID: 34757943; PMCID: PMC10405576; DOI: 10.1088/1361-6560/ac3842.
Abstract
The past decade has seen the rapid growth of model-based image reconstruction (MBIR) algorithms, which are often applications or adaptations of convex optimization algorithms from the optimization community. We review some state-of-the-art algorithms that have enjoyed wide popularity in medical image reconstruction, emphasize known connections between different algorithms, and discuss practical issues such as computation and memory cost. More recently, deep learning (DL) has forayed into medical imaging, where the latest developments try to exploit the synergy between DL and MBIR to elevate MBIR's performance. We present existing approaches and emerging trends in DL-enhanced MBIR methods, with particular attention to the underlying role of convexity and convex algorithms in network architecture. We also discuss how convexity can be employed to improve the generalizability and representation power of DL networks in general.
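As a concrete, generic example of the kind of convex algorithm such reviews cover, the sketch below implements ISTA (proximal gradient descent) for an l1-regularized least-squares reconstruction; it is not a method proposed in this review, and the problem setup is purely illustrative.

```python
import numpy as np

def ista(A, b, lam=0.1, step=None, iters=200):
    """ISTA (proximal gradient) for min 0.5*||Ax - b||^2 + lam*||x||_1 --
    a generic example of convex MBIR-style algorithms, not specific to this review."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                      # gradient of the smooth term
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding prox
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(30, 60)); x_true = np.zeros(60); x_true[[3, 40]] = [1.5, -2.0]
b = A @ x_true
print(np.round(ista(A, b)[[3, 40]], 2))
```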
Affiliation(s)
- Jingyan Xu: Department of Radiology, Johns Hopkins University, Baltimore, MD, United States of America
- Frédéric Noo: Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT, United States of America
7. Jiao CN, Liu JX, Wang J, Shang J, Zheng CH. Visualization and Analysis of Single-cell RNA-seq Data by Maximizing Correntropy-based Non-negative Low Rank Representation. IEEE J Biomed Health Inform 2021; 26:1872-1882. PMID: 34495855; DOI: 10.1109/jbhi.2021.3110766.
Abstract
The exploration of single-cell RNA-sequencing (scRNA-seq) technology generates a new perspective for analyzing biological problems. One of the major applications of scRNA-seq data is to discover subtypes of cells by cell clustering. Nevertheless, it is challenging for traditional methods to handle scRNA-seq data with high levels of technical noise and notorious dropouts. To better analyze single-cell data, a novel scRNA-seq data analysis model called Maximum correntropy criterion based Non-negative and Low Rank Representation (MccNLRR) is introduced. Specifically, the maximum correntropy criterion, as an effective loss function, is more robust to the high noise and large outliers in the data. Moreover, low-rank representation has proven to be a powerful tool for capturing the global and local structures of data, so important information, such as the similarity of cells in the subspace, is also extracted by it. An iterative algorithm based on half-quadratic optimization and the alternating direction method is then developed to solve the complex optimization problem. Before the experiments, we also analyze the convergence and robustness of MccNLRR. Finally, the results of cell clustering, visualization analysis, and gene marker selection on scRNA-seq data reveal that MccNLRR can distinguish cell subtypes accurately and robustly.
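The abstract does not reproduce the correntropy formula; the sketch below illustrates the usual correntropy-induced (Welsch-type) loss that maximum-correntropy criteria build on and why it is robust: each residual contributes a bounded amount, so outliers cannot dominate the fit. The bandwidth sigma and the exact scaling are assumptions.

```python
import numpy as np

def correntropy_loss(residuals, sigma=1.0):
    """Correntropy-induced loss (Welsch form): bounded in [0, 1) per residual,
    so large outliers contribute at most ~1 instead of growing quadratically.
    sigma is a kernel bandwidth -- an assumed free parameter here."""
    return 1.0 - np.exp(-residuals ** 2 / (2.0 * sigma ** 2))

r = np.array([0.1, 0.5, 10.0])                # the last residual is an outlier
print("squared loss :", 0.5 * r ** 2)         # [0.005, 0.125, 50.0]
print("correntropy  :", correntropy_loss(r))  # outlier capped near 1.0
```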
8. Yu N, Wu MJ, Liu JX, Zheng CH, Xu Y. Correntropy-Based Hypergraph Regularized NMF for Clustering and Feature Selection on Multi-Cancer Integrated Data. IEEE Trans Cybern 2021; 51:3952-3963. PMID: 32603306; DOI: 10.1109/tcyb.2020.3000799.
Abstract
Non-negative matrix factorization (NMF) has become one of the most powerful methods for clustering and feature selection. However, the performance of the traditional NMF method degrades severely when the data contain noise and outliers or the manifold structure of the data is not taken into account. In this article, a novel method called correntropy-based hypergraph regularized NMF (CHNMF) is proposed to solve this problem. Specifically, we use correntropy instead of the Euclidean norm in the loss term of CHNMF, which improves the robustness of the algorithm. A hypergraph regularization term is also applied to the objective function, which can explore the high-order geometric information among more sample points. Then, the half-quadratic (HQ) optimization technique is adopted to solve the complex optimization problem of CHNMF. Finally, extensive experimental results on multi-cancer integrated data indicate that the proposed CHNMF method is superior to other state-of-the-art methods for clustering and feature selection.
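Hypergraph regularization is described only verbally above; the sketch below shows one common way such a term could be built, assuming hyperedges formed from each sample and its k nearest neighbours and the unnormalized hypergraph Laplacian L = Dv - H W De^{-1} H^T; the penalty trace(V^T L V) then measures smoothness of a coefficient matrix V over the hypergraph. The construction details are assumptions, not the paper's exact recipe.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_hypergraph_laplacian(X, k=3):
    """Unnormalized hypergraph Laplacian L = Dv - H W De^{-1} H^T, where each sample
    defines one hyperedge containing itself and its k nearest neighbours
    (a common construction; specifics here are assumptions)."""
    n = X.shape[0]
    D = cdist(X, X)
    H = np.zeros((n, n))                      # incidence matrix: vertices x hyperedges
    for e in range(n):
        idx = np.argsort(D[e])[:k + 1]        # sample e plus its k nearest neighbours
        H[idx, e] = 1.0
    w = np.ones(n)                            # unit hyperedge weights
    De = H.sum(axis=0)                        # hyperedge degrees
    Dv = (H * w).sum(axis=1)                  # vertex degrees
    return np.diag(Dv) - H @ np.diag(w / De) @ H.T

X = np.random.default_rng(1).random((20, 6))
L = knn_hypergraph_laplacian(X)
V = np.random.default_rng(2).random((20, 4))  # e.g. a coefficient matrix from NMF
print(np.trace(V.T @ L @ V))                  # hypergraph smoothness penalty
```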
9. Shen HT, Zhu Y, Zheng W, Zhu X. Half-Quadratic Minimization for Unsupervised Feature Selection on Incomplete Data. IEEE Trans Neural Netw Learn Syst 2021; 32:3122-3135. PMID: 32730208; DOI: 10.1109/tnnls.2020.3009632.
Abstract
Unsupervised feature selection (UFS) is a popular technique for reducing the dimensions of high-dimensional data. Previous UFS methods were often designed under the assumption that all information in the data set is observed. However, incomplete data sets that contain unobserved information are often found in real applications, especially in industry, so these existing UFS methods are limited when conducting feature selection on incomplete data. On the other hand, most existing UFS methods did not consider sample importance for feature selection, i.e., that different samples have different importance; as a result, the constructed UFS models easily suffer from the influence of outliers. This article investigates a new UFS method for incomplete data sets to address the abovementioned issues. Specifically, the proposed method deals with unobserved information by using an indicator matrix to filter it out of the feature selection process, and it reduces the influence of outliers by employing the half-quadratic minimization technique to automatically assign outliers small or even zero weights and important samples large weights. The article further designs an alternating optimization strategy to optimize the proposed objective function and both theoretically and experimentally proves the convergence of the proposed optimization strategy. Experimental results on both real and synthetic incomplete data sets verify the effectiveness of the proposed method compared with previous methods, in terms of clustering performance in the low-dimensional space of the high-dimensional data.
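To make the two ingredients concrete, the sketch below computes half-quadratic style sample weights from per-sample reconstruction errors restricted to observed entries (the indicator matrix), assuming a Welsch-type loss so that w_i = exp(-r_i^2 / (2 sigma^2)). The loss choice and all names are assumptions, not the article's exact formulation.

```python
import numpy as np

def hq_sample_weights(X, X_hat, observed, sigma=1.0):
    """Half-quadratic style sample weights, assuming a Welsch-type loss:
    w_i = exp(-r_i^2 / (2 sigma^2)), where r_i is the reconstruction error of
    sample i computed only over its observed entries (indicator matrix).
    Outliers get weights near zero; clean samples get weights near one."""
    R = (X - X_hat) * observed                   # mask out unobserved entries
    r = np.sqrt((R ** 2).sum(axis=1) / np.maximum(observed.sum(axis=1), 1))
    return np.exp(-r ** 2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8)); X[0] += 10.0            # sample 0 is an outlier
observed = (rng.random((6, 8)) > 0.2).astype(float)  # roughly 20% missing entries
print(hq_sample_weights(X, np.zeros_like(X), observed).round(3))
```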
10. Joint Robust Multi-view Spectral Clustering. Neural Process Lett 2020. DOI: 10.1007/s11063-020-10257-0.
12. Bin G, Wu S, Shao M, Zhou Z, Bin G. IRN-MLSQR: An improved iterative reweight norm approach to the inverse problem of electrocardiography incorporating factorization-free preconditioned LSQR. J Electrocardiol 2020; 62:190-199. PMID: 32977208; DOI: 10.1016/j.jelectrocard.2020.08.017.
Abstract
The inverse problem of electrocardiography (ECG), computing epicardial potentials from body surface potentials, is ill-posed and needs to be solved by regularization techniques. L2-norm regularization can cause considerable smoothing of the solution, while the L1-norm scheme promotes a solution with sharp boundaries/gradients between piecewise smooth regions, so the L1-norm is widely used in the ECG inverse problem. However, the L1-norm scheme requires a large amount of computation and long computation times. In this paper, by combining the iterative reweight norm (IRN) approach with a factorization-free preconditioned LSQR algorithm (MLSQR), a new IRN-MLSQR method is proposed to accelerate the convergence of the L1-norm scheme. We validated the IRN-MLSQR method using experimental data from isolated canine hearts and clinical procedures in the electrophysiology laboratory. The results showed that the IRN-MLSQR method can significantly reduce the number of iterations and the operation time while maintaining calculation accuracy. The number of iterations of IRN-MLSQR is about 60%-70% of that of the conventional IRN method, while the accuracy of the solution is almost the same. The proposed IRN-MLSQR method may be used as a new approach to the inverse problem of ECG.
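The abstract describes the IRN idea without formulas; the sketch below shows the generic iteratively reweighted norm loop for an L1-regularized least-squares problem, solved at each outer iteration with plain (unpreconditioned) LSQR from SciPy. It is meant only to illustrate the structure that IRN-MLSQR accelerates; the preconditioning and the ECG-specific operators are not modeled.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def irn_l1(A, b, lam=0.1, iters=20, eps=1e-6):
    """Generic IRN sketch for min ||Ax - b||^2 + lam*||x||_1: each outer iteration
    replaces |x_i| by a weighted quadratic x_i^2 / (|x_i| + eps) and solves the
    resulting least-squares problem with LSQR (not the paper's MLSQR variant)."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        w = 1.0 / np.sqrt(np.abs(x) + eps)           # square roots of the quadratic weights
        # stack [A; sqrt(lam)*diag(w)] so the least-squares fit matches the weighted objective
        A_aug = np.vstack([A, np.sqrt(lam) * np.diag(w)])
        b_aug = np.concatenate([b, np.zeros(n)])
        x = lsqr(A_aug, b_aug)[0]
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 80))
x_true = np.zeros(80); x_true[[5, 20, 60]] = [2.0, -1.5, 3.0]
b = A @ x_true + 0.01 * rng.normal(size=40)
print(np.round(irn_l1(A, b)[[5, 20, 60]], 2))
```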
Affiliation(s)
- Guanghong Bin: College of Life Science and Bioengineering, Beijing University of Technology, Beijing, China
- Shuicai Wu: College of Life Science and Bioengineering, Beijing University of Technology, Beijing, China
- Minggang Shao: College of Life Science and Bioengineering, Beijing University of Technology, Beijing, China
- Zhuhuang Zhou: College of Life Science and Bioengineering, Beijing University of Technology, Beijing, China
- Guangyu Bin: College of Life Science and Bioengineering, Beijing University of Technology, Beijing, China
13. Peng S, Ser W, Chen B, Lin Z. Robust orthogonal nonnegative matrix tri-factorization for data representation. Knowl Based Syst 2020. DOI: 10.1016/j.knosys.2020.106054.
14. Wang CY, Liu JX, Yu N, Zheng CH. Sparse Graph Regularization Non-Negative Matrix Factorization Based on Huber Loss Model for Cancer Data Analysis. Front Genet 2019; 10:1054. PMID: 31824556; PMCID: PMC6882287; DOI: 10.3389/fgene.2019.01054.
Abstract
Non-negative matrix factorization (NMF) is a matrix decomposition method based on the squared loss function. To exploit cancer information, the NMF method is often applied to cancer gene expression data to reduce dimensionality. However, gene expression data usually contain noise and outliers, and the original NMF loss function is very sensitive to non-Gaussian noise. To improve the robustness and clustering performance of the algorithm, we propose a sparse graph regularization NMF based on the Huber loss model for cancer data analysis (Huber-SGNMF). Huber loss is a function between the L1-norm and L2-norm that can effectively handle non-Gaussian noise and outliers. Taking matrix sparsity and data geometry information into account, sparse penalty and graph regularization terms are introduced into the model to enhance matrix sparsity and capture the data manifold structure. Before the experiments, we first analyzed the robustness of Huber-SGNMF and other models. Experiments on The Cancer Genome Atlas (TCGA) data show that Huber-SGNMF performs better than other state-of-the-art methods in sample clustering and differentially expressed gene selection.
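For reference, here is a minimal sketch of the standard Huber loss the model builds on, showing how it behaves quadratically for small residuals and linearly for large ones; the threshold delta and the toy residuals are illustrative choices, not values from the paper.

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss: quadratic (L2-like) for |r| <= delta, linear (L1-like) beyond,
    so large non-Gaussian errors grow only linearly. delta is an assumed threshold."""
    quad = 0.5 * r ** 2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.where(np.abs(r) <= delta, quad, lin)

r = np.array([0.2, 1.0, 5.0])
print("squared:", 0.5 * r ** 2)   # outlier dominates: 12.5
print("huber  :", huber(r))       # outlier penalized linearly: 4.5
```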
Affiliation(s)
- Chuan-Yuan Wang: School of Information Science and Engineering, Qufu Normal University, Rizhao, China
- Jin-Xing Liu (corresponding author): School of Information Science and Engineering, Qufu Normal University, Rizhao, China
- Na Yu: School of Information Science and Engineering, Qufu Normal University, Rizhao, China
- Chun-Hou Zheng: School of Software Engineering, Qufu Normal University, Qufu, China
15. Cauchy sparse NMF with manifold regularization: A robust method for hyperspectral unmixing. Knowl Based Syst 2019. DOI: 10.1016/j.knosys.2019.104898.
16. Guan N, Liu T, Zhang Y, Tao D, Davis LS. Truncated Cauchy Non-Negative Matrix Factorization. IEEE Trans Pattern Anal Mach Intell 2019; 41:246-259. PMID: 29990056; DOI: 10.1109/tpami.2017.2777841.
Abstract
Non-negative matrix factorization (NMF) minimizes the Euclidean distance between the data matrix and its low-rank approximation, and it fails when applied to corrupted data because the loss function is sensitive to outliers. In this paper, we propose a Truncated Cauchy loss that handles outliers by truncating large errors, and we develop Truncated CauchyNMF to robustly learn the subspace on noisy datasets contaminated by outliers. We theoretically analyze the robustness of Truncated CauchyNMF in comparison with competing models and prove that Truncated CauchyNMF has a generalization bound that converges as the sample size grows. We evaluate Truncated CauchyNMF by image clustering on both simulated and real datasets. The experimental results on datasets containing gross corruptions validate the effectiveness and robustness of Truncated CauchyNMF for learning robust subspaces.
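A minimal sketch of a truncated Cauchy-type loss, assuming the Cauchy form ln(1 + e^2/gamma^2) capped at a constant once the error passes a truncation threshold; the exact parameterization used in the paper may differ.

```python
import numpy as np

def truncated_cauchy(e, gamma=1.0, tau=3.0):
    """Cauchy loss ln(1 + e^2/gamma^2), held constant once |e| exceeds tau,
    so gross outliers beyond the truncation point add only a fixed penalty.
    gamma and tau are illustrative parameters."""
    loss = np.log1p((e / gamma) ** 2)
    cap = np.log1p((tau / gamma) ** 2)
    return np.minimum(loss, cap)

e = np.array([0.1, 1.0, 3.0, 100.0])
print(truncated_cauchy(e))   # the 100.0 outlier gets the same penalty as 3.0
```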
17. Xiao L, Li S, Yang J, Zhang Z. A new recurrent neural network with noise-tolerance and finite-time convergence for dynamic quadratic minimization. Neurocomputing 2018. DOI: 10.1016/j.neucom.2018.01.033.
18. Ham B, Cho M, Ponce J. Robust Guided Image Filtering Using Nonconvex Potentials. IEEE Trans Pattern Anal Mach Intell 2018; 40:192-207. PMID: 28212077; DOI: 10.1109/tpami.2017.2669034.
Abstract
Filtering images using a guidance signal, a process called guided or joint image filtering, has been used in various tasks in computer vision and computational photography, particularly for noise reduction and joint upsampling. This uses an additional guidance signal as a structure prior, and transfers the structure of the guidance signal to an input image, restoring noisy or altered image structure. The main drawbacks of such a data-dependent framework are that it does not consider structural differences between guidance and input images, and that it is not robust to outliers. We propose a novel SD (for static/dynamic) filter to address these problems in a unified framework, and jointly leverage structural information from guidance and input images. Guided image filtering is formulated as a nonconvex optimization problem, which is solved by the majorize-minimization algorithm. The proposed algorithm converges quickly while guaranteeing a local minimum. The SD filter effectively controls the underlying image structure at different scales, and can handle a variety of types of data from different sensors. It is robust to outliers and other artifacts such as gradient reversal and global intensity shift, and has good edge-preserving smoothing properties. We demonstrate the flexibility and effectiveness of the proposed SD filter in a variety of applications, including depth upsampling, scale-space filtering, texture removal, flash/non-flash denoising, and RGB/NIR denoising.
19. Kim Y, Ham B, Oh C, Sohn K. Structure Selective Depth Superresolution for RGB-D Cameras. IEEE Trans Image Process 2016; 25:5227-5238. PMID: 27552747; DOI: 10.1109/tip.2016.2601262.
Abstract
This paper describes a method for high-quality depth superresolution. The standard formulations of image-guided depth upsampling, using simple joint filtering or quadratic optimization, lead to texture copying and depth bleeding artifacts. These artifacts are caused by the inherent discrepancy of structures in data from different sensors. Although there exists some correlation between depth and intensity discontinuities, they differ in distribution and formation. To tackle this problem, we formulate an optimization model using a nonconvex regularizer. A nonlocal affinity established in a high-dimensional feature space is used to offer precisely localized depth boundaries. We show that the proposed method iteratively handles differences in structure between depth and intensity images. This property enables a significant reduction of texture copying and depth bleeding artifacts on a variety of range data sets. We also propose a fast alternating direction method of multipliers (ADMM) algorithm to solve our optimization problem. Our solver shows a noticeable speed-up compared with the conventional majorize-minimize algorithm. Extensive experiments with synthetic and real-world data sets demonstrate that the proposed method is superior to existing methods.
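The paper's solver targets its nonconvex depth model; purely to show the splitting and scaled-dual-update structure that ADMM solvers share, here is a generic ADMM sketch for the convex lasso problem. The problem, parameters, and data are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=100):
    """Generic ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  s.t. x = z, shown only
    to illustrate the splitting / scaled dual update structure of ADMM solvers."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    x = z = u = np.zeros(n)
    M = np.linalg.inv(AtA + rho * np.eye(n))        # cached matrix for the x-update
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))               # quadratic subproblem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft threshold
        u = u + x - z                               # scaled dual ascent
    return z

rng = np.random.default_rng(6)
A = rng.normal(size=(30, 50)); x_true = np.zeros(50); x_true[[2, 10]] = [1.0, -2.0]
b = A @ x_true
print(np.round(admm_lasso(A, b)[[2, 10]], 2))
```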
20. Ou W, Yu S, Li G, Lu J, Zhang K, Xie G. Multi-view non-negative matrix factorization by patch alignment framework with view consistency. Neurocomputing 2016. DOI: 10.1016/j.neucom.2015.09.133.
21. You X, Ou W, Chen CLP, Li Q, Zhu Z, Tang Y. Robust Nonnegative Patch Alignment for Dimensionality Reduction. IEEE Trans Neural Netw Learn Syst 2015; 26:2760-2774. PMID: 25955994; DOI: 10.1109/tnnls.2015.2393886.
Abstract
Dimensionality reduction is an important method for analyzing high-dimensional data and has many applications in pattern recognition and computer vision. In this paper, we propose a robust nonnegative patch alignment for dimensionality reduction, which includes a reconstruction error term and a whole alignment term. We use the correntropy-induced metric to measure the reconstruction error, in which the weight is learned adaptively for each entry. For the whole alignment, we propose locality-preserving robust nonnegative patch alignment (LP-RNA) and sparsity-preserving robust nonnegative patch alignment (SP-RNA), which are unsupervised and supervised, respectively. In LP-RNA, we propose a locally sparse graph to encode the local geometric structure of the manifold embedded in high-dimensional space. In particular, we select a large number p of nearest neighbors for each sample, then obtain the sparse representation with respect to these neighbors. The sparse representation is used to build a graph that simultaneously enjoys locality, sparseness, and robustness. In SP-RNA, we simultaneously use local geometric structure and discriminative information, in which the sparse reconstruction coefficient is used to characterize the local geometric structure and weighted distance is used to measure the separability of different classes. We formulate the induced nonconvex objective function as a weighted nonnegative matrix factorization based on half-quadratic optimization, propose a multiplicative update rule to solve it, and show that the objective function converges to a local optimum. Several experimental results on synthetic and real data sets demonstrate that the learned representation is more discriminative and robust than most existing dimensionality reduction methods.
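As an illustration of the weighted-NMF core that half-quadratic reformulations reduce to, the sketch below runs standard multiplicative updates for the entry-weighted squared error sum_ij S_ij (X_ij - (WH)_ij)^2 with the weights held fixed; in a full half-quadratic scheme the weights would be recomputed from the residuals between updates. This is a generic sketch, not the paper's LP-RNA/SP-RNA update rule.

```python
import numpy as np

def weighted_nmf(X, S, rank=5, iters=200, eps=1e-9):
    """Multiplicative updates for weighted NMF, min sum_ij S_ij (X_ij - (WH)_ij)^2.
    The entry weights S are fixed here; a half-quadratic outer loop would refresh them."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    W = rng.random((n, rank)); H = rng.random((rank, m))
    for _ in range(iters):
        WH = W @ H
        H *= (W.T @ (S * X)) / (W.T @ (S * WH) + eps)
        WH = W @ H
        W *= ((S * X) @ H.T) / ((S * WH) @ H.T + eps)
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(40, 25)))
S = np.ones_like(X); S[0, :] = 0.05          # down-weight a suspect row
W, H = weighted_nmf(X, S)
print(np.linalg.norm(S * (X - W @ H)))       # weighted reconstruction error
```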
22. Wang Y, Pan C, Xiang S, Zhu F. Robust Hyperspectral Unmixing With Correntropy-Based Metric. IEEE Trans Image Process 2015; 24:4027-4040. PMID: 26186789; DOI: 10.1109/tip.2015.2456508.
Abstract
Hyperspectral unmixing is one of the crucial steps for many hyperspectral applications. The problem has proved to be difficult in unsupervised settings where the endmembers and abundances are both unknown, and it becomes more challenging when the spectral bands are degraded by noise. This paper presents a robust model for unsupervised hyperspectral unmixing. Specifically, our model is developed with the correntropy-based metric, and nonnegative constraints on both endmembers and abundances are imposed to preserve physical significance. In addition, a sparsity prior is explicitly formulated to constrain the distribution of the abundances of each endmember. To solve our model, a half-quadratic optimization technique is developed to convert the original complex optimization problem into an iteratively reweighted nonnegative matrix factorization with sparsity constraints. As a result, the optimization of our model can adaptively assign small weights to noisy bands and put more emphasis on noise-free bands. Moreover, with sparsity constraints, our model naturally generates sparse abundances. Experiments on synthetic and real data demonstrate the effectiveness of our model in comparison with related state-of-the-art unmixing models.
23. Bourquard A, Unser M. Anisotropic interpolation of sparse generalized image samples. IEEE Trans Image Process 2013; 22:459-472. PMID: 22968212; DOI: 10.1109/tip.2012.2217346.
Abstract
Practical image-acquisition systems are often modeled as a continuous-domain prefilter followed by an ideal sampler, where generalized samples are obtained after convolution with the impulse response of the device. In this paper, our goal is to interpolate images from a given subset of such samples. We express our solution in the continuous domain, considering consistent resampling as a data-fidelity constraint. To make the problem well posed and ensure edge-preserving solutions, we develop an efficient anisotropic regularization approach that is based on an improved version of the edge-enhancing anisotropic diffusion equation. Following variational principles, our reconstruction algorithm minimizes successive quadratic cost functionals. To ensure fast convergence, we solve the corresponding sequence of linear problems by using multigrid iterations that are specifically tailored to their sparse structure. We conduct illustrative experiments and discuss the potential of our approach both in terms of algorithmic design and reconstruction quality. In particular, we present results that use as little as 2% of the image samples.
Affiliation(s)
- Aurélien Bourquard: École Polytechnique Fédérale de Lausanne, School of Electrical and Computer Engineering, Lausanne CH-1015, Switzerland
24. Gutiérrez O, de la Rosa I, Villa J, González E, Escalante N. Semi-Huber potential function for image segmentation. Opt Express 2012; 20:6542-6554. PMID: 22418537; DOI: 10.1364/oe.20.006542.
Abstract
In this work, a novel Markov random field (MRF) model is introduced. The model is based on a proposed Semi-Huber potential function and is applied successfully to image segmentation in the presence of noise. The main difference with respect to other half-quadratic reference models is that the proposed model has fewer and simpler parameters to tune. The idea, then, is to heuristically choose adequate parameter values for a good segmentation of the image. Experimental results show that the proposed model allows easier parameter adjustment with reasonable computation times.
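The Semi-Huber potential itself is not given in the abstract; the sketch below uses the ordinary Huber potential as a stand-in to show the generic half-quadratic MRF energy being described: a data-fidelity term plus a robust potential on 4-neighbour pixel differences. The potential form, beta, and delta are placeholders, not the paper's definitions.

```python
import numpy as np

def huber_like_potential(t, delta=1.0):
    """Placeholder edge-preserving potential (standard Huber form); the paper's
    Semi-Huber function is a related but different potential not spelled out here."""
    return np.where(np.abs(t) <= delta, 0.5 * t ** 2, delta * (np.abs(t) - 0.5 * delta))

def mrf_energy(x, y, beta=2.0, delta=1.0):
    """MRF-style energy: data fidelity plus a robust potential on 4-neighbour
    differences, the generic form that half-quadratic segmentation models minimize."""
    data = 0.5 * np.sum((x - y) ** 2)
    dh = huber_like_potential(np.diff(x, axis=1), delta)   # horizontal neighbours
    dv = huber_like_potential(np.diff(x, axis=0), delta)   # vertical neighbours
    return data + beta * (dh.sum() + dv.sum())

y = np.random.default_rng(4).normal(size=(16, 16))  # noisy observation
print(mrf_energy(y.copy(), y))                      # smoothness cost of the raw image
```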
Affiliation(s)
- Osvaldo Gutiérrez: Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Av. López Velarde 801, Col. Centro, C.P. 98000, Zacatecas, Zacatecas, Mexico
25. Weller DS, Polimeni JR, Grady L, Wald LL, Adalsteinsson E, Goyal VK. Denoising sparse images from GRAPPA using the nullspace method. Magn Reson Med 2011; 68:1176-1189. PMID: 22213069; DOI: 10.1002/mrm.24116.
Abstract
To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with generalized autocalibrating partially parallel acquisitions (GRAPPA) alone, the DEnoising of Sparse Images from GRAPPA using the Nullspace method is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DEnoising of Sparse Images from GRAPPA using the Nullspace method are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), peak-signal-to-noise ratio, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DEnoising of Sparse Images from GRAPPA using the Nullspace method mitigates noise amplification better than both GRAPPA and L1 iterative self-consistent parallel imaging reconstruction (the latter limited here by uniform undersampling).
Affiliation(s)
- Daniel S Weller: Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139-4307, USA
26. Trzasko J, Manduca A. Highly undersampled magnetic resonance image reconstruction via homotopic l(0)-minimization. IEEE Trans Med Imaging 2009; 28:106-121. PMID: 19116193; DOI: 10.1109/tmi.2008.927346.
Abstract
In clinical magnetic resonance imaging (MRI), any reduction in scan time offers a number of potential benefits ranging from high-temporal-rate observation of physiological processes to improvements in patient comfort. Following recent developments in compressive sensing (CS) theory, several authors have demonstrated that certain classes of MR images which possess sparse representations in some transform domain can be accurately reconstructed from very highly undersampled k-space data by solving a convex l(1)-minimization problem. Although l(1)-based techniques are extremely powerful, they inherently require a degree of over-sampling above the theoretical minimum sampling rate to guarantee that exact reconstruction can be achieved. In this paper, we propose a generalization of the CS paradigm based on homotopic approximation of the l(0) quasi-norm and show how MR image reconstruction can be pushed even further below the Nyquist limit and significantly closer to the theoretical bound. Following a brief review of standard CS methods and the developed theoretical extensions, several example MRI reconstructions from highly undersampled k-space data are presented.
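A minimal sketch of the homotopy idea, assuming one common smooth surrogate |x|/(|x| + sigma) for the l(0) quasi-norm; as sigma is driven toward zero the surrogate approaches a count of non-zero coefficients. The paper's exact surrogate and continuation schedule may differ.

```python
import numpy as np

def homotopic_l0_penalty(x, sigma):
    """One common smooth surrogate for the l0 quasi-norm: |x| / (|x| + sigma).
    As sigma -> 0, each term tends to the 0/1 indicator of a non-zero coefficient.
    (A surrogate of this general type is meant; the paper's exact choice may differ.)"""
    ax = np.abs(x)
    return np.sum(ax / (ax + sigma))

x = np.array([0.0, 0.01, 2.0, -3.0])
for sigma in [1.0, 0.1, 0.01, 0.001]:
    print(sigma, round(homotopic_l0_penalty(x, sigma), 3))
# as sigma shrinks, the penalty approaches 3, the number of non-zero entries
```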
Affiliation(s)
- Joshua Trzasko: Center for Advanced Imaging Research, Mayo Clinic College of Medicine, Rochester, MN 55905 USA
27. Cai JF, Chan RH, Nikolova M. Two-phase approach for deblurring images corrupted by impulse plus Gaussian noise. Inverse Probl Imaging 2008. DOI: 10.3934/ipi.2008.2.187.