1. Huang J, Wang H, Wang X, Ruzhansky M. Semi-Sparsity for Smoothing Filters. IEEE Trans Image Process 2023;32:1627-1639. [PMID: 37027756] [DOI: 10.1109/TIP.2023.3247181]
Abstract
In this paper, we propose a semi-sparsity smoothing method based on a new sparsity-induced minimization scheme. The model is motivated by the observation that semi-sparse prior knowledge applies universally in situations where full sparsity does not hold, such as polynomial-smoothing surfaces. We show that such priors can be formulated as a generalized $L_0$-norm minimization problem in higher-order gradient domains, giving rise to a new "feature-aware" filter with a powerful ability to simultaneously fit sparse singularities (corners and salient edges) and polynomial-smoothing surfaces. Because of the non-convexity and combinatorial nature of $L_0$-norm minimization, no direct solver is available for the proposed model; instead, we solve it approximately with an efficient half-quadratic splitting technique. We demonstrate its versatility and benefits in a series of signal/image processing and computer vision applications.
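The half-quadratic splitting idea the abstract mentions can be illustrated on a 1-D toy version of such a model. This is a minimal sketch under stated assumptions, not the authors' implementation: it takes the objective min_u ||u - f||^2 + lam*||D2 u||_0 with D2 the second-order difference operator, and alternates a hard-threshold step on an auxiliary variable with a linear solve in u while the coupling weight beta grows.

```python
import numpy as np

def hqs_l0_second_order(f, lam=1e-3, beta_max=1e5, kappa=2.0):
    """Half-quadratic splitting (HQS) sketch for the toy model
        min_u ||u - f||^2 + lam * ||D2 u||_0,
    with D2 the 1-D second-order difference operator.  Alternates a
    hard-threshold step in the auxiliary variable v with a quadratic
    solve in u, while beta is increased geometrically."""
    n = len(f)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i], D2[i, i + 1], D2[i, i + 2] = 1.0, -2.0, 1.0
    u, beta = f.copy(), 2.0 * lam
    while beta < beta_max:
        d = D2 @ u
        v = np.where(d * d > lam / beta, d, 0.0)      # hard threshold (v-step)
        A = np.eye(n) + beta * D2.T @ D2              # u-step: linear solve
        u = np.linalg.solve(A, f + beta * D2.T @ v)
        beta *= kappa
    return u

# A piecewise-linear ramp with a corner: the L0 prior on second-order
# gradients smooths the linear pieces while keeping the corner.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 51)
clean = np.where(x < 0.5, x, 1.0 - x)
noisy = clean + 0.005 * rng.standard_normal(51)
smoothed = hqs_l0_second_order(noisy, lam=1e-3)
```

The hard threshold sqrt(lam/beta) shrinks as beta grows, so singular second-order gradients (the corner) are progressively released from the smoothing while the rest stays polynomial-smooth.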
2. Ruan W, Sun L. Robust latent discriminant adaptive graph preserving learning for image feature extraction. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110487]
3. Ren LR, Gao YL, Liu JX, Shang J, Zheng CH. Correntropy induced loss based sparse robust graph regularized extreme learning machine for cancer classification. BMC Bioinformatics 2020;21:445. [PMID: 33028187] [PMCID: PMC7542897] [DOI: 10.1186/s12859-020-03790-1]
Abstract
Background: As a machine learning method with high performance and excellent generalization ability, the extreme learning machine (ELM) is gaining popularity, and ELM-based methods have been proposed for a variety of fields. However, sensitivity to noise and outliers remains the main problem limiting ELM's performance.
Results: In this paper, an integrated method named correntropy-induced-loss based sparse robust graph regularized extreme learning machine (CSRGELM) is proposed. The correntropy-induced loss improves the robustness of ELM and weakens the negative effects of noise and outliers. By constraining the output weight matrix with the L2,1-norm, we obtain a sparse output weight matrix and thus a simpler single-hidden-layer feedforward neural network model. Introducing graph regularization to preserve the local structural information of the data further improves classification performance. Besides, we design an iterative optimization method, based on half-quadratic optimization, to solve the non-convex CSRGELM problem.
Conclusions: Classification results on benchmark datasets show that CSRGELM obtains better results than competing methods. More importantly, applying the new method to the classification of cancer samples also yields good performance.
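The row-sparsity mechanism behind the L2,1-norm constraint can be seen in its proximal operator, which zeroes out whole rows of a weight matrix. A hedged sketch: group soft-thresholding is the standard building block for L2,1 penalties, though the paper's actual solver embeds the constraint in a half-quadratic iteration.

```python
import numpy as np

def l21_prox(W, tau):
    """Proximal operator of tau * ||W||_{2,1} (the sum of row norms):
    each row is shrunk toward zero by tau, and rows whose norm falls
    below tau are zeroed out entirely, giving a row-sparse matrix."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * W

W = np.array([[3.0, 4.0],     # row norm 5.0: shrunk to norm 4.0
              [0.3, 0.4]])    # row norm 0.5 < tau: zeroed out
P = l21_prox(W, 1.0)
```

Rows with small norm are eliminated entirely, which is why an L2,1 penalty on the output weights prunes hidden nodes rather than individual weights.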
Affiliation(s)
- Liang-Rui Ren: School of Computer Science, Qufu Normal University, Rizhao, 276826, China
- Ying-Lian Gao: Qufu Normal University Library, Qufu Normal University, Rizhao, 276826, China
- Jin-Xing Liu: School of Computer Science, Qufu Normal University, Rizhao, 276826, China
- Junliang Shang: School of Computer Science, Qufu Normal University, Rizhao, 276826, China
- Chun-Hou Zheng: School of Computer Science, Qufu Normal University, Rizhao, 276826, China; College of Computer Science and Technology, Anhui University, Hefei, 230601, China
4. El-Hajj C, Moussaoui S, Collewet G, Musse M. Multi-exponential Transverse Relaxation Times Estimation from Magnetic Resonance Images under Rician Noise and Spatial Regularization. IEEE Trans Image Process 2020;29:6721-6733. [PMID: 32406838] [DOI: 10.1109/TIP.2020.2993114]
Abstract
The relaxation signal inside each voxel of magnetic resonance images (MRI) is commonly fitted by a multi-exponential decay curve. Estimating the parameters of a discrete multi-component relaxation model from magnitude MRI data is a challenging nonlinear inverse problem, since it must be carried out over all image voxels under non-Gaussian noise statistics. This paper proposes an efficient algorithm for the joint estimation of relaxation time values and their amplitudes, using criteria that account for a Rician noise model, combined with a spatial regularization reflecting the low spatial variability of relaxation time constants and amplitudes between neighboring voxels. The Rician noise hypothesis is handled either by an adapted nonlinear least squares algorithm applied to a corrected least squares criterion, or by a majorization-minimization approach applied to the maximum likelihood criterion. To solve the resulting large-scale non-negativity constrained optimization problem with reduced numerical complexity and computing time, an optimization algorithm based on a majorization approach ensuring separability of variables between voxels is proposed. The minimization is carried out iteratively using an adapted Levenberg-Marquardt algorithm that ensures convergence by imposing a sufficient decrease of the objective function and the non-negativity of the parameters. The importance of the regularization, alongside the Rician noise model, is shown both visually and numerically on a simulated phantom and on magnitude MRI images acquired on fruit samples.
5. Zuo W, Wu X, Lin L, Zhang L, Yang MH. Learning Support Correlation Filters for Visual Tracking. IEEE Trans Pattern Anal Mach Intell 2019;41:1158-1172. [PMID: 29993910] [DOI: 10.1109/TPAMI.2018.2829180]
Abstract
For visual tracking methods based on kernel support vector machines (SVMs), data sampling is usually adopted to reduce the computational cost of training, and support vectors must additionally be budgeted for computational efficiency. Instead of sampling and budgeting, the circulant matrix formed by dense sampling of translated image patches has recently been exploited in kernel correlation filters for fast tracking. In this paper, we derive an equivalent formulation of an SVM model with the circulant matrix expression and present an efficient alternating optimization method for visual tracking. We incorporate the discrete Fourier transform into the proposed alternating optimization process and pose the tracking problem as an iterative learning of support correlation filters (SCFs). In the fully-supervised setting, our SCF can find the globally optimal solution with real-time performance. For a given circulant data matrix with n^2 samples of n×n pixels, the computational complexity of the proposed algorithm is O(n^2 log n), whereas that of standard SVM-based approaches is at least O(n^4). In addition, we extend the SCF-based tracking algorithm with multi-channel features, kernel functions, and scale-adaptive approaches to further improve tracking performance. Experimental results on a large benchmark dataset show that the proposed SCF-based algorithms perform favorably against state-of-the-art tracking methods in terms of accuracy and speed.
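The circulant-matrix trick behind the complexity gap the abstract cites can be sketched in a few lines. This is the plain ridge-regression correlation filter, the building block the SCF paper extends to SVM losses, not the authors' full algorithm: with all circular shifts of a patch as implicit training samples, the regularized normal equations diagonalize under the 2-D DFT, so training is elementwise division in the Fourier domain.

```python
import numpy as np

def train_correlation_filter(x, y, lam=1e-2):
    """Linear ridge-regression correlation filter.  Treating every
    circular shift of patch x as a training sample makes the data matrix
    circulant, so the solve costs O(n^2 log n) instead of O(n^4)."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)   # filter, Fourier domain

def detect(w_hat, z):
    """Correlation response on a new patch z; the peak location
    estimates the target's translation."""
    return np.real(np.fft.ifft2(w_hat * np.fft.fft2(z)))

rng = np.random.default_rng(1)
x = rng.standard_normal((32, 32))
y = np.zeros((32, 32))
y[0, 0] = 1.0                      # desired response: peak at the origin
w_hat = train_correlation_filter(x, y)
resp = detect(w_hat, np.roll(x, (5, 7), axis=(0, 1)))
peak = np.unravel_index(np.argmax(resp), resp.shape)  # recovers the shift
```

The response peak lands at the applied circular shift, which is exactly the translation-estimation step of correlation-filter trackers.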
6. Xu G, Hu BG, Principe JC. Robust C-Loss Kernel Classifiers. IEEE Trans Neural Netw Learn Syst 2018;29:510-522. [PMID: 28055924] [DOI: 10.1109/TNNLS.2016.2637351]
Abstract
The correntropy-induced loss (C-loss) function has the nice property of being robust to outliers. In this paper, we study the C-loss kernel classifier with a Tikhonov regularization term, which is used to avoid overfitting. Using the half-quadratic optimization algorithm, which converges much faster than gradient optimization, we find that the resulting C-loss kernel classifier is equivalent to an iteratively reweighted least squares support vector machine (LS-SVM). This relationship helps explain the robustness of the iteratively reweighted LS-SVM from the correntropy and density-estimation perspectives. On large-scale datasets with low-rank Gram matrices, we suggest using incomplete Cholesky decomposition to speed up training. Moreover, we use the representer theorem to improve the sparseness of the resulting C-loss kernel classifier. Experimental results confirm that our methods are more robust to outliers than existing common classifiers.
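The equivalence the abstract states, half-quadratic optimization of the C-loss reducing to an iteratively reweighted least-squares fit, can be sketched on a linear model. This is an assumption-laden toy (linear regression rather than a kernel LS-SVM, made-up data), but the weight formula w_i = exp(-e_i^2 / (2*sigma^2)) is the half-quadratic auxiliary variable that downweights outliers.

```python
import numpy as np

def closs_irls(X, y, sigma=0.5, lam=1e-3, iters=30):
    """Half-quadratic / IRLS sketch for the correntropy-induced loss
        L(e) = 1 - exp(-e^2 / (2 * sigma^2)).
    Each step fixes weights w_i = exp(-e_i^2 / (2 sigma^2)) and solves a
    weighted ridge regression, so large-residual samples (outliers)
    contribute almost nothing to the next fit."""
    n, d = X.shape
    beta = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)  # LS init
    for _ in range(iters):
        e = y - X @ beta
        w = np.exp(-e**2 / (2.0 * sigma**2))     # HQ auxiliary weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw + lam * np.eye(d), Xw.T @ y)
    return beta

# Line y = 2x + 1 with 5% gross outliers: ordinary least squares is
# pulled away, while the C-loss fit effectively ignores the bad points.
rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 200)
X = np.column_stack([x, np.ones_like(x)])
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(200)
y[:10] += 8.0                                    # gross outliers
beta = closs_irls(X, y)
```

The same reweighting, applied to the kernel expansion coefficients instead of a linear weight vector, is what makes the C-loss classifier an iteratively reweighted LS-SVM.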
7. Labouesse S, Negash A, Idier J, Bourguignon S, Mangeat T, Liu P, Sentenac A, Allain M. Joint Reconstruction Strategy for Structured Illumination Microscopy With Unknown Illuminations. IEEE Trans Image Process 2017;26:2480-2493. [PMID: 28252396] [DOI: 10.1109/TIP.2017.2675200]
Abstract
The blind structured illumination microscopy strategy proposed by Mudry et al. is given a new theoretical foundation in this paper, unveiling the central role that the sparsity of the illumination patterns plays in the mechanism driving the method's super-resolution. A numerical analysis shows that the resolving power of the method can be further enhanced with optimized one-photon or two-photon speckle illuminations. A much improved numerical implementation is provided for the reconstruction problem under the image positivity constraint. The algorithm rests on a new preconditioned proximal iteration that is faster than existing solutions, paving the way to 3-D and real-time 2-D reconstruction.
8. Polson NG, Scott JG, Willard BT. Proximal Algorithms in Statistics and Machine Learning. Stat Sci 2015. [DOI: 10.1214/15-sts530]
9. He R, Zheng WS, Tan T, Sun Z. Half-quadratic-based iterative minimization for robust sparse representation. IEEE Trans Pattern Anal Mach Intell 2014;36:261-275. [PMID: 24356348] [DOI: 10.1109/TPAMI.2013.102]
Abstract
Robust sparse representation has shown significant potential for solving challenging problems in computer vision, such as biometrics and visual surveillance. Although several robust sparse models have been proposed with promising results, they address either error correction or error detection, and a general framework that systematically unifies these two aspects and explores their relation remains an open problem. In this paper, we develop a half-quadratic (HQ) framework to solve the robust sparse representation problem. By defining different kinds of half-quadratic functions, the proposed HQ framework can perform both error correction and error detection. More specifically, using the additive form of HQ, we propose an ℓ1-regularized error correction method that iteratively recovers corrupted data from errors caused by noise and outliers; using the multiplicative form of HQ, we propose an ℓ1-regularized error detection method that learns iteratively from uncorrupted data. We also show that ℓ1-regularization solved by the soft-thresholding function has a dual relationship to the Huber M-estimator, which theoretically guarantees the performance of robust sparse representation in terms of M-estimation. Experiments on robust face recognition under severe occlusion and corruption validate our framework and findings.
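The dual relationship between ℓ1 soft-thresholding and the Huber M-estimator mentioned in the abstract can be checked numerically: the Moreau envelope of λ|·| (the minimum value of the ℓ1 proximal problem, attained at the soft-thresholded point) is exactly the Huber function. A small sketch:

```python
import numpy as np

def soft(y, lam):
    """Soft-thresholding: the proximal operator of lam * |x|."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def huber(y, lam):
    """Huber M-estimator penalty with threshold lam: quadratic near
    zero, linear in the tails."""
    a = np.abs(y)
    return np.where(a <= lam, 0.5 * y**2, lam * a - 0.5 * lam**2)

# min_x 0.5*(x - y)^2 + lam*|x| is attained at x* = soft(y, lam), and
# the attained value (the Moreau envelope of lam*|.|) equals huber(y, lam).
lam = 1.0
y = np.linspace(-3.0, 3.0, 601)
x_star = soft(y, lam)
envelope = 0.5 * (x_star - y)**2 + lam * np.abs(x_star)
```

This is the sense in which an ℓ1-regularized solve behaves like Huber M-estimation: small residuals are treated quadratically, large ones only linearly.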
Affiliation(s)
- Ran He: Institute of Automation, Chinese Academy of Sciences, Beijing
- Tieniu Tan: Institute of Automation, Chinese Academy of Sciences, Beijing
- Zhenan Sun: Institute of Automation, Chinese Academy of Sciences, Beijing
10. Frindel C, Robini MC, Rousseau D. A 3-D spatio-temporal deconvolution approach for MR perfusion in the brain. Med Image Anal 2014;18:144-160. [DOI: 10.1016/j.media.2013.10.004]
11. Mir M, Babacan SD, Bednarz M, Do MN, Golding I, Popescu G. Visualizing Escherichia coli sub-cellular structure using sparse deconvolution Spatial Light Interference Tomography. PLoS One 2012;7:e39816. [PMID: 22761910] [PMCID: PMC3386179] [DOI: 10.1371/journal.pone.0039816]
Abstract
Studying the 3D sub-cellular structure of living cells is essential to our understanding of biological function. However, tomographic imaging of live cells is challenging mainly because they are transparent, i.e., weakly scattering, structures; this type of imaging has therefore been implemented largely with fluorescence techniques. While confocal fluorescence imaging is a common approach to achieving sectioning, it requires fluorescence probes that are often harmful to the living specimen. By using the intrinsic contrast of the structures, on the other hand, it is possible to study living cells non-invasively. One method that provides high-resolution quantitative information about nanoscale structures is a broadband interferometric technique known as Spatial Light Interference Microscopy (SLIM). In addition to rendering quantitative phase information, when combined with a high numerical aperture objective, SLIM also provides excellent depth-sectioning capabilities. However, as in all linear optical systems, SLIM's resolution is limited by diffraction. Here we present a novel 3D field deconvolution algorithm that exploits the sparsity of phase images and renders images with resolution beyond the diffraction limit. We employ this label-free method, called deconvolution Spatial Light Interference Tomography (dSLIT), to visualize coiled sub-cellular structures in E. coli cells, most likely the cytoskeletal MreB protein and the division-site-regulating MinCDE proteins. Previously, such structures had only been observed using specialized strains and plasmids together with fluorescence techniques. Our results indicate that dSLIT can be employed to study such structures in a practical and non-invasive manner.
Affiliation(s)
- Mustafa Mir: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, United States of America
12. Chouzenoux E, Idier J, Moussaoui S. A majorize-minimize strategy for subspace optimization applied to image restoration. IEEE Trans Image Process 2011;20:1517-1528. [PMID: 21193375] [DOI: 10.1109/TIP.2010.2103083]
Abstract
This paper proposes accelerated subspace optimization methods in the context of image restoration. Subspace optimization methods belong to the class of iterative descent algorithms for unconstrained optimization. At each iteration of such methods, a stepsize vector allowing the best combination of several search directions is computed through a multidimensional search, usually performed by an inner iterative second-order method ruled by a stopping criterion that guarantees convergence of the outer algorithm. As an alternative, we propose an original multidimensional search strategy based on the majorize-minimize principle. It leads to a closed-form stepsize formula that ensures convergence of the subspace algorithm regardless of the number of inner iterations. The practical efficiency of the proposed scheme is illustrated in the context of edge-preserving image restoration.
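The closed-form majorize-minimize stepsize idea can be sketched on a 1-D edge-preserving denoising criterion. This is a toy under stated assumptions (steepest descent with a single search direction and φ(t) = sqrt(1 + t²)), not the paper's multi-direction subspace scheme: because φ'(t)/t is decreasing, φ admits a quadratic majorant with curvature w = φ'(t)/t, so the stepsize is explicit and the objective decreases monotonically with no inner iterations.

```python
import numpy as np

def objective(x, y, lam):
    t = np.diff(x)
    return 0.5 * np.sum((x - y)**2) + lam * np.sum(np.sqrt(1.0 + t**2))

def mm_descent(y, lam=0.5, iters=50):
    """Steepest descent with a closed-form majorize-minimize stepsize for
        F(x) = 0.5*||x - y||^2 + lam * sum_i phi((Dx)_i),
    phi(t) = sqrt(1 + t^2), D the first-order difference.  The quadratic
    majorant along direction d has curvature
        d.d + lam * sum_i w_i * (Dd)_i^2,   w_i = 1/sqrt(1 + t_i^2),
    so the stepsize is explicit and F decreases at every iteration."""
    x = y.copy()
    history = [objective(x, y, lam)]
    for _ in range(iters):
        t = np.diff(x)
        phi_p = t / np.sqrt(1.0 + t**2)                # phi'(t)
        g = (x - y) - lam * np.diff(phi_p, prepend=0.0, append=0.0)  # D^T phi'
        d = -g
        Dd = np.diff(d)
        w = 1.0 / np.sqrt(1.0 + t**2)                  # majorant curvatures
        curv = d @ d + lam * np.sum(w * Dd**2)
        alpha = (g @ g) / curv                         # closed-form MM stepsize
        x = x + alpha * d
        history.append(objective(x, y, lam))
    return x, np.array(history)

# Noisy step edge: the MM stepsize needs no line search yet still
# guarantees monotone decrease of the edge-preserving objective.
rng = np.random.default_rng(3)
y = np.concatenate([np.zeros(30), np.ones(30)]) + 0.1 * rng.standard_normal(60)
x, history = mm_descent(y)
```

Because each step exactly minimizes a majorant of F along the direction, convergence does not depend on how accurately an inner search is solved, which is the point the abstract makes.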
Affiliation(s)
- Emilie Chouzenoux: IRCCyN (CNRS UMR 6597), Ecole Centrale Nantes, 44321 Nantes Cedex 03, France
13. Tarel JP, Ieng SS, Charbonnier P. A constrained-optimization based half-quadratic algorithm for robustly fitting sets of linearly parametrized curves. Adv Data Anal Classif 2008. [DOI: 10.1007/s11634-008-0031-6]
14. Nikolova M, Chan RH. The equivalence of half-quadratic minimization and the gradient linearization iteration. IEEE Trans Image Process 2007;16:1623-1627. [PMID: 17547139] [DOI: 10.1109/TIP.2007.896622]
Abstract
A popular way to restore images comprising edges is to minimize a cost function combining a quadratic data-fidelity term and an edge-preserving (possibly nonconvex) regularization term. Mainly because of the latter term, calculating the solution is slow and cumbersome. Half-quadratic (HQ) minimization (multiplicative form) was pioneered by Geman and Reynolds (1992) to alleviate the computational burden of image reconstruction with nonconvex regularization. By promoting the idea of locally homogeneous image models with a continuous-valued line process, they reformulated the optimization problem in terms of an augmented cost function which is quadratic with respect to the image and separable with respect to the line process, hence the name "half quadratic." Since then, a large number of papers have been devoted to HQ minimization, and important results have been obtained, including edge preservation with convex regularization and convergence guarantees. In this paper, we show that HQ minimization (multiplicative form) is equivalent to the most simple and basic method in which the gradient of the cost function is linearized at each iteration step; in fact, both methods give exactly the same iterates. Furthermore, connections of HQ minimization with other methods, such as the quasi-Newton method and the generalized Weiszfeld's method, become straightforward.
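The stated equivalence can be verified numerically on a separable toy criterion. A sketch under stated assumptions (φ(t) = sqrt(1 + t²) acting pointwise, identity data term; the paper's setting is more general): one multiplicative-form HQ step and one gradient-linearization step produce the same iterate.

```python
import numpy as np

LAM = 0.7  # regularization weight for the toy criterion below

def hq_multiplicative_step(x, y, lam=LAM):
    """One half-quadratic (multiplicative form) step for the separable
        J(x) = 0.5*||x - y||^2 + lam * sum_i phi(x_i), phi(t) = sqrt(1+t^2):
    fix the line-process weights b_i = phi'(x_i)/x_i, then minimize the
    resulting quadratic exactly."""
    b = 1.0 / np.sqrt(1.0 + x**2)      # phi'(t)/t, well-defined at t = 0
    return y / (1.0 + lam * b)         # solves (I + lam*diag(b)) x+ = y

def gradient_linearization_step(x, y, lam=LAM):
    """One gradient-linearization step: write grad J(x) = L(x) x - y with
    L(x) = I + lam*diag(phi'(x_i)/x_i), then update
    x+ = x - L(x)^{-1} grad J(x)."""
    b = 1.0 / np.sqrt(1.0 + x**2)
    grad = (x - y) + lam * x * b       # phi'(x) = x / sqrt(1 + x^2)
    return x - grad / (1.0 + lam * b)

rng = np.random.default_rng(4)
y = rng.standard_normal(8)
x_hq = np.zeros(8)
x_gl = np.zeros(8)
for _ in range(5):
    x_hq = hq_multiplicative_step(x_hq, y)
    x_gl = gradient_linearization_step(x_gl, y)
```

Expanding the gradient-linearization update shows it reduces algebraically to y / (1 + lam*b), the HQ quadratic solve, which is the paper's equivalence in miniature.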
Affiliation(s)
- Mila Nikolova: Centre de Mathématiques et de Leurs Applications (CNRS UMR 8536), ENS de Cachan, 94235 Cachan Cedex, France