1
Improving recommendation quality through outlier removal. Int J Mach Learn Cybern 2022. DOI: 10.1007/s13042-021-01490-7
2
Grussu F, Battiston M, Veraart J, Schneider T, Cohen-Adad J, Shepherd TM, Alexander DC, Fieremans E, Novikov DS, Gandini Wheeler-Kingshott CAM. Multi-parametric quantitative in vivo spinal cord MRI with unified signal readout and image denoising. Neuroimage 2020; 217:116884. PMID: 32360689; PMCID: PMC7378937; DOI: 10.1016/j.neuroimage.2020.116884
Abstract
Multi-parametric quantitative MRI (qMRI) of the spinal cord is a promising non-invasive tool to probe early microstructural damage in neurological disorders. It is usually performed in vivo by combining acquisitions with multiple signal readouts, which exhibit different thermal noise levels, geometrical distortions and susceptibility to physiological noise. This ultimately hinders joint multi-contrast modelling and makes the geometric correspondence of parametric maps challenging. We propose an approach to overcome these limitations, by implementing state-of-the-art microstructural MRI of the spinal cord with a unified signal readout in vivo (i.e. with matched spatial encoding parameters across a range of imaging contrasts). We base our acquisition on single-shot echo planar imaging with reduced field-of-view, and obtain data from two different vendors (vendor 1: Philips Achieva; vendor 2: Siemens Prisma). Importantly, the unified acquisition allows us to compare signal and noise across contrasts, thus enabling overall quality enhancement via multi-contrast image denoising methods. As a proof-of-concept, here we provide a demonstration with one such method, known as Marchenko-Pastur (MP) Principal Component Analysis (PCA) denoising. MP-PCA is a singular value (SV) decomposition truncation approach that relies on redundant acquisitions, i.e. such that the number of measurements is large compared to the number of components that are maintained in the truncated SV decomposition. Here we used in vivo and synthetic data to test whether a unified readout enables more efficient MP-PCA denoising of less redundant acquisitions, since these can be denoised jointly with more redundant ones. We demonstrate that a unified readout provides robust multi-parametric maps, including diffusion and kurtosis tensors from diffusion MRI, myelin metrics from two-pool magnetisation transfer, and T1 and T2 from relaxometry. Moreover, we show that MP-PCA improves the quality of our multi-contrast acquisitions, since it reduces the coefficient of variation (i.e. variability) by up to 17% for mean kurtosis, 8% for bound pool fraction (myelin-sensitive), and 13% for T1, while enabling more efficient denoising of modalities limited in redundancy (e.g. relaxometry). In conclusion, multi-parametric spinal cord qMRI with unified readout is feasible and provides robust microstructural metrics with matched resolution and distortions, whose quality benefits from multi-contrast denoising methods such as MP-PCA.
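To make the MP-PCA truncation idea above concrete, the following is a minimal, self-contained Python sketch (NumPy only) of denoising a measurements-by-voxels patch matrix via singular value truncation with a simplified Marchenko-Pastur noise criterion. The function name `mppca_denoise`, the toy data and the exact form of the truncation rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mppca_denoise(X):
    """Simplified MP-PCA denoising of a measurements-by-voxels matrix X.

    X has shape (M, N): M pooled measurements (e.g. multi-contrast volumes)
    and N voxels in a local patch, ideally with N >= M (redundant data).
    Returns the denoised matrix and an estimate of the noise std deviation.
    """
    M, N = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    lam = (s ** 2) / N                     # eigenvalues of the sample covariance
    lam_asc = lam[::-1]                    # ascending: smallest (noise) first

    # The smallest eigenvalues consistent with pure noise follow the
    # Marchenko-Pastur law: their mean estimates sigma^2, and their spread is
    # roughly lam_max - lam_min ~ 4 * sigma^2 * sqrt(gamma), with gamma = k / N.
    csum = np.cumsum(lam_asc)
    n_noise, sigma2 = 0, 0.0
    for k in range(1, M + 1):               # k = candidate number of noise eigenvalues
        gamma = k / N
        sig_from_mean = csum[k - 1] / k
        sig_from_width = (lam_asc[k - 1] - lam_asc[0]) / (4.0 * np.sqrt(gamma))
        if sig_from_width < sig_from_mean:  # still looks like the MP noise bulk
            n_noise, sigma2 = k, sig_from_mean
    p = M - n_noise                         # number of retained signal components

    s_trunc = s.copy()
    s_trunc[p:] = 0.0                       # hard truncation of the noise components
    return U @ np.diag(s_trunc) @ Vt, float(np.sqrt(sigma2))

# Toy usage: rank-3 signal plus Gaussian noise with std 0.5
rng = np.random.default_rng(0)
signal = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 500))
denoised, sigma = mppca_denoise(signal + 0.5 * rng.normal(size=(60, 500)))
print(round(sigma, 2))   # expected to land near 0.5
```

The sketch also illustrates why redundancy matters: the more noise-only eigenvalues there are relative to signal components, the more reliably the MP bulk (and hence the truncation point) can be identified.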
Affiliation(s)
- Francesco Grussu
- Queen Square MS Centre, UCL Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, London, UK; Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK.
- Marco Battiston
- Queen Square MS Centre, UCL Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, London, UK
- Jelle Veraart
- Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, USA
- Julien Cohen-Adad
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, Canada; Functional Neuroimaging Unit, CRIUGM, Université de Montréal, Montreal, Canada
- Timothy M Shepherd
- Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, USA
- Daniel C Alexander
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Els Fieremans
- Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, USA
- Dmitry S Novikov
- Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, USA
- Claudia A M Gandini Wheeler-Kingshott
- Queen Square MS Centre, UCL Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, London, UK; Brain MRI 3T Research Centre, IRCCS Mondino Foundation, Pavia, Italy; Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
3
Chen X, Han Z, Wang Y, Zhao Q, Meng D, Lin L, Tang Y. A Generalized Model for Robust Tensor Factorization With Noise Modeling by Mixture of Gaussians. IEEE Trans Neural Netw Learn Syst 2018; 29:5380-5393. PMID: 29994738; DOI: 10.1109/tnnls.2018.2796606
Abstract
The low-rank tensor factorization (LRTF) technique has received increasing attention in many computer vision applications. Compared with traditional matrix factorization, it better preserves the intrinsic structure information and thus achieves better low-dimensional subspace recovery. Basically, the desired low-rank tensor is recovered by minimizing the least-squares loss between the input data and its factorized representation. Since the least-squares loss is optimal only when the noise follows a Gaussian distribution, L1-norm-based methods have been designed to deal with outliers. Unfortunately, they may lose their effectiveness when dealing with real data, which are often contaminated by complex noise. In this paper, we integrate a noise-modeling technique into a generalized weighted LRTF (GWLRTF) procedure: the original task is treated as an LRTF problem and the noise is modeled by a mixture of Gaussians (MoG), an approach called MoG GWLRTF. To extend the applicability of the model, two typical tensor factorization operations, i.e., CANDECOMP/PARAFAC (CP) factorization and Tucker factorization, are incorporated into the LRTF procedure. The model parameters are updated under the expectation-maximization framework. Extensive experiments indicate the respective advantages of the two versions of MoG GWLRTF in various applications and demonstrate their effectiveness compared with other competing methods.
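As a concrete illustration of the noise-modelling component described above, here is a small Python/NumPy sketch that fits a zero-mean mixture of Gaussians to factorization residuals with EM. It covers only the MoG step, not the alternating CP/Tucker factor updates of MoG GWLRTF; the function name, initialisation and toy data are illustrative assumptions.

```python
import numpy as np

def fit_mog_residuals(residuals, K=3, n_iter=50):
    """EM for a zero-mean K-component Gaussian mixture on factorization residuals.

    Only the noise-modelling step of the approach described above; the full
    method alternates it with weighted CP/Tucker factor updates.
    """
    e = residuals.ravel()
    pi = np.full(K, 1.0 / K)                     # mixing proportions
    var = np.var(e) * np.linspace(0.1, 2.0, K)   # spread-out initial variances
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each residual
        log_p = (np.log(pi)[None, :]
                 - 0.5 * np.log(2 * np.pi * var)[None, :]
                 - 0.5 * e[:, None] ** 2 / var[None, :])
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update proportions and variances (means fixed at zero)
        Nk = r.sum(axis=0) + 1e-12
        pi = Nk / e.size
        var = (r * e[:, None] ** 2).sum(axis=0) / Nk
    return pi, var, r

# Toy usage: residuals drawn from a two-component noise mixture
rng = np.random.default_rng(1)
res = np.concatenate([rng.normal(0, 0.1, 8000), rng.normal(0, 1.0, 2000)])
pi, var, resp = fit_mog_residuals(res, K=2)
print(pi, np.sqrt(var))   # roughly 0.8/0.2 mixing with std 0.1/1.0 (order may vary)
```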
4
Kang W, Yu S, Seo D, Jeong J, Paik J. Push-Broom-Type Very High-Resolution Satellite Sensor Data Correction Using Combined Wavelet-Fourier and Multiscale Non-Local Means Filtering. Sensors 2015; 15:22826-22853. PMID: 26378532; PMCID: PMC4610582; DOI: 10.3390/s150922826
Abstract
In very high-resolution (VHR) push-broom-type satellite sensor data, stripe and random noise have been chronic problems that have attracted major research efforts in the remote sensing field. Since estimating the original image from a noisy input is an ill-posed problem, a simple noise removal algorithm cannot preserve the radiometric integrity of satellite data. To address these problems, we present a novel method to correct VHR data acquired by a push-broom-type sensor by combining wavelet-Fourier and multiscale non-local means (NLM) filters. After the wavelet-Fourier filter separates the stripe noise from the mixed noise in the wavelet low- and selected high-frequency sub-bands, random noise is removed using the multiscale NLM filter in both low- and high-frequency sub-bands without loss of image detail. The performance of the proposed method is compared with that of various existing methods on push-broom-type sensor data acquired by the Korean Multi-Purpose Satellite 3 (KOMPSAT-3) with severe stripe and random noise; the proposed method shows significantly better enhancement than existing state-of-the-art methods in both qualitative and quantitative assessments.
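The wavelet-Fourier stage described above can be sketched as follows in Python, assuming the PyWavelets package. The stripe direction, damping profile and parameter values are illustrative, the multiscale NLM stage is omitted, and this is not the authors' exact pipeline; it follows the generic wavelet-FFT destriping idea of damping near-zero frequencies along the stripe direction inside the relevant detail sub-bands.

```python
import numpy as np
import pywt

def wavelet_fourier_destripe(img, levels=4, wavelet="db8", sigma=2.0):
    """Suppress vertical stripes: damp near-zero column frequencies in the
    vertical-detail wavelet sub-bands (the NLM denoising stage is omitted)."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    new_coeffs = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        f = np.fft.fftshift(np.fft.fft(cV, axis=0), axes=0)   # FFT along the stripe direction
        rows = f.shape[0]
        freq = np.arange(rows) - rows // 2
        damp = 1.0 - np.exp(-freq ** 2 / (2.0 * sigma ** 2))  # notch around zero frequency
        f *= damp[:, None]
        cV_filt = np.real(np.fft.ifft(np.fft.ifftshift(f, axes=0), axis=0))
        new_coeffs.append((cH, cV_filt, cD))
    out = pywt.waverec2(new_coeffs, wavelet)
    return out[:img.shape[0], :img.shape[1]]                  # guard against 1-px padding

# Toy usage: a smooth image corrupted by column-wise (vertical) stripes
rng = np.random.default_rng(2)
clean = np.outer(np.hanning(256), np.hanning(256))
striped = clean + 0.05 * rng.normal(size=256)[None, :]        # constant along each column
restored = wavelet_fourier_destripe(striped)
print(np.abs(restored - clean).mean() < np.abs(striped - clean).mean())
```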
Affiliation(s)
- Wonseok Kang
- Department of Image, Chung-Ang University, 84 Heukseok-ro, Dongjak-gu, Seoul 06974, Korea.
- Soohwan Yu
- Department of Image, Chung-Ang University, 84 Heukseok-ro, Dongjak-gu, Seoul 06974, Korea.
- Doochun Seo
- Department of Satellite Data Cal/Val Team, Korea Aerospace Research Institute, 115 Gwahangbo, Yusung-Gu, Daejeon 34133, Korea.
- Jaeheon Jeong
- Department of Satellite Data Cal/Val Team, Korea Aerospace Research Institute, 115 Gwahangbo, Yusung-Gu, Daejeon 34133, Korea.
- Joonki Paik
- Department of Image, Chung-Ang University, 84 Heukseok-ro, Dongjak-gu, Seoul 06974, Korea.
5
Abstract
BACKGROUND: Optical mapping is an important tool for studying cardiac electrophysiology. Transmembrane fluorescence signals from voltage-dependent dyes need to be preprocessed before analysis to improve the signal-to-noise ratio (SNR). Fourier analysis, based on the spectral properties of stationary signals, cannot directly provide information on how the spectrum changes with time, and Fourier filtering tends to degrade abrupt waveform changes such as those in action potential signals. Wavelet analysis offers simultaneous localization in the time and frequency domains, is suitable for analyzing and reconstructing irregular, non-stationary signals such as the fast action-potential upstroke, and denoises better than conventional filters.
METHODS: We applied the discrete wavelet transform for temporal processing of optical mapping signals and wavelet packet analysis approaches to process activation maps from simulated and experimental optical mapping data from the canine right atrium. We compared the results obtained with the wavelet approach against a variety of other methods (fast Fourier transform (FFT) filtering with finite or infinite impulse response filters, and Gaussian filters).
RESULTS: Temporal wavelet analysis improved the SNR more than FFT filtering for 5-10 dB input SNR and caused less distortion of the action potential waveform over the full range of simulated noise (5-20 dB). Spatial wavelet filtering produced more efficient denoising and/or more accurate conduction velocity estimates than Gaussian filtering. Propagation patterns were also best revealed by wavelet filtering.
CONCLUSIONS: Wavelet analysis is a promising tool that facilitates accurate action potential characterization, activation map formation, and conduction velocity estimation.
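As a minimal illustration of the temporal wavelet processing described above, the sketch below soft-thresholds the discrete wavelet coefficients of a noisy, action-potential-like trace (PyWavelets assumed). The universal threshold, MAD noise estimate, wavelet choice and toy waveform are assumptions for illustration; they are not the authors' exact wavelet or wavelet-packet pipeline.

```python
import numpy as np
import pywt

def wavelet_denoise_trace(signal, wavelet="sym8", level=5):
    """Denoise a 1-D optical-mapping-style trace by soft-thresholding its
    discrete wavelet coefficients (universal threshold, MAD noise estimate)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise std from the finest scale
    thr = sigma * np.sqrt(2.0 * np.log(signal.size))        # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[:signal.size]

# Toy usage: an action-potential-like pulse train with additive noise
t = np.linspace(0, 1, 2000)
clean = np.where((t % 0.25) < 0.1, 1.0, 0.0) * np.exp(-(t % 0.25) / 0.08)
rng = np.random.default_rng(3)
noisy = clean + 0.2 * rng.normal(size=t.size)
recovered = wavelet_denoise_trace(noisy)
snr_gain = 10 * np.log10(np.var(clean) / np.var(recovered - clean)) \
         - 10 * np.log10(np.var(clean) / np.var(noisy - clean))
print(f"SNR improvement: {snr_gain:.1f} dB")
```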
Affiliation(s)
- Feng Xiong
- Department of Pharmacology and Therapeutics, McGill University, Montreal, Que., Canada; Research Center, Montreal Heart Institute and Université de Montréal, 5000 Belanger Street East, Montreal, Que., Canada H1T 1C8
- Xiaoyan Qi
- Research Center, Montreal Heart Institute and Université de Montréal, 5000 Belanger Street East, Montreal, Que., Canada H1T 1C8
- Stanley Nattel
- Department of Pharmacology and Therapeutics, McGill University, Montreal, Que., Canada; Research Center, Montreal Heart Institute and Université de Montréal, 5000 Belanger Street East, Montreal, Que., Canada H1T 1C8; Department of Medicine, Montreal Heart Institute and Université de Montréal, Montreal, Que., Canada
- Philippe Comtois
- Research Center, Montreal Heart Institute and Université de Montréal, 5000 Belanger Street East, Montreal, Que., Canada H1T 1C8; Department of Molecular and Integrative Physiology/Institute of Biomedical Engineering, Université de Montréal, Montreal, Que., Canada.
6
Ikeda M, Makino R, Imai K. A new evaluation method for image noise reduction and usefulness of the spatially adaptive wavelet thresholding method for CT images. Australas Phys Eng Sci Med 2012; 35:475-483. PMID: 23250578; DOI: 10.1007/s13246-012-0175-8
Abstract
We propose a direct method for evaluating how well an image noise-reduction technique preserves the noise-free image components. The method graphically assesses this preservation by examining the normal probability plot of the pixel-value difference between an original image and its noise-reduced version; this difference is equivalent to the "method noise" defined by Buades et al. Furthermore, by comparing the linearity of the normal probability plots of two different noise-reduction methods, one can graphically judge which method better preserves the noise-free components. As an illustrative application, we evaluated the effectiveness of the spatially adaptive BayesShrink noise-reduction method of Chang et al. on chest phantom CT images. The results of the proposed evaluation were consistent with the visual impressions of the CT images processed in this study. They also indicate that the spatially adaptive BayesShrink algorithm works well on chest phantom CT images: although the assumption underlying the method is often violated in CT images, the algorithm appears sufficiently robust for such data.
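The evaluation idea above can be sketched directly: compute the "method noise" (original minus denoised image) and examine its normal probability plot. The sketch below uses SciPy's probplot and a plain Gaussian smoother as a stand-in denoiser, since reproducing the spatially adaptive BayesShrink filter would take much more code; the phantom, noise level and function name are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage, stats

def method_noise_probplot(original, denoised):
    """Normal probability plot summary of the 'method noise' (original - denoised).

    If the denoiser removed mostly noise (not structure), the method noise is
    close to Gaussian and the plot is close to linear; probplot's r quantifies this.
    """
    diff = (original - denoised).ravel()
    _, (slope, intercept, r) = stats.probplot(diff, dist="norm")
    return r    # closer to 1.0 means more nearly Gaussian method noise

# Toy usage with a simple Gaussian smoother standing in for the CT denoiser
rng = np.random.default_rng(4)
phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 100.0                       # crude "chest phantom" structure
noisy = phantom + 5.0 * rng.normal(size=phantom.shape)

mild = ndimage.gaussian_filter(noisy, sigma=1.0)    # removes mostly noise
harsh = ndimage.gaussian_filter(noisy, sigma=5.0)   # also blurs edges, so structure
                                                    # leaks into the method noise
print(method_noise_probplot(noisy, mild), method_noise_probplot(noisy, harsh))
```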
Affiliation(s)
- Mitsuru Ikeda
- Department of Radiological Technology, Nagoya University Graduate School of Medicine, Higashi-ku, Nagoya, Japan.
7
Liu Y, Cormack LK, Bovik AC. Statistical modeling of 3-D natural scenes with application to Bayesian stereopsis. IEEE Trans Image Process 2011; 20:2515-2530. PMID: 21342845; DOI: 10.1109/tip.2011.2118223
Abstract
We studied the empirical distributions of luminance, range, and disparity wavelet coefficients using a coregistered database of luminance and range images. The marginal distributions of range and disparity are observed to have high peaks and heavy tails, similar to the well-known properties of luminance wavelet coefficients. However, we found that the kurtosis of range and disparity coefficients is significantly larger than that of luminance coefficients. We used generalized Gaussian models to fit the empirical marginal distributions. We found that the marginal distribution of luminance coefficients has a shape parameter p between 0.6 and 0.8, while range and disparity coefficients have much smaller parameters, p < 0.32, corresponding to a much higher peak. We also examined the conditional distributions of luminance, range, and disparity coefficients. The magnitudes of luminance and range (disparity) coefficients show a clear positive correlation, meaning that at a location with larger luminance variation, there is a higher probability of a larger range (disparity) variation. We also used generalized Gaussians to model the conditional distributions of luminance and range (disparity) coefficients; the values of the two shape parameters (p, s) reflect the observed luminance-range (disparity) dependency. As an example of the usefulness of luminance statistics conditioned on range statistics, we modified a well-known Bayesian stereo ranging algorithm using our natural scene statistics models, which improved its performance.
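As a small illustration of the generalized Gaussian modelling described above, the sketch below fits `scipy.stats.gennorm` (whose shape parameter plays the role of p) to one wavelet detail subband of a synthetic, range-like blocky image. The image, wavelet and subband choice are assumptions; the coregistered luminance/range database of the paper is not reproduced here.

```python
import numpy as np
import pywt
from scipy import stats

def ggd_shape_of_subband(image, wavelet="db4", level=2):
    """Fit a zero-mean generalized Gaussian to one wavelet detail subband and
    return its shape parameter (scipy's gennorm 'beta', playing the role of p)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    detail = coeffs[1][0].ravel()                   # one detail subband at the coarsest level
    beta, loc, scale = stats.gennorm.fit(detail, floc=0.0)
    return beta, stats.kurtosis(detail)

# Toy usage on a synthetic piecewise-constant "range-like" image: sparse detail
# coefficients should give a small shape parameter (sharp peak, heavy tails)
rng = np.random.default_rng(5)
img = np.kron(rng.normal(size=(8, 8)), np.ones((32, 32)))   # 256x256 blocky image
beta, kurt = ggd_shape_of_subband(img + 0.01 * rng.normal(size=img.shape))
print(f"shape parameter ~ {beta:.2f}, excess kurtosis ~ {kurt:.1f}")
```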
Affiliation(s)
- Yang Liu
- Center for Perceptual Systems and the Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712, USA.
8
Geodesics on the Manifold of Multivariate Generalized Gaussian Distributions with an Application to Multicomponent Texture Discrimination. Int J Comput Vis 2011. DOI: 10.1007/s11263-011-0448-9
9
Abstract
Magnetic resonance (MR) images are normally corrupted by random noise from the measurement process, which complicates the automatic feature extraction and analysis of clinical data. For this reason, denoising methods have traditionally been applied to improve MR image quality. Many of these methods use the information of a single image without taking into account the intrinsic multicomponent nature of MR images. In this paper we propose a new filter to reduce random noise in multicomponent MR images by spatially averaging similar pixels, using information from all available image components to perform the denoising process. The proposed algorithm also uses a local principal component analysis (PCA) decomposition as a postprocessing step to remove further noise by exploiting information not only in the spatial domain but also across components, achieving higher noise reduction without significantly affecting the original image resolution. The proposed method has been compared with related state-of-the-art methods on synthetic and real clinical multicomponent MR images, showing improved performance in all cases analyzed.
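To illustrate the core idea of averaging similar pixels with weights computed jointly from all components, here is a deliberately naive (and slow) Python/NumPy sketch of multicomponent non-local means. The patch and search sizes, the filtering parameter h and the toy data are assumptions, and the local PCA post-processing stage is omitted; this is not the proposed filter itself.

```python
import numpy as np

def multicomponent_nlm(img, search=5, patch=1, h=0.1):
    """Naive non-local means for a multicomponent image of shape (H, W, C).

    Patch similarity is computed jointly over all C components, so every
    component contributes to the weights used to average each pixel.
    """
    H, W, C = img.shape
    pad = patch + search
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            ic, jc = i + pad, j + pad
            ref = padded[ic - patch:ic + patch + 1, jc - patch:jc + patch + 1, :]
            weights, accum = 0.0, np.zeros(C)
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    nb = padded[ic + di - patch:ic + di + patch + 1,
                                jc + dj - patch:jc + dj + patch + 1, :]
                    d2 = np.mean((ref - nb) ** 2)        # joint distance over components
                    w = np.exp(-d2 / (h * h))
                    weights += w
                    accum += w * padded[ic + di, jc + dj, :]
            out[i, j, :] = accum / weights
    return out

# Toy usage: a 3-component image with shared structure and independent noise
rng = np.random.default_rng(6)
base = np.zeros((32, 32))
base[8:24, 8:24] = 1.0
noisy = np.stack([base + 0.1 * rng.normal(size=base.shape) for _ in range(3)], axis=-1)
denoised = multicomponent_nlm(noisy)
print(np.abs(denoised - base[..., None]).mean() < np.abs(noisy - base[..., None]).mean())
```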
10
Rahman SMM, Ahmad MO, Swamy MNS. Bayesian wavelet-based image denoising using the Gauss-Hermite expansion. IEEE Trans Image Process 2008; 17:1755-1771. PMID: 18784025; DOI: 10.1109/tip.2008.2002163
Abstract
The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. Conventional PDFs usually have a limited number of parameters, calculated from the first few moments only. Consequently, such PDFs cannot be made to fit the empirical PDF of the wavelet coefficients of an image very well, and a shrinkage function based on any of these density functions provides substandard denoising performance. In order for the probabilistic model of the image wavelet coefficients to incorporate an appropriate number of parameters that depend on the higher-order moments, we introduce a PDF based on a series expansion in terms of the Hermite polynomials that are orthogonal with respect to the standard Gaussian weight function. A modification of the series is introduced so that only a finite number of terms is needed to model the image wavelet coefficients while ensuring that the resulting PDF is non-negative. It is shown that the proposed PDF matches the empirical one better than standard choices such as the generalized Gaussian or Bessel K-form PDF. A Bayesian image denoising technique is then proposed, in which the new PDF is used to statistically model both the subband and the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, in both its subband-adaptive and locally adaptive forms, outperforms most methods that use PDFs with a limited number of parameters.
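The series-expansion idea above can be illustrated with a Gram-Charlier-type construction: Hermite polynomials orthogonal under the standard Gaussian weight, with coefficients computed from sample moments of the standardized data. The sketch below is a crude stand-in for the paper's PDF (negative excursions are simply clipped and the density renormalized, rather than using the paper's modification), and the Laplace toy data, truncation order and function name are assumptions.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def gauss_hermite_series_pdf(samples, order=6, grid=np.linspace(-6, 6, 1201)):
    """Truncated Gauss-Hermite (Gram-Charlier-type) series density built from
    sample moments of the standardized data; negative parts are clipped."""
    x = (samples - samples.mean()) / samples.std()
    coef = np.zeros(order + 1)
    for n in range(order + 1):
        basis = np.zeros(n + 1)
        basis[n] = 1.0
        coef[n] = He.hermeval(x, basis).mean() / math.factorial(n)  # c_n = E[He_n(X)] / n!
    phi = np.exp(-grid ** 2 / 2.0) / np.sqrt(2.0 * np.pi)           # Gaussian weight
    pdf = np.clip(phi * He.hermeval(grid, coef), 0.0, None)
    pdf /= pdf.sum() * (grid[1] - grid[0])                          # renormalize after clipping
    return grid, pdf

# Toy usage: heavy-tailed, wavelet-coefficient-like samples (Laplace distributed)
rng = np.random.default_rng(7)
samples = rng.laplace(scale=1.0, size=50_000)
grid, pdf = gauss_hermite_series_pdf(samples)
hist, edges = np.histogram((samples - samples.mean()) / samples.std(),
                           bins=100, range=(-6, 6), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
print(round(float(np.abs(np.interp(centres, grid, pdf) - hist).mean()), 3))  # rough fit error
```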
Affiliation(s)
- S M Mahbubur Rahman
- Center for Signal Processing and Communications, Department of Electrical and Computer Engineering, Concordia University, Montréal, QC, Canada.