1. Zhang D, Cheng KT. Generalized Task-Driven Medical Image Quality Enhancement With Gradient Promotion. IEEE Transactions on Pattern Analysis and Machine Intelligence 2025; 47:2785-2798. [PMID: 40030882] [DOI: 10.1109/tpami.2025.3525671]
Abstract
Thanks to recent achievements in task-driven image quality enhancement (IQE) models such as ESTR (Liu et al. 2023), the image enhancement model and the visual recognition model can mutually boost each other's quantitative performance while producing high-quality processed images that remain perceptually meaningful to the human visual system. However, existing task-driven IQE models tend to overlook an underlying fact: different levels of vision tasks have varying, and sometimes conflicting, requirements on image features. To address this problem, this paper proposes a generalized gradient promotion (GradProm) training strategy for task-driven IQE of medical images. Specifically, we partition a task-driven IQE system into two sub-models: a mainstream model for image enhancement and an auxiliary model for visual recognition. During training, GradProm updates only the parameters of the image enhancement model, using the gradients of both sub-models, but only when these gradients are aligned in the same direction, as measured by their cosine similarity. When the gradients of the two sub-models point in different directions, GradProm uses only the gradient of the image enhancement model to update its parameters. Theoretically, we prove that the optimization direction of the image enhancement model is not biased by the auxiliary visual recognition model under GradProm. Empirically, extensive experiments on four public yet challenging medical image datasets demonstrate the superior performance of GradProm over existing state-of-the-art methods.
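The cosine-similarity gate at the heart of GradProm can be sketched in a few lines. The sketch below assumes flattened gradient vectors and a simple summed combination when the gradients agree; the exact combination rule is an assumption, not the paper's formula:

```python
import numpy as np

def gradprom_update(g_enh, g_rec):
    """Combine gradients per the GradProm rule: the auxiliary recognition
    gradient is used only when it points in the same direction as the
    enhancement gradient (positive cosine similarity)."""
    cos = g_enh @ g_rec / (np.linalg.norm(g_enh) * np.linalg.norm(g_rec) + 1e-12)
    if cos > 0:                # aligned: let the auxiliary task contribute
        return g_enh + g_rec   # summed combination (an assumption)
    return g_enh               # conflicting: ignore the auxiliary gradient

print(gradprom_update(np.array([1.0, 0.0]), np.array([0.5, 0.5])))   # [1.5 0.5]
print(gradprom_update(np.array([1.0, 0.0]), np.array([-1.0, 0.1])))  # [1. 0.]
```

In a real training loop the same gate would be applied per parameter tensor (or to the concatenation of all gradients) before the optimizer step.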
2. Gao Z, Guo Y, Zhang J, Zeng T, Yang G. Hierarchical Perception Adversarial Learning Framework for Compressed Sensing MRI. IEEE Transactions on Medical Imaging 2023; 42:1859-1874. [PMID: 37022266] [DOI: 10.1109/tmi.2023.3240862]
Abstract
The long acquisition time has limited the accessibility of magnetic resonance imaging (MRI) because it leads to patient discomfort and motion artifacts. Among the MRI techniques proposed to reduce acquisition time, compressed sensing MRI (CS-MRI) enables fast acquisition without compromising SNR or resolution. However, existing CS-MRI methods suffer from aliasing artifacts, which produce noise-like textures and a loss of fine details, leading to unsatisfactory reconstruction performance. To tackle this challenge, we propose a hierarchical perception adversarial learning framework (HP-ALF). HP-ALF perceives image information through a hierarchical mechanism: image-level perception and patch-level perception. The former reduces the visual perception difference over the entire image and thus removes aliasing artifacts; the latter reduces this difference within regions of the image and thus recovers fine details. Specifically, HP-ALF achieves the hierarchical mechanism through multilevel perspective discrimination, which provides information from two perspectives (overall and regional) for adversarial learning. It also utilizes a global and local coherent discriminator to provide structure information to the generator during training. In addition, HP-ALF contains a context-aware learning block that effectively exploits the slice information between individual images for better reconstruction performance. Experiments on three datasets demonstrate the effectiveness of HP-ALF and its superiority over the comparative methods.
3. Zhang Z, Cheng Y, Suo J, Bian L, Dai Q. INFWIDE: Image and Feature Space Wiener Deconvolution Network for Non-Blind Image Deblurring in Low-Light Conditions. IEEE Transactions on Image Processing 2023; 32:1390-1402. [PMID: 37027543] [DOI: 10.1109/tip.2023.3244417]
Abstract
In low-light environments, handheld photography suffers from severe camera shake under long exposure settings. Although existing deblurring algorithms show promising performance on well-exposed blurry images, they still cannot cope with low-light snapshots. Sophisticated noise and saturated regions are the two dominant challenges in practical low-light deblurring: the former violates the Gaussian or Poisson assumptions widely used in most existing algorithms and thus badly degrades their performance, while the latter introduces non-linearity into the classical convolution-based blurring model and makes deblurring even more challenging. In this work, we propose a novel non-blind deblurring method, the image and feature space Wiener deconvolution network (INFWIDE), to tackle these problems systematically. In terms of algorithm design, INFWIDE adopts a two-branch architecture that explicitly removes noise and hallucinates saturated regions in the image space, suppresses ringing artifacts in the feature space, and integrates the two complementary outputs with a subtle multi-scale fusion network for high-quality night-photograph deblurring. For effective network training, we design a set of loss functions integrating a forward imaging model and backward reconstruction to form a closed-loop regularization that secures good convergence of the deep neural network. Further, to optimize INFWIDE's applicability in real low-light conditions, a physical-process-based low-light noise model is employed to synthesize realistic noisy night photographs for model training. Taking advantage of the physically driven characteristics of traditional Wiener deconvolution and the representation ability of deep neural networks, INFWIDE recovers fine details while suppressing unpleasant artifacts during deblurring. Extensive experiments on synthetic and real data demonstrate the superior performance of the proposed approach.
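INFWIDE builds on classical Wiener deconvolution. As background, the textbook frequency-domain Wiener filter (not the paper's learned network) can be sketched as follows, assuming a known blur kernel, periodic boundaries, and a scalar noise-to-signal ratio:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-2):
    """Classical Wiener filter in the frequency domain:
    X = conj(K) * Y / (|K|^2 + NSR), where NSR is the assumed
    noise-to-signal power ratio."""
    K = np.fft.fft2(kernel, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

# blur a random "sharp" image with a known 3x3 box kernel, then restore it
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
kernel = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel, s=sharp.shape)))
restored = wiener_deconvolve(blurred, kernel, nsr=1e-8)  # near-noiseless case
print(np.max(np.abs(restored - sharp)) < 0.01)  # True
```

With real noise the `nsr` term trades ringing against residual noise, which is exactly the failure mode the paper's feature-space branch is designed to suppress.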
4. Analysis of Urban Visual Memes Based on Dictionary Learning: An Example with Urban Image Data. Symmetry (Basel) 2022. [DOI: 10.3390/sym14010175]
Abstract
The coexistence of different cultures is a distinctive feature of human society, yet globalization is making cities increasingly similar, so identifying the unique memes of an urban culture in a multicultural environment is important for a city's development. Most previous analyses of urban style have been based on simple classification tasks that extract visual elements of cities, without considering the most essential visual elements of a city as a whole. Therefore, based on image data from ten representative cities around the world, we extract visual memes via dictionary learning, quantify the symmetric similarities and differences between cities using a memetic similarity measure, and interpret the reasons for these similarities and differences using the memetic similarity together with sparse representation. The experimental results show that the visual memes themselves are largely shared across cities, i.e., the elements composing urban style are very similar, whereas the linear combinations of visual memes vary widely and account for the differences in urban style between cities.
5. Zeng D, Wang L, Geng M, Li S, Deng Y, Xie Q, Li D, Zhang H, Li Y, Xu Z, Meng D, Ma J. Noise-Generating-Mechanism-Driven Unsupervised Learning for Low-Dose CT Sinogram Recovery. IEEE Transactions on Radiation and Plasma Medical Sciences 2022. [DOI: 10.1109/trpms.2021.3083361]
6. Li J, Zhao G, Tao Y, Zhai P, Chen H, He H, Cai T. Multi-task contrastive learning for automatic CT and X-ray diagnosis of COVID-19. Pattern Recognition 2021; 114:107848. [PMID: 33518812] [PMCID: PMC7834978] [DOI: 10.1016/j.patcog.2021.107848]
Abstract
Computed tomography (CT) and X-ray are effective methods for diagnosing COVID-19. Although several studies have demonstrated the potential of deep learning for the automatic diagnosis of COVID-19 using CT and X-ray, generalization to unseen samples needs to be improved. To tackle this problem, we present the contrastive multi-task convolutional neural network (CMT-CNN), which is composed of two tasks. The main task is to distinguish COVID-19 from other pneumonia and normal controls. The auxiliary task encourages local aggregation through a contrastive loss: first, each image is transformed by a series of augmentations (Poisson noise, rotation, etc.); then, the model is optimized so that representations of the same image are embedded close together in a latent space while representations of different images are pushed apart. In this way, CMT-CNN makes transformation-invariant predictions while the spread-out properties of the data are preserved. We demonstrate that this apparently simple auxiliary task provides powerful supervision to enhance generalization. We conduct experiments on a CT dataset (4,758 samples) and an X-ray dataset (5,821 samples) assembled from open datasets and data collected in our hospital. Experimental results demonstrate that contrastive learning (as a plug-in module) brings solid accuracy improvements for deep learning models on both CT (5.49%-6.45%) and X-ray (0.96%-2.42%) without requiring additional annotations. Our code is accessible online.
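A contrastive objective of the kind described above can be sketched as an InfoNCE-style loss on normalized embeddings; this is a common generic formulation, and the temperature value here is an assumption rather than the paper's setting:

```python
import numpy as np

def contrastive_loss(z_a, z_b, temperature=0.5):
    """InfoNCE-style loss: z_a[i] and z_b[i] are embeddings of two
    augmentations of image i; every other pairing acts as a negative."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = z_a @ z_b.T / temperature                    # cosine similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # cross-entropy on the diagonal

# matched augmentation pairs score a lower loss than mismatched ones
rng = np.random.default_rng(1)
z = rng.normal(size=(8, 16))
aligned = contrastive_loss(z, z + 0.01 * rng.normal(size=z.shape))
mismatched = contrastive_loss(z, np.roll(z, 1, axis=0))
print(aligned < mismatched)  # True
```

Minimizing this loss pulls the two augmented views of each image together and pushes different images apart, which is the local-aggregation behavior the auxiliary task exploits.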
Affiliation(s)
- Jinpeng Li: HwaMei Hospital, University of Chinese Academy of Sciences, 41 Northwest Street, Haishu District, Ningbo, 315010, China; Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, 159 Beijiao Street, Jiangbei District, Ningbo, 315000, China; University of Chinese Academy of Sciences, Beijing, China
- Gangming Zhao: Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, 159 Beijiao Street, Jiangbei District, Ningbo, 315000, China; The University of Hong Kong, Hong Kong
- Yaling Tao: HwaMei Hospital, University of Chinese Academy of Sciences, 41 Northwest Street, Haishu District, Ningbo, 315010, China; Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, 159 Beijiao Street, Jiangbei District, Ningbo, 315000, China
- Penghua Zhai: Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, 159 Beijiao Street, Jiangbei District, Ningbo, 315000, China
- Hao Chen: Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, 159 Beijiao Street, Jiangbei District, Ningbo, 315000, China
- Huiguang He: Institute of Automation, Chinese Academy of Sciences, 95 Zhongguancun East Road, Haidian District, Beijing, 100190, China; University of Chinese Academy of Sciences, Beijing, China
- Ting Cai: HwaMei Hospital, University of Chinese Academy of Sciences, 41 Northwest Street, Haishu District, Ningbo, 315010, China; Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, 159 Beijiao Street, Jiangbei District, Ningbo, 315000, China; University of Chinese Academy of Sciences, Beijing, China
7. Jon K, Liu J, Lv X, Zhu W. Poisson noisy image restoration via overlapping group sparse and nonconvex second-order total variation priors. PLoS One 2021; 16:e0250260. [PMID: 33878121] [PMCID: PMC8057597] [DOI: 10.1371/journal.pone.0250260]
Abstract
The restoration of Poisson noisy images is an essential task in many imaging applications because of the uncertainty in the number of discrete particles incident on the image sensor. In this paper, we consider a hybrid regularizer for Poisson noisy image restoration. The proposed regularizer, which combines the overlapping group sparse (OGS) total variation with a high-order nonconvex total variation, alleviates staircase artifacts while preserving sharp edges. We use the framework of the alternating direction method of multipliers (ADMM) to design an efficient minimization algorithm for the proposed model. Since the objective function is the sum of a non-quadratic log-likelihood and a nonconvex, nondifferentiable regularizer, we solve the intractable subproblems with the majorization-minimization (MM) method and the iteratively reweighted least squares (IRLS) algorithm, respectively. Numerical experiments show the efficiency of the proposed method for Poissonian image restoration, including denoising and deblurring.
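The overlapping group sparse TV term can be evaluated directly by summing, for every pixel, the l2 norm of a small group of neighboring finite-difference values. A brute-force (unoptimized) sketch, with the group size and padding scheme as assumptions:

```python
import numpy as np

def ogs_tv(u, group=3):
    """Overlapping group sparse TV: for each pixel, take the l2 norm of a
    group x group neighborhood of finite-difference values, and sum the
    norms over all pixels and both gradient directions."""
    dx = np.diff(u, axis=1, append=u[:, -1:])   # horizontal differences
    dy = np.diff(u, axis=0, append=u[-1:, :])   # vertical differences
    pad = group // 2
    total = 0.0
    for d in (dx, dy):
        dp = np.pad(d, pad, mode="symmetric")
        for i in range(u.shape[0]):
            for j in range(u.shape[1]):
                total += np.linalg.norm(dp[i:i + group, j:j + group])
    return total

flat = np.zeros((8, 8))                     # no gradients at all
step = np.zeros((8, 8)); step[:, 4:] = 1.0  # a single sharp edge
print(ogs_tv(flat), ogs_tv(step) > 0.0)     # 0.0 True
```

Because the groups overlap, isolated large gradients (edges) are penalized less harshly relative to diffuse oscillations than under plain TV, which is what suppresses staircase artifacts.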
Affiliation(s)
- Kyongson Jon: Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun, P.R. China; Faculty of Mathematics, Kim Il Sung University, Pyongyang, D.P.R. of Korea
- Jun Liu: Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun, P.R. China
- Xiaoguang Lv: School of Science, Jiangsu Ocean University, Lianyungang, Jiangsu, P.R. China
- Wensheng Zhu: Key Laboratory for Applied Statistics of MOE, School of Mathematics and Statistics, Northeast Normal University, Changchun, P.R. China
8. Zhang Z, Yu L, Zhao W, Xing L. Modularized data-driven reconstruction framework for nonideal focal spot effect elimination in computed tomography. Med Phys 2021; 48:2245-2257. [PMID: 33595900] [DOI: 10.1002/mp.14785]
Abstract
PURPOSE High-performance computed tomography (CT) plays a vital role in clinical decision-making. However, CT imaging is adversely affected by the nonideal focal spot size of the x-ray source, or degraded by a focal spot enlarged through aging. In this work, we aim to develop a deep learning-based strategy to mitigate this problem so that high-spatial-resolution CT images can be obtained even with a nonideal x-ray source. METHODS To reconstruct high-quality CT images from blurred sinograms via joint image and sinogram learning, a cross-domain hybrid model is formulated via deep learning into a modularized data-driven reconstruction (MDR) framework. The framework comprises several blocks, all sharing the same network architecture and network parameters. In essence, each block utilizes two sub-models to generate an estimated blur kernel and a high-quality CT image simultaneously. In this way, our framework generates not only a final high-quality CT image but also a series of intermediate images with gradually improved anatomical details, enhancing visual perception for clinicians through the dynamic process. We trained our model end-to-end on simulated datasets and tested it on both simulated and realistic experimental datasets. RESULTS On the simulated testing datasets, our approach increases the information fidelity criterion (IFC) by up to 34.2%, the universal quality index (UQI) by up to 20.3%, and the signal-to-noise ratio (SNR) by up to 6.7%, and reduces the root mean square error (RMSE) by up to 10.5%, compared with FBP. Compared with an iterative deconvolution method (NSM), MDR increases IFC by up to 24.7%, UQI by up to 16.7%, and SNR by up to 6.0%, and reduces RMSE by up to 9.4%. In the modulation transfer function (MTF) experiment, our method improves MTF50% by 34.5% and MTF10% by 18.7% compared with FBP; similarly, it improves MTF50% by 14.3% and MTF10% by 0.9% compared with NSM. Our method also better resolves the edges of bony structures and other tiny structures in experiments using a phantom consisting of ham and a bottle of peanuts. CONCLUSIONS A modularized data-driven CT reconstruction framework is established to mitigate the blurring caused by a nonideal x-ray source with a relatively large focal spot. The proposed method enables high-resolution images to be obtained with a less-than-ideal x-ray source.
Affiliation(s)
- Zhicheng Zhang: Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Lequan Yu: Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Wei Zhao: Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Lei Xing: Department of Radiation Oncology, Stanford University, Stanford, CA, USA
9. Onofrey JA, Staib LH, Huang X, Zhang F, Papademetris X, Metaxas D, Rueckert D, Duncan JS. Sparse Data-Driven Learning for Effective and Efficient Biomedical Image Segmentation. Annu Rev Biomed Eng 2020; 22:127-153. [PMID: 32169002] [PMCID: PMC9351438] [DOI: 10.1146/annurev-bioeng-060418-052147]
Abstract
Sparsity is a powerful concept for high-dimensional machine learning, offering representational and computational efficiency, and it is well suited to medical image segmentation. We present a selection of techniques that incorporate sparsity, including strategies based on dictionary learning and deep learning, aimed at medical image segmentation and related quantification.
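Sparse coding against a learned dictionary, a building block of the techniques surveyed here, can be illustrated with a minimal orthogonal matching pursuit. The sketch assumes unit-norm atoms and, so that the demo is guaranteed exact, an orthonormal dictionary:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k columns (atoms) of D
    by correlation with the residual, refitting by least squares each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# with an orthonormal dictionary, a 2-sparse signal is recovered exactly
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(20, 20)))  # orthonormal atoms
y = 3.0 * Q[:, 7] - 2.0 * Q[:, 12]
x = omp(Q, y, k=2)
print(round(x[7], 6), round(x[12], 6))  # 3.0 -2.0
```

In segmentation pipelines, patches (rather than whole signals) are coded this way, and the sparse coefficients feed the downstream labeling step.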
Affiliation(s)
- John A Onofrey: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA; Department of Urology, Yale School of Medicine, New Haven, Connecticut 06520, USA
- Lawrence H Staib: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA; Department of Biomedical Engineering, Yale University, New Haven, Connecticut 06520, USA
- Xiaojie Huang: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA; Citadel Securities, Chicago, Illinois 60603, USA
- Fan Zhang: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA
- Xenophon Papademetris: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA; Department of Biomedical Engineering, Yale University, New Haven, Connecticut 06520, USA
- Dimitris Metaxas: Department of Computer Science, Rutgers University, Piscataway, New Jersey 08854, USA
- Daniel Rueckert: Department of Computing, Imperial College London, London SW7 2AZ, United Kingdom
- James S Duncan: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA; Department of Biomedical Engineering, Yale University, New Haven, Connecticut 06520, USA
10. Viswanath S, Ghulyani M, De Beco S, Dahan M, Arigovindan M. Image Restoration by Combined Order Regularization with Optimal Spatial Adaptation. IEEE Transactions on Image Processing 2020:1-1. [PMID: 32356745] [DOI: 10.1109/tip.2020.2988146]
Abstract
Total Variation (TV) and related extensions have been popular in image restoration due to their robust performance and wide applicability. While the original formulation remains relevant after two decades of extensive research, extensions that combine first- and second-order derivatives are now being explored for better performance, examples being Combined Order TV (COTV) and Total Generalized Variation (TGV). As an improvement over such multi-order convex formulations, we propose a novel non-convex regularization functional that adaptively combines the Hessian-Schatten (HS) norm and first-order TV (TV1) functionals with a spatially varying weight. This adaptive weight is itself controlled by another regularization term; the total cost becomes the sum of the adaptively weighted HS-TV1 term, the regularization term for the adaptive weight, and the data-fitting term. The reconstruction is obtained by jointly minimizing w.r.t. the required image and the adaptive weight. We construct a block coordinate descent method for this minimization, with proof of convergence, that alternates between minimization w.r.t. the required image and the adaptive weight. We derive an exact computational formula for minimization w.r.t. the adaptive weight and construct an ADMM algorithm for minimization w.r.t. the required image. We compare the proposed method with existing regularization methods and a recently proposed deep GAN method on image recovery examples, including MRI reconstruction and microscopy deconvolution.
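The spatially weighted combination of TV1 and the Hessian-Schatten (S1) norm can be evaluated per pixel as sketched below; the finite-difference scheme and the fixed weight map `w` are illustrative assumptions, not the paper's exact discretization or joint optimization:

```python
import numpy as np

def adaptive_cost(u, w, lam=0.1):
    """Spatially adaptive mix of first-order TV and a second-order
    Hessian penalty: the per-pixel weight w in [0, 1] selects between
    them. The S1 (nuclear) norm of the 2x2 Hessian is |l1| + |l2|."""
    ux = np.gradient(u, axis=1); uy = np.gradient(u, axis=0)
    tv1 = np.hypot(ux, uy)                       # first-order magnitude
    uxx = np.gradient(ux, axis=1); uyy = np.gradient(uy, axis=0)
    uxy = np.gradient(ux, axis=0)
    tr, det = uxx + uyy, uxx * uyy - uxy ** 2    # eigenvalues of the Hessian
    disc = np.sqrt(np.maximum(tr ** 2 - 4.0 * det, 0.0))
    hs = np.abs((tr + disc) / 2.0) + np.abs((tr - disc) / 2.0)
    return lam * np.sum(w * tv1 + (1.0 - w) * hs)

edge = np.zeros((6, 6)); edge[:, 3:] = 1.0
w = np.full(edge.shape, 0.5)                     # even mix everywhere
print(adaptive_cost(np.zeros((6, 6)), w), adaptive_cost(edge, w) > 0.0)  # 0.0 True
```

In the paper, `w` is not fixed but jointly optimized with the image, so each region selects whichever order of regularization fits it best.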
11. Xu Y, Li Z, Tian C, Yang J. Multiple vector representations of images and robust dictionary learning. Pattern Recognit Lett 2019. [DOI: 10.1016/j.patrec.2019.08.022]
12. Sarno A, Andreozzi E, De Caro D, Di Meo G, Strollo AGM, Cesarelli M, Bifulco P. Real-time algorithm for Poissonian noise reduction in low-dose fluoroscopy: performance evaluation. Biomed Eng Online 2019; 18:94. [PMID: 31511017] [PMCID: PMC6737613] [DOI: 10.1186/s12938-019-0713-7]
Abstract
BACKGROUND Quantum noise intrinsically limits the quality of fluoroscopic images: the lower the X-ray dose, the higher the noise. Fluoroscopy video processing can enhance image quality and allows the patient's dose to be lowered further. This study assesses the performance of a Noise Variance Conditioned Average (NVCA) spatio-temporal filter for real-time denoising of fluoroscopic sequences. The filter is specifically designed for quantum noise suppression and edge preservation. It is an average filter that excludes neighborhood pixel values exceeding noise-statistic limits, by means of a threshold that depends on the local noise standard deviation, so as to preserve the image's spatial resolution. Performance was evaluated in terms of contrast-to-noise ratio (CNR) increment, image blurring (full width at half maximum of the line spread function), and computational time. The NVCA filter was compared with simple moving average filters and with the state-of-the-art video denoising block-matching 4D (VBM4D) algorithm. The influence of the NVCA filter size and threshold on the final image quality was also evaluated. RESULTS For an NVCA filter mask of 5 × 5 × 5 pixels (the third dimension is the temporal extent of the filter) and a threshold of 2 times the local noise standard deviation, the NVCA filter achieved a 10% increase in CNR with respect to the unfiltered sequence, while VBM4D achieved a 14% increase. With NVCA, edge blurring did not depend on the speed of moving objects; with VBM4D, on the other hand, spatial resolution worsened by about 2.2 times when the objects' speed doubled. The NVCA mask size and the local noise-threshold level are critical for final image quality. The computational time of the NVCA filter was just a few percent of that required by VBM4D. CONCLUSIONS The NVCA filter obtained better image quality than simple moving average filters, and lower but comparable quality relative to VBM4D. The NVCA filter preserved edge sharpness, particularly for moving objects (performing even better than VBM4D). The simplicity of the NVCA filter and its low computational burden make it suitable for real-time video processing; its hardware implementation is ready to be included in future fluoroscopy devices, enabling further lowering of the patient's X-ray dose.
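The NVCA rule itself is simple: average only the neighbors whose deviation from the center pixel stays within the threshold times the local noise standard deviation. A single-frame 2-D sketch follows; the actual filter also extends along the temporal axis, and the Poisson-based sigma estimate here is an assumption:

```python
import numpy as np

def nvca_filter(img, size=3, t=2.0):
    """Noise Variance Conditioned Average: each output pixel is the mean
    of the neighbors whose value lies within t * sigma of the center,
    with sigma estimated from the quantum (Poisson) model var ~ mean."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + size, j:j + size]
            center = img[i, j]
            sigma = np.sqrt(max(center, 1.0))      # Poisson: std = sqrt(mean)
            keep = np.abs(window - center) <= t * sigma
            out[i, j] = window[keep].mean()
    return out

# a clean two-level edge passes through untouched: neighbors on the other
# side of the edge exceed the threshold and are excluded from the average
edge = np.zeros((8, 8)); edge[:, 4:] = 100.0
print(np.array_equal(nvca_filter(edge), edge))  # True
```

In flat regions all neighbors pass the test and the filter reduces to a plain moving average, which is where the noise suppression comes from.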
Affiliation(s)
- A Sarno: Università di Napoli "Federico II", dip. di Fisica "E. Pancini" & INFN sez. di Napoli, Via Cintia, 80126, Naples, Italy
- E Andreozzi: Department of Electrical Engineering and Information Technologies, Università di Napoli "Federico II", Via Claudio, 21, 80125, Naples, Italy; Istituti Clinici Scientifici Maugeri S.p.A.-Società Benefit, Via S. Maugeri, 4, 27100, Pavia, Italy
- D De Caro: Department of Electrical Engineering and Information Technologies, Università di Napoli "Federico II", Via Claudio, 21, 80125, Naples, Italy
- G Di Meo: Department of Electrical Engineering and Information Technologies, Università di Napoli "Federico II", Via Claudio, 21, 80125, Naples, Italy
- A G M Strollo: Department of Electrical Engineering and Information Technologies, Università di Napoli "Federico II", Via Claudio, 21, 80125, Naples, Italy
- M Cesarelli: Department of Electrical Engineering and Information Technologies, Università di Napoli "Federico II", Via Claudio, 21, 80125, Naples, Italy; Istituti Clinici Scientifici Maugeri S.p.A.-Società Benefit, Via S. Maugeri, 4, 27100, Pavia, Italy
- P Bifulco: Department of Electrical Engineering and Information Technologies, Università di Napoli "Federico II", Via Claudio, 21, 80125, Naples, Italy; Istituti Clinici Scientifici Maugeri S.p.A.-Società Benefit, Via S. Maugeri, 4, 27100, Pavia, Italy
13. Pohlman RM, Varghese T. Dictionary Representations for Electrode Displacement Elastography. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2018; 65:2381-2389. [PMID: 30296219] [PMCID: PMC6400457] [DOI: 10.1109/tuffc.2018.2874181]
Abstract
Ultrasound electrode displacement elastography (EDE) has demonstrated the potential to monitor ablated regions in human patients after minimally invasive microwave ablation procedures. Displacement estimation for EDE is commonly plagued by decorrelation noise artifacts that degrade displacement estimates. In this paper, we propose a global dictionary learning approach to denoising displacement estimates, with a dictionary learned adaptively from EDE phantom displacement maps. The resulting algorithm represents low-noise displacement patches sparsely and averages the remaining patches, thereby denoising displacement maps while retaining important edge information. Dictionary-represented displacements showed higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), improved contrast, and improved phantom inclusion delineation compared with the initial displacements, median-filtered displacements, and spline-smoothed displacements, respectively. In addition to visible noise reduction, dictionary-represented displacements yielded the highest SNR, CNR, and contrast improvements, with values of 1.77, 4.56, and 4.35 dB, respectively, relative to axial strain tensor images estimated from the initial displacements. Following EDE phantom imaging, we applied dictionary representations to in vivo patient data, further validating efficacy. Denoising displacement estimates is a newer application of dictionary learning, producing strong ablated-region delineation with little degradation from denoising.
15. Boulanger J, Pustelnik N, Condat L, Sengmanivong L, Piolot T. Nonsmooth Convex Optimization for Structured Illumination Microscopy Image Reconstruction. Inverse Problems 2018; 34:095004. [PMID: 30083025] [PMCID: PMC6075701] [DOI: 10.1088/1361-6420/aaccca]
Abstract
In this paper, we propose a new approach to structured illumination microscopy image reconstruction. We first introduce the principles of this imaging modality and describe the forward model. We then propose the minimization of nonsmooth convex objective functions for the recovery of the unknown image. In this context, we investigate two data-fitting terms for Poisson-Gaussian noise and introduce a new patch-based regularization method. The approach is tested against other regularization approaches on a realistic benchmark. Finally, we present experiments on images acquired with two different microscopes.
Affiliation(s)
- Jérôme Boulanger: CNRS UMR144, F-75248 Paris, France; Institut Curie, F-75248 Paris, France; Cell Biology Division, MRC Laboratory of Molecular Biology, Cambridge CB2 0QH, UK
- Nelly Pustelnik: Laboratoire de Physique, ENS de Lyon; CNRS UMR5672, Université Lyon I, France
- Laurent Condat: CNRS, GIPSA-Lab, Univ. Grenoble Alpes, 38000 Grenoble, France
- Lucie Sengmanivong: CNRS UMR144, F-75248 Paris, France; Institut Curie, F-75248 Paris, France; Cell and Tissue Imaging Core Facility (PICT-IBiSA), F-75248 Paris, France; Nikon Imaging Centre@Institut Curie-CNRS, F-75248 Paris, France
- Tristan Piolot: CNRS UMR3215, F-75248, Paris, France; INSERM U934, F-75248, Paris, France
16. Meiniel W, Olivo-Marin JC, Angelini ED. Denoising of Microscopy Images: A Review of the State-of-the-Art, and a New Sparsity-Based Method. IEEE Transactions on Image Processing 2018; 27:3842-3856. [PMID: 29733271] [DOI: 10.1109/tip.2018.2819821]
Abstract
This paper reviews the state of the art in denoising methods for biological microscopy images and introduces a new, original sparsity-based algorithm. The proposed method combines total variation (TV) spatial regularization, enhancement of low-frequency information, and aggregation of sparse estimators, and it handles simple and complex types of noise (Gaussian, Poisson, and mixed) without any a priori model and with a single set of parameter values. An extended comparison is also presented, evaluating the denoising performance of thirteen state-of-the-art methods (including ours) specifically designed to handle the types of noise found in bioimaging. Quantitative and qualitative results on synthetic and real images show that the proposed method outperforms the others in the majority of the tested scenarios.
17
A Convex Constraint Variational Method for Restoring Blurred Images in the Presence of Alpha-Stable Noises. SENSORS 2018; 18:s18041175. [PMID: 29649147 PMCID: PMC5948533 DOI: 10.3390/s18041175] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/12/2018] [Revised: 04/06/2018] [Accepted: 04/10/2018] [Indexed: 11/18/2022]
Abstract
Blurred image restoration poses a great challenge in the non-Gaussian noise environments of various communication systems. To restore images degraded by blur and alpha-stable noise while preserving their edges, this paper proposes a variational method based on the properties of the meridian distribution and total variation (TV). Since the variational model is non-convex, it cannot guarantee a globally optimal solution. To overcome this drawback, we incorporate an additional penalty term into the deblurring and denoising model and propose a strictly convex variational method. Owing to the convexity of our model, the primal-dual algorithm is adopted to solve this convex variational problem. Our simulation results validate the proposed method.
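The primal-dual (Chambolle-Pock) scheme mentioned above can be sketched for the simpler TV-L2 case; the paper's actual model replaces the quadratic fidelity with a meridian-distribution term suited to alpha-stable noise, which this hedged illustration omits.

```python
import numpy as np

def grad(u):
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return gx, gy

def div(px, py):
    # negative adjoint of the forward-difference gradient above
    dx = np.empty_like(px); dx[:, 0] = px[:, 0]; dx[:, 1:] = px[:, 1:] - px[:, :-1]
    dy = np.empty_like(py); dy[0, :] = py[0, :]; dy[1:, :] = py[1:, :] - py[:-1, :]
    return dx + dy

def cp_tv_denoise(f, lam=0.2, tau=0.25, sigma=0.25, n_iter=200):
    """Chambolle-Pock iterations for min_u lam*TV(u) + 0.5*||u - f||^2.
    Step sizes satisfy tau*sigma*||grad||^2 <= 1 (||grad||^2 <= 8 in 2D)."""
    u = f.copy(); ub = u.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        # dual ascent, then projection of (px, py) onto the lam-radius ball
        gx, gy = grad(ub)
        px, py = px + sigma * gx, py + sigma * gy
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)
        px, py = px / scale, py / scale
        # primal descent: proximal step of the quadratic data-fidelity term
        u_prev = u
        u = (u + tau * div(px, py) + tau * f) / (1.0 + tau)
        ub = 2.0 * u - u_prev                     # over-relaxation
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
restored = cp_tv_denoise(noisy)
```

The convexity is what makes the primal-dual splitting attractive here: each sub-step (a projection and a proximal average) is closed-form, so no inner solver is needed.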
18
Wang R, Tao D. Training Very Deep CNNs for General Non-Blind Deconvolution. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:2897-2910. [PMID: 29993866 DOI: 10.1109/tip.2018.2815084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Non-blind image deconvolution is an ill-posed problem. The presence of noise and band-limited blur kernels makes the solution of this problem non-unique. Existing deconvolution techniques produce a residual between the sharp image and the estimation that is highly correlated with the sharp image, the kernel, and the noise. In most cases, different restoration models must be constructed for different blur kernels and different levels of noise, resulting in low computational efficiency or highly redundant model parameters. Here we aim to develop a single model that handles different types of kernels and different levels of noise: general non-blind deconvolution. Specifically, we propose a very deep convolutional neural network that predicts the residual between a pre-deconvolved image and the sharp image rather than the sharp image. The residual learning strategy makes it easier to train a single model for different kernels and different levels of noise, encouraging high effectiveness and efficiency. Quantitative evaluations demonstrate the practical applicability of the proposed model for different blur kernels. The model also shows state-of-the-art performance on synthesized blurry images.
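The residual-learning formulation can be illustrated without a network: given a Wiener pre-deconvolved estimate, the CNN's training target is the residual between that estimate and the sharp image, and restoration subtracts the predicted residual. The sketch below uses toy sizes and a hypothetical 5x5 box kernel; the paper's pre-deconvolution details may differ.

```python
import numpy as np

def wiener_deconv(blurred, kernel, nsr=0.01):
    """Frequency-domain Wiener pre-deconvolution; the deep network in the
    residual-learning scheme then predicts (estimate - sharp) from this output."""
    H = np.fft.fft2(np.fft.ifftshift(kernel), s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

# toy experiment: blur a phantom, Wiener-deconvolve, form the residual target
rng = np.random.default_rng(0)
sharp = np.zeros((64, 64)); sharp[20:44, 20:44] = 1.0
k = np.zeros((64, 64)); k[30:35, 30:35] = 1.0 / 25.0      # centred 5x5 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(np.fft.ifftshift(k))))
blurred += 0.01 * rng.standard_normal(blurred.shape)

pre = wiener_deconv(blurred, k, nsr=0.01)
residual_target = pre - sharp              # what the network is trained to predict
restored_oracle = pre - residual_target    # equals sharp: the decomposition is exact
```

The point of predicting the residual rather than the sharp image is that the residual statistics vary less across kernels and noise levels, which is what lets a single model generalize.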
19
Li J, Luisier F, Blu T. PURE-LET Image Deconvolution. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:92-105. [PMID: 28922119 DOI: 10.1109/tip.2017.2753404] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
We propose a non-iterative image deconvolution algorithm for data corrupted by Poisson or mixed Poisson-Gaussian noise. Many applications involve such a problem, ranging from astronomical to biological imaging. We parameterize the deconvolution process as a linear combination of elementary functions, termed as linear expansion of thresholds. This parameterization is then optimized by minimizing a robust estimate of the true mean squared error, the Poisson unbiased risk estimate. Each elementary function consists of a Wiener filtering followed by a pointwise thresholding of undecimated Haar wavelet coefficients. In contrast to existing approaches, the proposed algorithm merely amounts to solving a linear system of equations, which has a fast and exact solution. Simulation experiments over different types of convolution kernels and various noise levels indicate that the proposed method outperforms the state-of-the-art techniques, in terms of both restoration quality and computational complexity. Finally, we present some results on real confocal fluorescence microscopy images and demonstrate the potential applicability of the proposed method for improving the quality of these images.
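The pointwise thresholding of undecimated Haar wavelet coefficients at the core of each elementary function can be sketched as below. This is a one-level, soft-threshold toy under simplifying assumptions; the actual method pairs each such function with Wiener filtering and selects the combination weights by minimizing the PURE criterion.

```python
import numpy as np

def haar_threshold_denoise(x, t):
    """One-level undecimated Haar split along each axis with pointwise
    soft-thresholding of the detail channel (crude sketch of one LET function)."""
    def soft(d):
        return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
    out = x.astype(float).copy()
    for axis in (0, 1):
        s = np.roll(out, -1, axis=axis)
        approx = (out + s) / 2.0        # lowpass channel (pairwise average)
        detail = (out - s) / 2.0        # highpass channel (pairwise difference)
        out = approx + soft(detail)     # exact reconstruction if soft() is identity
    return out

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = haar_threshold_denoise(noisy, t=0.2)
```

Because `approx + detail` reconstructs the input exactly, shrinking only the detail channel suppresses high-frequency noise while keeping the lowpass content untouched.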
Affiliation(s)
- Jizhou Li
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong
- Thierry Blu
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong
20
Zou C, Xia Y. Restoration of hyperspectral image contaminated by Poisson noise using spectral unmixing. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.09.010] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
21
A Probabilistic Weighted Archetypal Analysis Method with Earth Mover’s Distance for Endmember Extraction from Hyperspectral Imagery. REMOTE SENSING 2017. [DOI: 10.3390/rs9080841] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
22
Tang J, Yang B, Wang Y, Ying L. Sparsity-constrained PET image reconstruction with learned dictionaries. Phys Med Biol 2016; 61:6347-68. [DOI: 10.1088/0031-9155/61/17/6347] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
23
Chen L, Liu L, Philip Chen C. A robust bi-sparsity model with non-local regularization for mixed noise reduction. Inf Sci (N Y) 2016. [DOI: 10.1016/j.ins.2016.03.014] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
24
Improving Low-dose Cardiac CT Images based on 3D Sparse Representation. Sci Rep 2016; 6:22804. [PMID: 26980176 PMCID: PMC4793253 DOI: 10.1038/srep22804] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2015] [Accepted: 02/19/2016] [Indexed: 11/08/2022] Open
Abstract
Cardiac computed tomography (CCT) is a reliable and accurate tool for the diagnosis of coronary artery disease and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated in a clinical study of 14 patients. Its performance was compared to 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative, and qualitative assessment results show that the proposed approach achieves effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains image quality closest to the reference standard-dose CT (SDCT) images.
25
Recent Development of Dual-Dictionary Learning Approach in Medical Image Analysis and Reconstruction. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2015; 2015:152693. [PMID: 26089956 PMCID: PMC4450335 DOI: 10.1155/2015/152693] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/05/2014] [Revised: 01/12/2015] [Accepted: 04/06/2015] [Indexed: 11/25/2022]
Abstract
As an implementation of compressive sensing (CS), the dual-dictionary learning (DDL) method restores signals through two related dictionaries and sparse representation. It has been proven that this method performs well in medical image reconstruction with highly undersampled data, especially for multimodality imaging such as CT-MRI hybrid reconstruction. Because of its outstanding strengths, short signal acquisition time, and low radiation dose, DDL has attracted broad interest in both academic and industrial fields. In this review article, we summarize DDL's development history, survey the latest advances, and discuss future directions and potential applications in medical imaging. Meanwhile, this paper points out that DDL is still at an initial stage, and further studies are necessary to improve this method, especially in dictionary training.
26
27
Chen Y, Shi L, Feng Q, Yang J, Shu H, Luo L, Coatrieux JL, Chen W. Artifact suppressed dictionary learning for low-dose CT image processing. IEEE TRANSACTIONS ON MEDICAL IMAGING 2014; 33:2271-92. [PMID: 25029378 DOI: 10.1109/tmi.2014.2336860] [Citation(s) in RCA: 158] [Impact Index Per Article: 14.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
Low-dose computed tomography (LDCT) images are often severely degraded by amplified mottle noise and streak artifacts. These artifacts are hard to suppress without introducing tissue blurring effects. In this paper, we propose to process LDCT images using a novel image-domain algorithm called "artifact suppressed dictionary learning (ASDL)." In this ASDL method, orientation and scale information on artifacts is exploited to train artifact atoms, which are then combined with tissue feature atoms to build three discriminative dictionaries. The streak artifacts are cancelled via a discriminative sparse representation operation based on these dictionaries. Then, a general dictionary learning-based processing step is applied to further reduce the noise and residual artifacts. Qualitative and quantitative evaluations on a large set of abdominal and mediastinum CT images show that the proposed method can be efficiently applied in most current CT systems.
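The sparse representation step at the heart of such dictionary-based methods can be sketched with a greedy matching-pursuit coder over a fixed orthonormal DCT dictionary (an assumption for illustration; ASDL instead uses learned, discriminative artifact and tissue dictionaries).

```python
import numpy as np

def matching_pursuit(y, D, n_atoms):
    """Greedy sparse coding of y over dictionary D (columns = unit-norm atoms)."""
    r = y.copy()
    x = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        c = D.T @ r                    # correlations of atoms with the residual
        k = np.argmax(np.abs(c))       # pick the best-matching atom
        x[k] += c[k]
        r -= c[k] * D[:, k]            # peel its contribution off the residual
    return x

# orthonormal DCT-II dictionary (illustrative stand-in for a learned dictionary)
n = 64
D = np.array([[np.cos(np.pi * (i + 0.5) * k / n) for k in range(n)] for i in range(n)])
D /= np.linalg.norm(D, axis=0)

rng = np.random.default_rng(0)
clean = 3.0 * D[:, 2] + 2.0 * D[:, 7]          # signal that is 2-sparse in D
noisy = clean + 0.1 * rng.standard_normal(n)
code = matching_pursuit(noisy, D, n_atoms=2)
denoised = D @ code
```

Keeping only the few strongest atoms discards most of the noise energy, which is the same mechanism the trained artifact/tissue dictionaries exploit to separate streaks from anatomy.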
28
Xiang S, Meng G, Wang Y, Pan C, Zhang C. Image Deblurring with Coupled Dictionary Learning. Int J Comput Vis 2014. [DOI: 10.1007/s11263-014-0755-z] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
29
Chen Y, Shi L, Yang J, Hu Y, Luo L, Yin X, Coatrieux JL. Radiation dose reduction with dictionary learning based processing for head CT. AUSTRALASIAN PHYSICAL & ENGINEERING SCIENCES IN MEDICINE 2014; 37:483-93. [PMID: 24923788 DOI: 10.1007/s13246-014-0276-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2014] [Accepted: 05/07/2014] [Indexed: 01/01/2023]
Abstract
In CT, ionizing radiation exposure from the scan has attracted much concern from patients and doctors. This work aims to improve head CT images from low-dose scans using fast dictionary learning (DL) based post-processing. Both low-dose CT (LDCT) and standard-dose CT (SDCT) nonenhanced head images were acquired during head examinations on a multi-detector-row Siemens Somatom Sensation 16 CT scanner. One hundred patients were involved in the experiments. Two groups of LDCT images were acquired with 50% (LDCT50%) and 25% (LDCT25%) of the SDCT tube current setting. For quantitative evaluation, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were computed from Hounsfield unit (HU) measurements of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) tissues. A blinded qualitative analysis was also performed to assess the processed LDCT datasets. Fifty and seventy-five percent dose reductions were obtained for the two LDCT groups (LDCT50%, 1.15 ± 0.1 mSv; LDCT25%, 0.58 ± 0.1 mSv; SDCT, 2.32 ± 0.1 mSv; P < 0.001). A significant SNR increase over the original LDCT images was observed in the processed LDCT images for all GM, WM, and CSF tissues. Significant GM-WM CNR enhancement was noted in the DL-processed LDCT images. Higher SNR and CNR than the reference SDCT images could even be achieved in the processed LDCT50% and LDCT25% images. A blinded qualitative review validated the perceptual improvements brought by the proposed approach. Compared to the original LDCT images, the application of DL processing in head CT is associated with a significant improvement in image quality.
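The SNR and CNR figures of merit used above are commonly computed from region-of-interest (ROI) statistics as sketched below; the HU values and ROI sizes here are hypothetical, and the exact ROI placement and noise estimate used in the study may differ.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean / std of its HU values."""
    return np.mean(roi) / np.std(roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues, normalised by a noise std
    estimated from a (typically homogeneous) reference ROI."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)

# hypothetical ROI samples: grey matter ~40 HU, white matter ~30 HU, noise std ~5 HU
rng = np.random.default_rng(0)
gm = 40.0 + 5.0 * rng.standard_normal(200)
wm = 30.0 + 5.0 * rng.standard_normal(200)
gm_snr = snr(gm)            # roughly 40 / 5 = 8
gm_wm_cnr = cnr(gm, wm, wm) # roughly (40 - 30) / 5 = 2
```

Under this convention, denoising raises SNR by shrinking the within-ROI std, and raises GM-WM CNR as long as the tissue means are preserved.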
Affiliation(s)
- Yang Chen
- Laboratory of Image Science and Technology, Southeast University, 210096, Nanjing, People's Republic of China