1
Azimi MS, Kamali-Asl A, Ay MR, Zeraatkar N, Hosseini MS, Sanaat A, Dadgar H, Arabi H. Deep learning-based partial volume correction in standard and low-dose positron emission tomography-computed tomography imaging. Quant Imaging Med Surg 2024; 14:2146-2164. [PMID: 38545051] [PMCID: PMC10963814] [DOI: 10.21037/qims-23-871]
Abstract
BACKGROUND Positron emission tomography (PET) imaging encounters the obstacle of partial volume effects, arising from its limited intrinsic resolution, giving rise to (I) considerable bias, particularly for structures comparable in size to the point spread function (PSF) of the system; and (II) blurred image edges and blending of textures along the borders. We set out to build a deep learning-based framework for predicting partial volume corrected full-dose (FD + PVC) images from either standard or low-dose (LD) PET images, without requiring any anatomical data, in order to provide a joint solution for partial volume correction and denoising of LD PET images. METHODS We trained a modified encoder-decoder U-Net network with standard-of-care or LD PET images as the input and FD + PVC images generated by six different PVC methods as the target. These six PVC approaches include geometric transfer matrix (GTM), multi-target correction (MTC), region-based voxel-wise correction (RBV), iterative Yang (IY), reblurred Van-Cittert (RVC), and Richardson-Lucy (RL). The proposed models were evaluated using standard criteria, such as peak signal-to-noise ratio (PSNR), root mean squared error (RMSE), structural similarity index (SSIM), relative bias, and absolute relative bias. RESULTS Different levels of error were observed across the partial volume correction methods; errors were relatively smaller for the GTM (SSIM of 0.63 for LD and 0.29 for FD), IY (SSIM of 0.63 for LD and 0.67 for FD), RBV (SSIM of 0.57 for LD and 0.65 for FD), and RVC (SSIM of 0.89 for LD and 0.94 for FD) PVC approaches. However, large quantitative errors were observed for the MTC (RMSE of 2.71 for LD and 2.45 for FD) and RL (RMSE of 5 for LD and 3.27 for FD) PVC approaches. CONCLUSIONS We found that the proposed framework could effectively perform joint denoising and partial volume correction for PET images with LD and FD input PET data (LD vs. FD). When no magnetic resonance imaging (MRI) images are available, the developed deep learning models could be used for partial volume correction on LD or standard PET-computed tomography (PET-CT) scans as an image quality enhancement technique.
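One of the reference PVC approaches listed above, Richardson-Lucy (RL), is a simple iterative deconvolution; a minimal sketch under the assumption of an isotropic Gaussian PSF (the PSF width, iteration count, and toy phantom below are illustrative, not the paper's settings) is:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy_pvc(pet, psf_sigma_vox=2.0, n_iter=10, eps=1e-8):
    """Voxel-wise Richardson-Lucy deconvolution used here as a stand-in PVC step.

    pet           : 3D array of non-negative PET intensities
    psf_sigma_vox : assumed Gaussian PSF sigma in voxels (illustrative value)
    n_iter        : RL iterations (more iterations sharpen edges but amplify noise)
    """
    estimate = pet.astype(float).copy()
    for _ in range(n_iter):
        blurred = gaussian_filter(estimate, psf_sigma_vox)    # H f_k
        ratio = pet / np.maximum(blurred, eps)                # g / (H f_k)
        correction = gaussian_filter(ratio, psf_sigma_vox)    # H^T (Gaussian PSF is symmetric)
        estimate *= correction                                # multiplicative RL update
    return estimate

# Toy example: a blurred, noisy cube standing in for a small hot structure
rng = np.random.default_rng(0)
phantom = np.zeros((64, 64, 64))
phantom[28:36, 28:36, 28:36] = 4.0
observed = np.clip(gaussian_filter(phantom, 2.0) + rng.normal(0, 0.02, phantom.shape), 0, None)
corrected = richardson_lucy_pvc(observed)
```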
Affiliation(s)
- Mohammad-Saber Azimi
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Research Center for Molecular and Cellular Imaging (RCMCI), Advanced Medical Technologies and Equipment Institute (AMTEI), Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Alireza Kamali-Asl
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Mohammad-Reza Ay
- Research Center for Molecular and Cellular Imaging (RCMCI), Advanced Medical Technologies and Equipment Institute (AMTEI), Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Amirhossein Sanaat
- Division of Nuclear Medicine & Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habibollah Dadgar
- Cancer Research Center, Razavi Hospital, Imam Reza International University, Mashhad, Iran
- Hossein Arabi
- Division of Nuclear Medicine & Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
2
Shirakawa Y, Matsutomo N, Suyama J. Feasibility of noise-reduction reconstruction technology based on non-local-mean principle in SiPM-PET/CT. Phys Med 2024; 119:103303. [PMID: 38325223] [DOI: 10.1016/j.ejmp.2024.103303]
Abstract
This study evaluated the quantitative values of positron emission tomography (PET) images obtained with a noise-reduction reconstruction technique based on the non-local-mean principle in a silicon photomultiplier (SiPM) PET/computed tomography (CT) system, using phantom and clinical images. The evaluation was conducted on a National Electrical Manufacturers Association body phantom with micro-spheres (4, 5, 6, 8, 10, 13 mm) and on clinical images acquired with the SiPM-PET/CT system. The signal-to-background ratio of the phantom was set to 4, and all PET image data were acquired and reconstructed using three-dimensional ordered subset expectation maximization, time-of-flight, and point-spread function modelling, with either a 4-mm Gaussian filter (GF) or the clear adaptive low-noise method (CaLM) at mild, standard, and strong intensities. The evaluation included the standardized uptake value (SUV), percent contrast (QH), and coefficient of variation of the background area (CVbackground) for the phantom, and the SUV of lung nodules, liver signal-to-noise ratio (SNR), and visual evaluation for clinical imaging. SUVmax for the 8-mm sphere in phantom images at 2 min for GF and CaLM (mild, standard, strong) was 2.11, 2.32, 2.02, and 1.72; QH for the 8-mm sphere was 27.33 %, 27.47 %, 21.81 %, and 16.09 %; and CVbackground was 12.78, 11.35, 7.86, and 4.71, respectively. CaLM demonstrated higher SUVmax in clinical images than GF for all lung nodule sizes. The average SUVmax for nodules with a diameter of ≤ 1 cm was 5.9 ± 2.4, 9.9 ± 4.9, 9.9 ± 5.0, and 9.9 ± 5.0 for GF and CaLM-mild, standard, and strong intensities, respectively. Liver SNR was higher for CaLM (mild, standard, strong) than for GF, with higher CaLM intensity yielding higher liver SNR. CaLM-mild and CaLM-standard were judged suitable for diagnosis in the visual evaluation.
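For context, the phantom metrics quoted above (percent contrast QH and background coefficient of variation) follow the usual NEMA-style ROI definitions; a minimal sketch with invented ROI statistics and the abstract's 4:1 signal-to-background ratio is shown below:

```python
import numpy as np

def percent_contrast_hot(mean_sphere, mean_background, activity_ratio=4.0):
    """NEMA-style hot-sphere percent contrast: QH = ((C_H/C_B - 1) / (a_H/a_B - 1)) * 100."""
    return (mean_sphere / mean_background - 1.0) / (activity_ratio - 1.0) * 100.0

def background_cv(background_roi_means):
    """Coefficient of variation of background ROI means: CV = SD / mean * 100."""
    rois = np.asarray(background_roi_means, dtype=float)
    return rois.std(ddof=1) / rois.mean() * 100.0

# Hypothetical ROI means (kBq/mL) for an 8-mm sphere and several background ROIs
qh = percent_contrast_hot(mean_sphere=8.2, mean_background=4.5, activity_ratio=4.0)
cv = background_cv([4.4, 4.6, 4.5, 4.3, 4.7, 4.5])
print(f"QH = {qh:.1f} %, CVbackground = {cv:.1f} %")
```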
Affiliation(s)
- Yuya Shirakawa
- Department of Radiology, Kyorin University Hospital, Tokyo, Japan.
- Norikazu Matsutomo
- Department of Medical Radiological Technology, Faculty of Health Sciences, Kyorin University, Japan.
- Jumpei Suyama
- Department of Radiology, Faculty of Medicine, Kyorin University, Tokyo, Japan.
3
Xing F, Silosky M, Ghosh D, Chin BB. Location-Aware Encoding for Lesion Detection in 68Ga-DOTATATE Positron Emission Tomography Images. IEEE Trans Biomed Eng 2024; 71:247-257. [PMID: 37471190] [DOI: 10.1109/tbme.2023.3297249]
Abstract
OBJECTIVE Lesion detection with positron emission tomography (PET) imaging is critical for tumor staging, treatment planning, and advancing novel therapies to improve patient outcomes, especially for neuroendocrine tumors (NETs). Current lesion detection methods often require manual cropping of regions/volumes of interest (ROIs/VOIs) a priori, rely on multi-stage, cascaded models, or use multi-modality imaging to detect lesions in PET images. This leads to significant inefficiency, high variability, and/or potential cumulative errors in lesion quantification. To tackle this issue, we propose a novel single-stage lesion detection method using only PET images. METHODS We design and incorporate a new, plug-and-play codebook learning module into a U-Net-like neural network and promote lesion location-specific feature learning at multiple scales. We explicitly regularize the codebook learning with direct supervision at the network's multi-level hidden layers and force the network to learn multi-scale discriminative features with respect to predicting lesion positions. The network automatically combines the predictions from the codebook learning module and other layers via a learnable fusion layer. RESULTS We evaluate the proposed method on a real-world clinical 68Ga-DOTATATE PET image dataset, and our method produces significantly better lesion detection performance than recent state-of-the-art approaches. CONCLUSION We present a novel deep learning method for single-stage lesion detection in PET imaging data, with no ROI/VOI cropping in advance, no multi-stage modeling, and no multi-modality data. SIGNIFICANCE This study provides a new perspective for effective and efficient lesion identification in PET, potentially accelerating novel therapeutic regimen development for NETs and ultimately improving patient outcomes, including survival.
4
Jaakkola MK, Rantala M, Jalo A, Saari T, Hentilä J, Helin JS, Nissinen TA, Eskola O, Rajander J, Virtanen KA, Hannukainen JC, López-Picón F, Klén R. Segmentation of Dynamic Total-Body [18F]-FDG PET Images Using Unsupervised Clustering. Int J Biomed Imaging 2023; 2023:3819587. [PMID: 38089593] [PMCID: PMC10715853] [DOI: 10.1155/2023/3819587]
Abstract
Clustering of time-activity curves has been used to separate clinically relevant areas of the brain or tumours in PET images. However, PET image segmentation at the multi-organ level is much less studied, because the available total-body data have largely been limited to animal studies. New PET scanners that can acquire total-body scans from humans are now becoming more common, which opens up many clinically interesting opportunities. Organ-level segmentation of PET images therefore has important applications, yet it lacks sufficient research. In this proof-of-concept study, we evaluate whether previously used segmentation approaches are suitable for segmenting dynamic human total-body PET images at the organ level. Our focus is on general-purpose unsupervised methods that are independent of external data and can be used for all tracers, organisms, and health conditions. Additional anatomical image modalities, such as CT or MRI, are not used; the segmentation is done purely on the basis of the dynamic PET images. The tested methods are commonly used building blocks of more sophisticated methods rather than final methods as such, and our goal is to evaluate whether these basic tools are suited to the emerging task of human total-body PET image segmentation. First, we excluded methods that were computationally too demanding for the large datasets produced by human total-body PET scanners. These criteria filtered out most of the commonly used approaches, leaving only two clustering methods, k-means and the Gaussian mixture model (GMM), for further analyses. We combined k-means with two different preprocessing approaches, namely principal component analysis (PCA) and independent component analysis (ICA). Then, we selected a suitable number of clusters using 10 images. Finally, we tested how well the usable approaches segment the remaining PET images at the organ level, highlight the best approaches together with their limitations, and discuss how further research could tackle the observed shortcomings. In this study, we utilised 40 total-body [18F]fluorodeoxyglucose PET images of rats to mimic the coming large human PET images, and a few actual human total-body images to ensure that our conclusions from the rat data generalise to human data. Our results show that ICA combined with k-means performs worse than the other two computationally usable approaches and that certain organs are easier to segment than others. While GMM performed sufficiently well, it was by far the slowest of the tested approaches, making k-means combined with PCA the most promising candidate for further development. However, even with the best methods, the mean Jaccard index was slightly below 0.5 for the easiest tested organ and below 0.2 for the most challenging organ. Thus, we conclude that there is a lack of accurate and computationally light general-purpose segmentation methods that can analyse dynamic total-body PET images.
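As an illustration of the most promising combination identified above, the sketch below runs PCA on voxel time-activity curves followed by k-means and scores one cluster against a reference mask with the Jaccard index; the array sizes, component count, and cluster count are illustrative assumptions rather than the study's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def segment_dynamic_pet(dynamic_pet, n_components=5, n_clusters=8, seed=0):
    """Cluster voxels of a dynamic PET volume (T, X, Y, Z) by their time-activity curves."""
    t = dynamic_pet.shape[0]
    tacs = dynamic_pet.reshape(t, -1).T                       # (n_voxels, T) time-activity curves
    feats = PCA(n_components=n_components, random_state=seed).fit_transform(tacs)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(feats)
    return labels.reshape(dynamic_pet.shape[1:])              # label volume (X, Y, Z)

def jaccard(mask_a, mask_b):
    """Jaccard index between two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

# Toy data standing in for a dynamic total-body scan (24 frames, small volume)
rng = np.random.default_rng(0)
dyn = rng.random((24, 32, 32, 16))
label_volume = segment_dynamic_pet(dyn, n_components=5, n_clusters=4)
score = jaccard(label_volume == 1, rng.random((32, 32, 16)) > 0.9)
```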
Affiliation(s)
- Maria K. Jaakkola
- Turku PET Centre, University of Turku, Turku, Finland
- Turku PET Centre, Turku University Hospital, Turku, Finland
- Maria Rantala
- Turku PET Centre, University of Turku, Turku, Finland
- Anna Jalo
- MediCity Research Laboratory, University of Turku, Turku, Finland
- PET Preclinical Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Teemu Saari
- Turku PET Centre, University of Turku, Turku, Finland
- Turku PET Centre, Turku University Hospital, Turku, Finland
- Jatta S. Helin
- MediCity Research Laboratory, University of Turku, Turku, Finland
- PET Preclinical Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Tuuli A. Nissinen
- MediCity Research Laboratory, University of Turku, Turku, Finland
- PET Preclinical Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Olli Eskola
- Radiopharmaceutical Chemistry Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Johan Rajander
- Accelerator Laboratory, Turku PET Centre, Abo Akademi University, Turku, Finland
- Kirsi A. Virtanen
- Turku PET Centre, University of Turku, Turku, Finland
- Turku PET Centre, Turku University Hospital, Turku, Finland
- Francisco López-Picón
- Turku PET Centre, University of Turku, Turku, Finland
- MediCity Research Laboratory, University of Turku, Turku, Finland
- PET Preclinical Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Riku Klén
- Turku PET Centre, University of Turku, Turku, Finland
- Turku PET Centre, Turku University Hospital, Turku, Finland
5
Musculoskeletal MR Image Segmentation with Artificial Intelligence. Adv Clin Radiol 2022; 4:179-188. [PMID: 36815063] [PMCID: PMC9943059] [DOI: 10.1016/j.yacr.2022.04.010]
6
Bevington CWJ, Cheng JC, Sossi V. A 4-D Iterative HYPR Denoising Operator Improves PET Image Quality. IEEE Trans Radiat Plasma Med Sci 2022. [DOI: 10.1109/trpms.2021.3123537]
Affiliation(s)
- Connor W. J. Bevington
- Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, Canada
- Ju-Chieh Cheng
- Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, Canada
- Vesna Sossi
- Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, Canada
7
Abstract
Molecular assembly in a complex cellular environment is vital for understanding underlying biological mechanisms. Biophysical parameters (such as single-molecule cluster density, cluster area, pairwise distance, and number of molecules per cluster) related to molecular clusters directly associate with the physiological state (healthy/diseased) of a cell. Using super-resolution imaging along with powerful clustering methods (K-means, Gaussian mixture, and point clustering), we estimated these critical biophysical parameters associated with dense and sparse molecular clusters. We investigated hemagglutinin (HA) molecules in an influenza type A disease model. Subsequently, clustering parameters were estimated for transfected NIH3T3 cells. Investigations on a test sample (randomly generated clusters) and NIH3T3 cells (expressing Dendra2-hemagglutinin (Dendra2-HA) photoactivatable molecules) show a significant disparity among the existing clustering techniques. It is observed that a single method is inadequate for estimating all relevant biophysical parameters accurately. Thus, a multi-model approach is necessary in order to characterize molecular clusters and determine critical parameters. The proposed study, involving optical system development, photoactivatable sample synthesis, and advanced clustering methods, may facilitate a better understanding of single-molecule clusters. Potential applications are in the emerging fields of cell biology, biophysics, and fluorescence imaging.
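One of the clustering approaches named above, the Gaussian mixture model, can be used to derive the listed per-cluster biophysical parameters from 2D localization data; the sketch below uses synthetic points and an assumed 2-sigma ellipse as the cluster area, purely for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
# Synthetic single-molecule localizations: three tight clusters plus sparse background (nm)
clusters = [rng.normal(c, 25.0, size=(150, 2)) for c in ([200, 200], [600, 350], [400, 700])]
background = rng.uniform(0, 1000, size=(100, 2))
points = np.vstack(clusters + [background])

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=1).fit(points)
labels = gmm.predict(points)

for k in range(gmm.n_components):
    pts = points[labels == k]
    n = len(pts)
    eigvals = np.linalg.eigvalsh(gmm.covariances_[k])
    area = np.pi * 4.0 * np.sqrt(eigvals[0] * eigvals[1])      # ~2-sigma ellipse area, nm^2
    density = n / area                                          # molecules per nm^2
    mean_pairwise = pdist(pts).mean() if n > 1 else 0.0
    print(f"cluster {k}: n={n}, area={area:.0f} nm^2, "
          f"density={density:.4f} /nm^2, mean pairwise distance={mean_pairwise:.1f} nm")
```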
8
Shiri I, Arabi H, Sanaat A, Jenabi E, Becker M, Zaidi H. Fully Automated Gross Tumor Volume Delineation From PET in Head and Neck Cancer Using Deep Learning Algorithms. Clin Nucl Med 2021; 46:872-883. [PMID: 34238799] [DOI: 10.1097/rlu.0000000000003789]
Abstract
PURPOSE The availability of automated, accurate, and robust gross tumor volume (GTV) segmentation algorithms is critical for the management of head and neck cancer (HNC) patients. In this work, we evaluated 3 state-of-the-art deep learning algorithms combined with 8 different loss functions for PET image segmentation using a comprehensive training set and assessed their performance on an external validation set of HNC patients. PATIENTS AND METHODS 18F-FDG PET/CT images of 470 patients presenting with HNC, on which manually defined GTVs served as the standard of reference, were used for training (340 patients), evaluation (30 patients), and testing (100 patients from different centers) of these algorithms. PET image intensity was converted to SUVs and normalized in the range (0-1) using the SUVmax of the whole data set. PET images were cropped to 12 × 12 × 12 cm3 subvolumes using isotropic voxel spacing of 3 × 3 × 3 mm3 containing the whole tumor and neighboring background, including lymph nodes. We used different approaches for data augmentation, including rotation (-15 degrees, +15 degrees), scaling (-20%, 20%), random flipping (3 axes), and elastic deformation (sigma = 1 and proportion to deform = 0.7), to increase the number of training sets. Three state-of-the-art networks, including Dense-VNet, NN-UNet, and Res-Net, with 8 different loss functions, including Dice, generalized Wasserstein Dice loss, Dice plus XEnt loss, generalized Dice loss, cross-entropy, sensitivity-specificity, and Tversky, were used. Overall, 28 different networks were built. Standard image segmentation metrics, including Dice similarity, image-derived PET metrics, and first-order and shape radiomic features, were used for performance assessment of these algorithms. RESULTS The best results in terms of Dice coefficient (mean ± SD) were achieved by cross-entropy for Res-Net (0.86 ± 0.05; 95% confidence interval [CI], 0.85-0.87) and Dense-VNet (0.85 ± 0.058; 95% CI, 0.84-0.86), and by Dice plus XEnt for NN-UNet (0.87 ± 0.05; 95% CI, 0.86-0.88). The difference between the 3 networks was not statistically significant (P > 0.05). The percent relative error (RE%) of SUVmax quantification was less than 5% in networks with a Dice coefficient of more than 0.84, whereas a lower RE% (0.41%) was achieved by Res-Net with cross-entropy loss. For the maximum 3-dimensional diameter and sphericity shape features, all networks achieved a RE ≤ 5% and ≤ 10%, respectively, reflecting a small variability. CONCLUSIONS Deep learning algorithms exhibited promising performance for automated GTV delineation on HNC PET images. Different loss functions performed competitively across networks; cross-entropy for Res-Net and Dense-VNet and Dice plus XEnt for NN-UNet emerged as reliable configurations for GTV delineation. Caution should be exercised for clinical deployment owing to the occurrence of outliers in deep learning-based algorithms.
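For reference, the Dice similarity used throughout the evaluation, together with its differentiable "soft" form used as a training loss, reduces to a few lines; the NumPy sketch below is a generic formulation, not the exact loss implementations benchmarked in the paper:

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    return (2.0 * inter + eps) / (pred.sum() + true.sum() + eps)

def soft_dice_loss(pred_prob, true_mask, eps=1e-7):
    """Differentiable (soft) Dice loss on predicted probabilities in [0, 1]."""
    inter = (pred_prob * true_mask).sum()
    return 1.0 - (2.0 * inter + eps) / (pred_prob.sum() + true_mask.sum() + eps)

# Toy example on a small volume
rng = np.random.default_rng(0)
gt = (rng.random((16, 16, 16)) > 0.8).astype(np.float32)
prob = np.clip(gt * 0.9 + rng.random(gt.shape) * 0.1, 0.0, 1.0)
print(dice_coefficient(prob > 0.5, gt), soft_dice_loss(prob, gt))
```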
Affiliation(s)
- Isaac Shiri
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Amirhossein Sanaat
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Elnaz Jenabi
- Research Centre for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
9
Nichols KJ, DiFilippo FP, Palestro CJ. Computational approaches to detect small lesions in 18F-FDG PET/CT scans. J Appl Clin Med Phys 2021; 22:125-139. [PMID: 34643029] [PMCID: PMC8664135] [DOI: 10.1002/acm2.13451]
Abstract
Purpose When physicians interpret 18F‐FDG PET/CT scans, they rely on their subjective visual impression of the presence of small lesions, the criteria for which may vary among readers. Our investigation used physical phantom scans to evaluate whether image texture analysis metrics reliably correspond to visual criteria used to identify lesions and accurately differentiate background regions from sub‐centimeter simulated lesions. Methods Routinely collected quality assurance test data were processed retrospectively for 65 different 18F‐FDG PET scans performed of standardized phantoms on eight different PET/CT systems. Phantoms included 8‐, 12‐, 16‐, and 25‐mm diameter cylinders embedded in a cylindrical water bath, prepared with 2.5:1 activity‐to‐background ratio emulating typical whole‐body PET protocols. Voxel values in cylinder regions and background regions were sampled to compute several classes of image metrics. Two experienced physicists, blinded to quantified image metrics and to each other's readings, independently graded cylinder visibility on a 5‐level scale (0 = definitely not visible to 4 = definitely visible). Results The three largest cylinders were visible in 100% of cases with a mean visibility score of 3.3 ± 1.2, while the smallest 8‐mm cylinder was visible in 58% of cases with a significantly lower mean visibility score of 1.5±1.1 (P < 0.0001). By ROC analysis, the polynomial‐fit signal‐to‐noise ratio was the most accurate at discriminating 8‐mm cylinders from the background, with accuracy greater than visual detection (93% ± 2% versus 76% ± 4%, P = 0.0001), and better sensitivity (94% versus 58%, P < 0.0001). Conclusion Image texture analysis metrics are more sensitive than visual impressions for detecting sub‐centimeter simulated lesions. Therefore, image texture analysis metrics are potentially clinically useful for 18F‐FDG PET/CT studies.
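The ROC comparison described above (separating small-cylinder ROIs from background ROIs with a scalar image metric) can be reproduced in outline with scikit-learn; the scores below are simulated stand-ins for the polynomial-fit signal-to-noise values, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Simulated metric values for 65 cylinder ROIs (label 1) and 65 background ROIs (label 0)
labels = np.concatenate([np.ones(65), np.zeros(65)]).astype(bool)
scores = np.concatenate([rng.normal(3.0, 1.0, 65),    # metric in 8-mm cylinder ROIs
                         rng.normal(1.0, 1.0, 65)])   # same metric in background ROIs

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
accuracies = np.array([((scores >= t) == labels).mean() for t in thresholds])
best = int(accuracies.argmax())
print(f"AUC = {auc:.2f}, best threshold = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, accuracy = {accuracies[best]:.2f}")
```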
Affiliation(s)
- Kenneth J Nichols
- Department of Radiology, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
- Frank P DiFilippo
- Department of Nuclear Medicine, Cleveland Clinic, Cleveland, Ohio, USA
- Christopher J Palestro
- Department of Radiology, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
10
Onishi Y, Hashimoto F, Ote K, Ohba H, Ota R, Yoshikawa E, Ouchi Y. Anatomical-guided attention enhances unsupervised PET image denoising performance. Med Image Anal 2021; 74:102226. [PMID: 34563861] [DOI: 10.1016/j.media.2021.102226]
Abstract
Although supervised convolutional neural networks (CNNs) often outperform conventional alternatives for denoising positron emission tomography (PET) images, they require many low- and high-quality reference PET image pairs. Herein, we propose an unsupervised 3D PET image denoising method based on an anatomical information-guided attention mechanism. The proposed magnetic resonance-guided deep decoder (MR-GDD) utilizes the spatial details and semantic features of MR-guidance image more effectively by introducing encoder-decoder and deep decoder subnetworks. Moreover, the specific shapes and patterns of the guidance image do not affect the denoised PET image, because the guidance image is input to the network through an attention gate. In a Monte Carlo simulation of [18F]fluoro-2-deoxy-D-glucose (FDG), the proposed method achieved the highest peak signal-to-noise ratio and structural similarity (27.92 ± 0.44 dB/0.886 ± 0.007), as compared with Gaussian filtering (26.68 ± 0.10 dB/0.807 ± 0.004), image guided filtering (27.40 ± 0.11 dB/0.849 ± 0.003), deep image prior (DIP) (24.22 ± 0.43 dB/0.737 ± 0.017), and MR-DIP (27.65 ± 0.42 dB/0.879 ± 0.007). Furthermore, we experimentally visualized the behavior of the optimization process, which is often unknown in unsupervised CNN-based restoration problems. For preclinical (using [18F]FDG and [11C]raclopride) and clinical (using [18F]florbetapir) studies, the proposed method demonstrates state-of-the-art denoising performance while retaining spatial resolution and quantitative accuracy, despite using a common network architecture for various noisy PET images with 1/10th of the full counts. These results suggest that the proposed MR-GDD can reduce PET scan times and PET tracer doses considerably without impacting patients.
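The PSNR/SSIM figures quoted above can be computed for any pair of reference and denoised volumes with scikit-image; in the sketch below, random arrays merely stand in for the full-count reference and the denoised low-count PET image:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((64, 64, 64)).astype(np.float32)        # stand-in for full-count PET
denoised = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1).astype(np.float32)

data_range = float(reference.max() - reference.min())
psnr = peak_signal_noise_ratio(reference, denoised, data_range=data_range)
ssim = structural_similarity(reference, denoised, data_range=data_range)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```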
Affiliation(s)
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan.
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Hiroyuki Ohba
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Ryosuke Ota
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Etsuji Yoshikawa
- Central Research Laboratory, Hamamatsu Photonics K. K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu 434-8601, Japan
- Yasuomi Ouchi
- Department of Biofunctional Imaging, Preeminent Medical Photonics Education & Research Center, Hamamatsu University School of Medicine, 1-20-1 Handayama, Higashi-ku, Hamamatsu 431-3192, Japan
11
Gao Y, Zhu Y, Bilgel M, Ashrafinia S, Lu L, Rahmim A. Voxel-based partial volume correction of PET images via subtle MRI guided non-local means regularization. Phys Med 2021; 89:129-139. [PMID: 34365117] [DOI: 10.1016/j.ejmp.2021.07.028]
Abstract
PURPOSE Positron emission tomography (PET) images tend to be significantly degraded by the partial volume effect (PVE) resulting from the limited spatial resolution of the reconstructed images. Our purpose is to propose a partial volume correction (PVC) method to tackle this issue. METHODS In the present work, we explore a voxel-based PVC method under the least squares (LS) framework employing anatomical non-local means (NLMA) regularization. The well-known non-local means (NLM) filter exploits the high degree of information redundancy that typically exists in images and is typically used to reduce image noise directly by replacing each voxel intensity with a weighted average of its non-local neighbors. Here we explore NLM as a regularization term within an iterative-deconvolution model to perform PVC. Further, an anatomically guided version of NLM is proposed that incorporates MRI information into NLM to improve resolution and suppress image noise. The proposed approach makes subtle use of the accompanying MRI information to define a more appropriate search space within the prior model. To optimize the regularized LS objective function, we used the Gauss-Seidel (GS) algorithm with the one-step-late (OSL) technique. RESULTS After introducing NLMA, both the visual and quantitative results improved. Visual inspection shows that NLMA reduces noise compared with other PVC methods, which is also confirmed by the bias-noise curves relative to the non-MRI-guided PVC framework: NLMA gives a better bias-noise trade-off than the other PVC methods. CONCLUSIONS Our approach was evaluated in the context of amyloid brain PET imaging using the BrainWeb phantom and in vivo human data, and was compared with other PVC methods. Overall, we demonstrated the value of introducing subtle MRI guidance into the regularization process, with the proposed NLMA method yielding promising visual as well as quantitative performance improvements.
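At its core, the NLM regularizer described above is a weighted average over similar patches, with the MRI restricting which neighbors are considered; the toy 2D sketch below captures that idea only in spirit (patch size, search window, smoothing parameter h, and the MRI gating threshold are invented, and the paper's LS/Gauss-Seidel/OSL optimization is not reproduced):

```python
import numpy as np

def mri_guided_nlm(pet, mri, patch=3, search=7, h=0.1, mri_tol=0.1):
    """Toy non-local means on a 2D PET slice with an MRI slice gating the search space."""
    pad, r = search // 2, patch // 2
    pet_p = np.pad(pet, pad + r, mode="reflect")
    mri_p = np.pad(mri, pad + r, mode="reflect")
    out = np.zeros(pet.shape, dtype=float)
    for i in range(pet.shape[0]):
        for j in range(pet.shape[1]):
            ci, cj = i + pad + r, j + pad + r
            ref_patch = pet_p[ci - r:ci + r + 1, cj - r:cj + r + 1]
            weights, values = [], []
            for di in range(-pad, pad + 1):
                for dj in range(-pad, pad + 1):
                    ni, nj = ci + di, cj + dj
                    if abs(mri_p[ni, nj] - mri_p[ci, cj]) > mri_tol:
                        continue                       # anatomical gating of candidate voxels
                    cand = pet_p[ni - r:ni + r + 1, nj - r:nj + r + 1]
                    d2 = np.mean((ref_patch - cand) ** 2)
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(pet_p[ni, nj])
            out[i, j] = np.average(values, weights=weights)  # the center voxel always contributes
    return out

# Tiny example with random stand-ins for co-registered PET and MRI slices
rng = np.random.default_rng(0)
smoothed = mri_guided_nlm(rng.random((32, 32)), rng.random((32, 32)))
```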
Affiliation(s)
- Yuanyuan Gao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China; Department of Radiology, Johns Hopkins University, Baltimore, MD 21287, USA.
- Yansong Zhu
- Department of Radiology, Johns Hopkins University, Baltimore, MD 21287, USA; Department of Electrical & Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Departments of Radiology and Physics, University of British Columbia, Vancouver, BC V5Z 1M9, Canada
- Murat Bilgel
- Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, MD 20892, USA
- Saeed Ashrafinia
- Department of Radiology, Johns Hopkins University, Baltimore, MD 21287, USA; Department of Electrical & Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Lijun Lu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, China
- Arman Rahmim
- Department of Radiology, Johns Hopkins University, Baltimore, MD 21287, USA; Department of Electrical & Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Departments of Radiology and Physics, University of British Columbia, Vancouver, BC V5Z 1M9, Canada.
12
Sanaat A, Shiri I, Arabi H, Mainta I, Nkoulou R, Zaidi H. Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging. Eur J Nucl Med Mol Imaging 2021; 48:2405-2415. [PMID: 33495927] [PMCID: PMC8241799] [DOI: 10.1007/s00259-020-05167-1]
Abstract
PURPOSE There is a tendency to moderate the injected activity and/or reduce the acquisition time in PET examinations to minimize potential radiation hazards and increase patient comfort. This work aims to assess the performance of regular full-dose (FD) synthesis from fast/low-dose (LD) whole-body (WB) PET images using deep learning techniques. METHODS Instead of using synthetic LD scans, two separate clinical WB 18F-fluorodeoxyglucose (18F-FDG) PET/CT studies of 100 patients were acquired: one regular FD (~ 27 min) and one fast or LD (~ 3 min) consisting of 1/8th of the standard acquisition time. Modified cycle-consistent generative adversarial network (CycleGAN) and residual neural network (ResNET) models, denoted as CGAN and RNET, respectively, were implemented to predict FD PET images. The quality of the predicted PET images was assessed by two nuclear medicine physicians. Moreover, the diagnostic quality of the predicted PET images was evaluated using a pass/fail scheme for the lesion detectability task. Quantitative analysis using established metrics, including the standardized uptake value (SUV) bias, was performed for the liver, left/right lung, brain, and 400 malignant lesions from the test and evaluation datasets. RESULTS CGAN scored 4.92 and 3.88 (out of 5) (adequate to good) for brain and neck + trunk, respectively. The average SUV bias calculated over normal tissues was 3.39 ± 0.71% and -3.83 ± 1.25% for CGAN and RNET, respectively. Bland-Altman analysis reported the lowest SUV bias (0.01%) and a 95% confidence interval of -0.36, +0.47 for CGAN compared with the reference FD images for malignant lesions. CONCLUSION CycleGAN is able to synthesize clinical FD WB PET images from LD images acquired with 1/8th of the standard injected activity or acquisition time. The predicted FD images present almost similar performance in terms of lesion detectability, qualitative scores, and quantification bias and variance.
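The quantitative comparison summarized above rests on per-lesion relative SUV bias and Bland-Altman limits of agreement between predicted and reference full-dose images; a minimal sketch with made-up SUVmax values is:

```python
import numpy as np

def relative_bias_percent(predicted, reference):
    """Per-lesion relative SUV bias in percent: (pred - ref) / ref * 100."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return (predicted - reference) / reference * 100.0

def bland_altman(predicted, reference):
    """Mean difference and 95% limits of agreement (mean ± 1.96 SD of the differences)."""
    diff = np.asarray(predicted, dtype=float) - np.asarray(reference, dtype=float)
    mean_diff, sd = diff.mean(), diff.std(ddof=1)
    return mean_diff, (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)

# Hypothetical SUVmax values for a handful of lesions (reference FD vs. network prediction)
ref_suv = [6.1, 8.4, 3.2, 12.7, 5.5]
pred_suv = [6.0, 8.8, 3.1, 12.3, 5.7]
bias = relative_bias_percent(pred_suv, ref_suv)
mean_diff, loa = bland_altman(pred_suv, ref_suv)
print(f"mean bias = {bias.mean():.2f} %, Bland-Altman mean diff = {mean_diff:.2f}, LoA = {loa}")
```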
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- René Nkoulou
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, 1205 Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, DK-500 Odense, Denmark
13
Cui R, Chen Z, Wu J, Tan Y, Yu G. A Multiprocessing Scheme for PET Image Pre-Screening, Noise Reduction, Segmentation and Lesion Partitioning. IEEE J Biomed Health Inform 2021; 25:1699-1711. [PMID: 32946400] [DOI: 10.1109/jbhi.2020.3024563]
Abstract
OBJECTIVE Accurate segmentation and partitioning of lesions in PET images provide computer-aided procedures and doctors with parameters for tumour diagnosis, staging and prognosis. Currently, PET segmentation and lesion partitioning are manually measured by radiologists, which is time consuming and laborious, and tedious manual procedures might lead to inaccurate measurement results. Therefore, we designed a new automatic multiprocessing scheme for PET image pre-screening, noise reduction, segmentation and lesion partitioning in this study. PET image pre-screening can reduce the time cost of noise reduction, segmentation and lesion partitioning methods, and denoising can enhance both quantitative metrics and visual quality for better segmentation accuracy. For pre-screening, we propose a new differential activation filter (DAF) to screen the lesion images from whole-body scanning. For noise reduction, neural network inverse (NN inverse) as the inverse transformation of generalized Anscombe transformation (GAT), which does not depend on the distribution of residual noise, was presented to improve the SNR of images. For segmentation and lesion partitioning, definition density peak clustering (DDPC) was proposed to realize instance segmentation of lesion and normal tissue with unsupervised images, which helped reduce the cost of density calculation and completely deleted the cluster halo. The experimental results of clinical data demonstrate that our proposed methods have good results and better performance in noise reduction, segmentation and lesion partitioning compared with state-of-the-art methods.
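For context, the generalized Anscombe transformation (GAT) mentioned above is a closed-form variance-stabilizing transform for mixed Poisson-Gaussian noise. The sketch below shows the forward GAT and the simple algebraic inverse, whereas the paper replaces that inverse with a learned neural-network inverse ("NN inverse"); the gain and noise parameters and the placeholder Gaussian smoothing are illustrative only:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def generalized_anscombe(z, gain=1.0, sigma=0.0, mu=0.0):
    """Forward generalized Anscombe transform for mixed Poisson-Gaussian noise."""
    arg = gain * z + (3.0 / 8.0) * gain ** 2 + sigma ** 2 - gain * mu
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

def naive_inverse_gat(y, gain=1.0, sigma=0.0, mu=0.0):
    """Simple algebraic inverse of the GAT (biased at low counts; the paper learns
    this inverse with a neural network instead)."""
    return ((gain * y / 2.0) ** 2 - (3.0 / 8.0) * gain ** 2 - sigma ** 2 + gain * mu) / gain

# Stabilize, denoise in the transformed domain (placeholder Gaussian smoothing), then invert
rng = np.random.default_rng(0)
clean = np.full((64, 64), 20.0)
noisy = rng.poisson(clean).astype(float) + rng.normal(0, 2.0, clean.shape)
stabilized = generalized_anscombe(noisy, gain=1.0, sigma=2.0)
denoised = naive_inverse_gat(gaussian_filter(stabilized, 1.5), gain=1.0, sigma=2.0)
```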
14
Arabi H, Zaidi H. Non-local mean denoising using multiple PET reconstructions. Ann Nucl Med 2021; 35:176-186. [PMID: 33244745] [PMCID: PMC7895794] [DOI: 10.1007/s12149-020-01550-y]
Abstract
OBJECTIVES Non-local mean (NLM) filtering has been broadly used for denoising of natural and medical images. The NLM filter relies on the redundant information, in the form of repeated patterns/textures, in the target image to discriminate the underlying structures/signals from noise. In PET (or SPECT) imaging, the raw data could be reconstructed using different parameters and settings, leading to different representations of the target image, which contain highly similar structures/signals to the target image contaminated with different noise levels (or properties). In this light, multiple-reconstruction NLM filtering (MR-NLM) is proposed, which relies on the redundant information provided by the different reconstructions of the same PET data (referred to as auxiliary images) to conduct the denoising process. METHODS Implementation of the MR-NLM approach involved the use of twelve auxiliary PET images (in addition to the target image) reconstructed using the same iterative reconstruction algorithm with different numbers of iterations and subsets. For each target voxel, the patches of voxels at the same location are extracted from the auxiliary PET images based on which the NLM denoising process is conducted. Through this, the exhaustive search scheme performed in the conventional NLM method to find similar patches of voxels is bypassed. The performance evaluation of the MR-NLM filter was carried out against the conventional NLM, Gaussian and bilateral post-reconstruction approaches using the experimental Jaszczak phantom and 25 whole-body PET/CT clinical studies. RESULTS The signal-to-noise ratio (SNR) in the experimental Jaszczak phantom study improved from 25.1 when using Gaussian filtering to 27.9 and 28.8 when the conventional NLM and MR-NLM methods were applied (p value < 0.05), respectively. Conversely, the Gaussian filter led to quantification bias of 35.4%, while NLM and MR-NLM approaches resulted in a bias of 32.0% and 31.1% (p value < 0.05), respectively. The clinical studies further confirm the superior performance of the MR-NLM method, wherein the quantitative bias measured in malignant lesions (hot spots) decreased from - 12.3 ± 2.3% when using the Gaussian filter to - 3.5 ± 1.3% and - 2.2 ± 1.2% when using the NLM and MR-NLM approaches (p value < 0.05), respectively. CONCLUSION The MR-NLM approach exhibited promising performance in terms of noise suppression and signal preservation for PET images, thus translating into higher SNR compared to the conventional NLM approach. Despite the promising performance of the MR-NLM approach, the additional computational burden owing to the requirement of multiple PET reconstruction still needs to be addressed.
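In spirit, the MR-NLM idea replaces the exhaustive spatial patch search of conventional NLM with same-location patches taken from differently reconstructed versions of the same data; the toy 2D sketch below illustrates only that weighting scheme (patch size, smoothing parameter h, and the stacked "auxiliary reconstructions" are placeholders):

```python
import numpy as np

def mr_nlm(target, auxiliaries, patch=3, h=0.05):
    """Toy multiple-reconstruction NLM: denoise a 2D `target` image using same-location
    patches from a list of auxiliary reconstructions of the same data."""
    r = patch // 2
    stack = [np.pad(img, r, mode="reflect") for img in [target] + list(auxiliaries)]
    out = np.zeros(target.shape, dtype=float)
    tgt = stack[0]
    for i in range(target.shape[0]):
        for j in range(target.shape[1]):
            ci, cj = i + r, j + r
            ref = tgt[ci - r:ci + r + 1, cj - r:cj + r + 1]
            weights, values = [], []
            for img in stack:                          # the target itself is included
                cand = img[ci - r:ci + r + 1, cj - r:cj + r + 1]
                d2 = np.mean((ref - cand) ** 2)
                weights.append(np.exp(-d2 / h ** 2))
                values.append(img[ci, cj])
            out[i, j] = np.average(values, weights=weights)
    return out

# Tiny example: four noise realizations standing in for reconstructions with different settings
rng = np.random.default_rng(0)
truth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
recons = [truth + rng.normal(0, s, truth.shape) for s in (0.05, 0.08, 0.10, 0.12)]
denoised = mr_nlm(recons[0], recons[1:])
```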
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland.
- Geneva University Neurocenter, Geneva University, 1205, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB, Groningen, The Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, 5000, Odense, Denmark.
15
Li L, Lu W, Tan S. Variational PET/CT Tumor Co-segmentation Integrated with PET Restoration. IEEE Trans Radiat Plasma Med Sci 2020; 4:37-49. [PMID: 32939423] [DOI: 10.1109/trpms.2019.2911597]
Abstract
PET and CT are widely used imaging modalities in radiation oncology. PET imaging has high contrast but blurry tumor edges due to its limited spatial resolution, while CT imaging has high resolution but low contrast between tumor and soft normal tissues. Tumor segmentation from either a single PET or CT image is difficult. It is known that co-segmentation methods utilizing the complementary information between PET and CT can improve segmentation accuracy. This information can be either consistent or inconsistent at the image level. How to correctly localize tumor edges in the presence of inconsistent information is a major challenge for co-segmentation methods. In this study, we proposed a novel variational method for tumor co-segmentation in PET/CT, with a fusion strategy specifically designed to handle the information inconsistency between PET and CT in an adaptive way - the method can automatically decide which modality should be trusted more when PET and CT disagree with each other on localizing the tumor boundary. The proposed method was constructed based on the Γ-convergence approximation of the Mumford-Shah (MS) segmentation model. A PET restoration process was integrated into the co-segmentation process, which further eliminates the uncertainty in tumor segmentation introduced by the blurring of tumor edges in PET. The performance of the proposed method was validated on a test dataset with fifty non-small cell lung cancer patients. Experimental results demonstrated that the proposed method had high accuracy for PET/CT co-segmentation and PET restoration, and can accurately estimate the blur kernel of the PET scanner as well. For complex images in which the tumors exhibit fluorodeoxyglucose (FDG) uptake inhomogeneity or even invade adjacent soft normal tissues, the proposed method can still accurately segment the tumors. It achieved an average Dice similarity index (DSI) of 0.85 ± 0.06, volume error (VE) of 0.09 ± 0.08, and classification error (CE) of 0.31 ± 0.13.
Affiliation(s)
- Laquan Li
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Wei Lu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York 10065, USA
- Shan Tan
- Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
16
Sbei A, ElBedoui K, Barhoumi W, Maktouf C. Gradient-based generation of intermediate images for heterogeneous tumor segmentation within hybrid PET/MRI scans. Comput Biol Med 2020; 119:103669. [PMID: 32339115] [DOI: 10.1016/j.compbiomed.2020.103669]
Abstract
Segmentation of tumors from hybrid PET/MRI scans plays an essential role in accurate diagnosis and treatment planning. However, when treating tumors, several challenges, notably heterogeneity and the problem of leaking into surrounding tissues with similar high uptake, have to be considered. To address these issues, we propose an automated method for accurate delineation of tumors in hybrid PET/MRI scans. The method is mainly based on creating intermediate images. In fact, an automatic detection technique that determines a preliminary Interesting Uptake Region (IUR) is firstly performed. To overcome the leakage problem, a separation technique is adopted to generate the final IUR. Then, smart seeds are provided for the Graph Cut (GC) technique to obtain the tumor map. To create intermediate images that tend to reduce heterogeneity faced on the original images, the tumor map gradient is combined with the gradient image. Lastly, segmentation based on the GCsummax technique is applied to the generated images. The proposed method has been validated on PET phantoms as well as on real-world PET/MRI scans of prostate, liver and pancreatic tumors. Experimental comparison revealed the superiority of the proposed method over state-of-the-art methods. This confirms the crucial role of automatically creating intermediate images in addressing the problem of wrongly estimating arc weights for heterogeneous targets.
Affiliation(s)
- Arafet Sbei
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia
- Khaoula ElBedoui
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, 2035 Tunis-Carthage, Tunisia
- Walid Barhoumi
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Bayrouni, 2080 Ariana, Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, 2035 Tunis-Carthage, Tunisia.
- Chokri Maktouf
- Nuclear Medicine Department, Pasteur Institute of Tunis, Tunis, Tunisia
17
Analysis of the Quantization Noise in Discrete Wavelet Transform Filters for 3D Medical Imaging. Appl Sci (Basel) 2020. [DOI: 10.3390/app10041223]
Abstract
Denoising and compression of 2D and 3D images are important problems in modern medical imaging systems, and the discrete wavelet transform (DWT) is used to solve them in practice. In this paper, we analyze the effect of quantization noise in the coefficients of DWT filters for 3D medical imaging. A method for quantizing the wavelet filter coefficients is proposed, which minimizes hardware resources by simplifying rounding operations. We develop a method for estimating the maximum error of the DWT of 3D grayscale and color images with various bits per color (BPC). The dependence of the peak signal-to-noise ratio (PSNR) of the processed images on the wavelet used, the effective bit-width of the filter coefficients, and the BPC is revealed. We derive formulas for determining the minimum bit-width of the wavelet filter coefficients that provides high (PSNR ≥ 40 dB for images with 8 BPC, for example) or maximum (PSNR = ∞ dB) quality of 3D medical imaging by DWT, depending on the wavelet used. Experiments on 3D tomographic images confirmed the accuracy of the theoretical analysis. All data are represented in fixed-point format in the proposed method of 3D medical image DWT, making efficient implementations of image denoising and compression possible, in terms of hardware and time resources, on modern devices such as field-programmable gate arrays and application-specific integrated circuits.
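The effect studied above can be imitated with PyWavelets by rounding a wavelet's filter coefficients to a fixed-point grid and measuring the reconstruction PSNR; the wavelet choice, bit-width, and random test image below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np
import pywt

def quantize(values, frac_bits):
    """Round values to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    return (np.round(np.asarray(values) * scale) / scale).tolist()

def psnr(ref, test, peak=255.0):
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(test, dtype=float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Build a db4 wavelet whose filter coefficients are quantized to fixed point
frac_bits = 8                                                   # illustrative bit-width
base = pywt.Wavelet("db4")
qwavelet = pywt.Wavelet("qdb4", filter_bank=[quantize(f, frac_bits) for f in base.filter_bank])

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128)).astype(float)     # stand-in for an 8 BPC slice
coeffs = pywt.wavedec2(image, qwavelet, level=3)
recon = pywt.waverec2(coeffs, qwavelet)[:128, :128]
print(f"{frac_bits} fractional bits -> PSNR = {psnr(image, recon):.1f} dB")
```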
18
Arabi H, Zaidi H. Spatially guided nonlocal mean approach for denoising of PET images. Med Phys 2020; 47:1656-1669. [DOI: 10.1002/mp.14024]
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Geneva University Neurocenter, Geneva University, CH-1205 Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, DK-500 Odense, Denmark
19
Koç A, Güveniş A. Design and evaluation of an accurate CNR-guided small region iterative restoration-based tumor segmentation scheme for PET using both simulated and real heterogeneous tumors. Med Biol Eng Comput 2019; 58:335-355. [PMID: 31848977] [DOI: 10.1007/s11517-019-02094-8]
Abstract
Tumor delineation accuracy directly affects the effectiveness of radiotherapy. This study presents a methodology that minimizes potential errors during the automated segmentation of tumors in PET images. Iterative blind deconvolution was implemented in a region of interest encompassing the tumor with the number of iterations determined from contrast-to-noise ratios. The active contour and random forest classification-based segmentation method was evaluated using three distinct image databases that included both synthetic and real heterogeneous tumors. Ground truths about tumor volumes were known precisely. The volumes of the tumors were in the range of 0.49-26.34 cm3, 0.64-1.52 cm3, and 40.38-203.84 cm3 respectively. Widely available software tools, namely, MATLAB, MIPAV, and ITK-SNAP were utilized. When using the active contour method, image restoration reduced mean errors in volumes estimation from 95.85 to 3.37%, from 815.63 to 17.45%, and from 32.61 to 6.80% for the three datasets. The accuracy gains were higher using datasets that include smaller tumors for which PVE is known to be more predominant. Computation time was reduced by a factor of about 10 in the smaller deconvolution region. Contrast-to-noise ratios were improved for all tumors in all data. The presented methodology has the potential to improve delineation accuracy in particular for smaller tumors at practically feasible computational times. Graphical abstract Evaluation of accurate lesion volumes using CNR-guided and ROI-based restoration method for PET images.
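The contrast-to-noise ratio that drives the iteration count in the scheme above has a standard ROI-based definition; a minimal sketch with simulated ROI samples is:

```python
import numpy as np

def contrast_to_noise_ratio(tumor_roi, background_roi):
    """CNR = (mean_tumor - mean_background) / std_background."""
    tumor_roi = np.asarray(tumor_roi, dtype=float)
    background_roi = np.asarray(background_roi, dtype=float)
    return (tumor_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)

# Hypothetical voxel samples from a tumor ROI and a nearby background ROI
rng = np.random.default_rng(0)
tumor = rng.normal(8.0, 1.0, 200)
background = rng.normal(2.0, 0.8, 500)
print(f"CNR = {contrast_to_noise_ratio(tumor, background):.1f}")
```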
Affiliation(s)
- Alpaslan Koç
- Institute of Biomedical Engineering, Boğaziçi University, Kandilli Kampüs, Çengelköy, 34684, Istanbul, Turkey.
- Albert Güveniş
- Institute of Biomedical Engineering, Boğaziçi University, Kandilli Kampüs, Çengelköy, 34684, Istanbul, Turkey
20
Cai S, Liu K, Yang M, Tang J, Xiong X, Xiao M. A new development of non-local image denoising using fixed-point iteration for non-convex ℓp sparse optimization. PLoS One 2018; 13:e0208503. [PMID: 30540797] [PMCID: PMC6291268] [DOI: 10.1371/journal.pone.0208503]
Abstract
We propose a new, efficient image denoising scheme that makes four main contributions, each differing from existing approaches. The first is to show the equivalence between the group-based sparse representation and the Schatten-p norm minimization problem, so that the sparsity of the coefficients for each group can be measured by estimating the underlying singular values. The second is that we construct the proximal operator for sparse optimization in ℓp space with p ∈ (0, 1] using fixed-point iteration and obtain a new solution of the Schatten-p norm minimization problem, which is more rigorous and accurate than currently available results. The third is that we analyze the suitable setting of the power p for each noise level σ = 20, 30, 50, 60, 75, 100. We find that the optimal value of p is inversely proportional to the noise level, except at high noise levels, where the best values of p are 1 and 0.95 for noise levels of 75 and 100, respectively. Last, we measure the structural similarity between two image patches and extend a previous deterministic annealing-based solution to the sparsity optimization problem by incorporating the idea of dictionary learning. Experimental results demonstrate that, for every given noise level, the proposed Spatially Adaptive Fixed Point Iteration (SAFPI) algorithm attains the best denoising performance in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), being able to retain image structure information, and outperforms many state-of-the-art denoising methods such as block-matching and 3D filtering (BM3D), weighted nuclear norm minimization (WNNM), and weighted Schatten p-norm minimization (WSNM).
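The second contribution above amounts to a generalized soft-thresholding rule for the scalar problem min_x ½(x − y)² + λ|x|^p, solved by fixed-point iteration and then applied to singular values (the Schatten-p connection of the first contribution). The sketch below follows that standard recipe rather than the authors' exact SAFPI algorithm, and all parameter values are illustrative:

```python
import numpy as np

def gst_prox(y, lam, p, n_iter=10):
    """Generalized soft-thresholding: approximate prox of lam*|x|**p at y via the
    fixed-point iteration x <- |y| - lam*p*x**(p-1), with a closed-form threshold."""
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    y = np.asarray(y, dtype=float)
    x = np.zeros_like(y)
    mask = np.abs(y) > tau
    xk = np.abs(y[mask])
    for _ in range(n_iter):
        xk = np.abs(y[mask]) - lam * p * xk ** (p - 1.0)
    x[mask] = np.sign(y[mask]) * xk
    return x

def schatten_p_shrink(matrix, lam, p, n_iter=10):
    """Apply GST to the singular values of a patch-group matrix (Schatten-p shrinkage)."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return u @ np.diag(gst_prox(s, lam, p, n_iter)) @ vt

# Toy patch-group matrix: low-rank signal plus noise
rng = np.random.default_rng(0)
low_rank = rng.normal(size=(64, 3)) @ rng.normal(size=(3, 32))
denoised = schatten_p_shrink(low_rank + rng.normal(0, 0.5, low_rank.shape), lam=1.0, p=0.8)
```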
Affiliation(s)
- Shuting Cai
- School of Automation, Guangdong University of Technology, Guangzhou, China
- Kun Liu
- School of Automation, Guangdong University of Technology, Guangzhou, China
- Ming Yang
- Department of Computer Science, Southern Illinois University-Carbondale, Carbondale, IL, United States of America
- Jianliang Tang
- College of Mathematics and Statistics, Shenzhen University, Shenzhen, Guangdong Province, China
- Xiaoming Xiong
- School of Automation, Guangdong University of Technology, Guangzhou, China
- Mingqing Xiao
- Department of Mathematics, Southern Illinois University-Carbondale, Carbondale, IL, United States of America