1
Zhu Y, Li S, Xie Z, Leung EK, Bayerlein R, Omidvari N, Abdelhafez YG, Cherry SR, Qi J, Badawi RD, Spencer BA, Wang G. Feasibility of PET-enabled dual-energy CT imaging: First physical phantom and initial patient study results. Eur J Nucl Med Mol Imaging 2025; 52:1912-1923. [PMID: 39549045 PMCID: PMC11928277 DOI: 10.1007/s00259-024-06975-5]
Abstract
PURPOSE Dual-energy (DE) CT enables material decomposition by using two different x-ray energies and may be combined with PET for improved multimodality imaging. However, this increases radiation dose and may require a hardware upgrade because of the added second x-ray CT scan. The recently proposed PET-enabled DECT method allows dual-energy imaging on a conventional PET/CT scanner without changing scanner hardware or increasing radiation exposure. Here we present the first physical-phantom and patient evaluation of this method. METHODS The PET-enabled DECT method reconstructs a gamma-ray CT (gCT) image at 511 keV from the time-of-flight PET data with the maximum-likelihood attenuation and activity (MLAA) approach and then combines this image with the low-energy x-ray CT image to form a dual-energy image pair for material decomposition. To improve the image quality of gCT, a kernel MLAA method was developed using the x-ray CT image as a priori information. Here we developed a general open-source implementation for gCT reconstruction and used this implementation for the first real-data validation, comprising both a physical phantom study and a human-subject study. Results from PET-enabled DECT were compared against x-ray DECT as the reference. Further, we applied the PET-enabled DECT method in another patient study to evaluate bone lesions. RESULTS Compared with standard MLAA, the kernel MLAA showed significantly improved image quality. PET-enabled DECT with the kernel MLAA was able to generate fractional images comparable to those from x-ray DECT, with high correlation coefficients for both the phantom study and the human-subject study (R > 0.99). The application study also indicates that PET-enabled DECT has the potential to characterize bone lesions. CONCLUSION The results of this study demonstrate the feasibility of this PET-enabled method for CT imaging and material decomposition. PET-enabled DECT shows promise to provide results comparable to x-ray DECT.
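The material-decomposition step that the dual-energy image pair feeds into can be sketched per voxel. This is a minimal two-basis-material example with illustrative attenuation coefficients (not the paper's calibration or its multi-material fractional decomposition), solving a 2x2 linear system at every voxel:

```python
import numpy as np

# Illustrative linear attenuation coefficients (1/cm) of two basis materials
# at the two energies of the dual-energy pair. Placeholder values only.
MU = np.array([[0.206, 0.428],    # low-energy x-ray CT:  [water, bone]
               [0.096, 0.169]])   # 511 keV gamma-ray CT: [water, bone]

def decompose_two_materials(mu_low, mu_high):
    """Per-voxel two-material decomposition of a dual-energy image pair.

    mu_low, mu_high : attenuation images (1/cm) at the two energies.
    Returns the basis-material fraction images obtained by solving
    MU @ [f_water, f_bone] = [mu_low, mu_high] at every voxel.
    """
    b = np.stack([mu_low.ravel(), mu_high.ravel()])   # shape (2, Nvox)
    f = np.linalg.solve(MU, b)                        # shape (2, Nvox)
    return f[0].reshape(mu_low.shape), f[1].reshape(mu_low.shape)

# Tiny synthetic example: a voxel that is 70% water / 30% bone by fraction.
mu_pair = MU @ np.array([0.7, 0.3])
f_w, f_b = decompose_two_materials(np.full((4, 4), mu_pair[0]),
                                   np.full((4, 4), mu_pair[1]))
print(f_w[0, 0], f_b[0, 0])   # -> 0.7 0.3
```

With more than two basis materials the per-voxel system is typically solved as a constrained least-squares problem rather than by direct inversion.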
Affiliation(s)
- Yansong Zhu
- Department of Radiology, UC Davis Health, 95817, Sacramento, CA, USA
- Siqi Li
- Department of Radiology, UC Davis Health, 95817, Sacramento, CA, USA
- Zhaoheng Xie
- Department of Biomedical Engineering, University of California at Davis, 95616, Davis, CA, USA
- Edwin K Leung
- Department of Radiology, UC Davis Health, 95817, Sacramento, CA, USA
- Department of Biomedical Engineering, University of California at Davis, 95616, Davis, CA, USA
- UIH America, Inc., 77054, Houston, TX, USA
- Reimund Bayerlein
- Department of Radiology, UC Davis Health, 95817, Sacramento, CA, USA
- Department of Biomedical Engineering, University of California at Davis, 95616, Davis, CA, USA
- Negar Omidvari
- Department of Biomedical Engineering, University of California at Davis, 95616, Davis, CA, USA
- Simon R Cherry
- Department of Radiology, UC Davis Health, 95817, Sacramento, CA, USA
- Department of Biomedical Engineering, University of California at Davis, 95616, Davis, CA, USA
- Jinyi Qi
- Department of Biomedical Engineering, University of California at Davis, 95616, Davis, CA, USA
- Ramsey D Badawi
- Department of Radiology, UC Davis Health, 95817, Sacramento, CA, USA
- Department of Biomedical Engineering, University of California at Davis, 95616, Davis, CA, USA
- Benjamin A Spencer
- Department of Radiology, UC Davis Health, 95817, Sacramento, CA, USA
- Department of Biomedical Engineering, University of California at Davis, 95616, Davis, CA, USA
- Guobao Wang
- Department of Radiology, UC Davis Health, 95817, Sacramento, CA, USA
2
Lee O, Kim J. Noise-matched total-likelihood-based bilateral filter: Experimental feasibility in a benchtop photon-counting CBCT system. Phys Med 2025; 130:104901. [PMID: 39826465 DOI: 10.1016/j.ejmp.2025.104901]
Abstract
PURPOSE Material decomposition induces substantial noise in basis images and in the computed tomography (CT) images synthesized from them. A likelihood-based bilateral filter was previously developed as a neighborhood filter that effectively reduces this noise. However, that method is sensitive to image contrast, and its noise texture needs improvement. It is also necessary to address how to optimally combine filtered basis images when synthesizing CT images. This study addressed these issues by introducing a total likelihood and a noise-matched condition. METHODS The experimental feasibility of the proposed method was demonstrated in a benchtop photon-counting CT (PCCT) system using the following steps: (1) a calibration process for forward modeling, (2) maximum likelihood (ML)-based material decomposition, which is accurate but suffers from substantial noise, (3) noise reduction by applying a total-likelihood-based filter, and (4) CT image synthesis using the noise-matched condition. The proposed method was compared with conventional neighborhood filters and with statistical iterative reconstruction using edge-preserving regularization. RESULTS The local noise and task-based transfer function (TTF) were analyzed using a test phantom, and the proposed method was found to preserve spatial resolution better than the other methods, especially in low-contrast regions. In the chicken leg experiment, the proposed method improved the fine structures and background textures in the denoised images and showed superior noise power spectrum characteristics. CONCLUSION The proposed method is effective and computationally efficient for noise reduction in PCCT and can potentially replace conventional iterative edge-preserving regularization approaches.
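For context, the baseline neighborhood filter that the proposed total-likelihood variant builds on can be sketched as a plain intensity-based bilateral filter; the likelihood weighting and the noise-matched CT synthesis of the paper are not reproduced here:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.2, radius=3):
    """Plain bilateral filter: weights combine spatial distance and intensity
    difference. The paper replaces the intensity term with a total-likelihood
    measure; this is only the generic neighborhood filter for comparison."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode="reflect")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rangew
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out

# Example: denoise a noisy piecewise-constant image while keeping the edge.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[:, 32:] = 1.0
noisy = img + 0.1 * rng.standard_normal(img.shape)
print(np.std(noisy - img), np.std(bilateral_filter(noisy) - img))  # noise drops
```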
Affiliation(s)
- Okkyun Lee
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), 333 Techno Jungang-daero, Hyeonpung-eup, Dalseong-gun, Daegu, 42988, Republic of Korea
- Joonbeom Kim
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), 333 Techno Jungang-daero, Hyeonpung-eup, Dalseong-gun, Daegu, 42988, Republic of Korea
3
Sanderson D, Martinez C, Fessler JA, Desco M, Abella M. Statistical image reconstruction with beam-hardening compensation for X-ray CT by a calibration step (2DIterBH). Med Phys 2024; 51:5204-5213. [PMID: 38873959 DOI: 10.1002/mp.17239]
Abstract
BACKGROUND The beam-hardening effect due to the polychromatic nature of the X-ray spectra results in two main artifacts in CT images: cupping in homogeneous areas and dark bands between dense parts in heterogeneous samples. Post-processing methods have been proposed in the literature to compensate for these artifacts, but these methods may introduce additional noise in low-dose acquisitions. Iterative methods are an alternative to compensate noise and beam-hardening artifacts simultaneously. However, they usually rely on the knowledge of the spectrum or the selection of empirical parameters. PURPOSE We propose an iterative reconstruction method with beam hardening compensation for small animal scanners that is robust against low-dose acquisitions and that does not require knowledge of the spectrum, overcoming the limitations of current beam-hardening correction algorithms. METHODS The proposed method includes an empirical characterization of the beam-hardening function based on a simple phantom in a polychromatic statistical reconstruction method. Evaluation was carried out on simulated data with different noise levels and step angles and on limited-view rodent data acquired with the ARGUS/CT system. RESULTS Results in small animal studies showed a proper correction of the beam-hardening artifacts in the whole sample, independently of the quantity of bone present on each slice. The proposed approach also reduced noise in the low-dose acquisitions and reduced streaks in the limited-view acquisitions. CONCLUSIONS Using an empirical model for the beam-hardening effect, obtained through calibration, in an iterative reconstruction method enables a robust correction of beam-hardening artifacts in low-dose small animal studies independently of the bone distribution.
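One common way to realize an empirical beam-hardening characterization from a simple calibration phantom is water-equivalent linearization; the sketch below fits a polynomial correction from known thicknesses and measured line integrals. It illustrates only the calibration idea, not the paper's polychromatic statistical reconstruction, and the toy measurement model is an assumption:

```python
import numpy as np

# Calibration: known water-equivalent thicknesses of a step phantom (cm) and
# the corresponding measured polychromatic line integrals. Here the
# measurements are synthesized with a toy saturating model; on a real scanner
# they would come from the calibration scan itself.
t_cal = np.linspace(0.0, 20.0, 21)                       # thickness (cm)
mu_mono = 0.20                                           # 1/cm at the target energy
p_poly_cal = 0.95 * mu_mono * t_cal - 0.004 * t_cal ** 2  # toy beam-hardened data

# Fit the empirical correction: a low-order polynomial mapping the measured
# polychromatic value back to the ideal monochromatic line integral.
coeffs = np.polyfit(p_poly_cal, mu_mono * t_cal, deg=3)

def correct_projection(p_poly):
    """Apply the calibrated beam-hardening correction to measured line integrals."""
    return np.polyval(coeffs, p_poly)

# Example: the measured value for ~10 cm of water is restored close to mu * t.
p_meas = 0.95 * mu_mono * 10.0 - 0.004 * 10.0 ** 2
print(correct_projection(p_meas), mu_mono * 10.0)
```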
Affiliation(s)
- Daniel Sanderson
- Dept. Bioingeniería, Universidad Carlos III de Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Cristóbal Martinez
- Dept. Bioingeniería, Universidad Carlos III de Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Jeffrey A Fessler
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, USA
- Manuel Desco
- Dept. Bioingeniería, Universidad Carlos III de Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Centro Nacional de Investigaciones Cardiovasculares Carlos III (CNIC), Madrid, Spain
- Centro de Investigación en Red en Salud Mental (CIBERSAM), Madrid, Spain
- Mónica Abella
- Dept. Bioingeniería, Universidad Carlos III de Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Centro Nacional de Investigaciones Cardiovasculares Carlos III (CNIC), Madrid, Spain
4
Li S, Zhu Y, Spencer BA, Wang G. Single-Subject Deep-Learning Image Reconstruction With a Neural Optimization Transfer Algorithm for PET-Enabled Dual-Energy CT Imaging. IEEE Trans Image Process 2024; 33:4075-4089. [PMID: 38941203 DOI: 10.1109/tip.2024.3418347]
Abstract
Combining dual-energy computed tomography (DECT) with positron emission tomography (PET) offers many potential clinical applications but typically requires expensive hardware upgrades or increases radiation dose on PET/CT scanners due to an extra X-ray CT scan. The recent PET-enabled DECT method allows DECT imaging on PET/CT without requiring a second X-ray CT scan. It combines the already existing X-ray CT image with a 511 keV γ-ray CT (gCT) image reconstructed from time-of-flight PET emission data. A kernelized framework has been developed for reconstructing the gCT image, but this method has not fully exploited the potential of prior knowledge. Deep neural networks offer one way to bring the power of deep learning to this application. However, common approaches require a large database for training, which is impractical for a new imaging method like PET-enabled DECT. Here, we propose a single-subject method that uses a neural-network representation as a deep coefficient prior to improve gCT image reconstruction without population-based pre-training. The resulting optimization problem becomes the tomographic estimation of nonlinear neural-network parameters from gCT projection data. This complicated problem can be solved efficiently by utilizing the optimization transfer strategy with quadratic surrogates. Each iteration of the proposed neural optimization transfer algorithm includes: a PET activity image update; a gCT image update; and least-squares neural-network learning in the gCT image domain. The algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulation, real phantom data, and real patient data demonstrate that the proposed method can significantly improve gCT image quality and the consequent multi-material decomposition compared with other methods.
5
Chen Z, Li Q, Wu D. Estimate and compensate head motion in non-contrast head CT scans using partial angle reconstruction and deep learning. Med Phys 2024; 51:3309-3321. [PMID: 38569143 PMCID: PMC11128317 DOI: 10.1002/mp.17047]
Abstract
BACKGROUND Patient head motion is a common source of image artifacts in computed tomography (CT) of the head, leading to degraded image quality and potentially incorrect diagnoses. Partial angle reconstruction (PAR) divides the CT projection data into several consecutive angular segments and reconstructs each segment individually. Although motion estimation and compensation using PAR has been developed and investigated for cardiac CT scans, its potential for reducing motion artifacts in head CT scans remains unexplored. PURPOSE To develop a deep learning (DL) model capable of directly estimating head motion from PAR images of head CT scans and to integrate the estimated motion into an iterative reconstruction process to compensate for the motion. METHODS Head motion is considered a rigid transformation described by six time-variant variables: three for translation and three for rotation. Each motion variable is modeled using a B-spline defined by five control points (CP) along time. We split the full 360° projection data into 25 consecutive PARs and input them into a convolutional neural network (CNN) that outputs the estimated CPs for each motion variable. The estimated CPs are used to calculate the object motion in each projection, which is incorporated into the forward and backprojection of an iterative reconstruction algorithm to reconstruct the motion-compensated image. The performance of our DL model is evaluated through both simulation and phantom studies. RESULTS The DL model achieved high accuracy in estimating head motion, as demonstrated in both the simulation study (mean absolute error (MAE) ranging from 0.28 to 0.45 mm or degree across the motion variables) and the phantom study (MAE ranging from 0.40 to 0.48 mm or degree). The resulting motion-corrected image, I_DL,PAR, exhibited a significant reduction in motion artifacts compared with traditional filtered back-projection reconstructions, as evidenced in both the simulation study (image MAE drops from 178 ± 33 HU to 37 ± 9 HU; structural similarity index (SSIM) increases from 0.60 ± 0.06 to 0.98 ± 0.01) and the phantom study (image MAE drops from 117 ± 17 HU to 42 ± 19 HU; SSIM increases from 0.83 ± 0.04 to 0.98 ± 0.02). CONCLUSIONS We demonstrate that using PAR and our proposed deep learning model enables accurate estimation of patient head motion and effectively reduces motion artifacts in the resulting head CT images.
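The B-spline motion parameterization (six rigid-motion variables, each defined by five control points over time) can be sketched as follows, assuming scipy is available; the CNN that estimates the control points and the motion-compensated iterative reconstruction are not reproduced:

```python
import numpy as np
from scipy.interpolate import BSpline

def motion_trajectory(control_points, n_views=360, degree=3):
    """Evaluate one motion variable over the scan from its control points.

    control_points : the 5 B-spline control points (e.g. x-translation in mm).
    Returns the value of that motion variable at every projection view; a
    clamped knot vector makes the curve start/end at the first/last CP.
    """
    n_cp = len(control_points)
    knots = np.concatenate([np.zeros(degree),
                            np.linspace(0, 1, n_cp - degree + 1),
                            np.ones(degree)])
    spline = BSpline(knots, control_points, degree)
    return spline(np.linspace(0, 1, n_views))

def rigid_matrix(tx, ty, tz, rx, ry, rz):
    """4x4 rigid transform from 3 translations (mm) and 3 rotations (deg)."""
    rx, ry, rz = np.deg2rad([rx, ry, rz])
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

# Example: 5 control points for x-translation, sampled at 360 views and turned
# into a per-view rigid transform for use in forward/backprojection.
tx_views = motion_trajectory([0.0, 1.5, 3.0, 2.0, 0.5])
print(tx_views[0], tx_views[-1])     # clamped endpoints equal first/last CPs
print(rigid_matrix(tx_views[180], 0, 0, 0, 0, 2.0))
```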
Affiliation(s)
- Zhennong Chen
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA
- Dufan Wu
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, 02114, USA
6
Bousse A, Kandarpa VSS, Rit S, Perelli A, Li M, Wang G, Zhou J, Wang G. Systematic Review on Learning-based Spectral CT. IEEE Trans Radiat Plasma Med Sci 2024; 8:113-137. [PMID: 38476981 PMCID: PMC10927029 DOI: 10.1109/trpms.2023.3314131]
Abstract
Spectral computed tomography (CT) has recently emerged as an advanced version of medical CT and significantly improves conventional (single-energy) CT. Spectral CT has two main forms: dual-energy computed tomography (DECT) and photon-counting computed tomography (PCCT), which offer image improvement, material decomposition, and feature quantification relative to conventional CT. However, the inherent challenges of spectral CT, evidenced by data and image artifacts, remain a bottleneck for clinical applications. To address these problems, machine learning techniques have been widely applied to spectral CT. In this review, we present the state-of-the-art data-driven techniques for spectral CT.
Affiliation(s)
- Alexandre Bousse
- LaTIM, Inserm UMR 1101, Université de Bretagne Occidentale, 29238 Brest, France
- Simon Rit
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Étienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69373, Lyon, France
- Alessandro Perelli
- Department of Biomedical Engineering, School of Science and Engineering, University of Dundee, DD1 4HN, UK
- Mengzhou Li
- Biomedical Imaging Center, Rensselaer Polytechnic Institute, Troy, New York, USA
- Guobao Wang
- Department of Radiology, University of California Davis Health, Sacramento, USA
- Jian Zhou
- CTIQ, Canon Medical Research USA, Inc., Vernon Hills, 60061, USA
- Ge Wang
- Biomedical Imaging Center, Rensselaer Polytechnic Institute, Troy, New York, USA
7
Zhu W, Lee SJ. Similarity-Driven Fine-Tuning Methods for Regularization Parameter Optimization in PET Image Reconstruction. Sensors (Basel) 2023; 23:5783. [PMID: 37447633 DOI: 10.3390/s23135783]
Abstract
We present an adaptive method for fine-tuning hyperparameters in edge-preserving regularization for PET image reconstruction. For edge-preserving regularization, in addition to the smoothing parameter that balances data fidelity and regularization, one or more control parameters are typically incorporated to adjust the sensitivity of edge preservation by modifying the shape of the penalty function. Although there have been efforts to develop automated methods for tuning the hyperparameters in regularized PET reconstruction, the majority of these methods focus primarily on the smoothing parameter. However, it is challenging to obtain high-quality images without appropriately selecting the control parameters that adjust the edge-preservation sensitivity. In this work, we propose a method to precisely tune the hyperparameters, which are initially set to a fixed value for the entire image, either manually or using an automated approach. Our core strategy is to adaptively adjust the control parameter at each pixel according to the degree of patch similarity within the neighborhood of the pixel being updated, calculated from the previous iteration. This allows our new method to integrate with a wide range of existing parameter-tuning techniques for edge-preserving regularization. Experimental results demonstrate that our proposed method effectively enhances overall reconstruction accuracy across multiple image quality metrics, including peak signal-to-noise ratio, structural similarity, visual information fidelity, mean absolute error, root-mean-square error, and mean percentage error.
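One plausible realization of the per-pixel adaptation is sketched below: the mean patch distance within a search window of the previous iteration's image is mapped through a Gaussian function to a per-pixel control parameter. The exact similarity measure and mapping used in the paper may differ:

```python
import numpy as np

def adaptive_control_map(prev_img, patch=3, search=5, delta0=0.1):
    """Per-pixel control parameter for an edge-preserving penalty derived from
    patch similarities in the previous iteration's image. Smooth neighborhoods
    (high similarity) keep a larger delta (more smoothing); neighborhoods with
    dissimilar patches (edges) get a smaller delta so edges are preserved."""
    r, s = patch // 2, search // 2
    pad = np.pad(prev_img, r + s, mode="reflect")
    h, w = prev_img.shape
    sim = np.zeros_like(prev_img)
    for i in range(h):
        for j in range(w):
            ci, cj = i + r + s, j + r + s
            ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
            d = []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    if di == 0 and dj == 0:
                        continue
                    nb = pad[ci + di - r:ci + di + r + 1, cj + dj - r:cj + dj + r + 1]
                    d.append(np.mean((ref - nb) ** 2))
            sim[i, j] = np.exp(-np.mean(d) / (2 * np.var(prev_img) + 1e-12))
    return delta0 * sim

rng = np.random.default_rng(1)
img = np.zeros((32, 32)); img[:, 16:] = 1.0
delta_map = adaptive_control_map(img + 0.05 * rng.standard_normal(img.shape))
print(delta_map[16, 4], delta_map[16, 16])  # larger in the flat region than at the edge
```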
Affiliation(s)
- Wen Zhu
- Department of Electrical and Electronic Engineering, Pai Chai University, Daejeon 35345, Republic of Korea
- Soo-Jin Lee
- Department of Electrical and Electronic Engineering, Pai Chai University, Daejeon 35345, Republic of Korea
8
Zimmerman J, Thor D, Poludniowski G. Stopping-power ratio estimation for proton radiotherapy using dual-energy computed tomography and prior-image constrained denoising. Med Phys 2023; 50:1481-1495. [PMID: 36322128 DOI: 10.1002/mp.16063]
Abstract
BACKGROUND Dual-energy computed tomography (DECT) is a promising technique for estimating stopping-power ratio (SPR) for proton therapy planning. It is known, however, that deriving electron density (ED) and effective atomic number (EAN) from DECT data can cause noise amplification in the resulting SPR images. This can negate the benefits of DECT. PURPOSE This work introduces a new algorithm for estimating SPR from DECT with noise suppression, using a pair of CT scans with spectral separation. The method is demonstrated using phantom measurements. MATERIALS AND METHODS An iterative algorithm is presented, reconstructing ED and EAN with noise suppression, based on Prior Image Constrained Denoising (PIC-D). The algorithm is tested using a Siemens Definition AS+ CT scanner (Siemens Healthcare, Forchheim, Germany). Three phantoms are investigated: a calibration phantom (CIRS 062M), a QA phantom (CATPHAN 700), and an anthropomorphic head phantom (CIRS 731-HN). A task-transfer function (TTF) and the noise power spectrum are derived from SPR images of the QA phantom for the evaluation of image quality. Comparisons of accuracy and noise for ED, EAN, and SPR are made for various versions of the algorithm in comparison to a solution based on Siemens syngo.via Rho/Z software and the current clinical standard of a single-energy CT stoichiometric calibration. A gamma analysis is also applied to the SPR images of the head phantom and water-equivalent distance (WED) is evaluated in a treatment planning system for a proton treatment field. RESULTS The algorithm is effective at suppressing noise in both ED and EAN and hence also SPR. The noise is tunable to a level equivalent to or lower than that of the syngo.via Rho/Z software. The spatial resolution (10% and 50% frequencies in the TTF) does not degrade even for the highest noise suppression investigated, although the average spatial frequency of noise does decrease. The PIC-D algorithm showed better accuracy than syngo.via Rho/Z for low density materials. In the calibration phantom, it was superior even when excluding lung substitutes, with root-mean-square deviations for ED and EAN less than 0.3% and 2%, respectively, compared to 0.5% and 3%. In the head phantom, however, the SPR accuracy of the PIC-D algorithm was comparable (excluding sinus tissue) to that derived from syngo.via Rho/Z: less than 1% error for soft tissue, brain, and trabecular bone substitutes and 5-7% for cortical bone, with the larger error for the latter likely related to the phantom geometry. Gamma evaluation showed that PIC-D can suppress noise in a patient-like geometry without introducing substantial errors in SPR. The absolute pass rates were almost identical for PIC-D and syngo.via Rho/Z. In the WED evaluations no general differences were shown. CONCLUSIONS The PIC-D DECT algorithm provides scanner-specific calibration and tunable noise suppression. It is vendor agnostic and applicable to any pair of CT scans with spectral separation. Improved accuracy to current methods was not clearly demonstrated for the complex geometry of a head phantom, but the suppression of noise without spatial resolution degradation and the possibility of incorporating constraints on image properties, suggests the usefulness of the approach.
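Once a relative electron density and a mean excitation energy (I-value) are available per voxel, the SPR itself follows from the standard Bethe-equation ratio used in proton therapy. The sketch below shows only that last step with illustrative tissue values; the PIC-D denoising and the calibrated mapping from EAN to I-value, which are the substance of the paper, are not reproduced:

```python
import numpy as np

ME_C2 = 0.511e6        # electron rest energy (eV)
MP_C2 = 938.272e6      # proton rest energy (eV)
I_WATER = 75.0         # mean excitation energy of water (eV), a common choice

def beta_squared(kinetic_energy_mev):
    """Relativistic beta^2 of a proton with the given kinetic energy."""
    gamma = 1.0 + kinetic_energy_mev * 1e6 / MP_C2
    return 1.0 - 1.0 / gamma ** 2

def spr(rho_e_rel, i_value_ev, kinetic_energy_mev=150.0):
    """Stopping-power ratio relative to water via the Bethe-equation ratio.

    rho_e_rel  : electron density relative to water (from the DECT ED image)
    i_value_ev : mean excitation energy of the tissue (eV); in the paper this
                 comes from the EAN image through a calibrated mapping.
    """
    b2 = beta_squared(kinetic_energy_mev)
    arg = 2.0 * ME_C2 * b2 / (1.0 - b2)
    num = np.log(arg / i_value_ev) - b2
    den = np.log(arg / I_WATER) - b2
    return rho_e_rel * num / den

# Example: soft-tissue-like (ED ~1.04, I ~72 eV) and cortical-bone-like
# (ED ~1.78, I ~112 eV) voxels; the values are illustrative only.
print(spr(1.04, 72.0), spr(1.78, 112.0))
```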
Affiliation(s)
- Jens Zimmerman
- Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Stockholm, Sweden
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Daniel Thor
- Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Stockholm, Sweden
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Gavin Poludniowski
- Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Stockholm, Sweden
- Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden
9
Kim TH, Haldar JP. Efficient iterative solutions to complex-valued nonlinear least-squares problems with mixed linear and antilinear operators. Optimization and Engineering 2022; 23:749-768. [PMID: 35656362 PMCID: PMC9159680 DOI: 10.1007/s11081-021-09604-4]
Abstract
We consider a setting in which it is desired to find an optimal complex vector x ∈ ℂ^N that satisfies A(x) ≈ b in a least-squares sense, where b ∈ ℂ^M is a (possibly noise-corrupted) data vector and A(·) : ℂ^N → ℂ^M is a measurement operator. If A(·) were linear, this would reduce to the classical linear least-squares problem, which has a well-known analytic solution as well as powerful iterative solution algorithms. However, instead of linear least-squares, this work considers the more complicated scenario where A(·) is nonlinear but can be represented as the summation and/or composition of operators that are linear and operators that are antilinear. Common nonlinear operations with this structure include complex conjugation and taking the real or imaginary part of a complex vector. Previous literature has shown that this kind of mixed linear/antilinear least-squares problem can be mapped into a linear least-squares problem by considering x as a vector in ℝ^(2N) instead of ℂ^N. While this approach is valid, replacing the original complex-valued optimization problem with a real-valued one can be complicated to implement and can also increase computational complexity. In this work, we describe theory and computational methods that enable mixed linear/antilinear least-squares problems to be solved iteratively using standard linear least-squares tools, while retaining all of the complex-valued structure of the original inverse problem. An illustration is provided to demonstrate that this approach can simplify the implementation and reduce the computational complexity of iterative solution algorithms.
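The real-valued embedding referred to in the abstract (and which the paper's own method aims to avoid) can be made concrete with a short sketch: for a model A x + B conj(x) ≈ b, writing x = u + iv and stacking real and imaginary parts gives an ordinary linear least-squares problem of twice the size. The matrices below are random illustrations, not from the paper:

```python
import numpy as np

def solve_mixed_lsq(A, B, b):
    """Solve min_x || A x + B conj(x) - b ||_2 over complex x via the classical
    real-valued embedding: with x = u + i v,
        Re(A x + B conj(x)) = (Ar + Br) u + (Bi - Ai) v
        Im(A x + B conj(x)) = (Ai + Bi) u + (Ar - Br) v
    which is a real linear least-squares problem in (u, v)."""
    Ar, Ai, Br, Bi = A.real, A.imag, B.real, B.imag
    M = np.block([[Ar + Br, Bi - Ai],
                  [Ai + Bi, Ar - Br]])
    rhs = np.concatenate([b.real, b.imag])
    uv, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    n = A.shape[1]
    return uv[:n] + 1j * uv[n:]

# Example: A acts linearly on x, B acts on the complex conjugate of x.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
B = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
x_true = rng.standard_normal(3) + 1j * rng.standard_normal(3)
b = A @ x_true + B @ np.conj(x_true)
print(np.allclose(solve_mixed_lsq(A, B, b), x_true))   # True for consistent data
```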
Affiliation(s)
- Tae Hyung Kim
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089, USA
- Justin P. Haldar
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089, USA
10
Zhao X, Li Y, Han Y, Chen P, Wei J. Statistical iterative spectral CT imaging method based on blind separation of polychromatic projections. Opt Express 2022; 30:18219-18237. [PMID: 36221628 DOI: 10.1364/oe.456184]
Abstract
Spectral computed tomography (CT) can provide narrow-energy-width reconstructed images, thereby suppressing beam hardening artifacts and providing rich attenuation information for component characterization. We propose a statistical iterative spectral CT imaging method based on blind separation of polychromatic projections to improve the accuracy of narrow-energy-width image decomposition. For direct inversion in blind scenarios, we introduce the system matrix into the X-ray multispectral forward model to reduce indirect errors. A constrained optimization problem with edge-preserving regularization is established and decomposed into two sub-problems to be alternately solved. Experiments indicate that the novel algorithm obtains more accurate narrow-energy-width images than the state-of-the-art method.
11
Iterative Reconstruction for Low-Dose CT using Deep Gradient Priors of Generative Model. IEEE Trans Radiat Plasma Med Sci 2022. [DOI: 10.1109/trpms.2022.3148373]
12
Wu D, Kim K, Li Q. Low-dose CT reconstruction with Noise2Noise network and testing-time fine-tuning. Med Phys 2021; 48:7657-7672. [PMID: 34791655 PMCID: PMC11216369 DOI: 10.1002/mp.15101]
Abstract
PURPOSE Deep learning-based image denoising and reconstruction methods demonstrated promising performance on low-dose CT imaging in recent years. However, most existing deep learning-based low-dose CT reconstruction methods require normal-dose images for training. Sometimes such clean images do not exist such as for dynamic CT imaging or very large patients. The purpose of this work is to develop a low-dose CT image reconstruction algorithm based on deep learning which does not need clean images for training. METHODS In this paper, we proposed a novel reconstruction algorithm where the image prior was expressed via the Noise2Noise network, whose weights were fine-tuned along with the image during the iterative reconstruction. The Noise2Noise network built a self-consistent loss by projection data splitting and mapping the corresponding filtered backprojection (FBP) results to each other with a deep neural network. Besides, the network weights are optimized along with the image to be reconstructed under an alternating optimization scheme. In the proposed method, no clean image is needed for network training and the testing-time fine-tuning leads to optimization for each reconstruction. RESULTS We used the 2016 Low-dose CT Challenge dataset to validate the feasibility of the proposed method. We compared its performance to several existing iterative reconstruction algorithms that do not need clean training data, including total variation, non-local mean, convolutional sparse coding, and Noise2Noise denoising. It was demonstrated that the proposed Noise2Noise reconstruction achieved better RMSE, SSIM and texture preservation compared to the other methods. The performance is also robust against the different noise levels, hyperparameters, and network structures used in the reconstruction. Furthermore, we also demonstrated that the proposed methods achieved competitive results without any pre-training of the network at all, that is, using randomly initialized network weights during testing. The proposed iterative reconstruction algorithm also has empirical convergence with and without network pre-training. CONCLUSIONS The proposed Noise2Noise reconstruction method can achieve promising image quality in low-dose CT image reconstruction. The method works both with and without pre-training, and only noisy data are required for pre-training.
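The Noise2Noise principle behind the self-consistent loss can be illustrated on two independent noisy copies of the same image: a denoiser fitted to map one copy to the other approximates the mapping to the clean image. The sketch below uses a small linear filter fitted by least squares in place of the deep network and does not reproduce the projection splitting, FBP, or alternating reconstruction of the paper:

```python
import numpy as np

def fit_n2n_filter(noisy_a, noisy_b, radius=1):
    """Fit a small linear denoising filter by regressing one noisy copy onto
    an independent noisy copy (the Noise2Noise principle)."""
    k = 2 * radius + 1
    pad = np.pad(noisy_a, radius, mode="reflect")
    # Design matrix: every k x k neighbourhood of copy A ...
    cols = [pad[i:i + noisy_a.shape[0], j:j + noisy_a.shape[1]].ravel()
            for i in range(k) for j in range(k)]
    X = np.stack(cols, axis=1)
    # ... regressed onto the corresponding pixel of copy B.
    w, *_ = np.linalg.lstsq(X, noisy_b.ravel(), rcond=None)
    return w.reshape(k, k)

def apply_filter(img, w):
    radius = w.shape[0] // 2
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    for i in range(w.shape[0]):
        for j in range(w.shape[1]):
            out += w[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

rng = np.random.default_rng(0)
clean = np.kron(rng.random((8, 8)), np.ones((8, 8)))   # piecewise-constant phantom
a = clean + 0.1 * rng.standard_normal(clean.shape)
b = clean + 0.1 * rng.standard_normal(clean.shape)
w = fit_n2n_filter(a, b)
print(np.std(a - clean), np.std(apply_filter(a, w) - clean))   # noise is reduced
```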
Affiliation(s)
- Dufan Wu
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Kyungsang Kim
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
13
Peterlik I, Strzelecki A, Lehmann M, Messmer P, Munro P, Paysan P, Plamondon M, Seghers D. Reducing residual-motion artifacts in iterative 3D CBCT reconstruction in image-guided radiation therapy. Med Phys 2021; 48:6497-6507. [PMID: 34529270 DOI: 10.1002/mp.15236]
Abstract
PURPOSE Recent evaluations of a 3D iterative cone-beam computed tomography (iCBCT) reconstruction method available on Varian radiation treatment devices demonstrated that iCBCT provides superior image quality compared with the analytical Feldkamp-Davis-Kress (FDK) method. However, iCBCT employs statistical penalized likelihood (PL), which is known to be highly sensitive to inconsistencies due to physiological motion occurring during the acquisition. We propose a computationally inexpensive extension of iCBCT addressing this deficiency. METHODS During the iterative process, the gradients of PL are modified to avoid the generation of motion-related artifacts. To assess the impact of this modification, we propose a motion simulation generating CBCT projections of a moving anatomy together with artifact-free images used as ground truth. Contrast-to-noise ratio and power spectra of difference images are computed to quantify the impact of the motion on reconstructed CBCT volumes as well as the effect of the proposed modification. RESULTS Using both simulated and clinical data, it is shown that the motion of the patient's abdominal wall during the acquisition results in artifacts that can be quantified as low-frequency components in volumes reconstructed with iCBCT. Further, a quantitative evaluation demonstrates that the proposed modification of PL reduces these low-frequency components. While preserving the advantages of PL, it effectively suppresses the propagation of motion-related artifacts into clinically important regions, thus increasing the motion resiliency of iCBCT. CONCLUSIONS The proposed modified iterative reconstruction method significantly improves the quality of CBCT images of anatomies affected by residual motion.
Affiliation(s)
- Igor Peterlik
- Varian Medical Systems Imaging Laboratory GmbH, Taefernstrasse 7, Daettwil, Aargau, Switzerland
- Adam Strzelecki
- Varian Medical Systems Imaging Laboratory GmbH, Taefernstrasse 7, Daettwil, Aargau, Switzerland
- Mathias Lehmann
- Varian Medical Systems Imaging Laboratory GmbH, Taefernstrasse 7, Daettwil, Aargau, Switzerland
- Philippe Messmer
- Varian Medical Systems Imaging Laboratory GmbH, Taefernstrasse 7, Daettwil, Aargau, Switzerland
- Peter Munro
- Varian Medical Systems Imaging Laboratory GmbH, Taefernstrasse 7, Daettwil, Aargau, Switzerland
- Pascal Paysan
- Varian Medical Systems Imaging Laboratory GmbH, Taefernstrasse 7, Daettwil, Aargau, Switzerland
- Mathieu Plamondon
- Varian Medical Systems Imaging Laboratory GmbH, Taefernstrasse 7, Daettwil, Aargau, Switzerland
- Dieter Seghers
- Varian Medical Systems Imaging Laboratory GmbH, Taefernstrasse 7, Daettwil, Aargau, Switzerland
14
Li S, Wang G. Modified kernel MLAA using autoencoder for PET-enabled dual-energy CT. Philos Trans A Math Phys Eng Sci 2021; 379:20200204. [PMID: 34218670 PMCID: PMC8255948 DOI: 10.1098/rsta.2020.0204]
Abstract
Combined use of PET and dual-energy CT provides complementary information for multi-parametric imaging. PET-enabled dual-energy CT combines a low-energy X-ray CT image with a high-energy γ-ray CT (GCT) image reconstructed from time-of-flight PET emission data to enable dual-energy CT material decomposition on a PET/CT scanner. The maximum-likelihood attenuation and activity (MLAA) algorithm has been used for GCT reconstruction but suffers from noise. Kernel MLAA exploits an X-ray CT image prior through the kernel framework to guide GCT reconstruction and has demonstrated substantial improvements in noise suppression. However, similar to other kernel methods for image reconstruction, the existing kernel MLAA uses image intensity-based features to construct the kernel representation, which is not always robust and may lead to suboptimal reconstruction with artefacts. In this paper, we propose a modified kernel method by using an autoencoder convolutional neural network (CNN) to extract an intrinsic feature set from the X-ray CT image prior. A computer simulation study was conducted to compare the autoencoder CNN-derived feature representation with raw image patches for evaluation of kernel MLAA for GCT image reconstruction and dual-energy multi-material decomposition. The results show that the autoencoder kernel MLAA method can achieve a significant image quality improvement for GCT and material decomposition as compared to the existing kernel MLAA algorithm. A weakness of the proposed method is its potential over-smoothness in a bone region, indicating the importance of further optimization in future work. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 2'.
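The kernel representation at the core of kernel MLAA (the image expressed as x = Kα, with the kernel matrix K built from prior-image features) can be sketched as follows. Raw intensity patches are used as features here, as in the existing kernel MLAA that the paper modifies; the paper substitutes autoencoder-derived features, which is not reproduced:

```python
import numpy as np

def build_kernel_matrix(prior_img, patch=3, k=8, sigma=0.5):
    """Row-normalized Gaussian KNN kernel matrix built from features of a
    prior (x-ray CT) image. Each voxel j is represented through its k nearest
    neighbours in feature space, so an image is modeled as x = K @ alpha."""
    h, w = prior_img.shape
    r = patch // 2
    pad = np.pad(prior_img, r, mode="reflect")
    feats = np.stack([pad[i:i + h, j:j + w].ravel()
                      for i in range(patch) for j in range(patch)], axis=1)
    n = h * w
    K = np.zeros((n, n))
    for j in range(n):
        d2 = np.sum((feats - feats[j]) ** 2, axis=1)
        nb = np.argsort(d2)[:k]                      # k nearest neighbours
        wgt = np.exp(-d2[nb] / (2 * sigma ** 2))
        K[j, nb] = wgt / np.sum(wgt)
    return K

# Example: the kernel built from a clean prior guides smoothing of a noisy image.
rng = np.random.default_rng(0)
prior = np.zeros((16, 16)); prior[4:12, 4:12] = 1.0
noisy = prior + 0.2 * rng.standard_normal(prior.shape)
K = build_kernel_matrix(prior)
smoothed = (K @ noisy.ravel()).reshape(prior.shape)
print(np.std(noisy - prior), np.std(smoothed - prior))   # prior-guided smoothing
```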
Affiliation(s)
- Siqi Li
- Department of Radiology, University of California Davis Medical Center, Sacramento, CA, USA
- Guobao Wang
- Department of Radiology, University of California Davis Medical Center, Sacramento, CA, USA
15
Fang W, Wu D, Kim K, Kalra MK, Singh R, Li L, Li Q. Iterative material decomposition for spectral CT using self-supervised Noise2Noise prior. Phys Med Biol 2021; 66. [PMID: 34126602 DOI: 10.1088/1361-6560/ac0afd]
Abstract
Compared to conventional computed tomography (CT), spectral CT can provide material decomposition capability, which can be used in many clinical diagnostic applications. However, the decomposed images can be very noisy because of the dose limit in CT scanning and the noise magnification of the material decomposition process. To alleviate this, we proposed an iterative one-step inversion material decomposition algorithm with a Noise2Noise prior. The algorithm estimated material images directly from projection data and used a Noise2Noise prior for denoising. In contrast to supervised deep learning methods, the designed Noise2Noise prior was built on self-supervised learning and did not need external data for training. In our method, the data consistency term and the Noise2Noise network were alternately optimized in the iterative framework, using a separable quadratic surrogate (SQS) and the Adam algorithm, respectively. The proposed iterative algorithm was validated and compared to other methods on simulated spectral CT data, preclinical photon-counting CT data, and clinical dual-energy CT data. Quantitative analysis showed that our proposed method performs promisingly on noise suppression and structure detail recovery.
Affiliation(s)
- Wei Fang
- Department of Engineering Physics, Tsinghua University, Beijing, 100084, People's Republic of China
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Dufan Wu
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Kyungsang Kim
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Mannudeep K Kalra
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Ramandeep Singh
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Liang Li
- Department of Engineering Physics, Tsinghua University, Beijing, 100084, People's Republic of China
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
16
Baek S, Lee O. A data-driven maximum likelihood classification for nanoparticle agent identification in photon-counting CT. Phys Med Biol 2021; 66. [PMID: 34144545 DOI: 10.1088/1361-6560/ac0cc1]
Abstract
The nanoparticle agent, combined with a targeting factor that reacts with lesions, enables specific CT imaging. Thus, identification of nanoparticle agents has the potential to improve clinical diagnosis. Thanks to the energy sensitivity of the photon-counting detector (PCD), the K-edge of nanoparticle agents in the clinical x-ray energy range can be exploited to identify the agents. In this paper, we propose a novel data-driven approach for nanoparticle agent identification using the PCD. We generate two sets of training data consisting of PCD measurements from calibration phantoms, one in the presence of the nanoparticle agent and the other in its absence. For a given sinogram of PCD counts, the proposed method calculates the normalized log-likelihood sinogram for each class (class 1: with the agent; class 2: without the agent) using the K-nearest-neighbors (KNN) estimator, backprojects the sinograms, and compares the backprojection images to identify the agent. We also proved that the proposed algorithm is equivalent to maximum-likelihood-based classification. We studied robustness to dose reduction with gold nanoparticles as the K-edge contrast medium and demonstrated that the proposed method identifies targets with different concentrations of the agent without background noise.
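The per-class likelihood comparison can be sketched with a crude KNN log-density estimator; the normalization and backprojection steps of the paper are omitted, and the toy energy-bin counts below are purely illustrative:

```python
import numpy as np

def knn_log_likelihood(x, train, k=5):
    """Crude KNN log-density estimate of a measurement vector x under one
    class's training set: log p(x) ~ log(k/n) - d*log(r_k) + const, where r_k
    is the distance to the k-th nearest training sample."""
    d = train.shape[1]
    dist = np.sqrt(np.sum((train - x) ** 2, axis=1))
    r_k = np.sort(dist)[k - 1] + 1e-12
    return np.log(k / train.shape[0]) - d * np.log(r_k)

def classify(measurements, train_with_agent, train_without_agent, k=5):
    """Sum per-measurement KNN log-likelihoods for each class and pick the
    larger one (the paper additionally normalizes the log-likelihoods and
    backprojects them into images before the comparison)."""
    ll1 = sum(knn_log_likelihood(m, train_with_agent, k) for m in measurements)
    ll2 = sum(knn_log_likelihood(m, train_without_agent, k) for m in measurements)
    return "with agent" if ll1 > ll2 else "without agent"

# Toy example: 4-bin photon-counting measurements; the agent's K-edge shifts
# counts between energy bins. Training data are simulated Gaussian clusters.
rng = np.random.default_rng(0)
with_agent = rng.normal([90, 60, 80, 70], 5, size=(500, 4))
without_agent = rng.normal([100, 70, 60, 65], 5, size=(500, 4))
test = rng.normal([90, 60, 80, 70], 5, size=(20, 4))
print(classify(test, with_agent, without_agent))   # -> with agent
```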
Affiliation(s)
- Sumin Baek
- Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science and Technology, Daegu, 42988, Republic of Korea
- Okkyun Lee
- Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science and Technology, Daegu, 42988, Republic of Korea
17
Tairi S, Anthoine S, Boursier Y, Dupont M, Morel C. ProMeSCT: A Proximal Metric Algorithm for Spectral CT. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.3036028]
18
Fu J, Feng F, Quan H, Wan Q, Chen Z, Liu X, Zheng H, Liang D, Cheng G, Hu Z. PWLS-PR: low-dose computed tomography image reconstruction using a patch-based regularization method based on the penalized weighted least squares total variation approach. Quant Imaging Med Surg 2021; 11:2541-2559. [PMID: 34079722 PMCID: PMC8107320 DOI: 10.21037/qims-20-963]
Abstract
BACKGROUND Radiation exposure from computed tomography (CT) scans and the associated risk of cancer in patients have been major clinical concerns. Existing research can achieve low-dose CT imaging by reducing the X-ray current and the number of projections per rotation of the human body. However, this may produce excessive noise and streak artifacts in images reconstructed with traditional filtered back projection (FBP). METHODS To solve this problem, iterative image reconstruction is a promising option for obtaining high-quality images from low-dose scans. This paper proposes a patch-based regularization method based on penalized weighted least squares total variation (PWLS-PR) for iterative image reconstruction. This method uses neighborhood patches instead of single pixels to calculate the nonquadratic penalty. The proposed regularization is more robust than conventional regularization in distinguishing sharp edges from random fluctuations caused by noise. Each iteration of the proposed algorithm can be described in the following three steps: image updating via total variation based on penalized weighted least squares (PWLS-TV), image smoothing, and pixel-by-pixel image fusion. RESULTS Simulation and real-world projection experiments show that the proposed PWLS-PR algorithm achieves higher image reconstruction performance than similar algorithms. The effectiveness of the method is also verified through qualitative and quantitative evaluation of the simulation experiments. CONCLUSIONS Furthermore, this study shows that the PWLS-PR method reduces the amount of projection data required for repeated CT scans and has the potential to reduce the radiation dose in clinical applications.
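The first of the three steps, the PWLS-TV image update, can be sketched in a simplified denoising form (identity forward operator). In the actual reconstruction the data term involves the CT system matrix and projection-domain statistical weights; the smoothing and fusion steps are not reproduced:

```python
import numpy as np

def tv_grad(x, eps=0.1):
    """Gradient of a smoothed total-variation penalty sum(sqrt(dx^2+dy^2+eps^2))."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps ** 2)
    px, py = dx / mag, dy / mag
    gx = px - np.concatenate([np.zeros((x.shape[0], 1)), px[:, :-1]], axis=1)
    gy = py - np.concatenate([np.zeros((1, x.shape[1])), py[:-1, :]], axis=0)
    return -(gx + gy)

def pwls_tv_step(x, y, w, beta, step):
    """One gradient step on 0.5 * sum(w * (x - y)**2) + beta * TV(x),
    i.e. penalized weighted least squares with a TV penalty."""
    return x - step * (w * (x - y) + beta * tv_grad(x))

rng = np.random.default_rng(0)
truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1.0
var = 0.05 ** 2 * (1.0 + truth)             # signal-dependent noise variance
y = truth + np.sqrt(var) * rng.standard_normal(truth.shape)
w = 1.0 / var                               # statistical weights = 1 / variance
x = y.copy()
for _ in range(400):
    x = pwls_tv_step(x, y, w, beta=20.0, step=5e-4)
print(np.std(y - truth), np.std(x - truth))  # error is reduced after the update
```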
Affiliation(s)
- Jing Fu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Fei Feng
- Department of Radiology, Peking University Shenzhen Hospital, Shenzhen, China
- Huimin Quan
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Qian Wan
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Zixiang Chen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xin Liu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guanxun Cheng
- Department of Radiology, Peking University Shenzhen Hospital, Shenzhen, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
19
Sisniega A, Stayman JW, Capostagno S, Weiss CR, Ehtiati T, Siewerdsen JH. Accelerated 3D image reconstruction with a morphological pyramid and noise-power convergence criterion. Phys Med Biol 2021; 66:055012. [PMID: 33477131 DOI: 10.1088/1361-6560/abde97]
Abstract
Model-based iterative reconstruction (MBIR) for cone-beam CT (CBCT) offers better noise-resolution tradeoff and image quality than analytical methods for acquisition protocols with low x-ray dose or limited data, but with increased computational burden that poses a drawback to routine application in clinical scenarios. This work develops a comprehensive framework for acceleration of MBIR in the form of penalized weighted least squares optimized with ordered subsets separable quadratic surrogates. The optimization was scheduled on a set of stages forming a morphological pyramid varying in voxel size. Transition between stages was controlled with a convergence criterion based on the deviation between the mid-band noise power spectrum (NPS) measured on a homogeneous region of the evolving reconstruction and that expected for the converged image, computed with an analytical model that used projection data and the reconstruction parameters. A stochastic backprojector was developed by introducing a random perturbation to the sampling position of each voxel for each ray in the reconstruction within a voxel-based backprojector, breaking the deterministic pattern of sampling artifacts when combined with an unmatched Siddon forward projector. This fast, forward and backprojector pair were included into a multi-resolution reconstruction strategy to provide support for objects partially outside of the field of view. Acceleration from ordered subsets was combined with momentum accumulation stabilized with an adaptive technique that automatically resets the accumulated momentum when it diverges noticeably from the current iteration update. The framework was evaluated with CBCT data of a realistic abdomen phantom acquired on an imaging x-ray bench and with clinical CBCT data from an angiography robotic C-arm (Artis Zeego, Siemens Healthineers, Forchheim, Germany) acquired during a liver embolization procedure. Image fidelity was assessed with the structural similarity index (SSIM) computed with a converged reconstruction. The accelerated framework provided accurate reconstructions in 60 s (SSIM = 0.97) and as little as 27 s (SSIM = 0.94) for soft-tissue evaluation. The use of simple forward and backprojectors resulted in 9.3× acceleration. Accumulation of momentum provided extra ∼3.5× acceleration with stable convergence for 6-30 subsets. The NPS-driven morphological pyramid resulted in initial faster convergence, achieving similar SSIM with 1.5× lower runtime than the single-stage optimization. Acceleration of MBIR to provide reconstruction time compatible with clinical applications is feasible via architectures that integrate algorithmic acceleration with approaches to provide stable convergence, and optimization schedules that maximize convergence speed.
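The momentum accumulation with an adaptive reset can be sketched on a toy quadratic stand-in for the penalized weighted least-squares objective. The reset criterion below is a standard gradient-direction restart, a simplified stand-in for the deviation test described in the abstract; the ordered subsets, morphological pyramid, and NPS criterion are not reproduced:

```python
import numpy as np

def accelerated_descent_with_restart(grad, x0, step, n_iter=200):
    """Nesterov-style accelerated gradient descent whose accumulated momentum
    is reset whenever it disagrees with the current descent direction."""
    x = x0.copy(); y = x0.copy(); t = 1.0
    for _ in range(n_iter):
        g = grad(y)
        x_new = y - step * g
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        v = x_new - x                      # accumulated update (momentum)
        if np.dot(g, v) > 0:               # momentum no longer a descent
            y, t_new = x_new, 1.0          # direction: reset it
        else:
            y = x_new + ((t - 1.0) / t_new) * v
        x, t = x_new, t_new
    return x

# Toy quadratic objective 0.5 * x^T A x - b^T x (stands in for the PWLS cost).
rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 20))
A = Q.T @ Q / 20.0 + np.eye(20)
b = rng.standard_normal(20)
grad = lambda x: A @ x - b
x_hat = accelerated_descent_with_restart(grad, np.zeros(20),
                                         step=1.0 / np.linalg.norm(A, 2))
print(np.linalg.norm(A @ x_hat - b))       # residual is near zero at the solution
```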
Affiliation(s)
- A Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
20
Liu SZ, Cao Q, Tivnan M, Tilley S, Siewerdsen JH, Stayman JW, Zbijewski W. Model-based dual-energy tomographic image reconstruction of objects containing known metal components. Phys Med Biol 2020; 65:245046. [PMID: 33113519 PMCID: PMC7942762 DOI: 10.1088/1361-6560/abc5a9]
Abstract
Dual-energy (DE) decomposition has been adopted in orthopedic imaging to measure bone composition and visualize intraarticular contrast enhancement. One of the potential applications involves monitoring of callus mineralization for longitudinal assessment of fracture healing. However, fracture repair usually involves internal fixation hardware that can generate significant artifacts in reconstructed images. To address this challenge, we develop a novel algorithm that combines simultaneous reconstruction-decomposition using a previously reported method for model-based material decomposition (MBMD) augmented by the known-component (KC) reconstruction framework to mitigate metal artifacts. We apply the proposed algorithm to simulated DE data representative of a dedicated extremity cone-beam CT (CBCT) employing an x-ray unit with three vertically arranged sources. The scanner generates DE data with non-coinciding high- and low-energy projection rays when the central source is operated at high tube potential and the peripheral sources at low potential. The proposed algorithm was validated using a digital extremity phantom containing varying concentrations of Ca-water mixtures and Ti implants. Decomposition accuracy was compared to MBMD without the KC model. The proposed method suppressed metal artifacts and yielded estimated Ca concentrations that approached the reconstructions of an implant-free phantom for most mixture regions. In the vicinity of simple components, the errors of Ca density estimates obtained by incorporating KC in MBMD were ∼1.5-5× lower than the errors of conventional MBMD; for cases with complex implants, the errors were ∼3-5× lower. In conclusion, the proposed method can achieve accurate bone mineral density measurements in the presence of metal implants using non-coinciding DE projections acquired on a multisource CBCT system.
Affiliation(s)
- Stephen Z. Liu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Qian Cao
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Matthew Tivnan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Steven Tilley
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Jeffrey H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- J. Webster Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
21
Wang G. PET-enabled dual-energy CT: image reconstruction and a proof-of-concept computer simulation study. Phys Med Biol 2020; 65:245028. [PMID: 33120376 DOI: 10.1088/1361-6560/abc5ca]
Abstract
Standard dual-energy computed tomography (CT) uses two different x-ray energies to obtain energy-dependent tissue attenuation information to allow quantitative material decomposition. The combined use of dual-energy CT and positron emission tomography (PET) may provide a more comprehensive characterization of disease states in cancer and other diseases. However, the integration of dual-energy CT with PET is not trivial, either requiring costly hardware upgrades or increasing radiation exposure. This paper proposes a different dual-energy CT imaging method that is enabled by PET. Instead of using a second x-ray CT scan with a different energy, this method exploits time-of-flight PET image reconstruction via the maximum likelihood attenuation and activity (MLAA) algorithm to obtain a 511 keV gamma-ray attenuation image from PET emission data. The high-energy gamma-ray attenuation image is then combined with the low-energy x-ray CT of PET/CT to provide a pair of dual-energy CT images. A major challenge with the standard MLAA reconstruction is the high noise present in the reconstructed 511 keV attenuation map, which would not compromise the PET activity reconstruction too much but may significantly affect the performance of the gamma-ray attenuation image for material decomposition. To overcome the problem, we further propose a kernel MLAA algorithm to exploit the prior information from the available x-ray CT image. We conducted a computer simulation to test the concept and algorithm for the task of material decomposition. The simulation results demonstrate that this PET-enabled dual-energy CT method is promising for quantitative material decomposition. The proposed method can be readily implemented on time-of-flight PET/CT scanners to enable simultaneous PET and dual-energy CT imaging.
Collapse
Affiliation(s)
- Guobao Wang
- Department of Radiology, University of California, Davis, CA, United States of America
| |
Collapse
|
22
|
Wu D, Gong K, Arru CD, Homayounieh F, Bizzo B, Buch V, Ren H, Kim K, Neumark N, Xu P, Liu Z, Fang W, Xie N, Tak WY, Park SY, Lee YR, Kang MK, Park JG, Carriero A, Saba L, Masjedi M, Talari H, Babaei R, Mobin HK, Ebrahimian S, Dayan I, Kalra MK, Li Q. Severity and Consolidation Quantification of COVID-19 From CT Images Using Deep Learning Based on Hybrid Weak Labels. IEEE J Biomed Health Inform 2020; 24:3529-3538. [PMID: 33044938 PMCID: PMC8545170 DOI: 10.1109/jbhi.2020.3030224] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2020] [Revised: 08/19/2020] [Accepted: 09/26/2020] [Indexed: 11/09/2022]
Abstract
Early and accurate diagnosis of Coronavirus disease (COVID-19) is essential for patient isolation and contact tracing so that the spread of infection can be limited. Computed tomography (CT) can provide important information in COVID-19, especially for patients with moderate to severe disease as well as those with worsening cardiopulmonary status. As an automatic tool, deep learning methods can be utilized to perform semantic segmentation of affected lung regions, which is important to establish disease severity and prognosis prediction. Both the extent and type of pulmonary opacities help assess disease severity. However, manual pixel-level multi-class labelling is time-consuming, subjective, and non-quantitative. In this article, we propose a hybrid weak-label-based deep learning method that utilizes both the manually annotated pulmonary opacities from COVID-19 pneumonia and the patient-level disease-type information available from the clinical report. A UNet was first trained with semantic labels to segment the total infected region. It was used to initialize another UNet, which was trained to segment the consolidations with patient-level information using the Expectation-Maximization (EM) algorithm. To demonstrate the performance of the proposed method, multi-institutional CT datasets from Iran, Italy, South Korea, and the United States were utilized. Results show that our proposed method can predict the infected regions as well as the consolidation regions with good correlation to human annotation.
Collapse
Affiliation(s)
- Dufan Wu
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
| | - Kuang Gong
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
| | | | | | - Bernardo Bizzo
- MGH & BWH Center for Clinical Data Science, Boston, MA 02114, USA
| | - Varun Buch
- MGH & BWH Center for Clinical Data Science, Boston, MA 02114, USA
| | - Hui Ren
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
| | - Kyungsang Kim
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
| | - Nir Neumark
- MGH & BWH Center for Clinical Data Science, Boston, MA 02114, USA
| | - Pengcheng Xu
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
| | - Zhiyuan Liu
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
| | - Wei Fang
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
| | - Nuobei Xie
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
| | - Won Young Tak
- Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu 41944, South Korea
| | - Soo Young Park
- Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu 41944, South Korea
| | - Yu Rim Lee
- Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu 41944, South Korea
| | - Min Kyu Kang
- Department of Internal Medicine, Yeungnam University College of Medicine, Daegu 41944, South Korea
| | - Jung Gil Park
- Department of Internal Medicine, Yeungnam University College of Medicine, Daegu 41944, South Korea
| | - Alessandro Carriero
- Radiologia, Azienda Ospedaliera Universitaria Maggiore della Carità, 28100 Novara, Italy
| | - Luca Saba
- Radiologia, Azienda Ospedaliera Universitaria Policlinico di Cagliari, 09124 Cagliari, Italy
| | - Mahsa Masjedi
- Department of Radiology, Shahid Beheshti Hospital, Kashan 00000, Iran
| | | | - Rosa Babaei
- Department of Radiology, Firoozgar Hospital, Iran University of Medical Sciences, Tehran 48711-15937, Iran
| | - Hadi Karimi Mobin
- Department of Radiology, Firoozgar Hospital, Iran University of Medical Sciences, Tehran 48711-15937, Iran
| | - Shadi Ebrahimian
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
| | - Ittai Dayan
- MGH & BWH Center for Clinical Data Science, Boston, MA 02114, USA
| | | | - Quanzheng Li
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
| |
Collapse
|
23
|
Vegas-Sánchez-Ferrero G, San José Estépar R. Statistical characterization of the linear attenuation coefficient in polychromatic CT scans. Med Phys 2020; 47:5568-5581. [PMID: 32654155 PMCID: PMC7796558 DOI: 10.1002/mp.14384] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 06/17/2020] [Accepted: 07/04/2020] [Indexed: 12/21/2022] Open
Abstract
PURPOSE To provide a unifying statistical model that characterizes the integrated x-ray intensity at the detector after logarithmic transformation and can be extended to the characterization of computed tomography (CT) numbers in the reconstructed image. METHODS We study the statistical characteristics of polyenergetic x-ray beams in the detector. Firstly, we consider the characterization of the integrated x-ray intensity at the detector through a probabilistic model (compound Poisson) that describes its statistics. We analyze its properties and derive the probabilistic distribution after the logarithmic transformation analytically. Finally, we propose a more tractable probabilistic distribution with the same features observed in the characterization, the noncentral Gamma (nc-Gamma). This distribution exhibits desirable properties for the statistical characterization across the reconstruction process. We assess the assumptions adopted in the derivation of the statistical models through Monte Carlo simulations and validate them with a water phantom and a lung phantom acquired on a clinical Siemens CT scanner. We evaluate the statistical similarities between the theoretical distribution and the nc-Gamma using a power analysis with a Kolmogorov-Smirnov test at a 95% confidence level. RESULTS The Kolmogorov-Smirnov goodness-of-fit test obtained for the Monte Carlo simulation shows extremely high agreement between the empirical distribution of the post-logarithmic integrated x-ray intensity and the nc-Gamma. The experimental validation performed with both phantoms confirmed the excellent match between the theoretical distribution, the proposed nc-Gamma, and the sample distributions in all situations. CONCLUSION We derive an analytical model describing the post-log distribution of the linear attenuation coefficient in the sensor for polychromatic CT scans. We also demonstrate that the nc-Gamma distribution approximates the theoretical distribution well. This distribution also approximates the CT numbers after reconstruction well, since it naturally extends across the linear operations involved in filtered back projection reconstructions. This probabilistic model may provide the analytical foundation to define new likelihood-based reconstruction methodologies for polychromatic scans.
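A toy sketch of the kind of check described above: simulate compound-Poisson detector readings, apply the log transform, fit a candidate distribution, and run a Kolmogorov-Smirnov test. Because SciPy provides no noncentral-Gamma distribution, an ordinary Gamma is used here as a simplified stand-in; the spectrum, noise level, and sample size are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy compound-Poisson detector model: each reading is the sum of a Poisson
# number of photons, each depositing an energy drawn from a crude spectrum
# (uniform 20-120 keV). Spectrum, noise level, and sample size are assumptions.
n_readings = 20000
counts = rng.poisson(50.0, size=n_readings)
signal = np.array([rng.uniform(20.0, 120.0, size=c).sum() for c in counts])
intensity = np.clip(signal + rng.normal(0.0, 5.0, size=n_readings), 1e-3, None)

# Post-log data: p = -log(I / I0)
i0 = 1.5 * intensity.mean()
post_log = -np.log(intensity / i0)

# Fit an ordinary Gamma as a simplified stand-in for the nc-Gamma and check
# the agreement with a KS test.
shape, loc, scale = stats.gamma.fit(post_log)
ks = stats.kstest(post_log, "gamma", args=(shape, loc, scale))
print(f"Gamma fit: shape={shape:.2f}, loc={loc:.3f}, scale={scale:.4f}; KS p={ks.pvalue:.3f}")
```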
Collapse
Affiliation(s)
- Gonzalo Vegas-Sánchez-Ferrero
- Applied Chest Imaging Laboratory (ACIL), Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
| | - Raúl San José Estépar
- Applied Chest Imaging Laboratory (ACIL), Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
24
|
Gao Y, Liang Z, Xing Y, Zhang H, Pomeroy M, Lu S, Ma J, Lu H, Moore W. Characterization of tissue-specific pre-log Bayesian CT reconstruction by texture-dose relationship. Med Phys 2020; 47:5032-5047. [PMID: 32786070 DOI: 10.1002/mp.14449] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2020] [Revised: 06/21/2020] [Accepted: 08/04/2020] [Indexed: 12/18/2022] Open
Abstract
PURPOSE Tissue textures have been recognized as biomarkers for various clinical tasks. In computed tomography (CT) image reconstruction, it is important but challenging to preserve the texture when lowering x-ray exposure from full- toward low-/ultra-low dose levels. Therefore, this paper aims to explore the texture-dose relationship within one tissue-specific pre-log Bayesian CT reconstruction algorithm. METHODS To enhance the texture in ultra-low dose CT (ULdCT) reconstruction, this paper presents a Bayesian-type algorithm. A shifted Poisson model is adapted to describe the statistical properties of pre-log data, and a tissue-specific Markov random field prior (MRFt) is used to incorporate tissue texture from a previous full-dose CT, hence the name SP-MRFt algorithm. Utilizing the SP-MRFt algorithm, we investigated tissue texture degradation as a function of x-ray dose levels from full dose (100 mAs/120 kVp) to ultralow dose (1 mAs/120 kVp) by using quantitative texture-based evaluation metrics. RESULTS Experimental results show the SP-MRFt algorithm outperforms conventional filtered back projection (FBP) and post-log domain penalized weighted least squares MRFt (PWLS-MRFt) in terms of noise suppression and texture preservation. Comparable results are also obtained with the shifted Poisson model with 7 × 7 Huber MRF weights (SP-Huber7). The investigation of the texture-dose relationship shows that the quantified texture measures drop monotonically as the dose level decreases, and interestingly a turning point is observed on the texture-dose response curve. CONCLUSIONS This important observation implies that there exists a minimum dose level that a given CT scanner (hardware configuration and image reconstruction software) can achieve without compromising clinical tasks. Moreover, the experimental results show that the variance of the electronic noise has a higher impact than its mean on the texture-dose relationship.
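For reference, the pre-log shifted-Poisson data term used by this family of methods can be written down compactly. A minimal numpy sketch of its negative log-likelihood and gradient (the tissue-specific MRF prior is omitted; matrix shapes and variable names are assumptions):

```python
import numpy as np

def shifted_poisson_nll(x, A, y, b, sigma2, r=0.0):
    """Negative log-likelihood of the shifted-Poisson pre-log data model.

    x      : image (attenuation) vector
    A      : system matrix (dense ndarray for simplicity)
    y      : measured pre-log counts
    b      : blank-scan (air) counts per ray
    sigma2 : variance of the electronic noise
    r      : expected background/scatter counts

    Model:  y_i + 2*sigma2  ~  Poisson( b_i*exp(-[Ax]_i) + r_i + 2*sigma2 )
    """
    ybar = b * np.exp(-A @ x) + r + 2.0 * sigma2
    ys = y + 2.0 * sigma2
    return float(np.sum(ybar - ys * np.log(ybar)))

def shifted_poisson_grad(x, A, y, b, sigma2, r=0.0):
    """Gradient of the negative log-likelihood above with respect to x."""
    line = b * np.exp(-A @ x)                    # expected primary counts
    ybar = line + r + 2.0 * sigma2
    dnll_dybar = 1.0 - (y + 2.0 * sigma2) / ybar  # chain rule through the NLL
    return -(A.T @ (dnll_dybar * line))
```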
Collapse
Affiliation(s)
- Yongfeng Gao
- Department of Radiology, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Zhengrong Liang
- Departments of Radiology, Biomedical Engineering, Computer Science, and Electrical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Yuxiang Xing
- Department of Engineering Physics, Tsinghua University, Beijing, 100871, China
| | - Hao Zhang
- Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Marc Pomeroy
- Departments of Radiology and Biomedical Engineering, State University of New York at Stony Brook, Stony Brook, NY, 11794, USA
| | - Siming Lu
- Departments of Radiology and Biomedical Engineering, State University of New York at Stony Brook, Stony Brook, NY, 11794, USA
| | - Jianhua Ma
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
| | - Hongbing Lu
- Department of Biomedical Engineering, Fourth Military Medical University, Xi'an, 710032, China
| | - William Moore
- Department of Radiology, New York University, New York, NY, 10016, USA
| |
Collapse
|
25
|
Hehn L, Gradl R, Dierolf M, Morgan KS, Paganin DM, Pfeiffer F. Model-Based Iterative Reconstruction for Propagation-Based Phase-Contrast X-Ray CT including Models for the Source and the Detector. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1975-1987. [PMID: 31880549 DOI: 10.1109/tmi.2019.2962615] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Propagation-based phase-contrast X-ray computed tomography is a valuable tool for high-resolution visualization of biological samples, giving distinct improvements in terms of contrast and dose requirements compared to conventional attenuation-based computed tomography. Due to its ease of implementation and advances in laboratory X-ray sources, this imaging technique is increasingly being transferred from synchrotron facilities to laboratory environments. This however poses additional challenges, such as the limited spatial coherence and flux of laboratory sources, resulting in worse resolution and higher noise levels. Here we extend a previously developed iterative reconstruction algorithm for this imaging technique to include models for the reduced spatial coherence and the signal spreading of efficient scintillator-based detectors directly into the physical forward model. Furthermore, we employ a noise model which accounts for the full covariance statistics of the image formation process. In addition, we extend the model describing the interference effects such that it now matches the formalism of the widely-used single-material phase-retrieval algorithm, which is based on the sample-homogeneity assumption. We perform a simulation study as well as an experimental study at a laboratory inverse Compton source and compare our approach to the conventional analytical approaches. We find that the modeling of the source and the detector inside the physical forward model can tremendously improve the resolution at matched noise levels and that the modeling of the covariance statistics reduces overshoots (i.e. incorrect increase / decrease in sample properties) at the sample edges significantly.
Collapse
|
26
|
Gao Y, Tan J, Shi Y, Lu S, Gupta A, Li H, Liang Z. Constructing a tissue-specific texture prior by machine learning from previous full-dose scan for Bayesian reconstruction of current ultralow-dose CT images. J Med Imaging (Bellingham) 2020; 7:032502. [PMID: 32118093 PMCID: PMC7040436 DOI: 10.1117/1.jmi.7.3.032502] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Accepted: 01/27/2020] [Indexed: 11/14/2022] Open
Abstract
Purpose: Bayesian theory provides a sound framework for ultralow-dose computed tomography (ULdCT) image reconstruction, with two terms for modeling the data statistical property and incorporating a priori knowledge for the image that is to be reconstructed. We investigate the feasibility of using a machine learning (ML) strategy, particularly the convolutional neural network (CNN), to construct a tissue-specific texture prior from previous full-dose computed tomography. Approach: Our study constructs four tissue-specific texture priors, corresponding to lung, bone, fat, and muscle, and integrates these priors with the pre-log shifted Poisson (SP) data property for Bayesian reconstruction of ULdCT images. The Bayesian reconstruction was implemented by an algorithm called SP-CNN-T and compared with our previous Markov random field (MRF)-based tissue-specific texture prior algorithm called SP-MRF-T. Results: In addition to the conventional quantitative measures, mean squared error and peak signal-to-noise ratio, the structural similarity index, feature similarity, and Haralick texture features were used to measure the performance difference between the SP-CNN-T and SP-MRF-T algorithms in terms of structure and tissue texture preservation, demonstrating the feasibility and the potential of the investigated ML approach. Conclusions: Both the training performance and the image reconstruction results showed the feasibility of constructing a CNN texture prior model and its potential to improve the structure preservation of the nodule compared with our previous regional tissue-specific MRF texture prior model.
Collapse
Affiliation(s)
- Yongfeng Gao
- State University of New York, Department of Radiology, Stony Brook, New York, United States
| | - Jiaxing Tan
- State University of New York, Department of Radiology, Stony Brook, New York, United States
| | - Yongyi Shi
- State University of New York, Department of Radiology, Stony Brook, New York, United States
| | - Siming Lu
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- State University of New York, Department of Biomedical Engineering, Stony Brook, New York, United States
| | - Amit Gupta
- State University of New York, Department of Radiology, Stony Brook, New York, United States
| | - Haifang Li
- State University of New York, Department of Radiology, Stony Brook, New York, United States
| | - Zhengrong Liang
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- State University of New York, Department of Biomedical Engineering, Stony Brook, New York, United States
| |
Collapse
|
27
|
Ye S, Ravishankar S, Long Y, Fessler JA. SPULTRA: Low-Dose CT Image Reconstruction With Joint Statistical and Learned Image Models. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:729-741. [PMID: 31425021 PMCID: PMC7170173 DOI: 10.1109/tmi.2019.2934933] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Low-dose CT image reconstruction has been a popular research topic in recent years. A typical reconstruction method based on post-log measurements is called penalized weighted-least squares (PWLS). Due to the underlying limitations of the post-log statistical model, the PWLS reconstruction quality is often degraded in low-dose scans. This paper investigates a shifted-Poisson (SP) model based likelihood function that uses the pre-log raw measurements that better represents the measurement statistics, together with a data-driven regularizer exploiting a Union of Learned TRAnsforms (SPULTRA). Both the SP induced data-fidelity term and the regularizer in the proposed framework are nonconvex. The proposed SPULTRA algorithm uses quadratic surrogate functions for the SP induced data-fidelity term. Each iteration involves a quadratic subproblem for updating the image, and a sparse coding and clustering subproblem that has a closed-form solution. The SPULTRA algorithm has a similar computational cost per iteration as its recent counterpart PWLS-ULTRA that uses post-log measurements, and it provides better image reconstruction quality than PWLS-ULTRA, especially in low-dose scans.
Collapse
|
28
|
Bousse A, Courdurier M, Emond E, Thielemans K, Hutton BF, Irarrazaval P, Visvikis D. PET Reconstruction With Non-Negativity Constraint in Projection Space: Optimization Through Hypo-Convergence. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:75-86. [PMID: 31170066 DOI: 10.1109/tmi.2019.2920109] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Standard positron emission tomography (PET) reconstruction techniques are based on maximum-likelihood (ML) optimization methods, such as the maximum-likelihood expectation-maximization (MLEM) algorithm and its variations. Most methodologies rely on a positivity constraint on the activity distribution image. Although this constraint is meaningful from a physical point of view, it can be a source of bias for low-count/high-background PET, which can compromise accurate quantification. Existing methods that allow for negative values in the estimated image usually utilize a modified log-likelihood, and therefore break the data statistics. In this paper, we propose to incorporate the positivity constraint on the projections only, by approximating the (penalized) log-likelihood function by an adequate sequence of objective functions that are easily maximized without constraint. This sequence is constructed such that there is hypo-convergence (a type of convergence that allows the convergence of the maximizers under some conditions) to the original log-likelihood, hence allowing us to achieve maximization with a positivity constraint on the projections using simple settings. A complete proof of convergence under weak assumptions is given. We provide results of experiments on simulated data where we compare our methodology with the alternating direction method of multipliers (ADMM), showing that our algorithm converges to a maximizer, which stays in the desired feasibility set, with faster convergence than ADMM. We also show that this approach reduces the bias, as compared with MLEM images, in necrotic tumors, which are characterized by cold regions surrounded by hot structures, while reconstructing similar activity values in hot regions.
Collapse
|
29
|
Abella M, Martinez C, Desco M, Vaquero JJ, Fessler JA. Simplified Statistical Image Reconstruction for X-ray CT With Beam-Hardening Artifact Compensation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:111-118. [PMID: 31180844 PMCID: PMC6995645 DOI: 10.1109/tmi.2019.2921929] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
CT images are often affected by beam-hardening artifacts due to the polychromatic nature of the X-ray spectra. These artifacts appear in the image as cupping in homogeneous areas and as dark bands between dense regions such as bones. This paper proposes a simplified statistical reconstruction method for X-ray CT based on Poisson statistics that accounts for the non-linearities caused by beam hardening. The main advantages of the proposed method over previous algorithms are that it avoids the preliminary segmentation step, which can be tricky, especially for low-dose scans, and it does not require knowledge of the whole source spectrum, which is often unknown. Each voxel attenuation is modeled as a mixture of bone and soft tissue by defining density-dependent tissue fractions and maintaining one unknown per voxel. We approximate the energy-dependent attenuation corresponding to different combinations of bone and soft tissues, the so-called beam-hardening function, with the 1D function corresponding to water plus two parameters that can be tuned empirically. Results on both simulated data with Poisson sinogram noise and two rodent studies acquired with the ARGUS/CT system showed a beam hardening reduction (both cupping and dark bands) similar to analytical reconstruction followed by post-processing techniques but with reduced noise and streaks in cases with a low number of projections, as expected for statistical image reconstruction.
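The cupping behaviour that such algorithms compensate comes directly from the polychromatic forward model: the effective attenuation per unit length decreases as the water path grows. A small illustration with an assumed two-bin toy spectrum (values are placeholders, not a calibrated spectrum):

```python
import numpy as np

# A crude two-bin "spectrum" and water attenuation values per bin (1/cm).
# Placeholder numbers for illustration, not the calibration used in the paper.
spectrum = np.array([0.6, 0.4])      # relative fluence of the two energy bins
mu_water = np.array([0.25, 0.18])    # water attenuation in each bin

def water_projection(length_cm):
    """Post-log measurement of a water slab under a polychromatic beam:
    p(L) = -log( sum_E S(E) * exp(-mu_w(E) * L) ).
    This plays the role of the 1D water beam-hardening function in the abstract."""
    return -np.log(np.sum(spectrum * np.exp(-mu_water * length_cm)))

# Effective attenuation per cm drops as the path lengthens (cupping):
for L in (1.0, 10.0, 30.0):
    print(f"L = {L:5.1f} cm  ->  p(L)/L = {water_projection(L) / L:.4f} 1/cm")
```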
Collapse
|
30
|
Chang S, Li M, Yu H, Chen X, Deng S, Zhang P, Mou X. Spectrum Estimation-Guided Iterative Reconstruction Algorithm for Dual Energy CT. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:246-258. [PMID: 31251178 DOI: 10.1109/tmi.2019.2924920] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The x-ray spectrum plays a very important role in dual energy computed tomography (DECT) reconstruction. Because it is difficult to measure the x-ray spectrum directly in practice, efforts have been devoted to spectrum estimation using transmission measurements. These measurement methods are independent of the image reconstruction, which brings extra cost and is time consuming. Furthermore, a mismatch in the estimated spectrum would degrade the quality of the reconstructed images. In this paper, we propose a spectrum estimation-guided iterative reconstruction algorithm for DECT, which aims to simultaneously recover the spectrum and reconstruct the image. The proposed algorithm is formulated as an optimization framework combining spectrum estimation based on model spectra representation, image reconstruction, and regularization for noise suppression. To solve the multi-variable optimization problem of simultaneously obtaining the spectra and images, we introduce the block coordinate descent (BCD) method into the optimization iteration. Both numerical simulations and physical phantom experiments are performed to verify and evaluate the proposed method. The experimental results validate the accuracy of the estimated spectra and reconstructed images under different noise levels. The proposed method obtains better image quality than images reconstructed from the known exact spectra and is robust in noisy data applications.
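A simplified sketch of the spectrum-estimation building block: the spectrum is represented as a nonnegative combination of model spectra and the weights are fitted to transmission measurements. The real algorithm alternates this step with image reconstruction via block coordinate descent; the helper below is a linearized stand-in and its inputs are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_spectrum(model_spectra, mu, lengths, measured_logs):
    """Estimate spectrum weights from transmission measurements (sketch).

    model_spectra : (K, E) candidate model spectra, rows normalised to 1
    mu            : (E,)  attenuation of the calibration material per energy bin
    lengths       : (M,)  thicknesses of the calibration steps
    measured_logs : (M,)  measured post-log values, -log(I/I0)

    The transmission of step m is modelled as
        t_m = sum_k w_k * sum_e s_k(e) * exp(-mu(e) * L_m),
    which is linear in the nonnegative weights w, so NNLS applies.
    """
    T = np.exp(-np.outer(lengths, mu))       # (M, E) per-bin transmissions
    A = T @ model_spectra.T                  # (M, K) modelled total transmissions
    w, _ = nnls(A, np.exp(-measured_logs))   # nonnegative weight fit
    return w / (w.sum() + 1e-12)
```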
Collapse
|
31
|
Reduction of beam hardening artifacts on real C-arm CT data using polychromatic statistical image reconstruction. Z Med Phys 2019; 30:40-50. [PMID: 31831207 DOI: 10.1016/j.zemedi.2019.10.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2019] [Revised: 09/02/2019] [Accepted: 10/07/2019] [Indexed: 11/24/2022]
Abstract
PURPOSE This work aims to compensate beam hardening artifacts by means of an extended three-dimensional polychromatic statistical reconstruction applied to flat-panel cone-beam CT. METHODS We implemented this reconstruction technique, as introduced by Elbakri et al. (2002) [1], for a multi-GPU system, assuming the underlying object consists of several well-defined materials. Furthermore, we assume one voxel can only contain an overlap of at most two materials, depending on its density value. Given the X-ray spectrum, the procedure enables reconstruction of the energy-dependent attenuation values of the volume. RESULTS We evaluated the method by using flat-panel cone-beam CT measurements of structures containing small metal objects and clinical head scan data. In comparison with the water-corrected filtered backprojection, as well as a maximum likelihood reconstruction with a consistency-based beam hardening correction, our method features clearly reduced beam hardening artifacts and a more accurate shape of metal objects. CONCLUSIONS Our multi-GPU implementation of the polychromatic reconstruction, which does not require any image pre-segmentation, clearly outperforms the standard reconstructions with respect to beam hardening, even in the presence of metal objects inside the volume. However, remaining artifacts, caused mainly by the limited dynamic range of the detector, may have to be addressed in future work.
Collapse
|
32
|
Zeng GL, Li Y. Extension of emission expectation maximization lookalike algorithms to Bayesian algorithms. Vis Comput Ind Biomed Art 2019; 2:14. [PMID: 32190406 PMCID: PMC7055571 DOI: 10.1186/s42492-019-0027-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Accepted: 10/11/2019] [Indexed: 11/23/2022] Open
Abstract
We recently developed a family of image reconstruction algorithms that look like the emission maximum-likelihood expectation-maximization (ML-EM) algorithm. In this study, we extend these algorithms to Bayesian algorithms. The family of emission-EM-lookalike algorithms utilizes a multiplicative update scheme. The extension of these algorithms to Bayesian algorithms is achieved by introducing a new simple factor, which contains the Bayesian information. One of the extended algorithms can be applied to emission tomography and another to transmission tomography. Computer simulations are performed and compared with the corresponding un-extended algorithms. The total-variation norm is employed as the Bayesian constraint in the computer simulations. The newly developed algorithms demonstrate a stable performance. A simple Bayesian algorithm can be derived for any noise variance function. The proposed algorithms have properties such as multiplicative updating, non-negativity, faster convergence rates for bright objects, and ease of implementation. Our algorithms are inspired by Green's one-step-late algorithm. If written in additive-update form, Green's algorithm has a step size determined by the future image value, which is an undesirable feature that our algorithms do not have.
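A minimal numpy sketch of the multiplicative update structure described above: a standard emission-EM step scaled by an extra factor that carries the Bayesian information. The specific form of the factor derived in the paper is not reproduced here; `bayes_factor` is a placeholder argument.

```python
import numpy as np

def mlem_bayes_step(x, A, y, bayes_factor=None):
    """One multiplicative emission-EM update, optionally scaled by a simple
    Bayesian factor (a stand-in for the factor derived in the paper).

    x : current activity image (strictly positive)
    A : system matrix (dense ndarray for simplicity)
    y : measured counts
    bayes_factor : optional array shaped like x with values near 1 that
                   encode the prior (e.g. derived from a TV-like penalty).
    """
    sens = A.T @ np.ones_like(y)                  # sensitivity image
    ratio = y / np.maximum(A @ x, 1e-12)          # measured / predicted counts
    x_new = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    if bayes_factor is not None:
        x_new = x_new * bayes_factor              # multiplicative prior term
    return x_new
```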
Collapse
Affiliation(s)
- Gengsheng L. Zeng
- Department of Engineering, Utah Valley University, 800 W University Parkway, Orem, UT 84058 USA
- Department of Radiology and Imaging Sciences, University of Utah, 729 Arapeen Drive, Salt Lake City, UT 84108 USA
| | - Ya Li
- Department of Mathematics, Utah Valley University, 800 W University Parkway, Orem, UT 84058 USA
| |
Collapse
|
33
|
Kim S, Ra JB. Dynamic focal plane estimation for dental panoramic radiography. Med Phys 2019; 46:4907-4917. [PMID: 31520417 DOI: 10.1002/mp.13823] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2019] [Revised: 09/02/2019] [Accepted: 09/03/2019] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Digital panoramic radiography is widely used in dental clinics and provides the anatomical information of the intraoral structure along a predefined arc-shaped path. Since the intraoral structure varies from patient to patient, however, it is nearly impossible to design a common, static focal path or plane fitted to the dentition of all patients. In response, we introduce an imaging algorithm for digital panoramic radiography that can provide a focused panoramic radiographic image for all patients by automatically estimating the best focal plane for each patient. METHODS The aim of this study is to improve the image quality of dental panoramic radiography based on a three-dimensional (3D) dynamic focal plane. The plane is newly introduced to represent the arbitrary 3D intraoral structure of each patient. The proposed algorithm consists of three steps: preprocessing, focal plane estimation, and image reconstruction. We first perform preprocessing to improve the accuracy of focal plane estimation. The 3D dynamic focal plane is then estimated by adjusting the position of the image plane so that object boundaries in the neighboring projection data are aligned or focused on the plane. Finally, a panoramic radiographic image is reconstructed using the estimated dynamic focal plane. RESULTS The proposed algorithm is evaluated using a numerical phantom dataset and four clinical human datasets. In order to examine the image quality improvement owing to the proposed algorithm, we generate panoramic radiographic images based on a conventional static focal plane and on the estimated 3D dynamic focal planes, respectively. Experimental results show that the image quality is dramatically improved for all datasets using the 3D dynamic focal planes estimated by the proposed algorithm. CONCLUSIONS We propose an imaging algorithm for digital panoramic radiography that provides improved image quality by estimating dynamic focal planes fitted to each individual patient's intraoral structure.
Collapse
Affiliation(s)
- Seungeon Kim
- School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
| | - Jong Beom Ra
- School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
| |
Collapse
|
34
|
Allner S, Gustschin A, Fehringer A, Noël PB, Pfeiffer F. Metric-guided regularisation parameter selection for statistical iterative reconstruction in computed tomography. Sci Rep 2019; 9:6016. [PMID: 30979911 PMCID: PMC6461679 DOI: 10.1038/s41598-019-40837-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2018] [Accepted: 02/18/2019] [Indexed: 11/09/2022] Open
Abstract
As iterative reconstruction in Computed Tomography (CT) is an ill-posed problem, additional prior information has to be used to get a physically meaningful result (close to ground truth if available). However, the amount of influence of the regularisation prior is crucial to the outcome of the reconstruction. Therefore, we propose a scheme for tuning the strength of the prior via a certain image metric. In this work, the parameter is tuned for minimal histogram entropy in selected regions of the reconstruction as histogram entropy is a very basic approach to characterise the information content of data. We performed a sweep over different regularisation parameters showing that the histogram entropy is a suitable metric as it is well behaved over a wide range of parameters. The parameter determination is a feedback loop approach we applied to numerically simulated FORBILD phantom data and verified with an experimental measurement of a micro-CT device. The outcome is evaluated visually and quantitatively by means of root mean squared error (RMSE) and structural similarity (SSIM) for the simulation and visually for the measured sample (no ground truth available). The final reconstructed images exhibit noise-suppressed iterative reconstruction. For both datasets, the optimisation is robust where its initial value is concerned. The parameter tuning approach shows that the proposed metric-driven feedback loop is a promising tool for finding a suitable regularisation parameter in statistical iterative reconstruction.
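A small sketch of the metric-guided selection loop described above: compute the histogram entropy of a region of interest for each candidate regularisation strength and keep the minimiser. The `reconstruct` callable and the bin count are assumptions for illustration.

```python
import numpy as np

def histogram_entropy(roi_values, n_bins=128):
    """Shannon entropy of the grey-value histogram inside a region of interest."""
    hist, _ = np.histogram(roi_values, bins=n_bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def pick_regularisation(reconstruct, betas, roi_mask):
    """Sweep candidate regularisation strengths and return the one that
    minimises the ROI histogram entropy. `reconstruct(beta)` is assumed to be
    a user-supplied callable returning the reconstructed volume."""
    scores = {b: histogram_entropy(reconstruct(b)[roi_mask]) for b in betas}
    return min(scores, key=scores.get), scores
```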
Collapse
Affiliation(s)
- Sebastian Allner
- Chair of Biomedical Physics and Department of Physics and Munich School of BioEngineering, Technical University of Munich, 85748, Garching, Germany.
| | - Alex Gustschin
- Chair of Biomedical Physics and Department of Physics and Munich School of BioEngineering, Technical University of Munich, 85748, Garching, Germany
| | | | - Peter B Noël
- Department of Diagnostic and Interventional Radiology, Klinikum rechts der Isar, Technical University of Munich, 81675, München, Germany
| | - Franz Pfeiffer
- Chair of Biomedical Physics and Department of Physics and Munich School of BioEngineering, Technical University of Munich, 85748, Garching, Germany
- Department of Diagnostic and Interventional Radiology, Klinikum rechts der Isar, Technical University of Munich, 81675, München, Germany
| |
Collapse
|
35
|
Tilley S, Zbijewski W, Stayman JW. Model-based material decomposition with a penalized nonlinear least-squares CT reconstruction algorithm. Phys Med Biol 2019; 64:035005. [PMID: 30561382 DOI: 10.1088/1361-6560/aaf973] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Spectral information in CT may be used for material decomposition to produce accurate reconstructions of material density and to separate materials with similar overall attenuation. Traditional methods separate the reconstruction and decomposition steps, often resulting in undesirable trade-offs (e.g. sampling constraints, a simplified spectral model). In this work, we present a model-based material decomposition algorithm which performs the reconstruction and decomposition simultaneously using a multienergy forward model. In a kV-switching simulation study, the presented method is capable of reconstructing iodine at 0.5 mg/ml with a contrast-to-noise ratio greater than two, as compared to 3.0 mg/ml for image domain decomposition. The presented method also enables novel acquisition methods, which was demonstrated in this work with a combined kV-switching/split-filter acquisition explored in simulation and physical test bench studies. This novel design used four spectral channels to decompose three materials: water, iodine, and gadolinium. In simulation, the presented method accurately reconstructed concentration value estimates with RMSE values of 4.86 mg/ml for water, 0.108 mg/ml for iodine and 0.170 mg/ml for gadolinium. In test-bench data, the RMSE values were 134 mg/ml, 5.26 mg/ml and 1.85 mg/ml, respectively. These studies demonstrate the ability of model-based material decomposition to produce accurate concentration estimates in challenging spatial/spectral sampling acquisitions.
Collapse
Affiliation(s)
- Steven Tilley
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
| | | | | |
Collapse
|
36
|
Wang W, Gang GJ, Siewerdsen JH, Stayman JW. Predicting image properties in penalized-likelihood reconstructions of flat-panel CBCT. Med Phys 2019; 46:65-80. [PMID: 30372536 PMCID: PMC6904934 DOI: 10.1002/mp.13249] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2018] [Revised: 09/17/2018] [Accepted: 10/09/2018] [Indexed: 12/18/2022] Open
Abstract
PURPOSE Model-based iterative reconstruction (MBIR) algorithms such as penalized-likelihood (PL) methods exhibit data-dependent and shift-variant properties. Image quality predictors have been derived to prospectively estimate local noise and spatial resolution, facilitating both system hardware design and tuning of reconstruction methods. However, current MBIR image quality predictors rely on idealized system models, ignoring physical blurring effects and noise correlations found in real systems. In this work, we develop and validate a new set of predictors using a physical system model specific to flat-panel cone-beam CT (FP-CBCT). METHODS Physical models appropriate for integration with MBIR analysis are developed and parameterized to represent nonidealities in FP projection data including focal spot blur, scintillator blur, detector aperture effect, and noise correlations. Flat-panel-specific predictors for local spatial resolution and local noise properties in PL reconstructions are developed based on these realistic physical models. Estimation accuracy of conventional (idealized) and FP-specific predictors is investigated and validated against experimental CBCT measurements using specialized phantoms. RESULTS Validation studies show that flat-panel-specific predictors can accurately estimate the local spatial resolution and noise properties, while conventional predictors show significant deviations in the magnitude and scale of the spatial resolution and local noise. The proposed predictors show accurate estimates over a range of imaging conditions including varying x-ray technique and regularization strength. The conventional spatial resolution prediction is sharper than the ground truth: using the conventional predictor, the full width at half maximum (FWHM) of the local point spread function (PSF) is underestimated by 0.2 mm. This mismatch is mostly eliminated in the FP-specific prediction. The general shape and amplitude of the FP-specific local noise power spectrum (NPS) predictions are consistent with measurements, while the conventional predictions underestimated the noise level by 70%. CONCLUSION The proposed image quality predictors permit accurate estimation of local spatial resolution and noise properties for PL reconstruction, accounting for dependencies on the system geometry, x-ray technique, and patient-specific anatomy in real FP-CBCT. Such tools enable prospective analysis of image quality for a range of goals including novel system and acquisition design, adaptive and task-driven imaging, and tuning of MBIR for robust and reliable behavior.
Collapse
Affiliation(s)
- Wenying Wang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
| | - Grace J. Gang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
| | | | - J. Webster Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
| |
Collapse
|
37
|
Kim S, Chang Y, Ra JB. Cardiac Motion Correction for Helical CT Scan With an Ordinary Pitch. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1587-1596. [PMID: 29969409 DOI: 10.1109/tmi.2018.2817594] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Cardiac X-ray computed tomography (CT) imaging is still challenging due to the cardiac motion during CT scanning, which leads to the presence of motion artifacts in the reconstructed image. In response, many cardiac X-ray CT imaging algorithms have been proposed, based on motion estimation (ME) and motion compensation (MC), to improve the image quality by alleviating the motion artifacts in the reconstructed image. However, these ME/MC algorithms are mainly based on an axial scan or a low-pitch helical scan. In this paper, we propose a ME/MC-based cardiac imaging algorithm for the data set acquired from a helical scan with an ordinary pitch of around 1.0 so as to obtain the whole cardiac image within a single scan of short time without ECG gating. In the proposed algorithm, a sequence of partial angle reconstructed (PAR) images is generated by using consecutive parts of the sinogram, each of which has a small angular span. Subsequently, an initial 4-D motion vector field (MVF) is obtained using multiple pairs of conjugate PAR images. The 4-D MVF is then refined based on an image quality metric so as to improve the quality of the motion-compensated image. Finally, a time-resolved cardiac image is obtained by performing motion-compensated image reconstruction by using the refined 4-D MVF. Using digital XCAT phantom data sets and a human data set commonly obtained via a helical scan with a pitch of 1.0, we demonstrate that the proposed algorithm significantly improves the image quality by alleviating motion artifacts.
Collapse
|
38
|
Tilley S, Jacobson M, Cao Q, Brehler M, Sisniega A, Zbijewski W, Stayman JW. Penalized-Likelihood Reconstruction With High-Fidelity Measurement Models for High-Resolution Cone-Beam Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:988-999. [PMID: 29621002 PMCID: PMC5889122 DOI: 10.1109/tmi.2017.2779406] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
We present a novel reconstruction algorithm based on a general cone-beam CT forward model, which is capable of incorporating the blur and noise correlations that are exhibited in flat-panel CBCT measurement data. Specifically, the proposed model may include scintillator blur, focal-spot blur, and noise correlations due to light spread in the scintillator. The proposed algorithm (GPL-BC) uses a Gaussian Penalized-Likelihood objective function, which incorporates models of blur and correlated noise. In a simulation study, GPL-BC was able to achieve lower bias as compared with deblurring followed by FDK as well as a model-based reconstruction method without integration of measurement blur. In the same study, GPL-BC was able to achieve better line-pair reconstructions (in terms of segmented-image accuracy) as compared with deblurring followed by FDK, a model-based method without blur, and a model-based method with blur but not noise correlations. A prototype extremities quantitative cone-beam CT test-bench was used to image a physical sample of human trabecular bone. These data were used to compare reconstructions using the proposed method and model-based methods without blur and/or correlation to a registered CT image of the same bone sample. The GPL-BC reconstructions resulted in more accurate trabecular bone segmentation. Multiple trabecular bone metrics, including trabecular thickness (Tb.Th.) were computed for each reconstruction approach as well as the CT volume. The GPL-BC reconstruction provided the most accurate Tb.Th. measurement, 0.255 mm, as compared with the CT derived value of 0.193 mm, followed by the GPL-B reconstruction, the GPL-I reconstruction, and then the FDK reconstruction (0.271 mm, 0.309 mm, and 0.335 mm, respectively).
Collapse
|
39
|
Seyyedi S, Liapi E, Lasser T, Ivkov R, Hatwar R, Stayman JW. Low-Dose CT Perfusion of the Liver using Reconstruction of Difference. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2018; 2:205-214. [PMID: 29785411 DOI: 10.1109/trpms.2018.2812360] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Liver CT perfusion (CTP) is used in the detection, staging, and treatment response analysis of hepatic diseases. Unfortunately, CTP radiation exposure is significant, limiting more widespread use. Traditional CTP data processing reconstructs individual temporal samples, ignoring the large amount of shared anatomical information between temporal samples and suggesting opportunities for improved data processing. We adopt a prior-image-based reconstruction approach called Reconstruction of Difference (RoD) to enable low-exposure CTP acquisition. RoD differs from many algorithms by directly estimating the attenuation changes between the current patient state and a prior CT volume. We propose to use a high-fidelity unenhanced baseline CT image to integrate prior anatomical knowledge into subsequent data reconstructions. Using simulation studies based on a 4D digital anthropomorphic phantom with realistic time-attenuation curves, we compare RoD with conventional filtered-backprojection, penalized-likelihood estimation, and prior-image penalized-likelihood estimation. We evaluate each method through comparisons of reconstructions at individual time points, the accuracy of estimated time-attenuation curves, and an analysis of common perfusion metric maps including hepatic arterial perfusion, hepatic portal perfusion, perfusion index, and time-to-peak. Results suggest that RoD enables significant exposure reductions, outperforming standard and more sophisticated model-based reconstruction, making RoD a potentially important tool to enable low-dose liver CTP.
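A toy sketch of the Reconstruction-of-Difference idea: estimate only the change relative to the prior (baseline) volume rather than the full image. This least-squares, gradient-descent version is a simplified stand-in for the penalized-likelihood formulation used in the paper; shapes and step size are assumptions.

```python
import numpy as np

def reconstruction_of_difference(A, line_integrals, mu_prior,
                                 beta=1.0, n_iter=200, step=1e-3):
    """Toy Reconstruction-of-Difference: estimate only the change d relative
    to the prior volume, i.e. fit A @ (mu_prior + d) to the measured line
    integrals while penalising d (quadratic penalty here)."""
    d = np.zeros_like(mu_prior)
    for _ in range(n_iter):
        resid = A @ (mu_prior + d) - line_integrals   # data mismatch
        grad = A.T @ resid + beta * d                  # penalty keeps d small
        d -= step * grad
    return d
```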
Collapse
Affiliation(s)
- Saeed Seyyedi
- Computer Aided Medical Procedures and Chair of Biomedical Physics, Technical University of Munich, Munich, 85748 Germany
| | - Eleni Liapi
- Department of Radiology and Radiological Sciences, Johns Hopkins Hospital, Baltimore, MD 21205 USA
| | - Tobias Lasser
- Computer Aided Medical Procedures, Technical University of Munich, Munich, 85748 Germany
| | - Robert Ivkov
- Department of Radiation Oncology, Johns Hopkins Hospital, Baltimore, MD 21205 USA
| | - Rajeev Hatwar
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21205 USA
| | - J Webster Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205 USA
| |
Collapse
|
40
|
Ahn S, Cheng L, Shanbhag DD, Qian H, Kaushik SS, Jansen FP, Wiesinger F. Joint estimation of activity and attenuation for PET using pragmatic MR-based prior: application to clinical TOF PET/MR whole-body data for FDG and non-FDG tracers. Phys Med Biol 2018; 63:045006. [PMID: 29345242 DOI: 10.1088/1361-6560/aaa8a6] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Accurate and robust attenuation correction remains challenging in hybrid PET/MR particularly for torsos because it is difficult to segment bones, lungs and internal air in MR images. Additionally, MR suffers from susceptibility artifacts when a metallic implant is present. Recently, joint estimation (JE) of activity and attenuation based on PET data, also known as maximum likelihood reconstruction of activity and attenuation, has gained considerable interest because of (1) its promise to address the challenges in MR-based attenuation correction (MRAC), and (2) recent advances in time-of-flight (TOF) technology, which is known to be the key to the success of JE. In this paper, we implement a JE algorithm using an MR-based prior and evaluate the algorithm using whole-body PET/MR patient data, for both FDG and non-FDG tracers, acquired from GE SIGNA PET/MR scanners with TOF capability. The weight of the MR-based prior is spatially modulated, based on MR signal strength, to control the balance between MRAC and JE. Large prior weights are used in strong MR signal regions such as soft tissue and fat (i.e. MR tissue classification with a high degree of certainty) and small weights are used in low MR signal regions (i.e. MR tissue classification with a low degree of certainty). The MR-based prior is pragmatic in the sense that it is convex and does not require training or population statistics while exploiting synergies between MRAC and JE. We demonstrate the JE algorithm has the potential to improve the robustness and accuracy of MRAC by recovering the attenuation of metallic implants, internal air and some bones and by better delineating lung boundaries, not only for FDG but also for more specific non-FDG tracers such as 68Ga-DOTATOC and 18F-Fluoride.
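A small sketch of the spatially modulated prior weight described above: trust the MR-based attenuation map where the MR signal is strong and relax toward joint estimation where it is weak. The linear ramp and threshold below are assumptions for illustration, not the modulation defined in the paper.

```python
import numpy as np

def prior_weight_from_mr(mr_image, low=0.01, high=1.0, threshold=None):
    """Spatially modulated prior weight: large where the MR signal is strong
    (tissue classification trusted), small where it is weak (air, bone,
    metal-induced signal voids), so joint estimation dominates there."""
    mr = np.asarray(mr_image, dtype=float)
    if threshold is None:
        threshold = 0.2 * mr.max()                       # assumed cutoff
    s = np.clip(mr / max(threshold, 1e-12), 0.0, 1.0)    # normalised signal strength
    return low + (high - low) * s
```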
Collapse
Affiliation(s)
- Sangtae Ahn
- GE Global Research, Niskayuna, NY, United States of America
- Author to whom any correspondence should be addressed
| | - Lishui Cheng
- GE Global Research, Niskayuna, NY, United States of America
| | | | - Hua Qian
- GE Global Research, Niskayuna, NY, United States of America
| | | | | | | |
Collapse
|
41
|
Lim H, Dewaraja YK, Fessler JA. A PET reconstruction formulation that enforces non-negativity in projection space for bias reduction in Y-90 imaging. Phys Med Biol 2018; 63:035042. [PMID: 29327692 PMCID: PMC5854483 DOI: 10.1088/1361-6560/aaa71b] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability of positron production and the high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach that modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true coincidence count-rates with high random fractions, corresponding to the typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new method with standard reconstruction algorithms, NEG-ML, and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust to any count level without requiring parameter tuning.
Collapse
Affiliation(s)
- Hongki Lim
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, United States of America. Department of Radiology, University of Michigan, Ann Arbor, MI 48109, United States of America
| | | | | |
Collapse
|
42
|
Wang G, Zhou J, Yu Z, Wang W, Qi J. Hybrid Pre-Log and Post-Log Image Reconstruction for Computed Tomography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:2457-2465. [PMID: 28920898 PMCID: PMC5783547 DOI: 10.1109/tmi.2017.2751679] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Tomographic image reconstruction for low-dose computed tomography (CT) is increasingly challenging as dose continues to reduce in clinical applications. Pre-log domain methods and post-log domain methods have been proposed individually and each method has its own disadvantage. While having the potential to improve image quality for low-dose data by using an accurate imaging model, pre-log domain methods suffer slow convergence in practice due to the nonlinear transformation from the image to measurements. In contrast, post-log domain methods have fast convergence speed but the resulting image quality is suboptimal for low dose CT data because the log transformation is extremely unreliable for low-count measurements and undefined for negative values. This paper proposes a hybrid method that integrates the pre-log model and post-log model together to overcome the disadvantages of individual pre-log and post-log methods. We divide a set of CT data into high-count and low-count regions. The post-log weighted least squares model is used for measurements in the high-count region and the pre-log shifted Poisson model for measurements in the low-count region. The hybrid likelihood function can be optimized using an existing iterative algorithm. Computer simulations and phantom experiments show that the proposed hybrid method can achieve faster early convergence than the pre-log shifted Poisson likelihood method and better signal-to-noise performance than the post-log weighted least squares method.
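A compact sketch of the hybrid cost described above: rays are split by a count threshold, high-count rays use a post-log weighted-least-squares term, and low-count rays use a pre-log shifted-Poisson term. The threshold, weighting, and variable names are illustrative assumptions.

```python
import numpy as np

def hybrid_cost(x, A, counts, blank, sigma2, count_threshold=20.0):
    """Hybrid pre-log / post-log cost (simplified sketch): post-log weighted
    least squares for high-count rays, pre-log shifted Poisson for low-count
    rays."""
    proj = A @ x
    high = counts > count_threshold
    low = ~high

    # Post-log WLS on high-count rays; weight ~ counts (inverse variance of log data)
    log_data = np.log(blank[high] / np.maximum(counts[high], 1e-12))
    wls = 0.5 * np.sum(counts[high] * (proj[high] - log_data) ** 2)

    # Pre-log shifted-Poisson term on low-count rays
    ybar = blank[low] * np.exp(-proj[low]) + 2.0 * sigma2
    sp = np.sum(ybar - (counts[low] + 2.0 * sigma2) * np.log(ybar))
    return float(wls + sp)
```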
Collapse
|
43
|
Wu D, Kim K, El Fakhri G, Li Q. Iterative Low-Dose CT Reconstruction With Priors Trained by Artificial Neural Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:2479-2486. [PMID: 28922116 PMCID: PMC5897914 DOI: 10.1109/tmi.2017.2753138] [Citation(s) in RCA: 128] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
Dose reduction in computed tomography (CT) is essential for decreasing radiation risk in clinical applications. Iterative reconstruction algorithms are one of the most promising ways to compensate for the increased noise due to the reduction of photon flux. Most iterative reconstruction algorithms incorporate manually designed prior functions of the reconstructed image to suppress noise while maintaining the structures of the image. These priors basically rely on smoothness constraints and cannot exploit more complex features of the image. The recent development of artificial neural networks and machine learning has enabled learning of more complex image features, which has the potential to improve reconstruction quality. In this letter, a K-sparse autoencoder was used for unsupervised feature learning. A manifold was learned from normal-dose images, and the distance between the reconstructed image and the manifold was minimized along with data fidelity during reconstruction. Experiments on the 2016 Low-dose CT Grand Challenge data were used for method verification, and the results demonstrated the noise reduction and detail preservation abilities of the proposed method.
Collapse
|
44
|
Mason JH, Perelli A, Nailon WH, Davies ME. Polyquant CT: direct electron and mass density reconstruction from a single polyenergetic source. Phys Med Biol 2017; 62:8739-8762. [DOI: 10.1088/1361-6560/aa9162] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
|
45
|
|
46
|
Kim S, Chang Y, Ra JB. Cardiac Image Reconstruction via Nonlinear Motion Correction Based on Partial Angle Reconstructed Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:1151-1161. [PMID: 28103549 DOI: 10.1109/tmi.2017.2654508] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Although X-ray computed tomography (CT) is considered suitable for fast imaging, motion-artifact-free cardiac imaging remains an important issue because the gantry rotation speed is not fast enough relative to the heart motion. To obtain a heart image with fewer motion artifacts, a motion estimation (ME) and motion compensation (MC) approach is usually adopted. In this paper, we propose an ME/MC algorithm that can estimate a nonlinear heart motion model from a sinogram with a rotation angle of less than 360°. In this algorithm, we first assume the heart motion to be nonrigid but linear, and thereby estimate an initial 4-D motion vector field (MVF) during a half rotation by using conjugate partial-angle reconstructed images, as in our previous ME/MC algorithm. We then refine the MVF into a more accurate nonlinear MVF by maximizing the information potential of a motion-compensated image. Finally, MC is performed by incorporating the determined MVF into the image reconstruction process, and a time-resolved heart image is obtained. Using a numerical phantom, a physical cardiac phantom, and an animal data set, we demonstrate that the proposed algorithm noticeably improves image quality by reducing motion artifacts throughout the image.
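As a rough illustration of the motion-compensation step (not the authors' implementation), the sketch below warps a set of partial-angle reconstructed images to a common reference phase using per-segment displacement fields and sums them. The 2-D simplification, variable names, and interpolation choice are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def motion_compensated_combine(par_images, mvfs):
    """Sketch: combine partial-angle reconstructed (PAR) images after warping
    each one to a common reference phase with its estimated motion vector field.

    par_images : list of 2-D PAR images (one per angular segment)
    mvfs       : list of (dy, dx) displacement fields mapping reference -> segment
    """
    ny, nx = par_images[0].shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    out = np.zeros((ny, nx))
    for img, (dy, dx) in zip(par_images, mvfs):
        # Sample each PAR image at the motion-displaced coordinates (bilinear interp.)
        coords = np.stack([yy + dy, xx + dx])
        out += map_coordinates(img, coords, order=1, mode='nearest')
    return out
```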
Collapse
|
47
|
Xu S, Uneri A, Khanna AJ, Siewerdsen JH, Stayman JW. Polyenergetic known-component CT reconstruction with unknown material compositions and unknown x-ray spectra. Phys Med Biol 2017; 62:3352-3374. [PMID: 28230539 PMCID: PMC5728157 DOI: 10.1088/1361-6560/aa6285] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Abstract
Metal artifacts can cause substantial image quality issues in computed tomography. This is particularly true in interventional imaging, where surgical tools or metal implants are in the field of view. Moreover, the region of interest is often near such devices, which is exactly where image quality degradation is largest. Previous work on known-component reconstruction (KCR) has shown that incorporating a physical model (e.g., shape and material composition) of the metal component into the reconstruction algorithm can significantly reduce artifacts, even near the edge of a metal component. However, for such approaches to be effective, they must have an accurate model of the component that includes the energy-dependent properties of both the metal device and the CT scanner, placing a burden on system characterization and knowledge of the component materials. In this work, we propose a modified KCR approach that adopts a mixed forward model with a polyenergetic model for the component and a monoenergetic model for the background anatomy. This new approach, called Poly-KCR, jointly estimates a spectral transfer function associated with the known components in addition to the background attenuation values. Thus, it eliminates both the need to know the component material composition a priori and the requirement for an energy-dependent characterization of the CT scanner. We demonstrate the efficacy of this approach and illustrate its improved performance over traditional and model-based iterative reconstruction methods in simulation studies and in physical data, including an implanted cadaver sample.
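The sketch below illustrates the flavour of such a mixed forward model: the background follows a monoenergetic Beer-Lambert term, while the known component contributes an effective attenuation through a spectral transfer function, here modeled (as an assumption) by a polynomial in the component path length.

```python
import numpy as np

def mixed_forward_model(mu_bg, component_lengths, A, I0, poly_coeffs):
    """Sketch of a mixed forward model: monoenergetic background plus a
    polyenergetic known component described by a spectral transfer function.

    mu_bg             : background attenuation image (monoenergetic model)
    component_lengths : known path length through the component for each ray
    poly_coeffs       : coefficients of the spectral transfer function f(t),
                        modeled here (assumption) as a polynomial in t
    """
    background_line_integrals = A @ mu_bg
    # Effective attenuation of the known component along each ray
    f_t = np.polyval(poly_coeffs, component_lengths)
    # Expected transmission counts for each ray
    return I0 * np.exp(-(background_line_integrals + f_t))
```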
Collapse
Affiliation(s)
- S Xu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
| | | | | | | | | |
Collapse
|
48
|
Alessio AM, Kinahan PE, Sauer K, Kalra MK, De Man B. Comparison Between Pre-Log and Post-Log Statistical Models in Ultra-Low-Dose CT Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:707-720. [PMID: 28113926 PMCID: PMC5424567 DOI: 10.1109/tmi.2016.2627004] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
X-ray detectors in clinical computed tomography (CT) usually operate in current-integrating mode. Their complicated signal statistics often lead to likelihood functions that are intractable for practical use in model-based image reconstruction (MBIR). It is therefore desirable to design simplified statistical models without losing the essential factors. Depending on whether the CT transmission data are logarithmically transformed, pre-log and post-log models are the two major categories of choices in CT MBIR. Since both are approximations, it remains an open question whether one model can notably improve image quality over the other on real scanners. In this study, we develop and compare several pre-log and post-log MBIR algorithms under a unified framework. Their reconstruction accuracy is evaluated on simulation and clinical datasets. The results show that pre-log MBIR can achieve notably better quantitative accuracy than post-log MBIR in ultra-low-dose CT, although in less extreme cases post-log MBIR with handcrafted pre-processing remains a competitive alternative. Pre-log MBIR could play a growing role in emerging ultra-low-dose CT applications.
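For reference, the two model families compared here are commonly written (in our notation, under standard assumptions) as a pre-log shifted Poisson negative log-likelihood and a post-log weighted least-squares objective:

Pre-log shifted Poisson: \Phi_{\mathrm{pre}}(\mu) = \sum_i \big\{ [\bar{y}_i(\mu) + \sigma_e^2] - (y_i + \sigma_e^2)\log[\bar{y}_i(\mu) + \sigma_e^2] \big\}, with \bar{y}_i(\mu) = I_{0,i}\, e^{-[A\mu]_i} and \sigma_e^2 the electronic-noise variance.

Post-log weighted least squares: \Phi_{\mathrm{post}}(\mu) = \tfrac{1}{2}\sum_i w_i \big(l_i - [A\mu]_i\big)^2, with l_i = \log(I_{0,i}/y_i) and weights w_i \approx y_i approximating the inverse variance of l_i.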
Collapse
|
49
|
Xue Y, Ruan R, Hu X, Kuang Y, Wang J, Long Y, Niu T. Statistical image-domain multimaterial decomposition for dual-energy CT. Med Phys 2017; 44:886-901. [PMID: 28060999 DOI: 10.1002/mp.12096] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2016] [Revised: 12/12/2016] [Accepted: 12/27/2016] [Indexed: 11/12/2022] Open
Abstract
PURPOSE Dual-energy CT (DECT) enhances tissue characterization because of its basis-material decomposition capability. In addition to conventional two-material decomposition from DECT measurements, multimaterial decomposition (MMD) is required in many clinical applications. To solve the ill-posed problem of reconstructing multi-material images from dual-energy measurements, additional constraints are incorporated into the formulation, including volume and mass conservation and the assumptions that each pixel contains at most three materials and that the material types vary among pixels. The recently proposed flexible image-domain MMD method decomposes pixels sequentially into multiple basis materials using a direct inversion scheme, which leads to magnified noise in the material images. In this paper, we propose a statistical image-domain MMD method for DECT to suppress this noise. METHODS The proposed method applies penalized weighted least-squares (PWLS) reconstruction with a negative log-likelihood term and edge-preserving regularization for each material. The statistical weight is determined by a data-based method accounting for the noise variance of the high- and low-energy CT images. We apply optimization transfer principles to design a series of pixel-wise separable quadratic surrogate (PWSQS) functions that monotonically decrease the cost function. The separability in each pixel enables the simultaneous update of all pixels. RESULTS The proposed method is evaluated on a digital phantom, the Catphan©600 phantom, and three patients (pelvis, head, and thigh). We also implement the direct inversion and low-pass filtration methods for comparison. Compared with the direct inversion method, the proposed method reduces the noise standard deviation (STD) in soft tissue by 95.35% in the digital phantom study, 88.01% in the Catphan©600 phantom study, 92.45% in the pelvis patient study, 60.21% in the head patient study, and 81.22% in the thigh patient study. The overall volume fraction accuracy is improved by around 6.85%. Compared with the low-pass filtration method, the root-mean-square percentage error (RMSE(%)) of electron densities in the Catphan©600 phantom is decreased by 20.89%. At the 50% modulation transfer function (MTF) magnitude, the proposed method increases the spatial resolution by an overall factor of 1.64 on the digital phantom and 2.16 on the Catphan©600 phantom. The overall volume fraction accuracy is increased by 6.15%. CONCLUSIONS We propose a statistical image-domain MMD method using DECT measurements. The method successfully suppresses the magnified noise while faithfully retaining the quantification accuracy and anatomical structure in the decomposed material images. The proposed method is practical and promising for advanced clinical applications of DECT imaging.
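A minimal sketch of an image-domain PWLS objective of this type, with per-pixel statistical weights and an edge-preserving (Huber) penalty on each material-fraction image; the variable names, penalty choice, and parameters are assumptions rather than the authors' exact formulation.

```python
import numpy as np

def huber(d, delta=1e-3):
    """Edge-preserving Huber potential applied element-wise."""
    a = np.abs(d)
    return np.where(a <= delta, 0.5 * d ** 2, delta * (a - 0.5 * delta))

def mmd_pwls_cost(frac, ct_he, ct_le, mu_he, mu_le, w_he, w_le, beta=1e2):
    """Sketch of a penalized weighted least-squares cost for image-domain
    multimaterial decomposition.

    frac : (K, ny, nx) material volume-fraction images
    mu_* : length-K basis attenuation values at the high / low energies
    w_*  : per-pixel statistical weights from the CT noise variance
    """
    # Image-domain forward model: each CT image is a mixture of basis materials
    model_he = np.tensordot(mu_he, frac, axes=1)
    model_le = np.tensordot(mu_le, frac, axes=1)
    fidelity = 0.5 * np.sum(w_he * (ct_he - model_he) ** 2) \
             + 0.5 * np.sum(w_le * (ct_le - model_le) ** 2)

    # Edge-preserving regularization on each material image (nearest-neighbour diffs)
    reg = sum(np.sum(huber(np.diff(f, axis=0))) + np.sum(huber(np.diff(f, axis=1)))
              for f in frac)
    return fidelity + beta * reg
```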
Collapse
Affiliation(s)
- Yi Xue
- Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, 310009, China.,Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, Zhejiang, 310009, China
| | - Ruoshui Ruan
- University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Xiuhua Hu
- Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, 310009, China
| | - Yu Kuang
- Department of Medical Physics, University of Nevada, 4505 S Maryland Pkwy Box 453037, Las Vegas, NV, 89154-3037, USA
| | - Jing Wang
- Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, 310009, China
| | - Yong Long
- University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Tianye Niu
- Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang, 310009, China.,Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, Zhejiang, 310009, China
| |
Collapse
|
50
|
Cao Q, Zbijewski W, Sisniega A, Yorkston J, Siewerdsen JH, Stayman JW. Multiresolution iterative reconstruction in high-resolution extremity cone-beam CT. Phys Med Biol 2016; 61:7263-7281. [PMID: 27694701 DOI: 10.1088/0031-9155/61/20/7263] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Application of model-based iterative reconstruction (MBIR) to high-resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support of the acquired projections be reconstructed, thus precluding acceleration by restricting the reconstruction to a region of interest. To reduce the computational burden of high-resolution MBIR, we propose a multiresolution penalized weighted least-squares (PWLS) algorithm in which the volume is parameterized as a union of fine and coarse voxel grids, combined with selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels, and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either region for downsampling factors of up to 4×. For a typical extremity CBCT volume size, this downsampling makes the reconstruction more than five times faster than a brute-force solution that applies the fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high-resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field of view.
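To illustrate the multiresolution parameterization in the simplest 2-D case, the sketch below assembles one image from a fine-grid region of interest and a nearest-neighbour-upsampled coarse background; the names, the 2-D simplification, and the upsampling rule are assumptions for illustration only.

```python
import numpy as np

def compose_volume(x_fine, x_coarse, roi_mask, factor=4):
    """Sketch: assemble a single image from a fine-grid ROI and a coarse-grid
    background, as in a multiresolution fine/coarse parameterization.

    x_fine   : values on the fine grid (full image shape; only ROI entries used)
    x_coarse : values on the coarse grid, downsampled by `factor` in each axis
    roi_mask : boolean mask on the fine grid marking the region of interest
    """
    # Upsample the coarse background to the fine grid by nearest-neighbour repetition
    background = x_coarse.repeat(factor, axis=0).repeat(factor, axis=1)
    background = background[:roi_mask.shape[0], :roi_mask.shape[1]]

    # Fine voxels inside the ROI, coarse (upsampled) voxels elsewhere
    return np.where(roi_mask, x_fine, background)
```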
Collapse
Affiliation(s)
- Qian Cao
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
| | | | | | | | | | | |
Collapse
|