101. Xu J, Taguchi K, Tsui BMW. Statistical projection completion in X-ray CT using consistency conditions. IEEE Trans Med Imaging 2010;29:1528-40. [PMID: 20442046] [PMCID: PMC3097419] [DOI: 10.1109/tmi.2010.2048335]
Abstract
Projection data incompleteness arises in many situations relevant to X-ray computed tomography (CT) imaging. We propose a penalized maximum likelihood statistical sinogram restoration approach that incorporates the Helgason-Ludwig (HL) consistency conditions to accommodate projection data incompleteness. Image reconstruction is then performed by filtered backprojection (FBP) in a second step. In our problem formulation, the objective function consists of the log-likelihood of the X-ray CT data and a penalty term; the HL conditions pose a linear constraint on the restored sinogram and can be implemented efficiently via fast Fourier transform (FFT) and inverse FFT. We derive an iterative algorithm that increases the objective function monotonically. The proposed algorithm is applied to both computer-simulated data and real patient data. We study different factors in the problem formulation that affect the properties of the final FBP-reconstructed images, including the data truncation level, the amount of prior knowledge on the object support, and different approximations of the statistical distribution of the available projection data. We also compare its performance with an analytical truncation-artifacts reduction method. The proposed method greatly improves both the accuracy and the precision of the reconstructed images within the scan field-of-view, and to a certain extent recovers the truncated peripheral region of the object. The proposed method may also be applied in areas such as limited-angle tomography, metal artifacts reduction, and sparse-sampling imaging.
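The HL conditions constrain the angular Fourier content of the sinogram's radial moments, which is what makes an FFT-based implementation natural. Below is a minimal sketch of a consistency check in that formulation, assuming an ideal parallel-beam sinogram sampled over [0, 2*pi) with uniform radial spacing; the function and its energy-fraction diagnostic are illustrative, not the paper's restoration algorithm.

```python
import numpy as np

def hl_violation(sino, s, max_order=4):
    """For each moment order n, return the fraction of the moment curve's
    energy in angular frequencies forbidden by the Helgason-Ludwig (HL)
    conditions: M_n(theta) may contain only frequencies k with |k| <= n
    and k = n (mod 2). Zeroing the forbidden FFT coefficients and inverse
    transforming would project the moments onto the consistent subspace.

    sino : (n_theta, n_s) parallel-beam sinogram, theta sampled on [0, 2*pi)
    s    : (n_s,) radial sample positions (uniform spacing)
    """
    n_theta = sino.shape[0]
    k = np.fft.fftfreq(n_theta, d=1.0 / n_theta).round().astype(int)
    ds = s[1] - s[0]
    fractions = []
    for n in range(max_order + 1):
        m_n = (sino * s**n).sum(axis=1) * ds   # M_n(theta) = int s^n p(s, theta) ds
        c = np.fft.fft(m_n)                    # angular Fourier coefficients
        allowed = (np.abs(k) <= n) & ((k - n) % 2 == 0)
        energy = (np.abs(c) ** 2).sum()
        bad = (np.abs(c[~allowed]) ** 2).sum()
        fractions.append(bad / max(energy, 1e-30))
    return fractions
```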
Affiliation(s)
- Jingyan Xu, Katsuyuki Taguchi, Benjamin M. W. Tsui: Division of Medical Imaging Physics, Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
102. Wang G, Qi J. Generalized algorithms for direct reconstruction of parametric images from dynamic PET data. IEEE Trans Med Imaging 2009;28:1717-26. [PMID: 19447699] [PMCID: PMC2901800] [DOI: 10.1109/tmi.2009.2021851]
Abstract
Indirect and direct methods have been developed for reconstructing parametric images from dynamic positron emission tomography (PET) data. Indirect methods are simple and easy to implement because reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from dynamic PET sinograms and, in theory, can be statistically more efficient, but the algorithms are often difficult to implement and are very specific to the kinetic model being used. This paper presents a class of generalized algorithms for direct reconstruction of parametric images that are relatively easy to implement and can be adapted to different kinetic models. The proposed algorithms use the optimization transfer principle to convert the maximization of a penalized likelihood into a pixel-wise weighted least squares (WLS) kinetic fitting problem at each iteration. Thus, they can employ existing WLS algorithms developed for kinetic models. The proposed algorithms resemble the empirical iterative implementation of the indirect approach, but converge to a solution of the direct formulation. Computer simulations showed that the proposed direct reconstruction algorithms are flexible and achieve a better bias-variance tradeoff than indirect reconstruction methods.
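The nested structure the abstract describes can be sketched as follows, assuming a plain ML-EM surrogate step per time frame and a user-supplied WLS kinetic fitter. This is a hedged skeleton: `tac_model` and `wls_fit` are hypothetical placeholders, and the paper's surrogate weights are more general than the simple sensitivities used here.

```python
import numpy as np

def direct_parametric_recon(y, A, tac_model, wls_fit, theta0, n_iter=20, eps=1e-12):
    """Skeleton of the nested iteration: an ML-EM-style image update per time
    frame produces an intermediate activity estimate, which is then fitted
    pixel-by-pixel with an existing weighted least-squares (WLS) kinetic routine.

    y      : (n_frames, n_bins) dynamic sinograms
    A      : (n_bins, n_pixels) system matrix
    theta0 : (n_pixels, n_params) initial parametric image
    """
    theta = theta0.copy()
    sens = A.sum(axis=0) + eps                  # pixel sensitivities
    for _ in range(n_iter):
        x = tac_model(theta)                    # (n_frames, n_pixels) model TACs
        x_int = np.empty_like(x)
        for t in range(y.shape[0]):             # ML-EM surrogate step, frame by frame
            ybar = A @ x[t] + eps
            x_int[t] = (x[t] / sens) * (A.T @ (y[t] / ybar))
        for j in range(theta.shape[0]):         # decoupled pixel-wise WLS kinetic fit
            theta[j] = wls_fit(x_int[:, j], np.full(y.shape[0], sens[j]))
    return theta
```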
103. Chen Y, Hao L, Ye X, Chen W, Luo L, Yin X. PET transmission tomography using a novel nonlocal MRF prior. Comput Med Imaging Graph 2009;33:623-33. [PMID: 19717279] [DOI: 10.1016/j.compmedimag.2009.06.005]
Abstract
In positron emission tomography (PET), transmission scans can be performed to estimate attenuation correction factors (ACFs), which are in turn used to correct the emission scans; such attenuation correction is crucial for quantitatively accurate PET reconstructions. Long acquisition or scan times alleviate the noise in these count-limited scans, but at the cost of patient discomfort and motion, so transmission scans are often kept short and consequently suffer from noise. Reconstruction methods capable of overcoming this noise are therefore highly desirable. In this article, we apply a nonlocal-prior Bayesian reconstruction method to PET transmission tomography. The prior model is based on our assumption that the attenuation values vary smoothly, with occasional discontinuities at anatomical borders. Experiments validate that reconstructions using the nonlocal prior produce better transmission images and overcome the noise even when the scan time is relatively short.
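A generic patch-similarity weighting of the kind nonlocal priors build on (NL-means style) can be sketched as below; the window sizes, Gaussian kernel, and normalization are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def nonlocal_weights(img, j, search=10, patch=3, h=0.05):
    """Patch-similarity weights w_jk over a search window around pixel
    j = (row, col); a nonlocal prior then penalizes sum_k w_jk*(mu_j - mu_k)^2.
    Assumes j is an interior pixel (at least patch//2 from the border).
    """
    r, c = j
    p = patch // 2
    ref = img[r - p:r + p + 1, c - p:c + p + 1]
    w = {}
    for rr in range(r - search, r + search + 1):
        for cc in range(c - search, c + search + 1):
            if (rr, cc) == (r, c):
                continue
            cand = img[rr - p:rr + p + 1, cc - p:cc + p + 1]
            if cand.shape != ref.shape:
                continue  # skip windows truncated by the image border
            d2 = ((ref - cand) ** 2).mean()   # patch distance
            w[(rr, cc)] = np.exp(-d2 / h**2)  # similarity -> weight
    total = sum(w.values())
    return {k: v / total for k, v in w.items()}
```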
Affiliation(s)
- Yang Chen: The Laboratory of Image Science and Technology, Southeast University, China; The School of Biomedical Engineering, Southern Medical University, China
104. Chen Y, Gao D, Nie C, Luo L, Chen W, Yin X, Lin Y. Bayesian statistical reconstruction for low-dose X-ray computed tomography using an adaptive-weighting nonlocal prior. Comput Med Imaging Graph 2009;33:495-500. [PMID: 19515533] [DOI: 10.1016/j.compmedimag.2008.12.007]
Abstract
How to reduce the radiation dose delivered to patients has been an important concern since the introduction of computed tomography (CT). Though clinically desirable, low-dose CT images can be severely degraded by excessive quantum noise under extremely low X-ray dose conditions. Bayesian statistical reconstructions outperform traditional filtered back-projection (FBP) reconstructions by accurately modeling the physical effects of the system and the statistical character of the measurement data. This work aims to improve the quality of low-dose CT images using a novel adaptive-weighting (AW) nonlocal prior in a statistical reconstruction approach. Compared to traditional local priors, the proposed prior can adaptively and selectively exploit global image information, imposing an effective resolution-preserving and noise-removing regularization on the reconstruction. Experiments validate that reconstructions using the proposed prior achieve excellent performance for X-ray CT with low-dose scans.
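Schematically, such a prior enters a penalized-likelihood objective of roughly the following form (generic notation, not the paper's exact formulation), where N_j is a large nonlocal search window around pixel j and the adaptive weights w_jk are computed from patch similarity and updated during reconstruction:

```latex
\hat{\mu} \;=\; \arg\max_{\mu \ge 0}\; L(y \mid \mu) \;-\; \beta \sum_{j} \sum_{k \in N_j} w_{jk}\,(\mu_j - \mu_k)^2
```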
Affiliation(s)
- Yang Chen: The Laboratory of Image Science and Technology, Southeast University, China
105. Xu J, Tsui BMW. Electronic noise modeling in statistical iterative reconstruction. IEEE Trans Image Process 2009;18:1228-38. [PMID: 19398410] [PMCID: PMC3107070] [DOI: 10.1109/tip.2009.2017139]
Abstract
We consider electronic noise modeling in tomographic image reconstruction when the measured signal is the sum of a Gaussian distributed electronic noise component and another random variable whose log-likelihood function satisfies a certain linearity condition. Examples of such likelihood functions include the Poisson distribution and an exponential dispersion (ED) model that can approximate the signal statistics in integration mode X-ray detectors. We formulate the image reconstruction problem as a maximum-likelihood estimation problem. Using an expectation-maximization approach, we demonstrate that a reconstruction algorithm can be obtained following a simple substitution rule from the one previously derived without electronic noise considerations. To illustrate the applicability of the substitution rule, we present examples of a fully iterative reconstruction algorithm and a sinogram smoothing algorithm both in transmission CT reconstruction when the measured signal contains additive electronic noise. Our simulation studies show the potential usefulness of accurate electronic noise modeling in low-dose CT applications.
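The compound Poisson-plus-Gaussian case admits a direct numerical evaluation of the conditional expectation that an EM-style substitution rule relies on. A minimal sketch follows (illustrative names; truncated-sum evaluation, not the paper's general exponential-dispersion treatment):

```python
import numpy as np
from scipy.stats import norm, poisson

def poisson_cond_mean(y, lam, sigma, n_max=None):
    """E[N | Y = y] for Y = N + eps, with N ~ Poisson(lam) and
    eps ~ Normal(0, sigma^2): the quantity an EM-style substitution rule
    would use in place of the noisy measurement y.
    """
    if n_max is None:
        n_max = int(lam + 10.0 * np.sqrt(lam + 1.0) + 10)
    n = np.arange(n_max + 1)
    w = poisson.pmf(n, lam) * norm.pdf(y - n, scale=sigma)  # posterior weights
    return float((n * w).sum() / w.sum())
```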
Affiliation(s)
- Jingyan Xu: Johns Hopkins University, Baltimore, MD 21287-0859, USA
106. Chueh HS, Tsai WK, Chang CC, Chang SM, Su KH, Chen JC. Development of novel statistical reconstruction algorithms for poly-energetic X-ray computed tomography. Comput Methods Programs Biomed 2008;92:289-93. [PMID: 18508153] [DOI: 10.1016/j.cmpb.2008.04.002]
Abstract
The beam-hardening effect is a common problem affecting the quantitative accuracy of X-ray computed tomography (CT). We have developed two statistical reconstruction algorithms for poly-energetic X-ray CT that can effectively reduce the beam-hardening effect. Phantom tests were used to evaluate our approach in comparison with traditional correction methods. Unlike previous methods, our algorithm utilizes multiple energy-corresponding blank scans to estimate the attenuation map for a particular energy spectrum; it is therefore an energy-selective reconstruction. In addition to its benefits over other statistical algorithms for poly-energetic reconstruction, our algorithm has the advantage of not requiring prior knowledge of the object material, the energy spectrum of the source, or the energy sensitivity of the detector. The results showed an improvement in coefficient of variation, uniformity, and signal-to-noise ratio; overall, this novel approach produces a better beam-hardening correction.
Affiliation(s)
- Ho-Shiang Chueh: Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan, ROC
107. Caruso S, Murphy MF, Jatuff F, Chawla R. Nondestructive determination of fresh and spent nuclear fuel rod density distributions through computerised gamma-ray transmission tomography. J Nucl Sci Technol 2008. [DOI: 10.1080/18811248.2008.9711484]
108. Ahn S, Leahy RM. Analysis of resolution and noise properties of nonquadratically regularized image reconstruction methods for PET. IEEE Trans Med Imaging 2008;27:413-24. [PMID: 18334436] [DOI: 10.1109/tmi.2007.911549]
Abstract
We present accurate and efficient methods for estimating the spatial resolution and noise properties of nonquadratically regularized image reconstruction for positron emission tomography (PET). It is well known that quadratic regularization tends to over-smooth sharp edges. Many types of edge-preserving nonquadratic penalties have been proposed to overcome this problem. However, there has been little research on the quantitative analysis of nonquadratic regularization due to its nonlinearity. In contrast, quadratically regularized estimators are approximately linear and are well understood in terms of resolution and variance properties. We derive new approximate expressions for the linearized local perturbation response (LLPR) and variance using the Taylor expansion with the remainder term. Although the expressions are implicit, we can use them to accurately predict resolution and variance for nonquadratic regularization where the conventional expressions based on the first-order Taylor truncation fail. They also motivate us to extend the use of a certainty-based modified penalty to nonquadratic regularization cases in order to achieve spatially uniform perturbation responses, analogous to uniform spatial resolution in quadratic regularization. Finally, we develop computationally efficient methods for predicting resolution and variance of nonquadratically regularized reconstruction and present simulations that illustrate the validity of these methods.
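For context, the conventional first-order approximations that the abstract says fail under nonquadratic regularization take roughly the following form for a penalized-likelihood estimator with Fisher information F = A'WA (notation assumed here, not copied from the paper):

```latex
\hat{x} = \arg\max_x\, L(y \mid x) - \beta R(x), \qquad
\mathrm{lir}^{j}(\hat{x}) \approx \big[F + \beta \nabla^{2} R(\hat{x})\big]^{-1} F\, e^{j},
\qquad
\mathrm{Cov}(\hat{x}) \approx \big[F + \beta \nabla^{2} R(\hat{x})\big]^{-1} F\, \big[F + \beta \nabla^{2} R(\hat{x})\big]^{-1}
```

The paper instead derives implicit LLPR and variance expressions via a Taylor expansion with a remainder term, which stay accurate where the penalty Hessian varies strongly, for example near edges.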
Affiliation(s)
- Sangtae Ahn: Signal and Image Processing Institute, University of Southern California, Los Angeles, CA 90089, USA
109. Figueiredo MAT, Bioucas-Dias JM, Nowak RD. Majorization-minimization algorithms for wavelet-based image restoration. IEEE Trans Image Process 2007;16:2980-91. [PMID: 18092597] [DOI: 10.1109/tip.2007.909318]
Abstract
Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems involving the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions which are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; the presence of the convolution operator destroys the separability which underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using l1 bounds for nonconvex regularizers; the experiments confirm the superior performance of this method when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms reveals their relative merits for different standard types of scenarios.
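The best-known member of the quadratic-majorization family analyzed here is iterative shrinkage/thresholding (IST). A minimal sketch for the l1-regularized problem, assuming A and At are function handles for the observation operator and its adjoint, and alpha upper-bounds the largest eigenvalue of A'A (names illustrative):

```python
import numpy as np

def ist_restore(y, A, At, lam, alpha, n_iter=100):
    """IST for  min_x 0.5*||y - A(x)||^2 + lam*||x||_1,  obtained by majorizing
    the data term with a separable quadratic (valid when alpha >= ||A'A||).
    In deconvolution under a wavelet prior, A is typically the blur composed
    with an inverse wavelet transform, and x holds wavelet coefficients.
    """
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x = At(y)                                    # simple initialization
    for _ in range(n_iter):
        grad = At(A(x) - y)                      # gradient of the data term
        x = soft(x - grad / alpha, lam / alpha)  # exact minimizer of the majorizer
    return x
```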
Affiliation(s)
- Mário A. T. Figueiredo: Instituto de Telecomunicações, Technical University of Lisbon, 1049-001 Lisboa, Portugal
110. Jacobson MW, Fessler JA. An expanded theoretical treatment of iteration-dependent majorize-minimize algorithms. IEEE Trans Image Process 2007;16:2411-22. [PMID: 17926925] [PMCID: PMC2750827] [DOI: 10.1109/tip.2007.904387]
Abstract
The majorize-minimize (MM) optimization technique has received considerable attention in signal and image processing applications, as well as in the statistics literature. At each iteration of an MM algorithm, one constructs a tangent majorant function that majorizes the given cost function and is equal to it at the current iterate. The next iterate is obtained by minimizing this tangent majorant function, resulting in a sequence of iterates that reduces the cost function monotonically. A well-known special case of MM methods is the class of expectation-maximization algorithms. In this paper, we expand on previous analyses of MM, due to Fessler and Hero, that allowed the tangent majorants to be constructed in iteration-dependent ways. This paper also overcomes an error in one of those earlier analyses. There are three main aspects in which our analysis builds upon previous work. First, our treatment relaxes many assumptions related to the structure of the cost function, feasible set, and tangent majorants. For example, the cost function can be nonconvex and the feasible set for the problem can be any convex set. Second, we propose convergence conditions, based on upper curvature bounds, that can be easier to verify than more standard continuity conditions. Furthermore, these conditions allow for considerable design freedom in the iteration-dependent behavior of the algorithm. Finally, we give an original characterization of the local region of convergence of MM algorithms based on connected (e.g., convex) tangent majorants. For such algorithms, cost function minimizers will locally attract the iterates over larger neighborhoods than typically is guaranteed with other methods. This expanded treatment widens the scope of the MM algorithm designs that can be considered for signal and image processing applications, allows us to verify the convergent behavior of previously published algorithms, and gives a fuller understanding overall of how these algorithms behave.
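The tangent-majorant conditions and the resulting monotonicity argument fit in two lines (standard MM notation; in this paper the majorant phi_k may change with the iteration k):

```latex
\phi_k(x; x^k) \ge \Psi(x)\;\;\forall x, \qquad \phi_k(x^k; x^k) = \Psi(x^k), \qquad
x^{k+1} = \arg\min_x \phi_k(x; x^k)
\;\Longrightarrow\;
\Psi(x^{k+1}) \le \phi_k(x^{k+1}; x^k) \le \phi_k(x^k; x^k) = \Psi(x^k)
```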
111. Morrison RL, Do MN, Munson DC. SAR image autofocus by sharpness optimization: a theoretical study. IEEE Trans Image Process 2007;16:2309-21. [PMID: 17784604] [DOI: 10.1109/tip.2007.903252]
Abstract
Synthetic aperture radar (SAR) autofocus techniques that optimize sharpness metrics can produce excellent restorations in comparison with conventional autofocus approaches. To help formalize the understanding of metric-based SAR autofocus methods, and to gain more insight into their performance, we present a theoretical analysis of these techniques using simple image models. Specifically, we consider the intensity-squared metric and a dominant point-targets image model, and derive expressions for the resulting objective function. We examine the conditions under which the perfectly focused image models correspond to stationary points of the objective function. A key contribution is that we demonstrate formally, for the specific case of intensity-squared minimization autofocus, the mechanism by which metric-based methods utilize the multichannel defocusing model of SAR autofocus to enforce the stationary point property for multiple image columns. Furthermore, our analysis shows that the objective function has a special separable property through which it can be well approximated locally by a sum of 1-D functions of each phase error component. This allows fast performance through solving a sequence of 1-D optimization problems for each phase component. Simulation results using the proposed models and actual SAR imagery confirm that the analysis extends well to realistic situations.
Affiliation(s)
- Robert L. Morrison: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
112. O'Sullivan JA, Benac J. Alternating minimization algorithms for transmission tomography. IEEE Trans Med Imaging 2007;26:283-97. [PMID: 17354635] [DOI: 10.1109/tmi.2006.886806]
Abstract
A family of alternating minimization algorithms for finding maximum-likelihood estimates of attenuation functions in transmission X-ray tomography is described. The model from which the algorithms are derived includes polyenergetic photon spectra, background events, and nonideal point spread functions. The maximum-likelihood image reconstruction problem is reformulated as a double minimization of the I-divergence. A novel application of the convex decomposition lemma results in an alternating minimization algorithm that monotonically decreases the objective function. Each step of the minimization is in closed form. The family of algorithms includes variations that use ordered subset techniques for increasing the speed of convergence. Simulations demonstrate the ability to correct the cupping artifact due to beam hardening and the ability to reduce streaking artifacts that arise from beam hardening and background events.
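For reference, the I-divergence named here is the standard one,

```latex
I(p \,\|\, q) \;=\; \sum_i \Big[\, p_i \ln\frac{p_i}{q_i} \;-\; p_i \;+\; q_i \,\Big] \;\ge\; 0,
```

and for Poisson data, maximizing the log-likelihood over the attenuation map mu is equivalent to minimizing I(y || ybar(mu)); the paper's double-minimization reformulation introduces a second, auxiliary argument so that each alternating step has a closed form.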
Affiliation(s)
- Joseph A. O'Sullivan: Electronic Systems and Signals Research Laboratory, Department of Electrical and Systems Engineering, Washington University, St. Louis, MO 63130, USA
113. Zeng R, Fessler JA, Balter JM. Estimating 3-D respiratory motion from orbiting views by tomographic image registration. IEEE Trans Med Imaging 2007;26:153-63. [PMID: 17304730] [PMCID: PMC2851164] [DOI: 10.1109/tmi.2006.889719]
Abstract
Respiratory motion remains a significant source of errors in treatment planning for the thorax and upper abdomen. Recently, we proposed a method to estimate two-dimensional (2-D) object motion from a sequence of slowly rotating X-ray projection views, which we called deformation from orbiting views (DOV). In this method, we model the motion as a time-varying deformation of a static prior image of the anatomy. We then optimize the parameters of the motion model by maximizing the similarity between the modeled and actual projection views. This paper extends the method to full three-dimensional (3-D) motion and cone-beam projection views. We address several practical issues for using a cone-beam computed tomography (CBCT) scanner that is integrated in a radiotherapy system, such as the effects of Compton scatter and the limited gantry rotation for one breathing cycle. We also present simulation and phantom results to illustrate the performance of this method.
Affiliation(s)
- Rongping Zeng: Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA
114. Murphy RJ, Yan S, O'Sullivan JA, Snyder DL, Whiting BR, Politte DG, Lasio G, Williamson JF. Pose estimation of known objects during transmission tomographic image reconstruction. IEEE Trans Med Imaging 2006;25:1392-404. [PMID: 17024842] [DOI: 10.1109/tmi.2006.880673]
Abstract
We address the problem of image formation in transmission tomography when metal objects of known composition and shape, but unknown pose, are present in the scan subject. Using an alternating minimization (AM) algorithm, derived from a model in which the detected data are viewed as Poisson-distributed photon counts, we seek to eliminate the streaking artifacts commonly seen in filtered back projection images containing high-contrast objects. We show that this algorithm, which minimizes the I-divergence (or equivalently, maximizes the log-likelihood) between the measured data and model-based estimates of the means of the data, converges much faster when knowledge of the high-density materials (such as brachytherapy applicators or prosthetic implants) is exploited. The algorithm incorporates a steepest descent-based method to find the position and orientation (collectively called the pose) of the known objects. This pose is then used to constrain the image pixels to their known attenuation values, or, for example, to form a mask on the "missing" projection data in the shadow of the objects. Results from two-dimensional simulations are shown in this paper. The extension of the model and methods used to three dimensions is outlined.
Affiliation(s)
- Ryan J. Murphy: Advanced Information Systems, General Dynamics, Ypsilanti, MI 48197, USA
115. La Rivière PJ, Vargas PA. Monotonic penalized-likelihood image reconstruction for X-ray fluorescence computed tomography. IEEE Trans Med Imaging 2006;25:1117-29. [PMID: 16967798] [DOI: 10.1109/tmi.2006.877441]
Abstract
In this paper, we derive a monotonic penalized-likelihood algorithm for image reconstruction in X-ray fluorescence computed tomography (XFCT) when the attenuation maps at the energies of the fluorescence X-rays are unknown. In XFCT, a sample is irradiated with pencil beams of monochromatic synchrotron radiation that stimulate the emission of fluorescence X-rays from atoms of elements whose K- or L-edges lie below the energy of the stimulating beam. Scanning and rotating the object through the beam allows for acquisition of a tomographic dataset that can be used to reconstruct images of the distribution of the elements in question. XFCT is a stimulated emission tomography modality, and it is thus necessary to correct for attenuation of the incident and fluorescence photons. The attenuation map is, however, generally known only at the stimulating beam energy and not at the energies of the various fluorescence X-rays of interest. We have developed a penalized-likelihood image reconstruction strategy for this problem. The approach alternates between updating the distribution of a given element and updating the attenuation map for that element's fluorescence X-rays. The approach is guaranteed to increase the penalized likelihood at each iteration. Because the joint objective function is not necessarily concave, the approach may drive the solution to a local maximum. To encourage the algorithm to seek out a reasonable local maximum, we include in the objective function a prior that encourages a relationship, based on physical considerations, between the fluorescence attenuation map and the distribution of the element being reconstructed.
116. La Rivière PJ, Bian J, Vargas PA. Penalized-likelihood sinogram restoration for computed tomography. IEEE Trans Med Imaging 2006;25:1022-36. [PMID: 16894995] [DOI: 10.1109/tmi.2006.875429]
Abstract
We formulate computed tomography (CT) sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. CT measurement data are degraded by a number of factors-including beam hardening and off-focal radiation-that produce artifacts in reconstructed images unless properly corrected. Currently, such effects are addressed by a sequence of sinogram-preprocessing steps, including deconvolution corrections for off-focal radiation, that have the potential to amplify noise. Noise itself is generally mitigated through apodization of the reconstruction kernel, which effectively ignores the measurement statistics, although in high-noise situations adaptive filtering methods that loosely model data statistics are sometimes applied. As an alternative, we present a general imaging model relating the degraded measurements to the sinogram of ideal line integrals and propose to estimate these line integrals by iteratively optimizing a statistically based objective function. We consider three different strategies for estimating the set of ideal line integrals, one based on direct estimation of ideal "monochromatic" line integrals that have been corrected for single-material beam hardening, one based on estimation of ideal "polychromatic" line integrals that can be readily mapped to monochromatic line integrals, and one based on estimation of ideal transmitted intensities, from which ideal, monochromatic line integrals can be readily estimated. The first two approaches involve maximization of a penalized Poisson-likelihood objective function while the third involves minimization of a quadratic penalized weighted least squares (PWLS) objective applied in the transmitted intensity domain. We find that at low exposure levels typical of those being considered for screening CT, the Poisson-likelihood based approaches outperform the PWLS objective as well as a standard approach based on adaptive filtering followed by deconvolution. At higher exposure levels, the approaches all perform similarly.
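The first two strategies maximize a penalized Poisson-likelihood objective of roughly the following form (generic notation assumed here, not the paper's):

```latex
\hat{\ell} \;=\; \arg\max_{\ell}\; \sum_i \Big[\, y_i \ln \bar{y}_i(\ell) \;-\; \bar{y}_i(\ell) \,\Big] \;-\; \beta R(\ell)
```

where ybar_i(l) models the degraded measurement means (beam hardening, off-focal radiation) as a function of the ideal line integrals, and R is a roughness penalty; the third strategy replaces the Poisson term with a quadratic PWLS term in the transmitted-intensity domain.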
117. Feng B, Fessler JA, King MA. Incorporation of system resolution compensation (RC) in the ordered-subset transmission (OSTR) algorithm for transmission imaging in SPECT. IEEE Trans Med Imaging 2006;25:941-9. [PMID: 16827494] [DOI: 10.1109/tmi.2006.876151]
Abstract
In order to reconstruct attenuation maps with improved spatial resolution and quantitative accuracy, we developed an approximate method of incorporating system resolution compensation (RC) in the ordered-subset transmission (OSTR) algorithm for transmission reconstruction. Our method approximately models the blur caused by the finite intrinsic detector resolution, the nonideal source collimation, and the detector collimation. We derived the formulation using the optimization transfer principle, as in the derivation of the OSTR algorithm. The formulation includes one forward-blur step and one back-blur step, which do not severely slow down reconstruction. The formulation could be applicable to various transmission geometries, such as point-source, line-source, and sheet-source systems. Through computer simulations of the MCAT phantom and transmission measurements of the air-filled Data Spectrum Deluxe single photon emission computed tomography (SPECT) phantom, on a system that employed a cone-beam geometry and a system that employed a scanning-line-source geometry, we showed that incorporation of RC increased spatial resolution and improved the quantitative accuracy of reconstruction. In simulation studies, attenuation maps reconstructed with RC correction improved the quantitative accuracy of emission reconstruction.
Affiliation(s)
- Bing Feng: Department of Radiology, University of Massachusetts Medical School, Worcester, MA 01655, USA
118. Allain M, Idier J, Goussard Y. On global and local convergence of half-quadratic algorithms. IEEE Trans Image Process 2006;15:1130-42. [PMID: 16671294] [DOI: 10.1109/tip.2005.864173]
Abstract
This paper provides original results on the global and local convergence properties of half-quadratic (HQ) algorithms resulting from the Geman and Yang (GY) and Geman and Reynolds (GR) primal-dual constructions. First, we show that the convergence domain of the GY algorithm can be extended with the benefit of an improved convergence rate. Second, we provide a precise comparison of the convergence rates for both algorithms. This analysis shows that the GR form does not benefit from a better convergence rate in general. Moreover, the GY iterates often take advantage of a low cost implementation. In this case, the GY form is usually faster than the GR form from the CPU time viewpoint.
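For reference, the two primal-dual constructions compared here express an edge-preserving potential phi through an auxiliary variable b (standard GR and GY forms, with psi the corresponding dual function and a > 0 a scale constant; notation assumed here):

```latex
\text{GR:}\;\; \phi(t) = \min_{b} \Big[ \tfrac{1}{2}\, b\, t^{2} + \psi_{\mathrm{GR}}(b) \Big],
\qquad
\text{GY:}\;\; \phi(t) = \min_{b} \Big[ \tfrac{1}{2a}\,(t - b)^{2} + \psi_{\mathrm{GY}}(b) \Big]
```

Alternating minimization of the augmented criterion over the image and over b yields the two HQ algorithm families whose convergence domains and rates the paper compares.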
Affiliation(s)
- Marc Allain: Institut de Recherche en Communications et Cybernétique de Nantes (IRCCyN), BP 92101, 44321 Nantes Cedex 03, France
119. Ahn S, Fessler JA, Blatt D, Hero AO. Convergent incremental optimization transfer algorithms: application to tomography. IEEE Trans Med Imaging 2006;25:283-96. [PMID: 16524085] [DOI: 10.1109/tmi.2005.862740]
Abstract
No convergent ordered subsets (OS) type image reconstruction algorithms for transmission tomography have been proposed to date. In contrast, in emission tomography, there are two known families of convergent OS algorithms: methods that use relaxation parameters, and methods based on the incremental expectation-maximization (EM) approach. This paper generalizes the incremental EM approach by introducing a general framework, "incremental optimization transfer." The proposed algorithms accelerate convergence speeds and ensure global convergence without requiring relaxation parameters. The general optimization transfer framework allows the use of a very broad family of surrogate functions, enabling the development of new algorithms. This paper provides the first convergent OS-type algorithm for (nonconcave) penalized-likelihood (PL) transmission image reconstruction by using separable paraboloidal surrogates (SPS) which yield closed-form maximization steps. We found it is very effective to achieve fast convergence rates by starting with an OS algorithm with a large number of subsets and switching to the new "transmission incremental optimization transfer (TRIOT)" algorithm. Results show that TRIOT is faster in increasing the PL objective than nonincremental ordinary SPS and even OS-SPS yet is convergent.
Affiliation(s)
- Sangtae Ahn: Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI 48109-2122, USA
120. Comparison of quadratic- and median-based roughness penalties for penalized-likelihood sinogram restoration in computed tomography. Int J Biomed Imaging 2006;2006:41380. [PMID: 23165029] [PMCID: PMC2324011] [DOI: 10.1155/ijbi/2006/41380]
Abstract
We have compared the performance of two different penalty choices for a penalized-likelihood sinogram-restoration strategy we have been developing. One is a quadratic penalty we have employed previously and the other is a new median-based penalty. We compared the approaches to a noniterative adaptive filter that loosely but not explicitly models data statistics. We found that the two approaches produced similar resolution-variance tradeoffs to each other and that they outperformed the adaptive filter in the low-dose regime, which suggests that the particular choice of penalty in our approach may be less important than the fact that we are explicitly modeling data statistics at all. Since the quadratic penalty allows for derivation of an algorithm that is guaranteed to monotonically increase the penalized-likelihood objective function, we find it to be preferable to the median-based penalty.
121. Hwang D, Zeng GL. A new simple iterative reconstruction algorithm for SPECT transmission measurement. Med Phys 2005;32:2312-9. [PMID: 16121587] [DOI: 10.1118/1.1944288]
Abstract
This paper proposes a new iterative reconstruction algorithm for transmission tomography and compares it with several other methods. The new algorithm is simple and resembles the emission ML-EM algorithm in form. Due to its simplicity, it is easy to implement, and each update is fast to compute. The algorithm also guarantees non-negative solutions. Evaluations are performed using simulation studies and real phantom data. Comparisons with other algorithms, such as the convex, gradient, and logMLEM algorithms, show that the proposed algorithm is as good as the others and performs better in some cases.
Affiliation(s)
- DoSik Hwang: Department of Bioengineering and Department of Radiology, University of Utah, Salt Lake City, Utah 84108, USA
122.
Abstract
We have developed a sinogram smoothing approach for low-dose computed tomography (CT) that seeks to estimate the line integrals needed for reconstruction from the noisy measurements by maximizing a penalized-likelihood objective function. The maximization is performed by an algorithm derived using the separable paraboloidal surrogates framework. The approach overcomes some of the computational limitations of a previously proposed spline-based penalized-likelihood sinogram smoothing approach, and it is found to yield better resolution-variance tradeoffs than both this spline-based approach and an existing adaptive filtering approach. Such sinogram smoothing approaches could be valuable when applied to the low-dose data acquired in CT screening exams, such as those being considered for lung-nodule detection.
Affiliation(s)
- Patrick J. La Rivière: Department of Radiology, The University of Chicago, Chicago, Illinois 60637, USA
123. Anderson JMM, Srinivasan R, Mair BA, Votaw JR. Accelerated penalized weighted least-squares and maximum likelihood algorithms for reconstructing transmission images from PET transmission data. IEEE Trans Med Imaging 2005;24:337-51. [PMID: 15754984] [DOI: 10.1109/tmi.2004.842453]
Abstract
We present penalized weighted least-squares (PWLS) and penalized maximum-likelihood (PML) methods for reconstructing transmission images from positron emission tomography transmission data. First, we view the problem of minimizing the weighted least-squares (WLS) and maximum likelihood objective functions as a sequence of nonnegative least-squares minimization problems. This viewpoint follows from using certain quadratic functions as surrogate functions for the WLS and maximum likelihood objective functions. Second, we construct surrogate functions for a class of penalty functions that yield closed form expressions for the iterates of the PWLS and PML algorithms. Due to the slow convergence of the PWLS and PML algorithms, accelerated versions of them are developed that are theoretically guaranteed to monotonically decrease their respective objective functions. In experiments using real phantom data, the PML images produced the most accurate attenuation correction factors. On the other hand, the PWLS images produced images with the highest levels of contrast for low-count data.
Affiliation(s)
- J. M. M. Anderson: Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
124. Li T, Wen J, Han G, Lu H, Liang Z. Evaluation of an efficient compensation method for quantitative fan-beam brain SPECT reconstruction. IEEE Trans Med Imaging 2005;24:170-9. [PMID: 15707243] [DOI: 10.1109/tmi.2004.839365]
Abstract
Fan-beam collimators are designed to improve the system sensitivity and resolution for imaging small objects, such as the human brain and breasts, in single photon emission computed tomography (SPECT). Many reconstruction algorithms have been studied and applied to this geometry to deal with the various degradation factors. This paper presents a new reconstruction approach for circular-orbit SPECT that demonstrates good performance in terms of both accuracy and efficiency. The new approach compensates for degradation factors including noise, scatter, attenuation, and the spatially variant detector response. Its uniform-attenuation approximation strategy avoids an additional transmission scan for the brain attenuation map, hence reducing the patient radiation dose and simplifying the imaging procedure. We evaluate and compare this new approach with the well-established ordered-subset expectation-maximization iterative method using Monte Carlo simulations. We perform quantitative analysis with regional bias-variance, receiver operating characteristic, and Hotelling trace studies for both methods. The results demonstrate that our reconstruction strategy has comparable performance with a significant reduction in computing time.
Affiliation(s)
- Tianfang Li: Departments of Radiology and Physics and Astronomy, State University of New York, Stony Brook, NY 11794, USA
125. López A, Molina R, Katsaggelos AK. Bayesian reconstruction for transmission tomography with scale hyperparameter estimation. Pattern Recognit Image Anal 2005. [DOI: 10.1007/11492542_56]
126. Stayman JW, Fessler JA. Efficient calculation of resolution and covariance for penalized-likelihood reconstruction in fully 3-D SPECT. IEEE Trans Med Imaging 2004;23:1543-56. [PMID: 15575411] [DOI: 10.1109/tmi.2004.837790]
Abstract
Resolution and covariance predictors have been derived previously for penalized-likelihood estimators. These predictors can provide accurate approximations to the local resolution properties and covariance functions for tomographic systems given a good estimate of the mean measurements. Although these predictors may be evaluated iteratively, circulant approximations are often made for practical computation times. However, when numerous evaluations are made repeatedly (as in penalty design or calculation of variance images), these predictors still require large amounts of computing time. In Stayman and Fessler (2000), we discussed methods for precomputing a large portion of the predictor for shift-invariant system geometries. In this paper, we generalize the efficient procedure discussed in Stayman and Fessler (2000) to shift-variant single photon emission computed tomography (SPECT) systems. This generalization relies on a new attenuation approximation and several observations on the symmetries in SPECT systems. These new general procedures apply to both two-dimensional and fully three-dimensional (3-D) SPECT models, that may be either precomputed and stored, or written in procedural form. We demonstrate the high accuracy of the predictions based on these methods using a simulated anthropomorphic phantom and fully 3-D SPECT system. The evaluation of these predictors requires significantly less computation time than traditional prediction techniques, once the system geometry specific precomputations have been made.
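The circulant approximations that such predictors rely on take roughly the following form, with Lambda_j a local frequency response of the Fisher information at voxel j and R_j that of the penalty Hessian (notation assumed here, not the paper's):

```latex
\mathrm{lir}^{j} \;\approx\; \mathcal{F}^{-1}\!\left\{ \frac{\Lambda_j(\omega)}{\Lambda_j(\omega) + \beta R_j(\omega)} \right\},
\qquad
\mathrm{Var}(\hat{x}_j) \;\approx\; \frac{1}{N}\sum_{\omega} \frac{\Lambda_j(\omega)}{\big[\Lambda_j(\omega) + \beta R_j(\omega)\big]^{2}}
```

The efficiency gains described in the abstract come from precomputing the geometry-dependent part of these local responses once per system, exploiting SPECT symmetries and an attenuation approximation.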
Affiliation(s)
- J. Webster Stayman: Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI 48109-2122, USA
127. Chang JH, Anderson JMM, Votaw JR. Regularized image reconstruction algorithms for positron emission tomography. IEEE Trans Med Imaging 2004;23:1165-75. [PMID: 15377125] [DOI: 10.1109/tmi.2004.831224]
Abstract
We develop algorithms for obtaining regularized estimates of emission means in positron emission tomography. The first algorithm iteratively minimizes a penalized maximum-likelihood (PML) objective function. It is based on standard de-coupled surrogate functions for the ML objective function and de-coupled surrogate functions for a certain class of penalty functions. As desired, the PML algorithm guarantees nonnegative estimates and monotonically decreases the PML objective function with increasing iterations. The second algorithm is based on an iteration dependent, de-coupled penalty function that introduces smoothing while preserving edges. For the purpose of making comparisons, the MLEM algorithm and a penalized weighted least-squares algorithm were implemented. In experiments using synthetic data and real phantom data, it was found that, for a fixed level of background noise, the contrast in the images produced by the proposed algorithms was the most accurate.
Affiliation(s)
- Ji-Ho Chang: Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
128. Ahn S, Fessler JA. Emission image reconstruction for randoms-precorrected PET allowing negative sinogram values. IEEE Trans Med Imaging 2004;23:591-601. [PMID: 15147012] [DOI: 10.1109/tmi.2004.826046]
Abstract
Most positron emission tomography (PET) emission scans are corrected for accidental coincidence (AC) events by real-time subtraction of delayed-window coincidences, leaving only the randoms-precorrected data available for image reconstruction. The real-time randoms precorrection compensates in mean for AC events but destroys the Poisson statistics. The exact log-likelihood for randoms-precorrected data is inconvenient, so practical approximations are needed for maximum likelihood or penalized-likelihood image reconstruction. Conventional approximations involve setting negative sinogram values to zero, which can induce positive systematic biases, particularly for scans with low counts per ray. We propose new likelihood approximations that allow negative sinogram values without requiring zero-thresholding. With negative sinogram values, the log-likelihood functions can be nonconcave, complicating maximization; nevertheless, we develop monotonic algorithms for the new models by modifying the separable paraboloidal surrogates and the maximum-likelihood expectation-maximization (ML-EM) methods. These algorithms ascend to local maximizers of the objective function. Analysis and simulation results show that the new shifted Poisson (SP) model is nearly free of systematic bias yet keeps low variance. Despite its simpler implementation, the new SP performs comparably to the saddle-point model which has shown the best performance (as to systematic bias and variance) in randoms-precorrected PET emission reconstruction.
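For orientation, the shifted Poisson (SP) idea matches the first two moments of the precorrected data: with randoms mean r_i and true coincidence mean ybar_i(x), the quantity y_i + 2 r_i is treated as Poisson with mean ybar_i(x) + 2 r_i, giving the log-likelihood (standard SP form; conventional variants threshold y_i + 2 r_i at zero, which this paper's models avoid):

```latex
L_{\mathrm{SP}}(x) \;=\; \sum_i \Big[\, (y_i + 2 r_i)\,\ln\!\big(\bar{y}_i(x) + 2 r_i\big) \;-\; \big(\bar{y}_i(x) + 2 r_i\big) \,\Big]
```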
Affiliation(s)
- Sangtae Ahn: Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI 48109-2122, USA
129. Sotthivirat S, Fessler JA. Penalized-likelihood image reconstruction for digital holography. J Opt Soc Am A 2004;21:737-50. [PMID: 15139426] [DOI: 10.1364/josaa.21.000737]
Abstract
Conventional numerical reconstruction for digital holography using a filter applied in the spatial-frequency domain to extract the primary image may yield suboptimal image quality because of the loss in high-frequency components and interference from other undesirable terms of a hologram. We propose a new numerical reconstruction approach using a statistical technique. This approach reconstructs the complex field of the object from the real-valued hologram intensity data. Because holographic image reconstruction is an ill-posed problem, our statistical technique is based on penalized-likelihood estimation. We develop a Poisson statistical model for this problem and derive an optimization transfer algorithm that monotonically decreases the cost function at each iteration. Simulation results show that our statistical technique has the potential to improve image quality in digital holography relative to conventional reconstruction techniques.
Affiliation(s)
- Saowapak Sotthivirat: National Electronics and Computer Development Center, National Science and Technology Development Agency, Ministry of Science and Technology, Klong Luang, Pathumthani 12120, Thailand
130.
Abstract
Iterative image estimation methods have been widely used in emission tomography. Accurate estimation of the uncertainty of the reconstructed images is essential for quantitative applications. While both iteration-based noise analysis and fixed-point noise analysis have been developed, current iteration-based results are limited to only a few algorithms that have an explicit multiplicative update equation and some may not converge to the fixed-point result. This paper presents a theoretical noise analysis that is applicable to a wide range of preconditioned gradient-type algorithms. Under a certain condition, the proposed method does not require an explicit expression of the preconditioner. By deriving the fixed-point expression from the iteration-based result, we show that the proposed iteration-based noise analysis is consistent with fixed-point analysis. Examples in emission tomography and transmission tomography are shown. The results are validated using Monte Carlo simulations.
Affiliation(s)
- Jinyi Qi: Department of Nuclear Medicine and Functional Imaging, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
131
|
Idris A E, Fessler JA. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation. Phys Med Biol 2003; 48:2453-77. [PMID: 12953909 DOI: 10.1088/0031-9155/48/15/314] [Citation(s) in RCA: 104] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.
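Schematically, the object and measurement models described here are (notation assumed, following the abstract):

```latex
\mu_j(E) \;=\; \rho_j \sum_k f_k(\rho_j)\, m_k(E),
\qquad
\bar{y}_i \;=\; \sum_E I_i(E)\, \exp\!\Big(-\sum_j a_{ij}\, \mu_j(E)\Big)
```

where rho_j is the unknown density of voxel j, m_k(E) are known energy-dependent mass attenuation coefficients, the weights f_k depend on density rather than on a presegmented tissue label, I_i(E) combines the source spectrum and detector response, and a_ij are intersection lengths.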
Affiliation(s)
- Idris A. Elbakri: Electrical Engineering and Computer Science Department, University of Michigan, 1301 Beal Ave, Ann Arbor, MI 48109, USA
132. Ahn S, Fessler JA. Globally convergent image reconstruction for emission tomography using relaxed ordered subsets algorithms. IEEE Trans Med Imaging 2003;22:613-26. [PMID: 12846430] [DOI: 10.1109/tmi.2003.812251]
Abstract
We present two types of globally convergent relaxed ordered subsets (OS) algorithms for penalized-likelihood image reconstruction in emission tomography: modified block sequential regularized expectation-maximization (BSREM) and relaxed OS separable paraboloidal surrogates (OS-SPS). The global convergence proof of the existing BSREM (De Pierro and Yamagishi, 2001) required a few a posteriori assumptions. By modifying the scaling functions of BSREM, we are able to prove the convergence of the modified BSREM under realistic assumptions. Our modification also makes stepsize selection more convenient. In addition, we introduce relaxation into the OS-SPS algorithm (Erdoğan and Fessler, 1999) that otherwise would converge to a limit cycle. We prove the global convergence of diagonally scaled incremental gradient methods of which the relaxed OS-SPS is a special case; main results of the proofs are from (Nedić and Bertsekas, 2001) and (Correa and Lemaréchal, 1993). Simulation results showed that both new algorithms achieve global convergence yet retain the fast initial convergence speed of conventional unrelaxed ordered subsets algorithms.
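The relaxed OS-SPS method is a special case of the diagonally scaled incremental gradient iteration whose convergence the paper proves; schematically, with subset objectives Phi = sum_m Phi_m and a diagonal scaling matrix D (notation assumed here):

```latex
x^{n,m+1} \;=\; \Big[\, x^{n,m} + \alpha_n\, D(x^{n,m})\, \nabla \Phi_m(x^{n,m}) \,\Big]_{+},
\qquad
\sum_n \alpha_n = \infty, \quad \sum_n \alpha_n^{2} < \infty
```

where the relaxation parameters alpha_n decay across full passes n over the subsets; for relaxed OS-SPS, D comes from the separable paraboloidal surrogate curvatures.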
Affiliation(s)
- Sangtae Ahn: Electrical Engineering and Computer Science Department, University of Michigan, 4415 Electrical Engineering and Computer Science Building, 1301 Beal Avenue, Ann Arbor, MI 48109-2122, USA
133. Sotthivirat S, Fessler JA. Relaxed ordered-subset algorithm for penalized-likelihood image restoration. J Opt Soc Am A 2003;20:439-49. [PMID: 12630830] [DOI: 10.1364/josaa.20.000439]
Abstract
The expectation-maximization (EM) algorithm for maximum-likelihood image recovery is guaranteed to converge, but it converges slowly. Its ordered-subset version (OS-EM) is used widely in tomographic image reconstruction because of its order-of-magnitude acceleration compared with the EM algorithm, but it does not guarantee convergence. Recently the ordered-subset, separable-paraboloidal-surrogate (OS-SPS) algorithm with relaxation has been shown to converge to the optimal point while providing fast convergence. We adapt the relaxed OS-SPS algorithm to the problem of image restoration. Because data acquisition in image restoration is different from that in tomography, we employ a different strategy for choosing subsets, using pixel locations rather than projection angles. Simulation results show that the relaxed OS-SPS algorithm can provide an order-of-magnitude acceleration over the EM algorithm for image restoration. This new algorithm now provides the speed and guaranteed convergence necessary for efficient image restoration.
Affiliation(s)
- Saowapak Sotthivirat
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan 48109, USA.
134
Narayanan MV, King MA, Byrne CL. An iterative transmission algorithm incorporating cross-talk correction for SPECT. Med Phys 2002; 29:694-700. [PMID: 12033564 DOI: 10.1118/1.1472500] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Simultaneous emission/transmission acquisitions in cardiac SPECT with a Tc99m/Gd153 source combination offer the capability for nonuniform attenuation correction. However, cross-talk of Tc99m photons downscattered into the Gd153 energy window contaminates the reconstructed transmission map used for attenuation correction. The estimated cross-talk contribution can either be subtracted prior to transmission reconstruction or incorporated in the reconstruction algorithm itself. In this work, we propose an iterative transmission algorithm (MLTG-S), based on the maximum-likelihood gradient algorithm (MLTG), that explicitly accounts for this cross-talk estimate. Clinical images were acquired on a three-headed SPECT camera, with Tc99m emission and Gd153 transmission data collected simultaneously. Subtracting the cross-talk estimate prior to transmission reconstruction can produce negative or zero values whenever the estimate is larger than or equal to the count in the transmission projection bin, especially with increased attenuator size or amount of cross-talk. This results in inaccurate attenuation coefficients for MLTG reconstructions with cross-talk subtraction. MLTG-S reconstructions, on the other hand, yield better estimates of the attenuation maps by avoiding the subtraction of the cross-talk estimate. Comparison of emission slices corrected for nonuniform attenuation reveals that inaccuracies in the reconstructed attenuation map caused by cross-talk can artificially enhance extra-cardiac activity, confounding the ability to visualize the left-ventricular walls.
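The failure mode of subtract-first processing is easy to reproduce numerically. The Python sketch below, with invented count levels, shows how many transmission bins go nonpositive after cross-talk subtraction and how keeping the cross-talk term inside the Poisson mean (the spirit of MLTG-S; the actual ML algorithm is replaced here by a crude moment-matching stand-in) avoids the resulting bias.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy transmission bin: heavy attenuation plus cross-talk (made-up numbers).
    b = 500.0                      # unattenuated transmission counts
    l_true = 4.0                   # true attenuation line integral
    crosstalk = 40.0               # estimated Tc99m downscatter in the Gd153 window

    y = rng.poisson(b * np.exp(-l_true) + crosstalk, size=100_000)

    # Subtract-first: many bins become nonpositive and must be clipped,
    # which biases -log((y - c)/b).
    frac_bad = np.mean(y - crosstalk <= 0)
    l_sub = -np.log(np.clip(y - crosstalk, 0.5, None) / b)

    # Model-based: keep c in the Poisson mean b*exp(-l) + c and estimate l;
    # moment matching on the ensemble mean stands in for the ML iteration.
    l_model = -np.log((y.mean() - crosstalk) / b)

    print(f"nonpositive after subtraction: {frac_bad:.1%}")
    print(f"subtract-then-log estimate:   {l_sub.mean():.2f}  (true {l_true})")
    print(f"model-based estimate:         {l_model:.2f}")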
135
Bowsher JE, Tornai MP, Peter J, González Trotter DE, Krol A, Gilland DR, Jaszczak RJ. Modeling the axial extension of a transmission line source within iterative reconstruction via multiple transmission sources. IEEE TRANSACTIONS ON MEDICAL IMAGING 2002; 21:200-215. [PMID: 11989845 DOI: 10.1109/42.996339] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Reconstruction algorithms for transmission tomography have generally assumed that the photons reaching a particular detector bin at a particular angle originate from a single point source. In this paper, we highlight several cases of extended transmission sources, in which it may be useful to approach the estimation of attenuation coefficients as a problem involving multiple transmission point sources. Examined in detail is the case of a fixed transmission line source with a fan-beam collimator. This geometry can result in attenuation images that have significant axial blur. Herein it is also shown, empirically, that extended transmission sources can result in biased estimates of the average attenuation, and an explanation is proposed. The finite axial resolution of the transmission line source configuration is modeled within iterative reconstruction using an expectation-maximization algorithm that was previously derived for estimating attenuation coefficients from single photon emission computed tomography (SPECT) emission data. The same algorithm is applicable to both problems because both can be thought of as involving multiple transmission sources. It is shown that modeling axial blur within reconstruction removes the bias in the average estimated attenuation and substantially improves the axial resolution of attenuation images.
Affiliation(s)
- J E Bowsher
- Duke University Medical Center, Durham, NC 27710, USA.
136
Elbakri IA, Fessler JA. Statistical image reconstruction for polyenergetic X-ray computed tomography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2002; 21:89-99. [PMID: 11929108 DOI: 10.1109/42.993128] [Citation(s) in RCA: 298] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
This paper describes a statistical image reconstruction method for X-ray computed tomography (CT) that is based on a physical model that accounts for the polyenergetic X-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. We assume that the object consists of a given number of nonoverlapping materials, such as soft tissue and bone. The attenuation coefficient of each voxel is the product of its unknown density and a known energy-dependent mass attenuation coefficient. We formulate a penalized-likelihood function for this polyenergetic model and develop an ordered-subsets iterative algorithm for estimating the unknown densities in each voxel. The algorithm monotonically decreases the cost function at each iteration when one subset is used. Applying this method to simulated X-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artifacts.
Affiliation(s)
- Idris A Elbakri
- Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor 48109-2122, USA.
137
Yu DF, Fessler JA. Edge-preserving tomographic reconstruction with nonlocal regularization. IEEE TRANSACTIONS ON MEDICAL IMAGING 2002; 21:159-173. [PMID: 11929103 DOI: 10.1109/42.993134] [Citation(s) in RCA: 42] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Tomographic image reconstruction using statistical methods can provide more accurate system modeling, statistical models, and physical constraints than the conventional filtered backprojection (FBP) method. Because of the ill-posedness of the reconstruction problem, a roughness penalty is often imposed on the solution to control noise. To avoid smoothing of edges, which are important image attributes, various edge-preserving regularization methods have been proposed. Most of these schemes rely on information from local neighborhoods to determine the presence of edges. In this paper, we propose a cost function that incorporates nonlocal boundary information into the regularization method. We use an alternating minimization algorithm with deterministic annealing to minimize the proposed cost function, jointly estimating region boundaries and object pixel values. We apply variational techniques implemented using level-set methods to update the boundary estimates; then, using the most recent boundary estimate, we minimize a space-variant quadratic cost function to update the image estimate. For the positron emission tomography transmission reconstruction application, we compare the bias-variance tradeoff of this method with that of a "conventional" penalized-likelihood algorithm with a local Huber roughness penalty.
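To make the alternating structure concrete, here is the image-update half-step in one dimension: with the boundary estimate held fixed, a space-variant quadratic penalty whose weights vanish across the estimated boundary is minimized. This is a toy stand-in under invented data; the boundary half-step, which the paper implements with level-set methods, is not reproduced.

    import numpy as np

    def smooth_given_edges(y, edge_after, beta=2.0, n_iters=200, step=0.05):
        """Minimize ||x - y||^2 + beta * sum_j w_j (x_{j+1} - x_j)^2 by
        gradient descent, where w_j = 0 across the currently estimated
        boundary (1D toy version of a space-variant quadratic cost)."""
        x = y.astype(float).copy()
        w = 1.0 - edge_after                 # w[j] weights the (j, j+1) difference
        for _ in range(n_iters):
            d = x[1:] - x[:-1]
            g = 2.0 * (x - y)
            g[1:] += 2.0 * beta * w * d
            g[:-1] -= 2.0 * beta * w * d
            x -= step * g
        return x

    # Noisy piecewise-constant signal with a boundary between samples 4 and 5.
    rng = np.random.default_rng(1)
    y = np.r_[np.ones(5), 5.0 * np.ones(5)] + 0.3 * rng.standard_normal(10)
    edges = np.zeros(9); edges[4] = 1.0      # boundary estimate from the other half-step
    print(np.round(smooth_given_edges(y, edges), 2))

Noise is smoothed within each region while the jump at the estimated boundary is preserved, which is the edge-preserving behavior the cost function is designed to achieve.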
Affiliation(s)
- Daniel F Yu
- Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor 48109-2122, USA.
138
Sotthivirat S, Fessler JA. Image recovery using partitioned-separable paraboloidal surrogate coordinate ascent algorithms. IEEE TRANSACTIONS ON IMAGE PROCESSING: A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2002; 11:306-317. [PMID: 18244633 DOI: 10.1109/83.988963] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Iterative coordinate ascent algorithms have been shown to be useful for image recovery, but are poorly suited to parallel computing due to their sequential nature. This paper presents a new fast-converging, parallelizable algorithm for image recovery that can be applied to a very broad class of objective functions. This method is based on paraboloidal surrogate functions and a concavity technique. The paraboloidal surrogates simplify the optimization problem. The idea of the concavity technique is to partition pixels into subsets that can be updated in parallel to reduce the computation time. For fast convergence, pixels within each subset are updated sequentially using a coordinate ascent algorithm. The proposed algorithm is guaranteed to monotonically increase the objective function and intrinsically accommodates nonnegativity constraints. A global convergence proof is summarized. Simulation results show that the proposed algorithm requires less elapsed time for convergence than iterative coordinate ascent algorithms. With four parallel processors, the proposed algorithm yields a speedup factor of 3.77 relative to single-processor coordinate ascent algorithms for a three-dimensional (3-D) confocal image restoration problem.
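The partition-then-sweep structure can be shown on a toy quadratic objective. In the sketch below, coordinates within a group are updated sequentially while distinct groups are the units a parallel implementation would dispatch to separate processors; an ordinary least-squares problem stands in for the paraboloidal-surrogate construction that licenses simultaneous group updates for more general objectives.

    import numpy as np

    def partitioned_cd(A, b, groups, n_passes=100):
        """Grouped coordinate descent on 0.5*||Ax - b||^2 with x >= 0.
        Sequential sweeps within a group give fast convergence; distinct
        groups could run in parallel (toy stand-in, not the paper's
        surrogate-based algorithm)."""
        x = np.zeros(A.shape[1])
        r = b - A @ x                     # running residual
        col_sq = (A ** 2).sum(axis=0)
        for _ in range(n_passes):
            for group in groups:          # conceptually parallel across groups
                for j in group:           # sequential within a group
                    old = x[j]
                    x[j] = max(old + (A[:, j] @ r) / col_sq[j], 0.0)
                    r -= A[:, j] * (x[j] - old)
        return x

    A = np.array([[1.0, 0.2, 0.0, 0.1],
                  [0.0, 1.0, 0.3, 0.0],
                  [0.2, 0.0, 1.0, 0.2]])
    b = np.array([1.0, 2.0, 1.5])
    print(np.round(partitioned_cd(A, b, groups=[[0, 2], [1, 3]]), 3))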
Affiliation(s)
- Saowapak Sotthivirat
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109-2122, USA.
139
Yu DF, Fessler JA, Ficaro EP. Maximum-likelihood transmission image reconstruction for overlapping transmission beams. IEEE TRANSACTIONS ON MEDICAL IMAGING 2000; 19:1094-1105. [PMID: 11204847 DOI: 10.1109/42.896785] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
In many transmission imaging geometries, the transmitted "beams" of photons overlap on the detector, such that a detector element may record photons that originated in different sources or source locations and thus traversed different paths through the object. Examples include systems based on scanning line sources or on multiple parallel rod sources. The overlap of these beams has been disregarded by both conventional analytical reconstruction methods and previous statistical reconstruction methods. We propose a new algorithm for statistical image reconstruction of attenuation maps that explicitly accounts for overlapping beams in transmission scans. The algorithm is guaranteed to monotonically increase the objective function at each iteration. The availability of this algorithm opens the possibility of deliberately increasing the beam overlap so as to increase count rates. Simulated single photon emission computed tomography (SPECT) transmission scans based on a multiple line source array demonstrate that the proposed method yields improved resolution/noise tradeoffs relative to "conventional" reconstruction algorithms, both statistical and nonstatistical.
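The summed forward model is the heart of the method and is easy to state in code. Below is a minimal sketch of the mean-count model y_i = sum_s b_is * exp(-[A_s mu]_i) for two overlapping sources; the geometry and intensities are invented for illustration, and the paper's monotonic likelihood algorithm itself is not reproduced.

    import numpy as np

    def overlapping_beam_means(mu, A_per_source, b_per_source):
        """Mean detected counts when several transmission sources contribute
        to the same detector bin: y_i = sum_s b_{is} * exp(-[A_s mu]_i).
        Conventional models keep a single source term per bin; here each
        source s has its own path lengths A_s and intensities b_s."""
        y = np.zeros_like(b_per_source[0])
        for A_s, b_s in zip(A_per_source, b_per_source):
            y += b_s * np.exp(-(A_s @ mu))
        return y

    # Two line sources, three voxels, two detector bins; path lengths differ
    # per source because the rays traverse the object along different chords.
    A1 = np.array([[1.0, 0.4, 0.0], [0.0, 0.9, 0.8]])
    A2 = np.array([[0.6, 0.6, 0.2], [0.1, 0.3, 1.1]])
    mu = np.array([0.2, 0.15, 0.3])
    print(overlapping_beam_means(mu, [A1, A2], [np.array([900.0, 900.0]),
                                                np.array([400.0, 400.0])]))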
Affiliation(s)
- D F Yu
- University of Michigan, Ann Arbor 48109-2122, USA.
140
141
Hunter DR, Lange K. Rejoinder. J Comput Graph Stat 2000. [DOI: 10.1080/10618600.2000.10474865] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]