201
Stayman JW, Fessler JA. Efficient calculation of resolution and covariance for penalized-likelihood reconstruction in fully 3-D SPECT. IEEE Trans Med Imaging 2004; 23:1543-1556. [PMID: 15575411] [DOI: 10.1109/tmi.2004.837790]
Abstract
Resolution and covariance predictors have been derived previously for penalized-likelihood estimators. These predictors can provide accurate approximations to the local resolution properties and covariance functions for tomographic systems given a good estimate of the mean measurements. Although these predictors may be evaluated iteratively, circulant approximations are often made for practical computation times. However, when numerous evaluations are made repeatedly (as in penalty design or calculation of variance images), these predictors still require large amounts of computing time. In Stayman and Fessler (2000), we discussed methods for precomputing a large portion of the predictor for shift-invariant system geometries. In this paper, we generalize the efficient procedure discussed in Stayman and Fessler (2000) to shift-variant single photon emission computed tomography (SPECT) systems. This generalization relies on a new attenuation approximation and several observations on the symmetries in SPECT systems. These new general procedures apply to both two-dimensional and fully three-dimensional (3-D) SPECT models, which may be either precomputed and stored, or written in procedural form. We demonstrate the high accuracy of the predictions based on these methods using a simulated anthropomorphic phantom and fully 3-D SPECT system. The evaluation of these predictors requires significantly less computation time than traditional prediction techniques, once the system-geometry-specific precomputations have been made.
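The circulant approximation at the heart of such predictors can be illustrated with a toy 1-D model; the blur, penalty, and regularization weight below are made-up stand-ins, not the paper's shift-variant SPECT model with attenuation:

```python
import numpy as np

# Illustrative 1-D sketch of the circulant (FFT-based) approximation to the
# local impulse response of a penalized-likelihood estimator:
#   lir ~= F^-1 { eig(A'WA) / (eig(A'WA) + beta * eig(R)) }.
# The "system" here is a hypothetical shift-invariant Gaussian blur; the
# paper's predictors handle shift-variant SPECT geometry and attenuation.

n = 64
x = np.arange(n)
# Column of A'WA for a circular Gaussian blur model (circulant by construction)
psf = np.exp(-0.5 * (np.minimum(x, n - x) / 2.0) ** 2)
fisher_eigs = np.real(np.fft.fft(psf))          # eigenvalues of circulant A'WA

# Second-order difference penalty R (also circulant)
r_col = np.zeros(n)
r_col[[0, 1, -1]] = [2.0, -1.0, -1.0]
penalty_eigs = np.real(np.fft.fft(r_col))

beta = 0.1
# Local impulse response, centered for display
lir = np.real(np.fft.ifft(fisher_eigs / (fisher_eigs + beta * penalty_eigs)))
lir = np.roll(lir, n // 2)
```

Because the penalty has zero response at DC, the response integrates to one; larger beta broadens the peak.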
MESH Headings
- Abdomen/diagnostic imaging
- Algorithms
- Artificial Intelligence
- Cluster Analysis
- Computer Simulation
- Humans
- Image Enhancement/methods
- Image Interpretation, Computer-Assisted/methods
- Imaging, Three-Dimensional/methods
- Information Storage and Retrieval/methods
- Likelihood Functions
- Models, Biological
- Models, Statistical
- Numerical Analysis, Computer-Assisted
- Phantoms, Imaging
- Regression Analysis
- Reproducibility of Results
- Sensitivity and Specificity
- Signal Processing, Computer-Assisted
- Tomography, Emission-Computed, Single-Photon/instrumentation
- Tomography, Emission-Computed, Single-Photon/methods
Affiliation(s)
- J Webster Stayman
- Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI 48109-2122, USA.
202
Li Q, Asma E, Qi J, Bading JR, Leahy RM. Accurate estimation of the Fisher information matrix for the PET image reconstruction problem. IEEE Trans Med Imaging 2004; 23:1057-1064. [PMID: 15377114] [DOI: 10.1109/tmi.2004.833202]
Abstract
The Fisher information matrix (FIM) plays a key role in the analysis and applications of statistical image reconstruction methods based on Poisson data models. The elements of the FIM are a function of the reciprocal of the mean values of sinogram elements. Conventional plug-in FIM estimation methods do not work well at low counts, where the FIM estimate is highly sensitive to the reciprocal mean estimates at individual detector pairs. A generalized error look-up table (GELT) method is developed to estimate the reciprocal of the mean of the sinogram data. This approach is also extended to randoms precorrected data. Based on these techniques, an accurate FIM estimate is obtained for both Poisson and randoms precorrected data. As an application, the new GELT method is used to improve resolution uniformity and achieve near-uniform image resolution in low count situations.
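The plug-in FIM construction the abstract refers to can be sketched for a toy Poisson system; the matrix sizes, intensities, and small offset below are illustrative only, and the GELT correction itself is not reproduced here:

```python
import numpy as np

# Plug-in estimate of the Fisher information matrix for Poisson data:
#   F = A' diag(1 / ybar) A.
# At low counts the naive plug-in uses 1/y_i with noisy y_i, which is badly
# biased (E[1/y] can far exceed 1/E[y]); the paper's GELT method replaces the
# naive reciprocal with a look-up-table-based estimate.

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(50, 8))         # toy system matrix
x_true = np.full(8, 0.5)
ybar = A @ x_true                                # mean sinogram

F_true = A.T @ np.diag(1.0 / ybar) @ A           # FIM with exact means

y = rng.poisson(ybar) + 1e-1                     # noisy counts (offset avoids 1/0)
F_plugin = A.T @ np.diag(1.0 / y) @ A            # naive plug-in estimate
```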
Affiliation(s)
- Quanzheng Li
- Signal and Image Processing Institute, Univ of Southern California, Los Angeles, CA 90089, USA
203
Qi J, Huesman RH. Propagation of errors from the sensitivity image in list mode reconstruction. IEEE Trans Med Imaging 2004; 23:1094-1099. [PMID: 15377118] [DOI: 10.1109/tmi.2004.829333]
Abstract
List mode image reconstruction is attracting renewed attention. It eliminates the storage of empty sinogram bins. However, a single back projection of all lines of response (LORs) is still necessary for the pre-calculation of a sensitivity image. Since the detection sensitivity is dependent on the object attenuation and detector efficiency, it must be computed for each study. Exact computation of the sensitivity image can be a daunting task for modern scanners with huge numbers of LORs. Thus, some fast approximate calculation may be desirable. In this paper, we analyze the error propagation from the sensitivity image into the reconstructed image. The theoretical analysis is based on the fixed-point condition of the list mode reconstruction. The nonnegativity constraint is modeled using the Kuhn-Tucker condition. With certain assumptions and a first-order Taylor series approximation, we derive a closed-form expression for the error in the reconstructed image as a function of the error in the sensitivity image. The result shows that the error response is frequency dependent and provides a simple expression for determining the required accuracy of the sensitivity image calculation. Computer simulations show that the theoretical results are in good agreement with the measured results.
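The setting can be mimicked by running ML-EM once with the exact sensitivity image and once with a perturbed one; this is a small sinogram-based stand-in rather than true list-mode reconstruction, and every name and size below is illustrative:

```python
import numpy as np

# Toy analogue of error propagation from the sensitivity image: ML-EM with
# exact vs perturbed sensitivity. Sinogram-based stand-in, illustrative sizes.

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(40, 10))         # toy system matrix
x_true = rng.uniform(0.5, 1.5, size=10)
y = A @ x_true                                   # noiseless, consistent data

def mlem(y, A, sens, n_iter=2000):
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens        # multiplicative EM update
    return x

sens_exact = A.sum(axis=0)                       # exact sensitivity image
x_exact = mlem(y, A, sens_exact)

# A small (2%) error in the sensitivity image propagates into the estimate
delta = 0.02 * rng.standard_normal(10) * sens_exact
x_approx = mlem(y, A, sens_exact + delta)
```

Comparing `x_approx` with `x_exact` gives an empirical view of the error response that the paper characterizes analytically.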
Affiliation(s)
- Jinyi Qi
- Department of Nuclear Medicine and Functional Imaging, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA.
204
Xing Y, Hsiao IT, Gindi G. Rapid calculation of detectability in Bayesian single photon emission computed tomography. Phys Med Biol 2003; 48:3755-73. [PMID: 14680271] [DOI: 10.1088/0031-9155/48/22/009]
Abstract
We consider the calculation of lesion detectability using a mathematical model observer, the channelized Hotelling observer (CHO), in a signal-known-exactly/background-known-exactly detection task for single photon emission computed tomography (SPECT). We focus on SPECT images reconstructed with Bayesian maximum a posteriori methods. While model observers are designed to replace time-consuming studies using human observers, the calculation of CHO detectability is usually accomplished using a large number of sample images, which is still time consuming. We develop theoretical expressions for a measure of detectability, the signal-to-noise-ratio (SNR) of a CHO observer, that can be very rapidly evaluated. Key to our expressions are approximations to the reconstructed image covariance. In these approximations, we use methods developed in the PET literature, but modify them to reflect the different nature of attenuation and distance-dependent blur in SPECT. We validate our expressions with Monte Carlo methods. We show that reasonably accurate estimates of the SNR can be obtained at a computational expense equivalent to approximately two projection operations, and that evaluating SNR for subsequent lesion locations requires negligible additional computation.
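A minimal sketch of the CHO SNR computation for an SKE/BKE task follows; the 1-D "image", the cosine channels, and the stand-in covariance are all made up for illustration, not the paper's reconstructed-image covariance approximations:

```python
import numpy as np

# Channelized Hotelling observer (CHO) SNR: project the mean signal
# difference and the image covariance onto a few channels, then
#   SNR^2 = dg_c' Kc^{-1} dg_c.
# Channels here are crude cosine band profiles, not the paper's channels.

n = 32                                           # 1-D "image" for brevity
x = np.arange(n) - n // 2
signal = np.exp(-0.5 * (x / 2.0) ** 2)           # mean reconstructed lesion

# Three hypothetical channels (low/mid/high frequency bands)
freqs = [1, 3, 6]
U = np.stack([np.cos(2 * np.pi * f * np.arange(n) / n) for f in freqs], axis=1)

K = 0.5 * np.eye(n)                              # stand-in image covariance
dg_c = U.T @ signal                              # channelized signal difference
K_c = U.T @ K @ U                                # channelized covariance
snr = np.sqrt(dg_c @ np.linalg.solve(K_c, dg_c))
```

With an accurate covariance approximation in place of `K`, this reduces the cost of a detectability evaluation to a few projections, which is the paper's point.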
Affiliation(s)
- Yuxiang Xing
- Department of Electrical & Computer Engineering, SUNY Stony Brook, Stony Brook, NY 11784, USA
205
Ahn S, Fessler JA. Emission image reconstruction for randoms-precorrected PET allowing negative sinogram values. IEEE Trans Med Imaging 2004; 23:591-601. [PMID: 15147012] [DOI: 10.1109/tmi.2004.826046]
Abstract
Most positron emission tomography (PET) emission scans are corrected for accidental coincidence (AC) events by real-time subtraction of delayed-window coincidences, leaving only the randoms-precorrected data available for image reconstruction. The real-time randoms precorrection compensates in mean for AC events but destroys the Poisson statistics. The exact log-likelihood for randoms-precorrected data is inconvenient, so practical approximations are needed for maximum likelihood or penalized-likelihood image reconstruction. Conventional approximations involve setting negative sinogram values to zero, which can induce positive systematic biases, particularly for scans with low counts per ray. We propose new likelihood approximations that allow negative sinogram values without requiring zero-thresholding. With negative sinogram values, the log-likelihood functions can be nonconcave, complicating maximization; nevertheless, we develop monotonic algorithms for the new models by modifying the separable paraboloidal surrogates and the maximum-likelihood expectation-maximization (ML-EM) methods. These algorithms ascend to local maximizers of the objective function. Analysis and simulation results show that the new shifted Poisson (SP) model is nearly free of systematic bias yet keeps low variance. Despite its simpler implementation, the new SP performs comparably to the saddle-point model which has shown the best performance (as to systematic bias and variance) in randoms-precorrected PET emission reconstruction.
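The shifted-Poisson idea can be written down directly; the function and numbers below are a toy illustration of the data model, not the monotonic surrogate algorithms developed in the paper:

```python
import numpy as np

# Shifted-Poisson (SP) model for randoms-precorrected data: treat y_i + 2 r_i
# as Poisson with mean l_i + 2 r_i, so the per-ray log-likelihood term is
#   h_i(l) = (y_i + 2 r_i) log(l + 2 r_i) - (l + 2 r_i).
# Negative precorrected values y_i are kept, not zero-thresholded.

def sp_loglik(y, l, r):
    """Shifted-Poisson log-likelihood (additive constants dropped)."""
    shifted = y + 2.0 * r
    return np.sum(shifted * np.log(l + 2.0 * r) - (l + 2.0 * r))

y = np.array([3.0, -1.0, 0.0])                   # precorrected data, one negative
r = np.array([1.0, 1.0, 0.5])                    # known mean randoms
l = np.array([2.0, 0.5, 1.0])                    # candidate projection means

val = sp_loglik(y, l, r)
```

Note the negative ray is handled without thresholding because y + 2r stays positive there; when y + 2r goes negative the objective can become nonconcave, which is why the paper develops monotonic algorithms.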
Affiliation(s)
- Sangtae Ahn
- Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI 48109-2122, USA.
206
Mustafovic S, Thielemans K. Object dependency of resolution in reconstruction algorithms with interiteration filtering applied to PET data. IEEE Trans Med Imaging 2004; 23:433-446. [PMID: 15084069] [DOI: 10.1109/tmi.2004.824225]
Abstract
In this paper, we study the resolution properties of those algorithms where a filtering step is applied after every iteration. As concrete examples we take filtered preconditioned gradient descent algorithms for the Poisson log likelihood for PET emission data. For nonlinear estimators, resolution can be characterized in terms of the linearized local impulse response (LLIR). We provide analytic approximations for the LLIR for the class of algorithms mentioned above. Our expressions clearly show that when interiteration filtering (with linear filters) is used, the resolution properties are, in most cases, spatially varying, object dependent and asymmetric. These nonuniformities are solely due to the interaction between the filtering step and the Poisson noise model. This situation is similar to penalized likelihood reconstructions as studied previously in the literature. In contrast, nonregularized and postfiltered maximum-likelihood expectation maximization (MLEM) produce images with nearly "perfect" uniform resolution when convergence is reached. We use the analytic expressions for the LLIR to propose three different approaches to obtain nearly object independent and uniform resolution. Two of them are based on calculating filter coefficients on a pixel basis, whereas the third one chooses an appropriate preconditioner. These three approaches are tested on simulated data for the filtered MLEM algorithm or the filtered separable paraboloidal surrogates algorithm. The evaluation confirms that images obtained using our proposed regularization methods have nearly object independent and uniform resolution.
Affiliation(s)
- Sanida Mustafovic
- Imperial College and Hammersmith Imanet Ltd., Hammersmith Hospital, London W12 0NN, UK.
207
Stayman JW, Fessler JA. Compensation for nonuniform resolution using penalized-likelihood reconstruction in space-variant imaging systems. IEEE Trans Med Imaging 2004; 23:269-284. [PMID: 15027520] [DOI: 10.1109/tmi.2003.823063]
Abstract
Imaging systems that form estimates using a statistical approach generally yield images with nonuniform resolution properties. That is, the reconstructed images possess resolution properties marked by space-variant and/or anisotropic responses. We have previously developed a space-variant penalty for penalized-likelihood (PL) reconstruction that yields nearly uniform resolution properties. We demonstrated how to calculate this penalty efficiently and apply it to an idealized positron emission tomography (PET) system whose geometric response is space-invariant. In this paper, we demonstrate the efficient calculation and application of this penalty to space-variant systems. (The method is most appropriate when the system matrix has been precalculated.) We apply the penalty to a large field of view PET system where crystal penetration effects make the geometric response space-variant, and to a two-dimensional single photon emission computed tomography system whose detector responses are modeled by a depth-dependent Gaussian with linearly varying full-width at half-maximum. We perform a simulation study comparing reconstructions using our proposed PL approach with other reconstruction methods and demonstrate the relative resolution uniformity, and discuss tradeoffs among estimators that yield nearly uniform resolution. We observe similar noise performance for the PL and post-smoothed maximum-likelihood (ML) approaches with carefully matched resolution, so choosing one estimator over another should be made on other factors like computational complexity and convergence rates of the iterative reconstruction. Additionally, because the postsmoothed ML and the proposed PL approach can outperform one another in terms of resolution uniformity depending on the desired reconstruction resolution, we present and discuss a hybrid approach adopting both a penalty and post-smoothing.
Affiliation(s)
- J Webster Stayman
- Department of Electrical Engineering and Computer Science (4415 EECS), University of Michigan, Ann Arbor, MI 48109-2122, USA.
208
Qi J. Analysis of lesion detectability in Bayesian emission reconstruction with nonstationary object variability. IEEE Trans Med Imaging 2004; 23:321-329. [PMID: 15027525] [DOI: 10.1109/tmi.2004.824239]
Abstract
Bayesian methods based on the maximum a posteriori principle (also called penalized maximum-likelihood methods) have been developed to improve image quality in emission tomography. To explore the full potential of Bayesian reconstruction for lesion detection, we derive simplified theoretical expressions that allow fast evaluation of the detectability of a lesion in Bayesian reconstruction. This work builds on recent progress in the theoretical analysis of image properties of statistical reconstructions and the development of numerical observers. We explicitly model the nonstationary variation of the lesion and background without assuming that they are locally stationary. The results can be used to choose the optimum prior parameters for maximum lesion detectability. The theoretical results are validated using Monte Carlo simulations. The comparisons show good agreement between the theoretical predictions and the Monte Carlo results. We also demonstrate that the lesion detectability can be reliably estimated using one noisy data set.
Affiliation(s)
- Jinyi Qi
- Department of Nuclear Medicine and Functional Imaging, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA.
209
Abstract
Iterative image estimation methods have been widely used in emission tomography. Accurate estimation of the uncertainty of the reconstructed images is essential for quantitative applications. While both iteration-based noise analysis and fixed-point noise analysis have been developed, current iteration-based results are limited to only a few algorithms that have an explicit multiplicative update equation and some may not converge to the fixed-point result. This paper presents a theoretical noise analysis that is applicable to a wide range of preconditioned gradient-type algorithms. Under a certain condition, the proposed method does not require an explicit expression of the preconditioner. By deriving the fixed-point expression from the iteration-based result, we show that the proposed iteration-based noise analysis is consistent with fixed-point analysis. Examples in emission tomography and transmission tomography are shown. The results are validated using Monte Carlo simulations.
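For a linear least-squares stand-in for the preconditioned gradient update, the iteration-based covariance can be computed exactly, which conveys the flavor of the analysis; the sizes, preconditioner, and data covariance below are illustrative assumptions:

```python
import numpy as np

# Iteration-based noise analysis for a linear preconditioned gradient
# algorithm (least-squares stand-in): the update
#   x_{n+1} = x_n + P A'(y - A x_n)
# is linear in y when x_0 = 0, so x_n = S_n y and the image covariance is
#   K_n = S_n Kyy S_n'.

rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, size=(12, 4))          # toy system matrix
Kyy = np.diag(rng.uniform(0.5, 1.5, size=12))    # data covariance (e.g. Poisson)
P = 0.05 * np.eye(4)                             # preconditioner / step size

S = np.zeros((4, 12))                            # x_n = S @ y, from x_0 = 0
for _ in range(200):
    S = S + P @ A.T @ (np.eye(12) - A @ S)

K_img = S @ Kyy @ S.T                            # image covariance at iteration n
```

Letting the iteration converge recovers the fixed-point covariance, mirroring the consistency result in the abstract.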
Affiliation(s)
- Jinyi Qi
- Department of Nuclear Medicine and Functional Imaging, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA.
210
Nuyts J, Fessler JA. A penalized-likelihood image reconstruction method for emission tomography, compared to postsmoothed maximum-likelihood with matched spatial resolution. IEEE Trans Med Imaging 2003; 22:1042-1052. [PMID: 12956260] [DOI: 10.1109/tmi.2003.816960]
Abstract
Regularization is desirable for image reconstruction in emission tomography. A powerful regularization method is the penalized-likelihood (PL) reconstruction algorithm (or equivalently, maximum a posteriori reconstruction), where the sum of the likelihood and a noise suppressing penalty term (or Bayesian prior) is optimized. Usually, this approach yields position-dependent resolution and bias. However, for some applications in emission tomography, a shift-invariant point spread function would be advantageous. Recently, a new method has been proposed, in which the penalty term is tuned in every pixel to impose a uniform local impulse response. In this paper, an alternative way to tune the penalty term is presented. We performed positron emission tomography and single photon emission computed tomography simulations to compare the performance of the new method to that of the postsmoothed maximum-likelihood (ML) approach, using the impulse response of the former method as the postsmoothing filter for the latter. For this experiment, the noise properties of the PL algorithm were not superior to those of postsmoothed ML reconstruction.
Affiliation(s)
- Johan Nuyts
- Department of Nuclear Medicine, K.U. Leuven, Herestraat 49, B3000 Leuven, Belgium.
211
Elbakri IA, Fessler JA. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation. Phys Med Biol 2003; 48:2453-77. [PMID: 12953909] [DOI: 10.1088/0031-9155/48/15/314]
Abstract
This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.
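The polyenergetic forward model can be sketched in a few lines; the spectrum, mass attenuation coefficients, tissue weights, and geometry below are made-up illustrative numbers, not calibrated data:

```python
import numpy as np

# Polyenergetic measurement model: each voxel's attenuation is its unknown
# density times a weighted sum of energy-dependent mass attenuation
# coefficients, and the detector integrates over the source spectrum:
#   ybar_i = sum_k I_k * exp( - sum_j a_ij * rho_j * m_j(E_k) ).

I = np.array([0.3, 0.5, 0.2])                    # source spectrum (3 energy bins)
mass_atten = np.array([[0.25, 0.20, 0.17],       # "soft tissue" vs energy
                       [0.60, 0.40, 0.30]])      # "bone" vs energy
w = np.array([[1.0, 0.0],                        # per-voxel tissue fractions
              [0.3, 0.7]])                       # (mixed pixel allowed)
rho = np.array([1.0, 1.5])                       # unknown densities
A = np.array([[1.0, 0.5],                        # ray-voxel intersection lengths
              [0.0, 2.0]])

mu = rho[:, None] * (w @ mass_atten)             # voxel attenuation per energy
line_integrals = A @ mu                          # shape (rays, energies)
ybar = (I * np.exp(-line_integrals)).sum(axis=1) # polyenergetic mean measurement
```

The estimation problem in the paper is to recover `rho` from such nonlinear measurements via a penalized-likelihood objective.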
Affiliation(s)
- Idris A Elbakri
- Electrical Engineering and Computer Science Department, University of Michigan, 1301 Beal Ave, Ann Arbor, MI 48109, USA.
212
Meng LJ, Wehe DK. A Feasibility Study of Using Hybrid Collimation for Nuclear Environment. IEEE Trans Nucl Sci 2003; 50:1103-1110. [PMID: 28260807] [PMCID: PMC5333790] [DOI: 10.1109/tns.2003.815135]
Abstract
This paper presents a feasibility study of a gamma-ray imager that combines electronic and mechanical collimation. The detector is based on a multiple-pinhole collimator and a position-sensitive scintillation detector with Anger logic readout. A pixelated semiconductor detector, located between the collimator and the scintillation detector, is used as a scattering detector. For gamma rays scattered in the first detector and then stopped in the second detector, an image can also be built up based on the joint probability of their passing through the collimator and falling onto a broadened conical surface defined by the detected Compton scattering event. Since these events have a much smaller angular uncertainty, they provide more information content per photon than mechanical or electronic collimation alone, so the overall image quality can be improved. This feasibility study adopted a theoretical approach based on analysing the resolution-variance trade-off in images reconstructed using a maximum a posteriori (MAP) algorithm. The effects of factors such as the detector configuration, Doppler broadening and collimator configuration are studied. The results showed that the combined collimation leads to a significant improvement in image quality in the energy range below 300 keV. However, due to mask penetration, the performance of this detector configuration is worse than that of a standard Compton camera above this energy.
Affiliation(s)
- L J Meng
- Department of Nuclear Engineering and Radiological Sciences, University of Michigan
- D K Wehe
- Department of Nuclear Engineering and Radiological Sciences, University of Michigan
213
Qi J. Theoretical evaluation of the detectability of random lesions in Bayesian emission reconstruction. Information Processing in Medical Imaging: Proceedings of the ... Conference 2003; 18:354-65. [PMID: 15344471] [DOI: 10.1007/978-3-540-45087-0_30]
Abstract
Detecting cancerous lesions is an important task in positron emission tomography (PET). Bayesian methods based on the maximum a posteriori principle (also called penalized maximum likelihood methods) have been developed to deal with the low signal-to-noise ratio in the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the prior parameters in Bayesian reconstruction control the resolution and noise trade-off and hence affect the detectability of lesions in reconstructed images. Bayesian reconstructions are difficult to analyze because the resolution and noise properties are nonlinear and object dependent. Most research has been based on Monte Carlo simulations, which are very time consuming. Building on recent progress in the theoretical analysis of image properties of statistical reconstructions and the development of numerical observers, here we develop a theoretical approach for fast computation of lesion detectability in Bayesian reconstruction. The results can be used to choose the optimum hyperparameter for maximum lesion detectability. New in this work is the use of theoretical expressions that explicitly model the statistical variation of the lesion and background without assuming that the object variation is (locally) stationary. The theoretical results are validated using Monte Carlo simulations. The comparisons show good agreement between the theoretical predictions and the Monte Carlo results.
Affiliation(s)
- Jinyi Qi
- Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
214
Hsiao IT, Rangarajan A, Gindi G. A new convex edge-preserving median prior with applications to tomography. IEEE Trans Med Imaging 2003; 22:580-585. [PMID: 12846427] [DOI: 10.1109/tmi.2003.812249]
Abstract
In a Bayesian tomographic maximum a posteriori (MAP) reconstruction, an estimate of the object f is computed by iteratively minimizing an objective function that typically comprises the sum of a log-likelihood (data consistency) term and prior (or penalty) term. The prior can be used to stabilize the solution and to also impose spatial properties on the solution. One such property, preservation of edges and locally monotonic regions, is captured by the well-known median root prior (MRP), an empirical method that has been applied to emission and transmission tomography. We propose an entirely new class of convex priors that depends on f and also on m, an auxiliary field in register with f. We specialize this class to our median prior (MP). The approximate action of the median prior is to draw, at each iteration, an object voxel toward its own local median. This action is similar to that of MRP and results in solutions that impose the same sorts of object properties as does MRP. Our MAP method is not empirical, since the problem is stated completely as the minimization of a joint (on f and m) objective. We propose an alternating algorithm to compute the joint MAP solution and apply this to emission tomography, showing that the reconstructions are qualitatively similar to those obtained using MRP.
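The approximate action described above (drawing each voxel toward its own local median) can be caricatured with a 1-D denoising loop; the coupling weight and window are arbitrary assumptions, and this is not the paper's tomographic MAP algorithm:

```python
import numpy as np

# Alternating scheme in the spirit of a joint objective on f and an auxiliary
# field m with quadratic coupling beta/2 * ||f - m||^2: the m-step takes a
# local median of f, and the f-step compromises between the data and m.
# Denoising caricature only; the paper applies this within emission tomography.

def local_median(f, half=1):
    return np.array([np.median(f[max(0, j - half):j + half + 1])
                     for j in range(len(f))])

rng = np.random.default_rng(2)
truth = np.concatenate([np.zeros(10), np.ones(10)])   # an edge to preserve
data = truth + 0.3 * rng.standard_normal(20)

beta = 1.0
f = data.copy()
for _ in range(20):
    m = local_median(f)                          # m-step: auxiliary field
    f = (data + beta * m) / (1.0 + beta)         # f-step: data/prior compromise
```

Because the prior pulls toward a local median rather than a local mean, locally monotonic regions and edges are better preserved than under a quadratic smoothing prior.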
Affiliation(s)
- Ing-Tsung Hsiao
- Department of Radiology, SUNY Stony Brook, Stony Brook, NY 11784, USA
215
Frese T, Rouze NC, Bouman CA, Sauer K, Hutchins GD. Quantitative comparison of FBP, EM, and Bayesian reconstruction algorithms for the IndyPET scanner. IEEE Trans Med Imaging 2003; 22:258-276. [PMID: 12716002] [DOI: 10.1109/tmi.2002.808353]
Abstract
We quantitatively compare filtered backprojection (FBP), expectation-maximization (EM), and Bayesian reconstruction algorithms as applied to the IndyPET scanner--a dedicated research scanner which has been developed for small and intermediate field of view imaging applications. In contrast to previous approaches that rely on Monte Carlo simulations, a key feature of our investigation is the use of an empirical system kernel determined from scans of line source phantoms. This kernel is incorporated into the forward model of the EM and Bayesian algorithms to achieve resolution recovery. Three data sets are used, data collected on the IndyPET scanner using a bar phantom and a Hoffman three-dimensional brain phantom, and simulated data containing a hot lesion added to a uniform background. Reconstruction quality is analyzed quantitatively in terms of bias-variance measures (bar phantom) and mean square error (lesion phantom). We observe that without use of the empirical system kernel, the FBP, EM, and Bayesian algorithms give similar performance. However, with the inclusion of the empirical kernel, the iterative algorithms provide superior reconstructions compared with FBP, both in terms of visual quality and quantitative measures. Furthermore, Bayesian methods outperform EM. We conclude that significant improvements in reconstruction quality can be realized by combining accurate models of the system response with Bayesian reconstruction algorithms.
Affiliation(s)
- Thomas Frese
- McKinsey & Company, 21 South Clark Street, Suite 2900, Chicago, IL 60603, USA.
216
Sutton BP, Noll DC, Fessler JA. Fast, iterative image reconstruction for MRI in the presence of field inhomogeneities. IEEE Trans Med Imaging 2003; 22:178-188. [PMID: 12715994] [DOI: 10.1109/tmi.2002.808360]
Abstract
In magnetic resonance imaging, magnetic field inhomogeneities cause distortions in images that are reconstructed by conventional fast Fourier transform (FFT) methods. Several noniterative image reconstruction methods are used currently to compensate for field inhomogeneities, but these methods assume that the field map that characterizes the off-resonance frequencies is spatially smooth. Recently, iterative methods have been proposed that can circumvent this assumption and provide improved compensation for off-resonance effects. However, straightforward implementations of such iterative methods suffer from inconveniently long computation times. This paper describes a tool for accelerating iterative reconstruction of field-corrected MR images: a novel time-segmented approximation to the MR signal equation. We use a min-max formulation to derive the temporal interpolator. Speedups of around 60 were achieved by combining this temporal interpolator with a nonuniform fast Fourier transform with normalized root mean squared approximation errors of 0.07%. The proposed method provides fast, accurate, field-corrected image reconstruction even when the field map is not smooth.
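Time segmentation can be demonstrated in a few lines; here linear-interpolation coefficients stand in for the paper's min-max optimal interpolator, and the readout length, segment count, and off-resonance frequency are made-up numbers:

```python
import numpy as np

# Time segmentation for field-corrected MRI: the off-resonance phase factor
# exp(-1j * w * t) over the readout is approximated by interpolating between
# a few fixed time points tau_l,
#   exp(-1j*w*t) ~= sum_l c_l(t) * exp(-1j*w*tau_l),
# so the signal becomes a small sum of FFT-able terms.

T = 0.025                                        # readout duration (s)
t = np.linspace(0, T, 200)                       # sample times
tau = np.linspace(0, T, 6)                       # L = 6 time segments

w = 2 * np.pi * 20.0                             # one off-resonance freq (rad/s)
exact = np.exp(-1j * w * t)

# Linear-interpolation coefficients c_l(t): hat functions on the tau grid
c = np.maximum(0.0, 1.0 - np.abs(t[:, None] - tau[None, :]) / (tau[1] - tau[0]))
approx = c @ np.exp(-1j * w * tau)

err = np.max(np.abs(approx - exact))
```

The paper's min-max interpolator replaces the hat functions with coefficients optimized over the range of field-map values, which drives the worst-case error far below what linear interpolation achieves for the same number of segments.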
Affiliation(s)
- Bradley P Sutton
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109-2108, USA.
217
López A, Molina R, Katsaggelos AK. Bayesian SPECT Image Reconstruction with Scale Hyperparameter Estimation for Scalable Prior. Pattern Recognition and Image Analysis 2003. [DOI: 10.1007/978-3-540-44871-6_52]
218
Khurd P, Gindi G. Rapid Computation of LROC Figures of Merit Using Numerical Observers (for SPECT/PET Reconstruction). IEEE Trans Nucl Sci 2003; 4:2516-2520. [PMID: 20442799] [PMCID: PMC2862501] [DOI: 10.1109/tns.2005.851458]
Abstract
The assessment of PET and SPECT image reconstructions by image quality metrics is typically time-consuming, even if methods employing model observers and samples of reconstructions are used to replace human testing. We consider a detection task where the background is known exactly and the signal is known except for location. We develop theoretical formulae to rapidly evaluate two relevant figures of merit: the area under the LROC curve and the probability of correct localization. The formulae can accommodate different forms of model observer. The theory hinges on the fact that we are able to rapidly compute the mean and covariance of the reconstruction. For four forms of model observer, the theoretical expressions are validated by Monte Carlo studies for the case of MAP (maximum a posteriori) reconstruction. The theoretical method affords a 10^2-10^3 speedup relative to methods in which model observers are applied to sample reconstructions.
219
Wilson DW, Barrett HH. The Effects of Incorrect Modeling on Noise and Resolution Properties of ML-EM Images. IEEE TRANSACTIONS ON NUCLEAR SCIENCE 2002; 49:768-773. [PMID: 21785511 PMCID: PMC3140698 DOI: 10.1109/tns.2002.1039561] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
The effects of incorrect compensation for collimator blur in single-photon emission computed tomography (SPECT) images are studied in terms of the noise and resolution properties of the reconstructed images. Qualitative analysis of Hoffman brain phantom images reconstructed using nonlinear maximum-likelihood expectation-maximization (ML-EM) shows longer noise correlations for high-pass filtered images. These qualitative observations are confirmed with more quantitative noise measures. The differences also appear in images reconstructed using linear Landweber iteration. However, the signal-to-noise ratio, in terms of the noise-equivalent quanta, remains largely unchanged. We conclude that the compensation model affects SPECT image properties, though the effect on human task performance remains to be studied.
Affiliation(s)
- D. W. Wilson
- Department of Radiology, University of Arizona, Tucson, AZ 85724 USA
- H. H. Barrett
- Department of Radiology and the Optical Science Center, University of Arizona, Tucson, AZ 85724 USA
220
Elbakri IA, Fessler JA. Statistical image reconstruction for polyenergetic X-ray computed tomography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2002; 21:89-99. [PMID: 11929108 DOI: 10.1109/42.993128] [Citation(s) in RCA: 298] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
This paper describes a statistical image reconstruction method for X-ray computed tomography (CT) that is based on a physical model that accounts for the polyenergetic X-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. We assume that the object consists of a given number of nonoverlapping materials, such as soft tissue and bone. The attenuation coefficient of each voxel is the product of its unknown density and a known energy-dependent mass attenuation coefficient. We formulate a penalized-likelihood function for this polyenergetic model and develop an ordered-subsets iterative algorithm for estimating the unknown densities in each voxel. The algorithm monotonically decreases the cost function at each iteration when one subset is used. Applying this method to simulated X-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artifacts.
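A minimal numeric sketch of the polyenergetic measurement model this abstract describes, using an assumed two-line spectrum and assumed mass attenuation values for a single material. It reproduces the classic beam-hardening signature: the effective attenuation per cm falls as the path length grows, which is what produces the artifacts that the proposed algorithm reduces.

```python
import numpy as np

# Polyenergetic model (sketch): y = sum_k I(E_k) * exp(-mu(E_k) * thickness),
# with a toy spectrum of two energies (40 and 80 keV, assumed values).
I_rel = np.array([0.6, 0.4])             # relative fluence at the two energies
mass_atten = np.array([0.25, 0.18])      # cm^2/g at the two energies (assumed)
rho = 1.0                                # material density, g/cm^3

thickness = np.array([1.0, 5.0, 10.0])   # path lengths through the material, cm
y = (I_rel[None, :] * np.exp(-np.outer(thickness, mass_atten * rho))).sum(axis=1)

# Effective attenuation per cm decreases with thickness: beam hardening.
mu_eff = -np.log(y) / thickness
print(mu_eff)
```

A monoenergetic reconstruction that assumes a single mu cannot fit all three measurements at once, which is the inconsistency the polyenergetic likelihood model resolves.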
Affiliation(s)
- Idris A Elbakri
- Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor 48109-2122, USA.
221
Yu DF, Fessler JA. Edge-preserving tomographic reconstruction with nonlocal regularization. IEEE TRANSACTIONS ON MEDICAL IMAGING 2002; 21:159-173. [PMID: 11929103 DOI: 10.1109/42.993134] [Citation(s) in RCA: 42] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Tomographic image reconstruction using statistical methods can provide more accurate system modeling, statistical models, and physical constraints than the conventional filtered backprojection (FBP) method. Because of the ill posedness of the reconstruction problem, a roughness penalty is often imposed on the solution to control noise. To avoid smoothing of edges, which are important image attributes, various edge-preserving regularization methods have been proposed. Most of these schemes rely on information from local neighborhoods to determine the presence of edges. In this paper, we propose a cost function that incorporates nonlocal boundary information into the regularization method. We use an alternating minimization algorithm with deterministic annealing to minimize the proposed cost function, jointly estimating region boundaries and object pixel values. We apply variational techniques implemented using level-sets methods to update the boundary estimates; then, using the most recent boundary estimate, we minimize a space-variant quadratic cost function to update the image estimate. For the positron emission tomography transmission reconstruction application, we compare the bias-variance tradeoff of this method with that of a "conventional" penalized-likelihood algorithm with local Huber roughness penalty.
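The comparison baseline at the end of this abstract uses a local Huber roughness penalty. As a tiny sketch of why Huber-type penalties preserve edges better than quadratic ones (the threshold delta = 1 is an arbitrary choice for illustration):

```python
import numpy as np

# Huber penalty: quadratic for small neighbor differences, linear for large ones,
# so sharp edges (large differences) are penalized far less than quadratically.
def huber(t, delta):
    a = np.abs(t)
    return np.where(a <= delta, 0.5*t**2, delta*a - 0.5*delta**2)

diffs = np.array([0.1, 1.0, 10.0])   # neighboring-pixel differences
print(huber(diffs, 1.0))             # grows only linearly beyond delta
print(0.5*diffs**2)                  # quadratic penalizes the t=10 edge about 5x more
```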
Affiliation(s)
- Daniel F Yu
- Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor 48109-2122, USA
222
Meng LJ, Rogers WL, Clinthorne NH. Feasibility Study of Compton Scattering Enhanced Multiple Pinhole Imager for Nuclear Medicine. IEEE TRANSACTIONS ON NUCLEAR SCIENCE 2002; 2:1258-1262. [PMID: 28250473 PMCID: PMC5328635 DOI: 10.1109/nssmic.2002.1239548] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
This paper presents a feasibility study of a Compton scattering enhanced (CSE) multiple-pinhole imaging system for gamma rays with energies of 140 keV or higher. The system consists of a multiple-pinhole collimator, a position-sensitive scintillation detector as used in a standard gamma camera, and a silicon pad detector array inserted between the collimator and the scintillation detector. The multiplexing problem normally associated with multiple-pinhole systems is reduced by using the extra information from detected Compton scattering events. To compensate for the sensitivity loss due to the low probability of detecting Compton scattered events, the proposed detector is designed to collect both Compton scattering and non-Compton events. We show that, with properly selected pinhole spacing, the proposed detector design leads to improved image quality.
Affiliation(s)
- L J Meng
- Department of Radiology, University of Michigan
- W L Rogers
- Department of Radiology, University of Michigan
223
Qi J, Huesman RH. Theoretical study of lesion detectability of MAP reconstruction using computer observers. IEEE TRANSACTIONS ON MEDICAL IMAGING 2001; 20:815-822. [PMID: 11513032 DOI: 10.1109/42.938249] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
The low signal-to-noise ratio (SNR) in emission data has stimulated the development of statistical image reconstruction methods based on the maximum a posteriori (MAP) principle. Experimental examples have shown that statistical methods improve image quality compared to the conventional filtered backprojection (FBP) method. However, these results depend on isolated data sets. Here we study the lesion detectability of MAP reconstruction theoretically, using computer observers. These theoretical results can be applied to different object structures. They show that for a quadratic smoothing prior, the lesion detectability using the prewhitening observer is independent of the smoothing parameter and the neighborhood of the prior, while the nonprewhitening observer exhibits an optimum smoothing point. We also compare the results to those of FBP reconstruction. The comparison shows that for ideal positron emission tomography (PET) systems (where data are true line integrals of the tracer distribution) the MAP reconstruction has a higher SNR for lesion detection than FBP reconstruction due to the modeling of the Poisson noise. For realistic systems, MAP reconstruction further benefits from accurately modeling the physical photon detection process in PET.
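The two computer observers named in this abstract have standard SNR expressions: SNR_PW^2 = df' K^{-1} df for the prewhitening observer and SNR_NPW^2 = (df'df)^2 / (df' K df) for the nonprewhitening one, where df is the signal difference and K the noise covariance. The sketch below evaluates both on an arbitrary toy signal and covariance (all values assumed); by the Cauchy-Schwarz inequality the prewhitening SNR is never smaller.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
df = rng.standard_normal(n)          # lesion signal difference (toy)
A = rng.standard_normal((n, n))
K = A @ A.T + n*np.eye(n)            # symmetric positive-definite noise covariance

# prewhitening observer SNR^2 = df' K^{-1} df
snr_pw2 = df @ np.linalg.solve(K, df)
# nonprewhitening observer SNR^2 = (df' df)^2 / (df' K df)
snr_npw2 = (df @ df)**2 / (df @ K @ df)
print(snr_pw2, snr_npw2)
```

The abstract's finding that the prewhitening SNR is independent of the smoothing parameter, while the nonprewhitening SNR has an optimum, follows from how MAP smoothing enters df and K together.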
Affiliation(s)
- J Qi
- Center for Functional Imaging, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA.
224
La Rivière PJ, Pan X. Nonparametric regression sinogram smoothing using a roughness-penalized Poisson likelihood objective function. IEEE TRANSACTIONS ON MEDICAL IMAGING 2000; 19:773-786. [PMID: 11055801 DOI: 10.1109/42.876303] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
We develop and investigate an approach to tomographic image reconstruction in which nonparametric regression using a roughness-penalized Poisson likelihood objective function is used to smooth each projection independently prior to reconstruction by unapodized filtered backprojection (FBP). As an added generalization, the roughness penalty is expressed in terms of a monotonic transform, known as the link function, of the projections. The approach is compared to shift-invariant projection filtering through the use of a Hanning window as well as to a related nonparametric regression approach that makes use of an objective function based on weighted least squares (WLS) rather than the Poisson likelihood. The approach is found to lead to improvements in resolution-noise tradeoffs over the Hanning filter as well as over the WLS approach. We also investigate the resolution and noise effects of three different link functions: the identity, square root, and logarithm links. The choice of link function is found to influence the resolution uniformity and isotropy properties of the reconstructed images. In particular, in the case of an idealized imaging system with intrinsically uniform and isotropic resolution, the choice of a square root link function yields the desirable outcome of essentially uniform and isotropic resolution in reconstructed images, with noise performance still superior to that of the Hanning filter as well as that of the WLS approach.
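A toy version of the identity-link case described above: smooth one noisy projection by minimizing a roughness-penalized Poisson likelihood with plain gradient descent. The penalty weight, step size, and count levels are assumptions, and the paper's square root and log links are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = 50 + 30*np.sin(np.linspace(0, np.pi, 128))
y = rng.poisson(truth).astype(float)     # one noisy projection (sinogram row)

beta = 5.0                               # roughness weight (assumed)
D = np.diff(np.eye(len(y)), axis=0)      # first-difference roughness operator

def objective(x):                        # negative Poisson log-likelihood + penalty
    return np.sum(x - y*np.log(x)) + beta*np.sum((D @ x)**2)

x = y + 1.0                              # positive starting point
for _ in range(800):                     # small-step projected gradient descent
    grad = 1.0 - y/x + 2*beta*(D.T @ (D @ x))
    x = np.clip(x - 0.02*grad, 1e-6, None)

roughness = lambda v: np.sum((D @ v)**2)
print(objective(x), objective(y + 1.0))
print(roughness(x), roughness(y))
```

The smoothed projection would then be fed to unapodized FBP, replacing the shift-invariant Hanning filtering the abstract compares against.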
MESH Headings
- Algorithms
- Anisotropy
- Artifacts
- Humans
- Image Processing, Computer-Assisted/methods
- Image Processing, Computer-Assisted/statistics & numerical data
- Likelihood Functions
- Models, Statistical
- Phantoms, Imaging/statistics & numerical data
- Poisson Distribution
- Regression Analysis
- Reproducibility of Results
- Statistics, Nonparametric
- Tomography, Emission-Computed/methods
- Tomography, Emission-Computed/statistics & numerical data
- Tomography, Emission-Computed, Single-Photon/methods
- Tomography, Emission-Computed, Single-Photon/statistics & numerical data
Affiliation(s)
- P J La Rivière
- Department of Radiology, The University of Chicago, IL 60637, USA
225
Stayman JW, Fessler JA. Regularization for uniform spatial resolution properties in penalized-likelihood image reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2000; 19:601-615. [PMID: 11026463 DOI: 10.1109/42.870666] [Citation(s) in RCA: 66] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Traditional space-invariant regularization methods in tomographic image reconstruction using penalized-likelihood estimators produce images with nonuniform spatial resolution properties. The local point spread functions that quantify the smoothing properties of such estimators are space-variant, asymmetric, and object-dependent even for space-invariant imaging systems. We propose a new quadratic regularization scheme for tomographic imaging systems that yields increased spatial uniformity and is motivated by the least-squares fitting of a parameterized local impulse response to a desired global response. We have developed computationally efficient methods for PET systems with shift-invariant geometric responses. We demonstrate the increased spatial uniformity of this new method versus conventional quadratic regularization schemes in simulated PET thorax scans.
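The nonuniformity being corrected can be seen directly from the local impulse response of a penalized-likelihood estimator, l_j = (A'WA + beta*R)^{-1} A'WA e_j. The sketch below evaluates it for a toy 1-D Gaussian blur system with random statistical weights (all values assumed); the peak amplitude differs from voxel to voxel, which is the space-variant resolution this paper's regularization design equalizes.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 32
idx = np.arange(n)
A = np.exp(-0.5*((idx[:, None] - idx[None, :])/1.5)**2)  # toy blur "system matrix"
W = np.diag(rng.uniform(0.5, 2.0, n))                    # nonuniform statistical weights
R = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)       # quadratic roughness penalty
R[0, 0] = R[-1, -1] = 1.0

F = A.T @ W @ A                       # Fisher information (toy)
beta = 1.0

def local_ir(j):                      # l_j = (F + beta*R)^{-1} F e_j
    ej = np.zeros(n); ej[j] = 1.0
    return np.linalg.solve(F + beta*R, F @ ej)

center, off = local_ir(n//2), local_ir(8)
print(center.max(), off.max())        # unequal peaks: space-variant resolution
```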
Affiliation(s)
- J W Stayman
- EECS Department, University of Michigan, Ann Arbor, 48109 USA.
226
Chatziioannou A, Qi J, Moore A, Annala A, Nguyen K, Leahy R, Cherry SR. Comparison of 3-D maximum a posteriori and filtered backprojection algorithms for high-resolution animal imaging with microPET. IEEE TRANSACTIONS ON MEDICAL IMAGING 2000; 19:507-512. [PMID: 11021693 DOI: 10.1109/42.870260] [Citation(s) in RCA: 68] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
We have evaluated the performance of two three-dimensional (3-D) reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data, and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing three million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the 60 reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.
Affiliation(s)
- A Chatziioannou
- Crump Institute for Biological Imaging, Department of Molecular and Medical Pharmacology, UCLA School of Medicine, Los Angeles, CA 90095-1770, USA.
227
Qi J, Leahy RM. Resolution and noise properties of MAP reconstruction for fully 3-D PET. IEEE TRANSACTIONS ON MEDICAL IMAGING 2000; 19:493-506. [PMID: 11021692 DOI: 10.1109/42.870259] [Citation(s) in RCA: 178] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
We derive approximate analytical expressions for the local impulse response and covariance of images reconstructed from fully three-dimensional (3-D) positron emission tomography (PET) data using maximum a posteriori (MAP) estimation. These expressions explicitly account for the spatially variant detector response and sensitivity of a 3-D tomograph. The resulting spatially variant impulse response and covariance are computed using 3-D Fourier transforms. A truncated Gaussian distribution is used to account for the effect on the variance of the nonnegativity constraint used in MAP reconstruction. Using Monte Carlo simulations and phantom data from the microPET small animal scanner, we show that the approximations provide reasonably accurate estimates of contrast recovery and covariance of MAP reconstruction for priors with quadratic energy functions. We also describe how these analytical results can be used to achieve near-uniform contrast recovery throughout the reconstructed volume.
Affiliation(s)
- J Qi
- Signal and Image Processing Institute, University of Southern California, Los Angeles 90089-2564, USA.
228
Erdoğan H, Fessler JA. Monotonic algorithms for transmission tomography. IEEE TRANSACTIONS ON MEDICAL IMAGING 1999; 18:801-814. [PMID: 10571385 DOI: 10.1109/42.802758] [Citation(s) in RCA: 141] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
We present a framework for designing fast and monotonic algorithms for transmission tomography penalized-likelihood image reconstruction. The new algorithms are based on paraboloidal surrogate functions for the log likelihood. Due to the form of the log-likelihood function it is possible to find low curvature surrogate functions that guarantee monotonicity. Unlike previous methods, the proposed surrogate functions lead to monotonic algorithms even for the nonconvex log likelihood that arises due to background events, such as scatter and random coincidences. The gradient and the curvature of the likelihood terms are evaluated only once per iteration. Since the problem is simplified at each iteration, the CPU time is less than that of current algorithms which directly minimize the objective, yet the convergence rate is comparable. The simplicity, monotonicity, and speed of the new algorithms are quite attractive. The convergence rates of the algorithms are demonstrated using real and simulated PET transmission scans.
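A scalar caricature of the surrogate idea: for one transmission ray with blank-scan counts b and measured counts y (background ignored here), the log-likelihood term h(l) = y(log b - l) - b*exp(-l) has curvature |h''(l)| = b*exp(-l) bounded by b for l >= 0. Repeatedly maximizing a parabola with that fixed curvature gives a guaranteed-monotone iteration. All values are assumed.

```python
import numpy as np

b, y = 100.0, 20.0                 # blank and transmission counts (assumed)
h  = lambda l: y*(np.log(b) - l) - b*np.exp(-l)   # transmission log-likelihood term
dh = lambda l: -y + b*np.exp(-l)

c = b                              # curvature bound: |h''(l)| = b*exp(-l) <= b, l >= 0
l = 0.0
vals = [h(l)]
for _ in range(200):
    l += dh(l)/c                   # maximizer of the quadratic surrogate at l
    vals.append(h(l))

print(l, np.log(b/y))              # iterates approach the ML line integral log(b/y)
```

The paper's algorithms apply this idea jointly over all rays (a paraboloid), with sharper curvature choices and ordered updates for speed.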
Affiliation(s)
- H Erdoğan
- IBM T.J. Watson Research Labs, Yorktown Heights, NY 10598, USA.
229
Yavuz M, Fessler JA. Penalized-likelihood estimators and noise analysis for randoms-precorrected PET transmission scans. IEEE TRANSACTIONS ON MEDICAL IMAGING 1999; 18:665-674. [PMID: 10534049 DOI: 10.1109/42.796280] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
This paper analyzes and compares image reconstruction methods based on practical approximations to the exact log likelihood of randoms-precorrected positron emission tomography (PET) measurements. The methods apply to both emission and transmission tomography, however, in this paper we focus on transmission tomography. The results of experimental PET transmission scans and variance approximations demonstrate that the shifted Poisson (SP) method avoids the systematic bias of the conventional data-weighted least squares (WLS) method and leads to significantly lower variance than conventional statistical methods based on the log likelihood of the ordinary Poisson (OP) model. We develop covariance approximations to analyze the propagation of noise from attenuation maps into emission images via the attenuation correction factors (ACF's). Empirical pixel and region variances from real transmission data agree closely with the analytical predictions. Both the approximations and the empirical results show that the performance differences between the OP model and SP model are even larger, when considering noise propagation from the transmission images into the final emission images, than the differences in the attenuation maps themselves.
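The moment matching behind the shifted Poisson (SP) model can be checked numerically: precorrected data y = prompts - delays has mean s but variance s + 2r, so it is not Poisson, while y + 2r has matched first and second moments with Poisson(s + 2r). The rates below are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(4)
s, r = 30.0, 10.0                        # true trues rate and randoms rate (assumed)
prompts = rng.poisson(s + r, 200_000)
delays  = rng.poisson(r, 200_000)
y = prompts - delays                     # randoms-precorrected measurement

print(y.mean(), y.var())                 # ~ s, but variance ~ s + 2r: not Poisson
shifted = y + 2*r                        # SP model: treat y + 2r as Poisson(s + 2r)
print(shifted.mean(), shifted.var())     # mean ~ variance, Poisson-like
```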
Affiliation(s)
- M Yavuz
- GE Research and Development Center, Niskayuna, NY 12309, USA
230
Qi J, Leahy RM. A theoretical study of the contrast recovery and variance of MAP reconstructions from PET data. IEEE TRANSACTIONS ON MEDICAL IMAGING 1999; 18:293-305. [PMID: 10385287 DOI: 10.1109/42.768839] [Citation(s) in RCA: 56] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
We examine the spatial resolution and variance properties of PET images reconstructed using maximum a posteriori (MAP) or penalized-likelihood methods. Resolution is characterized by the contrast recovery coefficient (CRC) of the local impulse response. Simplified approximate expressions are derived for the local impulse response CRC's and variances for each voxel. Using these results we propose a practical scheme for selecting spatially variant smoothing parameters to optimize lesion detectability through maximization of the local CRC-to-noise ratio in the reconstructed image.
Affiliation(s)
- J Qi
- Signal and Image Processing Institute, University of Southern California, Los Angeles 90089-2564, USA.
231
Fessler JA, Booth SD. Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1999; 8:688-699. [PMID: 18267484 DOI: 10.1109/83.760336] [Citation(s) in RCA: 83] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
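A small sketch of the circulant-preconditioning point: for a weighted least-squares Hessian H = C'WC + beta*I with a circulant blur C but nonuniform weights W, a circulant preconditioner built from the mean weight is diagonal in the Fourier domain and still accelerates CG noticeably. All values are assumed; the paper's preconditioners go further to handle strong shift-variance that this simple construction cannot.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 128
kernel = np.zeros(n); kernel[[0, 1, -1]] = [0.5, 0.25, 0.25]   # circulant blur
Ck = np.fft.fft(kernel)                   # eigenvalues of C (real, symmetric kernel)
w = rng.uniform(0.5, 1.5, n)              # nonuniform statistical weights
beta = 0.1

def H(x):                                 # H x = C' diag(w) C x + beta x
    Cx = np.fft.ifft(Ck*np.fft.fft(x)).real
    return np.fft.ifft(np.conj(Ck)*np.fft.fft(w*Cx)).real + beta*x

Hk = np.abs(Ck)**2 * w.mean() + beta      # circulant surrogate: mean weight
def Minv(r):                              # preconditioner solve via FFT
    return np.fft.ifft(np.fft.fft(r)/Hk).real

def pcg(b, prec, iters):                  # standard preconditioned CG
    x = np.zeros(n); r = b.copy(); z = prec(r); p = z.copy(); res = []
    for _ in range(iters):
        Hp = H(p)
        alpha = (r @ z)/(p @ Hp)
        x = x + alpha*p
        r_new = r - alpha*Hp
        z_new = prec(r_new)
        p = z_new + ((r_new @ z_new)/(r @ z))*p
        r, z = r_new, z_new
        res.append(np.linalg.norm(r))
    return x, res

b = rng.standard_normal(n)
_, res_plain = pcg(b, lambda r: r, 10)    # unpreconditioned CG
_, res_circ  = pcg(b, Minv, 10)           # circulant-preconditioned CG
print(res_plain[-1], res_circ[-1])
```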
Affiliation(s)
- J A Fessler
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109-2122, USA.
232
Fessler JA, Ficaro EP, Clinthorne NH, Lange K. Grouped-coordinate ascent algorithms for penalized-likelihood transmission image reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 1997; 16:166-175. [PMID: 9101326 DOI: 10.1109/42.563662] [Citation(s) in RCA: 61] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
This paper presents a new class of algorithms for penalized-likelihood reconstruction of attenuation maps from low-count transmission scans. We derive the algorithms by applying to the transmission log-likelihood a version of the convexity technique developed by De Pierro for emission tomography. The new class includes the single-coordinate ascent (SCA) algorithm and Lange's convex algorithm for transmission tomography as special cases. The new grouped-coordinate ascent (GCA) algorithms in the class overcome several limitations associated with previous algorithms. 1) Fewer exponentiations are required than in the transmission maximum likelihood-expectation maximization (ML-EM) algorithm or in the SCA algorithm. 2) The algorithms intrinsically accommodate nonnegativity constraints, unlike many gradient-based methods. 3) The algorithms are easily parallelizable, unlike the SCA algorithm and perhaps line-search algorithms. We show that the GCA algorithms converge faster than the SCA algorithm, even on conventional workstations. An example from a low-count positron emission tomography (PET) transmission scan illustrates the method.
Affiliation(s)
- J A Fessler
- University of Michigan, Ann Arbor 48109-2122, USA.
233
Abbey CK, Barrett HH, Wilson DW. Observer signal-to-noise ratios for the ML-EM algorithm. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 1996; 2712:47-58. [PMID: 20865139 DOI: 10.1117/12.236860] [Citation(s) in RCA: 33] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
We have used an approximate method developed by Barrett, Wilson, and Tsui for finding the ensemble statistics of the maximum-likelihood expectation-maximization algorithm to compute task-dependent figures of merit as a function of stopping point. For comparison, human-observer performance was assessed through conventional psychophysics. The results of our studies show the dependence of the optimal stopping point of the algorithm on the detection task. Comparisons of human and various model observers show that a channelized Hotelling observer with overlapping channels is the best predictor of human performance.
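A toy channelized Hotelling observer with overlapping Gaussian channels, in the spirit of the observer this abstract identifies. All shapes, amplitudes, and channel widths below are assumptions for illustration; they are not the study's channels or images.

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_img = 64, 2000
x = np.arange(n) - n/2

# overlapping Gaussian channels of increasing width (1-D toy, assumed)
widths = [2.0, 4.0, 8.0, 16.0]
U = np.stack([np.exp(-0.5*(x/s)**2) for s in widths], axis=1)   # n x 4

signal = 3.0*np.exp(-0.5*(x/3.0)**2)          # toy lesion profile
noise = rng.standard_normal((n_img, n))
absent  = 50 + noise[:n_img//2]               # signal-absent images
present = 50 + signal + noise[n_img//2:]      # signal-present images

va, vp = absent @ U, present @ U              # channel outputs (dimension reduction)
S = 0.5*(np.cov(va.T) + np.cov(vp.T))         # intra-class channel covariance
w = np.linalg.solve(S, vp.mean(axis=0) - va.mean(axis=0))   # Hotelling template
ta, tp = va @ w, vp @ w                       # test statistics
snr = (tp.mean() - ta.mean())/np.sqrt(0.5*(ta.var() + tp.var()))
print(snr)
```

The channelization makes the covariance a small matrix that can be estimated and inverted from modest sample sizes, which is what makes such model observers practical.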
Affiliation(s)
- Craig K Abbey
- Program in Applied Mathematics, University of Arizona, Tucson, AZ 85724