101
Abstract
In emission tomography statistically based iterative methods can improve image quality relative to analytic image reconstruction through more accurate physical and statistical modelling of high-energy photon production and detection processes. Continued exponential improvements in computing power, coupled with the development of fast algorithms, have made routine use of iterative techniques practical, resulting in their increasing popularity in both clinical and research environments. Here we review recent progress in developing statistically based iterative techniques for emission computed tomography. We describe the different formulations of the emission image reconstruction problem and their properties. We then describe the numerical algorithms that are used for optimizing these functions and illustrate their behaviour using small scale simulations.
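To make the class of methods surveyed here concrete, the sketch below shows one maximum-likelihood expectation-maximization (MLEM) update for Poisson emission data. It is a generic textbook formulation rather than code from the review; the dense system matrix A, count vector y, and current image estimate lam are illustrative placeholders.

import numpy as np

def mlem_update(A, y, lam, eps=1e-12):
    # One MLEM iteration: forward project the current image, compare with
    # the measured counts, backproject the ratio, and normalize by the
    # sensitivity image (the column sums of A).
    proj = A @ lam
    ratio = y / np.maximum(proj, eps)
    back = A.T @ ratio
    sens = A.T @ np.ones_like(y)
    return lam * back / np.maximum(sens, eps)

Ordered-subsets and regularized (MAP) variants discussed in the review modify this basic update rather than replace it.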
Affiliation(s)
- Jinyi Qi
- Department of Biomedical Engineering, University of California, Davis, CA 95616, USA

102
Qi J, Huesman RH. Theoretical study of penalized-likelihood image reconstruction for region of interest quantification. IEEE Transactions on Medical Imaging 2006; 25:640-8. [PMID: 16689267] [DOI: 10.1109/tmi.2006.873223]
Abstract
Region of interest (ROI) quantification is an important task in emission tomography (e.g., positron emission tomography and single photon emission computed tomography). It is essential for exploring clinical factors such as tumor activity, growth rate, and the efficacy of therapeutic interventions. Statistical image reconstruction methods based on the penalized maximum-likelihood (PML) or maximum a posteriori principle have been developed for emission tomography to deal with the low signal-to-noise ratio of the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the regularization parameter in PML reconstruction controls the resolution and noise tradeoff and, hence, affects ROI quantification. In this paper, we theoretically analyze the performance of ROI quantification in PML reconstructions. Building on previous work, we derive simplified theoretical expressions for the bias, variance, and ensemble mean-squared-error (EMSE) of the estimated total activity in an ROI that is surrounded by a uniform background. When the mean and covariance matrix of the activity inside the ROI are known, the theoretical expressions are readily computable and allow for fast evaluation of image quality for ROI quantification with different regularization parameters. The optimum regularization parameter can then be selected to minimize the EMSE. Computer simulations are conducted for small ROIs with variable uniform uptake. The results show that the theoretical predictions match the Monte Carlo results reasonably well.
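The resolution-noise tradeoff described above is conveniently summarized by a bias-variance decomposition of the ensemble mean-squared-error; the notation below is a generic restatement, not the paper's exact expressions, with \hat{a}_\beta denoting the estimated total ROI activity at regularization parameter \beta and a the true value:

\mathrm{EMSE}(\beta) \;=\; \mathbb{E}\big[(\hat{a}_\beta - a)^2\big] \;=\; \mathrm{bias}^2(\beta) + \mathrm{var}(\beta),
\qquad \beta^{\ast} \;=\; \arg\min_{\beta}\ \mathrm{EMSE}(\beta),

where the expectation runs over both the data noise and the assumed variability of the activity inside the ROI.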
Affiliation(s)
- Jinyi Qi
- Department of Biomedical Engineering, University of California, Davis, CA 95616, USA.

103
Allain M, Idier J, Goussard Y. On global and local convergence of half-quadratic algorithms. IEEE Transactions on Image Processing 2006; 15:1130-42. [PMID: 16671294] [DOI: 10.1109/tip.2005.864173]
Abstract
This paper provides original results on the global and local convergence properties of half-quadratic (HQ) algorithms resulting from the Geman and Yang (GY) and Geman and Reynolds (GR) primal-dual constructions. First, we show that the convergence domain of the GY algorithm can be extended with the benefit of an improved convergence rate. Second, we provide a precise comparison of the convergence rates for both algorithms. This analysis shows that the GR form does not benefit from a better convergence rate in general. Moreover, the GY iterates often take advantage of a low cost implementation. In this case, the GY form is usually faster than the GR form from the CPU time viewpoint.
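As a reminder of the structure being compared (stated schematically, with scale factors omitted and not in the authors' notation), the Geman-Yang and Geman-Reynolds constructions augment an edge-preserving potential \phi with an auxiliary variable b so that the image update becomes quadratic:

\text{GY (additive):}\ \ \phi(t) \;=\; \min_b \Big[\tfrac{1}{2}(t-b)^2 + \psi(b)\Big],
\qquad
\text{GR (multiplicative):}\ \ \phi(t) \;=\; \min_b \Big[\tfrac{1}{2}\,b\,t^2 + \xi(b)\Big].

Each half-quadratic iteration alternates a closed-form update of the auxiliary variables with the minimization of the resulting quadratic in the image.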
Affiliation(s)
- Marc Allain
- Institut de Recherche en Communications et en Cybernétique de Nantes (IRCCyN), BP 92 101-44321 Nantes Cedex 03, France.

104
Ying L, Liang ZP, Munson DC, Koetter R, Frey BJ. Unwrapping of MR phase images using a Markov random field model. IEEE Transactions on Medical Imaging 2006; 25:128-36. [PMID: 16398421] [DOI: 10.1109/tmi.2005.861021]
Abstract
Phase unwrapping is an important problem in many magnetic resonance imaging applications, such as field mapping and flow imaging. The challenge in two-dimensional phase unwrapping lies in distinguishing jumps due to phase wrapping from those due to noise and/or abrupt variations in the actual function. This paper addresses the problem using a Markov random field to model the true phase function, whose parameters are determined by maximizing the a posteriori probability. To reduce the computational complexity of the optimization procedure, an efficient algorithm is also proposed for parameter estimation, using a series of dynamic programming problems connected by iterated conditional modes. The proposed method has been tested with both simulated and experimental data, yielding better results than some state-of-the-art methods (e.g., the popular least-squares method) in handling noisy phase images with rapid phase variations.
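For readers unfamiliar with the wrapping operation, the toy sketch below performs naive one-dimensional unwrapping by forcing successive phase differences into (-pi, pi]. It is only a baseline illustration of the problem, not the Markov random field method of this paper, which targets precisely the two-dimensional, noisy cases where such simple path-following unwrapping fails.

import numpy as np

def unwrap_1d(psi):
    # Add integer multiples of 2*pi so that successive differences fall in
    # (-pi, pi]; noise or true jumps larger than pi defeat this simple rule.
    psi = np.asarray(psi, dtype=float)
    d = np.diff(psi)
    corrections = -2.0 * np.pi * np.round(d / (2.0 * np.pi))
    return psi + np.concatenate(([0.0], np.cumsum(corrections)))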
Affiliation(s)
- Lei Ying
- Department of Electrical Engineering and Computer Science, University of Wisconsin, Milwaukee, WI 53201, USA.

105
Chang TC, Allebach JP. Quantization of accumulated diffused errors in error diffusion. IEEE Transactions on Image Processing 2005; 14:1960-76. [PMID: 16370451] [DOI: 10.1109/tip.2005.859372]
Abstract
Due to its high image quality and moderate computational complexity, error diffusion is a popular halftoning algorithm for use with inkjet printers. However, error diffusion is an inherently serial algorithm that requires buffering a full row of accumulated diffused error (ADE) samples. For the best performance when the algorithm is implemented in hardware, the ADE data should be stored on the chip on which the error diffusion algorithm is implemented. However, this may result in an unacceptable hardware cost. In this paper, we examine the use of quantization of the ADE to reduce the amount of data that must be stored. We consider both uniform and nonuniform quantizers. For the nonuniform quantizers, we build on the concept of tone-dependency in error diffusion, by proposing several novel feature-dependent quantizers that yield improved image quality at a given bit rate, compared to memoryless quantizers. The optimal design of these quantizers is coupled with the design of the tone-dependent parameters associated with error diffusion. This is done via a combination of the classical Lloyd-Max algorithm and the training framework for tone-dependent error diffusion. Our results show that 4-bit uniform quantization of the ADE yields the same halftone quality as error diffusion without quantization of the ADE. At rates that vary from 2 to 3 bits per pixel, depending on the selectivity of the feature on which the quantizer depends, the feature-dependent quantizers achieve essentially the same quality as 4-bit uniform quantization.
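The sketch below is a plain Floyd-Steinberg error-diffusion loop with an optional uniform quantization of the error carried to the next row, loosely mimicking the idea of quantizing the accumulated diffused error (ADE). The weights, threshold, and quantizer are generic placeholders, not the tone- and feature-dependent designs developed in this paper.

import numpy as np

def error_diffuse(img, ade_bits=None):
    # img: grayscale array with values in [0, 1]; returns a binary halftone.
    h, w = img.shape
    err = np.zeros((h + 1, w + 2))         # padded buffer of diffused error
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            u = img[i, j] + err[i, j + 1]  # pixel value plus accumulated error
            out[i, j] = 1.0 if u >= 0.5 else 0.0
            q = u - out[i, j]              # quantization error to diffuse
            err[i, j + 2] += q * 7.0 / 16.0      # right
            err[i + 1, j] += q * 3.0 / 16.0      # below left
            err[i + 1, j + 1] += q * 5.0 / 16.0  # below
            err[i + 1, j + 2] += q * 1.0 / 16.0  # below right
        if ade_bits is not None:
            # Coarsely quantize the error row stored for the next scan line,
            # a stand-in for quantizing the ADE buffer kept on-chip.
            step = 1.0 / (1 << ade_bits)
            err[i + 1] = np.round(err[i + 1] / step) * step
    return out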
Affiliation(s)
- Ti-Chiun Chang
- Siemens Corporate Research, Inc., Princeton, NJ 08540, USA.

106
Abstract
Statistically based iterative image reconstruction methods have been developed for emission tomography. One important component in iterative image reconstruction is the system matrix, which defines the mapping from the image space to the data space. Several groups have demonstrated that an accurate system matrix can improve image quality in both single photon emission computed tomography (SPECT) and positron emission tomography (PET). While iterative methods are amenable to arbitrary and complicated system models, the true system response is never known exactly. In practice, one also has to sacrifice the accuracy of the system model because of limited computing and imaging resources. This paper analyses the effect of errors in the system matrix on iterative image reconstruction methods that are based on the maximum a posteriori principle. We derived an analytical expression for calculating artefacts in a reconstructed image that are caused by errors in the system matrix using the first-order Taylor series approximation. The theoretical expression is used to determine the required minimum accuracy of the system matrix in emission tomography. Computer simulations show that the theoretical results work reasonably well in low-noise situations.
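To make the first-order argument concrete in a simpler setting than the paper's, consider a penalized weighted-least-squares caricature with estimate \hat{x} = (A^T W A + \beta R)^{-1} A^T W y. Perturbing the system matrix by \Delta A and keeping only first-order terms gives an artefact of roughly

\delta\hat{x} \;\approx\; (A^T W A + \beta R)^{-1}\Big[\Delta A^T W\,(y - A\hat{x}) \;-\; A^T W\,\Delta A\,\hat{x}\Big].

This is only an illustrative analogue of the Taylor-series reasoning; the paper derives the corresponding expression for the maximum a posteriori estimator with Poisson data.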
Affiliation(s)
- Jinyi Qi
- Department of Biomedical Engineering, University of California, Davis, CA 95616, USA.

107
Kamasak ME, Bouman CA, Morris ED, Sauer K. Direct reconstruction of kinetic parameter images from dynamic PET data. IEEE Transactions on Medical Imaging 2005; 24:636-50. [PMID: 15889551] [DOI: 10.1109/tmi.2005.845317]
Abstract
Our goal in this paper is the estimation of kinetic model parameters for each voxel corresponding to a dense three-dimensional (3-D) positron emission tomography (PET) image. Typically, the activity images are first reconstructed from PET sinogram frames at each measurement time, and then the kinetic parameters are estimated by fitting a model to the reconstructed time-activity response of each voxel. However, this "indirect" approach to kinetic parameter estimation tends to reduce signal-to-noise ratio (SNR) because of the requirement that the sinogram data be divided into individual time frames. In 1985, Carson and Lange proposed, but did not implement, a method based on the expectation-maximization (EM) algorithm for direct parametric reconstruction. The approach is "direct" because it estimates the optimal kinetic parameters directly from the sinogram data, without an intermediate reconstruction step. However, direct voxel-wise parametric reconstruction remained a challenge due to the unsolved complexities of inversion and spatial regularization. In this paper, we demonstrate and evaluate a new and efficient method for direct voxel-wise reconstruction of kinetic parameter images using all frames of the PET data. The direct parametric image reconstruction is formulated in a Bayesian framework, and uses the parametric iterative coordinate descent (PICD) algorithm to solve the resulting optimization problem. The PICD algorithm is computationally efficient and is implemented with spatial regularization in the domain of the physiologically relevant parameters. Our experimental simulations of a rat head imaged in a working small animal scanner indicate that direct parametric reconstruction can substantially reduce root-mean-squared error (RMSE) in the estimation of kinetic parameters, as compared to indirect methods, without appreciably increasing computation.
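As background for the "direct" formulation, a commonly used single-tissue compartment model writes the tracer concentration in each voxel as the plasma input convolved with an exponential, and the direct approach fits those parameters through the tomographic forward model rather than through reconstructed frames. The notation below is generic (not necessarily the compartment model or penalty used in the paper):

C_j(t) \;=\; K_1^{(j)}\, e^{-k_2^{(j)} t} \ast C_p(t),
\qquad
y_{m,i} \;\sim\; \mathrm{Poisson}\!\Big(\sum_j a_{ij} \int_{t_m}^{t_{m+1}} C_j(t)\,dt \;+\; r_{m,i}\Big),

so the kinetic parameters of every voxel are estimated directly from all sinogram frames y rather than from voxel-wise time-activity curves.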
Affiliation(s)
- M E Kamasak
- School of Electrical and Computer Engineering, Purdue University, 1285 EE Building, PO 268, West Lafayette, IN 47907, USA.

108
Anderson JMM, Srinivasan R, Mair BA, Votaw JR. Accelerated penalized weighted least-squares and maximum likelihood algorithms for reconstructing transmission images from PET transmission data. IEEE Transactions on Medical Imaging 2005; 24:337-351. [PMID: 15754984] [DOI: 10.1109/tmi.2004.842453]
Abstract
We present penalized weighted least-squares (PWLS) and penalized maximum-likelihood (PML) methods for reconstructing transmission images from positron emission tomography transmission data. First, we view the problem of minimizing the weighted least-squares (WLS) and maximum likelihood objective functions as a sequence of nonnegative least-squares minimization problems. This viewpoint follows from using certain quadratic functions as surrogate functions for the WLS and maximum likelihood objective functions. Second, we construct surrogate functions for a class of penalty functions that yield closed form expressions for the iterates of the PWLS and PML algorithms. Due to the slow convergence of the PWLS and PML algorithms, accelerated versions of them are developed that are theoretically guaranteed to monotonically decrease their respective objective functions. In experiments using real phantom data, the PML images produced the most accurate attenuation correction factors. On the other hand, the PWLS images produced images with the highest levels of contrast for low-count data.
Affiliation(s)
- J M M Anderson
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA.

109
Soussen C, Mohammad-Djafari A. Polygonal and polyhedral contour reconstruction in computed tomography. IEEE Transactions on Image Processing 2004; 13:1507-1523. [PMID: 15540458] [DOI: 10.1109/tip.2004.836159]
Abstract
This paper is about three-dimensional (3-D) reconstruction of a binary image from its X-ray tomographic data. We study the special case of a compact uniform polyhedron totally included in a uniform background and directly perform the polyhedral surface estimation. We formulate this problem as a nonlinear inverse problem using the Bayesian framework. Vertex estimation is done without using a voxel approximation of the 3-D image. It is based on the construction and optimization of a regularized criterion that accounts for surface smoothness. We investigate original deterministic local algorithms, based on the exact computation of the line projections, their update, and their derivatives with respect to the vertex coordinates. Results are first derived in the two-dimensional (2-D) case, which consists of reconstructing a 2-D object of deformable polygonal contour from its tomographic data. Then, we investigate the 3-D extension, which requires technical adaptations. Simulation results illustrate the performance of the polygonal and polyhedral reconstruction algorithms in terms of quality and computation time.
Affiliation(s)
- Charles Soussen
- Laboratoire des Signaux et Systèmes, Centre National de la Recherche Scientifique, Supélec, Gif-sur-Yvette, France.

110
Chang JH, Anderson JMM, Votaw JR. Regularized image reconstruction algorithms for positron emission tomography. IEEE Transactions on Medical Imaging 2004; 23:1165-75. [PMID: 15377125] [DOI: 10.1109/tmi.2004.831224]
Abstract
We develop algorithms for obtaining regularized estimates of emission means in positron emission tomography. The first algorithm iteratively minimizes a penalized maximum-likelihood (PML) objective function. It is based on standard de-coupled surrogate functions for the ML objective function and de-coupled surrogate functions for a certain class of penalty functions. As desired, the PML algorithm guarantees nonnegative estimates and monotonically decreases the PML objective function with increasing iterations. The second algorithm is based on an iteration dependent, de-coupled penalty function that introduces smoothing while preserving edges. For the purpose of making comparisons, the MLEM algorithm and a penalized weighted least-squares algorithm were implemented. In experiments using synthetic data and real phantom data, it was found that, for a fixed level of background noise, the contrast in the images produced by the proposed algorithms was the most accurate.
Affiliation(s)
- Ji-Ho Chang
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA

111
Sotthivirat S, Fessler JA. Penalized-likelihood image reconstruction for digital holography. Journal of the Optical Society of America A, Optics, Image Science, and Vision 2004; 21:737-750. [PMID: 15139426] [DOI: 10.1364/josaa.21.000737]
Abstract
Conventional numerical reconstruction for digital holography using a filter applied in the spatial-frequency domain to extract the primary image may yield suboptimal image quality because of the loss in high-frequency components and interference from other undesirable terms of a hologram. We propose a new numerical reconstruction approach using a statistical technique. This approach reconstructs the complex field of the object from the real-valued hologram intensity data. Because holographic image reconstruction is an ill-posed problem, our statistical technique is based on penalized-likelihood estimation. We develop a Poisson statistical model for this problem and derive an optimization transfer algorithm that monotonically decreases the cost function at each iteration. Simulation results show that our statistical technique has the potential to improve image quality in digital holography relative to conventional reconstruction techniques.
Affiliation(s)
- Saowapak Sotthivirat
- National Electronics and Computer Development Center, National Science and Technology Development Agency, Ministry of Science and Technology, Klong Luang, Pathumthani 12120, Thailand.

112
Qi J. Analysis of lesion detectability in Bayesian emission reconstruction with nonstationary object variability. IEEE Transactions on Medical Imaging 2004; 23:321-329. [PMID: 15027525] [DOI: 10.1109/tmi.2004.824239]
Abstract
Bayesian methods based on the maximum a posteriori principle (also called penalized maximum-likelihood methods) have been developed to improve image quality in emission tomography. To explore the full potential of Bayesian reconstruction for lesion detection, we derive simplified theoretical expressions that allow fast evaluation of the detectability of a lesion in Bayesian reconstruction. This work builds on recent progress in the theoretical analysis of the image properties of statistical reconstructions and on the development of numerical observers. We explicitly model the nonstationary variation of the lesion and background without assuming that they are locally stationary. The results can be used to choose the optimum prior parameters for the maximum lesion detectability. The theoretical results are validated using Monte Carlo simulations. The comparisons show good agreement between the theoretical predictions and the Monte Carlo results. We also demonstrate that the lesion detectability can be reliably estimated using one noisy data set.
Affiliation(s)
- Jinyi Qi
- Department of Nuclear Medicine and Functional Imaging, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA.

113
Villain N, Goussard Y, Idier J, Allain M. Three-dimensional edge-preserving image enhancement for computed tomography. IEEE Transactions on Medical Imaging 2003; 22:1275-1287. [PMID: 14552581] [DOI: 10.1109/tmi.2003.817767]
Abstract
Computed tomography (CT) images exhibit a variable amount of noise and blur, depending on the physical characteristics of the apparatus and the selected reconstruction method. Standard algorithms tend to favor reconstruction speed over resolution, thereby jeopardizing applications where accuracy is critical. In this paper, we propose to enhance CT images by applying half-quadratic edge-preserving image restoration (or deconvolution) to them. This approach may be used with virtually any CT scanner, provided the overall point-spread function can be roughly estimated. In image restoration, Markov random fields (MRFs) have proven to be very flexible a priori models and to yield impressive results with edge-preserving penalization, but their implementation in clinical routine is limited because they are often viewed as complex and time consuming. For these practical reasons, we focused on numerical efficiency and developed a fast implementation based on a simple three-dimensional MRF model with convex edge-preserving potentials. The resulting restoration method provides good recovery of sharp discontinuities while using convex duality principles yields fairly simple implementation of the optimization. Further reduction of the computational load can be achieved if the point-spread function is assumed to be separable. Synthetic and real data experiments indicate that the method provides significant improvements over standard reconstruction techniques and compares well with convex-potential Markov-based reconstruction, while being more flexible and numerically efficient.
Affiliation(s)
- Nicolas Villain
- Biomedical Engineering Institute, Ecole Polytechnique, Montreal, QC H3C 3A7, Canada

114
Lee SJ. Ordered subsets Bayesian tomographic reconstruction using 2-D smoothing splines as priors. Computer Methods and Programs in Biomedicine 2003; 72:27-42. [PMID: 12850295] [DOI: 10.1016/s0169-2607(02)00112-8]
Abstract
The ordered subsets expectation maximization (OS-EM) algorithm has enjoyed considerable interest for accelerating the well-known EM algorithm for emission tomography. The OS principle has also been applied to several regularized EM algorithms, such as nonquadratic convex minimization-based maximum a posteriori (MAP) algorithms. However, most of these methods have not been as practical as OS-EM due to their complex optimization methods and difficulties in hyperparameter estimation. We note here that, by relaxing the requirement of imposing sharp edges and using instead useful quadratic spline priors, solutions are much easier to compute, and hyperparameter calculation becomes less of a problem. In this work, we use two-dimensional smoothing splines as priors and apply a method of iterated conditional modes for the optimization. In this case, step sizes or line-search algorithms necessary for gradient-based descent methods are avoided. We also accelerate the resulting algorithm using the OS approach and propose a principled way of scaling smoothing parameters to retain the strength of smoothing for different subset numbers. Our experimental results show that the OS approach applied to our quadratic MAP algorithms provides a considerable acceleration while retaining the advantages of quadratic spline priors.
Affiliation(s)
- Soo-Jin Lee
- Department of Electronic Engineering, Paichai University, 439-6 Doma 2-Dong, Seo-Ku, 302-735 Taejon, South Korea.

115
Qi J. Theoretical evaluation of the detectability of random lesions in Bayesian emission reconstruction. Information Processing in Medical Imaging: Proceedings of the ... Conference 2003; 18:354-65. [PMID: 15344471] [DOI: 10.1007/978-3-540-45087-0_30]
Abstract
Detecting cancerous lesions is an important task in positron emission tomography (PET). Bayesian methods based on the maximum a posteriori principle (also called penalized maximum-likelihood methods) have been developed to deal with the low signal-to-noise ratio in the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the prior parameters in Bayesian reconstruction control the resolution and noise trade-off and hence affect the detectability of lesions in reconstructed images. Bayesian reconstructions are difficult to analyze because the resolution and noise properties are nonlinear and object-dependent. Most research has been based on Monte Carlo simulations, which are very time consuming. Building on recent progress in the theoretical analysis of image properties of statistical reconstructions and the development of numerical observers, here we develop a theoretical approach for fast computation of lesion detectability in Bayesian reconstruction. The results can be used to choose the optimum hyperparameter for the maximum lesion detectability. New in this work is the use of theoretical expressions that explicitly model the statistical variation of the lesion and background without assuming that the object variation is (locally) stationary. The theoretical results are validated using Monte Carlo simulations. The comparisons show good agreement between the theoretical predictions and the Monte Carlo results.
Affiliation(s)
- Jinyi Qi
- Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA

116
Siltanen S, Kolehmainen V, Järvenpää S, Kaipio JP, Koistinen P, Lassas M, Pirttilä J, Somersalo E. Statistical inversion for medical x-ray tomography with few radiographs: I. General theory. Phys Med Biol 2003; 48:1437-63. [PMID: 12812457] [DOI: 10.1088/0031-9155/48/10/314]
Abstract
In x-ray tomography, the structure of a three-dimensional body is reconstructed from a collection of projection images of the body. Medical CT imaging does this using an extensive set of projections from all around the body. However, in many practical imaging situations only a small number of truncated projections are available from a limited angle of view. Three-dimensional imaging using such data is complicated for two reasons: (i) typically, sparse projection data do not contain sufficient information to completely describe the 3D body, and (ii) traditional CT reconstruction algorithms, such as filtered backprojection, do not work well when applied to few irregularly spaced projections. Concerning (i), existing results about the information content of sparse projection data are reviewed and discussed. Concerning (ii), it is shown how Bayesian inversion methods can be used to incorporate a priori information into the reconstruction method, leading to improved image quality over traditional methods. Based on the discussion, a low-dose three-dimensional x-ray imaging modality is described.
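The Bayesian inversion framework referred to here combines the measurement likelihood with an a priori density over images and reports a point estimate from the posterior. Written generically, and not in the paper's notation, for a linear projection model with additive Gaussian noise:

\pi(x \mid y) \;\propto\; \exp\!\Big(-\tfrac{1}{2\sigma^{2}}\,\|y - Ax\|^{2}\Big)\,\pi_{\mathrm{pr}}(x),
\qquad
\hat{x}_{\mathrm{MAP}} \;=\; \arg\max_{x \ge 0}\ \pi(x \mid y),

where the prior \pi_{\mathrm{pr}} carries the structural information that compensates for the sparse, limited-angle projection data.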
Affiliation(s)
- S Siltanen
- Instrumentarium Corp. Imaging Division, PO Box 20, FIN-04301 Tuusula, Finland

117
Frese T, Rouze NC, Bouman CA, Sauer K, Hutchins GD. Quantitative comparison of FBP, EM, and Bayesian reconstruction algorithms for the IndyPET scanner. IEEE Transactions on Medical Imaging 2003; 22:258-276. [PMID: 12716002] [DOI: 10.1109/tmi.2002.808353]
Abstract
We quantitatively compare filtered backprojection (FBP), expectation-maximization (EM), and Bayesian reconstruction algorithms as applied to the IndyPET scanner--a dedicated research scanner which has been developed for small and intermediate field of view imaging applications. In contrast to previous approaches that rely on Monte Carlo simulations, a key feature of our investigation is the use of an empirical system kernel determined from scans of line source phantoms. This kernel is incorporated into the forward model of the EM and Bayesian algorithms to achieve resolution recovery. Three data sets are used, data collected on the IndyPET scanner using a bar phantom and a Hoffman three-dimensional brain phantom, and simulated data containing a hot lesion added to a uniform background. Reconstruction quality is analyzed quantitatively in terms of bias-variance measures (bar phantom) and mean square error (lesion phantom). We observe that without use of the empirical system kernel, the FBP, EM, and Bayesian algorithms give similar performance. However, with the inclusion of the empirical kernel, the iterative algorithms provide superior reconstructions compared with FBP, both in terms of visual quality and quantitative measures. Furthermore, Bayesian methods outperform EM. We conclude that significant improvements in reconstruction quality can be realized by combining accurate models of the system response with Bayesian reconstruction algorithms.
Affiliation(s)
- Thomas Frese
- McKinsey & Company, 21 South Clark Street, Suite 2900, Chicago, IL 60603, USA.

118
Chang TC, Allebach JP. Memory efficient error diffusion. IEEE Transactions on Image Processing 2003; 12:1352-1366. [PMID: 18244693] [DOI: 10.1109/tip.2003.818214]
Abstract
Because of its good image quality and moderate computational requirements, error diffusion has become a popular halftoning solution for desktop printers, especially inkjet printers. By making the weights and thresholds tone-dependent and using a predesigned halftone bitmap for tone-dependent threshold modulation, it is possible to achieve image quality very close to that obtained with far more computationally complex iterative methods. However, the ability to implement error diffusion in very low cost or large format products is hampered by the requirement to store the tone-dependent parameters and halftone bitmap, and also the need to store error information for an entire row of the image at any given point during the halftoning process. For the first problem, we replace the halftone bitmap by deterministic bit flipping, which has been previously applied to halftoning, and we linearly interpolate the tone-dependent weights and thresholds from a small set of knot points. We call this implementation a reduced lookup table. For the second problem, we introduce a new serial block-based approach to error diffusion. This approach depends on a novel intrablock scan path and the use of different parameter sets at different points along that path. We show that serial block-based error diffusion reduces off-chip memory access by a factor equal to the block height. With both these solutions, satisfactory image quality can only be obtained with new cost functions that we have developed for the training process. With these new cost functions and moderate block size, we can obtain image quality that is very close to that of the original tone-dependent error diffusion algorithm.
Affiliation(s)
- Ti-chiun Chang
- Sch. of Electr. and Comput. Eng., Purdue Univ., West Lafayette, IN 47907-1285, USA.

119
Horbelt S, Liebling M, Unser M. Discretization of the Radon transform and of its inverse by spline convolutions. IEEE Transactions on Medical Imaging 2002; 21:363-376. [PMID: 12022624] [DOI: 10.1109/tmi.2002.1000260]
Abstract
We present an explicit formula for B-spline convolution kernels; these are defined as the convolution of several B-splines of variable widths h(i) and degrees n(i). We apply our results to derive spline-convolution-based algorithms for two closely related problems: the computation of the Radon transform and of its inverse. First, we present an efficient discrete implementation of the Radon transform that is optimal in the least-squares sense. We then consider the reverse problem and introduce a new spline-convolution version of the filtered back-projection algorithm for tomographic reconstruction. In both cases, our explicit kernel formula allows for the use of high-degree splines; these offer better approximation performance than the conventional lower-degree formulations (e.g., piecewise constant or piecewise linear models). We present multiple experiments to validate our approach and to find the parameters that give the best tradeoff between image quality and computational complexity. In particular, we find that it can be computationally more efficient to increase the approximation degree than to increase the sampling rate.
Affiliation(s)
- Stefan Horbelt
- Biomedical Imaging Group, IOA, STI, Swiss Federal Institute of Technology Lausanne, EPFL.

120
Elbakri IA, Fessler JA. Statistical image reconstruction for polyenergetic X-ray computed tomography. IEEE Transactions on Medical Imaging 2002; 21:89-99. [PMID: 11929108] [DOI: 10.1109/42.993128]
Abstract
This paper describes a statistical image reconstruction method for X-ray computed tomography (CT) that is based on a physical model that accounts for the polyenergetic X-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. We assume that the object consists of a given number of nonoverlapping materials, such as soft tissue and bone. The attenuation coefficient of each voxel is the product of its unknown density and a known energy-dependent mass attenuation coefficient. We formulate a penalized-likelihood function for this polyenergetic model and develop an ordered-subsets iterative algorithm for estimating the unknown densities in each voxel. The algorithm monotonically decreases the cost function at each iteration when one subset is used. Applying this method to simulated X-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artifacts.
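For reference, a polyenergetic measurement model of the general kind described here replaces the monoenergetic Beer-Lambert line integral with an integral over the source spectrum. In generic notation (not the paper's exact parameterization), with known mass attenuation curves m_k(E) for the assumed materials and unknown densities \rho_{kj}:

\bar{y}_i \;=\; \int I_i(E)\,\exp\!\Big(-\sum_j a_{ij} \sum_k m_k(E)\,\rho_{kj}\Big)\,dE \;+\; r_i,

and the densities are then estimated by an ordered-subsets algorithm applied to a penalized-likelihood objective built on this nonlinear mean model.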
Affiliation(s)
- Idris A Elbakri
- Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor 48109-2122, USA.

121
Sotthivirat S, Fessler JA. Image recovery using partitioned-separable paraboloidal surrogate coordinate ascent algorithms. IEEE Transactions on Image Processing 2002; 11:306-317. [PMID: 18244633] [DOI: 10.1109/83.988963]
Abstract
Iterative coordinate ascent algorithms have been shown to be useful for image recovery, but are poorly suited to parallel computing due to their sequential nature. This paper presents a new fast converging parallelizable algorithm for image recovery that can be applied to a very broad class of objective functions. This method is based on paraboloidal surrogate functions and a concavity technique. The paraboloidal surrogates simplify the optimization problem. The idea of the concavity technique is to partition pixels into subsets that can be updated in parallel to reduce the computation time. For fast convergence, pixels within each subset are updated sequentially using a coordinate ascent algorithm. The proposed algorithm is guaranteed to monotonically increase the objective function and intrinsically accommodates nonnegativity constraints. A global convergence proof is summarized. Simulation results show that the proposed algorithm requires less elapsed time for convergence than iterative coordinate ascent algorithms. With four parallel processors, the proposed algorithm yields a speedup factor of 3.77 relative to single processor coordinate ascent algorithms for a three-dimensional (3-D) confocal image restoration problem.
Affiliation(s)
- Saowapak Sotthivirat
- Dept. of Electr. Eng. and Comput. Sci., Michigan Univ., Ann Arbor, MI 48109-2122, USA.

122
Frese T, Bouman CA, Sauer K. Adaptive wavelet graph model for Bayesian tomographic reconstruction. IEEE Transactions on Image Processing 2002; 11:756-770. [PMID: 18244672] [DOI: 10.1109/tip.2002.801586]
Abstract
We introduce an adaptive wavelet graph image model applicable to Bayesian tomographic reconstruction and other problems with nonlocal observations. The proposed model captures coarse-to-fine scale dependencies in the wavelet tree by modeling the conditional distribution of wavelet coefficients given overlapping windows of scaling coefficients containing coarse scale information. This results in a graph dependency structure which is more general than a quadtree, enabling the model to produce smooth estimates even for simple wavelet bases such as the Haar basis. The inter-scale dependencies of the wavelet graph model are specified using a spatially nonhomogeneous Gaussian distribution with parameters at each scale and location. The parameters of this distribution are selected adaptively using nonlinear classification of coarse scale data. The nonlinear adaptation mechanism is based on a set of training images. In conjunction with the wavelet graph model, we present a computationally efficient multiresolution image reconstruction algorithm. This algorithm is based on iterative Bayesian space domain optimization using scale recursive updates of the wavelet graph prior model. In contrast to performing the optimization over the wavelet coefficients, the space domain formulation facilitates enforcement of pixel positivity constraints. Results indicate that the proposed framework can improve reconstruction quality over fixed resolution Bayesian methods.
123
Qi J, Huesman RH. Theoretical study of lesion detectability of MAP reconstruction using computer observers. IEEE Transactions on Medical Imaging 2001; 20:815-822. [PMID: 11513032] [DOI: 10.1109/42.938249]
Abstract
The low signal-to-noise ratio (SNR) in emission data has stimulated the development of statistical image reconstruction methods based on the maximum a posteriori (MAP) principle. Experimental examples have shown that statistical methods improve image quality compared to the conventional filtered backprojection (FBP) method. However, these results depend on isolated data sets. Here we study the lesion detectability of MAP reconstruction theoretically, using computer observers. These theoretical results can be applied to different object structures. They show that for a quadratic smoothing prior, the lesion detectability using the prewhitening observer is independent of the smoothing parameter and the neighborhood of the prior, while the nonprewhitening observer exhibits an optimum smoothing point. We also compare the results to those of FBP reconstruction. The comparison shows that for ideal positron emission tomography (PET) systems (where data are true line integrals of the tracer distribution) the MAP reconstruction has a higher SNR for lesion detection than FBP reconstruction due to the modeling of the Poisson noise. For realistic systems, MAP reconstruction further benefits from accurately modeling the physical photon detection process in PET.
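The computer observers mentioned above are usually summarized by detection signal-to-noise ratios of the following generic form, where \Delta\bar{f} is the mean reconstructed difference between lesion-present and lesion-absent images and \Sigma is the covariance of the reconstruction (standard observer definitions, not expressions quoted from the paper):

\mathrm{SNR}^{2}_{\mathrm{PW}} \;=\; \Delta\bar{f}^{\,T}\,\Sigma^{-1}\,\Delta\bar{f},
\qquad
\mathrm{SNR}^{2}_{\mathrm{NPW}} \;=\; \frac{\big(\Delta\bar{f}^{\,T}\Delta\bar{f}\big)^{2}}{\Delta\bar{f}^{\,T}\,\Sigma\,\Delta\bar{f}}.

The prewhitening observer uses the inverse covariance as part of its template, which helps explain the finding above that its figure of merit is independent of the quadratic smoothing parameter, while the nonprewhitening observer exhibits an optimum amount of smoothing.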
Affiliation(s)
- J Qi
- Center for Functional Imaging, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA.

124
de Pierro AR, Beleza Yamagishi ME. Fast EM-like methods for maximum "a posteriori" estimates in emission tomography. IEEE Transactions on Medical Imaging 2001; 20:280-288. [PMID: 11370895] [DOI: 10.1109/42.921477]
Abstract
The maximum-likelihood (ML) approach in emission tomography provides images with superior noise characteristics compared to conventional filtered backprojection (FBP) algorithms. The expectation-maximization (EM) algorithm is an iterative algorithm for maximizing the Poisson likelihood in emission computed tomography that became very popular for solving the ML problem because of its attractive theoretical and practical properties. Recently, block-sequential versions of the EM algorithm that take advantage of the scanner's geometry have been proposed in order to accelerate its convergence (Browne and De Pierro, 1996; Hudson and Larkin, 1994). In Hudson and Larkin, 1994, the ordered subsets EM (OS-EM) method was applied to the ML problem, along with a modification (OS-GP) to the maximum a posteriori (MAP) regularized approach, without a proof of convergence. In Browne and De Pierro, 1996, we presented a relaxed version of OS-EM (RAMLA) that converges to an ML solution. In this paper, we present an extension of RAMLA for MAP reconstruction. We show that, if the sequence generated by this method converges, then it must converge to the true MAP solution. Experimental evidence of this convergence is also shown. To illustrate this behavior we apply the algorithm to positron emission tomography simulated data, comparing its performance to OS-GP.
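For concreteness, the sketch below is a plain ordered-subsets EM loop of the kind this paper extends. It follows the generic OS-EM recipe (subset-by-subset EM-like updates) and deliberately omits the relaxation schedule and the MAP penalty that distinguish RAMLA and the extension presented here.

import numpy as np

def os_em(A, y, subsets, n_iter=2, eps=1e-12):
    # A: system matrix (detector bins x voxels); y: measured counts;
    # subsets: list of index arrays partitioning the detector bins.
    lam = np.ones(A.shape[1])
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            proj = As @ lam                             # subset forward projection
            back = As.T @ (ys / np.maximum(proj, eps))  # backproject data/model ratio
            sens = As.T @ np.ones(len(idx))             # subset sensitivity image
            lam = lam * back / np.maximum(sens, eps)
            # RAMLA replaces this multiplicative step with a relaxed additive
            # update governed by a decreasing relaxation parameter (not shown).
    return lam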
Affiliation(s)
- A R de Pierro
- State University of Campinas, Department of Applied Mathematics, SP, Brazil.

125
Krol A, Bowsher JE, Manglos SH, Feiglin DH, Tornai MP, Thomas FD. An EM algorithm for estimating SPECT emission and transmission parameters from emissions data only. IEEE Transactions on Medical Imaging 2001; 20:218-232. [PMID: 11341711] [DOI: 10.1109/42.918472]
Abstract
A maximum-likelihood (ML) expectation-maximization (EM) algorithm (called EM-IntraSPECT) is presented for simultaneously estimating single photon emission computed tomography (SPECT) emission and attenuation parameters from emission data alone. The algorithm uses the activity within the patient as transmission tomography sources, with which attenuation coefficients can be estimated. For this initial study, EM-IntraSPECT was tested on computer-simulated attenuation and emission maps representing a simplified human thorax as well as on SPECT data obtained from a physical phantom. Two evaluations were performed. First, to corroborate the idea of reconstructing attenuation parameters from emission data, attenuation parameters (mu) were estimated with the emission intensities (lambda) fixed at their true values. Accurate reconstructions of attenuation parameters were obtained. Second, emission parameters lambda and attenuation parameters mu were simultaneously estimated from the emission data alone. In this case there was crosstalk between the estimates of lambda and mu, and the final estimates of lambda and mu depended on the initial values. Estimates degraded significantly as the support extended out farther from the body, and an explanation for this is proposed. In the EM-IntraSPECT reconstructed attenuation images, the lungs, spine, and soft tissue were readily distinguished and had approximately correct shapes and sizes. As compared with standard EM reconstruction assuming a fixed uniform attenuation map, EM-IntraSPECT provided more uniform estimates of cardiac activity in the physical phantom study and in the simulation study with tight support, but less uniform estimates with a broad support. The new EM algorithm derived here has additional applications, including reconstructing emission and transmission projection data under a unified statistical model.
Affiliation(s)
- A Krol
- SUNY Upstate Medical University, Department of Radiology, Syracuse 13210, USA.

126
Glatting G, Wuchenauer M, Reske SN. Simultaneous iterative reconstruction for emission and attenuation images in positron emission tomography. Med Phys 2000; 27:2065-71. [PMID: 11011734] [DOI: 10.1118/1.1288394]
Abstract
The quality of the attenuation correction strongly influences the outcome of the reconstructed emission scan in positron emission tomography. Usually the attenuation correction factors are calculated from the transmission and blank scan and thereafter applied during the reconstruction on the emission data. However, this is not an optimal treatment of the available data, because the emission data themselves contain additional information about attenuation: The optimal treatment must use this information for the determination of the attenuation correction factors. Therefore, our purpose is to investigate a simultaneous emission and attenuation image reconstruction using a maximum likelihood estimator, which takes the attenuation information in the emission data into account. The total maximum likelihood function for emission and transmission is used to derive a one-dimensional Newton-like algorithm for the calculation of the emission and attenuation image. Log-likelihood convergence, mean differences, and the mean of squared differences for the emission image and the attenuation correction factors of a mathematical thorax phantom were determined and compared. As a result we obtain images improved with respect to log likelihood in all cases and with respect to our figures of merit in most cases. We conclude that the simultaneous reconstruction can improve the performance of image reconstruction.
Affiliation(s)
- G Glatting
- Abteilung Nuklearmedizin, Universität Ulm, Germany.

127
Qi J, Leahy RM. Resolution and noise properties of MAP reconstruction for fully 3-D PET. IEEE Transactions on Medical Imaging 2000; 19:493-506. [PMID: 11021692] [DOI: 10.1109/42.870259]
Abstract
We derive approximate analytical expressions for the local impulse response and covariance of images reconstructed from fully three-dimensional (3-D) positron emission tomography (PET) data using maximum a posteriori (MAP) estimation. These expressions explicitly account for the spatially variant detector response and sensitivity of a 3-D tomograph. The resulting spatially variant impulse response and covariance are computed using 3-D Fourier transforms. A truncated Gaussian distribution is used to account for the effect on the variance of the nonnegativity constraint used in MAP reconstruction. Using Monte Carlo simulations and phantom data from the microPET small animal scanner, we show that the approximations provide reasonably accurate estimates of contrast recovery and covariance of MAP reconstruction for priors with quadratic energy functions. We also describe how these analytical results can be used to achieve near-uniform contrast recovery throughout the reconstructed volume.
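The approximations referred to here are commonly written, for a quadratic penalty and to first order, in terms of the Fisher information F = A^T \mathrm{diag}\{1/\bar{y}\}\,A and the penalty Hessian R. Stated schematically (the standard penalized-likelihood expressions from this literature, not formulas copied from the paper):

l^{\,j} \;\approx\; (F + \beta R)^{-1} F\, e^{\,j},
\qquad
\mathrm{Cov}(\hat{x}) \;\approx\; (F + \beta R)^{-1} F\, (F + \beta R)^{-1},

where l^{\,j} is the local impulse response at voxel j and e^{\,j} is the corresponding unit vector; the paper evaluates spatially variant versions of these quantities efficiently with 3-D Fourier transforms and adds a truncated-Gaussian correction for the nonnegativity constraint.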
Affiliation(s)
- J Qi
- Signal and Image Processing Institute, University of Southern California, Los Angeles 90089-2564, USA.

128
Johnson CA, Seidel J, Sofer A. Interior-point methodology for 3-D PET reconstruction. IEEE Transactions on Medical Imaging 2000; 19:271-285. [PMID: 10909923] [DOI: 10.1109/42.848179]
Abstract
Interior-point methods have been successfully applied to a wide variety of linear and nonlinear programming applications. This paper presents a class of algorithms, based on path-following interior-point methodology, for performing regularized maximum-likelihood (ML) reconstructions on three-dimensional (3-D) emission tomography data. The algorithms solve a sequence of subproblems that converge to the regularized maximum likelihood solution from the interior of the feasible region (the nonnegative orthant). We propose two methods, a primal method which updates only the primal image variables and a primal-dual method which simultaneously updates the primal variables and the Lagrange multipliers. A parallel implementation permits the interior-point methods to scale to very large reconstruction problems. Termination is based on well-defined convergence measures, namely, the Karush-Kuhn-Tucker first-order necessary conditions for optimality. We demonstrate the rapid convergence of the path-following interior-point methods using both data from a small animal scanner and Monte Carlo simulated data. The proposed methods can readily be applied to solve the regularized, weighted least squares reconstruction problem.
Affiliation(s)
- C A Johnson
- Center for Information Technology, National Institutes of Health, Bethesda, MD 20892-5624, USA.

129
Zheng J, Saquib SS, Sauer K, Bouman CA. Parallelizable Bayesian tomography algorithms with rapid, guaranteed convergence. IEEE Transactions on Image Processing 2000; 9:1745-1759. [PMID: 18262913] [DOI: 10.1109/83.869186]
Abstract
Bayesian tomographic reconstruction algorithms generally require the efficient optimization of a functional of many variables. In this setting, as well as in many other optimization tasks, functional substitution (FS) has been widely applied to simplify each step of the iterative process. The function to be minimized is replaced locally by an approximation having a more easily manipulated form, e.g., quadratic, but which maintains sufficient similarity to descend the true functional while computing only the substitute. We provide two new applications of FS methods in iterative coordinate descent for Bayesian tomography. The first is a modification of our coordinate descent algorithm with one-dimensional (1-D) Newton-Raphson approximations to an alternative quadratic which allows convergence to be proven easily. In simulations, we find essentially no difference in convergence speed between the two techniques. We also present a new algorithm which exploits the FS method to allow parallel updates of arbitrary sets of pixels using computations similar to iterative coordinate descent. The theoretical potential speed up of parallel implementations is nearly linear with the number of processors if communication costs are neglected.
Affiliation(s)
- J Zheng
- Delphi Delco Electron. Syst., Kokomo, IN 46904-9005, USA

130
Erdoğan H, Fessler JA. Monotonic algorithms for transmission tomography. IEEE Transactions on Medical Imaging 1999; 18:801-814. [PMID: 10571385] [DOI: 10.1109/42.802758]
Abstract
We present a framework for designing fast and monotonic algorithms for transmission tomography penalized-likelihood image reconstruction. The new algorithms are based on paraboloidal surrogate functions for the log likelihood. Due to the form of the log-likelihood function it is possible to find low curvature surrogate functions that guarantee monotonicity. Unlike previous methods, the proposed surrogate functions lead to monotonic algorithms even for the nonconvex log likelihood that arises due to background events, such as scatter and random coincidences. The gradient and the curvature of the likelihood terms are evaluated only once per iteration. Since the problem is simplified at each iteration, the CPU time is less than that of current algorithms which directly minimize the objective, yet the convergence rate is comparable. The simplicity, monotonicity, and speed of the new algorithms are quite attractive. The convergence rates of the algorithms are demonstrated using real and simulated PET transmission scans.
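The monotonicity argument rests on the usual optimization-transfer (surrogate) principle: at each iteration a paraboloidal function is fitted that lies above the penalized-likelihood objective \Phi and touches it at the current image, so minimizing the surrogate can only decrease \Phi. Stated generically, without the paper's specific curvature choices:

\phi(x; x^{(n)}) \;\ge\; \Phi(x)\ \ \forall x,
\qquad \phi(x^{(n)}; x^{(n)}) \;=\; \Phi(x^{(n)}),
\qquad x^{(n+1)} \;=\; \arg\min_{x \ge 0}\ \phi(x; x^{(n)})
\;\Rightarrow\; \Phi(x^{(n+1)}) \;\le\; \Phi(x^{(n)}).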
Affiliation(s)
- H Erdoğan
- IBM T.J. Watson Research Labs, Yorktown Heights, NY 10598, USA.

131
Glatting G, Wuchenauer M, Reske SN. Iterative reconstruction for attenuation correction in positron emission tomography: maximum likelihood for transmission and blank scan. Med Phys 1999; 26:1838-42. [PMID: 10505872] [DOI: 10.1118/1.598689]
Abstract
The quality of the attenuation correction strongly influences the outcome of the reconstructed emission scan in positron emission tomography. The calculation of the attenuation correction factors must take into account the Poisson nature of the radioactive decay process, because, for a reasonable scan duration, the transmission measurements contain lines of response with low count numbers in the case of large attenuation factors. Our purpose in this study is to investigate a maximum likelihood estimator for attenuation correction factor calculation in positron emission tomography, which incorporates the Poisson nature of the radioactive decay into the transmission and blank measurements. Therefore, the correct maximum likelihood function is used to derive two estimators for the calculation of the attenuation coefficient image and the corresponding attenuation correction factors, depending on the measured blank and transmission data. Log-likelihood convergence, mean differences, and the mean of squared differences for the attenuation correction factors of a mathematical thorax phantom were determined and compared. The algorithms yield adequate attenuation correction factors; however, the algorithm that takes the noise in the blank scan into account can perform better for noisy blank scans. We conclude that maximum likelihood, including the blank likelihood, is advantageous for reconstructing attenuation correction factors for low-statistics blank and good-statistics transmission data. For normal blank and transmission statistics the implementation of the statistical nature of the blank is not mandatory.
Affiliation(s)
- G Glatting
- Abteilung Nuklearmedizin, Universität Ulm, Germany.

132
Qi J, Leahy RM. A theoretical study of the contrast recovery and variance of MAP reconstructions from PET data. IEEE Transactions on Medical Imaging 1999; 18:293-305. [PMID: 10385287] [DOI: 10.1109/42.768839]
Abstract
We examine the spatial resolution and variance properties of PET images reconstructed using maximum a posteriori (MAP) or penalized-likelihood methods. Resolution is characterized by the contrast recovery coefficient (CRC) of the local impulse response. Simplified approximate expressions are derived for the local impulse response CRCs and variances of each voxel. Using these results, we propose a practical scheme for selecting spatially variant smoothing parameters to optimize lesion detectability through maximization of the local CRC-to-noise ratio in the reconstructed image.
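The flavour of such expressions can be illustrated with the standard matrix-form approximations for penalized-likelihood estimators (the paper derives further simplified, rapidly computable versions). The toy system, penalty and symbols below are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny toy system standing in for the Fisher information F and penalty Hessian R.
n_pix, n_rays = 25, 60                                       # 5x5 pixel grid
A = rng.uniform(0.0, 1.0, size=(n_rays, n_pix))
x_true = rng.uniform(0.5, 2.0, size=n_pix)
ybar = A @ x_true + 1.0                                      # mean data with a small background
F = A.T @ (A / ybar[:, None])                                # Fisher information A' diag(1/ybar) A

# Hessian R = D'D of a first-order-difference quadratic penalty on the 5x5 grid.
rows = []
for i in range(5):
    for j in range(5):
        if j + 1 < 5:
            d = np.zeros(n_pix); d[5 * i + j] = 1; d[5 * i + j + 1] = -1; rows.append(d)
        if i + 1 < 5:
            d = np.zeros(n_pix); d[5 * i + j] = 1; d[5 * (i + 1) + j] = -1; rows.append(d)
D = np.array(rows)
R = D.T @ D

pix = 12                                                     # centre pixel of the grid
for beta in (0.1, 1.0, 10.0):
    H = F + beta * R
    lir = np.linalg.solve(H, F[:, pix])                      # linearized local impulse response
    crc = lir[pix]                                           # contrast recovery coefficient
    cov = np.linalg.solve(H, F) @ np.linalg.inv(H)           # approximate covariance H^-1 F H^-1
    print(f"beta={beta:5.1f}  CRC={crc:.3f}  std={np.sqrt(cov[pix, pix]):.3e}")
```

Increasing the regularization parameter lowers both the CRC and the standard deviation, which is the resolution-noise tradeoff that the ensemble mean-squared-error analysis balances.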
Affiliation(s)
- J Qi
- Signal and Image Processing Institute, University of Southern California, Los Angeles 90089-2564, USA.
133
Nikolova M. Markovian reconstruction using a GNC approach. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1999; 8:1204-1220. [PMID: 18267538 DOI: 10.1109/83.784433]
Abstract
This paper is concerned with the reconstruction of images (or signals) from incomplete, noisy data obtained at the output of an observation system. The solution is defined in the maximum a posteriori (MAP) sense and appears as the global minimum of an energy function joining a convex data-fidelity term and a Markovian prior energy. The sought images are composed of nearly homogeneous zones separated by edges, and the prior term accounts for this knowledge. This term combines general nonconvex potential functions (PFs) applied to the differences between neighboring pixels. The resultant MAP energy generally exhibits numerous local minima. Calculating the local minimum nearest the maximum-likelihood estimate is inexpensive, but the resulting estimate is usually disappointing. Optimization using simulated annealing is practical only in restricted situations. Several deterministic suboptimal techniques approach the global minimum of special MAP energies, employed in the field of image denoising, at a reasonable numerical cost. These techniques are not directly applicable to general observation systems or to general Markovian prior energies. This work is devoted to the generalization of one of them, the graduated nonconvexity (GNC) algorithm, in order to calculate nearly optimal MAP solutions in a wide range of situations. GNC provides a solution by tracking a set of minima along a sequence of approximate energies, starting from a convex energy and progressing toward the original energy. In this paper, we develop a common method to derive efficient GNC algorithms for the minimization of MAP energies that arise in the context of any observation system giving rise to a convex data-fidelity term and of Markov random field (MRF) energies involving any nonconvex and/or nonsmooth PFs. As a side result, we show how to construct pertinent initializations that yield meaningful solutions using local minimization of these MAP energies. Two numerical experiments, an image deblurring problem and an emission tomography reconstruction, illustrate the performance of the proposed technique.
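The sketch below shows a generic continuation scheme in the spirit of GNC for a 1D deblurring problem with a nonconvex edge-preserving potential: the relaxed energy is convex at c = 0 and is gradually deformed into the original nonconvex energy, with each stage warm-started from the previous minimizer. The relaxation family, potential and step sizes are illustrative assumptions, not the constructions developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Piecewise-constant 1D signal observed through a small symmetric blur plus Gaussian noise.
n = 100
x_true = np.zeros(n); x_true[30:55] = 1.0; x_true[70:90] = -0.8
kernel = np.array([0.25, 0.5, 0.25])                         # symmetric, so the blur is (nearly) self-adjoint
def blur(x):
    return np.convolve(x, kernel, mode="same")
y = blur(x_true) + 0.05 * rng.standard_normal(n)

lam = 0.4
phi = lambda t: t**2 / (1.0 + t**2)                          # nonconvex edge-preserving potential
dphi = lambda t: 2.0 * t / (1.0 + t**2)**2

def energy(x, c):
    d = np.diff(x)
    return 0.5 * np.sum((y - blur(x))**2) + lam * np.sum((1 - c) * d**2 + c * phi(d))

def grad(x, c):
    d = np.diff(x)
    dpot = (1 - c) * 2 * d + c * dphi(d)                     # derivative of the relaxed potential
    g_prior = np.zeros(n)
    g_prior[:-1] -= dpot
    g_prior[1:] += dpot
    return blur(blur(x) - y) + lam * g_prior

x = y.copy()                                                 # start from the data (convex stage first)
for c in np.linspace(0.0, 1.0, 11):                          # restore the nonconvexity gradually
    for _ in range(300):
        x = x - 0.2 * grad(x, c)
    print(f"c={c:.1f}  relaxed energy = {energy(x, c):.3f}")
```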
Affiliation(s)
- M Nikolova
- UFR Math. et Inf., Univ. Rene Descartes, Paris, France.
134
Liang Z, Ye J, Cheng J, Li J, Harrington D. Quantitative cardiac SPECT in three dimensions: validation by experimental phantom studies. Phys Med Biol 1998; 43:905-20. [PMID: 9572514 DOI: 10.1088/0031-9155/43/4/018]
Abstract
A mathematical framework for quantitative SPECT (single photon emission computed tomography) reconstruction of the heart is presented. An efficient simultaneous compensation approach to the reconstruction task is described, and its implementation on a digital computer is delineated. The approach was validated using experimental data acquired from chest phantoms. The phantoms consisted of an elliptical cylindrical tank of Plexiglass, a cardiac insert made of Plexiglass, a spine insert of packed bone meal and lung inserts made of styrofoam beads alone. Water bags were added to simulate different body characteristics. The quantitative reconstruction was compared with the conventional FBP (filtered backprojection) method. The FBP reconstruction had poor quantitative accuracy that varied with body configuration. Significant improvement in reconstruction accuracy by the quantitative approach was demonstrated, with moderate computing time on a currently available desktop computer. Furthermore, the quantitative reconstruction was robust across different body characteristics. Therefore, the quantitative approach has potential for clinical use.
Affiliation(s)
- Z Liang
- Department of Radiology, State University of New York, Stony Brook 11794, USA.
135
Qi J, Leahy RM, Cherry SR, Chatziioannou A, Farquhar TH. High-resolution 3D Bayesian image reconstruction using the microPET small-animal scanner. Phys Med Biol 1998; 43:1001-13. [PMID: 9572523 DOI: 10.1088/0031-9155/43/4/027]
Abstract
A Bayesian method is described for reconstruction of high-resolution 3D images from the microPET small-animal scanner. Resolution recovery is achieved by explicitly modelling the depth dependent geometric sensitivity for each voxel in combination with an accurate detector response model that includes factors due to photon pair non-collinearity and inter-crystal scatter and penetration. To reduce storage and computational costs we use a factored matrix in which the detector response is modelled using a sinogram blurring kernel. Maximum a posteriori (MAP) images are reconstructed using this model in combination with a Poisson likelihood function and a Gibbs prior on the image. Reconstructions obtained from point source data using the accurate system model demonstrate a potential for near-isotropic FWHM resolution of approximately 1.2 mm at the center of the field of view compared with approximately 2 mm when using an analytic 3D reprojection (3DRP) method with a ramp filter. These results also show the ability of the accurate system model to compensate for resolution loss due to crystal penetration producing nearly constant radial FWHM resolution of 1 mm out to a 4 mm radius. Studies with a point source in a uniform cylinder indicate that as the resolution of the image is reduced to control noise propagation the resolution obtained using the accurate system model is superior to that obtained using 3DRP at matched background noise levels. Additional studies using pie phantoms with hot and cold cylinders of diameter 1-2.5 mm and 18FDG animal studies appear to confirm this observation.
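A minimal sketch of the factored-matrix idea, reduced to one dimension: the forward projector is a sparse geometric projection followed by a sinogram-domain blur that stands in for the detector response, and its transpose is used for backprojection. The reconstruction below is plain ML-EM for brevity, whereas the paper computes a MAP estimate with a Gibbs prior and includes additional factors (attenuation, normalization, positron range); all sizes and kernels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1D "scanner": sparse geometric projector followed by a sinogram blur (illustrative only).
n_pix, n_bins = 32, 48
G = rng.uniform(0.0, 1.0, size=(n_bins, n_pix)) * (rng.random((n_bins, n_pix)) < 0.2)
kernel = np.array([0.2, 0.6, 0.2])                           # detector-response blur along sinogram bins

def fwd(x):                                                  # factored forward projection: blur(G @ x)
    return np.convolve(G @ x, kernel, mode="same")

def back(q):                                                 # matched backprojection: G' applied to blur'(q)
    return G.T @ np.convolve(q, kernel[::-1], mode="same")

x_true = rng.uniform(0.0, 2.0, size=n_pix)
bg = 0.5                                                     # known additive background mean
y = rng.poisson(fwd(x_true) + bg)

x = np.ones(n_pix)
sens = np.maximum(back(np.ones(n_bins)), 1e-9)               # sensitivity image P' 1
for it in range(100):                                        # plain ML-EM using the factored model
    x *= back(y / (fwd(x) + bg)) / sens
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Because the blur is applied in the sinogram domain, only the sparse geometric part has to be stored explicitly, which is the storage and computation saving the factored matrix is designed to provide.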
Affiliation(s)
- J Qi
- Signal and Image Processing Institute, University of Southern California, Los Angeles 90089-2564, USA
136
Saquib SS, Bouman CA, Sauer K. ML parameter estimation for Markov random fields with applications to Bayesian tomography. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1998; 7:1029-1044. [PMID: 18276318 DOI: 10.1109/83.701163]
Abstract
Markov random fields (MRFs) have been widely used to model images in Bayesian frameworks for image reconstruction and restoration. Typically, these MRF models have parameters that allow the prior model to be adjusted for best performance. However, optimal estimation of these parameters (sometimes referred to as hyperparameters) is difficult in practice for two reasons: i) direct parameter estimation for MRFs is known to be mathematically and numerically challenging; ii) parameters cannot be directly estimated because the true image cross section is unavailable. In this paper, we propose a computationally efficient scheme to address both these difficulties for a general class of MRF models, and we derive specific methods of parameter estimation for the MRF model known as the generalized Gaussian MRF (GGMRF). The first section of the paper derives methods of direct estimation of scale and shape parameters for a general continuously valued MRF. For the GGMRF case, we show that the ML estimate of the scale parameter, sigma, has a simple closed-form solution, and we present an efficient scheme for computing the ML estimate of the shape parameter, p, by an off-line numerical computation of the dependence of the partition function on p. The second section of the paper presents a fast algorithm for computing ML parameter estimates when the true image is unavailable. To do this, we use the expectation-maximization (EM) algorithm. We develop a fast simulation method to replace the E-step, and a method to improve parameter estimates when the simulations are terminated prior to convergence. Experimental results indicate that our fast algorithms substantially reduce computation and result in good scale estimates for real tomographic data sets.
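The closed-form scale estimate is easy to state and check numerically. Under the GGMRF parameterization with density proportional to exp(-u(x)/(p*sigma^p)), where u(x) is the sum of weighted clique terms |x_j - x_k|^p, the sigma-dependent part of the log-density is -N*log(sigma) - u(x)/(p*sigma^p), so the ML estimate satisfies sigma^p = u(x)/N when the image x is known. The sketch below verifies this on an arbitrary image with unit clique weights; it does not reproduce the paper's EM-based estimation from incomplete data, and the parameterization is an assumption stated here rather than quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Any known image will do for the closed-form scale estimate; here a smooth ramp plus noise.
img = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32)) + 0.05 * rng.standard_normal((32, 32))
N = img.size

def clique_energy(x, p):
    # Nearest-neighbour cliques with unit weights (the general model allows arbitrary weights).
    return np.sum(np.abs(np.diff(x, axis=0)) ** p) + np.sum(np.abs(np.diff(x, axis=1)) ** p)

def log_density_sigma_part(x, sigma, p):
    # sigma-dependent part of the GGMRF log-density: -N log sigma - u(x) / (p sigma^p)
    return -N * np.log(sigma) - clique_energy(x, p) / (p * sigma ** p)

for p in (1.1, 2.0):
    sigma_ml = (clique_energy(img, p) / N) ** (1.0 / p)      # closed-form ML scale estimate
    sigmas = np.linspace(0.5 * sigma_ml, 2.0 * sigma_ml, 201)
    vals = [log_density_sigma_part(img, s, p) for s in sigmas]
    print(f"p={p}: closed-form sigma = {sigma_ml:.4f}, grid argmax = {sigmas[int(np.argmax(vals))]:.4f}")
```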
Affiliation(s)
- S S Saquib
- Polaroid Corp., Cambridge, MA 02139, USA.
137
Fessler JA, Ficaro EP, Clinthorne NH, Lange K. Grouped-coordinate ascent algorithms for penalized-likelihood transmission image reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 1997; 16:166-175. [PMID: 9101326 DOI: 10.1109/42.563662]
Abstract
This paper presents a new class of algorithms for penalized-likelihood reconstruction of attenuation maps from low-count transmission scans. We derive the algorithms by applying to the transmission log-likelihood a version of the convexity technique developed by De Pierro for emission tomography. The new class includes the single-coordinate ascent (SCA) algorithm and Lange's convex algorithm for transmission tomography as special cases. The new grouped-coordinate ascent (GCA) algorithms in the class overcome several limitations associated with previous algorithms. 1) Fewer exponentiations are required than in the transmission maximum likelihood-expectation maximization (ML-EM) algorithm or in the SCA algorithm. 2) The algorithms intrinsically accommodate nonnegativity constraints, unlike many gradient-based methods. 3) The algorithms are easily parallelizable, unlike the SCA algorithm and perhaps line-search algorithms. We show that the GCA algorithms converge faster than the SCA algorithm, even on conventional workstations. An example from a low-count positron emission tomography (PET) transmission scan illustrates the method.
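The grouping logic can be sketched as follows for a toy transmission problem without background, so that the log-likelihood is convex. Pixels are partitioned into interleaved groups and each group is updated in turn with the others held fixed; for brevity each group update here is a single projected Newton step on the exact objective, rather than the surrogate-based update that gives the paper's monotonicity guarantee. All sizes and the penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy transmission scan without background (illustrative sizes).
n_pix, n_rays = 20, 60
A = rng.uniform(0.0, 1.0, size=(n_rays, n_pix))
mu_true = rng.uniform(0.0, 0.05, size=n_pix)
b = rng.uniform(1e4, 5e4, size=n_rays)
y = rng.poisson(b * np.exp(-A @ mu_true))
beta = 10.0
D = (np.eye(n_pix) - np.eye(n_pix, k=1))[:-1]                # first-order differences for the penalty
R = D.T @ D

def objective(mu):
    ybar = b * np.exp(-A @ mu)
    return np.sum(ybar - y * np.log(ybar)) + 0.5 * beta * mu @ R @ mu

mu = np.full(n_pix, 0.01)
groups = [np.arange(g, n_pix, 4) for g in range(4)]          # four interleaved pixel groups
for it in range(20):
    for S in groups:                                         # update one group at a time, others fixed
        ybar = b * np.exp(-A @ mu)
        grad = A[:, S].T @ (y - ybar) + beta * (R @ mu)[S]
        hess = A[:, S].T @ (ybar[:, None] * A[:, S]) + beta * R[np.ix_(S, S)]
        mu[S] = np.maximum(mu[S] - np.linalg.solve(hess, grad), 0.0)
    if it % 5 == 0:
        print(f"sweep {it:2d}: objective = {objective(mu):.2f}")
```

Interleaving the groups keeps the pixels within a group weakly coupled, which is what lets a group update behave almost like independent one-dimensional updates.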
Affiliation(s)
- J A Fessler
- University of Michigan, Ann Arbor 48109-2122, USA.
138
Fessler JA, Rogers WL. Spatial resolution properties of penalized-likelihood image reconstruction: space-invariant tomographs. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1996; 5:1346-58. [PMID: 18285223 DOI: 10.1109/83.535846]
Abstract
This paper examines the spatial resolution properties of penalized-likelihood image reconstruction methods by analyzing the local impulse response. The analysis shows that standard regularization penalties induce space-variant local impulse response functions, even for space-invariant tomographic systems. Paradoxically, for emission image reconstruction, the local resolution is generally poorest in high-count regions. We show that the linearized local impulse response induced by quadratic roughness penalties depends on the object only through its projections. This analysis leads naturally to a modified regularization penalty that yields reconstructed images with nearly uniform resolution. The modified penalty also provides a very practical method for choosing the regularization parameter to obtain a specified resolution in images reconstructed by penalized-likelihood methods.
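As a hedged summary in our own notation (the reader should consult the paper for the precise expressions): with system matrix A, diagonal statistical weighting W, quadratic penalty Hessian R and unit vector e_j at pixel j, the linearized local impulse response and the certainty-based penalty weights commonly associated with this analysis take the form

\[
  l^{j} \;\approx\; \bigl[A^{\mathsf{T}} W A + \beta R\bigr]^{-1} A^{\mathsf{T}} W A \, e^{j},
  \qquad
  \kappa_{j} \;=\; \Biggl(\frac{\sum_{i} a_{ij}^{2}\, w_{i}}{\sum_{i} a_{ij}^{2}}\Biggr)^{1/2},
\]

and the modified penalty weights each pairwise term between pixels j and k by kappa_j * kappa_k, which approximately cancels the dependence of the local resolution on the local count level.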
139
Fessler JA, Hero AO. Penalized maximum-likelihood image reconstruction using space-alternating generalized EM algorithms. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1995; 4:1417-1429. [PMID: 18291973 DOI: 10.1109/83.465106]
Abstract
Most expectation-maximization (EM) type algorithms for penalized maximum-likelihood image reconstruction converge slowly, particularly when one incorporates additive background effects such as scatter, random coincidences, dark current, or cosmic radiation. In addition, regularizing smoothness penalties (or priors) introduce parameter coupling, rendering intractable the M-steps of most EM-type algorithms. This paper presents space-alternating generalized EM (SAGE) algorithms for image reconstruction, which update the parameters sequentially using a sequence of small "hidden" data spaces, rather than simultaneously using one large complete-data space. The sequential update decouples the M-step, so the maximization can typically be performed analytically. We introduce new hidden-data spaces that are less informative than the conventional complete-data space for Poisson data and that yield significant improvements in convergence rate. This acceleration is due to statistical considerations, not numerical overrelaxation methods, so monotonic increases in the objective function are guaranteed. We provide a general global convergence proof for SAGE methods with nonnegativity constraints.
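The sequential-update structure can be illustrated with plain coordinate-wise Newton updates for a toy penalized Poisson emission problem; SAGE goes further by deriving each one-dimensional update from a pixel-specific hidden data space, which yields an analytic maximization step and a monotonicity guarantee that the simple sketch below does not claim. Sizes, the penalty and the background level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy emission scan: y ~ Poisson(A x + r), penalized-likelihood objective with a quadratic penalty.
n_pix, n_rays = 20, 60
A = rng.uniform(0.0, 1.0, size=(n_rays, n_pix))
x_true = rng.uniform(0.5, 2.0, size=n_pix)
r = 0.5                                                      # additive background (scatter/randoms) mean
y = rng.poisson(A @ x_true + r)
beta = 0.1
D = (np.eye(n_pix) - np.eye(n_pix, k=1))[:-1]
R = D.T @ D

def objective(x):
    ybar = A @ x + r
    return np.sum(ybar - y * np.log(ybar)) + 0.5 * beta * x @ R @ x

x = np.ones(n_pix)
ybar = A @ x + r
for sweep in range(30):
    for j in range(n_pix):                                   # sequential, pixel-by-pixel updates
        g = A[:, j] @ (1.0 - y / ybar) + beta * (R[j] @ x)
        h = (A[:, j] ** 2) @ (y / ybar ** 2) + beta * R[j, j]
        x_new = max(x[j] - g / h, 0.0)                       # 1D Newton step with nonnegativity
        ybar += A[:, j] * (x_new - x[j])                     # incremental update of the projections
        x[j] = x_new
    if sweep % 10 == 0:
        print(f"sweep {sweep:2d}: objective = {objective(x):.2f}")
```

The incremental projection update is what makes sequential schemes affordable: each pixel update touches only one column of the system matrix.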
Affiliation(s)
- J A Fessler
- Dept. of Electr. Eng. and Comput. Sci., Michigan Univ., Ann Arbor, MI
140
Fessler JA. Hybrid Poisson/polynomial objective functions for tomographic image reconstruction from transmission scans. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1995; 4:1439-1450. [PMID: 18291975 DOI: 10.1109/83.465108]
Abstract
This paper describes rapidly converging algorithms for computing attenuation maps from Poisson transmission measurements using penalized-likelihood objective functions. We demonstrate that an under-relaxed cyclic coordinate-ascent algorithm converges faster than the convex algorithm of Lange (see ibid., vol.4, no.10, p.1430-1438, 1995), which in turn converges faster than the expectation-maximization (EM) algorithm for transmission tomography. To further reduce computation, one could replace the log-likelihood objective with a quadratic approximation. However, we show with simulations and analysis that the quadratic objective function leads to biased estimates for low-count measurements. Therefore we introduce hybrid Poisson/polynomial objective functions that use the exact Poisson log-likelihood for detector measurements with low counts, but use computationally efficient quadratic or cubic approximations for the high-count detector measurements. We demonstrate that the hybrid objective functions reduce computation time without increasing estimation bias.
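A hedged sketch of the hybrid construction: rays whose transmission counts fall below a threshold keep their exact Poisson terms, while high-count rays use a quadratic approximation expanded about the raw line-integral estimate log(b_i/y_i) with curvature y_i, so no exponentials or logarithms are needed for those rays inside the iteration loop. The threshold, the plain gradient-descent minimizer and the omission of background terms are illustrative simplifications relative to the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy transmission data spanning low- and high-count rays (illustrative sizes, no background).
n_pix, n_rays = 16, 48
A = rng.uniform(0.0, 1.0, size=(n_rays, n_pix))
mu_true = rng.uniform(0.0, 0.2, size=n_pix)
b = rng.uniform(50.0, 5e4, size=n_rays)                      # blank counts from low to high statistics
y = np.maximum(rng.poisson(b * np.exp(-A @ mu_true)), 1)
threshold = 1000.0
low = y < threshold                                           # keep exact Poisson terms for these rays
lhat = np.log(b / y)                                          # raw line-integral estimates

def grad_terms(l):
    g = np.empty(n_rays)
    g[low] = y[low] - b[low] * np.exp(-l[low])                # derivative of the exact Poisson terms
    g[~low] = y[~low] * (l[~low] - lhat[~low])                # derivative of the quadratic terms
    return g

mu = np.zeros(n_pix)
step = 1.0 / np.linalg.norm(A.T @ (np.maximum(b, y)[:, None] * A), 2)  # conservative step size
for it in range(2000):
    mu = np.maximum(mu - step * (A.T @ grad_terms(A @ mu)), 0.0)

print("rays with exact Poisson terms:", int(low.sum()), "of", n_rays)
print("relative error:", np.linalg.norm(mu - mu_true) / np.linalg.norm(mu_true))
```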
Affiliation(s)
- J A Fessler
- Dept. of Electr. Eng. and Comput. Sci., Michigan Univ., Ann Arbor, MI
141
De Pierro AR. On the convergence of an EM-type algorithm for penalized likelihood estimation in emission tomography. IEEE TRANSACTIONS ON MEDICAL IMAGING 1995; 14:762-765. [PMID: 18215882 DOI: 10.1109/42.476119]
Abstract
Recently, we proposed an extension of the expectation maximization (EM) algorithm that handles regularization terms in a natural way. Although the approach is very general, the existing convergence proofs were not valid for many potentially useful regularizations. We present here a simple convergence result that is valid assuming only continuous differentiability of the penalty term and can also be extended to other methods for penalized likelihood estimation in tomography.