1. Ghaly M, Links JM, Frey EC. Collimator optimization and collimator-detector response compensation in myocardial perfusion SPECT using the ideal observer with and without model mismatch and an anthropomorphic model observer. Phys Med Biol 2016; 61:2109-23. PMID: 26894376. DOI: 10.1088/0031-9155/61/5/2109.
Abstract
The collimator is the primary factor that determines the spatial resolution and noise tradeoff in myocardial perfusion SPECT images. In this paper, the goal was to find the collimator that optimizes the image quality in terms of a perfusion defect detection task. Since the optimal collimator could depend on the level of approximation of the collimator-detector response (CDR) compensation modeled in reconstruction, we performed this optimization for the cases of modeling the full CDR (including geometric, septal penetration and septal scatter responses), the geometric CDR, or no model of the CDR. We evaluated the performance on the detection task using three model observers. Two observers operated on data in the projection domain: the Ideal Observer (IO) and IO with Model-Mismatch (IO-MM). The third observer was an anthropomorphic Channelized Hotelling Observer (CHO), which operated on reconstructed images. The projection-domain observers have the advantage that they are computationally less intensive. The IO has perfect knowledge of the image formation process, i.e. it has a perfect model of the CDR. The IO-MM takes into account the mismatch between the true (complete and accurate) model and an approximate model, e.g. one that might be used in reconstruction. We evaluated the utility of these projection domain observers in optimizing instrumentation parameters. We investigated a family of 8 parallel-hole collimators, spanning a wide range of resolution and sensitivity tradeoffs, using a population of simulated projection (for the IO and IO-MM) and reconstructed (for the CHO) images that included background variability. We simulated anterolateral and inferior perfusion defects with variable extents and severities. The area under the ROC curve was estimated from the IO, IO-MM, and CHO test statistics and served as the figure-of-merit. 
The optimal collimator for the IO had a resolution of 9-11 mm FWHM at 10 cm, which is poorer than that of typical collimators used for myocardial perfusion SPECT. When the IO-MM and CHO used a geometric or no model of the CDR, the optimal collimator shifted toward higher resolution than that obtained using the IO and the CHO with full CDR modeling. With the optimal collimator, the IO-MM and CHO using geometric modeling gave performance similar to that with full CDR modeling. Collimators with poorer resolution were optimal when CDR modeling was used. The agreement of rankings between the IO-MM and CHO confirms that the IO-MM, given its substantially reduced computational burden compared to the CHO, is useful for optimization tasks when model mismatch is present.
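The channelized Hotelling observer used in this study can be illustrated with a small numerical sketch. Everything below (image size, difference-of-Gaussians channels, defect profile, noise model) is a synthetic stand-in for the paper's simulated reconstructions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy CHO sketch: 32x32 images, 4 difference-of-Gaussians (DOG) channels.
n = 32
yy, xx = np.mgrid[:n, :n]
r = np.hypot(xx - n / 2, yy - n / 2)

def dog(r, s1, s2):
    # Radial difference-of-Gaussians channel profile.
    return np.exp(-r**2 / (2 * s1**2)) - np.exp(-r**2 / (2 * s2**2))

channels = np.stack([dog(r, s, 2 * s).ravel() for s in (1, 2, 4, 8)])

# Defect-absent / defect-present ensembles (paired noise for simplicity).
signal = np.exp(-r**2 / 8).ravel()                  # "perfusion defect"
absent = rng.normal(0.0, 1.0, (200, n * n))
present = absent + 0.5 * signal

# Channelize, then form the Hotelling template in the channel domain.
va, vp = absent @ channels.T, present @ channels.T  # (200, 4) each
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))             # intraclass covariance
w = np.linalg.solve(S, vp.mean(0) - va.mean(0))     # Hotelling template

# Test statistics and AUC via the Wilcoxon-Mann-Whitney estimator.
ta, tp = va @ w, vp @ w
auc = (tp[:, None] > ta[None, :]).mean()
```

The AUC is estimated here with the Wilcoxon-Mann-Whitney statistic on the two sets of test statistics, the same figure of merit used in the study.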
Affiliation(s)
- Michael Ghaly
- The Russell H Morgan Department of Radiology and Radiological Science, Johns Hopkins University, Baltimore, MD 21287, USA
2. Ghaly M, Du Y, Links JM, Frey EC. Collimator optimization in myocardial perfusion SPECT using the ideal observer and realistic background variability for lesion detection and joint detection and localization tasks. Phys Med Biol 2016; 61:2048-66. PMID: 26895287. DOI: 10.1088/0031-9155/61/5/2048.
Abstract
In SPECT imaging, collimators are a major factor limiting image quality and largely determine the noise and resolution of SPECT images. In this paper, we seek the collimator with the optimal tradeoff between image noise and resolution with respect to performance on two tasks related to myocardial perfusion SPECT: perfusion defect detection and joint detection and localization. We used the Ideal Observer (IO) operating on realistic background-known-statistically (BKS) and signal-known-exactly (SKE) data. The areas under the receiver operating characteristic (ROC) and localization ROC (LROC) curves (AUCd, AUCd+l), respectively, were used as the figures of merit for both tasks. We used a previously developed population of 54 phantoms based on the eXtended Cardiac Torso Phantom (XCAT) that included variations in gender, body size, heart size and subcutaneous adipose tissue level. For each phantom, organ uptakes were varied randomly based on distributions observed in patient data. We simulated perfusion defects at six different locations with extents and severities of 10% and 25%, respectively, which represented challenging but clinically relevant defects. The extent and severity are, respectively, the perfusion defect's fraction of the myocardial volume and reduction of uptake relative to the normal myocardium. Projection data were generated using an analytical projector that modeled attenuation, scatter, and collimator-detector response effects, a 9% energy resolution at 140 keV, and a 4 mm full-width at half maximum (FWHM) intrinsic spatial resolution. We investigated a family of eight parallel-hole collimators that spanned a large range of sensitivity-resolution tradeoffs. For each collimator and defect location, the IO test statistics were computed using a Markov Chain Monte Carlo (MCMC) method for an ensemble of 540 pairs of defect-present and -absent images that included the aforementioned anatomical and uptake variability. 
Sets of test statistics were computed for both tasks and analyzed using ROC and LROC analysis methodologies. The results of this study suggest that collimators with somewhat poorer resolution and higher sensitivity than those of a typical low-energy high-resolution (LEHR) collimator were optimal for both defect detection and joint detection and localization tasks in myocardial perfusion SPECT for the range of defect sizes investigated. This study also indicates that optimizing instrumentation for a detection task may provide near-optimal performance on the more challenging detection-localization task.
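The full BKS ideal observer above requires MCMC, but in the simpler signal-known-exactly/background-known-exactly case the IO test statistic is the Poisson likelihood ratio in closed form. A minimal sketch under that simplification (background level, defect shape, and count levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# SKE/BKE simplification: known background b and defect signal s (counts/bin).
b = np.full(64, 20.0)
s = np.zeros(64)
s[28:36] = -3.0                       # perfusion defect = local count deficit

def io_log_lr(g, b, s):
    # Poisson log likelihood ratio: g . ln(1 + s/b) - sum(s).
    return g @ np.log1p(s / b) - s.sum()

# Ensembles of defect-absent and defect-present projections.
t0 = np.array([io_log_lr(rng.poisson(b), b, s) for _ in range(500)])
t1 = np.array([io_log_lr(rng.poisson(b + s), b, s) for _ in range(500)])
auc = (t1[:, None] > t0[None, :]).mean()              # AUC figure of merit
```

With background variability, no such closed form exists, which is why the study resorts to MCMC integration over background realizations.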
Affiliation(s)
- Michael Ghaly
- The Russell H Morgan Department of Radiology and Radiological Science, Johns Hopkins University, Baltimore, MD 21287, USA
3. McQuaid SJ, Southekal S, Kijewski MF, Moore SC. Joint optimization of collimator and reconstruction parameters in SPECT imaging for lesion quantification. Phys Med Biol 2011; 56:6983-7000. PMID: 22008861. DOI: 10.1088/0031-9155/56/21/014.
Abstract
Obtaining the best possible task performance using reconstructed SPECT images requires optimization of both the collimator and reconstruction parameters. The goal of this study is to determine how to perform this optimization, namely whether the collimator parameters can be optimized solely from projection data, or whether reconstruction parameters should also be considered. In order to answer this question, and to determine the optimal collimation, a digital phantom representing a human torso with 16 mm diameter hot lesions (activity ratio 8:1) was generated and used to simulate clinical SPECT studies with parallel-hole collimation. Two approaches to optimizing the SPECT system were then compared in a lesion quantification task: sequential optimization, where collimation was optimized on projection data using the Cramer–Rao bound, and joint optimization, which simultaneously optimized collimator and reconstruction parameters. For every condition, quantification performance in reconstructed images was evaluated using the root-mean-squared-error of 400 estimates of lesion activity. Compared to the joint-optimization approach, the sequential-optimization approach favoured a poorer resolution collimator, which, under some conditions, resulted in sub-optimal estimation performance. This implies that inclusion of the reconstruction parameters in the optimization procedure is important in obtaining the best possible task performance; in this study, this was achieved with a collimator resolution similar to that of a general-purpose (LEGP) collimator. This collimator was found to outperform the more commonly used high-resolution (LEHR) collimator, in agreement with other task-based studies, using both quantification and detection tasks.
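The Cramer-Rao bound used for the sequential, projection-domain optimization can be sketched for Poisson data. The two-parameter model below (lesion amplitude and flat background with known spatial templates, in 1-D) is a toy stand-in for the study's phantom, not its actual geometry:

```python
import numpy as np

# Poisson Fisher information for lesion amplitude A and background B, with
# mean projection data lam_i = A*s_i + B*h_i (s, h are known templates).
x = np.linspace(-1, 1, 64)
s = np.exp(-x**2 / 0.02)              # lesion profile (assumed known)
h = np.ones_like(x)                   # flat background template
A, B = 8.0, 1.0                       # 8:1 activity ratio, as in the study
lam = A * s + B * h

J = np.stack([s, h])                  # rows: dlam/dA and dlam/dB
F = (J / lam) @ J.T                   # F_jk = sum_i dlam_j * dlam_k / lam_i
crb = np.linalg.inv(F)                # crb[0, 0] bounds var(A_hat)
```

Here `crb[0, 0]` is the minimum variance of any unbiased estimate of the lesion amplitude; ranking collimators by this quantity on projection data is the essence of the sequential approach, while the joint approach instead evaluates RMSE in reconstructed images.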
Affiliation(s)
- Sarah J McQuaid
- Department of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, MA, USA.
5. Trott CM, Ouyang J, El Fakhri G. Comparison of simultaneous and sequential SPECT imaging for discrimination tasks in assessment of cardiac defects. Phys Med Biol 2010; 55:6897-910. PMID: 21048290. DOI: 10.1088/0031-9155/55/22/019.
Abstract
Simultaneous rest perfusion/fatty-acid metabolism studies have the potential to replace sequential rest/stress perfusion studies for the assessment of cardiac function. Simultaneous acquisition has the benefits of increased signal and lack of need for patient stress, but is complicated by cross-talk between the two radionuclide signals. We consider a simultaneous rest (99m)Tc-sestamibi/(123)I-BMIPP imaging protocol in place of the commonly used sequential rest/stress (99m)Tc-sestamibi protocol. The theoretical precision with which the severity of a cardiac defect and the transmural extent of infarct can be measured is computed for simultaneous and sequential SPECT imaging, and their performance is compared for discriminating (1) degrees of defect severity and (2) sub-endocardial from transmural defects. We consider cardiac infarcts for which reduced perfusion and metabolism are observed. From an information perspective, simultaneous imaging is found to yield comparable or improved performance compared with sequential imaging for discriminating both severity of defect and transmural extent of infarct, for three defects of differing location and size.
Affiliation(s)
- C M Trott
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 55 Fruit St, Boston, MA 02114, USA.
6. Zhou L, Gindi G. Collimator optimization in SPECT based on a joint detection and localization task. Phys Med Biol 2009; 54:4423-37. PMID: 19556684. DOI: 10.1088/0031-9155/54/14/005.
Abstract
In SPECT, the collimator is a crucial element of the imaging chain and controls the noise-resolution tradeoff of the collected data. Optimizing collimator design is a long-studied topic, with many different criteria used to evaluate designs. One class of criteria is task based, in which the collimator is designed to optimize detection of a signal (lesion). Here we consider a new, more realistic task: the joint detection and localization of a signal. Furthermore, we use an ideal observer, one that attains the theoretically maximum task performance, to optimize collimator design. The ideal observer operates on the sinogram data. We consider a family of parallel-hole low-energy collimators of varying resolution and efficiency and optimize over this set. We observe that for a 2D object characterized by noise due to background variability and a sinogram with photon noise, the optimal collimator tends to be of lower resolution and higher efficiency than equivalent commercial collimators. Furthermore, this optimal design is insensitive to the tolerance radius within which the signal must be localized, so for this scenario the addition of a localization task does not change the optimal collimator. The optimal collimator resolution worsens as signal size grows and improves as the level of background variability noise increases. These latter two trends are also observed when the detection task is signal-known-exactly with a variable background.
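The joint detection-localization task can be scored with a scanning observer, a common surrogate for the LROC analysis described above. All quantities below (candidate locations, signal templates, count levels) are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy joint detection-localization: a defect at one of 8 candidate locations.
n_loc, n_bins = 8, 64
b = np.full(n_bins, 10.0)
templates = np.zeros((n_loc, n_bins))
for j in range(n_loc):
    templates[j, 8 * j: 8 * j + 4] = 4.0        # signal at location j

def scan(g):
    # Scanning linear observer: maximum response over candidate locations.
    t = templates @ (g - b)
    j = int(np.argmax(t))
    return t[j], j

# Defect-absent trials, then defect-present trials at random true locations.
t0 = [scan(rng.poisson(b))[0] for _ in range(300)]
res = []
for _ in range(300):
    jtrue = int(rng.integers(n_loc))
    t, jhat = scan(rng.poisson(b + templates[jtrue]))
    res.append((t, jhat == jtrue))

# Area under the LROC curve: a present trial scores against an absent trial
# only if it both rates higher AND is correctly localized.
auc_dl = np.mean([[bool(t1 > t0i) and ok for t0i in t0] for t1, ok in res])
```

This Wilcoxon-style estimate of the area under the LROC curve is the nonparametric analogue of the AUCd+l figure of merit used in entry 2 above.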
Affiliation(s)
- Lili Zhou
- Department of Radiology, Stony Brook University, NY 11790, USA
8. Alessio AM, Schmitz RE, Macdonald LR, Wollenweber SD, Stearns CW, Ross SG, Ganin A, Lewellen TK, Kinahan PE. Image reconstruction for a partially collimated whole body PET scanner. IEEE Trans Nucl Sci 2008; 55:975-983. PMID: 19096731. PMCID: PMC2605072. DOI: 10.1109/tns.2008.921938.
Abstract
Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary.
Affiliation(s)
- Adam M Alessio
- A. M. Alessio, R. E. Schmitz, L. R. MacDonald, T. K. Lewellen, and P. E. Kinahan are with the Department of Radiology, University of Washington, Seattle, WA 98195 USA S. D. Wollenweber, C. W. Steams, S. G. Ross, and A. Ganin are with the GE Healthcare, Waukesha, WI 53188 USA
9. Shen F, Clarkson E. Using Fisher information to approximate ideal-observer performance on detection tasks for lumpy-background images. J Opt Soc Am A 2006; 23:2406-14. PMID: 16985526. DOI: 10.1364/josaa.23.002406.
Abstract
When building an imaging system for detection tasks in medical imaging, we need to evaluate how well the system performs before we can optimize it. One way to do the evaluation is to calculate the performance of the Bayesian ideal observer. The ideal-observer performance is often computationally expensive, and it is very useful to have an approximation to it. We use a parameterized probability density function to represent the corresponding densities of data under the signal-absent and the signal-present hypotheses. We develop approximations to the ideal-observer detectability as a function of signal parameters involving the Fisher information matrix, which is normally used in parameter estimation problems. The accuracy of the approximation is illustrated in analytical examples and lumpy-background simulations. We are able to predict the slope of the detectability as a function of the signal parameter. This capability suggests that the Fisher information matrix itself evaluated at the null parameter value can be used as the figure of merit in imaging system evaluation. We are also able to provide a theoretical foundation for the connection between detection tasks and estimation tasks.
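The central claim, that the Fisher information evaluated at the null parameter value predicts the slope of the detectability, can be checked numerically in a one-parameter Poisson example. The background, signal profile, and amplitude below are illustrative assumptions, far simpler than the paper's lumpy backgrounds:

```python
import numpy as np

# Fisher-information approximation to ideal-observer detectability for a
# signal parameterized by amplitude a: lam(a) = b + a*s, Poisson data.
b = np.full(32, 15.0)
x = np.linspace(-1, 1, 32)
s = np.exp(-x**2 / 0.05)

F0 = np.sum(s**2 / b)                 # scalar Fisher information at a = 0

# "Exact" SKE detectability of the Poisson log likelihood ratio at a:
a = 0.2
u = np.log1p(a * s / b)               # log-LR template
dmean = (a * s) @ u                   # mean shift of the log-LR
var = 0.5 * (b @ u**2 + (b + a * s) @ u**2)
snr2 = dmean**2 / var

approx = a**2 * F0                    # Fisher-information prediction
```

For small amplitude the exact detectability and the prediction a²F(0) agree to within a few percent, illustrating why the Fisher information at the null value can serve as a figure of merit without computing the full ideal-observer performance.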
Affiliation(s)
- Fangfang Shen
- Department of Radiology, University of Arizona, Tucson 85724-5067, USA
10.
Abstract
The purpose of this study was to characterize the performance of single photon emission computed tomography (SPECT) in tasks associated with tracking transplanted cells. Previous studies addressed hardware design, whereas we focus on biological variables that affect system performance, such as cell colony growth and non-specific radiolabelling. Using experimental data, a digital phantom was developed of in vitro 111In-radiolabelled stem cells, transfected with a reporter gene, transplanted into canine infarcted myocardium and interrogated using a peripherally injected 131I-radiolabelled reporter probe. Single- and dual-head SPECT acquisitions were simulated. Performance was characterized using an estimation task, in which the precision of parameter estimates (111In and 131I radiolabel quantity, cell colony size and location, and background) was tracked as the phantom evolved to simulate 111In-label efflux, cell colony growth and improved reporter probe specificity. In vitro pre-labelling of transplanted cells improved the precision of parameter estimates via a priori size and location information. The precision of radiolabel quantity estimates improved with cell colony growth, despite 111In radiolabel dilution; size and location parameters were influenced little. The precision of radiolabel quantity estimates also improved with reduced non-specific uptake of the reporter probe. The performance of SPECT in cell tracking is influenced strongly by biological variables; these should be considered when planning experiments or developing SPECT technology for cell tracking.
Affiliation(s)
- Robert Z Stodilka
- Imaging Program, Lawson Health Research Institute, London, Ontario, Canada.
11. Moore SC, Foley Kijewski M, El Fakhri G. Collimator optimization for detection and quantitation tasks: application to gallium-67 imaging. IEEE Trans Med Imaging 2005; 24:1347-56. PMID: 16229420. DOI: 10.1109/tmi.2005.857211.
Abstract
We describe a new approach to the problem of collimator optimization in nuclear medicine; our methodology is illustrated for the challenging case of gallium-67 imaging. Collimator-design methods based on empirical rules, such as specification of an allowable level of single-septal penetration (SSP) at a fixed energy, are especially inappropriate for radionuclides characterized by an abundance of high-energy contaminant photons that scatter in the patient, collimator, and/or detector before detection within one of a few photopeak energy windows. Lead X-rays produced in the collimator are an additional source of contamination. We designed optimal collimation for 67Ga based on relevant clinical imaging tasks and a realistic simulation of photon transport in a phantom, collimator, and detector. Collimator designs were compared on the basis of performance in lesion detection, as predicted by a three-channel Hotelling observer (CHO), as well as in tumor and background activity estimation (EST), quantified by task-specific signal-to-noise ratios (SNRs). The optimal values of collimator lead content were 22.0 and 23.8 g/cm2, respectively, for CHO and EST, while the optimal geometric resolution values were 1.8 and 1.6 cm full-width at half-maximum (FWHM), respectively, at a distance of 23.5 cm. The resolution of a commercially available medium-energy low-penetration collimator (MELP) is 1.9 cm FWHM at this distance. The optimal values for SSP at 300 keV were 7.3% and 5.8% based on CHO and EST, respectively, compared to 5.2% for the MELP collimator. Compared with the commercial MELP collimator, the 67Ga collimator optimized for tumor detection or activity estimation tasks provided improved geometric spatial resolution with reduced geometric efficiency and, surprisingly, allowed an increased level of single-septal penetration.
Affiliation(s)
- Stephen C Moore
- Division of Nuclear Medicine, Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA 02115, USA.
12. Müller SP, Abbey CK, Rybicki FJ, Moore SC, Kijewski MF. Measures of performance in nonlinear estimation tasks: prediction of estimation performance at low signal-to-noise ratio. Phys Med Biol 2005; 50:3697-715. PMID: 16077222. DOI: 10.1088/0031-9155/50/16/004.
Abstract
Maximum-likelihood (ML) estimation is an established paradigm for the assessment of imaging system performance in nonlinear quantitation tasks. At high signal-to-noise ratio (SNR), ML estimates are asymptotically Gaussian-distributed, unbiased and efficient, thereby attaining the Cramer-Rao bound (CRB). Therefore, at high SNR the CRB is useful as a predictor of the variance of ML estimates and, consequently, as a basis for measures of estimation performance. At low SNR, however, the achievable parameter variances are often substantially larger than the CRB and the estimates are no longer Gaussian-distributed. These departures imply that inference about the estimates that is based on the CRB and the assumption of a normal distribution will not be valid. We have found previously that for some tasks these effects arise at noise levels considered clinically acceptable. We have derived the mathematical relationship between a new measure, chi2(pdf-ML), and the expected probability density of the ML estimates, and have justified the use of chi2(pdf-ML)-isocontours in parameter space to describe the ML estimates. We validated this approach by simulation experiments using spherical objects imaged with a Gaussian point spread function. The parameters, activity concentration and size, were estimated simultaneously by ML, and variances and covariances calculated over 1000 replications per condition from 3D image volumes and from 2D tomographic projections of the same object. At low SNR, where the CRB is no longer achievable, chi2(pdf-ML)-isocontours provide a robust prediction of the distribution of the ML estimates. At high SNR, the chi2(pdf-ML)-isocontours asymptotically approach the analogous chi2(pdf-F)-contours derived from the Fisher information matrix. The chi2(pdf-ML) model appears to be suitable for characterization of the influence of the noise level and characteristics, the task, and the object on the shape of the probability density of the ML estimates at low SNR. 
Furthermore, it provides unique insights into the causes of the variability of estimation performance.
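The flavor of the simulation experiments, repeated joint ML estimation of amplitude and size followed by examination of the spread of the estimates, can be conveyed by a toy grid-search version. The Gaussian object, noise level, and parameter grids below are illustrative assumptions, not the authors' tomographic setup:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy replication study: jointly estimate amplitude A and size w of a
# Gaussian object from Gaussian-noise data by grid-search ML.
x = np.linspace(-8, 8, 64)
A0, w0, sigma = 10.0, 2.0, 4.0               # truth and noise level

def model(A, w):
    return A * np.exp(-x**2 / (2 * w**2))

A_grid = np.linspace(5, 15, 81)
w_grid = np.linspace(1, 3, 81)
M = np.array([[model(A, w) for w in w_grid] for A in A_grid])  # (81, 81, 64)

est = []
for _ in range(200):                   # 200 noise replications
    g = model(A0, w0) + rng.normal(0.0, sigma, x.size)
    sse = ((M - g)**2).sum(-1)         # ML = least squares for Gaussian noise
    i, j = np.unravel_index(int(sse.argmin()), sse.shape)
    est.append((A_grid[i], w_grid[j]))
est = np.array(est)
cov = np.cov(est.T)                    # empirical covariance of ML estimates
```

At high SNR the empirical covariance approaches the CRB; the paper's point is that at low SNR it does not, and the distribution of `est` becomes non-Gaussian, which is what the chi2(pdf-ML)-isocontours are designed to describe.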
Affiliation(s)
- Stefan P Müller
- Klinik und Poliklinik für Nuklearmedizin, Universitätsklinikum Essen, Essen, Germany.
13. El Fakhri G, Moore SC, Kijewski MF. Optimization of Ga-67 imaging for detection and estimation tasks: dependence of imaging performance on spectral acquisition parameters. Med Phys 2002; 29:1859-66. PMID: 12201433. DOI: 10.1118/1.1493214.
Abstract
We have compared the use of two (93 and 185 keV) and three (93, 185, and 300 keV) photopeaks for Ga-67 tumor imaging and optimized the placement of each energy window. METHODS: The bases for optimization and evaluation were ideal and Bayesian signal-to-noise ratios (SNR) for the detection of spheres embedded in a realistic anthropomorphic digital torso phantom, and ideal SNR for the estimation of their size and activity concentration. Seven spheres of radii ranging from 1 to 3 cm, located at several sites in the torso, were simulated using a realistic Monte Carlo program. We also calculated the ideal SNR for detection from simple phantom acquisitions. RESULTS: For detection and estimation tasks, the optimum windows were identical for all sphere sizes and locations. For the 93 keV photopeak, the optimal window was 84-102 keV for detection and 87-102 keV for estimation; these windows are narrower than the 20% window often used in the clinic (83-101 keV). For the 185 keV photopeak, the optimal window was 170-220 keV for detection and 170-215 keV for estimation; these differ substantially from the 15% window used in our clinic (171-199 keV). For the 300 keV photopeak, the optimal window for detection was 270-320 keV, and for estimation, 280-320 keV. Using the three optimized windows, rather than only the two lower-energy windows, yielded a 9% increase in the SNR for detection of the 3 cm diameter sphere (a 12% increase for a 2 cm diameter sphere) and a 7% increase in the SNR for estimation of its size. For the acquired phantom data, detection SNR also increased by 9%-12% when using three, rather than two, energy windows.
Affiliation(s)
- Georges El Fakhri
- Department of Radiology, Harvard Medical School and Brigham and Women's Hospital, Boston, Massachusetts 02115, USA.
14. Wilson DW, Barrett HH, Furenlid LR. A new design for a SPECT small-animal imager. IEEE Nucl Sci Symp Conf Rec 2001; 3:1826-1829. PMID: 26568673. PMCID: PMC4643301. DOI: 10.1109/nssmic.2001.1008697.
Abstract
We demonstrate, using computer models, the feasibility of a new SPECT system for imaging small animals such as mice. This system consists of four modular scintillation cameras, four multiple-pinhole apertures, electronics, and tomographic reconstruction software. All of these constituents have been designed in our laboratory. The cameras are 120 mm × 120 mm with a resolution of approximately 2 mm, the apertures can have either single or multiple pinholes, and reconstruction is performed using the OS-EM algorithm. One major advantage of this system is the design flexibility it offers, as the cameras are easy to move and the apertures are simple to modify. We explored a number of possible configurations. One promising configuration had the four camera faces forming four sides of a cube, with multiple-pinhole apertures employed to focus the incoming high-energy photons. This system is rotated three times, so that data are collected from a total of sixteen camera angles. It is shown that this hybrid system has properties superior to those of single-aperture systems. We conclude that the proposed system offers advantages over current imaging systems in terms of flexibility, simplicity, and performance.
Affiliation(s)
- D. W. Wilson
- Center for Gamma-Ray Imaging and the Department of Radiology at the University of Arizona, 85703, USA. (telephone 520-626-4255)
| | - H. H. Barrett
- Center for Gamma-Ray Imaging and the Department of Radiology at the University of Arizona, 85703, USA
| | - L. R. Furenlid
- Center for Gamma-Ray Imaging and the Department of Radiology at the University of Arizona, 85703, USA
|
15
|
Qi J, Huesman RH. Theoretical study of lesion detectability of MAP reconstruction using computer observers. IEEE TRANSACTIONS ON MEDICAL IMAGING 2001; 20:815-822. [PMID: 11513032 DOI: 10.1109/42.938249] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
The low signal-to-noise ratio (SNR) in emission data has stimulated the development of statistical image reconstruction methods based on the maximum a posteriori (MAP) principle. Experimental examples have shown that statistical methods improve image quality compared to the conventional filtered backprojection (FBP) method. However, such results have so far been demonstrated only on isolated data sets. Here we study the lesion detectability of MAP reconstruction theoretically, using computer observers. These theoretical results can be applied to different object structures. They show that for a quadratic smoothing prior, the lesion detectability using the prewhitening observer is independent of the smoothing parameter and the neighborhood of the prior, while the nonprewhitening observer exhibits an optimum smoothing point. We also compare the results to those of FBP reconstruction. The comparison shows that for ideal positron emission tomography (PET) systems (where data are true line integrals of the tracer distribution) the MAP reconstruction has a higher SNR for lesion detection than FBP reconstruction due to the modeling of the Poisson noise. For realistic systems, MAP reconstruction further benefits from accurately modeling the physical photon detection process in PET.
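The prewhitening/nonprewhitening distinction this abstract turns on reduces to two closed-form detection SNRs for a known signal difference Δg with data covariance K: the prewhitening observer achieves SNR² = ΔgᵀK⁻¹Δg (invariant under invertible linear processing such as a quadratic smoothing filter, which is why its detectability is independent of the smoothing parameter), while the nonprewhitening matched filter uses Δg itself as a template and does depend on the noise correlations. A small sketch with illustrative variable names:

```python
import numpy as np

def snr_prewhitening(delta_g, K):
    """SNR of the prewhitening observer for a known signal difference
    delta_g (mean signal-present minus signal-absent data) with data
    covariance K: sqrt(delta_g^T K^{-1} delta_g)."""
    return float(np.sqrt(delta_g @ np.linalg.solve(K, delta_g)))

def snr_nonprewhitening(delta_g, K):
    """SNR of the non-prewhitening matched filter, which uses delta_g
    itself as its template and so ignores noise correlations:
    (delta_g^T delta_g) / sqrt(delta_g^T K delta_g)."""
    num = delta_g @ delta_g
    return float(num / np.sqrt(delta_g @ K @ delta_g))
```

For white noise (K proportional to the identity) the two coincide; with correlated noise the prewhitening observer's SNR is always at least as large.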
Affiliation(s)
- J Qi
- Center for Functional Imaging, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA.
|
16
|
Moore SC, Kijewski MF, Müller SP, Rybicki F, Zimmerman RE. Evaluation of scatter compensation methods by their effects on parameter estimation from SPECT projections. Med Phys 2001; 28:278-87. [PMID: 11243353 DOI: 10.1118/1.1344201] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
Three algorithms for scatter compensation in Tc-99m brain single-photon emission computed tomography (SPECT) were optimized and compared on the basis of the accuracy and precision with which lesion and background activity could be simultaneously estimated. These performance metrics are directly related to the clinically important tasks of activity quantitation and lesion detection, in contrast to measures based solely on the fidelity of image pixel values. The scatter compensation algorithms were (a) the Compton-window (CW) method with a 20% photopeak window, a 92-126 keV scatter window, and an optimized "k-factor," (b) the triple-energy window (TEW) method, with optimized widths of the photopeak window and the abutting scatter window, and (c) a general spectral (GS) method using seventeen 4 keV windows with optimized energy weights. Each method was optimized by minimizing the sum of the mean-squared errors (MSE) of the estimates of lesion and background activity concentrations. The accuracy and precision of activity estimates were then determined for lesions of different size, location, and contrast, as well as for a more complex Bayesian estimation task in which lesion size was also estimated. For the TEW and GS methods, parameters optimized for the estimation task differed significantly from those optimized for global normalized pixel MSE. For optimal estimation, the CW bias of activity estimates was larger and varied more (-2% to 22%) with lesion location and size than that of the other methods. The magnitude of the TEW bias was less than 7% across most conditions, although its precision was worse than that of CW estimates. The GS method performed best, with bias generally less than 4% and the lowest variance; its root-mean square (rms) estimation error was within a few percent of that achievable from primary photons alone. 
For brain SPECT, estimation performance with an optimized, energy-based, subtractive correction may approach that of an ideal scatter-rejection procedure.
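The triple-energy window (TEW) method compared above is commonly written as a trapezoidal approximation of the scatter under the photopeak, with the trapezoid's sides set by the count densities in two narrow windows abutting the photopeak. A minimal sketch (variable names are illustrative; window widths in keV):

```python
def tew_primary(c_peak, c_lower, c_upper, w_peak, w_lower, w_upper):
    """Triple-energy-window (TEW) scatter correction sketch.
    Scatter under the photopeak is approximated by a trapezoid whose
    side heights are the count densities (counts per keV) in the two
    narrow windows abutting the photopeak window of width w_peak.
    Returns (estimated primary counts, estimated scatter counts)."""
    scatter = (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
    return max(c_peak - scatter, 0.0), scatter
```

The clamp at zero guards against noisy scatter-window counts over-subtracting in low-count pixels, one reason the abstract finds subtractive corrections noisier than the photopeak data alone.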
Affiliation(s)
- S C Moore
- Department of Radiology, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts 02115, USA.
|
17
|
Lahorte P, Vandenberghe S, Van Laere K, Audenaert K, Lemahieu I, Dierckx RA. Assessing the performance of SPM analyses of SPECT neuroactivation studies. Statistical Parametric Mapping. Neuroimage 2000; 12:757-64. [PMID: 11112407 DOI: 10.1006/nimg.2000.0658] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Several simulations of SPECT neuroactivation studies have been performed to determine the influence of both study size and activation-focus characteristics on the detection of brain activation foci following a pixel-based statistical analysis. This was achieved by developing a methodology based on the Hoffman software brain phantom, SPECT acquisition simulation software, standard reconstruction software, and the Statistical Parametric Mapping (SPM96) package. We present results on the minimal activation levels required for focus detection. Furthermore, the improved sensitivity of the analysis resulting from the use of an iterative reconstruction technique (OSEM) relative to classical filtered backprojection (FBP) is assessed quantitatively, and the various physical, processing, and physiological parameters that potentially influence the detection of foci are discussed. Finally, the influence of the height threshold, as implemented in SPM96, on the size of the detected foci is investigated. Practical guidelines are proposed with regard to the number of subjects per group for SPECT activation studies following the split-dose design.
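At its core, the pixel-based analysis this abstract evaluates computes a statistic at every voxel and thresholds the resulting map. The sketch below is a deliberately simplified stand-in (an equal-variance two-sample t statistic per voxel, with no spatial smoothing or multiple-comparison correction), not SPM96 itself, which uses a general linear model and random-field-theory thresholds:

```python
import numpy as np

def voxelwise_t(group_a, group_b):
    """Two-sample t statistic computed independently at each voxel.
    group_a, group_b: arrays of shape (subjects, voxels).
    Returns a t-map of shape (voxels,)."""
    na, nb = len(group_a), len(group_b)
    ma, mb = group_a.mean(axis=0), group_b.mean(axis=0)
    va = group_a.var(axis=0, ddof=1)
    vb = group_b.var(axis=0, ddof=1)
    # pooled variance across the two groups
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))
```

Thresholding this map at a chosen height then yields the detected foci whose size the study examines.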
Affiliation(s)
- P Lahorte
- Department of Subatomic and Radiation Physics, Radiation Physics Group, Ghent University, Proeftuinstraat 86, B-9000 Ghent, Belgium
|
18
|
Abstract
Monte Carlo techniques have become popular in many areas of medical physics with the advent of powerful computing systems. In particular, they have been extensively applied to simulate processes involving random behavior and to quantify physical parameters that are difficult or even impossible to calculate by experimental measurement. Recent nuclear medical imaging innovations such as single-photon emission computed tomography (SPECT), positron emission tomography (PET), and multiple emission tomography (MET) are ideal candidates for Monte Carlo modeling techniques because of the stochastic nature of the radiation emission, transport, and detection processes. Factors that have contributed to their wider use include improved models of radiation transport processes, the practicality of application afforded by acceleration schemes, and the improved speed of computers. In this paper we present the derivation and methodological basis for this approach and critically review its areas of application in nuclear imaging. An overview of existing simulation programs is provided and illustrated with examples of some useful features of such sophisticated tools in connection with common computing facilities and more powerful multiple-processor parallel processing systems. Current and future trends in the field are also discussed.
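The stochastic sampling at the heart of the simulation codes this review surveys can be illustrated with the simplest possible example: drawing exponentially distributed free paths for photons crossing an attenuating slab. This is a toy sketch, not any of the packages reviewed; the attenuation coefficient and thickness are illustrative values.

```python
import math
import random

def mc_transmission(mu, thickness, n_photons=100000, seed=0):
    """Monte Carlo sketch: sample an exponential free path for each
    photon entering a slab (attenuation coefficient mu in 1/cm,
    thickness in cm) and count the fraction that traverses it
    without interacting. The analytic answer is exp(-mu * thickness)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        # inverse-transform sampling of path ~ Exponential(mu)
        path = -math.log(1.0 - rng.random()) / mu
        if path > thickness:
            transmitted += 1
    return transmitted / n_photons
```

Real SPECT/PET simulators extend this pattern by also sampling the interaction type (photoelectric vs. Compton), scattering angles, and the detector response, which is where the acceleration schemes mentioned in the abstract become essential.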
Affiliation(s)
- H Zaidi
- Division of Nuclear Medicine, Geneva University Hospital, Switzerland.
|