1
Haldar JP. On Ambiguity in Linear Inverse Problems: Entrywise Bounds on Nearly Data-Consistent Solutions and Entrywise Condition Numbers. IEEE Trans Signal Process 2023; 71:1083-1092. PMID: 37383695; PMCID: PMC10299746; DOI: 10.1109/tsp.2023.3257989.
Abstract
Ill-posed linear inverse problems appear frequently in various signal processing applications. It can be very useful to have theoretical characterizations that quantify the level of ill-posedness for a given inverse problem and the degree of ambiguity that may exist about its solution. Traditional measures of ill-posedness, such as the condition number of a matrix, provide characterizations that are global in nature. While such characterizations can be powerful, they can also fail to provide full insight into situations where certain entries of the solution vector are more or less ambiguous than others. In this work, we derive novel theoretical lower and upper bounds that apply to individual entries of the solution vector and are valid for all potential solution vectors that are nearly data-consistent. These bounds are agnostic to the noise statistics and the specific method used to solve the inverse problem, and are also shown to be tight. In addition, our results lead us to introduce an entrywise version of the traditional condition number, which provides a substantially more nuanced characterization of scenarios where certain elements of the solution vector are less sensitive to perturbations than others. Our results are illustrated in an application to magnetic resonance imaging reconstruction, and we include discussions of practical computation methods for large-scale inverse problems, connections between our new theory and the traditional Cramér-Rao bound under statistical modeling assumptions, and potential extensions to cases involving constraints beyond just data-consistency.
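For intuition about how an entrywise view differs from the global condition number, the sketch below contrasts the classical κ(A) with a simple per-entry sensitivity heuristic built from rows of the pseudoinverse. This is only an illustration of the general idea on assumed data; it is not the paper's actual bounds or its definition of the entrywise condition number.

```python
import numpy as np

# Global vs. entrywise sensitivity: a toy illustration.
# NOTE: the per-entry measure below is a heuristic assumed for
# illustration; it is not the entrywise bound derived in the paper.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
A[:, 0] *= 1e-3                      # make one unknown poorly determined

# Global measure: one condition number for the whole problem.
kappa_global = np.linalg.cond(A)

# Entrywise heuristic: |delta x_i| <= ||row_i(A^+)|| * ||delta b||,
# so the norm of the i-th pseudoinverse row bounds entry i's sensitivity.
row_sens = np.linalg.norm(np.linalg.pinv(A), axis=1)

print(f"global condition number: {kappa_global:.3e}")
print(f"entry sensitivities: max={row_sens.max():.3e} "
      f"(entry {row_sens.argmax()}), min={row_sens.min():.3e}")
```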
Affiliation(s)
- Justin P Haldar
- Signal and Image Processing Institute, Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, 90089 USA
2
Guo X, Zhang L, Xing Y. Analytical covariance estimation for iterative CT reconstruction methods. Biomed Phys Eng Express 2022; 8. PMID: 35213850; DOI: 10.1088/2057-1976/ac58bf.
Abstract
Covariances of reconstructed images are useful for analyzing the magnitude and correlation of noise when evaluating systems and reconstruction algorithms. Covariance estimation normally requires a large number of image samples that are hard to acquire in reality. This work studies a method for propagating covariance from the projections using only a few noisy realizations. Based on the properties of the convergent point of a cost function, the proposed method is composed of three steps: (1) construct a relationship between the covariance of the projections and that of the corresponding reconstruction from the cost function at its convergent point, (2) simplify the covariance relationship constructed in (1) by introducing an approximate gradient of the penalty, and (3) obtain an analytical covariance estimate according to the simplified relationship in (2). Three approximation methods for step (2) are studied: the linear approximation of the gradient of the penalty (LAM), the Taylor approximation (TAM), and a mixture of LAM and TAM (MAM). TV- and qGGMRF-penalized weighted least-squares methods are used in the experiments, with results from statistical methods as reference. When the second derivative of the penalty is unstable, as for TV, the covariance image estimated by LAM agrees well with the reference but with smaller values, while the covariance estimated by TAM is far off. When the second derivative of the penalty is relatively stable, as for qGGMRF, TAM performs well and LAM again shows a negative bias in magnitude. MAM gives the best performance under both conditions by combining LAM and TAM. Results also show that a single noise realization is enough to obtain a reasonable analytical covariance estimate, which is important for practical use. This work suggests the necessity of, and a new way to perform, covariance estimation for non-quadratically penalized reconstructions. Currently, the proposed method is computationally expensive for large reconstructions; improving its computational efficiency is the focus of future work.
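For the quadratic special case, the covariance relationship constructed in step (1) reduces to the classical fixed-point expression of Fessler (1996), which the sketch below evaluates on a small assumed system; the paper's contribution lies in extending this kind of propagation to non-quadratic penalties such as TV and qGGMRF.

```python
import numpy as np

# Fixed-point covariance propagation for penalized weighted least squares:
#   x_hat = argmin (y - A x)' W (y - A x) + beta * x' R x
# At the convergent point, Cov(x_hat) ~= H^{-1} A'W Cov(y) W A H^{-1},
# with H = A'WA + beta*R. (Classical quadratic case; toy system assumed.)
rng = np.random.default_rng(1)
m, n = 60, 16
A = rng.standard_normal((m, n))
cov_y = np.diag(rng.uniform(0.5, 2.0, m))   # assumed projection covariance
W = np.linalg.inv(cov_y)                    # statistical weights
beta = 0.1
R = np.eye(n)                               # quadratic penalty Hessian

H = A.T @ W @ A + beta * R
H_inv = np.linalg.inv(H)
cov_x = H_inv @ A.T @ W @ cov_y @ W @ A @ H_inv
print("predicted voxel variances:", np.diag(cov_x)[:4])
```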
Affiliation(s)
- Xiaoyue Guo
- Department of Engineering Physics, Tsinghua University, Beijing, People's Republic of China; Key Laboratory of Particle & Radiation Imaging, Tsinghua University, Beijing, People's Republic of China
- Li Zhang
- Department of Engineering Physics, Tsinghua University, Beijing, People's Republic of China; Key Laboratory of Particle & Radiation Imaging, Tsinghua University, Beijing, People's Republic of China
- Yuxiang Xing
- Department of Engineering Physics, Tsinghua University, Beijing, People's Republic of China; Key Laboratory of Particle & Radiation Imaging, Tsinghua University, Beijing, People's Republic of China
3
Stayman JW, Capostagno S, Gang GJ, Siewerdsen JH. Task-driven source-detector trajectories in cone-beam computed tomography: I. Theory and methods. J Med Imaging (Bellingham) 2019; 6:025002. PMID: 31065569; PMCID: PMC6497008; DOI: 10.1117/1.jmi.6.2.025002.
Abstract
We develop a mathematical framework for the design of orbital trajectories that are optimal for a particular imaging task (or tasks) in advanced cone-beam computed tomography systems that have the capability of general source-detector positioning. The framework allows various parameterizations of the orbit as well as constraints based on imaging system capabilities. To accommodate nonstandard system geometries, a model-based iterative reconstruction method is applied. Such algorithms generally complicate the assessment and prediction of reconstructed image properties; however, we leverage efficient implementations of analytical predictors of local noise and spatial resolution that incorporate dependencies of the reconstruction algorithm on patient anatomy, x-ray technique, and geometry. These image property predictors serve as inputs to a task-based performance metric defined by detectability index, which is optimized with respect to the orbital parameters of data acquisition. We investigate the task-driven trajectory design framework in several examples to examine the dependence of optimal source-detector trajectories on the imaging task (or tasks), including location and spatial-frequency dependence. A variety of multitask objectives are also investigated, and the advantages in imaging performance are quantified in simulation studies.
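As a concrete example of the detectability-index machinery such frameworks optimize, the sketch below evaluates a prewhitening-observer d' on a discrete frequency grid; the MTF, NPS, and task function here are synthetic placeholders, not outputs of the paper's predictors.

```python
import numpy as np

# Prewhitening detectability index on a discrete 2D frequency grid:
#   d'^2 = sum_f |MTF(f) * W_task(f)|^2 / NPS(f) * df^2
# MTF, NPS, and the task function below are synthetic placeholders.
n, df = 128, 1.0 / 128            # grid size and frequency bin (cycles/mm)
fx = np.fft.fftfreq(n, d=1.0)     # assume 1 mm voxels
FX, FY = np.meshgrid(fx, fx)
rho = np.hypot(FX, FY)

mtf = np.exp(-(rho / 0.25) ** 2)           # assumed local MTF
nps = 1e-6 + 1e-4 * rho * mtf ** 2         # assumed ramp-like local NPS
w_task = np.exp(-(rho / 0.05) ** 2) - np.exp(-(rho / 0.02) ** 2)
# (difference of Gaussians: a mid-frequency discrimination task)

d_prime_sq = np.sum((mtf * w_task) ** 2 / nps) * df ** 2
print(f"d' = {np.sqrt(d_prime_sq):.2f}")
```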
Affiliation(s)
- J. Webster Stayman
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Sarah Capostagno
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Grace J. Gang
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Jeffrey H. Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Radiology and Radiological Science, Baltimore, Maryland, United States
4
Kucharczak F, Loquin K, Buvat I, Strauss O, Mariano-Goulart D. Interval-based reconstruction for uncertainty quantification in PET. Phys Med Biol 2018; 63:035014. DOI: 10.1088/1361-6560/aa9ea6.
5
Gang GJ, Siewerdsen JH, Webster Stayman J. Task-driven optimization of CT tube current modulation and regularization in model-based iterative reconstruction. Phys Med Biol 2017; 62:4777-4797. PMID: 28362638; DOI: 10.1088/1361-6560/aa6a97.
Abstract
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM; it includes both the regularization strength, which controls overall smoothness, and directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d') of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP under a minimum-variance criterion yielded worse task-based performance than an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to exhibit the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS that results from the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with a maximum improvement in d' of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction, and that strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
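CMA-ES is available off the shelf (e.g., in the open-source Python cma package); the sketch below shows how such a joint TCM/regularization search might be wired up. The objective function, parameterization, and constraints are placeholder assumptions rather than the paper's actual setup, where d' is computed from analytical MTF/NPS predictors for MBIR.

```python
import numpy as np
import cma  # pip install cma

# Toy joint optimization of a TCM pattern (low-order Fourier coefficients)
# and a regularization strength, maximizing a surrogate detectability.
def neg_detectability(params):
    tcm_coeffs, log_beta = params[:-1], params[-1]
    views = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    fluence = 1.0 + sum(c * np.cos((k + 1) * views)
                        for k, c in enumerate(tcm_coeffs))
    fluence = np.clip(fluence, 0.1, None)        # tube-output constraint
    fluence *= 180.0 / fluence.sum()             # fixed total dose
    d_prime = fluence.min() / (1.0 + np.exp(log_beta))  # placeholder metric
    return -d_prime                              # CMA-ES minimizes

# 4 TCM Fourier coefficients + 1 log-regularization parameter:
es = cma.CMAEvolutionStrategy(np.zeros(5), 0.3)
es.optimize(neg_detectability)
print("best parameters:", es.result.xbest)
```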
Affiliation(s)
- Grace J Gang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
6
Chun SY. The Use of Anatomical Information for Molecular Image Reconstruction Algorithms: Attenuation/Scatter Correction, Motion Compensation, and Noise Reduction. Nucl Med Mol Imaging 2016; 50:13-23. PMID: 26941855; DOI: 10.1007/s13139-016-0399-8.
Abstract
PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms have enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for further improving the image quality of PET or SPECT. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we review works that use anatomical information in molecular image reconstruction algorithms to improve image quality, describing the mathematical models, discussing sources of anatomical information for different cases, and showing some examples.
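As one concrete example of regularization with anatomical information, the sketch below builds edge-preserving neighbor weights for a quadratic penalty from a co-registered anatomical image. This is a generic construction assumed for illustration, not a specific method from the review.

```python
import numpy as np

# Anatomically weighted quadratic penalty for a 2D image:
#   R(x) = sum over neighbor pairs (j,k) of w_jk * (x_j - x_k)^2
# where w_jk is small across anatomical (CT/MR) edges, so smoothing is
# suppressed there. Generic construction, assumed for illustration.
def anatomical_weights(anat, delta=20.0):
    """Right/down neighbor weights from a co-registered anatomical image."""
    w_right = np.exp(-((anat[:, 1:] - anat[:, :-1]) / delta) ** 2)
    w_down = np.exp(-((anat[1:, :] - anat[:-1, :]) / delta) ** 2)
    return w_right, w_down

def penalty_value(x, w_right, w_down):
    return (np.sum(w_right * (x[:, 1:] - x[:, :-1]) ** 2)
            + np.sum(w_down * (x[1:, :] - x[:-1, :]) ** 2))

anat = np.zeros((64, 64)); anat[:, 32:] = 100.0   # toy two-region anatomy
w_r, w_d = anatomical_weights(anat)
x = np.random.default_rng(2).standard_normal((64, 64))
print("penalty:", penalty_value(x, w_r, w_d))
```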
Affiliation(s)
- Se Young Chun
- School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, Republic of Korea
7
Fischer A, Lasser T, Schrapp M, Stephan J, Noël PB. Object Specific Trajectory Optimization for Industrial X-ray Computed Tomography. Sci Rep 2016; 6:19135. PMID: 26817435; PMCID: PMC4730246; DOI: 10.1038/srep19135.
Abstract
In industrial settings, X-ray computed tomography scans are a common tool for the inspection of objects. Often the object cannot be imaged using standard circular or helical trajectories because of constraints in space or time, and compared to medical applications the variance in size and materials is much larger. Adapting the acquisition trajectory to the object is beneficial and sometimes inevitable, yet there are currently no sophisticated methods for this adaptation; typically the operator places the object according to their best knowledge. We propose a detectability-index-based optimization algorithm that determines the scan trajectory from a CAD model of the object. The detectability index is computed solely from simulated projections for multiple user-defined features; by adjusting these features, the algorithm is adapted to different imaging tasks. Performance was qualitatively and quantitatively assessed on simulated and measured data. The results illustrate that our algorithm not only allows more accurate detection of features, but also delivers images with high overall quality in comparison to standard-trajectory reconstructions. By introducing an optimization algorithm that composes an object-specific trajectory, this work enables a reduction in the number of projections and, in consequence, in scan time.
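One simple way to turn a feature-wise detectability criterion into a trajectory is greedy view selection; the skeleton below repeatedly adds the candidate view with the largest marginal gain in an assumed per-feature information measure, a placeholder for a detectability index computed from simulated projections of a CAD model.

```python
import numpy as np

# Greedy object-specific view selection: starting from an empty set,
# repeatedly add the candidate view angle that most increases a task
# metric. The per-view "information" matrix below is a synthetic stand-in.
rng = np.random.default_rng(3)
n_candidates, n_features, budget = 360, 5, 40
# info[v, f]: assumed task-relevant information view v carries on feature f
info = rng.uniform(0.0, 1.0, size=(n_candidates, n_features))

selected, accumulated = [], np.zeros(n_features)
for _ in range(budget):
    candidates = [v for v in range(n_candidates) if v not in selected]
    gains = [np.sum(np.log1p(accumulated + info[v])) for v in candidates]
    best = candidates[int(np.argmax(gains))]
    selected.append(best)
    accumulated += info[best]

print("first selected views (deg):", sorted(selected)[:10])
```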
Affiliation(s)
- Andreas Fischer
- Siemens AG, Corporate Technology, 81730 Munich, Germany
- Computer Aided Medical Procedures (CAMP), Technische Universität München, 85748 Garching, Germany
- Department of Radiology, Technische Universität München, 81675 Munich, Germany
- Tobias Lasser
- Computer Aided Medical Procedures (CAMP), Technische Universität München, 85748 Garching, Germany
- Peter B. Noël
- Department of Radiology, Technische Universität München, 81675 Munich, Germany
- Chair for Biomedical Physics and Institute for Medical Engineering, Technische Universität München, 85748 Garching, Germany
8
Craciunescu T, Murari A, Kiptily V, Lupelli I, Fernandes A, Sharapov S, Tiseanu I, Zoita V. Evaluation of reconstruction errors and identification of artefacts for JET gamma and neutron tomography. Rev Sci Instrum 2016; 87:013502. PMID: 26827316; DOI: 10.1063/1.4939252.
Abstract
The Joint European Torus (JET) neutron profile monitor ensures 2D coverage of the gamma and neutron emissive region that enables tomographic reconstruction. Due to the availability of only two projection angles and to the coarse sampling, tomographic inversion is a limited-data-set problem. Several techniques have been developed for tomographic reconstruction of the 2D gamma and neutron emissivity on JET, but the problem of evaluating the errors associated with the reconstructed emissivity profile is still open. The reconstruction technique based on the maximum likelihood principle, which has already proved to be a powerful tool for JET tomography, has been used to develop a method for the numerical evaluation of the statistical properties of the uncertainties in gamma and neutron emissivity reconstructions. The image covariance calculation takes into account the additional techniques introduced into the reconstruction process for tackling the limited data set (projection resampling, smoothness regularization depending on the magnetic field). The method has been validated by numerical simulations and applied to JET data. Different sources of artefacts that may significantly influence the quality of reconstructions and the accuracy of the variance calculation have been identified.
Affiliation(s)
- Teddy Craciunescu
- National Institute for Laser, Plasma and Radiation Physics, Magurele-Bucharest, Romania
- Vasily Kiptily
- CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB, United Kingdom
- Ivan Lupelli
- CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB, United Kingdom
- Ana Fernandes
- Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, Lisboa, Portugal
- Sergei Sharapov
- CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB, United Kingdom
- Ion Tiseanu
- National Institute for Laser, Plasma and Radiation Physics, Magurele-Bucharest, Romania
- Vasile Zoita
- National Institute for Laser, Plasma and Radiation Physics, Magurele-Bucharest, Romania
9
Pato LRV, Vandenberghe S, Vandeghinste B, Van Holen R. Evaluation of Fisher Information Matrix-Based Methods for Fast Assessment of Image Quality in Pinhole SPECT. IEEE Trans Med Imaging 2015; 34:1830-1842. PMID: 25769150; DOI: 10.1109/tmi.2015.2410342.
Abstract
The accurate determination of the local impulse response and the covariance in voxels from penalized maximum likelihood reconstructed images requires performing reconstructions from many noise realizations of the projection data. As this is usually a very time-consuming process, efficient analytical approximations based on the Fisher information matrix (FIM) have been extensively used in PET and SPECT to estimate these quantities. For 3D imaging, however, additional approximations need to be made to the FIM in order to speed up the calculations. The most common approach is to use the local shift-invariant (LSI) approximation of the FIM, but this assumes specific conditions which are not always necessarily valid. In this paper we take a single-pinhole SPECT system and compare the accuracy of the LSI approximation against two other methods that have been more recently put forward: the non-uniform object-space pixelation (NUOP) and the subsampled FIM. These methods do not assume such restrictive conditions while still increasing the speed of the calculations considerably. Our results indicate that in pinhole SPECT the NUOP and subsampled FIM approaches could be more reliable than the LSI approximation, especially when a high accuracy is required.
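The LSI approximation compared here treats the FIM row at the voxel of interest (and the corresponding penalty row) as a shift-invariant convolution kernel, so the local impulse response and variance follow from a few FFTs. The sketch below illustrates that recipe, with synthetic spectra standing in for the FFTs of the actual FIM and penalty rows.

```python
import numpy as np

# Local shift-invariant (LSI) predictions for penalized-likelihood
# reconstruction, in locally circulant notation:
#   LIR = IFFT{ Fh / (Fh + beta*Rh) },
#   var ~ (1/N) * sum_k Fh_k / (Fh_k + beta*Rh_k)^2
# Fh and Rh below are synthetic stand-ins for the local Fisher and
# penalty spectra (in practice, FFTs of one re-centered FIM/penalty row).
n = 64
fx = np.fft.fftfreq(n)
rho = np.hypot(*np.meshgrid(fx, fx))

Fh = 1.0 / (1e-3 + rho)        # assumed FIM spectrum (ramp-like falloff)
Rh = rho ** 2                  # spectrum of a quadratic smoothing penalty
beta = 0.05

lir = np.real(np.fft.ifft2(Fh / (Fh + beta * Rh)))
var = np.sum(Fh / (Fh + beta * Rh) ** 2) / n ** 2
print(f"LIR peak: {lir.max():.3f}, predicted variance: {var:.3e}")
```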
10
Elgass KD, Smith EA, LeGros MA, Larabell CA, Ryan MT. Analysis of ER-mitochondria contacts using correlative fluorescence microscopy and soft X-ray tomography of mammalian cells. J Cell Sci 2015; 128:2795-804. PMID: 26101352; DOI: 10.1242/jcs.169136.
Abstract
Mitochondrial fission is important for organelle transport, quality control and apoptosis. Changes to the fission process can result in a wide variety of neurological diseases. In mammals, mitochondrial fission is executed by the GTPase dynamin-related protein 1 (Drp1; encoded by DNM1L), which oligomerizes around mitochondria and constricts the organelle. The mitochondrial outer membrane proteins Mff, MiD49 (encoded by MIEF2) and MiD51 (encoded by MIEF1) are involved in mitochondrial fission by recruiting Drp1 from the cytosol to the organelle surface. In addition, endoplasmic reticulum (ER) tubules have been shown to wrap around and constrict mitochondria before a fission event. Up to now, the presence of MiD49 and MiD51 at ER-mitochondrial division foci has not been established. Here, we combine confocal live-cell imaging with correlative cryogenic fluorescence microscopy and soft x-ray tomography to link MiD49 and MiD51 to the involvement of the ER in mitochondrial fission. We gain further insight into this complex process and characterize the 3D structure of ER-mitochondria contact sites.
Affiliation(s)
- Kirstin D Elgass
- Hudson Institute for Medical Research, Monash Micro Imaging, Monash University, Melbourne 3168, Australia
- Elizabeth A Smith
- Department of Anatomy, School of Medicine, University of California San Francisco, San Francisco, CA 94158, USA; National Centre for X-ray Tomography, Advanced Light Source, Berkeley, CA 94720, USA
- Mark A LeGros
- Department of Anatomy, School of Medicine, University of California San Francisco, San Francisco, CA 94158, USA; National Centre for X-ray Tomography, Advanced Light Source, Berkeley, CA 94720, USA
- Carolyn A Larabell
- Department of Anatomy, School of Medicine, University of California San Francisco, San Francisco, CA 94158, USA; National Centre for X-ray Tomography, Advanced Light Source, Berkeley, CA 94720, USA
- Michael T Ryan
- Department of Biochemistry and Molecular Biology, Monash University, Clayton, Melbourne 3800, Australia
11
Gang GJ, Stayman JW, Zbijewski W, Siewerdsen JH. Task-based detectability in CT image reconstruction by filtered backprojection and penalized likelihood estimation. Med Phys 2014; 41:081902. PMID: 25086533; PMCID: PMC4115652; DOI: 10.1118/1.4883816.
Abstract
PURPOSE: Nonstationarity is an important aspect of imaging performance in CT and cone-beam CT (CBCT), especially for systems employing iterative reconstruction. This work presents a theoretical framework for both filtered-backprojection (FBP) and penalized-likelihood (PL) reconstruction that includes explicit descriptions of nonstationary noise, spatial resolution, and task-based detectability index. Potential utility of the model was demonstrated in the optimal selection of regularization parameters in PL reconstruction.
METHODS: Analytical models for local modulation transfer function (MTF) and noise-power spectrum (NPS) were investigated for both FBP and PL reconstruction, including explicit dependence on the object and spatial location. For FBP, a cascaded systems analysis framework was adapted to account for nonstationarity by separately calculating fluence and system gains for each ray passing through any given voxel. For PL, the point-spread function and covariance were derived using the implicit function theorem and first-order Taylor expansion according to Fessler ["Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): Applications to tomography," IEEE Trans. Image Process. 5(3), 493-506 (1996)]. Detectability index was calculated for a variety of simple tasks. The model for PL was used in selecting the regularization strength parameter to optimize task-based performance, with both a constant and a spatially varying regularization map.
RESULTS: Theoretical models of FBP and PL were validated in 2D simulated fan-beam data and found to yield accurate predictions of local MTF and NPS as a function of the object and the spatial location. The NPS for both FBP and PL exhibit a similar anisotropic nature depending on the pathlength (and therefore the object and spatial location within the object) traversed by each ray, with the PL NPS experiencing greater smoothing along directions with higher noise. The MTF of FBP is isotropic and independent of location to a first-order approximation, whereas the MTF of PL is anisotropic in a manner complementary to the NPS. Task-based detectability demonstrates dependence on the task, object, spatial location, and smoothing parameters. A spatially varying regularization "map" designed from locally optimal regularization can improve overall detectability beyond that achievable with the commonly used constant regularization parameter.
CONCLUSIONS: Analytical models for task-based FBP and PL reconstruction are predictive of nonstationary noise and resolution characteristics, providing a valuable framework for understanding and optimizing system performance in CT and CBCT.
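In locally circulant notation, the PL predictors referenced above take a compact Fourier form that makes the MTF/NPS complementarity explicit (up to MTF normalization; the symbols are generic stand-ins, not the paper's exact notation):

```latex
% Local frequency-domain predictors at voxel j for penalized likelihood,
% with \hat{F}_j the local Fisher spectrum and \hat{R} the penalty spectrum:
\mathrm{MTF}_j(f) \;\approx\; \frac{\hat{F}_j(f)}{\hat{F}_j(f) + \beta\,\hat{R}(f)},
\qquad
\mathrm{NPS}_j(f) \;\approx\; \frac{\hat{F}_j(f)}{\bigl(\hat{F}_j(f) + \beta\,\hat{R}(f)\bigr)^{2}}
```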
Affiliation(s)
- Grace J Gang
- Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205
- J Webster Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205
- Jeffrey H Siewerdsen
- Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205
12
Fuin N, Pedemonte S, Arridge S, Ourselin S, Hutton BF. Efficient determination of the uncertainty for the optimization of SPECT system design: a subsampled Fisher information matrix. IEEE Trans Med Imaging 2014; 33:618-635. PMID: 24595338; DOI: 10.1109/tmi.2013.2292805.
Abstract
System designs in single photon emission tomography (SPECT) can be evaluated based on the fundamental trade-off between bias and variance that can be achieved in the reconstruction of emission tomograms. This trade-off can be derived analytically using Cramer-Rao-type bounds, which imply the calculation and inversion of the Fisher information matrix (FIM). The inverse of the FIM expresses the uncertainty associated with the tomogram, enabling the comparison of system designs. However, computing, storing, and inverting the FIM is not practical with 3-D imaging systems. In order to tackle the computational load of calculating the inverse of the FIM, a method based on the calculation of the local impulse response and the variance, at a single point, from a single row of the FIM, has previously been proposed for system design. However, this approximation (the circulant approximation) does not capture the global interdependence between the variables in shift-variant systems such as SPECT, and cannot account, e.g., for data truncation or missing data. Our new formulation relies on subsampling the FIM: the FIM is calculated over a subset of voxels arranged in a grid that covers the whole volume, and every element of the FIM at the grid points is calculated exactly, accounting for the acquisition geometry and for the object. This new formulation reduces the computational complexity of estimating the uncertainty while still accounting for the global interdependence between the variables, enabling the exploration of design spaces hindered by the circulant approximation. A graphics processing unit accelerated implementation of the algorithm further reduces the computation times, making the algorithm a good candidate for real-time optimization of adaptive imaging systems. This paper describes the subsampled FIM formulation and implementation details. The advantages and limitations of the new approximation are explored, in comparison with the circulant approximation, in the context of design optimization of a parallel-hole collimator SPECT system and of an adaptive imaging system (similar to the commercially available D-SPECT).
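In a simplified reading of the approach, the FIM entries for a Poisson model are computed exactly but only for voxels on a coarse grid, and the resulting small matrix is inverted to characterize the uncertainty at the grid points. The toy system below is an assumption for illustration.

```python
import numpy as np

# Subsampled FIM for a Poisson model y ~ Poisson(A x + b):
# exact entries F_kl = sum_i A_ik * A_il / ybar_i, computed only for
# voxels k, l on a coarse grid covering the volume. Toy system assumed.
rng = np.random.default_rng(4)
n_det, nx = 500, 32 * 32
A = rng.uniform(0, 1, (n_det, nx)) * (rng.random((n_det, nx)) < 0.05)
x = np.ones(nx)
ybar = A @ x + 1e-3                      # mean data (plus small background)

grid = np.arange(0, nx, 16)              # subsampling grid of voxels
A_g = A[:, grid]
F_sub = A_g.T @ (A_g / ybar[:, None])    # exact FIM restricted to the grid
# Small ridge only for numerical stability of this toy example:
cov_sub = np.linalg.inv(F_sub + 1e-6 * np.eye(len(grid)))
print("grid size:", len(grid),
      "predicted variance at first grid voxel:", cov_sub[0, 0])
```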
13
Dutta J, Ahn S, Li Q. Quantitative statistical methods for image quality assessment. Theranostics 2013; 3:741-56. PMID: 24312148; PMCID: PMC3840409; DOI: 10.7150/thno.6815.
Abstract
Quantitative measures of image quality and reliability are critical for both qualitative interpretation and quantitative analysis of medical images. While, in theory, it is possible to analyze reconstructed images by means of Monte Carlo simulations using a large number of noise realizations, the associated computational burden makes this approach impractical. Additionally, this approach is less meaningful in clinical scenarios, where multiple noise realizations are generally unavailable. The practical alternative is to compute closed-form analytical expressions for image quality measures. The objective of this paper is to review statistical analysis techniques that enable us to compute two key metrics: resolution (determined from the local impulse response) and covariance. The underlying methods include fixed-point approaches, which compute these metrics at a fixed point (the unique and stable solution) independent of the iterative algorithm employed, and iteration-based approaches, which yield results that are dependent on the algorithm, initialization, and number of iterations. We also explore extensions of some of these methods to a range of special contexts, including dynamic and motion-compensated image reconstruction. While most of the discussed techniques were developed for emission tomography, the general methods are extensible to other imaging modalities as well. In addition to enabling image characterization, these analysis techniques allow us to control and enhance imaging system performance. We review practical applications where performance improvement is achieved by applying these ideas to the contexts of both hardware (optimizing scanner design) and image reconstruction (designing regularization functions that produce uniform resolution or maximize task-specific figures of merit).
14
He X, Park S. Model observers in medical imaging research. Theranostics 2013; 3:774-86. PMID: 24312150; PMCID: PMC3840411; DOI: 10.7150/thno.5138.
Abstract
Model observers play an important role in the optimization and assessment of imaging devices. In this review paper, we first discuss the basic concepts of model observers, which include the mathematical foundations and psychophysical considerations in designing both optimal observers for optimizing imaging systems and anthropomorphic observers for modeling human observers. Second, we survey a few state-of-the-art computational techniques for estimating model observers and the principles of implementing these techniques. Finally, we review a few applications of model observers in medical imaging research.
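A widely used example of the model observers surveyed here is the channelized Hotelling observer (CHO); the sketch below estimates its detection SNR from sample images, using random orthonormal channels purely as stand-ins for the Gabor or Laguerre-Gauss channels typically chosen.

```python
import numpy as np

# Channelized Hotelling observer SNR for a known-location detection task,
# estimated from sample images. Random orthonormal channels are used here
# only as placeholders for anthropomorphic or efficient channel sets.
rng = np.random.default_rng(5)
npix, n_ch, n_img = 32 * 32, 10, 200
U, _ = np.linalg.qr(rng.standard_normal((npix, n_ch)))  # channel matrix

signal = np.zeros(npix); signal[npix // 2 - 2:npix // 2 + 2] = 1.0
g0 = rng.standard_normal((n_img, npix))          # signal-absent images
g1 = g0 + signal                                 # signal-present images

v0, v1 = g0 @ U, g1 @ U                          # channel outputs
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))          # average channel covariance
dv = v1.mean(0) - v0.mean(0)
w = np.linalg.solve(S, dv)                       # Hotelling template
snr = (w @ dv) / np.sqrt(w @ S @ w)              # = sqrt(dv' S^{-1} dv)
print(f"CHO detectability SNR: {snr:.2f}")
```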
15
Abstract
A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.
Affiliation(s)
- Arkadiusz Sitek
- Radiology Department, Brigham and Women's Hospital and Harvard Medical School, 75 Francis Street, Boston, MA 02115, USA.
16
Li Y. Noise propagation for iterative penalized-likelihood image reconstruction based on Fisher information. Phys Med Biol 2011; 56:1083-103. PMID: 21263172; DOI: 10.1088/0031-9155/56/4/013.
Abstract
Iterative reconstruction algorithms have been widely used in PET and SPECT emission tomography. Accurate modeling of photon noise propagation is crucial for quantitative tomography applications. Iteration-based noise propagation methods have been developed for only a few algorithms that have explicit multiplicative update equations, and there are discrepancies between the iteration-based methods and Fessler's fixed-point method because of improper approximations. In this paper, we present a unified theoretical prediction of noise propagation for any penalized expectation maximization (EM) algorithm in which the EM approach incorporates a penalty term. The proposed method does not require an explicit update equation; the update equation is assumed to be implicitly defined by a differential equation of a surrogate function. We derive the expressions using the implicit function theorem, Taylor series, and the chain rule from vector calculus. We also derive the fixed-point expressions when the iterative algorithms converge and show the consistency between the proposed method and the fixed-point method. These expressions are defined solely in terms of the partial derivatives of the surrogate function and the Fisher information matrices. We then apply the theoretical noise predictions to iterative reconstruction algorithms in emission tomography. Finally, we validate the theoretical predictions for MAP-EM and OSEM algorithms using Monte Carlo simulations with Jaszczak-like and XCAT phantoms, respectively.
Affiliation(s)
- Yusheng Li
- Department of Diagnostic Radiology, Rush University Medical Center, Chicago, IL 60612, USA.
17
Zhou J, Qi J. Adaptive imaging for lesion detection using a zoom-in PET system. IEEE Trans Med Imaging 2011; 30:119-30. PMID: 20699208; PMCID: PMC3014423; DOI: 10.1109/tmi.2010.2064173.
Abstract
Positron emission tomography (PET) has become a leading modality in molecular imaging. Demands for further improvements in spatial resolution and sensitivity remain high with a growing number of applications. In this paper we present a novel PET system design that integrates a high-resolution depth-of-interaction (DOI) detector into an existing PET system to obtain higher-resolution and higher-sensitivity images in a target region around the face of the high-resolution detector. A unique feature of the proposed PET system is that the high-resolution detector can be adaptively positioned based on the detectability or quantitative accuracy of a feature of interest. This paper focuses on the signal-known-exactly, background-known-exactly (SKE-BKE) detection task. We perform theoretical analysis of lesion detectability using computer observers, and then develop methods that efficiently calculate the optimal position of the high-resolution detector that maximizes lesion detectability. We simulated incorporation of a high-resolution DOI detector into the microPET II scanner. Quantitative results verified that the new system outperforms the microPET II scanner in terms of spatial resolution and lesion detectability, and that the optimal position for lesion detection can be reliably predicted by the proposed method.
18
Abstract
While the performance of small-animal PET systems has improved impressively in terms of spatial resolution and sensitivity, demands for further improvements remain high with a growing number of applications. Here we propose a novel PET system design that integrates a high-resolution detector into an existing PET system to obtain higher-resolution images in a target region. The high-resolution detector is adaptively positioned based on the detectability or quantitative accuracy of a feature of interest. The proposed system will be particularly effective for studying human cancers using animal models, where tumors are often grown near the skin surface and therefore permit close contact with the high-resolution detector. It will also be useful for high-resolution brain imaging in rodents. In this paper, we present theoretical analysis and Monte Carlo simulation studies of the performance of the proposed system.
19
NIBART: a new interval based algebraic reconstruction technique for error quantification of emission tomography images. Med Image Comput Comput Assist Interv 2009. PMID: 20425982; DOI: 10.1007/978-3-642-04268-3_19.
Abstract
This article presents a new algebraic method for reconstructing emission tomography images. The approach is essentially an interval extension of the conventional SIRT algorithm. One of its main characteristics is that the reconstructed activity associated with each pixel is an interval whose length can be considered an estimate of the impact of random variation in the measured activity on the reconstructed image. This work investigates a new methodological concept for reliable and robust quantification of reconstructed activities in scintigraphic images.
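To make the interval idea concrete: for a fixed number of iterations, the SIRT output is affine in the data, so entrywise intervals on the measurements propagate to entrywise intervals on the reconstruction. The sketch below illustrates this on a toy system; it is a simplified reading of the interval-reconstruction concept, not the NIBART algorithm itself.

```python
import numpy as np

# Interval-style SIRT illustration: with x_0 = 0 and the SIRT update
#   x_{k+1} = x_k + C A' R (b - A x_k),
# the iterate is linear in b: x_k = M_k b, with
#   M_{k+1} = (I - C A' R A) M_k + C A' R.
# Entrywise intervals on b then give entrywise intervals on x_k.
rng = np.random.default_rng(6)
m, n, iters = 30, 10, 50
A = rng.uniform(0, 1, (m, n))
C = np.diag(1.0 / A.sum(axis=0))       # inverse column sums
R = np.diag(1.0 / A.sum(axis=1))       # inverse row sums
G = C @ A.T @ R

M = np.zeros((n, m))
for _ in range(iters):
    M = (np.eye(n) - G @ A) @ M + G

b_true = A @ rng.uniform(0.5, 1.5, n)
b_lo, b_hi = 0.9 * b_true, 1.1 * b_true          # assumed data intervals
# Interval arithmetic per entry: min/max of M_ij * b_j over the interval.
x_lo = np.minimum(M * b_lo, M * b_hi).sum(axis=1)
x_hi = np.maximum(M * b_lo, M * b_hi).sum(axis=1)
print("entrywise interval widths:", (x_hi - x_lo)[:4])
```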
20
Ahn S, Leahy RM. Analysis of Resolution and Noise Properties of Nonquadratically Regularized Image Reconstruction Methods for PET. IEEE Trans Med Imaging 2008; 27:413-24. PMID: 18334436; DOI: 10.1109/tmi.2007.911549.
Abstract
We present accurate and efficient methods for estimating the spatial resolution and noise properties of nonquadratically regularized image reconstruction for positron emission tomography (PET). It is well known that quadratic regularization tends to over-smooth sharp edges. Many types of edge-preserving nonquadratic penalties have been proposed to overcome this problem. However, there has been little research on the quantitative analysis of nonquadratic regularization due to its nonlinearity. In contrast, quadratically regularized estimators are approximately linear and are well understood in terms of resolution and variance properties. We derive new approximate expressions for the linearized local perturbation response (LLPR) and variance using the Taylor expansion with the remainder term. Although the expressions are implicit, we can use them to accurately predict resolution and variance for nonquadratic regularization where the conventional expressions based on the first-order Taylor truncation fail. They also motivate us to extend the use of a certainty-based modified penalty to nonquadratic regularization cases in order to achieve spatially uniform perturbation responses, analogous to uniform spatial resolution in quadratic regularization. Finally, we develop computationally efficient methods for predicting resolution and variance of nonquadratically regularized reconstruction and present simulations that illustrate the validity of these methods.
Affiliation(s)
- Sangtae Ahn
- Signal and Image Processing Institute, University of Southern California, Los Angeles, CA 90089, USA
21
Zhang-O'Connor Y, Fessler JA. Fast predictions of variance images for fan-beam transmission tomography with quadratic regularization. IEEE Trans Med Imaging 2007; 26:335-46. PMID: 17354639; PMCID: PMC2923589; DOI: 10.1109/tmi.2006.887368.
Abstract
Accurate predictions of image variances can be useful for reconstruction algorithm analysis and for the design of regularization methods. Computing the predicted variance at every pixel using matrix-based approximations [1] is impractical. Even recently adopted methods based on local discrete Fourier approximations are impractical, since they would require a forward projection, a backprojection, and two fast Fourier transform (FFT) calculations for every pixel, particularly for shift-variant systems like fan-beam tomography. This paper describes new "analytical" approaches to predicting the approximate variance maps of 2-D images reconstructed by penalized-likelihood estimation with quadratic regularization in fan-beam geometries. The simplest of the proposed analytical approaches requires computation equivalent to one backprojection and some summations, so it is computationally practical even for the data sizes in X-ray computed tomography (CT). Simulation results show that it gives accurate predictions of the variance maps. The parallel-beam geometry is a simple special case of the fan-beam analysis, and the analysis is also applicable to 2-D positron emission tomography (PET).
22
Kulkarni S, Khurd P, Zhou L, Gindi G. Rapid Optimization of SPECT Scatter Correction Using Model LROC Observers. IEEE Nucl Sci Symp Conf Rec 2007; 5:3986-3993. PMID: 20589227; DOI: 10.1109/nssmic.2007.4436989.
Abstract
The problem we address is the optimization and comparison of window-based scatter correction (SC) methods in SPECT for maximum a posteriori reconstructions. While sophisticated reconstruction-based SC methods are available, the commonly used window-based SC methods are fast, easy to use, and perform reasonably well. Rather than subtracting a scatter estimate from the measured sinogram and then reconstructing, we use an ensemble approach and model the mean scatter sinogram in the likelihood function. This mean scatter sinogram estimate, computed from satellite window data, is itself inexact (noisy). Therefore two sources of noise, that due to the Poisson noise of unscattered photons and that due to the model error in the scatter estimate, are propagated into the reconstruction. The optimization and comparison are driven by a figure of merit, the area under the LROC curve (ALROC), which gauges performance in a signal detection plus localization task. We use model observers to perform the task. This usually entails laborious generation of many sample reconstructions, but in this work we instead develop a theoretical approach that allows one to rapidly compute ALROC given known information about the imaging system and the scatter correction scheme. A critical step in the theoretical approach is to predict the additional contributions to the reconstructed image covariance due to scatter (model-error) noise, above those due to the propagated Poisson noise of the primary photons. Simulations show that our theory method yields, for a range of search tolerances, LROC curves and ALROC values in close agreement with those obtained from model observer responses computed on sample reconstructions. This opens the door to rapid comparison of different window-based SC methods and to optimizing the parameters (including window placement and size, and the scatter sinogram smoothing kernel) of the SC method.
Affiliation(s)
- Santosh Kulkarni
- Electrical & Computer Engineering Department, Stony Brook University, Stony Brook, NY
23
Khurd P, Gindi G. Fast LROC analysis of Bayesian reconstructed emission tomographic images using model observers. Phys Med Biol 2005; 50:1519-32. PMID: 15798341; PMCID: PMC2860870; DOI: 10.1088/0031-9155/50/7/014.
Abstract
Lesion detection and localization is an important task in emission computed tomography. Detection and localization performance with signal location uncertainty may be summarized by a scalar figure of merit, the area under the localization receiver operating characteristic (LROC) curve, A(LROC). We consider model observers to compute A(LROC) for two-dimensional maximum a posteriori (MAP) reconstructions. Model observers may be used to rapidly prototype studies that use human observers. We address the background-known-exactly (BKE) case with the signal known except for location. Our A(LROC) calculation makes use of theoretical expressions for the mean and covariance of the reconstruction and, unlike conventional methods that also use model observers, does not require computation of a large number of sample reconstructions. We validate the results of the procedure by comparison to A(LROC) obtained using a gold-standard Monte Carlo method employing a large set of reconstructed noise samples. Under reasonable simulation conditions, our theoretical calculation is about one to two orders of magnitude faster than the conventional Monte Carlo method.
Affiliation(s)
- Parmeshwar Khurd
- Department of Electrical & Computer Engineering, SUNY Stony Brook, Stony Brook, NY 11794-2350, USA