1. Puvanasunthararajah S, Fontanarosa D, Wille M, Camps SM. The application of metal artifact reduction methods on computed tomography scans for radiotherapy applications: A literature review. J Appl Clin Med Phys 2021; 22:198-223. [PMID: 33938608; PMCID: PMC8200502; DOI: 10.1002/acm2.13255]
Abstract
Metal artifact reduction (MAR) methods are used to reduce artifacts caused by metal objects or metal components in computed tomography (CT). In radiotherapy (RT), CT is the most widely used imaging modality for treatment planning, and its quality is often degraded by metal artifacts. The aim of this study is to systematically review the impact of MAR methods on CT Hounsfield unit values, contouring of regions of interest, and dose calculation for RT applications. This systematic review was performed in accordance with the PRISMA guidelines; the PubMed and Web of Science databases were searched using the main keywords "metal artifact reduction", "computed tomography", and "radiotherapy". A total of 382 publications were identified, of which 40 (including one review article) met the inclusion criteria and were included in this review. The selected publications (except for the review article) were grouped into two main categories: commercial MAR methods and research-based MAR methods. Conclusion: The application of MAR methods to CT scans can improve treatment planning quality in RT. However, none of the investigated or proposed MAR methods was completely satisfactory for RT applications, because of limitations such as the introduction of new errors (e.g., other artifacts) or image quality degradation (e.g., blurring); further research is needed to overcome these challenges.
Affiliation(s)
- Sathyathas Puvanasunthararajah
- School of Clinical Sciences, Queensland University of Technology, Brisbane, QLD, Australia
- Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, QLD, Australia
- Davide Fontanarosa
- School of Clinical Sciences, Queensland University of Technology, Brisbane, QLD, Australia
- Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, QLD, Australia
- Marie-Luise Wille
- Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, QLD, Australia
- School of Mechanical, Medical & Process Engineering, Faculty of Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- ARC ITTC for Multiscale 3D Imaging, Modelling, and Manufacturing, Queensland University of Technology, Brisbane, QLD, Australia
2. Hehn L, Tilley S, Pfeiffer F, Stayman JW. Blind deconvolution in model-based iterative reconstruction for CT using a normalized sparsity measure. Phys Med Biol 2019; 64:215010. [PMID: 31561247; DOI: 10.1088/1361-6560/ab489e]
Abstract
Model-based iterative reconstruction techniques for CT that include a description of the noise statistics and a physical forward model of the image formation process have proven to increase image quality for many applications. In particular, including models of the system blur in the physical forward model, and thus implicitly performing a deconvolution of the projections during tomographic reconstruction, has demonstrated distinct improvements, especially in terms of resolution. However, the results rely strongly on an exact characterization of all components contributing to the system blur. Such characterizations can be laborious, and even a slight mismatch can diminish image quality significantly. Therefore, we introduce a novel objective function, which enables us to jointly estimate system blur parameters during tomographic reconstruction. Conventional objective functions are biased in terms of blur and can assign the lowest cost to blurred reconstructions with low noise levels. A key feature of our objective function is a new normalized sparsity measure for CT based on total-variation regularization, constructed to be less biased in terms of blur. We outline a solving strategy for jointly recovering low-dimensional blur parameters during tomographic reconstruction. We perform an extensive simulation study, evaluating the performance of the regularization and the dependency of the different parts of the objective function on the blur parameters. Scenarios with different regularization strengths and system blurs are investigated, demonstrating that we can recover the blur parameter used for the simulations. The proposed strategy is validated, and the dependency of the objective function on the number of iterations is analyzed. Finally, our approach is experimentally validated on test-bench data of a human wrist phantom, where the estimated blur parameter coincides well with visual inspection. Our findings are not restricted to attenuation-based CT and may facilitate the recovery of more complex imaging model parameters.
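The bias-avoiding construction is easiest to see in a toy form. Below is a minimal sketch of one well-known normalized sparsity measure, the L1/L2 ratio of the image gradient; the TV-based measure actually used in this paper, and how it enters the reconstruction objective, may differ.

```python
import numpy as np

def normalized_gradient_sparsity(img, eps=1e-12):
    """L1/L2 ratio of the image gradient. Plain total variation (the L1
    term alone) can be driven down simply by blurring the image;
    dividing by the gradient's L2 norm removes much of that bias, which
    is the property a blur-aware objective needs."""
    gx = np.diff(img, axis=0).ravel()
    gy = np.diff(img, axis=1).ravel()
    g = np.concatenate([gx, gy])
    return np.abs(g).sum() / (np.sqrt((g * g).sum()) + eps)
```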
Affiliation(s)
- Lorenz Hehn
- Chair of Biomedical Physics, Department of Physics and Munich School of BioEngineering, Technical University of Munich, 85748 Garching, Germany; Department of Diagnostic and Interventional Radiology, School of Medicine & Klinikum rechts der Isar, Technical University of Munich, 81675 München, Germany. Author to whom correspondence should be addressed
3. Minoura N, Teramoto A, Ito A, Yamamuro O, Nishio M, Saito K, Fujita H. A complementary scheme for automated detection of high-uptake regions on dedicated breast PET and whole-body PET/CT. Radiol Phys Technol 2019; 12:260-267. [PMID: 31129787; DOI: 10.1007/s12194-019-00516-8]
Abstract
In this study, we aimed to develop a hybrid method for the automated detection of high-uptake regions in the breast and axilla using dedicated breast positron-emission tomography (db PET) and whole-body PET/computed tomography (CT) images. In the proposed method, high-uptake regions in the breast and axilla were detected in both db PET images and whole-body PET/CT images. In db PET images, high-uptake regions in the breast were detected using an adaptive thresholding technique based on the noise characteristics. In whole-body PET/CT images, the breast region, including the axilla, was first extracted from the CT images; high-uptake regions within the extracted breast region were then detected on the PET images. By integrating the results from the two types of PET images, the final candidate regions were obtained. In the experiments, breast-region extraction accuracy and detection performance were evaluated using clinical data. All breast regions were extracted correctly. The detection sensitivity was 0.765, with 1.8 false positives per case, about 30% better than with whole-body PET/CT alone. These results suggest that the proposed method, which combines the two types of PET images, is effective for improving detection performance.
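As a rough illustration of the two-stage scheme, the sketch below assumes a simple background-statistics threshold and a union rule for fusing the two detection masks; both the constant k and the fusion rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def detect_high_uptake(pet_volume, k=4.0):
    """Toy adaptive threshold: flag voxels whose uptake exceeds the
    background mean by k standard deviations. The paper derives its
    threshold from measured noise characteristics; the background model
    and k used here are illustrative assumptions."""
    background = pet_volume[pet_volume > 0]
    threshold = background.mean() + k * background.std()
    return pet_volume > threshold

def combine_candidates(db_pet_mask, wb_pet_mask):
    """Fuse detections from dedicated-breast PET and whole-body PET/CT.
    The union shown here is an assumed fusion rule; the paper only
    states that the two result sets are integrated."""
    return db_pet_mask | wb_pet_mask
```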
Affiliation(s)
- Natsuki Minoura
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake, Toyoake, Aichi, 470-1192, Japan
- Nagoya City University Hospital, Nagoya, Japan
- Atsushi Teramoto
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake, Toyoake, Aichi, 470-1192, Japan
- Akari Ito
- East Nagoya Imaging Diagnosis Center, Nagoya, Japan
- Kuniaki Saito
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake, Toyoake, Aichi, 470-1192, Japan
- Hiroshi Fujita
- Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, Gifu, Japan
4. Jothilakshmi GR, Raaza A, Rajendran V, Sreenivasa Varma Y, Guru Nirmal Raj R. Pattern Recognition and Size Prediction of Microcalcification Based on Physical Characteristics by Using Digital Mammogram Images. J Digit Imaging 2018; 31:912-922. [PMID: 29873011; DOI: 10.1007/s10278-018-0075-x]
Abstract
Breast cancer is one of the most life-threatening cancers occurring in women. In recent years, surveys from various medical organizations have made clear that the female mortality rate is increasing owing to the late detection of breast cancer. Therefore, an automated algorithm is needed to identify the early occurrence of microcalcification, which would assist radiologists and physicians in reducing false predictions via image processing techniques. In this work, we propose a new algorithm to detect the pattern of a microcalcification by calculating its physical characteristics, namely the reflection coefficient and mass density of the binned digital mammogram image. Calculating these physical characteristics doubly confirms the presence of malignant microcalcification. Subsequently, by interpolating the physical characteristics via thresholding and mapping techniques, a three-dimensional (3D) projection of the region of interest (RoI) is obtained in terms of distance in millimeters. The size of a microcalcification is determined from this 3D-projected view. The algorithm was verified on 100 abnormal mammogram images showing microcalcification and 10 normal mammogram images. In addition to the size calculation, the proposed algorithm serves as a classifier, labeling the input image as normal or abnormal with the help of only two physical characteristics, and achieves a classification accuracy of 99%.
Affiliation(s)
- V Rajendran
- Department of ECE, Vels University, Chennai, India
- R Guru Nirmal Raj
- Department of ECE, Lakshmiammal Polytechnique College, Kovilpatti, Tamil Nadu, India
5. Tilley S, Jacobson M, Cao Q, Brehler M, Sisniega A, Zbijewski W, Stayman JW. Penalized-Likelihood Reconstruction With High-Fidelity Measurement Models for High-Resolution Cone-Beam Imaging. IEEE Trans Med Imaging 2018; 37:988-999. [PMID: 29621002; PMCID: PMC5889122; DOI: 10.1109/tmi.2017.2779406]
Abstract
We present a novel reconstruction algorithm based on a general cone-beam CT forward model that is capable of incorporating the blur and noise correlations exhibited in flat-panel CBCT measurement data. Specifically, the proposed model may include scintillator blur, focal-spot blur, and noise correlations due to light spread in the scintillator. The proposed algorithm (GPL-BC) uses a Gaussian penalized-likelihood objective function that incorporates models of blur and correlated noise. In a simulation study, GPL-BC achieved lower bias than deblurring followed by FDK as well as a model-based reconstruction method without integration of measurement blur. In the same study, GPL-BC achieved better line-pair reconstructions (in terms of segmented-image accuracy) than deblurring followed by FDK, a model-based method without blur, and a model-based method with blur but without noise correlations. A prototype extremities quantitative cone-beam CT test bench was used to image a physical sample of human trabecular bone. These data were used to compare reconstructions using the proposed method and model-based methods without blur and/or correlation against a registered CT image of the same bone sample. The GPL-BC reconstructions resulted in more accurate trabecular bone segmentation. Multiple trabecular bone metrics, including trabecular thickness (Tb.Th.), were computed for each reconstruction approach as well as for the CT volume. The GPL-BC reconstruction provided the most accurate Tb.Th. measurement, 0.255 mm, as compared with the CT-derived value of 0.193 mm, followed by the GPL-B reconstruction, the GPL-I reconstruction, and then the FDK reconstruction (0.271 mm, 0.309 mm, and 0.335 mm, respectively).
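Schematically, an objective of this family can be written as follows; the notation is assumed here for illustration (A: forward projector, B: system blur, g: gains, K: noise covariance, R: roughness penalty with strength beta) and may differ from the paper's exact formulation.

```latex
% Schematic Gaussian penalized-likelihood objective with measurement
% blur: the mean model blurs the ideal transmission data, and K
% captures (possibly correlated) measurement noise.
\hat{\mu} = \operatorname*{arg\,min}_{\mu}\;
  \tfrac{1}{2}\,\bigl(\mathbf{B}\,g\,e^{-\mathbf{A}\mu} - y\bigr)^{\top}
  \mathbf{K}^{-1}\bigl(\mathbf{B}\,g\,e^{-\mathbf{A}\mu} - y\bigr)
  + \beta\,R(\mu)
```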
6. Wang J, Nishikawa RM, Yang Y. Improving the accuracy in detection of clustered microcalcifications with a context-sensitive classification model. Med Phys 2016; 43:159. [PMID: 26745908; DOI: 10.1118/1.4938059]
Abstract
PURPOSE: In computer-aided detection of microcalcifications (MCs), detection accuracy is often compromised by the frequent occurrence of false positives (FPs), which can be attributed to a number of factors, including imaging noise, inhomogeneity in the tissue background, linear structures, and artifacts in mammograms. In this study, the authors investigated a unified classification approach for combating the adverse effects of these heterogeneous factors to achieve accurate MC detection.

METHODS: To accommodate FPs caused by different factors in a mammogram, the authors developed a classification model whose input features adapt to the image context at a detection location. The input features were defined in two groups: one group was derived from the image intensity pattern in a local neighborhood of the detection location, and the other characterized how an MC differs from its structural background. Owing to the distinctive effect of linear structures on the detector response, the authors introduced a dummy variable into the unified classifier model, allowing the input features to adapt according to the image context at a detection location (i.e., the presence or absence of linear structures). To suppress the effect of inhomogeneity in the tissue background, the input features were extracted from different domains aimed at enhancing MCs in a mammogram. To demonstrate the flexibility of the proposed approach, the authors implemented the unified classifier model with two widely used machine learning algorithms: a support vector machine (SVM) classifier and an Adaboost classifier. In the experiments, the proposed approach was tested on two representative MC detectors from the literature (a difference-of-Gaussians (DoG) detector and an SVM detector). Detection performance was assessed using free-response receiver operating characteristic (FROC) analysis on a set of 141 screen-film mammogram (SFM) images (66 cases) and a set of 188 full-field digital mammogram (FFDM) images (95 cases).

RESULTS: The FROC analysis shows that the proposed unified classification approach can significantly improve the detection accuracy of both MC detectors on both SFM and FFDM images. Despite the difference in performance between the two detectors, the unified classifiers reduced their FP rates to similar levels. In particular, at a true-positive rate of 85%, the FP rate on SFM images for the DoG detector was reduced from 1.16 to 0.33 clusters/image (unified SVM) and 0.36 clusters/image (unified Adaboost); similarly, for the SVM detector, the FP rate was reduced from 0.45 clusters/image to 0.30 clusters/image (unified SVM) and 0.25 clusters/image (unified Adaboost). Similar FP reductions were achieved on FFDM images for both detectors.

CONCLUSIONS: The proposed unified classification approach can effectively discriminate MCs from FPs caused by different factors (such as MC-like noise patterns and linear structures) in MC detection. The framework is general and applicable for further improving the detection accuracy of existing MC detectors.
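A minimal sketch of the dummy-variable idea, assuming a simple feature-augmentation scheme and scikit-learn's SVC; the paper's exact feature construction and classifier settings may differ.

```python
import numpy as np
from sklearn.svm import SVC

def context_features(x, on_linear_structure):
    """Augment a raw 1D feature vector x with a dummy variable z that
    encodes the image context (presence of a linear structure), plus
    the interaction terms z*x, so that a single unified classifier can
    weight the features differently in the two contexts."""
    z = 1.0 if on_linear_structure else 0.0
    return np.concatenate([x, [z], z * x])

# Hypothetical usage: raw_feats is a list of per-location feature
# vectors and struct_flags marks locations on linear structures.
# X = np.stack([context_features(x, s) for x, s in zip(raw_feats, struct_flags)])
# clf = SVC(kernel="rbf").fit(X, labels)
```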
Affiliation(s)
- Juan Wang
- Department of Electrical and Computer Engineering, Medical Imaging Research Center, Illinois Institute of Technology, Chicago, Illinois 60616
- Robert M Nishikawa
- Department of Radiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Yongyi Yang
- Department of Electrical and Computer Engineering, Medical Imaging Research Center, Illinois Institute of Technology, Chicago, Illinois 60616
7. Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning. Sci Rep 2016; 6:27327. [PMID: 27273294; PMCID: PMC4895132; DOI: 10.1038/srep27327]
Abstract
Microcalcification is an effective indicator of early breast cancer. To improve diagnostic accuracy, this study evaluates the performance of deep learning-based models trained on large datasets for discriminating microcalcifications. A semi-automated segmentation method was used to characterize all microcalcifications. A discrimination classifier model was constructed to assess the accuracy of microcalcifications and breast masses, in isolation or in combination, for classifying breast lesions. Performance was compared to benchmark models. Our deep learning model achieved a discriminative accuracy of 87.3% when microcalcifications were characterized alone, compared to 85.8% with a support vector machine. The accuracies were 61.3% for both methods with masses alone, and improved to 89.7% and 85.8%, respectively, after combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26, and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. Accuracy was increased by adopting a combinatorial approach that detects microcalcifications and masses simultaneously. This may have clinical value for the early detection and treatment of breast cancer.
8. Tilley S, Siewerdsen JH, Zbijewski W, Stayman JW. Nonlinear Statistical Reconstruction for Flat-Panel Cone-Beam CT with Blur and Correlated Noise Models. Proc SPIE 2016; 9783. [PMID: 27110051; DOI: 10.1117/12.2216126]
Abstract
Flat-panel cone-beam CT (FP-CBCT) is a promising imaging modality, partly due to its potential for high spatial resolution reconstructions in relatively compact scanners. Despite this potential, FP-CBCT can face difficulty resolving important fine-scale structures (e.g., trabecular details in dedicated extremities scanners and microcalcifications in dedicated CBCT mammography). Model-based methods offer one opportunity to improve high-resolution performance without any hardware changes. Previous work, based on a linearized forward model, demonstrated improved performance when both the system blur and the spatial correlation characteristics of FP-CBCT systems are modeled. Unfortunately, the linearized model relies on a staged processing approach that complicates tuning parameter selection and can limit the finest achievable spatial resolution. In this work, we present an alternative scheme that leverages a full nonlinear forward model with both system blur and spatially correlated noise. A likelihood-based objective function is derived from this forward model, and we derive an iterative optimization algorithm for its solution. The proposed approach is evaluated in simulation studies using a digital extremities phantom, and resolution-noise trade-offs are quantitatively evaluated. The correlated nonlinear model outperformed both the uncorrelated nonlinear model and the staged linearized technique, with up to an 86% reduction in variance at matched spatial resolution. Additionally, the nonlinear models could achieve finer spatial resolution (correlated: 0.10 mm, uncorrelated: 0.11 mm) than the linear correlated model (0.15 mm) and traditional FDK (0.40 mm). This suggests the proposed nonlinear approach may be an important tool for improving performance in high-resolution clinical applications.
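The measurement model behind such correlated nonlinear methods can be summarized schematically as follows; the notation (B: blur, A: projector, g: gains, D{.}: diagonal matrix, sigma_ro: readout noise) is assumed for illustration.

```latex
% Schematic measurement model: projections are blurred by B, and the
% same blur correlates the quantum noise; the second term adds
% uncorrelated readout noise.
y \sim \mathcal{N}\!\bigl(\mathbf{B}\,g\,e^{-\mathbf{A}\mu},\,\mathbf{K}\bigr),
\qquad
\mathbf{K} \approx \mathbf{B}\,\mathrm{D}\{g\,e^{-\mathbf{A}\mu}\}\,\mathbf{B}^{\top}
  + \sigma_{\mathrm{ro}}^{2}\,\mathbf{I}
```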
Affiliation(s)
- Steven Tilley
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- Wojciech Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
- J Webster Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD
9. Tilley S, Siewerdsen JH, Stayman JW. Model-based iterative reconstruction for flat-panel cone-beam CT with focal spot blur, detector blur, and correlated noise. Phys Med Biol 2015; 61:296-319. [PMID: 26649783; DOI: 10.1088/0031-9155/61/1/296]
Abstract
While model-based reconstruction methods have been successfully applied to flat-panel cone-beam CT (FP-CBCT) systems, typical implementations ignore both spatial correlations in the projection data and system blurs due to the detector and the focal spot of the x-ray source. In this work, we develop a forward model for flat-panel-based systems that includes the blur and noise correlation associated with a finite focal spot size and an indirect detector (e.g., scintillator). This forward model is used to develop a staged reconstruction framework in which projection data are deconvolved and log-transformed, followed by a generalized least-squares reconstruction that uses a non-diagonal statistical weighting to account for the correlation arising from the acquisition and data processing chain. We investigate the performance of this novel reconstruction approach in both simulated data and CBCT test-bench data. In comparison to traditional filtered backprojection and model-based methods that ignore noise correlation, the proposed approach yields a superior noise-resolution tradeoff. For example, for a system with 0.34 mm FWHM scintillator blur and 0.70 mm FWHM focal spot blur, using the correlated noise model instead of an uncorrelated noise model increased resolution by 42% (with variance matched at 6.9 × 10⁻⁸ mm⁻²). While this advantage holds across a wide range of systems with differing blur characteristics, the improvements are greatest for systems where source blur is larger than detector blur.
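A minimal sketch of the first (projection-domain) stage, assuming a Wiener-style regularized deconvolution; the damping constant and blank-scan normalization are illustrative choices. The second stage would then apply a generalized least-squares reconstruction whose non-diagonal weighting reflects the correlations this deconvolution introduces.

```python
import numpy as np

def staged_preprocess(proj, psf_ft, blank):
    """Stage 1 of a (schematic) staged pipeline: deconvolve a 2D
    projection by the system blur in Fourier space (psf_ft is the FFT
    of the PSF, same shape as proj), then log-transform against the
    blank (unattenuated) scan to obtain line integrals."""
    P = np.fft.fft2(proj)
    deconv = np.real(np.fft.ifft2(P * np.conj(psf_ft) /
                                  (np.abs(psf_ft) ** 2 + 1e-3)))
    return -np.log(np.clip(deconv, 1e-6, None) / blank)
```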
Affiliation(s)
- Steven Tilley
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
10. Cant J, Palenstijn WJ, Behiels G, Sijbers J. Modeling blurring effects due to continuous gantry rotation: Application to region of interest tomography. Med Phys 2015; 42:2709-17. [DOI: 10.1118/1.4914422]
11. Stayman JW, Zbijewski W, Tilley S, Siewerdsen J. Generalized Least-Squares CT Reconstruction with Detector Blur and Correlated Noise Models. Proc SPIE 2014; 9033:903335. [PMID: 25328638; DOI: 10.1117/12.2043067]
Abstract
The success and improved dose utilization of statistical reconstruction methods arise, in part, from their ability to incorporate sophisticated models of the physics of the measurement process and noise. Despite the great promise of statistical methods, typical measurement models ignore blurring effects, and nearly all current approaches presume independent measurements, disregarding noise correlations and, with them, a potential avenue for improved image quality. In some imaging systems, such as flat-panel-based cone-beam CT, such correlations and blurs can be a dominant factor limiting the maximum achievable spatial resolution and noise performance. In this work, we propose a novel regularized generalized least-squares reconstruction method that includes models for both system blur and correlated noise in the projection data. We demonstrate, in simulation studies, that this approach can break through the traditional spatial resolution limits of methods that do not model these physical effects. Moreover, in comparison to other approaches that attempt deblurring without a correlation model, superior noise-resolution trade-offs can be found with the proposed approach.
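In schematic form, such an estimator can be written as follows; the notation (l: deblurred, log-transformed projection data, A: forward projector, W: non-diagonal weighting, R: penalty) is assumed for illustration.

```latex
% Schematic regularized generalized least-squares estimator: W is the
% inverse covariance of the processed data l, and its off-diagonal
% entries encode the correlated noise that diagonal weightings ignore.
\hat{\mu} = \operatorname*{arg\,min}_{\mu}\;
  (\mathbf{A}\mu - l)^{\top}\,\mathbf{W}\,(\mathbf{A}\mu - l)
  + \beta\,R(\mu)
```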
Affiliation(s)
- J Webster Stayman
- Dept. of Biomedical Eng., Johns Hopkins University, Baltimore, MD 21205, USA
- Wojciech Zbijewski
- Dept. of Biomedical Eng., Johns Hopkins University, Baltimore, MD 21205, USA
- Steven Tilley
- Dept. of Biomedical Eng., Johns Hopkins University, Baltimore, MD 21205, USA
- Jeffrey Siewerdsen
- Dept. of Biomedical Eng., Johns Hopkins University, Baltimore, MD 21205, USA
12. Stayman JW, Dang H, Ding Y, Siewerdsen JH. PIRPLE: a penalized-likelihood framework for incorporation of prior images in CT reconstruction. Phys Med Biol 2013; 58:7563-82. [PMID: 24107545; PMCID: PMC3868341; DOI: 10.1088/0031-9155/58/21/7563]
Abstract
Over the course of diagnosis and treatment, it is common for a number of imaging studies to be acquired. Such imaging sequences can provide substantial patient-specific prior knowledge about the anatomy, which can be incorporated into a prior-image-based tomographic reconstruction for improved image quality and better dose utilization. We present a general methodology that integrates prior images into a model-based reconstruction approach, including formulations of the measurement noise. This penalized-likelihood technique adopts a sparsity-enforcing penalty that incorporates prior information yet allows for change between the current reconstruction and the prior image. Moreover, since prior images are generally not registered with the current image volume, we present a modified model-based approach that seeks a joint registration of the prior image in addition to reconstruction from the projection data. We demonstrate that the combined prior-image- and model-based technique outperforms methods that ignore the prior data or lack a noise model. Moreover, we demonstrate the importance of registration for prior-image-based reconstruction methods and show that the prior-image-registered penalized-likelihood estimation (PIRPLE) approach can maintain a high level of image quality in the presence of noisy and undersampled projection data.
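Schematically, an objective of this form combines a likelihood term with two sparsity penalties and a joint registration, roughly as follows; the notation (L: log-likelihood, Psi_R, Psi_P: sparsifying transforms, x_P: prior image, T: registration transform) is assumed for illustration.

```latex
% Schematic prior-image-registered penalized-likelihood objective:
% the first penalty enforces sparsity of the reconstruction itself,
% the second enforces sparse differences from the registered prior.
\{\hat{x}, \hat{T}\} = \operatorname*{arg\,max}_{x,\,T}\;
  L(y; x) - \beta_{R}\,\lVert \Psi_{R}\,x \rVert_{1}
          - \beta_{P}\,\lVert \Psi_{P}\,(x - T\,x_{P}) \rVert_{1}
```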
Affiliation(s)
- J Webster Stayman
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
13. Nuyts J, De Man B, Fessler JA, Zbijewski W, Beekman FJ. Modelling the physics in the iterative reconstruction for transmission computed tomography. Phys Med Biol 2013; 58:R63-96. [PMID: 23739261; PMCID: PMC3725149; DOI: 10.1088/0031-9155/58/12/r63]
Abstract
There is an increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase applicability of x-ray CT imaging. IR has the ability to significantly reduce patient dose; it provides the flexibility to reconstruct images from arbitrary x-ray system geometries and allows one to include detailed models of photon transport and detection physics to accurately correct for a wide variety of image degrading effects. This paper reviews discretization issues and modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. The widespread implementation of IR with a highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling.
Affiliation(s)
- Johan Nuyts
- Department of Nuclear Medicine and Medical Imaging Research Center, KU Leuven, Leuven, Belgium
15. Teng Y, Zhang T. Generalized EM-type reconstruction algorithms for emission tomography. IEEE Trans Med Imaging 2012; 31:1724-1733. [PMID: 22665503; DOI: 10.1109/tmi.2012.2197758]
Abstract
We provide a general form for many reconstruction estimators of emission tomography. These estimators include Shepp and Vardi's maximum likelihood (ML) estimator, the quadratic weighted least squares (WLS) estimator, Anderson's WLS estimator, and Liu and Wang's multi-objective estimator, among others. We derive a generic update rule by constructing a surrogate function. This work is inspired by the ML-EM algorithm (EM: expectation maximization), which naturally arises as a special case. A regularization with a specific form can also be incorporated by De Pierro's trick. We provide a general and quite different convergence proof compared with the proofs of ML-EM and De Pierro. Theoretical analysis shows that the proposed algorithm monotonically decreases the cost function and automatically meets the nonnegativity constraints. We have introduced a mechanism that provides monotonic, self-constraining, and convergent algorithms, from which some interesting existing and new algorithms can be derived. Simulation results illustrate the behavior of these algorithms in terms of image quality and the resolution-noise tradeoff.
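The special case mentioned above, the Shepp-Vardi ML-EM update, is compact enough to sketch directly; the dense toy system matrix and numerical safeguards below are illustrative simplifications.

```python
import numpy as np

def ml_em(A, y, n_iters=50):
    """Classic Shepp-Vardi ML-EM for emission tomography, the special
    case that the generalized surrogate-based update reduces to. A is a
    (toy, dense) system matrix of detection probabilities and y the
    measured counts. The multiplicative update keeps the estimate
    nonnegative and monotonically decreases the negative log-likelihood."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)              # sensitivity image, sum_i a_ij
    for _ in range(n_iters):
        ybar = A @ x                  # expected counts
        x *= (A.T @ (y / np.maximum(ybar, 1e-12))) / np.maximum(sens, 1e-12)
    return x
```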
Affiliation(s)
- Yueyang Teng
- School of Sciences, Northeastern University, Shenyang 110004, China
16. Image Reconstruction. Med Image Anal 2011. [DOI: 10.1002/9780470918548.ch8]
17. Elter M, Horsch A. CADx of mammographic masses and clustered microcalcifications: A review. Med Phys 2009; 36:2052-68. [PMID: 19610294; DOI: 10.1118/1.3121511]
Affiliation(s)
- Matthias Elter
- Fraunhofer Institute for Integrated Circuits, Am Wolfsmantel 33, 91058 Erlangen, Germany
18. Lalush DS. Binary encoding of multiplexed images in mixed noise. IEEE Trans Med Imaging 2008; 27:1323-1332. [PMID: 18753046; DOI: 10.1109/tmi.2008.922697]
Abstract
Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
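A simplified stand-in for this noise propagation, assuming a linear decode through the inverse of the coding matrix; the paper's model additionally accounts for the average number of sources used per code.

```python
import numpy as np

def decoded_variance(S, x, sigma_c2, alpha):
    """Per-element variance of the decoded image for a binary coding
    matrix S under mixed noise. Each multiplexed measurement m = S @ x
    is assigned constant variance sigma_c2 plus proportional variance
    alpha * m; propagating through the linear decode W = S^{-1} gives
    Var(x_hat) = (W**2) @ Var(m)."""
    m = S @ x
    W = np.linalg.inv(S)
    return (W ** 2) @ (sigma_c2 + alpha * m)
```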
Affiliation(s)
- David S Lalush
- Joint Department of Biomedical Engineering, North Carolina State University, Raleigh, NC 27695-7115, USA
19. Jacobson MW, Fessler JA. An expanded theoretical treatment of iteration-dependent majorize-minimize algorithms. IEEE Trans Image Process 2007; 16:2411-22. [PMID: 17926925; PMCID: PMC2750827; DOI: 10.1109/tip.2007.904387]
Abstract
The majorize-minimize (MM) optimization technique has received considerable attention in signal and image processing applications, as well as in statistics literature. At each iteration of an MM algorithm, one constructs a tangent majorant function that majorizes the given cost function and is equal to it at the current iterate. The next iterate is obtained by minimizing this tangent majorant function, resulting in a sequence of iterates that reduces the cost function monotonically. A well-known special case of MM methods are expectation-maximization algorithms. In this paper, we expand on previous analyses of MM, due to Fessler and Hero, that allowed the tangent majorants to be constructed in iteration-dependent ways. Also, this paper overcomes an error in one of those earlier analyses. There are three main aspects in which our analysis builds upon previous work. First, our treatment relaxes many assumptions related to the structure of the cost function, feasible set, and tangent majorants. For example, the cost function can be nonconvex and the feasible set for the problem can be any convex set. Second, we propose convergence conditions, based on upper curvature bounds, that can be easier to verify than more standard continuity conditions. Furthermore, these conditions allow for considerable design freedom in the iteration-dependent behavior of the algorithm. Finally, we give an original characterization of the local region of convergence of MM algorithms based on connected (e.g., convex) tangent majorants. For such algorithms, cost function minimizers will locally attract the iterates over larger neighborhoods than typically is guaranteed with other methods. This expanded treatment widens the scope of the MM algorithm designs that can be considered for signal and image processing applications, allows us to verify the convergent behavior of previously published algorithms, and gives a fuller understanding overall of how these algorithms behave.
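A minimal concrete instance of the MM idea, using the standard quadratic tangent majorant for a cost with L-Lipschitz gradient (an illustrative textbook case, not this paper's iteration-dependent construction).

```python
import numpy as np

def mm_quadratic(grad, L, x0, n_iters=200):
    """Minimal majorize-minimize loop. For a cost f with L-Lipschitz
    gradient, phi(x; x_k) = f(x_k) + grad(x_k)^T (x - x_k)
    + (L/2)||x - x_k||^2 is a tangent majorant: it touches f at x_k and
    lies above it everywhere. Minimizing phi in closed form gives the
    step below, and f decreases monotonically along the iterates."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - grad(x) / L
    return x

# Hypothetical usage on f(x) = ||x - b||^2 (grad = 2(x - b), L = 2):
# b = np.array([1.0, -2.0]); mm_quadratic(lambda x: 2 * (x - b), 2.0, np.zeros(2))
```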
20. Lalush DS. Feasibility of transmission microCT with two fan-beam sources. Conf Proc IEEE Eng Med Biol Soc 2007; 2004:1283-6. [PMID: 17271924; DOI: 10.1109/iembs.2004.1403405]
Abstract
We study in simulation the properties of a transmission CT system using two fan-beam sources both illuminating a single detector. Using traditional X-ray sources, such a system would be expensive, slow, and unwieldy. With the development of new X-ray sources based on nanofabrication methods, however, such a dual-source system becomes feasible. The principal advantage of the new geometry is that a shorter fan-beam (or cone-beam) focal length can be used while achieving the same field-of-view. The shorter focal length should achieve approximately the same spatial resolution since the magnification is offset by the increased effect of nonzero focal spot size. However, the shorter focal length should give better efficiency through inverse square law effects. A disadvantage is that analytical reconstruction methods based on filtered backprojection may not be effective since each source does not view the entire subject. We demonstrate that iterative reconstruction techniques can solve this problem. We also demonstrate the potential improvement in resolution for an ideal source using a microCT simulation by comparing a conventional single source fan-beam CT with 50 cm focal length to a dual-source system with 30.5 cm focal length, both giving approximately the same transverse field of view. We found that the ideal dual-source system improved transverse spatial resolution (FWHM) by 4-14%, although wider tails (FWTM) were noted in point spread estimates. We conclude that use of multiple fan-beam sources is feasible to create transmission CT devices with shorter focal lengths.
Affiliation(s)
- David S Lalush
- Dept. of Biomed. Eng., North Carolina Univ., Chapel Hill, NC, USA
21. La Rivière PJ, Vargas PA. Monotonic penalized-likelihood image reconstruction for X-ray fluorescence computed tomography. IEEE Trans Med Imaging 2006; 25:1117-29. [PMID: 16967798; DOI: 10.1109/tmi.2006.877441]
Abstract
In this paper, we derive a monotonic penalized-likelihood algorithm for image reconstruction in X-ray fluorescence computed tomography (XFCT) when the attenuation maps at the energies of the fluorescence X-rays are unknown. In XFCT, a sample is irradiated with pencil beams of monochromatic synchrotron radiation that stimulate the emission of fluorescence X-rays from atoms of elements whose K- or L-edges lie below the energy of the stimulating beam. Scanning and rotating the object through the beam allows for acquisition of a tomographic dataset that can be used to reconstruct images of the distribution of the elements in question. XFCT is a stimulated emission tomography modality, and it is thus necessary to correct for attenuation of the incident and fluorescence photons. The attenuation map is, however, generally known only at the stimulating beam energy and not at the energies of the various fluorescence X-rays of interest. We have developed a penalized-likelihood image reconstruction strategy for this problem. The approach alternates between updating the distribution of a given element and updating the attenuation map for that element's fluorescence X-rays. The approach is guaranteed to increase the penalized likelihood at each iteration. Because the joint objective function is not necessarily concave, the approach may drive the solution to a local maximum. To encourage the algorithm to seek out a reasonable local maximum, we include in the objective function a prior that encourages a relationship, based on physical considerations, between the fluorescence attenuation map and the distribution of the element being reconstructed.
22. Feng B, Fessler JA, King MA. Incorporation of system resolution compensation (RC) in the ordered-subset transmission (OSTR) algorithm for transmission imaging in SPECT. IEEE Trans Med Imaging 2006; 25:941-9. [PMID: 16827494; DOI: 10.1109/tmi.2006.876151]
Abstract
In order to reconstruct attenuation maps with improved spatial resolution and quantitative accuracy, we developed an approximate method of incorporating system resolution compensation (RC) in the ordered-subset transmission (OSTR) algorithm for transmission reconstruction. Our method approximately models the blur caused by the finite intrinsic detector resolution, nonideal source collimation, and detector collimation. We derived the formulation using the optimization transfer principle, as in the derivation of the OSTR algorithm. The formulation includes one forward-blur step and one back-blur step, which do not severely slow down reconstruction, and it could be applicable to various transmission geometries, such as point-source, line-source, and sheet-source systems. Through computer simulations of the MCAT phantom and transmission measurements of the air-filled Data Spectrum Deluxe single photon emission computed tomography (SPECT) phantom, on one system employing a cone-beam geometry and another employing a scanning-line-source geometry, we showed that incorporation of RC increased spatial resolution and improved the quantitative accuracy of reconstruction. In the simulation studies, attenuation maps reconstructed with RC correction also improved the quantitative accuracy of emission reconstruction.
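Schematically, the blurred transmission mean model underlying such resolution compensation can be written as follows; the notation (b: blur kernel, d: blank-scan counts, a: intersection lengths, mu: attenuation) is assumed for illustration.

```latex
% Schematic mean model for transmission data with system blur b: the
% "forward-blur" step is the outer sum over b, and the matched
% "back-blur" appears as b^T in the gradient of the resulting objective.
\bar{y}_{i} \;=\; \sum_{i'} b_{i i'}\, d_{i'}\,
  \exp\!\Bigl(-\sum_{j} a_{i' j}\,\mu_{j}\Bigr)
```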
Affiliation(s)
- Bing Feng
- Department of Radiology, University of Massachusetts Medical School, Worcester, MA 01655, USA
23. Acha B, Serrano C, Acha JI, Roa LM. Segmentation and classification of burn images by color and texture information. J Biomed Opt 2005; 10:034014. [PMID: 16229658; DOI: 10.1117/1.1921227]
Abstract
In this paper, a burn color image segmentation and classification system is proposed. The aim of the system is to separate burn wounds from healthy skin, and to distinguish among the different types of burns (burn depths). Digital color photographs are used as inputs to the system. The system is based on color and texture information, since these are the characteristics observed by physicians in order to form a diagnosis. A perceptually uniform color space (L*u*v*) was used, since Euclidean distances calculated in this space correspond to perceptual color differences. After the burn is segmented, a set of color and texture features is calculated that serves as the input to a Fuzzy-ARTMAP neural network. The neural network classifies burns into three types of burn depths: superficial dermal, deep dermal, and full thickness. Clinical effectiveness of the method was demonstrated on 62 clinical burn wound images, yielding an average classification success rate of 82%.
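A minimal sketch of the color-space choice, using scikit-image's rgb2luv; segmenting against a single reference color is an illustrative simplification of the paper's segmentation and classification pipeline.

```python
import numpy as np
from skimage import color

def luv_distance_map(rgb_image, reference_rgb):
    """Map an RGB image into the perceptually uniform L*u*v* space and
    compute the Euclidean distance of every pixel to a reference color.
    In L*u*v*, these distances approximate perceived color differences,
    which is why segmentation is done in this space rather than RGB."""
    luv = color.rgb2luv(rgb_image)
    ref = color.rgb2luv(np.asarray(reference_rgb, dtype=float)
                        .reshape(1, 1, 3)).reshape(3)
    return np.linalg.norm(luv - ref, axis=-1)
```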
Affiliation(s)
- Begoña Acha
- Area de Teoría de la Señal y Comunicaciones, Escuela Técnica Superior de Ingenieros, University of Seville, Camino de los Descubrimientos s/n, 41092 Sevilla, Spain
24. Cost-Sensitive Ensemble of Support Vector Machines for Effective Detection of Microcalcification in Breast Cancer Diagnosis. Fuzzy Systems and Knowledge Discovery 2005. [DOI: 10.1007/11540007_59]
26. A Fusion of Neural Network Based Auto-associator and Classifier for the Classification of Microcalcification Patterns. 2004. [DOI: 10.1007/978-3-540-30499-9_122]
27. Arnéodo A, Decoster N, Kestener P, Roux S. A wavelet-based method for multifractal image analysis: From theoretical concepts to experimental applications. Adv Imaging Electron Phys 2003. [DOI: 10.1016/s1076-5670(03)80014-9]
28. Bowsher JE, Tornai MP, Peter J, González Trotter DE, Krol A, Gilland DR, Jaszczak RJ. Modeling the axial extension of a transmission line source within iterative reconstruction via multiple transmission sources. IEEE Trans Med Imaging 2002; 21:200-215. [PMID: 11989845; DOI: 10.1109/42.996339]
Abstract
Reconstruction algorithms for transmission tomography have generally assumed that the photons reaching a particular detector bin at a particular angle originate from a single point source. In this paper, we highlight several cases of extended transmission sources, in which it may be useful to approach the estimation of attenuation coefficients as a problem involving multiple transmission point sources. Examined in detail is the case of a fixed transmission line source with a fan-beam collimator. This geometry can result in attenuation images that have significant axial blur. Herein it is also shown, empirically, that extended transmission sources can result in biased estimates of the average attenuation, and an explanation is proposed. The finite axial resolution of the transmission line source configuration is modeled within iterative reconstruction using an expectation-maximization algorithm that was previously derived for estimating attenuation coefficients from single photon emission computed tomography (SPECT) emission data. The same algorithm is applicable to both problems because both can be thought of as involving multiple transmission sources. It is shown that modeling axial blur within reconstruction removes the bias in the average estimated attenuation and substantially improves the axial resolution of attenuation images.
Affiliation(s)
- J E Bowsher
- Duke University Medical Center, Durham, NC 27710, USA
29. Elbakri IA, Fessler JA. Statistical image reconstruction for polyenergetic X-ray computed tomography. IEEE Trans Med Imaging 2002; 21:89-99. [PMID: 11929108; DOI: 10.1109/42.993128]
Abstract
This paper describes a statistical image reconstruction method for X-ray computed tomography (CT) that is based on a physical model that accounts for the polyenergetic X-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. We assume that the object consists of a given number of nonoverlapping materials, such as soft tissue and bone. The attenuation coefficient of each voxel is the product of its unknown density and a known energy-dependent mass attenuation coefficient. We formulate a penalized-likelihood function for this polyenergetic model and develop an ordered-subsets iterative algorithm for estimating the unknown densities in each voxel. The algorithm monotonically decreases the cost function at each iteration when one subset is used. Applying this method to simulated X-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artifacts.
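The polyenergetic mean model at the heart of this method can be summarized as follows; the notation (I(E_k): spectrum and detector response, m_j(E_k): known mass attenuation, rho_j: unknown density, a_ij: intersection lengths) is assumed for illustration.

```latex
% Schematic polyenergetic mean model: the attenuation coefficient of
% voxel j at energy E_k factors into a known energy-dependent mass
% attenuation coefficient and an unknown density to be estimated.
\bar{y}_{i} \;=\; \sum_{k} I(E_{k})\,
  \exp\!\Bigl(-\sum_{j} m_{j}(E_{k})\,\rho_{j}\,a_{ij}\Bigr)
```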
Affiliation(s)
- Idris A Elbakri
- Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor 48109-2122, USA
30. Krol A, Bowsher JE, Manglos SH, Feiglin DH, Tornai MP, Thomas FD. An EM algorithm for estimating SPECT emission and transmission parameters from emissions data only. IEEE Trans Med Imaging 2001; 20:218-232. [PMID: 11341711; DOI: 10.1109/42.918472]
Abstract
A maximum-likelihood (ML) expectation-maximization (EM) algorithm (called EM-IntraSPECT) is presented for simultaneously estimating single photon emission computed tomography (SPECT) emission and attenuation parameters from emission data alone. The algorithm uses the activity within the patient as transmission tomography sources, with which attenuation coefficients can be estimated. For this initial study, EM-IntraSPECT was tested on computer-simulated attenuation and emission maps representing a simplified human thorax, as well as on SPECT data obtained from a physical phantom. Two evaluations were performed. First, to corroborate the idea of reconstructing attenuation parameters from emission data, the attenuation parameters (mu) were estimated with the emission intensities (lambda) fixed at their true values. Accurate reconstructions of the attenuation parameters were obtained. Second, the emission parameters lambda and attenuation parameters mu were simultaneously estimated from the emission data alone. In this case, there was crosstalk between the estimates of lambda and mu, and the final estimates depended on the initial values. Estimates degraded significantly as the support extended farther out from the body, and an explanation for this is proposed. In the EM-IntraSPECT reconstructed attenuation images, the lungs, spine, and soft tissue were readily distinguished and had approximately correct shapes and sizes. Compared with standard EM reconstruction assuming a fixed uniform attenuation map, EM-IntraSPECT provided more uniform estimates of cardiac activity in the physical phantom study and in the simulation study with tight support, but less uniform estimates with a broad support. The new EM algorithm derived here has additional applications, including reconstructing emission and transmission projection data under a unified statistical model.
Affiliation(s)
- A Krol
- SUNY Upstate Medical University, Department of Radiology, Syracuse 13210, USA