26
Hadjiiski L, Samala R, Chan HP. Image Processing Analytics: Enhancements and Segmentation. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00057-0]
27
Pujara AC, Joe AI, Patterson SK, Neal CH, Noroozian M, Ma T, Chan HP, Helvie MA, Maturen KE. Digital Breast Tomosynthesis Slab Thickness: Impact on Reader Performance and Interpretation Time. Radiology 2020; 297:534-542. [PMID: 33021891] [DOI: 10.1148/radiol.2020192805]
Abstract
Background Digital breast tomosynthesis (DBT) helps reduce recall rates and improve cancer detection compared with two-dimensional (2D) mammography but has a longer interpretation time. Purpose To evaluate the effect of DBT slab thickness and overlap on reader performance and interpretation time in the absence of 1-mm slices. Materials and Methods In this retrospective HIPAA-compliant multireader study of DBT examinations performed between August 2013 and July 2017, four fellowship-trained breast imaging radiologists blinded to final histologic findings interpreted DBT examinations by using a standard protocol (10-mm slabs with 5-mm overlap, 1-mm slices, synthetic 2D mammogram) and an experimental protocol (6-mm slabs with 3-mm overlap, synthetic 2D mammogram) with a crossover design. Among the 122 DBT examinations, 74 mammographic findings had final histologic findings, including 31 masses (26 malignant), 20 groups of calcifications (12 malignant), 18 architectural distortions (15 malignant), and five asymmetries (two malignant). Durations of reader interpretations were recorded. Comparisons were made by using receiver operating characteristic curves for diagnostic performance and paired t tests for continuous variables. Results Among 122 women, mean age was 58.6 years ± 10.1 (standard deviation). For detection of malignancy, areas under the receiver operating characteristic curves were similar between protocols (range, 0.83-0.94 vs 0.84-0.92; P ≥ .63). Mean DBT interpretation time was shorter with the experimental protocol for three of four readers (reader 1, 5.6 minutes ± 1.7 vs 4.7 minutes ± 1.4 [P < .001]; reader 2, 2.8 minutes ± 1.1 vs 2.3 minutes ± 1.0 [P = .001]; reader 3, 3.6 minutes ± 1.4 vs 3.3 minutes ± 1.3 [P = .17]; reader 4, 4.3 minutes ± 1.0 vs 3.8 minutes ± 1.1 [P ≤ .001]), with 72% reduction in both mean number of images and mean file size (P < .001 for both). 
Conclusion A digital breast tomosynthesis reconstruction protocol that uses 6-mm slabs with 3-mm overlap, without 1-mm slices, had similar diagnostic performance compared with the standard protocol and led to a reduced interpretation time for three of four readers. © RSNA, 2020 See also the editorial by Chang in this issue.
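As an illustrative aside (not the study's multireader ROC software), the area under the ROC curve that underlies these reader comparisons can be computed from a set of scores with the standard rank-based (Mann-Whitney) formulation. The function name and toy scores below are hypothetical:

```python
import numpy as np

def auc_mann_whitney(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a negative one."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos = scores[labels]
    neg = scores[~labels]
    # Count wins (1.0) and ties (0.5) over all positive/negative pairs.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Perfectly separated scores give AUC = 1.0.
print(auc_mann_whitney([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```

Published multireader studies such as this one use dedicated MRMC methodology on top of per-reader AUCs; this sketch shows only the per-reader statistic.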
28
Zhou C, Chan HP, Chughtai A, Hadjiiski LM, Kazerooni EA, Wei J. Pathologic categorization of lung nodules: Radiomic descriptors of CT attenuation distribution patterns of solid and subsolid nodules in low-dose CT. Eur J Radiol 2020; 129:109106. [PMID: 32526671] [DOI: 10.1016/j.ejrad.2020.109106]
Abstract
PURPOSE Develop a quantitative image analysis method to characterize the heterogeneous patterns of nodule components for the classification of pathological categories of nodules. MATERIALS AND METHODS With IRB approval and permission of the National Lung Screening Trial (NLST) project, 103 subjects with low dose CT (LDCT) were used in this study. We developed a radiomic quantitative CT attenuation distribution descriptor (qADD) to characterize the heterogeneous patterns of nodule components and a hybrid model (qADD+) that combined qADD with subject demographic data and radiologist-provided nodule descriptors to differentiate aggressive tumors from indolent tumors or benign nodules with pathological categorization as the reference standard. The classification performances of qADD and qADD+ were evaluated and compared to the Brock and the Mayo Clinic models by analysis of the area under the receiver operating characteristic curve (AUC). RESULTS The radiomic features were consistently selected into qADDs to differentiate pathological invasive nodules from (1) preinvasive nodules, (2) benign nodules, and (3) the group of preinvasive and benign nodules, achieving test AUCs of 0.847 ± 0.002, 0.842 ± 0.002 and 0.810 ± 0.001, respectively. The qADD+ obtained test AUCs of 0.867 ± 0.002, 0.888 ± 0.001 and 0.852 ± 0.001, respectively, which were higher than both the Brock and the Mayo Clinic models. CONCLUSION The pathologic invasiveness of lung tumors could be categorized according to the CT attenuation distribution patterns of the nodule components manifested on LDCT images, and the majority of invasive lung cancers could be identified at baseline LDCT scans.
29
Samala RK, Chan HP, Hadjiiski LM, Helvie MA, Richter CD. Generalization error analysis for deep convolutional neural network with transfer learning in breast cancer diagnosis. Phys Med Biol 2020; 65:105002. [PMID: 32208369] [DOI: 10.1088/1361-6560/ab82e8]
Abstract
Deep convolutional neural network (DCNN), now popularly called artificial intelligence (AI), has shown the potential to improve over previous computer-assisted tools in medical imaging developed in the past decades. A DCNN has millions of free parameters that need to be trained, but the training sample set is limited in size for most medical imaging tasks so that transfer learning is typically used. Automatic data mining may be an efficient way to enlarge the collected data set but the data can be noisy such as incorrect labels or even a wrong type of image. In this work we studied the generalization error of DCNN with transfer learning in medical imaging for the task of classifying malignant and benign masses on mammograms. With a finite available data set, we simulated a training set containing corrupted data or noisy labels. The balance between learning and memorization of the DCNN was manipulated by varying the proportion of corrupted data in the training set. The generalization error of DCNN was analyzed by the area under the receiver operating characteristic curve for the training and test sets and the weight changes after transfer learning. The study demonstrates that the transfer learning strategy of DCNN for such tasks needs to be designed properly, taking into consideration the constraints of the available training set having limited size and quality for the classification task at hand, to minimize memorization and improve generalizability.
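The study above varies the proportion of corrupted data in the training set to probe the balance between learning and memorization. A hypothetical helper like the following (not the authors' code) shows one simple way such label noise can be injected into a binary-labeled training set:

```python
import numpy as np

def corrupt_labels(labels, fraction, seed=0):
    """Flip a given fraction of binary labels to simulate a noisy
    training set, e.g. mislabeled cases from automatic data mining."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    n_flip = int(round(fraction * len(labels)))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    labels[idx] = 1 - labels[idx]
    return labels

clean = np.zeros(100, dtype=int)
noisy = corrupt_labels(clean, 0.2)
print(noisy.sum())  # 20 labels flipped
```

Sweeping `fraction` and comparing training-set versus test-set AUC is one way to expose the memorization behavior the abstract describes.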
30
Chan HP, Samala RK, Hadjiiski LM. CAD and AI for breast cancer-recent development and challenges. Br J Radiol 2020; 93:20190580. [PMID: 31742424] [PMCID: PMC7362917] [DOI: 10.1259/bjr.20190580]
Abstract
Computer-aided diagnosis (CAD) has been a popular area of research and development in the past few decades. In CAD, machine learning methods and multidisciplinary knowledge and techniques are used to analyze the patient information and the results can be used to assist clinicians in their decision making process. CAD may analyze imaging information alone or in combination with other clinical data. It may provide the analyzed information directly to the clinician or correlate the analyzed results with the likelihood of certain diseases based on statistical modeling of the past cases in the population. CAD systems can be developed to provide decision support for many applications in the patient care processes, such as lesion detection, characterization, cancer staging, treatment planning and response assessment, recurrence and prognosis prediction. The new state-of-the-art machine learning technique, known as deep learning (DL), has revolutionized speech and text recognition as well as computer vision. The potential of major breakthrough by DL in medical image analysis and other CAD applications for patient care has brought about unprecedented excitement of applying CAD, or artificial intelligence (AI), to medicine in general and to radiology in particular. In this paper, we will provide an overview of the recent developments of CAD using DL in breast imaging and discuss some challenges and practical issues that may impact the advancement of artificial intelligence and its integration into clinical workflow.
31
Chan HP, Samala RK, Hadjiiski LM, Zhou C. Deep Learning in Medical Image Analysis. Adv Exp Med Biol 2020; 1213:3-21. [PMID: 32030660] [PMCID: PMC7442218] [DOI: 10.1007/978-3-030-33128-3_1]
Abstract
Deep learning is the state-of-the-art machine learning approach. The success of deep learning in many pattern recognition applications has brought excitement and high expectations that deep learning, or artificial intelligence (AI), can bring revolutionary changes in health care. Early studies of deep learning applied to lesion detection or classification have reported superior performance compared to those by conventional techniques or even better than radiologists in some tasks. The potential of applying deep-learning-based medical image analysis to computer-aided diagnosis (CAD), thus providing decision support to clinicians and improving the accuracy and efficiency of various diagnostic and treatment processes, has spurred new research and development efforts in CAD. Despite the optimism in this new era of machine learning, the development and implementation of CAD or AI tools in clinical practice face many challenges. In this chapter, we will discuss some of these issues and efforts needed to develop robust deep-learning-based CAD tools and integrate these tools into the clinical workflow, thereby advancing towards the goal of providing reliable intelligent aids for patient care.
32
Zheng J, Fessler JA, Chan HP. Effect of source blur on digital breast tomosynthesis reconstruction. Med Phys 2019; 46:5572-5592. [PMID: 31494953] [DOI: 10.1002/mp.13801]
Abstract
PURPOSE Most digital breast tomosynthesis (DBT) reconstruction methods neglect the blurring of the projection views caused by the finite size or motion of the x-ray focal spot. This paper studies the effect of source blur on the spatial resolution of reconstructed DBT using analytical calculation and simulation, and compares the influence of source blur over a range of blurred source sizes. METHODS Mathematically derived formulas describe the point spread function (PSF) of source blur on the detector plane as a function of the spatial locations of the finite-sized source and the object. By using the available technical parameters of some clinical DBT systems, we estimated the effective source sizes over a range of exposure time and DBT scan geometries. We used the CatSim simulation tool (GE Global Research, NY) to generate digital phantoms containing line pairs and beads at different locations and imaged with sources of four different sizes covering the range of potential source blur. By analyzing the relative contrasts of the test objects in the reconstructed images, we studied the effect of the source blur on the spatial resolution of DBT. Furthermore, we simulated a detector that rotated in synchrony with the source about the rotation center and calculated the spatial distribution of the blurring distance in the imaged volume to estimate its influence on source blur. RESULTS Calculations demonstrate that the PSF is highly shift-variant, making it challenging to accurately implement during reconstruction. The results of the simulated phantoms demonstrated that a typical finite-sized focal spot (~0.3 mm) will not affect the reconstructed image resolution if the x-ray tube is stationary during data acquisition. If the x-ray tube moves during exposure, the extra blur due to the source motion may degrade image resolution, depending on the effective size of the source along the direction of the motion. 
A detector that rotates in synchrony with the source does not reduce the influence of source blur substantially. CONCLUSIONS This study demonstrates that the extra source blur due to the motion of the x-ray tube during image acquisition substantially degrades the reconstructed image resolution. This effect cannot be alleviated by rotating the detector in synchrony with the source. The simulation results suggest that there are potential benefits of modeling the source blur in image reconstruction for DBT systems using continuous-motion acquisition mode.
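For context, the penumbra cast by a finite (or moving) focal spot follows the standard geometric-unsharpness relation U = f · OID/SOD from similar triangles. The numbers below are illustrative, not the paper's system parameters:

```python
def geometric_unsharpness(focal_spot_mm, source_to_object_mm, object_to_detector_mm):
    """Width of the focal-spot penumbra projected past an object onto
    the detector: U = f * OID / SOD (similar-triangle geometry)."""
    return focal_spot_mm * object_to_detector_mm / source_to_object_mm

# A 0.3 mm focal spot, object 600 mm from the source, 50 mm above the detector:
print(geometric_unsharpness(0.3, 600.0, 50.0))  # 0.025 mm
```

An effective source elongated by tube motion during exposure simply enters this relation as a larger `focal_spot_mm` along the motion direction, which is why continuous-motion acquisition degrades resolution more than the static focal spot alone.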
33
Cha KH, Hadjiiski LM, Cohan RH, Chan HP, Caoili EM, Davenport MS, Samala RK, Weizer AZ, Alva A, Kirova-Nedyalkova G, Shampain K, Meyer N, Barkmeier D, Woolen S, Shankar PR, Francis IR, Palmbos P. Diagnostic Accuracy of CT for Prediction of Bladder Cancer Treatment Response with and without Computerized Decision Support. Acad Radiol 2019; 26:1137-1145. [PMID: 30424999] [DOI: 10.1016/j.acra.2018.10.010]
Abstract
RATIONALE AND OBJECTIVES To evaluate whether a computed tomography (CT)-based computerized decision-support system for muscle-invasive bladder cancer treatment response assessment (CDSS-T) can improve identification of patients who have responded completely to neoadjuvant chemotherapy. MATERIALS AND METHODS Following Institutional Review Board approval, pre-chemotherapy and post-chemotherapy CT scans of 123 subjects with 157 muscle-invasive bladder cancer foci were collected retrospectively. CT data were analyzed with a CDSS-T that uses a combination of deep-learning convolutional neural network and radiomic features to distinguish muscle-invasive bladder cancers that have fully responded to neoadjuvant treatment from those that have not. Leave-one-case-out cross-validation was used to minimize overfitting. Five attending abdominal radiologists, four diagnostic radiology residents, two attending oncologists, and one attending urologist estimated the likelihood of pathologic T0 disease (complete response) by viewing paired pre/post-treatment CT scans placed side-by-side on an internally-developed graphical user interface. The observers provided an estimate without use of CDSS-T and then were permitted to revise their estimate after a CDSS-T-derived likelihood score was displayed. Observer estimates were analyzed with multi-reader, multi-case receiver operating characteristic methodology. The area under the curve (AUC) and the statistical significance of the difference were estimated. RESULTS The mean AUCs for assessment of pathologic T0 disease were 0.80 for CDSS-T alone, 0.74 for physicians not using CDSS-T, and 0.77 for physicians using CDSS-T. The increase in the physicians' performance was statistically significant (P < .05). CONCLUSION CDSS-T improves physician performance for identifying complete response of muscle-invasive bladder cancer to neoadjuvant chemotherapy.
34
Ma X, Wei J, Zhou C, Helvie MA, Chan HP, Hadjiiski LM, Lu Y. Automated pectoral muscle identification on MLO-view mammograms: Comparison of deep neural network to conventional computer vision. Med Phys 2019; 46:2103-2114. [PMID: 30771257] [DOI: 10.1002/mp.13451]
Abstract
OBJECTIVES The aim of this study was to develop a fully automated deep learning approach for identification of the pectoral muscle on mediolateral oblique (MLO) view mammograms and evaluate its performance in comparison to our previously developed texture-field orientation (TFO) method using conventional image feature analysis. Pectoral muscle segmentation is an important step for automated image analyses such as breast density or parenchymal pattern classification, lesion detection, and multiview correlation. MATERIALS AND METHODS Institutional Review Board (IRB) approval was obtained before data collection. A dataset of 729 MLO-view mammograms including 637 digitized film mammograms (DFM) and 92 digital mammograms (DM) from our previous study was used for the training and validation of our deep convolutional neural network (DCNN) segmentation method. In addition, we collected an independent set of 203 DMs from 131 patients for testing. The film mammograms were digitized at a pixel size of 50 μm × 50 μm with a Lumiscan digitizer. All DMs were acquired with GE systems at a pixel size of 100 μm × 100 μm. An experienced MQSA radiologist manually drew the pectoral muscle boundary on each mammogram as the reference standard. We trained the DCNN to estimate a probability map of the pectoral muscle region on mammograms. The DCNN consisted of a contracting path to capture multiresolution image context and a symmetric expanding path for prediction of the pectoral muscle region. Three DCNN structures were compared for automated identification of pectoral muscles. Tenfold cross-validation was used in training of the DCNNs. After training, we applied the ten trained models during cross-validation to the independent DM test set. The predicted pectoral muscle region of each test DM was obtained as the mean probability map by averaging the ensemble of probability maps from the ten models. 
The DCNN-segmented pectoral muscle was evaluated by three performance measures relative to the reference standard: (a) the percent overlap area (POA) of the pectoral muscle regions, (b) the Hausdorff distance (Hdist), and (c) the average Euclidean distance (AvgDist) between the boundaries. The results were compared to those obtained with the TFO method, used as our baseline. A two-tailed paired t test was performed to examine the significance in the differences between the DCNN and the baseline. RESULTS In the ten test partitions of the cross-validation set, the DCNN achieved a mean POA of 96.5 ± 2.9%, a mean Hdist of 2.26 ± 1.31 mm, and a mean AvgDist of 0.78 ± 0.58 mm, while the corresponding measures by the baseline method were 94.2 ± 4.8%, 3.69 ± 2.48 mm, and 1.30 ± 1.22 mm, respectively. For the independent DM test set, the DCNN achieved a mean POA of 93.7% ± 6.9%, a mean Hdist of 3.80 ± 3.21 mm, and a mean AvgDist of 1.49 ± 1.62 mm, compared with 86.9% ± 16.0%, 7.18 ± 14.22 mm, and 3.98 ± 14.13 mm, respectively, by the baseline method. CONCLUSION In comparison to the TFO method, DCNN significantly improved the accuracy of pectoral muscle identification on mammograms (P < 0.05).
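The Hausdorff distance used as a boundary-agreement measure here has a standard definition; a minimal sketch over 2D boundary point sets (toy coordinates, not the study's data) could look like:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two boundary point sets:
    the largest distance from any point in one set to its nearest
    neighbor in the other set."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Pairwise Euclidean distances, shape (len(a), len(b)).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

sq = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(0, 0), (0, 1), (1, 0), (1, 3)]
print(hausdorff(sq, shifted))  # 2.0, driven by the displaced corner
```

Because it takes a maximum, the Hausdorff distance is sensitive to a single badly placed boundary point, which is why it is typically reported alongside an average boundary distance such as AvgDist.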
35
Wu E, Hadjiiski LM, Samala RK, Chan HP, Cha KH, Richter C, Cohan RH, Caoili EM, Paramagul C, Alva A, Weizer AZ. Deep Learning Approach for Assessment of Bladder Cancer Treatment Response. Tomography 2019; 5:201-208. [PMID: 30854458] [PMCID: PMC6403041] [DOI: 10.18383/j.tom.2018.00036]
Abstract
We compared the performance of different deep learning-convolutional neural network (DL-CNN) models for bladder cancer treatment response assessment based on transfer learning by freezing different DL-CNN layers and varying the DL-CNN structure. Pre- and posttreatment computed tomography scans of 123 patients (cancers, 129; pre- and posttreatment cancer pairs, 158) undergoing chemotherapy were collected. After chemotherapy 33% of patients had T0 stage cancer (complete response). Regions of interest in pre- and posttreatment scans were extracted from the segmented lesions and combined into hybrid pre-post image pairs (h-ROIs). Training (pairs, 94; h-ROIs, 6209), validation (10 pairs) and test sets (54 pairs) were obtained. The DL-CNN consisted of 2 convolution (C1-C2), 2 locally connected (L3-L4), and 1 fully connected layers. The DL-CNN was trained with h-ROIs to classify cancers as fully responding (stage T0) or not fully responding to chemotherapy. Two radiologists provided lesion likelihood of being stage T0 posttreatment. The test area under the ROC curve (AUC) was 0.73 for T0 prediction by the base DL-CNN structure with randomly initialized weights. The base DL-CNN structure with pretrained weights and transfer learning (no frozen layers) achieved test AUC of 0.79. The test AUCs for 3 modified DL-CNN structures (different C1-C2 max pooling filter sizes, strides, and padding, with transfer learning) were 0.72, 0.86, and 0.69. For the base DL-CNN with (C1) frozen, (C1-C2) frozen, and (C1-C2-L3) frozen, the test AUCs were 0.81, 0.78, and 0.71, respectively. The radiologists' AUCs were 0.76 and 0.77. DL-CNN performed better with pretrained than randomly initialized weights.
36
Ma X, Hadjiiski LM, Wei J, Chan HP, Cha KH, Cohan RH, Caoili EM, Samala R, Zhou C, Lu Y. U-Net based deep learning bladder segmentation in CT urography. Med Phys 2019; 46:1752-1765. [PMID: 30734932] [DOI: 10.1002/mp.13438]
Abstract
OBJECTIVES To develop a U-Net-based deep learning approach (U-DL) for bladder segmentation in computed tomography urography (CTU) as a part of a computer-assisted bladder cancer detection and treatment response assessment pipeline. MATERIALS AND METHODS A dataset of 173 cases including 81 cases in the training/validation set (42 masses, 21 with wall thickening, 18 normal bladders), and 92 cases in the test set (43 masses, 36 with wall thickening, 13 normal bladders) were used with Institutional Review Board approval. An experienced radiologist provided three-dimensional (3D) hand outlines for all cases as the reference standard. We previously developed a bladder segmentation method that used a deep learning convolution neural network and level sets (DCNN-LS) within a user-input bounding box. However, some cases with poor image quality or with advanced bladder cancer spreading into the neighboring organs caused inaccurate segmentation. We have newly developed an automated U-DL method to estimate a likelihood map of the bladder in CTU. The U-DL did not require a user-input box and the level sets for postprocessing. To identify the best model for this task, we compared the following models: (a) two-dimensional (2D) U-DL and 3D U-DL using 2D CT slices and 3D CT volumes, respectively, as input, (b) U-DLs using CT images of different resolutions as input, and (c) U-DLs with and without automated cropping of the bladder as an image preprocessing step. The segmentation accuracy relative to the reference standard was quantified by six measures: average volume intersection ratio (AVI), average percent volume error (AVE), average absolute volume error (AAVE), average minimum distance (AMD), average Hausdorff distance (AHD), and the average Jaccard index (AJI). As a baseline, the results from our previous DCNN-LS method were used. 
RESULTS In the test set, the best 2D U-DL model achieved AVI, AVE, AAVE, AMD, AHD, and AJI values of 93.4 ± 9.5%, -4.2 ± 14.2%, 9.2 ± 11.5%, 2.7 ± 2.5 mm, 9.7 ± 7.6 mm, 85.0 ± 11.3%, respectively, while the corresponding measures by the best 3D U-DL were 90.6 ± 11.9%, -2.3 ± 21.7%, 11.5 ± 18.5%, 3.1 ± 3.2 mm, 11.4 ± 10.0 mm, and 82.6 ± 14.2%, respectively. For comparison, the corresponding values obtained with the baseline method were 81.9 ± 12.1%, 10.2 ± 16.2%, 14.0 ± 13.0%, 3.6 ± 2.0 mm, 12.8 ± 6.1 mm, and 76.2 ± 11.8%, respectively, for the same test set. The improvement for all measures between the best U-DL and the DCNN-LS were statistically significant (P < 0.001). CONCLUSION Compared to a previous DCNN-LS method, which depended on a user-input bounding box, the U-DL provided more accurate bladder segmentation and was more automated than the previous approach.
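The volume-overlap measures reported above (AVI, AJI) have standard definitions on binary masks. As a small sketch with toy 2D masks standing in for 3D volumes (names and shapes are illustrative, not from the paper's code):

```python
import numpy as np

def jaccard_index(seg, ref):
    """Jaccard index: |A intersect B| / |A union B| for binary masks."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    return np.logical_and(seg, ref).sum() / np.logical_or(seg, ref).sum()

def volume_intersection_ratio(seg, ref):
    """Fraction of the reference volume covered by the segmentation."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    return np.logical_and(seg, ref).sum() / ref.sum()

ref = np.zeros((4, 4), dtype=bool); ref[:2, :] = True   # 8-voxel reference
seg = np.zeros((4, 4), dtype=bool); seg[:3, :] = True   # 12-voxel over-segmentation
print(jaccard_index(seg, ref), volume_intersection_ratio(seg, ref))
```

The example shows why both are reported: an over-segmentation can fully cover the reference (intersection ratio 1.0) while the Jaccard index (here 8/12) still penalizes the excess volume.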
37
Wei J, Chan HP, Helvie MA, Roubidoux MA, Neal CH, Lu Y, Hadjiiski LM, Zhou C. Synthesizing mammogram from digital breast tomosynthesis. Phys Med Biol 2019; 64:045011. [PMID: 30625429] [DOI: 10.1088/1361-6560/aafcda]
Abstract
The purpose of this study is to develop a new method for generating a synthesized mammogram (SM) from digital breast tomosynthesis (DBT) and to assess its potential as an adjunct to DBT. We first applied multiscale bilateral filtering to the reconstructed DBT slices to enhance the high-frequency features and reduce noise. A maximum intensity projection (MIP) image was then obtained from the high-frequency components of the DBT slices. A multiscale image fusion method was designed to combine the MIP image and the central DBT projection view into an SM and further enhance the high-frequency features. We conducted a pilot reader study to visually assess the image quality of SM in comparison to full field digital mammograms (FFDM). For each DBT craniocaudal or mediolateral view, a clinical FFDM of the corresponding view was retrospectively collected. Three MQSA radiologists, blinded to the pathological and other clinical information, independently interpreted the SM and the corresponding FFDM side by side marked with the lesion locations. The differences in the BI-RADS assessments of both microcalcifications (MCs) and masses between SM and FFDM did not achieve statistical significance for all three readers. The conspicuity of MCs on SM was superior to that on FFDM and the BI-RADS assessments of MCs were comparable, while the conspicuity of masses on SM was degraded and interpretation on SM was less accurate than that on FFDM. The SM may be useful for efficient prescreening of MCs in DBT but the DBT should be used for detection and characterization of masses.
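The MIP step in the pipeline above is a simple per-pixel maximum over the slice stack; the paper applies it to the high-frequency components after bilateral filtering, but the projection itself reduces to one array operation. A minimal sketch with a toy stack:

```python
import numpy as np

def max_intensity_projection(stack):
    """Collapse a stack of reconstructed slices into a single image by
    keeping the brightest value along the depth axis at each pixel."""
    return np.asarray(stack).max(axis=0)

# Three 2x2 "slices"; each output pixel is the brightest value across slices.
stack = np.array([[[1, 5], [0, 2]],
                  [[4, 1], [3, 0]],
                  [[2, 2], [1, 7]]])
print(max_intensity_projection(stack))  # [[4 5]
                                        #  [3 7]]
```

Projecting only the high-frequency components, as the study does, keeps sharp structures such as microcalcifications without letting bright low-frequency background dominate the maximum.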
38
Gordon MN, Hadjiiski LM, Cha KH, Samala RK, Chan HP, Cohan RH, Caoili EM. Deep-learning convolutional neural network: Inner and outer bladder wall segmentation in CT urography. Med Phys 2019; 46:634-648. [PMID: 30520055] [DOI: 10.1002/mp.13326]
Abstract
PURPOSE We are developing a computerized segmentation tool for the inner and outer bladder wall as a part of an image analysis pipeline for CT urography (CTU). MATERIALS AND METHODS A data set of 172 CTU cases was collected retrospectively with Institutional Review Board (IRB) approval. The data set was randomly split into two independent sets of training (81 cases) and testing (92 cases), which were manually outlined for both the inner and outer wall. We trained a deep-learning convolutional neural network (DL-CNN) to distinguish the bladder wall from the inside and outside of the bladder using neighborhood information. Approximately 240 000 regions of interest (ROIs) of 16 × 16 pixels in size were extracted from regions in the training cases identified by the manually outlined inner and outer bladder walls to form a training set for the DL-CNN; half of the ROIs were selected to include the bladder wall and the other half were selected to exclude the bladder wall, with some of these ROIs being inside the bladder and the rest outside the bladder entirely. The DL-CNN trained on these ROIs was applied to the cases in the test set slice-by-slice to generate a bladder wall likelihood map where the gray level of a given pixel represents the likelihood that a given pixel would belong to the bladder wall. We then used the DL-CNN likelihood map as an energy term in the energy equation of a cascaded level sets method to segment the inner and outer bladder wall. The DL-CNN segmentation with level sets was compared to the three-dimensional (3D) hand-segmented contours as a reference standard. RESULTS For the inner wall contour, the training set achieved the average volume intersection, average volume error, average absolute volume error, and average distance of 90.0 ± 8.7%, -4.2 ± 18.4%, 12.9 ± 13.9%, and 3.0 ± 1.6 mm, respectively. The corresponding values for the test set were 86.9 ± 9.6%, -8.3 ± 37.7%, 18.4 ± 33.8%, and 3.4 ± 1.8 mm, respectively.
For the outer wall contour, the training set achieved the values of 93.7 ± 3.9%, -7.8 ± 11.4%, 10.3 ± 9.3%, and 3.0 ± 1.2 mm, respectively. The corresponding values for the test set were 87.5 ± 9.9%, -1.2 ± 20.8%, 11.9 ± 17.0%, and 3.5 ± 2.3 mm, respectively. CONCLUSIONS Our study demonstrates that DL-CNN-assisted level sets can effectively segment bladder walls from the inner bladder and outer structures despite a lack of consistent distinctions along the inner wall. However, even with the addition of level sets, the inner and outer walls may still be over-segmented and the DL-CNN-assisted level sets may incorrectly segment parts of the prostate that overlap with the outer bladder wall. The outer wall segmentation was improved compared to our previous method and the DL-CNN-assisted level sets were also able to segment the inner bladder wall with similar performance. This study shows the DL-CNN-assisted level set segmentation tool can effectively segment the inner and outer wall of the bladder.
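The 16 × 16 ROI sampling described in the methods above amounts to cutting fixed-size patches at chosen pixel locations. A hypothetical helper (not the authors' implementation) that extracts such patches while skipping any that would cross the image border:

```python
import numpy as np

def extract_rois(image, centers, size=16):
    """Cut size x size patches centered at the given (row, col) pixel
    coordinates, skipping patches that would fall outside the image."""
    rois = []
    h, w = image.shape
    half = size // 2
    for r, c in centers:
        r0, c0 = r - half, c - half
        if r0 >= 0 and c0 >= 0 and r0 + size <= h and c0 + size <= w:
            rois.append(image[r0:r0 + size, c0:c0 + size])
    return rois

img = np.arange(64 * 64).reshape(64, 64)
rois = extract_rois(img, [(32, 32), (2, 2)])  # second center is too close to the border
print(len(rois), rois[0].shape)  # 1 (16, 16)
```

In a setup like the study's, centers on the outlined wall would form the positive class and centers sampled inside or outside the bladder the negative class.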
39
Chan HP, Helvie MA. Deep Learning for Mammographic Breast Density Assessment and Beyond. Radiology 2018; 290:59-60. [PMID: 30325286] [DOI: 10.1148/radiol.2018182116]
40
Alvarez R, Ridelman E, Rizk N, White MS, Zhou C, Chan HP, Varban OA, Helvie MA, Seeley RJ. Assessment of mammographic breast density after sleeve gastrectomy. Surg Obes Relat Dis 2018; 14:1643-1651. [PMID: 30195656] [DOI: 10.1016/j.soard.2018.07.024]
Abstract
BACKGROUND Mammographic breast density (BD) is an independent risk factor for breast cancer. The effects of bariatric surgery on BD are unknown. OBJECTIVES To investigate BD changes after sleeve gastrectomy (SG). SETTING University hospital, United States. METHODS Fifty women with mammograms before and after SG performed from 2009 to 2015 were identified after excluding patients with a history of breast cancer, hormone replacement, and/or breast surgery. Patient age, menopausal status, co-morbidities, hemoglobin A1C, and body mass index were collected. Craniocaudal mammographic views before and after SG were interpreted by a blinded radiologist and analyzed by software to obtain Breast Imaging Reporting and Data System (BI-RADS) density categories, breast area, BD, and absolute dense breast area (ADA). Analyses were performed using χ2, McNemar's test, t test, and linear regressions. RESULTS Radiologist interpretation revealed a significant increase in the BI-RADS B+C category (68% versus 54%; P = .0095) and BD (9.8 ± 7.4% versus 8.3 ± 6.4%; P = .0006) after SG. Software analyses showed a postoperative decrease in breast area (75,398.9 ± 22,941.2 versus 90,655.9 ± 25,621.0 pixels; P < .0001) and ADA (7287.1 ± 3951.3 versus 8204.6 ± 4769.9 pixels; P = .0314) with no significant change in BD. The reduction in ADA was accentuated in postmenopausal patients. Declining breast area was directly correlated with body mass index reduction (R2 = .4495; P < .0001). Changes in breast rather than whole-body adiposity better explained the ADA reduction. Neither diabetes status nor changes in hemoglobin A1C correlated with changes in ADA. CONCLUSIONS ADA decreases after SG, particularly in postmenopausal patients. Software-generated ADA may be more accurate than radiologist-estimated BD or BI-RADS category for capturing changes in dense breast tissue after SG.
|
41
|
Samala RK, Chan HP, Hadjiiski LM, Helvie MA, Richter C, Cha K. Evolutionary pruning of transfer learned deep convolutional neural network for breast cancer diagnosis in digital breast tomosynthesis. Phys Med Biol 2018; 63:095005. [PMID: 29616660 PMCID: PMC5967610 DOI: 10.1088/1361-6560/aabb5b]
Abstract
Deep learning models are highly parameterized, resulting in difficulty in inference and transfer learning for image recognition tasks. In this work, we propose a layered pathway evolution method to compress a deep convolutional neural network (DCNN) for classification of masses in digital breast tomosynthesis (DBT). The objective is to prune the number of tunable parameters while preserving the classification accuracy. In the first-stage transfer learning, 19 632 augmented regions of interest (ROIs) from 2454 mass lesions on mammograms were used to fine-tune a DCNN pre-trained on ImageNet. In the second-stage transfer learning, the DCNN was used as a feature extractor followed by feature selection and random forest classification. The pathway evolution was performed iteratively using a genetic algorithm with tournament selection driven by count-preserving crossover and mutation. The second stage was trained with 9120 DBT ROIs from 228 mass lesions using leave-one-case-out cross-validation. The DCNN was reduced by 87% in the number of neurons, 34% in the number of parameters, and 95% in the number of multiply-and-add operations required in the convolutional layers. The test AUCs on 89 mass lesions from 94 independent DBT cases before and after pruning were 0.88 and 0.90, respectively, and the difference was not statistically significant (p > 0.05). The proposed DCNN compression approach can reduce the number of required operations by 95% while maintaining the classification performance. The approach can be extended to other deep neural networks and imaging tasks where transfer learning is appropriate.
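The genetic-algorithm machinery named above (tournament selection, count-preserving mutation) can be sketched on a toy pruning problem. Everything here is an assumption for illustration: the 20-filter mask, the fitness function, and the set of "useful" filters stand in for the paper's pathway evolution over real DCNN layers.

```python
import random

random.seed(0)

# Each chromosome is a keep/prune mask over 20 hypothetical filters;
# fitness rewards keeping the 8 "useful" filters and penalizes
# keeping the rest (a stand-in for validation AUC under a budget).
USEFUL = set(range(8))

def fitness(mask):
    kept = {i for i, m in enumerate(mask) if m}
    return len(kept & USEFUL) - 0.25 * len(kept - USEFUL)

def mutate(mask):
    """Swap one kept and one pruned filter -- the keep count is
    preserved, the discrete analogue of count-preserving mutation."""
    ones = [i for i, m in enumerate(mask) if m]
    zeros = [i for i, m in enumerate(mask) if not m]
    mask = mask[:]
    if ones and zeros:
        mask[random.choice(ones)] = 0
        mask[random.choice(zeros)] = 1
    return mask

def tournament(pop, k=3):
    """Pick the fittest of k randomly sampled individuals."""
    return max(random.sample(pop, k), key=fitness)

pop = []
for _ in range(30):  # random initial masks, each keeping 8 of 20 filters
    mask = [1] * 8 + [0] * 12
    random.shuffle(mask)
    pop.append(mask)

best = max(pop, key=fitness)
for _ in range(60):  # evolve: select, mutate, track the best ever seen
    pop = [mutate(tournament(pop)) for _ in range(30)]
    best = max(pop + [best], key=fitness)
```

Crossover is omitted for brevity; a count-preserving crossover would similarly exchange filter indices between two parents while keeping each child's keep count fixed.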
|
42
|
Balagurunathan Y, Beers A, Kalpathy-Cramer J, McNitt-Gray M, Hadjiiski L, Zhao B, Zhu J, Yang H, Yip SSF, Aerts HJWL, Napel S, Cherezov D, Cha K, Chan HP, Flores C, Garcia A, Gillies R, Goldgof D. Semi-automated pulmonary nodule interval segmentation using the NLST data. Med Phys 2018; 45:1093-1107. [PMID: 29363773 DOI: 10.1002/mp.12766]
Abstract
PURPOSE To study the variability in volume change estimates of pulmonary nodules due to segmentation approaches used across several algorithms and to evaluate these effects on the ability to predict nodule malignancy. METHODS We obtained 100 patient image datasets from the National Lung Screening Trial (NLST) that had a nodule detected on each of two consecutive low-dose computed tomography (LDCT) scans, with an equal proportion of malignant and benign cases (50 malignant, 50 benign). Information about the nodule location was provided by a screen capture with a bounding box, and the nodule's axial location was indicated. Five participating Quantitative Imaging Network (QIN) institutions performed nodule segmentation using their preferred semi-automated algorithms with no manual correction; teams were allowed to provide additional manually corrected segmentations (analyzed separately). The teams were asked to provide segmentation masks for each nodule at both time points. From these masks, the volume of the nodule was estimated at each time point, and the change in volume (absolute and percent change) across time points was estimated as well. We used the concordance correlation coefficient (CCC) to compare the similarity of the computed nodule volume changes (absolute and percent change) across algorithms. We used a logistic regression model on the change in volume (absolute and percent change) of the nodules to predict malignancy status, and the area under the receiver operating characteristic curve (AUROC) and confidence intervals were reported. Because nodule size was expected to have a substantial effect on segmentation variability, the analysis of change in volumes was stratified by lesion size, where lesions were grouped into those with a longest diameter of <8 mm and those with a longest diameter of ≥8 mm.
RESULTS We found that segmentation of the nodules showed substantial variability across algorithms, with the CCC ranging from 0.56 to 0.95 for absolute change in volume (0.15 to 0.86 for percent change in volume) across the nodules. When examining nodules by longest diameter, the CCC was higher for large nodules, ranging from 0.54 to 0.93 among the algorithms (0.3 to 0.95 for percent change in volume), than for smaller nodules, which had a range of -0.0038 to 0.69 (-0.039 to 0.92 for percent change in volume). The malignancy prediction results were fairly consistent across the institutions: the AUC using absolute change in volume ranged from 0.65 to 0.89 (0.64 to 0.86 for percent change in volume) over the entire nodule size range. Prediction improved for large nodules (≥8 mm), with AUCs of 0.75 to 0.90 (0.74 to 0.92 for percent change in volume), compared with smaller nodules (<8 mm), with AUCs of 0.57 to 0.78 (0.59 to 0.77 for percent change in volume). CONCLUSIONS We found higher concordance in size measurements across algorithms for larger nodules (≥8 mm) than for smaller ones (<8 mm). The change in nodule volume (absolute and percent change) was a consistent predictor of malignancy across institutions despite the use of different segmentation algorithms. Using volume change estimates without manual corrections showed slightly lower predictability (for two teams).
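The agreement measure used above, the concordance correlation coefficient, is Lin's CCC; a minimal sketch (population-variance form, with invented paired measurements):

```python
def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired
    measurement series: 2*cov / (var_x + var_y + (mean_x - mean_y)^2).
    Unlike Pearson's r, it penalizes both scale and location shifts."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

print(lin_ccc([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0 -- perfect agreement
print(lin_ccc([1, 2, 3, 4], [2, 3, 4, 5]))  # < 1: constant offset, though Pearson's r is still 1
```

This is why two algorithms can correlate perfectly yet show CCC well below 1 if one systematically over-measures volume.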
|
43
|
Li S, Wei J, Chan HP, Helvie MA, Roubidoux MA, Lu Y, Zhou C, Hadjiiski LM, Samala RK. Computer-aided assessment of breast density: comparison of supervised deep learning and feature-based statistical learning. Phys Med Biol 2018; 63:025005. [PMID: 29210358 DOI: 10.1088/1361-6560/aa9f87]
Abstract
Breast density is one of the most significant factors associated with breast cancer risk. In this study, our purpose was to develop a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammograms (DMs). The input 'for processing' DMs were first log-transformed, enhanced by a multi-resolution preprocessing scheme, and subsampled from a pixel size of 100 µm × 100 µm to 800 µm × 800 µm. A deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD) by using a domain adaptation resampling method. The PD was estimated as the ratio of the dense area to the breast area based on the PMD. The DCNN approach was compared to a feature-based statistical learning approach, in which gray level, texture, and morphological features were extracted and a least absolute shrinkage and selection operator (LASSO) was used to combine the features into a feature-based PMD. With approval of the Institutional Review Board, we retrospectively collected a training set of 478 DMs and an independent test set of 183 DMs from patient files in our institution. Two experienced Mammography Quality Standards Act (MQSA) radiologists interactively segmented PD as the reference standard. Ten-fold cross-validation was used for model selection and evaluation with the training set. With cross-validation, the DCNN obtained a Dice's coefficient (DC) of 0.79 ± 0.13 and Pearson's correlation (r) of 0.97, whereas feature-based learning obtained DC = 0.72 ± 0.18 and r = 0.85. For the independent test set, the DCNN achieved DC = 0.76 ± 0.09 and r = 0.94, while feature-based learning achieved DC = 0.62 ± 0.21 and r = 0.75. Our DCNN approach was significantly better and more robust than the feature-based learning approach for automated PD estimation on DMs, demonstrating its potential use for automated density reporting as well as for model-based risk prediction.
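The two summary quantities above, PD from a probability map and Dice's coefficient against a reference mask, can be sketched on a toy image. The 0.5 threshold, the flattened 4×4 "mammogram", and the masks are our own assumptions, not the paper's parameters.

```python
# Flattened toy image: pmd holds per-pixel density probabilities,
# breast marks pixels inside the breast, reference is the
# radiologist-segmented dense region.

def percent_density(pmd, breast_mask, thresh=0.5):
    """PD = dense area / breast area, with density decided by
    thresholding the probability map inside the breast."""
    dense = sum(1 for p, b in zip(pmd, breast_mask) if b and p >= thresh)
    return 100.0 * dense / sum(breast_mask)

def dice(a, b):
    """Dice's coefficient between two binary masks."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * inter / (sum(a) + sum(b))

pmd       = [0.9, 0.8, 0.2, 0.1,  0.7, 0.6, 0.3, 0.1,
             0.2, 0.1, 0.1, 0.0,  0.0, 0.0, 0.0, 0.0]
breast    = [1, 1, 1, 1,  1, 1, 1, 1,  1, 1, 1, 1,  0, 0, 0, 0]
reference = [1, 1, 1, 0,  1, 1, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
predicted = [1 if p >= 0.5 else 0 for p in pmd]

print(round(percent_density(pmd, breast), 1))  # 33.3 (4 dense of 12 breast pixels)
print(round(dice(predicted, reference), 2))    # 0.89
```

In the paper the PMD comes from the DCNN (or the LASSO-combined features); only the thresholding and ratio arithmetic are shown here.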
|
44
|
Zheng J, Fessler JA, Chan HP. Detector Blur and Correlated Noise Modeling for Digital Breast Tomosynthesis Reconstruction. IEEE Trans Med Imaging 2018; 37:116-127. [PMID: 28767366 PMCID: PMC5772655 DOI: 10.1109/tmi.2017.2732824]
Abstract
This paper describes a new image reconstruction method for digital breast tomosynthesis (DBT). The new method incorporates detector blur into the forward model. The detector blur in DBT causes correlation in the measurement noise. By making a few approximations that are reasonable for breast imaging, we formulated a regularized quadratic optimization problem with a data-fit term that incorporates models for detector blur and correlated noise (DBCN). We derived a computationally efficient separable quadratic surrogate (SQS) algorithm to solve the optimization problem that has a non-diagonal noise covariance matrix. We evaluated the SQS-DBCN method by reconstructing DBT scans of breast phantoms and human subjects. The contrast-to-noise ratio and sharpness of microcalcifications were analyzed and compared with those by the simultaneous algebraic reconstruction technique. The quality of soft tissue lesions and parenchymal patterns was examined. The results demonstrate the potential to improve the image quality of reconstructed DBT images by incorporating the system physics model. This paper is a first step toward model-based iterative reconstruction for DBT.
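The separable quadratic surrogate idea can be shown on a generic least-squares cost 0.5·||Ax − b||² with identity noise covariance and no regularizer; this is a deliberately simplified stand-in for the paper's DBCN model, and the 2×2 system is invented. SQS replaces the coupled Hessian AᵀA with a diagonal majorizer so every voxel updates independently.

```python
# Minimal SQS iteration for 0.5*||Ax - b||^2.
A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [5.0, 10.0]
x = [0.0, 0.0]

# Standard diagonal majorizer: D_jj = sum_i |a_ij| * (sum_k |a_ik|),
# i.e. the diagonal of |A|^T |A| 1, which dominates A^T A.
row_sums = [sum(abs(v) for v in row) for row in A]
D = [sum(abs(A[i][j]) * row_sums[i] for i in range(2)) for j in range(2)]

for _ in range(200):
    r = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]  # residual Ax - b
    g = [sum(A[i][j] * r[i] for i in range(2)) for j in range(2)]          # gradient A^T r
    x = [x[j] - g[j] / D[j] for j in range(2)]                             # separable update

print(x)  # converges to the least-squares solution [1.0, 3.0]
```

The paper's non-diagonal noise covariance enters the data-fit term (changing the residual weighting), but the separable-update structure is the same.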
|
45
|
Samala RK, Chan HP, Hadjiiski LM, Helvie MA, Cha KH, Richter CD. Multi-task transfer learning deep convolutional neural network: application to computer-aided diagnosis of breast cancer on mammograms. Phys Med Biol 2017; 62:8894-8908. [PMID: 29035873 PMCID: PMC5859950 DOI: 10.1088/1361-6560/aa93d4]
Abstract
Transfer learning in deep convolutional neural networks (DCNNs) is an important step in its application to medical imaging tasks. We propose a multi-task transfer learning DCNN with the aim of translating the 'knowledge' learned from non-medical images to medical diagnostic tasks through supervised training and increasing the generalization capabilities of DCNNs by simultaneously learning auxiliary tasks. We studied this approach in an important application: classification of malignant and benign breast masses. With Institutional Review Board (IRB) approval, digitized screen-film mammograms (SFMs) and digital mammograms (DMs) were collected from our patient files and additional SFMs were obtained from the Digital Database for Screening Mammography. The data set consisted of 2242 views with 2454 masses (1057 malignant, 1397 benign). In single-task transfer learning, the DCNN was trained and tested on SFMs. In multi-task transfer learning, SFMs and DMs were used to train the DCNN, which was then tested on SFMs. N-fold cross-validation with the training set was used for training and parameter optimization. On the independent test set, the multi-task transfer learning DCNN was found to have significantly (p = 0.007) higher performance compared to the single-task transfer learning DCNN. This study demonstrates that multi-task transfer learning may be an effective approach for training DCNN in medical imaging applications when training samples from a single modality are limited.
|
46
|
Alvarez R, Seeley R, Helvie M, Varban O, Rizk N, White M, Shabrokh E, Zhou C, Chan HP. Breast Density Following Bariatric Surgery: Is BI-RADS the Answer? Surg Obes Relat Dis 2017. [DOI: 10.1016/j.soard.2017.09.343]
|
47
|
Lu Y, Chan HP, Wei J, Hadjiiski LM, Samala RK. Improving image quality for digital breast tomosynthesis: an automated detection and diffusion-based method for metal artifact reduction. Phys Med Biol 2017; 62:7765-7783. [PMID: 28832336 DOI: 10.1088/1361-6560/aa8803]
Abstract
In digital breast tomosynthesis (DBT), high-attenuation metallic clips marking a previous biopsy site in the breast cause errors in the estimation of attenuation along the ray paths intersecting the markers during reconstruction, resulting in interplane and in-plane artifacts that obscure the visibility of subtle lesions. We propose a new metal artifact reduction (MAR) method to improve image quality. Our method uses automatic detection and segmentation to generate a marker location map for each projection view (PV). A voting technique based on the geometric correlation among different PVs is designed to reduce false positives (FPs) and to label the pixels on the PVs and the voxels in the imaged volume that represent the location and shape of the markers. An iterative diffusion method replaces the labeled pixels on the PVs with tissue intensity estimated from the neighboring regions while preserving the original pixel values in those regions. The inpainted PVs are then used for DBT reconstruction. The markers are repainted on the reconstructed DBT slices for the radiologists' information. The MAR method is independent of reconstruction technique and acquisition geometry. For the training set, the method achieved a 100% success rate with one FP in 19 views. For the test set, the success rate by view was 97.2% for core biopsy microclips and 66.7% for clusters of large post-lumpectomy markers, with a total of 10 FPs in 58 views. All FPs were large, dense, benign calcifications that would also have generated artifacts had they not been corrected by MAR. For the views with successful detection, the metal artifacts were reduced to a level that was not visually apparent in the reconstructed slices, and the visibility of breast lesions obscured by the reconstruction artifacts from the metallic markers was restored.
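The iterative diffusion step can be sketched in one dimension: labeled (marker) pixels are repeatedly replaced by the average of their neighbors until the values settle, while all unlabeled pixels keep their original intensities. The 1-D "row", the marker values, and the iteration count are toy assumptions; the paper applies this in 2-D on each PV.

```python
def diffuse_inpaint(row, labeled, n_iter=500):
    """Jacobi-style diffusion: only the labeled pixels are updated,
    each to the mean of its two neighbors; the rest act as fixed
    boundary conditions, so the result interpolates the tissue."""
    row = row[:]
    for _ in range(n_iter):
        new = row[:]
        for i in labeled:
            new[i] = 0.5 * (row[i - 1] + row[i + 1])
        row = new
    return row

tissue = [10.0, 11.0, 12.0, 99.0, 99.0, 15.0, 16.0]  # 99 = metal marker pixels
inpainted = diffuse_inpaint(tissue, labeled=[3, 4])
print([round(v, 2) for v in inpainted])
# [10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0] -- the gap is filled smoothly
```

At convergence the labeled pixels satisfy the discrete Laplace equation, which is exactly the smooth interpolation of the surrounding tissue intensities that the reconstruction needs in place of the metal.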
|
48
|
Garapati SS, Hadjiiski L, Cha KH, Chan HP, Caoili EM, Cohan RH, Weizer A, Alva A, Paramagul C, Wei J, Zhou C. Urinary bladder cancer staging in CT urography using machine learning. Med Phys 2017; 44:5814-5823. [PMID: 28786480 DOI: 10.1002/mp.12510]
Abstract
PURPOSE To evaluate the feasibility of using an objective computer-aided system to assess bladder cancer stage in CT urography (CTU). MATERIALS AND METHODS A dataset consisting of 84 bladder cancer lesions from 76 CTU cases was used to develop the computerized system for bladder cancer staging based on machine learning approaches. The cases were grouped into two classes based on pathological stage, ≥T2 or <T2, which is the decision threshold for neoadjuvant chemotherapy treatment clinically. There were 43 cancers below stage T2 and 41 cancers at stage T2 or above. All 84 lesions were automatically segmented using our previously developed auto-initialized cascaded level sets (AI-CALS) method, and morphological and texture features were extracted. The features were divided into subspaces of morphological features only, texture features only, and a combined set of both. The dataset was split into Set 1 and Set 2 for two-fold cross-validation. Stepwise feature selection was used to select the most effective features. A linear discriminant analysis (LDA), a neural network (NN), a support vector machine (SVM), and a random forest (RAF) classifier were used to combine the features into a single score. The classification accuracy of the four classifiers was compared using the area under the receiver operating characteristic (ROC) curve (Az). RESULTS Based on the texture features only, the LDA classifier achieved a test Az of 0.91 on Set 1 and 0.88 on Set 2. The test Az of the NN classifier was 0.89 for Set 1 and 0.92 for Set 2. The SVM classifier achieved a test Az of 0.91 on Set 1 and 0.89 on Set 2. The test Az of the RAF classifier was 0.89 for Set 1 and 0.97 for Set 2. The morphological features alone, the texture features alone, and the combined feature set achieved comparable classification performance.
CONCLUSION The predictive model developed in this study shows promise as a classification tool for stratifying bladder cancer into two staging categories: greater than or equal to stage T2 and below stage T2.
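The Az figures above are areas under the ROC curve, which for a single score equal the Mann-Whitney U statistic: the probability that a random ≥T2 case scores higher than a random <T2 case. A minimal sketch with invented classifier scores:

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: fraction of
    (positive, negative) pairs the classifier orders correctly,
    counting ties as 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

ge_t2 = [0.9, 0.8, 0.7, 0.4]  # hypothetical scores, stage >= T2 cases
lt_t2 = [0.6, 0.3, 0.2, 0.1]  # hypothetical scores, stage < T2 cases
print(auc(ge_t2, lt_t2))  # 0.9375: one pair (0.4 vs 0.6) is mis-ordered
```

This pairwise form makes clear why AUC is insensitive to any monotone rescaling of the LDA, NN, SVM, or RAF output scores.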
|
49
|
Chan S, Chan HP, Corney C, Scuderi C, Selvalogan N, Pelecanos A, Ratanjee S. Phosphate binder use in patients undergoing centre-based haemodialysis within the Metro North Kidney Health Service. Intern Med J 2017. [DOI: 10.1111/imj.4_13461]
|
50
|
Zheng J, Fessler JA, Chan HP. Segmented separable footprint projector for digital breast tomosynthesis and its application for subpixel reconstruction. Med Phys 2017; 44:986-1001. [PMID: 28058719 DOI: 10.1002/mp.12092]
Abstract
PURPOSE Digital forward and back projectors play a significant role in iterative image reconstruction, and the accuracy of the projector affects the quality of the reconstructed images. Digital breast tomosynthesis (DBT) often uses the ray-tracing (RT) projector, which ignores the finite detector element size. This paper proposes a modified version of the separable footprint (SF) projector, called the segmented separable footprint (SG) projector, that efficiently calculates the mean value of the Radon transform over each detector element. The SG projector is specifically designed for DBT reconstruction because of the large height-to-width ratio of the voxels generally used in DBT. This study evaluates the effectiveness of the SG projector in reducing projection error and improving DBT reconstruction quality. METHODS We quantitatively compared the projection error of the RT and SG projectors at different locations and their performance in regular and subpixel DBT reconstruction. Subpixel reconstructions used finer voxels in the imaged volume than the detector pixel size. Subpixel reconstruction with the RT projector uses interpolated projection views as input to provide adequate coverage of the finer voxel grid with the traced rays; subpixel reconstruction with the SG projector, however, uses the measured projection views without interpolation. We simulated DBT projections of a test phantom using CatSim (GE Global Research, Niskayuna, NY) under idealized imaging conditions without noise and blur, to analyze the effects of the projectors and subpixel reconstruction without other image-degrading factors. The phantom contained an array of horizontal and vertical line pair patterns (1 to 9.5 line pairs/mm) and pairs of closely spaced spheres (diameters 0.053 to 0.5 mm) embedded at the mid-plane of a 5-cm-thick breast tissue-equivalent uniform volume.
The images were reconstructed with the regular simultaneous algebraic reconstruction technique (SART) and subpixel SART using the different projectors. The resolution and contrast of the test objects in the reconstructed images and the computation times were compared under different reconstruction conditions. RESULTS The SG projector reduced the projection error by 1 to 2 orders of magnitude at most locations; in the worst case, it still reduced the projection error by about 50%. In the DBT reconstructed slices parallel to the detector plane, the SG projector not only increased the contrast of the line pairs and spheres but also produced smoother and more continuous reconstructed images, whereas the discrete and sparse nature of the RT projector caused artifacts appearing as patterned noise. For subpixel reconstruction, the SG projector significantly increased object contrast and computation speed, especially for high subpixel ratios, compared with the RT projector implemented with an accelerated Siddon's algorithm. The difference in depth resolution among the projectors was negligible under the conditions studied. Our results also demonstrated that subpixel reconstruction can improve the spatial resolution of the reconstructed images and can exceed the Nyquist limit of the detector under some conditions. CONCLUSIONS The SG projector was more accurate and faster than the RT projector, substantially reducing computation time and improving image quality for the tomosynthesized images with and without subpixel reconstruction.
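Why averaging over the detector element matters can be seen in one dimension. A single ray through the element center (the RT approach) samples the projection profile at one point, while an SF/SG-style projector uses the mean over the element; for a profile with a sharp edge the two differ substantially. The edge profile and element geometry below are toy assumptions, and the mean is taken numerically rather than with the SG projector's analytic footprints.

```python
def mean_over_element(f, lo, hi, n=1000):
    """Numerically average a projection profile f over a detector
    element [lo, hi] (midpoint rule with n subsamples)."""
    w = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * w) for i in range(n)) * w / (hi - lo)

def edge(t):
    """Toy projection profile with a sharp edge at t = 0.25."""
    return 1.0 if t >= 0.25 else 0.0

midpoint_sample = edge(0.5)                       # single ray through element center
element_mean = mean_over_element(edge, 0.0, 1.0)  # mean over the whole element
print(midpoint_sample, element_mean)  # 1.0 vs 0.75 -- the single ray misses the partial coverage
```

The RT projector effectively commits this midpoint error at every detector element, which is the source of the patterned-noise artifacts the paper reports; modeling the element-mean removes it.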
|