1. Zeevi T, Leapman MS, Sprenkle PC, Venkataraman R, Staib LH, Onofrey JA. Reliable Prostate Cancer Risk Mapping From MRI Using Targeted and Systematic Core Needle Biopsy Histopathology. IEEE Trans Biomed Eng 2024; 71:1084-1091. PMID: 37874731; PMCID: PMC10901528; DOI: 10.1109/tbme.2023.3326799.
Abstract
OBJECTIVE To compute a dense prostate cancer risk map for the individual patient post-biopsy from magnetic resonance imaging (MRI) and to provide a more reliable evaluation of its fitness in prostate regions that were not identified as suspicious for cancer by a human reader in pre- and intra-biopsy imaging analysis. METHODS Low-level pre-biopsy MRI biomarkers from targeted and non-targeted biopsy locations were extracted and statistically tested for representativeness against biomarkers from non-biopsied prostate regions. A probabilistic machine learning classifier was optimized to map biomarkers to their core-level pathology, followed by extrapolation of pathology scores to non-biopsied prostate regions. Goodness-of-fit was assessed at targeted and non-targeted biopsy locations for the post-biopsy individual patient. RESULTS Our experiments showed high predictability of imaging biomarkers in differentiating histopathology scores in thousands of non-targeted core-biopsy locations (ROC-AUCs: 0.85-0.88), but also high variability between patients (median ROC-AUC [IQR]: 0.81-0.89 [0.29-0.40]). CONCLUSION The sparseness of prostate biopsy data makes the validation of whole-gland risk mapping a non-trivial task. Previous studies i) focused on targeted biopsy locations, although biopsy specimens drawn from systematically scattered locations across the prostate constitute a more representative sample of non-biopsied regions, and ii) estimated prediction power across predicted instances (e.g., biopsy specimens) with no patient distinction, which may lead to unreliable estimation of model fitness for the individual patient due to variation between patients in instance count, imaging characteristics, and pathologies. SIGNIFICANCE This study proposes personalized whole-gland prostate cancer risk mapping post-biopsy to allow clinicians to better stage disease and personalize focal therapy treatment plans.
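The distinction the authors draw between instance-pooled and per-patient evaluation can be made concrete with a small sketch. Below, scikit-learn's roc_auc_score is computed once over all biopsy cores pooled together and once per patient; all data and variable names are hypothetical and only illustrate why the two summaries can diverge.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical core-level data: one row per biopsy core.
rng = np.random.default_rng(0)
patient_id = np.repeat(np.arange(20), 12)            # 20 patients, 12 cores each
y_true = rng.integers(0, 2, size=patient_id.size)    # core-level pathology (0/1)
y_score = np.clip(y_true * 0.3 + rng.normal(0.5, 0.25, patient_id.size), 0, 1)

# Pooled AUC: all cores treated as one population (no patient distinction).
pooled_auc = roc_auc_score(y_true, y_score)

# Per-patient AUC: goodness-of-fit evaluated for each individual, then summarized.
per_patient = []
for pid in np.unique(patient_id):
    m = patient_id == pid
    if len(np.unique(y_true[m])) == 2:               # AUC needs both classes present
        per_patient.append(roc_auc_score(y_true[m], y_score[m]))

print(f"pooled AUC: {pooled_auc:.2f}")
print(f"median per-patient AUC: {np.median(per_patient):.2f} "
      f"(IQR width: {np.subtract(*np.percentile(per_patient, [75, 25])):.2f})")
```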
2. Gross M, Huber S, Arora S, Ze'evi T, Haider SP, Kucukkaya AS, Iseke S, Kuhn TN, Gebauer B, Michallek F, Dewey M, Vilgrain V, Sartoris R, Ronot M, Jaffe A, Strazzabosco M, Chapiro J, Onofrey JA. Automated MRI liver segmentation for anatomical segmentation, liver volumetry, and the extraction of radiomics. Eur Radiol 2024. PMID: 38217704; DOI: 10.1007/s00330-023-10495-5.
Abstract
OBJECTIVES To develop and evaluate a deep convolutional neural network (DCNN) for automated liver segmentation, volumetry, and radiomic feature extraction on contrast-enhanced portal venous phase magnetic resonance imaging (MRI). MATERIALS AND METHODS This retrospective study included hepatocellular carcinoma patients from an institutional database with portal venous MRI. After manual segmentation, the data was randomly split into independent training, validation, and internal testing sets. From a collaborating institution, de-identified scans were used for external testing. The public LiverHccSeg dataset was used for further external validation. A 3D DCNN was trained to automatically segment the liver. Segmentation accuracy was quantified by the Dice similarity coefficient (DSC) with respect to manual segmentation. A Mann-Whitney U test was used to compare the internal and external test sets. Agreement of volumetry and radiomic features was assessed using the intraclass correlation coefficient (ICC). RESULTS In total, 470 patients met the inclusion criteria (63.9±8.2 years; 376 males) and 20 patients were used for external validation (41±12 years; 13 males). DSC segmentation accuracy of the DCNN was similarly high between the internal (0.97±0.01) and external (0.96±0.03) test sets (p=0.28) and demonstrated robust segmentation performance on public testing (0.93±0.03). Agreement of liver volumetry was satisfactory in the internal (ICC, 0.99), external (ICC, 0.97), and public (ICC, 0.85) test sets. Radiomic features demonstrated excellent agreement in the internal (mean ICC, 0.98±0.04), external (mean ICC, 0.94±0.10), and public (mean ICC, 0.91±0.09) datasets. CONCLUSION Automated liver segmentation yields robust and generalizable segmentation performance on MRI data and can be used for volumetry and radiomic feature extraction. CLINICAL RELEVANCE STATEMENT Liver volumetry, anatomic localization, and extraction of quantitative imaging biomarkers require accurate segmentation, but manual segmentation is time-consuming. A deep convolutional neural network demonstrates fast and accurate segmentation performance on T1-weighted portal venous MRI. KEY POINTS • This deep convolutional neural network yields robust and generalizable liver segmentation performance on internal, external, and public testing data. • Automated liver volumetry demonstrated excellent agreement with manual volumetry. • Automated liver segmentations can be used for robust and reproducible radiomic feature extraction.
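The segmentation accuracy reported above is the Dice similarity coefficient (DSC) between automated and manual masks. A minimal NumPy sketch of the metric on binary 3D masks (the toy masks below are illustrative, not study data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)

# Toy example: two overlapping 3D masks.
auto_mask = np.zeros((64, 64, 32), dtype=bool)
manual_mask = np.zeros_like(auto_mask)
auto_mask[10:50, 10:50, 5:25] = True
manual_mask[12:52, 12:52, 5:25] = True
print(f"DSC = {dice_coefficient(auto_mask, manual_mask):.3f}")
```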
Affiliation(s)
- Moritz Gross: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA; Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Steffen Huber: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Sandeep Arora: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA
- Tal Ze'evi: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Stefan P Haider: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA; Department of Otorhinolaryngology, University Hospital of Ludwig Maximilians Universität München, Munich, Germany
- Ahmet S Kucukkaya: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA; Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Simon Iseke: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA; Department of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, Rostock University Medical Center, Rostock, Germany
- Tom Niklas Kuhn: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA; Department of Diagnostic and Interventional Radiology, University Duesseldorf, Duesseldorf, Germany
- Bernhard Gebauer: Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Florian Michallek: Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Marc Dewey: Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Valérie Vilgrain: Université Paris Cité, Paris, France; Department of Radiology, Hôpital Beaujon, AP-HP.Nord, Clichy, France
- Riccardo Sartoris: Université Paris Cité, Paris, France; Department of Radiology, Hôpital Beaujon, AP-HP.Nord, Clichy, France
- Maxime Ronot: Université Paris Cité, Paris, France; Department of Radiology, Hôpital Beaujon, AP-HP.Nord, Clichy, France
- Ariel Jaffe: Department of Internal Medicine, Yale University School of Medicine, New Haven, CT, USA
- Mario Strazzabosco: Department of Internal Medicine, Yale University School of Medicine, New Haven, CT, USA
- Julius Chapiro: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- John A Onofrey: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Urology, Yale University School of Medicine, New Haven, CT, USA
3. Gross M, Arora S, Huber S, Kücükkaya AS, Onofrey JA. LiverHccSeg: A publicly available multiphasic MRI dataset with liver and HCC tumor segmentations and inter-rater agreement analysis. Data Brief 2023; 51:109662. PMID: 37869619; PMCID: PMC10587725; DOI: 10.1016/j.dib.2023.109662.
Abstract
Accurate segmentation of liver and tumor regions in medical imaging is crucial for the diagnosis, treatment, and monitoring of hepatocellular carcinoma (HCC) patients. However, manual segmentation is time-consuming and subject to inter- and intra-rater variability. Therefore, automated methods are necessary but require rigorous validation of high-quality segmentations based on a consensus of raters. To address the need for reliable and comprehensive data in this domain, we present LiverHccSeg, a dataset that provides liver and tumor segmentations on multiphasic contrast-enhanced magnetic resonance imaging from two board-approved abdominal radiologists, along with an analysis of inter-rater agreement. LiverHccSeg provides a curated resource for liver and HCC tumor segmentation tasks. The dataset includes a scientific reading, co-registered contrast-enhanced multiphasic magnetic resonance imaging (MRI) scans with corresponding manual segmentations by two board-approved abdominal radiologists, and relevant metadata, offering researchers a comprehensive foundation for external validation and benchmarking of liver and tumor segmentation algorithms. The dataset also provides an analysis of the agreement between the two sets of liver and tumor segmentations. Through the calculation of appropriate segmentation metrics, we provide insights into the consistency and variability in liver and tumor segmentations among the radiologists. A total of 17 cases were included for liver segmentation and 14 cases for HCC tumor segmentation. Liver segmentations demonstrated high segmentation agreement (mean Dice, 0.95 ± 0.01 [standard deviation]), and HCC tumor segmentations showed higher variation (mean Dice, 0.85 ± 0.16 [standard deviation]). The applications of LiverHccSeg can be manifold, ranging from testing machine learning algorithms on public external data to radiomic feature analyses. Leveraging the inter-rater agreement analysis within the dataset, researchers can investigate the impact of variability on segmentation performance and explore methods to enhance the accuracy and robustness of liver and tumor segmentation algorithms in HCC patients. By making this dataset publicly available, LiverHccSeg aims to foster collaborations, facilitate innovative solutions, and ultimately improve patient outcomes in the diagnosis and treatment of HCC.
Affiliation(s)
- Moritz Gross: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, United States of America; Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Sandeep Arora: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, United States of America
- Steffen Huber: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, United States of America
- Ahmet S. Kücükkaya: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, United States of America; Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- John A. Onofrey: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, CT, United States of America; Department of Urology, Yale University School of Medicine, New Haven, CT, United States of America; Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America
4. Zeng T, Zhang J, Lieffrig EV, Cai Z, Chen F, You C, Naganawa M, Lu Y, Onofrey JA. Fast Reconstruction for Deep Learning PET Head Motion Correction. Med Image Comput Comput Assist Interv 2023; 14229:710-719. PMID: 38174207; PMCID: PMC10758999; DOI: 10.1007/978-3-031-43999-5_67.
Abstract
Head motion correction is an essential component of brain PET imaging, in which even motion of small magnitude can greatly degrade image quality and introduce artifacts. Building upon previous work, we propose a new head motion correction framework taking fast reconstructions as input. The main characteristics of the proposed method are: (i) the adoption of a high-resolution short-frame fast reconstruction workflow; (ii) the development of a novel encoder for PET data representation extraction; and (iii) the implementation of data augmentation techniques. Ablation studies are conducted to assess the individual contributions of each of these design choices. Furthermore, multi-subject studies are conducted on an 18F-FPEB dataset, and the method performance is qualitatively and quantitatively evaluated by MOLAR reconstruction study and corresponding brain Region of Interest (ROI) Standard Uptake Values (SUV) evaluation. Additionally, we also compared our method with a conventional intensity-based registration method. Our results demonstrate that the proposed method outperforms other methods on all subjects, and can accurately estimate motion for subjects out of the training set. All code is publicly available on GitHub: https://github.com/OnofreyLab/dl-hmc_fast_recon_miccai2023.
Affiliation(s)
- Tianyi Zeng: Department of Radiology & Biomedical Imaging
- Jiazhen Zhang: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Fuyao Chen: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Chenyu You: Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Yihuan Lu: United Imaging Healthcare, Shanghai, China
- John A Onofrey: Department of Radiology & Biomedical Imaging; Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Urology, Yale University, New Haven, CT, USA
5. Cai Z, Zeng T, Lieffrig EV, Zhang J, Chen F, Toyonaga T, You C, Xin J, Zheng N, Lu Y, Duncan JS, Onofrey JA. Cross-Attention for Improved Motion Correction in Brain PET. Mach Learn Clin Neuroimaging 2023; 14312:34-45. PMID: 38174216; PMCID: PMC10758996; DOI: 10.1007/978-3-031-44858-4_4.
Abstract
Head movement during long scan sessions degrades the quality of reconstruction in positron emission tomography (PET) and introduces artifacts, which limits clinical diagnosis and treatment. Recent deep learning-based motion correction work utilized raw PET list-mode data and hardware motion tracking (HMT) to learn head motion in a supervised manner. However, motion prediction results were not robust to testing subjects outside the training data domain. In this paper, we integrate a cross-attention mechanism into the supervised deep learning network to improve motion correction across test subjects. Specifically, cross-attention learns the spatial correspondence between the reference images and moving images to explicitly focus the model on the most correlative inherent information: the head region for motion correction. We validate our approach on brain PET data from two different scanners: HRRT without time of flight (ToF) and mCT with ToF. Compared with traditional and deep learning benchmarks, our network improved the performance of motion correction by 58% and 26% in translation and rotation, respectively, in multi-subject testing in HRRT studies. In mCT studies, our approach improved performance by 66% and 64% for translation and rotation, respectively. Our results demonstrate that cross-attention has the potential to improve the quality of brain PET image reconstruction without the dependence on HMT. All code will be released on GitHub: https://github.com/OnofreyLab/dl_hmc_attention_mlcn2023.
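As a rough illustration of the mechanism described above, the sketch below wires a standard scaled dot-product cross-attention layer so that queries come from the moving-image features and keys/values from the reference-image features. It is a generic PyTorch sketch of the idea, not the published network (see the repository linked above), and all shapes are illustrative.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Cross-attention between two feature streams: queries from the moving-image
    features, keys/values from the reference-image features."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, moving_feat: torch.Tensor, reference_feat: torch.Tensor):
        # moving_feat, reference_feat: (batch, tokens, dim) flattened spatial features
        attended, _ = self.attn(query=moving_feat, key=reference_feat, value=reference_feat)
        return self.norm(moving_feat + attended)   # residual connection

# Toy usage: an 8x8x8 feature grid flattened to 512 tokens of dimension 64.
moving = torch.randn(2, 512, 64)
reference = torch.randn(2, 512, 64)
out = CrossAttention(dim=64)(moving, reference)
print(out.shape)  # torch.Size([2, 512, 64])
```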
Affiliation(s)
- Zhuotong Cai: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, China; Department of Radiology & Biomedical Imaging, New Haven, CT, USA; Department of Biomedical Engineering, New Haven, CT, USA
- Tianyi Zeng: Department of Radiology & Biomedical Imaging, New Haven, CT, USA
- Jiazhen Zhang: Department of Biomedical Engineering, New Haven, CT, USA
- Fuyao Chen: Department of Biomedical Engineering, New Haven, CT, USA
- Takuya Toyonaga: Department of Radiology & Biomedical Imaging, New Haven, CT, USA
- Chenyu You: Department of Electrical Engineering, New Haven, CT, USA
- Jingmin Xin: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, China
- Nanning Zheng: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, China
- Yihuan Lu: United Imaging Healthcare, Shanghai, China
- James S Duncan: Department of Radiology & Biomedical Imaging, New Haven, CT, USA; Department of Biomedical Engineering, New Haven, CT, USA; Department of Electrical Engineering, New Haven, CT, USA
- John A Onofrey: Department of Radiology & Biomedical Imaging, New Haven, CT, USA; Department of Biomedical Engineering, New Haven, CT, USA; Department of Urology, Yale University, New Haven, CT, USA
6. Tram NK, Chou TH, Janse SA, Bobbey AJ, Audino AN, Onofrey JA, Stacy MR. Deep learning of image-derived measures of body composition in pediatric, adolescent, and young adult lymphoma: association with late treatment effects. Eur Radiol 2023; 33:6599-6607. PMID: 36988714; DOI: 10.1007/s00330-023-09587-z.
Abstract
OBJECTIVES The objective of this study was to translate a deep learning (DL) approach for semiautomated analysis of body composition (BC) measures from standard of care CT images to investigate the prognostic value of BC in pediatric, adolescent, and young adult (AYA) patients with lymphoma. METHODS This 10-year retrospective, single-site study of 110 pediatric and AYA patients with lymphoma involved manual segmentation of fat and muscle tissue from 260 CT imaging datasets obtained as part of routine imaging at initial staging and first therapeutic follow-up. A DL model was trained to perform semiautomated image segmentation of adipose and muscle tissue. The association between BC measures and the occurrence of 3-year late effects was evaluated using Cox proportional hazards regression analyses. RESULTS DL-guided measures of BC were in close agreement with those obtained by a human rater, as demonstrated by high Dice scores (≥ 0.95) and correlations (r > 0.99) for each tissue of interest. Cox proportional hazards regression analyses revealed that patients with elevated subcutaneous adipose tissue at baseline and first follow-up, along with patients who possessed lower volumes of skeletal muscle at first follow-up, have increased risk of late effects compared to their peers. CONCLUSIONS DL provides rapid and accurate quantification of image-derived measures of BC that are associated with risk for treatment-related late effects in pediatric and AYA patients with lymphoma. Image-based monitoring of BC measures may enhance future opportunities for personalized medicine for children with lymphoma by identifying patients at the highest risk for late effects of treatment. KEY POINTS • Deep learning-guided CT image analysis of body composition measures achieved high agreement level with manual image analysis. • Pediatric patients with more fat and less muscle during the course of cancer treatment were more likely to experience a serious adverse event compared to their clinical counterparts. • Deep learning of body composition may add value to routine CT imaging by offering real-time monitoring of pediatric, adolescent, and young adults at high risk for late effects of cancer treatment.
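The late-effect analysis above is a Cox proportional hazards regression on image-derived body-composition covariates. A minimal sketch with the lifelines package on synthetic data (the column names, units, and simulated relationship are illustrative only, not study data):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic patient-level table: follow-up time, late-effect event flag, and
# hypothetical image-derived body-composition covariates.
rng = np.random.default_rng(1)
n = 60
fat = rng.normal(1200, 300, n)                       # subcutaneous adipose tissue (cm^3)
muscle = rng.normal(1900, 250, n)                    # skeletal muscle volume (cm^3)
risk = 0.002 * (fat - 1200) - 0.002 * (muscle - 1900)
time = rng.exponential(30 * np.exp(-risk))           # follow-up time (months)
event = (time < 36).astype(int)                      # late effect observed within 3 years
time = np.minimum(time, 36)                          # administrative censoring at 3 years

df = pd.DataFrame({"time_months": time, "late_effect": event,
                   "subcut_fat_cm3": fat, "skeletal_muscle_cm3": muscle})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="late_effect")
cph.print_summary()   # hazard ratios and confidence intervals per covariate
```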
Affiliation(s)
- Nguyen K Tram: Center for Regenerative Medicine, The Research Institute at Nationwide Children's Hospital, 575 Children's Crossroad, WB4133, Columbus, OH, 43215, USA
- Ting-Heng Chou: Center for Regenerative Medicine, The Research Institute at Nationwide Children's Hospital, 575 Children's Crossroad, WB4133, Columbus, OH, 43215, USA
- Sarah A Janse: Center for Biostatistics, The Ohio State University, Columbus, OH, USA
- Adam J Bobbey: Department of Radiology, Nationwide Children's Hospital, Columbus, OH, USA
- Anthony N Audino: Division of Hematology/Oncology/BMT, Department of Pediatrics, The Ohio State University College of Medicine, Columbus, OH, USA
- John A Onofrey: Department of Radiology & Biomedical Imaging, Yale University School of Medicine, New Haven, CT, USA; Department of Urology, Yale University School of Medicine, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Mitchel R Stacy: Center for Regenerative Medicine, The Research Institute at Nationwide Children's Hospital, 575 Children's Crossroad, WB4133, Columbus, OH, 43215, USA; Interdisciplinary Biophysics Graduate Program, The Ohio State University, Columbus, OH, USA; Division of Vascular Diseases and Surgery, Department of Surgery, The Ohio State University College of Medicine, Columbus, OH, USA
7. Chen X, Zhou B, Xie H, Guo X, Zhang J, Duncan JS, Miller EJ, Sinusas AJ, Onofrey JA, Liu C. DuSFE: Dual-Channel Squeeze-Fusion-Excitation co-attention for cross-modality registration of cardiac SPECT and CT. Med Image Anal 2023; 88:102840. PMID: 37216735; PMCID: PMC10524650; DOI: 10.1016/j.media.2023.102840.
Abstract
Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is widely applied for the diagnosis of cardiovascular diseases. Attenuation maps (μ-maps) derived from computed tomography (CT) are utilized for attenuation correction (AC) to improve the diagnostic accuracy of cardiac SPECT. However, in clinical practice, SPECT and CT scans are acquired sequentially, potentially inducing misregistration between the two images and further producing AC artifacts. Conventional intensity-based registration methods show poor performance in the cross-modality registration of SPECT and CT-derived μ-maps since the two imaging modalities might present totally different intensity patterns. Deep learning has shown great potential in medical imaging registration. However, existing deep learning strategies for medical image registration encoded the input images by simply concatenating the feature maps of different convolutional layers, which might not fully extract or fuse the input information. In addition, deep-learning-based cross-modality registration of cardiac SPECT and CT-derived μ-maps has not been investigated before. In this paper, we propose a novel Dual-Channel Squeeze-Fusion-Excitation (DuSFE) co-attention module for the cross-modality rigid registration of cardiac SPECT and CT-derived μ-maps. DuSFE is designed based on the co-attention mechanism of two cross-connected input data streams. The channel-wise or spatial features of SPECT and μ-maps are jointly encoded, fused, and recalibrated in the DuSFE module. DuSFE can be flexibly embedded at multiple convolutional layers to enable gradual feature fusion in different spatial dimensions. Our studies using clinical patient MPI studies demonstrated that the DuSFE-embedded neural network generated significantly lower registration errors and more accurate AC SPECT images than existing methods. We also showed that the DuSFE-embedded network did not over-correct or degrade the registration performance of motion-free cases. The source code of this work is available at https://github.com/XiongchaoChen/DuSFE_CrossRegistration.
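The squeeze-fusion-excitation idea can be sketched as channel-wise co-attention between two streams: squeeze each stream by global pooling, fuse the two descriptors through a small bottleneck, and use the result to gate both streams. The PyTorch sketch below is a generic illustration inspired by squeeze-and-excitation, not the released DuSFE code (see the repository linked above for the actual implementation); all shapes are illustrative.

```python
import torch
import torch.nn as nn

class SqueezeFusionExcitation(nn.Module):
    """Channel-wise co-attention for two feature streams (e.g., SPECT and mu-map):
    squeeze by global pooling, fuse the pooled descriptors, and emit per-channel
    gates that recalibrate both streams."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fuse = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * channels), nn.Sigmoid())

    def forward(self, spect_feat, mu_feat):
        b, c = spect_feat.shape[:2]
        squeezed = torch.cat([self.pool(spect_feat).view(b, c),
                              self.pool(mu_feat).view(b, c)], dim=1)
        gates = self.fuse(squeezed).view(b, 2 * c, 1, 1, 1)
        return spect_feat * gates[:, :c], mu_feat * gates[:, c:]

spect = torch.randn(1, 16, 8, 8, 8)
mu = torch.randn(1, 16, 8, 8, 8)
s_out, m_out = SqueezeFusionExcitation(16)(spect, mu)
print(s_out.shape, m_out.shape)
```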
Affiliation(s)
- Xiongchao Chen: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Bo Zhou: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Huidong Xie: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xueqi Guo: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Jiazhen Zhang: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- James S Duncan: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Edward J Miller: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA; Department of Internal Medicine, Yale University, New Haven, CT, USA
- Albert J Sinusas: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA; Department of Internal Medicine, Yale University, New Haven, CT, USA
- John A Onofrey: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Chi Liu: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
8. Lieffrig EV, Zeng T, Zhang J, Fontaine K, Fang X, Revilla E, Lu Y, Onofrey JA. Multi-Task Deep Learning and Uncertainty Estimation for PET Head Motion Correction. Proc IEEE Int Symp Biomed Imaging 2023. PMID: 38111738; PMCID: PMC10725741; DOI: 10.1109/isbi53787.2023.10230791.
Abstract
Head motion occurring during brain positron emission tomography image acquisition leads to a decrease in image quality and induces quantification errors. We previously introduced a Deep Learning Head Motion Correction (DL-HMC) method based on supervised learning from a gold-standard Polaris Vicra motion tracking device and showed the potential of this method. In this study, we upgrade our network to a multi-task architecture in order to include image appearance prediction in the learning process. This multi-task Deep Learning Head Motion Correction (mtDL-HMC) model was trained on 21 subjects and showed enhanced motion prediction performance compared to our previous DL-HMC method in both quantitative and qualitative evaluations on 5 testing subjects. We also evaluate the trustworthiness of network predictions by performing Monte Carlo Dropout at inference on the testing subjects. We discard the data associated with high motion prediction uncertainty and show that this does not harm the quality of the reconstructed images and can even improve it.
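Monte Carlo Dropout, as used above for uncertainty estimation, amounts to keeping dropout active at inference and summarizing repeated stochastic forward passes. A minimal PyTorch sketch with a toy regression head (the network, feature dimensions, and threshold are illustrative, not the published mtDL-HMC model):

```python
import torch
import torch.nn as nn

# Toy regression head predicting six rigid-motion parameters.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 6))

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Keep dropout active at inference and sample repeated forward passes."""
    model.train()                        # enables dropout layers
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)   # prediction and uncertainty

features = torch.randn(4, 128)           # hypothetical encoded PET features
mean_motion, motion_uncertainty = mc_dropout_predict(model, features)

# Discard predictions whose average uncertainty exceeds a chosen threshold.
keep = motion_uncertainty.mean(dim=1) < 0.5
print(mean_motion[keep].shape)
```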
Affiliation(s)
- Eléonore V Lieffrig: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Tianyi Zeng: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Jiazhen Zhang: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Kathryn Fontaine: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Xi Fang: Department of Psychiatry, Yale University, New Haven, CT, USA
- Yihuan Lu: United Imaging Healthcare, Shanghai, China
- John A Onofrey: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Urology, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
9. Zhong J, Staib LH, Venkataraman R, Onofrey JA. Integrating Prostate Specific Antigen Density Biomarker into Deep Learning Prostate MRI Lesion Segmentation Models. Proc IEEE Int Symp Biomed Imaging 2023. PMID: 38090633; PMCID: PMC10711801; DOI: 10.1109/isbi53787.2023.10230418.
Abstract
Prostate cancer lesion segmentation in multi-parametric magnetic resonance imaging (mpMRI) is crucial for pre-biopsy diagnosis and targeted biopsy guidance. Deep convolutional neural networks have been widely utilized for lesion segmentation. However, these methods fail to achieve a high Dice coefficient because of the large variations in lesion size and location within the gland. To address this problem, we integrate the clinically meaningful prostate specific antigen density (PSAD) biomarker into the deep learning model using feature-wise transformations to condition the features in latent space, and thus control the size of the lesion prediction. We tested our models on a public dataset with 214 annotated mpMRI scans and compared the segmentation performance to a baseline 3D U-Net model. Results demonstrate that integrating the PSAD biomarker significantly improves segmentation performance in both the Dice coefficient and a centroid distance metric.
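Feature-wise conditioning of this kind is commonly implemented as a FiLM-style layer: the scalar biomarker is mapped to per-channel scale and shift parameters that modulate the latent feature maps. The PyTorch sketch below illustrates the general idea; it is not the authors' implementation, and the PSAD values and tensor shapes are made up.

```python
import torch
import torch.nn as nn

class FiLMConditioning(nn.Module):
    """Feature-wise linear modulation: a scalar biomarker (e.g., PSA density) is
    mapped to per-channel scale and shift that condition latent feature maps."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.to_scale_shift = nn.Linear(1, 2 * num_channels)

    def forward(self, feature_map: torch.Tensor, psad: torch.Tensor):
        # feature_map: (batch, C, D, H, W); psad: (batch, 1)
        scale, shift = self.to_scale_shift(psad).chunk(2, dim=1)
        scale = scale.view(*scale.shape, 1, 1, 1)
        shift = shift.view(*shift.shape, 1, 1, 1)
        return feature_map * (1 + scale) + shift

features = torch.randn(2, 32, 8, 16, 16)     # latent features from a 3D U-Net encoder
psad = torch.tensor([[0.12], [0.31]])        # illustrative PSA density values (ng/mL/cc)
conditioned = FiLMConditioning(32)(features, psad)
print(conditioned.shape)   # torch.Size([2, 32, 8, 16, 16])
```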
Affiliation(s)
- Jiayang Zhong: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Lawrence H Staib: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- John A Onofrey: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Urology, Yale University, New Haven, CT, USA
10. Guo X, Wu J, Chen MK, Liu Q, Onofrey JA, Pucar D, Pang Y, Pigg D, Casey ME, Dvornek NC, Liu C. Inter-pass motion correction for whole-body dynamic PET and parametric imaging. IEEE Trans Radiat Plasma Med Sci 2023; 7:344-353. PMID: 37842204; PMCID: PMC10569406; DOI: 10.1109/trpms.2022.3227576.
Abstract
Whole-body dynamic FDG-PET imaging through continuous-bed-motion (CBM) mode multi-pass acquisition protocol is a promising metabolism measurement. However, inter-pass misalignment originating from body movement could degrade parametric quantification. We aim to apply a non-rigid registration method for inter-pass motion correction in whole-body dynamic PET. 27 subjects underwent a 90-min whole-body FDG CBM PET scan on a Biograph mCT (Siemens Healthineers), acquiring 9 over-the-heart single-bed passes and subsequently 19 CBM passes (frames). The inter-pass motion correction was executed using non-rigid image registration with multi-resolution, B-spline free-form deformations. The parametric images were then generated by Patlak analysis. The overlaid Patlak slope Ki and y-intercept Vb images were visualized to qualitatively evaluate motion impact and correction effect. The normalized weighted mean squared Patlak fitting errors (NFE) were compared in the whole body, head, and hypermetabolic regions of interest (ROI). In Ki images, ROI statistics were collected and malignancy discrimination capacity was estimated by the area under the receiver operating characteristic curve (AUC). After the inter-pass motion correction was applied, the spatial misalignment appearance between Ki and Vb images was successfully reduced. Voxel-wise normalized fitting error maps showed global error reduction after motion correction. The NFE in the whole body (p = 0.0013), head (p = 0.0021), and ROIs (p = 0.0377) significantly decreased. The visual performance of each hypermetabolic ROI in Ki images was enhanced, while 3.59% and 3.67% average absolute percentage changes were observed in mean and maximum Ki values, respectively, across all evaluated ROIs. The estimated mean Ki values had substantial changes with motion correction (p = 0.0021). The AUC of both mean Ki and maximum Ki after motion correction increased, possibly suggesting the potential of enhancing oncological discrimination capacity through inter-pass motion correction.
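The Patlak analysis mentioned above reduces, for each voxel or region, to a linear fit of C_T(t)/C_p(t) against the time-normalized integral of the plasma input, whose slope is Ki and whose y-intercept is Vb. A small NumPy sketch with a synthetic input function and tissue curve (all values are illustrative):

```python
import numpy as np

def cumtrapz(y, x):
    """Cumulative trapezoidal integral of y(x), same length as y (starts at 0)."""
    steps = np.diff(x) * (y[1:] + y[:-1]) / 2.0
    return np.concatenate(([0.0], np.cumsum(steps)))

def patlak_fit(t_min, cp, ct, t_start=20.0):
    """ROI-level Patlak analysis: linear fit of C_T/C_p against the normalized
    integral of the plasma input, using frames after t_start (minutes).
    Returns the slope Ki and the y-intercept Vb."""
    x = cumtrapz(cp, t_min) / cp
    y = ct / cp
    late = t_min >= t_start              # Patlak linearity holds at late times
    ki, vb = np.polyfit(x[late], y[late], 1)
    return ki, vb

# Hypothetical frame mid-times, plasma input, and tissue time-activity curve.
t = np.array([1, 3, 5, 10, 20, 30, 45, 60, 75, 90], dtype=float)
cp = 100.0 * np.exp(-0.05 * t) + 5.0
ct = 0.02 * cumtrapz(cp, t) + 0.3 * cp   # ground truth: Ki = 0.02, Vb = 0.3
print("Ki = %.4f /min, Vb = %.3f" % patlak_fit(t, cp, ct))
```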
Affiliation(s)
- Xueqi Guo: Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Jing Wu: Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA; Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing, China
- Ming-Kai Chen: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Qiong Liu: Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- John A Onofrey: Department of Biomedical Engineering, Department of Radiology and Biomedical Imaging, and Department of Urology, Yale University, New Haven, CT, 06511, USA
- Darko Pucar: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Yulei Pang: Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA; Southern Connecticut State University, New Haven, CT, 06515, USA
- David Pigg: Siemens Medical Solutions USA, Inc., Knoxville, TN, 37932, USA
- Michael E Casey: Siemens Medical Solutions USA, Inc., Knoxville, TN, 37932, USA
- Nicha C Dvornek: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Chi Liu: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
11. Ahn SS, Ta K, Thorn SL, Onofrey JA, Melvinsdottir IH, Lee S, Langdon J, Sinusas AJ, Duncan JS. Co-attention spatial transformer network for unsupervised motion tracking and cardiac strain analysis in 3D echocardiography. Med Image Anal 2023; 84:102711. PMID: 36525845; PMCID: PMC9812938; DOI: 10.1016/j.media.2022.102711.
Abstract
Myocardial ischemia/infarction causes wall-motion abnormalities in the left ventricle. Therefore, reliable motion estimation and strain analysis using 3D+time echocardiography for localization and characterization of myocardial injury are valuable for early detection and targeted interventions. Previous unsupervised cardiac motion tracking methods rely on heavily weighted regularization functions to smooth out the noisy displacement fields in echocardiography. In this work, we present a Co-Attention Spatial Transformer Network (STN) for improved motion tracking and strain analysis in 3D echocardiography. The Co-Attention STN aims to extract inter-frame dependent features to improve motion tracking in otherwise noisy 3D echocardiography images. We also propose a novel temporal constraint to further regularize the motion field and produce smooth, realistic cardiac displacement paths over time without prior assumptions on cardiac motion. Our experimental results on both synthetic and in vivo 3D echocardiography datasets demonstrate that our Co-Attention STN provides superior performance compared to existing methods. Strain analysis from the Co-Attention STN also corresponds well with the matched SPECT perfusion maps, demonstrating the clinical utility of using 3D echocardiography for infarct localization.
Affiliation(s)
- Shawn S Ahn: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Kevinminh Ta: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Stephanie L Thorn: Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University, New Haven, CT, USA
- John A Onofrey: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Inga H Melvinsdottir: Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University, New Haven, CT, USA
- Supum Lee: Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University, New Haven, CT, USA
- Jonathan Langdon: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Albert J Sinusas: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Section of Cardiovascular Medicine, Department of Internal Medicine, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- James S Duncan: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
12. Shi L, Zhang J, Toyonaga T, Shao D, Onofrey JA, Lu Y. Deep learning-based attenuation map generation with simultaneously reconstructed PET activity and attenuation and low-dose application. Phys Med Biol 2023; 68. PMID: 36584395; DOI: 10.1088/1361-6560/acaf49.
Abstract
Objective. In PET/CT imaging, CT is used for positron emission tomography (PET) attenuation correction (AC). CT artifacts or misalignment between PET and CT can cause AC artifacts and quantification errors in PET. Simultaneous reconstruction (MLAA) of PET activity (λ-MLAA) and attenuation (μ-MLAA) maps was proposed to solve those issues using the time-of-flight PET raw data only. However, λ-MLAA still suffers from quantification error as compared to reconstruction using the gold-standard CT-based attenuation map (μ-CT). Recently, a deep learning (DL)-based framework was proposed to improve MLAA by predicting μ-DL from λ-MLAA and μ-MLAA using an image domain loss function (IM-loss). However, IM-loss does not directly measure the AC errors according to the PET attenuation physics. Our preliminary studies showed that an additional physics-based loss function can lead to more accurate PET AC. The main objective of this study is to optimize the attenuation map generation framework for clinical full-dose 18F-FDG studies. We also investigate the effectiveness of the optimized network on predicting attenuation maps for synthetic low-dose oncological PET studies. Approach. We optimized the proposed DL framework by applying different preprocessing steps and hyperparameter optimization, including patch size, weights of the loss terms and number of angles in the projection-domain loss term. The optimization was performed based on 100 skull-to-toe 18F-FDG PET/CT scans with minimal misalignment. The optimized framework was further evaluated on 85 clinical full-dose neck-to-thigh 18F-FDG cancer datasets as well as synthetic low-dose studies with only 10% of the full-dose raw data. Main results. Clinical evaluation of tumor quantification as well as physics-based figure-of-merit metric evaluation validated the promising performance of our proposed method. For both full-dose and low-dose studies, the proposed framework achieved <1% error in tumor standardized uptake value measures. Significance. It is of great clinical interest to achieve CT-less PET reconstruction, especially for low-dose PET studies.
Affiliation(s)
- Luyao Shi: Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America
- Jiazhen Zhang: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Takuya Toyonaga: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Dan Shao: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America; Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, People's Republic of China
- John A Onofrey: Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America; Department of Urology, Yale University, New Haven, CT, United States of America
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
13. Sun C, Revilla EM, Zhang J, Fontaine K, Toyonaga T, Gallezot JD, Mulnix T, Onofrey JA, Carson RE, Lu Y. An objective evaluation method for head motion estimation in PET - motion corrected centroid-of-distribution. Neuroimage 2022; 264:119678. PMID: 36261057; DOI: 10.1016/j.neuroimage.2022.119678.
Abstract
Head motion presents a continuing problem in brain PET studies. A wealth of motion correction (MC) algorithms had been proposed in the past, including both hardware-based methods and data-driven methods. However, in most real brain PET studies, in the absence of ground truth or gold standard of motion information, it is challenging to objectively evaluate MC quality. For MC evaluation, image-domain metrics, e.g., standardized uptake value (SUV) change before and after MC are commonly used, but this measure lacks objectivity because 1) other factors, e.g., attenuation correction, scatter correction and parameters used in the reconstruction, will confound MC effectiveness; 2) SUV only reflects final image quality, and it cannot precisely inform when an MC method performed well or poorly during the scan time period; 3) SUV is tracer-dependent and head motion may cause increases or decreases in SUV for different tracers, so evaluating MC effectiveness is complicated. Here, we present a new algorithm, i.e., motion corrected centroid-of-distribution (MCCOD) to perform objective quality control for measured or estimated rigid motion information. MCCOD is a three-dimensional surrogate trace of the center of tracer distribution after performing rigid MC using the existing motion information. MCCOD is used to inform whether the motion information is accurate, using the PET raw data only, i.e., without PET image reconstruction, where inaccurate motion information typically leads to abrupt changes in the MCCOD trace. MCCOD was validated using simulation studies and was tested on real studies acquired from both time-of-flight (TOF) and non-TOF scanners. A deep learning-based brain mask segmentation was implemented, which is shown to be necessary for non-TOF MCCOD generation. MCCOD is shown to be effective in detecting abrupt translation motion errors in slowly varying tracer distribution caused by the motion tracking hardware and can be used to compare different motion estimation methods as well as to improve existing motion information.
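The centroid-of-distribution idea is simple to state: for each short time window, compute the (weighted) mean position of the detected events, then watch the resulting trace for abrupt jumps. The NumPy sketch below is a simplified illustration with synthetic event positions; the published MCCOD method operates on PET raw data after applying the motion information under evaluation, with additional processing.

```python
import numpy as np

def centroid_of_distribution(event_xyz: np.ndarray, weights: np.ndarray = None) -> np.ndarray:
    """Center of tracer distribution for one time window: the (weighted) mean
    position of the coincidence events in that window."""
    if weights is None:
        weights = np.ones(len(event_xyz))
    return (event_xyz * weights[:, None]).sum(axis=0) / weights.sum()

# Hypothetical per-second event positions (mm) for three one-second windows.
rng = np.random.default_rng(0)
windows = [rng.normal(loc=[0, 0, 0], scale=5, size=(5000, 3)),
           rng.normal(loc=[0, 0, 0], scale=5, size=(5000, 3)),
           rng.normal(loc=[8, 0, 2], scale=5, size=(5000, 3))]   # abrupt shift
cod_trace = np.array([centroid_of_distribution(w) for w in windows])

# An abrupt jump in the COD trace flags residual (uncorrected) motion.
jumps = np.linalg.norm(np.diff(cod_trace, axis=0), axis=1)
print(jumps)   # large value at the transition into the third window
```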
Affiliation(s)
- Chen Sun: Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada
- Enette Mae Revilla: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
- Jiazhen Zhang: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
- Kathryn Fontaine: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
- Takuya Toyonaga: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
- Jean-Dominique Gallezot: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
- Tim Mulnix: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
- John A Onofrey: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States; Department of Urology, Yale University, New Haven, CT, United States; Department of Biomedical Engineering, Yale University, New Haven, CT, United States
- Richard E Carson: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States; Department of Biomedical Engineering, Yale University, New Haven, CT, United States
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
14. Zeng T, Zhang J, Revilla E, Lieffrig EV, Fang X, Lu Y, Onofrey JA. Supervised Deep Learning for Head Motion Correction in PET. Med Image Comput Comput Assist Interv 2022; 13434:194-203. PMID: 38107622; PMCID: PMC10725740; DOI: 10.1007/978-3-031-16440-8_19.
Abstract
Head movement is a major limitation in brain positron emission tomography (PET) imaging, which results in image artifacts and quantification errors. Head motion correction plays a critical role in quantitative image analysis and diagnosis of nervous system diseases. However, to date, there is no approach that can track head motion continuously without using an external device. Here, we develop a deep learning-based algorithm to predict rigid motion for brain PET by leveraging existing dynamic PET scans with gold-standard motion measurements from external Polaris Vicra tracking. We propose a novel Deep Learning for Head Motion Correction (DL-HMC) methodology that consists of three components: (i) PET input data encoder layers; (ii) regression layers to estimate the six rigid motion transformation parameters; and (iii) feature-wise transformation (FWT) layers to condition the network to tracer time-activity. The input of DL-HMC is sampled pairs of one-second 3D cloud representations of the PET data and the output is the prediction of six rigid transformation motion parameters. We trained this network in a supervised manner using the Vicra motion tracking information as the gold standard. We quantitatively evaluate DL-HMC by comparing to gold-standard Vicra measurements and qualitatively evaluate the reconstructed images as well as perform region of interest standardized uptake value (SUV) measurements. An algorithm ablation study was performed to determine the contributions of each of our DL-HMC design choices to network performance. Our results demonstrate accurate motion prediction performance for brain PET using a data-driven registration approach without external motion tracking hardware. All code is publicly available on GitHub: https://github.com/OnofreyLab/dl-hmc_miccai2022.
Affiliation(s)
- Tianyi Zeng: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Jiazhen Zhang: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Eléonore V Lieffrig: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Xi Fang: Department of Psychiatry, Yale University, New Haven, CT, USA
- Yihuan Lu: United Imaging Healthcare, Shanghai, China
- John A Onofrey: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Urology, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
15. Paulson N, Zeevi T, Papademetris M, Leapman MS, Onofrey JA, Sprenkle PC, Humphrey PA, Staib LH, Levi AW. Prediction of Adverse Pathology at Radical Prostatectomy in Grade Group 2 and 3 Prostate Biopsies Using Machine Learning. JCO Clin Cancer Inform 2022; 6:e2200016. PMID: 36179281; DOI: 10.1200/cci.22.00016.
Abstract
PURPOSE There is ongoing clinical need to improve estimates of disease outcome in prostate cancer. Machine learning (ML) approaches to pathologic diagnosis and prognosis are a promising and increasingly used strategy. In this study, we use an ML algorithm for prediction of adverse outcomes at radical prostatectomy (RP) using whole-slide images (WSIs) of prostate biopsies with Grade Group (GG) 2 or 3 disease. METHODS We performed a retrospective review of prostate biopsies collected at our institution which had corresponding RP, GG 2 or 3 disease in one or more cores, and no biopsies with higher than GG 3 disease. A hematoxylin and eosin-stained core needle biopsy from each site with GG 2 or 3 disease was scanned and used as the sole input for the algorithm. The ML pipeline had three phases: image preprocessing, feature extraction, and adverse outcome prediction. First, patches were extracted from each biopsy scan. Subsequently, the pre-trained Visual Geometry Group-16 convolutional neural network was used for feature extraction. A representative feature vector was then used as input to an Extreme Gradient Boosting classifier for predicting the binary adverse outcome. We subsequently assessed patient clinical risk using the CAPRA score for comparison with the ML pipeline results. RESULTS The data set included 361 WSIs from 107 patients (56 with adverse pathology at RP). The areas under the receiver operating characteristic curve for the ML classification were 0.72 (95% CI, 0.62 to 0.81), 0.65 (95% CI, 0.53 to 0.79), and 0.89 (95% CI, 0.79 to 1.00) for the entire cohort, GG 2 patients, and GG 3 patients, respectively, similar to the performance of the CAPRA clinical risk assessment. CONCLUSION We provide evidence for the potential of ML algorithms to use WSIs of needle core prostate biopsies to estimate clinically relevant prostate cancer outcomes.
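The feature-extraction-plus-classifier pipeline described above (pre-trained VGG-16 features fed to an Extreme Gradient Boosting model) can be sketched roughly as follows. This is a generic outline under stated assumptions (ImageNet weights, mean-pooled patch features, illustrative hyperparameters), not the authors' code; patch extraction and slide handling are omitted.

```python
import torch
from torchvision import models, transforms
from xgboost import XGBClassifier

# Pre-trained VGG-16 as a frozen feature extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier = vgg.classifier[:-1]     # drop the final class layer -> 4096-d features
vgg.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def slide_feature(patches):
    """Average the VGG-16 features of a biopsy's tissue patches (PIL images)
    into one representative vector."""
    with torch.no_grad():
        feats = torch.stack([vgg(preprocess(p).unsqueeze(0)).squeeze(0) for p in patches])
    return feats.mean(dim=0).numpy()

# X: one 4096-d vector per biopsy; y: adverse pathology at prostatectomy (0/1).
# clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
# clf.fit(X_train, y_train); risk = clf.predict_proba(X_test)[:, 1]
```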
Affiliation(s)
- Tal Zeevi: Yale School of Medicine, New Haven, CT
16. Toyonaga T, Shao D, Shi L, Zhang J, Revilla EM, Menard D, Ankrah J, Hirata K, Chen MK, Onofrey JA, Lu Y. Deep learning-based attenuation correction for whole-body PET - a multi-tracer study with 18F-FDG, 68Ga-DOTATATE, and 18F-Fluciclovine. Eur J Nucl Med Mol Imaging 2022; 49:3086-3097. PMID: 35277742; PMCID: PMC10725742; DOI: 10.1007/s00259-022-05748-2.
Abstract
A novel deep learning (DL)-based attenuation correction (AC) framework was applied to clinical whole-body oncology studies using 18F-FDG, 68 Ga-DOTATATE, and 18F-Fluciclovine. The framework used activity (λ-MLAA) and attenuation (µ-MLAA) maps estimated by the maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a modified U-net neural network with a novel imaging physics-based loss function to learn a CT-derived attenuation map (µ-CT). METHODS Clinical whole-body PET/CT datasets of 18F-FDG (N = 113), 68 Ga-DOTATATE (N = 76), and 18F-Fluciclovine (N = 90) were used to train and test tracer-specific neural networks. For each tracer, forty subjects were used to train the neural network to predict attenuation maps (µ-DL). µ-DL and µ-MLAA were compared to the gold-standard µ-CT. PET images reconstructed using the OSEM algorithm with µ-DL (OSEMDL) and µ-MLAA (OSEMMLAA) were compared to the CT-based reconstruction (OSEMCT). Tumor regions of interest were segmented by two radiologists and tumor SUV and volume measures were reported, as well as evaluation using conventional image analysis metrics. RESULTS µ-DL yielded high resolution and fine detail recovery of the attenuation map, which was superior in quality as compared to µ-MLAA in all metrics for all tracers. Using OSEMCT as the gold-standard, OSEMDL provided more accurate tumor quantification than OSEMMLAA for all three tracers, e.g., error in SUVmax for OSEMMLAA vs. OSEMDL: - 3.6 ± 4.4% vs. - 1.7 ± 4.5% for 18F-FDG (N = 152), - 4.3 ± 5.1% vs. 0.4 ± 2.8% for 68 Ga-DOTATATE (N = 70), and - 7.3 ± 2.9% vs. - 2.8 ± 2.3% for 18F-Fluciclovine (N = 44). OSEMDL also yielded more accurate tumor volume measures than OSEMMLAA, i.e., - 8.4 ± 14.5% (OSEMMLAA) vs. - 3.0 ± 15.0% for 18F-FDG, - 14.1 ± 19.7% vs. 1.8 ± 11.6% for 68 Ga-DOTATATE, and - 15.9 ± 9.1% vs. - 6.4 ± 6.4% for 18F-Fluciclovine. CONCLUSIONS The proposed framework provides accurate and robust attenuation correction for whole-body 18F-FDG, 68 Ga-DOTATATE and 18F-Fluciclovine in tumor SUV measures as well as tumor volume estimation. The proposed method provides clinically equivalent quality as compared to CT in attenuation correction for the three tracers.
Affiliation(s)
- Takuya Toyonaga: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Dan Shao: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Guangdong Provincial People's Hospital, Guangzhou, Guangdong, China
- Luyao Shi: Department of Biomedical Engineering, Yale University, New Haven, CT, 06520, USA
- Jiazhen Zhang: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Enette Mae Revilla: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Kenji Hirata: Department of Diagnostic Imaging, School of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Ming-Kai Chen: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Yale New Haven Hospital, New Haven, CT, USA
- John A Onofrey: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, 06520, USA; Department of Urology, Yale University, New Haven, CT, USA
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
Collapse
|
17
|
Revilla EM, Gallezot JD, Naganawa M, Toyonaga T, Fontaine K, Mulnix T, Onofrey JA, Carson RE, Lu Y. Adaptive data-driven motion detection and optimized correction for brain PET. Neuroimage 2022; 252:119031. [PMID: 35257856 PMCID: PMC9206767 DOI: 10.1016/j.neuroimage.2022.119031] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 02/16/2022] [Accepted: 02/21/2022] [Indexed: 12/03/2022] Open
Abstract
Head motion during PET scans causes image quality degradation, decreased concentration in regions with high uptake and incorrect outcome measures from kinetic analysis of dynamic datasets. Previously, we proposed a data-driven method, center of tracer distribution (COD), to detect head motion without an external motion tracking device. There, motion was detected using one dimension of the COD trace with a semiautomatic detection algorithm, requiring multiple user defined parameters and manual intervention. In this study, we developed a new data-driven motion detection algorithm, which is automatic, self-adaptive to noise level, does not require user-defined parameters and uses all three dimensions of the COD trace (3DCOD). 3DCOD was first validated and tested using 30 simulation studies (18F-FDG, N = 15; 11C-raclopride (RAC), N = 15) with large motion. The proposed motion correction method was tested on 22 real human datasets, with 20 acquired from a high resolution research tomograph (HRRT) scanner (18F-FDG, N = 10; 11C-RAC, N = 10) and 2 acquired from the Siemens Biograph mCT scanner. Real-time hardware-based motion tracking information (Vicra) was available for all real studies and was used as the gold standard. 3DCOD was compared to Vicra, no motion correction (NMC), one-direction COD (our previous method called 1DCOD) and two conventional frame-based image registration (FIR) algorithms, i.e., FIR1 (based on predefined frames reconstructed with attenuation correction) and FIR2 (without attenuation correction) for both simulation and real studies. For the simulation studies, 3DCOD yielded -2.3 ± 1.4% (mean ± standard deviation across all subjects and 11 brain regions) error in region of interest (ROI) uptake for 18F-FDG (-3.4 ± 1.7% for 11C-RAC across all subjects and 2 regions) as compared to Vicra (perfect correction) while NMC, FIR1, FIR2 and 1DCOD yielded -25.4 ± 11.1% (-34.5 ± 16.1% for 11C- RAC), -13.4 ± 3.5% (-16.1 ± 4.6%), -5.7 ± 3.6% (-8.0 ± 4.5%) and -2.6 ± 1.5% (-5.1 ± 2.7%), respectively. For real HRRT studies, 3DCOD yielded -0.3 ± 2.8% difference for 18F-FDG (-0.4 ± 3.2% for 11C-RAC) as compared to Vicra while NMC, FIR1, FIR2 and 1DCOD yielded -14.9 ± 9.0% (-24.5 ± 14.6%), -3.6 ± 4.9% (-13.4 ± 14.3%), -0.6 ± 3.4% (-6.7 ± 5.3%) and -1.5 ± 4.2% (-2.2 ± 4.1%), respectively. In summary, the proposed motion correction method yielded comparable performance to the hardware-based motion tracking method for multiple tracers, including very challenging cases with large frequent head motion, in studies performed on a non-TOF scanner.
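A minimal sketch of the centroid-of-distribution idea: average event coordinates over short time windows to build a 3D COD trace, then flag large jumps between windows as candidate motion. The fixed jump threshold here is purely illustrative; the paper's algorithm is automatic and self-adaptive to noise, which this sketch does not reproduce.

```python
import numpy as np

def cod_trace(event_xyz, event_t, window_s=1.0):
    """event_xyz: (N, 3) per-event coordinates; event_t: (N,) event times (s).
    Returns the centroid of the event distribution in consecutive time windows."""
    edges = np.arange(event_t.min(), event_t.max() + window_s, window_s)
    bins = np.digitize(event_t, edges) - 1
    return np.array([event_xyz[bins == b].mean(axis=0)
                     for b in range(len(edges) - 1) if np.any(bins == b)])

def detect_motion(trace, jump_mm=2.0):
    """Indices of windows whose COD jumps more than jump_mm from the previous window."""
    step = np.linalg.norm(np.diff(trace, axis=0), axis=1)
    return np.where(step > jump_mm)[0] + 1
```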
Affiliation(s)
- Enette Mae Revilla: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
- Jean-Dominique Gallezot: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
- Mika Naganawa: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
- Takuya Toyonaga: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
- Kathryn Fontaine: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
- Tim Mulnix: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
- John A Onofrey: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA; Department of Urology, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Richard E Carson: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
|
18
|
Gonzales RA, Seemann F, Lamy J, Mojibian H, Atar D, Erlinge D, Steding-Ehrenborg K, Arheden H, Hu C, Onofrey JA, Peters DC, Heiberg E. MVnet: automated time-resolved tracking of the mitral valve plane in CMR long-axis cine images with residual neural networks: a multi-center, multi-vendor study. J Cardiovasc Magn Reson 2021; 23:137. [PMID: 34857009 PMCID: PMC8638514 DOI: 10.1186/s12968-021-00824-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Accepted: 10/20/2021] [Indexed: 11/24/2022] Open
Abstract
BACKGROUND Mitral annular plane systolic excursion (MAPSE) and left ventricular (LV) early diastolic velocity (e') are key metrics of systolic and diastolic function, but are not often measured by cardiovascular magnetic resonance (CMR). Their derivation is possible with manual, precise annotation of the mitral valve (MV) insertion points along the cardiac cycle in both two- and four-chamber long-axis cines, but this process is highly time-consuming, laborious, and prone to errors. A fully automated, consistent, fast, and accurate method for MV plane tracking is lacking. In this study, we propose MVnet, a deep learning approach for MV point localization and tracking capable of deriving these clinical metrics with human expert-level performance, and we validated it in a multi-vendor, multi-center clinical population. METHODS The proposed pipeline first performs a coarse MV point annotation in a given cine accurately enough to apply an automated linear transformation task, which standardizes the size, cropping, resolution, and heart orientation, and second, tracks the MV points with high accuracy. The model was trained and evaluated on 38,854 cine images from 703 patients with diverse cardiovascular conditions, scanned on equipment from 3 main vendors, 16 centers, and 7 countries, and manually annotated by 10 observers. Agreement was assessed by the intra-class correlation coefficient (ICC) for both clinical metrics and by the distance error in the MV plane displacement. For inter-observer variability analysis, an additional pair of observers performed manual annotations in a randomly chosen set of 50 patients. RESULTS MVnet achieved a fast segmentation (<1 s/cine) with excellent ICCs of 0.94 (MAPSE) and 0.93 (LV e') and an MV plane tracking error of -0.10 ± 0.97 mm. In a similar manner, the inter-observer variability analysis yielded ICCs of 0.95 and 0.89 and a tracking error of -0.15 ± 1.18 mm, respectively. CONCLUSION A dual-stage deep learning approach for automated annotation of MV points for systolic and diastolic evaluation in CMR long-axis cine images was developed. The method is able to track these points with high accuracy and in a timely manner. This will improve the feasibility of CMR methods that rely on valve tracking and increase their utility in a clinical setting.
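Once the MV insertion points are tracked, MAPSE and e' follow from simple geometry: project the annular midpoint onto the long axis and differentiate over time. The sketch below is a simplified illustration; the array shapes, sign conventions, and in particular the e' definition (peak recoil velocity after peak excursion) are assumptions, not the paper's exact post-processing.

```python
import numpy as np

def mapse_and_e_prime(mv_points, apex, frame_dt):
    """mv_points: (T, 2, 2) tracked MV insertion points (frames, two points, x/y in mm);
    apex: (2,) apical coordinate defining the long axis; frame_dt: frame spacing (s)."""
    centers = mv_points.mean(axis=1)                       # MV plane midpoint per frame
    axis = (apex - centers[0]) / np.linalg.norm(apex - centers[0])
    disp = (centers - centers[0]) @ axis                   # + toward apex (systole), mm
    t_es = int(disp.argmax())                              # end-systole ~ peak excursion
    mapse = float(disp[t_es])                              # systolic excursion (mm)
    vel = np.gradient(disp, frame_dt)                      # annular velocity (mm/s)
    e_prime = float(-vel[t_es:].min()) if t_es < len(vel) - 1 else float("nan")
    return mapse, e_prime                                  # e': peak early-diastolic recoil
```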
Affiliation(s)
- Ricardo A. Gonzales: Clinical Physiology, Department of Clinical Sciences, Lund University, Skåne University Hospital, Lund, Sweden; Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America; Department of Electrical Engineering, Universidad de Ingeniería y Tecnología, Lima, Peru
- Felicia Seemann: Clinical Physiology, Department of Clinical Sciences, Lund University, Skåne University Hospital, Lund, Sweden; Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America; Department of Biomedical Engineering, Lund University, Lund, Sweden
- Jérôme Lamy: Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America
- Hamid Mojibian: Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America
- Dan Atar: Department of Cardiology B, Oslo University Hospital Ullevål and Faculty of Medicine, University of Oslo, Oslo, Norway
- David Erlinge: Department of Cardiology, Clinical Sciences, Lund University, Skåne University Hospital, Lund, Sweden
- Katarina Steding-Ehrenborg: Clinical Physiology, Department of Clinical Sciences, Lund University, Skåne University Hospital, Lund, Sweden
- Håkan Arheden: Clinical Physiology, Department of Clinical Sciences, Lund University, Skåne University Hospital, Lund, Sweden
- Chenxi Hu: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- John A. Onofrey: Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America; Department of Urology, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America; Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States of America
- Dana C. Peters: Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America
- Einar Heiberg: Clinical Physiology, Department of Clinical Sciences, Lund University, Skåne University Hospital, Lund, Sweden; Department of Biomedical Engineering, Lund University, Lund, Sweden; Wallenberg Center for Molecular Medicine, Lund University, Lund, Sweden
|
19
|
Gross M, Spektor M, Jaffe A, Kucukkaya AS, Iseke S, Haider SP, Strazzabosco M, Chapiro J, Onofrey JA. Improved performance and consistency of deep learning 3D liver segmentation with heterogeneous cancer stages in magnetic resonance imaging. PLoS One 2021; 16:e0260630. [PMID: 34852007 PMCID: PMC8635384 DOI: 10.1371/journal.pone.0260630] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 11/13/2021] [Indexed: 11/23/2022] Open
Abstract
PURPOSE Accurate liver segmentation is key for volumetry assessment to guide treatment decisions. Moreover, it is an important pre-processing step for cancer detection algorithms. Liver segmentation can be especially challenging in patients with cancer-related tissue changes and shape deformation. The aim of this study was to assess the ability of state-of-the-art deep learning 3D liver segmentation algorithms to generalize across all different Barcelona Clinic Liver Cancer (BCLC) liver cancer stages. METHODS This retrospective study included patients from an institutional database that had arterial-phase T1-weighted magnetic resonance images with corresponding manual liver segmentations. The data were split 70/15/15% into training/validation/testing sets, each proportionally balanced across BCLC stages. Two 3D convolutional neural networks were trained using identical U-net-derived architectures with equally sized training datasets: one spanning all BCLC stages ("All-Stage-Net": AS-Net), and one limited to early and intermediate BCLC stages ("Early-Intermediate-Stage-Net": EIS-Net). Segmentation accuracy was evaluated by the Dice Similarity Coefficient (DSC) on a dataset spanning all BCLC stages, and a Wilcoxon signed-rank test was used for pairwise comparisons. RESULTS 219 subjects met the inclusion criteria (170 males, 49 females, 62.8±9.1 years) from all BCLC stages. Both networks were trained using 129 subjects: AS-Net training comprised 19, 74, 18, 8, and 10 BCLC 0, A, B, C, and D patients, respectively; EIS-Net training comprised 21, 86, and 22 BCLC 0, A, and B patients, respectively. DSCs (mean±SD) were 0.954±0.018 and 0.946±0.032 for AS-Net and EIS-Net (p<0.001), respectively. The AS-Net (0.956±0.014) significantly outperformed the EIS-Net (0.941±0.038) on advanced BCLC stages (p<0.001) and yielded similarly good segmentation performance on early and intermediate stages (AS-Net: 0.952±0.021; EIS-Net: 0.949±0.027; p = 0.107). CONCLUSION To ensure robust segmentation performance across cancer stages that is independent of liver shape deformation and tumor burden, it is critical to train deep learning models on heterogeneous imaging data spanning all BCLC stages.
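The evaluation reduces to per-case Dice scores and a paired Wilcoxon signed-rank test between the two networks. A minimal sketch, with toy per-subject values standing in for the real results:

```python
import numpy as np
from scipy.stats import wilcoxon

def dice(a, b):
    """Dice similarity coefficient between two binary masks of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy paired per-subject DSCs for two models evaluated on the same test set
dsc_a = np.array([0.95, 0.96, 0.94, 0.97, 0.95, 0.96])
dsc_b = np.array([0.94, 0.95, 0.93, 0.96, 0.92, 0.95])
stat, p = wilcoxon(dsc_a, dsc_b)   # paired, non-parametric comparison
print(f"median DSC A={np.median(dsc_a):.3f}, B={np.median(dsc_b):.3f}, p={p:.3f}")
```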
Affiliation(s)
- Moritz Gross: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America; Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Michael Spektor: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Ariel Jaffe: Department of Internal Medicine, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Ahmet S. Kucukkaya: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America; Charité Center for Diagnostic and Interventional Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Simon Iseke: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America; Department of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, Rostock University Medical Center, Rostock, Germany
- Stefan P. Haider: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America; Department of Otorhinolaryngology, University Hospital of Ludwig Maximilians Universität München, Munich, Germany
- Mario Strazzabosco: Department of Internal Medicine, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Julius Chapiro: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America
- John A. Onofrey: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America; Department of Urology, Yale University School of Medicine, New Haven, Connecticut, United States of America; Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States of America
|
20
|
Netto JMB, Scheinost D, Onofrey JA, Franco I. Magnetic resonance image connectivity analysis provides evidence of central nervous system mode of action for parasacral transcutaneous electro neural stimulation - A pilot study. J Pediatr Urol 2020; 16:536-542. [PMID: 32873504 DOI: 10.1016/j.jpurol.2020.08.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/18/2020] [Revised: 07/30/2020] [Accepted: 08/04/2020] [Indexed: 12/16/2022]
Abstract
INTRODUCTION Parasacral transcutaneous electro neural stimulation (pTENS) is a common treatment modality for patients with overactive bladder (OAB). Its mechanism of effectiveness has yet to be elucidated. Recent fMRI studies in adults with implanted sacral nerve stimulators attribute their effectiveness to changes in the brain involving the anterior cingulate cortex (ACC) and prefrontal cortex (PFC). AIM The study set out to evaluate brain connectivity using functional MRI to outline the mechanism of action of pTENS in the brain. METHODS Ten adult volunteers without urinary tract symptoms underwent fMRI. Electrodes were placed on the skin at the sacral level (S2) (Experimental Stimulation - pTENS) and on the right scapular region (Sham Stimulation - sTENS). Stimulation was performed twice at each site for 6 min at a frequency of 10 Hz, a pulse width of 260 μs, and an intensity determined by the motor threshold. A 6 min resting-state fMRI was also acquired twice as a control. Functional connectivity data were acquired during each state (resting, pTENS, and sTENS). Standard functional connectivity preprocessing was performed. Seed connectivity was examined to investigate changes in ACC functional connectivity between the stimulation and resting-state conditions. Significance was assessed at p < 0.05 corrected for multiple comparisons. RESULTS For all conditions (pTENS, sTENS, and rest), standard patterns of ACC connectivity were detectable, with strong connectivity between the ACC and subcortical regions and between the ACC and the frontal lobe. Functional connectivity between the ACC seed and the dorsolateral prefrontal cortex (DLPFC) was significantly increased during pTENS compared to rest. sTENS did not increase connectivity between the ACC seed and DLPFC when compared to rest. DISCUSSION Preliminary results indicate that the ACC is a major site of activation during pTENS. Increased connectivity between the ACC and DLPFC is a possible mechanism of pTENS effectiveness, and this effect appears to be specific to pTENS compared to sTENS. This study is limited by its small sample size, which prevents further investigation of other sites in the brain. CONCLUSIONS The study fulfills our original aim of defining whether parasacral TENS has a central effect, providing evidence of a central nervous system mode of action.
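Seed connectivity as used here is simply the correlation of each voxel's time series with the mean time series of the ACC seed, typically Fisher z-transformed before group statistics. A minimal sketch with assumed array shapes (not the study's actual preprocessing pipeline):

```python
import numpy as np

def seed_connectivity(bold, seed_mask):
    """bold: (T, V) preprocessed BOLD time series (time x voxels);
    seed_mask: (V,) boolean mask of the seed region (e.g., ACC).
    Returns (V,) Fisher z-transformed correlations with the seed mean time series."""
    seed_ts = bold[:, seed_mask].mean(axis=1)
    seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    vox_z = (bold - bold.mean(axis=0)) / bold.std(axis=0)
    r = (vox_z * seed_z[:, None]).mean(axis=0)              # Pearson r per voxel
    return np.arctanh(np.clip(r, -0.999999, 0.999999))      # Fisher z

# connectivity maps per condition (rest, pTENS, sTENS) can then be contrasted voxel-wise
```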
Affiliation(s)
- Jose Murillo B Netto: Yale School of Medicine - Department of Urology, USA; Universidade Federal de Juiz de Fora - Division of Urology, Brazil
- Dustin Scheinost: Statistics & Data Science - Yale University, USA; Child Study Center - Yale University, USA; Radiology & Biomedical Imaging - Yale University, USA
- John A Onofrey: Yale School of Medicine - Department of Urology, USA; Radiology & Biomedical Imaging - Yale University, USA
- Israel Franco: Yale School of Medicine - Department of Urology, USA
|
21
|
Abstract
Previous studies have demonstrated the feasibility of reducing noise with deep learning-based methods for low-dose fluorodeoxyglucose (FDG) positron emission tomography (PET). This work aimed to investigate the feasibility of noise reduction for tracers without sufficient training datasets using a deep transfer learning approach, which can utilize existing networks trained by the widely available FDG datasets. In this study, the deep transfer learning strategy based on a fully 3D patch-based U-Net was investigated on an 18F-fluoromisonidazole (18F-FMISO) dataset using single-bed scanning and a 68Ga-DOTATATE dataset using whole-body scanning. The datasets of 18F-FDG by single-bed scanning and whole-body scanning were used to obtain pre-trained U-Nets separately for subsequent cross-tracer and cross-protocol transfer learning. The full-dose PET images were used as the labels while low-dose PET images from 10% counts were used as the inputs. Three types of U-Nets were obtained: a U-Net trained by the FDG dataset, a pre-trained FDG U-Net fine-tuned by another less-available tracer (FMISO/DOTATATE), and a U-Net completely trained by a large number of less-available tracer datasets (FMISO/DOTATATE), used as the reference U-Net. The denoising performance of the three types of U-Nets was evaluated on single-bed 18F-FMISO and whole-body 68Ga-DOTATATE separately and compared using normalized root-mean-square error (NRMSE), signal-to-noise ratio (SNR), and region-of-interest (ROI) relative bias. For cross-tracer transfer learning, all the U-Nets provided denoised images with similar quality for both tracers. There was no significant difference in terms of NRMSE and SNR when comparing the former two U-Nets with the reference U-Net. The ROI biases for these U-Nets were similar. For cross-tracer and cross-protocol transfer learning, the pre-trained single-bed FDG U-Net fine-tuned by whole-body DOTATATE data provided the most consistent images with the reference U-Net. Fine-tuning significantly reduced the NRMSE and the ROI bias and improved the SNR when comparing the fine-tuned U-Net with the U-Net trained by single-bed FDG only (NRMSE: 96.3% ± 21.1% versus 120.6% ± 18.5%, ROI bias: -10.5% ± 13.0% versus -14.7% ± 6.4%, SNR: 4.2 ± 1.4 versus 3.9 ± 1.6, for the fine-tuned U-Net and the U-Net trained by single-bed FDG, respectively, with p < 0.01 in all cases). This work demonstrated that it is feasible to utilize existing networks well-trained by FDG datasets to reduce the noise for other less-available tracers and other scanning protocols by using the fine-tuning strategy.
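The comparison metrics are straightforward to compute once the denoised and full-dose volumes are aligned. The exact normalizations used in the study may differ, so the definitions below are one common convention and should be read as a sketch, not the paper's implementation:

```python
import numpy as np

def nrmse_percent(denoised, full_dose):
    """RMSE normalized by the full-dose intensity range, in percent (one common convention)."""
    rmse = np.sqrt(np.mean((denoised - full_dose) ** 2))
    return 100.0 * rmse / (full_dose.max() - full_dose.min())

def roi_bias_percent(denoised, full_dose, roi_mask):
    """Relative bias of the ROI mean, denoised vs. full-dose."""
    ref = full_dose[roi_mask].mean()
    return 100.0 * (denoised[roi_mask].mean() - ref) / ref

def snr(img, roi_mask, background_mask):
    """ROI mean divided by background standard deviation (one of several SNR definitions)."""
    return img[roi_mask].mean() / img[background_mask].std()
```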
Affiliation(s)
- Hui Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America; Department of Internal Medicine (Cardiology), Yale University, New Haven, CT, United States of America
|
22
|
Onofrey JA, Staib LH, Huang X, Zhang F, Papademetris X, Metaxas D, Rueckert D, Duncan JS. Sparse Data-Driven Learning for Effective and Efficient Biomedical Image Segmentation. Annu Rev Biomed Eng 2020; 22:127-153. [PMID: 32169002 PMCID: PMC9351438 DOI: 10.1146/annurev-bioeng-060418-052147] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Sparsity is a powerful concept to exploit for high-dimensional machine learning and associated representational and computational efficiency. Sparsity is well suited for medical image segmentation. We present a selection of techniques that incorporate sparsity, including strategies based on dictionary learning and deep learning, that are aimed at medical image segmentation and related quantification.
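As a concrete illustration of the dictionary-learning building block surveyed here (not code from the review itself), image patches can be sparsely coded against a learned dictionary with scikit-learn; in segmentation pipelines, the resulting codes or reconstruction residuals then drive the labeling. The patch data below is synthetic.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.random((500, 64))                    # stand-in for 8x8 image patches, flattened

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, batch_size=50,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5, random_state=0)
codes = dico.fit(patches).transform(patches)       # sparse codes: at most 5 atoms per patch
recon = codes @ dico.components_                   # patch reconstruction from the dictionary
print("mean reconstruction error:", np.mean((patches - recon) ** 2))
```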
Affiliation(s)
- John A Onofrey: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA; Department of Urology, Yale School of Medicine, New Haven, Connecticut 06520, USA
- Lawrence H Staib: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA; Department of Biomedical Engineering, Yale University, New Haven, Connecticut 06520, USA
- Xiaojie Huang: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA; Citadel Securities, Chicago, Illinois 60603, USA
- Fan Zhang: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA
- Xenophon Papademetris: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA; Department of Biomedical Engineering, Yale University, New Haven, Connecticut 06520, USA
- Dimitris Metaxas: Department of Computer Science, Rutgers University, Piscataway, New Jersey 08854, USA
- Daniel Rueckert: Department of Computing, Imperial College London, London SW7 2AZ, United Kingdom
- James S Duncan: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA; Department of Biomedical Engineering, Yale University, New Haven, Connecticut 06520, USA
|
23
|
Lu W, Onofrey JA, Lu Y, Shi L, Ma T, Liu Y, Liu C. An investigation of quantitative accuracy for deep learning based denoising in oncological PET. Phys Med Biol 2019; 64:165019. [DOI: 10.1088/1361-6560/ab3242] [Citation(s) in RCA: 60] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
24
|
Boutagy NE, Ravera S, Papademetris X, Onofrey JA, Zhuang ZW, Wu J, Feher A, Stacy MR, French BA, Annex BH, Carrasco N, Sinusas AJ. Noninvasive In Vivo Quantification of Adeno-Associated Virus Serotype 9-Mediated Expression of the Sodium/Iodide Symporter Under Hindlimb Ischemia and Neuraminidase Desialylation in Skeletal Muscle Using Single-Photon Emission Computed Tomography/Computed Tomography. Circ Cardiovasc Imaging 2019; 12:e009063. [PMID: 31296047 DOI: 10.1161/circimaging.119.009063] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
BACKGROUND We propose micro single-photon emission computed tomography/computed tomography imaging of the hNIS (human sodium/iodide symporter) to noninvasively quantify adeno-associated virus 9 (AAV9)-mediated gene expression in a murine model of peripheral artery disease. METHODS AAV9-hNIS (2×10¹¹ viral genome particles) was injected into nonischemic or ischemic gastrocnemius muscles of C57Bl/6J mice following unilateral hindlimb ischemia ± the α-sialidase NA (neuraminidase). Control nonischemic limbs were injected with phosphate buffered saline or remained noninjected. Twelve mice underwent micro single-photon emission computed tomography/computed tomography imaging after serial injection of pertechnetate (99mTcO4-), a NIS substrate, up to 28 days after AAV9-hNIS injection. Twenty-four animals were euthanized at selected times over 1 month for ex vivo validation. Forty-two animals were imaged with 99mTcO4- ± the selective NIS inhibitor perchlorate on day 10, to ascertain specificity of radiotracer uptake. Tissue was harvested for ex vivo validation. A modified version of the U-Net deep learning algorithm was used for image quantification. RESULTS As quantitated by standardized uptake value, there was a gradual temporal increase in 99mTcO4- uptake in muscles treated with AAV9-hNIS. Hindlimb ischemia, NA, and hindlimb ischemia plus NA increased the magnitude of 99mTcO4- uptake by 4- to 5-fold compared with nonischemic muscle treated with only AAV9-hNIS. Perchlorate treatment significantly reduced 99mTcO4- uptake in AAV9-hNIS-treated muscles, demonstrating uptake specificity. The imaging results correlated well with ex vivo well counting (r² = 0.9375; P<0.0001) and immunoblot analysis of NIS protein (r² = 0.65; P<0.0001). CONCLUSIONS Micro single-photon emission computed tomography/computed tomography imaging of hNIS-mediated 99mTcO4- uptake allows for accurate in vivo quantification of AAV9-driven gene expression, which increases under ischemic conditions or neuraminidase desialylation in skeletal muscle.
Affiliation(s)
- Nabil E Boutagy: Department of Medicine, Section of Cardiovascular Medicine, Yale Translational Research Imaging Center, Yale School of Medicine, New Haven, CT
- Silvia Ravera: Department of Cellular and Molecular Physiology, Yale School of Medicine, New Haven, CT
- Xenophon Papademetris: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT
- John A Onofrey: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT
- Zhen W Zhuang: Department of Medicine, Section of Cardiovascular Medicine, Yale Translational Research Imaging Center, Yale School of Medicine, New Haven, CT
- Jing Wu: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT
- Attila Feher: Department of Medicine, Section of Cardiovascular Medicine, Yale Translational Research Imaging Center, Yale School of Medicine, New Haven, CT
- Mitchel R Stacy: Department of Medicine, Section of Cardiovascular Medicine, Yale Translational Research Imaging Center, Yale School of Medicine, New Haven, CT
- Brent A French: Department of Biomedical Engineering, University of Virginia, Charlottesville; Division of Cardiovascular Medicine, Department of Medicine, University of Virginia, Charlottesville
- Brian H Annex: Department of Biomedical Engineering, University of Virginia, Charlottesville; Division of Cardiovascular Medicine, Department of Medicine, University of Virginia, Charlottesville
- Nancy Carrasco: Department of Cellular and Molecular Physiology, Yale School of Medicine, New Haven, CT
- Albert J Sinusas: Department of Medicine, Section of Cardiovascular Medicine, Yale Translational Research Imaging Center, Yale School of Medicine, New Haven, CT; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT
|
25
|
Onofrey JA, Casetti-Dinescu DI, Lauritzen AD, Sarkar S, Venkataraman R, Fan RE, Sonn GA, Sprenkle PC, Staib LH, Papademetris X. GENERALIZABLE MULTI-SITE TRAINING AND TESTING OF DEEP NEURAL NETWORKS USING IMAGE NORMALIZATION. Proc IEEE Int Symp Biomed Imaging 2019; 2019:348-351. [PMID: 32874427 PMCID: PMC7457546 DOI: 10.1109/isbi.2019.8759295] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
The ability of medical image analysis deep learning algorithms to generalize across multiple sites is critical for clinical adoption of these methods. Medical imaging data, especially MRI, can have highly variable intensity characteristics across different individuals, scanners, and sites. However, it is not practical to train algorithms with data from all imaging equipment sources at all possible sites. Intensity normalization methods offer a potential solution for working with multi-site data. We evaluate five different image normalization methods for training a deep neural network to segment the prostate gland in MRI. Using 600 MRI prostate gland segmentations from two different sites, our results show that both intra-site and inter-site evaluation are critical for assessing the robustness of trained models and that training with single-site data produces models that fail to fully generalize across testing data from sites not included in the training.
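The five specific normalization schemes compared in the paper are not reproduced here; the sketch below shows two generic examples of the kind of per-image intensity normalization being evaluated:

```python
import numpy as np

def zscore_normalize(img, mask=None):
    """Zero-mean, unit-variance normalization, optionally computed over a foreground mask."""
    vox = img[mask] if mask is not None else img
    return (img - vox.mean()) / vox.std()

def percentile_rescale(img, lo=1.0, hi=99.0):
    """Clip to robust percentiles and rescale to [0, 1] to suppress scanner/site outliers."""
    p_lo, p_hi = np.percentile(img, [lo, hi])
    return (np.clip(img, p_lo, p_hi) - p_lo) / (p_hi - p_lo)
```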
Affiliation(s)
- John A Onofrey: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Andreas D Lauritzen: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Richard E Fan: Department of Urology, Stanford University, Palo Alto, CA, USA
- Geoffrey A Sonn: Department of Urology, Stanford University, Palo Alto, CA, USA
- Lawrence H Staib: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Xenophon Papademetris: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
|
26
|
Lu Y, Gallezot JD, Naganawa M, Ren S, Fontaine K, Wu J, Onofrey JA, Toyonaga T, Boutagy N, Mulnix T, Panin VY, Casey ME, Carson RE, Liu C. Data-driven voluntary body motion detection and non-rigid event-by-event correction for static and dynamic PET. Phys Med Biol 2019; 64:065002. [PMID: 30695768 DOI: 10.1088/1361-6560/ab02c2] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
PET has the potential to perform absolute in vivo radiotracer quantitation. This potential can be compromised by voluntary body motion (BM), which degrades image resolution, alters apparent tracer uptakes, introduces CT-based attenuation correction mismatch artifacts and causes inaccurate parameter estimates in dynamic studies. Existing body motion correction (BMC) methods include frame-based image-registration (FIR) approaches and real-time motion tracking using external measurement devices. FIR does not correct for motion occurring within a pre-defined frame and the device-based method is generally not practical in routine clinical use, since it requires attaching a tracking device to the patient and additional device set up time. In this paper, we proposed a data-driven algorithm, centroid of distribution (COD), to detect BM. In this algorithm, the central coordinate of the time-of-flight (TOF) bin, which can be used as a reasonable surrogate for the annihilation point, is calculated for every event, and averaged over a certain time interval to generate a COD trace. We hypothesized that abrupt changes on the COD trace in lateral direction represent BMs. After detection, BM is estimated using non-rigid image registrations and corrected through list-mode reconstruction. The COD-based BMC approach was validated using a monkey study and was evaluated against FIR using four human and one dog studies with multiple tracers. The proposed approach successfully detected BMs and yielded superior correction results over conventional FIR approaches.
Affiliation(s)
- Yihuan Lu (corresponding author): Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
|
27
|
Onofrey JA, Staib LH, Papademetris X. Segmenting the Brain Surface From CT Images With Artifacts Using Locally Oriented Appearance and Dictionary Learning. IEEE Trans Med Imaging 2019; 38:596-607. [PMID: 30176584 PMCID: PMC6476428 DOI: 10.1109/tmi.2018.2868045] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The accurate segmentation of the brain surface in post-surgical computed tomography (CT) images is critical for image-guided neurosurgical procedures in epilepsy patients. Following surgical implantation of intracranial electrodes, surgeons require accurate registration of the post-implantation CT images to the pre-implantation functional and structural magnetic resonance imaging to guide surgical resection of epileptic tissue. One way to perform the registration is via surface matching. The key challenge in this setup is the CT segmentation, where the extraction of the cortical surface is difficult due to the missing parts of the skull and artifacts introduced from the electrodes. In this paper, we present a dictionary learning-based method to segment the brain surface in post-surgical CT images of epilepsy patients following surgical implantation of electrodes. We propose learning a model of locally oriented appearance that captures both the normal tissue and the artifacts found along this brain surface boundary. Utilizing a database of clinical epilepsy imaging data to train and test our approach, we demonstrate that our method using locally oriented image appearance both more accurately extracts the brain surface and better localizes electrodes on the post-operative brain surface compared to standard, non-oriented appearance modeling. In addition, we compare our method to a standard atlas-based segmentation approach and to a U-Net-based deep convolutional neural network segmentation method.
Affiliation(s)
- John A. Onofrey: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Lawrence H. Staib: Departments of Radiology & Biomedical Imaging, Electrical Engineering, and Biomedical Engineering, Yale University, New Haven, CT 06520, USA
- Xenophon Papademetris: Departments of Radiology & Biomedical Imaging and Biomedical Engineering, Yale University, New Haven, CT 06520, USA
|
28
|
Lu Y, Fontaine K, Mulnix T, Onofrey JA, Ren S, Panin V, Jones J, Casey ME, Barnett R, Kench P, Fulton R, Carson RE, Liu C. Respiratory Motion Compensation for PET/CT with Motion Information Derived from Matched Attenuation-Corrected Gated PET Data. J Nucl Med 2018; 59:1480-1486. [PMID: 29439015 DOI: 10.2967/jnumed.117.203000] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2017] [Accepted: 01/25/2018] [Indexed: 11/16/2022] Open
Abstract
Respiratory motion degrades the detection and quantification capabilities of PET/CT imaging. Moreover, mismatch between a fast helical CT image and a time-averaged PET image due to respiratory motion results in additional attenuation correction artifacts and inaccurate localization. Current motion compensation approaches typically have 3 limitations: the mismatch among respiration-gated PET images and the CT attenuation correction (CTAC) map can introduce artifacts in the gated PET reconstructions that can subsequently affect the accuracy of the motion estimation; sinogram-based correction approaches do not correct for intragate motion due to intracycle and intercycle breathing variations; and the mismatch between the PET motion compensation reference gate and the CT image can cause an additional CT-mismatch artifact. In this study, we established a motion correction framework to address these limitations. Methods: In the proposed framework, the combined emission-transmission reconstruction algorithm was used for phase-matched gated PET reconstructions to facilitate the motion model building. An event-by-event nonrigid respiratory motion compensation method with correlations between internal organ motion and external respiratory signals was used to correct both intracycle and intercycle breathing variations. The PET reference gate was automatically determined by a newly proposed CT-matching algorithm. We applied the new framework to 13 human datasets with 3 different radiotracers and 323 lesions and compared its performance with CTAC and non-attenuation correction (NAC) approaches. Validation using 4-dimensional CT was performed for one lung cancer dataset. Results: For the 10 18F-FDG studies, the proposed method outperformed (P < 0.006) both the CTAC and the NAC methods in terms of region-of-interest-based SUVmean, SUVmax, and SUV ratio improvements over no motion correction (SUVmean: 19.9% vs. 14.0% vs. 13.2%; SUVmax: 15.5% vs. 10.8% vs. 10.6%; SUV ratio: 24.1% vs. 17.6% vs. 16.2%, for the proposed, CTAC, and NAC methods, respectively). The proposed method increased SUV ratios over no motion correction for 94.4% of lesions, compared with 84.8% and 86.4% using the CTAC and NAC methods, respectively. For the 2 18F-fluoropropyl-(+)-dihydrotetrabenazine studies, the proposed method reduced the CT-mismatch artifacts in the lower lung where the CTAC approach failed and maintained the quantification accuracy of bone marrow where the NAC approach failed. For the 18F-FMISO study, the proposed method outperformed both the CTAC and the NAC methods in terms of motion estimation accuracy at 2 lung lesion locations. Conclusion: The proposed PET/CT respiratory event-by-event motion-correction framework with motion information derived from matched attenuation-corrected PET data provides image quality superior to that of the CTAC and NAC methods for multiple tracers.
Affiliation(s)
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Kathryn Fontaine: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Tim Mulnix: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- John A Onofrey: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Silin Ren: Department of Biomedical Engineering, Yale University, New Haven, Connecticut
- Judson Jones: Siemens Medical Solutions, Knoxville, Tennessee
- Robert Barnett: Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, Sydney, Australia
- Peter Kench: Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, Sydney, Australia
- Roger Fulton: Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, Sydney, Australia
- Richard E Carson: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut; Department of Biomedical Engineering, Yale University, New Haven, Connecticut
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut; Department of Biomedical Engineering, Yale University, New Haven, Connecticut
|
29
|
Onofrey JA, Staib LH, Sarkar S, Venkataraman R, Nawaf CB, Sprenkle PC, Papademetris X. Learning Non-rigid Deformations for Robust, Constrained Point-based Registration in Image-Guided MR-TRUS Prostate Intervention. Med Image Anal 2017; 39:29-43. [PMID: 28431275 DOI: 10.1016/j.media.2017.04.001] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2016] [Revised: 02/28/2017] [Accepted: 04/03/2017] [Indexed: 01/13/2023]
Abstract
Accurate and robust non-rigid registration of pre-procedure magnetic resonance (MR) imaging to intra-procedure trans-rectal ultrasound (TRUS) is critical for image-guided biopsies of prostate cancer. Prostate cancer is one of the most prevalent forms of cancer and the second leading cause of cancer-related death in men in the United States. TRUS-guided biopsy is the current clinical standard for prostate cancer diagnosis and assessment. State-of-the-art, clinical MR-TRUS image fusion relies upon semi-automated segmentations of the prostate in both the MR and the TRUS images to perform non-rigid surface-based registration of the gland. Segmentation of the prostate in TRUS imaging is itself a challenging task and prone to high variability. These segmentation errors can lead to poor registration and subsequently poor localization of biopsy targets, which may result in false-negative cancer detection. In this paper, we present a non-rigid surface registration approach to MR-TRUS fusion based on a statistical deformation model (SDM) of intra-procedural deformations derived from clinical training data. Synthetic validation experiments quantifying registration volume of interest overlaps of the PI-RADS parcellation standard and tests using clinical landmark data demonstrate that our use of an SDM for registration, with median target registration error of 2.98 mm, is significantly more accurate than the current clinical method. Furthermore, we show that the low-dimensional SDM registration results are robust to segmentation errors that are not uncommon in clinical TRUS data.
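The core of the statistical deformation model is a principal component analysis over training deformation fields, after which registration searches only a handful of mode coefficients. A minimal sketch of that parameterization follows; the function names are illustrative, and the similarity metric and optimizer used for the actual MR-TRUS surface fusion are omitted.

```python
import numpy as np

def fit_sdm(train_defs, n_modes=10):
    """train_defs: (N, D) flattened training deformation fields (e.g., surface displacements).
    Returns the mean deformation and the first n_modes principal modes."""
    mean = train_defs.mean(axis=0)
    _, _, vt = np.linalg.svd(train_defs - mean, full_matrices=False)
    return mean, vt[:n_modes]

def synthesize(mean, modes, coeffs):
    """Deformation generated from a low-dimensional coefficient vector (the registration
    unknowns), constrained to the subspace spanned by the training deformations."""
    return mean + coeffs @ modes

# usage sketch: optimize `coeffs` (length n_modes) so the deformed MR prostate surface
# best matches the intra-procedural TRUS surface under the chosen similarity metric
```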
Affiliation(s)
- Lawrence H Staib: Department of Radiology & Biomedical Imaging, USA; Department of Electrical Engineering, USA; Department of Biomedical Engineering, USA
- Cayce B Nawaf: Department of Urology, Yale University, New Haven, Connecticut, USA
- Xenophon Papademetris: Department of Radiology & Biomedical Imaging, USA; Department of Biomedical Engineering, USA
|
30
|
Onofrey JA, Staib LH, Papademetris X. Learning intervention-induced deformations for non-rigid MR-CT registration and electrode localization in epilepsy patients. Neuroimage Clin 2015; 10:291-301. [PMID: 26900569 PMCID: PMC4724039 DOI: 10.1016/j.nicl.2015.12.001] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/09/2015] [Revised: 11/08/2015] [Accepted: 12/03/2015] [Indexed: 11/02/2022]
Abstract
This paper describes a framework for learning a statistical model of non-rigid deformations induced by interventional procedures. We make use of this learned model to perform constrained non-rigid registration of pre-procedural and post-procedural imaging. We demonstrate results applying this framework to non-rigidly register post-surgical computed tomography (CT) brain images to pre-surgical magnetic resonance images (MRIs) of epilepsy patients who had intra-cranial electroencephalography electrodes surgically implanted. Deformations caused by this surgical procedure, imaging artifacts caused by the electrodes, and the use of multi-modal imaging data make non-rigid registration challenging. Our results show that the use of our proposed framework to constrain the non-rigid registration process results in significantly improved and more robust registration performance compared to using standard rigid and non-rigid registration methods.
Affiliation(s)
- John A Onofrey: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Lawrence H Staib: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xenophon Papademetris: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
|
31
|
Onofrey JA, Staib LH, Sarkar S, Venkataraman R, Papademetris X. LEARNING NONRIGID DEFORMATIONS FOR CONSTRAINED POINT-BASED REGISTRATION FOR IMAGE-GUIDED MR-TRUS PROSTATE INTERVENTION. Proc IEEE Int Symp Biomed Imaging 2015; 2015:1592-1595. [PMID: 26405508 DOI: 10.1109/isbi.2015.7164184] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
This paper presents and validates a low-dimensional nonrigid registration method for fusing magnetic resonance imaging (MRI) and trans-rectal ultrasound (TRUS) in image-guided prostate biopsy. Prostate cancer is one of the most prevalent forms of cancer and the second leading cause of cancer-related death in men in the United States. Conventional clinical practice uses TRUS to guide prostate biopsies when there is a suspicion of cancer. Pre-procedural MRI information can reveal lesions and may be fused with intra-procedure TRUS imaging to provide patient-specific localization of lesions for targeting. The state-of-the-art MRI-TRUS nonrigid image fusion process relies upon semi-automated segmentation of the prostate in both the MRI and TRUS images. In this paper, we develop a fast, automated nonrigid registration approach to MRI-TRUS fusion based on a statistical deformation model of intra-procedural deformations derived from a clinical sample.
Affiliation(s)
- John A Onofrey: Department of Diagnostic Radiology, Yale University, New Haven, CT, USA
- Lawrence H Staib: Department of Diagnostic Radiology, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Xenophon Papademetris: Department of Diagnostic Radiology, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
|
32
|
Onofrey JA, Papademetris X, Staib LH. Low-Dimensional Non-Rigid Image Registration Using Statistical Deformation Models From Semi-Supervised Training Data. IEEE Trans Med Imaging 2015; 34:1522-1532. [PMID: 25720017 PMCID: PMC8802338 DOI: 10.1109/tmi.2015.2404572] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Accurate and robust image registration is a fundamental task in medical image analysis applications, and requires non-rigid transformations with a large number of degrees of freedom. Statistical deformation models (SDMs) attempt to learn the distribution of non-rigid deformations, and can be used both to reduce the transformation dimensionality and to constrain the registration process. However, high-dimensional SDMs are difficult to train given orders of magnitude fewer training samples. In this paper, we utilize both a small set of annotated imaging data and a large set of unlabeled data to effectively learn an SDM of non-rigid transformations in a semi-supervised training (SST) framework. We demonstrate results applying this framework towards inter-subject registration of skull-stripped, magnetic resonance (MR) brain images. Our approach makes use of 39 labeled MR datasets to create a set of supervised registrations, which we augment with a set of over 1200 unsupervised registrations using unlabeled MRIs. Through leave-one-out cross validation, we show that SST of a non-rigid SDM results in a robust registration algorithm with significantly improved accuracy compared to standard, intensity-based registration, and does so with a 99% reduction in transformation dimensionality.
Affiliation(s)
- John A. Onofrey: Department of Diagnostic Radiology, Yale University, New Haven, CT 06520 USA
- Xenophon Papademetris: Departments of Diagnostic Radiology and Biomedical Engineering, Yale University, New Haven, CT 06520 USA
- Lawrence H. Staib: Departments of Diagnostic Radiology, Electrical Engineering, and Biomedical Engineering, Yale University, New Haven, CT 06520 USA
|
33
|
Onofrey JA, Staib LH, Papademetris X. Segmenting the Brain Surface from CT Images with Artifacts Using Dictionary Learning for Non-rigid MR-CT Registration. Inf Process Med Imaging 2015. [PMID: 26221711 PMCID: PMC5266617 DOI: 10.1007/978-3-319-19992-4_52] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
This paper presents a dictionary learning-based method to segment the brain surface in post-surgical CT images of epilepsy patients following surgical implantation of electrodes. Using the electrodes identified in the post-implantation CT, surgeons require accurate registration with pre-implantation functional and structural MR imaging to guide surgical resection of epileptic tissue. In this work, we use a surface-based registration method to align the MR and CT brain surfaces. The key challenge here is not the registration, but rather the extraction of the cortical surface from the CT image, which includes missing parts of the skull and artifacts introduced by the electrodes. To segment the brain from these images, we propose learning a model of appearance that captures both the normal tissue and the artifacts found along this brain surface boundary. Using clinical data, we demonstrate that our method both accurately extracts the brain surface and better localizes electrodes than intensity-based rigid and non-rigid registration methods.
|
34
|
Onofrey JA, Staib LH, Papademetris X. Learning nonrigid deformations for constrained multi-modal image registration. Med Image Comput Comput Assist Interv 2013; 16:171-8. [PMID: 24505758 DOI: 10.1007/978-3-642-40760-4_22] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
We present a new strategy to constrain nonrigid registrations of multi-modal images using a low-dimensional statistical deformation model and test this in registering pre-operative and post-operative images from epilepsy patients. For those patients who may undergo surgical resection for treatment, the current gold-standard to identify regions of seizure involves craniotomy and implantation of intracranial electrodes. To guide surgical resection, surgeons utilize pre-op anatomical and functional MR images in conjunction with post-electrode implantation MR and CT images. The electrode positions from the CT image need to be registered to pre-op functional and structural MR images. The post-op MRI serves as an intermediate registration step between the pre-op MR and CT images. In this work, we propose to bypass the post-op MR image registration step and directly register the pre-op MR and post-op CT images using a low-dimensional nonrigid registration that captures the gross deformation after electrode implantation. We learn the nonrigid deformation characteristics from a principal component analysis of a set of training deformations and demonstrate results using clinical data. We show that our technique significantly outperforms both standard rigid and nonrigid intensity-based registration methods in terms of mean and maximum registration error.
Affiliation(s)
- John A Onofrey: Department of Biomedical Engineering, Yale University, New Haven, CT 06520, USA
- Lawrence H Staib: Department of Biomedical Engineering, Yale University, New Haven, CT 06520, USA
|
35
|
Onofrey JA, Staib LH, Papademetris X. FAST NONRIGID IMAGE REGISTRATION USING STATISTICAL DEFORMATION MODELS LEARNED FROM RICHLY-ANNOTATED DATA. Proc IEEE Int Symp Biomed Imaging 2013:580-583. [PMID: 25000401 DOI: 10.1109/isbi.2013.6556541] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Nonrigid image registrations require a large number of degrees of freedom (DoFs) to capture intersubject anatomical variations. With such high DoFs and lack of anatomical correspondences, algorithms may not converge to the globally optimal solution. In this work, we propose a fast, two-step nonrigid registration procedure with low DoFs to accurately register brain images. Our method makes use of a statistical deformation model based upon a principal component analysis of deformations learned from a manually-segmented dataset to perform an initial registration. We then follow with a low DoF nonrigid transformation to complete the registration. Our results show the same registration accuracy in terms of volume of interest overlap as high DoF transformations, but with a 96% reduction in DoF and 98% decrease in computation time.
Affiliation(s)
- John A Onofrey: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Lawrence H Staib: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Diagnostic Radiology, Yale University, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Xenophon Papademetris: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Diagnostic Radiology, Yale University, New Haven, CT, USA
|