1. Jager A, Zwart MJ, Postema AW, van den Kroonenberg DL, Zwart W, Beerlage HP, Oddens JR, Mischi M. Development and validation of a framework for registration of whole-mount radical prostatectomy histopathology with three-dimensional transrectal ultrasound. BMC Urol 2025; 25:73. [PMID: 40175990] [PMCID: PMC11966914] [DOI: 10.1186/s12894-025-01736-4]
Abstract
PURPOSE Artificial intelligence (AI) has the potential to improve diagnostic imaging on multiple levels. To develop and validate these AI-assisted modalities, a reliable dataset is of utmost importance. The registration of imaging to pathology is an essential step in creating such a dataset. This study presents a comprehensive framework for the registration of 3D transrectal ultrasound (TRUS) to radical prostatectomy specimen (RPS) pathology. METHODS The study enrolled patients who underwent 3D TRUS and were scheduled for radical prostatectomy. A four-step process for registering RPS pathology to TRUS was used: image segmentation, 3D reconstruction of RPS pathology, registration, and ground-truth calculation. Accuracy was assessed using a target-registration error (TRE) based on landmarks visible on both TRUS and pathology. RESULTS Twenty sets of 3D TRUS and RPS pathology were included in the analyses. The mean TRE was 3.5 mm (range: 0.4 to 5.4 mm), with TRE values in the apex-base, left-right, and posterior-anterior directions of 2.5 mm, 1.1 mm, and 1.4 mm, respectively. CONCLUSION The framework proposed in this study accomplishes precise registration between prostate pathology and imaging. The methodologies employed hold potential for broader application across diverse imaging modalities and other target organs. However, limitations such as the small sample size and the need for manual segmentation should be considered when interpreting the results. Future efforts should focus on automating key steps to enhance reproducibility and scalability.
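For readers implementing a similar validation: the landmark-based TRE described here reduces to the Euclidean distance between corresponding points after registration. A minimal sketch with hypothetical landmark coordinates (NumPy only; not the authors' code):

```python
import numpy as np

# Paired landmark coordinates (mm) visible on both modalities after
# registration; rows are landmarks, columns are (apex-base, left-right,
# posterior-anterior). Values here are hypothetical.
trus_pts = np.array([[10.2, 31.5, 22.0],
                     [18.7, 28.9, 30.4],
                     [25.1, 35.2, 19.8]])
path_pts = np.array([[12.0, 32.1, 23.5],
                     [20.1, 29.5, 31.0],
                     [27.9, 34.8, 21.2]])

diff = path_pts - trus_pts
tre_per_landmark = np.linalg.norm(diff, axis=1)   # 3D error per landmark
print("mean TRE (mm):", tre_per_landmark.mean())
# Per-axis components, as reported in the paper (apex-base, L-R, P-A)
print("per-axis mean |error| (mm):", np.abs(diff).mean(axis=0))
```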
Affiliation(s)
- Auke Jager
  - Department of Urology, Amsterdam UMC location Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
  - Department of Urology, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, Amsterdam, The Netherlands
- Marije J Zwart
  - Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
  - Angiogenesis Analytics, JADS Venture Campus, 's-Hertogenbosch, The Netherlands
- Arnoud W Postema
  - Department of Urology, Leiden University Medical Center, Leiden, The Netherlands
- Daniel L van den Kroonenberg
  - Department of Urology, Amsterdam UMC location Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- Wim Zwart
  - Angiogenesis Analytics, JADS Venture Campus, 's-Hertogenbosch, The Netherlands
- Harrie P Beerlage
  - Department of Urology, Amsterdam UMC location Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- J R Oddens
  - Department of Urology, Amsterdam UMC location Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
  - Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Massimo Mischi
  - Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
2. Li CX, Bhattacharya I, Vesal S, Ghanouni P, Jahanandish H, Fan RE, Sonn GA, Rusu M. ProstAtlasDiff: Prostate cancer detection on MRI using Diffusion Probabilistic Models guided by population spatial cancer atlases. Med Image Anal 2025; 101:103486. [PMID: 39970527] [DOI: 10.1016/j.media.2025.103486]
Abstract
Magnetic Resonance Imaging (MRI) is increasingly being used to detect prostate cancer, yet its interpretation can be challenging due to subtle differences between benign and cancerous tissue. Recently, Denoising Diffusion Probabilistic Models (DDPMs) have shown great utility for medical image segmentation, modeling the process as noise removal in standard Gaussian distributions. In this study, we further enhance DDPMs by introducing the knowledge that the occurrence of cancer varies across the prostate (e.g., ∼70% of prostate cancers occur in the peripheral zone). We quantify such heterogeneity with a registration pipeline to calculate voxel-level means and variances of the cancer distribution. Our proposed approach, ProstAtlasDiff, relies on DDPMs that use the cancer atlas to model noise removal and segment cancer on MRI. We trained and evaluated ProstAtlasDiff for the detection of clinically significant cancer in a multi-institution, multi-scanner dataset and compared it with alternative models. In a lesion-level evaluation, ProstAtlasDiff achieved statistically significantly higher accuracy (0.91 vs. 0.85, p<0.001), specificity (0.91 vs. 0.84, p<0.001), and positive predictive value (PPV, 0.50 vs. 0.35, p<0.001) than alternative models. ProstAtlasDiff also offers more accurate cancer outlines, achieving a higher Dice coefficient (0.33 vs. 0.31, p<0.01). Furthermore, we evaluated ProstAtlasDiff in an independent cohort of 91 patients who underwent radical prostatectomy to compare its performance with that of radiologists, relative to whole-mount histopathology ground truth. ProstAtlasDiff detected 16% (15 of 93 lesions) more clinically significant cancers than radiologists (sensitivity: 0.90 vs. 0.75, p<0.01) and was comparable in terms of ROC-AUC, PR-AUC, PPV, accuracy, and Dice coefficient (p≥0.05). In a second independent cohort of 537 subjects, ProstAtlasDiff again outperformed alternative approaches. These results suggest that ProstAtlasDiff has the potential to assist in localizing cancer for biopsy guidance and treatment planning.
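The atlas construction described above amounts to voxel-wise statistics over co-registered binary cancer labels. A sketch under that reading, with synthetic label maps standing in for the registered cohort:

```python
import numpy as np

# Stack of N binary cancer label maps, all deformed into a common
# prostate template space by a registration pipeline (synthetic here).
rng = np.random.default_rng(0)
labels = rng.random((200, 64, 64, 32)) < 0.1   # N x X x Y x Z

atlas_mean = labels.mean(axis=0)   # voxel-wise cancer frequency
atlas_var = labels.var(axis=0)     # voxel-wise variance

# The atlas could then condition a DDPM, e.g. by concatenating it to the
# network input alongside the MRI and the noisy segmentation.
print(atlas_mean.shape, atlas_var.shape)
```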
Affiliation(s)
- Cynthia Xinran Li
  - Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA 94305, USA
- Indrani Bhattacharya
  - Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Sulaiman Vesal
  - Department of Urology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Pejman Ghanouni
  - Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Hassan Jahanandish
  - Department of Urology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Richard E Fan
  - Department of Urology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Geoffrey A Sonn
  - Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA
  - Department of Urology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Mirabela Rusu
  - Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA
  - Department of Urology, Stanford University School of Medicine, Stanford, CA 94305, USA
  - Department of Biomedical Data Science, Stanford University, Stanford, CA 94305, USA
3. Rusu M, Jahanandish H, Vesal S, Li CX, Bhattacharya I, Venkataraman R, Zhou SR, Kornberg Z, Sommer ER, Khandwala YS, Hockman L, Zhou Z, Choi MH, Ghanouni P, Fan RE, Sonn GA. ProCUSNet: Prostate Cancer Detection on B-mode Transrectal Ultrasound Using Artificial Intelligence for Targeting During Prostate Biopsies. Eur Urol Oncol 2025; 8:477-485. [PMID: 39880746] [PMCID: PMC11930619] [DOI: 10.1016/j.euo.2024.12.012]
Abstract
BACKGROUND AND OBJECTIVE To assess whether conventional brightness-mode (B-mode) transrectal ultrasound images of the prostate reveal clinically significant cancers with the help of artificial intelligence methods. METHODS This study included 2986 men who underwent biopsies at two institutions. We trained the PROstate Cancer detection on B-mode transrectal UltraSound images NETwork (ProCUSNet) to determine whether ultrasound can reliably detect cancer. Specifically, ProCUSNet is based on the well-established nnUNet framework and seeks to detect and outline clinically significant cancer on three-dimensional (3D) examinations reconstructed from 2D screen captures. We compared ProCUSNet against (1) reference labels (n = 515 patients), (2) eight readers who interpreted B-mode ultrasound (n = 20-80 patients), and (3) radiologists interpreting magnetic resonance imaging (MRI) for clinical care (n = 110 radical prostatectomy patients). KEY FINDINGS AND LIMITATIONS ProCUSNet detected 82% of clinically significant cancer cases with a lesion boundary error of up to 2.67 mm and found 42% more lesions than ultrasound readers (sensitivity: 0.86 vs 0.44, p < 0.05, Wilcoxon test, Bonferroni correction). Furthermore, ProCUSNet performed similarly to radiologists interpreting MRI when accounting for registration errors (sensitivity: 0.79 vs 0.78, p > 0.05, Wilcoxon test, Bonferroni correction), while having the same targeting utility as a supplement to systematic biopsies. CONCLUSIONS AND CLINICAL IMPLICATIONS ProCUSNet can localize clinically significant cancer on screen-capture B-mode ultrasound, a task that is particularly challenging for clinicians reading these examinations. As a supplement to systematic biopsies, ProCUSNet appears comparable with MRI, suggesting its utility for targeting suspicious lesions during biopsy and possibly for screening with ultrasound alone, in the absence of MRI.
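The reader comparison above pairs model and reader outcomes on the same cases; the abstract names Wilcoxon tests with Bonferroni correction. A sketch of that comparison on synthetic paired detections (SciPy; illustrative only):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_cases = 40
# Per-case detection (1 = lesion found) for the model and one reader on
# the same cases; values are synthetic stand-ins for paired outcomes.
model_hit = rng.random(n_cases) < 0.86
reader_hit = rng.random(n_cases) < 0.44

stat, p = wilcoxon(model_hit.astype(int), reader_hit.astype(int))
n_comparisons = 8                     # one test per reader
p_bonferroni = min(1.0, p * n_comparisons)
print(f"sensitivity model={model_hit.mean():.2f} reader={reader_hit.mean():.2f} "
      f"corrected p={p_bonferroni:.4f}")
```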
Affiliation(s)
- Mirabela Rusu
  - Department of Radiology, Stanford University, Stanford, CA, USA
  - Department of Urology, Stanford University, Stanford, CA, USA
  - Department of Biomedical Data Science, Stanford University, 300 Pasteur Drive, Stanford, CA, USA
- Hassan Jahanandish
  - Department of Radiology, Stanford University, Stanford, CA, USA
  - Department of Urology, Stanford University, Stanford, CA, USA
- Sulaiman Vesal
  - Department of Radiology, Stanford University, Stanford, CA, USA
  - Department of Urology, Stanford University, Stanford, CA, USA
- Cynthia Xinran Li
  - Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA, USA
- Steve Ran Zhou
  - Department of Urology, Stanford University, Stanford, CA, USA
- Luke Hockman
  - Department of Urology, Stanford University, Stanford, CA, USA
- Zhien Zhou
  - Peking Union Medical College Hospital, Beijing, China
- Moon Hyung Choi
  - Department of Radiology, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Richard E Fan
  - Department of Urology, Stanford University, Stanford, CA, USA
- Geoffrey A Sonn
  - Department of Radiology, Stanford University, Stanford, CA, USA
  - Department of Urology, Stanford University, Stanford, CA, USA
4. Phillips R, Zakkaroff C, Dittmer K, Robilliard N, Baer K, Butler A. A Proof-of-Concept Solution for Co-locating 2D Histology Images in 3D for Histology-to-CT and MR Image Registration: Closing the Loop for Bone Sarcoma Treatment Planning. J Imaging Inform Med 2025. [PMID: 40011346] [DOI: 10.1007/s10278-025-01426-5]
Abstract
This work presents a proof-of-concept solution designed to facilitate more accurate radiographic feature characterisation in pre-surgical CT/MR volumes. The solution involves 3D co-location of 2D digital histology slides within ex vivo CT volumes of tumour tissue. Initially, laboratory dissection measurements seed the placement of histology slices in the corresponding CT volumes; in-plane point-based registration then aligns bone in the histology images to bone in CT. Validation using six bisected canine humerus ex vivo CT datasets indicated a plane misalignment of 0.19 ± 1.8 mm. Sensitivity to user input was assessed at 0.08 ± 0.2 mm for plane translation and 0-1.6° of angular deviation. These results show a magnitude of error similar to related prostate histology co-location work. Although demonstrated with a femoral canine sarcoma tumour, this solution can be generalised to various orthopaedic geometries and sites. It supports high-fidelity histology image co-location to improve understanding of tissue characterisation accuracy in clinical radiology, and it requires only minimal adjustment to routine workflows. By integrating histology insights earlier in the presentation-diagnosis-planning-surgery-recovery loop, this solution guides data co-location to support the continued evaluation of safe pre-surgical margins.
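The in-plane point-based registration step can be realized with a closed-form least-squares (Kabsch/Procrustes) fit; a sketch with hypothetical bone landmark points, not the authors' implementation:

```python
import numpy as np

def rigid_2d(src, dst):
    """Least-squares 2D rigid transform (rotation + translation) mapping
    src points onto dst points, via the Kabsch/Procrustes method."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflection
    r = vt.T @ np.diag([1.0, d]) @ u.T
    t = dst.mean(0) - r @ src.mean(0)
    return r, t

# Hypothetical bone contour points marked on histology and on the CT plane.
hist_pts = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]])
theta = np.deg2rad(12.0)
true_r = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
ct_pts = hist_pts @ true_r.T + np.array([3.0, -1.5])

r, t = rigid_2d(hist_pts, ct_pts)
residual = np.linalg.norm(hist_pts @ r.T + t - ct_pts, axis=1)
print("max residual (mm):", residual.max())
```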
Affiliation(s)
- Robert Phillips
  - The University of Otago - Canterbury, Christchurch, New Zealand
- Kenzie Baer
  - The University of Otago - Canterbury, Christchurch, New Zealand
5. Wess M, Andersen MK, Midtbust E, Guillem JCC, Viset T, Størkersen Ø, Krossa S, Rye MB, Tessem MB. Spatial integration of multi-omics data from serial sections using the novel Multi-Omics Imaging Integration Toolset. Gigascience 2025; 14:giaf035. [PMID: 40366868] [PMCID: PMC12077394] [DOI: 10.1093/gigascience/giaf035]
Abstract
BACKGROUND Truly understanding the cancer biology of heterogeneous tumors in precision medicine requires capturing the complexities of multiple omics levels and the spatial heterogeneity of cancer tissue. Techniques like mass spectrometry imaging (MSI) and spatial transcriptomics (ST) achieve this by spatially detecting metabolites and RNA, but they are often applied to serial sections. To fully leverage such multi-omics data, the individual measurements need to be integrated into a single dataset. RESULTS We present the Multi-Omics Imaging Integration Toolset (MIIT), a Python framework for integrating spatially resolved multi-omics data. A key component of MIIT's integration is the registration of serial sections, for which we developed a nonrigid registration algorithm, GreedyFHist. We validated GreedyFHist on 244 images from fresh-frozen serial sections, achieving state-of-the-art performance. As a proof of concept, we used MIIT to integrate ST and MSI data from prostate tissue samples and assessed the correlation of a gene signature for citrate-spermine secretion derived from ST with metabolic measurements from MSI. CONCLUSION MIIT is a highly accurate, customizable, open-source framework for integrating spatial omics technologies performed on different serial sections.
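Once the serial sections are co-registered, the downstream analysis in the proof of concept reduces to correlating paired per-spot measurements; a sketch with synthetic values (not MIIT's code):

```python
import numpy as np
from scipy.stats import pearsonr

# After MIIT-style co-registration, each ST spot can be paired with the
# MSI intensity at its mapped coordinate. Synthetic paired values below.
rng = np.random.default_rng(2)
signature_score = rng.normal(size=150)          # per-spot gene signature
metabolite = 0.7 * signature_score + rng.normal(scale=0.5, size=150)

r, p = pearsonr(signature_score, metabolite)
print(f"Pearson r={r:.2f}, p={p:.1e}")
```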
Affiliation(s)
- Maximilian Wess
  - Department of Circulation and Medical Imaging, NTNU–Norwegian University of Science and Technology, Trondheim, 7491, Norway
  - ELIXIR, Norway
- Maria K Andersen
  - Department of Circulation and Medical Imaging, NTNU–Norwegian University of Science and Technology, Trondheim, 7491, Norway
  - Clinic of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, 7006, Norway
- Elise Midtbust
  - Department of Circulation and Medical Imaging, NTNU–Norwegian University of Science and Technology, Trondheim, 7491, Norway
  - Clinic of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, 7006, Norway
- Juan Carlos Cabellos Guillem
  - Department of Circulation and Medical Imaging, NTNU–Norwegian University of Science and Technology, Trondheim, 7491, Norway
- Trond Viset
  - Department of Pathology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, 7030, Norway
- Øystein Størkersen
  - Department of Pathology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, 7030, Norway
- Sebastian Krossa
  - Department of Circulation and Medical Imaging, NTNU–Norwegian University of Science and Technology, Trondheim, 7491, Norway
  - Central staff, St. Olavs Hospital HF, Trondheim, 7006, Norway
- Morten Beck Rye
  - ELIXIR, Norway
  - Clinic of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, 7006, Norway
  - Department of Clinical and Molecular Medicine, NTNU–Norwegian University of Science and Technology, Trondheim, 7491, Norway
  - Clinic of Laboratory Medicine, St. Olavs Hospital, Trondheim University Hospital, Trondheim, 7006, Norway
  - BioCore–Bioinformatics Core Facility, NTNU–Norwegian University of Science and Technology, Trondheim, 7030, Norway
- May-Britt Tessem
  - Department of Circulation and Medical Imaging, NTNU–Norwegian University of Science and Technology, Trondheim, 7491, Norway
  - Clinic of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, 7006, Norway
6. Schmidt B, Soerensen SJC, Bhambhvani HP, Fan RE, Bhattacharya I, Choi MH, Kunder CA, Kao C, Higgins J, Rusu M, Sonn GA. External validation of an artificial intelligence model for Gleason grading of prostate cancer on prostatectomy specimens. BJU Int 2025; 135:133-139. [PMID: 38989669] [PMCID: PMC11628895] [DOI: 10.1111/bju.16464]
Abstract
OBJECTIVES To externally validate the performance of the DeepDx Prostate artificial intelligence (AI) algorithm (Deep Bio Inc., Seoul, South Korea) for Gleason grading on whole-mount prostate histopathology, considering potential variations observed when applying AI models trained on biopsy samples to radical prostatectomy (RP) specimens due to inherent differences in tissue representation and sample size. MATERIALS AND METHODS The commercially available DeepDx Prostate AI algorithm is an automated Gleason grading system that was previously trained using 1133 prostate core biopsy images and validated on 700 biopsy images from two institutions. We assessed the AI algorithm's performance, which outputs Gleason patterns (3, 4, or 5), on 500 1-mm² tiles created from 150 whole-mount RP specimens from a third institution. These patterns were then grouped into grade groups (GGs) for comparison with expert pathologist assessments. The reference standard was the International Society of Urological Pathology GG as established by two experienced uropathologists with a third expert to adjudicate discordant cases. We defined the main metric as the agreement with the reference standard, using Cohen's kappa. RESULTS The agreement between the two experienced pathologists in determining GGs at the tile level had a quadratically weighted Cohen's kappa of 0.94. The agreement between the AI algorithm and the reference standard in differentiating cancerous vs non-cancerous tissue had an unweighted Cohen's kappa of 0.91. Additionally, the AI algorithm's agreement with the reference standard in classifying tiles into GGs had a quadratically weighted Cohen's kappa of 0.89. In distinguishing cancerous vs non-cancerous tissue, the AI algorithm achieved a sensitivity of 0.997 and specificity of 0.88; in classifying GG ≥2 vs GG 1 and non-cancerous tissue, it demonstrated a sensitivity of 0.98 and specificity of 0.85. CONCLUSION The DeepDx Prostate AI algorithm had excellent agreement with expert uropathologists and performance in cancer identification and grading on RP specimens, despite being trained on biopsy specimens from an entirely different patient population.
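The agreement metrics reported here are standard Cohen's kappa variants; a sketch using scikit-learn with hypothetical per-tile grade groups:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-tile grade groups (0 = benign, 1-5 = GG1-GG5) from the
# reference standard and the AI algorithm.
reference = [0, 1, 2, 2, 3, 4, 5, 1, 0, 2, 3, 3]
ai_output = [0, 1, 2, 3, 3, 4, 4, 1, 0, 2, 3, 2]

# Quadratically weighted kappa for ordinal GG agreement
kappa_quadratic = cohen_kappa_score(reference, ai_output, weights="quadratic")
# Unweighted kappa for the binary cancer vs non-cancer question
kappa_unweighted = cohen_kappa_score(
    [int(g > 0) for g in reference],
    [int(g > 0) for g in ai_output])
print(kappa_quadratic, kappa_unweighted)
```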
Affiliation(s)
- Bogdana Schmidt
  - Division of Urology, Department of Surgery, Huntsman Cancer Hospital, University of Utah, Salt Lake City, UT, USA
- Simon John Christoph Soerensen
  - Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
  - Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Hriday P Bhambhvani
  - Department of Urology, Weill Cornell Medical College, New York-Presbyterian Hospital, New York, NY, USA
- Richard E Fan
  - Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Moon Hyung Choi
  - Department of Radiology, College of Medicine, Eunpyeong St. Mary's Hospital, The Catholic University of Korea, Seoul, Korea
- Chia-Sui Kao
  - Department of Pathology and Laboratory Medicine, Cleveland Clinic, Cleveland, OH, USA
- John Higgins
  - Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- Mirabela Rusu
  - Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
  - Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
  - Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Geoffrey A Sonn
  - Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
  - Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
7. Shao M, Singh A, Johnson S, Pessin A, Merrill R, Page A, Odéen H, Joshi S, Payne A. Design and evaluation of an open-source block face imaging system for 2D histology to magnetic resonance image registration. MethodsX 2024; 13:103062. [PMID: 39687592] [PMCID: PMC11647474] [DOI: 10.1016/j.mex.2024.103062]
Abstract
This study introduces a comprehensive hardware-software framework designed to enhance the quality of block face image capture, an essential intermediary step for registering 2D histology images to ex vivo magnetic resonance (MR) images. A customized camera mounting and lighting system is employed to maintain consistent relative positioning and lighting conditions. Departing from traditional transparent paraffin, dyed paraffin is utilized to enhance contrast for subsequent automatic segmentation. Our software facilitates fully automated data collection and organization, complemented by a real-time Quality Assurance (QA) section to assess the quality of each captured image during the sectioning process. The setup is evaluated and validated using rabbit muscle and rat brain that underwent MR-guided focused ultrasound ablations. The customized hardware system establishes a robust image-capture environment, and the real-time QA section enables operators to promptly rectify low-quality captures, thereby preventing data loss. The execution of our proposed framework produces robust registration results for H&E images to ex vivo MR images.
• The presented hardware-software framework ensures the uniformity and resilience of the block face image capture process, contributing to a more reliable and efficient registration of 2D histology images to ex vivo MR images.
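The dyed-paraffin design suggests that tissue/background separation can rest on a simple colour rule; a toy sketch under that assumption (synthetic image; the actual segmentation pipeline is not detailed in the abstract):

```python
import numpy as np

# Toy RGB block face image: a tissue block on a blue-dyed paraffin
# background (values synthetic; a real capture would come from the camera).
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[...] = (40, 60, 180)             # dyed paraffin: blue-dominant
img[30:70, 25:80] = (200, 170, 150)  # tissue: red-dominant

r = img[..., 0].astype(int)
b = img[..., 2].astype(int)
tissue_mask = r > b                  # dye makes the background separable
print("tissue fraction:", tissue_mask.mean())
```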
Affiliation(s)
- Mingzhen Shao
  - Kahlert School of Computing, Scientific Computing and Imaging Institute, University of Utah, 72 S Central Campus Drive, Salt Lake City, UT, 84112, USA
- Amanpreet Singh
  - Kahlert School of Computing, Scientific Computing and Imaging Institute, University of Utah, 72 S Central Campus Drive, Salt Lake City, UT, 84112, USA
- Sara Johnson
  - Department of Radiology and Imaging Sciences, University of Utah, 729 Arapeen Drive, Salt Lake City, UT, 84109, USA
- Alissa Pessin
  - Department of Radiology and Imaging Sciences, University of Utah, 729 Arapeen Drive, Salt Lake City, UT, 84109, USA
- Robb Merrill
  - Department of Radiology and Imaging Sciences, University of Utah, 729 Arapeen Drive, Salt Lake City, UT, 84109, USA
- Ariana Page
  - Department of Radiology and Imaging Sciences, University of Utah, 729 Arapeen Drive, Salt Lake City, UT, 84109, USA
- Henrik Odéen
  - Department of Radiology and Imaging Sciences, University of Utah, 729 Arapeen Drive, Salt Lake City, UT, 84109, USA
- Sarang Joshi
  - Biomedical Engineering Department, Scientific Computing and Imaging Institute, University of Utah, 72 S Central Campus Drive, Salt Lake City, UT, 84112, USA
- Allison Payne
  - Department of Radiology and Imaging Sciences, University of Utah, 729 Arapeen Drive, Salt Lake City, UT, 84109, USA
8. Maki JH, Patel NU, Ulrich EJ, Dhaouadi J, Jones RW. Part I: prostate cancer detection, artificial intelligence for prostate cancer and how we measure diagnostic performance: a comprehensive review. Curr Probl Diagn Radiol 2024; 53:606-613. [PMID: 38658286] [DOI: 10.1067/j.cpradiol.2024.04.002]
Abstract
MRI has firmly established itself as a mainstay for the detection, staging, and surveillance of prostate cancer. Despite its success, prostate MRI continues to suffer from poor inter-reader variability and a low positive predictive value. Artificial Intelligence (AI) shows great promise for improving diagnostic performance. Understanding and interpreting the AI landscape and the ever-increasing research literature, however, is difficult, in part because of widely varying study designs and reporting techniques. This paper aims to address this need by first outlining the different types of AI used for the detection and diagnosis of prostate cancer, then deciphering how data collection methods, statistical analysis metrics (such as ROC and FROC analysis), and endpoints/outcomes (lesion detection vs. case diagnosis) affect performance and limit the ability to compare between studies. Finally, this work explores the need for appropriately enriched investigational datasets and proper ground truth, and provides guidance on how to best conduct AI prostate MRI studies. A clinical study published in parallel applies this suggested study design to a multiple-reader, multiple-case evaluation of 150 bi-parametric prostate MRI studies across nine readers, measuring physician performance both with and without the use of a recently FDA-cleared artificial intelligence software.
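To make the ROC-versus-FROC distinction concrete, the following sketch computes a case-level ROC AUC on synthetic suspicion scores (scikit-learn); FROC analysis would instead need lesion-level localization data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)
# Case-level ground truth (1 = clinically significant cancer) and a
# model's suspicion score per case; both synthetic.
y_true = rng.random(200) < 0.3
scores = np.where(y_true, rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 200))

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)
print(f"case-level ROC AUC = {auc:.2f}")
# FROC would plot lesion-level sensitivity against the mean number of
# false positives per case, which a case-level ROC AUC does not capture.
```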
Affiliation(s)
- Jeffrey H Maki
  - Department of Radiology, University of Colorado Anschutz Medical Center, 12401 E 17th Ave (MS L954), Aurora, Colorado, USA
- Nayana U Patel
  - Department of Radiology, University of New Mexico, Albuquerque, NM, USA
9. Bai X, Wang H, Qin Y, Han J, Yu N. MatchMorph: A real-time pre- and intra-operative deformable image registration framework for MRI-guided surgery. Comput Biol Med 2024; 180:108948. [PMID: 39121681] [DOI: 10.1016/j.compbiomed.2024.108948]
Abstract
PURPOSE The technological advancements in surgical robots compatible with magnetic resonance imaging (MRI) have created an indispensable demand for real-time deformable image registration (DIR) of pre- and intra-operative MRI, but relevant methods are lacking. Challenges arise from dimensionality mismatch, resolution discrepancy, non-rigid deformation, and the requirement for real-time registration. METHODS In this paper, we propose a real-time DIR framework called MatchMorph, specifically designed for the registration of low-resolution local intraoperative MRI and high-resolution global preoperative MRI. First, a super-resolution network based on global inference is developed to enhance the resolution of intraoperative MRI to that of preoperative MRI, resolving the resolution discrepancy. Second, a fast matching algorithm is designed to identify the optimal position of the intraoperative MRI within the corresponding preoperative MRI, addressing the dimensionality mismatch. Further, a cross-attention-based dual-stream DIR network is constructed to estimate the deformation between pre- and intra-operative MRI in real time. RESULTS We conducted comprehensive experiments on the publicly available IXI and OASIS datasets to evaluate the performance of the proposed MatchMorph framework. Compared with the state-of-the-art (SOTA) network TransMorph, the designed dual-stream DIR network of MatchMorph achieved superior performance, with a 1.306 mm smaller HD and a 0.07 mm smaller ASD score on the IXI dataset. Furthermore, the MatchMorph framework demonstrates an inference speed of approximately 280 ms. CONCLUSIONS The qualitative and quantitative registration results obtained from high-resolution global preoperative MRI and simulated low-resolution local intraoperative MRI validated the effectiveness and efficiency of the proposed MatchMorph framework.
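The HD and ASD metrics used for evaluation can be computed from surface point sets; a sketch with synthetic surfaces (SciPy; not the MatchMorph code):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

# Two surface point clouds, e.g. a structure contour before and after
# deformable registration (synthetic points, units in mm).
rng = np.random.default_rng(4)
surf_a = rng.random((500, 3)) * 50
surf_b = surf_a + rng.normal(scale=0.5, size=surf_a.shape)

# Symmetric Hausdorff distance (worst-case surface mismatch)
hd = max(directed_hausdorff(surf_a, surf_b)[0],
         directed_hausdorff(surf_b, surf_a)[0])
# Average symmetric surface distance (ASD)
d_ab = cKDTree(surf_b).query(surf_a)[0]
d_ba = cKDTree(surf_a).query(surf_b)[0]
asd = (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))
print(f"HD = {hd:.2f} mm, ASD = {asd:.2f} mm")
```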
Affiliation(s)
- Xinhao Bai
  - College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
  - Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Nankai University, Tianjin, 300350, China
  - Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen, 518083, China
- Hongpeng Wang
  - College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
  - Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Nankai University, Tianjin, 300350, China
  - Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen, 518083, China
- Yanding Qin
  - College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
  - Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Nankai University, Tianjin, 300350, China
  - Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen, 518083, China
- Jianda Han
  - College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
  - Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Nankai University, Tianjin, 300350, China
  - Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen, 518083, China
- Ningbo Yu
  - College of Artificial Intelligence, Nankai University, Tianjin, 300350, China
  - Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Nankai University, Tianjin, 300350, China
  - Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen, 518083, China
10. Shao W, Vesal S, Soerensen SJC, Bhattacharya I, Golestani N, Yamashita R, Kunder CA, Fan RE, Ghanouni P, Brooks JD, Sonn GA, Rusu M. RAPHIA: A deep learning pipeline for the registration of MRI and whole-mount histopathology images of the prostate. Comput Biol Med 2024; 173:108318. [PMID: 38522253] [PMCID: PMC11077621] [DOI: 10.1016/j.compbiomed.2024.108318]
Abstract
Image registration can map the ground truth extent of prostate cancer from histopathology images onto MRI, facilitating the development of machine learning methods for early prostate cancer detection. Here, we present RAdiology PatHology Image Alignment (RAPHIA), an end-to-end pipeline for efficient and accurate registration of MRI and histopathology images. RAPHIA automates several time-consuming manual steps in existing approaches including prostate segmentation, estimation of the rotation angle and horizontal flipping in histopathology images, and estimation of MRI-histopathology slice correspondences. By utilizing deep learning registration networks, RAPHIA substantially reduces computational time. Furthermore, RAPHIA obviates the need for a multimodal image similarity metric by transferring histopathology image representations to MRI image representations and vice versa. With the assistance of RAPHIA, novice users achieved expert-level performance, and their mean error in estimating histopathology rotation angle was reduced by 51% (12 degrees vs 8 degrees), their mean accuracy of estimating histopathology flipping was increased by 5% (95.3% vs 100%), and their mean error in estimating MRI-histopathology slice correspondences was reduced by 45% (1.12 slices vs 0.62 slices). When compared to a recent conventional registration approach and a deep learning registration approach, RAPHIA achieved better mapping of histopathology cancer labels, with an improved mean Dice coefficient of cancer regions outlined on MRI and the deformed histopathology (0.44 vs 0.48 vs 0.50), and a reduced mean per-case processing time (51 vs 11 vs 4.5 min). The improved performance by RAPHIA allows efficient processing of large datasets for the development of machine learning models for prostate cancer detection on MRI. Our code is publicly available at: https://github.com/pimed/RAPHIA.
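The Dice coefficient used to score label overlap is simple to compute; a minimal sketch with synthetic masks (RAPHIA's actual pipeline is at the GitHub link in the abstract):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Cancer label on MRI vs deformed histopathology label (synthetic masks).
mri_label = np.zeros((128, 128), bool)
mri_label[40:80, 40:80] = True
hist_label = np.zeros((128, 128), bool)
hist_label[45:85, 38:78] = True
print(f"Dice = {dice(mri_label, hist_label):.2f}")
```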
Affiliation(s)
- Wei Shao
  - Department of Radiology, Stanford University, Stanford, CA, 94305, United States
  - Department of Medicine, University of Florida, Gainesville, FL, 32610, United States
- Sulaiman Vesal
  - Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Simon J C Soerensen
  - Department of Urology, Stanford University, Stanford, CA, 94305, United States
  - Department of Epidemiology and Population Health, Stanford University, Stanford, CA, 94305, United States
- Indrani Bhattacharya
  - Department of Radiology, Stanford University, Stanford, CA, 94305, United States
- Negar Golestani
  - Department of Radiology, Stanford University, Stanford, CA, 94305, United States
- Rikiya Yamashita
  - Department of Biomedical Data Science, Stanford University, Stanford, CA, 94305, United States
- Christian A Kunder
  - Department of Pathology, Stanford University, Stanford, CA, 94305, United States
- Richard E Fan
  - Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Pejman Ghanouni
  - Department of Radiology, Stanford University, Stanford, CA, 94305, United States
- James D Brooks
  - Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Geoffrey A Sonn
  - Department of Radiology, Stanford University, Stanford, CA, 94305, United States
  - Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Mirabela Rusu
  - Department of Radiology, Stanford University, Stanford, CA, 94305, United States
11. Li L, Shiradkar R, Gottlieb N, Buzzy C, Hiremath A, Viswanathan VS, MacLennan GT, Omil Lima D, Gupta K, Shen DL, Tirumani SH, Magi-Galluzzi C, Purysko A, Madabhushi A. Multi-scale statistical deformation based co-registration of prostate MRI and post-surgical whole mount histopathology. Med Phys 2024; 51:2549-2562. [PMID: 37742344] [PMCID: PMC10960735] [DOI: 10.1002/mp.16753]
Abstract
BACKGROUND Accurate delineations of regions of interest (ROIs) on multi-parametric magnetic resonance imaging (mpMRI) are crucial for the development of automated, machine learning-based prostate cancer (PCa) detection and segmentation models. However, manual ROI delineations are labor-intensive and susceptible to inter-reader variability. Histopathology images from radical prostatectomy (RP) represent the "gold standard" for delineating disease extents, for example, PCa, prostatitis, and benign prostatic hyperplasia (BPH). Co-registering digitized histopathology images onto pre-operative mpMRI enables automated mapping of ground truth disease extents onto mpMRI, and thus the development of machine learning tools for PCa detection and risk stratification. Still, MRI-histopathology co-registration is challenging due to various artifacts and the large deformation between in vivo MRI and ex vivo whole-mount histopathology images (WMHs). Furthermore, artifacts on WMHs, such as tissue loss, may introduce unrealistic deformation during co-registration. PURPOSE This study presents a new registration pipeline, MSERgSDM, a multi-scale feature-based registration (MSERg) with a statistical deformation model (SDM) constraint, which aims to improve the accuracy of MRI-histopathology co-registration. METHODS We collected 85 pairs of MRI and WMHs from 48 patients across three cohorts. Cohort 1 (D1), comprising a unique set of 3D-printed mold data from six patients, facilitated the generation of ground truth deformations between ex vivo WMHs and in vivo MRI. The other two clinically acquired cohorts (D2 and D3) included 42 patients. Affine and nonrigid registrations were employed to minimize the deformation between ex vivo WMH and ex vivo T2-weighted MRI (T2WI) in D1. Subsequently, the ground truth deformation between in vivo T2WI and ex vivo WMH was approximated as the deformation between in vivo T2WI and ex vivo T2WI. In D2 and D3, prostate anatomical annotations, for example, tumor and urethra, were made by a pathologist and a radiologist in collaboration. These annotations included ROI boundary contours and landmark points. Before registration, manual corrections were made for flipping and rotation of the WMHs. MSERgSDM comprises two main components: (1) multi-scale representation construction and (2) SDM construction. For the SDM construction, we collected N = 200 reasonable deformation fields generated using MSERg, verified through visual inspection. Three additional methods, including intensity-based registration, ProsRegNet, and MSERg, were also employed for comparison against MSERgSDM. RESULTS Our results suggest that MSERgSDM performed comparably to the ground truth (p > 0.05). Additionally, MSERgSDM (ROI Dice ratio = 0.61, landmark distance = 3.26 mm) exhibited significant improvement over MSERg (ROI Dice ratio = 0.59, landmark distance = 3.69 mm) and ProsRegNet (ROI Dice ratio = 0.56, landmark distance = 4.00 mm) in local alignment. CONCLUSIONS This study presents a novel registration method, MSERgSDM, for mapping ex vivo WMH onto in vivo prostate MRI. Our preliminary results demonstrate that MSERgSDM can serve as a valuable tool to map ground truth disease annotations from histopathology images onto MRI, thereby assisting in the development of machine learning models for PCa detection on MRI.
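A statistical deformation model of the kind described is commonly built by PCA over example deformation fields; a sketch under that assumption, with synthetic fields standing in for the N = 200 MSERg outputs:

```python
import numpy as np

# N example deformation fields (displacement vectors on a coarse grid),
# flattened to rows; synthetic stand-ins for fields produced by MSERg.
rng = np.random.default_rng(5)
n, grid, dims = 200, (8, 8, 4), 3
fields = rng.normal(size=(n, np.prod(grid) * dims))

mean = fields.mean(axis=0)
centered = fields - mean
# PCA via SVD: principal modes of plausible deformation
u, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 10                                  # retained modes
modes, stddev = vt[:k], s[:k] / np.sqrt(n - 1)

# A candidate field is constrained by projecting onto the model subspace
# and clamping each coefficient to, say, +/- 3 standard deviations.
new_field = rng.normal(size=fields.shape[1])
coeff = modes @ (new_field - mean)
coeff = np.clip(coeff, -3 * stddev, 3 * stddev)
constrained = mean + modes.T @ coeff
print(constrained.shape)
```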
Affiliation(s)
- Lin Li
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Rakesh Shiradkar
  - Wallace H Coulter Department of Biomedical Engineering at Emory University and Georgia Institute of Technology
- Noah Gottlieb
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Christina Buzzy
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Amogh Hiremath
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Vidya Sankar Viswanathan
  - Wallace H Coulter Department of Biomedical Engineering at Emory University and Georgia Institute of Technology
- Gregory T MacLennan
  - Department of Pathology and Urology, Case Western Reserve University, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Danly Omil Lima
  - Department of Pathology and Urology, Case Western Reserve University, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Karishma Gupta
  - Department of Pathology and Urology, Case Western Reserve University, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Daniel Lee Shen
  - Department of Pathology and Urology, Case Western Reserve University, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Andrei Purysko
  - Glickman Urological and Kidney Institute, Cleveland Clinic, Cleveland, OH, USA
  - Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Anant Madabhushi
  - Wallace H Coulter Department of Biomedical Engineering at Emory University and Georgia Institute of Technology
  - Atlanta Veterans Administration Medical Center
12. Schouten D, van der Laak J, van Ginneken B, Litjens G. Full resolution reconstruction of whole-mount sections from digitized individual tissue fragments. Sci Rep 2024; 14:1497. [PMID: 38233535] [PMCID: PMC10794243] [DOI: 10.1038/s41598-024-52007-5]
Abstract
Whole-mount sectioning is a technique in histopathology where a full slice of tissue, such as a transversal cross-section of a prostate specimen, is prepared on a large microscope slide without further sectioning into smaller fragments. Although this technique can offer improved correlation with pre-operative imaging and is paramount for multimodal research, it is not commonly employed due to its technical difficulty, associated cost, and cumbersome integration into (digital) pathology workflows. In this work, we present a computational tool named PythoStitcher, which reconstructs artificial whole-mount sections from digitized tissue fragments, thereby bringing the benefits of whole-mount sections to pathology labs currently unable to employ this technique. Our proposed algorithm consists of a multi-step approach in which it (i) automatically determines how fragments need to be reassembled, (ii) iteratively optimizes the stitch using a genetic algorithm, and (iii) efficiently reconstructs the final artificial whole-mount section at full resolution (0.25 µm/pixel). PythoStitcher was validated on a total of 198 cases spanning five datasets, with a varying number of tissue fragments originating from different organs from multiple centers. PythoStitcher successfully reconstructed the whole-mount section in 86-100% of cases for a given dataset, with a residual registration mismatch of 0.65-2.76 mm on automatically selected landmarks. It is expected that our algorithm can aid pathology labs unable to employ whole-mount sectioning through faster clinical case evaluation and improved radiology-pathology correlation workflows.
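The iterative stitch optimization is described as a genetic algorithm; a toy sketch of that idea, optimizing one fragment's rigid pose against synthetic boundary landmarks (not PythoStitcher's cost function):

```python
import numpy as np

rng = np.random.default_rng(6)

def transform(pts, pose):
    """Apply a 2D rigid pose (angle, tx, ty) to an array of points."""
    ang, tx, ty = pose
    c, s = np.cos(ang), np.sin(ang)
    return pts @ np.array([[c, -s], [s, c]]).T + [tx, ty]

# Corresponding boundary landmarks on two fragments (synthetic): fragment B
# is fragment A's neighbour, scanned in its own rotated/shifted frame.
pts_a = np.array([[0.0, 0.0], [5.0, 1.0], [10.0, 0.5], [7.0, 4.0]])
ang, tx, ty = np.deg2rad(25.0), 4.0, -2.0
c, s = np.cos(ang), np.sin(ang)
pts_b = (pts_a - [tx, ty]) @ np.array([[c, -s], [s, c]])  # inverse pose

def cost(pose):
    """Sum of squared landmark gaps for a candidate stitch pose."""
    return np.sum((transform(pts_b, pose) - pts_a) ** 2)

# Tiny genetic algorithm: elitist selection plus Gaussian mutation.
pop = rng.normal(scale=[1.0, 5.0, 5.0], size=(60, 3))
for _ in range(200):
    order = np.argsort([cost(p) for p in pop])
    elite = pop[order[:15]]                       # best 25% survive
    children = elite[rng.integers(0, 15, size=45)]
    children = children + rng.normal(scale=[0.05, 0.3, 0.3], size=(45, 3))
    pop = np.vstack([elite, children])

best = min(pop, key=cost)
print("recovered angle (deg):", np.rad2deg(best[0]), "shift:", best[1:])
```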
Affiliation(s)
- Daan Schouten
  - Department of Pathology, Radboud University Medical Centre, Nijmegen, The Netherlands
- Jeroen van der Laak
  - Department of Pathology, Radboud University Medical Centre, Nijmegen, The Netherlands
- Bram van Ginneken
  - Department of Radiology, Radboud University Medical Centre, Nijmegen, The Netherlands
- Geert Litjens
  - Department of Pathology, Radboud University Medical Centre, Nijmegen, The Netherlands
13. Wang X, Song Z, Zhu J, Li Z. Correlation Attention Registration Based on Deep Learning from Histopathology to MRI of Prostate. Crit Rev Biomed Eng 2024; 52:39-50. [PMID: 38305277] [DOI: 10.1615/critrevbiomedeng.2023050566]
Abstract
Deep learning offers a promising methodology for the registration of prostate cancer images from histopathology to MRI. We explored how to effectively leverage key information from images to achieve improved end-to-end registration. We developed an approach based on a correlation attention registration framework to register segmentation labels of histopathology onto MRI. The network was trained using paired prostate histopathology and MRI datasets from The Cancer Imaging Archive. We introduced an L2-Pearson correlation layer to enhance feature matching. Furthermore, our model employed an enhanced attention regression network to distinguish between key and nonkey features. For data analysis, we used the Kolmogorov-Smirnov test and a one-sample t-test, with the statistical significance level for the one-sample t-test set at 0.001. Compared with two other models (ProsRegNet and CNNGeo), our model exhibited improved performance in Dice coefficient, with increases of 9.893% and 2.753%, respectively. The Hausdorff distance was reduced by approximately 50% relative to both models, while the average label error (ALE) was reduced by 0.389% and 15.021%. The proposed multimodal prostate registration framework demonstrated high performance in the statistical analysis. The results indicate that our enhanced strategy significantly improves registration performance and enables faster registration of histopathological images of patients undergoing radical prostatectomy to preoperative MRI. More accurate registration can help prevent over-diagnosis of low-risk cancers and the frequent false positives caused by observer differences.
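An L2-Pearson correlation layer plausibly scores feature matches between the two modalities; a sketch in PyTorch of a dense correlation layer in that spirit (the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def correlation_layer(fa: torch.Tensor, fb: torch.Tensor) -> torch.Tensor:
    """Dense correlation between two feature maps (B, C, H, W): each
    position in fa is matched against every position in fb after
    channel-wise centering and L2 normalization, so the dot product
    behaves like a Pearson correlation score."""
    b, c, h, w = fa.shape
    fa = F.normalize(fa - fa.mean(dim=1, keepdim=True), dim=1)
    fb = F.normalize(fb - fb.mean(dim=1, keepdim=True), dim=1)
    fa = fa.reshape(b, c, h * w)                 # (B, C, HW)
    fb = fb.reshape(b, c, h * w)
    corr = torch.bmm(fa.transpose(1, 2), fb)     # (B, HW, HW) match scores
    return corr.reshape(b, h * w, h, w)

fa, fb = torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16)
print(correlation_layer(fa, fb).shape)           # torch.Size([2, 256, 16, 16])
```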
Affiliation(s)
- Xue Wang
  - Shanghai Institute of Technology
- Zhili Song
  - School of Computer Science and Information Engineering, Shanghai Institute of Technology, Shanghai, 201418, China
- Jianlin Zhu
  - School of Computer Science and Information Engineering, Shanghai Institute of Technology, Shanghai, 201418, China
- Zhixiang Li
  - School of Computer Science and Information Engineering, Shanghai Institute of Technology, Shanghai, 201418, China
14. Matsuoka Y, Ueno Y, Uehara S, Tanaka H, Kobayashi M, Tanaka H, Yoshida S, Yokoyama M, Kumazawa I, Fujii Y. Deep-learning prostate cancer detection and segmentation on biparametric versus multiparametric magnetic resonance imaging: Added value of dynamic contrast-enhanced imaging. Int J Urol 2023; 30:1103-1111. [PMID: 37605627] [DOI: 10.1111/iju.15280]
Abstract
OBJECTIVES To develop diagnostic algorithms for multisequence prostate magnetic resonance imaging for cancer detection and segmentation using deep learning, and to explore the added value of dynamic contrast-enhanced imaging in multiparametric imaging compared with biparametric imaging. METHODS We collected 3227 multiparametric imaging sets from 332 patients, including 218 cancer patients (291 biopsy-proven foci) and 114 noncancer patients. Diagnostic algorithms for T2-weighted, T2-weighted plus dynamic contrast-enhanced, biparametric, and multiparametric imaging were built using 2578 sets, and their performance for clinically significant cancer was evaluated using 649 sets. RESULTS Biparametric and multiparametric imaging had the following region-based performance: sensitivity of 71.9% and 74.8% (p = 0.394) and positive predictive value of 61.3% and 74.8% (p = 0.013), respectively. In side-specific analyses of cancer images, the specificity was 72.6% and 89.5% (p < 0.001) and the negative predictive value was 78.9% and 83.5% (p = 0.364), respectively. Cancer foci that were false negatives on multiparametric imaging were smaller (p = 0.002) and more often grade group ≤2 (p = 0.028) than true-positive foci. In the peripheral zone, false-positive regions on biparametric imaging turned out to be true negative on multiparametric imaging more frequently than in the transition zone (78.3% vs. 47.2%, p = 0.018). In contrast, T2-weighted plus dynamic contrast-enhanced imaging had lower specificity than T2-weighted imaging alone (41.1% vs. 51.6%, p = 0.042). CONCLUSIONS When using deep learning, multiparametric imaging provides performance superior to biparametric imaging in specificity and positive predictive value, especially in the peripheral zone. Dynamic contrast-enhanced imaging helps reduce overdiagnosis in multiparametric imaging.
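The abstract does not name the test behind its paired specificity comparison; McNemar's test is the standard choice for paired binary outcomes and is sketched here on synthetic calls (statsmodels):

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired side-specific calls on the same noncancer images: 1 = false
# positive. Synthetic outcomes for biparametric vs multiparametric models.
rng = np.random.default_rng(7)
bp_fp = rng.random(200) < (1 - 0.726)   # specificity ~72.6%
mp_fp = rng.random(200) < (1 - 0.895)   # specificity ~89.5%

# 2x2 table of agreement/disagreement between the paired predictions
table = np.array([[np.sum(~bp_fp & ~mp_fp), np.sum(~bp_fp & mp_fp)],
                  [np.sum(bp_fp & ~mp_fp), np.sum(bp_fp & mp_fp)]])
result = mcnemar(table, exact=True)
print(f"McNemar p = {result.pvalue:.4f}")
```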
Affiliation(s)
- Yoh Matsuoka
  - Department of Urology, Tokyo Medical and Dental University, Tokyo, Japan
  - Department of Urology, Saitama Cancer Center, Saitama, Japan
- Yoshihiko Ueno
  - Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama, Kanagawa, Japan
- Sho Uehara
  - Department of Urology, Tokyo Medical and Dental University, Tokyo, Japan
- Hiroshi Tanaka
  - Department of Radiology, Ochanomizu Surugadai Clinic, Tokyo, Japan
- Masaki Kobayashi
  - Department of Urology, Tokyo Medical and Dental University, Tokyo, Japan
- Hajime Tanaka
  - Department of Urology, Tokyo Medical and Dental University, Tokyo, Japan
- Soichiro Yoshida
  - Department of Urology, Tokyo Medical and Dental University, Tokyo, Japan
- Minato Yokoyama
  - Department of Urology, Tokyo Medical and Dental University, Tokyo, Japan
- Itsuo Kumazawa
  - Laboratory for Future Interdisciplinary Research of Science and Technology, Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Kanagawa, Japan
- Yasuhisa Fujii
  - Department of Urology, Tokyo Medical and Dental University, Tokyo, Japan
15. Ghezzo S, Neri I, Mapelli P, Savi A, Samanes Gajate AM, Brembilla G, Bezzi C, Maghini B, Villa T, Briganti A, Montorsi F, De Cobelli F, Freschi M, Chiti A, Picchio M, Scifo P. [68Ga]Ga-PSMA and [68Ga]Ga-RM2 PET/MRI vs. Histopathological Images in Prostate Cancer: A New Workflow for Spatial Co-Registration. Bioengineering (Basel) 2023; 10:953. [PMID: 37627838] [PMCID: PMC10451901] [DOI: 10.3390/bioengineering10080953]
Abstract
This study proposed a new workflow for co-registering prostate PET images from a dual-tracer PET/MRI study with histopathological images of resected prostate specimens. The method aims to establish an accurate correspondence between PET/MRI findings and histology, facilitating a deeper understanding of PET tracer distribution and enabling advanced analyses such as radiomics. Images from three patients who underwent both [68Ga]Ga-PSMA and [68Ga]Ga-RM2 PET/MRI before radical prostatectomy were selected. After surgery, fiducial markers visible on both histology and MR images were inserted into the fresh resected specimens. An ex vivo MRI of the prostate served as an intermediate step for co-registration between the histological specimens and the in vivo MRI examinations. The co-registration workflow involved five steps, ensuring alignment between histopathological images and PET/MRI data. The target registration error (TRE) was calculated to assess the precision of the co-registration, and the DICE score was computed between the dominant intraprostatic tumor lesions delineated by the pathologist and by the nuclear medicine physician. The TRE for the co-registration of histopathology and in vivo images was 1.59 mm, while the DICE scores for the sites of increased intraprostatic uptake on [68Ga]Ga-PSMA and [68Ga]Ga-RM2 PET images were 0.54 and 0.75, respectively. This work presents an accurate co-registration method for histopathological and in vivo PET/MRI prostate examinations that allows quantitative assessment of dual-tracer PET/MRI diagnostic accuracy at a millimetric scale. This approach may unveil radiotracer uptake mechanisms and identify new PET/MRI biomarkers, thus establishing the basis for precision medicine and future analyses, such as radiomics.
Affiliation(s)
- Samuele Ghezzo
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy; (S.G.); (I.N.); (P.M.); (G.B.); (C.B.); (T.V.); (A.B.); (F.M.); (F.D.C.); (A.C.); (M.P.)
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy; (A.S.); (A.M.S.G.)
| | - Ilaria Neri
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy; (S.G.); (I.N.); (P.M.); (G.B.); (C.B.); (T.V.); (A.B.); (F.M.); (F.D.C.); (A.C.); (M.P.)
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy; (A.S.); (A.M.S.G.)
| | - Paola Mapelli
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy; (S.G.); (I.N.); (P.M.); (G.B.); (C.B.); (T.V.); (A.B.); (F.M.); (F.D.C.); (A.C.); (M.P.)
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy; (A.S.); (A.M.S.G.)
| | - Annarita Savi
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy; (A.S.); (A.M.S.G.)
| | - Ana Maria Samanes Gajate
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy; (A.S.); (A.M.S.G.)
| | - Giorgio Brembilla
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy; (S.G.); (I.N.); (P.M.); (G.B.); (C.B.); (T.V.); (A.B.); (F.M.); (F.D.C.); (A.C.); (M.P.)
- Department of Radiology, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
| | - Carolina Bezzi
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy; (S.G.); (I.N.); (P.M.); (G.B.); (C.B.); (T.V.); (A.B.); (F.M.); (F.D.C.); (A.C.); (M.P.)
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy; (A.S.); (A.M.S.G.)
| | - Beatrice Maghini
- Department of Pathology, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy; (B.M.); (M.F.)
| | - Tommaso Villa
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy; (S.G.); (I.N.); (P.M.); (G.B.); (C.B.); (T.V.); (A.B.); (F.M.); (F.D.C.); (A.C.); (M.P.)
| | - Alberto Briganti
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy; (S.G.); (I.N.); (P.M.); (G.B.); (C.B.); (T.V.); (A.B.); (F.M.); (F.D.C.); (A.C.); (M.P.)
- Department of Urology, Division of Experimental Oncology, Urological Research Institute, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
| | - Francesco Montorsi
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy; (S.G.); (I.N.); (P.M.); (G.B.); (C.B.); (T.V.); (A.B.); (F.M.); (F.D.C.); (A.C.); (M.P.)
- Department of Urology, Division of Experimental Oncology, Urological Research Institute, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
| | - Francesco De Cobelli
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy; (S.G.); (I.N.); (P.M.); (G.B.); (C.B.); (T.V.); (A.B.); (F.M.); (F.D.C.); (A.C.); (M.P.)
- Department of Radiology, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
| | - Massimo Freschi
- Department of Pathology, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy; (B.M.); (M.F.)
| | - Arturo Chiti
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy; (S.G.); (I.N.); (P.M.); (G.B.); (C.B.); (T.V.); (A.B.); (F.M.); (F.D.C.); (A.C.); (M.P.)
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy; (A.S.); (A.M.S.G.)
| | - Maria Picchio
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
| | - Paola Scifo
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
| |
Collapse
|
16
|
Priester A, Fan RE, Shubert J, Rusu M, Vesal S, Shao W, Khandwala YS, Marks LS, Natarajan S, Sonn GA. Prediction and Mapping of Intraprostatic Tumor Extent with Artificial Intelligence. EUR UROL SUPPL 2023; 54:20-27. [PMID: 37545845 PMCID: PMC10403686 DOI: 10.1016/j.euros.2023.05.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/21/2023] [Indexed: 08/08/2023] Open
Abstract
Background Magnetic resonance imaging (MRI) underestimation of prostate cancer extent complicates the definition of focal treatment margins. Objective To validate focal treatment margins produced by an artificial intelligence (AI) model. Design, setting, and participants Testing was conducted retrospectively in an independent dataset of 50 consecutive patients who underwent radical prostatectomy for intermediate-risk cancer. An AI deep learning model incorporated multimodal imaging and biopsy data to produce three-dimensional cancer estimation maps and margins. AI margins were compared with conventional MRI regions of interest (ROIs), 10-mm margins around ROIs, and hemigland margins. The AI model also furnished predictions of negative surgical margin probability, which were assessed for accuracy. Outcome measurements and statistical analysis Comparing AI with conventional margins, sensitivity was evaluated using Wilcoxon signed-rank tests and negative margin rates using chi-square tests. Predicted versus observed negative margin probability was assessed using linear regression. Clinically significant prostate cancer (International Society of Urological Pathology grade ≥2) delineated on whole-mount histopathology served as ground truth. Results and limitations The mean sensitivity for cancer-bearing voxels was higher for AI margins (97%) than for conventional ROIs (37%, p < 0.001), 10-mm ROI margins (93%, p = 0.24), and hemigland margins (94%, p < 0.001). For index lesions, AI margins were more often negative (90%) than conventional ROIs (0%, p < 0.001), 10-mm ROI margins (82%, p = 0.24), and hemigland margins (66%, p = 0.004). Predicted and observed negative margin probabilities were strongly correlated (R2 = 0.98, median error = 4%). Limitations include a validation dataset derived from a single institution's prostatectomy population. Conclusions The AI model was accurate and effective in an independent test set. This approach could improve and standardize treatment margin definition, potentially reducing cancer recurrence rates. Furthermore, an accurate assessment of negative margin probability could facilitate informed decision-making for patients and physicians. Patient summary Artificial intelligence was used to predict the extent of tumors in surgically removed prostate specimens. It predicted tumor margins more accurately than conventional methods.
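The statistical comparisons named in this abstract map onto standard SciPy routines. Below is a minimal illustrative sketch, assuming hypothetical per-patient sensitivities and made-up margin counts; none of these values are the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon, chi2_contingency

rng = np.random.default_rng(0)
# Hypothetical per-patient voxel-level sensitivities for two margin types.
sens_ai = np.clip(rng.normal(0.97, 0.02, 50), 0, 1)      # AI margins
sens_roi10 = np.clip(rng.normal(0.93, 0.05, 50), 0, 1)   # 10-mm ROI margins

# Paired non-parametric comparison of per-patient sensitivities.
stat, p_sens = wilcoxon(sens_ai, sens_roi10)

# Negative-margin rates compared with a chi-square test on a 2x2 table:
# rows = margin type, columns = (negative, positive) index-lesion margins.
table = np.array([[45, 5],     # e.g., AI margins: 90% negative
                  [41, 9]])    # e.g., 10-mm ROI margins: 82% negative
chi2, p_neg, dof, expected = chi2_contingency(table)
print(f"Wilcoxon p = {p_sens:.3g}, chi-square p = {p_neg:.3g}")
```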
Collapse
Affiliation(s)
- Alan Priester
- Department of Urology, David Geffen School of Medicine, Los Angeles, CA, USA
- Avenda Health, Inc., Culver City, CA, USA
| | - Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | | | - Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Medicine, University of Florida, Gainesville, FL, USA
| | - Yash Samir Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Leonard S. Marks
- Department of Urology, David Geffen School of Medicine, Los Angeles, CA, USA
| | - Shyam Natarajan
- Department of Urology, David Geffen School of Medicine, Los Angeles, CA, USA
- Avenda Health, Inc., Culver City, CA, USA
| | - Geoffrey A. Sonn
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| |
Collapse
|
17
|
Xu M, Cao L, Lu D, Hu Z, Yue Y. Application of Swarm Intelligence Optimization Algorithms in Image Processing: A Comprehensive Review of Analysis, Synthesis, and Optimization. Biomimetics (Basel) 2023; 8:235. [PMID: 37366829 DOI: 10.3390/biomimetics8020235] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Revised: 05/27/2023] [Accepted: 06/01/2023] [Indexed: 06/28/2023] Open
Abstract
Image processing has long been a challenging topic in artificial intelligence. With the rise of machine learning and deep learning methods, swarm intelligence algorithms have become an active research direction, and combining them with image processing technology has proven an effective route to improvement. A swarm intelligence algorithm is an intelligent computing method formed by simulating the evolutionary laws, behavioral characteristics, and collective patterns of insects, birds, and other biological populations or natural phenomena; such algorithms offer efficient, parallel global optimization and strong optimization performance. In this paper, the ant colony algorithm, particle swarm optimization algorithm, sparrow search algorithm, bat algorithm, thimble colony algorithm, and other swarm intelligence optimization algorithms are studied in depth. The models, features, improvement strategies, and application fields of these algorithms in image processing, such as image segmentation, image matching, image classification, image feature extraction, and image edge detection, are comprehensively reviewed, and the corresponding theoretical research, improvement strategies, and applications are analyzed and compared. Drawing on the current literature, improvement methods for the above algorithms and their application to image processing technology are summarized, and representative swarm intelligence algorithms combined with image segmentation are tabulated for comparison. Finally, the unified framework, common characteristics, and key differences of swarm intelligence algorithms are summarized, open problems are raised, and future trends are projected.
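To make the coupling between a swarm optimizer and an image-processing task concrete, here is a minimal particle swarm optimization sketch that searches for a grayscale segmentation threshold by minimizing Otsu-style within-class variance. The swarm parameters (inertia w, acceleration constants c1 and c2, swarm size) are generic textbook values, not settings from any reviewed paper:

```python
import numpy as np

def within_class_variance(img, t):
    """Otsu-style objective: population-weighted intra-class variance at threshold t."""
    fg, bg = img[img >= t], img[img < t]
    if fg.size == 0 or bg.size == 0:
        return np.inf
    return (fg.size * fg.var() + bg.size * bg.var()) / img.size

def pso_threshold(img, n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform(img.min(), img.max(), n_particles)  # candidate thresholds
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([within_class_variance(img, t) for t in pos])
    gbest = pbest[pbest_val.argmin()]
    for _ in range(n_iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # Velocity update: inertia plus pulls toward personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, img.min(), img.max())
        val = np.array([within_class_variance(img, t) for t in pos])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmin()]
    return gbest

# Bimodal toy "image" with modes near 60 and 160; a good threshold sits between them.
pixels = np.random.default_rng(1).normal(loc=[60.0, 160.0], scale=15.0, size=(500, 2)).ravel()
print(f"PSO threshold: {pso_threshold(pixels):.1f}")
```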
Collapse
Affiliation(s)
- Minghai Xu
- School of Intelligent Manufacturing and Electronic Engineering, Wenzhou University of Technology, Wenzhou 325035, China
| | - Li Cao
- School of Intelligent Manufacturing and Electronic Engineering, Wenzhou University of Technology, Wenzhou 325035, China
| | - Dongwan Lu
- Intelligent Information Systems Institute, Wenzhou University, Wenzhou 325035, China
| | - Zhongyi Hu
- Intelligent Information Systems Institute, Wenzhou University, Wenzhou 325035, China
| | - Yinggao Yue
- School of Intelligent Manufacturing and Electronic Engineering, Wenzhou University of Technology, Wenzhou 325035, China
- Intelligent Information Systems Institute, Wenzhou University, Wenzhou 325035, China
| |
Collapse
|
18
|
Lu X, Zhang S, Liu Z, Liu S, Huang J, Kong G, Li M, Liang Y, Cui Y, Yang C, Zhao S. Ultrasonographic pathological grading of prostate cancer using automatic region-based Gleason grading network. Comput Med Imaging Graph 2022; 102:102125. [PMID: 36257091 DOI: 10.1016/j.compmedimag.2022.102125] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 08/26/2022] [Accepted: 09/20/2022] [Indexed: 11/05/2022]
Abstract
The Gleason scoring system is a reliable method for quantifying the aggressiveness of prostate cancer, which provides an important reference value for clinical assessment of therapeutic strategies. However, to the best of our knowledge, no study has been done on the pathological grading of prostate cancer from single ultrasound images. In this work, a novel Automatic Region-based Gleason Grading (ARGG) network for prostate cancer based on deep learning is proposed. ARGG consists of two stages: (1) a region labeling object detection (RLOD) network is designed to label the prostate cancer lesion region; (2) a Gleason grading network (GNet) is proposed for pathological grading of prostate ultrasound images. In RLOD, a new feature fusion structure, the skip-connected feature pyramid network (CFPN), is proposed as an auxiliary branch for extracting features and enhancing the fusion of high-level and low-level features, which helps to detect small lesions and extract image detail. In GNet, we designed a synchronized pulse enhancement module (SPEM) based on pulse-coupled neural networks to enhance the results of RLOD detection for use as training samples; the enhanced results and the original ones are then fed into the channel attention classification network (CACN), which introduces an attention mechanism to benefit the prediction of cancer grading. Experiments on a dataset of prostate ultrasound images collected from hospitals show that the proposed Gleason grading model outperforms manual diagnosis by physicians, achieving a precision of 0.830. In addition, we have evaluated the lesion detection performance of RLOD, which achieves a mean Dice metric of 0.815.
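The abstract does not spell out the CACN's attention mechanism, so the sketch below uses a generic squeeze-and-excitation channel attention block of the kind such classification heads commonly employ; it is an illustrative stand-in under that assumption, not the authors' architecture:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic squeeze-and-excitation channel attention (stand-in for a CACN-style gate)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # excitation: per-channel gate in [0, 1]
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                   # reweight feature maps channel-wise

feats = torch.randn(2, 64, 32, 32)           # dummy feature maps
print(ChannelAttention(64)(feats).shape)     # torch.Size([2, 64, 32, 32])
```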
Collapse
Affiliation(s)
- Xu Lu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China; Pazhou Lab, Guangzhou 510330, China
| | - Shulian Zhang
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
| | - Zhiyong Liu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
| | - Shaopeng Liu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
| | - Jun Huang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
| | - Guoquan Kong
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
| | - Mingzhu Li
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
| | - Yinying Liang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
| | - Yunneng Cui
- Department of Radiology, Foshan Maternity and Children's Healthcare Hospital Affiliated to Southern Medical University, Foshan 528000, China
| | - Chuan Yang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China.
| | - Shen Zhao
- Department of Artificial Intelligence, Sun Yat-sen University, Guangzhou 510006, China.
| |
Collapse
|
19
|
Ruchti A, Neuwirth A, Lowman AK, Duenweg SR, LaViolette PS, Bukowy JD. Homologous point transformer for multi-modality prostate image registration. PeerJ Comput Sci 2022; 8:e1155. [PMID: 36532813 PMCID: PMC9748842 DOI: 10.7717/peerj-cs.1155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Accepted: 10/24/2022] [Indexed: 06/17/2023]
Abstract
Registration is the process of transforming images so they are aligned in the same coordinate space. In the medical field, image registration is often used to align multi-modal or multi-parametric images of the same organ. A uniquely challenging subset of medical image registration is cross-modality registration: the task of aligning images captured with different scanning methodologies. In this study, we present a transformer-based deep learning pipeline for performing cross-modality, radiology-pathology image registration for human prostate samples. While existing solutions for multi-modality prostate image registration focus on the prediction of transform parameters, our pipeline predicts a set of homologous points on the two image modalities. The homologous point registration pipeline achieves better average control point deviation than the current state-of-the-art automatic registration pipeline. It reaches this accuracy without requiring masked MR images, which may enable this approach to achieve similar results in other organ systems and for partial tissue samples.
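Once homologous points are predicted on both modalities, a transform can be recovered from the pairs directly. A minimal sketch, assuming 2D points and a plain least-squares affine fit (the paper's pipeline may use a different transform family):

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src points onto dst points.
    src, dst: (N, 2) arrays of homologous (corresponding) points, N >= 3."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])        # homogeneous source coordinates
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T                              # 2x3 affine matrix [A | t]

# Hypothetical homologous points predicted on MRI (src) and histopathology (dst).
src = np.array([[10, 12], [40, 15], [22, 48], [35, 40]], float)
true = np.array([[1.05, 0.02, 3.0], [-0.03, 0.98, -2.0]])  # toy ground-truth transform
dst = src @ true[:, :2].T + true[:, 2]
M = fit_affine_2d(src, dst)
resid = np.linalg.norm(src @ M[:, :2].T + M[:, 2] - dst, axis=1)
print(f"mean residual: {resid.mean():.2e} px")
```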
Collapse
Affiliation(s)
- Alexander Ruchti
- Department of Electrical Engineering and Computer Science, Milwaukee School of Engineering, Milwaukee, WI, United States
| | - Alexander Neuwirth
- Department of Electrical Engineering and Computer Science, Milwaukee School of Engineering, Milwaukee, WI, United States
| | - Allison K. Lowman
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI, United States
| | - Savannah R. Duenweg
- Department of Biophysics, Medical College of Wisconsin, Milwaukee, WI, United States
| | - Peter S. LaViolette
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI, United States
- Department of Biomedical Engineering, Medical College of Wisconsin, Milwaukee, WI, United States
| | - John D. Bukowy
- Department of Electrical Engineering and Computer Science, Milwaukee School of Engineering, Milwaukee, WI, United States
| |
Collapse
|
20
|
Duan H, Baratto L, Fan RE, Soerensen SJC, Liang T, Chung BI, Thong AEC, Gill H, Kunder C, Stoyanova T, Rusu M, Loening AM, Ghanouni P, Davidzon GA, Moradi F, Sonn GA, Iagaru A. Correlation of 68Ga-RM2 PET with Postsurgery Histopathology Findings in Patients with Newly Diagnosed Intermediate- or High-Risk Prostate Cancer. J Nucl Med 2022; 63:1829-1835. [PMID: 35552245 DOI: 10.2967/jnumed.122.263971] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2022] [Revised: 05/10/2022] [Indexed: 01/11/2023] Open
Abstract
68Ga-RM2 targets gastrin-releasing peptide receptors (GRPRs), which are overexpressed in prostate cancer (PC). Here, we compared preoperative 68Ga-RM2 PET to postsurgery histopathology in patients with newly diagnosed intermediate- or high-risk PC. Methods: Forty-one men, 64.0 ± 6.7 y old, were prospectively enrolled. PET images were acquired 42-72 min (median ± SD, 52.5 ± 6.5 min) after injection of 118.4-247.9 MBq (median ± SD, 138.0 ± 22.2 MBq) of 68Ga-RM2. PET findings were compared with preoperative multiparametric MRI (mpMRI) (n = 36) and 68Ga-PSMA11 PET (n = 17) and correlated to postprostatectomy whole-mount histopathology (n = 32) and time to biochemical recurrence. Nine participants decided to undergo radiation therapy after study enrollment. Results: All participants had intermediate- (n = 17) or high-risk (n = 24) PC and were scheduled for prostatectomy. Prostate-specific antigen was 8.8 ± 77.4 ng/mL (range, 2.5-504 ng/mL) and 7.6 ± 5.3 ng/mL (range, 2.5-28.0 ng/mL) when participants who ultimately underwent radiation treatment were excluded. Preoperative 68Ga-RM2 PET identified 70 intraprostatic foci of uptake in 40 of 41 patients. Postprostatectomy histopathology was available in 32 patients, in which 68Ga-RM2 PET identified 50 of 54 intraprostatic lesions (detection rate = 93%). 68Ga-RM2 uptake was recorded in 19 nonenlarged pelvic lymph nodes in 6 patients. Pathology confirmed lymph node metastases in 16 lesions, and follow-up imaging confirmed nodal metastases in 2 lesions. 68Ga-PSMA11 and 68Ga-RM2 PET identified 27 and 26 intraprostatic lesions, respectively, and 5 pelvic lymph nodes each in 17 patients. Concordance between 68Ga-RM2 and 68Ga-PSMA11 PET was found in 18 prostatic lesions in 11 patients and 4 lymph nodes in 2 patients. Noncongruent findings were observed in 6 patients (intraprostatic lesions in 4 patients and nodal lesions in 2 patients). Sensitivity and accuracy rates for 68Ga-RM2 and 68Ga-PSMA11 (98% and 89% for 68Ga-RM2 and 95% and 89% for 68Ga-PSMA11) were higher than those for mpMRI (77% and 77%, respectively). Specificity was highest for mpMRI with 75%, followed by 68Ga-PSMA11 (67%) and 68Ga-RM2 (65%). Conclusion: 68Ga-RM2 PET accurately detects intermediate- and high-risk primary PC, with a detection rate of 93%. In addition, 68Ga-RM2 PET showed significantly higher sensitivity and accuracy than mpMRI and a performance similar to 68Ga-PSMA11 PET. These findings need to be confirmed in larger studies to identify which patients will benefit from one or the other or both radiopharmaceuticals.
Collapse
Affiliation(s)
- Heying Duan
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, Stanford, California
| | - Lucia Baratto
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, Stanford, California
| | - Richard E Fan
- Department of Urology, Stanford University, Stanford, California
| | - Simon John Christoph Soerensen
- Department of Urology, Stanford University, Stanford, California; Department of Epidemiology and Population Health, Stanford University, Stanford, California
| | - Tie Liang
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, Stanford, California
| | | | | | - Harcharan Gill
- Department of Urology, Stanford University, Stanford, California
| | - Christian Kunder
- Department of Pathology, Stanford University, Stanford, California
| | - Tanya Stoyanova
- Radiology, Canary Center at Stanford for Cancer Early Detection, Stanford University, Stanford, California
| | - Mirabela Rusu
- Division of Integrative Biomedical Imaging, Department of Radiology, Stanford University, Stanford, California; and
| | - Andreas M Loening
- Division of Body MRI, Department of Radiology, Stanford University, Stanford, California
| | - Pejman Ghanouni
- Division of Body MRI, Department of Radiology, Stanford University, Stanford, California
| | - Guido A Davidzon
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, Stanford, California
| | - Farshad Moradi
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, Stanford, California
| | - Geoffrey A Sonn
- Department of Urology, Stanford University, Stanford, California
| | - Andrei Iagaru
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Stanford University, Stanford, California;
| |
Collapse
|
21
|
Moroianu ŞL, Bhattacharya I, Seetharaman A, Shao W, Kunder CA, Sharma A, Ghanouni P, Fan RE, Sonn GA, Rusu M. Computational Detection of Extraprostatic Extension of Prostate Cancer on Multiparametric MRI Using Deep Learning. Cancers (Basel) 2022; 14:2821. [PMID: 35740487 PMCID: PMC9220816 DOI: 10.3390/cancers14122821] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Revised: 05/28/2022] [Accepted: 06/03/2022] [Indexed: 02/04/2023] Open
Abstract
The localization of extraprostatic extension (EPE), i.e., local spread of prostate cancer beyond the prostate capsular boundary, is important for risk stratification and surgical planning. However, the sensitivity of EPE detection by radiologists on MRI is low (57% on average). In this paper, we propose a method for computational detection of EPE on multiparametric MRI using deep learning. Ground truth labels of cancers and EPE were obtained in 123 patients (38 with EPE) by registering pre-surgical MRI with whole-mount digital histopathology images from radical prostatectomy. Our approach has two stages. First, we trained deep learning models using the MRI as input to generate cancer probability maps both inside and outside the prostate. Second, we built an image post-processing pipeline that generates predictions for EPE location based on the cancer probability maps and clinical knowledge. We used five-fold cross-validation to train our approach using data from 74 patients and tested it using data from an independent set of 49 patients. We compared two deep learning models for cancer detection: (i) UNet and (ii) the Correlated Signature Network for Indolent and Aggressive prostate cancer detection (CorrSigNIA). The best end-to-end model for EPE detection, which we call EPENet, was based on the CorrSigNIA cancer detection model. EPENet was successful at detecting cancers with extraprostatic extension, achieving a mean area under the receiver operating characteristic curve of 0.72 at the patient level. On the test set, EPENet had 80.0% sensitivity and 28.2% specificity at the patient level, compared to 50.0% sensitivity and 76.9% specificity for the radiologists. To account for the spatial location of predictions during evaluation, we also computed results at the sextant level, where the prostate was divided into sextants according to the standard systematic 12-core biopsy procedure. At the sextant level, EPENet achieved a mean sensitivity of 61.1% and a mean specificity of 58.3%. Our approach has the potential to provide the location of extraprostatic extension using MRI alone, thus serving as an independent diagnostic aid to radiologists and facilitating treatment planning.
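The sextant-level evaluation reduces voxel-wise maps to six region-level calls. Below is a minimal sketch of one way to do that reduction, assuming volumes indexed (z, y, x) with z running base to apex and the x axis splitting left from right; the random masks and the exact partitioning rule are placeholders, not the paper's implementation:

```python
import numpy as np

def sextant_summary(prostate, gt, pred):
    """Collapse voxel-wise ground truth (gt) and predictions (pred) to six
    sextant labels (base/mid/apex x left/right); report sensitivity/specificity."""
    zs = np.where(prostate.any(axis=(1, 2)))[0]                   # slices with prostate
    x_mid = int(np.mean(np.where(prostate.any(axis=(0, 1)))[0]))  # left/right split
    truth, calls = [], []
    for slab in np.array_split(zs, 3):                            # base, mid, apex
        for xs in (slice(0, x_mid), slice(x_mid, None)):
            sel = np.zeros_like(prostate)
            sel[slab.min():slab.max() + 1, :, xs] = True
            sel &= prostate
            truth.append(bool(gt[sel].any()))                     # sextant harbors EPE?
            calls.append(bool(pred[sel].any()))                   # model flags sextant?
    truth, calls = np.array(truth), np.array(calls)
    sens = (truth & calls).sum() / max(truth.sum(), 1)
    spec = (~truth & ~calls).sum() / max((~truth).sum(), 1)
    return sens, spec

rng = np.random.default_rng(0)
prostate = np.ones((30, 64, 64), dtype=bool)                      # toy gland mask
gt = rng.random(prostate.shape) > 0.99995                         # sparse toy EPE voxels
pred = gt | (rng.random(prostate.shape) > 0.9999)                 # toy predictions
print(sextant_summary(prostate, gt, pred))
```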
Collapse
Affiliation(s)
| | - Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA; (I.B.); (W.S.); (A.S.); (P.G.); (G.A.S.)
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305, USA;
| | - Arun Seetharaman
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA;
| | - Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA; (I.B.); (W.S.); (A.S.); (P.G.); (G.A.S.)
| | - Christian A. Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA 94305, USA;
| | - Avishkar Sharma
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA; (I.B.); (W.S.); (A.S.); (P.G.); (G.A.S.)
| | - Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA; (I.B.); (W.S.); (A.S.); (P.G.); (G.A.S.)
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305, USA;
| | - Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305, USA;
| | - Geoffrey A. Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA; (I.B.); (W.S.); (A.S.); (P.G.); (G.A.S.)
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305, USA;
| | - Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA; (I.B.); (W.S.); (A.S.); (P.G.); (G.A.S.)
| |
Collapse
|
22
|
Bhattacharya I, Lim DS, Aung HL, Liu X, Seetharaman A, Kunder CA, Shao W, Soerensen SJC, Fan RE, Ghanouni P, To'o KJ, Brooks JD, Sonn GA, Rusu M. Bridging the gap between prostate radiology and pathology through machine learning. Med Phys 2022; 49:5160-5181. [PMID: 35633505 PMCID: PMC9543295 DOI: 10.1002/mp.15777] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 05/10/2022] [Accepted: 05/10/2022] [Indexed: 11/27/2022] Open
Abstract
Background Prostate cancer remains the second deadliest cancer for American men despite clinical advancements. Currently, magnetic resonance imaging (MRI) is considered the most sensitive non‐invasive imaging modality that enables visualization, detection, and localization of prostate cancer, and is increasingly used to guide targeted biopsies for prostate cancer diagnosis. However, its utility remains limited due to high rates of false positives and false negatives as well as low inter‐reader agreements. Purpose Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretations. However, existing machine learning methods vary not only in model architecture, but also in the ground truth labeling strategies used for model training. We compare different labeling strategies and the effects they have on the performance of different machine learning models for prostate cancer detection on MRI. Methods Four different deep learning models (SPCNet, U‐Net, branched U‐Net, and DeepLabv3+) were trained to detect prostate cancer on MRI using 75 patients with radical prostatectomy, and evaluated using 40 patients with radical prostatectomy and 275 patients with targeted biopsy. Each deep learning model was trained with four different label types: pathology‐confirmed radiologist labels, pathologist labels on whole‐mount histopathology images, and lesion‐level and pixel‐level digital pathologist labels (previously validated deep learning algorithm on histopathology images to predict pixel‐level Gleason patterns) on whole‐mount histopathology images. The pathologist and digital pathologist labels (collectively referred to as pathology labels) were mapped onto pre‐operative MRI using an automated MRI‐histopathology registration platform. Results Radiologist labels missed cancers (ROC‐AUC: 0.75‐0.84), had lower lesion volumes (~68% of pathology lesions), and lower Dice overlaps (0.24‐0.28) when compared with pathology labels. Consequently, machine learning models trained with radiologist labels also showed inferior performance compared to models trained with pathology labels. Digital pathologist labels showed high concordance with pathologist labels of cancer (lesion ROC‐AUC: 0.97‐1, lesion Dice: 0.75‐0.93). Machine learning models trained with digital pathologist labels had the highest lesion detection rates in the radical prostatectomy cohort (aggressive lesion ROC‐AUC: 0.91‐0.94), and had generalizable and comparable performance to pathologist label‐trained‐models in the targeted biopsy cohort (aggressive lesion ROC‐AUC: 0.87‐0.88), irrespective of the deep learning architecture. Moreover, machine learning models trained with pixel‐level digital pathologist labels were able to selectively identify aggressive and indolent cancer components in mixed lesions on MRI, which is not possible with any human‐annotated label type. Conclusions Machine learning models for prostate MRI interpretation that are trained with digital pathologist labels showed higher or comparable performance with pathologist label‐trained models in both radical prostatectomy and targeted biopsy cohort. Digital pathologist labels can reduce challenges associated with human annotations, including labor, time, inter‐ and intra‐reader variability, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.
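The Dice overlaps quoted above measure agreement between label masks on a shared grid. For reference, a minimal Dice implementation with toy masks standing in for radiologist and pathology-derived labels:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
radiologist = rng.random((128, 128)) > 0.9                 # toy radiologist label mask
pathology = radiologist ^ (rng.random((128, 128)) > 0.97)  # perturbed "pathology" mask
print(f"Dice = {dice(radiologist, pathology):.2f}")
```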
Collapse
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305; Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
| | - David S Lim
- Department of Computer Science, Stanford University, Stanford, CA 94305
| | - Han Lin Aung
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305
| | - Xingchen Liu
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305
| | - Arun Seetharaman
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305
| | - Christian A Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA 94305
| | - Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
| | - Simon J C Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305; Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA 94305
| | - Richard E Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
| | - Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305; Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
| | - Katherine J To'o
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305; Department of Radiology, VA Palo Alto Health Care System, Palo Alto, CA 94304
| | - James D Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
| | - Geoffrey A Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305; Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
| | - Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
| |
Collapse
|
23
|
Li H, Lee CH, Chia D, Lin Z, Huang W, Tan CH. Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities. Diagnostics (Basel) 2022; 12:diagnostics12020289. [PMID: 35204380 PMCID: PMC8870978 DOI: 10.3390/diagnostics12020289] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 12/31/2021] [Accepted: 01/14/2022] [Indexed: 02/04/2023] Open
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) for the detection of prostate cancer have enabled its integration into clinical routines in the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology allows one to combine the superior soft tissue contrast resolution of MRI with real-time anatomical depiction using ultrasound or computed tomography. This allows the accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation, and improve co-registration across imaging modalities to enhance diagnostic and treatment methods that can then be individualised based on clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements, and share insights into new opportunities in this field.
Collapse
Affiliation(s)
- Huanye Li
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
| | - Chau Hung Lee
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore;
| | - David Chia
- Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore;
| | - Zhiping Lin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
| | - Weimin Huang
- Institute for Infocomm Research, A*Star, Singapore 138632, Singapore;
| | - Cher Heng Tan
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore;
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore
- Correspondence:
| |
Collapse
|
24
|
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. [PMID: 36249889 PMCID: PMC9554123 DOI: 10.1177/17562872221128791] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 08/30/2022] [Indexed: 11/07/2022] Open
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Collapse
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Yash S. Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Qianye Yang
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Simon J.C. Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
| | - Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Christian A. Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
| | - James D. Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Geoffrey A. Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| |
Collapse
|
25
|
Selective identification and localization of indolent and aggressive prostate cancers via CorrSigNIA: an MRI-pathology correlation and deep learning framework. Med Image Anal 2022; 75:102288. [PMID: 34784540 PMCID: PMC8678366 DOI: 10.1016/j.media.2021.102288] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2020] [Revised: 09/02/2021] [Accepted: 10/20/2021] [Indexed: 01/03/2023]
Abstract
Automated methods for detecting prostate cancer and distinguishing indolent from aggressive disease on Magnetic Resonance Imaging (MRI) could assist in early diagnosis and treatment planning. Existing automated methods of prostate cancer detection mostly rely on ground truth labels with limited accuracy, ignore disease pathology characteristics observed on resected tissue, and cannot selectively identify aggressive (Gleason Pattern≥4) and indolent (Gleason Pattern=3) cancers when they co-exist in mixed lesions. In this paper, we present a radiology-pathology fusion approach, CorrSigNIA, for the selective identification and localization of indolent and aggressive prostate cancer on MRI. CorrSigNIA uses registered MRI and whole-mount histopathology images from radical prostatectomy patients to derive accurate ground truth labels and learn correlated features between radiology and pathology images. These correlated features are then used in a convolutional neural network architecture to detect and localize normal tissue, indolent cancer, and aggressive cancer on prostate MRI. CorrSigNIA was trained and validated on a dataset of 98 men, including 74 men who underwent radical prostatectomy and 24 men with normal prostate MRI. CorrSigNIA was tested on three independent test sets, including 55 men who underwent radical prostatectomy, 275 men who underwent targeted biopsies, and 15 men with normal prostate MRI. CorrSigNIA achieved an accuracy of 80% in distinguishing between men with and without cancer, a lesion-level ROC-AUC of 0.81±0.31 in detecting cancers in both radical prostatectomy and biopsy cohort patients, and lesion-level ROC-AUCs of 0.82±0.31 and 0.86±0.26 in detecting clinically significant cancers in radical prostatectomy and biopsy cohort patients, respectively. CorrSigNIA consistently outperformed other methods across different evaluation metrics and cohorts. In clinical settings, CorrSigNIA may be used in prostate cancer detection as well as in selective identification of indolent and aggressive components of prostate cancer, thereby improving prostate cancer care by helping guide targeted biopsies, reducing unnecessary biopsies, and selecting and planning treatment.
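As a much-simplified analogy for the learned correlated features, canonical correlation analysis can extract paired projections of MRI-side and pathology-side feature matrices that correlate maximally. This toy sketch with synthetic data illustrates only the idea, not CorrSigNIA's actual feature-learning step:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical per-voxel feature matrices from registered MRI (X) and
# histopathology (Y); rows are voxels, columns are features.
rng = np.random.default_rng(0)
shared = rng.normal(size=(1000, 3))                    # latent structure both modalities see
X = np.hstack([shared @ rng.normal(size=(3, 6)), rng.normal(size=(1000, 4))])
Y = np.hstack([shared @ rng.normal(size=(3, 5)), rng.normal(size=(1000, 2))])

cca = CCA(n_components=3).fit(X, Y)
Xc, Yc = cca.transform(X, Y)
# Correlation of each paired canonical component; high values indicate
# MRI-derived features that track pathology-derived features.
corrs = [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(3)]
print(np.round(corrs, 2))
```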
Collapse
|
26
|
Lee CC, Chang KH, Chiu FM, Ou YC, Hwang JI, Hsueh KC, Fan HC. Using IVIM Parameters to Differentiate Prostate Cancer and Contralateral Normal Tissue through Fusion of MRI Images with Whole-Mount Pathology Specimen Images by Control Point Registration Method. Diagnostics (Basel) 2021; 11:diagnostics11122340. [PMID: 34943577 PMCID: PMC8700385 DOI: 10.3390/diagnostics11122340] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Revised: 12/04/2021] [Accepted: 12/10/2021] [Indexed: 11/16/2022] Open
Abstract
The intravoxel incoherent motion (IVIM) model may enhance the clinical value of multiparametric magnetic resonance imaging (mpMRI) in the detection of prostate cancer (PCa). However, while past IVIM modeling studies have shown promise, they have also reported inconsistent results and limitations, underscoring the need to further enhance the accuracy of IVIM modeling for PCa detection. Therefore, this study utilized the control point registration toolbox function in MATLAB to fuse T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) MRI images with whole-mount pathology specimen images in order to eliminate potential bias in IVIM calculations. Sixteen PCa patients underwent prostate MRI scans before undergoing radical prostatectomies. The image fusion method was then applied in calculating the patients’ IVIM parameters. Furthermore, MRI scans were also performed on 22 healthy young volunteers in order to evaluate the changes in IVIM parameters with aging. Among the full study cohort, the f parameter was significantly increased with age, while the D* parameter was significantly decreased. Among the PCa patients, the D and ADC parameters could differentiate PCa tissue from contralateral normal tissue, while the f and D* parameters could not. The presented image fusion method also provided improved precision when comparing regions of interest side by side. However, further studies with more standardized methods are needed to further clarify the benefits of the presented approach and the different IVIM parameters in PCa characterization.
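For context, f, D*, and D come from fitting the standard bi-exponential IVIM signal model, S(b) = S0·[f·exp(-b·D*) + (1-f)·exp(-b·D)], to multi-b-value DWI. A minimal curve-fitting sketch with illustrative b-values and parameters (not the study's data or exact fitting procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, s0, f, d_star, d):
    """Standard bi-exponential IVIM signal model."""
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

# Hypothetical DWI signal across b-values (s/mm^2) for one ROI.
b = np.array([0, 10, 20, 50, 100, 200, 400, 800], float)
true = dict(s0=1.0, f=0.12, d_star=0.020, d=0.0012)      # illustrative values, mm^2/s
sig = ivim(b, **true) + np.random.default_rng(0).normal(0, 0.005, b.size)

p0 = [1.0, 0.1, 0.01, 0.001]
bounds = ([0, 0, 0.003, 0], [2, 0.5, 0.5, 0.003])        # constrain D* > D
(s0, f, d_star, d), _ = curve_fit(ivim, b, sig, p0=p0, bounds=bounds)
print(f"f = {f:.3f}, D* = {d_star:.4f}, D = {d:.5f} mm^2/s")
```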
Collapse
Affiliation(s)
- Cheng-Chun Lee
- Division of Diagnostic Radiology, Department of Medical Imaging, Tungs’ Taichung Metroharbor Hospital, Taichung 43503, Taiwan
| | - Kuang-Hsi Chang
- Department of Medical Research, Tungs’ Taichung Metroharbor Hospital, Taichung 43503, Taiwan;
- Center for General Education, China Medical University, Taichung 404, Taiwan
- General Education Center, Jen-Teh Junior College of Medicine, Nursing and Management, Miaoli 356, Taiwan
| | - Feng-Mao Chiu
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei 112, Taiwan;
| | - Yen-Chuan Ou
- Division of Urology, Department of Surgery, Tungs’ Taichung Metroharbor Hospital, Taichung 43503, Taiwan;
| | - Jen-I. Hwang
- Division of Diagnostic Radiology, Department of Medical Imaging, Tungs’ Taichung Metroharbor Hospital, Taichung 43503, Taiwan
- Department of Radiology, National Defense Medical Center, Taipei 11490, Taiwan
| | - Kuan-Chun Hsueh
- Division of General Surgery, Department of Surgery, Tungs’ Taichung Metroharbor Hospital, Taichung 43503, Taiwan;
| | - Hueng-Chuen Fan
- Department of Medical Research, Tungs’ Taichung Metroharbor Hospital, Taichung 43503, Taiwan;
- Department of Pediatrics, Tungs’ Taichung Metroharbor Hospital, Taichung 43503, Taiwan
- Department of Life Sciences, National Chung Hsing University, Taichung 40227, Taiwan
- Department of Rehabilitation, Jen-Teh Junior College of Medicine, Nursing and Management, Miaoli 356, Taiwan
- Correspondence: ; Tel.: +886-426-581-919 (ext. 4301)
| |
Collapse
|
27
|
Zimmerman BE, Johnson SL, Odéen HA, Shea JE, Factor RE, Joshi SC, Payne AH. Histology to 3D in vivo MR registration for volumetric evaluation of MRgFUS treatment assessment biomarkers. Sci Rep 2021; 11:18923. [PMID: 34556678 PMCID: PMC8460731 DOI: 10.1038/s41598-021-97309-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Accepted: 08/24/2021] [Indexed: 11/09/2022] Open
Abstract
Advances in imaging and early cancer detection have increased interest in magnetic resonance (MR) guided focused ultrasound (MRgFUS) technologies for cancer treatment. MRgFUS ablation treatments could reduce surgical risks, preserve organ tissue and function, and improve patient quality of life. However, surgical resection and histological analysis remain the gold standard to assess cancer treatment response. For non-invasive ablation therapies such as MRgFUS, the treatment response must be determined through MR imaging biomarkers. However, current MR biomarkers are inconclusive and have not been rigorously evaluated against histology via accurate registration. Existing registration methods rely on anatomical features to directly register in vivo MR and histology. For MRgFUS applications in anatomies such as liver, kidney, or breast, anatomical features that are not caused by the treatment are often insufficient to drive direct registration. We present a novel MR to histology registration workflow that utilizes intermediate imaging and does not rely on anatomical MR features being visible in histology. The presented workflow yields an overall registration accuracy of 1.00 ± 0.13 mm. The developed registration pipeline is used to evaluate a common MRgFUS treatment assessment biomarker against histology. Evaluating MR biomarkers against histology using this registration pipeline will facilitate validating novel MRgFUS biomarkers to improve treatment assessment without surgical intervention. While the presented registration technique has been evaluated in an MRgFUS ablation treatment model, it could potentially be applied in any tissue to evaluate a variety of therapeutic options.
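Registration accuracies such as the 1.00 ± 0.13 mm above are typically reported as the mean distance between transformed landmarks and their targets. A minimal sketch with hypothetical landmarks and a toy transform:

```python
import numpy as np

def landmark_error(T, src_pts, dst_pts):
    """Mean Euclidean distance between transformed source landmarks and their
    targets; T is a 3x4 rigid/affine matrix in homogeneous form [R | t]."""
    src_h = np.hstack([src_pts, np.ones((src_pts.shape[0], 1))])
    mapped = src_h @ T.T
    return np.linalg.norm(mapped - dst_pts, axis=1).mean()

# Hypothetical landmarks (mm) annotated on histology-side and MR-side volumes.
src = np.array([[10.0, 5.0, 2.0], [14.0, 8.5, 3.2], [9.5, 12.0, 4.1]])
T = np.hstack([np.eye(3), [[0.4], [-0.2], [0.1]]])   # toy transform: small translation
dst = src + [0.5, -0.1, 0.0]                         # toy targets
print(f"registration error: {landmark_error(T, src, dst):.2f} mm")
```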
Collapse
Affiliation(s)
- Blake E Zimmerman
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA; Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA
| | - Sara L Johnson
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA; Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, UT, USA
| | - Henrik A Odéen
- Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, UT, USA
| | - Jill E Shea
- Department of Surgery, University of Utah, Salt Lake City, UT, USA
| | - Rachel E Factor
- Department of Pathology, University of Utah, Salt Lake City, UT, USA
| | - Sarang C Joshi
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA; Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA
| | - Allison H Payne
- Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, UT, USA
| |
Collapse
|
28
|
Seetharaman A, Bhattacharya I, Chen LC, Kunder CA, Shao W, Soerensen SJC, Wang JB, Teslovich NC, Fan RE, Ghanouni P, Brooks JD, Too KJ, Sonn GA, Rusu M. Automated detection of aggressive and indolent prostate cancer on magnetic resonance imaging. Med Phys 2021; 48:2960-2972. [PMID: 33760269 PMCID: PMC8360053 DOI: 10.1002/mp.14855] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2020] [Revised: 01/31/2021] [Accepted: 03/16/2021] [Indexed: 01/05/2023] Open
Abstract
PURPOSE While multi-parametric magnetic resonance imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy. METHODS We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model, trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients who underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients who underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated on a pixel and lesion level in 322 patients, including six patients with normal MRI and no cancer, 23 patients who underwent radical prostatectomy, and 293 patients who underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists. RESULTS Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity that approached those of radiologists in detecting clinically significant cancer. CONCLUSIONS Our SPCNet model accurately detected aggressive prostate cancer. Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.
Collapse
Affiliation(s)
- Arun Seetharaman
- Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
| | - Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Leo C Chen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Christian A Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Simon J C Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Urology, Aarhus University Hospital, Aarhus, Denmark
| | - Jeffrey B Wang
- Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Nikola C Teslovich
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Richard E Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - James D Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Katherine J Too
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Radiology, VA Palo Alto Health Care System, Palo Alto, CA, 94304, USA
| | - Geoffrey A Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| |
Collapse
|
29
|
Shao W, Banh L, Kunder CA, Fan RE, Soerensen SJC, Wang JB, Teslovich NC, Madhuripan N, Jawahar A, Ghanouni P, Brooks JD, Sonn GA, Rusu M. ProsRegNet: A deep learning framework for registration of MRI and histopathology images of the prostate. Med Image Anal 2021; 68:101919. [PMID: 33385701 PMCID: PMC7856244 DOI: 10.1016/j.media.2020.101919] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2020] [Revised: 11/18/2020] [Accepted: 11/23/2020] [Indexed: 12/21/2022]
Abstract
Magnetic resonance imaging (MRI) is an increasingly important tool for the diagnosis and treatment of prostate cancer. However, interpretation of MRI suffers from high inter-observer variability across radiologists, thereby contributing to missed clinically significant cancers, overdiagnosed low-risk cancers, and frequent false positives. Interpretation of MRI could be greatly improved by providing radiologists with an answer key that clearly shows cancer locations on MRI. Registration of histopathology images from patients who had radical prostatectomy to pre-operative MRI allows such mapping of ground truth cancer labels onto MRI. However, traditional MRI-histopathology registration approaches are computationally expensive and require careful choices of the cost function and registration hyperparameters. This paper presents ProsRegNet, a deep learning-based pipeline to accelerate and simplify MRI-histopathology image registration in prostate cancer. Our pipeline consists of image preprocessing, estimation of affine and deformable transformations by deep neural networks, and mapping cancer labels from histopathology images onto MRI using estimated transformations. We trained our neural network using MR and histopathology images of 99 patients from our internal cohort (Cohort 1) and evaluated its performance using 53 patients from three different cohorts (an additional 12 from Cohort 1 and 41 from two public cohorts). Results show that our deep learning pipeline has achieved more accurate registration results and is at least 20 times faster than a state-of-the-art registration algorithm. This important advance will provide radiologists with highly accurate prostate MRI answer keys, thereby facilitating improvements in the detection of prostate cancer on MRI. Our code is freely available at https://github.com/pimed//ProsRegNet.
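The last step of such a pipeline, carrying cancer labels through an estimated transform onto the MRI grid, can be done with standard resampling tools. A minimal 2D sketch using scipy.ndimage.affine_transform with nearest-neighbor interpolation so the mask stays binary; the matrix and offset are arbitrary placeholders, not ProsRegNet outputs:

```python
import numpy as np
from scipy.ndimage import affine_transform

# Hypothetical cancer label mask on a histopathology slice.
label = np.zeros((256, 256), np.uint8)
label[100:140, 80:150] = 1

# scipy's affine_transform uses pull-back semantics: the matrix maps output
# (MRI) coordinates back to input (histology) coordinates.
A = np.array([[0.95, 0.05],
              [-0.04, 1.02]])
offset = np.array([3.0, -2.0])

# order=0 (nearest neighbor) keeps the mask binary instead of blurring labels.
label_on_mri = affine_transform(label, A, offset=offset, order=0,
                                output_shape=(256, 256))
print(label_on_mri.sum(), "labeled pixels mapped onto the MRI grid")
```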
Collapse
Affiliation(s)
- Wei Shao
- Department of Radiology, Stanford University, Stanford, CA 94305, USA.
| | - Linda Banh
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
| | | | - Richard E Fan
- Department of Urology, Stanford University, Stanford, CA 94305, USA
| | | | - Jeffrey B Wang
- School of Medicine, Stanford University, Stanford, CA 94305, USA
| | | | - Nikhil Madhuripan
- Department of Radiology, University of Colorado, Aurora, CO 80045, USA
| | | | - Pejman Ghanouni
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
| | - James D Brooks
- Department of Urology, Stanford University, Stanford, CA 94305, USA
| | - Geoffrey A Sonn
- Department of Radiology, Stanford University, Stanford, CA 94305, USA; Department of Urology, Stanford University, Stanford, CA 94305, USA
| | - Mirabela Rusu
- Department of Radiology, Stanford University, Stanford, CA 94305, USA.
| |
Collapse
|
30
|
Sood RR, Shao W, Kunder C, Teslovich NC, Wang JB, Soerensen SJC, Madhuripan N, Jawahar A, Brooks JD, Ghanouni P, Fan RE, Sonn GA, Rusu M. 3D Registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction. Med Image Anal 2021; 69:101957. [PMID: 33550008 DOI: 10.1016/j.media.2021.101957] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 12/23/2020] [Accepted: 01/04/2021] [Indexed: 12/15/2022]
Abstract
The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.
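The PSNR figures quoted above follow the standard definition, PSNR = 10·log10(MAX² / MSE). A minimal implementation with stand-in volumes:

```python
import numpy as np

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference volume and a
    reconstruction; metric behind comparisons such as 32.15 dB vs 30.16 dB."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
clinical = rng.random((32, 64, 64))                     # stand-in 3D MRI
recon = clinical + rng.normal(0, 0.02, clinical.shape)  # stand-in reconstruction
print(f"PSNR = {psnr(clinical, recon):.2f} dB")
```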
Collapse
Affiliation(s)
- Rewa R Sood
- Department of Electrical Engineering, Stanford University, 350 Jane Stanford Way, Stanford, CA 94305, USA
| | - Wei Shao
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Christian Kunder
- Department of Pathology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Nikola C Teslovich
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Jeffrey B Wang
- Stanford School of Medicine, 291 Campus Drive, Stanford, CA 94305, USA
| | - Simon J C Soerensen
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Urology, Aarhus University Hospital, Aarhus, Denmark
| | - Nikhil Madhuripan
- Department of Radiology, University of Colorado, Aurora, CO 80045, USA
| | | | - James D Brooks
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Pejman Ghanouni
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Richard E Fan
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Geoffrey A Sonn
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Mirabela Rusu
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA.
| |
Collapse
|