1. Huang J, Zhang X, Jin R, Xu T, Jin Z, Shen M, Lv F, Chen J, Liu J. Wavelet-based selection-and-recalibration network for Parkinson's disease screening in OCT images. Comput Methods Programs Biomed 2024; 256:108368. [PMID: 39154408] [DOI: 10.1016/j.cmpb.2024.108368]
Abstract
BACKGROUND AND OBJECTIVE: Parkinson's disease (PD) is one of the most prevalent neurodegenerative brain diseases worldwide, so accurate PD screening is crucial for early clinical intervention and treatment. Recent clinical research indicates that pathological changes, such as in the texture and thickness of the retinal layers, can serve as biomarkers for clinical PD diagnosis based on optical coherence tomography (OCT) images. However, the pathological manifestations of PD in the retinal layers are subtle compared with the more salient lesions associated with retinal diseases. METHODS: Inspired by textural edge feature extraction in frequency-domain learning, we explore an approach to enhance the distinction between the feature distributions in the retinal layers of PD cases and healthy controls. In this paper, we introduce a simple yet novel wavelet-based selection and recalibration module that effectively enhances the feature representations of a deep neural network by aggregating unique clinical properties, such as the retinal layers, in each frequency band. We combine this module with the residual block to form a deep network named the Wavelet-based Selection and Recalibration Network (WaveSRNet) for automatic PD screening. RESULTS: Extensive experiments on a clinical PD-OCT dataset and two publicly available datasets demonstrate that our approach outperforms state-of-the-art methods. Visualization analyses and ablation studies are conducted to enhance the explainability of WaveSRNet's decision-making process. CONCLUSIONS: Our results suggest a potential role for the retina as an assessment tool for PD. Visual analysis shows that PD-related elements include not only certain retinal layers but also the location of the fovea in OCT images.
Affiliation(s)
- Jingqi Huang
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
- Xiaoqing Zhang
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China; Center for High Performance Computing and Shenzhen Key Laboratory of Intelligent Bioinformatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Richu Jin
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
- Tao Xu
- The State Key Laboratory of Ophthalmology, Optometry and Vision Science, Wenzhou Medical University, Wenzhou, Zhejiang, China; The Oujiang Laboratory; The Affiliated Eye Hospital, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, China
- Zi Jin
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Meixiao Shen
- National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Fan Lv
- The Oujiang Laboratory; The Affiliated Eye Hospital, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Jiangfan Chen
- The State Key Laboratory of Ophthalmology, Optometry and Vision Science, Wenzhou Medical University, Wenzhou, Zhejiang, China; The Oujiang Laboratory; The Affiliated Eye Hospital, Wenzhou Medical University, 270 Xueyuan Road, Wenzhou, Zhejiang, China
- Jiang Liu
- Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China; The State Key Laboratory of Ophthalmology, Optometry and Vision Science, Wenzhou Medical University, Wenzhou, Zhejiang, China; Singapore Eye Research Institute, 169856, Singapore.
2. Li H, Liu H, von Busch H, Grimm R, Huisman H, Tong A, Winkel D, Penzkofer T, Shabunin I, Choi MH, Yang Q, Szolar D, Shea S, Coakley F, Harisinghani M, Oguz I, Comaniciu D, Kamen A, Lou B. Deep Learning-based Unsupervised Domain Adaptation via a Unified Model for Prostate Lesion Detection Using Multisite Biparametric MRI Datasets. Radiol Artif Intell 2024; 6:e230521. [PMID: 39166972] [DOI: 10.1148/ryai.230521]
Abstract
Purpose: To determine whether an unsupervised domain adaptation (UDA) method with generated images improves the performance of a supervised learning (SL) model for prostate cancer (PCa) detection using multisite biparametric (bp) MRI datasets. Materials and Methods: This retrospective study included data from 5,150 patients (14,191 samples) collected across nine different imaging centers. A novel UDA method using a unified generative model was developed for PCa detection using multisite bpMRI datasets. This method translates diffusion-weighted imaging (DWI) acquisitions, including apparent diffusion coefficient (ADC) and individual diffusion-weighted (DW) images acquired using various b values, to align with the style of images acquired using b values recommended by Prostate Imaging Reporting and Data System (PI-RADS) guidelines. The generated ADC and DW images replace the original images for PCa detection. An independent set of 1,692 test cases (2,393 samples) was used for evaluation. The area under the receiver operating characteristic curve (AUC) was used as the primary metric, and statistical analysis was performed via bootstrapping. Results: For all test cases, the AUC values for the baseline SL and UDA methods were 0.73 and 0.79 (P < .001), respectively, for PCa lesions with a PI-RADS score of 3 or greater, and 0.77 and 0.80 (P < .001) for lesions with PI-RADS scores of 4 or greater. In the 361 test cases under the most unfavorable image acquisition setting, the AUC values for baseline SL and UDA were 0.49 and 0.76 (P < .001) for lesions with PI-RADS scores of 3 or greater and 0.50 and 0.77 (P < .001) for lesions with PI-RADS scores of 4 or greater. Conclusion: UDA with generated images improved the performance of SL methods in PCa lesion detection across multisite datasets with various b values, especially for images acquired with significant deviations from the PI-RADS-recommended DWI protocol (e.g., with an extremely high b value). Keywords: Prostate Cancer Detection, Multisite, Unsupervised Domain Adaptation, Diffusion-weighted Imaging, b Value. Supplemental material is available for this article. © RSNA, 2024.
Affiliation(s)
- Hao Li, Han Liu, Heinrich von Busch, Robert Grimm, Henkjan Huisman, Angela Tong, David Winkel, Tobias Penzkofer, Ivan Shabunin, Moon Hyung Choi, Qingsong Yang, Dieter Szolar, Steven Shea, Fergus Coakley, Mukesh Harisinghani, Ipek Oguz, Dorin Comaniciu, Ali Kamen, Bin Lou
- From Digital Technology and Innovation, Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540 (H. Li, H. Liu, D.C., A.K., B.L.); Diagnostic Imaging, Siemens Healthineers, Erlangen, Bavaria, Germany (H.v.B., R.G.); Vanderbilt University, Nashville, Tenn (H. Li, H. Liu, I.O.); Radboud University Medical Center, Nijmegen, the Netherlands (H.H.); New York University, New York, NY (A.T.); Universitätsspital Basel, Basel, Switzerland (D.W.); Charité, Universitätsmedizin Berlin, Berlin, Germany (T.P.); Patero Clinic, Moscow, Russia (I.S.); Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea (M.H.C.); Department of Radiology, Changhai Hospital of Shanghai, Shanghai, China (Q.Y.); Diagnostikum Graz Süd-West, Graz, Austria (D.S.); Department of Radiology, Loyola University Medical Center, Maywood, Ill (S.S.); Department of Diagnostic Radiology, Oregon Health and Science University School of Medicine, Portland, Ore (F.C.); and Massachusetts General Hospital, Boston, Mass (M.H.)
3. Magoulianitis V, Yang J, Yang Y, Xue J, Kaneko M, Cacciamani G, Abreu A, Duddalwar V, Kuo CCJ, Gill IS, Nikias C. PCa-RadHop: A transparent and lightweight feed-forward method for clinically significant prostate cancer segmentation. Comput Med Imaging Graph 2024; 116:102408. [PMID: 38908295] [DOI: 10.1016/j.compmedimag.2024.102408]
Abstract
Prostate cancer is one of the most frequently occurring cancers in men, with a low survival rate if not diagnosed early. PI-RADS reading has a high false positive rate, thus increasing the incurred diagnostic costs and patient discomfort. Deep learning (DL) models achieve high segmentation performance, although they require a large model size and complexity. DL models also lack feature interpretability and are perceived as "black boxes" in the medical field. The PCa-RadHop pipeline is proposed in this work, aiming to provide a more transparent feature extraction process using a linear model. It adopts the recently introduced Green Learning (GL) paradigm, which offers a small model size and low complexity. PCa-RadHop consists of two stages: Stage-1 extracts data-driven radiomics features from the bi-parametric Magnetic Resonance Imaging (bp-MRI) input and predicts an initial heatmap. To reduce the false positive rate, a subsequent Stage-2 is introduced to refine the predictions by including more contextual information and radiomics features from each already detected Region of Interest (ROI). Experiments on the largest publicly available dataset, PI-CAI, show a competitive performance of the proposed method among other DL models, achieving an area under the curve (AUC) of 0.807 on a cohort of 1,000 patients. Moreover, PCa-RadHop maintains an orders-of-magnitude smaller model size and complexity.
Affiliation(s)
- Vasileios Magoulianitis
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA.
- Jiaxin Yang
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Yijing Yang
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Jintang Xue
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Masatomo Kaneko
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- Giovanni Cacciamani
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- Andre Abreu
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Vinay Duddalwar
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA; Department of Radiology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- C-C Jay Kuo
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Inderbir S Gill
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- Chrysostomos Nikias
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
4. Rodrigues NM, Almeida JGD, Rodrigues A, Vanneschi L, Matos C, Lisitskaya MV, Uysal A, Silva S, Papanikolaou N. Deep Learning Features Can Improve Radiomics-Based Prostate Cancer Aggressiveness Prediction. JCO Clin Cancer Inform 2024; 8:e2300180. [PMID: 39292984] [DOI: 10.1200/cci.23.00180]
Abstract
PURPOSE: Emerging evidence suggests that the use of artificial intelligence can assist in the timely detection and optimization of the therapeutic approach in patients with prostate cancer. The conventional perspective on radiomics, encompassing segmentation and the extraction of radiomic features, considers it an independent and sequential process. However, it is not necessary to adhere to this viewpoint. In this study, we show that besides generating masks from which radiomic features can be extracted, prostate segmentation and reconstruction models provide valuable information in their feature space, which can improve the quality of radiomic signature models for disease aggressiveness classification. MATERIALS AND METHODS: We perform 2,244 experiments with deep learning features extracted from 13 different models trained using different anatomic zones and characterize how modeling decisions, such as deep feature aggregation and dimensionality reduction, affect performance. RESULTS: While models using deep features from the full gland and radiomic features consistently lead to improved disease aggressiveness prediction performance, others are detrimental. Our results suggest that the use of deep features can be beneficial, but an appropriate and comprehensive assessment is necessary to ensure that their inclusion does not harm predictive performance. CONCLUSION: The study findings reveal that incorporating deep features derived from autoencoder models trained to reconstruct the full prostate gland (both zonal models show worse performance than radiomics-only models), combined with radiomic features, often leads to a statistically significant increase in model performance for disease aggressiveness classification. Additionally, the results demonstrate that the choice of feature selection is key to achieving good performance, with principal component analysis (PCA) and PCA + relief being the best approaches, and that there is no clear difference between the three proposed latent representation extraction techniques.
Affiliation(s)
- Nuno M Rodrigues
- LASIGE, Department of Informatics, Faculty of Sciences, University of Lisbon, Lisbon, Portugal
- Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal
- Ana Rodrigues
- Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal
- Faculty of Medicine, University of Porto, Porto, Portugal
- Leonardo Vanneschi
- NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, Lisboa, Portugal
- Celso Matos
- Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal
- Maria V Lisitskaya
- Cand. of Sci. (Med.), Radiologist at Radiology Department with CT and MRI, Medical Research and Educational Center, Lomonosov Moscow State University, Moscow, Russia
- Aycan Uysal
- Gulhane Medical School, University of Health Sciences, Ankara, Turkey
- Sara Silva
- LASIGE, Department of Informatics, Faculty of Sciences, University of Lisbon, Lisbon, Portugal
5. Rajagopal A, Westphalen AC, Velarde N, Simko JP, Nguyen H, Hope TA, Larson PEZ, Magudia K. Mixed Supervision of Histopathology Improves Prostate Cancer Classification From MRI. IEEE Trans Med Imaging 2024; 43:2610-2622. [PMID: 38547000] [PMCID: PMC11361281] [DOI: 10.1109/tmi.2024.3382909]
Abstract
Non-invasive prostate cancer classification from MRI has the potential to revolutionize patient care by providing early detection of clinically significant disease, but it has thus far shown limited positive predictive value. To address this, we present an image-based deep learning method to predict clinically significant prostate cancer from screening MRI in patients who subsequently underwent biopsy, with results ranging from benign pathology to the highest-grade tumors. Specifically, we demonstrate that mixed supervision via diverse histopathological ground truth improves classification performance despite the cost of reduced concordance with image-based segmentation. Whereas prior approaches have utilized pathology results derived from targeted biopsies and whole-mount prostatectomy as ground truth to strongly supervise the localization of clinically significant cancer, our approach also utilizes weak supervision signals extracted from nontargeted systematic biopsies with regional localization to improve overall performance. Our key innovation is performing regression by distribution rather than simply by value, enabling the use of additional pathology findings traditionally ignored by deep learning strategies. We evaluated our model on a dataset of 973 (testing n = 198) multi-parametric prostate MRI exams collected at UCSF from 2016-2019, followed by MRI/ultrasound fusion (targeted) biopsy and systematic (nontargeted) biopsy of the prostate gland, demonstrating that deep networks trained with mixed supervision of histopathology can feasibly exceed the performance of the Prostate Imaging-Reporting and Data System (PI-RADS) clinical standard for prostate MRI interpretation (71.6% vs 66.7% balanced accuracy and 0.724 vs 0.716 AUC).
6. Wu S, Guo C, Litifu A, Wang Z. 3D residual attention hierarchical fusion for real-time detection of the prostate capsule. BMC Med Imaging 2024; 24:157. [PMID: 38914956] [PMCID: PMC11194884] [DOI: 10.1186/s12880-024-01336-y]
Abstract
BACKGROUND: For prostate electrosurgery, where real-time surveillance screens are relied upon during operations, manual identification of the prostate capsule remains the primary method. With the need for rapid and accurate detection becoming increasingly urgent, we set out to develop a deep learning approach for detecting the prostate capsule in endoscopic optical images. METHODS: Our method utilizes the Simple, Parameter-Free Attention Module (SimAM) residual attention fusion module to enhance the extraction of texture and detail information, enabling better feature extraction capabilities. This enhanced detail information is then hierarchically transferred from lower to higher levels to aid the extraction of semantic information. By employing a forward feature-by-feature hierarchical fusion network based on the 3D residual attention mechanism, we propose an improved single-shot multibox detector model. RESULTS: Our proposed model achieves a detection precision of 83.12% and a speed of 0.014 ms on an NVIDIA RTX 2060, demonstrating its effectiveness in rapid detection. Furthermore, when compared with various existing methods, including Faster Region-based Convolutional Neural Network (Faster R-CNN), Single Shot Multibox Detector (SSD), EfficientDet and others, our method, the Attention-based Feature Fusion Single Shot Multibox Detector (AFFSSD), stands out with the highest mean Average Precision (mAP) and faster speed, ranking only below You Only Look Once version 7 (YOLOv7). CONCLUSIONS: This network excels at extracting regional features from images while retaining the spatial structure, facilitating the rapid detection of medical images.
Affiliation(s)
- Shixiao Wu
- School of Information Engineering, Wuhan Business University, 816 Dongfeng Avenue, Caidian District, Wuhan, Hubei, 430056, China
- Chengcheng Guo
- School of Electronic Information, Wuhan University, 129 Luoyu Road, Hongshan District, Wuhan, Hubei, 430072, China.
- Ayixiamu Litifu
- School of Physics and Electronic Information, Xinjiang Normal University, Urumqi, China
- Zhiwei Wang
- Department of Cardiothoracic Surgery, People's Hospital of Wuhan University, 99 Zhangzhidong Road, Wuchang District, Wuhan, Hubei, 430060, China
7. Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024; 34:180-196. [PMID: 36376203] [PMCID: PMC11156786] [DOI: 10.1016/j.zemedi.2022.10.005]
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. However, in interventional radiotherapy (brachytherapy) deep learning is still in an early phase. In this review, we first investigated and scrutinised the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarised the most recent developments. For better understanding, we provide explanations of key terms and approaches to solving common deep learning problems. To reproduce the results of deep learning algorithms, both source code and training data must be available. Therefore, a second focus of this work is on the analysis of the availability of open source, open data and open models. In our analysis, we were able to show that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing with the years, partly self-propelled but also influenced by closely related fields. Open source, data and models are growing in number but are still scarce and unevenly distributed among different research groups. The reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. The conclusion of our analysis is that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement when it comes to reproducible results and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany.
- Ilias Sachpazidis
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
8. Yu C, Pei H. Dynamic Weighting Translation Transfer Learning for Imbalanced Medical Image Classification. Entropy (Basel) 2024; 26:400. [PMID: 38785649] [PMCID: PMC11119260] [DOI: 10.3390/e26050400]
Abstract
Medical image diagnosis using deep learning has shown significant promise in clinical medicine. However, it often encounters two major difficulties in real-world applications: (1) domain shift, which invalidates the trained model on new datasets, and (2) class imbalance problems leading to model biases towards majority classes. To address these challenges, this paper proposes a transfer learning solution, named Dynamic Weighting Translation Transfer Learning (DTTL), for imbalanced medical image classification. The approach is grounded in information and entropy theory and comprises three modules: Cross-domain Discriminability Adaptation (CDA), Dynamic Domain Translation (DDT), and Balanced Target Learning (BTL). CDA connects discriminative feature learning between source and target domains using a synthetic discriminability loss and a domain-invariant feature learning loss. The DDT unit develops a dynamic translation process for imbalanced classes between two domains, utilizing a confidence-based selection approach to select the most useful synthesized images to create a pseudo-labeled balanced target domain. Finally, the BTL unit performs supervised learning on the reassembled target set to obtain the final diagnostic model. This paper delves into maximizing the entropy of class distributions while simultaneously minimizing the cross-entropy between the source and target domains to reduce domain discrepancies. By incorporating entropy concepts into our framework, our method not only significantly enhances medical image classification in practical settings but also innovates the application of entropy and information theory within the realms of deep learning and medical image processing. Extensive experiments demonstrate that DTTL achieves the best performance compared with existing state-of-the-art methods for imbalanced medical image classification tasks.
Affiliation(s)
- Chenglin Yu
- School of Electronic & Information Engineering and Communication Engineering, Guangzhou City University of Technology, Guangzhou 510800, China
- Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Unmanned Aerial Vehicle Systems Engineering Technology Research Center of Guangdong, South China University of Technology, Guangzhou 510640, China
- Hailong Pei
- Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Unmanned Aerial Vehicle Systems Engineering Technology Research Center of Guangdong, School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China
9. Conze PH, Andrade-Miranda G, Le Meur Y, Cornec-Le Gall E, Rousseau F. Dual-task kidney MR segmentation with transformers in autosomal-dominant polycystic kidney disease. Comput Med Imaging Graph 2024; 113:102349. [PMID: 38330635] [DOI: 10.1016/j.compmedimag.2024.102349]
Abstract
Autosomal-dominant polycystic kidney disease is a prevalent genetic disorder characterized by the development of renal cysts, leading to kidney enlargement and renal failure. Accurate measurement of total kidney volume through polycystic kidney segmentation is crucial to assess disease severity, predict progression and evaluate treatment effects. Traditional manual segmentation suffers from intra- and inter-expert variability, prompting the exploration of automated approaches. In recent years, convolutional neural networks have been employed for polycystic kidney segmentation from magnetic resonance images. However, the use of Transformer-based models, which have shown remarkable performance in a wide range of computer vision and medical image analysis tasks, remains unexplored in this area. With their self-attention mechanism, Transformers excel at capturing global context information, which is crucial for accurate organ delineation. In this paper, we evaluate and compare various convolutional-based, Transformer-based, and hybrid convolutional/Transformer-based networks for polycystic kidney segmentation. Additionally, we propose a dual-task learning scheme, where a common feature extractor is followed by per-kidney decoders, towards better generalizability and efficiency. We extensively evaluate various architectures and learning schemes on a heterogeneous magnetic resonance imaging dataset collected from 112 patients with polycystic kidney disease. Our results highlight the effectiveness of Transformer-based models for polycystic kidney segmentation and the relevance of exploiting dual-task learning to improve segmentation accuracy and mitigate data scarcity issues. A particularly promising ability to accurately delineate polycystic kidneys is shown in the presence of heterogeneous cyst distributions and adjacent cyst-containing organs. This work contributes to the advancement of reliable delineation methods in nephrology, paving the way for a broad spectrum of clinical applications.
Affiliation(s)
- Pierre-Henri Conze
- IMT Atlantique, LaTIM UMR 1101, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, IBRBS, 22 rue Camille Desmoulins, 29200 Brest, France.
- Yannick Le Meur
- Department of Nephrology, University Hospital of Brest, bd Tanguy Prigent, 29200 Brest, France; LBAI UMR 1227, Inserm, 9 rue Félix le Dantec, 29200 Brest, France
- Emilie Cornec-Le Gall
- Department of Nephrology, University Hospital of Brest, bd Tanguy Prigent, 29200 Brest, France; UMR 1078, Inserm, IBRBS, 22 rue Camille Desmoulins, 29238 Brest, France
- François Rousseau
- IMT Atlantique, LaTIM UMR 1101, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, IBRBS, 22 rue Camille Desmoulins, 29200 Brest, France
10. Ferrero A, Ghelichkhan E, Manoochehri H, Ho MM, Albertson DJ, Brintz BJ, Tasdizen T, Whitaker RT, Knudsen BS. HistoEM: A Pathologist-Guided and Explainable Workflow Using Histogram Embedding for Gland Classification. Mod Pathol 2024; 37:100447. [PMID: 38369187] [DOI: 10.1016/j.modpat.2024.100447]
Abstract
Pathologists have, over several decades, developed criteria for diagnosing and grading prostate cancer. However, this knowledge has not, so far, been included in the design of convolutional neural networks (CNNs) for prostate cancer detection and grading. Further, it is not known whether the features learned by machine-learning algorithms coincide with the diagnostic features used by pathologists. We propose a framework that enforces algorithms to learn the cellular and subcellular differences between benign and cancerous prostate glands in digital slides from hematoxylin and eosin-stained tissue sections. After accurate gland segmentation and exclusion of the stroma, the central component of the pipeline, named HistoEM, utilizes a histogram embedding of features from the latent space of the CNN encoder. Each gland is represented by 128 feature-wise histograms that provide the input into a second network for benign vs cancer classification of the whole gland. Cancer glands are further processed by a U-Net-structured network to separate low-grade from high-grade cancer. Our model demonstrates performance similar to other state-of-the-art prostate cancer grading models with gland-level resolution. To understand the features learned by HistoEM, we first rank features based on the distance between benign and cancer histograms and visualize the tissue origins of the 2 most important features. A heatmap of pixel activation by each feature is generated using Grad-CAM and overlaid on nuclear segmentation outlines. We conclude that HistoEM, similar to pathologists, uses nuclear features for the detection of prostate cancer. Altogether, this novel approach can be broadly deployed to visualize computer-learned features in histopathology images.
Affiliation(s)
- Alessandro Ferrero
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Elham Ghelichkhan
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Hamid Manoochehri
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Man Minh Ho
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Tolga Tasdizen
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Ross T Whitaker
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
11. Corponi F, Li BM, Anmella G, Mas A, Pacchiarotti I, Valentí M, Grande I, Benabarre A, Garriga M, Vieta E, Lawrie SM, Whalley HC, Hidalgo-Mazzei D, Vergari A. Automated mood disorder symptoms monitoring from multivariate time-series sensory data: getting the full picture beyond a single number. Transl Psychiatry 2024; 14:161. [PMID: 38531865] [DOI: 10.1038/s41398-024-02876-1]
Abstract
Mood disorders (MDs) are among the leading causes of disease burden worldwide. Limited availability of specialized care remains a major bottleneck, thus hindering pre-emptive interventions. MDs manifest with changes in mood, sleep, and motor activity, observable in ecological physiological recordings thanks to recent advances in wearable technology. Therefore, near-continuous and passive collection of physiological data from wearables in daily life, analyzable with machine learning (ML), could mitigate this problem, bringing MD monitoring outside the clinician's office. Previous works predict a single label, either the disease state or a psychometric scale total score. However, clinical practice suggests that the same label may underlie different symptom profiles, requiring specific treatments. Here we bridge this gap by proposing a new task: inferring all items in the HDRS and YMRS, the two most widely used standardized scales for assessing MD symptoms, using physiological data from wearables. To that end, we develop a deep learning pipeline to score the symptoms of a large cohort of MD patients and show that agreement between predictions and assessments by an expert clinician is clinically significant (quadratic Cohen's κ and macro-average F1 score both of 0.609). While doing so, we investigate several solutions to the ML challenges associated with this task, including multi-task learning, class imbalance, ordinal target variables, and subject-invariant representations. Lastly, we illustrate the importance of testing on out-of-distribution samples.
Affiliation(s)
- Filippo Corponi
- School of Informatics, University of Edinburgh, Edinburgh, UK.
- Bryan M Li
- School of Informatics, University of Edinburgh, Edinburgh, UK
- Gerard Anmella
- Bipolar and Depressive Disorders Unit, Department of Psychiatry and Psychology, Hospital Clínic de Barcelona, c. Villarroel, 170, 08036, Barcelona, Spain
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), c. Villarroel, 170, 08036, Barcelona, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Instituto de Salud Carlos III, Madrid, Spain
- Departament de Medicina, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona (UB), c. Casanova, 143, 08036, Barcelona, Spain
- Ariadna Mas
- Bipolar and Depressive Disorders Unit, Department of Psychiatry and Psychology, Hospital Clínic de Barcelona, c. Villarroel, 170, 08036, Barcelona, Spain
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), c. Villarroel, 170, 08036, Barcelona, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Instituto de Salud Carlos III, Madrid, Spain
- Departament de Medicina, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona (UB), c. Casanova, 143, 08036, Barcelona, Spain
- Isabella Pacchiarotti
- Bipolar and Depressive Disorders Unit, Department of Psychiatry and Psychology, Hospital Clínic de Barcelona, c. Villarroel, 170, 08036, Barcelona, Spain
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), c. Villarroel, 170, 08036, Barcelona, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Instituto de Salud Carlos III, Madrid, Spain
- Departament de Medicina, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona (UB), c. Casanova, 143, 08036, Barcelona, Spain
- Marc Valentí
- Bipolar and Depressive Disorders Unit, Department of Psychiatry and Psychology, Hospital Clínic de Barcelona, c. Villarroel, 170, 08036, Barcelona, Spain
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), c. Villarroel, 170, 08036, Barcelona, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Instituto de Salud Carlos III, Madrid, Spain
- Departament de Medicina, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona (UB), c. Casanova, 143, 08036, Barcelona, Spain
| | - Iria Grande
- Bipolar and Depressive Disorders Unit, Department of Psychiatry and Psychology, Hospital Clínic de Barcelona, c. Villarroel, 170, 08036, Barcelona, Spain
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), c. Villarroel, 170, 08036, Barcelona, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Instituto de Salud Carlos III, Madrid, Spain
- Departament de Medicina, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona (UB), c. Casanova, 143, 08036, Barcelona, Spain
| | - Antoni Benabarre
- Bipolar and Depressive Disorders Unit, Department of Psychiatry and Psychology, Hospital Clínic de Barcelona, c. Villarroel, 170, 08036, Barcelona, Spain
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), c. Villarroel, 170, 08036, Barcelona, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Instituto de Salud Carlos III, Madrid, Spain
- Departament de Medicina, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona (UB), c. Casanova, 143, 08036, Barcelona, Spain
| | - Marina Garriga
- Bipolar and Depressive Disorders Unit, Department of Psychiatry and Psychology, Hospital Clínic de Barcelona, c. Villarroel, 170, 08036, Barcelona, Spain
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), c. Villarroel, 170, 08036, Barcelona, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Instituto de Salud Carlos III, Madrid, Spain
- Departament de Medicina, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona (UB), c. Casanova, 143, 08036, Barcelona, Spain
| | - Eduard Vieta
- Bipolar and Depressive Disorders Unit, Department of Psychiatry and Psychology, Hospital Clínic de Barcelona, c. Villarroel, 170, 08036, Barcelona, Spain
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), c. Villarroel, 170, 08036, Barcelona, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Instituto de Salud Carlos III, Madrid, Spain
- Departament de Medicina, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona (UB), c. Casanova, 143, 08036, Barcelona, Spain
| | - Stephen M Lawrie
- Division of Psychiatry, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
| | - Heather C Whalley
- Division of Psychiatry, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Generation Scotland, Institute for Genetics and Cancer, University of Edinburgh, Edinburgh, UK
| | - Diego Hidalgo-Mazzei
- Bipolar and Depressive Disorders Unit, Department of Psychiatry and Psychology, Hospital Clínic de Barcelona, c. Villarroel, 170, 08036, Barcelona, Spain
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), c. Villarroel, 170, 08036, Barcelona, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Instituto de Salud Carlos III, Madrid, Spain
- Departament de Medicina, Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona (UB), c. Casanova, 143, 08036, Barcelona, Spain
| | - Antonio Vergari
- School of Informatics, University of Edinburgh, Edinburgh, UK
| |
Collapse
|
12
|
Zheng H, Hung ALY, Miao Q, Song W, Scalzo F, Raman SS, Zhao K, Sung K. AtPCa-Net: anatomical-aware prostate cancer detection network on multi-parametric MRI. Sci Rep 2024; 14:5740. [PMID: 38459100 PMCID: PMC10923873 DOI: 10.1038/s41598-024-56405-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2023] [Accepted: 03/06/2024] [Indexed: 03/10/2024] Open
Abstract
Multi-parametric MRI (mpMRI) is widely used for prostate cancer (PCa) diagnosis. Deep learning models show good performance in detecting PCa on mpMRI, but domain-specific, PCa-related anatomical information is sometimes overlooked and not fully explored even by state-of-the-art models, potentially causing suboptimal PCa detection performance. Symmetry-related anatomical information is commonly used when distinguishing PCa lesions from other visually similar but benign prostate tissue. In addition, different combinations of mpMRI findings are used to evaluate the aggressiveness of PCa for abnormal findings located in different prostate zones. In this study, we investigate these domain-specific anatomical properties in PCa diagnosis and how to incorporate them into a deep learning framework to improve detection performance. We propose an anatomical-aware PCa detection network (AtPCa-Net) for PCa detection on mpMRI. Experiments show that AtPCa-Net better utilizes the anatomy-related information, and the proposed anatomical-aware designs improve overall model performance on both PCa detection and patient-level classification.
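One simple way to expose left-right symmetry cues to a convolutional network, sketched here as a hedged illustration rather than the AtPCa-Net design, is to stack each slice with its mirrored copy so that asymmetric signal stands out as a channel difference:

    import numpy as np

    def add_mirror_channel(slice_2d: np.ndarray) -> np.ndarray:
        """Stack an axial slice with its left-right mirror as a 2-channel input.

        Assumes the prostate is roughly centered so that mirroring across the
        midline aligns the two lobes; a real pipeline would first center-crop.
        """
        mirrored = np.flip(slice_2d, axis=1)  # flip the left-right axis
        return np.stack([slice_2d, mirrored], axis=0)  # (2, H, W)

    x = np.random.rand(128, 128).astype(np.float32)  # placeholder ADC slice
    print(add_mirror_channel(x).shape)  # (2, 128, 128)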
Collapse
Affiliation(s)
- Haoxin Zheng
- Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA.
- Computer Science, University of California, Los Angeles, Los Angeles, 90095, USA.
| | - Alex Ling Yu Hung
- Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
- Computer Science, University of California, Los Angeles, Los Angeles, 90095, USA
| | - Qi Miao
- Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
| | - Weinan Song
- Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, 90095, USA
| | - Fabien Scalzo
- Computer Science, University of California, Los Angeles, Los Angeles, 90095, USA
- The Seaver College, Pepperdine University, Los Angeles, 90363, USA
| | - Steven S Raman
- Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
| | - Kai Zhao
- Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
| | - Kyunghyun Sung
- Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
| |
Collapse
|
13
|
Ramacciotti LS, Hershenhouse JS, Mokhtar D, Paralkar D, Kaneko M, Eppler M, Gill K, Mogoulianitis V, Duddalwar V, Abreu AL, Gill I, Cacciamani GE. Comprehensive Assessment of MRI-based Artificial Intelligence Frameworks Performance in the Detection, Segmentation, and Classification of Prostate Lesions Using Open-Source Databases. Urol Clin North Am 2024; 51:131-161. [PMID: 37945098 DOI: 10.1016/j.ucl.2023.08.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2023]
Abstract
Numerous MRI-based artificial intelligence (AI) frameworks have been designed for prostate cancer lesion detection, segmentation, and classification on MRI, motivated by the intrareader and interreader variability inherent to traditional interpretation. Open-source data sets have been released with the intention of providing freely available MRIs for testing diverse AI frameworks on automated or semiautomated tasks. Here, an in-depth assessment of the performance of MRI-based AI frameworks for detecting, segmenting, and classifying prostate lesions using open-source databases was performed. Among 17 data sets, 12 were specific to prostate cancer detection/classification, with 52 studies meeting the inclusion criteria.
Collapse
Affiliation(s)
- Lorenzo Storino Ramacciotti
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Jacob S Hershenhouse
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Daniel Mokhtar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Divyangi Paralkar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Masatomo Kaneko
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Urology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
| | - Michael Eppler
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Karanvir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Vasileios Mogoulianitis
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
| | - Vinay Duddalwar
- Department of Radiology, University of Southern California, Los Angeles, CA, USA
| | - Andre L Abreu
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA
| | - Inderbir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Giovanni E Cacciamani
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA.
| |
Collapse
|
14
|
Yan W, Chiu B, Shen Z, Yang Q, Syer T, Min Z, Punwani S, Emberton M, Atkinson D, Barratt DC, Hu Y. Combiner and HyperCombiner networks: Rules to combine multimodality MR images for prostate cancer localisation. Med Image Anal 2024; 91:103030. [PMID: 37995627 DOI: 10.1016/j.media.2023.103030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 09/22/2023] [Accepted: 11/13/2023] [Indexed: 11/25/2023]
Abstract
One of the distinct characteristics of radiologists reading multiparametric prostate MR scans, using reporting systems such as PI-RADS v2.1, is that they score the individual MR modalities, including T2-weighted, diffusion-weighted, and dynamic contrast-enhanced imaging, and then combine these modality-specific scores using standardised decision rules to predict the likelihood of clinically significant cancer. This work aims to demonstrate that it is feasible for low-dimensional parametric models to model such decision rules in the proposed Combiner networks, without compromising the accuracy of predicting radiologic labels. First, we demonstrate that either a linear mixture model or a nonlinear stacking model is sufficient to model PI-RADS decision rules for localising prostate cancer. Second, parameters of these combining models are proposed as hyperparameters, weighting independent representations of individual image modalities in the Combiner network training, as opposed to an end-to-end modality ensemble. A HyperCombiner network is developed to train a single image segmentation network that can be conditioned on these hyperparameters during inference for much-improved efficiency. Experimental results based on 751 cases from 651 patients compare the proposed rule-modelling approaches with other commonly adopted end-to-end networks, in this downstream application of automating radiologist labelling on multiparametric MR. By acquiring and interpreting the modality combining rules, specifically the linear weights or odds ratios associated with individual image modalities, three clinical applications are quantitatively presented and contextualised in the prostate cancer segmentation application, including modality availability assessment, importance quantification, and rule discovery.
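A hedged sketch of the linear-mixture idea described above (not the paper's actual Combiner implementation): per-modality cancer-probability maps are blended with weights that can be treated as hyperparameters before thresholding; all names and weight values are illustrative.

    import numpy as np

    def combine_modalities(p_t2, p_dwi, p_dce, weights=(0.3, 0.5, 0.2)):
        """Linear mixture of per-modality probability maps.

        p_* are arrays of identical shape holding voxel-wise cancer
        probabilities predicted from each modality independently. The
        weights play the role of a PI-RADS-style decision rule and, as in
        the HyperCombiner idea, could be supplied at inference time.
        """
        w = np.asarray(weights, dtype=np.float64)
        w = w / w.sum()  # normalize so the mixture remains a probability
        return w[0] * p_t2 + w[1] * p_dwi + w[2] * p_dce

    shape = (32, 128, 128)
    p = combine_modalities(*(np.random.rand(*shape) for _ in range(3)))
    mask = p > 0.5  # final localisation by thresholding the mixture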
Collapse
Affiliation(s)
- Wen Yan
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Hong Kong China; Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK.
| | - Bernard Chiu
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Hong Kong China; Department of Physics & Computer Science, Wilfrid Laurier University, 75 University Avenue West Waterloo, Ontario N2L 3C5, Canada.
| | - Ziyi Shen
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK.
| | - Qianye Yang
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK.
| | - Tom Syer
- Centre for Medical Imaging, Division of Medicine, University College London, London W1 W 7TS, UK.
| | - Zhe Min
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK.
| | - Shonit Punwani
- Centre for Medical Imaging, Division of Medicine, University College London, London W1 W 7TS, UK.
| | - Mark Emberton
- Division of Surgery & Interventional Science, University College London, Gower St, WC1E 6BT, London, UK.
| | - David Atkinson
- Centre for Medical Imaging, Division of Medicine, University College London, London W1 W 7TS, UK.
| | - Dean C Barratt
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK.
| | - Yipeng Hu
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK.
| |
Collapse
|
15
|
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509 DOI: 10.1016/j.neunet.2023.11.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/21/2023] [Accepted: 11/04/2023] [Indexed: 11/19/2023]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissue. Detecting cancer at an early stage is therefore essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNNs) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep-learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The ultimate goal of this paper is to provide comprehensive and insightful information to researchers with a keen interest in developing CNN-based models for cancer detection.
Collapse
Affiliation(s)
- Pallabi Sharma
- School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India.
| | - Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India.
| | - Bunil Kumar Balabantaray
- Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India.
| | - M Tanveer
- Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India.
| | - Rajashree Nayak
- School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India.
| |
Collapse
|
16
|
Tsui JMG, Kehayias CE, Leeman JE, Nguyen PL, Peng L, Yang DD, Moningi S, Martin N, Orio PF, D'Amico AV, Bredfeldt JS, Lee LK, Guthier CV, King MT. Assessing the Feasibility of Using Artificial Intelligence-Segmented Dominant Intraprostatic Lesion for Focal Intraprostatic Boost With External Beam Radiation Therapy. Int J Radiat Oncol Biol Phys 2024; 118:74-84. [PMID: 37517600 DOI: 10.1016/j.ijrobp.2023.07.029] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 07/11/2023] [Accepted: 07/18/2023] [Indexed: 08/01/2023]
Abstract
PURPOSE The delineation of dominant intraprostatic gross tumor volumes (GTVs) on multiparametric magnetic resonance imaging (mpMRI) can be subject to interobserver variability. We evaluated whether deep learning artificial intelligence (AI)-segmented GTVs can provide a similar degree of intraprostatic boosting with external beam radiation therapy (EBRT) as radiation oncologist (RO)-delineated GTVs. METHODS AND MATERIALS We identified 124 patients who underwent mpMRI followed by EBRT between 2010 and 2013. A reference GTV was delineated by an RO and approved by a board-certified radiologist. We trained an AI algorithm for GTV delineation on 89 patients, and tested the algorithm on 35 patients, each with at least 1 PI-RADS (Prostate Imaging Reporting and Data System) 4 or 5 lesion (46 total lesions). We then asked 5 additional ROs to independently delineate GTVs on the test set. We compared lesion detectability and geometric accuracy of the GTVs from AI and 5 ROs against the reference GTV. Then, we generated EBRT plans (77 Gy prostate) that boosted each observer-specific GTV to 95 Gy. We compared reference GTV dose (D98%) across observers using a mixed-effects model. RESULTS On a lesion level, AI GTV exhibited a sensitivity of 82.6% and positive predictive value of 86.4%. Respective ranges among the 5 RO GTVs were 84.8% to 95.7% and 95.1% to 100.0%. Among 30 GTVs mutually identified by all observers, no significant differences in Dice coefficient were detected between AI and any of the 5 ROs. Across all patients, only 2 of 5 ROs had a reference GTV D98% that significantly differed from that of AI by 2.56 Gy (P = .02) and 3.20 Gy (P = .003). The presence of false-negative (-5.97 Gy; P < .001) but not false-positive (P = .24) lesions was associated with reference GTV D98%. CONCLUSIONS AI-segmented GTVs demonstrate potential for intraprostatic boosting, although the degree of boosting may be adversely affected by false-negative lesions. Prospective review of AI-segmented GTVs remains essential.
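As a hedged illustration of the lesion-level evaluation reported above (sensitivity and positive predictive value against reference GTVs), assuming lesion matching has already been performed; the counts are illustrative, not the study's data:

    def lesion_detection_stats(n_true_lesions: int,
                               n_predicted_lesions: int,
                               n_matched: int):
        """Sensitivity and PPV for lesion-level detection.

        n_matched counts predicted lesions that correspond to a reference
        lesion (the matching rule, e.g. any voxel overlap, is a
        study-specific choice and is assumed here).
        """
        sensitivity = n_matched / n_true_lesions    # TP / (TP + FN)
        ppv = n_matched / n_predicted_lesions       # TP / (TP + FP)
        return sensitivity, ppv

    # Illustrative numbers only: 46 reference lesions, 44 predictions, 38 matched.
    sens, ppv = lesion_detection_stats(46, 44, 38)
    print(f"sensitivity={sens:.1%}, PPV={ppv:.1%}")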
Collapse
Affiliation(s)
- James M G Tsui
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts; Department of Radiation Oncology, McGill University Health Centre, Montreal, Quebec, Canada
| | - Christopher E Kehayias
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Jonathan E Leeman
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Paul L Nguyen
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Luke Peng
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
| | - David D Yang
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Shalini Moningi
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Neil Martin
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Peter F Orio
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Anthony V D'Amico
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Jeremy S Bredfeldt
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Leslie K Lee
- Department of Radiology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Christian V Guthier
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
| | - Martin T King
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts.
| |
Collapse
|
17
|
Andrade-Miranda G, Jaouen V, Tankyevych O, Cheze Le Rest C, Visvikis D, Conze PH. Multi-modal medical Transformers: A meta-analysis for medical image segmentation in oncology. Comput Med Imaging Graph 2023; 110:102308. [PMID: 37918328 DOI: 10.1016/j.compmedimag.2023.102308] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 10/05/2023] [Accepted: 10/24/2023] [Indexed: 11/04/2023]
Abstract
Multi-modal medical image segmentation is a crucial task in oncology that enables precise localization and quantification of tumors. The aim of this work is to present a meta-analysis of the use of multi-modal medical Transformers for medical image segmentation in oncology, specifically focusing on multi-parametric MR brain tumor segmentation (BraTS2021) and head and neck tumor segmentation using PET-CT images (HECKTOR2021). The multi-modal medical Transformer architectures presented in this work exploit the idea of modality interaction schemes based on visio-linguistic representations: (i) single-stream, where modalities are jointly processed by one Transformer encoder, and (ii) multiple-stream, where the inputs are encoded separately before being jointly modeled. A total of fourteen multi-modal architectures are evaluated using different ranking strategies based on the Dice similarity coefficient (DSC) and average symmetric surface distance (ASSD) metrics. In addition, cost indicators such as the number of trainable parameters and the number of multiply-accumulate operations (MACs) are reported. The results demonstrate that multi-path hybrid CNN-Transformer models improve segmentation accuracy compared to traditional methods, but come at the cost of increased computation time and a potentially larger model size.
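A minimal PyTorch sketch of the two interaction schemes named above, assuming token sequences have already been extracted from each modality; the module sizes are illustrative and do not reproduce any surveyed architecture:

    import torch
    import torch.nn as nn

    def make_layer():
        return nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)

    class SingleStream(nn.Module):
        """Modalities are concatenated along the token axis and jointly encoded."""
        def __init__(self):
            super().__init__()
            self.encoder = make_layer()
        def forward(self, tok_a, tok_b):                   # (B, N, 64) each
            return self.encoder(torch.cat([tok_a, tok_b], dim=1))

    class MultiStream(nn.Module):
        """Each modality gets its own encoder before a joint fusion encoder."""
        def __init__(self):
            super().__init__()
            self.enc_a = make_layer()
            self.enc_b = make_layer()
            self.fusion = make_layer()
        def forward(self, tok_a, tok_b):
            fused = torch.cat([self.enc_a(tok_a), self.enc_b(tok_b)], dim=1)
            return self.fusion(fused)

    a, b = torch.randn(2, 16, 64), torch.randn(2, 16, 64)
    print(SingleStream()(a, b).shape, MultiStream()(a, b).shape)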
Collapse
Affiliation(s)
| | - Vincent Jaouen
- LaTIM UMR 1101, Inserm, Brest, France; IMT Atlantique, Brest, France.
| | - Olena Tankyevych
- LaTIM UMR 1101, Inserm, Brest, France; Nuclear Medicine, University Hospital of Poitiers, Poitiers, France.
| | - Catherine Cheze Le Rest
- LaTIM UMR 1101, Inserm, Brest, France; Nuclear Medicine, University Hospital of Poitiers, Poitiers, France.
| | | | | |
Collapse
|
18
|
Wen L, Wang S, Pan X, Liu Y. iPCa-Net: A CNN-based framework for predicting incidental prostate cancer using multiparametric MRI. Comput Med Imaging Graph 2023; 110:102309. [PMID: 37924572 DOI: 10.1016/j.compmedimag.2023.102309] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Revised: 10/27/2023] [Accepted: 10/27/2023] [Indexed: 11/06/2023]
Abstract
Incidental prostate cancer (iPCa) is an early stage of clinically significant prostate cancer (csPCa) and is typically asymptomatic, making it difficult to detect in clinical practice. The objective of this study is to predict iPCa by analyzing prostatic MRIs using a deep convolutional neural network (CNN). While CNN-based models have made significant advances in medical image analysis, the iPCa prediction task presents two challenging problems: subtle differences in MRIs that are imperceptible to the human eye, and a lower incidence rate, resulting in more pronounced sample imbalance compared to routine cancer prediction. To address these two challenges, we propose a new CNN-based framework called iPCa-Net, which is designed to jointly optimize two tasks: prostate transition zone segmentation and iPCa prediction. To evaluate the performance of our model, we construct a prostatic MRI dataset comprising 9536 prostate MRI slices from 448 patients diagnosed with benign prostatic hyperplasia (BPH) at our institution. In our study, the incidence rate of iPCa is 5.13% (23 out of 448). Using this dataset, we compare our model with eight state-of-the-art methods for the segmentation task and nine established methods for the prediction task, and experimental results demonstrate the superior performance of our model. Specifically, in the prostate transition zone segmentation task, iPCa-Net outperforms the top-performing method by 1.23% with respect to mIoU. In the iPCa prediction task, iPCa-Net surpasses the top-performing method by 2.06% with respect to F1 score. In conclusion, iPCa-Net demonstrates superior performance in the early identification of iPCa patients compared to state-of-the-art methods. This advancement holds great significance for appropriate disease management and is highly beneficial for patients.
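With an incidence of roughly 5% as reported above, class weighting is a common remedy for imbalance. A hedged sketch (not the iPCa-Net loss, which is not specified here) using inverse-frequency weights in PyTorch:

    import torch
    import torch.nn as nn

    # Illustrative counts mirroring the reported imbalance: 425 benign-only
    # patients vs. 23 with incidental cancer.
    counts = torch.tensor([425.0, 23.0])
    weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights

    criterion = nn.CrossEntropyLoss(weight=weights)

    logits = torch.randn(8, 2)              # batch of 8 patient-level logits
    labels = torch.randint(0, 2, (8,))      # 0 = BPH only, 1 = iPCa
    loss = criterion(logits, labels)

The weight vector makes each misclassified minority-class case contribute roughly one majority-class-worth of gradient per minority-class frequency, counteracting the 425:23 skew.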
Collapse
Affiliation(s)
- Lijie Wen
- Department of Urology, The Second Affiliated Hospital of Dalian Medical University, Dalian 116027, China.
| | - Simiao Wang
- College of Artificial Intelligence, Dalian Maritime University, Dalian 116026, China
| | - Xianwei Pan
- College of Artificial Intelligence, Dalian Maritime University, Dalian 116026, China
| | - Yunan Liu
- College of Artificial Intelligence, Dalian Maritime University, Dalian 116026, China
| |
Collapse
|
19
|
Kovacs B, Netzer N, Baumgartner M, Schrader A, Isensee F, Weißer C, Wolf I, Görtz M, Jaeger PF, Schütz V, Floca R, Gnirs R, Stenzinger A, Hohenfellner M, Schlemmer HP, Bonekamp D, Maier-Hein KH. Addressing image misalignments in multi-parametric prostate MRI for enhanced computer-aided diagnosis of prostate cancer. Sci Rep 2023; 13:19805. [PMID: 37957250 PMCID: PMC10643562 DOI: 10.1038/s41598-023-46747-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Accepted: 11/04/2023] [Indexed: 11/15/2023] Open
Abstract
Prostate cancer (PCa) diagnosis on multi-parametric magnetic resonance images (MRI) requires radiologists with a high level of expertise. Misalignments between the MRI sequences can be caused by patient movement, elastic soft-tissue deformations, and imaging artifacts, and they further increase the complexity of image interpretation for radiologists. Recently, computer-aided diagnosis (CAD) tools have demonstrated potential for PCa diagnosis, typically relying on complex co-registration of the input modalities. However, there is no consensus among research groups on whether CAD systems profit from using registration, and alternative strategies for handling multi-modal misalignments have not been explored so far. Our study introduces and compares different strategies to cope with image misalignments and evaluates them with regard to their direct effect on the diagnostic accuracy of PCa detection. In addition to established registration algorithms, we propose 'misalignment augmentation' as a concept to increase CAD robustness. As the results demonstrate, misalignment augmentations can not only compensate for a complete lack of registration but, when used in conjunction with registration, also improve overall performance on an independent test set.
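A hedged sketch of the 'misalignment augmentation' concept, assuming co-registered modality volumes: one modality is randomly shifted by a small sub-voxel offset during training so the network learns robustness to residual misregistration. The maximum shift is an illustrative parameter, not the study's setting.

    import numpy as np
    from scipy.ndimage import shift

    def misalignment_augment(t2w, adc, max_shift_vox=2.0, rng=None):
        """Randomly translate one modality relative to the other.

        t2w and adc are co-registered 3D volumes; only adc is perturbed so
        the pair presents a plausible patient-motion misalignment.
        """
        rng = rng if rng is not None else np.random.default_rng()
        offset = rng.uniform(-max_shift_vox, max_shift_vox, size=3)
        adc_shifted = shift(adc, offset, order=1, mode="nearest")
        return t2w, adc_shifted

    t2w = np.random.rand(24, 96, 96)
    adc = np.random.rand(24, 96, 96)
    _, adc_aug = misalignment_augment(t2w, adc)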
Collapse
Affiliation(s)
- Balint Kovacs
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany.
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany.
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany.
| | - Nils Netzer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
| | - Michael Baumgartner
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
| | - Adrian Schrader
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
| | - Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
| | - Cedric Weißer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
| | - Ivo Wolf
- Mannheim University of Applied Sciences, Mannheim, Germany
| | - Magdalena Görtz
- Junior Clinical Cooperation Unit 'Multiparametric Methods for Early Detection of Prostate Cancer', German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Paul F Jaeger
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Interactive Machine Learning Group, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
| | - Victoria Schütz
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Ralf Floca
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
| | - Regula Gnirs
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
| | - Albrecht Stenzinger
- Institute of Pathology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Markus Hohenfellner
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Heinz-Peter Schlemmer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), DKFZ, Core Center Heidelberg, Heidelberg, Germany
| | - David Bonekamp
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- German Cancer Consortium (DKTK), DKFZ, Core Center Heidelberg, Heidelberg, Germany
| | - Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), DKFZ, Core Center Heidelberg, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
| |
Collapse
|
20
|
Netzer N, Eith C, Bethge O, Hielscher T, Schwab C, Stenzinger A, Gnirs R, Schlemmer HP, Maier-Hein KH, Schimmöller L, Bonekamp D. Application of a validated prostate MRI deep learning system to independent same-vendor multi-institutional data: demonstration of transferability. Eur Radiol 2023; 33:7463-7476. [PMID: 37507610 PMCID: PMC10598076 DOI: 10.1007/s00330-023-09882-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Revised: 04/24/2023] [Accepted: 04/27/2023] [Indexed: 07/30/2023]
Abstract
OBJECTIVES To evaluate a fully automatic deep learning system to detect and segment clinically significant prostate cancer (csPCa) on same-vendor prostate MRI from two institutions that did not contribute to training of the system. MATERIALS AND METHODS In this retrospective study, a previously bi-institutionally validated deep learning system (UNETM) was applied to bi-parametric prostate MRI data from one external institution (A), a PI-RADS distribution-matched internal cohort (B), and a csPCa-stratified subset of single-institution external public challenge data (C). csPCa was defined as ISUP Grade Group ≥ 2 determined from combined targeted and extended systematic MRI/transrectal US-fusion biopsy. Performance of UNETM was evaluated by comparing ROC AUC and specificity at typical PI-RADS sensitivity levels. Lesion-level analysis between UNETM segmentations and radiologist-delineated segmentations was performed using the Dice coefficient, free-response receiver operating characteristic (FROC), and weighted alternative FROC (waFROC) analyses. The influence of using different diffusion sequences was analyzed in cohort A. RESULTS In 250/250/140 exams in cohorts A/B/C, differences in ROC AUC were not significant, at 0.80 (95% CI: 0.74-0.85)/0.87 (95% CI: 0.83-0.92)/0.82 (95% CI: 0.75-0.89). At sensitivities of 95% and 90%, UNETM achieved specificities of 30%/50% in A, 44%/71% in B, and 43%/49% in C, respectively. The Dice coefficient between UNETM and radiologist-delineated lesions was 0.36 in A and 0.49 in B. The waFROC AUC was 0.67 (95% CI: 0.60-0.83) in A and 0.70 (95% CI: 0.64-0.78) in B. UNETM performed marginally better on readout-segmented than on single-shot echo-planar imaging. CONCLUSION For same-vendor examinations, deep learning provided comparable discrimination of csPCa and non-csPCa lesions and examinations between local data and two independent external data sets, demonstrating the applicability of the system to institutions not participating in model training. CLINICAL RELEVANCE STATEMENT A previously bi-institutionally validated fully automatic deep learning system maintained acceptable exam-level diagnostic performance in two independent external data sets, indicating the potential of deploying AI models without retraining or fine-tuning, and corroborating evidence that AI models extract a substantial amount of transferable domain knowledge about MRI-based prostate cancer assessment. KEY POINTS • A previously bi-institutionally validated fully automatic deep learning system maintained acceptable exam-level diagnostic performance in two independent external data sets. • Lesion detection performance and segmentation congruence were similar on the institutional and an external data set, as measured by the weighted alternative FROC AUC and the Dice coefficient. • Although the system generalized to two external institutions without re-training, achieving the expected sensitivity and specificity levels requires probability thresholds to be adjusted, underlining the importance of institution-specific calibration and quality control.
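The key point about institution-specific calibration can be made concrete with a hedged sketch: choose the probability threshold on a small local calibration set so that a target sensitivity is met, then report the resulting specificity. The data and the 95% target here are illustrative, not the study's procedure.

    import numpy as np

    def calibrate_threshold(scores_pos, scores_neg, target_sensitivity=0.95):
        """Pick the highest threshold that still achieves the target sensitivity."""
        # Sensitivity >= target means at least that fraction of positive
        # exams must score above the threshold.
        thr = np.quantile(scores_pos, 1.0 - target_sensitivity)
        specificity = np.mean(np.asarray(scores_neg) < thr)
        return thr, specificity

    rng = np.random.default_rng(0)
    scores_pos = rng.beta(5, 2, size=200)   # placeholder csPCa exam scores
    scores_neg = rng.beta(2, 5, size=200)   # placeholder non-csPCa exam scores
    thr, spec = calibrate_threshold(scores_pos, scores_neg)
    print(f"threshold={thr:.2f}, specificity at 95% sensitivity={spec:.1%}")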
Collapse
Affiliation(s)
- Nils Netzer
- Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Heidelberg University Medical School, Heidelberg, Germany
| | - Carolin Eith
- Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Heidelberg University Medical School, Heidelberg, Germany
| | - Oliver Bethge
- Medical Faculty, Department of Diagnostic and Interventional Radiology, University Dusseldorf, D-40225, Dusseldorf, Germany
| | - Thomas Hielscher
- Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Constantin Schwab
- Institute of Pathology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Albrecht Stenzinger
- Institute of Pathology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Regula Gnirs
- Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
| | - Heinz-Peter Schlemmer
- Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- German Cancer Consortium (DKTK), Heidelberg, Germany
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany
| | - Klaus H Maier-Hein
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
| | - Lars Schimmöller
- Medical Faculty, Department of Diagnostic and Interventional Radiology, University Dusseldorf, D-40225, Dusseldorf, Germany
| | - David Bonekamp
- Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany.
- Heidelberg University Medical School, Heidelberg, Germany.
- German Cancer Consortium (DKTK), Heidelberg, Germany.
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany.
| |
Collapse
|
21
|
Jaouen T, Souchon R, Moldovan PC, Bratan F, Duran A, Hoang-Dinh A, Di Franco F, Debeer S, Dubreuil-Chambardel M, Arfi N, Ruffion A, Colombel M, Crouzet S, Gonindard-Melodelima C, Rouvière O. Characterization of high-grade prostate cancer at multiparametric MRI using a radiomic-based computer-aided diagnosis system as standalone and second reader. Diagn Interv Imaging 2023; 104:465-476. [PMID: 37345961 DOI: 10.1016/j.diii.2023.04.006] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 04/16/2023] [Accepted: 04/18/2023] [Indexed: 06/23/2023]
Abstract
PURPOSE The purpose of this study was to develop and test, across various scanners, a zone-specific region-of-interest (ROI)-based computer-aided diagnosis (CAD) system aimed at characterizing, on MRI, International Society of Urological Pathology (ISUP) grade ≥ 2 prostate cancers. MATERIALS AND METHODS ROI-based quantitative models were selected in multi-vendor training (265 pre-prostatectomy MRIs) and pre-test (112 pre-biopsy MRIs) datasets. The best peripheral and transition zone models were combined and retrospectively assessed in internal (158 pre-biopsy MRIs) and external (104 pre-biopsy MRIs) test datasets. Two radiologists (R1/R2) retrospectively delineated the lesions targeted at biopsy in the test datasets. The CAD area under the receiver operating characteristic curve (AUC) for characterizing ISUP ≥ 2 cancers was compared to that of the Prostate Imaging-Reporting and Data System version 2 (PI-RADSv2) score prospectively assigned to targeted lesions. RESULTS The best models used the 25th percentile of the apparent diffusion coefficient (ADC) in the transition zone, and the 2nd ADC percentile and normalized wash-in rate in the peripheral zone. The PI-RADSv2 AUCs were 82% (95% confidence interval [CI]: 74-87) and 86% (95% CI: 81-91) in the internal and external test datasets, respectively. They were not different from the CAD AUCs obtained with R1 and R2 delineations, in the internal (82% [95% CI: 76-89], P = 0.95 and 85% [95% CI: 78-91], P = 0.55) and external (82% [95% CI: 74-91], P = 0.41 and 86% [95% CI: 78-95], P = 0.98) test datasets. The CAD yielded sensitivities of 86-89% and 90-91%, and specificities of 64-65% and 69-75%, in the internal and external test datasets, respectively. CONCLUSION The CAD performance for characterizing ISUP grade ≥ 2 prostate cancers on MRI is not different from that of the PI-RADSv2 score across two test datasets.
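A hedged sketch of the ROI-based quantitative features named above (ADC percentiles within a delineated lesion); the wash-in-rate feature from dynamic imaging is omitted, and the array contents are illustrative:

    import numpy as np

    def adc_percentile_features(adc: np.ndarray, roi_mask: np.ndarray):
        """Extract the ADC percentile features used zone-specifically by the CAD.

        adc: ADC map (e.g., in 10^-6 mm^2/s); roi_mask: boolean lesion
        delineation. Per the study's feature selection, the 2nd percentile
        targets the peripheral zone model and the 25th the transition zone.
        """
        voxels = adc[roi_mask]
        return {
            "adc_p2": np.percentile(voxels, 2),
            "adc_p25": np.percentile(voxels, 25),
        }

    adc = np.random.uniform(400, 1600, size=(20, 64, 64))
    mask = np.zeros(adc.shape, dtype=bool)
    mask[8:12, 20:30, 20:30] = True
    print(adc_percentile_features(adc, mask))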
Collapse
Affiliation(s)
| | | | - Paul C Moldovan
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon, 69003, France
| | - Flavie Bratan
- Hôpital Saint Joseph Saint Luc, Department of Radiology, Lyon, 69007, France
| | - Audrey Duran
- Univ Lyon, CNRS, Inserm, INSA Lyon, UCBL, CREATIS, UMR5220, U1294, Villeurbanne, 69100, France
| | - Au Hoang-Dinh
- INSERM, LabTAU, U1032, Lyon, 69003, France; Hanoi Medical University, Department of Radiology, Hanoi, 116001, Vietnam
| | - Florian Di Franco
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon, 69003, France
| | - Sabine Debeer
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon, 69003, France
| | - Marine Dubreuil-Chambardel
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon, 69003, France
| | - Nicolas Arfi
- Hôpital Saint Joseph Saint Luc, Department of Urology, Lyon, 69007, France
| | - Alain Ruffion
- Hospices Civils de Lyon, Centre Hospitalier Lyon Sud, Department of Urology, Pierre-Bénite, 69310, France; Equipe 2 - Centre d'Innovation en Cancérologie de Lyon (EA 3738 CICLY), Pierre-Bénite, 69310, France; Université de Lyon, Lyon, 69003, France; Université Lyon 1, Lyon, 69003, France; Faculté de Médecine Lyon Sud, Pierre-Bénite, 69310, France
| | - Marc Colombel
- Université de Lyon, Lyon, 69003, France; Université Lyon 1, Lyon, 69003, France; Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Urology, Lyon, 69003, France; Faculté de Médecine Lyon Est, Lyon, 69003, France
| | - Sébastien Crouzet
- INSERM, LabTAU, U1032, Lyon, 69003, France; Université de Lyon, Lyon, 69003, France; Université Lyon 1, Lyon, 69003, France; Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Urology, Lyon, 69003, France; Faculté de Médecine Lyon Est, Lyon, 69003, France
| | - Christelle Gonindard-Melodelima
- Université Grenoble Alpes, Laboratoire d'Ecologie Alpine, BP 53, Grenoble 38041, France; CNRS, UMR 5553, BP 53, Grenoble, 38041, France
| | - Olivier Rouvière
- INSERM, LabTAU, U1032, Lyon, 69003, France; Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon, 69003, France; Université de Lyon, Lyon, 69003, France; Université Lyon 1, Lyon, 69003, France; Faculté de Médecine Lyon Est, Lyon, 69003, France.
| |
Collapse
|
22
|
Simeth J, Jiang J, Nosov A, Wibmer A, Zelefsky M, Tyagi N, Veeraraghavan H. Deep learning-based dominant index lesion segmentation for MR-guided radiation therapy of prostate cancer. Med Phys 2023; 50:4854-4870. [PMID: 36856092 PMCID: PMC11098147 DOI: 10.1002/mp.16320] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 01/11/2023] [Accepted: 01/29/2023] [Indexed: 03/02/2023] Open
Abstract
BACKGROUND Dose-escalation radiotherapy enables increased control of prostate cancer (PCa) but requires segmentation of dominant index lesions (DIL). This motivates the development of automated methods for fast, accurate, and consistent segmentation of PCa DIL. PURPOSE To construct and validate a model for deep-learning-based automatic segmentation of PCa DIL, defined by Gleason score (GS) ≥ 3+4, from MR images applied to MR-guided radiation therapy, and to validate the generalizability of the constructed models across scanner and acquisition differences. METHODS Five deep-learning networks were evaluated on apparent diffusion coefficient (ADC) MRI from 500 lesions in 365 patients, arising from internal training Dataset 1 (156 lesions in 125 patients, 1.5 Tesla GE MR with endorectal coil), testing using Dataset 1 (35 lesions in 26 patients), external ProstateX Dataset 2 (299 lesions in 204 patients, 3 Tesla Siemens MR), and internal inter-rater Dataset 3 (10 lesions in 10 patients, 3 Tesla Philips MR). The five networks are: the multiple resolution residually connected network (MRRN) and MRRN regularized in training with deep supervision implemented in the last convolutional block (MRRN-DS), Unet, Unet++, ResUnet, and fast panoptic segmentation (FPSnet), as well as fast panoptic segmentation with smoothed labels (FPSnet-SL). Models were evaluated by volumetric DIL segmentation accuracy using the Dice similarity coefficient (DSC) and the balanced F1 measure of detection accuracy, as a function of lesion aggressiveness and size (Datasets 1 and 2), and by accuracy with respect to two raters (Dataset 3). Upon acceptance for publication, segmentation models will be made available in an open-source GitHub repository. RESULTS In general, MRRN-DS segmented tumors more accurately than the other methods on the testing datasets. MRRN-DS significantly outperformed ResUnet in Dataset 2 (DSC of 0.54 vs. 0.44, p < 0.001) and Unet++ in Dataset 3 (DSC of 0.45; p = 0.04). FPSnet-SL was similarly accurate to MRRN-DS in Dataset 2 (p = 0.30), but MRRN-DS significantly outperformed FPSnet and FPSnet-SL in both Dataset 1 (0.60 vs. 0.51 [p = 0.01] and 0.54 [p = 0.049], respectively) and Dataset 3 (0.45 vs. 0.06 [p = 0.002] and 0.24 [p = 0.004], respectively). Finally, MRRN-DS produced slightly higher agreement with an experienced radiologist than the agreement between the two radiologists in Dataset 3 (DSC of 0.45 vs. 0.41). CONCLUSIONS MRRN-DS generalized to different MR testing datasets acquired using different scanners. It produced slightly higher agreement with an experienced radiologist than that between two radiologists. Finally, MRRN-DS more accurately segmented aggressive lesions, which are generally candidates for radiative dose ablation.
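A hedged sketch of the volumetric Dice similarity coefficient used throughout the comparison above, for binary segmentation masks; the masks below are synthetic:

    import numpy as np

    def dice_coefficient(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8):
        """DSC = 2 * |A intersect B| / (|A| + |B|) for boolean masks of equal shape."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        intersection = np.logical_and(pred, ref).sum()
        return 2.0 * intersection / (pred.sum() + ref.sum() + eps)

    a = np.zeros((16, 64, 64), dtype=bool)
    a[4:10, 10:30, 10:30] = True
    b = np.zeros_like(a)
    b[5:11, 12:32, 12:32] = True
    print(f"DSC = {dice_coefficient(a, b):.2f}")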
Collapse
Affiliation(s)
- Josiah Simeth
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Anton Nosov
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Andreas Wibmer
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Michael Zelefsky
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Neelam Tyagi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| |
Collapse
|
23
|
Zhao LT, Liu ZY, Xie WF, Shao LZ, Lu J, Tian J, Liu JG. What benefit can be obtained from magnetic resonance imaging diagnosis with artificial intelligence in prostate cancer compared with clinical assessments? Mil Med Res 2023; 10:29. [PMID: 37357263 DOI: 10.1186/s40779-023-00464-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Accepted: 06/07/2023] [Indexed: 06/27/2023] Open
Abstract
The present study aimed to explore the potential of artificial intelligence (AI) methodology based on magnetic resonance (MR) images to aid in the management of prostate cancer (PCa). To this end, we reviewed and summarized studies comparing the diagnostic and predictive performance for PCa between AI and common clinical assessment methods based on MR images and/or clinical characteristics, thereby investigating whether AI methods are generally superior to common clinical assessment methods for PCa diagnosis and prediction. First, we found that, in the included studies, AI methods were generally equal to or better than the clinical assessment methods for the risk assessment of PCa, such as risk stratification of prostate lesions and the prediction of therapeutic outcomes or PCa progression. In particular, for the diagnosis of clinically significant PCa, the AI methods achieved a higher area under the summary receiver operating characteristic curve (SROC-AUC) than the clinical assessment methods (0.87 vs. 0.82). For the prediction of adverse pathology, the AI methods also achieved a higher SROC-AUC than the clinical assessment methods (0.86 vs. 0.75). Second, as revealed by the radiomics quality score (RQS), the included studies presented a relatively high total average RQS of 15.2 (range 11.0-20.0). Further, the scores of the individual RQS elements implied that the AI models in these studies were constructed with relatively complete and standard radiomics processes, but their exact generalizability and clinical practicality should be further validated using higher levels of evidence, such as prospective studies and open-testing datasets.
Collapse
Affiliation(s)
- Li-Tao Zhao
- School of Engineering Medicine, Beihang University, Beijing, 100191, China
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
| | - Zhen-Yu Liu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100080, China
| | - Wan-Fang Xie
- School of Engineering Medicine, Beihang University, Beijing, 100191, China
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
| | - Li-Zhi Shao
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
| | - Jian Lu
- Department of Urology, Peking University Third Hospital, Peking University, 100191, Beijing, China.
| | - Jie Tian
- School of Engineering Medicine, Beihang University, Beijing, 100191, China.
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China.
- Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, 100191, Beijing, China.
| | - Jian-Gang Liu
- School of Engineering Medicine, Beihang University, Beijing, 100191, China.
- Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, 100191, Beijing, China.
- Beijing Engineering Research Center of Cardiovascular Wisdom Diagnosis and Treatment, Beijing, 100029, China.
| |
Collapse
|
24
|
Karagoz A, Alis D, Seker ME, Zeybel G, Yergin M, Oksuz I, Karaarslan E. Anatomically guided self-adapting deep neural network for clinically significant prostate cancer detection on bi-parametric MRI: a multi-center study. Insights Imaging 2023; 14:110. [PMID: 37337101 DOI: 10.1186/s13244-023-01439-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Accepted: 04/17/2023] [Indexed: 06/21/2023] Open
Abstract
OBJECTIVE To evaluate the effectiveness of a self-adapting deep network, trained on large-scale bi-parametric MRI data, in detecting clinically significant prostate cancer (csPCa) in external multi-center data from men of diverse demographics; to investigate the advantages of transfer learning. METHODS We used two samples: (i) Publicly available multi-center and multi-vendor Prostate Imaging: Cancer AI (PI-CAI) training data, consisting of 1500 bi-parametric MRI scans, along with its unseen validation and testing samples; (ii) In-house multi-center testing and transfer learning data, comprising 1036 and 200 bi-parametric MRI scans. We trained a self-adapting 3D nnU-Net model using probabilistic prostate masks on the PI-CAI data and evaluated its performance on the hidden validation and testing samples and the in-house data with and without transfer learning. We used the area under the receiver operating characteristic (AUROC) curve to evaluate patient-level performance in detecting csPCa. RESULTS The PI-CAI training data had 425 scans with csPCa, while the in-house testing and fine-tuning data had 288 and 50 scans with csPCa, respectively. The nnU-Net model achieved an AUROC of 0.888 and 0.889 on the hidden validation and testing data. The model performed with an AUROC of 0.886 on the in-house testing data, with a slight decrease in performance to 0.870 using transfer learning. CONCLUSIONS The state-of-the-art deep learning method using prostate masks trained on large-scale bi-parametric MRI data provides high performance in detecting csPCa in internal and external testing data with different characteristics, demonstrating the robustness and generalizability of deep learning within and across datasets. CLINICAL RELEVANCE STATEMENT A self-adapting deep network, utilizing prostate masks and trained on large-scale bi-parametric MRI data, is effective in accurately detecting clinically significant prostate cancer across diverse datasets, highlighting the potential of deep learning methods for improving prostate cancer detection in clinical practice.
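A hedged sketch of the transfer-learning step described above: load pretrained weights and fine-tune on a small local sample at a reduced learning rate. The toy network and the checkpoint file name are placeholders, not the PI-CAI nnU-Net code.

    import torch

    # Placeholder network standing in for the pretrained 3D segmentation model.
    model = torch.nn.Sequential(
        torch.nn.Conv3d(2, 16, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv3d(16, 1, 1),
    )
    # Hypothetical checkpoint trained on the large multi-center sample:
    # model.load_state_dict(torch.load("picai_pretrained.pt"))

    # Fine-tune all weights at a small learning rate on the ~200 local scans;
    # alternatively, freeze early layers and adapt only the head.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
    loss_fn = torch.nn.BCEWithLogitsLoss()

    x = torch.randn(1, 2, 16, 64, 64)   # bi-parametric input (e.g., T2w + ADC)
    y = torch.zeros(1, 1, 16, 64, 64)   # placeholder lesion mask
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()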
Collapse
Affiliation(s)
- Ahmet Karagoz
- Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey
- Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey
| | - Deniz Alis
- Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey.
- Department of Radiology, School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey.
| | - Mustafa Ege Seker
- School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
| | - Gokberk Zeybel
- School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
| | - Mert Yergin
- Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey
| | - Ilkay Oksuz
- Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey
| | - Ercan Karaarslan
- Department of Radiology, School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
| |
Collapse
|
25
|
Bashkanov O, Rak M, Meyer A, Engelage L, Lumiani A, Muschter R, Hansen C. Automatic detection of prostate cancer grades and chronic prostatitis in biparametric MRI. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 239:107624. [PMID: 37271051 DOI: 10.1016/j.cmpb.2023.107624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Revised: 05/13/2023] [Accepted: 05/25/2023] [Indexed: 06/06/2023]
Abstract
BACKGROUND AND OBJECTIVE With emerging evidence to improve prostate cancer (PCa) screening, multiparametric magnetic resonance prostate imaging is becoming an essential noninvasive component of the diagnostic routine. Computer-aided diagnostic (CAD) tools powered by deep learning can help radiologists interpret multiple volumetric images. In this work, our objective was to examine promising methods recently proposed for the multigrade prostate cancer detection task and to suggest practical considerations for model training in this context. METHODS We collected 1647 fine-grained biopsy-confirmed findings, including Gleason scores and prostatitis, to form a training dataset. In our experimental framework for lesion detection, all models used a 3D nnU-Net architecture that accounts for anisotropy in the MRI data. First, we explored the optimal range of b-values for the diffusion-weighted imaging (DWI) modality and its effect on the detection of clinically significant prostate cancer (csPCa) and prostatitis using deep learning, as the optimal range is not yet clearly defined in this domain. Second, we proposed a simulated multimodal shift as a data augmentation technique to compensate for the multimodal shift present in the data. Third, we studied the effect of incorporating the prostatitis class alongside cancer-related findings at three granularities of the prostate cancer class (coarse, medium, and fine) and its impact on the detection rate of the target csPCa. Furthermore, ordinal and one-hot encoded (OHE) output formulations were tested. RESULTS The optimal model configuration, with fine class granularity (prostatitis included) and OHE, achieved a lesion-wise partial Free-Response Receiver Operating Characteristic (FROC) area under the curve (AUC) of 1.94 (95% CI: 1.76-2.11) and a patient-wise ROC AUC of 0.874 (95% CI: 0.793-0.938) in the detection of csPCa. Inclusion of the auxiliary prostatitis class demonstrated a stable relative improvement in specificity at a false positive rate (FPR) of 1.0 per patient, with increases of 3%, 7%, and 4% for the coarse, medium, and fine class granularities, respectively. CONCLUSIONS This paper examines several configurations for model training in the biparametric MRI setup and proposes optimal value ranges. It also shows that the fine-grained class configuration, including prostatitis, is beneficial for detecting csPCa. The ability to detect prostatitis in addition to all low-risk cancer lesions suggests the potential to improve the quality of the early diagnosis of prostate diseases. It also implies improved interpretability of the results by the radiologist.
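To make the two output formulations concrete, here is a small illustrative sketch (an assumption about the general technique, not the paper's implementation; the six-class layout is hypothetical) contrasting one-hot and ordinal (cumulative) encodings of a graded target:

# Illustrative sketch (assumed, not from the paper's code) of the two output
# encodings compared in the study: one-hot (OHE) vs. ordinal (cumulative)
# targets for a multigrade class such as ISUP grade group 0..5.
import numpy as np

N_CLASSES = 6  # e.g. benign plus five grade groups (an assumption)

def one_hot(grade: int) -> np.ndarray:
    t = np.zeros(N_CLASSES); t[grade] = 1.0
    return t

def ordinal(grade: int) -> np.ndarray:
    # Cumulative encoding: target_k = 1 for all k <= grade, so the loss
    # respects the ordering of grades.
    return (np.arange(N_CLASSES) <= grade).astype(float)

print(one_hot(3))  # [0. 0. 0. 1. 0. 0.]
print(ordinal(3))  # [1. 1. 1. 1. 0. 0.]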
Collapse
Affiliation(s)
- Oleksii Bashkanov
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany.
| | - Marko Rak
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
| | - Anneke Meyer
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
| | | | | | | | - Christian Hansen
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
| |
Collapse
|
26
|
Ren H, Ren C, Guo Z, Zhang G, Luo X, Ren Z, Tian H, Li W, Yuan H, Hao L, Wang J, Zhang M. A novel approach for automatic segmentation of prostate and its lesion regions on magnetic resonance imaging. Front Oncol 2023; 13:1095353. [PMID: 37152013 PMCID: PMC10154598 DOI: 10.3389/fonc.2023.1095353] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Accepted: 03/30/2023] [Indexed: 05/09/2023] Open
Abstract
Objective To develop an accurate, automatic segmentation model based on a convolutional neural network to segment the prostate and its lesion regions. Methods Of 180 subjects in total, 122 healthy individuals and 58 patients with prostate cancer were included. For each subject, all prostate slices were included in the diffusion-weighted images (DWIs). A novel deep convolutional neural network (DCNN) is proposed to automatically segment the prostate and its lesion regions. The model is inspired by U-Net, with the encoding-decoding path as the backbone, and incorporates dense blocks, attention mechanisms, and group-norm Atrous Spatial Pyramid Pooling. Data augmentation was used to avoid overfitting during training. In the experimental phase, the dataset was randomly divided into a training set (70%) and a testing set (30%), and four-fold cross-validation was used to obtain results for each metric. Results The proposed model achieved an IoU of 86.82%, a Dice score of 93.90%, an accuracy of 94.11%, a sensitivity of 93.8%, and a 95% Hausdorff distance of 7.84 for the prostate, and 79.2%, 89.51%, 88.43%, 89.31%, and 8.39, respectively, for the lesion region. Compared with state-of-the-art models (FCN, U-Net, U-Net++, and ResU-Net), the proposed segmentation model achieved more promising results. Conclusion The proposed model yielded excellent performance in the accurate and automatic segmentation of the prostate and lesion regions, suggesting that the novel deep convolutional neural network could be used in clinical disease treatment and diagnosis.
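For reference, the overlap metrics reported above can be computed from binary masks as in the following minimal sketch (illustrative only, not the authors' code):

# Minimal sketch (illustrative, not the authors' code) of the overlap
# metrics reported above for binary segmentation masks.
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2 * inter / denom if denom else 1.0

pred = np.zeros((64, 64), bool); pred[16:48, 16:48] = True
gt = np.zeros((64, 64), bool); gt[20:52, 20:52] = True
print(f"IoU={iou(pred, gt):.3f}, Dice={dice(pred, gt):.3f}")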
Collapse
Affiliation(s)
- Huipeng Ren
- Department of Medical Imaging, First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China
- Department of Medical Imaging, Baoji Central Hospital, Baoji, China
| | - Chengjuan Ren
- Department of Language Intelligence, Sichuan International Studies University, Chongqing, China
| | - Ziyu Guo
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
| | - Guangnan Zhang
- Department of Computer Science, Baoji University of Arts and Sciences, Baoji, China
| | - Xiaohui Luo
- Department of Urology, Baoji Central Hospital, Baoji, China
| | - Zhuanqin Ren
- Department of Medical Imaging, Baoji Central Hospital, Baoji, China
| | - Hongzhe Tian
- Department of Medical Imaging, Baoji Central Hospital, Baoji, China
| | - Wei Li
- Department of Medical Imaging, Baoji Central Hospital, Baoji, China
| | - Hao Yuan
- Department of Computer Science, Baoji University of Arts and Sciences, Baoji, China
| | - Lele Hao
- Department of Computer Science, Baoji University of Arts and Sciences, Baoji, China
| | - Jiacheng Wang
- Department of Computer Science, Baoji University of Arts and Sciences, Baoji, China
| | - Ming Zhang
- Department of Medical Imaging, First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China
| |
Collapse
|
27
|
Zhong J, Staib LH, Venkataraman R, Onofrey JA. Integrating prostate specific antigen density biomarker into deep learning prostate MRI lesion segmentation models. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2023; 2023:10.1109/isbi53787.2023.10230418. [PMID: 38090633 PMCID: PMC10711801 DOI: 10.1109/isbi53787.2023.10230418] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/11/2024]
Abstract
Prostate cancer lesion segmentation in multi-parametric magnetic resonance imaging (mpMRI) is crucial for pre-biopsy diagnosis and targeted biopsy guidance. Deep convolutional neural networks have been widely utilized for lesion segmentation. However, these methods fail to achieve a high Dice coefficient because of the large variations in lesion size and location within the gland. To address this problem, we integrate the clinically meaningful prostate-specific antigen density (PSAD) biomarker into the deep learning model using feature-wise transformations to condition the features in latent space, and thus control the size of lesion predictions. We tested our models on a public dataset with 214 annotated mpMRI scans and compared the segmentation performance to a baseline 3D U-Net model. Results demonstrate that integrating the PSAD biomarker significantly improves segmentation performance in both the Dice coefficient and the centroid distance metric.
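The feature-wise conditioning described above can be sketched as a FiLM-style block; the following PyTorch snippet is a hedged illustration of the general technique (module and tensor names are hypothetical, not the authors' architecture):

# Sketch (an assumption of the general technique, not the authors' exact
# architecture): a feature-wise linear modulation (FiLM-style) block that
# conditions latent U-Net features on the scalar PSAD biomarker.
import torch
import torch.nn as nn

class PSADFiLM(nn.Module):
    def __init__(self, n_channels: int):
        super().__init__()
        # Map the scalar biomarker to per-channel scale and shift.
        self.to_gamma_beta = nn.Linear(1, 2 * n_channels)

    def forward(self, feats: torch.Tensor, psad: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, D, H, W) latent features; psad: (B, 1) biomarker
        gamma, beta = self.to_gamma_beta(psad).chunk(2, dim=1)
        shape = (-1, feats.shape[1], 1, 1, 1)
        return gamma.view(shape) * feats + beta.view(shape)

feats = torch.randn(2, 32, 8, 16, 16)
psad = torch.tensor([[0.12], [0.31]])  # PSA density values (illustrative)
print(PSADFiLM(32)(feats, psad).shape)  # torch.Size([2, 32, 8, 16, 16])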
Collapse
Affiliation(s)
- Jiayang Zhong
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
| | - Lawrence H Staib
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Department of Electrical Engineering, Yale University, New Haven, CT, USA
| | | | - John A Onofrey
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Department of Urology, Yale University, New Haven, CT, USA
| |
Collapse
|
28
|
Kumar GV, Bellary MI, Reddy TB. Prostate cancer classification with MRI using Taylor-Bird Squirrel Optimization based Deep Recurrent Neural Network. THE IMAGING SCIENCE JOURNAL 2023. [DOI: 10.1080/13682199.2023.2165242] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/11/2023]
Affiliation(s)
- Goddumarri Vijay Kumar
- Dept. of Computer Science and Technology, Sri Krishnadevaraya University, Ananthapuram, A.P., India
| | - Mohammed Ismail Bellary
- Department of Artificial Intelligence & Machine Learning, P.A. College of Engineering, Mangalore, Affiliated to Visvesvaraya Technological University, Belagavi, K.A., India
| | - Thota Bhaskara Reddy
- Dept. of Computer Science and Technology, Sri Krishnadevaraya University, Ananthapuram, A.P., India
| |
Collapse
|
29
|
Tang S, Yu X, Cheang CF, Liang Y, Zhao P, Yu HH, Choi IC. Transformer-based multi-task learning for classification and segmentation of gastrointestinal tract endoscopic images. Comput Biol Med 2023; 157:106723. [PMID: 36907035 DOI: 10.1016/j.compbiomed.2023.106723] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 02/04/2023] [Accepted: 02/26/2023] [Indexed: 03/07/2023]
Abstract
Despite being widely utilized to help endoscopists identify gastrointestinal (GI) tract diseases through classification and segmentation, models based on convolutional neural networks (CNNs) have difficulty distinguishing among ambiguous types of lesions presented in endoscopic images, and are difficult to train when labeled datasets are lacking. Both issues prevent CNNs from further improving diagnostic accuracy. To address these challenges, we first propose a Multi-task Network (TransMT-Net) capable of simultaneously learning two tasks (classification and segmentation). It uses a transformer designed to learn global features and combines the advantages of CNNs in learning local features, so as to achieve more accurate predictions of lesion types and regions in GI tract endoscopic images. We further adopt active learning in TransMT-Net to tackle the shortage of labeled images. A dataset was created from the CVC-ClinicDB dataset, Macau Kiang Wu Hospital, and Zhongshan Hospital to evaluate model performance. The experimental results show that our model not only achieved 96.94% accuracy in the classification task and a 77.76% Dice Similarity Coefficient in the segmentation task but also outperformed other models on our test set. Meanwhile, active learning produced positive results for the performance of our model with a small-scale initial training set; even its performance with 30% of the initial training set was comparable to that of most comparable models trained on the full training set. Consequently, the proposed TransMT-Net has demonstrated promising performance on GI tract endoscopic images, and, through active learning, it can alleviate the shortage of labeled images.
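As an illustration of how a two-task objective of this kind can be trained, the sketch below combines a classification cross-entropy with a soft Dice segmentation loss under an assumed weighting; it is a generic example, not TransMT-Net's actual formulation:

# Sketch of the joint objective of a two-task model (classification +
# segmentation): a generic weighted sum under assumed loss choices, not
# TransMT-Net's exact formulation.
import torch
import torch.nn.functional as F

def multitask_loss(cls_logits, cls_target, seg_logits, seg_target, alpha=0.5):
    # Classification branch: cross-entropy over lesion types.
    l_cls = F.cross_entropy(cls_logits, cls_target)
    # Segmentation branch: soft Dice loss over the predicted mask.
    probs = torch.sigmoid(seg_logits)
    inter = (probs * seg_target).sum()
    l_seg = 1 - (2 * inter + 1e-6) / (probs.sum() + seg_target.sum() + 1e-6)
    return alpha * l_cls + (1 - alpha) * l_seg

cls_logits = torch.randn(4, 3)                  # 4 images, 3 lesion types
cls_target = torch.tensor([0, 2, 1, 1])
seg_logits = torch.randn(4, 1, 64, 64)          # binary lesion masks
seg_target = torch.randint(0, 2, (4, 1, 64, 64)).float()
print(multitask_loss(cls_logits, cls_target, seg_logits, seg_target))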
Collapse
Affiliation(s)
- Suigu Tang
- Faculty of Innovation Engineering-School of Computer Science and Engineering, Macau University of Science and Technology, Macao Special Administrative Region of China
| | - Xiaoyuan Yu
- Faculty of Innovation Engineering-School of Computer Science and Engineering, Macau University of Science and Technology, Macao Special Administrative Region of China
| | - Chak Fong Cheang
- Faculty of Innovation Engineering-School of Computer Science and Engineering, Macau University of Science and Technology, Macao Special Administrative Region of China.
| | - Yanyan Liang
- Faculty of Innovation Engineering-School of Computer Science and Engineering, Macau University of Science and Technology, Macao Special Administrative Region of China
| | - Penghui Zhao
- Faculty of Innovation Engineering-School of Computer Science and Engineering, Macau University of Science and Technology, Macao Special Administrative Region of China
| | - Hon Ho Yu
- Kiang Wu Hospital, Macao Special Administrative Region of China
| | - I Cheong Choi
- Kiang Wu Hospital, Macao Special Administrative Region of China
| |
Collapse
|
30
|
Rodrigues NM, Silva S, Vanneschi L, Papanikolaou N. A Comparative Study of Automated Deep Learning Segmentation Models for Prostate MRI. Cancers (Basel) 2023; 15:1467. [PMID: 36900261 PMCID: PMC10001231 DOI: 10.3390/cancers15051467] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2023] [Revised: 02/17/2023] [Accepted: 02/20/2023] [Indexed: 03/03/2023] Open
Abstract
Prostate cancer is one of the most common forms of cancer globally, affecting roughly one in every eight men according to the American Cancer Society. Although the survival rate for prostate cancer is high given the very high incidence rate, there is an urgent need to improve and develop new clinical aid systems to help detect and treat prostate cancer in a timely manner. In this retrospective study, our contributions are twofold. First, we perform a comparative unified study of different commonly used segmentation models for prostate gland and zone (peripheral and transition) segmentation. Second, we present and evaluate an additional research question regarding the effectiveness of using an object detector as a pre-processing step to aid the segmentation process. We perform a thorough evaluation of the deep learning models on two public datasets, where one is used for cross-validation and the other as an external test set. Overall, the results reveal that the choice of model is relatively inconsequential, as the majority produce non-significantly different scores, apart from nnU-Net, which consistently outperforms the others, and that the models trained on data cropped by the object detector often generalize better, despite performing worse during cross-validation.
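The detector-as-pre-processing idea evaluated above amounts to cropping each volume to the detected gland (plus a margin) before segmentation; a minimal sketch, assuming a detector that returns a 3D bounding box, might look like this:

# Sketch (illustrative, assuming a detector that returns a gland bounding
# box) of the pre-processing step studied above: cropping the MRI volume
# around the prostate before it is passed to the segmentation model.
import numpy as np

def crop_to_box(volume: np.ndarray, box, margin: int = 8) -> np.ndarray:
    """box = (z0, y0, x0, z1, y1, x1) from the object detector."""
    z0, y0, x0, z1, y1, x1 = box
    z0, y0, x0 = max(z0 - margin, 0), max(y0 - margin, 0), max(x0 - margin, 0)
    z1 = min(z1 + margin, volume.shape[0])
    y1 = min(y1 + margin, volume.shape[1])
    x1 = min(x1 + margin, volume.shape[2])
    return volume[z0:z1, y0:y1, x0:x1]

vol = np.random.rand(32, 256, 256)     # a T2w volume stand-in
box = (8, 90, 100, 24, 180, 190)       # detector output (illustrative)
print(crop_to_box(vol, box).shape)     # cropped region fed to the U-Net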
Collapse
Affiliation(s)
- Nuno M. Rodrigues
- LASIGE, Faculty of Sciences, University of Lisbon, 1749-016 Lisbon, Portugal
- Champalimaud Foundation, Centre for the Unknown, 1400-038 Lisbon, Portugal
| | - Sara Silva
- LASIGE, Faculty of Sciences, University of Lisbon, 1749-016 Lisbon, Portugal
| | - Leonardo Vanneschi
- NOVA Information Management School (NOVA IMS), Campus de Campolide, Universidade Nova de Lisboa, 1070-312 Lisboa, Portugal
| | | |
Collapse
|
31
|
Rouvière O, Jaouen T, Baseilhac P, Benomar ML, Escande R, Crouzet S, Souchon R. Artificial intelligence algorithms aimed at characterizing or detecting prostate cancer on MRI: How accurate are they when tested on independent cohorts? – A systematic review. Diagn Interv Imaging 2022; 104:221-234. [PMID: 36517398 DOI: 10.1016/j.diii.2022.11.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Accepted: 11/22/2022] [Indexed: 12/14/2022]
Abstract
PURPOSE The purpose of this study was to perform a systematic review of the literature on the diagnostic performance, in independent test cohorts, of artificial intelligence (AI)-based algorithms aimed at characterizing/detecting prostate cancer on magnetic resonance imaging (MRI). MATERIALS AND METHODS Medline, Embase and Web of Science were searched for studies published between January 2018 and September 2022, using a histological reference standard, and assessing prostate cancer characterization/detection by AI-based MRI algorithms in test cohorts composed of more than 40 patients and meeting at least one of the following independence criteria as compared to the training cohort: different institution, different population type, different MRI vendor, different magnetic field strength, or strict temporal splitting. RESULTS Thirty-five studies were selected. The overall risk of bias was low. However, 23 studies did not use predefined diagnostic thresholds, which may have optimistically biased the results. Test cohorts fulfilled one to three of the five independence criteria. The diagnostic performance of the algorithms used as standalone tools was good, challenging that of human reading. In the 12 studies with predefined diagnostic thresholds, radiomics-based computer-aided diagnosis systems (assessing regions-of-interest drawn by the radiologist) tended to provide more robust results than deep learning-based computer-aided detection systems (providing probability maps). Two of the six studies comparing unassisted and assisted reading showed significant improvement due to the algorithm, mostly by reducing false positive findings. CONCLUSION Prostate MRI AI-based algorithms showed promising results, especially for the relatively simple task of characterizing predefined lesions. The best management of discrepancies between human reading and algorithm findings still needs to be defined.
Collapse
Affiliation(s)
- Olivier Rouvière
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France; Université Lyon 1, Faculté de médecine Lyon Est, Lyon 69003, France; LabTAU, INSERM, U1032, Lyon 69003, France.
| | | | - Pierre Baseilhac
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France
| | - Mohammed Lamine Benomar
- LabTAU, INSERM, U1032, Lyon 69003, France; University of Ain Temouchent, Faculty of Science and Technology, Algeria
| | - Raphael Escande
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France
| | - Sébastien Crouzet
- Université Lyon 1, Faculté de médecine Lyon Est, Lyon 69003, France; LabTAU, INSERM, U1032, Lyon 69003, France; Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Urology, Lyon 69003, France
| | | |
Collapse
|
32
|
Adams LC, Makowski MR, Engel G, Rattunde M, Busch F, Asbach P, Niehues SM, Vinayahalingam S, van Ginneken B, Litjens G, Bressem KK. Prostate158 - An expert-annotated 3T MRI dataset and algorithm for prostate cancer detection. Comput Biol Med 2022; 148:105817. [PMID: 35841780 DOI: 10.1016/j.compbiomed.2022.105817] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Revised: 06/12/2022] [Accepted: 07/03/2022] [Indexed: 11/03/2022]
Abstract
BACKGROUND The development of deep learning (DL) models for prostate segmentation on magnetic resonance imaging (MRI) depends on expert-annotated data and reliable baselines, which are often not publicly available. This limits both reproducibility and comparability. METHODS Prostate158 consists of 158 expert-annotated biparametric 3T prostate MRIs comprising T2w sequences and diffusion-weighted sequences with apparent diffusion coefficient maps. Two U-ResNets, trained for segmentation of anatomy (central gland, peripheral zone) and of lesions suspicious for prostate cancer (PCa) with a PI-RADS score of ≥4, served as baseline algorithms. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the Hausdorff distance (HD), and the average surface distance (ASD). The Wilcoxon test with Bonferroni correction was used to evaluate differences in performance. The generalizability of the baseline model was assessed using the open datasets Medical Segmentation Decathlon and PROSTATEx. RESULTS Compared to Reader 1, the models achieved a DSC/HD/ASD of 0.88/18.3/2.2 for the central gland, 0.75/22.8/1.9 for the peripheral zone, and 0.45/36.7/17.4 for PCa. Compared with Reader 2, the DSC/HD/ASD were 0.88/17.5/2.6 for the central gland, 0.73/33.2/1.9 for the peripheral zone, and 0.4/39.5/19.1 for PCa. Interrater agreement measured in DSC/HD/ASD was 0.87/11.1/1.0 for the central gland, 0.75/15.8/0.74 for the peripheral zone, and 0.6/18.8/5.5 for PCa. Segmentation performances on the Medical Segmentation Decathlon and PROSTATEx were 0.82/22.5/3.4 and 0.86/18.6/2.5 for the central gland, and 0.64/29.2/4.7 and 0.71/26.3/2.2 for the peripheral zone, respectively. CONCLUSIONS We provide an openly accessible, expert-annotated 3T dataset of prostate MRI and a reproducible benchmark to foster the development of prostate segmentation algorithms.
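For clarity, the two boundary metrics used in this benchmark (HD and ASD) can be computed from binary masks as in the following sketch (an illustration using SciPy distance transforms, not the benchmark's own code):

# Sketch (illustrative, not the benchmark's code) of the two boundary
# metrics reported alongside Dice: Hausdorff distance (HD) and average
# surface distance (ASD) between binary masks, in voxel units.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask: np.ndarray) -> np.ndarray:
    # Surface voxels: foreground voxels removed by a one-step erosion.
    return mask & ~binary_erosion(mask)

def hd_asd(pred: np.ndarray, gt: np.ndarray):
    sp, sg = surface(pred), surface(gt)
    # Distance from each surface voxel to the nearest surface voxel of
    # the other mask.
    d_to_gt = distance_transform_edt(~sg)[sp]
    d_to_pred = distance_transform_edt(~sp)[sg]
    hd = max(d_to_gt.max(), d_to_pred.max())        # symmetric Hausdorff
    asd = np.concatenate([d_to_gt, d_to_pred]).mean()  # mean surface distance
    return hd, asd

pred = np.zeros((64, 64), bool); pred[16:48, 16:48] = True
gt = np.zeros((64, 64), bool); gt[18:50, 18:50] = True
print("HD=%.2f ASD=%.2f" % hd_asd(pred, gt))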
Collapse
Affiliation(s)
- Lisa C Adams
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany.
| | - Marcus R Makowski
- Technical University of Munich, Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Ismaninger Str. 22, 81675, Munich, Germany
| | - Günther Engel
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Institute for Diagnostic and Interventional Radiology, Georg-August University, Göttingen, Germany
| | - Maximilian Rattunde
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
| | - Felix Busch
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
| | - Patrick Asbach
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
| | - Stefan M Niehues
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
| | - Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Nijmegen, GA, the Netherlands
| | | | - Geert Litjens
- Radboud University Medical Center, Nijmegen, GA, the Netherlands
| | - Keno K Bressem
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany
| |
Collapse
|
33
|
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. [PMID: 36249889 PMCID: PMC9554123 DOI: 10.1177/17562872221128791] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 08/30/2022] [Indexed: 11/07/2022] Open
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Collapse
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Yash S. Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Qianye Yang
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Simon J.C. Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
| | - Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Christian A. Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
| | - James D. Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Geoffrey A. Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| |
Collapse
|