1
Debs N, Routier A, Bône A, Rohé MM. Evaluation of a deep learning prostate cancer detection system on biparametric MRI against radiological reading. Eur Radiol 2025; 35:3134-3143. PMID: 39699671. DOI: 10.1007/s00330-024-11287-1.
Abstract
OBJECTIVES This study aims to evaluate a deep learning pipeline for detecting clinically significant prostate cancer (csPCa), defined as Gleason Grade Group (GGG) ≥ 2, using biparametric MRI (bpMRI), and to compare its performance with radiological reading. MATERIALS AND METHODS The training dataset included 4381 bpMRI cases (3800 positive and 581 negative) across three continents, with 80% annotated using PI-RADS and 20% with Gleason Scores. The testing set comprised 328 cases from the PROSTATEx dataset, including 34% positive (GGG ≥ 2) and 66% negative cases. A 3D nnU-Net was trained on bpMRI for lesion detection, evaluated using histopathology-based annotations, and assessed with patient- and lesion-level metrics, along with lesion volume and GGG. The algorithm was compared to non-expert radiologists using multi-parametric MRI (mpMRI). RESULTS The model achieved an AUC of 0.83 (95% CI: 0.80, 0.87). Lesion-level sensitivity was 0.85 (95% CI: 0.82, 0.94) at 0.5 false positives per volume (FP/volume) and 0.88 (95% CI: 0.79, 0.92) at 1 FP/volume. Average precision was 0.55 (95% CI: 0.46, 0.64). The model showed over 0.90 sensitivity for lesions larger than 650 mm³ and exceeded 0.85 across GGGs. It had higher true positive rates (TPRs) than radiologists at equivalent FP rates, achieving TPRs of 0.93 and 0.79 compared to radiologists' 0.87 and 0.68 for PI-RADS ≥ 3 and PI-RADS ≥ 4 lesions (p ≤ 0.05). CONCLUSION The DL model showed strong performance in detecting csPCa on an independent test cohort, surpassing radiological interpretation and demonstrating AI's potential to improve diagnostic accuracy for non-expert radiologists. However, detecting small lesions remains challenging. KEY POINTS Question: Current prostate cancer detection methods often do not involve non-expert radiologists, highlighting the need for more accurate deep learning approaches using biparametric MRI. Findings: Our model significantly outperforms radiologists, showing consistent performance across Gleason Grade Groups and for medium to large lesions. Clinical relevance: This AI model improves detection accuracy in prostate imaging, serves as a benchmark with reference performance on a public dataset, and offers public PI-RADS annotations, enhancing transparency and facilitating further research and development.
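The lesion-level operating points reported above (sensitivity at 0.5 and 1 FP per volume) come from FROC-style analysis. A minimal sketch of how such an operating point can be computed from ranked candidate detections, under the simplifying assumption that each true-positive candidate hits a distinct ground-truth lesion (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def sensitivity_at_fp_rate(scores, is_tp, n_lesions, n_volumes, fp_budget):
    """Lesion-level sensitivity at a fixed false-positives-per-volume budget.

    scores: confidence of each candidate detection (any order)
    is_tp:  True where the candidate matches a ground-truth lesion
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=bool)[order]
    # cumulative false positives per volume as the threshold is lowered
    fp_per_volume = np.cumsum(~tp) / n_volumes
    kept = fp_per_volume <= fp_budget
    return tp[kept].sum() / n_lesions
```

Sweeping `fp_budget` over a grid of values traces out the FROC curve from which such sensitivity pairs are read off.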
2
Ma H, Wang L, Sun L, Wang S, Lu L, Zhang C, He Y, Zhu Y. Preoperative Prediction of Microvascular Invasion in Hepatocellular Carcinoma From Multi-Sequence Magnetic Resonance Imaging Based on Deep Fusion Representation Learning. IEEE J Biomed Health Inform 2025; 29:3259-3271. PMID: 39196745. DOI: 10.1109/jbhi.2024.3451331.
Abstract
Recent studies have identified microvascular invasion (MVI) as the most vital independent biomarker associated with early tumor recurrence. With advancements in medical technology, several computational methods have been developed to predict preoperative MVI using diverse medical images. These existing methods rely on human experience, attribute selection, or clinical trial testing, which are often time-consuming and labor-intensive. Leveraging the advantages of deep learning, this study presents a novel end-to-end algorithm for predicting MVI prior to surgery. We devised a series of data preprocessing strategies to fully extract multi-view features from the data while preserving peritumoral information. Notably, a new multi-branch deep fused feature algorithm based on ResNet (DFFResNet) is introduced, which combines Magnetic Resonance Images (MRI) from different sequences to enhance information complementarity and integration. We conducted prediction experiments on a dataset from the Radiology Department of the First Hospital of Lanzhou University, comprising 117 individuals and seven MRI sequences. The model was trained on 80% of the data using 10-fold cross-validation, and the remaining 20% was used for testing. This evaluation was performed in two cases: CROI, containing samples with a complete region of interest (ROI), and PROI, containing samples with a partial ROI. The robustness results from repeated experiments at both image and patient levels demonstrate the superior performance and improved generalization of the proposed method compared to alternative models. Our approach yields highly competitive prediction results even when the ROI outline is incomplete, offering a novel and effective multi-sequence fusion strategy for predicting preoperative MVI.
3
Miao C, Yao F, Fang J, Tong Y, Lin H, Lu C, Peng L, Zhong J, Lin Y. Exploring the role of multimodal [18F]F-PSMA-1007 PET/CT and multiparametric MRI data in predicting ISUP grading of primary prostate cancer. Eur J Nucl Med Mol Imaging 2025; 52:2087-2095. PMID: 39871017. DOI: 10.1007/s00259-025-07099-0.
Abstract
PURPOSE The study explores the role of multimodal imaging techniques, such as [18F]F-PSMA-1007 PET/CT and multiparametric MRI (mpMRI), in predicting the ISUP (International Society of Urological Pathology) grading of prostate cancer. The goal is to enhance diagnostic accuracy and improve clinical decision-making by integrating these advanced imaging modalities with clinical variables. In particular, the study investigates the application of few-shot learning to address the challenge of limited data in prostate cancer imaging, which is often a common issue in medical research. METHODS This study conducted a retrospective analysis of 341 prostate cancer patients enrolled between 2019 and 2023, with data collected from five imaging modalities: [18F]F-PSMA-1007 PET, CT, Diffusion Weighted Imaging (DWI), T2 Weighted Imaging (T2WI), and Apparent Diffusion Coefficient (ADC). The study compared the performance of five single-modality data sets, PET/CT dual-modality fusion data, mpMRI tri-modality fusion data, and five-modality fusion data within deep learning networks, analyzing how different modalities impact the accuracy of ISUP grading prediction. To address the issue of limited data, a few-shot deep learning network was employed, enabling training and cross-validation with only a small set of labeled samples. Additionally, the results were compared with those from preoperative biopsies and clinical prediction models to further assess the reliability of the experimental findings. RESULTS The experimental results demonstrate that the multimodal model (combining [18F]F-PSMA-1007 PET/CT and multiparametric MRI) significantly outperforms other models in predicting ISUP grading of prostate cancer. Meanwhile, both the PET/CT dual-modality and mpMRI tri-modality models outperform the single-modality model, with comparable performance between the two multimodal models. 
Furthermore, the experimental data confirm that the few-shot learning network introduced in this study provides reliable predictions, even with limited data. CONCLUSION This study highlights the potential of applying multimodal imaging techniques (such as PET/CT and mpMRI) in predicting ISUP grading of prostate cancer. The findings suggest that this integrated approach can enhance the accuracy of prostate cancer diagnosis and contribute to more personalized treatment planning. Furthermore, incorporating few-shot learning into the model development process allows for more robust predictions despite limited data, making this approach highly valuable in clinical settings with sparse data.
Affiliation(s)
- Cunke Miao
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China
- Fei Yao
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China
- Junfei Fang
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China
- Yingnuo Tong
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China
- Cixi Biomedical Research Institute, Wenzhou Medical University, Zhejiang, 315300, China
- Heng Lin
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China
- Chuntao Lu
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China
- Lu Peng
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China
- JiaQi Zhong
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China
- Cixi Biomedical Research Institute, Wenzhou Medical University, Zhejiang, 315300, China
- Yezhi Lin
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China
4
Keller P, Dawood M, Chohan BS, Minhas FUAA. HistoKernel: Whole slide image level Maximum Mean Discrepancy kernels for pan-cancer predictive modelling. Med Image Anal 2025; 101:103491. PMID: 39938344. DOI: 10.1016/j.media.2025.103491.
Abstract
In computational pathology, labels are typically available only at the whole slide image (WSI) or patient level, necessitating weakly supervised learning methods that aggregate patch-level features or predictions to produce WSI-level scores for clinically significant tasks such as cancer subtype classification or survival analysis. However, existing approaches lack a theoretically grounded framework to capture the holistic distributional differences between the patch sets within WSIs, limiting their ability to accurately and comprehensively model the underlying pathology. To address this limitation, we introduce HistoKernel, a novel WSI-level Maximum Mean Discrepancy (MMD) kernel designed to quantify distributional similarity between WSIs using their local feature representation. HistoKernel enables a wide range of applications, including classification, regression, retrieval, clustering, survival analysis, multimodal data integration, and visualization of large WSI datasets. Additionally, HistoKernel offers a novel perturbation-based method for patch-level explainability. Our analysis over large pan-cancer datasets shows that HistoKernel achieves performance that typically matches or exceeds existing state-of-the-art methods across diverse tasks, including WSI retrieval (n = 9324), drug sensitivity regression (n = 551), point mutation classification (n = 3419), and survival analysis (n = 2291). By pioneering the use of kernel-based methods for a diverse range of WSI-level predictive tasks, HistoKernel opens new avenues for computational pathology research especially in terms of rapid prototyping on large and complex computational pathology datasets. Code and interactive visualization are available at: https://histokernel.dcs.warwick.ac.uk/.
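HistoKernel's core quantity, the MMD between two WSIs' patch-feature sets, is easy to state concretely. A minimal numpy sketch of the biased squared-MMD estimator with an RBF kernel (illustrative only, not the paper's implementation):

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Pairwise RBF kernel between two sets of feature vectors."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd_squared(x, y, gamma=1.0):
    """Biased estimator of squared Maximum Mean Discrepancy.

    x, y: (n_patches, feature_dim) arrays of local features from two WSIs.
    """
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())
```

A pairwise MMD matrix over a cohort can then be turned into a slide-level similarity kernel (e.g. exp(-MMD²) or similar) and fed to any kernel machine for classification, regression, or clustering.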
Affiliation(s)
- Piotr Keller
- Tissue Image Analytics Centre, University of Warwick, Coventry, CV4 7AL, United Kingdom
- Muhammad Dawood
- Tissue Image Analytics Centre, University of Warwick, Coventry, CV4 7AL, United Kingdom
- Brinder Singh Chohan
- Department of Cellular Pathology, Royal Derby Hospital, Derby, DE22 3NE, United Kingdom
5
Dimitriadis A, Kalliatakis G, Osuala R, Kessler D, Mazzetti S, Regge D, Diaz O, Lekadir K, Fotiadis D, Tsiknakis M, Papanikolaou N, ProCAncer-I Consortium, Marias K. Assessing Cancer Presence in Prostate MRI Using Multi-Encoder Cross-Attention Networks. J Imaging 2025; 11:98. PMID: 40278014. PMCID: PMC12028011. DOI: 10.3390/jimaging11040098.
Abstract
Prostate cancer (PCa) is currently the second most prevalent cancer among men. Accurate diagnosis of PCa can enable effective treatment for patients and reduce mortality. Previous works have largely focused on either lesion detection or lesion classification of PCa from magnetic resonance imaging (MRI). In this work, we focus on a critical yet underexplored task of the PCa clinical workflow: distinguishing cases with cancer presence (pathologically confirmed PCa patients) from conditions with no suspicious PCa findings (no cancer presence). To this end, we conduct large-scale experiments for this task for the first time by adopting and processing the multi-centric ProstateNET Imaging Archive, which contains more than 6 million image representations of PCa from more than 11,000 PCa cases, representing the largest collection of PCa MR images. Bi-parametric MR (bpMRI) images of 4504 patients, alongside their clinical variables, are used for training, while the architectures are evaluated on two hold-out test sets of 975 retrospective and 435 prospective patients. Our proposed multi-encoder cross-attention fusion architecture achieved a promising area under the receiver operating characteristic curve (AUC) of 0.91. This demonstrates our method's capability to fuse complex bi-parametric imaging modalities and enhance model robustness, paving the way towards the clinical adoption of deep learning models for accurately determining the presence of PCa across patient populations.
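The fusion mechanism named in the title, cross-attention between encoders, can be illustrated in a few lines: queries from one modality's encoder attend over the other modality's token features. A numpy sketch under simplified assumptions (single head, no learned projections; names are illustrative, not the paper's architecture):

```python
import numpy as np

def cross_attention(queries, context):
    """Scaled dot-product cross-attention.

    queries: (n_q, d) tokens from one encoder (e.g. a T2w branch)
    context: (n_k, d) tokens from the other encoder (e.g. a DWI branch)
    Returns (n_q, d): each query re-expressed as a convex mix of context tokens.
    """
    d = queries.shape[-1]
    logits = queries @ context.T / np.sqrt(d)
    logits -= logits.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over context tokens
    return weights @ context
```

In a full network the fused tokens would pass through learned projections and be pooled into a patient-level logit; the sketch only shows the attention step itself.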
Affiliation(s)
- Avtantil Dimitriadis
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), N. Plastira 100, Vassilika Vouton, 70013 Heraklion, Greece
- Department of Mathematics and Computer Science, Universitat de Barcelona, Gran Via de les Corts Catalanes, 585, L’Eixample, 08007 Barcelona, Spain
- Department of Electrical and Computer Engineering, Hellenic Mediterranean University (HMU), Estavromenos, 71410 Heraklion, Greece
- Grigorios Kalliatakis
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), N. Plastira 100, Vassilika Vouton, 70013 Heraklion, Greece
- Richard Osuala
- Department of Mathematics and Computer Science, Universitat de Barcelona, Gran Via de les Corts Catalanes, 585, L’Eixample, 08007 Barcelona, Spain
- Dimitri Kessler
- Department of Mathematics and Computer Science, Universitat de Barcelona, Gran Via de les Corts Catalanes, 585, L’Eixample, 08007 Barcelona, Spain
- Simone Mazzetti
- Department of Radiology, Candiolo Cancer Institute–FPO, IRCCS, 10060 Candiolo, Torino, Italy
- Daniele Regge
- Department of Radiology, Candiolo Cancer Institute–FPO, IRCCS, 10060 Candiolo, Torino, Italy
- Oliver Diaz
- Department of Mathematics and Computer Science, Universitat de Barcelona, Gran Via de les Corts Catalanes, 585, L’Eixample, 08007 Barcelona, Spain
- Karim Lekadir
- Department of Mathematics and Computer Science, Universitat de Barcelona, Gran Via de les Corts Catalanes, 585, L’Eixample, 08007 Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig Lluís Companys 23, 08010 Barcelona, Spain
- Dimitrios Fotiadis
- Biomedical Research Institute, FORTH, University Campus of Ioannina, 45110 Ioannina, Greece
- Manolis Tsiknakis
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), N. Plastira 100, Vassilika Vouton, 70013 Heraklion, Greece
- Department of Electrical and Computer Engineering, Hellenic Mediterranean University (HMU), Estavromenos, 71410 Heraklion, Greece
- Kostas Marias
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), N. Plastira 100, Vassilika Vouton, 70013 Heraklion, Greece
- Department of Electrical and Computer Engineering, Hellenic Mediterranean University (HMU), Estavromenos, 71410 Heraklion, Greece
6
Yan W, Chiu B, Shen Z, Yang Q, Syer T, Min Z, Punwani S, Emberton M, Atkinson D, Barratt DC, Hu Y. Combiner and HyperCombiner networks: Rules to combine multimodality MR images for prostate cancer localisation. Med Image Anal 2024; 91:103030. PMID: 37995627. DOI: 10.1016/j.media.2023.103030.
Abstract
One of the distinct characteristics of radiologists reading multiparametric prostate MR scans, using reporting systems like PI-RADS v2.1, is to score individual types of MR modalities, including T2-weighted, diffusion-weighted, and dynamic contrast-enhanced, and then combine these image-modality-specific scores using standardised decision rules to predict the likelihood of clinically significant cancer. This work aims to demonstrate that it is feasible for low-dimensional parametric models to model such decision rules in the proposed Combiner networks, without compromising the accuracy of predicting radiologic labels. First, we demonstrate that either a linear mixture model or a nonlinear stacking model is sufficient to model PI-RADS decision rules for localising prostate cancer. Second, parameters of these combining models are proposed as hyperparameters, weighing independent representations of individual image modalities in the Combiner network training, as opposed to end-to-end modality ensemble. A HyperCombiner network is developed to train a single image segmentation network that can be conditioned on these hyperparameters during inference for much-improved efficiency. Experimental results based on 751 cases from 651 patients compare the proposed rule-modelling approaches with other commonly-adopted end-to-end networks, in this downstream application of automating radiologist labelling on multiparametric MR. By acquiring and interpreting the modality combining rules, specifically the linear-weights or odds ratios associated with individual image modalities, three clinical applications are quantitatively presented and contextualised in the prostate cancer segmentation application, including modality availability assessment, importance quantification and rule discovery.
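The linear mixture rule described above, combining per-modality cancer probability maps with learned weights, reduces to a weighted voxel-wise average. A hedged sketch (the normalisation and naming are illustrative assumptions, not the paper's exact parameterisation):

```python
import numpy as np

def combine_linear(prob_maps, weights):
    """Voxel-wise linear mixture of per-modality probability maps.

    prob_maps: list of equally shaped arrays in [0, 1], one per MR modality
    weights:   non-negative importance of each modality
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # convex weights keep output in [0, 1]
    return np.tensordot(w, np.stack(prob_maps), axes=1)
```

Inspecting fitted weights of this kind is the sort of modality-importance quantification and availability assessment the abstract refers to.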
Affiliation(s)
- Wen Yan
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Hong Kong, China; Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK
- Bernard Chiu
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Hong Kong, China; Department of Physics & Computer Science, Wilfrid Laurier University, 75 University Avenue West, Waterloo, Ontario N2L 3C5, Canada
- Ziyi Shen
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK
- Qianye Yang
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK
- Tom Syer
- Centre for Medical Imaging, Division of Medicine, University College London, London W1W 7TS, UK
- Zhe Min
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK
- Shonit Punwani
- Centre for Medical Imaging, Division of Medicine, University College London, London W1W 7TS, UK
- Mark Emberton
- Division of Surgery & Interventional Science, University College London, Gower St, WC1E 6BT, London, UK
- David Atkinson
- Centre for Medical Imaging, Division of Medicine, University College London, London W1W 7TS, UK
- Dean C Barratt
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK
- Yipeng Hu
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, WC1E 6BT, London, UK
7
Tang J, Zheng X, Wang X, Mao Q, Xie L, Wang R. Computer-aided detection of prostate cancer in early stages using multi-parameter MRI: A promising approach for early diagnosis. Technol Health Care 2024; 32:125-133. PMID: 38759043. PMCID: PMC11191472. DOI: 10.3233/thc-248011.
Abstract
BACKGROUND Transrectal ultrasound-guided prostate biopsy is the gold standard diagnostic test for prostate cancer, but it is an invasive, non-targeted puncture examination with a high false-negative rate. OBJECTIVE In this study, we aimed to develop a computer-assisted prostate cancer diagnosis method based on multiparametric MRI (mpMRI) images. METHODS We retrospectively collected data from 106 patients who underwent radical prostatectomy after diagnosis by prostate biopsy. mpMRI images, including T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) sequences, were analyzed accordingly. We extracted regions of interest (ROIs) covering the tumor and benign areas on the three sequential MRI axial images at the same level. ROI data from 433 mpMRI images were obtained, of which 202 were benign and 231 were malignant. Of those, 50 benign and 50 malignant images were used for training, and the remaining 333 images were used for verification. Five main feature groups, including histogram, GLCM, GLGCM, wavelet-based multi-fractional Brownian motion features, and Minkowski functional features, were extracted from the mpMRI images. The selected characteristic parameters were analyzed with MATLAB software, and the three analysis methods with the highest accuracy were selected. RESULTS In prostate cancer identification based on mpMRI images, we found that the system, using 58 texture features and three classification algorithms, namely Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Ensemble Learning (EL), performed well. In the T2WI-based classification results, the SVM achieved optimal accuracy and AUC values of 64.3% and 0.67. In the DCE-based classification results, the SVM achieved optimal accuracy and AUC values of 72.2% and 0.77. In the DWI-based classification results, ensemble learning achieved optimal accuracy and AUC values of 75.1% and 0.82. In the classification results based on all data combinations, the SVM achieved optimal accuracy and AUC values of 66.4% and 0.73. CONCLUSION The proposed computer-aided diagnosis system provides a good assessment of prostate cancer diagnosis, which may reduce the burden on radiologists and improve the early diagnosis of prostate cancer.
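Of the texture families listed (histogram, GLCM, GLGCM, wavelet, Minkowski), the grey-level co-occurrence matrix is the most commonly reimplemented. A minimal sketch of a symmetric, normalised GLCM and one derived feature (contrast) for a single pixel offset; this is a generic formulation for illustration, not the study's exact feature code:

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Symmetric, normalised grey-level co-occurrence matrix for one offset.

    image: 2D integer array with values in [0, levels)
    """
    m = np.zeros((levels, levels))
    h, w = image.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[image[i, j], image[i + dy, j + dx]] += 1
    m = m + m.T                      # count each pair in both directions
    return m / m.sum()

def glcm_contrast(m):
    """Contrast: mean squared grey-level difference of co-occurring pairs."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())
```

A flat region gives zero contrast; alternating stripes at the offset spacing give maximal contrast, which is why such features separate homogeneous benign tissue from heterogeneous tumour texture.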
Affiliation(s)
- Jianer Tang
- Department of Urology, First Affiliated Hospital, Medical College of Zhejiang University, Hangzhou, Zhejiang, China
- Department of Urology, First Affiliated Hospital of Huzhou Teachers College, Huzhou, Zhejiang, China
- Xiangyi Zheng
- Department of Urology, First Affiliated Hospital, Medical College of Zhejiang University, Hangzhou, Zhejiang, China
- Xiao Wang
- Department of Urology, First Affiliated Hospital, Medical College of Zhejiang University, Hangzhou, Zhejiang, China
- Qiqi Mao
- Department of Urology, First Affiliated Hospital, Medical College of Zhejiang University, Hangzhou, Zhejiang, China
- Liping Xie
- Department of Urology, First Affiliated Hospital, Medical College of Zhejiang University, Hangzhou, Zhejiang, China
- Rongjiang Wang
- Department of Urology, First Affiliated Hospital of Huzhou Teachers College, Huzhou, Zhejiang, China
8
Wang M, Jiang H. PST-Radiomics: a PET/CT lymphoma classification method based on pseudo spatial-temporal radiomic features and structured atrous recurrent convolutional neural network. Phys Med Biol 2023; 68:235014. PMID: 37956448. DOI: 10.1088/1361-6560/ad0c0f.
Abstract
Objective. Existing radiomic methods tend to treat each isolated tumor as an inseparable whole when extracting radiomic features. However, they may discard the critical intra-tumor metabolic heterogeneity (ITMH) information that contributes to triggering tumor subtypes. To improve lymphoma classification performance, we propose a pseudo spatial-temporal radiomic method (PST-Radiomics) based on positron emission tomography/computed tomography (PET/CT). Approach. Specifically, to enable exploitation of ITMH, we first present a multi-threshold gross tumor volume sequence (GTVS). Next, we extract 1D radiomic features based on PET images and each volume in the GTVS and create a pseudo spatial-temporal feature sequence (PSTFS) tightly interwoven with ITMH. Then, we reshape PSTFS to create 2D pseudo spatial-temporal feature maps (PSTFM), whose columns are elements of PSTFS. Finally, to learn from PSTFM in an end-to-end manner, we build a lightweight pseudo spatial-temporal radiomic network (PSTR-Net), in which a structured atrous recurrent convolutional neural network serves as a PET branch to better exploit the strong local dependencies in PSTFM, and a residual convolutional neural network is used as a CT branch to exploit conventional radiomic features extracted from CT volumes. Main results. We validate PST-Radiomics on a PET/CT lymphoma subtype classification task. Experimental results quantitatively demonstrate the superiority of PST-Radiomics when compared to existing radiomic methods. Significance. Feature map visualization of our method shows that it performs complex feature selection while extracting hierarchical feature maps, which qualitatively demonstrates its superiority.
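The multi-threshold gross tumor volume sequence (GTVS) can be pictured as a nested series of binary masks obtained at increasing uptake thresholds, with 1D radiomic features extracted per mask. An illustrative sketch (the threshold grid and the voxel-count feature are assumptions for illustration, not the paper's exact settings):

```python
import numpy as np

def gtv_sequence(suv_map, thresholds):
    """Binary tumour masks at increasing SUV thresholds (a nested sequence)."""
    return [suv_map >= t for t in thresholds]

def volume_feature_sequence(suv_map, thresholds):
    """One simple per-mask feature (voxel count), ordered along the sequence."""
    return [int(mask.sum()) for mask in gtv_sequence(suv_map, thresholds)]
```

Stacking such per-threshold feature vectors side by side is what produces the 2D pseudo spatial-temporal feature maps the network then learns from.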
Affiliation(s)
- Meng Wang
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
9
Pachetti E, Colantonio S. 3D-Vision-Transformer Stacking Ensemble for Assessing Prostate Cancer Aggressiveness from T2w Images. Bioengineering (Basel) 2023; 10:1015. PMID: 37760117. PMCID: PMC10525095. DOI: 10.3390/bioengineering10091015.
Abstract
Vision transformers represent a cutting-edge topic in computer vision and are usually employed on two-dimensional data following a transfer learning approach. In this work, we propose a trained-from-scratch stacking ensemble of 3D vision transformers to assess prostate cancer aggressiveness from T2-weighted images, to help radiologists diagnose this disease without performing a biopsy. We trained 18 3D vision transformers on T2-weighted axial acquisitions and combined them into two- and three-model stacking ensembles. We defined two metrics for measuring model prediction confidence, and we trained all the ensemble combinations according to a five-fold cross-validation, evaluating their accuracy, confidence in predictions, and calibration. In addition, we optimized the 18 base ViTs and compared the best-performing base and ensemble models by re-training them on a 100-sample bootstrapped training set and evaluating each model on the hold-out test set. We compared the two distributions by calculating the median and the 95% confidence interval and performing a Wilcoxon signed-rank test. The best-performing 3D-vision-transformer stacking ensemble provided state-of-the-art results in terms of area under the receiver operating characteristic curve (0.89 [0.61-1]) and exceeded the area under the precision-recall curve of the base model by 22% (p < 0.001). However, it proved less confident in classifying the positive class.
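The median and 95% CI reported for the bootstrapped comparison can be read directly off the percentiles of the bootstrap distribution. A minimal sketch of that summary step (a generic percentile interval, not necessarily the authors' exact procedure):

```python
import numpy as np

def bootstrap_summary(metric_samples, ci=95.0):
    """Median and percentile confidence interval of a bootstrapped metric."""
    samples = np.asarray(metric_samples, dtype=float)
    half_alpha = (100.0 - ci) / 2.0
    return (float(np.median(samples)),
            float(np.percentile(samples, half_alpha)),
            float(np.percentile(samples, 100.0 - half_alpha)))
```

The per-resample paired metric differences between the base and ensemble models are what a Wilcoxon signed-rank test would then be run on.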
Affiliation(s)
- Eva Pachetti
- “Alessandro Faedo” Institute of Information Science and Technologies (ISTI), National Research Council of Italy (CNR), 56127 Pisa, Italy
- Department of Information Engineering (DII), University of Pisa, 56122 Pisa, Italy
- Sara Colantonio
- “Alessandro Faedo” Institute of Information Science and Technologies (ISTI), National Research Council of Italy (CNR), 56127 Pisa, Italy
10
Karagoz A, Alis D, Seker ME, Zeybel G, Yergin M, Oksuz I, Karaarslan E. Anatomically guided self-adapting deep neural network for clinically significant prostate cancer detection on bi-parametric MRI: a multi-center study. Insights Imaging 2023; 14:110. PMID: 37337101. DOI: 10.1186/s13244-023-01439-0.
Abstract
OBJECTIVE To evaluate the effectiveness of a self-adapting deep network, trained on large-scale bi-parametric MRI data, in detecting clinically significant prostate cancer (csPCa) in external multi-center data from men of diverse demographics; to investigate the advantages of transfer learning. METHODS We used two samples: (i) Publicly available multi-center and multi-vendor Prostate Imaging: Cancer AI (PI-CAI) training data, consisting of 1500 bi-parametric MRI scans, along with its unseen validation and testing samples; (ii) In-house multi-center testing and transfer learning data, comprising 1036 and 200 bi-parametric MRI scans. We trained a self-adapting 3D nnU-Net model using probabilistic prostate masks on the PI-CAI data and evaluated its performance on the hidden validation and testing samples and the in-house data with and without transfer learning. We used the area under the receiver operating characteristic (AUROC) curve to evaluate patient-level performance in detecting csPCa. RESULTS The PI-CAI training data had 425 scans with csPCa, while the in-house testing and fine-tuning data had 288 and 50 scans with csPCa, respectively. The nnU-Net model achieved an AUROC of 0.888 and 0.889 on the hidden validation and testing data. The model performed with an AUROC of 0.886 on the in-house testing data, with a slight decrease in performance to 0.870 using transfer learning. CONCLUSIONS The state-of-the-art deep learning method using prostate masks trained on large-scale bi-parametric MRI data provides high performance in detecting csPCa in internal and external testing data with different characteristics, demonstrating the robustness and generalizability of deep learning within and across datasets. 
CLINICAL RELEVANCE STATEMENT A self-adapting deep network, utilizing prostate masks and trained on large-scale bi-parametric MRI data, is effective in accurately detecting clinically significant prostate cancer across diverse datasets, highlighting the potential of deep learning methods for improving prostate cancer detection in clinical practice.
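Several entries in this list, including this one, report patient-level AUROC for csPCa detection. As an illustrative sketch only (not the authors' pipeline), a lesion-probability map can be reduced to a single patient-level score by taking its maximum voxel probability, and the AUROC computed by rank comparison; all data below are hypothetical:

```python
import numpy as np

def patient_level_scores(prob_maps):
    """Reduce each lesion-probability map to one patient-level score
    by taking its maximum voxel probability (a common convention)."""
    return np.array([float(p.max()) for p in prob_maps])

def auc_score(labels, scores):
    """AUROC via pairwise rank comparison (Mann-Whitney formulation)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical data: 4 patients, tiny 2x2 "probability maps" for brevity.
prob_maps = [
    np.array([[0.05, 0.10], [0.20, 0.15]]),  # negative case
    np.array([[0.30, 0.85], [0.40, 0.10]]),  # csPCa case
    np.array([[0.12, 0.08], [0.25, 0.18]]),  # negative case
    np.array([[0.70, 0.55], [0.20, 0.30]]),  # csPCa case
]
labels = [0, 1, 0, 1]  # 1 = csPCa (GGG >= 2)

scores = patient_level_scores(prob_maps)
auc = auc_score(labels, scores)  # positives score above all negatives -> 1.0
```

Here every positive patient outscores every negative one, so the AUROC is exactly 1.0; real cohorts of course interleave.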
Affiliation(s)
- Ahmet Karagoz: Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey; Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey
- Deniz Alis: Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey; Department of Radiology, School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
- Mustafa Ege Seker: School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
- Gokberk Zeybel: School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
- Mert Yergin: Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey
- Ilkay Oksuz: Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey
- Ercan Karaarslan: Department of Radiology, School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
11
Rouvière O, Jaouen T, Baseilhac P, Benomar ML, Escande R, Crouzet S, Souchon R. Artificial intelligence algorithms aimed at characterizing or detecting prostate cancer on MRI: How accurate are they when tested on independent cohorts? – A systematic review. Diagn Interv Imaging 2022; 104:221-234. [PMID: 36517398 DOI: 10.1016/j.diii.2022.11.005] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Accepted: 11/22/2022] [Indexed: 12/14/2022]
Abstract
PURPOSE The purpose of this study was to perform a systematic review of the literature on the diagnostic performance, in independent test cohorts, of artificial intelligence (AI)-based algorithms aimed at characterizing/detecting prostate cancer on magnetic resonance imaging (MRI). MATERIALS AND METHODS Medline, Embase and Web of Science were searched for studies published between January 2018 and September 2022, using a histological reference standard, and assessing prostate cancer characterization/detection by AI-based MRI algorithms in test cohorts composed of more than 40 patients and with at least one of the following independency criteria as compared to the training cohort: different institution, different population type, different MRI vendor, different magnetic field strength or strict temporal splitting. RESULTS Thirty-five studies were selected. The overall risk of bias was low. However, 23 studies did not use predefined diagnostic thresholds, which may have optimistically biased the results. Test cohorts fulfilled one to three of the five independency criteria. The diagnostic performance of the algorithms used as standalones was good, challenging that of human reading. In the 12 studies with predefined diagnostic thresholds, radiomics-based computer-aided diagnosis systems (assessing regions-of-interest drawn by the radiologist) tended to provide more robust results than deep learning-based computer-aided detection systems (providing probability maps). Two of the six studies comparing unassisted and assisted reading showed significant improvement due to the algorithm, mostly by reducing false positive findings. CONCLUSION Prostate MRI AI-based algorithms showed promising results, especially for the relatively simple task of characterizing predefined lesions. The best management of discrepancies between human reading and algorithm findings still needs to be defined.
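The review's caution about studies without predefined diagnostic thresholds can be made concrete with a toy sketch (hypothetical scores, not drawn from any reviewed study): choosing the operating threshold on the test data itself can only match or inflate the accuracy obtained with a threshold fixed in advance.

```python
import numpy as np

def accuracy_at(threshold, scores, labels):
    """Accuracy of binarized predictions at a given score threshold."""
    preds = (scores >= threshold).astype(int)
    return float((preds == labels).mean())

# Hypothetical algorithm scores and ground truth on an independent test cohort.
scores = np.array([0.15, 0.35, 0.45, 0.55, 0.60, 0.80, 0.25, 0.70])
labels = np.array([0,    0,    1,    0,    1,    1,    1,    0])

# Threshold prespecified before seeing the test data.
acc_prespecified = accuracy_at(0.5, scores, labels)

# Post-hoc "best" threshold chosen on the test data itself:
# this is the optimistic bias the review warns about.
acc_posthoc = max(accuracy_at(t, scores, labels) for t in np.unique(scores))

assert acc_posthoc >= acc_prespecified  # holds by construction
```

With these numbers the prespecified threshold gives 0.5 accuracy while the post-hoc one reports 0.625, an inflation that would not generalize.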
Affiliation(s)
- Olivier Rouvière: Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France; Université Lyon 1, Faculté de médecine Lyon Est, Lyon 69003, France; LabTAU, INSERM, U1032, Lyon 69003, France
- Pierre Baseilhac: Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France
- Mohammed Lamine Benomar: LabTAU, INSERM, U1032, Lyon 69003, France; University of Ain Temouchent, Faculty of Science and Technology, Algeria
- Raphael Escande: Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France
- Sébastien Crouzet: Université Lyon 1, Faculté de médecine Lyon Est, Lyon 69003, France; LabTAU, INSERM, U1032, Lyon 69003, France; Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Urology, Lyon 69003, France
12
Yuan W, Cheng L, Yang J, Yin B, Fan X, Yang J, Li S, Zhong J, Huang X. Noninvasive oral cancer screening based on local residual adaptation network using optical coherence tomography. Med Biol Eng Comput 2022; 60:1363-1375. [PMID: 35359200 DOI: 10.1007/s11517-022-02535-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Accepted: 01/30/2022] [Indexed: 02/06/2023]
Abstract
Oral cancer is known as one of the relatively common malignancy types worldwide. Despite the easy access of the oral cavity to examination, invasive biopsy is still essential for final diagnosis, which requires laborious operation by specially trained specialists. With the development of deep learning, artificial intelligence (AI) techniques have been applied to oral cancer examination and alleviate the workload of manual screening on biopsy. However, existing computer-aided oral cancer diagnostic methods focus on oral cavity photographs and histology images, which require complicated operations for doctors and are invasive and painful for patients. As a noninvasive, real-time imaging technique, optical coherence tomography (OCT) can capture sufficient identifying information for oral cancer screening, but it has not been effectively explored for automatic oral cancer diagnosis. This paper proposes a novel deep learning method named Local Residual Adaptation Network (LRAN) for noninvasive oral cancer screening on OCT images, collected from 25 patients in Beijing Stomatological Hospital. The proposed LRAN consists of a Residual Feature Representation (RFR) module and a Local Distribution Adaptation (LDA) module. Specifically, RFR first adopts stacked residual blocks as the backbone network to learn feature representations for the training data, optimized by the cross-entropy loss, and then deploys Euclidean distance to measure the distribution distance between training and testing OCT images. Finally, LRAN bridges the distribution gap with the LDA module, which integrates a local maximum mean discrepancy constraint to estimate and minimize the distribution discrepancy between training and testing sets within the same category. We also collected an OCT-based oral cancer image dataset to evaluate the effectiveness of the proposed method; it achieves an accuracy of 91.62%, a sensitivity of 91.66%, and a specificity of 92.58% on this self-collected dataset. Furthermore, quantitative and qualitative analyses demonstrate that the LRAN model has excellent capability for the noninvasive oral cancer screening task.
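The local maximum mean discrepancy constraint in the LDA module builds on the standard kernel MMD statistic. Below is a minimal numpy sketch of a biased Gaussian-kernel MMD² estimate between two feature sets; the data are hypothetical, and the paper's per-category ("local") application of the constraint is not reproduced here:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gram matrix of the RBF kernel between rows of x and rows of y."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between
    the distributions that generated samples x and y (always >= 0)."""
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(42)
src = rng.normal(0.0, 1.0, size=(50, 8))       # e.g. training-set features
tgt_near = rng.normal(0.0, 1.0, size=(50, 8))  # same distribution
tgt_far = rng.normal(3.0, 1.0, size=(50, 8))   # shifted distribution

# A larger discrepancy signals a bigger train/test distribution gap,
# which is what the adaptation module tries to minimize.
gap_near = mmd2(src, tgt_near)
gap_far = mmd2(src, tgt_far)
```

Minimizing such a statistic (here simply measured, not optimized) is the usual mechanism behind distribution-gap bridging.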
Affiliation(s)
- Wei Yuan: Department of Oral and Maxillofacial-Head and Neck Oncology, Beijing Stomatological Hospital, School of Stomatology, Capital Medical University, Beijing, China
- Long Cheng: Department of Oral and Maxillofacial-Head and Neck Oncology, Beijing Stomatological Hospital, School of Stomatology, Capital Medical University, Beijing, China
- Jinsuo Yang: Department of Oral and Maxillofacial-Head and Neck Oncology, Beijing Stomatological Hospital, School of Stomatology, Capital Medical University, Beijing, China
- Boya Yin: Department of Oral and Maxillofacial-Head and Neck Oncology, Beijing Stomatological Hospital, School of Stomatology, Capital Medical University, Beijing, China
- Xingyu Fan: Department of Oral and Maxillofacial-Head and Neck Oncology, Beijing Stomatological Hospital, School of Stomatology, Capital Medical University, Beijing, China
- Jing Yang: Department of Oral and Maxillofacial-Head and Neck Oncology, Beijing Stomatological Hospital, School of Stomatology, Capital Medical University, Beijing, China
- Sen Li: College of Science, Harbin Institute of Technology, Shenzhen, China
- Jianjun Zhong: School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing, China
- Xin Huang: Department of Oral and Maxillofacial-Head and Neck Oncology, Beijing Stomatological Hospital, School of Stomatology, Capital Medical University, Beijing, China
13
Mehralivand S, Yang D, Harmon SA, Xu D, Xu Z, Roth H, Masoudi S, Kesani D, Lay N, Merino MJ, Wood BJ, Pinto PA, Choyke PL, Turkbey B. Deep learning-based artificial intelligence for prostate cancer detection at biparametric MRI. Abdom Radiol (NY) 2022; 47:1425-1434. [PMID: 35099572 PMCID: PMC10506420 DOI: 10.1007/s00261-022-03419-2] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2021] [Revised: 01/09/2022] [Accepted: 01/10/2022] [Indexed: 11/01/2022]
Abstract
PURPOSE To present a fully automated DL-based prostate cancer detection system for prostate MRI. METHODS MRI scans from two institutions were used for algorithm training, validation, and testing. MRI-visible lesions were contoured by an experienced radiologist. All lesions were biopsied under MRI-TRUS guidance. Lesion masks and histopathological results were used as ground-truth labels to train UNet and AH-Net architectures for prostate cancer lesion detection and segmentation. The algorithm was trained to detect any prostate cancer ≥ ISUP1. Detection sensitivity, positive predictive value (PPV), and mean number of false positive lesions per patient were used as performance metrics. RESULTS 525 patients were included for training, validation, and testing of the algorithm. The dataset was split into training (n = 368, 70%), validation (n = 79, 15%), and test (n = 78, 15%) cohorts. Dice coefficients in the training and validation sets were 0.403 and 0.307, respectively, for the AH-Net model, compared to 0.372 and 0.287, respectively, for the UNet model. In the validation set, detection sensitivity was 70.9%, PPV was 35.5%, and the mean number of false positive lesions per patient was 1.41 (range 0-6) for the UNet model, compared to a detection sensitivity of 74.4%, a PPV of 47.8%, and a mean of 0.87 false positive lesions per patient (range 0-5) for the AH-Net model. In the test set, detection sensitivity for UNet was 72.8% compared to 63.0% for AH-Net, with a mean number of false positive lesions per patient of 1.90 (range 0-7) and 1.40 (range 0-6) for the UNet and AH-Net models, respectively. CONCLUSION We developed a DL-based AI approach which predicts prostate cancer lesions at biparametric MRI with reasonable performance metrics. While false positive lesion calls remain a challenge for AI-assisted detection algorithms, this system can be utilized as an adjunct tool by radiologists.
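The lesion-level metrics reported in this abstract (detection sensitivity, PPV, mean false positives per patient) can be sketched from per-patient detection counts. The counts below are hypothetical, and the sketch omits the lesion-matching step (typically an overlap criterion between predicted and ground-truth lesions) that precedes counting:

```python
def detection_metrics(per_patient):
    """per_patient: list of (true_positives, false_positives, false_negatives)
    lesion counts, one tuple per patient."""
    tp = sum(p[0] for p in per_patient)
    fp = sum(p[1] for p in per_patient)
    fn = sum(p[2] for p in per_patient)
    sensitivity = tp / (tp + fn)          # detected / all true lesions
    ppv = tp / (tp + fp)                  # detected / all predicted lesions
    mean_fp_per_patient = fp / len(per_patient)
    return sensitivity, ppv, mean_fp_per_patient

# Hypothetical counts for 4 patients: (TP, FP, FN) lesions each.
counts = [(1, 0, 0), (2, 1, 1), (0, 2, 1), (1, 1, 0)]
sens, ppv, fp_rate = detection_metrics(counts)
```

With these counts the cohort totals are TP = 4, FP = 4, FN = 2, giving a sensitivity of 4/6, a PPV of 0.5, and 1.0 false positives per patient.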
Affiliation(s)
- Dong Yang: NVIDIA Corporation, Santa Clara, CA, USA
- Daguang Xu: NVIDIA Corporation, Santa Clara, CA, USA
- Ziyue Xu: NVIDIA Corporation, Santa Clara, CA, USA
- Deepak Kesani: Molecular Imaging Branch, NCI, NIH, Bethesda, MD, USA
- Nathan Lay: Molecular Imaging Branch, NCI, NIH, Bethesda, MD, USA
- Bradford J Wood: Center for Interventional Oncology, NCI, NIH, Bethesda, MD, USA; Department of Radiology, Clinical Center, NIH, Bethesda, MD, USA
- Peter A Pinto: Urologic Oncology Branch, NCI, NIH, Bethesda, MD, USA
- Baris Turkbey: Molecular Imaging Branch, National Cancer Institute, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, MD, 20892-1088, USA
14
Hamzaoui D, Montagne S, Renard-Penna R, Ayache N, Delingette H. Automatic zonal segmentation of the prostate from 2D and 3D T2-weighted MRI and evaluation for clinical use. J Med Imaging (Bellingham) 2022; 9:024001. [PMID: 35300345 PMCID: PMC8920492 DOI: 10.1117/1.jmi.9.2.024001] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 02/23/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: An accurate zonal segmentation of the prostate is required for prostate cancer (PCa) management with MRI. Approach: The aim of this work is to present UFNet, a deep learning-based method for automatic zonal segmentation of the prostate from T2-weighted (T2w) MRI. It takes into account the image anisotropy, includes both spatial and channelwise attention mechanisms, and uses loss functions to enforce prostate partition. The method was applied on a private multicentric three-dimensional T2w MRI dataset and on the public two-dimensional T2w MRI dataset ProstateX. To assess the model performance, the structures segmented by the algorithm on the private dataset were compared with those obtained by seven radiologists of various experience levels. Results: On the private dataset, we obtained a Dice score (DSC) of 93.90 ± 2.85 for the whole gland (WG), 91.00 ± 4.34 for the transition zone (TZ), and 79.08 ± 7.08 for the peripheral zone (PZ). Results were significantly better than those of the other compared networks (p-value < 0.05). On ProstateX, we obtained a DSC of 90.90 ± 2.94 for WG, 86.84 ± 4.33 for TZ, and 78.40 ± 7.31 for PZ. These results are similar to state-of-the-art results and, on the private dataset, are coherent with those obtained by radiologists. Zonal locations and sectorial positions of lesions annotated by radiologists were also preserved. Conclusions: Deep learning-based methods can provide an accurate zonal segmentation of the prostate leading to a consistent zonal location and sectorial position of lesions, and can therefore be used as a helping tool for PCa diagnosis.
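The Dice score (DSC) used throughout this abstract measures the overlap between a predicted mask and the ground truth. A minimal sketch on hypothetical binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks, in [0, 1]."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty -> perfect match

# Hypothetical 4x4 "zone" masks: the prediction overlaps the ground truth
# but spills one column to the right.
gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1      # 4 ground-truth voxels
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1    # 6 predicted voxels, 4 of which overlap

score = dice(pred, gt)  # 2*4 / (6+4) = 0.8
```

Papers usually report DSC per structure (WG, TZ, PZ), averaged over cases, exactly as in the results above.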
Affiliation(s)
- Dimitri Hamzaoui: Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
- Sarah Montagne: Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
- Raphaële Renard-Penna: Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
- Nicholas Ayache: Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
- Hervé Delingette: Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
15
Ayyad SM, Badawy MA, Shehata M, Alksas A, Mahmoud A, Abou El-Ghar M, Ghazal M, El-Melegy M, Abdel-Hamid NB, Labib LM, Ali HA, El-Baz A. A New Framework for Precise Identification of Prostatic Adenocarcinoma. SENSORS (BASEL, SWITZERLAND) 2022; 22:1848. [PMID: 35270995 PMCID: PMC8915102 DOI: 10.3390/s22051848] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 02/21/2022] [Accepted: 02/24/2022] [Indexed: 02/01/2023]
Abstract
Prostate cancer, which is also known as prostatic adenocarcinoma, is an unconstrained growth of epithelial cells in the prostate and has become one of the leading causes of cancer-related death worldwide. The survival of patients with prostate cancer relies on detection at an early, treatable stage. In this paper, we introduce a new comprehensive framework to precisely differentiate between malignant and benign prostate cancer. This framework proposes a noninvasive computer-aided diagnosis system that integrates two imaging modalities of MR (diffusion-weighted (DW) and T2-weighted (T2W)). For the first time, it utilizes the combination of functional features represented by apparent diffusion coefficient (ADC) maps estimated from DW-MRI for the whole prostate in combination with texture features with its first- and second-order representations, extracted from T2W-MRIs of the whole prostate, and shape features represented by spherical harmonics constructed for the lesion inside the prostate and integrated with PSA screening results. The dataset presented in the paper includes 80 biopsy confirmed patients, with a mean age of 65.7 years (43 benign prostatic hyperplasia, 37 prostatic carcinomas). Experiments were conducted using different well-known machine learning approaches including support vector machines (SVM), random forests (RF), decision trees (DT), and linear discriminant analysis (LDA) classification models to study the impact of different feature sets that lead to better identification of prostatic adenocarcinoma. 
Using a leave-one-out cross-validation approach, the diagnostic results obtained using the SVM classification model along with the combined feature set after applying feature selection (88.75% accuracy, 81.08% sensitivity, 95.35% specificity, and 0.8821 AUC) indicated that the system's performance, after integrating and reducing different types of feature sets, obtained an enhanced diagnostic performance compared with each individual feature set and other machine learning classifiers. In addition, the developed diagnostic system provided consistent diagnostic performance using 10-fold and 5-fold cross-validation approaches, which confirms the reliability, generalization ability, and robustness of the developed system.
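The leave-one-out cross-validation protocol used here can be sketched as follows; for a dependency-free illustration, a simple nearest-centroid classifier stands in for the paper's SVM, and the data are hypothetical:

```python
import numpy as np

def nearest_centroid_predict(train_x, train_y, x):
    """Stand-in classifier: predict the class whose training centroid is
    closest to x. (The paper uses an SVM; the CV protocol is identical.)"""
    classes = sorted(set(train_y))
    cents = {c: train_x[train_y == c].mean(axis=0) for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(x - cents[c]))

def leave_one_out_accuracy(X, y):
    """Each sample is held out once; the model is fit on all the others."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        pred = nearest_centroid_predict(X[mask], y[mask], X[i])
        hits += int(pred == y[i])
    return hits / len(X)

# Hypothetical 2-feature dataset: two well-separated classes
# (e.g. benign hyperplasia vs carcinoma feature vectors).
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
              [2.0, 2.1], [2.2, 2.0], [2.1, 1.9]])
y = np.array([0, 0, 0, 1, 1, 1])

acc = leave_one_out_accuracy(X, y)  # well separated -> 1.0
```

Leave-one-out makes the most of small cohorts like this paper's 80 patients, at the cost of training one model per sample.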
Affiliation(s)
- Sarah M. Ayyad: Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Mohamed A. Badawy: Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt
- Mohamed Shehata: BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Alksas: BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud: BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohamed Abou El-Ghar: Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt
- Mohammed Ghazal: Department of Electrical and Computer Engineering, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Moumen El-Melegy: Department of Electrical Engineering, Assiut University, Assiut 71511, Egypt
- Nahla B. Abdel-Hamid: Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Labib M. Labib: Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- H. Arafat Ali: Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt; Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 35516, Egypt
- Ayman El-Baz: BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
16
Jin J, Zhang L, Leng E, Metzger GJ, Koopmeiners JS. Multi-resolution super learner for voxel-wise classification of prostate cancer using multi-parametric MRI. J Appl Stat 2021; 50:805-826. [PMID: 36819087 PMCID: PMC9930806 DOI: 10.1080/02664763.2021.2017411] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2020] [Accepted: 12/05/2021] [Indexed: 10/19/2022]
Abstract
Multi-parametric MRI (mpMRI) is a critical tool in prostate cancer (PCa) diagnosis and management. To further advance the use of mpMRI in patient care, computer aided diagnostic methods are under continuous development for supporting/supplanting standard radiological interpretation. While voxel-wise PCa classification models are the gold standard, few if any approaches have incorporated the inherent structure of the mpMRI data, such as spatial heterogeneity and between-voxel correlation, into PCa classification. We propose a machine learning-based method to fill in this gap. Our method uses an ensemble learning approach to capture regional heterogeneity in the data, where classifiers are developed at multiple resolutions and combined using the super learner algorithm, and further account for between-voxel correlation through a Gaussian kernel smoother. It allows any type of classifier to be the base learner and can be extended to further classify PCa sub-categories. We introduce the algorithms for binary PCa classification, as well as for classifying the ordinal clinical significance of PCa for which a weighted likelihood approach is implemented to improve the detection of less prevalent cancer categories. The proposed method has shown important advantages over conventional modeling and machine learning approaches in simulations and application to our motivating patient data.
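Two ingredients of the method, combining base-learner probabilities and smoothing voxel-wise predictions with a Gaussian kernel, can be sketched in one dimension. The probabilities below are hypothetical, and the combination weights are fixed here for illustration; in the paper they are fit by the super learner's cross-validation, which is not reproduced:

```python
import numpy as np

def combine(base_probs, weights):
    """Super-learner-style combination: a convex weighted average of
    base-learner probability vectors."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return np.tensordot(weights, np.asarray(base_probs), axes=1)

def gaussian_smooth_1d(p, sigma=1.0, radius=3):
    """Smooth a 1D sequence of voxel probabilities with a Gaussian kernel,
    borrowing strength from neighbouring voxels (between-voxel correlation)."""
    offsets = np.arange(-radius, radius + 1)
    w = np.exp(-offsets**2 / (2 * sigma**2))
    w /= w.sum()
    padded = np.pad(p, radius, mode="edge")
    return np.array([np.dot(w, padded[i:i + 2 * radius + 1])
                     for i in range(len(p))])

# Hypothetical voxel probabilities from two base learners along a line of voxels.
p1 = np.array([0.1, 0.2, 0.9, 0.8, 0.2, 0.1])
p2 = np.array([0.2, 0.1, 0.8, 0.9, 0.3, 0.2])

ensemble = combine([p1, p2], weights=[0.6, 0.4])
smoothed = gaussian_smooth_1d(ensemble, sigma=1.0)
```

The smoothing step pulls isolated high-probability voxels toward their neighbours, which is one way to encode the spatial correlation the paper targets.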
Affiliation(s)
- Jin Jin: Department of Biostatistics, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA
- Lin Zhang: Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN, USA
- Ethan Leng: Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN, USA
- Joseph S. Koopmeiners: Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN, USA