2
Zhu M, Sali R, Baba F, Khasawneh H, Ryndin M, Leveillee RJ, Hurwitz MD, Lui K, Dixon C, Zhang DY. Artificial intelligence in pathologic diagnosis, prognosis and prediction of prostate cancer. Am J Clin Exp Urol 2024; 12:200-215. [PMID: 39308594] [PMCID: PMC11411179] [DOI: 10.62347/jsae9732]
Abstract
Histopathology, the gold standard for prostate cancer diagnosis, faces significant challenges. With prostate cancer ranking among the most common cancers in the United States and worldwide, pathologists face an increasing number of prostate biopsies. At the same time, precise pathological assessment and classification are necessary for risk stratification and treatment decisions in prostate cancer care, adding to the challenge for pathologists. Recent advances in digital pathology have made the adoption of artificial intelligence (AI) and machine learning tools in histopathology feasible. In this review, we introduce the concept of AI and its various techniques in the field of histopathology. We summarize the clinical applications of AI pathology for prostate cancer, including pathological diagnosis, grading, prognosis evaluation, and treatment options. We also discuss how AI applications can be integrated into the routine pathology workflow. With these rapid advancements, it is evident that AI applications in prostate cancer go beyond the initial goal of serving as tools for diagnosis and grading. Instead, pathologists can provide additional information to improve long-term patient outcomes by assessing detailed histopathologic features at the pixel level using digital pathology and AI. Our review not only provides a comprehensive summary of the existing research but also offers insights for future advancements.
Affiliation(s)
- Min Zhu
- Department of Computational Pathology, NovinoAI, 1443 NE 4th Ave, Fort Lauderdale, FL 33304, USA
- Rasoul Sali
- Department of Computational Pathology, NovinoAI, 1443 NE 4th Ave, Fort Lauderdale, FL 33304, USA
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Firas Baba
- Department of Computational Pathology, NovinoAI, 1443 NE 4th Ave, Fort Lauderdale, FL 33304, USA
- Hamdi Khasawneh
- King Hussein School of Computing Sciences, Princess Sumaya University for Technology, Amman 11855, Jordan
- Michelle Ryndin
- College of Agriculture and Life Sciences, Cornell University, 616 Thurston Ave, Ithaca, NY 14853, USA
- Raymond J Leveillee
- Department of Surgery, Florida Atlantic University, Division of Urology, Bethesda Hospital East, Baptist Health South Florida, 2800 S. Seacrest Drive, Boynton Beach, FL 33435, USA
- Mark D Hurwitz
- Department of Radiation Medicine, New York Medical College and Westchester Medical Center, Valhalla, NY 10595, USA
- Kin Lui
- Department of Urology, Mount Sinai Hospital, New York, NY 10029, USA
- Christopher Dixon
- Department of Urology, Good Samaritan Hospital, Westchester Medical Center Health Network, Suffern, NY 10901, USA
- David Y Zhang
- Department of Computational Pathology, NovinoAI, 1443 NE 4th Ave, Fort Lauderdale, FL 33304, USA
- Pathology and Laboratory Services, Department of Veterans Affairs New York Harbor Healthcare System, New York, NY 10010, USA
3
Ramacciotti LS, Hershenhouse JS, Mokhtar D, Paralkar D, Kaneko M, Eppler M, Gill K, Mogoulianitis V, Duddalwar V, Abreu AL, Gill I, Cacciamani GE. Comprehensive Assessment of MRI-based Artificial Intelligence Frameworks Performance in the Detection, Segmentation, and Classification of Prostate Lesions Using Open-Source Databases. Urol Clin North Am 2024; 51:131-161. [PMID: 37945098] [DOI: 10.1016/j.ucl.2023.08.003]
Abstract
Numerous MRI-based artificial intelligence (AI) frameworks have been designed for prostate cancer lesion detection, segmentation, and classification, motivated by the intrareader and interreader variability inherent to traditional MRI interpretation. Open-source data sets have been released with the intention of providing freely available MRIs for testing diverse AI frameworks on automated or semiautomated tasks. Here, an in-depth assessment of the performance of MRI-based AI frameworks for detecting, segmenting, and classifying prostate lesions using open-source databases was performed. Among 17 data sets, 12 were specific to prostate cancer detection/classification, with 52 studies meeting the inclusion criteria.
Affiliation(s)
- Lorenzo Storino Ramacciotti
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Jacob S Hershenhouse
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Daniel Mokhtar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Divyangi Paralkar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Masatomo Kaneko
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Urology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Michael Eppler
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Karanvir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Vasileios Mogoulianitis
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Vinay Duddalwar
- Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Andre L Abreu
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Inderbir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Giovanni E Cacciamani
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA
4
Belue MJ, Harmon SA, Masoudi S, Barrett T, Law YM, Purysko AS, Panebianco V, Yilmaz EC, Lin Y, Jadda PK, Raavi S, Wood BJ, Pinto PA, Choyke PL, Turkbey B. Quality of T2-weighted MRI re-acquisition versus deep learning GAN image reconstruction: A multi-reader study. Eur J Radiol 2024; 170:111259. [PMID: 38128256] [PMCID: PMC10842312] [DOI: 10.1016/j.ejrad.2023.111259]
Abstract
PURPOSE To evaluate CycleGAN's ability to enhance T2-weighted image (T2WI) quality. METHOD A CycleGAN algorithm was used to enhance T2WI quality. A total of 96 patients (192 scans) were identified from patients who underwent multiple axial T2WI acquisitions due to poor quality on the first attempt (RAD1) and improved quality on re-acquisition (RAD2). A deep learning (DL) classifier assigned quality scores (0-1) for quality quantification, and the CycleGAN produced enhanced versions (QI1 and QI2) from RAD1 and RAD2, respectively. A subset (n = 20 patients) was selected for a blinded, multi-reader study, in which four radiologists rated T2WI quality on a scale of 1-4. The multi-reader study presented readers with 60 image pairs (RAD1 vs RAD2, RAD1 vs QI1, and RAD2 vs QI2), allowing them to select sequence preferences and quantify the quality changes. RESULTS The DL classifier correctly discerned 71.9% of quality classes, identifying 90.6% (96/106) as poor quality and 48.8% (42/86) as diagnostic in the original sequences (RAD1, RAD2). CycleGAN images (QI1, QI2) demonstrated quantitative improvements, with consistently higher DL classifier scores than the original scans (p < 0.001). In the multi-reader analysis, CycleGAN demonstrated no qualitative improvements, with diminished overall quality and motion in QI2 compared with RAD2 in most patients, while noise levels remained similar (8/20). No readers preferred QI2 to RAD2 for diagnosis. CONCLUSION Despite quantitative enhancements with CycleGAN, there was no qualitative boost in T2WI diagnostic quality, noise, or motion. Expert radiologists did not favor CycleGAN images over standard scans, highlighting the divide between quantitative and qualitative metrics.
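The quality-enhancement model in this study is a CycleGAN, whose defining training signal is the cycle-consistency loss: mapping an image to the other domain and back should reproduce the original. A minimal numeric sketch of that loss, with toy linear maps standing in for the real generator networks (the functions `G` and `F` here are hypothetical illustrations, not the authors' implementation):

```python
import numpy as np

# Toy 1-D "generators": G maps domain A -> B, F maps B -> A.
# In the study's setting, A would be a low-quality T2WI and B an
# enhanced version; here they are simple linear maps for illustration.
def G(x):
    return 2.0 * x + 1.0

def F(y):
    return (y - 1.0) / 2.0  # exact inverse of G in this toy

def cycle_consistency_loss(x_a, x_b):
    # L_cyc = E[|F(G(a)) - a|] + E[|G(F(b)) - b|]  (L1 norm)
    loss_a = np.mean(np.abs(F(G(x_a)) - x_a))
    loss_b = np.mean(np.abs(G(F(x_b)) - x_b))
    return loss_a + loss_b

a = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 3.0, 5.0])
print(cycle_consistency_loss(a, b))  # 0.0, since F exactly inverts G here
```

In a real CycleGAN the two generators are neural networks trained jointly with adversarial losses, and this cycle term keeps their mappings approximately invertible.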
Affiliation(s)
- Mason J Belue
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Stephanie A Harmon
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Tristan Barrett
- Department of Radiology, University of Cambridge, Cambridge, England
- Yan Mee Law
- Department of Radiology, Singapore General Hospital, Singapore
- Andrei S Purysko
- Section of Abdominal Imaging, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Enis C Yilmaz
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Yue Lin
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Pavan Kumar Jadda
- Center for Information Technology, National Institutes of Health, Bethesda, MD, USA
- Sitarama Raavi
- Center for Information Technology, National Institutes of Health, Bethesda, MD, USA
- Bradford J Wood
- Center for Interventional Oncology, National Cancer Institute, NIH, Bethesda, MD, USA; Department of Radiology, Clinical Center, National Institutes of Health, Bethesda, MD, USA
- Peter A Pinto
- Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Peter L Choyke
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Baris Turkbey
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
6
Mohammed MA, Lakhan A, Abdulkareem KH, Garcia-Zapirain B. A hybrid cancer prediction based on multi-omics data and reinforcement learning state action reward state action (SARSA). Comput Biol Med 2023; 154:106617. [PMID: 36753981] [DOI: 10.1016/j.compbiomed.2023.106617]
Abstract
In recent years, the incidence of cancer among patients has been growing steadily, and many cancer cases have been reported in clinical hospitals. Many machine learning algorithms have been suggested in the literature to predict cancer diseases of the same class types based on training and test data; however, considerable room for further research remains. In this paper, the study looks into different types of cancer by analyzing, classifying, and processing a multi-omics dataset in a fog cloud network. Based on on-policy SARSA reinforcement learning and multi-omics workload learning, the study devised new hybrid cancer detection schemes. The system consists of different layers, such as clinical data collection via laboratory and tool processes (biopsy, colonoscopy, and mammography) at distributed omics-based clinics in the network. The study considers different cancer classes, such as carcinomas, sarcomas, leukemias, and lymphomas, with their subtypes, and processes them using the distributed multi-omics clinics. To solve the problem, the study presents omics cancer workload reinforcement learning state action reward state action "SARSA" (OCWLS) schemes, built on an on-policy learning scheme over parameters such as states, actions, timestamps, rewards, accuracy, and processing-time constraints. The goal is to process multiple cancer classes and match workload features while reducing processing time in geographically distributed clinical hospitals. Simulation results show that OCWLS outperforms other machine learning methods in processing time, feature extraction from multiple cancer classes, and matching in the system.
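The core of the on-policy SARSA method named in this abstract is the tabular update Q(s,a) ← Q(s,a) + α[r + γ·Q(s',a') − Q(s,a)], where a' is the action the policy actually takes next. A minimal sketch of one such update on a toy two-state problem (the state and action names are hypothetical, not from the paper's fog cloud setting):

```python
from collections import defaultdict

def sarsa_update(q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    # On-policy TD target uses the action actually taken next (a_next),
    # unlike Q-learning, which would take the max over next actions.
    td_target = r + gamma * q[(s_next, a_next)]
    q[(s, a)] += alpha * (td_target - q[(s, a)])
    return q[(s, a)]

# Toy episode step: from state s0 take action "go", earn reward 1.0,
# land in s1 where the policy again chooses "go".
q = defaultdict(float)  # all Q-values start at 0
new_value = sarsa_update(q, "s0", "go", 1.0, "s1", "go")
print(round(new_value, 3))  # 0.1 = 0.1 * (1.0 + 0.9*0 - 0)
```

Repeating such updates while following the same (e.g., epsilon-greedy) policy is what makes SARSA "on-policy": the values learned reflect the behavior policy, including its exploration.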
Affiliation(s)
- Mazin Abed Mohammed
- College of Computer Science and Information Technology, University of Anbar, Anbar 31001, Iraq; eVIDA Lab, University of Deusto, 48007 Bilbao, Spain
- Abdullah Lakhan
- Department of Computer Science, Dawood University of Engineering and Technology, Pakistan
- Karrar Hameed Abdulkareem
- College of Agriculture, Al-Muthanna University, Samawah 66001, Iraq; College of Engineering, University of Warith Al-Anbiyaa, Karbala 56001, Iraq