1
Ren M, Huang M, Zhang Y, Zhang Z, Ren M. Enhanced hierarchical attention mechanism for mixed MIL in automatic Gleason grading and scoring. Sci Rep 2025; 15:15980. PMID: 40341520; PMCID: PMC12062252; DOI: 10.1038/s41598-025-00048-9.
Abstract
Segmenting histological images and analyzing relevant regions are crucial for supporting pathologists in diagnosing various diseases. In prostate cancer diagnosis, Gleason grading and scoring rely on the recognition of different patterns in tissue samples. However, annotating large histological datasets is laborious and expensive, so annotations are often limited to slide-level or partial instance-level labels. To address this, we propose an enhanced hierarchical attention mechanism within a mixed multiple instance learning (MIL) model that effectively integrates slide-level and instance-level labels. Our hierarchical attention mechanism dynamically suppresses noisy instance-level labels while adaptively amplifying discriminative features, achieving a synergistic integration of global slide-level context and local superpixel patterns. This design significantly improves label utilization efficiency, leading to state-of-the-art performance in Gleason grading. Experimental results on the SICAPv2 and TMAs datasets demonstrate the superior performance of our model, achieving AUC scores of 0.9597 and 0.8889, respectively. Our work not only advances the state of the art in Gleason grading but also highlights the potential of hierarchical attention mechanisms in mixed MIL models for medical image analysis.
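The attention pooling that MIL models of this kind build on can be sketched in a few lines. The sketch below is a generic illustration of attention-based MIL pooling (Ilse-style), with randomly initialized parameters `V` and `w` standing in for learned weights; it is not the paper's hierarchical mechanism.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling: score each instance embedding with a
    small learned network, then form the bag embedding as the
    attention-weighted sum of instances."""
    scores = np.tanh(instances @ V.T) @ w   # one scalar per instance
    alpha = softmax(scores)                 # attention weights, sum to 1
    return alpha @ instances, alpha         # bag embedding, weights

rng = np.random.default_rng(0)
bag = rng.normal(size=(8, 16))   # 8 superpixel/patch embeddings of dim 16
V = rng.normal(size=(32, 16))    # stand-in for learned projection
w = rng.normal(size=32)          # stand-in for learned scoring vector
bag_emb, alpha = attention_mil_pool(bag, V, w)
print(bag_emb.shape, float(alpha.sum()))
```

A slide-level classifier would then operate on `bag_emb`, while `alpha` indicates which instances drove the prediction.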
Affiliation(s)
- Meili Ren
- Hainan Provincial Key Laboratory of Big Data and Smart Service, Hainan University, Haikou, 570228, China
- Center of Network and Information Education Technology, Shanxi University of Finance and Economics, Taiyuan, 030006, China
- Mengxing Huang
- Hainan Provincial Key Laboratory of Big Data and Smart Service, Hainan University, Haikou, 570228, China
- Yu Zhang
- Hainan Provincial Key Laboratory of Big Data and Smart Service, Hainan University, Haikou, 570228, China
- Zhijun Zhang
- Center of Network and Information Education Technology, Shanxi University of Finance and Economics, Taiyuan, 030006, China
- Meiyan Ren
- School of Medicine, Shanxi Datong University, Datong, 037009, China
2
Lee Y, Al Mukaddim R, Ngawang T, Salamat S, Mitchell CC, Maybock J, Wilbrand SM, Dempsey RJ, Varghese T. Varying pixel resolution significantly improves deep learning-based carotid plaque histology segmentation. Sci Rep 2025; 15:139. PMID: 39747244; PMCID: PMC11696133; DOI: 10.1038/s41598-024-83948-6.
Abstract
Carotid plaques-the buildup of cholesterol, calcium, cellular debris, and fibrous tissues in carotid arteries-can rupture, release microemboli into the cerebral vasculature, and cause strokes. The likelihood of a plaque rupturing is thought to be associated with its composition (i.e. lipid, calcium, hemorrhage, and inflammatory cell content) and the mechanical properties of the plaque. Automating the segmentation of histopathological images of these plaques into tissue-specific (lipid and calcified) regions can help us compare histologic findings to in vivo imaging and thereby enable us to optimize medical treatments or interventions for patients based on the composition of plaques. Lack of public datasets and the hypocellular nature of plaques have made applying deep learning to this task difficult. To address this, we sampled 1944 regions of interest from 323 whole slide images and drastically varied their pixel resolution from [Formula: see text] to [Formula: see text], as we anticipated that varying the pixel resolution of histology images can provide neural networks with more 'context' that pathologists also rely on. We were able to train Mask R-CNN using regions of interest with varied pixel resolution, with a [Formula: see text] increase in pixel accuracy versus training with patches. The model achieved F1 scores of [Formula: see text] for calcified regions, [Formula: see text] for lipid core with fibrinous material and cholesterol crystals, and [Formula: see text] for fibrous regions, as well as a pixel accuracy of [Formula: see text]. While the F1 score was not calculated for lumen, qualitative results illustrate the model's ability to predict lumen. Hemorrhage was excluded as a class since only one out of 34 carotid endarterectomy specimens had sufficient hemorrhage for annotation.
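The idea of training on regions of interest at deliberately varied pixel resolutions can be illustrated with a simple resampler. The base resolution of 0.5 µm/px and the crude nearest-neighbour scheme below are assumptions for the sketch, not the authors' preprocessing pipeline.

```python
import numpy as np

def resample_to_resolution(img, base_um_per_px, target_um_per_px):
    """Nearest-neighbour resampling of a histology ROI from its base pixel
    resolution (um/px) to a coarser target resolution, shrinking the image
    by the ratio of the two resolutions."""
    scale = base_um_per_px / target_um_per_px   # < 1 when coarsening
    h, w = img.shape[:2]
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    rows = (np.arange(new_h) / scale).astype(int)
    cols = (np.arange(new_w) / scale).astype(int)
    return img[rows][:, cols]

roi = np.zeros((512, 512), dtype=np.uint8)   # stand-in for an H&E ROI
# One ROI rendered at four resolutions gives the network varying 'context'.
variants = [resample_to_resolution(roi, 0.5, r) for r in (0.5, 1.0, 2.0, 4.0)]
print([v.shape for v in variants])
```

Coarser variants cover more tissue per pixel, which is the extra context the authors argue pathologists also exploit.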
Affiliation(s)
- Yurim Lee
- Medical Physics, University of Wisconsin School of Medicine and Public Health (UW-SMPH), Madison, USA
- Rashid Al Mukaddim
- Medical Physics, University of Wisconsin School of Medicine and Public Health (UW-SMPH), Madison, USA
- Tenzin Ngawang
- Medical Physics, University of Wisconsin School of Medicine and Public Health (UW-SMPH), Madison, USA
- Shahriar Salamat
- Pathology and Laboratory Medicine, UW-SMPH, Madison, USA
- Neurological Surgery, UW-SMPH, Madison, USA
- Carol C Mitchell
- Medicine/Division of Cardiovascular Medicine, UW-SMPH, Madison, USA
- Tomy Varghese
- Medical Physics, University of Wisconsin School of Medicine and Public Health (UW-SMPH), Madison, USA
3
Patkar S, Harmon S, Sesterhenn I, Lis R, Merino M, Young D, Brown GT, Greenfield KM, McGeeney JD, Elsamanoudi S, Tan SH, Schafer C, Jiang J, Petrovics G, Dobi A, Rentas FJ, Pinto PA, Chesnut GT, Choyke P, Turkbey B, Moncur JT. A selective CutMix approach improves generalizability of deep learning-based grading and risk assessment of prostate cancer. J Pathol Inform 2024; 15:100381. PMID: 38953042; PMCID: PMC11215954; DOI: 10.1016/j.jpi.2024.100381.
Abstract
The Gleason score is an important predictor of prognosis in prostate cancer. However, its subjective nature can result in over- or under-grading. Our objective was to train an artificial intelligence (AI)-based algorithm to grade prostate cancer in specimens from patients who underwent radical prostatectomy (RP) and to assess the correlation of AI-estimated proportions of different Gleason patterns with biochemical recurrence-free survival (RFS), metastasis-free survival (MFS), and overall survival (OS). Training and validation of algorithms for cancer detection and grading were completed with three large datasets containing a total of 580 whole-mount prostate slides from 191 RP patients at two centers and 6218 annotated needle biopsy slides from the publicly available Prostate Cancer Grading Assessment dataset. A cancer detection model was trained using MobileNetV3 on 0.5 mm × 0.5 mm cancer areas (tiles) captured at 10× magnification. For cancer grading, a Gleason pattern detector was trained on tiles using a ResNet50 convolutional neural network and a selective CutMix training strategy involving a mixture of real and artificial examples. This strategy resulted in improved model generalizability in the test set compared with three different control experiments when evaluated on both needle biopsy slides and whole-mount prostate slides from different centers. In an additional test cohort of RP patients who were clinically followed over 30 years, quantitative Gleason pattern AI estimates achieved concordance indices of 0.69, 0.72, and 0.64 for predicting RFS, MFS, and OS times, outperforming the control experiments and International Society of Urological Pathology (ISUP) grading by pathologists. Finally, unsupervised clustering of test RP patient specimens into low-, medium-, and high-risk groups based on AI-estimated proportions of each Gleason pattern resulted in significantly improved RFS and MFS stratification compared with ISUP grading.
In summary, deep learning-based quantitative Gleason scoring using a selective CutMix training strategy may improve prognostication after prostate cancer surgery.
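As a point of reference, vanilla CutMix (on which a selective variant would build) mixes a pair of training tiles and their one-hot labels through a random box. The selection criterion that makes the paper's strategy "selective" is not shown here; this is only the generic augmentation.

```python
import numpy as np

def cutmix(img_a, label_a, img_b, label_b, lam, rng):
    """Vanilla CutMix: paste a random box from img_b into img_a and mix the
    one-hot labels by the actual area ratio that was kept from img_a."""
    h, w = img_a.shape[:2]
    cut_h = int(h * np.sqrt(1 - lam))   # box sized so ~ (1 - lam) of the
    cut_w = int(w * np.sqrt(1 - lam))   # area comes from img_b
    cy, cx = rng.integers(h), rng.integers(w)
    y1, y2 = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, h)
    x1, x2 = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, w)
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    lam_adj = 1 - (y2 - y1) * (x2 - x1) / (h * w)  # area kept from img_a
    return mixed, lam_adj * label_a + (1 - lam_adj) * label_b

rng = np.random.default_rng(42)
tile_a = np.zeros((224, 224, 3))            # stand-in for a benign tile
tile_b = np.ones((224, 224, 3))             # stand-in for a Gleason-pattern tile
one_hot_a = np.array([1.0, 0.0])
one_hot_b = np.array([0.0, 1.0])
mixed, y = cutmix(tile_a, one_hot_a, tile_b, one_hot_b, lam=0.7, rng=rng)
print(mixed.shape, float(y.sum()))
```

The mixed label always sums to 1, so the artificial tile can be fed to a standard cross-entropy loss unchanged.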
Affiliation(s)
- Sushant Patkar
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Stephanie Harmon
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Rosina Lis
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Maria Merino
- Laboratory of Pathology, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Denise Young
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- G. Thomas Brown
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Sally Elsamanoudi
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Shyh-Han Tan
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Cara Schafer
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Jiji Jiang
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Gyorgy Petrovics
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Albert Dobi
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Peter A. Pinto
- Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Gregory T. Chesnut
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, USA
- Urology Service, Walter Reed National Military Medical Center, Bethesda, MD 20814, USA
- Peter Choyke
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Baris Turkbey
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Joel T. Moncur
- The Joint Pathology Center, Silver Spring, MD 20910, USA
4
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that develops computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms in clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article, we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey and access to the original model-cards repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N. Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
5
Saha S, Vignarajan J, Flesch A, Jelinko P, Gorog P, Szep E, Toth C, Gombas P, Schvarcz T, Mihaly O, Kapin M, Zub A, Kuthi L, Tiszlavicz L, Glasz T, Frost S. An Artificial Intelligent System for Prostate Cancer Diagnosis in Whole Slide Images. J Med Syst 2024; 48:101. PMID: 39466503; PMCID: PMC11519157; DOI: 10.1007/s10916-024-02118-3.
Abstract
Recent years have seen significant demand for computer-assisted diagnostic tools to assess prostate cancer using whole slide images (WSIs). In this study we develop and validate a machine learning system for cancer assessment, inclusive of detection of perineural invasion and measurement of cancer portion to meet clinical reporting needs. The system analyses the whole slide image in three consecutive stages: tissue detection, classification, and slide-level analysis. The whole slide image is divided into smaller regions (patches). The tissue detection stage relies upon traditional machine learning to identify WSI patches containing tissue, which are then further assessed at the classification stage, where deep learning algorithms are employed to detect and classify cancer tissue. At the slide-level analysis stage, entire-slide information is generated by aggregating all the patch-level information of the slide. A total of 2340 haematoxylin and eosin stained slides were used to train and validate the system. A medical team of 11 board-certified pathologists with prostatic pathology subspecialty competences, working independently in 4 different medical centres, performed the annotations. The team created pixel-level annotations based on an agreed set of 10 annotation terms, determined by medical relevance and prevalence. The system achieved an accuracy of 99.53% in tissue detection, with sensitivity and specificity of 99.78% and 99.12%, respectively. The system achieved an accuracy of 92.80% in classifying tissue terms, with sensitivity and specificity of 92.61% and 99.25%, respectively, at 5x magnification. For 10x magnification, these values were 91.04%, 90.49%, and 99.07%, respectively; for 20x magnification, they were 84.71%, 83.95%, and 90.13%.
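The final, slide-level analysis stage can be sketched as a simple tally over per-patch predictions. The class terms and the cancer-portion rule below are illustrative stand-ins, not the paper's actual 10-term annotation scheme.

```python
from collections import Counter

def aggregate_slide(patch_predictions):
    """Aggregate per-patch class predictions into a slide-level summary:
    the dominant tissue class plus the fraction of patches classified as
    cancer (a simple cancer-portion estimate)."""
    counts = Counter(patch_predictions)
    total = sum(counts.values())
    cancer_terms = {"gleason3", "gleason4", "gleason5"}  # illustrative terms
    cancer_portion = sum(n for t, n in counts.items() if t in cancer_terms) / total
    dominant = counts.most_common(1)[0][0]
    return {"dominant_class": dominant, "cancer_portion": cancer_portion}

# 100 tissue patches from one hypothetical slide
preds = ["benign"] * 60 + ["gleason3"] * 25 + ["gleason4"] * 15
print(aggregate_slide(preds))
```

A production system would weight patches by tissue area rather than counting them equally, but the counting form shows the aggregation idea.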
Affiliation(s)
- Sajib Saha
- Australian e-Health Research Centre, CSIRO, Kensington, Australia
- Adam Flesch
- AI4Path (Prosperitree Pty Ltd), Roseville, Australia
- Petra Gorog
- Markusovszky University Teaching Hospital, Szombathely, Hungary
- Eniko Szep
- Markusovszky University Teaching Hospital, Szombathely, Hungary
- Csaba Toth
- Markusovszky University Teaching Hospital, Szombathely, Hungary
- Levente Kuthi
- Department of Pathology, Albert Szent-Györgyi Medical School, University of Szeged, Szeged, Hungary
- Laszlo Tiszlavicz
- Department of Pathology, Albert Szent-Györgyi Medical School, University of Szeged, Szeged, Hungary
- Tibor Glasz
- AI4Path (Prosperitree Pty Ltd), Roseville, Australia
- Shaun Frost
- Australian e-Health Research Centre, CSIRO, Kensington, Australia
6
Dominguez-Morales JP, Duran-Lopez L, Marini N, Vicente-Diaz S, Linares-Barranco A, Atzori M, Müller H. A systematic comparison of deep learning methods for Gleason grading and scoring. Med Image Anal 2024; 95:103191. PMID: 38728903; DOI: 10.1016/j.media.2024.103191.
Abstract
Prostate cancer is the second most frequent cancer in men worldwide after lung cancer. Its diagnosis is based on the identification of the Gleason score, which evaluates the abnormality of cells in glands through the analysis of the different Gleason patterns within tissue samples. Recent advancements in computational pathology, a domain aiming to develop algorithms that automatically analyze digitized histopathology images, have led to a large variety and availability of datasets and algorithms for Gleason grading and scoring. However, there is no clear consensus on which methods are best suited for each problem in relation to the characteristics of the data and labels. This paper provides a systematic comparison, across nine datasets, of state-of-the-art training approaches for deep neural networks (including fully-supervised learning, weakly-supervised learning, semi-supervised learning, Additive-MIL, Attention-Based MIL, Dual-Stream MIL, TransMIL, and CLAM) applied to Gleason grading and scoring tasks. The nine datasets were collected from pathology institutes and openly accessible repositories. The results show that the best methods for the Gleason grading and Gleason scoring tasks are fully supervised learning and CLAM, respectively, guiding researchers to the best practice to adopt depending on the task to solve and the labels that are available.
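For context on the two tasks being compared: once the Gleason patterns in a sample are graded, the reportable score and ISUP grade group follow deterministically. This sketch uses the standard primary + secondary convention and the conventional ISUP mapping; it is background, not code from the paper.

```python
def gleason_score(primary, secondary):
    """Gleason score = primary (most prevalent) + secondary pattern."""
    return primary + secondary

def isup_grade_group(primary, secondary):
    """Map a primary/secondary Gleason pattern pair to the ISUP grade
    group (1-5) used for risk reporting."""
    total = primary + secondary
    if total <= 6:
        return 1          # Gleason <= 6
    if total == 7:
        return 2 if primary == 3 else 3   # 3+4 vs 4+3
    if total == 8:
        return 4
    return 5              # Gleason 9-10

pairs = [(3, 3), (3, 4), (4, 3), (4, 4), (5, 4)]
print([isup_grade_group(*p) for p in pairs])
```

The 3+4 vs 4+3 distinction is why grading (per-pattern) is a harder learning target than scoring (per-slide): the same total score can map to different grade groups.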
Affiliation(s)
- Juan P Dominguez-Morales
- Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla 41012, Spain; SCORE Lab, I3US, Universidad de Sevilla, Spain
- Lourdes Duran-Lopez
- Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla 41012, Spain; SCORE Lab, I3US, Universidad de Sevilla, Spain
- Niccolò Marini
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Technopôle 3, Sierre 3960, Switzerland; Centre Universitaire d'Informatique, University of Geneva, Carouge 1227, Switzerland
- Saturnino Vicente-Diaz
- Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla 41012, Spain; SCORE Lab, I3US, Universidad de Sevilla, Spain
- Alejandro Linares-Barranco
- Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla 41012, Spain; SCORE Lab, I3US, Universidad de Sevilla, Spain
- Manfredo Atzori
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Technopôle 3, Sierre 3960, Switzerland; Department of Neuroscience, University of Padua, Via Giustiniani 2, Padua, 35128, Italy
- Henning Müller
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Technopôle 3, Sierre 3960, Switzerland; Medical Faculty, University of Geneva, Geneva 1211, Switzerland
7
Frewing A, Gibson AB, Robertson R, Urie PM, Corte DD. Don't Fear the Artificial Intelligence: A Systematic Review of Machine Learning for Prostate Cancer Detection in Pathology. Arch Pathol Lab Med 2024; 148:603-612. PMID: 37594900; DOI: 10.5858/arpa.2022-0460-ra.
Abstract
CONTEXT: Automated prostate cancer detection using machine learning technology has led to speculation that pathologists will soon be replaced by algorithms. This review covers the development of machine learning algorithms and their reported effectiveness specific to prostate cancer detection and Gleason grading.
OBJECTIVE: To examine current algorithms regarding their accuracy and classification abilities. We provide a general explanation of the technology and how it is being used in clinical practice. The challenges to the application of machine learning algorithms in clinical practice are also discussed.
DATA SOURCES: The literature for this review was identified and collected using a systematic search. Criteria were established prior to the sorting process to effectively direct the selection of studies. A 4-point system was implemented to rank the papers according to their relevance. For papers accepted as relevant to our metrics, all cited and citing studies were also reviewed. Studies were then categorized based on whether they implemented binary or multi-class classification methods. Data were extracted from papers that contained accuracy, area under the curve (AUC), or κ values in the context of prostate cancer detection. The results were visually summarized to present accuracy trends across classification abilities.
CONCLUSIONS: It is more difficult to achieve high accuracy metrics for multi-class classification tasks than for binary tasks. The clinical implementation of an algorithm that can assign a Gleason grade to clinical whole slide images (WSIs) remains elusive. Machine learning technology is currently not able to replace pathologists but can serve as an important safeguard against misdiagnosis.
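Cohen's κ, one of the agreement metrics this review extracts alongside accuracy and AUC, corrects raw agreement for the agreement expected by chance. It can be computed directly from a confusion matrix, as in this small sketch with made-up grade labels:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) /
    (1 - chance agreement), computed from the confusion matrix."""
    labels = sorted(set(y_true) | set(y_pred))
    idx = {label: i for i, label in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)))
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1
    n = cm.sum()
    p_obs = np.trace(cm) / n                    # observed agreement
    p_exp = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical pathologist vs. algorithm grade assignments
truth = [1, 1, 2, 2, 3, 3, 3, 1]
pred  = [1, 1, 2, 3, 3, 3, 2, 1]
print(round(cohens_kappa(truth, pred), 4))
```

Unlike raw accuracy, κ stays near zero when a model merely predicts the majority class, which is why reviews of imbalanced grading datasets report it.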
Affiliation(s)
- Aaryn Frewing
- Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Alexander B Gibson
- Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Richard Robertson
- Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Paul M Urie
- Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Dennis Della Corte
- Department of Physics and Astronomy, Brigham Young University, Provo, Utah
8
Zhu L, Pan J, Mou W, Deng L, Zhu Y, Wang Y, Pareek G, Hyams E, Carneiro BA, Hadfield MJ, El-Deiry WS, Yang T, Tan T, Tong T, Ta N, Zhu Y, Gao Y, Lai Y, Cheng L, Chen R, Xue W. Harnessing artificial intelligence for prostate cancer management. Cell Rep Med 2024; 5:101506. PMID: 38593808; PMCID: PMC11031422; DOI: 10.1016/j.xcrm.2024.101506.
Abstract
Prostate cancer (PCa) is a common malignancy in males. The pathology review of PCa is crucial for clinical decision-making, but traditional pathology review is labor intensive and subjective to some extent. Digital pathology and whole-slide imaging enable the application of artificial intelligence (AI) in pathology. This review highlights the success of AI in detecting and grading PCa, predicting patient outcomes, and identifying molecular subtypes. We propose that AI-based methods could collaborate with pathologists to reduce workload and assist clinicians in formulating treatment recommendations. We also introduce the general process and challenges in developing AI pathology models for PCa. Importantly, we summarize publicly available datasets and open-source codes to facilitate the utilization of existing data and the comparison of the performance of different models to improve future studies.
Affiliation(s)
- Lingxuan Zhu
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China; Department of Etiology and Carcinogenesis, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Changping Laboratory, Beijing, China
| | - Jiahua Pan
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
| | - Weiming Mou
- Department of Urology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Longxin Deng
- Department of Urology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
| | - Yinjie Zhu
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
| | - Yanqing Wang
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
| | - Gyan Pareek
- Department of Surgery (Urology), Brown University Warren Alpert Medical School, Providence, RI, USA; Minimally Invasive Urology Institute, Providence, RI, USA
- Elias Hyams: Department of Surgery (Urology), Brown University Warren Alpert Medical School, Providence, RI, USA; Minimally Invasive Urology Institute, Providence, RI, USA
- Benedito A Carneiro: The Legorreta Cancer Center at Brown University, Lifespan Cancer Institute, Providence, RI, USA
- Matthew J Hadfield: The Legorreta Cancer Center at Brown University, Lifespan Cancer Institute, Providence, RI, USA
- Wafik S El-Deiry: The Legorreta Cancer Center at Brown University; Laboratory of Translational Oncology and Experimental Cancer Therapeutics, Department of Pathology & Laboratory Medicine, The Warren Alpert Medical School of Brown University; The Joint Program in Cancer Biology, Brown University and Lifespan Health System; Division of Hematology/Oncology, The Warren Alpert Medical School of Brown University, Providence, RI, USA
- Tao Yang: Department of Medical Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Tao Tan: Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, China
- Tong Tong: College of Physics and Information Engineering, Fuzhou University, Fujian 350108, China
- Na Ta: Department of Pathology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yan Zhu: Department of Pathology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yisha Gao: Department of Pathology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yancheng Lai: Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China; The First School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Liang Cheng: Department of Surgery (Urology) and Department of Pathology and Laboratory Medicine, Brown University Warren Alpert Medical School, Lifespan Health, and the Legorreta Cancer Center at Brown University, Providence, RI, USA
- Rui Chen: Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Wei Xue: Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
9
[Chinese expert consensus on the technical and clinical practice specifications of artificial intelligence assisted morphology examination of blood cells (2024)]. Zhonghua Xue Ye Xue Za Zhi 2024; 45:330-338. PMID: 38951059; PMCID: PMC11168004; DOI: 10.3760/cma.j.cn121090-20240217-00064.
Abstract
Blood cell morphological examination is a crucial method for the diagnosis of blood diseases, but traditional manual microscopy is characterized by low efficiency and susceptibility to subjective biases. The application of artificial intelligence (AI) technology has improved the efficiency and quality of blood cell examinations and facilitated the standardization of test results. Currently, a variety of AI devices are either in clinical use or under research, with diverse technical requirements and configurations. The Experimental Diagnostic Study Group of the Hematology Branch of the Chinese Medical Association has organized a panel of experts to formulate this consensus. The consensus covers term definitions, scope of application, technical requirements, clinical application, data management, and information security. It emphasizes the importance of specimen preparation, image acquisition, image segmentation algorithms, and cell feature extraction and classification, and sets forth basic requirements for the cell recognition spectrum. Moreover, it provides detailed explanations regarding the fine classification of pathological cells, requirements for cell training and testing, quality control standards, and assistance in issuing diagnostic reports by humans. Additionally, the consensus underscores the significance of data management and information security to ensure the safety of patient information and the accuracy of data.
10
Ferrero A, Ghelichkhan E, Manoochehri H, Ho MM, Albertson DJ, Brintz BJ, Tasdizen T, Whitaker RT, Knudsen BS. HistoEM: A Pathologist-Guided and Explainable Workflow Using Histogram Embedding for Gland Classification. Mod Pathol 2024; 37:100447. PMID: 38369187; DOI: 10.1016/j.modpat.2024.100447.
Abstract
Pathologists have, over several decades, developed criteria for diagnosing and grading prostate cancer. However, this knowledge has not, so far, been included in the design of convolutional neural networks (CNN) for prostate cancer detection and grading. Further, it is not known whether the features learned by machine-learning algorithms coincide with diagnostic features used by pathologists. We propose a framework that enforces algorithms to learn the cellular and subcellular differences between benign and cancerous prostate glands in digital slides from hematoxylin and eosin-stained tissue sections. After accurate gland segmentation and exclusion of the stroma, the central component of the pipeline, named HistoEM, utilizes a histogram embedding of features from the latent space of the CNN encoder. Each gland is represented by 128 feature-wise histograms that provide the input into a second network for benign vs cancer classification of the whole gland. Cancer glands are further processed by a U-Net structured network to separate low-grade from high-grade cancer. Our model demonstrates similar performance compared with other state-of-the-art prostate cancer grading models with gland-level resolution. To understand the features learned by HistoEM, we first rank features based on the distance between benign and cancer histograms and visualize the tissue origins of the 2 most important features. A heatmap of pixel activation by each feature is generated using Grad-CAM and overlaid on nuclear segmentation outlines. We conclude that HistoEM, similar to pathologists, uses nuclear features for the detection of prostate cancer. Altogether, this novel approach can be broadly deployed to visualize computer-learned features in histopathology images.
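The histogram-embedding step described above can be sketched as follows. This is a minimal illustration of representing a gland by per-feature histograms of encoder activations, not the authors' implementation; the function name, bin count, and the assumption that activations are scaled to [0, 1] are ours.

```python
import numpy as np

def histogram_embedding(latent, n_bins=10, value_range=(0.0, 1.0)):
    """Summarize a gland's latent features as per-feature histograms.

    latent: (n_pixels, n_features) activations from a CNN encoder,
    assumed already scaled to `value_range`. Returns an
    (n_features, n_bins) array of normalized histograms.
    """
    n_pixels, n_features = latent.shape
    hists = np.empty((n_features, n_bins))
    for f in range(n_features):
        counts, _ = np.histogram(latent[:, f], bins=n_bins, range=value_range)
        hists[f] = counts / n_pixels  # each histogram sums to 1
    return hists

# Toy gland: 500 pixels, 128 encoder features in [0, 1]
rng = np.random.default_rng(0)
gland = rng.random((500, 128))
emb = histogram_embedding(gland)
```

The resulting (128, n_bins) descriptor is the kind of fixed-size input a second classification network could consume, one histogram per latent feature.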
Affiliation(s)
- Alessandro Ferrero: Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Elham Ghelichkhan: Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Hamid Manoochehri: Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Man Minh Ho: Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Tolga Tasdizen: Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Ross T Whitaker: Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
11
Gifani P, Shalbaf A. Transfer Learning with Pretrained Convolutional Neural Network for Automated Gleason Grading of Prostate Cancer Tissue Microarrays. J Med Signals Sens 2024; 14:4. PMID: 38510670; PMCID: PMC10950311; DOI: 10.4103/jmss.jmss_42_22.
Abstract
Background The Gleason grading system remains the most effective prognostic tool for prostate cancer patients. It makes it possible to assess the aggressiveness of prostate cancer and is therefore an important factor for risk stratification and therapeutic decisions. However, Gleason grading requires highly trained pathologists, is time-consuming and tedious, and suffers from inter-pathologist variability. To remedy these limitations, this paper introduces an automatic methodology based on transfer learning with pretrained convolutional neural networks (CNNs) for Gleason grading of prostate cancer tissue microarrays (TMAs). Methods Fifteen pretrained CNNs (EfficientNet B0-B5, NASNetLarge, NASNetMobile, InceptionV3, ResNet-50, SE-ResNet-50, Xception, DenseNet121, ResNeXt50, and Inception-ResNet-v2) were fine-tuned on a dataset of prostate carcinoma TMA images. Six pathologists independently annotated benign and cancerous areas for each prostate TMA image, assigning benign or Gleason grade 3, 4, or 5 labels for 244 patients; a majority vote over the pixel-wise annotations produced a unified label. Results NASNetLarge was the best-performing architecture, classifying the prostate TMA images of the 244 patients with an accuracy of 0.93 and an area under the curve of 0.98. Conclusion The proposed system can stand in for a highly trained pathologist, categorizing prostate cancer grades with more objective and reproducible results.
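The label-unification step described in the methods (majority vote over pixel-wise annotations from several pathologists) can be sketched as follows; the function and the tie-breaking rule (lowest label wins on ties) are our assumptions, not taken from the paper.

```python
import numpy as np

def majority_vote(annotations):
    """Fuse pixel-wise labels from several raters by majority vote.

    annotations: (n_raters, H, W) integer label maps
    (e.g. 0 = benign, 3/4/5 = Gleason grade). Returns an (H, W) map
    holding the most frequent label per pixel (ties -> lowest label).
    """
    stack = np.asarray(annotations)
    labels = np.unique(stack)
    # votes[k, i, j] = number of raters assigning labels[k] to pixel (i, j)
    votes = np.stack([(stack == lab).sum(axis=0) for lab in labels])
    return labels[np.argmax(votes, axis=0)]

# Three raters labeling a toy 1x2 image: pixel 0 -> benign, pixel 1 -> grade 4
a = np.array([[[0, 3]], [[0, 4]], [[3, 4]]])
fused = majority_vote(a)
```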
Affiliation(s)
- Parisa Gifani: Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf: Cancer Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
12
Li T, Xu Y, Wu T, Charlton JR, Bennett KM, Al-Hindawi F. BlobCUT: A Contrastive Learning Method to Support Small Blob Detection in Medical Imaging. Bioengineering (Basel) 2023; 10:1372. PMID: 38135963; PMCID: PMC10740534; DOI: 10.3390/bioengineering10121372.
Abstract
Medical imaging-based biomarkers derived from small objects (e.g., cell nuclei) play a crucial role in medical applications. However, detecting and segmenting small objects (a.k.a. blobs) remains a challenging task. In this research, we propose a novel 3D small blob detector called BlobCUT. BlobCUT is an unpaired image-to-image (I2I) translation model that falls under the Contrastive Unpaired Translation paradigm. It employs a blob synthesis module to generate synthetic 3D blobs with corresponding masks. This is incorporated into the iterative model training as the ground truth. The I2I translation process is designed with two constraints: (1) a convexity consistency constraint that relies on Hessian analysis to preserve the geometric properties and (2) an intensity distribution consistency constraint based on Kullback-Leibler divergence to preserve the intensity distribution of blobs. BlobCUT learns the inherent noise distribution from the target noisy blob images and performs image translation from the noisy domain to the clean domain, effectively functioning as a denoising process to support blob identification. To validate the performance of BlobCUT, we evaluate it on a 3D simulated dataset of blobs and a 3D MRI dataset of mouse kidneys. We conduct a comparative analysis involving six state-of-the-art methods. Our findings reveal that BlobCUT exhibits superior performance and training efficiency, utilizing only 56.6% of the training time required by the state-of-the-art BlobDetGAN. This underscores the effectiveness of BlobCUT in accurately segmenting small blobs while achieving notable gains in training efficiency.
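The intensity distribution consistency constraint can be illustrated with a small sketch: a Kullback-Leibler divergence between the intensity histograms of a source volume and its translation. Bin counts, value ranges, and function names here are illustrative assumptions, not the paper's code.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two discrete intensity histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def intensity_consistency_loss(src_img, translated_img, n_bins=32):
    """Penalize translations that distort the blob intensity distribution."""
    p, _ = np.histogram(src_img, bins=n_bins, range=(0.0, 1.0))
    q, _ = np.histogram(translated_img, bins=n_bins, range=(0.0, 1.0))
    return kl_divergence(p, q)

rng = np.random.default_rng(1)
img = rng.random((16, 16, 16))  # toy 3D volume with intensities in [0, 1]
same = intensity_consistency_loss(img, img)                      # identical -> 0
shifted = intensity_consistency_loss(img, np.clip(img + 0.3, 0, 1))  # distorted -> > 0
```

During training such a term would be added to the translation objective so the denoised output keeps the blobs' original intensity statistics.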
Affiliation(s)
- Teng Li: School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
- Yanzhe Xu: School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
- Teresa Wu: School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
- Jennifer R. Charlton: Division of Nephrology, Department of Pediatrics, University of Virginia, Charlottesville, VA 22903, USA
- Kevin M. Bennett: Department of Radiology, Washington University, St. Louis, MO 63130, USA
- Firas Al-Hindawi: School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281, USA
13
Zheng T, Chen W, Li S, Quan H, Zou M, Zheng S, Zhao Y, Gao X, Cui X. Learning how to detect: A deep reinforcement learning method for whole-slide melanoma histopathology images. Comput Med Imaging Graph 2023; 108:102275. PMID: 37567046; DOI: 10.1016/j.compmedimag.2023.102275.
Abstract
Cutaneous melanoma represents one of the most life-threatening malignancies. Histopathological image analysis serves as a vital tool for early melanoma detection. Deep neural network (DNN) models are frequently employed to aid pathologists in enhancing the efficiency and accuracy of diagnoses. However, due to the paucity of well-annotated, high-resolution, whole-slide histopathology image (WSI) datasets, WSIs are typically fragmented into numerous patches during the model training and testing stages. This process disregards the inherent interconnectedness among patches, potentially impeding the models' performance. Additionally, the presence of excess, non-contributing patches extends processing times and introduces substantial computational burdens. To mitigate these issues, we draw inspiration from the clinical decision-making processes of dermatopathologists to propose an innovative, weakly supervised deep reinforcement learning framework, titled Fast medical decision-making in melanoma histopathology images (FastMDP-RL). This framework expedites model inference by reducing the number of irrelevant patches identified within WSIs. FastMDP-RL integrates two DNN-based agents: the search agent (SeAgent) and the decision agent (DeAgent). The SeAgent initiates actions, steered by the image features observed in the current viewing field at various magnifications. Simultaneously, the DeAgent provides labeling probabilities for each patch. We utilize multi-instance learning (MIL) to construct a teacher-guided model (MILTG), serving a dual purpose: rewarding the SeAgent and guiding the DeAgent. Our evaluations were conducted using two melanoma datasets: the publicly accessible TCIA-CM dataset and the proprietary MELSC dataset. Our experimental findings affirm FastMDP-RL's ability to expedite inference and accurately predict WSIs, even in the absence of pixel-level annotations. 
Moreover, our research investigates the WSI-based interactive environment, encompassing the design of agents, state and reward functions, and feature extractors suitable for melanoma tissue images. This investigation offers valuable insights and references for researchers engaged in related studies. The code is available at: https://github.com/titizheng/FastMDP-RL.
Affiliation(s)
- Tingting Zheng: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Weixing Chen: Shenzhen College of Advanced Technology, University of the Chinese Academy of Sciences, Beijing, China
- Shuqin Li: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Hao Quan: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Mingchen Zou: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Song Zheng: National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Yue Zhao: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xinghua Gao: National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xiaoyu Cui: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
14
Zhang Y, Chen S, Wang Y, Li J, Xu K, Chen J, Zhao J. Deep learning-based methods for classification of microsatellite instability in endometrial cancer from HE-stained pathological images. J Cancer Res Clin Oncol 2023; 149:8877-8888. PMID: 37150803; DOI: 10.1007/s00432-023-04838-4.
Abstract
BACKGROUND Microsatellite instability (MSI) is an essential tumor biomarker for cancer treatment and prognosis. Higher PD-L1 expression on the surface of tumor cells in MSI endometrial cancer suggests that MSI may be a promising biomarker for anti-PD-1/PD-L1 immunotherapy. However, conventional testing methods are labor-intensive and expensive for patients. METHODS Inspired by fast, low-cost deep-learning MSI classifiers from previous investigations, a new architecture for MSI classification based on an attention module is proposed to extract features from pathological images. In particular, slide-level microsatellite status is obtained with a bag-of-words method that aggregates the probabilities predicted by the proposed model for individual patches. The H&E-stained whole slide images (WSIs) from The Cancer Genome Atlas endometrial cohort were collected as the dataset. Performance was primarily evaluated by the area under the receiver-operating characteristic curve (AUROC), accuracy, sensitivity, and F1-score. RESULTS On the randomly divided test dataset, the proposed model achieved an accuracy of 0.80, a sensitivity of 0.857, an F1-score of 0.826, and an AUROC of 0.799. We then visualize the microsatellite status classification results to capture more specific morphological features, helping pathologists better understand how deep learning performs the classification. CONCLUSIONS This study predicts microsatellite status in endometrial cancer directly from H&E-stained WSIs using deep learning. The proposed architecture helps the model capture more valuable features for classification. In contrast to current laboratory testing methods, the model offers a more convenient tool for rapid automated screening and can potentially serve as a clinical method for detecting the microsatellite status of endometrial cancer.
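The slide-level aggregation idea (a bag of words over patch probabilities) might look like the following sketch; the vocabulary size, binning scheme, and decision threshold are invented for illustration and are not the authors' settings.

```python
import numpy as np

def slide_status(patch_probs, n_words=10, positive_fraction=0.5):
    """Aggregate patch-level MSI probabilities into a slide-level call.

    Quantizes each patch probability into one of `n_words` bins
    ("words"), builds the slide's word histogram, and calls the slide
    MSI-high when the histogram mass in the upper half of the
    vocabulary exceeds `positive_fraction`.
    """
    probs = np.asarray(patch_probs, dtype=float)
    words = np.minimum((probs * n_words).astype(int), n_words - 1)
    hist = np.bincount(words, minlength=n_words) / len(words)
    high_mass = hist[n_words // 2:].sum()  # fraction of high-probability words
    return ("MSI-high" if high_mass > positive_fraction else "MSS", hist)

# Five toy patch predictions; four are confidently positive
status, hist = slide_status([0.9, 0.8, 0.85, 0.2, 0.7])
```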
Affiliation(s)
- Ying Zhang: Xuzhou Medical University, Xuzhou, 221004, Jiangsu, China
- Shijie Chen: Xuzhou Medical University, Xuzhou, 221004, Jiangsu, China
- Yuling Wang: Xuzhou Medical University, Xuzhou, 221004, Jiangsu, China
- Jingjing Li: Xuzhou Medical University, Xuzhou, 221004, Jiangsu, China
- Kai Xu: Xuzhou Medical University, Xuzhou, 221004, Jiangsu, China
- Jyhcheng Chen: Xuzhou Medical University, Xuzhou, 221004, Jiangsu, China
- Jie Zhao: Xuzhou Medical University, Xuzhou, 221004, Jiangsu, China
15
Rabilloud N, Allaume P, Acosta O, De Crevoisier R, Bourgade R, Loussouarn D, Rioux-Leclercq N, Khene ZE, Mathieu R, Bensalah K, Pecot T, Kammerer-Jacquet SF. Deep Learning Methodologies Applied to Digital Pathology in Prostate Cancer: A Systematic Review. Diagnostics (Basel) 2023; 13:2676. PMID: 37627935; PMCID: PMC10453406; DOI: 10.3390/diagnostics13162676.
Abstract
Deep learning (DL), a branch of artificial intelligence (AI), is increasingly used in pathology thanks to slide scanners that digitize slides, allowing them to be visualized on monitors and processed with AI algorithms. Many articles have focused on DL applied to prostate cancer (PCa). This systematic review explains the DL applications and their performances for PCa in digital pathology. Article research was performed using PubMed and Embase to collect relevant articles. Risk of Bias (RoB) was assessed with an adaptation of the QUADAS-2 tool. Of the 77 included studies, eight focused on pre-processing tasks such as quality assessment or staining normalization. Most articles (n = 53) focused on diagnosis tasks such as cancer detection or Gleason grading. Fifteen articles focused on prediction tasks, such as recurrence prediction or genomic correlations. The best performances were reached for cancer detection, with an Area Under the Curve (AUC) up to 0.99 with algorithms already available for routine diagnosis. A few biases outlined by the RoB analysis are often found in these articles, such as the lack of external validation. This review was registered on PROSPERO under CRD42023418661.
Affiliation(s)
- Noémie Rabilloud: Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France
- Pierre Allaume: Department of Pathology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Oscar Acosta: Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France
- Renaud De Crevoisier: Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France; Department of Radiotherapy, Centre Eugène Marquis, 35033 Rennes, France
- Raphael Bourgade: Department of Pathology, Nantes University Hospital, 44000 Nantes, France
- Nathalie Rioux-Leclercq: Department of Pathology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Zine-eddine Khene: Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France; Department of Urology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Romain Mathieu: Department of Urology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Karim Bensalah: Department of Urology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Thierry Pecot: Facility for Artificial Intelligence and Image Analysis (FAIIA), Biosit UAR 3480 CNRS-US18 INSERM, Rennes University, 2 Avenue du Professeur Léon Bernard, 35042 Rennes, France
- Solene-Florence Kammerer-Jacquet: Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France; Department of Pathology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
16
Wang Y, Qian H, Shao X, Zhang H, Liu S, Pan J, Xue W. Multimodal convolutional neural networks based on the Raman spectra of serum and clinical features for the early diagnosis of prostate cancer. Spectrochim Acta A Mol Biomol Spectrosc 2023; 293:122426. PMID: 36787677; DOI: 10.1016/j.saa.2023.122426.
Abstract
We collected surface-enhanced Raman spectroscopy (SERS) data from the serum of 729 patients with prostate cancer or benign prostatic hyperplasia (BPH), corresponding to their pathological results, and built an artificial intelligence-assisted diagnosis model based on a convolutional neural network (CNN). We then evaluated its value in diagnosing prostate cancer and predicting the Gleason score (GS) using a simple cross-validation method. Our CNN model based on the spectral data for prostate cancer diagnosis revealed an accuracy of 85.14 ± 0.39%. After adjusting the model with patient age and prostate specific antigen (PSA), the accuracy of the multimodal CNN was up to 88.55 ± 0.66%. Our multimodal CNN for distinguishing low-GS/high-GS and GS = 3 + 3/GS = 3 + 4 revealed accuracies of 68 ± 0.58% and 77 ± 0.52%, respectively.
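A minimal sketch of the multimodal idea described above, combining a serum spectrum with age and PSA before classification: the z-scoring scheme, feature dimensions, and names are our assumptions, as the paper's exact fusion architecture is not detailed here.

```python
import numpy as np

def fuse_features(spectrum, age, psa, stats):
    """Late fusion of a serum Raman spectrum with clinical covariates.

    `spectrum` is a 1D intensity vector; `age` and `psa` are scalars.
    All inputs are z-scored with training-set statistics (`stats`) and
    concatenated into one vector that a multimodal classifier head
    could consume.
    """
    spec = (np.asarray(spectrum, float) - stats["spec_mean"]) / stats["spec_std"]
    clin = np.array([(age - stats["age_mean"]) / stats["age_std"],
                     (psa - stats["psa_mean"]) / stats["psa_std"]])
    return np.concatenate([spec, clin])

# Toy normalization statistics (illustrative, not from the paper)
stats = {"spec_mean": 0.5, "spec_std": 0.1,
         "age_mean": 65.0, "age_std": 8.0,
         "psa_mean": 10.0, "psa_std": 6.0}
x = fuse_features(np.full(900, 0.5), age=73.0, psa=16.0, stats=stats)
```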
Affiliation(s)
- Yan Wang: Department of Urology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200127, China
- Hongyang Qian: Department of Urology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200127, China
- Xiaoguang Shao: Department of Urology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200127, China
- Heng Zhang: Shanghai Institute for Advanced Communication and Data Science, Key Laboratory of Specialty Fiber Optics and Optical Access Networks, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Shupeng Liu: Shanghai Institute for Advanced Communication and Data Science, Key Laboratory of Specialty Fiber Optics and Optical Access Networks, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Jiahua Pan: Department of Urology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200127, China
- Wei Xue: Department of Urology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200127, China
17
Chen Y, Loveless IM, Nakai T, Newaz R, Abdollah FF, Rogers CG, Hassan O, Chitale D, Arora K, Williamson SR, Gupta NS, Rybicki BA, Sadasivan SM, Levin AM. Convolutional Neural Network Quantification of Gleason Pattern 4 and Association with Biochemical Recurrence in Intermediate Grade Prostate Tumors. Mod Pathol 2023; 36:100157. PMID: 36925071; DOI: 10.1016/j.modpat.2023.100157.
Abstract
Differential classification of prostate cancer (CaP) grade group (GG) 2 and 3 tumors remains challenging, likely due to the subjective quantification of percentage of Gleason pattern 4 (%GP4). Artificial intelligence assessment of %GP4 may improve its accuracy and reproducibility and provide information for prognosis prediction. To investigate this potential, a convolutional neural network (CNN) model was trained to objectively identify and quantify Gleason pattern (GP) 3 and 4 areas, estimate %GP4, and assess whether CNN-assessed %GP4 is associated with biochemical recurrence (BCR) risk in intermediate risk GG 2 and 3 tumors. The study was conducted in a radical prostatectomy cohort (1999-2012) of African American men from the Henry Ford Health System (Detroit, Michigan). A CNN model that could discriminate four tissue types (stroma, benign glands, GP3 glands, and GP4 glands) was developed using histopathologic images containing GG 1 (n=45) and 4 (n=20) tumor foci. The CNN model was applied to GG 2 (n=153) and 3 (n=62) for %GP4 estimation, and Cox proportional hazard modeling was used to assess the association of %GP4 and BCR, accounting for other clinicopathologic features including GG. The CNN model achieved an overall accuracy of 86% in distinguishing the four tissue types. Further, CNN-assessed %GP4 was significantly higher in GG 3 compared with GG 2 tumors (p = 7.2 × 10^-11). %GP4 was associated with an increased risk of BCR (adjusted HR=1.09 per 10% increase in %GP4, p=0.010) in GG 2 and 3 tumors. Within GG 2 tumors specifically, %GP4 was more strongly associated with BCR (adjusted HR=1.12, p=0.006). Our findings demonstrate the feasibility of CNN-assessed %GP4 estimation, which is associated with BCR risk. This objective approach could be added to the standard pathological assessment for patients with GG 2 and 3 tumors and act as a surrogate for specialist genitourinary pathologist evaluation when such consultation is not available.
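The two quantitative pieces of this result, %GP4 estimation from a tissue-type map and the hazard-ratio scaling (HR of 1.09 per 10% increase in %GP4), can be sketched as follows. The %GP4 definition as GP4 area over total Gleason-patterned (GP3 + GP4) area, and the class labels, are our assumptions for illustration.

```python
import numpy as np

# Tissue classes a segmentation model might output (labels are illustrative)
STROMA, BENIGN, GP3, GP4 = 0, 1, 2, 3

def percent_gp4(tissue_map):
    """%GP4 = GP4 area / (GP3 + GP4 area), from a per-pixel class map."""
    tissue_map = np.asarray(tissue_map)
    gp3 = np.count_nonzero(tissue_map == GP3)
    gp4 = np.count_nonzero(tissue_map == GP4)
    return 100.0 * gp4 / (gp3 + gp4)

def relative_hazard(pct_gp4, hr_per_10pct=1.09):
    """BCR hazard relative to a 0% GP4 baseline, given the HR per 10% increase."""
    return hr_per_10pct ** (pct_gp4 / 10.0)

seg = np.array([[GP3, GP3, GP4],
                [GP4, BENIGN, STROMA]])  # toy 2x3 tissue map
pct = percent_gp4(seg)         # 2 of 4 Gleason-patterned pixels are GP4 -> 50.0
hazard = relative_hazard(pct)  # 1.09 ** 5
```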
Affiliation(s)
- Yalei Chen: Department of Public Health Sciences, Henry Ford Health System, Detroit, MI; Center for Bioinformatics, Henry Ford Health System, Detroit, MI
- Ian M Loveless: Department of Public Health Sciences, Henry Ford Health System, Detroit, MI; Center for Bioinformatics, Henry Ford Health System, Detroit, MI
- Tiffany Nakai: Department of Public Health Sciences, Henry Ford Health System, Detroit, MI
- Rehnuma Newaz: Department of Public Health Sciences, Henry Ford Health System, Detroit, MI
- Firas F Abdollah: Department of Urology, Vattikuti Urology Institute, Henry Ford Health System, Detroit, MI
- Craig G Rogers: Department of Urology, Vattikuti Urology Institute, Henry Ford Health System, Detroit, MI
- Oudai Hassan: Department of Pathology, Henry Ford Health System, Detroit, MI
- Kanika Arora: Department of Pathology, Henry Ford Health System, Detroit, MI
- Nilesh S Gupta: Department of Pathology, Henry Ford Health System, Detroit, MI
- Benjamin A Rybicki: Department of Public Health Sciences, Henry Ford Health System, Detroit, MI
- Sudha M Sadasivan: Department of Public Health Sciences, Henry Ford Health System, Detroit, MI
- Albert M Levin: Department of Public Health Sciences, Henry Ford Health System, Detroit, MI; Center for Bioinformatics, Henry Ford Health System, Detroit, MI
18
Kumar GV, Bellary MI, Reddy TB. Prostate cancer classification with MRI using Taylor-Bird Squirrel Optimization based Deep Recurrent Neural Network. Imaging Sci J 2023. DOI: 10.1080/13682199.2023.2165242.
Affiliation(s)
- Goddumarri Vijay Kumar: Dept. of Computer Science and Technology, Sri Krishnadevaraya University, Ananthapuram, A.P., India
- Mohammed Ismail Bellary: Department of Artificial Intelligence & Machine Learning, P.A. College of Engineering, Mangalore, Affiliated to Visvesvaraya Technological University, Belagavi, K.A., India
- Thota Bhaskara Reddy: Dept. of Computer Science and Technology, Sri Krishnadevaraya University, Ananthapuram, A.P., India
19
Expectation-maximization algorithm leads to domain adaptation for a perineural invasion and nerve extraction task in whole slide digital pathology images. Med Biol Eng Comput 2023; 61:457-473. PMID: 36496513; DOI: 10.1007/s11517-022-02711-z.
Abstract
In addition to lymphatic and vascular channels, tumor cells can also spread via nerves, i.e., perineural invasion (PNI). PNI serves as an independent prognostic indicator in many malignancies. As a result, identifying and determining the extent of PNI is an important yet extremely tedious task in surgical pathology. In this work, we present a computational approach to extract nerves and PNI from whole slide histopathology images. We make manual annotations on selected prostate cancer slides once but then apply the trained model for nerve segmentation to both prostate cancer slides and head and neck cancer slides. For the purpose of multi-domain learning/prediction and investigation on the generalization capability of deep neural network, an expectation-maximization (EM)-based domain adaptation approach is proposed to improve the segmentation performance, in particular for the head and neck cancer slides. Experiments are conducted to demonstrate the segmentation performances. The average Dice coefficient for prostate cancer slides is 0.82 and 0.79 for head and neck cancer slides. Comparisons are then made for segmentations with and without the proposed EM-based domain adaptation on prostate cancer and head and neck cancer whole slide histopathology images from The Cancer Genome Atlas (TCGA) database and significant improvements are observed.
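The EM-based adaptation loop can be illustrated with a toy stand-in: a nearest-centroid classifier alternating pseudo-labeling of the target domain (E-step) with refitting on source plus pseudo-labeled target data (M-step). The deep segmentation network of the paper is replaced by this simple model purely for illustration; names and the synthetic data are ours.

```python
import numpy as np

def em_domain_adaptation(src_X, src_y, tgt_X, n_iters=5):
    """EM-flavored adaptation of a nearest-centroid classifier.

    E-step: pseudo-label target samples with the current class centroids.
    M-step: re-estimate centroids from source labels plus target
    pseudo-labels, pulling the model toward the target domain.
    """
    classes = np.unique(src_y)
    centroids = np.stack([src_X[src_y == c].mean(axis=0) for c in classes])
    for _ in range(n_iters):
        # E-step: assign each target sample to its nearest centroid
        d = np.linalg.norm(tgt_X[:, None, :] - centroids[None], axis=2)
        pseudo = classes[np.argmin(d, axis=1)]
        # M-step: refit centroids on source + pseudo-labeled target data
        all_X = np.vstack([src_X, tgt_X])
        all_y = np.concatenate([src_y, pseudo])
        centroids = np.stack([all_X[all_y == c].mean(axis=0) for c in classes])
    return centroids, pseudo

# Source domain: two well-separated classes; target domain: same classes, shifted
rng = np.random.default_rng(2)
src_X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
src_y = np.array([0] * 50 + [1] * 50)
tgt_X = np.vstack([rng.normal(0.5, 0.3, (50, 2)), rng.normal(3.5, 0.3, (50, 2))])
centroids, pseudo = em_domain_adaptation(src_X, src_y, tgt_X)
```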
20
Lu X, Zhang S, Liu Z, Liu S, Huang J, Kong G, Li M, Liang Y, Cui Y, Yang C, Zhao S. Ultrasonographic pathological grading of prostate cancer using automatic region-based Gleason grading network. Comput Med Imaging Graph 2022; 102:102125. PMID: 36257091; DOI: 10.1016/j.compmedimag.2022.102125.
Abstract
The Gleason scoring system is a reliable method for quantifying the aggressiveness of prostate cancer, providing an important reference for clinical assessment of therapeutic strategies. However, to the best of our knowledge, no study has addressed the pathological grading of prostate cancer from single ultrasound images. In this work, a novel Automatic Region-based Gleason Grading (ARGG) network for prostate cancer based on deep learning is proposed. ARGG consists of two stages: (1) a region labeling object detection (RLOD) network is designed to label the prostate cancer lesion region; (2) a Gleason grading network (GNet) is proposed for pathological grading of prostate ultrasound images. In RLOD, a new feature fusion structure, the Skip-connected Feature Pyramid Network (CFPN), is proposed as an auxiliary branch for extracting features and enhancing the fusion of high-level and low-level features, which helps to detect small lesions and extract image detail. In GNet, we designed a synchronized pulse enhancement module (SPEM) based on pulse-coupled neural networks to enhance the RLOD detection results, which are then used as training samples; the enhanced results and the originals are fed into a channel attention classification network (CACN), which introduces an attention mechanism to benefit the prediction of cancer grade. Experiments on a dataset of prostate ultrasound images collected from hospitals show that the proposed Gleason grading model outperforms manual diagnosis by physicians, with a precision of 0.830. In addition, we evaluated the lesion detection performance of RLOD, which achieves a mean Dice metric of 0.815.
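The mean Dice metric of 0.815 reported above is the standard overlap statistic between predicted and ground-truth lesion masks; a minimal sketch of how it is computed (not the authors' code):

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks (flattened 0/1 sequences).

    Dice = 2|A intersect B| / (|A| + |B|); detection papers typically
    report this statistic averaged over test images.
    """
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```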
Affiliation(s)
- Xu Lu: Guangdong Polytechnic Normal University, Guangzhou 510665, China; Pazhou Lab, Guangzhou 510330, China
- Shulian Zhang: Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Zhiyong Liu: Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Shaopeng Liu: Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Jun Huang: Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Guoquan Kong: Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Mingzhu Li: Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Yinying Liang: Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Yunneng Cui: Department of Radiology, Foshan Maternity and Children's Healthcare Hospital Affiliated to Southern Medical University, Foshan 528000, China
- Chuan Yang: Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Shen Zhao: Department of Artificial Intelligence, Sun Yat-sen University, Guangzhou 510006, China
21.
Ramamurthy K, Varikuti AR, Gupta B, Aswani N. A deep learning network for Gleason grading of prostate biopsies using EfficientNet. Biomed Tech (Berl) 2022; 68:187-198. [PMID: 36332194] [DOI: 10.1515/bmt-2022-0201]
Abstract
Objectives
The most crucial part of cancer diagnosis is severity grading. The Gleason score is a widely used grading system for prostate cancer. Manual examination and grading of the microscopic images is tiresome and time-consuming. Hence, to automate the Gleason grading process, a novel deep learning network is proposed in this work.
Methods
In this work, a deep learning network for Gleason grading of prostate cancer is proposed based on EfficientNet architecture. It applies a compound scaling method to balance the dimensions of the underlying network. Also, an additional attention branch is added to EfficientNet-B7 for precise feature weighting.
Results
To the best of our knowledge, this is the first work that integrates an additional attention branch with EfficientNet architecture for Gleason grading. The proposed models were trained using H&E-stained samples from prostate cancer Tissue Microarrays (TMAs) in the Harvard Dataverse dataset.
Conclusions
The proposed network was able to outperform existing methods, achieving a Kappa score of 0.5775.
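The Kappa score above measures agreement with the reference grades corrected for chance. A minimal sketch of unweighted Cohen's kappa (the abstract does not state whether the reported value is plain or quadratically weighted):

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Unweighted Cohen's kappa: agreement corrected for chance.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected from the marginal
    label frequencies of the two raters.
    """
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    ct, cp = Counter(y_true), Counter(y_pred)
    p_e = sum(ct[c] * cp[c] for c in ct) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```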
Affiliation(s)
- Karthik Ramamurthy: Centre for Cyber Physical Systems, School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
- Abinash Reddy Varikuti: School of Computer Science Engineering, Vellore Institute of Technology, Chennai, India
- Bhavya Gupta: School of Computer Science Engineering, Vellore Institute of Technology, Chennai, India
- Nehal Aswani: School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
22.
Orsulic S, John J, Walts AE, Gertych A. Computational pathology in ovarian cancer. Front Oncol 2022; 12:924945. [PMID: 35965569] [PMCID: PMC9372445] [DOI: 10.3389/fonc.2022.924945]
Abstract
Histopathologic evaluations of tissue sections are key to diagnosing and managing ovarian cancer. Pathologists empirically assess and integrate visual information, such as cellular density, nuclear atypia, mitotic figures, architectural growth patterns, and higher-order patterns, to determine the tumor type and grade, which guides oncologists in selecting appropriate treatment options. Latent data embedded in pathology slides can be extracted using computational imaging. Computers can analyze digital slide images to simultaneously quantify thousands of features, some of which are visible with a manual microscope, such as nuclear size and shape, while others, such as entropy, eccentricity, and fractal dimensions, are quantitatively beyond the grasp of the human mind. Applications of artificial intelligence and machine learning tools to interpret digital image data provide new opportunities to explore and quantify the spatial organization of tissues, cells, and subcellular structures. In comparison to genomic, epigenomic, transcriptomic, and proteomic patterns, morphologic and spatial patterns are expected to be more informative as quantitative biomarkers of complex and dynamic tumor biology. As computational pathology is not limited to visual data, nuanced subvisual alterations that occur in the seemingly "normal" pre-cancer microenvironment could facilitate research in early cancer detection and prevention. Currently, efforts to maximize the utility of computational pathology are focused on integrating image data with other -omics platforms that lack spatial information, thereby providing a new way to relate the molecular, spatial, and microenvironmental characteristics of cancer. Despite a dire need for improvements in ovarian cancer prevention, early detection, and treatment, the ovarian cancer field has lagged behind other cancers in the application of computational pathology. The intent of this review is to encourage ovarian cancer research teams to apply existing and/or develop additional tools in computational pathology for ovarian cancer and actively contribute to advancing this important field.
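One of the machine-quantifiable features the review mentions, entropy, is commonly computed from the gray-level histogram of a region. A minimal, illustrative sketch:

```python
import math

def histogram_entropy(pixels, levels=256):
    """Shannon entropy (in bits) of an image's gray-level histogram.

    A flat histogram (many equally likely gray levels) gives high
    entropy; a constant region gives zero. This is one of the simple
    texture features computational pathology pipelines extract.
    """
    counts = [0] * levels
    for v in pixels:
        counts[v] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)
```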
Affiliation(s)
- Sandra Orsulic (corresponding author): Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles, CA, United States; Department of Obstetrics and Gynecology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, United States; Jonsson Comprehensive Cancer Center, University of California Los Angeles, Los Angeles, CA, United States
- Joshi John: Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles, CA, United States; Department of Psychiatry, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, United States
- Ann E. Walts: Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Arkadiusz Gertych: Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, United States; Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, CA, United States; Faculty of Biomedical Engineering, Silesian University of Technology, Zabrze, Poland
23.
Wu Y, Ma W. Rural Workplace Sustainable Development of Smart Rural Governance Workplace Platform for Efficient Enterprise Performances. J Environ Public Health 2022; 2022:1588638. [PMID: 35692664] [PMCID: PMC9187484] [DOI: 10.1155/2022/1588638]
Abstract
Over its long development, China's agriculture has transformed from organic to inorganic agriculture. New technologies have made the modernization of agriculture possible; however, many older people engaged in agriculture may not fully understand it. Given the limitations of traditional image-based target detection methods, a deep learning-based pest target detection and recognition method is proposed from a blockchain perspective, to analyze agricultural data supervision and governance and to explore the effectiveness of deep learning in crop pest detection and recognition. Comparative analysis demonstrates that the average precision (AP) of GA-CPN-LAR (global activation-characteristic pyramid network-local activation region) increases by 4.2% compared with other methods on the MPD dataset. Under both the Inception and ResNet-50 backbone networks, the AP of GA-CPN-LAR is significantly better than that of other methods, and GA-CPN-LAR has higher accuracy and recall under Inception than under ResNet-50. Precision-recall curve measurements show that the proposed method can significantly reduce the false detection and missed detection rates. Besides, the accuracy and recall of GA-CPN-LAR for two representative pests under the initial feature extractor are higher than the MPD dataset baseline. Results on the MPD and AgriPest datasets likewise show that the pest target detection method based on convolutional neural networks (CNNs) performs well and significantly reduces false and missed detections. Moreover, pest regulation based on blockchain and deep learning comprehensively considers global and local feature extraction and pattern recognition, which positively impacts agricultural data processing and promotes the sustainable development of rural areas.
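The AP improvements quoted above refer to the standard average-precision metric for detectors. A minimal sketch in its simplest single-class form, without IoU box matching (which real detection benchmarks also require):

```python
def average_precision(scores, labels):
    """Average precision from detection confidences and 0/1 labels.

    Sort detections by confidence, then average the precision values
    observed at each true positive. This is the single-class,
    no-box-matching form of the AP metric used in detection papers.
    """
    ranked = sorted(zip(scores, labels), key=lambda sl: -sl[0])
    n_pos = sum(labels)
    tp = 0
    ap = 0.0
    for i, (_, lab) in enumerate(ranked, start=1):
        if lab:
            tp += 1
            ap += tp / i  # precision at this recall point
    return ap / n_pos if n_pos else 0.0
```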
Affiliation(s)
- Yingli Wu: Agricultural and Rural Development Institute, Heilongjiang Provincial Academy of Social Sciences, Harbin, China
- Wanying Ma: College of Business, Changchun Guanghua University, Changchun 130033, Jilin, China
24.
Texture Analysis of Enhanced MRI and Pathological Slides Predicts EGFR Mutation Status in Breast Cancer. Biomed Res Int 2022; 2022:1376659. [PMID: 35663041] [PMCID: PMC9162871] [DOI: 10.1155/2022/1376659]
Abstract
Objective: Image texture information was extracted from enhanced magnetic resonance imaging (MRI) and pathological hematoxylin and eosin- (HE-) stained images of female breast cancer patients. We first established models from each data source individually and then combined the two kinds of data into a joint model. In this way, we verified whether sufficient information could be obtained from enhanced MRI and pathological slides to assist in determining epidermal growth factor receptor (EGFR) mutation status. Methods: We obtained enhanced MRI data from patients with breast cancer before treatment and selected diffusion-weighted imaging (DWI), T1 fast-spin echo (T1 FSE), and T2 fast-spin echo (T2 FSE) as the data sources for extracting texture information. Imaging physicians manually outlined the 3D regions of interest (ROIs), and texture features were extracted from the gray level co-occurrence matrix (GLCM) of the images. For the HE-stained images, we adopted a normalization algorithm to simulate images dyed with only hematoxylin or only eosin and extracted textures from these. The extracted texture features were used to predict EGFR expression. After evaluating the predictive power of each model, the models from the two data sources were combined for remodeling. Results: For enhanced MRI data, a model built on the texture information of T1 FSE predicted EGFR mutation status well. For pathological images, eosin-stained images achieved a better prediction effect. We selected these two classifiers as the weak classifiers of the final model and obtained good results (training group: AUC, 0.983; 95% CI, 0.95-1.00; accuracy, 0.962; specificity, 0.936; sensitivity, 0.979; test group: AUC, 0.983; 95% CI, 0.94-1.00; accuracy, 0.943; specificity, 1.00; sensitivity, 0.905). Conclusion: The EGFR mutation status of patients with breast cancer can be well predicted from enhanced MRI and pathological data. This helps hospitals that do not test EGFR mutation status in patients with breast cancer, giving clinicians more information to make accurate diagnoses and select suitable treatments.
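The GLCM features used above summarize how often pairs of gray levels co-occur at a fixed offset. A minimal sketch for a horizontal-neighbor GLCM and two classic Haralick-style features (a simplified stand-in for the paper's pipeline, which uses more offsets and features):

```python
def glcm_features(img, levels=4):
    """Gray-level co-occurrence matrix (horizontal neighbor) and two
    classic texture features derived from it, contrast and homogeneity.

    `img` is a 2D list of integer gray levels in [0, levels).
    """
    glcm = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):  # count horizontal neighbor pairs
            glcm[a][b] += 1
    total = sum(map(sum, glcm)) or 1
    p = [[c / total for c in r] for r in glcm]  # normalize to probabilities
    contrast = sum((i - j) ** 2 * p[i][j]
                   for i in range(levels) for j in range(levels))
    homogeneity = sum(p[i][j] / (1 + abs(i - j))
                      for i in range(levels) for j in range(levels))
    return contrast, homogeneity
```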
25.
Ma Z, Zhang M, Liu J, Yang A, Li H, Wang J, Hua D, Li M. An Assisted Diagnosis Model for Cancer Patients Based on Federated Learning. Front Oncol 2022; 12:860532. [PMID: 35311106] [PMCID: PMC8928102] [DOI: 10.3389/fonc.2022.860532]
Abstract
Since the 20th century, cancer has been a growing threat to human health. Cancer is a malignant tumor with high clinical morbidity and mortality, and there is a high risk of recurrence after surgery. At the same time, determining whether a cancer has recurred in situ is crucial for further treatment. According to statistics, about 90% of cancer-related deaths are due to metastasis of primary tumor cells. Therefore, studying the location of cancer recurrence and its influencing factors is of great significance for the clinical diagnosis and treatment of cancer. In this paper, we propose an assisted diagnosis model for cancer patients based on federated learning. On the data side, the influencing factors of cancer recurrence and the special requirements that federated learning places on data samples were comprehensively considered; six first-level impact indicators were determined, and historical case data of cancer patients were collected. Based on a federated learning framework combined with a convolutional neural network (CNN), the patients' physical examination indicators were taken as input and the recurrence time and location as output to construct the auxiliary diagnostic model, with linear regression, support vector regression, Bayesian regression, gradient boosting trees, and multilayer perceptron neural networks as comparison algorithms. In joint modeling and simulation on five types of cancer data, the improved CNN-based federated prediction model reached an accuracy of more than 90%, better than the individually trained tree models, linear models, and neural networks. The results show that the proposed assisted diagnosis model can support doctors in diagnosing cancer patients and in providing nutritional programs for them, has application value in prolonging patients' lives, and offers guidance in the field of cancer rehabilitation.
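The aggregation step at the heart of such a federated framework is usually federated averaging: each site trains locally and the server combines the models, weighted by sample count. A minimal sketch (models flattened to weight vectors; not the paper's implementation):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: sample-size-weighted mean of client models.

    Each client trains on its own private data and shares only its
    weight vector (here a flat list of floats) with the server, which
    returns the aggregated global model.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]
```

In practice this aggregation is repeated over many communication rounds, with the global model redistributed to clients between rounds.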
Affiliation(s)
- Zezhong Ma: Hebei Engineering Research Center for the Intelligentization of Iron Ore Optimization and Ironmaking Raw Materials Preparation Processes, North China University of Science and Technology, Tangshan, China; Hebei Key Laboratory of Data Science and Application, North China University of Science and Technology, Tangshan, China; The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China; College of Science, North China University of Science and Technology, Tangshan, China
- Meng Zhang: The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China; Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, China
- Jiajia Liu: College of Science, North China University of Science and Technology, Tangshan, China
- Aimin Yang: Hebei Engineering Research Center for the Intelligentization of Iron Ore Optimization and Ironmaking Raw Materials Preparation Processes, North China University of Science and Technology, Tangshan, China; Hebei Key Laboratory of Data Science and Application, North China University of Science and Technology, Tangshan, China; The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China; College of Science, North China University of Science and Technology, Tangshan, China; Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, China
- Hao Li: The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China; Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, China
- Jian Wang: The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China; Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, China
- Dianbo Hua: Beijing Sitairui Cancer Data Analysis Joint Laboratory, Beijing, China
- Mingduo Li: State Key Laboratory of Process Automation in Mining and Metallurgy, Beijing, China; Beijing Key Laboratory of Process Automation in Mining and Metallurgy, Beijing, China
26.
Deep Learning in the Classification of Stage of Liver Fibrosis in Chronic Hepatitis B with Magnetic Resonance ADC Images. Contrast Media Mol Imaging 2022; 2021:2015780. [PMID: 35024010] [PMCID: PMC8716233] [DOI: 10.1155/2021/2015780]
Abstract
Liver fibrosis in chronic hepatitis B is the pathological repair response of the liver to chronic injury; it is a key step in the progression of various chronic liver diseases to cirrhosis and an important factor in their prognosis. Further progression of liver fibrosis in chronic hepatitis B can lead to disordered hepatic lobule structure, nodular regeneration of hepatocytes, and formation of a pseudolobular structure, namely cirrhosis, with clinical manifestations of liver dysfunction and portal hypertension. So far, liver fibrosis in chronic hepatitis B has been staged manually by doctors. However, this is subjective and tedious, and doctors are prone to interference from external factors such as fatigue and lack of sleep. This paper proposes a 5-layer deep convolutional neural network for the automatic classification of liver fibrosis in chronic hepatitis B, with three convolutional layers and two fully connected layers, each convolutional layer followed by a pooling layer. On 123 collected ADC images, the accuracy, sensitivity, specificity, precision, F1, MCC, and FMI were 88.13% ± 1.47%, 81.45% ± 3.69%, 91.12% ± 1.72%, 80.49% ± 2.94%, 80.90% ± 2.39%, 72.36% ± 3.39%, and 80.94% ± 2.37%, respectively.
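All seven statistics reported above derive from the binary confusion matrix. A minimal sketch (FMI here is the Fowlkes-Mallows index, the geometric mean of precision and sensitivity, which is the usual reading of the abbreviation):

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, precision, F1, MCC, and FMI
    computed from the four cells of a binary confusion matrix."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)          # recall / true positive rate
    spec = tn / (tn + fp)          # true negative rate
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    fmi = math.sqrt(prec * sens)   # Fowlkes-Mallows index
    return acc, sens, spec, prec, f1, mcc, fmi
```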
27.
Han J, Xie J, Gu S, Yan Z, Li J, Zhang Z, Xu J. [Automated grading of glioma based on density and atypia analysis in whole slide images]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2021; 38:1062-1071. [PMID: 34970888] [PMCID: PMC9927119] [DOI: 10.7507/1001-5515.202103050]
Abstract
Glioma is the most common malignant brain tumor, and classification into low grade glioma (LGG) and high grade glioma (HGG) is an important reference for decisions on patient treatment and prognosis. This work is largely done manually by pathologists based on examination of whole slide images (WSIs), which is arduous and heavily dependent on the doctor's experience. In the World Health Organization (WHO) criteria, glioma grade is closely related to hypercellularity, nuclear atypia, and necrosis. Inspired by this, this paper designed and extracted cell density and atypia features to classify LGG and HGG. First, regions of interest (ROIs) were located by analyzing cell density, and global density features were extracted. Second, local density and atypia features were extracted in the ROIs. Third, a balanced support vector machine (SVM) classifier was trained and tested using 10 selected features. The area under the curve (AUC) and accuracy (ACC) of 5-fold cross validation were 0.92 ± 0.01 and 0.82 ± 0.01, respectively. The results demonstrate that the proposed method of locating ROIs is effective and that the designed density and atypia features can predict glioma grade accurately, providing a reliable basis for clinical diagnosis.
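The AUC reported above can be computed directly from classifier scores via the rank-sum (Mann-Whitney U) identity, without building the ROC curve explicitly. A minimal sketch:

```python
def auc_from_scores(scores, labels):
    """ROC AUC via the Mann-Whitney identity: the probability that a
    randomly chosen positive example scores above a randomly chosen
    negative one, counting ties as 1/2."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```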
Affiliation(s)
- Jineng Han: School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, P.R. China; Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, Nanjing 210002, P.R. China
- Jiawei Xie: School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, P.R. China; Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, Nanjing 210002, P.R. China
- Song Gu: School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, P.R. China; Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, Nanjing 210002, P.R. China
- Zhaoyang Yan: School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, P.R. China; Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, Nanjing 210002, P.R. China
- Jianrui Li: School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, P.R. China
- Zhiqiang Zhang: School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, P.R. China
- Jun Xu: School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, P.R. China; Department of Diagnostic Radiology, Jinling Hospital, Medical School of Nanjing University, Nanjing 210002, P.R. China
28.
Li W, Li J, Polson J, Wang Z, Speier W, Arnold C. High resolution histopathology image generation and segmentation through adversarial training. Med Image Anal 2021; 75:102251. [PMID: 34814059] [DOI: 10.1016/j.media.2021.102251]
Abstract
Semantic segmentation of histopathology images can be a vital aspect of computer-aided diagnosis, and deep learning models have been effectively applied to this task with varying levels of success. However, their impact has been limited due to the small size of fully annotated datasets. Data augmentation is one avenue to address this limitation. Generative Adversarial Networks (GANs) have shown promise in this respect, but previous work has focused mostly on classification tasks applied to MR and CT images, both of which have lower resolution and scale than histopathology images. There is limited research that applies GANs as a data augmentation approach for large-scale image semantic segmentation, which requires high-quality image-mask pairs. In this work, we propose a multi-scale conditional GAN for high-resolution, large-scale histopathology image generation and segmentation. Our model consists of a pyramid of GAN structures, each responsible for generating and segmenting images at a different scale. Using semantic masks, the generative component of our model is able to synthesize histopathology images that are visually realistic. We demonstrate that these synthesized images along with their masks can be used to boost segmentation performance, especially in the semi-supervised scenario.
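The model above trains a pyramid of GANs, one per image scale. The pyramid construction itself is the simple part: repeatedly downsample the full-resolution image. A minimal sketch of that step only (2x2 box averaging; the GANs themselves are far beyond a short example):

```python
def box_downsample(img):
    """Halve a 2D image (list of lists, even dimensions) by averaging
    each non-overlapping 2x2 block."""
    return [[(img[i][j] + img[i][j + 1]
              + img[i + 1][j] + img[i + 1][j + 1]) / 4.0
             for j in range(0, len(img[0]), 2)]
            for i in range(0, len(img), 2)]

def build_pyramid(img, n_levels):
    """Multi-scale pyramid: level 0 is full resolution, and each
    subsequent level is half the size of the previous one."""
    levels = [img]
    for _ in range(n_levels - 1):
        levels.append(box_downsample(levels[-1]))
    return levels
```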
Affiliation(s)
- Wenyuan Li: Computational Diagnostics Lab, UCLA, Los Angeles, USA; Department of Electrical and Computer Engineering, UCLA, Los Angeles, USA
- Jiayun Li: Computational Diagnostics Lab, UCLA, Los Angeles, USA; Department of Bioengineering, UCLA, Los Angeles, USA
- Jennifer Polson: Computational Diagnostics Lab, UCLA, Los Angeles, USA; Department of Bioengineering, UCLA, Los Angeles, USA
- Zichen Wang: Computational Diagnostics Lab, UCLA, Los Angeles, USA; Department of Bioengineering, UCLA, Los Angeles, USA
- William Speier: Computational Diagnostics Lab, UCLA, Los Angeles, USA; Department of Bioengineering, UCLA, Los Angeles, USA; Department of Radiological Sciences, UCLA, Los Angeles, USA
- Corey Arnold: Computational Diagnostics Lab, UCLA, Los Angeles, USA; Department of Electrical and Computer Engineering, UCLA, Los Angeles, USA; Department of Bioengineering, UCLA, Los Angeles, USA; Department of Radiological Sciences, UCLA, Los Angeles, USA; Department of Pathology & Laboratory Medicine, UCLA, Los Angeles, USA
29.
Jose L, Liu S, Russo C, Nadort A, Di Ieva A. Generative Adversarial Networks in Digital Pathology and Histopathological Image Processing: A Review. J Pathol Inform 2021; 12:43. [PMID: 34881098] [PMCID: PMC8609288] [DOI: 10.4103/jpi.jpi_103_20]
Abstract
Digital pathology is gaining prominence among researchers with developments in advanced imaging modalities and new technologies. Generative adversarial networks (GANs) are a recent development in the field of artificial intelligence and, since their inception, have attracted considerable interest in digital pathology. GANs and their extensions have opened several ways to tackle many challenging histopathological image processing problems such as color normalization, virtual staining, ink removal, image enhancement, automatic feature extraction, segmentation of nuclei, domain adaptation, and data augmentation. This paper reviews recent advances in histopathological image processing using GANs, with special emphasis on future perspectives related to the use of such techniques. The papers included in this review were retrieved by conducting a keyword search on Google Scholar and manually selecting those concerning H&E-stained digital pathology images. In the first part, we describe recent literature that uses GANs in various image preprocessing tasks such as stain normalization, virtual staining, image enhancement, ink removal, and data augmentation. In the second part, we describe literature that uses GANs for image analysis, such as nuclei detection, segmentation, and feature extraction. This review illustrates the role of GANs in digital pathology with the objective of triggering new research on the application of generative models in future digital pathology informatics.
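Before GANs, a classical baseline for the stain/color normalization task mentioned above was Reinhard-style statistics matching: shift and scale each color channel so its mean and standard deviation match a reference slide. A minimal single-channel sketch (real pipelines work per channel in a perceptual color space such as Lab, not raw RGB):

```python
import math

def match_channel_stats(src, ref):
    """Shift/scale one color channel (a flat list of values) so its
    mean and standard deviation match a reference channel; the core
    operation of Reinhard-style color normalization."""
    n = len(src)
    mu_s = sum(src) / n
    sd_s = math.sqrt(sum((v - mu_s) ** 2 for v in src) / n) or 1.0
    mu_r = sum(ref) / len(ref)
    sd_r = math.sqrt(sum((v - mu_r) ** 2 for v in ref) / len(ref))
    return [(v - mu_s) / sd_s * sd_r + mu_r for v in src]
```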
Affiliation(s)
- Laya Jose: Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia; ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Sidong Liu: Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia; Australian Institute of Health Innovation, Centre for Health Informatics, Macquarie University, Sydney, Australia
- Carlo Russo: Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Annemarie Nadort: ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia; Department of Physics and Astronomy, Faculty of Science and Engineering, Macquarie University, Sydney, Australia
- Antonio Di Ieva: Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
30.
Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems. Comput Biol Med 2021; 136:104743. [PMID: 34426172] [DOI: 10.1016/j.compbiomed.2021.104743]
Abstract
Prostate cancer (PCa) is one of the most commonly diagnosed cancers and one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020. Artificial intelligence algorithms have had a huge impact on medical image analysis, including digital histopathology, where Convolutional Neural Networks (CNNs) are used to provide a fast and accurate diagnosis, supporting experts in this task. To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images. Due to the size of these images, neural networks cannot use them as input; therefore, small subimages called patches are extracted and predicted, yielding a patch-level classification. In this work, a novel patch aggregation method based on a custom Wide & Deep neural network model is presented, which performs a slide-level classification using the patch-level classes obtained from a CNN. The malignant tissue ratio, a 10-bin malignant probability histogram, the least squares regression line of the histogram, and the number of malignant connected components are used by the proposed model to perform the classification. An accuracy of 94.24% and a sensitivity of 98.87% were achieved, demonstrating that the proposed system could aid pathologists by speeding up the screening process and, thus, contribute to the fight against PCa.
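The slide-level inputs listed above are easy to derive from the per-patch probabilities. A minimal sketch in the spirit of that aggregation (not the authors' code; the connected-component count needs patch coordinates and is omitted):

```python
def slide_features(patch_probs, n_bins=10, threshold=0.5):
    """Slide-level features from patch malignancy probabilities:
    the malignant tissue ratio, a 10-bin probability histogram, and
    the least-squares slope of that histogram against the bin index."""
    n = len(patch_probs)
    ratio = sum(p >= threshold for p in patch_probs) / n
    hist = [0] * n_bins
    for p in patch_probs:
        hist[min(int(p * n_bins), n_bins - 1)] += 1
    # Least-squares slope of hist[i] regressed on bin index i.
    xs = range(n_bins)
    mx = sum(xs) / n_bins
    my = sum(hist) / n_bins
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, hist))
             / sum((x - mx) ** 2 for x in xs))
    return ratio, hist, slope
```

These features would then feed the wide (linear) and deep (MLP) branches of a Wide & Deep classifier.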
31
Yu H, Zhang X, Song L, Jiang L, Huang X, Chen W, Zhang C, Li J, Yang J, Hu Z, Duan Q, Chen W, He X, Fan J, Jiang W, Zhang L, Qiu C, Gu M, Sun W, Zhang Y, Peng G, Shen W, Fu G. Large-scale gastric cancer screening and localization using multi-task deep neural network. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
32
Crouzet C, Jeong G, Chae RH, LoPresti KT, Dunn CE, Xie DF, Agu C, Fang C, Nunes ACF, Lau WL, Kim S, Cribbs DH, Fisher M, Choi B. Spectroscopic and deep learning-based approaches to identify and quantify cerebral microhemorrhages. Sci Rep 2021; 11:10725. [PMID: 34021170 PMCID: PMC8140127 DOI: 10.1038/s41598-021-88236-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Accepted: 03/25/2021] [Indexed: 02/04/2023] Open
Abstract
Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5-40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, which is prone to inter-individual variability and can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and an RGB color camera. To determine the ground truth, four users independently annotated Prussian blue-labeled CMHs. Compared with the ground truth, the deep learning and ratiometric approaches performed better than the phasor analysis approach. The deep learning approach achieved the highest precision of the three methods, while the ratiometric approach was the most versatile and maintained accuracy, albeit with lower precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase processing speed while maintaining precision and accuracy.
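The ratiometric idea, flagging pixels whose blue channel dominates red as Prussian blue stain, can be illustrated with a toy sketch; the threshold value and function name are assumptions, and the paper's calibrated ratiometric procedure is more involved:

```python
import numpy as np

def prussian_blue_mask(rgb, ratio_thresh=1.4):
    """Toy ratiometric segmentation: mark pixels where blue strongly
    exceeds red, a crude proxy for Prussian blue staining."""
    rgb = np.asarray(rgb, dtype=float)
    r, b = rgb[..., 0], rgb[..., 2]
    ratio = b / np.maximum(r, 1.0)  # guard against division by zero
    return ratio >= ratio_thresh
```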
Affiliation(s)
- Christian Crouzet
- Beckman Laser Institute and Medical Clinic, University of California-Irvine, Irvine, CA, USA; Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, USA
- Gwangjin Jeong
- Department of Biomedical Engineering, Beckman Laser Institute Korea, Dankook University, Cheonan, 31116, Republic of Korea
- Rachel H. Chae
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Krystal T. LoPresti
- Beckman Laser Institute and Medical Clinic, University of California-Irvine, Irvine, CA, USA; Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, USA
- Cody E. Dunn
- Beckman Laser Institute and Medical Clinic, University of California-Irvine, Irvine, CA, USA; Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, USA
- Danny F. Xie
- Beckman Laser Institute and Medical Clinic, University of California-Irvine, Irvine, CA, USA; Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, USA
- Chiagoziem Agu
- Albany State University, Albany, GA, USA
- Chuo Fang
- Neurology and Pathology and Laboratory Medicine, University of California-Irvine, Irvine, CA, USA
- Ane C. F. Nunes
- Department of Medicine, Division of Nephrology, University of California-Irvine, Irvine, CA, USA
- Wei Ling Lau
- Department of Medicine, Division of Nephrology, University of California-Irvine, Irvine, CA, USA
- Sehwan Kim
- Department of Biomedical Engineering, Beckman Laser Institute Korea, Dankook University, Cheonan, 31116, Republic of Korea
- David H. Cribbs
- Institute for Memory Impairments and Neurological Disorders, University of California-Irvine, Irvine, CA, USA
- Mark Fisher
- Neurology and Pathology and Laboratory Medicine, University of California-Irvine, Irvine, CA, USA
- Bernard Choi
- Beckman Laser Institute and Medical Clinic, University of California-Irvine, Irvine, CA, USA; Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, USA; Department of Surgery, University of California-Irvine, Irvine, CA, USA; Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California-Irvine, Irvine, CA, USA
33
A Review of Explainable Deep Learning Cancer Detection Models in Medical Imaging. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11104573] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Deep learning has demonstrated remarkable accuracy in analyzing images for cancer detection tasks in recent years. The accuracy achieved rivals that of radiologists and is suitable for implementation as a clinical tool. However, a significant problem is that these models are black-box algorithms and therefore intrinsically unexplainable. This creates a barrier to clinical implementation, owing to the lack of trust and transparency characteristic of black-box algorithms. Additionally, recent regulations prevent the deployment of unexplainable models in clinical settings, which further demonstrates the need for explainability. To mitigate these concerns, recent studies have attempted to overcome these issues by modifying deep learning architectures or providing post-hoc explanations. A review of the deep learning explanation literature focused on cancer detection using MR images is presented here. The gap between what clinicians deem explainable and what current methods provide is discussed, and suggestions for closing this gap are offered.
34
Ayyad SM, Shehata M, Shalaby A, Abou El-Ghar M, Ghazal M, El-Melegy M, Abdel-Hamid NB, Labib LM, Ali HA, El-Baz A. Role of AI and Histopathological Images in Detecting Prostate Cancer: A Survey. SENSORS (BASEL, SWITZERLAND) 2021; 21:2586. [PMID: 33917035 PMCID: PMC8067693 DOI: 10.3390/s21082586] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 03/29/2021] [Accepted: 04/04/2021] [Indexed: 02/07/2023]
Abstract
Prostate cancer is one of the most frequently diagnosed cancers and the second leading cause of cancer-related death among men worldwide. Early diagnosis and treatment are essential to stop or control the growth and spread of cancer cells in the body. Histopathological image diagnosis is the gold standard for detecting prostate cancer because of its distinctive visual characteristics, but interpreting these images requires a high level of expertise and is very time consuming. One way to accelerate such analysis is to employ artificial intelligence (AI) through computer-aided diagnosis (CAD) systems. Recent developments in AI, along with its sub-fields of conventional machine learning and deep learning, provide new insights to clinicians and researchers, and an abundance of research addresses histopathology images of prostate cancer specifically. However, comprehensive surveys focusing on prostate cancer in histopathology images are lacking. In this paper, we provide a comprehensive review of most, if not all, studies that have addressed prostate cancer diagnosis using histopathological images. The survey begins with an overview of histopathological image preparation and its challenges. We also briefly review the computing techniques commonly applied in image processing, segmentation, feature selection, and classification that can help detect prostate malignancies in histopathological images.
Affiliation(s)
- Sarah M. Ayyad
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Mohamed Shehata
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Shalaby
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohamed Abou El-Ghar
- Department of Radiology, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt
- Mohammed Ghazal
- Department of Electrical and Computer Engineering, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Moumen El-Melegy
- Department of Electrical Engineering, Assiut University, Assiut 71511, Egypt
- Nahla B. Abdel-Hamid
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Labib M. Labib
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- H. Arafat Ali
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Ayman El-Baz
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
35
Semantic Instance Segmentation of Kidney Cysts in MR Images: A Fully Automated 3D Approach Developed Through Active Learning. J Digit Imaging 2021; 34:773-787. [PMID: 33821360 PMCID: PMC8455788 DOI: 10.1007/s10278-021-00452-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2020] [Revised: 01/17/2021] [Accepted: 03/22/2021] [Indexed: 11/18/2022] Open
Abstract
Total kidney volume (TKV) is the main imaging biomarker used to monitor disease progression and to classify patients affected by autosomal dominant polycystic kidney disease (ADPKD) for clinical trials. However, patients with similar TKVs may have drastically different cystic presentations and phenotypes. In an effort to quantify these cystic differences, we developed the first 3D semantic instance cyst segmentation algorithm for kidneys in MR images. We have reformulated both the object detection/localization task and the instance-based segmentation task into a semantic segmentation task. This allowed us to solve this unique imaging problem efficiently, even for patients with thousands of cysts. To do this, a convolutional neural network (CNN) was trained to learn cyst edges and cyst cores. Images were converted from instance cyst segmentations to semantic edge-core segmentations by applying a 3D erosion morphology operator to up-sampled versions of the images. The reduced cysts were labeled as core; the eroded areas were dilated in 2D and labeled as edge. The network was trained on 30 MR images and validated on 10 MR images using a fourfold cross-validation procedure. The final ensemble model was tested on 20 MR images not seen during the initial training/validation. The results from the test set were compared to segmentations from two readers. The presented model achieved an averaged R2 value of 0.94 for cyst count, 1.00 for total cyst volume, 0.94 for cystic index, and an averaged Dice coefficient of 0.85. These results demonstrate the feasibility of performing cyst segmentations automatically in ADPKD patients.
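The instance-to-semantic edge-core conversion described above can be sketched as a per-cyst morphological erosion: the interior that survives erosion becomes "core" and the eroded-away rim becomes "edge". This is a simplified 2D illustration (the paper operates in 3D on up-sampled images, and the names here are assumptions):

```python
import numpy as np
from scipy import ndimage

def instance_to_edge_core(labels):
    """Convert an instance cyst mask into a semantic map:
    0 = background, 1 = edge, 2 = core (simplified 2D sketch)."""
    core = np.zeros_like(labels)
    edge = np.zeros_like(labels, dtype=bool)
    for inst in np.unique(labels):
        if inst == 0:
            continue
        mask = labels == inst
        eroded = ndimage.binary_erosion(mask)
        core[eroded] = inst        # interior survives erosion -> core
        edge |= mask & ~eroded     # eroded-away rim -> edge
    return np.where(core > 0, 2, np.where(edge, 1, 0))
```

Training a CNN on the edge-core map lets touching cysts be separated again at inference time by treating cores as seeds.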
36
Li J, Li W, Sisk A, Ye H, Wallace WD, Speier W, Arnold CW. A multi-resolution model for histopathology image classification and localization with multiple instance learning. Comput Biol Med 2021; 131:104253. [PMID: 33601084 DOI: 10.1016/j.compbiomed.2021.104253] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Revised: 01/31/2021] [Accepted: 02/03/2021] [Indexed: 12/17/2022]
Abstract
Large numbers of histopathological images have been digitized into high resolution whole slide images, opening opportunities in developing computational image analysis tools to reduce pathologists' workload and potentially improve inter- and intra-observer agreement. Most previous work on whole slide image analysis has focused on classification or segmentation of small pre-selected regions-of-interest, which requires fine-grained annotation and is non-trivial to extend for large-scale whole slide analysis. In this paper, we proposed a multi-resolution multiple instance learning model that leverages saliency maps to detect suspicious regions for fine-grained grade prediction. Instead of relying on expensive region- or pixel-level annotations, our model can be trained end-to-end with only slide-level labels. The model is developed on a large-scale prostate biopsy dataset containing 20,229 slides from 830 patients. The model achieved 92.7% accuracy, 81.8% Cohen's Kappa for benign, low grade (i.e. Grade group 1) and high grade (i.e. Grade group ≥ 2) prediction, an area under the receiver operating characteristic curve (AUROC) of 98.2% and an average precision (AP) of 97.4% for differentiating malignant and benign slides. The model obtained an AUROC of 99.4% and an AP of 99.8% for cancer detection on an external dataset.
Affiliation(s)
- Jiayun Li
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
- Wenyuan Li
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
- Anthony Sisk
- Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
- Huihui Ye
- Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
- W Dean Wallace
- Department of Pathology, USC, 2011 Zonal Avenue, Los Angeles, CA, 90033, USA
- William Speier
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
- Corey W Arnold
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
37
Xie X, Niu J, Liu X, Chen Z, Tang S, Yu S. A survey on incorporating domain knowledge into deep learning for medical image analysis. Med Image Anal 2021; 69:101985. [PMID: 33588117 DOI: 10.1016/j.media.2021.101985] [Citation(s) in RCA: 87] [Impact Index Per Article: 21.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2020] [Revised: 12/04/2020] [Accepted: 01/26/2021] [Indexed: 12/27/2022]
Abstract
Although deep learning models like CNNs have achieved great success in medical image analysis, the small size of medical datasets remains a major bottleneck in this area. To address this problem, researchers have started looking for external information beyond the currently available medical datasets. Traditional approaches generally leverage information from natural images via transfer learning. More recent works utilize domain knowledge from medical doctors to create networks that resemble how medical doctors are trained, mimic their diagnostic patterns, or focus on the features or areas they pay particular attention to. In this survey, we summarize the current progress on integrating medical domain knowledge into deep learning models for various tasks, such as disease diagnosis; lesion, organ, and abnormality detection; and lesion and organ segmentation. For each task, we systematically categorize the different kinds of medical domain knowledge that have been utilized and their corresponding integration methods. We also discuss current challenges and directions for future research.
Affiliation(s)
- Xiaozheng Xie
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Jianwei Niu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China; Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC) and Hangzhou Innovation Institute of Beihang University, 18 Chuanghui Street, Binjiang District, Hangzhou 310000, China
- Xuefeng Liu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Zhengsu Chen
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Shaojie Tang
- Jindal School of Management, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080-3021, USA
- Shui Yu
- School of Computer Science, University of Technology Sydney, 15 Broadway, Ultimo NSW 2007, Australia
38
Lee YW, Huang CS, Shih CC, Chang RF. Axillary lymph node metastasis status prediction of early-stage breast cancer using convolutional neural networks. Comput Biol Med 2020; 130:104206. [PMID: 33421823 DOI: 10.1016/j.compbiomed.2020.104206] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2020] [Revised: 12/28/2020] [Accepted: 12/28/2020] [Indexed: 11/25/2022]
Abstract
Deep learning (DL) algorithms have proven very effective in a wide range of computer vision applications, such as segmentation, classification, and detection. DL models can automatically assess complex medical image scenes without human intervention and can serve as a second reader, providing an additional opinion for the physician. To predict the axillary lymph node (ALN) metastatic status in patients with early-stage breast cancer, a deep learning-based computer-aided prediction (CAP) system for ultrasound (US) images was proposed. A total of 153 women with breast tumor US images were involved in this study: 59 patients with and 94 without ALN metastasis. The CAP system uses both the tumor region and the peritumoral tissue in US images to determine ALN status. First, Mask R-CNN was adopted as the tumor detection and segmentation model to obtain the tumor localization and region. Second, the peritumoral tissue, which reflects metastatic progression, was extracted from the US image. Third, a DL model was used to predict ALN metastasis. Finally, the simple linear iterative clustering (SLIC) superpixel segmentation method and the LIME explanation algorithm were employed to explain how the model makes decisions. The experimental results indicated that the DL model performed best on tumor regions with 3 mm of surrounding peritumoral tissue, achieving an accuracy, sensitivity, specificity, and AUC of 81.05% (124/153), 81.36% (48/59), 80.85% (76/94), and 0.8054, respectively. These results indicate that the proposed CAP system, which combines the primary tumor and peritumoral tissue, is an effective method to predict ALN status in patients with early-stage breast cancer.
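The peritumoral-tissue extraction step can be approximated by dilating the tumor mask and subtracting the tumor itself. This is a toy sketch: the paper uses a 3 mm band, so the pixel thickness would be derived from the US pixel spacing, and the helper name is an assumption:

```python
import numpy as np
from scipy import ndimage

def peritumoral_band(tumor_mask, thickness_px):
    """Return the ring of tissue around the tumor: dilate the tumor
    mask by `thickness_px` iterations, then remove the tumor region."""
    dilated = ndimage.binary_dilation(tumor_mask, iterations=thickness_px)
    return dilated & ~tumor_mask
```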
Affiliation(s)
- Yan-Wei Lee
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, 10617, Taiwan
- Chiun-Sheng Huang
- Department of Surgery, College of Medicine, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, 100, Taiwan
- Chung-Chih Shih
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, 10617, Taiwan
- Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, 10617, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, 10617, Taiwan; Graduate Institute of Network and Multimedia, National Taiwan University, Taipei, 10617, Taiwan; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, 10617, Taiwan
39
Analysis of cancer in histological images: employing an approach based on genetic algorithm. Pattern Anal Appl 2020. [DOI: 10.1007/s10044-020-00931-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
40
Li Y, Chen J, Xue P, Tang C, Chang J, Chu C, Ma K, Li Q, Zheng Y, Qiao Y. Computer-Aided Cervical Cancer Diagnosis Using Time-Lapsed Colposcopic Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3403-3415. [PMID: 32406830 DOI: 10.1109/tmi.2020.2994778] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Cervical cancer is the fourth leading cause of cancer-related death among women worldwide. Early detection of cervical intraepithelial neoplasia (CIN) can significantly increase the survival rate of patients. In this paper, we propose a deep learning framework for the accurate identification of LSIL+ (including CIN and cervical cancer) using time-lapsed colposcopic images. The proposed framework involves two main components: key-frame feature encoding networks and a feature fusion network. The features of the original (pre-acetic-acid) image and of the colposcopic images captured at around 60 s, 90 s, 120 s, and 150 s during the acetic acid test are encoded by the feature encoding networks. Several fusion approaches are compared, all of which outperform existing automated cervical cancer diagnosis systems that use a single time slot. A graph convolutional network with edge features (E-GCN) is found to be the most suitable fusion approach in our study, owing to its excellent explainability consistent with clinical practice. A large-scale dataset containing time-lapsed colposcopic images from 7,668 patients was collected from the collaborating hospital to train and validate our deep learning framework. Colposcopists were invited to compete with our computer-aided diagnosis system. The proposed deep learning framework achieves a classification accuracy of 78.33%, comparable to that of an in-service colposcopist, which demonstrates its potential to provide assistance in realistic clinical scenarios.
41
|
Yan C, Nakane K, Wang X, Fu Y, Lu H, Fan X, Feldman MD, Madabhushi A, Xu J. Automated gleason grading on prostate biopsy slides by statistical representations of homology profile. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 194:105528. [PMID: 32470903 PMCID: PMC8153074 DOI: 10.1016/j.cmpb.2020.105528] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Revised: 04/13/2020] [Accepted: 04/30/2020] [Indexed: 05/03/2023]
Abstract
BACKGROUND AND OBJECTIVE The Gleason grading system is currently the clinical gold standard for determining prostate cancer aggressiveness. Prostate cancer is typically classified into one of five categories, with 1 representing the most indolent disease and 5 the most aggressive. Grades 3 and 4 are the most common and the most difficult patterns to discriminate in clinical practice. Even though the degree of gland differentiation is the strongest determinant of Gleason grade, manual grading is subjective and hampered by substantial inter-reader disagreement, especially for intermediate grade groups. METHODS To capture the topological characteristics and the degree of connectivity between nuclei around the gland, the concept of the Homology Profile (HP) for prostate cancer grading is presented in this paper. HP is an algebraic tool whereby certain algebraic invariants are computed from the structure of a topological space. We utilized Statistical Representation of Homology Profile (SRHP) features to quantify the extent of glandular differentiation. The quantitative characteristics representing each image patch are fed into a supervised classifier for discrimination of grade patterns 3 and 4. RESULTS On the basis of the novel homology profile, we evaluated 43 digitized images of prostate biopsy slides annotated for regions corresponding to Grades 3 and 4. The quantitative patch-level evaluation showed that our approach achieved an area under the curve (AUC) of 0.96 and an accuracy of 0.89 in discriminating Grade 3 from Grade 4 patches. Our approach was superior to comparative methods, including handcrafted cellular features, a Stacked Sparse Autoencoder (SSAE) algorithm, and an end-to-end supervised learning method (DLGg). Slide-level quantitative and qualitative evaluation results also reflect the ability of our approach to discriminate Gleason Grade 3 from Grade 4 patterns on H&E tissue images.
CONCLUSIONS We presented a novel Statistical Representation of Homology Profile (SRHP) approach for automated Gleason grading on prostate biopsy slides. The most discriminating topological descriptions of cancerous regions for Grades 3 and 4 in prostate cancer were identified. Moreover, these homology profile characteristics are interpretable, visually meaningful, and highly consistent with the rubric employed by pathologists for Gleason grading.
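As a loose illustration of the homology-profile idea, the 0th Betti number (connected-component count) of a binarized patch can be tracked across intensity thresholds, and summary statistics of such a profile could feed a classifier. This toy profile is an assumption-laden stand-in, not the authors' exact SRHP construction:

```python
import numpy as np
from scipy import ndimage

def b0_profile(gray, thresholds):
    """Count connected components (0th Betti number) of the image
    binarized at each threshold - a toy 'homology profile'."""
    return np.array([ndimage.label(gray >= t)[1] for t in thresholds])
```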
Affiliation(s)
- Chaoyang Yan
- School of Automation, Nanjing University of Information Science & Technology, Nanjing 210044, China; Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Kazuaki Nakane
- Department of Molecular Pathology, Osaka University Graduate School of Medicine, Division of Health Science, Osaka 565-0871, Japan
- Xiangxue Wang
- Dept. of Biomedical Engineering, Case Western Reserve University, OH 44106-7207, USA
- Yao Fu
- Dept. of Pathology, the affiliated Drum Tower Hospital, Nanjing University Medical School, 210008, China
- Haoda Lu
- School of Automation, Nanjing University of Information Science & Technology, Nanjing 210044, China; Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Xiangshan Fan
- Dept. of Pathology, the affiliated Drum Tower Hospital, Nanjing University Medical School, 210008, China
- Michael D Feldman
- Division of Surgical Pathology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, USA
- Anant Madabhushi
- Dept. of Biomedical Engineering, Case Western Reserve University, OH 44106-7207, USA; Louis Stokes Cleveland Veterans Medical Center, Cleveland, OH 44106
- Jun Xu
- School of Automation, Nanjing University of Information Science & Technology, Nanjing 210044, China; Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
42
Silva-Rodríguez J, Colomer A, Sales MA, Molina R, Naranjo V. Going deeper through the Gleason scoring scale: An automatic end-to-end system for histology prostate grading and cribriform pattern detection. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 195:105637. [PMID: 32653747 DOI: 10.1016/j.cmpb.2020.105637] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Accepted: 06/28/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Prostate cancer is one of the most common diseases affecting men worldwide. The Gleason scoring system is the primary diagnostic and prognostic tool for prostate cancer. Furthermore, recent reports indicate that the presence of certain Gleason patterns, such as the cribriform pattern, may correlate with a worse prognosis than other patterns belonging to Gleason grade 4. Current clinical guidelines therefore recommend highlighting its presence during the analysis of biopsies. These requirements impose a heavy workload on the pathologist, whose analysis of each sample is based on visual assessment of the morphology and organisation of the glands in the tissue, a time-consuming and subjective task. In recent years, with the development of digitisation devices, the use of computer vision techniques for biopsy analysis has increased. However, to the best of the authors' knowledge, algorithms to automatically detect individual cribriform patterns belonging to Gleason grade 4 have not yet been studied in the literature. The objective of this work is to develop a deep-learning-based system able to support pathologists in the daily analysis of prostate biopsies. This analysis must include the Gleason grading of local structures, the detection of cribriform patterns, and the Gleason scoring of the whole biopsy. METHODS The methodological core of this work is a patch-wise predictive model, based on convolutional neural networks, able to determine the presence of cancerous patterns according to the Gleason grading system. In particular, we train from scratch a simple self-designed architecture with three filters and a top model with global-max pooling. The cribriform pattern is detected by retraining the set of filters of the last convolutional layer in the network.
Subsequently, a biopsy-level prediction map is reconstructed by bilinear interpolation of the patch-level predictions of the Gleason grades. From the reconstructed prediction map, the percentage of each Gleason grade in the tissue is computed and fed to a multi-layer perceptron, which provides a biopsy-level score. RESULTS On our SICAPv2 database, composed of 182 annotated whole-slide images, we obtained a quadratic Cohen's kappa of 0.77 on the test set for patch-level Gleason grading with the proposed architecture trained from scratch, outperforming previous results reported in the literature. Furthermore, this model reaches the level of fine-tuned state-of-the-art architectures in a patient-based four-group cross-validation. In the cribriform pattern detection task, we obtained an area under the ROC curve of 0.82. Regarding biopsy-level Gleason scoring, we achieved a quadratic Cohen's kappa of 0.81 on the test subset. Shallow CNN architectures trained from scratch outperform current state-of-the-art methods for Gleason grade classification. Our proposed model characterises the different Gleason grades in prostate tissue by extracting low-level features through three basic blocks (i.e. convolutional layer + max pooling). The use of global-max pooling to reduce each activation map proved to be a key factor in reducing model complexity and avoiding overfitting. Regarding the Gleason scoring of biopsies, a multi-layer perceptron was shown to model the decision-making of pathologists better than the simpler models previously used in the literature.
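The reconstruction step, bilinear up-sampling of patch-level class scores followed by per-grade tissue percentages, can be sketched as follows (the class count, shapes, and names are assumptions, and the real pipeline feeds these percentages to a multi-layer perceptron):

```python
import numpy as np
from scipy import ndimage

def grade_percentages(patch_logits, out_shape):
    """Up-sample an (H, W, C) patch-level score grid to out_shape via
    bilinear interpolation, then return the tissue fraction of each
    class in the reconstructed prediction map."""
    n_classes = patch_logits.shape[-1]
    maps = np.stack([
        ndimage.zoom(patch_logits[..., c],
                     (out_shape[0] / patch_logits.shape[0],
                      out_shape[1] / patch_logits.shape[1]),
                     order=1)                      # order=1 -> bilinear
        for c in range(n_classes)
    ], axis=-1)
    pred = maps.argmax(-1)                         # per-pixel grade
    return np.bincount(pred.ravel(), minlength=n_classes) / pred.size
```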
Collapse
Affiliation(s)
- Julio Silva-Rodríguez
- Institute of Transport and Territory, Universitat Politècnica de València, Valencia, Spain.
| | - Adrián Colomer
- Institute of Research and Innovation in Bioengineering, Universitat Politècnica de València, Valencia, Spain.
| | - María A Sales
- Anatomical Pathology Service, University Clinical Hospital of Valencia, Valencia, Spain.
| | - Rafael Molina
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain.
| | - Valery Naranjo
- Institute of Research and Innovation in Bioengineering, Universitat Politècnica de València, Valencia, Spain.
| |
Collapse
|
43
|
Deng S, Zhang X, Yan W, Chang EIC, Fan Y, Lai M, Xu Y. Deep learning in digital pathology image analysis: a survey. Front Med 2020; 14:470-487. [PMID: 32728875 DOI: 10.1007/s11684-020-0782-9] [Citation(s) in RCA: 68] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2019] [Accepted: 03/05/2020] [Indexed: 12/21/2022]
Abstract
Deep learning (DL) has achieved state-of-the-art performance in many digital pathology analysis tasks. Traditional methods usually require hand-crafted domain-specific features, whereas DL methods can learn representations without manually designed features. In terms of feature extraction, DL approaches are therefore less labor-intensive than conventional machine learning methods. In this paper, we comprehensively summarize recent DL-based image analysis studies in histopathology, covering different tasks (e.g., classification, semantic segmentation, detection, and instance segmentation) and various applications (e.g., stain normalization, cell/gland/region structure analysis). DL methods can provide consistent and accurate outcomes, and DL is a promising tool to assist pathologists in clinical diagnosis.
Collapse
Affiliation(s)
- Shujian Deng
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
| | - Xin Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
| | - Wen Yan
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
| | | | - Yubo Fan
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
| | - Maode Lai
- Department of Pathology, School of Medicine, Zhejiang University, Hangzhou, 310007, China
| | - Yan Xu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China.
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China.
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China.
- Microsoft Research Asia, Beijing, 100080, China.
| |
Collapse
|
44
|
Tolkach Y, Dohmgörgen T, Toma M, Kristiansen G. High-accuracy prostate cancer pathology using deep learning. NAT MACH INTELL 2020. [DOI: 10.1038/s42256-020-0200-7] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
|
45
|
Wang L, Chen A, Zhang Y, Wang X, Zhang Y, Shen Q, Xue Y. AK-DL: A Shallow Neural Network Model for Diagnosing Actinic Keratosis with Better Performance Than Deep Neural Networks. Diagnostics (Basel) 2020; 10:diagnostics10040217. [PMID: 32294962 PMCID: PMC7235884 DOI: 10.3390/diagnostics10040217] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Revised: 04/07/2020] [Accepted: 04/09/2020] [Indexed: 11/29/2022] Open
Abstract
Actinic keratosis (AK) is one of the most common precancerous skin lesions and is easily confused with benign keratosis (BK). At present, the diagnosis of AK depends mainly on histopathological examination, and the lesion is easily overlooked in its early stage, missing the opportunity for treatment. In this study, we designed a shallow convolutional neural network (CNN) named actinic keratosis deep learning (AK-DL) and further developed an intelligent diagnostic system for AK based on the iOS platform. After data preprocessing, the AK-DL model was trained and tested with AK and BK images from the HAM10000 dataset. We further compared it with mainstream deep CNN models, such as AlexNet, GoogLeNet, and ResNet, as well as with traditional medical image processing algorithms. Our results showed that the performance of AK-DL was better than that of the mainstream deep CNN models and the traditional algorithms on the AK dataset. The recognition accuracy of AK-DL was 0.925, the area under the receiver operating characteristic curve (AUC) was 0.887, and the training time was only 123.0 s. An iOS app implementing the diagnostic system was developed on top of the AK-DL model for accurate and automatic diagnosis of AK. Our results indicate that a shallow CNN is preferable for the recognition of AK.
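A recurring theme in this entry (and in the shallow-CNN work of reference 42 above) is that a small classification head keeps shallow networks compact. A rough back-of-the-envelope sketch of why a global-max-pooling head helps; the 32x32x64 activation volume and 4-class output are arbitrary illustrative choices, not figures from either paper:

```python
import numpy as np

# A flatten head on a 32x32x64 activation volume feeding 4 classes
# needs one weight per (position, channel, class) triple:
flatten_params = 32 * 32 * 64 * 4   # 262,144 weights

# A global-max-pool head first reduces each of the 64 activation maps
# to a single scalar, so the dense layer only needs:
gmp_params = 64 * 4                 # 256 weights

# Global max pooling itself is just a per-channel max:
activations = np.random.rand(32, 32, 64)
pooled = activations.max(axis=(0, 1))   # shape (64,)

print(flatten_params // gmp_params)     # -> 1024
```

A 1024-fold reduction in head parameters is one plausible reading of why the abstracts above credit global-max pooling with reducing complexity and overfitting.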
Collapse
Affiliation(s)
- Liyang Wang
- Beijing Advanced Innovation Center for Food Nutrition and Human Health, Key Laboratory of Plant Protein and Grain Processing, National Engineering and Technology Research Center for Fruits and Vegetables, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China; (L.W.); (X.W.); (Q.S.)
| | - Angxuan Chen
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China; (A.C.); (Y.Z.); (Y.Z.)
| | - Yan Zhang
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China; (A.C.); (Y.Z.); (Y.Z.)
| | - Xiaoya Wang
- Beijing Advanced Innovation Center for Food Nutrition and Human Health, Key Laboratory of Plant Protein and Grain Processing, National Engineering and Technology Research Center for Fruits and Vegetables, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China; (L.W.); (X.W.); (Q.S.)
| | - Yu Zhang
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China; (A.C.); (Y.Z.); (Y.Z.)
| | - Qun Shen
- Beijing Advanced Innovation Center for Food Nutrition and Human Health, Key Laboratory of Plant Protein and Grain Processing, National Engineering and Technology Research Center for Fruits and Vegetables, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China; (L.W.); (X.W.); (Q.S.)
| | - Yong Xue
- Beijing Advanced Innovation Center for Food Nutrition and Human Health, Key Laboratory of Plant Protein and Grain Processing, National Engineering and Technology Research Center for Fruits and Vegetables, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China; (L.W.); (X.W.); (Q.S.)
| |
Collapse
|
46
|
Chen CM, Huang YS, Fang PW, Liang CW, Chang RF. A computer-aided diagnosis system for differentiation and delineation of malignant regions on whole-slide prostate histopathology image using spatial statistics and multidimensional DenseNet. Med Phys 2020; 47:1021-1033. [PMID: 31834623 DOI: 10.1002/mp.13964] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Revised: 11/26/2019] [Accepted: 12/04/2019] [Indexed: 02/06/2023] Open
Abstract
PURPOSE Prostate cancer (PCa) is a major health concern in aging males, and proper management of the disease depends on accurately interpreting pathology specimens. However, reading prostatectomy histopathology slides, which is performed primarily for staging, is usually time-consuming and differs from reading small biopsy specimens, which are used mainly for diagnosis. Generally, each prostatectomy specimen generates tens of large tissue sections, and for each section the malignant region needs to be delineated to assess the amount of tumor and its burden. With the aim of reducing the workload of pathologists, in this study we focus on developing a computer-aided diagnosis (CAD) system based on a densely connected convolutional neural network (DenseNet) that outlines the malignant regions on whole-slide histopathology images. METHODS We use an efficient color normalization process based on the ranklet transformation to automatically correct the intensity of the images. Additionally, we use spatial probability to segment the tissue structure regions for different tissue recognition patterns. Based on this segmentation, we incorporate a multidimensional structure into DenseNet to determine whether a particular prostatic region is benign or malignant. RESULTS On a test set of 2,663 images from 32 whole-slide prostate histopathology images, our proposed system achieved average Dice coefficient, Jaccard similarity coefficient, and Boundary F1 score values of 0.726, 0.6306, and 0.5209, respectively. The accuracy, sensitivity, specificity, and area under the ROC curve (AUC) of the proposed classification method were 95.0% (2544/2663), 96.7% (1210/1251), 93.9% (1334/1412), and 0.9831, respectively.
DISCUSSION We provide a detailed discussion of how our proposed system achieves considerable improvement over similar methods considered in previous research, as well as how it can be used to delineate malignant regions.
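For readers unfamiliar with the segmentation metrics reported in this entry, the Dice and Jaccard coefficients on binary masks reduce to simple set overlaps. A minimal sketch (the toy masks are illustrative, not data from the paper):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    """Jaccard similarity (IoU): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(round(dice(pred, gt), 3))     # -> 0.667
print(round(jaccard(pred, gt), 3))  # -> 0.5
```

Note that Dice is always at least as large as Jaccard on the same pair of masks, which is consistent with the 0.726 vs. 0.6306 figures reported above.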
Collapse
Affiliation(s)
- Chiao-Min Chen
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
| | - Yao-Sian Huang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
| | - Pei-Wei Fang
- Department of Pathology, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City, Taiwan
| | - Cher-Wei Liang
- Department of Pathology, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City, Taiwan.,School of Medicine, College of Medicine, Fu Jen Catholic University, New Taipei City, Taiwan.,Graduate Institute of Pathology, College of Medicine, National Taiwan University Taipei, Taipei, Taiwan
| | - Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan.,Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan.,MOST Joint Research Center for AI Technology and All Vista Healthcare Taipei, Taipei, Taiwan
| |
Collapse
|
47
|
Esteban ÁE, López-Pérez M, Colomer A, Sales MA, Molina R, Naranjo V. A new optical density granulometry-based descriptor for the classification of prostate histological images using shallow and deep Gaussian processes. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 178:303-317. [PMID: 31416557 DOI: 10.1016/j.cmpb.2019.07.003] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2019] [Revised: 06/28/2019] [Accepted: 07/03/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Prostate cancer is one of the most common male tumors. The increasing use of whole-slide digital scanners has led to enormous interest in applying machine learning techniques to histopathological image classification. Here we introduce a novel family of morphological descriptors which, extracted in the appropriate image space and combined with shallow and deep Gaussian process based classifiers, improves early prostate cancer diagnosis. METHOD We decompose the acquired RGB image into its RGB and optical density hematoxylin and eosin components. Then, we define two novel granulometry-based descriptors which work in both the RGB and optical density spaces but perform better on the latter. In the optical density space they clearly encapsulate the knowledge used by pathologists to identify cancerous lesions. The obtained features become the inputs to shallow and deep Gaussian process classifiers which achieve an accurate prediction of cancer. RESULTS We have used a real and unique dataset composed of 60 whole slide images. In a five-fold cross-validation, shallow and deep Gaussian processes obtain area under the ROC curve values higher than 0.98. They outperform current state-of-the-art patch-based shallow classifiers and are very competitive with the best performing deep learning method. Models were also compared on 17 whole slide test images using the FROC curve. At the cost of one false positive, the best performing method, the one-layer Gaussian process, identifies 83.87% (sensitivity) of all annotated cancer in the whole slide image. This result corroborates the quality of the extracted features: no more than one layer is needed to achieve excellent generalization. CONCLUSION Two new descriptors for extracting morphological features from histological images have been proposed. They collect highly relevant information for cancer detection.
From these descriptors, shallow and deep Gaussian processes are capable of capturing the complex structure of prostate histological images. The new space/descriptor/classifier paradigm outperforms state-of-the-art shallow classifiers. Furthermore, despite being much simpler, it is competitive with state-of-the-art CNN architectures both on the proposed SICAPv1 database and on an external database.
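The granulometry idea behind these descriptors can be illustrated with a plain grey-level opening spectrum: apply morphological openings with growing structuring elements and record how much image "mass" each scale removes. This sketch uses SciPy with a random image as a stand-in for the optical-density channel; it is a generic granulometry, not the paper's exact descriptor, and the square structuring elements and size range are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import grey_opening

rng = np.random.default_rng(1)
img = rng.random((64, 64))   # stand-in for an optical-density H&E channel

# Granulometry: total intensity remaining after openings with growing
# flat square structuring elements. Successive differences form the
# "pattern spectrum", a size distribution of bright structures.
sizes = range(1, 8)
volumes = [grey_opening(img, size=(s, s)).sum() for s in sizes]
spectrum = -np.diff(volumes)   # mass removed at each scale (non-negative)

# Openings with nested structuring elements only remove intensity,
# so the volume curve must be non-increasing.
assert all(a >= b for a, b in zip(volumes, volumes[1:]))
```

The resulting `spectrum` vector is the kind of scale-indexed feature that could then feed a Gaussian process classifier, as the abstract describes.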
Collapse
Affiliation(s)
- Ángel E Esteban
- Institute of Research and Innovation in Bioengineering, I3B, Polytechnic University of Valencia, Valencia, Spain.
| | - Miguel López-Pérez
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain.
| | - Adrián Colomer
- Institute of Research and Innovation in Bioengineering, I3B, Polytechnic University of Valencia, Valencia, Spain.
| | - María A Sales
- Anatomical Pathology Service, University Clinical Hospital of Valencia, Valencia, Spain.
| | - Rafael Molina
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain.
| | - Valery Naranjo
- Institute of Research and Innovation in Bioengineering, I3B, Polytechnic University of Valencia, Valencia, Spain.
| |
Collapse
|
48
|
|
49
|
Harmon SA, Tuncer S, Sanford T, Choyke PL, Türkbey B. Artificial intelligence at the intersection of pathology and radiology in prostate cancer. Diagn Interv Radiol 2019; 25:183-188. [PMID: 31063138 PMCID: PMC6521904 DOI: 10.5152/dir.2019.19125] [Citation(s) in RCA: 43] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2019] [Revised: 03/08/2019] [Accepted: 03/23/2019] [Indexed: 01/30/2023]
Abstract
Pathologic grading plays a key role in prostate cancer risk stratification and treatment selection, traditionally assessed from systematic core needle biopsies sampled throughout the prostate gland. Multiparametric magnetic resonance imaging (mpMRI) has become a well-established clinical tool for detecting and localizing prostate cancer. However, both pathologic and radiologic assessment suffer from poor reproducibility among readers. Artificial intelligence (AI) methods show promise in aiding detection and assessment in imaging-based tasks, contingent on the curation of high-quality training sets. This review provides an overview of recent advances in AI applied to mpMRI and digital pathology in prostate cancer, which enable advanced characterization of disease through combined radiology-pathology assessment.
Collapse
Affiliation(s)
- Stephanie A. Harmon
- From the Clinical Research Directorate (S.A.H. ), Frederick National Laboratory for Cancer Research sponsored by the National Cancer Institute, Frederick, MD, USA; Molecular Imaging Program (S.A.H.,T.S., P.L.C., B.T.), National Cancer Institute, NIH, Bethesda, MD, USA; Department of Radiology (S.T.), İstanbul University, İstanbul School of Medicine, İstanbul, Turkey
| | - Sena Tuncer
- From the Clinical Research Directorate (S.A.H. ), Frederick National Laboratory for Cancer Research sponsored by the National Cancer Institute, Frederick, MD, USA; Molecular Imaging Program (S.A.H.,T.S., P.L.C., B.T.), National Cancer Institute, NIH, Bethesda, MD, USA; Department of Radiology (S.T.), İstanbul University, İstanbul School of Medicine, İstanbul, Turkey
| | - Thomas Sanford
- From the Clinical Research Directorate (S.A.H. ), Frederick National Laboratory for Cancer Research sponsored by the National Cancer Institute, Frederick, MD, USA; Molecular Imaging Program (S.A.H.,T.S., P.L.C., B.T.), National Cancer Institute, NIH, Bethesda, MD, USA; Department of Radiology (S.T.), İstanbul University, İstanbul School of Medicine, İstanbul, Turkey
| | - Peter L. Choyke
- From the Clinical Research Directorate (S.A.H. ), Frederick National Laboratory for Cancer Research sponsored by the National Cancer Institute, Frederick, MD, USA; Molecular Imaging Program (S.A.H.,T.S., P.L.C., B.T.), National Cancer Institute, NIH, Bethesda, MD, USA; Department of Radiology (S.T.), İstanbul University, İstanbul School of Medicine, İstanbul, Turkey
| | - Barış Türkbey
- From the Clinical Research Directorate (S.A.H. ), Frederick National Laboratory for Cancer Research sponsored by the National Cancer Institute, Frederick, MD, USA; Molecular Imaging Program (S.A.H.,T.S., P.L.C., B.T.), National Cancer Institute, NIH, Bethesda, MD, USA; Department of Radiology (S.T.), İstanbul University, İstanbul School of Medicine, İstanbul, Turkey
| |
Collapse
|
50
|
Ho KC, Scalzo F, Sarma KV, Speier W, El-Saden S, Arnold C. Predicting ischemic stroke tissue fate using a deep convolutional neural network on source magnetic resonance perfusion images. J Med Imaging (Bellingham) 2019; 6:026001. [PMID: 31131293 PMCID: PMC6529818 DOI: 10.1117/1.jmi.6.2.026001] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2019] [Accepted: 04/18/2019] [Indexed: 01/09/2023] Open
Abstract
Predicting infarct volume from magnetic resonance perfusion-weighted imaging can provide helpful information to clinicians in deciding how aggressively to treat acute stroke patients. Models have been developed to predict tissue fate, yet these models are mostly built using hand-crafted features (e.g., time-to-maximum) derived from perfusion images, which are sensitive to deconvolution methods. We demonstrate the application of deep convolutional neural networks (CNNs) to predicting final stroke infarct volume using only the source perfusion images. We propose a deep CNN architecture that improves feature learning and achieves an area under the curve of 0.871 ± 0.024, outperforming existing tissue fate models. We further compare the proposed deep CNN against existing 2-D and 3-D deep CNNs for image/video classification, showing the importance of the proposed architecture. Our work leverages deep learning techniques for stroke tissue outcome prediction, advancing magnetic resonance perfusion analysis one step closer to an operational decision support tool for stroke treatment guidance.
Collapse
Affiliation(s)
- King Chung Ho
- University of California, Los Angeles, Department of Bioengineering, Los Angeles, California, United States
| | - Fabien Scalzo
- University of California, Los Angeles, Department of Computer Science, Los Angeles, California, United States
| | - Karthik V. Sarma
- University of California, Los Angeles, Department of Bioengineering, Los Angeles, California, United States
| | - William Speier
- University of California, Los Angeles, Department of Radiological Sciences, Los Angeles, California, United States
| | - Suzie El-Saden
- University of California, Los Angeles, Department of Radiological Sciences, Los Angeles, California, United States
| | - Corey Arnold
- University of California, Los Angeles, Department of Radiological Sciences, Los Angeles, California, United States
| |
Collapse
|