1. McGenity C, Clarke EL, Jennings C, Matthews G, Cartlidge C, Freduah-Agyemang H, Stocken DD, Treanor D. Artificial intelligence in digital pathology: a systematic review and meta-analysis of diagnostic test accuracy. NPJ Digit Med 2024; 7:114. [PMID: 38704465] [DOI: 10.1038/s41746-024-01106-8]
Abstract
Ensuring diagnostic performance of artificial intelligence (AI) before its introduction into clinical practice is essential. Growing numbers of studies using AI for digital pathology have been reported in recent years. The aim of this work was to examine the diagnostic accuracy of AI on digital pathology images for any disease. This systematic review and meta-analysis included diagnostic accuracy studies using any type of AI applied to whole slide images (WSIs) for any disease. The reference standard was diagnosis by histopathological assessment and/or immunohistochemistry. Searches were conducted in PubMed, EMBASE and CENTRAL in June 2022. Risk of bias and concerns of applicability were assessed using the QUADAS-2 tool. Data extraction was conducted by two investigators, and meta-analysis was performed using a bivariate random effects model, with additional subgroup analyses also performed. Of 2976 identified studies, 100 were included in the review and 48 in the meta-analysis. Studies came from a range of countries and included over 152,000 WSIs representing many diseases. These studies reported a mean sensitivity of 96.3% (CI 94.1-97.7) and mean specificity of 93.3% (CI 90.5-95.4). There was heterogeneity in study design, and 99% of included studies had at least one area at high or unclear risk of bias or applicability concerns. Details on case selection, the division of model development and validation data, and raw performance data were frequently ambiguous or missing. AI is reported as having high diagnostic accuracy in the areas studied but requires more rigorous evaluation of its performance.
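The pooled sensitivity and specificity above are synthesized from per-study 2×2 tables. As an illustrative sketch only (not the authors' bivariate random-effects analysis, which models the two quantities jointly), per-study sensitivity and specificity with Wilson 95% confidence intervals could be computed as follows; the counts are hypothetical:

```python
import math

def wilson_ci(successes, total, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table, each with a 95% CI."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sensitivity": (sens, wilson_ci(tp, tp + fn)),
            "specificity": (spec, wilson_ci(tn, tn + fp))}

# Hypothetical study: 96 of 100 diseased WSIs flagged, 93 of 100 normal cleared
result = diagnostic_accuracy(tp=96, fp=7, fn=4, tn=93)
```

In a real meta-analysis these per-study estimates would then be pooled while modeling between-study heterogeneity and the sensitivity-specificity correlation.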
Affiliation(s)
- Clare McGenity
- University of Leeds, Leeds, UK.
- Leeds Teaching Hospitals NHS Trust, Leeds, UK.
- Emily L Clarke
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Charlotte Jennings
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Darren Treanor
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Centre for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
2. Abbaker N, Minervini F, Guttadauro A, Solli P, Cioffi U, Scarci M. The future of artificial intelligence in thoracic surgery for non-small cell lung cancer treatment: a narrative review. Front Oncol 2024; 14:1347464. [PMID: 38414748] [PMCID: PMC10897973] [DOI: 10.3389/fonc.2024.1347464]
Abstract
Objectives To present a comprehensive review of the current state of artificial intelligence (AI) applications in lung cancer management, spanning the preoperative, intraoperative, and postoperative phases. Methods A review of the literature was conducted using PubMed, EMBASE and Cochrane, including relevant studies between 2002 and 2023 to identify the latest research on artificial intelligence and lung cancer. Conclusion While AI holds promise in managing lung cancer, challenges exist. In the preoperative phase, AI can improve diagnostics and predict biomarkers, particularly in cases with limited biopsy materials. During surgery, AI provides real-time guidance. Postoperatively, AI assists in pathology assessment and predictive modeling. Challenges include interpretability issues, training limitations affecting model use and AI's ineffectiveness beyond classification. Overfitting and global generalization, along with high computational costs and ethical frameworks, pose hurdles. Addressing these challenges requires a careful approach, considering ethical, technical, and regulatory factors. Rigorous analysis, external validation, and a robust regulatory framework are crucial for responsible AI implementation in lung surgery, reflecting the evolving synergy between human expertise and technology.
Affiliation(s)
- Namariq Abbaker
- Division of Thoracic Surgery, Imperial College NHS Healthcare Trust and National Heart and Lung Institute, London, United Kingdom
- Fabrizio Minervini
- Division of Thoracic Surgery, Luzerner Kantonsspital, Lucerne, Switzerland
- Angelo Guttadauro
- Division of Surgery, Università Milano-Bicocca and Istituti Clinici Zucchi, Monza, Italy
- Piergiorgio Solli
- Division of Thoracic Surgery, Policlinico S. Orsola-Malpighi, Bologna, Italy
- Ugo Cioffi
- Department of Surgery, University of Milan, Milan, Italy
- Marco Scarci
- Division of Thoracic Surgery, Imperial College NHS Healthcare Trust and National Heart and Lung Institute, London, United Kingdom
3. Ishiwata T, Yasufuku K. Artificial intelligence in interventional pulmonology. Curr Opin Pulm Med 2024; 30:92-98. [PMID: 37916605] [DOI: 10.1097/mcp.0000000000001024]
Abstract
PURPOSE OF REVIEW In recent years, there has been remarkable progress in artificial intelligence technology, and artificial intelligence applications have been extensively researched and actively implemented across many domains of healthcare. This study reviews the current state of artificial intelligence research in interventional pulmonology and discusses its capabilities and implications. RECENT FINDINGS Deep learning, a subset of artificial intelligence, has found extensive applications in recent years, enabling highly accurate identification and labeling of bronchial segments solely from intraluminal bronchial images. Furthermore, research has explored the use of artificial intelligence for the analysis of endobronchial ultrasound images, achieving a high degree of accuracy in distinguishing between benign and malignant targets within ultrasound images. These advancements have become possible due to the increased computational power of modern systems and the utilization of vast datasets, facilitating detections and predictions with greater precision and speed. SUMMARY Integration of artificial intelligence into interventional pulmonology has the potential to enhance diagnostic accuracy and patient safety, ultimately leading to improved patient outcomes. However, the clinical impacts of artificial intelligence-enhanced procedures remain unassessed. Additional research is necessary to evaluate both the advantages and disadvantages of artificial intelligence in the field of interventional pulmonology.
Affiliation(s)
- Tsukasa Ishiwata
- Division of Thoracic Surgery, Toronto General Hospital, University Health Network, Toronto, Ontario, Canada
4. Sun X, Li W, Fu B, Peng Y, He J, Wang L, Yang T, Meng X, Li J, Wang J, Huang P, Wang R. TGMIL: A hybrid multi-instance learning model based on the Transformer and the Graph Attention Network for whole-slide images classification of renal cell carcinoma. Comput Methods Programs Biomed 2023; 242:107789. [PMID: 37722310] [DOI: 10.1016/j.cmpb.2023.107789]
Abstract
BACKGROUND AND OBJECTIVES The pathological diagnosis of renal cell carcinoma is crucial for treatment. Currently, multi-instance learning is commonly used for whole-slide image classification of renal cell carcinoma, mainly under the assumption that instances are independent and identically distributed. This is inconsistent with the diagnostic need to consider correlations between instances. Furthermore, the high resource consumption of pathology images remains an urgent problem. We therefore propose a new multi-instance learning method. METHODS We propose a hybrid multi-instance learning model based on the Transformer and the Graph Attention Network, called TGMIL, to classify whole-slide images of renal cell carcinoma without pixel-level annotation or region-of-interest extraction. Our approach comprises three steps. First, we designed a feature pyramid using multiple low magnifications of the whole-slide image, named MMFP. It allows the model to incorporate richer information while reducing memory consumption and training time compared with using the highest magnification alone. Second, TGMIL combines the capabilities of the Transformer and the Graph Attention Network, addressing the loss of instance context and spatial information. Within the Graph Attention Network stream, a simple and efficient approach employing max pooling and mean pooling yields the graph adjacency matrix without extra memory consumption. Finally, the outputs of the two streams of TGMIL are aggregated to classify renal cell carcinoma. RESULTS On the TCGA-RCC validation set, a public dataset for renal cell carcinoma, the area under the receiver operating characteristic (ROC) curve (AUC) and accuracy of TGMIL were 0.98±0.0015 and 0.9191±0.0062, respectively. The model also performed well on a private validation set of renal cell carcinoma pathology images, attaining an AUC of 0.9386±0.0162 and an accuracy of 0.9197±0.0124. Furthermore, on the public breast cancer whole-slide image test dataset CAMELYON16, our model showed good classification performance with an accuracy of 0.8792. CONCLUSIONS TGMIL models the diagnostic process of pathologists and shows good classification performance on multiple datasets. Concurrently, the MMFP module efficiently reduces resource requirements, offering a new angle for exploring computational pathology images.
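TGMIL itself combines a Transformer stream and a Graph Attention Network stream; neither is reproduced here. As a much-simplified, hypothetical sketch of the underlying multi-instance idea — pooling per-patch (instance) feature vectors into a single slide-level vector with learned attention weights — one might write (all dimensions and parameters illustrative):

```python
import math
import random

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(instances, v, w):
    """Attention-based MIL pooling (simplified, in the spirit of Ilse et al.).

    instances: list of per-patch feature vectors from one WSI
    v: hidden projection matrix (hidden_dim x feat_dim)
    w: scoring vector (hidden_dim)
    Returns the attention-weighted slide-level feature vector.
    """
    scores = []
    for h in instances:
        hidden = [math.tanh(sum(v_k * h_k for v_k, h_k in zip(row, h))) for row in v]
        scores.append(sum(w_k * u_k for w_k, u_k in zip(w, hidden)))
    alphas = softmax(scores)  # one convex weight per patch
    feat_dim = len(instances[0])
    return [sum(a * h[d] for a, h in zip(alphas, instances)) for d in range(feat_dim)]

random.seed(0)
bag = [[random.gauss(0, 1) for _ in range(4)] for _ in range(6)]  # 6 patches, 4-dim features
V = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]    # 3x4 projection (illustrative)
w = [random.gauss(0, 1) for _ in range(3)]
slide_vector = attention_pool(bag, V, w)
```

In a trained model, V and w would be learned jointly with a downstream classifier over the slide-level vector; TGMIL additionally models inter-instance structure, which this sketch omits.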
Affiliation(s)
- Xinhuan Sun
- Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China; Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Wuchao Li
- Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Bangkang Fu
- Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Yunsong Peng
- Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Junjie He
- Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China; Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Lihui Wang
- Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
- Tongyin Yang
- Department of Pathology, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Xue Meng
- Department of Pathology, Affiliated Hospital of Zunyi Medical University, Zunyi, 563000, China
- Jin Li
- Department of Pathology, Affiliated Hospital of Zunyi Medical University, Zunyi, 563000, China
- Jinjing Wang
- Department of Pathology, Affiliated Hospital of Zunyi Medical University, Zunyi, 563000, China
- Ping Huang
- Department of Pathology, Guizhou Provincial People's Hospital, Guiyang, 550002, China
- Rongpin Wang
- Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guiyang, 550002, China.
5. Tavolara TE, Su Z, Gurcan MN, Niazi MKK. One label is all you need: Interpretable AI-enhanced histopathology for oncology. Semin Cancer Biol 2023; 97:70-85. [PMID: 37832751] [DOI: 10.1016/j.semcancer.2023.09.006]
Abstract
Artificial Intelligence (AI)-enhanced histopathology presents unprecedented opportunities to benefit oncology through interpretable methods that require only one overall label per hematoxylin and eosin (H&E) slide, with no tissue-level annotations. We present a structured review of these methods, organized by their degree of verifiability and by commonly recurring application areas in oncological characterization. First, we discuss morphological markers (tumor presence/absence, metastases, subtypes, grades) for which AI-identified regions of interest (ROIs) within whole slide images (WSIs) verifiably overlap with pathologist-identified ROIs. Second, we discuss molecular markers (gene expression, molecular subtyping) that are not verified via H&E but rather based on overlap with positive regions on adjacent tissue. Third, we discuss genetic markers (mutations, mutational burden, microsatellite instability, chromosomal instability) for which current technologies cannot verify whether AI methods spatially resolve specific genetic alterations. Fourth, we discuss the direct prediction of survival, with which AI-identified histopathological features quantitatively correlate but which is nonetheless not mechanistically verifiable. Finally, we discuss in detail several opportunities and challenges for these one-label-per-slide methods within oncology. Opportunities include reducing the cost of research and clinical care, reducing the workload of clinicians, personalized medicine, and unlocking the full potential of histopathology through new imaging-based biomarkers. Current challenges include explainability and interpretability, validation via adjacent tissue sections, reproducibility, data availability, computational needs, data requirements, domain adaptability, external validation, dataset imbalances, and finally commercialization and clinical potential. Ultimately, the relative ease and minimal upfront cost with which relevant data can be collected, together with the plethora of available AI methods for outcome-driven analysis, will surmount these current limitations and unlock the innumerable opportunities associated with AI-driven histopathology for the benefit of oncology.
Affiliation(s)
- Thomas E Tavolara
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Ziyu Su
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Metin N Gurcan
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- M Khalid Khan Niazi
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA.
6. Terada K, Yoshizawa A, Liu X, Ito H, Hamaji M, Menju T, Date H, Bise R, Haga H. Deep Learning for Predicting Effect of Neoadjuvant Therapies in Non-Small Cell Lung Carcinomas With Histologic Images. Mod Pathol 2023; 36:100302. [PMID: 37580019] [DOI: 10.1016/j.modpat.2023.100302]
Abstract
Neoadjuvant therapies are used for locally advanced non-small cell lung carcinomas, and pathologists histologically evaluate their effect using resected specimens. Major pathological response (MPR) has recently been used for treatment evaluation and as an economical survival surrogate; however, interobserver variability and poor reproducibility are often noted. The aim of this study was to develop a deep learning (DL) model to predict MPR from hematoxylin and eosin-stained tissue images and to validate its utility for clinical use. We collected data on 125 primary non-small cell lung carcinoma cases that were resected after neoadjuvant therapy. The cases were randomly divided into 55 for training/validation and 70 for testing. A total of 261 hematoxylin and eosin-stained slides were obtained from the maximum tumor beds, and whole slide images were prepared. We used a multiscale patch model that adaptively weights multiple convolutional neural networks trained on different field-of-view images. We performed 3-fold cross-validation to evaluate the model. During testing, we compared the percentages of viable tumor evaluated by annotator pathologists (reviewed data), those evaluated by nonannotator pathologists (primary data), and those predicted by the DL-based model, using 2-class confusion matrices and receiver operating characteristic (ROC) curves, and performed a survival analysis between MPR-achieved and non-MPR cases. In cross-validation, accuracy and mean F1 score were 0.859 and 0.805, respectively. During testing, accuracy and mean F1 score were 0.986 and 0.985 with reviewed data and 0.943 and 0.943 with primary data; the areas under the ROC curve with reviewed and primary data were 0.999 and 0.978, respectively. The disease-free survival of MPR-achieved cases was significantly better than that of non-MPR cases with both reviewed and primary data (P<.001 and P=.001), as it was for cases predicted by the DL-based model (P=.005). The DL model may support pathologist evaluations and can offer accurate determinations of MPR in patients.
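The accuracy and mean F1 scores reported above come from 2-class confusion matrices (MPR vs. non-MPR). A minimal sketch of that computation, with hypothetical labels rather than the study's data:

```python
def confusion_2class(y_true, y_pred):
    """2-class confusion matrix counts (tp, fp, fn, tn), with class 1 = MPR."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def accuracy_and_mean_f1(y_true, y_pred):
    """Accuracy plus the unweighted mean of per-class F1 scores."""
    tp, fp, fn, tn = confusion_2class(y_true, y_pred)
    acc = (tp + tn) / len(y_true)
    f1_pos = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    f1_neg = 2 * tn / (2 * tn + fn + fp) if (2 * tn + fn + fp) else 0.0
    return acc, (f1_pos + f1_neg) / 2

# Hypothetical test set: 10 cases, one MPR case missed by the model
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
acc, mean_f1 = accuracy_and_mean_f1(truth, pred)
```

Whether "mean F1" averages the two classes with or without class-size weighting is a reporting choice; the unweighted mean is shown here as one plausible reading.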
Affiliation(s)
- Kazuhiro Terada
- Department of Diagnostic Pathology, Kyoto University Hospital, Kyoto, Japan
- Akihiko Yoshizawa
- Department of Diagnostic Pathology, Kyoto University Hospital, Kyoto, Japan.
- Xiaoqing Liu
- Department of Advanced Information Technology, Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
- Hiroaki Ito
- Department of Diagnostic Pathology, Kyoto University Hospital, Kyoto, Japan
- Masatsugu Hamaji
- Department of Thoracic Surgery, Kyoto University Hospital, Kyoto, Japan
- Toshi Menju
- Department of Thoracic Surgery, Kyoto University Hospital, Kyoto, Japan
- Hiroshi Date
- Department of Thoracic Surgery, Kyoto University Hospital, Kyoto, Japan
- Ryoma Bise
- Department of Advanced Information Technology, Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
- Hironori Haga
- Department of Diagnostic Pathology, Kyoto University Hospital, Kyoto, Japan
7. Mu Y, Tizhoosh HR, Dehkharghanian T, Campbell CJV. Whole slide image representation in bone marrow cytology. Comput Biol Med 2023; 166:107530. [PMID: 37837726] [DOI: 10.1016/j.compbiomed.2023.107530]
Abstract
One of the goals of AI-based computational pathology is to generate compact representations of whole slide images (WSIs) that capture the essential information needed for diagnosis. While such approaches have been applied to histopathology, few applications have been reported in cytology. Bone marrow aspirate cytology is the basis for key clinical decisions in hematology. However, visual inspection of aspirate specimens is a tedious and complex process subject to variation in interpretation, and hematopathology expertise is scarce. The ability to generate a compact representation of an aspirate specimen may form the basis for clinical decision-support tools in hematology. In this study, we leverage our previously published end-to-end AI-based system for counting and classifying cells from bone marrow aspirate WSIs, which enables the direct use of individual cells as inputs rather than WSI patches. We then construct bags of individual cell features from each WSI, and apply multiple instance learning to extract their vector representations. To evaluate the quality of our representations, we conducted WSI retrieval and classification tasks. Our results show that we achieved a mAP@10 of 0.58 ± 0.02 in WSI-level image retrieval, surpassing the random-retrieval baseline of 0.39 ± 0.1. Furthermore, we predicted five diagnostic labels for individual aspirate WSIs with a weighted-average F1 score of 0.57 ± 0.03 using a k-nearest-neighbors (k-NN) model, outperforming guessing using empirical class prior probabilities (0.26 ± 0.02). We present the first example of exploring trainable mechanisms to generate compact, slide-level representations in bone marrow cytology with deep learning. This method has the potential to summarize complex semantic information in WSIs toward improved diagnostics in hematology, and may eventually support AI-assisted computational pathology approaches.
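The mAP@10 retrieval metric used above averages, over query slides, the precision at each rank (within the top 10) at which a slide with the matching diagnostic label appears. A hedged sketch with hypothetical labels (this is one common AP@k variant; the authors' exact normalization is not specified here):

```python
def average_precision_at_k(retrieved_labels, query_label, k=10):
    """AP@k: mean of precision@i over the ranks i <= k where a relevant item appears."""
    hits = 0
    precisions = []
    for i, label in enumerate(retrieved_labels[:k], start=1):
        if label == query_label:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_ap_at_k(results, k=10):
    """results: list of (query_label, ranked list of retrieved labels)."""
    return sum(average_precision_at_k(r, q, k) for q, r in results) / len(results)

# Hypothetical retrievals for two query slides (labels are illustrative diagnoses)
queries = [
    ("AML", ["AML", "MM", "AML", "AML", "NORM"]),
    ("MM",  ["NORM", "MM", "MM", "AML", "MM"]),
]
score = mean_ap_at_k(queries, k=5)
```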
Affiliation(s)
- Youqing Mu
- University of Toronto, Toronto, Canada; McMaster University, Hamilton, Canada
- H R Tizhoosh
- Rhazes Lab, Artificial Intelligence & Informatics, Mayo Clinic, Rochester, MN, USA
- Taher Dehkharghanian
- McMaster University, Hamilton, Canada; University Health Network, Toronto, Canada
- Clinton J V Campbell
- McMaster University, Hamilton, Canada; William Osler Health System, Brampton, Canada.
8. Yang Y, Sun K, Gao Y, Wang K, Yu G. Preparing Data for Artificial Intelligence in Pathology with Clinical-Grade Performance. Diagnostics (Basel) 2023; 13:3115. [PMID: 37835858] [PMCID: PMC10572440] [DOI: 10.3390/diagnostics13193115]
Abstract
Pathology is decisive for disease diagnosis but relies heavily on experienced pathologists. In recent years, there has been growing interest in the use of artificial intelligence in pathology (AIP) to enhance diagnostic accuracy and efficiency. However, the impressive performance of deep learning-based AIP in laboratory settings often proves challenging to replicate in clinical practice. Because data preparation is central to AIP, this paper reviewed AIP-related studies in the PubMed database published from January 2017 to February 2022; 118 studies were included. An in-depth analysis of data preparation methods is conducted, encompassing the acquisition of pathological tissue slides, data cleaning, screening, and subsequent digitization. Expert review, image annotation, and dataset division for model training and validation are also discussed. Furthermore, we delve into the reasons why the high performance of AIP is hard to reproduce in clinical settings and present effective strategies to enhance AIP's clinical performance. The robustness of AIP depends on a randomized collection of representative disease slides, incorporating rigorous quality control and screening, correction of digital discrepancies, reasonable annotation, and sufficient data volume. Digital pathology is fundamental to clinical-grade AIP, and data standardization together with weakly supervised learning based on whole slide images (WSIs) are effective ways to overcome obstacles to performance reproduction. The key to reproducibility lies in having representative data, an adequate amount of labeling, and consistency across multiple centers. Digital pathology for clinical diagnosis, data standardization, and WSI-based weakly supervised learning will hopefully build clinical-grade AIP.
Affiliation(s)
- Yuanqing Yang
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Kai Sun
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China
- Furong Laboratory, Changsha 410013, China
- Yanhua Gao
- Department of Ultrasound, Shaanxi Provincial People’s Hospital, Xi’an 710068, China
- Kuansong Wang
- Department of Pathology, School of Basic Medical Sciences, Central South University, Changsha 410013, China
- Department of Pathology, Xiangya Hospital, Central South University, Changsha 410013, China
- Gang Yu
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China
9. Ali MA, Fujita D, Kobashi S. Teeth and prostheses detection in dental panoramic X-rays using CNN-based object detector and a priori knowledge-based algorithm. Sci Rep 2023; 13:16542. [PMID: 37783773] [PMCID: PMC10545749] [DOI: 10.1038/s41598-023-43591-z]
Abstract
Deep learning techniques for automatically detecting teeth in dental X-rays have gained popularity, providing valuable assistance to healthcare professionals. However, teeth detection in X-ray images is often hindered by alterations in tooth appearance caused by dental prostheses. To address this challenge, our paper proposes a novel method for teeth detection and numbering in dental panoramic X-rays, leveraging two separate YOLOv7-based object detectors, one for teeth and one for prostheses, alongside an optimization algorithm to refine the outcomes. The study utilizes a dataset of 3138 radiographs, of which 2553 contain prostheses, to build a robust model. The tooth and prosthesis detection algorithms perform excellently, achieving mean average precisions of 0.982 and 0.983, respectively. Additionally, the trained tooth detection model is verified using an external dataset, and six-fold cross-validation is conducted to demonstrate the proposed method's feasibility and robustness. Moreover, investigating the performance improvement gained by including prosthesis information in the teeth detection process reveals a marginal increase in the average F1-score, from 0.985 to 0.987, compared with the teeth-only detection method. The proposed method is unique in its approach to numbering teeth, as it incorporates prosthesis information and considers complete restorations such as dental implants and the dentures of fixed bridges during enumeration, which follows the universal tooth numbering system. These advancements hold promise for automating dental charting processes.
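The mean average precision figures above depend on matching predicted boxes to ground-truth annotations, typically via an intersection-over-union (IoU) threshold. A minimal sketch with hypothetical boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlapping rectangle (empty if the boxes are disjoint)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted tooth box vs. its ground-truth annotation (pixel coords, hypothetical)
overlap = iou((10, 10, 50, 50), (30, 10, 70, 50))
```

A prediction is usually counted as a true positive only when its IoU with an unmatched ground-truth box exceeds a threshold such as 0.5; the exact threshold used in the study is not restated here.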
Affiliation(s)
- Md Anas Ali
- Graduate School of Engineering, University of Hyogo, Himeji, Japan.
- Daisuke Fujita
- Graduate School of Engineering, University of Hyogo, Himeji, Japan
- Syoji Kobashi
- Graduate School of Engineering, University of Hyogo, Himeji, Japan
10. Sambyal D, Sarwar A. Recent developments in cervical cancer diagnosis using deep learning on whole slide images: An Overview of models, techniques, challenges and future directions. Micron 2023; 173:103520. [PMID: 37556898] [DOI: 10.1016/j.micron.2023.103520]
Abstract
Integration of whole slide imaging (WSI) and deep learning technology has led to significant improvements in the screening and diagnosis of cervical cancer. WSI enables the examination of all cells on a slide simultaneously, and deep learning algorithms can accurately label them as cancerous or non-cancerous. Although many studies have investigated the application of deep learning for diagnosing various diseases, there is a lack of research focusing on the evolution, limitations, and gaps of intelligent algorithms in conjunction with WSI for cervical cancer. This paper provides a comprehensive overview of the state-of-the-art deep learning algorithms used for the timely and precise analysis of cervical WSIs. A total of 115 relevant papers were reviewed, and 37 were selected after screening with specific inclusion and exclusion criteria. Methodological aspects, including deep learning techniques, data sources, architectures, and classification techniques employed by the selected studies, were analyzed. The review presents the most popular techniques and current trends in deep learning-based cervical classification systems, and categorizes the evolution of the domain based on deep learning techniques, with an in-depth analysis of various models developed over time. The paper advocates for supervised transfer learning when utilizing deep learning models such as ResNet, VGG19, and EfficientNet, and builds a solid foundation for applying relevant techniques in other fields. Although some progress has been made in developing novel models for the diagnosis of cervical cancer, substantial work remains in creating standardized benchmark databases of WSIs for the research community. This paper serves as a comprehensive guide to the fundamental concepts, benefits, and challenges of various deep learning models on WSI, including their application to cervical classification, and provides valuable insights into future research directions in this area.
Affiliation(s)
- Abid Sarwar
- Department of CS&IT, University of Jammu, India.
11
Kinoshita F, Takenaka T, Yamashita T, Matsumoto K, Oku Y, Ono Y, Wakasu S, Haratake N, Tagawa T, Nakashima N, Mori M. Development of artificial intelligence prognostic model for surgically resected non-small cell lung cancer. Sci Rep 2023; 13:15683. [PMID: 37735585] [PMCID: PMC10514331] [DOI: 10.1038/s41598-023-42964-8]
Abstract
There are great expectations for artificial intelligence (AI) in medicine. We aimed to develop an AI prognostic model for surgically resected non-small cell lung cancer (NSCLC). This study enrolled 1049 patients with pathological stage I-IIIA surgically resected NSCLC at Kyushu University. We set 17 clinicopathological factors and 30 preoperative and 22 postoperative blood test results as explanatory variables. Disease-free survival (DFS), overall survival (OS), and cancer-specific survival (CSS) were set as objective variables. eXtreme Gradient Boosting (XGBoost) was used as the machine learning algorithm. The median age was 69 (range 23-89) years, and 605 patients (57.7%) were male. The numbers of patients with pathological stage IA, IB, IIA, IIB, and IIIA disease were 553 (52.7%), 223 (21.4%), 100 (9.5%), 55 (5.3%), and 118 (11.2%), respectively. The 5-year DFS, OS, and CSS rates were 71.0%, 82.8%, and 88.7%, respectively. In our AI prognostic model, the areas under the receiver operating characteristic curves for DFS, OS, and CSS at 5 years were 0.890, 0.926, and 0.960, respectively. The AI prognostic model using XGBoost showed good prediction accuracy and provided accurate predictive probabilities of postoperative prognosis in NSCLC.
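As an editorial aside (not the authors' code), the areas under the ROC curve reported above have a simple rank-based interpretation: the probability that a randomly chosen event case receives a higher predicted risk than a randomly chosen non-event case. A minimal stdlib Python sketch with hypothetical labels and scores:

```python
def rank_auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly;
    ties count half. Labels and scores here are illustrative only."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated predictions give AUC = 1.0
print(rank_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # 1.0
```

In practice a library routine would be used, but the pairwise definition is what makes values such as 0.890 or 0.960 directly comparable across the three survival endpoints.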
Affiliation(s)
- Fumihiko Kinoshita
- Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Department of Thoracic Oncology, National Hospital Organization Kyushu Cancer Center, Fukuoka, Japan
- Tomoyoshi Takenaka
- Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan.
- Yuka Oku
- Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Department of Thoracic Oncology, National Hospital Organization Kyushu Cancer Center, Fukuoka, Japan
- Yuki Ono
- Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Sho Wakasu
- Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Naoki Haratake
- Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Tetsuzo Tagawa
- Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Naoki Nakashima
- Medical Information Center, Kyushu University Hospital, Fukuoka, Japan
- Masaki Mori
- Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
12
Davri A, Birbas E, Kanavos T, Ntritsos G, Giannakeas N, Tzallas AT, Batistatou A. Deep Learning for Lung Cancer Diagnosis, Prognosis and Prediction Using Histological and Cytological Images: A Systematic Review. Cancers (Basel) 2023; 15:3981. [PMID: 37568797] [PMCID: PMC10417369] [DOI: 10.3390/cancers15153981]
Abstract
Lung cancer is one of the deadliest cancers worldwide, with a high incidence rate, especially among tobacco smokers. Accurate lung cancer diagnosis is based on distinct histological patterns combined with molecular data for personalized treatment. Precise lung cancer classification from a single H&E slide can be challenging for a pathologist, often requiring additional histochemical and immunohistochemical stains for the final pathology report. According to the WHO, small biopsy and cytology specimens are the available materials for about 70% of lung cancer patients with advanced-stage unresectable disease. The limited diagnostic material therefore necessitates optimal management and processing to complete diagnosis and predictive testing according to published guidelines. In the new era of digital pathology, deep learning offers the potential to assist pathologists' routine practice in lung cancer interpretation. Herein, we systematically review current artificial intelligence-based approaches using histological and cytological images of lung cancer. Most of the published literature centers on the distinction between lung adenocarcinoma, lung squamous cell carcinoma, and small cell lung carcinoma, reflecting the pathologist's realistic routine. Furthermore, several studies have developed algorithms for determining the predominant architectural pattern of lung adenocarcinoma, predicting prognosis, characterizing mutational status, and estimating PD-L1 expression status.
Affiliation(s)
- Athena Davri
- Department of Pathology, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45500 Ioannina, Greece
- Effrosyni Birbas
- Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Theofilos Kanavos
- Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Georgios Ntritsos
- Department of Hygiene and Epidemiology, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Department of Informatics and Telecommunications, University of Ioannina, 47100 Arta, Greece
- Nikolaos Giannakeas
- Department of Informatics and Telecommunications, University of Ioannina, 47100 Arta, Greece
- Alexandros T. Tzallas
- Department of Informatics and Telecommunications, University of Ioannina, 47100 Arta, Greece
- Anna Batistatou
- Department of Pathology, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45500 Ioannina, Greece
13
Lee J, Warner E, Shaikhouni S, Bitzer M, Kretzler M, Gipson D, Pennathur S, Bellovich K, Bhat Z, Gadegbeku C, Massengill S, Perumal K, Saha J, Yang Y, Luo J, Zhang X, Mariani L, Hodgin JB, Rao A. Clustering-based spatial analysis (CluSA) framework through graph neural network for chronic kidney disease prediction using histopathology images. Sci Rep 2023; 13:12701. [PMID: 37543648] [PMCID: PMC10404289] [DOI: 10.1038/s41598-023-39591-8]
Abstract
Machine learning applied to digital pathology has been increasingly used to assess kidney function and diagnose the underlying cause of chronic kidney disease (CKD). We developed a novel computational framework, clustering-based spatial analysis (CluSA), that leverages unsupervised learning to learn spatial relationships between local visual patterns in kidney tissue. This framework minimizes the need for time-consuming and impractical expert annotations. A total of 107,471 histopathology images obtained from 172 biopsy cores were used in the clustering and in the deep learning model. To incorporate spatial information over the clustered image patterns on the biopsy sample, we spatially encoded clustered patterns with colors and performed spatial analysis through a graph neural network. A random forest classifier with various groups of features was used to predict CKD. For predicting eGFR at the biopsy, we achieved a sensitivity of 0.97, a specificity of 0.90, an accuracy of 0.95, and an AUC of 0.96. For predicting eGFR change at one year, we achieved a sensitivity of 0.83, a specificity of 0.85, an accuracy of 0.84, and an AUC of 0.85. This study presents the first such spatial analysis based on unsupervised machine learning algorithms. Without expert annotation, the CluSA framework can not only accurately classify and predict the degree of kidney function at the biopsy and at one year, but also identify novel predictors of kidney function and renal prognosis.
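To illustrate the general idea behind annotation-free, cluster-based slide features (a toy sketch, not the CluSA implementation): once each tissue patch has been assigned to a visual-pattern cluster by unsupervised learning, a simple slide-level feature is the normalized frequency of each cluster across the biopsy, which a downstream classifier such as a random forest could consume. Cluster IDs below are hypothetical.

```python
from collections import Counter

def cluster_histogram(patch_clusters, n_clusters):
    """Normalized frequency of each visual-pattern cluster across a
    slide's patches: one simple slide-level feature vector (illustrative)."""
    counts = Counter(patch_clusters)
    total = len(patch_clusters)
    return [counts.get(c, 0) / total for c in range(n_clusters)]

# Four patches assigned to three clusters -> per-cluster proportions
print(cluster_histogram([0, 0, 1, 2], 3))  # [0.5, 0.25, 0.25]
```

CluSA itself goes further by encoding the spatial arrangement of these clusters through a graph neural network, but the histogram view shows why no pixel-level expert annotation is required.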
Affiliation(s)
- Joonsang Lee
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA.
- Elisa Warner
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Salma Shaikhouni
- Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Markus Bitzer
- Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Matthias Kretzler
- Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Debbie Gipson
- Department of Pediatrics, Pediatric Nephrology, University of Michigan, Ann Arbor, MI, USA
- Subramaniam Pennathur
- Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Keith Bellovich
- Department of Internal Medicine, Nephrology, St. Clair Nephrology Research, Detroit, MI, USA
- Zeenat Bhat
- Department of Internal Medicine, Nephrology, Wayne State University, Detroit, MI, USA
- Crystal Gadegbeku
- Department of Internal Medicine, Nephrology, Cleveland Clinic, Cleveland, OH, USA
- Susan Massengill
- Department of Pediatrics, Pediatric Nephrology, Levine Children's Hospital, Charlotte, NC, USA
- Kalyani Perumal
- Department of Internal Medicine, Nephrology, JH Stroger Hospital, Chicago, IL, USA
- Jharna Saha
- Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Yingbao Yang
- Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Jinghui Luo
- Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Xin Zhang
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Laura Mariani
- Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Jeffrey B Hodgin
- Department of Pathology, University of Michigan, Ann Arbor, MI, USA.
- Arvind Rao
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA.
- Department of Biostatistics, University of Michigan, Ann Arbor, MI, USA.
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA.
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA.
14
Verghese G, Lennerz JK, Ruta D, Ng W, Thavaraj S, Siziopikou KP, Naidoo T, Rane S, Salgado R, Pinder SE, Grigoriadis A. Computational pathology in cancer diagnosis, prognosis, and prediction - present day and prospects. J Pathol 2023; 260:551-563. [PMID: 37580849] [PMCID: PMC10785705] [DOI: 10.1002/path.6163]
Abstract
Computational pathology refers to applying deep learning techniques and algorithms to analyse and interpret histopathology images. Advances in artificial intelligence (AI) have led to an explosion in innovation in computational pathology, ranging from the prospect of automation of routine diagnostic tasks to the discovery of new prognostic and predictive biomarkers from tissue morphology. Despite the promising potential of computational pathology, its integration in clinical settings has been limited by a range of obstacles including operational, technical, regulatory, ethical, financial, and cultural challenges. Here, we focus on the pathologists' perspective of computational pathology: we map its current translational research landscape, evaluate its clinical utility, and address the more common challenges slowing clinical adoption and implementation. We conclude by describing contemporary approaches to drive forward these techniques.
Affiliation(s)
- Gregory Verghese
- School of Cancer & Pharmaceutical Sciences, Faculty of Life Sciences and Medicine, King's College London, London, UK
- The Breast Cancer Now Research Unit, School of Cancer and Pharmaceutical Sciences, Faculty of Life Sciences and Medicine, King's College London, London, UK
- Jochen K Lennerz
- Center for Integrated Diagnostics, Department of Pathology, Massachusetts General Hospital/Harvard Medical School, Boston, MA, USA
- Danny Ruta
- Guy's Cancer, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Wen Ng
- Department of Cellular Pathology, Guy's and St Thomas NHS Foundation Trust, London, UK
- Selvam Thavaraj
- Head & Neck Pathology, Guy's and St Thomas NHS Foundation Trust, London, UK
- Centre for Clinical, Oral & Translational Science, Faculty of Dentistry, Oral & Craniofacial Sciences, King's College London, London, UK
- Kalliopi P Siziopikou
- Department of Pathology, Section of Breast Pathology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Threnesan Naidoo
- Department of Laboratory Medicine and Pathology, Walter Sisulu University, Mthatha, Eastern Cape, South Africa; Africa Health Research Institute, Durban, South Africa
- Swapnil Rane
- Department of Pathology, Tata Memorial Centre - ACTREC, HBNI, Navi Mumbai, India
- Computational Pathology, AI & Imaging Laboratory, Tata Memorial Centre - ACTREC, HBNI, Navi Mumbai, India
- Roberto Salgado
- Department of Pathology, GZA-ZNA Ziekenhuizen, Antwerp, Belgium
- Division of Research, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
- Sarah E Pinder
- School of Cancer & Pharmaceutical Sciences, Faculty of Life Sciences and Medicine, King's College London, London, UK
- Department of Cellular Pathology, Guy's and St Thomas NHS Foundation Trust, London, UK
- Anita Grigoriadis
- School of Cancer & Pharmaceutical Sciences, Faculty of Life Sciences and Medicine, King's College London, London, UK
- The Breast Cancer Now Research Unit, School of Cancer and Pharmaceutical Sciences, Faculty of Life Sciences and Medicine, King's College London, London, UK
15
Asif A, Rajpoot K, Graham S, Snead D, Minhas F, Rajpoot N. Unleashing the potential of AI for pathology: challenges and recommendations. J Pathol 2023; 260:564-577. [PMID: 37550878] [PMCID: PMC10952719] [DOI: 10.1002/path.6168]
Abstract
Computational pathology is currently witnessing a surge in the development of AI techniques, offering promise for achieving breakthroughs and significantly impacting the practices of pathology and oncology. These AI methods bring with them the potential to revolutionize diagnostic pipelines as well as treatment planning and overall patient care. Numerous peer-reviewed studies reporting remarkable performance across diverse tasks serve as a testimony to the potential of AI in the field. However, widespread adoption of these methods in clinical and pre-clinical settings still remains a challenge. In this review article, we present a detailed analysis of the major obstacles encountered during the development of effective models and their deployment in practice. We aim to provide readers with an overview of the latest developments, assist them with insights into identifying some specific challenges that may require resolution, and suggest recommendations and potential future research directions.
Affiliation(s)
- Amina Asif
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Kashif Rajpoot
- School of Computer Science, University of Birmingham, Birmingham, UK
- Simon Graham
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- David Snead
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Coventry, UK
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- The Alan Turing Institute, London, UK
16
Liu H, Gao T, Liu Z, Shu M. FGSQA-Net: A Weakly Supervised Approach to Fine-Grained Electrocardiogram Signal Quality Assessment. IEEE J Biomed Health Inform 2023; 27:3844-3855. [PMID: 37247317] [DOI: 10.1109/jbhi.2023.3280931]
Abstract
OBJECTIVE: Due to the lack of fine-grained labels, current research can only evaluate signal quality at a coarse scale. This article proposes a weakly supervised fine-grained electrocardiogram (ECG) signal quality assessment method, which can produce continuous segment-level quality scores with only coarse labels. METHODS: A novel network architecture, FGSQA-Net, is developed for signal quality assessment, consisting of a feature shrinking module and a feature aggregation module. Multiple feature shrinking blocks, each combining a residual CNN block and a max pooling layer, are stacked to produce a feature map corresponding to continuous segments along the spatial dimension. Segment-level quality scores are obtained by feature aggregation along the channel dimension. RESULTS: The proposed method was evaluated on two real-world ECG databases and one synthetic dataset. Our method produced an average AUC value of 0.975, outperforming the state-of-the-art beat-by-beat quality assessment method. Results are visualized for 12-lead and single-lead signals at granularities from 0.64 to 1.7 seconds, demonstrating that high-quality and low-quality segments can be effectively distinguished at a fine scale. CONCLUSION: FGSQA-Net is flexible and effective for fine-grained quality assessment of various ECG recordings and is suitable for ECG monitoring using wearable devices. SIGNIFICANCE: This is the first study on fine-grained ECG quality assessment using weak labels and can be generalized to similar tasks for other physiological signals.
17
Bilal M, Jewsbury R, Wang R, AlGhamdi HM, Asif A, Eastwood M, Rajpoot N. An aggregation of aggregation methods in computational pathology. Med Image Anal 2023; 88:102885. [PMID: 37423055] [DOI: 10.1016/j.media.2023.102885]
Abstract
Image analysis and machine learning algorithms operating on multi-gigapixel whole-slide images (WSIs) often process a large number of tiles (sub-images) and require aggregating predictions from the tiles in order to predict WSI-level labels. In this paper, we present a review of existing literature on various types of aggregation methods with a view to help guide future research in the area of computational pathology (CPath). We propose a general CPath workflow with three pathways that consider multiple levels and types of data and the nature of computation to analyse WSIs for predictive modelling. We categorize aggregation methods according to the context and representation of the data, features of computational modules and CPath use cases. We compare and contrast different methods based on the principle of multiple instance learning, perhaps the most commonly used aggregation method, covering a wide range of CPath literature. To provide a fair comparison, we consider a specific WSI-level prediction task and compare various aggregation methods for that task. Finally, we conclude with a list of objectives and desirable attributes of aggregation methods in general, pros and cons of the various approaches, some recommendations and possible future directions.
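To make the aggregation problem concrete (an editorial sketch, not drawn from the review itself), the simplest tile-to-WSI aggregation rules reduce a list of per-tile probabilities to one slide-level score. The tile scores below are hypothetical; in practice they would come from a tile-level model.

```python
def aggregate(tile_scores, method="mean", k=3):
    """Combine per-tile probabilities into one WSI-level score using
    three simple pooling rules (illustrative)."""
    ranked = sorted(tile_scores, reverse=True)
    if method == "max":                     # one confident tile decides
        return ranked[0]
    if method == "topk":                    # mean of the k highest tiles
        return sum(ranked[:k]) / min(k, len(ranked))
    return sum(tile_scores) / len(tile_scores)  # mean over all tiles

scores = [0.05, 0.10, 0.92, 0.88, 0.15]
print(aggregate(scores, "max"))  # 0.92
```

The contrast between rules is the crux: mean pooling dilutes a small focus of tumour across many benign tiles, max pooling is sensitive to a single false-positive tile, and learned aggregators such as multiple instance learning models sit between these extremes.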
Affiliation(s)
- Mohsin Bilal
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; School of Computing, National University of Computer and Emerging Sciences, Islamabad, Pakistan
- Robert Jewsbury
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Ruoyu Wang
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Hammam M AlGhamdi
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Amina Asif
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Mark Eastwood
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, UK.
18
Yu J, Ma T, Fu Y, Chen H, Lai M, Zhuo C, Xu Y. Local-to-global spatial learning for whole-slide image representation and classification. Comput Med Imaging Graph 2023; 107:102230. [PMID: 37116341] [DOI: 10.1016/j.compmedimag.2023.102230]
Abstract
Whole-slide images (WSIs) provide an important reference for clinical diagnosis. Classification with only WSI-level labels can be framed as a multi-instance learning (MIL) task. However, most existing MIL-based WSI classification methods show moderate performance in mining correlations between instances, limited by their instance-level classification strategy. Herein, we propose a novel local-to-global spatial learning method, Global-Local Attentional Multi-Instance Learning (GLAMIL), which mines global position and local morphological information by redefining the MIL-based WSI classification strategy and is better at learning WSI-level representations. GLAMIL can focus on regional relationships rather than single instances. It first learns relationships between patches in the local pool to aggregate region correlations (tissue types of a WSI). These correlations can then be further mined to form a WSI-level representation in which position correlations between different regions are modeled. Furthermore, Transformer layers are employed to model global and local spatial information rather than simply being used as feature extractors, and the corresponding structural improvements are presented. We evaluate GLAMIL on three benchmarks covering various challenging factors and achieve satisfactory results, outperforming state-of-the-art methods and baselines by about 1% and 10%, respectively.
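The attentional pooling at the heart of attention-based MIL methods like the one above can be sketched in a few lines (an editorial illustration, not the GLAMIL implementation): each patch feature receives a scalar score, the scores are softmaxed into weights, and the bag representation is the weighted sum. The features `feats` and scoring vector `w` below are toy stand-ins for learned embeddings and parameters.

```python
import math

def attention_pool(feats, w):
    """Softmax-attention pooling over patch feature vectors: score each
    patch (dot product with w), softmax the scores into weights, and
    return the weighted-sum bag representation plus the weights."""
    logits = [sum(a * b for a, b in zip(f, w)) for f in feats]
    m = max(logits)                          # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    alphas = [e / z for e in exps]           # attention weights, sum to 1
    dim = len(feats[0])
    pooled = [sum(a * f[d] for a, f in zip(alphas, feats))
              for d in range(dim)]
    return pooled, alphas
```

A patch whose feature aligns with `w` dominates the pooled representation, which is what lets a WSI-level label supervise patch-level attention without patch annotations.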
Affiliation(s)
- Jiahui Yu
- Department of Biomedical Engineering, Key Laboratory of Biomedical Engineering of Ministry of Education, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou 310027, China; Innovation Center for Smart Medical Technologies & Devices, Binjiang Institute of Zhejiang University, Hangzhou 310053, China
- Tianyu Ma
- Innovation Center for Smart Medical Technologies & Devices, Binjiang Institute of Zhejiang University, Hangzhou 310053, China
- Yu Fu
- College of Information Science & Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Hang Chen
- Department of Biomedical Engineering, Key Laboratory of Biomedical Engineering of Ministry of Education, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou 310027, China
- Maode Lai
- Department of Pathology, School of Medicine, Zhejiang University, Hangzhou 310053, China
- Cheng Zhuo
- College of Information Science & Electronic Engineering, Zhejiang University, Hangzhou 310027, China
- Yingke Xu
- Department of Biomedical Engineering, Key Laboratory of Biomedical Engineering of Ministry of Education, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou 310027, China; Innovation Center for Smart Medical Technologies & Devices, Binjiang Institute of Zhejiang University, Hangzhou 310053, China; Department of Endocrinology, Children's Hospital of Zhejiang University School of Medicine, National Clinical Research Center for Children's Health, Hangzhou, Zhejiang 310051, China.
19
Sun K, Chen Y, Bai B, Gao Y, Xiao J, Yu G. Automatic Classification of Histopathology Images across Multiple Cancers Based on Heterogeneous Transfer Learning. Diagnostics (Basel) 2023; 13:1277. [PMID: 37046497] [PMCID: PMC10093253] [DOI: 10.3390/diagnostics13071277]
Abstract
Background: Current artificial intelligence (AI) in histopathology typically specializes in a single task, resulting in a heavy workload of collecting and labeling a sufficient number of images for each type of cancer. Heterogeneous transfer learning (HTL) is expected to alleviate the data bottlenecks and establish models with performance comparable to supervised learning (SL). Methods: An accurate source domain model was trained using 28,634 colorectal cancer (CRC) patches. Additionally, 1000 sentinel lymph node patches and 1008 breast patches were used to train two target domain models. The feature distribution difference between sentinel lymph node metastasis or breast cancer and CRC was reduced by heterogeneous domain adaptation, and the maximum mean discrepancy between subdomains was used for knowledge transfer to achieve accurate classification across multiple cancers. Results: HTL on 1000 sentinel lymph node patches (L-HTL-1000) outperforms SL on 1000 sentinel lymph node patches (L-SL-1-1000) (average area under the curve (AUC) ± standard deviation: 0.949 ± 0.004 vs. 0.931 ± 0.008, p = 0.008). There is no significant difference between L-HTL-1000 and SL on 7104 patches (L-SL-2-7104) (0.949 ± 0.004 vs. 0.948 ± 0.008, p = 0.742). Similar results are observed for breast cancer: B-HTL-1008 vs. B-SL-1-1008, 0.962 ± 0.017 vs. 0.943 ± 0.018, p = 0.008; B-HTL-1008 vs. B-SL-2-5232, 0.962 ± 0.017 vs. 0.951 ± 0.023, p = 0.148. Conclusions: HTL is capable of building accurate AI models for similar cancers using a small amount of data based on a large dataset for one type of cancer. HTL holds great promise for accelerating the development of AI in histopathology.
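The subdomain-difference term used for knowledge transfer above is a maximum mean discrepancy (MMD). In its simplest, linear-kernel form, MMD reduces to the squared distance between the mean feature vectors of the source and target samples; a toy stdlib sketch (illustrative only, with hypothetical feature values, not the authors' implementation):

```python
def mmd_linear(src, tgt):
    """Squared Euclidean distance between mean feature vectors of a
    source and a target sample: the linear-kernel special case of
    maximum mean discrepancy (illustrative)."""
    dim = len(src[0])
    mean = lambda xs, d: sum(x[d] for x in xs) / len(xs)
    return sum((mean(src, d) - mean(tgt, d)) ** 2 for d in range(dim))

# Identical means -> zero discrepancy, nothing for adaptation to reduce
print(mmd_linear([[0.0, 0.0], [2.0, 2.0]], [[1.0, 1.0]]))  # 0.0
```

Domain adaptation methods add a term like this to the training loss so that source-domain (e.g. colorectal) and target-domain (e.g. lymph node or breast) features are pulled toward a shared distribution; practical systems use richer kernels, but the mean-matching intuition is the same.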
20
Gürsoy E, Kaya Y. An overview of deep learning techniques for COVID-19 detection: methods, challenges, and future works. Multimed Syst 2023; 29:1603-1627. [PMID: 37261262] [PMCID: PMC10039775] [DOI: 10.1007/s00530-023-01083-0]
Abstract
The World Health Organization (WHO) declared a pandemic in response to the coronavirus COVID-19 in 2020, which resulted in numerous deaths worldwide. Although the disease appears to have lost its impact, millions of people have been affected by this virus, and new infections still occur. Identifying COVID-19 requires a reverse transcription-polymerase chain reaction test (RT-PCR) or analysis of medical data. Due to the high cost and time required to scan and analyze medical data, researchers are focusing on using automated computer-aided methods. This review examines the applications of deep learning (DL) and machine learning (ML) in detecting COVID-19 using medical data such as CT scans, X-rays, cough sounds, MRIs, ultrasound, and clinical markers. First, the data preprocessing, the features used, and the current COVID-19 detection methods are divided into two subsections, and the studies are discussed. Second, the reported publicly available datasets, their characteristics, and the potential comparison materials mentioned in the literature are presented. Third, a comprehensive comparison is made by contrasting the similar and different aspects of the studies. Finally, the results, gaps, and limitations are summarized to stimulate the improvement of COVID-19 detection methods, and the study concludes by listing some future research directions for COVID-19 classification.
Affiliation(s)
- Ercan Gürsoy
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey
- Yasin Kaya
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey
21
Li M, Abe M, Nakano S, Tsuneki M. Deep Learning Approach to Classify Cutaneous Melanoma in a Whole Slide Image. Cancers (Basel) 2023; 15:1907. [PMID: 36980793] [PMCID: PMC10047087] [DOI: 10.3390/cancers15061907]
Abstract
Although the histopathological diagnosis of cutaneous melanocytic lesions is fairly accurate and reliable among experienced surgical pathologists, it is not perfect in every case (especially melanoma). Microscopic examination with clinicopathological correlation is the gold standard for the definitive diagnosis of melanoma. Pathologists may encounter diagnostic controversies when melanoma closely mimics Spitz's nevus or blue nevus, exhibits amelanotic histopathology, or is in situ. It would be beneficial if the diagnosis of cutaneous melanocytic lesions could be automated by deep learning, particularly to assist surgical pathologists with their workloads. In this preliminary study, we investigated the application of deep learning to classifying cutaneous melanoma in whole-slide images (WSIs). We trained models via weakly supervised learning using a dataset of 66 WSIs (33 melanomas and 33 non-melanomas). We evaluated the models on a test set of 90 WSIs (40 melanomas and 50 non-melanomas); the best model achieved ROC-AUCs of 0.821 at the WSI level and 0.936 at the tile level.
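A minimal sketch of the tile-to-slide aggregation that weakly supervised WSI classifiers of this kind typically use; the function name and the max/mean pooling rules are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def aggregate_wsi_score(tile_probs, rule="max"):
    """Aggregate per-tile melanoma probabilities into one WSI-level score.

    tile_probs: 1-D array of tile-level probabilities in [0, 1].
    rule: 'max' flags the WSI by its most suspicious tile;
          'mean' averages evidence across all tiles.
    """
    tile_probs = np.asarray(tile_probs, dtype=float)
    if rule == "max":
        return float(tile_probs.max())
    if rule == "mean":
        return float(tile_probs.mean())
    raise ValueError(f"unknown rule: {rule}")

# A WSI containing one highly suspicious tile is flagged under the 'max' rule.
wsi_score = aggregate_wsi_score([0.02, 0.10, 0.95, 0.05], rule="max")
```

Max-pooling mirrors the clinical intuition that a single malignant region suffices to call the slide positive, at the cost of sensitivity to false-positive tiles.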
Affiliation(s)
- Meng Li
- Medmain Research, Medmain Inc., Fukuoka 810-0042, Japan
- Makoto Abe
- Department of Pathology, Tochigi Cancer Center, 4-9-13 Yohnan, Utsunomiya 320-0834, Japan
- Shigeo Nakano
- Department of Surgical Pathology, Tokyo Shinagawa Hospital, 6-3-22 Higashi-Ooi, Shinagawa, Tokyo 140-8522, Japan

22
Li K, Qian Z, Han Y, Chang EIC, Wei B, Lai M, Liao J, Fan Y, Xu Y. Weakly supervised histopathology image segmentation with self-attention. Med Image Anal 2023; 86:102791. [PMID: 36933385 DOI: 10.1016/j.media.2023.102791] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Revised: 01/09/2023] [Accepted: 02/24/2023] [Indexed: 03/13/2023]
Abstract
Accurate pixel-level segmentation of histopathology images plays a critical role in the digital pathology workflow. Weakly supervised methods for histopathology image segmentation free pathologists from time-consuming and labor-intensive work, opening up possibilities for further automated quantitative analysis of whole-slide histopathology images. As an effective family of weakly supervised methods, multiple instance learning (MIL) has achieved great success on histopathology images. In this paper, we treat pixels as instances, so that the histopathology image segmentation task becomes an instance-prediction task in MIL. However, the lack of relations between instances in MIL limits further improvement of segmentation performance. We therefore propose a novel weakly supervised method called SA-MIL for pixel-level segmentation of histopathology images. SA-MIL introduces a self-attention mechanism into the MIL framework, capturing global correlation among all instances. In addition, we use deep supervision to make the best use of the information in the limited annotations. By aggregating global contextual information, our approach compensates for the assumption in MIL that instances are independent of each other. We demonstrate state-of-the-art results compared to other weakly supervised methods on two histopathology image datasets. The high performance on both tissue and cell histopathology datasets indicates that the approach generalizes, and it has potential for various applications in medical images.
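The self-attention step SA-MIL adds to MIL can be sketched as follows; this is generic single-head attention over instance embeddings, not the authors' exact architecture, and all shapes are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over instance embeddings.

    X: (n_instances, d); Wq, Wk, Wv: (d, d_k) projection matrices.
    Every instance attends to every other instance, supplying the
    global correlation that plain MIL (independent instances) lacks.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)  # (n, n) weights
    return A @ V  # context-enriched instance features, (n, d_k)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                      # 5 instances, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
ctx = self_attention(X, Wq, Wk, Wv)              # (5, 4)
```

Each output row is a weighted mixture of all instance values, which is how global context reaches every pixel-instance.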
Affiliation(s)
- Kailu Li
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China.
- Ziniu Qian
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China.
- Yingnan Han
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China.
- Maode Lai
- Department of Pathology, School of Medicine, Zhejiang University, Hangzhou 310027, China.
- Jing Liao
- Department of Computer Science, City University of Hong Kong, 999077, Hong Kong SAR, China.
- Yubo Fan
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China.
- Yan Xu
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China; Microsoft Research, Beijing 100080, China.

23
Liang M, Chen Q, Li B, Wang L, Wang Y, Zhang Y, Wang R, Jiang X, Zhang C. Interpretable classification of pathology whole-slide images using attention based context-aware graph convolutional neural network. Comput Methods Programs Biomed 2023; 229:107268. [PMID: 36495811 DOI: 10.1016/j.cmpb.2022.107268] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Revised: 11/23/2022] [Accepted: 11/23/2022] [Indexed: 06/17/2023]
Abstract
BACKGROUND AND OBJECTIVE Whole slide image (WSI) classification and lesion localization within giga-pixel slides are challenging tasks in computational pathology that require context-aware representations of histological features to adequately infer the nidus. Existing weakly supervised learning methods mainly treat different locations in the slide as independent regions and, under the i.i.d. assumption, cannot learn potential nonlinear interactions between instances; as a result, the model cannot effectively use context-aware information to predict WSI labels or locate the region of interest (ROI). METHODS Here, we propose an interpretable classification model named bidirectional Attention-based Multiple Instance Learning Graph Convolutional Network (ABMIL-GCN), which hierarchically aggregates context-aware instance features into a global representation in a topological fashion to predict slide labels and localize regions of lymph node metastasis in WSIs. RESULTS We verified the superiority of this method on the Camelyon16 dataset: the average predicted ACC and AUC of the proposed model after flooding optimization reach 90.89% and 0.9149, respectively, improvements of more than 7% and 4% over existing state-of-the-art algorithms. CONCLUSIONS The results demonstrate that a context-aware GCN outperforms existing weakly supervised learning methods by introducing spatial correlations between neighboring image patches, which also addresses the 'accuracy-interpretability trade-off' problem. The framework provides a novel paradigm for the clinical application of computer-aided diagnosis and intelligent systems.
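The context-aware graph convolution underlying such models can be sketched with the standard renormalised GCN propagation rule; the patch graph, features, and weights here are placeholders, not the ABMIL-GCN implementation:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step over a patch-adjacency graph.

    A: (n, n) 0/1 symmetric adjacency among neighbouring WSI patches.
    H: (n, d) patch features; W: (d, d_out) learnable weights.
    Standard renormalised propagation H' = relu(D^-1/2 (A+I) D^-1/2 H W),
    so each patch mixes in its spatial neighbours' features -- the
    context-awareness the abstract refers to.
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Two adjacent patches exchange features in a single propagation step.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
out = gcn_layer(A, np.eye(2), np.eye(2))
```

Stacking such layers lets information travel farther across the slide, one patch-neighbourhood per layer.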
Affiliation(s)
- Meiyan Liang
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China.
- Qinghui Chen
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China
- Bo Li
- Department of Rehabilitation Treatment, Shanxi Rongjun Hospital, Taiyuan 030000, China
- Lin Wang
- Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Third Hospital of Shanxi Medical University, Taiyuan, 030032, China; Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Ying Wang
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China
- Yu Zhang
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China
- Ru Wang
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China
- Xing Jiang
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan 030006, China
- Cunlin Zhang
- Beijing Key Laboratory for Terahertz Spectroscopy and Imaging, Key Laboratory of Terahertz Optoelectronics, Ministry of Education, Capital Normal University, Beijing 100048, China

24
Tsuneki M, Abe M, Ichihara S, Kanavati F. Inference of core needle biopsy whole slide images requiring definitive therapy for prostate cancer. BMC Cancer 2023; 23:11. [PMID: 36600203 DOI: 10.1186/s12885-022-10488-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Accepted: 12/26/2022] [Indexed: 01/06/2023] Open
Abstract
BACKGROUND Prostate cancer is often a slowly progressive, indolent disease. Unnecessary treatment due to overdiagnosis is a significant concern, particularly for low-grade disease. Active surveillance has been considered as a risk-management strategy to avoid the potential side effects of unnecessary radical treatment. In 2016, the American Society of Clinical Oncology (ASCO) endorsed the Cancer Care Ontario (CCO) Clinical Practice Guideline on active surveillance for the management of localized prostate cancer. METHODS Based on this guideline, we developed a deep learning model to classify prostate adenocarcinoma on core needle biopsy whole slide images (WSIs) as indolent (appropriate for active surveillance) or aggressive (requiring definitive therapy). We trained deep learning models using a combination of transfer, weakly supervised, and fully supervised learning approaches on a dataset of core needle biopsy WSIs (n=1300). In addition, we performed an inter-rater reliability evaluation of the WSI classification. RESULTS We evaluated the models on a test set (n=645), achieving ROC-AUCs of 0.846 for indolent and 0.980 for aggressive. The inter-rater reliability evaluation showed s-scores in the range of 0.10 to 0.95, lowest on WSIs the model classified as both indolent and aggressive, and highest on benign WSIs. CONCLUSION The results demonstrate the promising potential of deployment in a practical prostate adenocarcinoma histopathological diagnostic workflow.
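The ROC-AUCs quoted here (and throughout this listing) can be computed directly from model scores via the rank-based Mann-Whitney formulation; a minimal sketch, independent of any particular study's pipeline:

```python
import numpy as np

def roc_auc(y_true, y_score):
    """ROC-AUC via the rank-sum (Mann-Whitney U) formulation: the
    probability that a randomly chosen positive scores above a randomly
    chosen negative, counting ties as one half."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # pairwise wins
    ties = (pos[:, None] == neg[None, :]).sum()     # pairwise ties
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# One positive out-scored by a negative -> AUC drops below 1.
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

This O(n²) pairwise version is fine for illustration; production code would sort once and use ranks.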
Affiliation(s)
- Masayuki Tsuneki
- Medmain Research, Medmain Inc., 2-4-5-104, Akasaka, Chuo-ku, Fukuoka, 810-0042, Japan.
- Makoto Abe
- Department of Pathology, Tochigi Cancer Center, 4-9-13 Yohnan, Utsunomiya, 320-0834, Japan
- Shin Ichihara
- Department of Surgical Pathology, Sapporo Kosei General Hospital, 8-5 Kita-3-jo Higashi, Chuo-ku, Sapporo, 060-0033, Japan
- Fahdi Kanavati
- Medmain Research, Medmain Inc., 2-4-5-104, Akasaka, Chuo-ku, Fukuoka, 810-0042, Japan

25
Tsuneki M, Abe M, Kanavati F. Deep Learning-Based Screening of Urothelial Carcinoma in Whole Slide Images of Liquid-Based Cytology Urine Specimens. Cancers (Basel) 2022; 15. [PMID: 36612222 DOI: 10.3390/cancers15010226] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Revised: 12/26/2022] [Accepted: 12/27/2022] [Indexed: 01/01/2023] Open
Abstract
Urinary cytology is a useful, essential diagnostic method in routine urological clinical practice. Liquid-based cytology (LBC) is commonly used for urothelial carcinoma screening in routine clinical cytodiagnosis because of its high cellular yield. Since conventional microscopic screening by cytoscreeners and cytopathologists is limited by available human resources, it is important to integrate new deep learning methods that can automatically and rapidly diagnose large numbers of specimens without delay. The goal of this study was to investigate deep learning models for classifying urine LBC whole-slide images (WSIs) as neoplastic or non-neoplastic (negative). We trained deep learning models on 786 WSIs using transfer learning, fully supervised, and weakly supervised learning approaches. We evaluated the trained models on two test sets, one of which was representative of the clinical distribution of neoplastic cases, with a combined total of 750 WSIs; the best model achieved an area under the curve for diagnosis in the range of 0.984-0.990, demonstrating the promising potential of our model for aiding urine cytodiagnostic processes.
26
Mridha MF, Prodeep AR, Hoque ASMM, Islam MR, Lima AA, Kabir MM, Hamid MA, Watanobe Y. A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. J Healthc Eng 2022; 2022:5905230. [PMID: 36569180 DOI: 10.1155/2022/5905230] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 10/17/2022] [Accepted: 11/09/2022] [Indexed: 12/23/2022]
Abstract
Lung cancer is the leading cause of cancer death worldwide, and its death rate continues to rise. Early detection improves the chances of recovery. However, because radiologists are few in number and overworked, the growth in image data makes it hard for them to evaluate images accurately. As a result, many researchers have developed automated methods that use medical imaging to predict the growth of cancer cells quickly and accurately. Much previous work addressed computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT), magnetic resonance imaging (MRI), and X-ray, aiming at effective detection and segmentation of pulmonary nodules and at classifying nodules as malignant or benign. Still, no comprehensive review covering all aspects of lung cancer detection has been done. In this paper, every aspect of lung cancer detection is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study examines several lung cancer-related issues and possible solutions.
27
Tavolara TE, Gurcan MN, Niazi MKK. Contrastive Multiple Instance Learning: An Unsupervised Framework for Learning Slide-Level Representations of Whole Slide Histopathology Images without Labels. Cancers (Basel) 2022; 14:cancers14235778. [PMID: 36497258 PMCID: PMC9738801 DOI: 10.3390/cancers14235778] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Revised: 11/16/2022] [Accepted: 11/19/2022] [Indexed: 11/25/2022] Open
Abstract
Recent methods in computational pathology have trended towards semi- and weakly-supervised approaches requiring only slide-level labels. Yet even slide-level labels may be absent or irrelevant to the application of interest, as in clinical trials. Hence, we present a fully unsupervised method to learn meaningful, compact representations of WSIs. Our method first trains a tile-wise encoder using SimCLR, from which subsets of tile-wise embeddings are extracted and fused via an attention-based multiple-instance learning framework to yield slide-level representations. Intra-slide embeddings are attracted and inter-slide embeddings repelled via a contrastive loss, yielding self-supervised slide-level representations. We applied our method to two tasks: (1) non-small cell lung cancer (NSCLC) subtyping as a classification prototype and (2) breast cancer proliferation scoring (TUPAC16) as a regression prototype, achieving an AUC of 0.8641 ± 0.0115 and a correlation (R2) of 0.5740 ± 0.0970, respectively. Ablation experiments demonstrate that the resulting unsupervised slide-level feature space can be fine-tuned with small datasets for both tasks. Overall, our method approaches computational pathology in a novel manner: meaningful features can be learned from whole-slide images without slide-level label annotations, which theoretically enables researchers to benefit from completely unlabeled whole-slide images.
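The attention-based MIL fusion of tile embeddings into a slide-level representation can be sketched as a learned weighted average; the function name and parameter shapes are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling: fuse tile embeddings into a single
    slide-level vector as a learned weighted average.

    H: (n_tiles, d) tile embeddings (e.g. from a SimCLR-trained encoder).
    V: (d, h) and w: (h,) are the learnable attention parameters.
    """
    scores = np.tanh(H @ V) @ w        # (n_tiles,) raw attention scores
    a = np.exp(scores - scores.max())
    a = a / a.sum()                    # normalised tile weights, sum to 1
    return a @ H                       # (d,) slide-level representation
```

In the paper's setting, the resulting slide vectors would then be pushed together (same slide) or apart (different slides) by a contrastive loss.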
28
Tsuneki M, Kanavati F. Weakly supervised learning for multi-organ adenocarcinoma classification in whole slide images. PLoS One 2022; 17:e0275378. [PMID: 36417401 PMCID: PMC9683606 DOI: 10.1371/journal.pone.0275378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2022] [Accepted: 09/15/2022] [Indexed: 11/25/2022] Open
Abstract
Primary screening by automated computational pathology algorithms for the presence or absence of adenocarcinoma in biopsy specimens (e.g., endoscopic biopsy, transbronchial lung biopsy, and needle biopsy) of possible primary organs (e.g., stomach, colon, lung, and breast) and in radical lymph node dissection specimens would be a powerful tool to assist surgical pathologists in the routine histopathological diagnostic workflow. In this paper, we trained multi-organ deep learning models to classify adenocarcinoma in whole slide images (WSIs) of biopsy and radical lymph node dissection specimens. We evaluated the models on five independent test sets (stomach, colon, lung, breast, lymph nodes) to demonstrate feasibility on multi-organ and lymph node specimens from different medical institutions, achieving receiver operating characteristic areas under the curves (ROC-AUCs) in the range of 0.91-0.98.
Affiliation(s)
- Masayuki Tsuneki
- Medmain Research, Medmain Inc., Akasaka, Chuo-ku, Fukuoka, Japan
- Fahdi Kanavati
- Medmain Research, Medmain Inc., Akasaka, Chuo-ku, Fukuoka, Japan

29
Kosaraju S, Park J, Lee H, Yang JW, Kang M. Deep learning-based framework for slide-based histopathological image analysis. Sci Rep 2022; 12:19075. [PMID: 36351997 PMCID: PMC9646838 DOI: 10.1038/s41598-022-23166-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Accepted: 10/26/2022] [Indexed: 11/11/2022] Open
Abstract
Digital pathology coupled with advanced machine learning (e.g., deep learning) has been changing the paradigm of whole-slide histopathological image (WSI) analysis. Major applications of machine learning in digital pathology include automatic cancer classification, survival analysis, and subtyping from pathological images. While most pathological image analyses use patch-wise processing because of the extremely large size of histopathology images, several applications predict a single clinical outcome or perform a pathological diagnosis per slide (e.g., cancer classification, survival analysis). However, current slide-based analyses are task-dependent, and a general framework for slide-based WSI analysis has seldom been investigated. We propose a novel slide-based histopathology analysis framework that creates a WSI representation map, called HipoMap, which can be applied to any slide-based problem in combination with convolutional neural networks. HipoMap converts a WSI of arbitrary shape and size into a structured, image-type representation. As a general, flexible framework for slide-based analysis, HipoMap outperformed existing methods in extensive experiments with various settings and datasets: an Area Under the Curve (AUC) of 0.96±0.026 (5% improvement) for lung cancer classification, and a c-index of 0.787±0.013 (3.5% improvement) and coefficient of determination (R2) of 0.978±0.032 (24% improvement) for survival analysis and survival prediction on TCGA lung cancer data, respectively. The results showed significant improvements over current state-of-the-art methods on each task. We further discussed the experimental results from a pathological viewpoint and verified the performance using publicly available TCGA datasets. A Python package is available at https://pypi.org/project/hipomap and can be installed with pip. The open-source code in Python is available at https://github.com/datax-lab/HipoMap.
Affiliation(s)
- Sai Kosaraju
- Department of Computer Science, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
- Jeongyeon Park
- Department of Computer Science, Sun Moon University, Asan, 336708, South Korea
- Hyun Lee
- Department of Computer Science, Sun Moon University, Asan, 336708, South Korea
- Jung Wook Yang
- Department of Pathology, Gyeongsang National University Hospital, Gyeongsang National University College of Medicine, Jinju, South Korea
- Mingon Kang
- Department of Computer Science, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA

30
Civit-Masot J, Bañuls-Beaterio A, Domínguez-Morales M, Rivas-Pérez M, Muñoz-Saavedra L, Rodríguez Corral JM. Non-small cell lung cancer diagnosis aid with histopathological images using Explainable Deep Learning techniques. Comput Methods Programs Biomed 2022; 226:107108. [PMID: 36113183 DOI: 10.1016/j.cmpb.2022.107108] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 07/26/2022] [Accepted: 09/01/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND Lung cancer has the highest mortality rate of any cancer worldwide, twice that of the second highest. Meanwhile, pathologists are overworked, which reduces the time spent on each patient and harms diagnostic turnaround time and success rate. OBJECTIVE In this work, we design, implement, and evaluate a diagnostic aid system for non-small cell lung cancer detection using Deep Learning techniques. METHODS The classifier is based on Artificial Intelligence techniques and automatically classifies a histopathological image of lung tissue as healthy, adenocarcinoma, or squamous cell carcinoma. Moreover, a report module based on Explainable Deep Learning techniques gives the pathologist information about the areas of the image used to classify the sample and the confidence of belonging to each class. RESULTS The system's accuracy is between 97.11% and 99.69%, depending on the number of classes, with an area under the ROC curve between 99.77% and 99.94%. CONCLUSIONS The classification results substantially improve on previous work. Thanks to the generated report, both the time spent by the pathologist and the diagnostic turnaround time can be reduced.
Affiliation(s)
- Javier Civit-Masot
- Architecture and Computer Technology department (ATC), Robotics and Technology of Computers Lab (RTC), E.T.S. Ingeniería Informática, Avda. Reina Mercedes s/n, Universidad de Sevilla, Seville, 41012, Spain
- Alejandro Bañuls-Beaterio
- Architecture and Computer Technology department (ATC), Robotics and Technology of Computers Lab (RTC), E.T.S. Ingeniería Informática, Avda. Reina Mercedes s/n, Universidad de Sevilla, Seville, 41012, Spain
- Manuel Domínguez-Morales
- Architecture and Computer Technology department (ATC), Robotics and Technology of Computers Lab (RTC), E.T.S. Ingeniería Informática, Avda. Reina Mercedes s/n, Universidad de Sevilla, Seville, 41012, Spain; Computer Engineering Research Institute (I3US), E.T.S. Ingeniería Informática, Avda. Reina Mercedes s/n, Universidad de Sevilla, Seville, 41012, Spain.
- Manuel Rivas-Pérez
- Architecture and Computer Technology department (ATC), Robotics and Technology of Computers Lab (RTC), E.T.S. Ingeniería Informática, Avda. Reina Mercedes s/n, Universidad de Sevilla, Seville, 41012, Spain
- Luis Muñoz-Saavedra
- Architecture and Computer Technology department (ATC), Robotics and Technology of Computers Lab (RTC), E.T.S. Ingeniería Informática, Avda. Reina Mercedes s/n, Universidad de Sevilla, Seville, 41012, Spain
- José M Rodríguez Corral
- Computer Science department, School of Engineering, Avda. Universidad de Cádiz 10, Universidad de Cádiz, Puerto Real (Cádiz), 11519, Spain

31
Aboobacker S, Vijayasenan D, S SD, Suresh PK, Sreeram S. Semantic segmentation of low magnification effusion cytology images: A semi-supervised approach. Comput Biol Med 2022; 150:106179. [PMID: 36252367 DOI: 10.1016/j.compbiomed.2022.106179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 09/09/2022] [Accepted: 10/01/2022] [Indexed: 11/17/2022]
Abstract
Cytopathologists examine microscopic images at various magnifications to identify malignancy in effusions. They locate malignant cell clusters at low magnification and then zoom in to investigate cell-level features at high magnification. This study predicts malignancy at low magnification levels such as 4X and 10X in effusion cytology images in order to reduce scanning time. The most challenging problem, however, is annotating the low-magnification images, particularly the 4X images. This paper extends two semi-supervised learning (SSL) models, MixMatch and FixMatch, to semantic segmentation; the original algorithms are designed for classification tasks. Because image augmentation spatially alters the generated pseudo labels, we introduce reverse augmentation to compensate for these spatial alterations. The extended models are trained using labelled 10X and unlabelled 4X images. The average F-score of benign and malignant pixels on predictions of 4X images improves by approximately 9% for both Extended MixMatch and Extended FixMatch compared with the baseline model. With Extended MixMatch, 62% of the sub-regions of low-magnification images are eliminated from scanning at higher magnification, saving scanning time.
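The reverse-augmentation idea, undoing a spatial augmentation on a predicted pseudo-label mask so it re-aligns with the original image, can be sketched as follows; horizontal flips and 90-degree rotations stand in for whatever augmentations a real pipeline uses:

```python
import numpy as np

def augment(img, flip, rot):
    """Spatial augmentation: optional horizontal flip, then `rot`
    quarter-turn rotations."""
    if flip:
        img = img[:, ::-1]
    return np.rot90(img, rot)

def reverse_augment(mask, flip, rot):
    """Undo the augmentation on a predicted pseudo-label mask so it
    aligns with the unaugmented image: apply the inverse operations
    in reverse order (un-rotate, then un-flip)."""
    mask = np.rot90(mask, -rot)
    if flip:
        mask = mask[:, ::-1]
    return mask

# Round trip recovers the original pixel layout exactly.
img = np.arange(16).reshape(4, 4)
assert np.array_equal(reverse_augment(augment(img, True, 3), True, 3), img)
```

In SSL training the forward augmentation is applied to the image before the network, and the reverse is applied to the network's mask prediction before comparing it with the pseudo label.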
Affiliation(s)
- Shajahan Aboobacker
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, 575025, Karnataka, India.
- Deepu Vijayasenan
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, 575025, Karnataka, India
- Sumam David S
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, 575025, Karnataka, India
- Pooja K Suresh
- Department of Pathology, Kasturba Medical College Mangalore, Manipal Academy of Higher Education, Manipal, 575001, Karnataka, India
- Saraswathy Sreeram
- Department of Pathology, Kasturba Medical College Mangalore, Manipal Academy of Higher Education, Manipal, 575001, Karnataka, India

32
Lee W, Kim JH, Lee S, Kim K, Kang TS, Han YS. Estimation of best corrected visual acuity based on deep neural network. Sci Rep 2022; 12:17808. [PMID: 36280678 PMCID: PMC9589880 DOI: 10.1038/s41598-022-22586-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Accepted: 10/17/2022] [Indexed: 01/19/2023] Open
Abstract
In this study, we investigated a convolutional neural network (CNN)-based framework for estimating the best-corrected visual acuity (BCVA) from fundus images. First, we collected 53,318 fundus photographs from Gyeongsang National University Changwon Hospital, each categorized into one of 11 levels by retrospective medical chart review. Then, we designed 4 BCVA estimation schemes using transfer learning with pre-trained ResNet-18 and EfficientNet-B0 models, considering both regression- and classification-based prediction. The BCVA predicted by the CNN-based schemes is close to the actual value: 94.37% prediction accuracy is achieved when up to 3 levels of difference are tolerated. The mean squared error and R2 score were 0.028 and 0.654, respectively. These results indicate that the BCVA can be predicted accurately even in extreme cases, i.e., when the BCVA level is close to 0.0 or 1.0. Moreover, using Guided Grad-CAM, we confirmed that the macula and the blood vessels surrounding it are mainly utilized in the prediction of BCVA, which validates the rationality of the CNN-based schemes, since the same area is also examined during the retrospective medical chart review. Finally, we applied t-distributed stochastic neighbor embedding to examine the characteristics of the CNN-based BCVA estimation schemes. The developed schemes can provide an objective measurement of BCVA as well as medical screening for people with poor access to medical care through smartphone-based fundus imaging.
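The tolerance-based accuracy reported above (94.37% when up to 3 levels of difference are tolerated) corresponds to a simple computation over the 11 discrete BCVA levels; a sketch with made-up labels, not the study's data:

```python
import numpy as np

def tolerant_accuracy(y_true, y_pred, tol=3):
    """Fraction of predictions within `tol` levels of the true BCVA level."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float((np.abs(y_true - y_pred) <= tol).mean())

# Levels run 0..10; the third prediction is off by 4 and falls outside tol.
acc = tolerant_accuracy([0, 5, 10], [2, 5, 6], tol=3)  # 2 of 3 within tolerance
```

At tol=0 this reduces to exact-match accuracy, so the metric interpolates between strict classification accuracy and a regression-style error band.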
Affiliation(s)
- Woongsup Lee
- Department of Information and Communication Engineering, Gyeongsang National University, Tongyeong, Republic of Korea
- Jin Hyun Kim
- Department of Information and Communication Engineering, Gyeongsang National University, Tongyeong, Republic of Korea.
- Seongjin Lee
- Department of AI Convergence Engineering, Gyeongsang National University, Jinju, Republic of Korea
- Kyonghoon Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Republic of Korea
- Tae Seen Kang
- Department of Ophthalmology, Gyeongsang National University Changwon Hospital, #11 Samjeongja-ro, Seongsan-gu, Changwon, 51472, Republic of Korea
- Yong Seop Han
- Department of Ophthalmology, Gyeongsang National University Changwon Hospital, #11 Samjeongja-ro, Seongsan-gu, Changwon, 51472, Republic of Korea.
- Department of Ophthalmology, Gyeongsang National University College of Medicine, Institute of Health Sciences, Jinju, Republic of Korea.

33
Alqudah AM, Qazan S, Obeidat YM. Deep learning models for detecting respiratory pathologies from raw lung auscultation sounds. Soft Comput 2022; 26:13405-13429. [PMID: 36186666 PMCID: PMC9510581 DOI: 10.1007/s00500-022-07499-6] [Citation(s) in RCA: 0] [Accepted: 09/09/2022] [Indexed: 11/23/2022]
Abstract
In recent years, deep learning models have improved diagnostic performance for many diseases, especially respiratory diseases. This paper evaluates the performance of different deep learning models on raw lung auscultation sounds for detecting respiratory pathologies in digitally recorded respiratory sounds, and identifies the best deep learning model for this task. Three different deep learning models were evaluated on non-augmented and augmented data, where two different datasets were used to generate four sub-datasets. The results show that all the proposed deep learning methods were successful and achieved high performance in classifying the raw lung sounds across all datasets, both with and without augmentation. Among all models, the CNN-LSTM model was the best on every dataset in both the augmented and non-augmented cases. Its accuracy without augmentation was 99.6%, 99.8%, 82.4%, and 99.4% for datasets 1, 2, 3, and 4, respectively, and with augmentation was 100%, 99.8%, 98.0%, and 99.5%. The augmentation process thus notably enhanced the models' performance on the testing datasets. Moreover, the hybrid model combining CNN and LSTM outperformed models based on only one of these techniques, mainly because the CNN performs automatic deep feature extraction from the lung sound while the LSTM performs the classification.
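Waveform-level augmentations of the kind that helped these models can be sketched in a few lines; the function names and parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def time_shift(x, max_frac=0.1, rng=None):
    """Circularly shift a 1-D waveform by up to max_frac of its length."""
    rng = rng or np.random.default_rng()
    limit = int(len(x) * max_frac)
    shift = rng.integers(-limit, limit + 1)
    return np.roll(x, shift)

def add_noise(x, snr_db=20.0, rng=None):
    """Add white Gaussian noise at a target signal-to-noise ratio (dB)."""
    rng = rng or np.random.default_rng()
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(noise_power), size=x.shape)

wave = np.sin(np.linspace(0, 8 * np.pi, 4000))  # stand-in for a lung sound
augmented = add_noise(time_shift(wave), snr_db=20.0)
```

Each augmented copy keeps the original label, enlarging the training set without new recordings.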
Affiliation(s)
- Ali Mohammad Alqudah
- Department of Biomedical Systems and Informatics Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
- Shoroq Qazan
- Department of Computer Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
- Yusra M Obeidat
- Department of Electronic Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan

34
Wetstein SC, de Jong VMT, Stathonikos N, Opdam M, Dackus GMHE, Pluim JPW, van Diest PJ, Veta M. Deep learning-based breast cancer grading and survival analysis on whole-slide histopathology images. Sci Rep 2022; 12:15102. [PMID: 36068311 PMCID: PMC9448798 DOI: 10.1038/s41598-022-19112-9] [Citation(s) in RCA: 0] [Received: 12/22/2021] [Accepted: 08/24/2022] [Indexed: 11/10/2022]
Abstract
Breast cancer tumor grade is strongly associated with patient survival. In current clinical practice, pathologists assign tumor grade after visual analysis of tissue specimens. However, several studies show significant inter-observer variation in breast cancer grading. Computer-based breast cancer grading methods have been proposed, but they only work on specifically selected tissue areas and/or require labor-intensive annotations to be applied to new datasets. In this study, we trained and evaluated a deep learning-based breast cancer grading model that works on whole-slide histopathology images. The model was developed using whole-slide images from 706 young (< 40 years) invasive breast cancer patients with corresponding tumor grade (low/intermediate vs. high) and its constituent components: nuclear grade, tubule formation, and mitotic rate. The performance of the model was evaluated using Cohen's kappa on an independent test set of 686 patients, with annotations by expert pathologists as ground truth. The predicted low/intermediate (n = 327) and high (n = 359) grade groups were used to perform survival analysis. The deep learning system distinguished low/intermediate versus high tumor grade with a Cohen's kappa of 0.59 (80% accuracy) relative to expert pathologists. In subsequent survival analysis, the two groups predicted by the system were found to have significantly different overall survival (OS) and disease/recurrence-free survival (DRFS/RFS) (p < 0.05). Univariate Cox hazard regression analysis showed statistically significant hazard ratios (p < 0.05). After adjusting for clinicopathologic features and stratifying by molecular subtype, the hazard ratios showed a trend but lost statistical significance for all endpoints. In conclusion, we developed a deep learning-based model for automated grading of breast cancer on whole-slide images. The model distinguishes between low/intermediate and high grade tumors and finds a trend in the survival of the two predicted groups.
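Cohen's kappa, the agreement statistic used for evaluation above, corrects raw accuracy for chance agreement between two raters; a minimal generic implementation:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    labels = np.union1d(y_true, y_pred)
    # Observed agreement
    p_o = np.mean(y_true == y_pred)
    # Expected chance agreement from the marginal label frequencies
    p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)
    return float((p_o - p_e) / (1.0 - p_e))

kappa = cohens_kappa([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1])
```

A kappa of 0.59 with 80% accuracy, as reported above, reflects agreement well beyond chance but short of perfect concordance.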
Affiliation(s)
- Suzanne C Wetstein
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 5, 5612 AE, Eindhoven, The Netherlands
- Vincent M T de Jong
- Department of Molecular Pathology, Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Nikolas Stathonikos
- Department of Pathology, University Medical Center Utrecht, University Utrecht, Utrecht, The Netherlands
- Mark Opdam
- Department of Molecular Pathology, Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Gwen M H E Dackus
- Department of Molecular Pathology, Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Department of Pathology, University Medical Center Utrecht, University Utrecht, Utrecht, The Netherlands
- Josien P W Pluim
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 5, 5612 AE, Eindhoven, The Netherlands
- Paul J van Diest
- Department of Pathology, University Medical Center Utrecht, University Utrecht, Utrecht, The Netherlands
- Mitko Veta
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 5, 5612 AE, Eindhoven, The Netherlands

35
Zhu X, Chen C, Guo Q, Ma J, Sun F, Lu H. Deep Learning-Based Recognition of Different Thyroid Cancer Categories Using Whole Frozen-Slide Images. Front Bioeng Biotechnol 2022; 10:857377. [PMID: 35875502 PMCID: PMC9298848 DOI: 10.3389/fbioe.2022.857377] [Citation(s) in RCA: 3] [Received: 01/18/2022] [Accepted: 05/30/2022] [Indexed: 11/30/2022]
Abstract
Introduction: Rare thyroid lesion categories have a low incidence rate and are easily misdiagnosed in clinical practice, which directly affects treatment decisions. However, recognizing rare, benign, and malignant thyroid categories with deep learning, and flagging the rare ones for pathologists, has not been adequately investigated. Methods: We present an empirical decision tree based on the binary classification results of a patch-based UNet model to predict rare categories and recommend annotated lesion areas for re-review by pathologists. Results: Applying this framework to 1,374 whole-slide images (WSIs) of frozen sections from thyroid lesions, we obtained an area under the curve of 0.946 and 0.986 for the test datasets with and without rare-type WSIs, respectively. However, the recognition error rate for the rare categories was significantly higher than that for the benign and malignant categories (p < 0.00001). For rare WSIs, adding the empirical decision tree achieved a recall of 0.882 and a precision of 0.498; the flagged cases (only 33.4% of all WSIs) were then recommended for re-review by pathologists. Additionally, we demonstrated that the performance of our framework on the predicted benign and malignant sections was comparable to that of pathologists in clinical practice. Conclusion: Our study provides a baseline for recommending uncertain predicted rare categories to pathologists, offering a feasible way to improve pathologists' work efficiency.
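The areas under the curve reported above can be computed directly from classifier scores without constructing the ROC curve explicitly, via the normalized Mann-Whitney U statistic; a compact generic sketch (not the paper's code):

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC as the probability that a random positive outscores a random
    negative, with ties counted as half (normalized Mann-Whitney U)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Pairwise comparisons; fine for modest sample sizes
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))
```

For large test sets a rank-based formulation avoids the quadratic pairwise comparison, but the value is identical.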
Affiliation(s)
- Xinyi Zhu
- Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Cancan Chen
- Digital Health China Technologies Corporation Limited, Beijing, China
- Qiang Guo
- Department of Big Data, National Cancer Center/National Clinical Research Center for Cancer/Cancer Institute and Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianhui Ma
- Department of Urology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Institute and Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Fenglong Sun
- Digital Health China Technologies Corporation Limited, Beijing, China
- Haizhen Lu
- Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China

36
Chen C, Chen C, Ma M, Ma X, Lv X, Dong X, Yan Z, Zhu M, Chen J. Classification of multi-differentiated liver cancer pathological images based on deep learning attention mechanism. BMC Med Inform Decis Mak 2022; 22:176. [PMID: 35787805 DOI: 10.1186/s12911-022-01919-1] [Citation(s) in RCA: 1] [Received: 02/27/2022] [Accepted: 06/23/2022] [Indexed: 12/24/2022]
Abstract
PURPOSE Liver cancer is one of the most common malignant tumors in the world, ranking fifth among malignant tumors. Its degree of differentiation reflects its degree of malignancy and can be divided into three types: poorly differentiated, moderately differentiated, and well differentiated. Diagnosis and treatment at the different levels of differentiation are crucial to patients' survival rate and survival time. As the gold standard for liver cancer diagnosis, histopathological images can accurately distinguish liver cancers of different levels of differentiation; the study of their intelligent classification is therefore of great significance to patients with liver cancer. At present, classifying such images manually is time-consuming and labor-intensive and requires substantial human effort, which makes intelligent classification all the more important. METHODS Building on a complete data acquisition scheme, this paper applies the SENet deep learning model to the intelligent classification of histopathological images of liver cancer across all differentiation types for the first time, and compares it with four deep learning models: VGG16, ResNet50, ResNet_CBAM, and SKNet. The evaluation indexes adopted include the confusion matrix, precision, recall, and F1 score, which together evaluate the models comprehensively and accurately. RESULTS The five deep learning classification models were applied to the collected dataset and evaluated. The experimental results show that the SENet model achieved the best classification effect, with an accuracy of 95.27%, as well as good reliability and generalization ability. The experiments demonstrate that the SENet deep learning model has a promising application in the intelligent classification of histopathological images. CONCLUSIONS This study also shows that deep learning has great application value in solving the time-consuming and laborious problems of traditional manual slide reading, and it has practical significance for intelligent classification research on other cancer histopathological images.
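The SENet model named above differs from a plain CNN by its squeeze-and-excitation block, which reweights feature channels by their global context; a NumPy sketch of the forward pass (the weights here are random stand-ins, not trained parameters):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Squeeze-and-excitation. x: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    squeeze = x.mean(axis=(1, 2))                          # global avg pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # FC-ReLU-FC-sigmoid
    return x * excite[:, None, None]                       # per-channel rescaling

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 4, 4))   # 8 channels, 4x4 feature map
w1 = rng.normal(size=(2, 8))        # reduction ratio r = 4
w2 = rng.normal(size=(8, 2))
out = se_block(feat, w1, w2)
```

Because the gates lie in (0, 1), the block only attenuates channels; it changes no spatial structure, which is why it can be dropped into existing backbones cheaply.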
37
Jain DK, Lakshmi KM, Varma KP, Ramachandran M, Bharati S, Sharma K. Lung Cancer Detection Based on Kernel PCA-Convolution Neural Network Feature Extraction and Classification by Fast Deep Belief Neural Network in Disease Management Using Multimedia Data Sources. Comput Intell Neurosci 2022; 2022:1-12. [PMID: 35669646 PMCID: PMC9167006 DOI: 10.1155/2022/3149406] [Citation(s) in RCA: 0] [Received: 04/20/2022] [Revised: 05/06/2022] [Accepted: 05/09/2022] [Indexed: 11/18/2022]
Abstract
In lung cancer, tumor histology is a significant predictor of treatment response and prognosis. Although tissue samples reviewed by a pathologist remain the most pertinent basis for histology classification, recent advances in deep learning (DL) for medical image analysis point to the value of radiologic data for further characterizing disease and stratifying risk. Cancer is a complex global health problem whose death rates have risen in recent years. Rapid progress in high-throughput technology and the many machine learning (ML) techniques that have emerged in recent years have enabled accurate disease diagnosis based on subsets of traits. As a result, advanced ML approaches that can reliably distinguish lung cancer patients from healthy people are of major importance. This paper proposes lung tumor detection based on histopathological image analysis using deep learning architectures. The input histopathological image is first preprocessed for noise removal, resizing, and enhancement. Image features are then extracted using kernel PCA integrated with a convolutional neural network (KPCA-CNN), in which KPCA is used in the feature extraction layer of the CNN. The extracted features are classified with a Fast Deep Belief Neural Network (FDBNN), and the final output labels tumorous and non-tumorous cells of the lung in the input histopathological image. The experimental analysis was carried out on several histopathological image datasets, reporting accuracy, precision, recall, and F-measure; the confusion matrix gives the actual and predicted tumor classes for an input image. In the comparative analysis, the proposed technique achieved better tumor detection than the existing methodology across the various datasets.
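Kernel PCA, the nonlinear feature extractor named above, projects data onto the top eigenvectors of a centered kernel matrix; a minimal generic RBF-kernel sketch (a stand-in illustration, not the paper's KPCA-CNN layer):

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Project X (n_samples, n_features) onto its top kernel principal
    components, returning the projections of the training points."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq_dists)                       # RBF kernel matrix
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    # Center the kernel matrix in feature space
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    eigvals, eigvecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Normalize eigenvectors so that lambda * ||alpha||^2 = 1
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas

X = np.vstack([np.zeros((3, 2)), np.ones((3, 2))])      # two tight clusters
Z = rbf_kernel_pca(X, n_components=2, gamma=0.5)
```

The first component cleanly separates the two clusters even though the mapping is nonlinear, which is the property the KPCA feature layer exploits.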
38
Wang Y, Cai H, Pu Y, Li J, Yang F, Yang C, Chen L, Hu Z. The value of AI in the Diagnosis, Treatment, and Prognosis of Malignant Lung Cancer. Front Radiol 2022; 2:810731. [PMID: 37492685 PMCID: PMC10365105 DOI: 10.3389/fradi.2022.810731] [Citation(s) in RCA: 1] [Received: 11/07/2021] [Accepted: 03/30/2022] [Indexed: 07/27/2023]
Abstract
Malignant tumors are a serious public health threat. Among them, lung cancer, which has the highest fatality rate globally, significantly endangers human health. With the development of artificial intelligence (AI) and its integration with medicine, AI research on malignant lung tumors has become critical. This article reviews the value of computer-aided diagnosis (CAD), deep neural networks, radiomics, molecular biomarkers, and digital pathology for the diagnosis, treatment, and prognosis of malignant lung tumors.
Affiliation(s)
- Yue Wang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Haihua Cai
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yongzhu Pu
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Jindan Li
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Fake Yang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Conghui Yang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Long Chen
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China

39
Laleh NG, Muti HS, Loeffler CML, Echle A, Saldanha OL, Mahmood F, Lu MY, Trautwein C, Langer R, Dislich B, Buelow RD, Grabsch HI, Brenner H, Chang-Claude J, Alwers E, Brinker TJ, Khader F, Truhn D, Gaisa NT, Boor P, Hoffmeister M, Schulz V, Kather JN. Benchmarking weakly-supervised deep learning pipelines for whole slide classification in computational pathology. Med Image Anal 2022; 79:102474. [DOI: 10.1016/j.media.2022.102474] [Citation(s) in RCA: 6] [Received: 08/20/2021] [Revised: 04/07/2022] [Accepted: 05/03/2022] [Indexed: 02/07/2023]
40
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051 PMCID: PMC9007400 DOI: 10.1186/s12880-022-00793-7] [Citation(s) in RCA: 77] [Received: 08/25/2021] [Accepted: 03/30/2022] [Indexed: 02/07/2023]
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge from similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review attempts to provide guidance for selecting a model and TL approach for medical image classification tasks. METHODS 425 peer-reviewed articles were retrieved from two databases, PubMed and Web of Science, published in English up until December 31, 2020. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The remaining studies applied only a single approach, of which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored. Only a few studies applied feature extractor hybrid (n = 7) or fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrate the efficacy of transfer learning despite data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading predictive power.
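The "feature extractor" approach recommended above keeps the pretrained backbone frozen and trains only a lightweight head; the pattern can be illustrated with a fixed random projection standing in for a frozen trunk (purely a sketch, not a real pretrained network):

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen "backbone": a fixed mapping standing in for a pretrained trunk
# (e.g. ResNet or Inception) whose weights are NOT updated.
W_frozen = rng.normal(size=(10, 8))

def extract_features(X):
    return np.maximum(X @ W_frozen, 0.0)   # ReLU features from the frozen trunk

# Trainable head: logistic regression fitted by plain gradient descent.
def train_head(F, y, lr=0.1, steps=500):
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        z = np.clip(F @ w, -30, 30)        # clip for numerical stability
        p = 1.0 / (1.0 + np.exp(-z))
        w -= lr * F.T @ (p - y) / len(y)   # only the head's weights change
    return w

X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
F = extract_features(X)                    # backbone used once, as an extractor
w = train_head(F, y)
train_acc = float(np.mean(((F @ w) > 0) == (y > 0.5)))
```

Because only the head is optimized, training cost is a small fraction of full fine-tuning, which is the computational saving the review's conclusion points to.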
Affiliation(s)
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany

41
Carrillo-Perez F, Morales JC, Castillo-Secilla D, Gevaert O, Rojas I, Herrera LJ. Machine-Learning-Based Late Fusion on Multi-Omics and Multi-Scale Data for Non-Small-Cell Lung Cancer Diagnosis. J Pers Med 2022; 12:601. [PMID: 35455716 PMCID: PMC9025878 DOI: 10.3390/jpm12040601] [Citation(s) in RCA: 12] [Received: 03/12/2022] [Revised: 03/29/2022] [Accepted: 04/06/2022] [Indexed: 01/27/2023]
Abstract
Differentiating between the various non-small-cell lung cancer subtypes is crucial for providing effective treatment to the patient. For this purpose, machine learning techniques have been applied in recent years to the available biological data from patients. However, in most cases this problem has been treated with a single-modality approach, without exploring the multi-scale, multi-omic nature of cancer data for classification. In this work, we study the fusion of five multi-scale and multi-omic modalities (RNA-Seq, miRNA-Seq, whole-slide imaging, copy number variation, and DNA methylation) using a late fusion strategy and machine learning techniques. We train an independent machine learning model for each modality and explore the interactions and gains obtained by fusing their outputs incrementally, using a novel optimization approach to compute the late-fusion parameters. The final classification model, using all modalities, obtains an F1 score of 96.81±1.07, an AUC of 0.993±0.004, and an AUPRC of 0.980±0.016, improving on the results of each independent model and on those previously reported in the literature for this problem. These results show that leveraging the multi-scale and multi-omic nature of cancer data can enhance the performance of single-modality clinical decision support systems in personalized medicine, consequently improving patient diagnosis.
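Late fusion of the kind described combines per-modality probabilities only after each model has been trained; a toy two-modality sketch using a simple grid search for the fusion weight (the paper's actual optimizer is more elaborate, and all data below are invented for illustration):

```python
import numpy as np

def fuse(p_a, p_b, w):
    """Convex combination of two models' positive-class probabilities."""
    return w * p_a + (1.0 - w) * p_b

def best_fusion_weight(p_a, p_b, y, grid=np.linspace(0, 1, 101)):
    """Pick the fusion weight maximizing accuracy on a validation set."""
    accs = [np.mean((fuse(p_a, p_b, w) > 0.5) == y) for w in grid]
    best = int(np.argmax(accs))
    return float(grid[best]), float(accs[best])

# Toy validation data: modality A is informative, modality B is noisy.
y = np.array([0, 0, 0, 1, 1, 1], dtype=bool)
p_a = np.array([0.15, 0.2, 0.3, 0.7, 0.8, 0.9])   # stronger model
p_b = np.array([0.9, 0.6, 0.4, 0.4, 0.3, 0.2])    # weak / anti-correlated
w, acc = best_fusion_weight(p_a, p_b, y)
```

The search naturally downweights the uninformative modality, which is the behavior an incremental fusion study relies on.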
Affiliation(s)
- Francisco Carrillo-Perez
- Department of Computer Architecture and Technology, University of Granada, C.I.T.I.C., Periodista Rafael Gómez Montero, 2, 18170 Granada, Spain
- Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University, 1265 Welch Rd, Stanford, CA 94305, USA
- Juan Carlos Morales
- Department of Computer Architecture and Technology, University of Granada, C.I.T.I.C., Periodista Rafael Gómez Montero, 2, 18170 Granada, Spain
- Daniel Castillo-Secilla
- Fujitsu Technology Solutions S.A, CoE Data Intelligence, Camino del Cerro de los Gamos, 1, Pozuelo de Alarcón, 28224 Madrid, Spain
- Olivier Gevaert
- Stanford Center for Biomedical Informatics Research (BMIR), Department of Medicine, Stanford University, 1265 Welch Rd, Stanford, CA 94305, USA
- Ignacio Rojas
- Department of Computer Architecture and Technology, University of Granada, C.I.T.I.C., Periodista Rafael Gómez Montero, 2, 18170 Granada, Spain
- Luis Javier Herrera
- Department of Computer Architecture and Technology, University of Granada, C.I.T.I.C., Periodista Rafael Gómez Montero, 2, 18170 Granada, Spain

42
Tsuneki M, Abe M, Kanavati F. A Deep Learning Model for Prostate Adenocarcinoma Classification in Needle Biopsy Whole-Slide Images Using Transfer Learning. Diagnostics (Basel) 2022; 12:768. [PMID: 35328321 DOI: 10.3390/diagnostics12030768] [Citation(s) in RCA: 7] [Received: 02/18/2022] [Revised: 03/08/2022] [Accepted: 03/18/2022] [Indexed: 02/04/2023]
Abstract
The histopathological diagnosis of prostate adenocarcinoma in needle biopsy specimens is of pivotal importance for determining the optimum prostate cancer treatment. Since diagnosing large numbers of cases, each containing 12 core biopsy specimens, under the microscope is a time-consuming manual process limited by human resources, new techniques are needed that can rapidly and accurately screen large numbers of histopathological prostate needle biopsy specimens. Computational pathology applications that assist pathologists in detecting and classifying prostate adenocarcinoma from whole-slide images (WSIs) would be of great benefit to routine pathological practice. In this paper, we trained deep learning models capable of classifying needle biopsy WSIs into adenocarcinoma and benign (non-neoplastic) lesions. We evaluated the models on needle biopsy, transurethral resection of the prostate (TUR-P), and The Cancer Genome Atlas (TCGA) public dataset test sets, achieving an ROC-AUC of up to 0.978 on needle biopsy test sets and up to 0.9873 on TCGA test sets for adenocarcinoma.
43
Abstract
BACKGROUND Deep learning is a state-of-the-art technology that has rapidly become the method of choice for medical image analysis. Its fast and robust object detection, segmentation, tracking, and classification of pathophysiological anatomical structures can support medical practitioners during routine clinical workflows. Deep learning-based applications for disease diagnosis will thus empower physicians and allow fast decision-making in clinical practice. HIGHLIGHT Deep learning can be made more robust, with varied features for differentiating classes, provided the training set is large and diverse. However, sufficient medical images for training sets are not always available from medical institutions, which is one of the major limitations of deep learning in medical image analysis. This review article presents some solutions to this issue and discusses the efforts needed to develop robust deep learning-based computer-aided diagnosis applications for better clinical workflows in endoscopy, radiology, pathology, and dentistry. CONCLUSION The introduction of deep learning-based applications will enhance the traditional role of medical practitioners in ensuring accurate diagnoses and treatment, in terms of precision, reproducibility, and scalability.
Affiliation(s)
- Masayuki Tsuneki
- Medmain Research, Medmain Inc., Fukuoka, Japan; Division of Anatomy and Cell Biology of the Hard Tissue, Department of Tissue Regeneration and Reconstruction, Niigata University Graduate School of Medical and Dental Sciences, Niigata, Japan

44
Wahab N, Miligy IM, Dodd K, Sahota H, Toss M, Lu W, Jahanifar M, Bilal M, Graham S, Park Y, Hadjigeorghiou G, Bhalerao A, Lashen AG, Ibrahim AY, Katayama A, Ebili HO, Parkin M, Sorell T, Raza SEA, Hero E, Eldaly H, Tsang YW, Gopalakrishnan K, Snead D, Rakha E, Rajpoot N, Minhas F. Semantic annotation for computational pathology: multidisciplinary experience and best practice recommendations. J Pathol Clin Res 2022; 8:116-128. [PMID: 35014198 PMCID: PMC8822374 DOI: 10.1002/cjp2.256] [Citation(s) in RCA: 13] [Received: 08/25/2021] [Revised: 11/25/2021] [Accepted: 12/10/2021] [Indexed: 02/06/2023]
Abstract
Recent advances in whole-slide imaging (WSI) technology have led to the development of a myriad of computer vision and artificial intelligence-based diagnostic, prognostic, and predictive algorithms. Computational Pathology (CPath) offers an integrated solution to utilise information embedded in pathology WSIs beyond what can be obtained through visual assessment. For automated analysis of WSIs and validation of machine learning (ML) models, annotations at the slide, tissue, and cellular levels are required. The annotation of important visual constructs in pathology images is an important component of CPath projects. Improper annotations can result in algorithms that are hard to interpret and can potentially produce inaccurate and inconsistent results. Despite the crucial role of annotations in CPath projects, there are no well-defined guidelines or best practices on how annotations should be carried out. In this paper, we address this shortcoming by presenting the experience and best practices acquired during the execution of a large-scale annotation exercise involving a multidisciplinary team of pathologists, ML experts, and researchers as part of the Pathology image data Lake for Analytics, Knowledge and Education (PathLAKE) consortium. We present a real-world case study along with examples of different types of annotations, diagnostic algorithm, annotation data dictionary, and annotation constructs. The analyses reported in this work highlight best practice recommendations that can be used as annotation guidelines over the lifecycle of a CPath project.
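A machine-readable annotation record of the kind such guidelines standardize might look like the sketch below; every field name here is hypothetical and illustrative, not taken from the PathLAKE data dictionary:

```python
import json

# Hypothetical annotation record for one region on a whole-slide image.
annotation = {
    "slide_id": "WSI-000123",
    "annotator": "pathologist_01",
    "level": "region",                  # slide / tissue / region / cell
    "label": "invasive_tumour",
    "geometry": {"type": "polygon",
                 "coordinates": [[1200, 3400], [1450, 3400],
                                 [1450, 3700], [1200, 3700]]},
    "review_status": "double_read",     # provenance for quality assurance
}

def validate(record, allowed_labels):
    """Reject records with missing fields or out-of-dictionary labels."""
    required = {"slide_id", "annotator", "level", "label", "geometry"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if record["label"] not in allowed_labels:
        raise ValueError(f"label not in data dictionary: {record['label']}")
    return True

ok = validate(annotation, allowed_labels={"invasive_tumour", "benign", "stroma"})
serialized = json.dumps(annotation)
```

Validating against a shared label dictionary at ingest time is one concrete way to enforce the consistency the paper's recommendations aim for.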
Affiliation(s)
- Noorul Wahab
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Islam M Miligy
- Pathology, University of Nottingham, Nottingham, UK
- Department of Pathology, Faculty of Medicine, Menoufia University, Shebin El-Kom, Egypt
- Katherine Dodd
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Harvir Sahota
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Michael Toss
- Pathology, University of Nottingham, Nottingham, UK
- Wenqi Lu
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Mohsin Bilal
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Simon Graham
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Young Park
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Abhir Bhalerao
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Ayaka Katayama
- Graduate School of Medicine, Gunma University, Maebashi, Japan
- Tom Sorell
- Department of Politics and International Studies, University of Warwick, Coventry, UK
- Emily Hero
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Leicester Royal Infirmary, Histopathology, University Hospitals Leicester, Leicester, UK
- Hesham Eldaly
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Yee Wah Tsang
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- David Snead
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Emad Rakha
- Pathology, University of Nottingham, Nottingham, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Fayyaz Minhas
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
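The abstract above describes annotations at slide, tissue, and cellular levels governed by an annotation data dictionary. A minimal sketch of what such a record and dictionary might look like follows; the field names, label vocabularies, and annotator identifier are illustrative assumptions, not the actual PathLAKE schema.

```python
# Sketch of a semantic-annotation record validated against a data dictionary,
# as used in CPath projects. All names and label sets here are hypothetical.

# The dictionary constrains which labels are valid at each annotation level,
# keeping annotations from different pathologists consistent.
DATA_DICTIONARY = {
    "slide": {"diagnosis": {"benign", "dcis", "invasive"}},
    "tissue": {"region_type": {"tumour", "stroma", "necrosis"}},
    "cell": {"cell_type": {"epithelial", "lymphocyte", "fibroblast"}},
}

def validate_annotation(level: str, field: str, value: str) -> bool:
    """Return True if (level, field, value) is allowed by the dictionary."""
    allowed = DATA_DICTIONARY.get(level, {}).get(field)
    return allowed is not None and value in allowed

# An annotation ties a label to a spatial construct (here, a polygon in
# slide coordinates) and records who made it.
annotation = {
    "level": "tissue",
    "field": "region_type",
    "value": "tumour",
    "polygon": [(100, 200), (150, 200), (150, 260), (100, 260)],
    "annotator": "pathologist_01",
}

ok = validate_annotation(annotation["level"], annotation["field"], annotation["value"])
```

Validating every label against a shared dictionary at entry time is one way to catch the inconsistent or improper annotations the paper warns about.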
45
Prabhu S, Prasad K, Robles-Kelly A, Lu X. AI-based carcinoma detection and classification using histopathological images: A systematic review. Comput Biol Med 2022; 142:105209. [DOI: 10.1016/j.compbiomed.2022.105209]
46
Chang CW, Christian M, Chang DH, Lai F, Liu TJ, Chen YS, Chen WJ. Deep learning approach based on superpixel segmentation assisted labeling for automatic pressure ulcer diagnosis. PLoS One 2022; 17:e0264139. [PMID: 35176101] [PMCID: PMC8853507] [DOI: 10.1371/journal.pone.0264139]
Abstract
A pressure ulcer is an injury of the skin and underlying tissues adjacent to a bony eminence. Patients who suffer from this disease may have difficulty accessing medical care. Recently, the COVID-19 pandemic has exacerbated this situation. Automatic diagnosis based on machine learning (ML) brings promising solutions. Traditional ML requires complicated preprocessing steps for feature extraction, so its clinical applications are limited to particular datasets. Deep learning (DL), which extracts features from convolution layers, can embrace larger datasets that might be deliberately excluded in traditional algorithms. However, DL requires large sets of domain-specific labeled data for training. Labeling the various tissues of pressure ulcers is a challenge even for experienced plastic surgeons. We propose a superpixel-assisted, region-based method of labeling images for tissue classification. A boundary-based method is applied to create a dataset for wound and re-epithelialization (re-ep) segmentation. Five popular DL models (U-Net, DeeplabV3, PsPNet, FPN, and Mask R-CNN) with a ResNet-101 encoder were trained on the two datasets. A total of 2836 images of pressure ulcers were labeled for tissue classification, while 2893 images were labeled for wound and re-ep segmentation. All five models had satisfactory results. DeeplabV3 had the best performance on both tasks, with a precision of 0.9915, recall of 0.9915, and accuracy of 0.9957 on the tissue classification task, and a precision of 0.9888, recall of 0.9887, and accuracy of 0.9925 on the wound and re-ep segmentation task. Combining segmentation results with clinical data, our algorithm can detect the signs of wound healing, monitor the progress of healing, estimate the wound size, and suggest the need for surgical debridement.
Affiliation(s)
- Che Wei Chang
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Mesakh Christian
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
- Dun Hao Chang
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Department of Information Management, Yuan Ze University, Taoyuan City, Taiwan
- Feipei Lai
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
- Tom J. Liu
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Division of Plastic Surgery, Department of Surgery, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City, Taiwan
- Yo Shen Chen
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Wei Jen Chen
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
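The precision, recall, and accuracy figures reported in this abstract are standard pixel-wise confusion-matrix metrics. A minimal sketch of how they are computed for a binary segmentation mask follows; the tiny 4x4 masks are made up for illustration, not the paper's data.

```python
# Pixel-wise precision, recall, and accuracy for binary segmentation,
# the metrics used to score wound vs. background predictions.

def segmentation_metrics(pred, truth):
    """Compare two flat binary masks; return (precision, recall, accuracy)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(pred)
    return precision, recall, accuracy

# Illustrative 4x4 ground-truth and predicted masks, flattened row by row.
truth = [0, 0, 1, 1,
         0, 1, 1, 1,
         0, 1, 1, 0,
         0, 0, 0, 0]
pred  = [0, 0, 1, 1,
         0, 1, 1, 0,
         0, 1, 1, 0,
         0, 1, 0, 0]

p, r, a = segmentation_metrics(pred, truth)
# tp=6, fp=1, fn=1, tn=8 -> precision 6/7, recall 6/7, accuracy 14/16
```

In practice these counts are accumulated over all evaluation images before the ratios are taken, which is how single summary figures like those above are obtained.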
47
Yang JW, Song DH, An HJ, Seo SB. Classification of subtypes including LCNEC in lung cancer biopsy slides using convolutional neural network from scratch. Sci Rep 2022; 12:1830. [PMID: 35115593] [PMCID: PMC8813931] [DOI: 10.1038/s41598-022-05709-7]
Abstract
Identifying the lung carcinoma subtype in small biopsy specimens is an important part of determining a suitable treatment plan but is often challenging without the help of special and/or immunohistochemical stains. Pathology image analysis that tackles this issue would be helpful for the diagnosis and subtyping of lung carcinoma. In this study, we developed AI models to classify multinomial patterns of lung carcinoma (ADC, LCNEC, SCC, SCLC, and non-neoplastic lung tissue) based on convolutional neural networks (CNNs or ConvNets). Four CNNs that were pre-trained using transfer learning and one CNN built from scratch were used to classify patch images from pathology whole-slide images (WSIs). We first evaluated the diagnostic performance of each model on the test sets. The Xception model and the CNN built from scratch both achieved the highest performance, with a macro average AUC of 0.90. The CNN built from scratch obtained a macro average AUC of 0.97 on the four-class dataset excluding LCNEC and 0.95 on the dataset of three classes: NSCLC, SCLC, and non-tumor. Of particular note, the relatively simple CNN built from scratch may be a practical approach for pathological image analysis.
Affiliation(s)
- Jung Wook Yang
- Department of Pathology, Gyeongsang National University Hospital, Jinju, Republic of Korea
- Department of Pathology, Gyeongsang National University, College of Medicine, Jinju, Republic of Korea
- Gyeongsang Institute of Health Science, Jinju, Republic of Korea
- Dae Hyun Song
- Department of Pathology, Gyeongsang National University, College of Medicine, Jinju, Republic of Korea
- Gyeongsang Institute of Health Science, Jinju, Republic of Korea
- Department of Pathology, Changwon Gyeongsang National University Hospital, Changwon, Republic of Korea
- Hyo Jung An
- Department of Pathology, Gyeongsang National University, College of Medicine, Jinju, Republic of Korea
- Gyeongsang Institute of Health Science, Jinju, Republic of Korea
- Department of Pathology, Changwon Gyeongsang National University Hospital, Changwon, Republic of Korea
- Sat Byul Seo
- Department of Mathematics Education, School of Education, Kyungnam University, 7 Kyugnamdaehak-ro, Masanhappo-gu, Changwon, Gyeongsangnam-do, 51767, Republic of Korea
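The macro average AUC used above is the unweighted mean of per-class one-vs-rest AUCs. A minimal sketch follows, using the rank-based (Mann-Whitney) formulation of AUC; the class names, scores, and labels are made up for illustration and are not the paper's data.

```python
# Macro-average one-vs-rest AUC, the summary metric used for multi-class
# patch classifiers. Example scores and labels below are hypothetical.

def binary_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive is scored higher than a random negative (ties count 0.5).
    Assumes both classes are present in `labels`."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(prob_rows, labels, classes):
    """Unweighted mean of one-vs-rest AUCs over all classes."""
    aucs = []
    for k, cls in enumerate(classes):
        scores = [row[k] for row in prob_rows]
        onehot = [1 if y == cls else 0 for y in labels]
        aucs.append(binary_auc(scores, onehot))
    return sum(aucs) / len(aucs)

classes = ["ADC", "SCC", "non_tumor"]
labels = ["ADC", "ADC", "SCC", "non_tumor", "SCC", "non_tumor"]
prob_rows = [  # softmax-style scores per patch, one column per class
    [0.8, 0.1, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
    [0.3, 0.5, 0.2],
    [0.2, 0.2, 0.6],
]
score = macro_auc(prob_rows, labels, classes)
```

Because the macro average weights every class equally, it is not inflated by a dominant class, which matters when rare subtypes like LCNEC are included.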
48
Bellini V, Valente M, Del Rio P, Bignami E. Artificial intelligence in thoracic surgery: a narrative review. J Thorac Dis 2022; 13:6963-6975. [PMID: 35070380] [PMCID: PMC8743413] [DOI: 10.21037/jtd-21-761]
Abstract
Objective: The aim of this article is to review the current applications of artificial intelligence in thoracic surgery, from diagnosis and pulmonary disease management to preoperative risk assessment, surgical planning, and outcome prediction. Background: Artificial intelligence implementation in healthcare settings is rapidly growing, though its widespread use in clinical practice is still limited. The employment of machine learning algorithms in thoracic surgery is wide-ranging, covering all steps of the clinical pathway. Methods: We performed a narrative review of the literature in the Scopus, PubMed, and Cochrane databases, including all relevant studies published in the last ten years, up to March 2021. Conclusion: Machine learning methods are showing encouraging results across the key issues of thoracic surgery: clinical, organizational, and educational. Artificial intelligence-based technologies showed remarkable efficacy in improving the perioperative evaluation of the patient, assisting the decision-making process, enhancing surgical performance, and optimizing operating room scheduling. Still, some concern remains about data supply, protection, and transparency, so further studies and specific consensus guidelines are needed to validate these technologies for daily common practice. Keywords: Artificial intelligence (AI); thoracic surgery; machine learning; lung resection; perioperative medicine.
Affiliation(s)
- Valentina Bellini
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Marina Valente
- General Surgery Unit, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Paolo Del Rio
- General Surgery Unit, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Elena Bignami
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
49
Kanavati F, Ichihara S, Tsuneki M. A deep learning model for breast ductal carcinoma in situ classification in whole slide images. Virchows Arch 2022. [PMID: 35076741] [DOI: 10.1007/s00428-021-03241-z]
Abstract
The pathological differential diagnosis between breast ductal carcinoma in situ (DCIS) and invasive ductal carcinoma (IDC) is of pivotal importance for determining optimum cancer treatment(s) and clinical outcomes. Since conventional diagnosis by pathologists using microscopes is limited in terms of human resources, it is necessary to develop new techniques that can rapidly and accurately diagnose large numbers of histopathological specimens. Computational pathology tools which can assist pathologists in detecting and classifying DCIS and IDC from whole slide images (WSIs) would be of great benefit for routine pathological diagnosis. In this paper, we trained deep learning models capable of classifying biopsy and surgical histopathological WSIs into DCIS, IDC, and benign. We evaluated the models on two independent test sets (n = 1382, n = 548), achieving ROC areas under the curve (AUCs) of up to 0.960 and 0.977 for DCIS and IDC, respectively.
50
Lee J, Park J, Moon SY, Lee K. Automated Prediction of Extraction Difficulty and Inferior Alveolar Nerve Injury for Mandibular Third Molar Using a Deep Neural Network. Applied Sciences 2022; 12:475. [DOI: 10.3390/app12010475]
Abstract
Extraction of mandibular third molars is a common procedure in oral and maxillofacial surgery. Few studies simultaneously predict the extraction difficulty of the mandibular third molar and the complications that may occur. Thus, we propose a method of automatically detecting mandibular third molars in panoramic radiographic images and predicting the extraction difficulty and likelihood of inferior alveolar nerve (IAN) injury. Our dataset consists of 4903 panoramic radiographic images acquired from various dental hospitals. Seven dentists annotated detection and classification labels. The detection model locates the mandibular third molar in the panoramic radiographic image. The region of interest (ROI), which includes the detected mandibular third molar, adjacent teeth, and IAN, is cropped from the panoramic radiographic image. The classification models use the ROI as input to predict the extraction difficulty and likelihood of IAN injury. The achieved detection performance was 99.0% mAP at an intersection over union (IoU) threshold of 0.5. In addition, we achieved 83.5% accuracy for the prediction of extraction difficulty and 81.1% accuracy for the prediction of the likelihood of IAN injury. We demonstrated that a deep learning method can support the diagnostic workflow for mandibular third molar extraction.
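The detection score above, mAP at IoU 0.5, counts a predicted box as a true positive when its overlap with the annotated box reaches the threshold. A minimal sketch of the IoU computation for axis-aligned boxes follows; the coordinates are made up for illustration, not taken from the paper's dataset.

```python
# Intersection over union (IoU) for axis-aligned bounding boxes, the overlap
# measure behind mAP@0.5. Boxes are (x1, y1, x2, y2) with x1 < x2, y1 < y2.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; max(0, ...) makes it empty when the boxes are disjoint.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

pred = (10, 10, 50, 50)   # hypothetical detected molar box
truth = (20, 10, 60, 50)  # hypothetical annotated box
match = iou(pred, truth) >= 0.5  # counts as a true positive at IoU 0.5
```

mAP is then the mean, over classes, of the average precision obtained by sweeping the detector's confidence threshold while applying this matching rule.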