1
Patil VI, Patil SR. Optimized Transfer Learning With Hybrid Feature Extraction for Uterine Tissue Classification Using Histopathological Images. Microsc Res Tech 2025; 88:1582-1598. [PMID: 39871427] [DOI: 10.1002/jemt.24787]
Abstract
Endometrial cancer, termed uterine cancer, seriously affects female reproductive organs, and the analysis of histopathological images forms the gold standard for diagnosing this cancer. Sometimes, early detection of this disease is difficult because of the limited capability of modeling complicated relationships among histopathological images and their interpretations. Moreover, many previous methods do not effectively handle the cell appearance variations. Hence, this study develops a novel classification technique called transfer learning convolution neural network with artificial bald eagle optimization (TL-CNN with ABEO) for the classification of uterine tissue. Here, preprocessing is done by the median filter, followed by image enhancement by the multiple identities representation network (MIRNet). Moreover, pelican crow search optimization (PCSO) is used for adapting weights in MIRNet, where PCSO is generated by combining the crow search algorithm (CSA) and pelican optimization algorithm (POA). Then, segmentation quality assessment (SQA) helps in tissue segmentation, and deep convolutional neural network (DCNN) helps in parameter selection that is trained by fractional PCSO (FPCSO). Furthermore, feature extraction is done and, finally, cell classification is done by TL with CNN, which is trained by the proposed ABEO algorithm. Here, ABEO is newly developed by the integration of the bald eagle search (BES) algorithm and artificial hummingbird algorithm (AHA). Furthermore, ABEO + TL-CNN achieved a high accuracy of 89.59%, a sensitivity of 90.25%, and a specificity of 89.89% by utilizing the cancer image archive dataset.
Affiliation(s)
- Veena I Patil
- Research scholar, Department of Computer Science and Engineering, Basaveshwar Engineering College, Visvesvaraya Technological University, Belagavi, India
- BLDEA's V. P. Dr. P. G. Halakatti College of Engineering & Technology, Vijayapura, India
- Shobha R Patil
- Information Science and Engineering, Basaveshwar Engineering College, Visvesvaraya Technological University, Belagavi, India
2
Wang L, Wang Z, Zhao B, Wang K, Zheng J, Zhao L. Diagnosis Test Accuracy of Artificial Intelligence for Endometrial Cancer: Systematic Review and Meta-Analysis. J Med Internet Res 2025; 27:e66530. [PMID: 40249940] [PMCID: PMC12048793] [DOI: 10.2196/66530]
Abstract
BACKGROUND: Endometrial cancer is one of the most common gynecological tumors, and early screening and diagnosis are crucial for its treatment. Research on the application of artificial intelligence (AI) in the diagnosis of endometrial cancer is increasing, but there is currently no comprehensive meta-analysis to evaluate the diagnostic accuracy of AI in screening for endometrial cancer.
OBJECTIVE: This paper presents a systematic review of AI-based endometrial cancer screening, which is needed to clarify its diagnostic accuracy and provide evidence for the application of AI technology in screening for endometrial cancer.
METHODS: A search was conducted across PubMed, Embase, Cochrane Library, Web of Science, and Scopus databases to include studies published in English, which evaluated the performance of AI in endometrial cancer screening. A total of 2 independent reviewers screened the titles and abstracts, and the quality of the selected studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. The certainty of the diagnostic test evidence was evaluated using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) system.
RESULTS: A total of 13 studies were included, and the hierarchical summary receiver operating characteristic model used for the meta-analysis showed that the overall sensitivity of AI-based endometrial cancer screening was 86% (95% CI 79%-90%) and specificity was 92% (95% CI 87%-95%). Subgroup analysis revealed similar results across AI type, study region, publication year, and study type, but the overall quality of evidence was low.
CONCLUSIONS: AI-based endometrial cancer screening can effectively detect patients with endometrial cancer, but large-scale population studies are needed in the future to further clarify the diagnostic accuracy of AI in screening for endometrial cancer.
TRIAL REGISTRATION: PROSPERO CRD42024519835; https://www.crd.york.ac.uk/PROSPERO/view/CRD42024519835.
Affiliation(s)
- Longyun Wang
- Department of Rehabilitation, School of Nursing, Jilin University, Changchun, China
- Zeyu Wang
- Department of Rehabilitation, School of Nursing, Jilin University, Changchun, China
- Bowei Zhao
- Department of Rehabilitation, School of Nursing, Jilin University, Changchun, China
- Kai Wang
- Department of Rehabilitation, School of Nursing, Jilin University, Changchun, China
- Jingying Zheng
- Department of Gynecology and Obstetrics, The Second Hospital of Jilin University, Changchun, China
- Lijing Zhao
- Department of Rehabilitation, School of Nursing, Jilin University, Changchun, China
3
Joshua A, Allen KE, Orsi NM. An Overview of Artificial Intelligence in Gynaecological Pathology Diagnostics. Cancers (Basel) 2025; 17:1343. [PMID: 40282519] [PMCID: PMC12025868] [DOI: 10.3390/cancers17081343]
Abstract
Background: The advent of artificial intelligence (AI) has revolutionised many fields in healthcare. More recently, it has garnered interest in terms of its potential applications in histopathology, where algorithms are increasingly being explored as adjunct technologies that can support pathologists in diagnosis, molecular typing and prognostication. While many research endeavours have focused on solid tumours, gynaecological malignancies have nevertheless been relatively overlooked. The aim of this review was therefore to provide a summary of the status quo in the field of AI in gynaecological pathology by encompassing malignancies throughout the entirety of the female reproductive tract rather than focusing on individual cancers.
Methods: This narrative/scoping review explores the potential application of AI in whole slide image analysis in gynaecological histopathology, drawing on both findings from the research setting (where such technologies largely remain confined), and highlights any findings and/or applications identified and developed in other cancers that could be translated to this arena.
Results: A particular focus is given to ovarian, endometrial, cervical and vulval/vaginal tumours. This review discusses different algorithms, their performance and potential applications.
Conclusions: The effective application of AI tools is only possible through multidisciplinary co-operation and training.
Affiliation(s)
- Anna Joshua
- Christian Medical College, Vellore 632004, Tamil Nadu, India
- Katie E. Allen
- Women’s Health Research Group, Leeds Institute of Cancer & Pathology, Wellcome Trust Brenner Building, St James’s University Hospital, Beckett Street, Leeds LS9 7TF, UK
- Nicolas M. Orsi
- Women’s Health Research Group, Leeds Institute of Cancer & Pathology, Wellcome Trust Brenner Building, St James’s University Hospital, Beckett Street, Leeds LS9 7TF, UK
4
Chen C, Mat Isa NA, Liu X. A review of convolutional neural network based methods for medical image classification. Comput Biol Med 2025; 185:109507. [PMID: 39631108] [DOI: 10.1016/j.compbiomed.2024.109507]
Abstract
This study systematically reviews CNN-based medical image classification methods. We surveyed 149 of the latest and most important papers published to date and conducted an in-depth analysis of the methods used therein. Based on the selected literature, we organized this review systematically. First, the development and evolution of CNN in the field of medical image classification are analyzed. Subsequently, we provide an in-depth overview of the main techniques of CNN applied to medical image classification, which is also the current research focus in this field, including data preprocessing, transfer learning, CNN architectures, and explainability, and their role in improving classification accuracy and efficiency. In addition, this overview summarizes the main public datasets for various diseases. Although CNN has great potential in medical image classification tasks and has achieved good results, clinical application is still difficult. Therefore, we conclude by discussing the main challenges faced by CNNs in medical image analysis and pointing out future research directions to address these challenges. This review will help researchers with their future studies and can promote the successful integration of deep learning into clinical practice and smart medical systems.
Affiliation(s)
- Chao Chen
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Yibin, 644000, China
- Nor Ashidi Mat Isa
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
- Xin Liu
- School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
5
Wang YL, Gao S, Xiao Q, Li C, Grzegorzek M, Zhang YY, Li XH, Kang Y, Liu FH, Huang DH, Gong TT, Wu QJ. Role of artificial intelligence in digital pathology for gynecological cancers. Comput Struct Biotechnol J 2024; 24:205-212. [PMID: 38510535] [PMCID: PMC10951449] [DOI: 10.1016/j.csbj.2024.03.007]
Abstract
The diagnosis of cancer is typically based on histopathological sections or biopsies on glass slides. Artificial intelligence (AI) approaches have greatly enhanced our ability to extract quantitative information from digital histopathology images amid a rapid growth in oncology data. Gynecological cancers are major diseases affecting women's health worldwide. They are characterized by high mortality and poor prognosis, underscoring the critical importance of early detection, treatment, and identification of prognostic factors. This review highlights the various clinical applications of AI in gynecological cancers using digitized histopathology slides. Particularly, deep learning models have shown promise in accurately diagnosing, classifying histopathological subtypes, and predicting treatment response and prognosis. Furthermore, the integration with transcriptomics, proteomics, and other multi-omics techniques can provide valuable insights into the molecular features of diseases. Despite the considerable potential of AI, substantial challenges remain. Further improvements in data acquisition and model optimization are required, and broader clinical applications, such as biomarker discovery, need to be explored.
Affiliation(s)
- Ya-Li Wang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Information Center, The Fourth Affiliated Hospital of China Medical University, Shenyang, China
- Song Gao
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Qian Xiao
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany
- Ying-Ying Zhang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Xiao-Han Li
- Department of Pathology, Shengjing Hospital of China Medical University, Shenyang, China
- Ye Kang
- Department of Pathology, Shengjing Hospital of China Medical University, Shenyang, China
- Fang-Hua Liu
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Dong-Hui Huang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Ting-Ting Gong
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Qi-Jun Wu
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Liaoning Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- NHC Key Laboratory of Advanced Reproductive Medicine and Fertility (China Medical University), National Health Commission, Shenyang, China
6
Fiaz A, Raza B, Faheem M, Raza A. A deep fusion-based vision transformer for breast cancer classification. Healthc Technol Lett 2024; 11:471-484. [PMID: 39720758] [PMCID: PMC11665795] [DOI: 10.1049/htl2.12093]
Abstract
Breast cancer is one of the most common causes of death in women in the modern world. Cancerous tissue detection in histopathological images relies on complex features related to tissue structure and staining properties. Convolutional neural network (CNN) models like ResNet50, Inception-V1, and VGG-16, while useful in many applications, cannot capture the patterns of cell layers and staining properties. Most previous approaches, such as stain normalization and instance-based vision transformers, either miss important features or do not process the whole image effectively. Therefore, a deep fusion-based vision Transformer model (DFViT) that combines CNNs and transformers for better feature extraction is proposed. DFViT captures local and global patterns more effectively by fusing RGB and stain-normalized images. Trained and tested on several datasets, such as BreakHis, breast cancer histology (BACH), and UCSC cancer genomics (UC), the results demonstrate outstanding accuracy, F1 score, precision, and recall, setting a new milestone in histopathological image analysis for diagnosing breast cancer.
Affiliation(s)
- Ahsan Fiaz
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Basit Raza
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Muhammad Faheem
- School of Technology and Innovations, University of Vaasa, Vaasa, Finland
- Aadil Raza
- Department of Physics, COMSATS University Islamabad (CUI), Islamabad, Pakistan
7
Pu X, Liu L, Zhou Y, Xu Z. Determination of the rat estrous cycle based on EfficientNet. Front Vet Sci 2024; 11:1434991. [PMID: 39119352] [PMCID: PMC11306968] [DOI: 10.3389/fvets.2024.1434991]
Abstract
In the field of biomedical research, rats are widely used as experimental animals due to their short gestation period and strong reproductive ability. Accurate monitoring of the estrous cycle is crucial for the success of experiments. Traditional methods are time-consuming and rely on the subjective judgment of professionals, which limits the efficiency and accuracy of experiments. This study proposes an EfficientNet model to automate the recognition of the estrous cycle of female rats using deep learning techniques. The model optimizes performance through systematic scaling of the network depth, width, and image resolution. A large dataset of physiological data from female rats was used for training and validation. The improved EfficientNet model effectively recognized different stages of the estrous cycle. The model demonstrated high-precision feature capture and significantly improved recognition accuracy compared to conventional methods. The proposed technique enhances experimental efficiency and reduces human error in recognizing the estrous cycle. This study highlights the potential of deep learning to optimize data processing and achieve high-precision recognition in biomedical research. Future work should focus on further validation with larger datasets and integration into experimental workflows.
Affiliation(s)
- Xiaodi Pu
- Reproductive Section, Huaihua City Maternal and Child Health Care Hospital, Huaihua, China
- Longyi Liu
- Shenyang Institute of Computing Technology, Chinese Academy of Sciences, Shenyang, China
- University of Chinese Academy of Sciences, Beijing, China
- Yonglai Zhou
- Reproductive Section, Huaihua City Maternal and Child Health Care Hospital, Huaihua, China
- Zihan Xu
- College of Biological Sciences, China Agricultural University, Beijing, China
8
Zheng Y, Wang H, Weng T, Li Q, Guo L. Application of convolutional neural network for differentiating ovarian thecoma-fibroma and solid ovarian cancer based on MRI. Acta Radiol 2024; 65:860-868. [PMID: 38751048] [DOI: 10.1177/02841851241252951]
Abstract
BACKGROUND: Ovarian thecoma-fibroma and solid ovarian cancer have similar clinical and imaging features, and it is difficult for radiologists to differentiate them. Since their treatment and prognosis differ, accurate characterization is crucial.
PURPOSE: To non-invasively differentiate ovarian thecoma-fibroma and solid ovarian cancer by convolutional neural network based on magnetic resonance imaging (MRI), and to provide the interpretability of the model.
MATERIAL AND METHODS: A total of 156 tumors, including 86 ovarian thecoma-fibroma and 70 solid ovarian cancer, were split into the training set, the validation set, and the test set according to the ratio of 8:1:1 by stratified random sampling. In this study, we used four different networks, two different weight modes, two different optimizers, and four different sizes of regions of interest (ROI) to test the model performance. This process was repeated 10 times to calculate the average performance of the test set. The gradient weighted class activation mapping (Grad-CAM) was used to explain how the model makes classification decisions by visual location map.
RESULTS: ResNet18, which had pre-trained weights, using Adam and one multiple ROI circumscribed rectangle, achieved the best performance. The average accuracy, precision, recall, and AUC were 0.852, 0.828, 0.848, and 0.919 (P < 0.01), respectively. Grad-CAM showed areas associated with classification appeared on the edge or interior of ovarian thecoma-fibroma and the interior of solid ovarian cancer.
CONCLUSION: This study shows that convolutional neural networks based on MRI can be helpful for radiologists in differentiating ovarian thecoma-fibroma and solid ovarian cancer.
Affiliation(s)
- Yuemei Zheng
- Department of Medical Imaging, Affiliated Hospital of Jining Medical University, Jining, PR China
- Hong Wang
- Department of Radiology, Tianjin First Central Hospital, Tianjin, PR China
- Tingting Weng
- School of Medical Imaging, Tianjin Medical University, Tianjin, PR China
- Qiong Li
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, PR China
- Li Guo
- School of Medical Imaging, Tianjin Medical University, Tianjin, PR China
9
Kitaya K, Yasuo T, Yamaguchi T. Bridging the Diagnostic Gap between Histopathologic and Hysteroscopic Chronic Endometritis with Deep Learning Models. Medicina (Kaunas) 2024; 60:972. [PMID: 38929589] [PMCID: PMC11205857] [DOI: 10.3390/medicina60060972]
Abstract
Chronic endometritis (CE) is an inflammatory pathologic condition of the uterine mucosa characterized by unusual infiltration of CD138(+) endometrial stromal plasmacytes (ESPCs). CE is often identified in infertile women with unexplained etiology, tubal factors, endometriosis, repeated implantation failure, and recurrent pregnancy loss. Diagnosis of CE has traditionally relied on endometrial biopsy and histopathologic/immunohistochemical detection of ESPCs. Endometrial biopsy, however, is a somewhat painful procedure for the subjects and does not allow us to grasp the whole picture of this mucosal tissue. Meanwhile, fluid hysteroscopy has been recently adopted as a less-invasive diagnostic modality for CE. We launched the ARCHIPELAGO (ARChival Hysteroscopic Image-based Prediction for histopathologic chronic Endometritis in infertile women using deep LeArninG mOdel) study to construct the hysteroscopic CE finding-based prediction tools for histopathologic CE. The development of these deep learning-based novel models and computer-aided detection/diagnosis systems potentially benefits infertile women suffering from this elusive disease.
Affiliation(s)
- Kotaro Kitaya
- Infertility Center, Iryouhoujin Kouseikai Mihara Hospital, 6-8 Kamikatsura Miyanogo-cho, Nishikyo-ku, Kyoto 615-8227, Japan
- Iryouhoujin Kouseikai Katsura-ekimae Mihara Clinic, 103 Katsura OS Plaza Building, 133 Katsura Minamitatsumi-cho, Nishikyo-ku, Kyoto 615-8074, Japan
- Tadahiro Yasuo
- Department of Obstetrics and Gynecology, Otsu City Hospital, 2-9-9 Motomiya, Otsu 520-0804, Japan
- Takeshi Yamaguchi
- Infertility Center, Daigo Watanabe Clinic, 30-15 Daigo Takahata-cho, Fushimi-ku, Kyoto 601-1375, Japan
10
McGenity C, Clarke EL, Jennings C, Matthews G, Cartlidge C, Freduah-Agyemang H, Stocken DD, Treanor D. Artificial intelligence in digital pathology: a systematic review and meta-analysis of diagnostic test accuracy. NPJ Digit Med 2024; 7:114. [PMID: 38704465] [PMCID: PMC11069583] [DOI: 10.1038/s41746-024-01106-8]
Abstract
Ensuring diagnostic performance of artificial intelligence (AI) before introduction into clinical practice is essential. Growing numbers of studies using AI for digital pathology have been reported over recent years. The aim of this work is to examine the diagnostic accuracy of AI in digital pathology images for any disease. This systematic review and meta-analysis included diagnostic accuracy studies using any type of AI applied to whole slide images (WSIs) for any disease. The reference standard was diagnosis by histopathological assessment and/or immunohistochemistry. Searches were conducted in PubMed, EMBASE and CENTRAL in June 2022. Risk of bias and concerns of applicability were assessed using the QUADAS-2 tool. Data extraction was conducted by two investigators and meta-analysis was performed using a bivariate random effects model, with additional subgroup analyses also performed. Of 2976 identified studies, 100 were included in the review and 48 in the meta-analysis. Studies were from a range of countries, including over 152,000 whole slide images (WSIs), representing many diseases. These studies reported a mean sensitivity of 96.3% (CI 94.1-97.7) and mean specificity of 93.3% (CI 90.5-95.4). There was heterogeneity in study design and 99% of studies identified for inclusion had at least one area at high or unclear risk of bias or applicability concerns. Details on selection of cases, division of model development and validation data and raw performance data were frequently ambiguous or missing. AI is reported as having high diagnostic accuracy in the reported areas but requires more rigorous evaluation of its performance.
Affiliation(s)
- Clare McGenity
- University of Leeds, Leeds, UK.
- Leeds Teaching Hospitals NHS Trust, Leeds, UK.
- Emily L Clarke
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Charlotte Jennings
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Darren Treanor
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Centre for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
11
Sierra-Jerez F, Martinez F. A non-aligned translation with a neoplastic classifier regularization to include vascular NBI patterns in standard colonoscopies. Comput Biol Med 2024; 170:108008. [PMID: 38277922] [DOI: 10.1016/j.compbiomed.2024.108008]
Abstract
Polyp vascular patterns are key to categorizing colorectal cancer malignancy. These patterns are typically observed in situ from specialized narrow-band images (NBI). Nonetheless, such vascular characterization is lost from standard colonoscopies (the primary attention mechanism). Besides, even for NBI observations, the categorization remains biased for expert observations, reporting errors in classification from 59.5% to 84.2%. This work introduces an end-to-end computational strategy to enhance in situ standard colonoscopy observations, including vascular patterns typically observed from NBI mechanisms. These retrieved synthetic images are achieved by adjusting a deep representation under a non-aligned translation task from optical colonoscopy (OC) to NBI. The introduced scheme includes an architecture to discriminate enhanced neoplastic patterns achieving a remarkable separation into the embedding representation. The proposed approach was validated in a public dataset with a total of 76 sequences, including standard optical sequences and the respective NBI observations. The enhanced optical sequences were automatically classified among adenomas and hyperplastic samples achieving an F1-score of 0.86%. To measure the sensitivity of the proposed approach, serrated samples were projected to the trained architecture. In this experiment, statistical differences from three classes with a ρ-value <0.05 were reported, following a Mann-Whitney U test. This work showed remarkable polyp discrimination results in enhancing OC sequences regarding typical NBI patterns. This method also learns polyp class distributions under the unpaired criteria (close to real practice), with the capability to separate serrated samples from adenomas and hyperplastic ones.
Affiliation(s)
- Franklin Sierra-Jerez
- Biomedical Imaging, Vision and Learning Laboratory (BIVL(2)ab), Universidad Industrial de Santander (UIS), Colombia
- Fabio Martinez
- Biomedical Imaging, Vision and Learning Laboratory (BIVL(2)ab), Universidad Industrial de Santander (UIS), Colombia
12
Zhang Y, Gao Y, Xu J, Zhao G, Shi L, Kong L. Unsupervised Joint Domain Adaptation for Decoding Brain Cognitive States From tfMRI Images. IEEE J Biomed Health Inform 2024; 28:1494-1503. [PMID: 38157464] [DOI: 10.1109/jbhi.2023.3348130]
Abstract
Recent advances in large models and neuroscience have enabled exploration of the mechanisms of brain activity by using neuroimaging data. Brain decoding is one of the most promising research directions for further understanding human cognitive function. However, current methods excessively depend on high-quality labeled data, which brings enormous expense for the collection and annotation of neural images by experts. Besides, the performance of cross-individual decoding suffers from inconsistency in data distribution caused by individual variation and different collection equipment. To address the issues mentioned above, a Joint Domain Adaptive Decoding (JDAD) framework is proposed for unsupervised decoding of specific brain cognitive states related to behavioral tasks. Based on the volumetric feature extraction from task-based functional Magnetic Resonance Imaging (tfMRI) data, a novel objective loss function is designed by the combination of joint distribution regularizers, which aims to restrict the distance of both the conditional and marginal probability distribution of labeled and unlabeled samples. Experimental results on the public Human Connectome Project (HCP) S1200 dataset show that JDAD achieves superior performance over other prevalent methods, especially for fine-grained tasks, with 11.5%-21.6% improvements in decoding accuracy. The learned 3D features are visualized by Grad-CAM to build a combination with brain functional regions, which provides a novel path to learn the function of brain cortex regions related to specific cognitive tasks at the group level.
13
Brandão M, Mendes F, Martins M, Cardoso P, Macedo G, Mascarenhas T, Mascarenhas Saraiva M. Revolutionizing Women's Health: A Comprehensive Review of Artificial Intelligence Advancements in Gynecology. J Clin Med 2024; 13:1061. [PMID: 38398374] [PMCID: PMC10889757] [DOI: 10.3390/jcm13041061]
Abstract
Artificial intelligence has yielded remarkably promising results in several medical fields, namely those with a strong imaging component. Gynecology relies heavily on imaging since it offers useful visual data on the female reproductive system, leading to a deeper understanding of pathophysiological concepts. The applicability of artificial intelligence technologies has not been as noticeable in gynecologic imaging as in other medical fields so far. However, due to growing interest in this area, some studies have been performed with exciting results. From urogynecology to oncology, artificial intelligence algorithms, particularly machine learning and deep learning, have shown huge potential to revolutionize the overall healthcare experience for women's reproductive health. In this review, we aim to establish the current status of AI in gynecology and the upcoming developments in this area, and to discuss the challenges facing its clinical implementation, namely the technological and ethical concerns regarding technology development, implementation, and accountability.
Affiliation(s)
- Marta Brandão
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Francisco Mendes
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Miguel Martins
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Pedro Cardoso
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Guilherme Macedo
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Teresa Mascarenhas
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
  - Department of Obstetrics and Gynecology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Miguel Mascarenhas Saraiva
  - Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
  - Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
  - WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
14
Vermorgen S, Gelton T, Bult P, Kusters-Vandevelde HVN, Hausnerová J, Van de Vijver K, Davidson B, Stefansson IM, Kooreman LFS, Qerimi A, Huvila J, Gilks B, Shahi M, Zomer S, Bartosch C, Pijnenborg JMA, Bulten J, Ciompi F, Simons M. Endometrial Pipelle Biopsy Computer-Aided Diagnosis: A Feasibility Study. Mod Pathol 2024; 37:100417. [PMID: 38154654] [DOI: 10.1016/j.modpat.2023.100417]
Abstract
Endometrial biopsies are important in the diagnostic workup of women who present with abnormal uterine bleeding or a hereditary risk of endometrial cancer. In general, approximately 10% of all endometrial biopsies demonstrate endometrial (pre)malignancy that requires specific treatment. As the diagnostic evaluation of mostly benign cases results in a substantial workload for pathologists, artificial intelligence (AI)-assisted preselection of biopsies could optimize the workflow. This study aimed to assess the feasibility of AI-assisted diagnosis for endometrial biopsies (endometrial Pipelle biopsy computer-aided diagnosis), trained on daily-practice whole-slide images instead of highly selected images. Endometrial biopsies were classified into 6 clinically relevant categories defined as follows: nonrepresentative, normal, nonneoplastic, hyperplasia without atypia, hyperplasia with atypia, and malignant. The agreement among 15 pathologists within these classifications was evaluated in 91 endometrial biopsies. Next, an algorithm (trained on a total of 2819 endometrial biopsies) rated the same 91 cases, and we compared its performance using the pathologists' classification as the reference standard. The interrater reliability among pathologists was moderate, with a mean Cohen's kappa of 0.51, whereas for a binary classification into benign vs (pre)malignant, the agreement was substantial, with a mean Cohen's kappa of 0.66. The AI algorithm performed slightly worse for the 6 categories, with a moderate Cohen's kappa of 0.43, but was comparable for the binary classification, with a substantial Cohen's kappa of 0.65. AI-assisted diagnosis of endometrial biopsies was demonstrated to be feasible in discriminating between benign and (pre)malignant endometrial tissue, even when trained on unselected cases. Endometrial premalignancies remain challenging for both pathologists and AI algorithms. Future steps to improve the reliability of the diagnosis are needed to achieve a more refined AI-assisted diagnostic solution for endometrial biopsies that covers both premalignant and malignant diagnoses.
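Cohen's kappa, used above to quantify interrater and algorithm-vs-pathologist agreement, corrects raw agreement for the agreement expected by chance; values of 0.41-0.60 are conventionally read as moderate and 0.61-0.80 as substantial. A minimal sketch of the statistic (hypothetical helper, not code from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between two raters over the same items."""
    n = len(rater_a)
    # raw proportion of items the raters agree on
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal label frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[l] * cb[l] for l in ca.keys() | cb.keys()) / (n * n)
    return (observed - expected) / (1 - expected)

# Binary benign (0) vs (pre)malignant (1) calls on six biopsies:
k = cohens_kappa([0, 0, 1, 1, 0, 1], [0, 0, 1, 0, 0, 1])  # ≈ 0.67, "substantial"
```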
Affiliation(s)
- Sanne Vermorgen
  - Department of Pathology, Radboudumc, Nijmegen, the Netherlands
- Thijs Gelton
  - Department of Pathology, Radboudumc, Nijmegen, the Netherlands
- Peter Bult
  - Department of Pathology, Radboudumc, Nijmegen, the Netherlands
- Jitka Hausnerová
  - Department of Pathology, University Hospital Brno, Brno, Czech Republic
- Ben Davidson
  - Department of Pathology, Oslo University Hospital, Norwegian Radium Hospital, Oslo, Norway; University of Oslo, Faculty of Medicine, Institute of Clinical Medicine, Oslo, Norway
- Ingunn Marie Stefansson
  - Centre for Cancer Biomarkers CCBIO, Department of Clinical Medicine, Section for Pathology, University of Bergen, Bergen, Norway; Department of Pathology, Haukeland University Hospital Bergen, Bergen, Norway
- Loes F S Kooreman
  - Department of Pathology, GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, the Netherlands
- Adelina Qerimi
  - Department of Pathology, ViraTherapeutics GmbH, Innsbruck, Austria
- Jutta Huvila
  - Department of Pathology, University of Turku, Turku University Hospital, Turku, Finland
- Blake Gilks
  - Department of Pathology, University of British Columbia, Vancouver, Canada
- Maryam Shahi
  - Department of Pathology, Mayo Clinic, Rochester, Minnesota
- Saskia Zomer
  - Department of Pathology, Canisius-Wilhelmina Hospital, Nijmegen, the Netherlands
- Carla Bartosch
  - Department of Pathology, Portuguese Oncology Institute Lisbon, Lisbon, Portugal
- Johan Bulten
  - Department of Pathology, Radboudumc, Nijmegen, the Netherlands
- Michiel Simons
  - Department of Pathology, Radboudumc, Nijmegen, the Netherlands
15
Jiang X, Feng C, Sun W, Feng L, Hao Y, Liu Q, Cui B. Enhancing clinical decision-making in endometrial cancer through deep learning technology: A review of current research. Digit Health 2024; 10:20552076241297053. [PMID: 39559386] [PMCID: PMC11571264] [DOI: 10.1177/20552076241297053]
Abstract
Endometrial cancer (EC), a growing malignancy among women, underscores an urgent need for early detection and intervention, critical for enhancing patient outcomes and survival rates. Traditional diagnostic approaches, including ultrasound (US), magnetic resonance imaging (MRI), hysteroscopy, and histopathology, have been essential in establishing robust diagnostic and prognostic frameworks for EC. These methods offer detailed insights into tumor morphology, vital for clinical decision-making. However, their analysis relies heavily on the expertise of radiologists and pathologists, a process that is not only time-consuming and labor-intensive but also prone to human error. The emergence of deep learning (DL) in computer vision has significantly transformed medical image analysis, presenting substantial potential for EC diagnosis. DL models, capable of autonomously learning and extracting complex features from imaging and histopathological data, have demonstrated remarkable accuracy in discriminating EC and stratifying patient prognoses. This review comprehensively examines and synthesizes the current literature on DL-based imaging techniques for EC diagnosis and management. It also aims to identify challenges faced by DL in this context and to explore avenues for its future development. Through these detailed analyses, our objective is to inform future research directions and promote the integration of DL into EC diagnostic and treatment strategies, thereby enhancing the precision and efficiency of clinical practice.
Affiliation(s)
- Xuji Jiang
  - Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China
- Chuanli Feng
  - Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China
- Wanying Sun
  - Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China
- Lianlian Feng
  - Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China
- Yiping Hao
  - Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China
- Qingqing Liu
  - Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China
- Baoxia Cui
  - Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China
16
Ushakov E, Naumov A, Fomberg V, Vishnyakova P, Asaturova A, Badlaeva A, Tregubova A, Karpulevich E, Sukhikh G, Fatkhudinov T. EndoNet: A Model for the Automatic Calculation of H-Score on Histological Slides. INFORMATICS 2023; 10:90. [DOI: 10.3390/informatics10040090]
Abstract
H-score is a semi-quantitative method used to assess the presence and distribution of proteins in tissue samples by combining the intensity of staining and the percentage of stained nuclei. It is widely used but time-consuming and can be limited in terms of accuracy and precision. Computer-aided methods may help overcome these limitations and improve the efficiency of pathologists' workflows. In this work, we developed EndoNet, a model for automatic H-score calculation on histological slides. Our proposed method uses neural networks and consists of two main parts. The first is a detection model that predicts keypoints at the centers of nuclei. The second is an H-score module that calculates the H-score from the mean pixel values at the predicted keypoints. Our model was trained and validated on 1780 annotated tiles of 100 × 100 µm, and we achieved 0.77 mAP on a test dataset. Our best results were obtained in H-score calculation, where the model proved superior to QuPath predictions. Moreover, the model can be adjusted to a specific specialist or a whole laboratory to reproduce their manner of calculating the H-score. Thus, EndoNet is effective and robust in the analysis of histology slides and can significantly accelerate the work of pathologists.
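The H-score combines staining intensity (scored 0-3 per nucleus) with the percentage of nuclei at each intensity, H = 1×(% weak) + 2×(% moderate) + 3×(% strong), giving a value in [0, 300]. A minimal sketch of the final calculation (in EndoNet the per-bin counts would come from the detected keypoints; the helper below is hypothetical):

```python
def h_score(intensity_counts):
    """intensity_counts maps an intensity bin (0..3) to a number of nuclei.
    Returns H-score = sum_i i * (percent of nuclei at intensity i), in [0, 300]."""
    total = sum(intensity_counts.values())
    if total == 0:
        raise ValueError("no nuclei detected")
    return sum(i * 100.0 * intensity_counts.get(i, 0) / total for i in (1, 2, 3))

# 20% negative, 30% weak, 30% moderate, 20% strong staining:
score = h_score({0: 20, 1: 30, 2: 30, 3: 20})  # 30 + 60 + 60 = 150.0
```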
Affiliation(s)
- Egor Ushakov
  - Information Systems Department, Ivannikov Institute for System Programming of the Russian Academy of Sciences (ISP RAS), 109004 Moscow, Russia
- Anton Naumov
  - Information Systems Department, Ivannikov Institute for System Programming of the Russian Academy of Sciences (ISP RAS), 109004 Moscow, Russia
- Vladislav Fomberg
  - Information Systems Department, Ivannikov Institute for System Programming of the Russian Academy of Sciences (ISP RAS), 109004 Moscow, Russia
- Polina Vishnyakova
  - FSBI “National Medical Research Centre for Obstetrics, Gynecology and Perinatology Named after Academician V.I.Kulakov”, Ministry of Health of the Russian Federation, 4, Oparina Street, 117997 Moscow, Russia
  - Research Institute of Molecular and Cellular Medicine, Peoples’ Friendship University of Russia (RUDN University), Miklukho-Maklaya Street 6, 117198 Moscow, Russia
- Aleksandra Asaturova
  - FSBI “National Medical Research Centre for Obstetrics, Gynecology and Perinatology Named after Academician V.I.Kulakov”, Ministry of Health of the Russian Federation, 4, Oparina Street, 117997 Moscow, Russia
- Alina Badlaeva
  - FSBI “National Medical Research Centre for Obstetrics, Gynecology and Perinatology Named after Academician V.I.Kulakov”, Ministry of Health of the Russian Federation, 4, Oparina Street, 117997 Moscow, Russia
- Anna Tregubova
  - FSBI “National Medical Research Centre for Obstetrics, Gynecology and Perinatology Named after Academician V.I.Kulakov”, Ministry of Health of the Russian Federation, 4, Oparina Street, 117997 Moscow, Russia
- Evgeny Karpulevich
  - Information Systems Department, Ivannikov Institute for System Programming of the Russian Academy of Sciences (ISP RAS), 109004 Moscow, Russia
- Gennady Sukhikh
  - FSBI “National Medical Research Centre for Obstetrics, Gynecology and Perinatology Named after Academician V.I.Kulakov”, Ministry of Health of the Russian Federation, 4, Oparina Street, 117997 Moscow, Russia
- Timur Fatkhudinov
  - Research Institute of Molecular and Cellular Medicine, Peoples’ Friendship University of Russia (RUDN University), Miklukho-Maklaya Street 6, 117198 Moscow, Russia
  - Avtsyn Research Institute of Human Morphology, Federal State Budgetary Scientific Institution “Petrovsky National Research Centre of Surgery”, 3 Tsurupa Street, 117418 Moscow, Russia
17
Fang Y, Wei Y, Liu X, Qin L, Gao Y, Yu Z, Xu X, Cha G, Zhu X, Wang X, Xu L, Cao L, Chen X, Jiang H, Zhang C, Zhou Y, Zhu J. A self-supervised classification model for endometrial diseases. J Cancer Res Clin Oncol 2023; 149:17855-17863. [PMID: 37947870] [PMCID: PMC10725391] [DOI: 10.1007/s00432-023-05467-7]
Abstract
PURPOSE Ultrasound imaging is the preferred method for the early diagnosis of endometrial diseases because of its non-invasive nature, low cost, and real-time imaging features. However, the accurate evaluation of ultrasound images relies heavily on the experience of radiologists. Therefore, a stable and objective computer-aided diagnostic model is crucial to assist radiologists in diagnosing endometrial lesions. METHODS Transvaginal ultrasound images were collected from multiple hospitals in Quzhou city, Zhejiang province. The dataset comprised 1875 images from 734 patients, including cases of endometrial polyps, hyperplasia, and cancer. Here, we proposed a based self-supervised endometrial disease classification model (BSEM), which learns a joint unified task (raw and self-supervised tasks) and applies self-distillation techniques and ensemble strategies to aid doctors in diagnosing endometrial diseases. RESULTS The performance of BSEM was evaluated using fivefold cross-validation. The experimental results indicated that the BSEM model achieved satisfactory performance across indicators, with scores of 75.1%, 87.3%, 76.5%, 73.4%, and 74.1% for accuracy, area under the curve, precision, recall, and F1 score, respectively. Furthermore, compared to the baseline models ResNet, DenseNet, VGGNet, ConvNeXt, VIT, and CMT, the BSEM model improved accuracy, area under the curve, precision, recall, and F1 score by 3.3-7.9%, 3.2-7.3%, 3.9-8.5%, 3.1-8.5%, and 3.3-9.0%, respectively. CONCLUSION The BSEM model is an auxiliary diagnostic tool for the early detection of endometrial diseases revealed by ultrasound and helps radiologists be accurate and efficient when screening for precancerous endometrial lesions.
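The fivefold cross-validation used above to evaluate BSEM partitions the data into five folds and rotates one fold out as the validation set. A minimal index-splitting sketch (hypothetical helper, not the authors' code):

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Split range(n) into k roughly equal folds; yields (train, val) index lists."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)        # deterministic shuffle for reproducibility
    folds = [idx[i::k] for i in range(k)]   # stride through the shuffled indices
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val
```

Each sample appears in exactly one validation fold, so the five per-fold scores can be averaged into the reported metrics.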
Affiliation(s)
- Yun Fang
  - Quzhou People's Hospital, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou, 324000, Zhejiang, China
- Yanmin Wei
  - Tianjin Normal University, Tianjin, 300387, China
- Xiaoying Liu
  - Quzhou People's Hospital, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou, 324000, Zhejiang, China
- Liufeng Qin
  - Quzhou People's Hospital, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou, 324000, Zhejiang, China
- Yunxia Gao
  - The Second People's Hospital of Quzhou, Quzhou, 324000, Zhejiang, China
- Zhengjun Yu
  - Kaihua County People's Hospital, Quzhou, 324300, Zhejiang, China
- Xia Xu
  - Changshan County People's Hospital, Quzhou, 324200, Zhejiang, China
- Guofen Cha
  - People's Hospital of Quzhou Kecheng, Quzhou, 324000, Zhejiang, China
- Xuehua Zhu
  - Quzhou Maternal and Child Health Care Hospital, Quzhou, 324000, Zhejiang, China
- Xue Wang
  - Quzhou People's Hospital, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou, 324000, Zhejiang, China
- Lijuan Xu
  - Quzhou People's Hospital, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou, 324000, Zhejiang, China
- Lulu Cao
  - Quzhou People's Hospital, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou, 324000, Zhejiang, China
- Xiangrui Chen
  - Changshan County People's Hospital, Quzhou, 324200, Zhejiang, China
- Haixia Jiang
  - Kaihua County People's Hospital, Quzhou, 324300, Zhejiang, China
- Chaozhen Zhang
  - People's Hospital of Quzhou Kecheng, Quzhou, 324000, Zhejiang, China
- Yuwang Zhou
  - Quzhou People's Hospital, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou, 324000, Zhejiang, China
- Jinqi Zhu
  - Tianjin Normal University, Tianjin, 300387, China
18
Zhao F, Wang Z, Du H, He X, Cao X. Self-Supervised Triplet Contrastive Learning for Classifying Endometrial Histopathological Images. IEEE J Biomed Health Inform 2023; 27:5970-5981. [PMID: 37698968] [DOI: 10.1109/jbhi.2023.3314663]
Abstract
Early identification of endometrial cancer or precancerous lesions from histopathological images is crucial for precise endometrial medical care, which, however, is increasingly hampered by the relative scarcity of pathologists. Computer-aided diagnosis (CAD) provides an automated alternative for confirming endometrial diseases with either feature-engineered machine learning or end-to-end deep learning (DL). In particular, advanced self-supervised learning alleviates the dependence of supervised learning on large-scale human-annotated data and can be used to pre-train DL models for specific classification tasks. We therefore develop a novel self-supervised triplet contrastive learning (SSTCL) model for classifying endometrial histopathological images. Specifically, this model consists of one online branch and two target branches. The second target branch includes a simple yet powerful augmentation module named random mosaic masking (RMM), which functions as an effective regularization by mapping the features of masked images close to those of intact ones. Moreover, we add a bottleneck Transformer (BoT) to each branch as a self-attention module to learn global information by considering both content information and the relative distances between features at different locations. On a public endometrial dataset, our model achieved four-class classification accuracies of 77.31 ± 0.84%, 80.87 ± 0.48%, and 83.22 ± 0.87% using 20%, 50%, and 100% of labeled images, respectively. When transferred to the in-house dataset, our model obtained a three-class diagnostic accuracy of 96.81% with a 95% confidence interval of 95.61-98.02%. On both datasets, our model outperformed state-of-the-art supervised and self-supervised methods. Our model may help pathologists automatically diagnose endometrial diseases with high accuracy and efficiency using limited human-annotated histopathological images.
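The random mosaic masking (RMM) augmentation described above masks random tiles of the input so the network learns to map masked features close to intact ones. A toy sketch on a 2D grid (parameter names and defaults are hypothetical; the paper's RMM operates on real image tensors):

```python
import random

def random_mosaic_mask(img, tile=2, p=0.5, seed=0):
    """Zero out each tile-by-tile patch of a 2D image with probability p.
    A toy stand-in for the RMM augmentation described in the abstract."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]           # copy so the input stays intact
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            if rng.random() < p:            # mask this tile
                for i in range(r, min(r + tile, h)):
                    for j in range(c, min(c + tile, w)):
                        out[i][j] = 0
    return out
```

A contrastive objective would then pull the features of `random_mosaic_mask(img)` toward those of the unmasked `img`.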
19
Jiang Y, Wang C, Zhou S. Artificial intelligence-based risk stratification, accurate diagnosis and treatment prediction in gynecologic oncology. Semin Cancer Biol 2023; 96:82-99. [PMID: 37783319] [DOI: 10.1016/j.semcancer.2023.09.005]
Abstract
As a data-driven science, artificial intelligence (AI) has paved a promising path toward an evolving health system teeming with thrilling opportunities for precision oncology. Notwithstanding the tremendous success of oncological AI in fields such as lung carcinoma, breast tumor and brain malignancy, less attention has been devoted to investigating the influence of AI on gynecologic oncology. Hence, this review sheds light on the ever-increasing contribution of state-of-the-art AI techniques to the refined risk stratification and whole-course management of patients with gynecologic tumors, in particular cervical, ovarian and endometrial cancer, centering on information and features extracted from clinical data (electronic health records), cancer imaging including radiological imaging, colposcopic images, cytological and histopathological digital images, and molecular profiling (genomics, transcriptomics, metabolomics and so forth). However, noteworthy challenges remain beyond performance validation. Thus, this work further describes the limitations and challenges faced in the real-world implementation of AI models, as well as potential solutions to address these issues.
Affiliation(s)
- Yuting Jiang
  - Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China
  - Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Chengdi Wang
  - Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China
  - Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Shengtao Zhou
  - Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China
  - Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
20
Li M, Hu Z, Qiu S, Zhou C, Weng J, Dong Q, Sheng X, Ren N, Zhou M. Dual-branch hybrid encoding embedded network for histopathology image classification. Phys Med Biol 2023; 68:195002. [PMID: 37647919] [DOI: 10.1088/1361-6560/acf556]
Abstract
Objective. Learning-based histopathology image (HI) classification methods serve as important tools for auxiliary diagnosis in the prognosis stage. However, most existing methods focus on a single target cancer due to inter-domain differences among cancer types, limiting their applicability across cancer types. To overcome these limitations, this paper presents a high-performance HI classification method that aims to address inter-domain differences and provide an improved solution for reliable and practical HI classification. Approach. Firstly, we collect a high-quality hepatocellular carcinoma (HCC) dataset with enough data to verify the stability and practicability of the method. Secondly, a novel dual-branch hybrid encoding embedded network is proposed, which integrates the feature extraction capabilities of a convolutional neural network and a Transformer. This well-designed structure enables the network to extract diverse features while minimizing redundancy from a single complex network. Lastly, we develop a salient-area constraint loss function tailored to the unique characteristics of HIs to address inter-domain differences and enhance the robustness and universality of the method. Main results. Extensive experiments have been conducted on the proposed HCC dataset and two other publicly available datasets. The proposed method demonstrates outstanding performance with an impressive accuracy of 99.09% on the HCC dataset and achieves state-of-the-art results on the other two public datasets. These remarkable outcomes underscore the superior performance and versatility of our approach in multiple HI classification tasks. Significance. The advancements presented in this study contribute to the field of HI analysis by providing a reliable and practical solution for multiple cancer classification, potentially improving diagnostic accuracy and patient outcomes. Our code is available at https://github.com/lms-design/DHEE-net.
Affiliation(s)
- Mingshuai Li
  - Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, 200241, People's Republic of China
- Zhiqiu Hu
  - Department of Hepatobiliary and Pancreatic Surgery, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
  - Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
  - Institute of Fudan-Minhang Academic Health System, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Song Qiu
  - Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, 200241, People's Republic of China
  - MOE Engineering Research Center of Software/Hardware Co-design Technology and Application, East China Normal University, Shanghai, 200241, People's Republic of China
- Chenhao Zhou
  - Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
  - Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, Fudan University, Key Laboratory of Carcinogenesis and Cancer Invasion, Ministry of Education, Shanghai, 200032, People's Republic of China
- Jialei Weng
  - Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
  - Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, Fudan University, Key Laboratory of Carcinogenesis and Cancer Invasion, Ministry of Education, Shanghai, 200032, People's Republic of China
- Qiongzhu Dong
  - Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
  - Institute of Fudan-Minhang Academic Health System, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Xia Sheng
  - Department of Pathology, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Ning Ren
  - Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
  - Institute of Fudan-Minhang Academic Health System, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
  - Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, Fudan University, Key Laboratory of Carcinogenesis and Cancer Invasion, Ministry of Education, Shanghai, 200032, People's Republic of China
- Mei Zhou
  - Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, 200241, People's Republic of China
21
Piedimonte S, Rosa G, Gerstl B, Sopocado M, Coronel A, Lleno S, Vicus D. Evaluating the use of machine learning in endometrial cancer: a systematic review. Int J Gynecol Cancer 2023; 33:1383-1393. [PMID: 37666535] [DOI: 10.1136/ijgc-2023-004622]
Abstract
OBJECTIVE To review the literature on machine learning in endometrial cancer, report the most commonly used algorithms, and compare performance with traditional prediction models. METHODS This is a systematic review of the literature from January 1985 to March 2021 on the use of machine learning in endometrial cancer. An extensive search of electronic databases was conducted. Four independent reviewers screened studies initially by title then full text. Quality was assessed using the MINORS (Methodological Index for Non-Randomized Studies) criteria. P values were derived using Pearson's χ² test in JMP 15.0. RESULTS Among 4295 articles screened, 30 studies on machine learning in endometrial cancer were included. The most frequent applications were in patient datasets (33.3%, n=10), pre-operative diagnostics (30%, n=9), genomics (23.3%, n=7), and serum biomarkers (13.3%, n=4). The most commonly used models were neural networks (n=10, 33.3%) and support vector machines (n=6, 20%). The number of publications on machine learning in endometrial cancer increased from 1 in 2010 to 29 in 2021. Eight studies compared machine learning with traditional statistics. Among patient dataset studies, two machine learning models (20%) performed similarly to logistic regression (accuracy: 0.85 vs 0.82, p=0.16). Machine learning algorithms performed similarly to detect endometrial cancer based on MRI (accuracy: 0.87 vs 0.82, p=0.24) while outperforming traditional methods in predicting extra-uterine disease in one serum biomarker study (accuracy: 0.81 vs 0.61). For survival outcomes, one study compared machine learning with Kaplan-Meier analysis and reported no difference in concordance index (83.8% vs 83.1%). CONCLUSION Although machine learning is an innovative and emerging technology, its performance is similar to that of traditional regression models in endometrial cancer. More studies are needed to assess its role in endometrial cancer. PROSPERO REGISTRATION NUMBER CRD42021269565.
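The p values above come from Pearson's chi-squared test; for comparing two classifiers' correct/incorrect counts, the 2×2 statistic has the closed form χ² = n(ad − bc)² / ((a+b)(c+d)(a+c)(b+d)). A minimal sketch (no continuity correction; helper name hypothetical):

```python
def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 table [[a, b], [c, d]],
    e.g. a/b = correct/incorrect counts for model 1, c/d = for model 2."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        raise ValueError("degenerate table: a zero row or column total")
    return n * (a * d - b * c) ** 2 / denom

# Identical accuracies give no evidence of a difference:
stat = pearson_chi2_2x2(85, 15, 85, 15)  # 0.0
```

The statistic would then be compared against the chi-squared distribution with 1 degree of freedom to obtain a p value.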
Affiliation(s)
- Sabrina Piedimonte
- Department of Gynecologic Oncology, University of Toronto, Toronto, Ontario, Canada
- Brigitte Gerstl
- The Rosa Institute, Sydney, New South Wales, Australia
- The Kirby Institute, University of New South Wales, Sydney, New South Wales, Australia
- Mars Sopocado
- The Rosa Institute, Sydney, New South Wales, Australia
- Ana Coronel
- The Rosa Institute, Sydney, New South Wales, Australia
- Danielle Vicus
- Department of Gynecologic Oncology, University of Toronto, Toronto, Ontario, Canada
- Department of Gynecologic Oncology, Sunnybrook Health Sciences, Toronto, Ontario, Canada
22
Brady LM, Rombokas E, Wang YN, Shofer JB, Ledoux WR. The effect of diabetes and tissue depth on adipose chamber size and plantar soft tissue features. Foot (Edinb) 2023; 56:101989. [PMID: 36905794 PMCID: PMC10450093 DOI: 10.1016/j.foot.2023.101989] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Revised: 02/19/2023] [Accepted: 02/23/2023] [Indexed: 03/13/2023]
Abstract
BACKGROUND Plantar ulceration is a serious complication of diabetes. However, the mechanism of injury initiating ulceration remains unclear. The unique structure of the plantar soft tissue includes superficial and deep layers of adipocytes contained in septal chambers; however, the size of these chambers has not been quantified in diabetic or non-diabetic tissue. Computer-aided methods can be leveraged to guide microstructural measurements and differences with disease status. METHODS Adipose chambers in whole slide images of diabetic and non-diabetic plantar soft tissue were segmented with a pre-trained U-Net and area, perimeter, and minimum and maximum diameter of adipose chambers were measured. Whole slide images were classified as diabetic or non-diabetic using the Axial-DeepLab network, and the attention layer was overlaid on the input image for interpretation. RESULTS Non-diabetic deep chambers were 90 %, 41 %, 34 %, and 39 % larger in area (26,954 ± 2428 µm2 vs 14,157 ± 1153 µm2), maximum (277 ± 13 µm vs 197 ± 8 µm) and minimum (140 ± 6 µm vs 104 ± 4 µm) diameter, and perimeter (405 ± 19 µm vs 291 ± 12 µm), respectively, than the superficial chambers (p < 0.001). However, there was no significant difference in these parameters in diabetic specimens (area 18,695 ± 2576 µm2 vs 16,627 ± 130 µm2, maximum diameter 221 ± 16 µm vs 210 ± 14 µm, minimum diameter 121 ± 8 µm vs 114 ± 7 µm, perimeter 341 ± 24 µm vs 320 ± 21 µm). Between diabetic and non-diabetic chambers, only the maximum diameter of the deep chambers differed (221 ± 16 µm vs 277 ± 13 µm). The attention network achieved 82 % accuracy on validation, but the attention resolution was too coarse to identify meaningful additional measurements. CONCLUSIONS Adipose chamber size differences may provide a basis for plantar soft tissue mechanical changes with diabetes. Attention networks are promising tools for classification, but additional care is required when designing networks for identifying novel features.
DATA AVAILABILITY All images, analysis code, data, and/or other resources required to replicate this work are available from the corresponding author upon reasonable request.
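The per-chamber measurements described above (area and minimum/maximum diameter) can be sketched on a binary segmentation mask with plain NumPy. This toy version uses axis-aligned extents as diameter stand-ins, a simplification of the study's actual morphometry; the mask and pixel size below are hypothetical.

```python
import numpy as np

def chamber_metrics(mask, um_per_px=1.0):
    """Area and rough min/max diameters of a single segmented chamber.

    mask: 2-D boolean array, True inside the chamber.
    Diameters here are axis-aligned extents of the chamber's bounding box,
    a simplification of true min/max Feret diameters."""
    ys, xs = np.nonzero(mask)
    area = mask.sum() * um_per_px ** 2
    height = (ys.max() - ys.min() + 1) * um_per_px
    width = (xs.max() - xs.min() + 1) * um_per_px
    return {"area": float(area),
            "max_diameter": float(max(height, width)),
            "min_diameter": float(min(height, width))}

# Toy 4x6-pixel rectangular "chamber" at an assumed 2 um per pixel.
mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 2:8] = True
m = chamber_metrics(mask, um_per_px=2.0)
```

In practice a U-Net output would first be thresholded and split into connected components, with one such measurement per component.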
Affiliation(s)
- Lynda M Brady
- VA RR&D Center for Limb Loss and MoBility, Seattle, WA 98108, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
- Eric Rombokas
- Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
- Yak-Nam Wang
- VA RR&D Center for Limb Loss and MoBility, Seattle, WA 98108, USA; Center for Industrial and Medical Ultrasound, Applied Physics Laboratory, University of Washington, Seattle, WA 98195, USA
- Jane B Shofer
- VA RR&D Center for Limb Loss and MoBility, Seattle, WA 98108, USA
- William R Ledoux
- VA RR&D Center for Limb Loss and MoBility, Seattle, WA 98108, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA; Department of Orthopaedics & Sports Medicine, University of Washington, Seattle, WA 98195, USA.
23
Shen L, Du L, Hu Y, Chen X, Hou Z, Yan Z, Wang X. MRI-based radiomics model for distinguishing Stage I endometrial carcinoma from endometrial polyp: a multicenter study. Acta Radiol 2023; 64:2651-2658. [PMID: 37291882 DOI: 10.1177/02841851231175249] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
BACKGROUND Patients with early endometrial carcinoma (EC) have a good prognosis, but it is difficult to distinguish from endometrial polyps (EPs). PURPOSE To develop and assess magnetic resonance imaging (MRI)-based radiomics models for discriminating Stage I EC from EP in a multicenter setting. MATERIAL AND METHODS Patients with Stage I EC (n = 202) and EP (n = 99) who underwent preoperative MRI scans were collected in three centers (seven devices). The images from devices 1-3 were utilized for training and validation, and the images from devices 4-7 were utilized for testing, leading to three models. They were evaluated by the area under the receiver operating characteristic curve (AUC) and metrics including accuracy, sensitivity, and specificity. Two radiologists evaluated the endometrial lesions and were compared with the three models. RESULTS The AUCs of device 1, 2_ada, device 1, 3_ada, and device 2, 3_ada for discriminating Stage I EC from EP were 0.951, 0.912, and 0.896 for the training set, 0.755, 0.928, and 1.000 for the validation set, and 0.883, 0.956, and 0.878 for the external validation set, respectively. The specificity of the three models was higher, but the accuracy and sensitivity were lower, than those of the radiologists. CONCLUSION Our MRI-based models showed good potential in differentiating Stage I EC from EP and were validated in multiple centers. Their specificity was higher than that of the radiologists, and they may be used for computer-aided diagnosis in the future to assist clinical diagnosis.
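The evaluation metrics used in this radiomics study (AUC plus accuracy, sensitivity, and specificity at a decision threshold) can be sketched with scikit-learn; the labels and scores below are toy values, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate(y_true, y_score, threshold=0.5):
    """AUC plus accuracy/sensitivity/specificity at a fixed threshold."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    # labels=[0, 1] fixes the matrix layout so ravel() is (tn, fp, fn, tp).
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "auc": roc_auc_score(y_true, y_score),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy scores: 1 = Stage I EC, 0 = endometrial polyp (illustrative only).
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_score = [0.1, 0.3, 0.2, 0.6, 0.7, 0.8, 0.4, 0.9]
metrics = evaluate(y_true, y_score)
```

Reporting all four numbers, as the study does, exposes the sensitivity/specificity trade-off that a single accuracy figure hides.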
Affiliation(s)
- Liting Shen
- Department of Radiology, the Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, PR China
- Lixin Du
- Department of Medical Imaging, Shenzhen Longhua District Central Hospital, Shenzhen, PR China
- Yumin Hu
- Department of Radiology, Lishui Central Hospital, Zhejiang, PR China
- Xiaojun Chen
- Department of Radiology, Affiliated Jinhua Hospital, Zhejiang University School of Medicine, Jinhua, PR China
- Zujun Hou
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, PR China
- Zhihan Yan
- Department of Radiology, the Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, PR China
- Xue Wang
- Department of Radiology, the Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, PR China
24
Fu J, He B, Yang J, Liu J, Ouyang A, Wang Y. CDRNet: Cascaded dense residual network for grayscale and pseudocolor medical image fusion. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 234:107506. [PMID: 37003041 DOI: 10.1016/j.cmpb.2023.107506] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/04/2022] [Revised: 03/18/2023] [Accepted: 03/22/2023] [Indexed: 06/19/2023]
Abstract
OBJECTIVE Multimodal medical fusion images have been widely used in clinical medicine, computer-aided diagnosis and other fields. However, the existing multimodal medical image fusion algorithms generally have shortcomings such as complex calculations, blurred details and poor adaptability. To solve this problem, we propose a cascaded dense residual network and use it for grayscale and pseudocolor medical image fusion. METHODS The cascaded dense residual network uses a multiscale dense network and a residual network as the basic network architecture, and a multilevel converged network is obtained through cascade. The cascaded dense residual network contains 3 networks: the first-level network inputs two images with different modalities to obtain a fused Image 1; the second-level network uses fused Image 1 as the input image to obtain fused Image 2; and the third-level network uses fused Image 2 as the input image to obtain fused Image 3. The multimodal medical image is trained through each level of the network, and the output fusion image is enhanced step-by-step. RESULTS As the number of networks increases, the fusion image becomes increasingly clearer. Through numerous fusion experiments, the fused images of the proposed algorithm have higher edge strength, richer details, and better performance in the objective indicators than the reference algorithms. CONCLUSION Compared with the reference algorithms, the proposed algorithm better preserves the original information, with higher edge strength, richer details, and an improvement in the four objective indicator metrics (SF, AG, MZ, and EN).
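Two of the objective fusion metrics named in the conclusion have standard definitions that can be sketched in NumPy: EN (Shannon entropy of the grey-level histogram) and AG (average gradient). These are the common textbook formulations and may differ in detail from the paper's exact implementation; the random "fused" image is a placeholder.

```python
import numpy as np

def entropy(img):
    """EN: Shannon entropy (bits) of an 8-bit image's grey-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """AG: mean magnitude of local intensity change (one common definition)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))

# Placeholder fused image; a real evaluation would use the network output.
rng = np.random.default_rng(0)
fused = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
en, ag = entropy(fused), average_gradient(fused)
```

Higher EN indicates more information content in the fused image, and higher AG indicates sharper detail, which is why both rise as fusion quality improves.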
Affiliation(s)
- Jun Fu
- School of Information Engineering, Zunyi Normal University, Zunyi, Guizhou, 563006, China.
- Baiqing He
- Nanchang Institute of Technology, Nanchang, Jiangxi, 330044, China
- Jie Yang
- School of Information Engineering, Zunyi Normal University, Zunyi, Guizhou, 563006, China
- Jianpeng Liu
- School of Science, East China Jiaotong University, Nanchang, Jiangxi, 330013, China
- Aijia Ouyang
- School of Information Engineering, Zunyi Normal University, Zunyi, Guizhou, 563006, China
- Ya Wang
- School of Information Engineering, Zunyi Normal University, Zunyi, Guizhou, 563006, China
25
Li YX, Chen F, Shi JJ, Huang YL, Wang M. Convolutional Neural Networks for Classifying Cervical Cancer Types Using Histological Images. J Digit Imaging 2023; 36:441-449. [PMID: 36474087 PMCID: PMC10039125 DOI: 10.1007/s10278-022-00722-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 09/12/2022] [Accepted: 10/17/2022] [Indexed: 12/12/2022] Open
Abstract
Cervical cancer is the most common cancer among women worldwide. The diagnosis and classification of cancer are extremely important, as they influence the optimal treatment and length of survival. The objective was to develop and validate a diagnosis system based on convolutional neural networks (CNN) that identifies cervical malignancies and provides diagnostic interpretability. A total of 8496 labeled histology images were extracted from 229 cervical specimens (cervical squamous cell carcinoma, SCC, n = 37; cervical adenocarcinoma, AC, n = 8; nonmalignant cervical tissues, n = 184). AlexNet, VGG-19, Xception, and ResNet-50 with five-fold cross-validation were constructed to distinguish cervical cancer images from nonmalignant images. The performance of the CNNs was quantified in terms of accuracy, precision, recall, and the area under the receiver operating curve (AUC). Six pathologists were recruited to make a comparison with the performance of the CNNs. Guided Backpropagation and Gradient-weighted Class Activation Mapping (Grad-CAM) were deployed to highlight the areas of high malignant probability. The Xception model had excellent performance in identifying cervical SCC and AC in test sets. For cervical SCC, AUC was 0.98 (internal validation) and 0.974 (external validation). For cervical AC, AUC was 0.966 (internal validation) and 0.958 (external validation). The performance of the CNNs fell between that of experienced and inexperienced pathologists. Grad-CAM and Guided Grad-CAM ensured diagnostic interpretability by highlighting morphological features of malignant changes. CNN is efficient for histological image classification tasks of distinguishing cervical malignancies from benign tissues and could highlight the specific areas of concern. All these findings suggest that CNNs could serve as a diagnostic tool to aid pathologic diagnosis.
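The five-fold cross-validation protocol used to compare the CNN architectures can be sketched with scikit-learn. As a minimal stand-in, a logistic regression on synthetic features replaces the CNN on images; everything here (data, model, fold count) is illustrative, with only the stratified five-fold scheme taken from the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Synthetic features stand in for image inputs to a CNN.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

aucs = []
# Stratified folds keep the class ratio stable in every split, which matters
# when one class (e.g., adenocarcinoma) is rare.
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                           random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

mean_auc = float(np.mean(aucs))
```

Averaging AUC over the five held-out folds gives a less optimistic estimate than a single train/test split, the rationale for using cross-validation in the study.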
Affiliation(s)
- Yi-Xin Li
- Department of Obstetrics and Gynecology, Xinhua Hospital Chongming Branch, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Feng Chen
- Department of Pathology, Xinhua Hospital Chongming Branch, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jiao-Jiao Shi
- Department of Obstetrics and Gynecology, Xinhua Hospital Chongming Branch, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yu-Li Huang
- Department of Obstetrics and Gynecology, Xinhua Hospital Chongming Branch, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Mei Wang
- Department of Gynecology, Shanghai Pudong New Area People's Hospital, Shanghai Pudong New Area, Shanghai, 202150, China.
26
Fell C, Mohammadi M, Morrison D, Arandjelović O, Syed S, Konanahalli P, Bell S, Bryson G, Harrison DJ, Harris-Birtill D. Detection of malignancy in whole slide images of endometrial cancer biopsies using artificial intelligence. PLoS One 2023; 18:e0282577. [PMID: 36888621 PMCID: PMC9994759 DOI: 10.1371/journal.pone.0282577] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 02/21/2023] [Indexed: 03/09/2023] Open
Abstract
In this study we use artificial intelligence (AI) to categorise endometrial biopsy whole slide images (WSI) from digital pathology as either "malignant", "other or benign" or "insufficient". An endometrial biopsy is a key step in the diagnosis of endometrial cancer; biopsies are viewed and diagnosed by pathologists. Pathology is increasingly digitised, with slides viewed as images on screens rather than through the lens of a microscope. The availability of these images is driving automation via the application of AI. A model that classifies slides in the manner proposed would allow prioritisation of these slides for pathologist review and hence reduce time to diagnosis for patients with cancer. Previous studies using AI on endometrial biopsies have examined slightly different tasks, for example using images alongside genomic data to differentiate between cancer subtypes. We took 2909 slides with "malignant" and "other or benign" areas annotated by pathologists. A fully supervised convolutional neural network (CNN) model was trained to calculate the probability of a patch from the slide being "malignant" or "other or benign". Heatmaps of all the patches on each slide were then produced to show malignant areas. These heatmaps were used to train a slide classification model to give the final slide categorisation as either "malignant", "other or benign" or "insufficient". The final model was able to accurately classify 90% of all slides correctly and 97% of slides in the malignant class; this accuracy is good enough to allow prioritisation of pathologists' workload.
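The patch-to-slide pipeline above (per-patch malignancy probabilities aggregated into one of three slide labels) can be sketched with a simple rule-based aggregator. The study trains a second model on the heatmaps; the fixed thresholds below are hypothetical stand-ins for that learned classifier.

```python
import numpy as np

def classify_slide(patch_probs, malignant_thresh=0.5,
                   min_patches=10, min_malignant_frac=0.05):
    """Turn per-patch malignancy probabilities into a slide-level label.

    Rule-based stand-in for the paper's learned slide classifier:
    too few tissue patches -> 'insufficient'; enough confidently
    malignant patches -> 'malignant'; otherwise 'other or benign'.
    All thresholds are illustrative."""
    probs = np.asarray(patch_probs, dtype=float)
    if probs.size < min_patches:
        return "insufficient"
    malignant_frac = float(np.mean(probs >= malignant_thresh))
    return ("malignant" if malignant_frac >= min_malignant_frac
            else "other or benign")

# A toy "heatmap": 95 benign-looking patches and 5 strongly malignant ones.
heatmap = np.concatenate([np.full(95, 0.1), np.full(5, 0.9)])
label = classify_slide(heatmap)
```

Even a small fraction of high-probability patches flips the slide label, which mirrors why slide-level triage can reach higher sensitivity on the malignant class than overall accuracy suggests.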
Affiliation(s)
- Christina Fell
- School of Computer Science, University of St Andrews, St Andrews, Scotland, United Kingdom
- Mahnaz Mohammadi
- School of Computer Science, University of St Andrews, St Andrews, Scotland, United Kingdom
- David Morrison
- School of Computer Science, University of St Andrews, St Andrews, Scotland, United Kingdom
- Ognjen Arandjelović
- School of Computer Science, University of St Andrews, St Andrews, Scotland, United Kingdom
- Sheeba Syed
- Pathology Department, NHS Greater Glasgow and Clyde, Glasgow, Scotland, United Kingdom
- Prakash Konanahalli
- Pathology Department, NHS Greater Glasgow and Clyde, Glasgow, Scotland, United Kingdom
- Sarah Bell
- Pathology Department, NHS Greater Glasgow and Clyde, Glasgow, Scotland, United Kingdom
- Gareth Bryson
- Pathology Department, NHS Greater Glasgow and Clyde, Glasgow, Scotland, United Kingdom
- David J. Harrison
- School of Medicine, University of St Andrews, St Andrews, Scotland, United Kingdom
- NHS Lothian Pathology, Division of Laboratory Medicine, Royal Infirmary of Edinburgh, Edinburgh, Scotland, United Kingdom
- David Harris-Birtill
- School of Computer Science, University of St Andrews, St Andrews, Scotland, United Kingdom
27
ASI-DBNet: An Adaptive Sparse Interactive ResNet-Vision Transformer Dual-Branch Network for the Grading of Brain Cancer Histopathological Images. Interdiscip Sci 2023; 15:15-31. [PMID: 35810266 DOI: 10.1007/s12539-022-00532-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2022] [Revised: 05/26/2022] [Accepted: 05/31/2022] [Indexed: 10/17/2022]
Abstract
Brain cancer is the deadliest cancer that occurs in the brain and central nervous system, and rapid and precise grading is essential to reduce patient suffering and improve survival. Traditional convolutional neural network (CNN)-based computer-aided diagnosis algorithms cannot fully utilize the global information of pathology images, and the recently popular vision transformer (ViT) model does not focus enough on the local details of pathology images, both of which lead to a lack of precision in the focus of the model and a lack of accuracy in the grading of brain cancer. To solve this problem, we propose an adaptive sparse interaction ResNet-ViT dual-branch network (ASI-DBNet). First, we design the ResNet-ViT parallel structure to simultaneously capture and retain the local and global information of pathology images. Second, we design the adaptive sparse interaction block (ASIB) to interact the ResNet branch with the ViT branch. Furthermore, we introduce the attention mechanism in ASIB to adaptively filter the redundant information from the dual branches during the interaction so that the feature maps delivered during the interaction are more beneficial. Intensive experiments have shown that ASI-DBNet outperforms various baseline and SOTA models, achieving 95.24% accuracy across four grades. In particular, for brain tumors with a high degree of deterioration (Grade III and Grade IV), the highest diagnostic accuracies achieved by ASI-DBNet are 97.93% and 96.28%, respectively, which is of great clinical significance. Meanwhile, the gradient-weighted class activation map (Grad-CAM) and attention rollout visualization mechanisms are utilized to visualize the working logic behind the model, and the resulting feature maps highlight the important distinguishing features related to the diagnosis. Therefore, the interpretability and confidence of the model are improved, which is of great value for the clinical diagnosis of brain cancer.
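The core idea of fusing a local (ResNet) branch with a global (ViT) branch under an attention gate can be sketched in NumPy as a per-dimension soft gate between two feature vectors. This is a drastic simplification of the paper's adaptive sparse interaction block: the gate logits below would be learned, and the vectors stand in for whole feature maps.

```python
import numpy as np

def gated_fusion(local_feat, global_feat, gate_logits):
    """Per-dimension soft gate between a CNN (local) and a ViT (global)
    branch. gate_logits has shape (2, d); in a real network it would be
    produced by a learned attention module."""
    gate = np.exp(gate_logits)
    gate = gate / gate.sum(axis=0, keepdims=True)   # softmax over branches
    return gate[0] * local_feat + gate[1] * global_feat

d = 4
local = np.array([1.0, 2.0, 3.0, 4.0])
global_ = np.array([4.0, 3.0, 2.0, 1.0])

# Zero logits weight both branches equally; large logits on the first row
# make the fused vector lean almost entirely on the local branch.
fused_equal = gated_fusion(local, global_, np.zeros((2, d)))
fused_local = gated_fusion(local, global_,
                           np.stack([np.full(d, 10.0), np.zeros(d)]))
```

The gate lets the network decide, per feature dimension, whether local texture or global context is more informative, which is the intuition behind the dual-branch interaction.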
28
Precision Medicine for Chronic Endometritis: Computer-Aided Diagnosis Using Deep Learning Model. Diagnostics (Basel) 2023; 13:diagnostics13050936. [PMID: 36900079 PMCID: PMC10000436 DOI: 10.3390/diagnostics13050936] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2023] [Revised: 02/15/2023] [Accepted: 02/24/2023] [Indexed: 03/06/2023] Open
Abstract
Chronic endometritis (CE) is a localized mucosal infectious and inflammatory disorder marked by infiltration of CD138(+) endometrial stromal plasmacytes (ESPC). CE is drawing interest in the field of reproductive medicine because of its association with female infertility of unknown etiology, endometriosis, repeated implantation failure, recurrent pregnancy loss, and multiple maternal/newborn complications. The diagnosis of CE has long relied on somewhat painful endometrial biopsy and histopathologic examinations combined with immunohistochemistry for CD138 (IHC-CD138). With IHC-CD138 only, CE may be potentially over-diagnosed by misidentification of endometrial epithelial cells, which constitutively express CD138, as ESPCs. Fluid hysteroscopy is emerging as an alternative, less-invasive diagnostic tool that can visualize the whole uterine cavity in real-time and enables the detection of several unique mucosal findings associated with CE. The biases in the hysteroscopic diagnosis of CE, however, are the inter-observer and intra-observer disagreements on the interpretation of the endoscopic findings. Additionally, due to the variances in the study designs and adopted diagnostic criteria, there exists some dissociation between the histopathologic and hysteroscopic diagnoses of CE among researchers. To address these questions, novel dual immunohistochemistry for CD138 and another plasmacyte marker, multiple myeloma oncogene 1, is currently being tested. Furthermore, computer-aided diagnosis using a deep learning model is being developed for more accurate detection of ESPCs. These approaches have the potential to contribute to the reduction in human errors and biases, the improvement of the diagnostic performance of CE, and the establishment of unified diagnostic criteria and standardized clinical guidelines for the disease.
29
Huang P, Zhou X, He P, Feng P, Tian S, Sun Y, Mercaldo F, Santone A, Qin J, Xiao H. Interpretable laryngeal tumor grading of histopathological images via depth domain adaptive network with integration gradient CAM and priori experience-guided attention. Comput Biol Med 2023; 154:106447. [PMID: 36706570 DOI: 10.1016/j.compbiomed.2022.106447] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 11/29/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
Tumor grading and interpretability of laryngeal cancer are key yet challenging tasks in clinical diagnosis, mainly because the commonly used low-magnification pathological images lack fine cellular structure information and accurate localization, the diagnosis results of pathologists differ from those of attentional convolutional network-based methods, and the gradient-weighted class activation mapping method cannot be optimized to create the best visualization map. To address this problem, we propose an end-to-end depth domain adaptive network (DDANet) with integration gradient CAM and priori experience-guided attention to improve the tumor grading performance and interpretability by introducing the pathologist's a priori experience in high-magnification into the depth model. Specifically, a novel priori experience-guided attention (PE-GA) method is developed to solve the traditional unsupervised attention optimization problem. In addition, a novel integration gradient CAM is proposed to mitigate the overfitting, information redundancy and low sparsity of the Grad-CAM graphs generated by the PE-GA method. Furthermore, we establish a set of quantitative evaluation metric systems for model visual interpretation. Extensive experimental results show that, compared with the state-of-the-art methods, the average grading accuracy is increased to 88.43% (↑4.04%) and the effective interpretable rate is increased to 52.73% (↑11.45%). Additionally, it effectively reduces the difference between CV-based methods and pathologists in diagnosis results. Importantly, the visualized interpretive maps are closer to the regions of interest identified by pathologists, and our model outperforms pathologists with different levels of experience.
Affiliation(s)
- Pan Huang
- Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing, China
- Xiaoli Zhou
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
- Peng He
- Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing, China.
- Peng Feng
- Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing, China.
- Sukun Tian
- Center of Digital Dentistry, School and Hospital of Stomatology, Peking University, Beijing, China
- Yuchun Sun
- Center of Digital Dentistry, School and Hospital of Stomatology, Peking University, Beijing, China.
- Francesco Mercaldo
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Antonella Santone
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Hualiang Xiao
- Department of Pathology, Daping Hospital, Army Medical University, Chongqing, China
30
Li M, Chen C, Cao Y, Zhou P, Deng X, Liu P, Wang Y, Lv X, Chen C. CIABNet: Category imbalance attention block network for the classification of multi-differentiated types of esophageal cancer. Med Phys 2023; 50:1507-1527. [PMID: 36272103 DOI: 10.1002/mp.16067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 08/25/2022] [Accepted: 09/09/2022] [Indexed: 11/12/2022] Open
Abstract
BACKGROUND Esophageal cancer has become one of the important cancers that seriously threaten human life and health, and its incidence and mortality rate are still among the top malignant tumors. Histopathological image analysis is the gold standard for diagnosing different differentiation types of esophageal cancer. PURPOSE The grading accuracy and interpretability of the auxiliary diagnostic model for esophageal cancer are seriously affected by small interclass differences, imbalanced data distribution, and poor model interpretability. Therefore, we focused on developing the category imbalance attention block network (CIABNet) model to try to solve the previous problems. METHODS First, the quantitative metrics and model visualization results are integrated to transfer knowledge from the source domain images to better identify the regions of interest (ROI) in the target domain of esophageal cancer. Second, in order to pay attention to the subtle interclass differences, we propose the concatenate fusion attention block, which can focus on the contextual local feature relationships and the changes of channel attention weights among different regions simultaneously. Third, we proposed a category imbalance attention module, which treats each esophageal cancer differentiation class fairly based on aggregating different intensity information at multiple scales and explores more representative regional features for each class, which effectively mitigates the negative impact of category imbalance. Finally, we use feature map visualization to focus on interpreting whether the ROIs are the same or similar between the model and pathologists, thus better improving the interpretability of the model. 
RESULTS The experimental results show that the CIABNet model outperforms other state-of-the-art models, achieving the most advanced results in classifying the differentiation types of esophageal cancer with an average classification accuracy of 92.24%, an average precision of 93.52%, an average recall of 90.31%, an average F1 value of 91.73%, and an average AUC value of 97.43%. In addition, the ROIs identified by the CIABNet model are essentially similar or identical to those of pathologists in histopathological images of esophageal cancer. CONCLUSIONS Our experimental results demonstrate that our proposed computer-aided diagnostic algorithm shows great potential in histopathological images of multi-differentiated types of esophageal cancer.
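A standard remedy for the category imbalance this paper targets is to weight each class's loss contribution by inverse class frequency. The sketch below shows that weighting scheme in NumPy; the class counts are hypothetical, and the paper's actual mechanism (a category imbalance attention module) is a learned, more elaborate alternative.

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class loss weights from inverse class frequency, normalized so
    the weighted sample count equals the raw sample count."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    # weight_c = N / (K * count_c): rare classes get larger weights.
    return counts.sum() / (n_classes * counts)

# Illustrative imbalance: 80 well-, 15 moderately-, 5 poorly-differentiated.
labels = np.repeat([0, 1, 2], [80, 15, 5])
w = inverse_frequency_weights(labels, 3)
```

Passed to a weighted cross-entropy loss, these weights keep the rare differentiation grades from being drowned out by the majority class during training.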
Affiliation(s)
- Min Li
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, China
- Chen Chen
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Xinjiang Cloud Computing Application Laboratory, Karamay, China
- Yanzhen Cao
- Department of Pathology, The Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, China
- Panyun Zhou
- College of Software, Xinjiang University, Urumqi, China
- Xin Deng
- College of Software, Xinjiang University, Urumqi, China
- Pei Liu
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Yunling Wang
- The First Affiliated Hospital of Xinjiang Medical University, Urumqi, China
- Xiaoyi Lv
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, China
- Xinjiang Cloud Computing Application Laboratory, Karamay, China
- College of Software, Xinjiang University, Urumqi, China
- Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, China
- Cheng Chen
- College of Software, Xinjiang University, Urumqi, China
31
Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599 DOI: 10.1016/j.compbiomed.2022.106496] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 12/06/2022] [Accepted: 12/27/2022] [Indexed: 12/29/2022]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are constructed for a single specific task, multi-task deep learning (MTDL), which is capable of simultaneously accomplishing at least two tasks, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits in improving performance, enhancing generalizability, and reducing the overall computational cost. This review focuses on the advanced applications of MTDL for medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review the representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has been flourishing and demonstrating outstanding performance in many tasks, there remain performance gaps in others, and accordingly we perceive the open challenges and the prospective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top dice score of 0.51 and top recall of 0.55 achieved by the cascaded MTDL model indicate further research efforts in high demand to escalate the performance of current models.
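The Dice score cited in the review's stroke-segmentation example has a simple standard definition that can be sketched in NumPy; the toy masks below are illustrative.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

# Two 16-pixel squares overlapping in a 4-pixel corner.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[4:8, 4:8] = True
score = dice(a, b)
```

A Dice score of 0.51, as in the challenge result the review cites, means the predicted and true lesion masks share only about half their combined area, which is why the authors flag it as a performance gap.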
Affiliation(s)
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia.
- Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China.
32
Rewcastle E, Gudlaugsson E, Lillesand M, Skaland I, Baak JPA, Janssen EAM. Automated Prognostic Assessment of Endometrial Hyperplasia for Progression Risk Evaluation Using Artificial Intelligence. Mod Pathol 2023; 36:100116. [PMID: 36805790 DOI: 10.1016/j.modpat.2023.100116]
Abstract
Endometrial hyperplasia (EH) is a precursor to endometrial cancer, characterized by excessive proliferation of glands that is distinguishable from normal endometrium. Current classifications define two types of EH, each with a different risk of progression to endometrial cancer. However, these schemes are based on visual assessments and are therefore subjective, possibly leading to overtreatment or undertreatment. In this study, we developed an automated artificial intelligence tool (ENDOAPP) for the measurement of morphologic and cytologic features of endometrial tissue using the software Visiopharm. The ENDOAPP was used to extract features from whole-slide images of PAN-CK+-stained formalin-fixed paraffin-embedded tissue sections from 388 patients diagnosed with endometrial hyperplasia between 1980 and 2007. Follow-up data were available for all patients (mean = 140 months). The most prognostic features were identified by a logistic regression model and used to assign a low-risk or high-risk progression score. Performance of the ENDOAPP was assessed for the following variables: images from two different scanners (Hamamatsu XR and S60) and automated versus manual placement of a region of interest by an operator. Then, the performance of the application was compared with that of current classification schemes (WHO94, WHO20, and EIN) and the computerized morphometric risk classification method (D-score). The most significant prognosticators were percentage stroma and the standard deviation of the lesser diameter of epithelial nuclei. The ENDOAPP had acceptable discriminative power, with an area under the curve of 0.765. Furthermore, strong to moderate agreement was observed between manual operators (intraclass correlation coefficient: 0.828) and scanners (intraclass correlation coefficient: 0.791). Comparison of the prognostic capability of each classification scheme revealed that the ENDOAPP had the highest accuracy, 88%-91%, alongside the D-score method (91%); the other classification schemes had accuracies between 83% and 87%. This study demonstrated the use of computer-aided prognosis to classify progression risk in EH for improved patient treatment.
Affiliation(s)
- Emma Rewcastle
- Department of Pathology, Stavanger University Hospital, Stavanger, Norway; Department of Chemistry, Bioscience and Environmental Engineering, University of Stavanger, Stavanger, Norway.
- Einar Gudlaugsson
- Department of Pathology, Stavanger University Hospital, Stavanger, Norway
- Melinda Lillesand
- Department of Pathology, Stavanger University Hospital, Stavanger, Norway; Department of Chemistry, Bioscience and Environmental Engineering, University of Stavanger, Stavanger, Norway
- Ivar Skaland
- Department of Pathology, Stavanger University Hospital, Stavanger, Norway
- Jan P A Baak
- Department of Pathology, Stavanger University Hospital, Stavanger, Norway; Dr. Med. Jan Baak AS, Tananger, Norway
- Emiel A M Janssen
- Department of Pathology, Stavanger University Hospital, Stavanger, Norway; Department of Chemistry, Bioscience and Environmental Engineering, University of Stavanger, Stavanger, Norway
33
Huang P, He P, Tian S, Ma M, Feng P, Xiao H, Mercaldo F, Santone A, Qin J. A ViT-AMC Network With Adaptive Model Fusion and Multiobjective Optimization for Interpretable Laryngeal Tumor Grading From Histopathological Images. IEEE Trans Med Imaging 2023; 42:15-28. [PMID: 36018875 DOI: 10.1109/tmi.2022.3202248]
Abstract
The tumor grading of laryngeal cancer pathological images needs to be accurate and interpretable. Deep learning models based on the attention mechanism-integrated convolution (AMC) block have good inductive bias capability but poor interpretability, whereas models based on the vision transformer (ViT) block have good interpretability but weak inductive bias. Therefore, we propose an end-to-end ViT-AMC network (ViT-AMCNet) with adaptive model fusion and multiobjective optimization that integrates and fuses the ViT and AMC blocks. However, existing model fusion methods often suffer from negative fusion: (1) there is no guarantee that the ViT and AMC blocks will simultaneously have good feature representation capability; and (2) the difference between the feature representations learned by the ViT and AMC blocks is not obvious, so there is much redundant information in the two representations. Accordingly, we first prove the feasibility of fusing the ViT and AMC blocks based on Hoeffding's inequality. Then, we propose a multiobjective optimization method to address the problem that the ViT and AMC blocks cannot simultaneously achieve good feature representation. Finally, an adaptive model fusion method integrating a metrics block and a fusion block is proposed to increase the differences between feature representations and improve deredundancy capability. Our methods improve the fusion ability of ViT-AMCNet, and experimental results demonstrate that ViT-AMCNet significantly outperforms state-of-the-art methods. Importantly, the visualized interpretive maps are closer to the regions of interest that pathologists attend to, and the generalization ability is also excellent. Our code is publicly available at https://github.com/Baron-Huang/ViT-AMCNet.
34
He Q, He L, Duan H, Sun Q, Zheng R, Guan J, He Y, Huang W, Guan T. Expression site agnostic histopathology image segmentation framework by self supervised domain adaption. Comput Biol Med 2023; 152:106412. [PMID: 36516576 DOI: 10.1016/j.compbiomed.2022.106412]
Abstract
MOTIVATION Because antigens are expressed at different sites, segmentation of immunohistochemical (IHC) histopathology images is challenging owing to the resulting visual variance. Because H&E images highlight tissue structure and cell distribution more broadly, transferring salient features from H&E images can achieve considerable performance on expression site agnostic IHC image segmentation. METHODS To the best of our knowledge, this is the first work that focuses on domain adaptive segmentation across different expression sites. We propose an expression site agnostic domain adaptive histopathology image semantic segmentation framework (ESASeg). In ESASeg, multi-level feature alignment encodes expression site invariance by learning generic representations of global and multi-scale local features. Moreover, self-supervision enhances domain adaptation to perceive high-level semantics by predicting pseudo-labels. RESULTS We constructed a dataset with three IHCs with different expression sites (Her2 with membrane staining, Ki67 with nucleus staining, GPC3 with cytoplasm staining) from two diseases (breast and liver cancer). Extensive experiments on tumor region segmentation show that ESASeg performs best across all metrics, and each module yields clear improvements. CONCLUSION The performance of ESASeg on tumor region segmentation demonstrates the efficiency of the proposed framework, which provides a novel solution for expression site agnostic IHC-related tasks. Moreover, the proposed domain adaption and self-supervision modules can improve feature domain adaptation and extraction without labels. In addition, ESASeg lays the foundation for joint analysis and information interaction among IHCs with different expression sites.
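The self-supervision step above relies on pseudo-labels predicted on the unlabeled target domain. A minimal sketch of confidence-thresholded pseudo-label selection, a common form of this idea (the threshold value and function name are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def select_pseudo_labels(target_probs, confidence=0.9):
    """Keep only target-domain predictions whose top class probability
    clears a confidence threshold; those become pseudo-labels for the
    self-supervised adaptation step. The 0.9 threshold is an assumption."""
    probs = np.asarray(target_probs, dtype=float)
    conf = probs.max(axis=1)                 # top-class confidence per sample
    keep = conf >= confidence                # confident samples only
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

# toy softmax outputs for 3 unlabeled target samples, 2 classes
probs = [[0.95, 0.05], [0.55, 0.45], [0.10, 0.90]]
idx, labels = select_pseudo_labels(probs, confidence=0.9)
# keeps samples 0 and 2 with pseudo-labels 0 and 1
```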
Affiliation(s)
- Qiming He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Ling He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Hufei Duan
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Qiehe Sun
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Runliang Zheng
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Jian Guan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China.
- Yonghong He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Wenting Huang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China.
- Tian Guan
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
35
Zhang X, Ba W, Zhao X, Wang C, Li Q, Zhang Y, Lu S, Wang L, Wang S, Song Z, Shen D. Clinical-grade endometrial cancer detection system via whole-slide images using deep learning. Front Oncol 2022; 12:1040238. [PMID: 36408137 PMCID: PMC9668742 DOI: 10.3389/fonc.2022.1040238]
Abstract
The accurate pathological diagnosis of endometrial cancer (EC) improves the curative effect and reduces the mortality rate. Deep learning has demonstrated expert-level performance in the pathological diagnosis of a variety of organ systems using whole-slide images (WSIs), so there is a pressing need for a deep learning system for EC detection from WSIs. The deep learning model was trained and validated using a dataset of 601 WSIs from Peking University People's Hospital (PUPH). The model performance was tested on three independent datasets containing a total of 1,190 WSIs. For the retrospective test, we evaluated the model performance on 581 WSIs from PUPH. In the prospective study, 317 consecutive WSIs from PUPH were collected from April 2022 to May 2022. To further evaluate the generalizability of the model, 292 WSIs were gathered from the Chinese PLA General Hospital (PLAGH) as part of the external test set. The predictions were thoroughly analyzed by expert pathologists. The model achieved an area under the receiver operating characteristic curve (AUC), sensitivity, and specificity of 0.928, 0.924, and 0.801, respectively, on 1,190 WSIs in classifying EC and non-EC. On the retrospective/external datasets from PUPH/PLAGH, the model achieved an AUC, sensitivity, and specificity of 0.948/0.971, 0.928/0.947, and 0.80/0.938, respectively. On the prospective dataset, the AUC, sensitivity, and specificity were, in order, 0.933, 0.934, and 0.837. Falsely predicted results were analyzed to further improve the pathologists' confidence in the model. The deep learning model achieved a high degree of accuracy in identifying EC using WSIs. By pre-screening suspicious EC regions, it could serve as an assisted diagnostic tool to improve working efficiency for pathologists.
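The sensitivity and specificity reported above come from thresholding slide-level scores, while the AUC summarizes ranking quality over all thresholds. A minimal sketch of these metrics (function names and toy values are illustrative, not from the paper):

```python
import numpy as np

def sens_spec(y_true, y_score, threshold=0.5):
    """Sensitivity and specificity of thresholded scores (positive = EC)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_score) >= threshold
    tp = np.sum(y_pred & (y_true == 1))
    fn = np.sum(~y_pred & (y_true == 1))
    tn = np.sum(~y_pred & (y_true == 0))
    fp = np.sum(y_pred & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, y_score):
    """AUC via the rank (Mann-Whitney) formulation: the probability that a
    random positive scores higher than a random negative, ties count half."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

y = [1, 1, 0, 0]
s = [0.9, 0.6, 0.4, 0.2]
se, sp = sens_spec(y, s)   # perfect separation at threshold 0.5
a = auc(y, s)              # perfect ranking
```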
Affiliation(s)
- Xiaobo Zhang
- Department of Pathology, Peking University People’s Hospital, Beijing, China
- Wei Ba
- Department of Pathology, Chinese PLA General Hospital, Beijing, China
- Xiaoya Zhao
- Department of Pathology, Peking University People’s Hospital, Beijing, China
- Chen Wang
- Department of Pathology, Peking University People’s Hospital, Beijing, China
- Qiting Li
- R&D Department, China Academy of Launch Vehicle Technology, Beijing, China
- Yinli Zhang
- Department of Pathology, Peking University People’s Hospital, Beijing, China
- Shanshan Lu
- Department of Pathology, Peking University People’s Hospital, Beijing, China
- Lang Wang
- Thorough Lab, Thorough Future, Beijing, China
- Shuhao Wang
- Thorough Lab, Thorough Future, Beijing, China
- *Correspondence: Danhua Shen, Zhigang Song, Shuhao Wang
- Zhigang Song
- Department of Pathology, Chinese PLA General Hospital, Beijing, China
- Danhua Shen
- Department of Pathology, Peking University People’s Hospital, Beijing, China
36
Song J, Im S, Lee SH, Jang HJ. Deep Learning-Based Classification of Uterine Cervical and Endometrial Cancer Subtypes from Whole-Slide Histopathology Images. Diagnostics (Basel) 2022; 12:2623. [PMID: 36359467 PMCID: PMC9689570 DOI: 10.3390/diagnostics12112623]
Abstract
Uterine cervical and endometrial cancers have different subtypes with different clinical outcomes. Therefore, cancer subtyping is essential for proper treatment decisions. Furthermore, the endometrial or endocervical origin of an adenocarcinoma must also be distinguished. Although various immunohistochemical markers can aid this discrimination, none is definitive. Therefore, we tested the feasibility of deep learning (DL)-based classification of the subtypes of cervical and endometrial cancers, and of the site of origin of adenocarcinomas, from whole-slide images (WSIs) of tissue slides. WSIs were split into 360 × 360-pixel image patches at 20× magnification for classification, and the average of the patch classification results was used for the final classification. The areas under the receiver operating characteristic curve (AUROCs) for the cervical and endometrial cancer classifiers were 0.977 and 0.944, respectively. The classifier for the origin of an adenocarcinoma yielded an AUROC of 0.939. These results clearly demonstrate the feasibility of DL-based classifiers for discriminating cancers of the cervix and uterus. We expect the performance of the classifiers to improve further as WSI data accumulate; the classifier outputs can then be integrated with other data for more precise discrimination of cervical and endometrial cancers.
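The slide-level decision described above is simply the mean of the per-patch predictions. A minimal sketch of that aggregation rule (the function name and toy probabilities are illustrative assumptions):

```python
import numpy as np

def slide_score(patch_probs):
    """Aggregate per-patch class probabilities into a slide-level
    prediction by averaging over patches, as in patch-based WSI pipelines."""
    patch_probs = np.asarray(patch_probs, dtype=float)
    return patch_probs.mean(axis=0)  # mean over patches -> class probabilities

# toy example: 4 patches of one slide, 2 classes (e.g., subtype A vs subtype B)
probs = [[0.9, 0.1], [0.8, 0.2], [0.6, 0.4], [0.7, 0.3]]
avg = slide_score(probs)            # slide-level probabilities [0.75, 0.25]
pred = int(np.argmax(avg))          # slide-level class index
```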
Affiliation(s)
- JaeYen Song
- Department of Obstetrics and Gynecology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Soyoung Im
- Department of Hospital Pathology, St. Vincent’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 16247, Korea
- Sung Hak Lee
- Department of Hospital Pathology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Hyun-Jong Jang
- Catholic Big Data Integration Center, Department of Physiology, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
37
Kanse AS, Kurian NC, Aswani HP, Khan Z, Gann PH, Rane S, Sethi A. Cautious Artificial Intelligence Improves Outcomes and Trust by Flagging Outlier Cases. JCO Clin Cancer Inform 2022; 6:e2200067. [PMID: 36228179 DOI: 10.1200/cci.22.00067]
Abstract
PURPOSE Artificial intelligence (AI) models for medical image diagnosis are often trained and validated on curated data. However, in a clinical setting, images that are outliers with respect to the training data, such as those representing rare disease conditions or acquired with a slightly different setup, can lead to wrong decisions. It is not practical to expect clinicians to be trained to discount results for such outlier images. Toward clinical deployment, we have designed a method to train cautious AI that can automatically flag outlier cases. MATERIALS AND METHODS Our method, ClassClust, forms tight clusters of training images using supervised contrastive learning, which helps it identify outliers during testing. We compared ClassClust's ability to detect outliers with that of three competing methods on four publicly available data sets covering pathology, dermatoscopy, and radiology. We held out certain diseases, artifacts, and types of images from the training data and examined the ability of the various models to detect these as outliers during testing. We also compared the models' decision accuracy on held-out nonoutlier images, and visualized the regions of the images that the models used for their decisions. RESULTS The area under the receiver operating characteristic curve for outlier detection was consistently higher using ClassClust than with the previous methods. Average accuracy on held-out nonoutlier images was also higher, and the visualizations of image regions were more informative using ClassClust. CONCLUSION The ability to flag outlier test cases need not be at odds with the ability to accurately classify nonoutliers in AI models. Although the latter capability has received research and regulatory attention, AI models for clinical deployment should possess the former as well.
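One way to see why tight per-class clusters make outliers detectable: a test embedding far from every class centroid can be flagged instead of force-classified. The sketch below is a simplified stand-in for this idea, not ClassClust itself (the centroid rule, threshold, and names are assumptions):

```python
import numpy as np

def fit_centroids(embeddings, labels):
    """One centroid per class in embedding space (a crude stand-in for the
    tight clusters that supervised contrastive training produces)."""
    embeddings, labels = np.asarray(embeddings, float), np.asarray(labels)
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def flag_outlier(x, centroids, threshold):
    """Flag a test embedding whose distance to every class centroid
    exceeds the threshold; otherwise return the nearest class."""
    dists = {c: np.linalg.norm(np.asarray(x, float) - mu) for c, mu in centroids.items()}
    c_min = min(dists, key=dists.get)
    return ("outlier", None) if dists[c_min] > threshold else ("inlier", int(c_min))

train = [[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.0, 5.2]]  # toy embeddings
labels = [0, 0, 1, 1]
cents = fit_centroids(train, labels)
near = flag_outlier([0.1, 0.1], cents, threshold=1.0)     # ('inlier', 0)
far = flag_outlier([10.0, -8.0], cents, threshold=1.0)    # ('outlier', None)
```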
Affiliation(s)
- Abhiraj S Kanse
- Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Nikhil C Kurian
- Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Himanshu P Aswani
- Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Peter H Gann
- Department of Pathology, University of Illinois College of Medicine, Chicago, IL
- Swapnil Rane
- Department of Pathology, Tata Memorial Centre-ACTREC, HBNI, Navi Mumbai, India
- Amit Sethi
- Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
38
Zhou P, Cao Y, Li M, Ma Y, Chen C, Gan X, Wu J, Lv X, Chen C. HCCANet: histopathological image grading of colorectal cancer using CNN based on multichannel fusion attention mechanism. Sci Rep 2022; 12:15103. [PMID: 36068309 PMCID: PMC9448811 DOI: 10.1038/s41598-022-18879-1]
Abstract
Histopathological image analysis is the gold standard for pathologists to grade colorectal cancers of different differentiation types. However, diagnosis by pathologists is highly subjective and prone to misdiagnosis. In this study, we constructed a new attention mechanism named MCCBAM, based on channel and spatial attention mechanisms, and developed a computer-aided diagnosis (CAD) method based on CNN and MCCBAM, called HCCANet. The study included 630 histopathology images preprocessed with Gaussian filtering for denoising, and gradient-weighted class activation mapping (Grad-CAM) was used to visualize regions of interest in HCCANet to improve its interpretability. The experimental results show that the proposed HCCANet model outperforms four advanced deep learning models (ResNet50, MobileNetV2, Xception, and DenseNet121) and four classical machine learning techniques (KNN, NB, RF, and SVM), achieving 90.2%, 85%, and 86.7% classification accuracy for colorectal cancers with high, medium, and low differentiation levels, respectively, with an overall accuracy of 87.3% and an average AUC of 0.9. In addition, the MCCBAM constructed in this study outperforms several commonly used attention mechanisms (SAM, SENet, SKNet, Non_Local, CBAM, and BAM) on the backbone network. In conclusion, the HCCANet model proposed in this study is feasible for postoperative adjuvant diagnosis and grading of colorectal cancer.
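Channel and spatial attention of the kind MCCBAM builds on reweight a feature map first along its channel axis, then along its spatial axes. A minimal NumPy sketch of CBAM-style attention (the identity matrix and fixed 0.5 mixing weights stand in for learned parameters; this is not the paper's MCCBAM):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w):
    """x: (C, H, W). Per-channel gates from global average- and max-pooled
    descriptors passed through a shared linear map w of shape (C, C)."""
    avg = x.mean(axis=(1, 2))            # (C,)
    mx = x.max(axis=(1, 2))              # (C,)
    gate = sigmoid(w @ avg + w @ mx)     # (C,)
    return x * gate[:, None, None]

def spatial_attention(x):
    """Spatial gate from channel-wise mean and max maps, mixed with fixed
    equal weights (a learned convolution in a real CBAM-style block)."""
    avg = x.mean(axis=0)                 # (H, W)
    mx = x.max(axis=0)                   # (H, W)
    gate = sigmoid(0.5 * (avg + mx))     # (H, W)
    return x * gate[None, :, :]

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))    # toy feature map
w = np.eye(4)                            # identity in place of learned weights
out = spatial_attention(channel_attention(feat, w))
```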
Affiliation(s)
- Panyun Zhou
- College of Software, Xinjiang University, Urumqi, 830046, China
- Yanzhen Cao
- The Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, China
- Min Li
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China; Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, 830046, China
- Yuhua Ma
- Department of Oncology, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, 200120, China; Department of Pathology, Karamay Central Hospital, Karamay, Xinjiang Uygur Autonomous Region, 834000, China
- Chen Chen
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China; Xinjiang Cloud Computing Application Laboratory, Karamay, 834099, China
- Xiaojing Gan
- The Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, 830011, China
- Jianying Wu
- College of Physics and Electronic Engineering, Xinjiang Normal University, Urumqi, 830054, China
- Xiaoyi Lv
- College of Software, Xinjiang University, Urumqi, 830046, China; College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China; Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, 830046, China; Xinjiang Cloud Computing Application Laboratory, Karamay, 834099, China; Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, 830046, China
- Cheng Chen
- College of Software, Xinjiang University, Urumqi, 830046, China
39
Li Q, Wang R, Xie Z, Zhao L, Wang Y, Sun C, Han L, Liu Y, Hou H, Liu C, Zhang G, Shi G, Zhong D, Li Q. Clinically Applicable Pathological Diagnosis System for Cell Clumps in Endometrial Cancer Screening via Deep Convolutional Neural Networks. Cancers (Basel) 2022; 14:4109. [PMID: 36077646 PMCID: PMC9454725 DOI: 10.3390/cancers14174109]
Abstract
OBJECTIVES The soaring demand for endometrial cancer screening has exposed a huge shortage of cytopathologists worldwide. To address this problem, our study set out to establish an artificial intelligence system that automatically recognizes and diagnoses pathological images of endometrial cell clumps (ECCs). METHODS Endometrial cells were acquired from patients with a Li Brush, and slides were prepared using liquid-based cytology. The slides were scanned and divided into malignant and benign groups. We proposed two networks, a U-Net for segmentation and a DenseNet for classification, to identify the images; another four classification networks were used for comparison tests. RESULTS A total of 113 (42 malignant and 71 benign) endometrial samples were collected, and a dataset containing 15,913 images was constructed. A total of 39,000 ECC patches were obtained by the segmentation network; 26,880 and 11,520 patches were then used for training and testing, respectively. With 100% accuracy on the training set, the testing set reached 93.5% accuracy, 92.2% specificity, and 92.0% sensitivity. The remaining 600 malignant patches were used for verification. CONCLUSIONS An artificial intelligence system was successfully built to classify malignant and benign ECCs.
Affiliation(s)
- Qing Li
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China
- Department of Obstetrics and Gynecology, Northwest Women’s and Children’s Hospital, Xi’an 710061, China
- Ruijie Wang
- School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Zhonglin Xie
- School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Lanbo Zhao
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China
- Yiran Wang
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China
- Chao Sun
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China
- Lu Han
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China
- Yu Liu
- Department of Pathology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China
- Huilian Hou
- Department of Pathology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China
- Chen Liu
- Department of Obstetrics and Gynecology, Northwest Women’s and Children’s Hospital, Xi’an 710061, China
- Guanjun Zhang
- Department of Pathology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China
- Guizhi Shi
- Laboratory Animal Center, Institute of Biophysics, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing 100101, China
- Dexing Zhong
- School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an 710049, China; State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China; Pazhou Lab, Guangzhou 510335, China
- Qiling Li
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China; Department of Obstetrics and Gynecology, Northwest Women’s and Children’s Hospital, Xi’an 710061, China
40
Nanni L, Paci M, Brahnam S, Lumini A. Feature transforms for image data augmentation. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07645-z]
Abstract
A problem with convolutional neural networks (CNNs) is that they require large datasets to obtain adequate robustness; on small datasets, they are prone to overfitting. Many methods have been proposed to overcome this shortcoming. In cases where additional samples cannot easily be collected, a common approach is to generate more data points from existing data using an augmentation technique. In image classification, many augmentation approaches utilize simple image manipulation algorithms. In this work, we propose new methods for data augmentation based on several image transformations: the Fourier transform (FT), the Radon transform (RT), and the discrete cosine transform (DCT). These and other data augmentation methods are considered in order to quantify their effectiveness in creating ensembles of neural networks. The novelty of this research is to consider different data augmentation strategies for generating training sets from which to train several classifiers that are combined into an ensemble. Specifically, the idea is to create an ensemble based on a kind of bagging of the training set, where each model is trained on a different training set obtained by augmenting the original training set with a different approach. We build ensembles on the data level by adding images generated by combining fourteen augmentation approaches, three of which (based on FT, RT, and DCT) are proposed here for the first time. Pretrained ResNet50 networks are fine-tuned on training sets that include images derived from each augmentation method. These networks and several fusions are evaluated and compared across eleven benchmarks. Results show that building ensembles on the data level by combining different data augmentation methods produces classifiers that not only compete with the state of the art but often surpass the best approaches reported in the literature.
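Frequency-transform augmentation of the kind described above perturbs an image in a transform domain and then inverts the transform. A minimal sketch of one FT-based variant, randomized low-pass filtering in the Fourier domain (the cutoff radius and jitter are illustrative assumptions, not the paper's exact methods):

```python
import numpy as np

def fourier_augment(img, keep_radius, rng):
    """Augment a grayscale image by discarding Fourier coefficients
    outside a randomly jittered low-frequency radius, then inverting
    the FFT. A simplified stand-in for FT-based augmentation."""
    f = np.fft.fftshift(np.fft.fft2(img))        # center the spectrum
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h // 2, xx - w // 2)    # distance from DC component
    r = keep_radius + rng.uniform(-1.0, 1.0)     # jitter the cutoff per sample
    f[dist > r] = 0                              # low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

rng = np.random.default_rng(1)
img = rng.random((16, 16))                       # toy grayscale patch
aug = fourier_augment(img, keep_radius=6.0, rng=rng)
```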
41
He Z, Lin M, Xu Z, Yao Z, Chen H, Alhudhaif A, Alenezi F. Deconv-transformer (DecT): A histopathological image classification model for breast cancer based on color deconvolution and transformer architecture. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.06.091]
42
Classification of multi-differentiated liver cancer pathological images based on deep learning attention mechanism. BMC Med Inform Decis Mak 2022; 22:176. [PMID: 35787805 PMCID: PMC9254605 DOI: 10.1186/s12911-022-01919-1]
Abstract
PURPOSE Liver cancer is one of the most common malignant tumors in the world, ranking fifth among malignant tumors. The degree of differentiation reflects the degree of malignancy, and liver cancers are graded as poorly differentiated, moderately differentiated, or well differentiated. Diagnosing and treating each level of differentiation correctly is crucial to patients' survival rate and survival time. As the gold standard for liver cancer diagnosis, histopathological images can accurately distinguish liver cancers of different levels of differentiation, so the study of intelligent classification of histopathological images is of great significance to patients with liver cancer. At present, classifying histopathological images of liver cancer by differentiation level is time-consuming and labor-intensive and requires substantial manual effort; in this context, the importance of intelligent classification is obvious. METHODS Building on a complete data acquisition scheme, this paper applies the SENet deep learning model to the intelligent classification of histopathological images of all differentiation levels of liver cancer for the first time, and compares it with four deep learning models: VGG16, ResNet50, ResNet_CBAM, and SKNet. The evaluation indexes adopted include the confusion matrix, precision, recall, and F1 score, which together evaluate the models comprehensively and accurately. RESULTS The five deep learning classification models were applied to the collected dataset and evaluated. The experimental results show that the SENet model achieved the best classification effect, with an accuracy of 95.27%, as well as good reliability and generalization ability. The experiments demonstrate that the SENet deep learning model has a good application prospect in the intelligent classification of histopathological images. CONCLUSIONS This study also shows that deep learning has great application value in addressing the time-consuming and laborious problems of traditional manual slide reading, and it has practical significance for the intelligent classification of histopathological images of other cancers.
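The defining component of the SENet model referenced above is the squeeze-and-excitation (SE) block, which reweights feature-map channels. A minimal NumPy sketch of that channel-attention mechanism (the reduction ratio `r` and the random weights are illustrative assumptions, not the paper's trained parameters):

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation channel attention on a (C, H, W) feature map.

    Squeeze: global average pooling per channel.
    Excite: two-layer bottleneck (ReLU then sigmoid) yields per-channel weights.
    Scale: reweight each channel of the input.
    """
    squeeze = feature_map.mean(axis=(1, 2))            # (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)             # ReLU, (C // r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid, (C,)
    return feature_map * weights[:, None, None]

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))                  # C=8 channels
r = 2                                                  # reduction ratio
w1 = rng.standard_normal((8 // r, 8))
w2 = rng.standard_normal((8, 8 // r))
out = se_block(fmap, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the sigmoid gates lie in (0, 1), informative channels are kept while weak ones are suppressed, which is the effect the abstract credits for SENet's accuracy.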
|
43
|
van der Velden BH, Kuijf HJ, Gilhuijs KG, Viergever MA. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med Image Anal 2022; 79:102470. [DOI: 10.1016/j.media.2022.102470] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 03/15/2022] [Accepted: 05/02/2022] [Indexed: 12/11/2022]
|
44
|
A deep learning model combining multimodal radiomics, clinical and imaging features for differentiating ocular adnexal lymphoma from idiopathic orbital inflammation. Eur Radiol 2022; 32:6922-6932. [PMID: 35674824 DOI: 10.1007/s00330-022-08857-6] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 04/10/2022] [Accepted: 05/01/2022] [Indexed: 11/04/2022]
Abstract
OBJECTIVES To evaluate the value of deep learning (DL) combining multimodal radiomics and clinical and imaging features for differentiating ocular adnexal lymphoma (OAL) from idiopathic orbital inflammation (IOI). METHODS Eighty-nine patients with histopathologically confirmed OAL (n = 39) and IOI (n = 50) were divided into training and validation groups. Convolutional neural networks and multimodal fusion layers were used to extract multimodal radiomics features from the T1-weighted image (T1WI), T2-weighted image, and contrast-enhanced T1WI. These multimodal radiomics features were then combined with clinical and imaging features and used together to differentiate between OAL and IOI. The area under the curve (AUC) was used to evaluate DL models with different features under five-fold cross-validation. The Student t-test, chi-squared, or Fisher exact test was used for comparison of different groups. RESULTS In the validation group, the diagnostic AUC of the DL model using combined features was 0.953 (95% CI, 0.895-1.000), higher than that of the DL model using multimodal radiomics features (0.843, 95% CI, 0.786-0.898, p < 0.01) or clinical and imaging features only (0.882, 95% CI, 0.782-0.982, p = 0.13). The DL model built on multimodal radiomics features outperformed those built on most bimodalities and unimodalities (p < 0.05). In addition, the DL-based analysis with the orbital cone area (covering both the orbital mass and surrounding tissues) was superior to that with the region of interest (ROI) covering only the mass area, although the difference was not significant (p = 0.33). CONCLUSIONS DL-based analysis that combines multimodal radiomics features with clinical and imaging features may help to differentiate between OAL and IOI. KEY POINTS • It is difficult to differentiate OAL from IOI due to the overlap in clinical and imaging manifestations. • Radiomics has shown potential for noninvasive diagnosis of different orbital lymphoproliferative disorders. 
• DL-based analysis combining radiomics and imaging and clinical features may help the differentiation between OAL and IOI.
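The fusion step described above — combining multimodal radiomics features with clinical and imaging features before classification — can be sketched as a simple late-fusion concatenation followed by a linear head. This is an illustrative assumption about the fusion layer, not the authors' exact architecture:

```python
import numpy as np

def fuse_and_classify(radiomics, clinical, w, b):
    """Concatenate per-patient radiomics and clinical/imaging feature
    vectors, then score with a linear layer + sigmoid (OAL vs. IOI)."""
    fused = np.concatenate([radiomics, clinical], axis=1)  # (n, d1 + d2)
    logits = fused @ w + b
    return 1.0 / (1.0 + np.exp(-logits))                   # probabilities

rng = np.random.default_rng(42)
probs = fuse_and_classify(
    rng.standard_normal((4, 16)),   # radiomics features, 4 patients
    rng.standard_normal((4, 5)),    # clinical/imaging features
    rng.standard_normal(21), 0.0,   # hypothetical trained head
)
print(probs.shape)  # (4,)
```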
|
45
|
Fu B, Zhang M, He J, Cao Y, Guo Y, Wang R. StoHisNet: A hybrid multi-classification model with CNN and Transformer for gastric pathology images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106924. [PMID: 35671603 DOI: 10.1016/j.cmpb.2022.106924] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 05/28/2022] [Accepted: 05/28/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVES Gastric cancer has high morbidity and mortality compared with other cancers. Accurate histopathological diagnosis is of great significance for its treatment. With the development of artificial intelligence, many researchers have applied deep learning to the classification of gastric cancer pathological images. However, most studies have performed only binary classification on pathological images of gastric cancer, which falls short of clinical requirements. Therefore, we proposed a multi-classification method based on deep learning with more practical clinical value. METHODS In this study, we developed a novel multi-scale model called StoHisNet, based on the Transformer and the convolutional neural network (CNN), for the multi-classification task. StoHisNet adopts the Transformer to learn global features, alleviating the inherent limitations of the convolution operation. The proposed StoHisNet can classify the publicly available pathological images of a gastric dataset into four categories: normal tissue, tubular adenocarcinoma, mucinous adenocarcinoma, and papillary adenocarcinoma. RESULTS The accuracy, F1-score, recall, and precision of the proposed model on the public gastric pathological image dataset were 94.69%, 94.96%, 94.95%, and 94.97%, respectively. We conducted additional experiments on two other public datasets to verify the generalization ability of the model. On the BreakHis dataset, our model performed better than other classification models, with an accuracy of 91.64%. Similarly, on the four-class task on the Endometrium dataset, our model showed better classification ability than others, with an accuracy of 81.74%. These experiments show that the proposed model has excellent classification and generalization ability. CONCLUSION The StoHisNet model achieved high performance in multi-classification of gastric histopathological images and showed strong generalization ability on other pathological datasets. This model may be a potential tool to assist pathologists in the analysis of gastric histopathological images.
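The Transformer branch that StoHisNet uses to "learn global features" rests on scaled dot-product self-attention over patch tokens, in which every patch attends to every other — the global receptive field a convolution kernel lacks. A minimal single-head NumPy sketch (token count and dimensions are illustrative, not the model's actual configuration):

```python
import numpy as np

def self_attention(tokens, wq, wk, wv):
    """Single-head scaled dot-product self-attention over patch tokens."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])        # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax: rows sum to 1
    return attn @ v                               # mix values globally

rng = np.random.default_rng(0)
x = rng.standard_normal((9, 32))                  # 9 patch tokens, dim 32
wq, wk, wv = (rng.standard_normal((32, 32)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (9, 32)
```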
Affiliation(s)
- Bangkang Fu - Medical College, Guizhou University, Guizhou 550000, China; Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Mudan Zhang - Medical College, Guizhou University, Guizhou 550000, China; Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
- Junjie He - College of Computer Science and Technology, Guizhou University, Guizhou 550025, China
- Ying Cao - Medical College, Guizhou University, Guizhou 550000, China
- Yuchen Guo - Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100192, China
- Rongpin Wang - Medical College, Guizhou University, Guizhou 550000, China; Department of Medical Imaging, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guizhou Provincial People's Hospital, Guizhou 550002, China
|
46
|
Zhao F, Dong D, Du H, Guo Y, Su X, Wang Z, Xie X, Wang M, Zhang H, Cao X, He X. Diagnosis of endometrium hyperplasia and screening of endometrial intraepithelial neoplasia in histopathological images using a global-to-local multi-scale convolutional neural network. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106906. [PMID: 35671602 DOI: 10.1016/j.cmpb.2022.106906] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Revised: 05/10/2022] [Accepted: 05/23/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Endometrial hyperplasia (EH), a uterine pathology characterized by an increased gland-to-stroma ratio compared to normal endometrium (NE), may precede the development of endometrial cancer (EC). In particular, atypical EH, also known as endometrial intraepithelial neoplasia (EIN), has been proven to be a precursor of EC. Thus, diagnosing the different EH categories (EIN, hyperplasia without atypia (HwA), and NE) and screening EIN from non-EIN are crucial for the health of the female reproductive system. Computer-aided diagnosis (CAD) based on machine learning and deep learning has been applied to endometrial histological images. However, existing studies perform single-scale image analysis and can therefore characterize only part of the endometrial features. Empirically, both global features (cytological changes relative to background) and local features (gland-to-stroma ratio and lesion dimension) are helpful in identifying endometrial lesions. METHODS We proposed a global-to-local multi-scale convolutional neural network (G2LNet) to diagnose different EH categories and to screen EIN in endometrial histological images stained with hematoxylin and eosin (H&E). The G2LNet first used a supervised model in the global part to extract contextual features of endometrial lesions, and simultaneously deployed multi-instance learning in the local part to obtain textural features from multiple image patches. The contextual and textural features were fused by a convolutional block attention module and used together to diagnose different endometrial lesions. In addition, we visualized the salient regions on both the global image and local images to investigate the interpretability of the model in endometrial diagnosis. RESULTS In five-fold cross validation on 7812 H&E images from 467 endometrial specimens, G2LNet achieved an accuracy of 97.01% for EH diagnosis and an area under the curve (AUC) of 0.9902 for EIN screening, significantly higher than state-of-the-art methods. In external validation on 1631 H&E images from 135 specimens, G2LNet achieved an accuracy of 95.34% for EH diagnosis, comparable to that of a mid-level pathologist (95.71%). Specifically, G2LNet had advantages in diagnosing EIN, while humans performed better in identifying NE and HwA. CONCLUSIONS The developed G2LNet, which integrates both global (contextual) and local (textural) features, may help pathologists diagnose endometrial lesions in clinical practice, especially by improving the accuracy and efficiency of screening for precancerous lesions.
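The global-to-local idea above — one contextual feature vector for the whole image, multi-instance pooling over patch features, then fusion — can be sketched as follows. Max-pooling over instances and plain concatenation stand in for the paper's multi-instance learning and attention-based fusion, so the details are assumptions:

```python
import numpy as np

def global_to_local_fuse(global_feat, patch_feats):
    """Fuse a global contextual vector with textural features from many
    local patches: pool over patch instances, then concatenate."""
    local = patch_feats.max(axis=0)            # multi-instance max-pooling
    return np.concatenate([global_feat, local])

g = np.ones(128)                               # global-branch feature vector
p = np.zeros((12, 64))                         # 12 local patch feature vectors
p[3] = 2.0                                     # one salient patch dominates
fused = global_to_local_fuse(g, p)
print(fused.shape)  # (192,)
```

Max-pooling lets a single salient patch (e.g., a focal lesion) dominate the local descriptor, which is the usual motivation for multi-instance pooling in histopathology.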
Affiliation(s)
- Fengjun Zhao - Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Science and Technology, Northwest University, Xi'an, Shaanxi 710069, China
- Didi Dong - Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Science and Technology, Northwest University, Xi'an, Shaanxi 710069, China
- Hongyan Du - Department of Pathology, Northwest Women and Children's Hospital, Xi'an, Shaanxi 710061, China
- Yinan Guo - Department of Pathology, Northwest Women and Children's Hospital, Xi'an, Shaanxi 710061, China
- Xue Su - Department of Pathology, Northwest Women and Children's Hospital, Xi'an, Shaanxi 710061, China
- Zhiwei Wang - Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Science and Technology, Northwest University, Xi'an, Shaanxi 710069, China
- Xiaoyang Xie - Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Science and Technology, Northwest University, Xi'an, Shaanxi 710069, China
- Mingjuan Wang - Department of Pathology, Northwest Women and Children's Hospital, Xi'an, Shaanxi 710061, China
- Haiyan Zhang - Department of Pathology, Northwest Women and Children's Hospital, Xi'an, Shaanxi 710061, China
- Xin Cao - Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Science and Technology, Northwest University, Xi'an, Shaanxi 710069, China
- Xiaowei He - Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Science and Technology, Northwest University, Xi'an, Shaanxi 710069, China
|
47
|
Dong X, Li M, Zhou P, Deng X, Li S, Zhao X, Wu Y, Qin J, Guo W. Fusing pre-trained convolutional neural networks features for multi-differentiated subtypes of liver cancer on histopathological images. BMC Med Inform Decis Mak 2022; 22:122. [PMID: 35509058 PMCID: PMC9066403 DOI: 10.1186/s12911-022-01798-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Accepted: 02/21/2022] [Indexed: 11/10/2022] Open
Abstract
Liver cancer is a malignant tumor with high morbidity and mortality, which has a tremendous negative impact on human survival. However, recognizing tens of thousands of histopathological images of liver cancer by the naked eye is a challenging task that poses numerous difficulties for inexperienced clinicians, and the long, tedious work and the huge number of images impose a great burden on clinical diagnosis. Therefore, our study combines convolutional neural networks with histopathology images and adopts a feature fusion approach to help clinicians efficiently discriminate the differentiation types of primary hepatocellular carcinoma in histopathology images, improving their diagnostic efficiency and relieving their work pressure. In this study, for the first time, tumors from 73 patients with different differentiation types of primary liver cancer were classified. We performed a thorough classification evaluation of liver cancer differentiation types using four pre-trained deep convolutional neural networks and nine different machine learning (ML) classifiers on a dataset of liver cancer histopathology images with multiple differentiation types. Test set accuracy, validation set accuracy, running time under different strategies, precision, recall, and F1 value were used for the comparative evaluation. The experimental results show that the fusion network (FuNet) structure is a good choice: it covers both channel attention and spatial attention, suppresses channels carrying little information, and clarifies the importance of each spatial location by learning the weights of different locations in space; we then apply it to the classification of multi-differentiated types of liver cancer. In addition, in most cases the stacking-based ensemble learning classifier outperforms the other ML classifiers in this classification task under the FuNet fusion strategy after dimensionality reduction of the fused features by principal component analysis (PCA), achieving a satisfactory test set accuracy of 72.46%, which has practical value.
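The dimensionality-reduction step the authors apply before the stacking classifier — PCA on the fused CNN features — can be sketched in NumPy via an SVD of the centered feature matrix (the feature dimension and component count `k` below are illustrative choices, not the paper's settings):

```python
import numpy as np

def pca_reduce(features, k):
    """Project fused CNN features onto the top-k principal components."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T                 # (n_samples, k)

rng = np.random.default_rng(1)
fused = rng.standard_normal((50, 256))         # 50 samples, 256-D fused features
reduced = pca_reduce(fused, 8)
print(reduced.shape)  # (50, 8)
```

The reduced columns are mutually uncorrelated, which keeps the downstream stacking classifier small and cheap to fit.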
Affiliation(s)
- Xiaogang Dong - Department of Hepatopancreatobiliary Surgery, Cancer Affiliated Hospital of Xinjiang Medical University, Ürümqi, Xinjiang, China
- Min Li - Key Laboratory of Signal Detection and Processing, Xinjiang University, Ürümqi, 830046, China; College of Information Science and Engineering, Xinjiang University, Ürümqi, 830046, China
- Panyun Zhou - College of Software, Xinjiang University, Ürümqi, 830046, China
- Xin Deng - College of Software, Xinjiang University, Ürümqi, 830046, China
- Siyu Li - College of Software, Xinjiang University, Ürümqi, 830046, China
- Xingyue Zhao - College of Software, Xinjiang University, Ürümqi, 830046, China
- Yi Wu - College of Software, Xinjiang University, Ürümqi, 830046, China
- Jiwei Qin - College of Information Science and Engineering, Xinjiang University, Ürümqi, 830046, China
- Wenjia Guo - Cancer Institute, Affiliated Cancer Hospital of Xinjiang Medical University, Ürümqi, 830011, China; Key Laboratory of Oncology of Xinjiang Uyghur Autonomous Region, Ürümqi, 830011, China
|
48
|
Jiao L, Wang J, Zhu L. A Comparative Study of Endometriosis and Normal Endometrium Based on Ultrasound Observation. Appl Bionics Biomech 2022; 2022:7934690. [PMID: 35535323 PMCID: PMC9078799 DOI: 10.1155/2022/7934690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 04/05/2022] [Accepted: 04/19/2022] [Indexed: 12/13/2022] Open
Abstract
To compare the microscopic ultrastructure of the eutopic endometrium of patients with endometriosis with that of normal endometrium, and to study the specific pathogenesis of endometriosis, this paper uses B-ultrasound to observe the ultrastructure of the eutopic endometrium in several patients with endometriosis and compares it with images of normal endometrium. The study analyzes the B-ultrasound images of patients with endometriosis, compares the differences between their ultrastructure and that of healthy subjects, carries out specific pathological diagnosis and analysis to identify the factors underlying the eutopic endometrial lesions, and proposes corresponding treatment methods. The experimental results show that the ultrastructure of the eutopic endometrium in endometriosis differs from that of normal endometrium: the microvilli of secretory cells and the cilia of ciliated cells are abnormally increased in number and lengthened. The success rate of the B-ultrasound examination is 93.75%, so it can play an important role in the examination of patients with endometriosis as one of the practical detection indicators. Under the electron microscope, microvilli appear as tiny finger-like protrusions of cell membrane and cytoplasm extending from the free surface of the cell, surrounded by the cell membrane and perpendicular to its surface.
Affiliation(s)
- Lin Jiao - Department of Ultrasound, Qingdao Chengyang District People's Hospital, Qingdao, 266000 Shandong, China
- Jue Wang - Department of Ultrasound, Qingdao Chengyang District People's Hospital, Qingdao, 266000 Shandong, China
- Lingling Zhu - Army 73rd Group Military Hospital, Xiamen, 361001 Fujian, China
|
49
|
Liu Y, Zhou Q, Peng B, Jiang J, Fang L, Weng W, Wang W, Wang S, Zhu X. Automatic Measurement of Endometrial Thickness From Transvaginal Ultrasound Images. Front Bioeng Biotechnol 2022; 10:853845. [PMID: 35425763 PMCID: PMC9001908 DOI: 10.3389/fbioe.2022.853845] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Accepted: 02/21/2022] [Indexed: 11/25/2022] Open
Abstract
Purpose: Endometrial thickness is one of the most important indicators in endometrial disease screening and diagnosis. Herein, we propose a method for automated measurement of endometrial thickness from transvaginal ultrasound images. Methods: Accurate automated measurement of endometrial thickness relies on endometrium segmentation from transvaginal ultrasound images that usually have ambiguous boundaries and heterogeneous textures. Therefore, a two-step method was developed for automated measurement of endometrial thickness. First, a semantic segmentation method was developed based on deep learning, to segment the endometrium from 2D transvaginal ultrasound images. Second, we estimated endometrial thickness from the segmented results, using a largest inscribed circle searching method. Overall, 8,119 images (size: 852 × 1136 pixels) from 467 cases were used to train and validate the proposed method. Results: We achieved an average Dice coefficient of 0.82 for endometrium segmentation using a validation dataset of 1,059 images from 71 cases. With validation using 3,210 images from 214 cases, 89.3% of endometrial thickness errors were within the clinically accepted range of ±2 mm. Conclusion: Endometrial thickness can be automatically and accurately estimated from transvaginal ultrasound images for clinical screening and diagnosis.
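The second step above — estimating thickness from the segmented endometrium via a largest-inscribed-circle search — can be sketched with a brute-force NumPy version: the radius at each foreground pixel is its distance to the nearest background pixel, and the thickness is twice the maximum such radius. This is fine for small masks (the authors' search method may differ, and the pixel spacing here is an illustrative parameter):

```python
import numpy as np

def largest_inscribed_circle_diameter(mask, pixel_mm):
    """Thickness estimate: diameter (in mm) of the largest circle inscribed
    in a binary segmentation mask. Brute force, small masks only."""
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    best = 0.0
    for p in fg:
        # radius at p = distance to the nearest background pixel
        d = np.sqrt(((bg - p) ** 2).sum(axis=1)).min()
        best = max(best, d)
    return 2.0 * best * pixel_mm

mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True   # 5x5 square: center pixel is 3 px from background
thickness = largest_inscribed_circle_diameter(mask, 1.0)
print(thickness)  # 6.0
```

In practice a distance transform (e.g., `scipy.ndimage.distance_transform_edt`) replaces the inner loop for full-resolution masks.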
Affiliation(s)
- Yiyang Liu - Biomedical Information Engineering Lab, The University of Aizu, Aizuwakamatsu, Japan
- Qin Zhou - Department of Obstetrics and Gynecology, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Boyuan Peng - Biomedical Information Engineering Lab, The University of Aizu, Aizuwakamatsu, Japan
- Jingjing Jiang - Department of Obstetrics and Gynecology, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Li Fang - Department of Obstetrics and Gynecology, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Weihao Weng - Biomedical Information Engineering Lab, The University of Aizu, Aizuwakamatsu, Japan
- Wenwen Wang - Department of Obstetrics and Gynecology, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China (corresponding author)
- Shixuan Wang - Department of Obstetrics and Gynecology, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China (corresponding author)
- Xin Zhu - Biomedical Information Engineering Lab, The University of Aizu, Aizuwakamatsu, Japan (corresponding author)
|
50
|
Chen K, Wang Q, Ma Y. Cervical optical coherence tomography image classification based on contrastive self-supervised texture learning. Med Phys 2022; 49:3638-3653. [PMID: 35342956 DOI: 10.1002/mp.15630] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Revised: 02/26/2022] [Accepted: 03/16/2022] [Indexed: 12/24/2022] Open
Abstract
BACKGROUND Cervical cancer seriously affects the health of the female reproductive system. Optical coherence tomography (OCT) emerged as a non-invasive, high-resolution imaging technology for cervical disease detection. However, OCT image annotation is knowledge-intensive and time-consuming, which impedes the training process of deep-learning-based classification models. PURPOSE This study aims to develop a computer-aided diagnosis (CADx) approach to classifying in-vivo cervical OCT images based on self-supervised learning. METHODS In addition to high-level semantic features extracted by a convolutional neural network (CNN), the proposed CADx approach designs a contrastive texture learning (CTL) strategy to leverage unlabeled cervical OCT images' texture features. We conducted ten-fold cross-validation on the OCT image dataset from a multi-center clinical study on 733 patients from China. RESULTS In a binary classification task for detecting high-risk diseases, including high-grade squamous intraepithelial lesion and cervical cancer, our method achieved an area-under-the-curve value of 0.9798 ± 0.0157 with a sensitivity of 91.17 ± 4.99% and a specificity of 93.96 ± 4.72% for OCT image patches; also, it outperformed two out of four medical experts on the test set. Furthermore, our method achieved a 91.53% sensitivity and 97.37% specificity on an external validation dataset containing 287 3D OCT volumes from 118 Chinese patients in a new hospital using a cross-shaped threshold voting strategy. CONCLUSIONS The proposed contrastive-learning-based CADx method outperformed the end-to-end CNN models and provided better interpretability based on texture features, which holds great potential to be used in the clinical protocol of "see-and-treat."
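The contrastive texture learning (CTL) strategy above follows the usual contrastive recipe: pull an anchor's representation toward a positive view (another texture crop of the same image) and away from negatives (crops of other images). A minimal InfoNCE-style loss in NumPy (the temperature and single-anchor form are illustrative assumptions, not the paper's exact objective):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Contrastive loss for one anchor: low when the anchor embedding is
    close to its positive and far from the negatives."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

a = np.array([1.0, 0.0])
# aligned positive → small loss; misaligned positive → large loss
good = info_nce(a, np.array([0.9, 0.1]), [np.array([0.0, 1.0])])
bad = info_nce(a, np.array([0.0, 1.0]), [np.array([0.9, 0.1])])
print(good < bad)  # True
```

Minimizing this loss over many unlabeled pairs is what lets the texture branch learn from OCT images without the costly expert annotation the abstract highlights.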
Affiliation(s)
- Kaiyi Chen - School of Computer Science, Wuhan University, Wuhan, 430072, China
- Qingbin Wang - School of Computer Science, Wuhan University, Wuhan, 430072, China
- Yutao Ma - School of Computer Science, Wuhan University, Wuhan, 430072, China
|