1. Lee J, Choi S, Shin S, Alam MR, Abdul-Ghafar J, Seo KJ, Hwang G, Jeong D, Gong G, Cho NH, Yoo CW, Kim HK, Chong Y, Yim K. Ovarian Cancer Detection in Ascites Cytology with Weakly Supervised Model on Nationwide Dataset. Am J Pathol 2025:S0002-9440(25)00143-9. PMID: 40311756. DOI: 10.1016/j.ajpath.2025.04.004.
Abstract
Conventional ascitic fluid cytology for detecting ovarian cancer is limited by its low sensitivity. To address this issue, this multicenter study developed patch image (PI)-based fully supervised convolutional neural network (CNN) models and clustering-constrained attention multiple-instance learning (CLAM) algorithms for detecting ovarian cancer using ascitic fluid cytology. Whole-slide images (WSIs), 356 benign and 147 cancer, were collected, from which 14,699 benign and 8025 cancer PIs were extracted. Additionally, 131 WSIs (44 benign and 87 cancer) were used for external validation. Six CNN algorithms were developed for cancer detection using PIs. Subsequently, two CLAM algorithms, single branch (CLAM-SB) and multiple branch (CLAM-MB), were developed. ResNet50 demonstrated the best performance, achieving an accuracy of 0.973. The performance when interpreting internal WSIs was an area under the curve (AUC) of 0.982. CLAM-SB outperformed CLAM-MB with an AUC of 0.944 in internal WSIs. Notably, in the external test, CLAM-SB exhibited superior performance with an AUC of 0.866 compared with ResNet50's AUC of 0.804. Analysis of the heatmap revealed that cases frequently misinterpreted by AI were easily interpreted by humans, and vice versa. Because AI and humans were found to function complementarily, implementing computer-aided diagnosis is expected to significantly enhance diagnostic accuracy and reproducibility. Furthermore, the WSI-based learning in CLAM, eliminating the need for patch-by-patch annotation, offers an advantage over the CNN model.
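As a rough illustration of the weakly supervised approach described above, the slide-level pooling at the heart of a CLAM-style model is a gated attention over patch embeddings. The sketch below is a generic PyTorch stand-in, not the authors' code; the feature dimension, hidden size, and the frozen patch encoder are assumptions, and the clustering constraint and training loop are omitted.

```python
# Minimal sketch of gated-attention multiple-instance learning (the pooling idea
# behind CLAM-style weakly supervised WSI classification).
import torch
import torch.nn as nn

class GatedAttentionMIL(nn.Module):
    def __init__(self, feat_dim=1024, hidden_dim=256, n_classes=2):
        super().__init__()
        self.attn_v = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Tanh())
        self.attn_u = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Sigmoid())
        self.attn_w = nn.Linear(hidden_dim, 1)          # single attention branch (CLAM-SB-like)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):                     # patch_feats: (n_patches, feat_dim)
        a = self.attn_w(self.attn_v(patch_feats) * self.attn_u(patch_feats))
        a = torch.softmax(a, dim=0)                     # attention weight per patch
        slide_feat = torch.sum(a * patch_feats, dim=0)  # weighted slide-level feature
        return self.classifier(slide_feat), a           # slide logits + per-patch weights

# Example: 500 patch embeddings from a frozen CNN encoder, one slide-level label.
feats = torch.randn(500, 1024)
logits, attn = GatedAttentionMIL()(feats)
```

The per-patch attention weights are what a heatmap such as the one analyzed above visualizes.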
Affiliation(s)
- Jiwon Lee
- College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Seonggyeong Choi
- College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Seoyeon Shin
- College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Mohammad Rizwan Alam
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Jamshid Abdul-Ghafar
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Kyung Jin Seo
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Gisu Hwang
- AI Team, DeepNoid Inc., Seoul, Republic of Korea
- Daeky Jeong
- AI Team, DeepNoid Inc., Seoul, Republic of Korea
- Gyungyub Gong
- Department of Pathology, Asan Medical Center, Seoul, Republic of Korea
- Nam Hoon Cho
- Department of Pathology, Yonsei University College of Medicine, Seoul, Republic of Korea
- Chong Woo Yoo
- Department of Pathology, National Cancer Center, Ilsan, Gyeonggi-do, Republic of Korea
- Hyung Kyung Kim
- Department of Pathology, Seoul National University Bundang Hospital, Seongnam-si, Gyeonggi-do, Republic of Korea; Department of Pathology, Samsung Medical Center, Seoul, Republic of Korea
- Yosep Chong
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea.
- Kwangil Yim
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea.
2. Liu X, Zhang CJ. Reply to: Comment on "Staging peritoneal metastases in colorectal cancer: The correlation between MRI, surgical and histopathological peritoneal cancer index". Eur J Surg Oncol 2025:109653. PMID: 39924413. DOI: 10.1016/j.ejso.2025.109653.
Affiliation(s)
- Xiaoxia Liu
- Department of Neurology, Ningbo No. 2 Hospital, No. 41 Northwest Street, Haishu District, Ningbo, 315010, Zhejiang Province, China.
- Chong-Jie Zhang
- Department of Colorectal Surgery, Ningbo No. 2 Hospital, No. 41 Northwest Street, Haishu District, Ningbo, 315010, Zhejiang Province, China.
3. Wojtulewski A, Sikora A, Dineen S, Raoof M, Karolak A. Using artificial intelligence and statistics for managing peritoneal metastases from gastrointestinal cancers. Brief Funct Genomics 2025;24:elae049. PMID: 39736152; PMCID: PMC11735730. DOI: 10.1093/bfgp/elae049.
Abstract
OBJECTIVE The primary objective of this study was to investigate applications of artificial intelligence (AI) and statistical methodologies for analyzing and managing peritoneal metastases (PM) caused by gastrointestinal cancers. METHODS PubMed and Google Scholar were comprehensively searched using relevant keywords and selection criteria to identify articles and reviews related to the topic. The AI approaches considered were conventional machine learning (ML) and deep learning (DL) models, and the relevant statistical approaches included biostatistics and logistic models. RESULTS The systematic literature review yielded nearly 30 articles meeting the predefined criteria. Analyses of these studies showed that AI methodologies consistently outperformed traditional statistical approaches. Among the AI approaches, DL consistently produced the most precise results, whereas classical ML showed more varied performance but maintained high predictive accuracy. A larger sample size was the recurring factor that increased prediction accuracy for models of the same type. CONCLUSIONS AI and statistical approaches can detect PM developing in patients with gastrointestinal cancers. If clinicians integrated these approaches into diagnostics and prognostics, they could better analyze and manage PM, enhancing clinical decision-making and patient outcomes. Collaboration across multiple institutions would also help standardize data-collection methods and yield consistent results.
Affiliation(s)
- Adam Wojtulewski
- Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, 12902 Magnolia Drive, Tampa FL 33612, United States
- Department of Computer and Information Science and Engineering, University of Florida, 432 Newell Dr, Gainesville, FL 32611, United States
- Aleksandra Sikora
- Department of Medicine, Medical University of Warsaw, Żwirki i Wigury 61, 02-091 Warszawa, Poland
- Sean Dineen
- Department of Gastrointestinal Oncology, H. Lee Moffitt Cancer Center and Research Institute, 12902 Magnolia Drive, Tampa FL 33612, United States
- Mustafa Raoof
- Division of Surgical Oncology, Department of Surgery, City of Hope National Medical Center, 1500 East Duarte Road Duarte, CA 91010, United States
- Aleksandra Karolak
- Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, 12902 Magnolia Drive, Tampa FL 33612, United States
- Department of Gastrointestinal Oncology, H. Lee Moffitt Cancer Center and Research Institute, 12902 Magnolia Drive, Tampa FL 33612, United States
4. Gong W, Vaishnani DK, Jin XC, Zeng J, Chen W, Huang H, Zhou YQ, Hla KWY, Geng C, Ma J. Evaluation of an enhanced ResNet-18 classification model for rapid on-site diagnosis in respiratory cytology. BMC Cancer 2025;25:10. PMID: 39754166; PMCID: PMC11697834. DOI: 10.1186/s12885-024-13402-3.
Abstract
OBJECTIVE Rapid on-site evaluation (ROSE) of respiratory cytology specimens is a critical technique for accurate and timely diagnosis of lung cancer. However, in China, limited familiarity with the Diff-Quik staining method and a shortage of trained cytopathologists hamper the utilization of ROSE. Therefore, developing an improved deep learning model to assist clinicians in promptly and accurately evaluating Diff-Quik-stained cytology samples during ROSE has important clinical value. METHODS Retrospectively, 116 digital images of Diff-Quik-stained cytology samples were obtained from whole-slide scans. These covered six diagnostic categories: carcinoid, normal cells, adenocarcinoma, squamous cell carcinoma, non-small cell carcinoma, and small cell carcinoma. All malignant diagnoses were confirmed by histopathology and immunohistochemistry. The test image set was presented as single-choice questions to three cytopathologists from different hospitals with varying levels of experience, as well as to an artificial intelligence system. RESULTS The diagnostic accuracy of the cytopathologists correlated with their years of practice and hospital setting. The AI model demonstrated proficiency comparable to that of the human readers. Importantly, all combinations of AI assistance and human cytopathologists increased diagnostic efficiency to varying degrees. CONCLUSIONS This deep learning model shows promising capability as an aid for on-site diagnosis of respiratory cytology samples. However, human expertise remains essential to the diagnostic process.
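For orientation, adapting an ImageNet-pre-trained ResNet-18 to the six diagnostic categories listed above typically amounts to swapping the final fully connected layer; the sketch below uses standard torchvision components, with the label order, input size, and optimizer settings as assumptions rather than the study's configuration.

```python
# Hypothetical sketch: adapt an ImageNet ResNet-18 to six respiratory cytology classes.
import torch
import torch.nn as nn
from torchvision import models

classes = ["carcinoid", "normal", "adenocarcinoma", "squamous_cell",
           "non_small_cell", "small_cell"]                 # assumed label order

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(classes))  # new 6-way head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One dummy training step on a batch of 224x224 RGB patches.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(classes), (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```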
Affiliation(s)
- Wei Gong
- Department of Pathology, Lishui Municipal Central Hospital, Lishui, 323000, Zhejiang Province, China
- Deep K Vaishnani
- School of International Studies, Wenzhou Medical University, Ouhai District, Chashan, Wenzhou, 325035, Zhejiang Province, China
- Xuan-Chen Jin
- School of Clinical Medicine, Wenzhou Medical University, Ouhai District, Chashan, Wenzhou, Zhejiang Province, 325035, China
- Jing Zeng
- School of Clinical Medicine, Wenzhou Medical University, Ouhai District, Chashan, Wenzhou, Zhejiang Province, 325035, China
- Wei Chen
- Renji College, Wenzhou Medical University, Wenzhou, 325035, Zhejiang, PR China
- Huixia Huang
- Department of Archives, Lishui Second People's Hospital, Liandu District, Lishui City, 323000, Zhejiang Province, China
- Yu-Qing Zhou
- School of International Studies, Wenzhou Medical University, Ouhai District, Chashan, Wenzhou, 325035, Zhejiang Province, China
- Khaing Wut Yi Hla
- School of International Studies, Wenzhou Medical University, Ouhai District, Chashan, Wenzhou, 325035, Zhejiang Province, China
- Chen Geng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, Jiangsu Province, China.
- Jun Ma
- Department of Pathology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, Zhejiang, China.
5. Vaickus LJ, Kerr DA, Velez Torres JM, Levy J. Artificial Intelligence Applications in Cytopathology: Current State of the Art. Surg Pathol Clin 2024;17:521-531. PMID: 39129146. DOI: 10.1016/j.path.2024.04.011.
Abstract
The practice of cytopathology has been significantly refined in recent years, largely through the creation of consensus rule sets for the diagnosis of particular specimens (Bethesda, Milan, Paris, and so forth). In general, these diagnostic systems have focused on reducing intraobserver variance, removing nebulous/redundant categories, reducing the use of "atypical" diagnoses, and promoting the use of quantitative scoring systems while providing a uniform language to communicate these results. Computational pathology is a natural offshoot of this process in that it promises 100% reproducible diagnoses rendered by quantitative processes that are free from many of the biases of human practitioners.
Affiliation(s)
- Louis J Vaickus
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Geisel School of Medicine at Dartmouth, Hanover, NH 03750, USA.
- Darcy A Kerr
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Geisel School of Medicine at Dartmouth, Hanover, NH 03750, USA. https://twitter.com/darcykerrMD
- Jaylou M Velez Torres
- Department of Pathology and Laboratory Medicine, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Joshua Levy
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, One Medical Center Drive, Lebanon, NH 03756, USA; Cedars-Sinai Medical Center, 8700 Beverly Boulevard, Los Angeles, CA 90048, USA
6. Feng R, Li S, Zhang Y. AI-powered microscopy image analysis for parasitology: integrating human expertise. Trends Parasitol 2024;40:633-646. PMID: 38824067. DOI: 10.1016/j.pt.2024.05.005.
Abstract
Microscopy image analysis plays a pivotal role in parasitology research. Deep learning (DL), a subset of artificial intelligence (AI), has garnered significant attention. However, traditional DL-based methods for general purposes are data-driven, often lacking explainability due to their black-box nature and sparse instructional resources. To address these challenges, this article presents a comprehensive review of recent advancements in knowledge-integrated DL models tailored for microscopy image analysis in parasitology. The massive amounts of human expert knowledge from parasitologists can enhance the accuracy and explainability of AI-driven decisions. It is expected that the adoption of knowledge-integrated DL models will open up a wide range of applications in the field of parasitology.
Affiliation(s)
- Ruijun Feng
- College of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China; School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Sen Li
- College of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China
- Yang Zhang
- College of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China.
7. Wei GX, Zhou YW, Li ZP, Qiu M. Application of artificial intelligence in the diagnosis, treatment, and recurrence prediction of peritoneal carcinomatosis. Heliyon 2024;10:e29249. PMID: 38601686; PMCID: PMC11004411. DOI: 10.1016/j.heliyon.2024.e29249.
Abstract
Peritoneal carcinomatosis (PC) is a type of secondary cancer that is not sensitive to conventional intravenous chemotherapy. Treatment strategies for PC are usually palliative rather than curative. Recently, artificial intelligence (AI) has been widely used in the medical field, making early diagnosis, individualized treatment, and accurate prognostic evaluation more feasible for various cancers, including mediastinal malignancies, colorectal cancer, and lung cancer. As a branch of computer science, AI specializes in image recognition, speech recognition, and automatic large-scale data extraction and output. AI technologies have also made breakthrough progress in the field of PC owing to their powerful learning capacity and efficient computational power. AI has been successfully applied to various approaches for PC diagnosis, including imaging, blood tests, proteomics, and pathological diagnosis. Owing to the automatic feature extraction of convolutional neural networks and learning models based on machine learning algorithms, AI-assisted diagnosis is associated with a higher accuracy rate than conventional diagnostic methods. In addition, AI is also used in the treatment of PC, including surgical resection, intraperitoneal chemotherapy, and systemic chemotherapy, which significantly improves the survival of patients with PC. In particular, recurrence prediction and emotion evaluation of PC patients have also been combined with AI technology, further improving patients' quality of life. Here we comprehensively review and summarize the latest developments in the application of AI in PC, helping oncologists to comprehensively diagnose PC and provide more precise treatment strategies for patients with PC.
Affiliation(s)
- Gui-Xia Wei
- Department of Abdominal Cancer, Cancer Center, West China Hospital of Sichuan University, Chengdu, China
- Yu-Wen Zhou
- Department of Colorectal Cancer Center, West China Hospital of Sichuan University, Chengdu, China
- Zhi-Ping Li
- Department of Abdominal Cancer, Cancer Center, West China Hospital of Sichuan University, Chengdu, China
- Meng Qiu
- Department of Colorectal Cancer Center, West China Hospital of Sichuan University, Chengdu, China
8. Abd-Almoniem E, Abd-Alsabour N, Elsheikh S, Mostafa RR, Elesawy YF. A Novel Validated Real-World Dataset for the Diagnosis of Multiclass Serous Effusion Cytology according to the International System and Ground-Truth Validation Data. Acta Cytol 2024;68:160-170. PMID: 38522415. DOI: 10.1159/000538465.
Abstract
INTRODUCTION The application of artificial intelligence (AI) algorithms in serous fluid cytology is lacking due to the deficiency in standardized publicly available datasets. Here, we develop a novel public serous effusion cytology dataset. Furthermore, we apply AI algorithms on it to test its diagnostic utility and safety in clinical practice. METHODS The work is divided into three phases. Phase 1 entails building the dataset based on the multitiered evidence-based classification system proposed by the International System (TIS) of serous fluid cytology along with ground-truth tissue diagnosis for malignancy. To ensure reliable results of future AI research on this dataset, we carefully consider all the steps of the preparation and staining from a real-world cytopathology perspective. In phase 2, we pay special consideration to the image acquisition pipeline to ensure image integrity. Then we utilize the power of transfer learning using the convolutional layers of the VGG16 deep learning model for feature extraction. Finally, in phase 3, we apply the random forest classifier on the constructed dataset. RESULTS The dataset comprises 3,731 images distributed among the four TIS diagnostic categories. The model achieves 74% accuracy in this multiclass classification problem. Using a one-versus-all classifier, the fallout rate for images that are misclassified as negative for malignancy despite being a higher risk diagnosis is 0.13. Most of these misclassified images (77%) belong to the atypia of undetermined significance category in concordance with real-life statistics. CONCLUSION This is the first and largest publicly available serous fluid cytology dataset based on a standardized diagnostic system. It is also the first dataset to include various types of effusions and pericardial fluid specimens. In addition, it is the first dataset to include the diagnostically challenging atypical categories. AI algorithms applied on this novel dataset show reliable results that can be incorporated into actual clinical practice with minimal risk of missing a diagnosis of malignancy. This work provides a foundation for researchers to develop and test further AI algorithms for the diagnosis of serous effusions.
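The described pipeline, frozen VGG16 convolutional layers for feature extraction followed by a random forest classifier, can be approximated with off-the-shelf components as sketched below; the pooling step, image size, and hyperparameters are assumptions, and the random arrays stand in for images from the published dataset.

```python
# Sketch: VGG16 convolutional features + random forest over the four TIS categories.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
extractor.eval()

def extract(images):                       # images: (N, 3, 224, 224) tensor
    with torch.no_grad():
        return extractor(images).numpy()   # (N, 512) pooled convolutional features

X_train = extract(torch.randn(40, 3, 224, 224))   # stand-ins for real cytology images
y_train = np.random.randint(0, 4, size=40)        # 0..3 = assumed TIS category codes
X_test = extract(torch.randn(10, 3, 224, 224))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(clf.predict(X_test))
```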
Affiliation(s)
- Esraa Abd-Almoniem
- Department of Anatomic Pathology, Kasr Alainy Faculty of Medicine, Cairo University, Giza, Egypt
- Nadia Abd-Alsabour
- Department of Computer Science, Faculty of Graduate Studies for Statistical Research, Cairo University, Giza, Egypt
- Samar Elsheikh
- Department of Anatomic Pathology, Kasr Alainy Faculty of Medicine, Cairo University, Giza, Egypt
- Rasha R Mostafa
- Department of Anatomic Pathology, Kasr Alainy Faculty of Medicine, Cairo University, Giza, Egypt
- Yasmine Fathy Elesawy
- Department of Anatomic Pathology, Kasr Alainy Faculty of Medicine, Cairo University, Giza, Egypt
9. Kim HK, Han E, Lee J, Yim K, Abdul-Ghafar J, Seo KJ, Seo JW, Gong G, Cho NH, Kim M, Yoo CW, Chong Y. Artificial-Intelligence-Assisted Detection of Metastatic Colorectal Cancer Cells in Ascitic Fluid. Cancers (Basel) 2024;16:1064. PMID: 38473421. DOI: 10.3390/cancers16051064.
Abstract
Ascites cytology is a cost-effective test for metastatic colorectal cancer (CRC) in the abdominal cavity. However, metastatic carcinoma of the peritoneum is difficult to diagnose based on biopsy findings, and ascitic aspiration cytology has a low sensitivity and specificity and a high inter-observer variability. The aim of the present study was to apply artificial intelligence (AI) to classify benign and malignant cells in ascites cytology patch images of metastatic CRC using a deep convolutional neural network. Datasets were collected from The OPEN AI Dataset Project, a nationwide cytology dataset for AI research. The numbers of patch images used for training, validation, and testing were 56,560, 7068, and 6534, respectively. We evaluated 1041 patch images of benign and metastatic CRC in the ascitic fluid to compare the performance of pathologists and an AI algorithm, and to examine whether the diagnostic accuracy of pathologists improved with the assistance of AI. This AI method showed an accuracy, a sensitivity, and a specificity of 93.74%, 87.76%, and 99.75%, respectively, for the differential diagnosis of malignant and benign ascites. The diagnostic accuracy and sensitivity of the pathologist with the assistance of the proposed AI method increased from 86.8% to 90.5% and from 73.3% to 79.3%, respectively. The proposed deep learning method may assist pathologists with different levels of experience in diagnosing metastatic CRC cells of ascites.
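For reference, the reported accuracy, sensitivity, and specificity of a binary benign-versus-malignant classifier follow directly from the confusion matrix, as in this short sketch (illustrative labels only, not the study's data).

```python
# Sketch: accuracy, sensitivity, specificity for benign (0) vs malignant (1) patches.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]    # illustrative ground-truth labels
y_pred = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]    # illustrative model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)               # recall on malignant patches
specificity = tn / (tn + fp)               # recall on benign patches
print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```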
Affiliation(s)
- Hyung Kyung Kim
- Department of Pathology, Seoul National University Bundang Hospital, Seongnam 13620, Republic of Korea
- Department of Pathology, Samsung Medical Center, Seoul 06351, Republic of Korea
- Eunkyung Han
- Department of Pathology, Soonchunyang University Hospital Bucheon, Bucheon 14584, Republic of Korea
- Jeonghyo Lee
- Department of Pathology, Seoul National University Bundang Hospital, Seongnam 13620, Republic of Korea
- Kwangil Yim
- Department of Hospital Pathology, The Catholic University of Korea College of Medicine, Seoul 06591, Republic of Korea
- Jamshid Abdul-Ghafar
- Department of Hospital Pathology, The Catholic University of Korea College of Medicine, Seoul 06591, Republic of Korea
- Kyung Jin Seo
- Department of Hospital Pathology, The Catholic University of Korea College of Medicine, Seoul 06591, Republic of Korea
- Jang Won Seo
- AI Team, MTS Company Inc., Seoul 06178, Republic of Korea
- Gyungyub Gong
- Department of Pathology, Asan Medical Center, Seoul 05505, Republic of Korea
- Nam Hoon Cho
- Department of Pathology, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Milim Kim
- Department of Pathology, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Chong Woo Yoo
- Department of Pathology, National Cancer Center, Goyang 10408, Republic of Korea
- Yosep Chong
- Department of Hospital Pathology, The Catholic University of Korea College of Medicine, Seoul 06591, Republic of Korea
10. Zhou M, Jie W, Tang F, Zhang S, Mao Q, Liu C, Hao Y. Deep learning algorithms for classification and detection of recurrent aphthous ulcerations using oral clinical photographic images. J Dent Sci 2024;19:254-260. PMID: 38303872; PMCID: PMC10829559. DOI: 10.1016/j.jds.2023.04.022.
Abstract
Background/purpose The application of artificial intelligence diagnosis based on deep learning in the medical field has been widely accepted. We aimed to evaluate convolutional neural networks (CNNs) for automated classification and detection of recurrent aphthous ulcerations (RAU), normal oral mucosa, and other common oral mucosal diseases in clinical oral photographs. Materials and methods The study included 785 clinical oral photographs, which were divided into 251 images of RAU, 271 images of normal oral mucosa, and 263 images of other common oral mucosal diseases. Four and three CNN models were used for the classification and detection tasks, respectively. Of these images, 628 were randomly selected as training data, and 78 and 79 images were assigned as validation and test data, respectively. Main outcome measures included precision, recall, F1 score, specificity, sensitivity, and area under the receiver operating characteristic curve (AUC). Results In the classification task, the pretrained ResNet50 model had the best performance, with a precision of 92.86%, a recall of 91.84%, an F1 score of 92.24%, a specificity of 96.41%, a sensitivity of 91.84%, and an AUC of 98.95%. In the detection task, the pretrained YOLOv5 model had the best performance, with a precision of 98.70%, a recall of 79.51%, an F1 score of 88.07%, and an area under the precision-recall curve of 90.89%. Conclusion The pretrained ResNet50 and YOLOv5 algorithms showed superior performance and acceptable potential in the classification and detection of RAU lesions based on non-invasive oral images, which may prove useful in clinical practice.
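For the detection task, metrics such as the reported area under the precision-recall curve are evaluated at an intersection-over-union (IoU) threshold; a minimal IoU helper (plain Python, independent of the YOLOv5 codebase) is sketched below.

```python
# Sketch: IoU between two boxes in (x1, y1, x2, y2) format, the overlap criterion
# used to decide whether a detected lesion matches a ground-truth box.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A detection is usually counted as a true positive when IoU >= 0.5.
print(iou((10, 10, 100, 100), (30, 30, 120, 120)) >= 0.5)
```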
Affiliation(s)
- Mimi Zhou
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Weiping Jie
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Fan Tang
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Shangjun Zhang
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Qinghua Mao
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Chuanxia Liu
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Yilong Hao
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
11. Wang Z, Liu Y, Niu X. Application of artificial intelligence for improving early detection and prediction of therapeutic outcomes for gastric cancer in the era of precision oncology. Semin Cancer Biol 2023;93:83-96. PMID: 37116818. DOI: 10.1016/j.semcancer.2023.04.009.
Abstract
Gastric cancer is a leading contributor to cancer incidence and mortality globally. Recently, artificial intelligence approaches, particularly machine learning and deep learning, have been rapidly reshaping the full spectrum of clinical management for gastric cancer. Machine learning involves computers running repeated, iterative models that progressively improve performance on a particular task. Deep learning is a subtype of machine learning based on multilayered neural networks inspired by the human brain. This review summarizes the application of artificial intelligence algorithms to multi-dimensional data, including clinical and follow-up information, conventional images (endoscopy, histopathology, and computed tomography (CT)), molecular biomarkers, etc., to improve risk surveillance of gastric cancer with established risk factors; the accuracy of diagnosis and survival prediction among established gastric cancer patients; and the prediction of treatment outcomes for assisting clinical decision making. Artificial intelligence therefore has a profound impact on almost all aspects of gastric cancer, from improving diagnosis to precision medicine. Despite this, most established artificial intelligence-based models remain in a research format and often have limited value in real-world clinical practice. With the increasing adoption of artificial intelligence in clinical use, we anticipate the arrival of artificial intelligence-powered gastric cancer care.
Affiliation(s)
- Zhe Wang
- Department of Digestive Diseases 1, Cancer Hospital of China Medical University, Cancer Hospital of Dalian University of Technology, Liaoning Cancer Hospital & Institute, Shenyang 110042, Liaoning, China
- Yang Liu
- Department of Gastric Surgery, Cancer Hospital of China Medical University, Cancer Hospital of Dalian University of Technology, Liaoning Cancer Hospital & Institute, Shenyang 110042, Liaoning, China.
- Xing Niu
- China Medical University, Shenyang 110122, Liaoning, China.
12. Zha Y, Xue C, Liu Y, Ni J, De La Fuente JM, Cui D. Artificial intelligence in theranostics of gastric cancer, a review. Med Rev (2021) 2023;3:214-229. PMID: 37789960; PMCID: PMC10542883. DOI: 10.1515/mr-2022-0042.
Abstract
Gastric cancer (GC) is one of the most common cancers, with high morbidity and mortality worldwide. Achieving precise diagnosis and therapy of GC is a pressing clinical need. In recent years, artificial intelligence (AI) has been actively explored for the early diagnosis, treatment, and prognosis of gastric carcinoma. Herein, we review recent advances of AI in the early screening, diagnosis, therapy, and prognosis of stomach carcinoma. Notably, an AI system combined with breath screening for early GC improved the early GC diagnosis rate to 97.4%, and an AI model for a saliva biomarker-based stomach cancer diagnosis system achieved an overall accuracy of 97.18%, a specificity of 97.44%, and a sensitivity of 96.88%. We also discuss the concepts, issues, approaches, and challenges of AI applied to stomach cancer. This review provides a comprehensive view and roadmap for readers working in this field, with the aim of pushing the application of AI in the theranostics of stomach cancer to increase the early detection rate and cure rate of GC patients.
Affiliation(s)
- Yiqian Zha
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
- Cuili Xue
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
- Yanlei Liu
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
- Jian Ni
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
- Daxiang Cui
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
13. Su F, Wang Y, Wei M, Wang C, Wang S, Yang L, Li J, Yuan P, Luo DG, Zhang C. Noninvasive Tracking of Every Individual in Unmarked Mouse Groups Using Multi-Camera Fusion and Deep Learning. Neurosci Bull 2023;39:893-910. PMID: 36571715; PMCID: PMC10264345. DOI: 10.1007/s12264-022-00988-6.
Abstract
Accurate and efficient methods for identifying and tracking each animal in a group are needed to study complex behaviors and social interactions. Traditional tracking methods (e.g., marking each animal with dye or surgically implanting microchips) can be invasive and may have an impact on the social behavior being measured. To overcome these shortcomings, video-based methods for tracking unmarked animals, such as fruit flies and zebrafish, have been developed. However, tracking individual mice in a group remains a challenging problem because of their flexible body and complicated interaction patterns. In this study, we report the development of a multi-object tracker for mice that uses the Faster region-based convolutional neural network (R-CNN) deep learning algorithm with geometric transformations in combination with multi-camera/multi-image fusion technology. The system successfully tracked every individual in groups of unmarked mice and was applied to investigate chasing behavior. The proposed system constitutes a step forward in the noninvasive tracking of individual mice engaged in social behavior.
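As a rough sketch of the detection stage, torchvision ships a reference Faster R-CNN implementation; the two-class setup (background plus mouse), the untrained weights, and the confidence threshold below are assumptions, and the geometric transformations and multi-camera fusion described above are not shown.

```python
# Sketch: Faster R-CNN inference for detecting mice in a single camera frame.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=2)
model.eval()                                # untrained stand-in; real use loads trained weights

frame = torch.rand(3, 480, 640)             # one RGB frame, values in [0, 1]
with torch.no_grad():
    detections = model([frame])[0]          # dict with "boxes", "labels", "scores"

keep = detections["scores"] > 0.5           # assumed confidence threshold
print(detections["boxes"][keep])
```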
Affiliation(s)
- Feng Su
- Department of Neurobiology, School of Basic Medical Sciences, Beijing Key Laboratory of Neural Regeneration and Repair, Capital Medical University, Beijing, 100069, China
- Chinese Institute for Brain Research, Beijing, 102206, China
- State Key Laboratory of Translational Medicine and Innovative Drug Development, Nanjing, 210000, China
- Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China
- Yangzhen Wang
- School of Life Sciences, Tsinghua University, Beijing, 100084, China
- Mengping Wei
- Department of Neurobiology, School of Basic Medical Sciences, Beijing Key Laboratory of Neural Regeneration and Repair, Capital Medical University, Beijing, 100069, China
- Chong Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Shaoli Wang
- The Key Laboratory of Developmental Genes and Human Disease, Institute of Life Sciences, Southeast University, Nanjing, 210096, Jiangsu, China
- Lei Yang
- Department of Neurobiology, School of Basic Medical Sciences, Beijing Key Laboratory of Neural Regeneration and Repair, Capital Medical University, Beijing, 100069, China
- Jianmin Li
- Institute for Artificial Intelligence, the State Key Laboratory of Intelligence Technology and Systems, Beijing National Research Center for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China
- Peijiang Yuan
- School of Mechanical Engineering and Automation, Beihang University, Beijing, 100191, China.
- Dong-Gen Luo
- Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China.
- Chen Zhang
- Department of Neurobiology, School of Basic Medical Sciences, Beijing Key Laboratory of Neural Regeneration and Repair, Capital Medical University, Beijing, 100069, China.
- Chinese Institute for Brain Research, Beijing, 102206, China.
- State Key Laboratory of Translational Medicine and Innovative Drug Development, Nanjing, 210000, China.
14. Su F, Cheng Y, Chang L, Wang L, Huang G, Yuan P, Zhang C, Ma Y. Annotation-free glioma grading from pathological images using ensemble deep learning. Heliyon 2023;9:e14654. PMID: 37009333; PMCID: PMC10060174. DOI: 10.1016/j.heliyon.2023.e14654.
Abstract
Glioma grading is critical for treatment selection, and the fine classification between glioma grades II and III is still a pathological challenge. Traditional systems based on a single deep learning (DL) model can only show relatively low accuracy in distinguishing glioma grades II and III. Introducing ensemble DL models by combining DL and ensemble learning techniques, we achieved annotation-free glioma grading (grade II or III) from pathological images. We established multiple tile-level DL models using residual network ResNet-18 architecture and then used DL models as component classifiers to develop ensemble DL models to achieve patient-level glioma grading. Whole-slide images of 507 subjects with low-grade glioma (LGG) from the Cancer Genome Atlas (TCGA) were included. The 30 DL models exhibited an average area under the curve (AUC) of 0.7991 in patient-level glioma grading. Single DL models showed large variation, and the median between-model cosine similarity was 0.9524, significantly smaller than the threshold of 1.0. The ensemble model based on logistic regression (LR) methods with a 14-component DL classifier (LR-14) demonstrated a mean patient-level accuracy and AUC of 0.8011 and 0.8945, respectively. Our proposed LR-14 ensemble DL model achieved state-of-the-art performance in glioma grade II and III classifications based on unannotated pathological images.
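The patient-level ensembling step, combining tile-level classifier outputs through a logistic regression aggregator, can be illustrated roughly as follows; treating each component model's mean tile probability as one feature per patient is an assumption made for the sketch, not the paper's exact recipe.

```python
# Sketch: aggregate tile-level probabilities from several component classifiers
# into a patient-level grade II vs grade III prediction with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients, n_models = 60, 14              # e.g. a 14-component ensemble (LR-14)

# One feature per component model: its mean tile probability for that patient
# (stand-in values; real features would come from the tile-level ResNet-18 models).
X = rng.uniform(0, 1, size=(n_patients, n_models))
y = rng.integers(0, 2, size=n_patients)    # 0 = grade II, 1 = grade III (illustrative)

ensemble = LogisticRegression(max_iter=1000).fit(X[:40], y[:40])
print(ensemble.predict_proba(X[40:])[:5, 1])   # patient-level grade III probabilities
```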
15. Deep learning for computational cytology: A survey. Med Image Anal 2023;84:102691. PMID: 36455333. DOI: 10.1016/j.media.2022.102691.
Abstract
Computational cytology is a critical, rapid-developing, yet challenging topic in medical image computing concerned with analyzing digitized cytology images by computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, leading to boosting publications of cytological studies. In this article, we survey more than 120 publications of DL-based cytology image analysis to investigate the advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, versatile cytology image analysis applications including cell classification, slide-level cancer screening, nuclei or cell detection and segmentation. Finally, we discuss current challenges and potential research directions of computational cytology.
16. Su F, Wei M, Sun M, Jiang L, Dong Z, Wang J, Zhang C. Deep learning-based synapse counting and synaptic ultrastructure analysis of electron microscopy images. J Neurosci Methods 2023;384:109750. PMID: 36414102. DOI: 10.1016/j.jneumeth.2022.109750.
Abstract
BACKGROUND Synapses are the connections between neurons in the central nervous system (CNS) or between neurons and other excitable cells in the peripheral nervous system (PNS), where electrical or chemical signals rapidly travel through one cell to another with high spatial precision. Synaptic analysis, based on synapse numbers and fine morphology, is the basis for understanding neurological functions and diseases. Manual analysis of synaptic structures in electron microscopy (EM) images is often limited by low efficiency and subjective bias. NEW METHOD We developed a multifunctional synaptic analysis system based on several advanced deep learning (DL) models. The system achieved synapse counting in low-magnification EM images and synaptic ultrastructure analysis in high-magnification EM images. RESULTS The synapse counting system based on ResNet18 and a Faster R-CNN model had a mean average precision (mAP) of 92.55%. For synaptic ultrastructure analysis, the Faster R-CNN model based on ResNet50 achieved a mAP of 91.60%, the DeepLab v3 + model based on ResNet50 enabled high performance in presynaptic and postsynaptic membrane segmentation with a global accuracy of 0.9811, and the Faster R-CNN model based on ResNet18 achieved a mAP of 91.41% for synaptic vesicle detection. CONCLUSIONS The proposed multifunctional synaptic analysis system may help to overcome the experimental bias inherent in manual analysis, thereby facilitating EM image-based synaptic function studies.
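For a feel of the membrane-segmentation stage, torchvision's DeepLabV3 reference model (the plain v3 variant, standing in for the DeepLab v3+ used in the study) can be configured for three classes: background, presynaptic membrane, and postsynaptic membrane; the class count, input size, and untrained weights are assumptions.

```python
# Sketch: per-pixel segmentation of an EM image into background, presynaptic
# membrane, and postsynaptic membrane with a torchvision DeepLabV3 model.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=3)
model.eval()                                # untrained stand-in for the trained network

em_image = torch.rand(1, 3, 512, 512)       # grayscale EM tile replicated to 3 channels
with torch.no_grad():
    logits = model(em_image)["out"]         # (1, 3, 512, 512) per-pixel class scores

mask = logits.argmax(dim=1)                 # predicted label map
print(mask.shape)
```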
Affiliation(s)
- Feng Su
- Department of Neurobiology, School of Basic Medical Sciences, Beijing Key Laboratory of Neural Regeneration and Repair, Capital Medical University, Beijing 100069, China; Chinese Institute for Brain Research, Beijing 102206, China; State Key Laboratory of Translational Medicine and Innovative Drug Development, Nanjing 210000, Jiangsu, China; Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Mengping Wei
- Department of Neurobiology, School of Basic Medical Sciences, Beijing Key Laboratory of Neural Regeneration and Repair, Capital Medical University, Beijing 100069, China
- Meng Sun
- Department of Neurobiology, School of Basic Medical Sciences, Beijing Key Laboratory of Neural Regeneration and Repair, Capital Medical University, Beijing 100069, China
- Lixin Jiang
- Peking University Institute of Mental Health (Sixth Hospital), No. 51 Huayuanbei Road, Haidian District, Beijing 100191, China
- Zhaoqi Dong
- Department of Neurobiology, School of Basic Medical Sciences, Beijing Key Laboratory of Neural Regeneration and Repair, Capital Medical University, Beijing 100069, China
- Jue Wang
- Department of Neurobiology, School of Basic Medical Sciences, Beijing Key Laboratory of Neural Regeneration and Repair, Capital Medical University, Beijing 100069, China
- Chen Zhang
- Department of Neurobiology, School of Basic Medical Sciences, Beijing Key Laboratory of Neural Regeneration and Repair, Capital Medical University, Beijing 100069, China; Chinese Institute for Brain Research, Beijing 102206, China; State Key Laboratory of Translational Medicine and Innovative Drug Development, Nanjing 210000, Jiangsu, China.
17. Guan X, Lu N, Zhang J. Accurate preoperative staging and HER2 status prediction of gastric cancer by the deep learning system based on enhanced computed tomography. Front Oncol 2022;12:950185. PMID: 36452488; PMCID: PMC9702985. DOI: 10.3389/fonc.2022.950185.
Abstract
Purpose To construct a deep learning system (DLS) based on enhanced computed tomography (CT) images for preoperative prediction of staging and human epidermal growth factor receptor 2 (HER2) status in gastric cancer patients. Methods The raw enhanced CT image dataset consisted of CT images of 389 patients in the retrospective cohort, The Cancer Imaging Archive (TCIA) cohort, and the prospective cohort. The DLS was developed by transfer learning for tumor detection, staging, and HER2 status prediction. The pre-trained YOLOv5, EfficientNet, EfficientNetV2, Vision Transformer (ViT), and Swin Transformer (SWT) were studied. The tumor detection and staging dataset consisted of 4860 enhanced CT images with annotated tumor bounding boxes. The HER2 status prediction dataset consisted of 38900 enhanced CT images. Results The DetectionNet based on YOLOv5 realized tumor detection and staging and achieved a mean average precision at IoU=0.5 (mAP_0.5) of 0.909 in the external validation cohort. The ViT-based PredictionNet performed optimally in HER2 status prediction, with areas under the receiver operating characteristic curve (AUC) of 0.9721 and 0.9995 in the TCIA cohort and prospective cohort, respectively. The DLS, comprising DetectionNet and PredictionNet, showed excellent performance in CT image interpretation. Conclusion This study developed an enhanced CT-based DLS to preoperatively predict the stage and HER2 status of gastric cancer patients, which will help in choosing the appropriate treatment to improve the survival of gastric cancer patients.
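A rough sketch of a ViT classifier set up for binary HER2 status prediction is shown below, using torchvision's ViT-B/16 as a stand-in for the pre-trained backbone mentioned above; the head replacement, crop size, and untrained weights are assumptions.

```python
# Sketch: Vision Transformer with a two-way head for HER2-positive vs HER2-negative.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

model = vit_b_16(weights=None)                      # pre-trained weights would be used in practice
model.heads.head = nn.Linear(model.heads.head.in_features, 2)  # binary HER2 head

ct_batch = torch.randn(4, 3, 224, 224)              # CT tumor crops resized to 224x224
logits = model(ct_batch)                            # (4, 2)
probs = torch.softmax(logits, dim=1)[:, 1]          # probability of HER2-positive
print(probs.shape)
```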
Affiliation(s)
- Jianping Zhang
- Department of General Surgery, The Second Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
18. Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022;23:6702380. PMID: 36124675; PMCID: PMC9677480. DOI: 10.1093/bib/bbac367.
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology research allows using computational technologies to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates using the most popular image data, hematoxylin-eosin stained tissue slide images, to find a strategic solution for the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology has in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China,University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao
- Corresponding authors: Yi Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail: ; Lianhe Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
- Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China,University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China,University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao
- Corresponding authors: Yi Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail: ; Lianhe Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
19. Interpretable tumor differentiation grade and microsatellite instability recognition in gastric cancer using deep learning. J Transl Med 2022;102:641-649. PMID: 35177797. DOI: 10.1038/s41374-022-00742-6.
Abstract
Gastric cancer possesses great histological and molecular diversity, which creates obstacles for rapid and efficient diagnosis. Classic diagnoses either depend on the pathologist's judgment, which relies heavily on subjective experience, or on time-consuming molecular assays for subtype diagnosis. Here, we present a deep learning (DL) system that achieves interpretable tumor differentiation grade and microsatellite instability (MSI) recognition in gastric cancer directly from hematoxylin-eosin (HE)-stained whole-slide images (WSIs). WSIs from 467 patients were divided into three cohorts: the training cohort with 348 annotated WSIs, the testing cohort with 88 annotated WSIs, and the integration testing cohort with 31 original WSIs without tumor contour annotation. First, the DL models achieved tumor differentiation recognition with F1 values of 0.8615 and 0.8977 for the poorly differentiated adenocarcinoma (PDA) and well-differentiated adenocarcinoma (WDA) classes, respectively. Their ability to extract pathological features related to glandular structure formation, which is the key to distinguishing between PDA and WDA, increased the interpretability of the DL models. Second, the DL models achieved MSI status recognition with a patient-level accuracy of 86.36% directly from HE-stained WSIs in the testing cohort. Finally, the integrated end-to-end system achieved patient-level MSI recognition from original HE-stained WSIs with an accuracy of 83.87% in the integration testing cohort with no tumor contour annotation. The proposed system therefore demonstrated high accuracy and interpretability, which can potentially promote the implementation of artificial intelligence in healthcare.
20. Side-Scan Sonar Image Classification Based on Style Transfer and Pre-Trained Convolutional Neural Networks. Electronics 2021. DOI: 10.3390/electronics10151823.
Abstract
Side-scan sonar is widely used in underwater rescue and the detection of undersea targets, such as shipwrecks, aircraft crashes, etc. Automatic object classification plays an important role in the rescue process by reducing the workload of staff and the subjective errors caused by visual fatigue. However, the application of automatic object classification to side-scan sonar images is still lacking, owing to the absence of datasets and the small number of image samples containing specific target objects. In addition, real side-scan sonar image data are unbalanced. Therefore, a side-scan sonar image classification method based on synthetic data and transfer learning is proposed in this paper. In this method, optical images are used as inputs and a style transfer network is employed to simulate side-scan sonar images, generating "simulated side-scan sonar images"; meanwhile, a convolutional neural network pre-trained on ImageNet is introduced for classification. We experimentally demonstrate that the maximum accuracy of target classification reaches 97.32% by fine-tuning the pre-trained convolutional neural network using a training set incorporating "simulated side-scan sonar images". The results show that classification accuracy can be effectively improved by combining a pre-trained convolutional neural network with "simulated side-scan sonar images".
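One way to assemble the kind of training set described above, real side-scan sonar images mixed with style-transfer-simulated ones, while compensating for the class imbalance mentioned in the abstract, is sketched below; the weighted sampler and the class counts are illustrative assumptions rather than the paper's method, and the style-transfer generator itself is not shown.

```python
# Sketch: mix real and simulated sonar images and oversample rare classes.
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader, WeightedRandomSampler

real = TensorDataset(torch.randn(100, 3, 224, 224), torch.randint(0, 3, (100,)))
simulated = TensorDataset(torch.randn(300, 3, 224, 224), torch.randint(0, 3, (300,)))
train_set = ConcatDataset([real, simulated])

labels = torch.cat([real.tensors[1], simulated.tensors[1]])
class_counts = torch.bincount(labels, minlength=3).float()
weights = (1.0 / class_counts)[labels]               # rarer classes get sampled more often
sampler = WeightedRandomSampler(weights, num_samples=len(train_set), replacement=True)

loader = DataLoader(train_set, batch_size=16, sampler=sampler)
images, targets = next(iter(loader))
print(images.shape, torch.bincount(targets, minlength=3))
```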
21. Hu Y, Su F, Dong K, Wang X, Zhao X, Jiang Y, Li J, Ji J, Sun Y. Deep learning system for lymph node quantification and metastatic cancer identification from whole-slide pathology images. Gastric Cancer 2021;24:868-877. PMID: 33484355. DOI: 10.1007/s10120-021-01158-9.
Abstract
BACKGROUND Traditional diagnostic methods for lymph node metastases are labor-intensive and time-consuming. As a result, diagnostic systems based on deep learning (DL) algorithms have become a hot topic. However, current research lacks testing on sufficient data to verify performance. The aim of this study was to develop and test a deep learning system capable of identifying lymph node metastases. METHODS In total, 921 whole-slide images of lymph nodes were divided into two cohorts: training and testing. For lymph node quantification, we combined Faster RCNN and DeepLab into a cascaded DL algorithm to detect regions of interest. For metastatic cancer identification, we fused the Xception and DenseNet-121 models and extracted features. Prospective testing to verify the performance of the diagnostic system was performed on 327 unlabeled images. We further validated the proposed system using the positive predictive value (PPV) and negative predictive value (NPV) criteria. RESULTS We developed a DL-based system capable of automated quantification and identification of metastatic lymph nodes. The accuracy of lymph node quantification was 97.13%. The PPV of the combined Xception and DenseNet-121 model was 93.53%, and the NPV was 97.99%. Our experimental results show that the differentiation level of metastatic cancer affects recognition performance. CONCLUSIONS The diagnostic system we established reached a high level of efficiency and accuracy in lymph node diagnosis. This system could potentially be integrated into the clinical workflow to assist pathologists in preliminary screening for lymph node metastases in gastric cancer patients.
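The cascade described here, detecting candidate lymph-node regions first and then classifying each region, follows a common two-stage pattern. The sketch below uses off-the-shelf torchvision stand-ins (Faster R-CNN for detection, DenseNet-121 for classification) rather than the authors' Faster RCNN/DeepLab cascade and Xception/DenseNet-121 feature fusion; the score threshold and crop size are assumptions.

# Two-stage "detect then classify" sketch with torchvision stand-ins.
import torch
from torchvision import models
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights
from torchvision.transforms.functional import resize

detector = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT).eval()
classifier = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1).eval()

def detect_and_classify(image, score_threshold=0.5):
    """image: float tensor of shape (3, H, W) with values in [0, 1]."""
    results = []
    with torch.no_grad():
        detections = detector([image])[0]
        for box, score in zip(detections["boxes"], detections["scores"]):
            if score < score_threshold:
                continue
            x1, y1, x2, y2 = box.int().tolist()
            crop = resize(image[:, y1:y2, x1:x2], [224, 224])
            # ImageNet head shown for illustration; a fine-tuned 2-class head
            # (metastatic vs. non-metastatic) would replace it in practice.
            logits = classifier(crop.unsqueeze(0))
            results.append((box.tolist(), logits.argmax(1).item()))
    return results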
Collapse
Affiliation(s)
- Yajie Hu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, 52 Fucheng Road, Haidian District, Beijing, 100142, China
| | - Feng Su
- Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China
| | - Kun Dong
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, 52 Fucheng Road, Haidian District, Beijing, 100142, China
| | - Xinyu Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, 52 Fucheng Road, Haidian District, Beijing, 100142, China
| | - Xinya Zhao
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, 52 Fucheng Road, Haidian District, Beijing, 100142, China
| | - Yumeng Jiang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, 52 Fucheng Road, Haidian District, Beijing, 100142, China
| | - Jianming Li
- Institute for Artificial Intelligence, The State Key Laboratory of Intelligence Technology and Systems, Beijing National Research Center for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China
| | - Jiafu Ji
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Gastrointestinal Cancer Center, Peking University Cancer Hospital and Institute, Beijing, 100142, China
| | - Yu Sun
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, 52 Fucheng Road, Haidian District, Beijing, 100142, China.
| |
Collapse
|
22
|
Victória Matias A, Atkinson Amorim JG, Buschetto Macarini LA, Cerentini A, Casimiro Onofre AS, De Miranda Onofre FB, Daltoé FP, Stemmer MR, von Wangenheim A. What is the state of the art of computer vision-assisted cytology? A Systematic Literature Review. Comput Med Imaging Graph 2021; 91:101934. [PMID: 34174544 DOI: 10.1016/j.compmedimag.2021.101934] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 04/16/2021] [Accepted: 05/04/2021] [Indexed: 11/28/2022]
Abstract
Cytology is a low-cost and non-invasive diagnostic procedure employed to support the diagnosis of a broad range of pathologies. Cells are harvested from tissues by aspiration or scraping, and the analysis is still performed predominantly manually by medical or laboratory professionals extensively trained for this purpose. It is a time-consuming and repetitive process in which many diagnostic criteria are subjective and vulnerable to human interpretation. Computer vision technologies, by automatically generating quantitative and objective descriptions of an examination's contents, can help minimize the chances of misdiagnosis and shorten the time required for analysis. To identify the state of the art of computer vision techniques currently applied to cytology, we conducted a Systematic Literature Review, searching for approaches for the segmentation, detection, quantification, and classification of cells and organelles using computer vision on cytology slides. We analyzed papers published in the last 4 years. The initial search was executed in September 2020 and returned 431 articles. After applying the inclusion/exclusion criteria, 157 papers remained, which we analyzed to build a picture of the tendencies and problems present in this research area, highlighting the computer vision methods, staining techniques, evaluation metrics, and the availability of the datasets and computer code used. As a result, we identified that deep learning-based methods were employed in 70 of the analyzed works, while 101 works relied on classic computer vision techniques only. The most recurrent metric for classification and object detection was accuracy (33 and 5 papers, respectively), while for segmentation it was the Dice Similarity Coefficient (38 papers). Regarding staining techniques, Papanicolaou was the most employed (130 papers), followed by H&E (20 papers) and Feulgen (5 papers). Twelve of the datasets used in the papers are publicly available, with the DTU/Herlev dataset being the most used. We conclude that there is still a lack of high-quality datasets for many types of stains and that most of the works are not mature enough to be applied in a daily clinical diagnostic routine. We also identified a growing tendency towards adopting deep learning-based approaches as the methods of choice.
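For reference, the Dice Similarity Coefficient that the review identifies as the dominant segmentation metric is straightforward to compute for binary masks; a small NumPy helper is shown below, not tied to any of the reviewed implementations.

# Dice Similarity Coefficient for boolean masks of the same shape.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|P ∩ T| / (|P| + |T|); returns a value in [0, 1]."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Usage: dice_coefficient(predicted_mask, ground_truth_mask)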
Collapse
Affiliation(s)
- André Victória Matias
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil.
| | | | | | - Allan Cerentini
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil.
| | | | | | - Felipe Perozzo Daltoé
- Department of Pathology, Federal University of Santa Catarina, Florianópolis, Brazil.
| | - Marcelo Ricardo Stemmer
- Automation and Systems Department, Federal University of Santa Catarina, Florianópolis, Brazil.
| | - Aldo von Wangenheim
- Brazilian Institute for Digital Convergence, Federal University of Santa Catarina, Florianópolis, Brazil.
| |
Collapse
|
23
|
Chen Y, Xi W, Yao W, Wang L, Xu Z, Wels M, Yuan F, Yan C, Zhang H. Dual-Energy Computed Tomography-Based Radiomics to Predict Peritoneal Metastasis in Gastric Cancer. Front Oncol 2021; 11:659981. [PMID: 34055627 PMCID: PMC8160383 DOI: 10.3389/fonc.2021.659981] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Accepted: 04/26/2021] [Indexed: 01/06/2023] Open
Abstract
Objective To develop and validate a dual-energy computed tomography (DECT)-derived radiomics model to predict peritoneal metastasis (PM) in patients with gastric cancer (GC). Methods This retrospective study recruited 239 GC patients (non-PM = 174, PM = 65) with histopathological confirmation of peritoneal status from January 2015 to December 2019. All patients were randomly divided into a training cohort (n = 160) and a testing cohort (n = 79). Standardized iodine-uptake (IU) images and 120-kV-equivalent mixed images (simulating conventional CT images) from the portal-venous and delayed phases were used for analysis. Two regions of interest (ROIs), the peritoneal area and the primary tumor, were delineated independently. Subsequently, 1691 and 1226 radiomics features were extracted from the peritoneal area and the primary tumor on the IU and mixed images of each phase. Boruta and Spearman correlation analyses were used for feature selection. Three radiomics models were established: the R_IU model for IU images, the R_MIX model for mixed images, and a combined radiomics model (the R_comb model). Random forest was used to tune the optimal radiomics model. The performance of the clinical model and of human experts in assessing PM was also recorded. Results Fourteen and three radiomics features with low redundancy and high importance were extracted from the IU and mixed images, respectively. The R_IU model showed significantly better performance in predicting PM than the R_MIX model in the training cohort (AUC, 0.981 vs. 0.917, p = 0.034). No improvement was observed with the R_comb model (AUC = 0.967). The R_IU model was the optimal radiomics model and showed no overfitting in the testing cohort (AUC = 0.967, p = 0.528). The R_IU model demonstrated significantly higher predictive value for peritoneal status than the clinical model and human experts in the testing cohort (AUC, 0.785, p = 0.005; AUC, 0.732, p < 0.001, respectively). Conclusion DECT-derived radiomics could serve as a non-invasive and easy-to-use biomarker to preoperatively predict PM in GC, providing an opportunity to tailor appropriate treatment for these patients.
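The feature-reduction-plus-classifier pattern outlined in the methods (correlation-based pruning followed by a random forest) is sketched below. The Spearman cutoff, file names, and the omission of the Boruta step are assumptions for illustration, not the authors' code.

# Sketch: drop one of each highly inter-correlated radiomics feature pair
# (Spearman), then fit a random forest on the reduced table.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def drop_correlated(features: pd.DataFrame, cutoff: float = 0.9) -> pd.DataFrame:
    """Remove one feature from every pair whose |Spearman rho| exceeds the cutoff."""
    corr = features.corr(method="spearman").abs().to_numpy()
    upper = np.triu(corr, k=1)  # consider each pair once
    to_drop = [features.columns[j] for j in range(corr.shape[1]) if (upper[:, j] > cutoff).any()]
    return features.drop(columns=to_drop)

# Hypothetical usage:
# X = pd.read_csv("radiomics_features.csv")   # rows = patients, columns = features
# y = ...                                     # peritoneal status labels (0 = non-PM, 1 = PM)
# model = RandomForestClassifier(n_estimators=500, random_state=0).fit(drop_correlated(X), y)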
Collapse
Affiliation(s)
- Yong Chen
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Wenqi Xi
- Department of Oncology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Weiwu Yao
- Department of Radiology, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Lingyun Wang
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Zhihan Xu
- Department of DI CT Collaboration, Siemens Healthineers Ltd, Shanghai, China
| | - Michael Wels
- Department of Diagnostic Imaging Computed Tomography Image Analytics, Siemens Healthcare GmbH, Forchheim, Germany
| | - Fei Yuan
- Department of Pathology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Chao Yan
- Department of Surgery, Shanghai Key Laboratory of Gastric Neoplasms, Shanghai Institute of Digestive Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Huan Zhang
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| |
Collapse
|
24
|
Jin P, Ji X, Kang W, Li Y, Liu H, Ma F, Ma S, Hu H, Li W, Tian Y. Artificial intelligence in gastric cancer: a systematic review. J Cancer Res Clin Oncol 2020; 146:2339-2350. [PMID: 32613386 DOI: 10.1007/s00432-020-03304-9] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2020] [Accepted: 06/26/2020] [Indexed: 02/08/2023]
Abstract
OBJECTIVE This study aims to systematically review the application of artificial intelligence (AI) techniques in gastric cancer and to discuss the potential limitations and future directions of AI in gastric cancer. METHODS A systematic review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. PubMed, EMBASE, the Web of Science, and the Cochrane Library were searched for gastric cancer publications with an emphasis on AI published up to June 2020, using the terms "artificial intelligence" and "gastric cancer". RESULTS A total of 64 articles were included in this review. In gastric cancer, AI is mainly used for molecular bio-information analysis; for endoscopic detection of Helicobacter pylori infection, chronic atrophic gastritis, early gastric cancer, and invasion depth; and for pathology recognition. AI may also be used to establish predictive models for evaluating lymph node metastasis, response to drug treatments, and prognosis. In addition, AI can be used for surgical training, skill assessment, and surgery guidance. CONCLUSIONS In the foreseeable future, AI applications can play an important role in gastric cancer management in the era of precision medicine.
Collapse
Affiliation(s)
- Peng Jin
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Xiaoyan Ji
- Department of Emergency Ward, First Teaching Hospital of Tianjin University of Traditional Chinese Medicine, Tianjin, 300193, China
| | - Wenzhe Kang
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Yang Li
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Hao Liu
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Fuhai Ma
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Shuai Ma
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Haitao Hu
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Weikun Li
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Yantao Tian
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China.
| |
Collapse
|
25
|
Igarashi S, Sasaki Y, Mikami T, Sakuraba H, Fukuda S. Anatomical classification of upper gastrointestinal organs under various image capture conditions using AlexNet. Comput Biol Med 2020; 124:103950. [PMID: 32798923 DOI: 10.1016/j.compbiomed.2020.103950] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Revised: 07/28/2020] [Accepted: 07/28/2020] [Indexed: 02/06/2023]
Abstract
BACKGROUND Machine learning has led to several endoscopic studies on the automated localization of digestive lesions and the prediction of cancer invasion depth. Training and validation datasets must be collected for each disease in each digestive organ under similar image capture conditions; this is the first step in system development. This data cleansing task during data collection places a great burden on experienced endoscopists. Thus, this study classified upper gastrointestinal (GI) organ images obtained via routine esophagogastroduodenoscopy (EGD) into precise anatomical categories using AlexNet. METHOD In total, 85,246 raw upper GI endoscopic images from 441 patients with gastric cancer were collected retrospectively. The images were manually classified into 14 categories: 0) white-light (WL) stomach with indigo carmine (IC); 1) WL esophagus with iodine; 2) narrow-band (NB) esophagus; 3) NB stomach with IC; 4) NB stomach; 5) WL duodenum; 6) WL esophagus; 7) WL stomach; 8) NB oral-pharynx-larynx; 9) WL oral-pharynx-larynx; 10) WL scaling paper; 11) specimens; 12) WL muscle fibers during endoscopic submucosal dissection (ESD); and 13) others. AlexNet, a deep convolutional neural network architecture, was trained using 49,174 images and validated using 36,072 independent images. RESULTS The accuracy rates on the training and validation datasets were 0.993 and 0.965, respectively. CONCLUSIONS A simple anatomical organ classifier based on AlexNet was developed and found to be effective for the data cleansing task in the collection of EGD images. Moreover, it could be useful to both expert and non-expert endoscopists, as well as engineers, in retrospectively assessing upper GI images.
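Reproducing the classifier head for the 14 categories listed above amounts to swapping AlexNet's final layer, as in the brief torchvision sketch below; the training details are assumptions, not the authors' configuration.

# AlexNet with its final layer resized for the 14 EGD image categories.
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 14)  # 14 anatomical/image-condition classes
# Fine-tune with cross-entropy on the labelled EGD frames, then use the classifier
# to route each new frame to its category before any lesion-specific model sees it.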
Collapse
Affiliation(s)
- Shohei Igarashi
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
| | - Yoshihiro Sasaki
- Department of Medical Informatics, Hirosaki University Hospital, 53 Hon-cho, Hirosaki, 036-8563, Japan.
| | - Tatsuya Mikami
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
| | - Hirotake Sakuraba
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
| | - Shinsaku Fukuda
- Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, 5 Zaifu-cho, Hirosaki, 036-8562, Japan
| |
Collapse
|