51
Luna M, Chikontwe P, Nam S, Park SH. Attention guided multi-scale cluster refinement with extended field of view for amodal nuclei segmentation. Comput Biol Med 2024; 170:108015. [PMID: 38266467 DOI: 10.1016/j.compbiomed.2024.108015] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Revised: 01/04/2024] [Accepted: 01/19/2024] [Indexed: 01/26/2024]
Abstract
Nuclei segmentation plays a crucial role in disease understanding and diagnosis. In whole slide images, cell nuclei often appear overlapping and densely packed with ambiguous boundaries due to the underlying 3D structure of histopathology samples. Instance segmentation via deep neural networks with object clustering is able to detect individual segments in crowded nuclei but suffers from a limited field of view, and does not support amodal segmentation. In this work, we introduce a dense feature pyramid network with a feature mixing module to increase the field of view of the segmentation model while keeping pixel-level details. We also improve the model output quality by adding a multi-scale self-attention guided refinement module that sequentially adjusts predictions as resolution increases. Finally, we enable clusters to share pixels by separating the instance clustering objective function from other pixel-related tasks, and introduce supervision to occluded areas to guide the learning process. For evaluation of amodal nuclear segmentation, we also update prior metrics used in common modal segmentation to allow the evaluation of overlapping masks and mitigate over-penalization issues via a novel unique matching algorithm. Our experiments demonstrate consistent performance across multiple datasets with significantly improved segmentation quality.
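As a rough, assumption-based sketch (not the authors' implementation), the "unique matching" idea for evaluating overlapping amodal masks can be illustrated as a greedy IoU matching in which ground-truth masks may share pixels but each still receives at most one predicted match:

```python
import numpy as np

def unique_match(pred_masks, gt_masks, iou_thresh=0.5):
    """Greedily match each ground-truth mask to at most one prediction by IoU."""
    ious = np.zeros((len(pred_masks), len(gt_masks)))
    for i, p in enumerate(pred_masks):
        for j, g in enumerate(gt_masks):
            union = np.logical_or(p, g).sum()
            ious[i, j] = np.logical_and(p, g).sum() / union if union else 0.0
    matches, used_pred, used_gt = [], set(), set()
    for i, j in sorted(np.ndindex(ious.shape), key=lambda ij: -ious[ij]):
        if ious[i, j] < iou_thresh:
            break
        if i not in used_pred and j not in used_gt:
            matches.append((i, j, float(ious[i, j])))
            used_pred.add(i)
            used_gt.add(j)
    return matches  # unmatched predictions / ground truths then count as FP / FN

# Two overlapping (amodal) ground-truth nuclei that share pixels:
a = np.zeros((8, 8), bool); a[1:5, 1:5] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
print(unique_match([a, b], [a, b]))  # each mask is matched exactly once
```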
Affiliation(s)
- Miguel Luna
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
| | - Philip Chikontwe
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
| | - Siwoo Nam
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea
| | - Sang Hyun Park
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea; AI Graduate School, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, 42988, South Korea.
52
Yamaguchi R, Morikawa H, Akatsuka J, Numata Y, Noguchi A, Kokumai T, Ishida M, Mizuma M, Nakagawa K, Unno M, Miyake A, Tamiya G, Yamamoto Y, Furukawa T. Machine Learning of Histopathological Images Predicts Recurrences of Resected Pancreatic Ductal Adenocarcinoma With Adjuvant Treatment. Pancreas 2024; 53:e199-e204. [PMID: 38127849 DOI: 10.1097/mpa.0000000000002289] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/23/2023]
Abstract
OBJECTIVES Pancreatic ductal adenocarcinoma is an intractable disease with frequent recurrence after resection and adjuvant therapy. The present study aimed to clarify whether artificial intelligence-assisted analysis of histopathological images can predict recurrence in patients with pancreatic ductal adenocarcinoma who underwent resection and adjuvant chemotherapy with tegafur/5-chloro-2,4-dihydroxypyridine/potassium oxonate. MATERIALS AND METHODS Eighty-nine patients were enrolled in the study. Machine-learning algorithms were applied to 10-billion-scale pixel data of whole-slide histopathological images to generate key features using multiple deep autoencoders. Areas under the curve were calculated from receiver operating characteristic curves using a support vector machine with key features alone and by combining with clinical data (age and carbohydrate antigen 19-9 and carcinoembryonic antigen levels) for predicting recurrence. Supervised learning with pathological annotations was conducted to determine the significant features for predicting recurrence. RESULTS Areas under the curves obtained were 0.73 (95% confidence interval, 0.59-0.87) by the histopathological data analysis and 0.84 (95% confidence interval, 0.73-0.94) by the combinatorial analysis of histopathological data and clinical data. Supervised learning model demonstrated that poor tumor differentiation was significantly associated with recurrence. CONCLUSIONS Results indicate that machine learning with the integration of artificial intelligence-driven evaluation of histopathological images and conventional clinical data provides relevant prognostic information for patients with pancreatic ductal adenocarcinoma.
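As a hedged illustration only (synthetic data and assumed feature shapes, not the study's code), combining image-derived features with clinical variables in a support vector machine and reporting the area under the ROC curve might look like:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
image_features = rng.normal(size=(89, 64))   # autoencoder-derived features per patient (assumed shape)
clinical = rng.normal(size=(89, 3))          # age, CA19-9, CEA (synthetic, standardised)
y = rng.integers(0, 2, size=89)              # recurrence label (synthetic)

X = np.hstack([image_features, clinical])    # combine histopathological and clinical data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```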
Affiliation(s)
- Ruri Yamaguchi
- From the Department of Investigative Pathology, Tohoku University Graduate School of Medicine, Sendai
| | - Hiromu Morikawa
- Pathology Informatics Team, RIKEN Center for Advanced Intelligence Project, Tokyo
| | - Jun Akatsuka
- Pathology Informatics Team, RIKEN Center for Advanced Intelligence Project, Tokyo
| | - Yasushi Numata
- Pathology Informatics Team, RIKEN Center for Advanced Intelligence Project, Tokyo
| | - Aya Noguchi
- Department of Surgery, Tohoku University Graduate School of Medicine
| | - Takashi Kokumai
- Department of Surgery, Tohoku University Graduate School of Medicine
| | - Masaharu Ishida
- Department of Surgery, Tohoku University Graduate School of Medicine
| | - Masamichi Mizuma
- Department of Surgery, Tohoku University Graduate School of Medicine
| | - Kei Nakagawa
- Department of Surgery, Tohoku University Graduate School of Medicine
| | - Michiaki Unno
- Department of Surgery, Tohoku University Graduate School of Medicine
| | - Akimitsu Miyake
- Department of AI and Innovative Medicine, Tohoku University Graduate School of Medicine, Sendai
| | | | | | - Toru Furukawa
- From the Department of Investigative Pathology, Tohoku University Graduate School of Medicine, Sendai
53
Liu Z, Cai Y, Tang Q. Nuclei detection in breast histopathology images with iterative correction. Med Biol Eng Comput 2024; 62:465-478. [PMID: 37914958 DOI: 10.1007/s11517-023-02947-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Accepted: 10/09/2023] [Indexed: 11/03/2023]
Abstract
This work presents a deep network architecture to improve nuclei detection performance and achieve high localization accuracy of nuclei in breast cancer histopathology images. The proposed model consists of two parts: a nuclear candidate generation module and a nuclear localization refinement module. We first design a novel patch learning method to obtain high-quality nuclear candidates, where in addition to categories, location representations are also added to the patch information to implement the multi-task learning process of nuclear classification and localization; meanwhile, the deep supervision mechanism is introduced to obtain the coherent contributions from each scale layer. In order to refine nuclear localization, we propose an iterative correction strategy to make the prediction progressively closer to the ground truth, which significantly improves the accuracy of nuclear localization and facilitates neighbor size selection in the nonmaximum suppression step. Experimental results demonstrate the superior performance of our method for nuclei detection on the H&E stained histopathological image dataset as compared to previous state-of-the-art methods; in particular, for detection of multiple cluttered nuclei it achieves better results than existing techniques.
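A minimal sketch (an assumption for illustration, not the paper's code) of the non-maximum suppression step over point-wise nucleus candidates, where the neighbor size corresponds to the suppression radius:

```python
import numpy as np

def nms_points(centers, scores, radius):
    """centers: (N, 2) array of (y, x); scores: (N,); radius: suppression distance."""
    order = np.argsort(scores)[::-1]   # visit candidates from highest to lowest score
    kept = []
    for idx in order:
        c = centers[idx]
        if all(np.linalg.norm(c - centers[k]) > radius for k in kept):
            kept.append(idx)           # keep only candidates far from already-kept nuclei
    return np.array(kept)

centers = np.array([[10, 10], [11, 11], [40, 42]])
scores = np.array([0.9, 0.8, 0.7])
print(nms_points(centers, scores, radius=5))  # indices of retained nucleus detections
```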
Affiliation(s)
- Ziyi Liu
- School of Biomedical Engineering, South Central Minzu University, Wuhan, 430074, People's Republic of China
- Affiliated Yantai Yuhuangding Hospital of Qingdao University, Yantai, 264001, People's Republic of China
| | - Yu Cai
- School of Biomedical Engineering, South Central Minzu University, Wuhan, 430074, People's Republic of China
| | - Qiling Tang
- School of Biomedical Engineering, South Central Minzu University, Wuhan, 430074, People's Republic of China.
54
Kawaguchi K, Miyama K, Endo M, Bise R, Kohashi K, Hirose T, Nabeshima A, Fujiwara T, Matsumoto Y, Oda Y, Nakashima Y. Viable tumor cell density after neoadjuvant chemotherapy assessed using deep learning model reflects the prognosis of osteosarcoma. NPJ Precis Oncol 2024; 8:16. [PMID: 38253709 PMCID: PMC10803362 DOI: 10.1038/s41698-024-00515-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Accepted: 12/08/2023] [Indexed: 01/24/2024] Open
Abstract
Prognosis after neoadjuvant chemotherapy (NAC) for osteosarcoma is generally predicted using manual necrosis-rate assessments; however, necrosis rates obtained in these assessments are not reproducible and do not adequately reflect individual cell responses. We aimed to investigate whether viable tumor cell density assessed using a deep-learning model (DLM) reflects the prognosis of osteosarcoma. Seventy-one patients were included in this study. Initially, the DLM was trained to detect viable tumor cells, following which it calculated their density. Patients were stratified into high and low-viable tumor cell density groups based on DLM measurements, and survival analysis was performed to evaluate disease-specific survival and metastasis-free survival (DSS and MFS). The high viable tumor cell density group exhibited worse DSS (p = 0.023) and MFS (p = 0.033). DLM-evaluated viable density showed correct stratification of prognosis groups. Therefore, this evaluation method may enable precise stratification of the prognosis in osteosarcoma patients treated with NAC.
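Purely as an illustration with synthetic numbers (the median split, time scale, and variable names are assumptions, not values from the paper), stratifying patients by a model-derived viable tumor cell density and comparing survival with a log-rank test could be sketched as:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
density = rng.uniform(0, 1, 71)   # deep-learning-estimated viable tumor cell density (synthetic)
time = rng.exponential(60, 71)    # months to event or censoring (synthetic)
event = rng.integers(0, 2, 71)    # 1 = disease-specific death observed

high = density > np.median(density)          # high vs low density groups
km_high = KaplanMeierFitter().fit(time[high], event[high], label="high density")
km_low = KaplanMeierFitter().fit(time[~high], event[~high], label="low density")

res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print("log-rank p-value:", res.p_value)
```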
Affiliation(s)
- Kengo Kawaguchi
- Department of Orthopaedic Surgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Department of Anatomic Pathology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
| | - Kazuki Miyama
- Department of Orthopaedic Surgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Department of Advanced Information Technology, Kyushu University, 744 Motooka, Nishi-Ku, Fukuoka, 819-0395, Japan
| | - Makoto Endo
- Department of Orthopaedic Surgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan.
| | - Ryoma Bise
- Department of Advanced Information Technology, Kyushu University, 744 Motooka, Nishi-Ku, Fukuoka, 819-0395, Japan
| | - Kenichi Kohashi
- Department of Anatomic Pathology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Department of Pathology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-Ku, Osaka, 545-8585, Japan
| | - Takeshi Hirose
- Department of Orthopaedic Surgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
| | - Akira Nabeshima
- Department of Orthopaedic Surgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
| | - Toshifumi Fujiwara
- Department of Orthopaedic Surgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
| | - Yoshihiro Matsumoto
- Department of Orthopaedic Surgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Department of Orthopaedic Surgery, School of Medicine, Fukushima Medical University, 1 Hikarigaoka, Fukushima, 960-1295, Japan
| | - Yoshinao Oda
- Department of Anatomic Pathology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
| | - Yasuharu Nakashima
- Department of Orthopaedic Surgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
55
Kang J, Lafata K, Kim E, Yao C, Lin F, Rattay T, Nori H, Katsoulakis E, Lee CI. Artificial intelligence across oncology specialties: current applications and emerging tools. BMJ ONCOLOGY 2024; 3:e000134. [PMID: 39886165 PMCID: PMC11203066 DOI: 10.1136/bmjonc-2023-000134] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Accepted: 01/03/2024] [Indexed: 02/01/2025]
Abstract
Oncology is becoming increasingly personalised through advancements in precision diagnostics and therapeutics, with more and more data available on both ends to create individualised plans. The depth and breadth of data are outpacing our natural ability to interpret them. Artificial intelligence (AI) provides a solution to ingest and digest this data deluge to improve detection, prediction and skill development. In this review, we provide multidisciplinary perspectives on oncology applications touched by AI (imaging, pathology, patient triage, radiotherapy, genomics-driven therapy and surgery) and integration with existing tools (natural language processing, digital twins and clinical informatics).
Affiliation(s)
- John Kang
- Department of Radiation Oncology, University of Washington, Seattle, Washington, USA
| | - Kyle Lafata
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Department of Radiology, Duke University, Durham, North Carolina, USA
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
| | - Ellen Kim
- Department of Radiation Oncology, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Boston, Massachusetts, USA
| | - Christopher Yao
- Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, Ontario, Canada
| | - Frank Lin
- Kinghorn Centre for Clinical Genomics, Garvan Institute of Medical Research, Darlinghurst, New South Wales, Australia
- NHMRC Clinical Trials Centre, Camperdown, New South Wales, Australia
- Faculty of Medicine, St Vincent's Clinical School, University of New South Wales, Sydney, New South Wales, Australia
| | - Tim Rattay
- Department of Genetics and Genome Biology, University of Leicester Cancer Research Centre, Leicester, UK
| | - Harsha Nori
- Microsoft Research, Redmond, Washington, USA
| | - Evangelia Katsoulakis
- Department of Radiation Oncology, University of South Florida, Tampa, Florida, USA
- Veterans Affairs Informatics and Computing Infrastructure, Salt Lake City, Utah, USA
56
Roetzer-Pejrimovsky T, Nenning KH, Kiesel B, Klughammer J, Rajchl M, Baumann B, Langs G, Woehrer A. Deep learning links localized digital pathology phenotypes with transcriptional subtype and patient outcome in glioblastoma. Gigascience 2024; 13:giae057. [PMID: 39185700 PMCID: PMC11345537 DOI: 10.1093/gigascience/giae057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Revised: 05/13/2024] [Accepted: 07/20/2024] [Indexed: 08/27/2024] Open
Abstract
BACKGROUND Deep learning has revolutionized medical image analysis in cancer pathology, where it had a substantial clinical impact by supporting the diagnosis and prognostic rating of cancer. Among the first available digital resources in the field of brain cancer is glioblastoma, the most common and fatal brain cancer. At the histologic level, glioblastoma is characterized by abundant phenotypic variability that is poorly linked with patient prognosis. At the transcriptional level, 3 molecular subtypes are distinguished with mesenchymal-subtype tumors being associated with increased immune cell infiltration and worse outcome. RESULTS We address genotype-phenotype correlations by applying an Xception convolutional neural network to a discovery set of 276 digital hematoxylin and eosin (H&E) slides with molecular subtype annotation and an independent The Cancer Genome Atlas-based validation cohort of 178 cases. Using this approach, we achieve high accuracy in H&E-based mapping of molecular subtypes (area under the curve for classical, mesenchymal, and proneural = 0.84, 0.81, and 0.71, respectively; P < 0.001) and regions associated with worse outcome (univariable survival model P < 0.001, multivariable P = 0.01). The latter were characterized by higher tumor cell density (P < 0.001), phenotypic variability of tumor cells (P < 0.001), and decreased T-cell infiltration (P = 0.017). CONCLUSIONS We modify a well-known convolutional neural network architecture for glioblastoma digital slides to accurately map the spatial distribution of transcriptional subtypes and regions predictive of worse outcome, thereby showcasing the relevance of artificial intelligence-enabled image mining in brain cancer.
Affiliation(s)
- Thomas Roetzer-Pejrimovsky
- Division of Neuropathology and Neurochemistry, Department of Neurology, Medical University of Vienna, 1090 Vienna, Austria
- Comprehensive Center for Clinical Neurosciences and Mental Health, Medical University of Vienna, 1090 Vienna, Austria
| | - Karl-Heinz Nenning
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY 10962, USA
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, 1090 Vienna, Austria
| | - Barbara Kiesel
- Department of Neurosurgery, Medical University of Vienna, 1090 Vienna, Austria
| | - Johanna Klughammer
- Gene Center and Department of Biochemistry, Ludwig-Maximilians-Universität München, 80539 Munich, Germany
| | - Martin Rajchl
- Department of Computing and Medicine, Imperial College London, London SW7 2AZ, UK
| | - Bernhard Baumann
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, 1090 Vienna, Austria
| | - Georg Langs
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, 1090 Vienna, Austria
| | - Adelheid Woehrer
- Division of Neuropathology and Neurochemistry, Department of Neurology, Medical University of Vienna, 1090 Vienna, Austria
- Comprehensive Center for Clinical Neurosciences and Mental Health, Medical University of Vienna, 1090 Vienna, Austria
- Department of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, 6020 Innsbruck, Austria
57
Hassan T, Li Z, Javed S, Dias J, Werghi N. Neural Graph Refinement for Robust Recognition of Nuclei Communities in Histopathological Landscape. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2023; 33:241-256. [PMID: 38064329 DOI: 10.1109/tip.2023.3337666] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/20/2023]
Abstract
Accurate classification of nuclei communities is an important step towards timely treatment of cancer spread. Graph theory provides an elegant way to represent and analyze nuclei communities within the histopathological landscape in order to perform tissue phenotyping and tumor profiling tasks. Many researchers have worked on recognizing nuclei regions within the histology images in order to grade cancerous progression. However, due to the high structural similarities between nuclei communities, defining a model that can accurately differentiate between nuclei pathological patterns remains an open problem. To surmount this challenge, we present a novel approach, dubbed neural graph refinement, that enhances the capabilities of existing models to perform nuclei recognition tasks by employing graph representational learning and broadcasting processes. Based on the physical interaction of the nuclei, we first construct a fully connected graph in which nodes represent nuclei and adjacent nodes are connected to each other via an undirected edge. For each edge and node pair, appearance and geometric features are computed and are then utilized for generating the neural graph embeddings. These embeddings are used for diffusing contextual information to the neighboring nodes, all along a path traversing the whole graph to infer global information over an entire nuclei network and predict pathologically meaningful communities. Through rigorous evaluation of the proposed scheme across four public datasets, we showcase that learning such communities through neural graph refinement produces results that outperform state-of-the-art methods.
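An assumed, much-simplified illustration (not the authors' code) of constructing a nuclei graph of the kind described above, with nuclei as nodes and edges between physically interacting, nearby nuclei:

```python
import numpy as np
import networkx as nx
from scipy.spatial import distance_matrix

centroids = np.array([[10, 12], [14, 15], [50, 52], [53, 49]])  # nucleus centres (toy values)
areas = np.array([120.0, 95.0, 150.0, 110.0])                   # an appearance feature per nucleus (toy)

G = nx.Graph()
for i, (c, a) in enumerate(zip(centroids, areas)):
    G.add_node(i, pos=c, area=float(a))          # node features feed a downstream graph model

D = distance_matrix(centroids, centroids)
radius = 10.0                                    # connect nuclei close enough to interact physically
for i in range(len(centroids)):
    for j in range(i + 1, len(centroids)):
        if D[i, j] <= radius:
            G.add_edge(i, j, dist=float(D[i, j]))  # geometric edge feature

print(G.number_of_nodes(), "nuclei,", G.number_of_edges(), "edges")
```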
58
Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023; 90:102969. [PMID: 37802010 DOI: 10.1016/j.media.2023.102969] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 08/16/2023] [Accepted: 09/11/2023] [Indexed: 10/08/2023]
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation, where (unlabeled) target training data is limited and previous work seldom delves into cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance when compared with the reference baseline, and it is superior to or on par with fully supervised models that are trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
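The differentiable, stochastic augmentation idea can be sketched roughly as follows (an assumption-based toy example, not the paper's module): the same differentiable transform is applied to real and generated images before the discriminator, so gradients still flow back to the generator:

```python
import torch

def diff_augment(x):
    """Random brightness jitter and small translation, built from differentiable ops."""
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5) * 0.2  # brightness
    pad = torch.nn.functional.pad(x, (2, 2, 2, 2))                          # pad then crop = shift
    dy, dx = [int(v) for v in torch.randint(0, 5, (2,))]
    return pad[:, :, dy:dy + x.size(2), dx:dx + x.size(3)]

real = torch.randn(4, 3, 64, 64)
fake = torch.randn(4, 3, 64, 64, requires_grad=True)
# Both branches pass through the same augmentation before the discriminator.
d_in_real, d_in_fake = diff_augment(real), diff_augment(fake)
print(d_in_real.shape, d_in_fake.shape)
```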
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA.
| | - Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
| | - Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
| | - Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
59
Zhao J, He YJ, Zhou SH, Qin J, Xie YN. CNSeg: A dataset for cervical nuclear segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 241:107732. [PMID: 37544166 DOI: 10.1016/j.cmpb.2023.107732] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/20/2022] [Revised: 05/31/2023] [Accepted: 07/23/2023] [Indexed: 08/08/2023]
Abstract
BACKGROUND AND OBJECTIVE Nuclear segmentation in cervical cell images is a crucial technique for automatic cytopathology diagnosis. Experimental evaluation of nuclear segmentation methods with datasets is helpful in promoting the advancement of nuclear segmentation techniques. However, public datasets are not enough for a reasonable and comprehensive evaluation because of insufficient quantity, single data source, and low segmentation difficulty. METHODS Therefore, we provide the largest dataset for cervical nuclear segmentation (CNSeg). It contains 124,000 annotated nuclei collected from 1,530 patients under different conditions. The image styles in this dataset cover most practical application scenarios, including microbial infection, cytopathic heterogeneity, overlapping nuclei, etc. To evaluate the performance of segmentation methods from different aspects, we divided the CNSeg dataset into three subsets, namely the patch segmentation dataset (PatchSeg) with nuclei images collected under complex conditions, the cluster segmentation dataset (ClusterSeg) with cluster nuclei, and the domain segmentation dataset (DomainSeg) with data from different domains. Furthermore, we propose a post-processing method that processes overlapping nuclei into single ones. RESULTS AND CONCLUSION Experiments show that our dataset can comprehensively evaluate cervical nuclear segmentation methods from different aspects. We provide guidelines for other researchers to use the dataset. https://github.com/jingzhaohlj/AL-Net.
Affiliation(s)
- Jing Zhao
- Northeast Forestry University, Mechanical and Electrical Engineering, Harbin 150006, China
| | - Yong-Jun He
- Harbin Institute of Technology, School of Computer Science, Harbin 150001, China.
| | - Shu-Hang Zhou
- Wenzhou Business College, Information Engineering, Wenzhou 325035, China
| | - Jian Qin
- Harbin University of Science and Technology, School of Computer Science and Technology, No. 52 Xuefu Road, 150080 Harbin, China
| | - Yi-Ning Xie
- Northeast Forestry University, Mechanical and Electrical Engineering, Harbin 150006, China
60
Qi L, Liang JY, Li ZW, Xi SY, Lai YN, Gao F, Zhang XR, Wang DS, Hu MT, Cao Y, Xu LJ, Chan RC, Xing BC, Wang X, Li YH. Deep learning-derived spatial organization features on histology images predicts prognosis in colorectal liver metastasis patients after hepatectomy. iScience 2023; 26:107702. [PMID: 37701575 PMCID: PMC10494211 DOI: 10.1016/j.isci.2023.107702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Revised: 07/10/2023] [Accepted: 08/21/2023] [Indexed: 09/14/2023] Open
Abstract
Histopathological images of colorectal liver metastases (CRLM) contain rich morphometric information that may predict patients' outcomes. However, to our knowledge, no study has reported any practical deep learning framework based on the histology images of CRLM, and their direct association with prognosis remains largely unknown. In this study, we developed a deep learning-based framework for fully automated tissue classification and quantification of clinically relevant spatial organization features (SOFs) in H&E-stained images of CRLM. The SOFs based risk-scoring system demonstrated a strong and robust prognostic value that is independent of the current clinical risk score (CRS) system in independent clinical cohorts. Our framework enables fully automated tissue classification of H&E images of CRLM, which could significantly reduce assessment subjectivity and the workload of pathologists. The risk-scoring system provides a time- and cost-efficient tool to assist clinical decision-making for patients with CRLM, which could potentially be implemented in clinical practice.
Affiliation(s)
- Lin Qi
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, China
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong SAR, China
- Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China
| | - Jie-ying Liang
- Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Department of Medical Oncology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Department of Medical Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
| | - Zhong-wu Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital & Institute, Beijing, China
| | - Shao-yan Xi
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Department of Pathology, Sun Yat-Sen University Cancer Center, Guangzhou, China
| | - Yu-ni Lai
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong SAR, China
| | - Feng Gao
- Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Disease, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
| | - Xian-rui Zhang
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, China
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong SAR, China
- Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China
| | - De-shen Wang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Department of Medical Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
| | - Ming-tao Hu
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Department of Medical Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
| | - Yi Cao
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, China
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong SAR, China
- Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China
| | - Li-jian Xu
- Centre for Perceptual and Interactive Intelligence, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Ronald C.K. Chan
- Department of Pathology, The Chinese University of Hong Kong, Hong Kong SAR, China
| | - Bao-cai Xing
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Hepatopancreatobiliary Surgery Department I, Peking University Cancer Hospital & Institute, Beijing, China
| | - Xin Wang
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, China
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong SAR, China
- Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China
| | - Yu-hong Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Department of Medical Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China
61
Anjum S, Ahmed I, Asif M, Aljuaid H, Alturise F, Ghadi YY, Elhabob R. Lung Cancer Classification in Histopathology Images Using Multiresolution Efficient Nets. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2023; 2023:7282944. [PMID: 37876944 PMCID: PMC10593544 DOI: 10.1155/2023/7282944] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Revised: 11/07/2022] [Accepted: 11/29/2022] [Indexed: 10/26/2023]
Abstract
Histopathological images are very effective for investigating the status of various biological structures and diagnosing diseases like cancer. In addition, digital histopathology increases diagnosis precision and provides better image quality and more detail for the pathologist with multiple viewing options and team annotations. As a result of the benefits above, faster treatment is available, increasing therapy success rates and patient recovery and survival chances. However, the present manual examination of these images is tedious and time-consuming for pathologists. Therefore, reliable automated techniques are needed to effectively classify normal and malignant cancer images. This paper applied a deep learning approach, namely, EfficientNet and its variants from B0 to B7. We used different image resolutions for each model, from 224 × 224 pixels to 600 × 600 pixels. We also applied transfer learning and parameter tuning techniques to improve the results and overcome the overfitting problem. We collected the data from the Lung and Colon Cancer Histopathological Image (LC25000) dataset, which consists of 25,000 histopathology images of five classes (lung adenocarcinoma, lung squamous cell carcinoma, benign lung tissue, colon adenocarcinoma, and colon benign tissue). Then, we performed preprocessing on the dataset to remove the noisy images and bring them into a standard format. The model's performance was evaluated in terms of classification accuracy and loss. We have achieved good accuracy results for all variants; however, the results of EfficientNetB2 stand out as excellent, with an accuracy of 97% for 260 × 260 pixel resolution images.
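A hedged sketch of the kind of transfer-learning setup described above, using a Keras EfficientNetB2 backbone at 260 × 260 resolution for the five LC25000 classes (the head, dropout rate, and optimizer settings here are illustrative assumptions, not the paper's exact configuration):

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB2

base = EfficientNetB2(include_top=False, weights="imagenet", input_shape=(260, 260, 3))
base.trainable = False  # transfer learning: start with a frozen ImageNet backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(5, activation="softmax"),  # five LC25000 classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets prepared elsewhere
```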
Affiliation(s)
- Sunila Anjum
- Center of Excellence in Information Technology, Institute of Management Sciences, Hayatabad, Peshawar 25000, Pakistan
| | - Imran Ahmed
- School of Computing and Information Science, Anglia Ruskin University, Cambridge, UK
| | - Muhammad Asif
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
| | - Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), P.O. Box 84428, Riyadh 11671, Saudi Arabia
| | - Fahad Alturise
- Department of Computer, College of Science and Arts in Ar Rass, Qassim University, Ar Rass, Qassim, Saudi Arabia
| | - Yazeed Yasin Ghadi
- Department of Software Engineering/Computer Science, Al Ain University, Al Ain, UAE
| | - Rashad Elhabob
- College of Computer Science and Information Technology, Karary University, Omdurman, Sudan
62
Ariotta V, Lehtonen O, Salloum S, Micoli G, Lavikka K, Rantanen V, Hynninen J, Virtanen A, Hautaniemi S. H&E image analysis pipeline for quantifying morphological features. J Pathol Inform 2023; 14:100339. [PMID: 37915837 PMCID: PMC10616375 DOI: 10.1016/j.jpi.2023.100339] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 08/15/2023] [Accepted: 09/30/2023] [Indexed: 11/03/2023] Open
Abstract
Detecting cell types from histopathological images is essential for various digital pathology applications. However, the large number of cells in whole-slide images (WSIs) necessitates automated analysis pipelines for efficient cell type detection. Herein, we present the hematoxylin and eosin (H&E) Image Processing pipeline (HEIP) for automated analysis of scanned H&E-stained slides. HEIP is a flexible and modular open-source software that performs preprocessing, instance segmentation, and nuclei feature extraction. To evaluate the performance of HEIP, we applied it to extract cell types from ovarian high-grade serous carcinoma (HGSC) patient WSIs. HEIP showed high precision in instance segmentation, particularly for neoplastic and epithelial cells. We also show that there is a significant correlation between genomic ploidy values and morphological features, such as the major axis of the nucleus.
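As a minimal illustration (not HEIP itself; the label map and per-sample values below are synthetic), extracting a nuclear morphology feature such as the major axis length from an instance label map and correlating a per-sample summary with ploidy could look like:

```python
import numpy as np
from skimage.measure import label, regionprops
from scipy.stats import spearmanr

mask = np.zeros((64, 64), dtype=int)
mask[5:20, 5:15] = 1      # toy "nucleus" 1
mask[30:45, 30:50] = 2    # toy "nucleus" 2

major_axes = [p.major_axis_length for p in regionprops(label(mask > 0))]
print("major axis lengths:", major_axes)

# Correlate a per-sample summary (e.g. mean major axis length) with ploidy values (synthetic):
mean_axis = np.random.default_rng(0).normal(15, 2, 30)
ploidy = 2 + 0.1 * mean_axis + np.random.default_rng(1).normal(0, 0.2, 30)
rho, p = spearmanr(mean_axis, ploidy)
print(f"Spearman rho={rho:.2f}, p={p:.3g}")
```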
Affiliation(s)
- Valeria Ariotta
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
| | - Oskari Lehtonen
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
| | - Shams Salloum
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
- Department of Pathology, University of Helsinki and HUS Diagnostic Center, Helsinki University Hospital, 00029 Helsinki, Finland
| | - Giulia Micoli
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
| | - Kari Lavikka
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
| | - Ville Rantanen
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
| | - Johanna Hynninen
- Department of Obstetrics and Gynaecology, University of Turku and Turku University Hospital, 200521 Turku, Finland
| | - Anni Virtanen
- Department of Pathology, University of Helsinki and HUS Diagnostic Center, Helsinki University Hospital, 00029 Helsinki, Finland
| | - Sampsa Hautaniemi
- Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
63
Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Nuklearmedizin 2023; 62:306-313. [PMID: 37802058 DOI: 10.1055/a-2157-6670] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/08/2023]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and - for PET imaging - reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making by a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we will describe the basics of ML, present approaches in hybrid imaging of MRI, CT, and PET, and discuss the specific challenges associated with it and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET..
Affiliation(s)
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| | - Tobias Hepp
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| | - Ferdinand Seith
- Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
64
Jing Y, Li C, Du T, Jiang T, Sun H, Yang J, Shi L, Gao M, Grzegorzek M, Li X. A comprehensive survey of intestine histopathological image analysis using machine vision approaches. Comput Biol Med 2023; 165:107388. [PMID: 37696178 DOI: 10.1016/j.compbiomed.2023.107388] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Revised: 08/06/2023] [Accepted: 08/25/2023] [Indexed: 09/13/2023]
Abstract
Colorectal Cancer (CRC) is currently one of the most common and deadly cancers. CRC is the third most common malignancy and the fourth leading cause of cancer death worldwide. It ranks as the second most frequent cause of cancer-related deaths in the United States and other developed countries. Histopathological images contain sufficient phenotypic information, they play an indispensable role in the diagnosis and treatment of CRC. In order to improve the objectivity and diagnostic efficiency for image analysis of intestinal histopathology, Computer-aided Diagnosis (CAD) methods based on machine learning (ML) are widely applied in image analysis of intestinal histopathology. In this investigation, we conduct a comprehensive study on recent ML-based methods for image analysis of intestinal histopathology. First, we discuss commonly used datasets from basic research studies with knowledge of intestinal histopathology relevant to medicine. Second, we introduce traditional ML methods commonly used in intestinal histopathology, as well as deep learning (DL) methods. Then, we provide a comprehensive review of the recent developments in ML methods for segmentation, classification, detection, and recognition, among others, for histopathological images of the intestine. Finally, the existing methods have been studied, and the application prospects of these methods in this field are given.
Affiliation(s)
- Yujie Jing
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
| | - Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China.
| | - Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
| | - Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
| | - Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
| | - Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
| | - Liyu Shi
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
| | - Minghe Gao
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
| | - Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
| | - Xiaoyan Li
- Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China.
65
Zhu D, Wang C, Zou P, Zhang R, Wang S, Song B, Yang X, Low KB, Xin HL. Deep-Learning Aided Atomic-Scale Phase Segmentation toward Diagnosing Complex Oxide Cathodes for Lithium-Ion Batteries. NANO LETTERS 2023; 23:8272-8279. [PMID: 37643420 DOI: 10.1021/acs.nanolett.3c02441] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/31/2023]
Abstract
Phase transformation, a universal phenomenon in materials, plays a key role in determining their properties. Resolving complex phase domains in materials is critical to fostering a new fundamental understanding that facilitates new material development. So far, although conventional classification strategies such as order-parameter methods have been developed to distinguish remarkably disparate phases, highly accurate and efficient phase segmentation for material systems composed of multiphases remains unavailable. Here, by coupling a hard-attention-enhanced U-Net network and geometry simulation with atomic-resolution transmission electron microscopy, we successfully developed a deep-learning tool enabling automated atom-by-atom phase segmentation of intertwined phase domains in technologically important cathode materials for lithium-ion batteries. The new strategy outperforms traditional methods and quantitatively elucidates the correlation between the multiple phases formed during battery operation. Our work demonstrates how deep learning can be employed to foster an in-depth understanding of phase transformation-related key issues in complex materials.
Affiliation(s)
- Dong Zhu
- Department of Physics and Astronomy, University of California Irvine, Irvine, California 92697, United States
- Computer Network Information Centre, Chinese Academy of Sciences, Beijing, 100190, P. R. China
- University of Chinese Academy of Sciences, Beijing, 100049, P. R. China
| | - Chunyang Wang
- Department of Physics and Astronomy, University of California Irvine, Irvine, California 92697, United States
| | - Peichao Zou
- Department of Physics and Astronomy, University of California Irvine, Irvine, California 92697, United States
| | - Rui Zhang
- Department of Physics and Astronomy, University of California Irvine, Irvine, California 92697, United States
| | - Shefang Wang
- BASF Corporation, Iselin, New Jersey 08830, United States
| | - Bohang Song
- BASF Corporation, Beachwood, Ohio 44122, United States
| | - Xiaoyu Yang
- Computer Network Information Centre, Chinese Academy of Sciences, Beijing, 100190, P. R. China
- University of Chinese Academy of Sciences, Beijing, 100049, P. R. China
| | - Ke-Bin Low
- BASF Corporation, Iselin, New Jersey 08830, United States
| | - Huolin L Xin
- Department of Physics and Astronomy, University of California Irvine, Irvine, California 92697, United States
66
Zhang H, AbdulJabbar K, Grunewald T, Akarca AU, Hagos Y, Sobhani F, Lecat CSY, Patel D, Lee L, Rodriguez-Justo M, Yong K, Ledermann JA, Le Quesne J, Hwang ES, Marafioti T, Yuan Y. Self-supervised deep learning for highly efficient spatial immunophenotyping. EBioMedicine 2023; 95:104769. [PMID: 37672979 PMCID: PMC10493897 DOI: 10.1016/j.ebiom.2023.104769] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Revised: 08/07/2023] [Accepted: 08/08/2023] [Indexed: 09/08/2023] Open
Abstract
BACKGROUND Efficient biomarker discovery and clinical translation depend on the fast and accurate analytical output from crucial technologies such as multiplex imaging. However, reliable cell classification often requires extensive annotations. Label-efficient strategies are urgently needed to reveal diverse cell distribution and spatial interactions in large-scale multiplex datasets. METHODS This study proposed Self-supervised Learning for Antigen Detection (SANDI) for accurate cell phenotyping while mitigating the annotation burden. The model first learns intrinsic pairwise similarities in unlabelled cell images, followed by a classification step to map learnt features to cell labels using a small set of annotated references. We acquired four multiplex immunohistochemistry datasets and one imaging mass cytometry dataset, comprising 2825 to 15,258 single-cell images to train and test the model. FINDINGS With 1% annotations (18-114 cells), SANDI achieved weighted F1-scores ranging from 0.82 to 0.98 across the five datasets, which was comparable to the fully supervised classifier trained on 1828-11,459 annotated cells (-0.002 to -0.053 of averaged weighted F1-score, Wilcoxon rank-sum test, P = 0.31). Leveraging the immune checkpoint markers stained in ovarian cancer slides, SANDI-based cell identification reveals spatial expulsion between PD1-expressing T helper cells and T regulatory cells, suggesting an interplay between PD1 expression and T regulatory cell-mediated immunosuppression. INTERPRETATION By striking a fine balance between minimal expert guidance and the power of deep learning to learn similarity within abundant data, SANDI presents new opportunities for efficient, large-scale learning for histology multiplex imaging data. FUNDING This study was funded by the Royal Marsden/ICR National Institute of Health Research Biomedical Research Centre.
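A condensed, assumption-based sketch of the self-supervised pairwise-similarity idea (not the SANDI code): two augmented views of the same unlabelled cell crop are pulled together and other cells pushed apart, after which a small annotated reference set maps the learnt embeddings to cell labels:

```python
import torch
import torch.nn.functional as F

def pairwise_similarity_loss(z1, z2, temperature=0.1):
    """A simple NT-Xent-style loss where z1[i] and z2[i] are views of the same cell."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # similarity between every pair of views
    targets = torch.arange(z1.size(0))        # the matching view is the positive class
    return F.cross_entropy(logits, targets)

z1 = torch.randn(8, 128)   # embeddings of view 1 of 8 unlabelled single-cell crops
z2 = torch.randn(8, 128)   # embeddings of view 2 of the same cells
print(pairwise_similarity_loss(z1, z2).item())
```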
Affiliation(s)
- Hanyun Zhang
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK; Division of Molecular Pathology, The Institute of Cancer Research, London, UK
| | - Khalid AbdulJabbar
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK; Division of Molecular Pathology, The Institute of Cancer Research, London, UK
| | - Tami Grunewald
- Department of Oncology, UCL Cancer Institute, University College London, London, UK
| | - Ayse U Akarca
- Department of Cellular Pathology, University College London Hospital, London, UK
| | - Yeman Hagos
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK; Division of Molecular Pathology, The Institute of Cancer Research, London, UK
| | - Faranak Sobhani
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK; Division of Molecular Pathology, The Institute of Cancer Research, London, UK
| | - Catherine S Y Lecat
- Research Department of Hematology, Cancer Institute, University College London, UK
| | - Dominic Patel
- Research Department of Hematology, Cancer Institute, University College London, UK
| | - Lydia Lee
- Research Department of Hematology, Cancer Institute, University College London, UK
| | | | - Kwee Yong
- Research Department of Hematology, Cancer Institute, University College London, UK
| | - Jonathan A Ledermann
- Department of Oncology, UCL Cancer Institute, University College London, London, UK
| | - John Le Quesne
- School of Cancer Sciences, University of Glasgow, Glasgow, UK; CRUK Beatson Institute, Garscube Estate, Glasgow, UK; Department of Histopathology, Queen Elizabeth University Hospital, Glasgow, UK
| | - E Shelley Hwang
- Department of Surgery, Duke University Medical Center, Durham, NC, USA
| | - Teresa Marafioti
- Department of Cellular Pathology, University College London Hospital, London, UK
| | - Yinyin Yuan
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK; Division of Molecular Pathology, The Institute of Cancer Research, London, UK.
67
Qin X, Zhu J, Tu Z, Ma Q, Tang J, Zhang C. Contrast-Enhanced Ultrasound with Deep Learning with Attention Mechanisms for Predicting Microvascular Invasion in Single Hepatocellular Carcinoma. Acad Radiol 2023; 30 Suppl 1:S73-S80. [PMID: 36567144 DOI: 10.1016/j.acra.2022.12.005] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2022] [Revised: 11/29/2022] [Accepted: 12/02/2022] [Indexed: 12/24/2022]
Abstract
RATIONALE AND OBJECTIVES Prediction of microvascular invasion (MVI) status of hepatocellular carcinoma (HCC) holds clinical significance for decision-making regarding the treatment strategy and evaluation of patient prognosis. We developed a deep learning (DL) model based on contrast-enhanced ultrasound (CEUS) to predict MVI of HCC. MATERIALS AND METHODS We retrospectively analyzed the data for single primary HCCs that were evaluated with CEUS 1 week before surgical resection from December 2014 to February 2022. The study population was divided into training (n = 198) and test (n = 54) cohorts. In this study, three DL models (Resnet50, Resnet50+BAM, Resnet50+SE) were trained using the training cohort and tested in the test cohort. Tumor characteristics were also evaluated by radiologists, and multivariate regression analysis was performed to determine independent indicators for the development of predictive nomogram models. The performance of the three DL models was compared to that of the MVI prediction model based on radiologist evaluations. RESULTS The best-performing model, ResNet50+SE, achieved an area under the ROC curve of 0.856, accuracy of 77.2%, specificity of 93.9%, and sensitivity of 52.4% in the test group. The MVI prediction model based on a combination of three independent predictors showed a C-index of 0.729, accuracy of 69.4%, specificity of 73.8%, and sensitivity of 62%. CONCLUSION The DL algorithm can accurately predict MVI of HCC on the basis of CEUS images, helping to identify high-risk patients and assist treatment decision-making.
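A rough sketch (an assumption for illustration only, not the authors' released model) of the "ResNet50+SE" idea: a squeeze-and-excitation block recalibrating ResNet-50 feature maps before a binary MVI head:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))[:, :, None, None]  # squeeze (global pool) then excite
        return x * w                                        # recalibrate channel responses

backbone = resnet50(weights=None)                           # pretrained weights could be loaded instead
features = nn.Sequential(*list(backbone.children())[:-2])   # conv feature maps (B, 2048, H, W)
model = nn.Sequential(
    features,
    SEBlock(2048),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(2048, 1),                                     # logit for MVI-positive vs MVI-negative
)
print(model(torch.randn(2, 3, 224, 224)).shape)             # torch.Size([2, 1])
```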
Affiliation(s)
- Xiachuan Qin
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, AH, China, 230022; Department of Ultrasound, Nanchong Central Hospital, The Second Clinical Medical College, North Sichuan Medical College (University), Nan Chong, Sichuan, China
| | - Jianhui Zhu
- Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, Anhui, China
| | - Zhengzheng Tu
- Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, Anhui, China
| | - Qianqing Ma
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, AH, China, 230022
| | - Jin Tang
- Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, Anhui, China
| | - Chaoxue Zhang
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, AH, China, 230022.
68
Das R, Bose S, Chowdhury RS, Maulik U. Dense Dilated Multi-Scale Supervised Attention-Guided Network for histopathology image segmentation. Comput Biol Med 2023; 163:107182. [PMID: 37379615 DOI: 10.1016/j.compbiomed.2023.107182] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Revised: 05/24/2023] [Accepted: 06/13/2023] [Indexed: 06/30/2023]
Abstract
Over the last couple of decades, the introduction and proliferation of whole-slide scanners led to increasing interest in the research of digital pathology. Although manual analysis of histopathological images is still the gold standard, the process is often tedious and time consuming. Furthermore, manual analysis also suffers from intra- and interobserver variability. Separating structures or grading morphological changes can be difficult due to architectural variability of these images. Deep learning techniques have shown great potential in histopathology image segmentation that drastically reduces the time needed for downstream tasks of analysis and providing accurate diagnosis. However, few algorithms have clinical implementations. In this paper, we propose a new deep learning model Dense Dilated Multiscale Supervised Attention-Guided (D2MSA) Network for histopathology image segmentation that makes use of deep supervision coupled with a hierarchical system of novel attention mechanisms. The proposed model surpasses state-of-the-art performance while using similar computational resources. The performance of the model has been evaluated for the tasks of gland segmentation and nuclei instance segmentation, both of which are clinically relevant tasks to assess the state and progress of malignancy. Here, we have used histopathology image datasets for three different types of cancer. We have also performed extensive ablation tests and hyperparameter tuning to ensure the validity and reproducibility of the model performance. The proposed model is available at www.github.com/shirshabose/D2MSA-Net.
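A minimal attention-gate sketch in the spirit of attention-guided segmentation decoders (an illustrative assumption, not the D2MSA architecture itself): coarser decoder features gate a skip connection so that irrelevant regions are suppressed:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, 1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, skip, gate):
        gate_up = nn.functional.interpolate(gate, size=skip.shape[2:],
                                            mode="bilinear", align_corners=False)
        attn = self.psi(torch.relu(self.w_skip(skip) + self.w_gate(gate_up)))
        return skip * attn   # attention map suppresses irrelevant regions in the skip features

skip = torch.randn(1, 64, 128, 128)   # encoder (skip-connection) features
gate = torch.randn(1, 128, 64, 64)    # coarser decoder features
print(AttentionGate(64, 128, 32)(skip, gate).shape)
```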
Collapse
Affiliation(s)
- Rangan Das
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
| | - Shirsha Bose
- Department of Informatics, Technical University of Munich, Munich, Bavaria 85748, Germany.
| | - Ritesh Sur Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
| | - Ujjwal Maulik
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
| |
Collapse
|
69
|
Ruiz-Fresneda MA, Gijón A, Morales-Álvarez P. Bibliometric analysis of the global scientific production on machine learning applied to different cancer types. ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH INTERNATIONAL 2023; 30:96125-96137. [PMID: 37566331 PMCID: PMC10482761 DOI: 10.1007/s11356-023-28576-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Accepted: 06/29/2023] [Indexed: 08/12/2023]
Abstract
Cancer is one of the main causes of death in the world, with millions of new cases annually over recent decades. The need to find a cure has stimulated the search for efficient treatments and diagnostic procedures. One of the most promising tools to emerge against cancer in recent years is machine learning (ML), which has given rise to a huge number of scientific papers in a relatively short period of time. The present study analyzes global scientific production on ML applied to the most relevant cancer types through various bibliometric indicators. We find that over 30,000 studies have been published so far and observe that the cancers with the highest number of published ML studies (breast, lung, and colon cancer) are those with the highest incidence, with the USA and China being the main scientific producers on the subject. Interestingly, the role of China and Japan in stomach cancer research is correlated with the number of cases of this cancer type in Asia (78% of the worldwide cases). Knowing the countries and institutions that most study each area can be of great help for improving international collaborations between research groups and countries. Our analysis shows that medical and computer science journals lead the number of publications on the subject and could be useful for researchers in the field. Finally, keyword co-occurrence analysis suggests that ML-cancer research trends focus not only on the use of ML as an effective diagnostic method, but also on the improvement of radiotherapy- and chemotherapy-based treatments.
Collapse
Affiliation(s)
| | - Alfonso Gijón
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
- Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, Granada, Spain
| | - Pablo Morales-Álvarez
- Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, Granada, Spain
- Department of Statistics and Operations Research, University of Granada, Granada, Spain
| |
Collapse
|
70
|
Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. [PMID: 37314068 DOI: 10.1002/gcc.23177] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 05/18/2023] [Accepted: 05/20/2023] [Indexed: 06/15/2023] Open
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have resulted in automated models for prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Collapse
Affiliation(s)
- Michael Cooper
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
| | - Zongliang Ji
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
| | - Rahul G Krishnan
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
71
|
Zhou X, Mao Y, Gu M, Cheng Z. WSCNet: Biomedical Image Recognition for Cell Encapsulated Microfluidic Droplets. BIOSENSORS 2023; 13:821. [PMID: 37622907 PMCID: PMC10452702 DOI: 10.3390/bios13080821] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Revised: 08/08/2023] [Accepted: 08/12/2023] [Indexed: 08/26/2023]
Abstract
Microfluidic droplets accommodating a single cell as independent microreactors are frequently demanded for single-cell analysis of phenotype and genotype. However, challenges exist in identifying and reducing the probability, which follows a Poisson distribution, that two or more cells are encapsulated in one droplet. It is of great significance to monitor and control the quantity of encapsulated content inside each droplet. We demonstrated a microfluidic system embedded with a weakly supervised cell counting network (WSCNet) to generate microfluidic droplets, evaluate their quality, and further recognize the locations of encapsulated cells. Here, we systematically verified our approach using encapsulated droplets from three different microfluidic structures. Quantitative experimental results showed that our approach can not only distinguish droplet encapsulations (F1 score > 0.88) but also locate each cell without any supervised location information (accuracy > 89%). The probability of a "single cell in one droplet" encapsulation was systematically verified under different parameters and shows good agreement with the distribution obtained by the passive method (residual sum of squares, RSS < 0.5). This study offers a comprehensive platform for the quantitative assessment of encapsulated microfluidic droplets.
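Because encapsulation counts follow a Poisson distribution, the expected fractions of empty, single-cell, and multi-cell droplets can be computed directly. The short sketch below does this for an assumed mean occupancy; the value 0.3 is purely illustrative and not taken from the paper.

```python
# Sketch: expected droplet occupancy under a Poisson model, P(k) = lam**k * exp(-lam) / k!.
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    return lam ** k * exp(-lam) / factorial(k)

lam = 0.3                                     # hypothetical mean cells per droplet
p_empty = poisson_pmf(0, lam)
p_single = poisson_pmf(1, lam)
p_multi = 1.0 - p_empty - p_single            # two or more cells in one droplet

print(f"empty: {p_empty:.3f}, single: {p_single:.3f}, multi: {p_multi:.3f}")
# empty: 0.741, single: 0.222, multi: 0.037
```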
Collapse
Affiliation(s)
| | | | | | - Zhen Cheng
- Department of Automation, Tsinghua University, Beijing 100084, China
| |
Collapse
|
72
|
Liu Y, Lawson BC, Huang X, Broom BM, Weinstein JN. Prediction of Ovarian Cancer Response to Therapy Based on Deep Learning Analysis of Histopathology Images. Cancers (Basel) 2023; 15:4044. [PMID: 37627071 PMCID: PMC10452505 DOI: 10.3390/cancers15164044] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2023] [Revised: 08/06/2023] [Accepted: 08/07/2023] [Indexed: 08/27/2023] Open
Abstract
BACKGROUND Ovarian cancer remains the leading gynecological cause of cancer mortality. Predicting the sensitivity of ovarian cancer to chemotherapy at the time of pathological diagnosis is a goal of precision medicine research that we have addressed in this study using a novel deep-learning neural network framework to analyze the histopathological images. METHODS We have developed a method based on the Inception V3 deep learning algorithm that complements other methods for predicting response to standard platinum-based therapy of the disease. For the study, we used histopathological H&E images (pre-treatment) of high-grade serous carcinoma from The Cancer Genome Atlas (TCGA) Genomic Data Commons portal to train the Inception V3 convolutional neural network system to predict whether cancers had independently been labeled as sensitive or resistant to subsequent platinum-based chemotherapy. The trained model was then tested using data from patients left out of the training process. We used receiver operating characteristic (ROC) and confusion matrix analyses to evaluate model performance and Kaplan-Meier survival analysis to correlate the predicted probability of resistance with patient outcome. Finally, occlusion sensitivity analysis was piloted as a start toward correlating histopathological features with a response. RESULTS The study dataset consisted of 248 patients with stage 2 to 4 serous ovarian cancer. For a held-out test set of forty patients, the trained deep learning network model distinguished sensitive from resistant cancers with an area under the curve (AUC) of 0.846 ± 0.009 (SE). The probability of resistance calculated from the deep-learning network was also significantly correlated with patient survival and progression-free survival. In confusion matrix analysis, the network classifier achieved an overall predictive accuracy of 85% with a sensitivity of 73% and specificity of 90% for this cohort based on the Youden-J cut-off. Stage, grade, and patient age were not statistically significant for this cohort size. Occlusion sensitivity analysis suggested histopathological features learned by the network that may be associated with sensitivity or resistance to the chemotherapy, but multiple marker studies will be necessary to follow up on those preliminary results. CONCLUSIONS This type of analysis has the potential, if further developed, to improve the prediction of response to therapy of high-grade serous ovarian cancer and perhaps be useful as a factor in deciding between platinum-based and other therapies. More broadly, it may increase our understanding of the histopathological variables that predict response and may be adaptable to other cancer types and imaging modalities.
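For readers unfamiliar with the Youden-J cut-off used in the confusion matrix analysis, the sketch below selects the ROC threshold that maximizes sensitivity plus specificity minus one with scikit-learn; the labels and scores are dummy values, not the study's data.

```python
# Sketch: choosing a Youden-J operating point on an ROC curve (dummy data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                   # 1 = platinum-resistant (illustrative)
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90, 0.55, 0.70])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                                                  # Youden's J at each threshold
best = int(np.argmax(j))

print("AUC:", roc_auc_score(y_true, y_score))
print("threshold:", thresholds[best])
print("sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```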
Collapse
Affiliation(s)
- Yuexin Liu
- Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA;
| | - Barrett C. Lawson
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA;
| | - Xuelin Huang
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA;
| | - Bradley M. Broom
- Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA;
| | - John N. Weinstein
- Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA;
- Department of Systems Biology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| |
Collapse
|
73
|
Bashir RMS, Shephard AJ, Mahmood H, Azarmehr N, Raza SEA, Khurram SA, Rajpoot NM. A digital score of peri-epithelial lymphocytic activity predicts malignant transformation in oral epithelial dysplasia. J Pathol 2023; 260:431-442. [PMID: 37294162 PMCID: PMC10952946 DOI: 10.1002/path.6094] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 04/15/2023] [Accepted: 05/02/2023] [Indexed: 06/10/2023]
Abstract
Oral squamous cell carcinoma (OSCC) is amongst the most common cancers, with more than 377,000 new cases worldwide each year. OSCC prognosis remains poor, related to cancer presentation at a late stage, indicating the need for early detection to improve patient prognosis. OSCC is often preceded by a premalignant state known as oral epithelial dysplasia (OED), which is diagnosed and graded using subjective histological criteria leading to variability and prognostic unreliability. In this work, we propose a deep learning approach for the development of prognostic models for malignant transformation and their association with clinical outcomes in histology whole slide images (WSIs) of OED tissue sections. We train a weakly supervised method on OED cases (n = 137) with malignant transformation (n = 50) and mean malignant transformation time of 6.51 years (±5.35 SD). Stratified five-fold cross-validation achieved an average area under the receiver-operator characteristic curve (AUROC) of 0.78 for predicting malignant transformation in OED. Hotspot analysis revealed various features of nuclei in the epithelium and peri-epithelial tissue to be significant prognostic factors for malignant transformation, including the count of peri-epithelial lymphocytes (PELs) (p < 0.05), epithelial layer nuclei count (NC) (p < 0.05), and basal layer NC (p < 0.05). Progression-free survival (PFS) using the epithelial layer NC (p < 0.05, C-index = 0.73), basal layer NC (p < 0.05, C-index = 0.70), and PELs count (p < 0.05, C-index = 0.73) all showed association of these features with a high risk of malignant transformation in our univariate analysis. Our work shows the application of deep learning for the prognostication and prediction of PFS of OED for the first time and offers potential to aid patient management. Further evaluation and testing on multi-centre data is required for validation and translation to clinical practice. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
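The survival associations above are summarized with the concordance index (C-index). A minimal sketch of Harrell's C-index, computed on made-up times, events, and risk scores, is shown below to make the statistic explicit.

```python
# Sketch: Harrell's concordance index for a prognostic risk score against
# time-to-event data. All values below are illustrative, not study data.
def concordance_index(times, events, risk_scores):
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if subject i had the event before subject j's time
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

times = [2.1, 6.5, 3.2, 8.0, 1.4]         # years to transformation or censoring
events = [1, 0, 1, 0, 1]                  # 1 = malignant transformation observed
scores = [0.9, 0.2, 0.6, 0.1, 0.8]        # higher = higher predicted risk
print(round(concordance_index(times, events, scores), 3))     # 0.889
```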
Collapse
Affiliation(s)
| | - Adam J Shephard
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
| | - Hanya Mahmood
- Academic Unit of Oral & Maxillofacial Surgery, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
| | - Neda Azarmehr
- Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
| | - Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
| | - Syed Ali Khurram
- Academic Unit of Oral & Maxillofacial Surgery, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
| | - Nasir M Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
| |
Collapse
|
74
|
Baidar Bakht A, Javed S, Gilani SQ, Karki H, Muneeb M, Werghi N. DeepBLS: Deep Feature-Based Broad Learning System for Tissue Phenotyping in Colorectal Cancer WSIs. J Digit Imaging 2023; 36:1653-1662. [PMID: 37059892 PMCID: PMC10406762 DOI: 10.1007/s10278-023-00797-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Revised: 02/09/2023] [Accepted: 02/10/2023] [Indexed: 04/16/2023] Open
Abstract
Tissue phenotyping is a fundamental step in computational pathology for the analysis of the tumor micro-environment in whole slide images (WSIs). Automatic tissue phenotyping in WSIs of colorectal cancer (CRC) assists pathologists in better cancer grading and prognostication. In this paper, we propose a novel algorithm for the identification of distinct tissue components in colon cancer histology images by blending a broad learning system with deep feature extraction. Firstly, we extract features from the pre-trained VGG19 network, which are then transformed into a mapped feature space for enhancement node generation. Utilizing both mapped features and enhancement nodes, the proposed algorithm classifies seven distinct tissue components: stroma, tumor, complex stroma, necrotic, normal benign, lymphocytes, and smooth muscle. To validate our proposed model, experiments are performed on two publicly available colorectal cancer histology datasets. We show that our approach achieves a remarkable performance boost, surpassing existing state-of-the-art methods by (1.3% AvTP, 2% F1) and (7% AvTP, 6% F1) on CRCD-1 and CRCD-2, respectively.
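The broad-learning idea behind this pipeline (deep features mapped to feature nodes, expanded by nonlinear enhancement nodes, then read out by a linear layer) can be sketched in a few lines. The snippet below is a simplified, hypothetical version: random matrices stand in for real VGG19 activations and for properly learned node weights, and a ridge classifier replaces the closed-form least-squares readout.

```python
# Simplified broad-learning-style readout: deep features -> mapped-feature nodes
# -> enhancement nodes -> ridge-regression output layer. The random "features"
# stand in for real VGG19 activations; the 7 classes mirror the abstract.
import numpy as np
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))          # 200 patches x 512-d dummy deep features
y = rng.integers(0, 7, size=200)         # 7 tissue classes

W_map = rng.normal(size=(512, 64))       # random weights for mapped-feature nodes
Z = np.tanh(X @ W_map)
W_enh = rng.normal(size=(64, 128))       # random weights for enhancement nodes
H = np.tanh(Z @ W_enh)

A = np.hstack([Z, H])                    # broad expansion fed to the output layer
clf = RidgeClassifier(alpha=1.0).fit(A, y)
print("training accuracy:", clf.score(A, y))
```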
Collapse
Affiliation(s)
- Ahsan Baidar Bakht
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
| | - Sajid Javed
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
| | - Syed Qasim Gilani
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, 33431 USA
| | - Hamad Karki
- Mechanical Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
| | - Muhammad Muneeb
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
| | - Naoufel Werghi
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
| |
Collapse
|
75
|
Fu X, Sahai E, Wilkins A. Application of digital pathology-based advanced analytics of tumour microenvironment organisation to predict prognosis and therapeutic response. J Pathol 2023; 260:578-591. [PMID: 37551703 PMCID: PMC10952145 DOI: 10.1002/path.6153] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2023] [Revised: 05/25/2023] [Accepted: 06/07/2023] [Indexed: 08/09/2023]
Abstract
In recent years, the application of advanced analytics, especially artificial intelligence (AI), to digital H&E images, and other histological image types, has begun to radically change how histological images are used in the clinic. Alongside the recognition that the tumour microenvironment (TME) has a profound impact on tumour phenotype, the technical development of highly multiplexed immunofluorescence platforms has enhanced the biological complexity that can be captured in the TME with high precision. AI has an increasingly powerful role in the recognition and quantitation of image features and the association of such features with clinically important outcomes, as occurs in distinct stages in conventional machine learning. Deep-learning algorithms are able to elucidate TME patterns inherent in the input data with minimum levels of human intelligence and, hence, have the potential to achieve clinically relevant predictions and discovery of important TME features. Furthermore, the diverse repertoire of deep-learning algorithms able to interrogate TME patterns extends beyond convolutional neural networks to include attention-based models, graph neural networks, and multimodal models. To date, AI models have largely been evaluated retrospectively, outside the well-established rigour of prospective clinical trials, in part because traditional clinical trial methodology may not always be suitable for the assessment of AI technology. However, to enable digital pathology-based advanced analytics to meaningfully impact clinical care, specific measures of 'added benefit' to the current standard of care and validation in a prospective setting are important. This will need to be accompanied by adequate measures of explainability and interpretability. Despite such challenges, the combination of expanding datasets, increased computational power, and the possibility of integration of pre-clinical experimental insights into model development means there is exciting potential for the future progress of these AI applications. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Collapse
Affiliation(s)
- Xiao Fu
- Tumour Cell Biology Laboratory, The Francis Crick Institute, London, UK
- Biomolecular Modelling Laboratory, The Francis Crick Institute, London, UK
| | - Erik Sahai
- Tumour Cell Biology Laboratory, The Francis Crick Institute, London, UK
| | - Anna Wilkins
- Tumour Cell Biology Laboratory, The Francis Crick Institute, London, UK
- Division of Radiotherapy and Imaging, Institute of Cancer Research, London, UK
- Royal Marsden Hospitals NHS Trust, London, UK
| |
Collapse
|
76
|
Wu Y, Li Y, Xiong X, Liu X, Lin B, Xu B. Recent advances of pathomics in colorectal cancer diagnosis and prognosis. Front Oncol 2023; 13:1094869. [PMID: 37538112 PMCID: PMC10396402 DOI: 10.3389/fonc.2023.1094869] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Accepted: 06/13/2023] [Indexed: 08/05/2023] Open
Abstract
Colorectal cancer (CRC) is one of the most common malignancies, with the third highest incidence and the second highest mortality in the world. To improve therapeutic outcomes, risk stratification and prognosis prediction are needed to guide clinical treatment decisions. Achieving these goals has been facilitated by the fast development of artificial intelligence (AI)-based algorithms using radiological and pathological data, in combination with genomic information. Among them, features extracted from pathological images, termed pathomics, are able to reflect sub-visual characteristics linked to better stratification and prediction of therapeutic responses. In this paper, we review recent advances in pathological image-based algorithms in CRC, focusing on the diagnosis of benign and malignant lesions and microsatellite instability, as well as the prediction of response to neoadjuvant chemoradiotherapy and the prognosis of CRC patients.
Collapse
Affiliation(s)
- Yihan Wu
- School of Medicine, Chongqing University, Chongqing, China
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing, China
| | - Yi Li
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing, China
- Bioengineering College, Chongqing University, Chongqing, China
| | - Xiaomin Xiong
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing, China
- Bioengineering College, Chongqing University, Chongqing, China
| | - Xiaohua Liu
- Bioengineering College, Chongqing University, Chongqing, China
| | - Bo Lin
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing, China
| | - Bo Xu
- School of Medicine, Chongqing University, Chongqing, China
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital, Chongqing, China
| |
Collapse
|
77
|
Moscalu M, Moscalu R, Dascălu CG, Țarcă V, Cojocaru E, Costin IM, Țarcă E, Șerban IL. Histopathological Images Analysis and Predictive Modeling Implemented in Digital Pathology-Current Affairs and Perspectives. Diagnostics (Basel) 2023; 13:2379. [PMID: 37510122 PMCID: PMC10378281 DOI: 10.3390/diagnostics13142379] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2023] [Revised: 07/11/2023] [Accepted: 07/12/2023] [Indexed: 07/30/2023] Open
Abstract
In modern clinical practice, digital pathology plays an essential role, having become a technological necessity for the work of pathological anatomy laboratories. The development of information technology has greatly facilitated the management of digital images and their sharing for clinical use; methods for analyzing digital histopathological images, based on artificial intelligence techniques and specific models, quantify the required information with significantly higher consistency and precision than optical microscopy. In parallel, unprecedented advances in machine learning facilitate, through the synergy of artificial intelligence and digital pathology, diagnosis based on image analysis, previously limited only to certain specialties. Therefore, the integration of digital images into the study of pathology, combined with advanced algorithms and computer-assisted diagnostic techniques, extends the boundaries of the pathologist's vision beyond the microscopic image and allows specialists to apply and integrate their knowledge and experience adequately. We conducted a search in PubMed on the topic of digital pathology and its applications to quantify the current state of knowledge. We found that computer-aided image analysis has superior potential to identify, extract, and quantify features in more detail than a human pathologist can; it performs tasks that exceed manual capacity and can produce new diagnostic algorithms and prediction models, applicable in translational research, that are able to identify new characteristics of diseases based on changes at the cellular and molecular level.
Collapse
Affiliation(s)
- Mihaela Moscalu
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| | - Roxana Moscalu
- Wythenshawe Hospital, Manchester University NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester M139PT, UK
| | - Cristina Gena Dascălu
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| | - Viorel Țarcă
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| | - Elena Cojocaru
- Department of Morphofunctional Sciences I, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| | - Ioana Mădălina Costin
- Faculty of Medicine, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| | - Elena Țarcă
- Department of Surgery II-Pediatric Surgery, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| | - Ionela Lăcrămioara Șerban
- Department of Morpho-Functional Sciences II, Faculty of Medicine, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
| |
Collapse
|
78
|
Suleymanova I, Bychkov D, Kopra J. A deep convolutional neural network for efficient microglia detection. Sci Rep 2023; 13:11139. [PMID: 37429956 PMCID: PMC10333175 DOI: 10.1038/s41598-023-37963-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 06/30/2023] [Indexed: 07/12/2023] Open
Abstract
Microglial cells are a type of glial cell that makes up 10-15% of all brain cells, and they play a significant role in neurodegenerative disorders and cardiovascular diseases. Despite their vital role in these diseases, developing fully automated microglia counting methods from immunohistological images is challenging. Current image analysis methods are inefficient and lack accuracy in detecting microglia due to their morphological heterogeneity. This study presents the development and validation of a fully automated and efficient microglia detection method based on the YOLOv3 deep learning algorithm. We applied this method to analyse the number of microglia in different spinal cord and brain regions of rats exposed to opioid-induced hyperalgesia/tolerance. Our numerical tests showed that the proposed method outperforms existing computational and manual methods with high accuracy, achieving 94% precision, 91% recall, and a 92% F1-score. Furthermore, our tool is freely available and adds value to exploring different disease models. Our findings demonstrate the effectiveness and efficiency of our new tool in automated microglia detection, providing a valuable asset for researchers in neuroscience.
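The reported precision, recall, and F1-score follow directly from the counts of matched detections. The sketch below reproduces figures of the same order with hypothetical counts of true positives, false positives, and missed microglia; these counts are illustrative, not from the study.

```python
# Sketch: precision, recall and F1 for a detector, from illustrative counts only.
def detection_metrics(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = detection_metrics(tp=470, fp=30, fn=47)   # hypothetical counts
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
# precision=0.94 recall=0.91 F1=0.92
```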
Collapse
Affiliation(s)
- Ilida Suleymanova
- Faculty of Biological and Environmental Sciences, Helsinki Institute of Life Science (HiLIFE), University of Helsinki, Helsinki, Finland.
| | - Dmitrii Bychkov
- Institute for Molecular Medicine Finland (FIMM), Helsinki Institute for Life Science (HiLIFE), University of Helsinki, Helsinki, Finland
| | - Jaakko Kopra
- Division of Pharmacology and Pharmacotherapy, Faculty of Pharmacy, University of Helsinki, Helsinki, Finland
| |
Collapse
|
79
|
Pierre K, Gupta M, Raviprasad A, Sadat Razavi SM, Patel A, Peters K, Hochhegger B, Mancuso A, Forghani R. Medical imaging and multimodal artificial intelligence models for streamlining and enhancing cancer care: opportunities and challenges. Expert Rev Anticancer Ther 2023; 23:1265-1279. [PMID: 38032181 DOI: 10.1080/14737140.2023.2286001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Accepted: 11/16/2023] [Indexed: 12/01/2023]
Abstract
INTRODUCTION Artificial intelligence (AI) has the potential to transform oncologic care. There have been significant developments in AI applications in medical imaging and increasing interest in multimodal models. These are likely to enable improved oncologic care through more precise diagnosis, increasingly in a more personalized and less invasive manner. In this review, we provide an overview of the current state and of the challenges that clinicians, administrative personnel, and policy makers need to be aware of and mitigate for the technology to reach its full potential. AREAS COVERED The article provides a brief targeted overview of AI and a high-level review of the current state and future potential of AI applications in diagnostic radiology and, to a lesser extent, digital pathology, focusing on oncologic applications. This is followed by a discussion of emerging approaches, including multimodal models. The article concludes with a discussion of technical and regulatory challenges and infrastructure needs for AI to realize its full potential. EXPERT OPINION There is a large volume of promising research and a steadily increasing number of commercially available AI tools. For the most advanced and promising precision diagnostic applications of AI to be used clinically, robust and comprehensive quality monitoring systems and informatics platforms will likely be required.
Collapse
Affiliation(s)
- Kevin Pierre
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
| | - Manas Gupta
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
| | - Abheek Raviprasad
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- University of Florida College of Medicine, Gainesville, FL, USA
| | - Seyedeh Mehrsa Sadat Razavi
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- University of Florida College of Medicine, Gainesville, FL, USA
| | - Anjali Patel
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- University of Florida College of Medicine, Gainesville, FL, USA
| | - Keith Peters
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
| | - Bruno Hochhegger
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
| | - Anthony Mancuso
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
| | - Reza Forghani
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Division of Medical Physics, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Neurology, Division of Movement Disorders, University of Florida College of Medicine, Gainesville, FL, USA
| |
Collapse
|
80
|
Aziz MT, Mahmud SMH, Elahe MF, Jahan H, Rahman MH, Nandi D, Smirani LK, Ahmed K, Bui FM, Moni MA. A Novel Hybrid Approach for Classifying Osteosarcoma Using Deep Feature Extraction and Multilayer Perceptron. Diagnostics (Basel) 2023; 13:2106. [PMID: 37371001 DOI: 10.3390/diagnostics13122106] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 06/10/2023] [Accepted: 06/13/2023] [Indexed: 06/29/2023] Open
Abstract
Osteosarcoma is the most common type of bone cancer and tends to occur in teenagers and young adults. Due to the crowded context, inter-class similarity, intra-class variation, and noise in H&E-stained (hematoxylin and eosin) histology tissue, pathologists frequently face difficulty in osteosarcoma tumor classification. In this paper, we introduce a hybrid framework for improving the efficiency of three-class osteosarcoma classification (nontumor, necrosis, and viable tumor) by merging different CNN-based architectures with a multilayer perceptron (MLP) algorithm on a whole slide image (WSI) dataset. We performed various kinds of preprocessing on the WSI images. Then, five pre-trained CNN models were trained with multiple parameter settings to extract insightful features via transfer learning, where convolution combined with pooling was utilized as the feature extractor. For feature selection, a decision tree-based RFE was designed to recursively eliminate less significant features and improve the model's generalization performance for accurate prediction; here, a decision tree was used as the estimator to select the features. Finally, a modified MLP classifier was employed to classify binary and multiclass types of osteosarcoma under five-fold cross-validation to assess the robustness of our proposed hybrid model. Moreover, the feature selection criteria were analyzed to select the optimal one based on execution time and accuracy. The proposed model achieved an accuracy of 95.2% for multiclass classification and 99.4% for binary classification. Experimental findings indicate that our proposed model significantly outperforms existing methods; therefore, it could support doctors in osteosarcoma diagnosis in clinics. In addition, our proposed model is integrated into a web application using the FastAPI web framework to provide real-time predictions.
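The downstream stage described above (decision-tree-based recursive feature elimination over pre-extracted CNN features, followed by an MLP classifier under five-fold cross-validation) maps naturally onto scikit-learn. The sketch below is a hypothetical reconstruction on random features standing in for the transfer-learning embeddings; the class count of three mirrors the nontumor / necrosis / viable-tumor split, while all other settings are assumptions.

```python
# Sketch: decision-tree-based RFE to prune deep features, then an MLP classifier,
# evaluated with five-fold cross-validation on dummy data.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 256))           # stand-in for CNN transfer-learning features
y = rng.integers(0, 3, size=300)          # nontumor / necrosis / viable tumor

pipe = make_pipeline(
    RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=64),
    MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0),
)
scores = cross_val_score(pipe, X, y, cv=5)
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```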
Collapse
Affiliation(s)
- Md Tarek Aziz
- Centre for Advanced Machine Learning and Applications (CAMLAs), Bashundhara R/A, Dhaka 1229, Bangladesh
| | - S M Hasan Mahmud
- Centre for Advanced Machine Learning and Applications (CAMLAs), Bashundhara R/A, Dhaka 1229, Bangladesh
- Department of Computer Science, American International University-Bangladesh (AIUB), 408/1, Kuratoli, Khilkhet, Dhaka 1229, Bangladesh
| | - Md Fazla Elahe
- Centre for Advanced Machine Learning and Applications (CAMLAs), Bashundhara R/A, Dhaka 1229, Bangladesh
- Department of Software Engineering, Daffodil International University, Daffodil Smart City (DSC), Savar, Dhaka 1216, Bangladesh
| | - Hosney Jahan
- Centre for Advanced Machine Learning and Applications (CAMLAs), Bashundhara R/A, Dhaka 1229, Bangladesh
- Department of Computer Science & Engineering (CSE), Military Institute of Science and Technology (MIST), Mirpur Cantonment, Dhaka 1216, Bangladesh
| | - Md Habibur Rahman
- Centre for Advanced Machine Learning and Applications (CAMLAs), Bashundhara R/A, Dhaka 1229, Bangladesh
- Department of Computer Science and Engineering, Islamic University, Kushtia 7003, Bangladesh
| | - Dip Nandi
- Department of Computer Science, American International University-Bangladesh (AIUB), 408/1, Kuratoli, Khilkhet, Dhaka 1229, Bangladesh
| | - Lassaad K Smirani
- The Deanship of Information Technology and E-learning, Umm Al-Qura University, Mecca 24382, Saudi Arabia
| | - Kawsar Ahmed
- Department of Electrical and Computer Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, SK S7N 5A9, Canada
- Group of Biophotomatiχ, Department of Information and Communication Technology (ICT), Mawlana Bhashani Science and Technology University (MBSTU), Tangail 1902, Bangladesh
| | - Francis M Bui
- Department of Electrical and Computer Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, SK S7N 5A9, Canada
| | - Mohammad Ali Moni
- Artificial Intelligence & Digital Health, School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, St. Lucia, QLD 4072, Australia
| |
Collapse
|
81
|
Khazaee Fadafen M, Rezaee K. Ensemble-based multi-tissue classification approach of colorectal cancer histology images using a novel hybrid deep learning framework. Sci Rep 2023; 13:8823. [PMID: 37258631 DOI: 10.1038/s41598-023-35431-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Accepted: 05/17/2023] [Indexed: 06/02/2023] Open
Abstract
Colorectal cancer (CRC) is the second leading cause of cancer death in the world, so digital pathology is essential for assessing prognosis. Due to the increasing resolution and quantity of whole slide images (WSIs), as well as the lack of annotated information, previous methodologies cannot be generalized as effective decision-making systems. Since deep learning (DL) methods can handle large-scale applications, they provide a viable alternative for histopathology image (HI) analysis. DL architectures alone, however, may not be sufficient to classify CRC tissues based on anatomical histopathology data. A dilated ResNet (dResNet) structure and attention module are used to generate deep feature maps in order to classify multiple tissues in HIs. In addition, neighborhood component analysis (NCA) overcomes the constraint of computational complexity. After feature selection, the data are fed into a deep support vector machine (SVM) based on an ensemble learning algorithm called DeepSVM. The CRC-5000 and NCT-CRC-HE-100K datasets were analyzed to validate and test the hybrid procedure. We demonstrate that the hybrid model achieves 98.75% and 99.76% accuracy on these CRC datasets, respectively. The results showed that only pathologists' labels could successfully classify unseen WSIs. Furthermore, the hybrid deep learning method outperforms state-of-the-art approaches in terms of computational efficiency and time. Using the proposed mechanism for tissue analysis, it will be possible to correctly predict CRC based on accurate pathology image classification.
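As a rough sketch of the selection-and-classification stage, the snippet below chains scikit-learn's neighborhood components analysis (used here as a supervised dimensionality reducer) with an RBF-kernel SVM. It is a simplified stand-in for the dResNet + NCA + DeepSVM pipeline: the random matrix replaces the dilated-ResNet feature maps, and the nine-class setup is only an assumption echoing the NCT-CRC-HE-100K tissue classes.

```python
# Simplified sketch: NCA dimensionality reduction followed by an SVM on dummy
# features that stand in for dResNet + attention feature maps.
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 128))           # dummy deep features
y = rng.integers(0, 9, size=400)          # assumed 9 tissue classes

pipe = make_pipeline(
    NeighborhoodComponentsAnalysis(n_components=32, random_state=0),
    SVC(kernel="rbf", C=1.0),
)
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```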
Collapse
Affiliation(s)
- Masoud Khazaee Fadafen
- Department of Electrical Engineering, Technical and Vocational University (TVU), Tehran, Iran
| | - Khosro Rezaee
- Department of Biomedical Engineering, Meybod University, Meybod, Iran.
| |
Collapse
|
82
|
Islam Sumon R, Bhattacharjee S, Hwang YB, Rahman H, Kim HC, Ryu WS, Kim DM, Cho NH, Choi HK. Densely Convolutional Spatial Attention Network for nuclei segmentation of histological images for computational pathology. Front Oncol 2023; 13:1009681. [PMID: 37305563 PMCID: PMC10248729 DOI: 10.3389/fonc.2023.1009681] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 05/05/2023] [Indexed: 06/13/2023] Open
Abstract
Introduction Automatic nuclear segmentation in digital microscopic tissue images can aid pathologists in extracting high-quality features for nuclear morphometrics and other analyses. However, image segmentation is a challenging task in medical image processing and analysis. This study aimed to develop a deep learning-based method for nuclei segmentation of histological images for computational pathology. Methods The original U-Net model sometimes has limitations in exploring significant features. Herein, we present the Densely Convolutional Spatial Attention Network (DCSA-Net) model, based on U-Net, to perform the segmentation task. The developed model was also tested on an external multi-tissue dataset, MoNuSeg. Developing deep learning algorithms that segment nuclei well requires a large quantity of data, which is expensive and often infeasible to obtain. We collected hematoxylin and eosin-stained image data sets from two hospitals to train the model with a variety of nuclear appearances. Because of the limited number of annotated pathology images, we introduce a small publicly accessible data set of prostate cancer (PCa) with more than 16,000 labeled nuclei. To construct our proposed model, we developed the DCSA module, an attention mechanism for capturing useful information from raw images. We also used several other artificial intelligence-based segmentation methods and tools to compare their results with our proposed technique. Results To assess the performance of nuclei segmentation, we evaluated the model's outputs using the accuracy, Dice coefficient (DC), and Jaccard coefficient (JC) scores. The proposed technique outperformed the other methods and achieved superior nuclei segmentation, with accuracy, DC, and JC of 96.4% (95% confidence interval [CI]: 96.2-96.6), 81.8 (95% CI: 80.8-83.0), and 69.3 (95% CI: 68.2-70.0), respectively, on the internal test data set. Conclusion Our proposed method demonstrates superior performance in segmenting cell nuclei of histological images from internal and external datasets, and outperforms many standard segmentation algorithms used for comparative analysis.
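The Dice and Jaccard coefficients used in the Results are simple overlap ratios between predicted and ground-truth masks. The sketch below computes both on toy binary masks to make the definitions explicit; the masks are arbitrary examples.

```python
# Sketch: Dice and Jaccard coefficients for a binary nuclei mask vs ground truth.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, gt).sum()
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

def jaccard(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True   # toy prediction
gt = np.zeros((64, 64), dtype=bool); gt[15:45, 15:45] = True       # toy ground truth

print(f"Dice = {dice(pred, gt):.3f}, Jaccard = {jaccard(pred, gt):.3f}")
```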
Collapse
Affiliation(s)
- Rashadul Islam Sumon
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
| | - Subrata Bhattacharjee
- Department of Computer Engineering, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
| | - Yeong-Byn Hwang
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
| | - Hafizur Rahman
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
| | - Hee-Cheol Kim
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
| | - Wi-Sun Ryu
- Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
| | - Dong Min Kim
- Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
| | - Nam-Hoon Cho
- Department of Pathology, Yonsei University Hospital, Seoul, Republic of Korea
| | - Heung-Kook Choi
- Department of Computer Engineering, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
| |
Collapse
|
83
|
Hu W, Li X, Li C, Li R, Jiang T, Sun H, Huang X, Grzegorzek M, Li X. A state-of-the-art survey of artificial neural networks for Whole-slide Image analysis: From popular Convolutional Neural Networks to potential visual transformers. Comput Biol Med 2023; 161:107034. [PMID: 37230019 DOI: 10.1016/j.compbiomed.2023.107034] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2022] [Revised: 04/13/2023] [Accepted: 05/10/2023] [Indexed: 05/27/2023]
Abstract
In recent years, with the advancement of computer-aided diagnosis (CAD) technology and whole slide imaging (WSI), histopathological WSIs have gradually come to play a crucial role in the diagnosis and analysis of diseases. To increase the objectivity and accuracy of pathologists' work, artificial neural network (ANN) methods are widely needed for the segmentation, classification, and detection of histopathological WSIs. However, existing review papers focus only on equipment hardware, development status, and trends, and do not summarize in detail the state-of-the-art neural networks used for whole-slide image analysis. In this paper, WSI analysis methods based on ANNs are reviewed. Firstly, the development status of WSI and ANN methods is introduced. Secondly, we summarize the common ANN methods. Next, we discuss publicly available WSI datasets and evaluation metrics. The ANN architectures for WSI processing are then divided into classical neural networks and deep neural networks (DNNs) and analyzed. Finally, the application prospects of these analytical methods are discussed, with Visual Transformers identified as an important potential direction.
Collapse
Affiliation(s)
- Weiming Hu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Xintong Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
| | - Rui Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
| | - Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
| | - Xinyu Huang
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany
| | - Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
| | - Xiaoyan Li
- Cancer Hospital of China Medical University, Shenyang, China.
| |
Collapse
|
84
|
Zhang H, AbdulJabbar K, Moore DA, Akarca A, Enfield KS, Jamal-Hanjani M, Raza SEA, Veeriah S, Salgado R, McGranahan N, Le Quesne J, Swanton C, Marafioti T, Yuan Y. Spatial Positioning of Immune Hotspots Reflects the Interplay between B and T Cells in Lung Squamous Cell Carcinoma. Cancer Res 2023; 83:1410-1425. [PMID: 36853169 PMCID: PMC10152235 DOI: 10.1158/0008-5472.can-22-2589] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Revised: 01/05/2023] [Accepted: 02/24/2023] [Indexed: 03/01/2023]
Abstract
Beyond tertiary lymphoid structures, a significant number of immune-rich areas without germinal center-like structures are observed in non-small cell lung cancer. Here, we integrated transcriptomic data and digital pathology images to study the prognostic implications, spatial locations, and constitution of immune rich areas (immune hotspots) in a cohort of 935 patients with lung cancer from The Cancer Genome Atlas. A high intratumoral immune hotspot score, which measures the proportion of immune hotspots interfacing with tumor islands, was correlated with poor overall survival in lung squamous cell carcinoma but not in lung adenocarcinoma. Lung squamous cell carcinomas with high intratumoral immune hotspot scores were characterized by consistent upregulation of B-cell signatures. Spatial statistical analyses conducted on serial multiplex IHC slides further revealed that only 4.87% of peritumoral immune hotspots and 0.26% of intratumoral immune hotspots were tertiary lymphoid structures. Significantly lower densities of CD20+CXCR5+ and CD79b+ B cells and less diverse immune cell interactions were found in intratumoral immune hotspots compared with peritumoral immune hotspots. Furthermore, there was a negative correlation between the percentages of CD8+ T cells and T regulatory cells in intratumoral but not in peritumoral immune hotspots, with tertiary lymphoid structures excluded. These findings suggest that the intratumoral immune hotspots reflect an immunosuppressive niche compared with peritumoral immune hotspots, independent of the distribution of tertiary lymphoid structures. A balance toward increased intratumoral immune hotspots is indicative of a compromised antitumor immune response and poor outcome in lung squamous cell carcinoma. SIGNIFICANCE Intratumoral immune hotspots beyond tertiary lymphoid structures reflect an immunosuppressive microenvironment, different from peritumoral immune hotspots, warranting further study in the context of immunotherapies.
Collapse
Affiliation(s)
- Hanyun Zhang
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, United Kingdom
- Division of Molecular Pathology, The Institute of Cancer Research, London, United Kingdom
| | - Khalid AbdulJabbar
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, United Kingdom
- Division of Molecular Pathology, The Institute of Cancer Research, London, United Kingdom
| | - David A. Moore
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, United Kingdom
- Department of Cellular Pathology, University College London Hospitals, London, United Kingdom
| | - Ayse Akarca
- Department of Cellular Pathology, University College London Hospitals, London, United Kingdom
| | - Katey S.S. Enfield
- Cancer Evolution and Genome Instability Laboratory, The Francis Crick Institute, London, United Kingdom
| | - Mariam Jamal-Hanjani
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, United Kingdom
- Department of Oncology, University College London Hospitals, London, United Kingdom
- Cancer Metastasis Lab, University College London Cancer Institute, London, United Kingdom
| | - Shan E. Ahmed Raza
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, United Kingdom
- Division of Molecular Pathology, The Institute of Cancer Research, London, United Kingdom
| | - Selvaraju Veeriah
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, United Kingdom
| | | | - Nicholas McGranahan
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, United Kingdom
- Cancer Genome Evolution Research Group, Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, United Kingdom
| | - John Le Quesne
- Cancer Research UK Beatson Institute, Glasgow, United Kingdom
- School of Cancer Sciences, University of Glasgow, Glasgow, United Kingdom
- NHS Greater Glasgow and Clyde Pathology Department, Queen Elizabeth University Hospital, London, United Kingdom
| | - Charles Swanton
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, United Kingdom
- Cancer Evolution and Genome Instability Laboratory, The Francis Crick Institute, London, United Kingdom
- Department of Oncology, University College London Hospitals, London, United Kingdom
| | - Teresa Marafioti
- Department of Cellular Pathology, University College London Hospitals, London, United Kingdom
| | - Yinyin Yuan
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, United Kingdom
- Division of Molecular Pathology, The Institute of Cancer Research, London, United Kingdom
| |
Collapse
|
85
|
A scale and region-enhanced decoding network for nuclei classification in histology image. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104626] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
|
86
|
Hu H, Ye R, Thiyagalingam J, Coenen F, Su J. Triple-kernel gated attention-based multiple instance learning with contrastive learning for medical image analysis. APPL INTELL 2023; 53:1-16. [PMID: 37363384 PMCID: PMC10072016 DOI: 10.1007/s10489-023-04458-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/05/2023] [Indexed: 04/07/2023]
Abstract
In machine learning, multiple instance learning is a method that evolved from supervised learning algorithms; it defines a "bag" as a collection of multiple examples and has a wide range of applications. In this paper, we propose a novel deep multiple instance learning model for medical image analysis, called triple-kernel gated attention-based multiple instance learning with contrastive learning, which overcomes the limitations of existing multiple instance learning approaches to medical image analysis. Our model consists of four steps: i) extracting representations with a simple convolutional neural network trained using contrastive learning; ii) using three different kernel functions to obtain the importance of each instance in the entire image and form an attention map; iii) based on the attention map, aggregating the entire image by attention-based MIL pooling; and iv) feeding the results into the classifier for prediction. Results on different datasets demonstrate that the proposed model outperforms state-of-the-art methods on binary and weakly supervised classification tasks. It can provide more efficient classification results for various disease models and additional explanatory information.
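The aggregation step iii) is a gated attention-based MIL pooling; a minimal single-kernel sketch in PyTorch is given below, whereas the paper combines three attention kernels. The embedding dimensions and the dummy bag are assumptions for illustration.

```python
# Minimal sketch of gated attention-based MIL pooling over a bag of instance embeddings.
import torch
import torch.nn as nn

class GatedAttentionPool(nn.Module):
    def __init__(self, dim: int = 512, hidden: int = 128):
        super().__init__()
        self.V = nn.Linear(dim, hidden)   # tanh branch
        self.U = nn.Linear(dim, hidden)   # sigmoid gate branch
        self.w = nn.Linear(hidden, 1)

    def forward(self, H):                 # H: (num_instances, dim)
        a = self.w(torch.tanh(self.V(H)) * torch.sigmoid(self.U(H)))  # (N, 1) scores
        a = torch.softmax(a, dim=0)       # attention weights over the bag
        return (a * H).sum(dim=0), a      # bag embedding and attention map

bag = torch.randn(50, 512)                # 50 dummy instance embeddings
z, attn = GatedAttentionPool()(bag)
print(z.shape, attn.shape)                # torch.Size([512]) torch.Size([50, 1])
```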
Collapse
Affiliation(s)
- Huafeng Hu
- Department of Electrical and Electronic Engineering, University of Liverpool based at Xi’an Jiaotong-Liverpool University, Suzhou, 215123 Jiangsu China
| | - Ruijie Ye
- Department of Computer Science, University of Liverpool, Liverpool, L69 3BX UK
| | - Jeyan Thiyagalingam
- Scientific Computing Department, Rutherford Appleton Laboratory, Science and Technology Facilities Council, Harwell Campus, Didcot, OX11 0QX UK
| | - Frans Coenen
- Department of Computer Science, University of Liverpool, Liverpool, L69 3BX UK
| | - Jionglong Su
- School of AI and Advanced Computing, XJTLU Entrepreneur College (Taicang), Xi’an Jiaotong-Liverpool University, Suzhou, 215123 Jiangsu China
| |
Collapse
|
87
|
Zhao T, Fu C, Tian Y, Song W, Sham CW. GSN-HVNET: A Lightweight, Multi-Task Deep Learning Framework for Nuclei Segmentation and Classification. Bioengineering (Basel) 2023; 10:bioengineering10030393. [PMID: 36978784 PMCID: PMC10045412 DOI: 10.3390/bioengineering10030393] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2023] [Revised: 03/13/2023] [Accepted: 03/20/2023] [Indexed: 03/30/2023] Open
Abstract
Nuclei segmentation and classification are two basic and essential tasks in computer-aided diagnosis of digital pathology images, and deep-learning-based methods have achieved significant success on both. Unfortunately, most existing studies tackle the two tasks by directly splicing together two related neural networks, resulting in redundant computation and an unnecessarily large model. This paper therefore proposes a lightweight deep learning framework (GSN-HVNET) with an encoder-decoder structure for simultaneous segmentation and classification of nuclei. The decoder consists of three branches that output the semantic segmentation of nuclei, the horizontal and vertical (HV) distances of nuclei pixels to their mass centers, and the class of each nucleus, respectively. The instance segmentation results are obtained by combining the outputs of the first and second branches. To reduce the computational cost and improve network stability under small batch sizes, we propose two newly designed blocks, Residual-Ghost-SN (RGS) and Dense-Ghost-SN (DGS). Furthermore, considering practical usage in pathological diagnosis, we redefine the classification principle of the CoNSeP dataset. Experimental results demonstrate that the proposed model outperforms other state-of-the-art models in segmentation and classification accuracy by a significant margin while maintaining high computational efficiency.
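A minimal PyTorch sketch of a three-branch output head of the kind described above (semantic segmentation, HV distance maps, and nucleus class); the RGS/DGS blocks and the exact GSN-HVNET layout are not reproduced, and the channel counts are assumptions.

```python
import torch
import torch.nn as nn

class ThreeBranchHead(nn.Module):
    """Sketch of a three-branch output head (not the actual GSN-HVNET blocks).

    Branch 1: per-pixel nucleus/background probability (semantic segmentation).
    Branch 2: horizontal and vertical (HV) distance of each nucleus pixel to its mass center.
    Branch 3: per-pixel nucleus class probabilities.
    Instance masks are later recovered by combining branches 1 and 2
    (e.g. via gradient/watershed post-processing), as in HoVer-Net-style pipelines.
    """
    def __init__(self, in_ch=64, n_classes=5):
        super().__init__()
        def head(out_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(in_ch, out_ch, 1),
            )
        self.seg_head = head(2)          # background / nucleus
        self.hv_head = head(2)           # horizontal and vertical distance maps
        self.cls_head = head(n_classes)  # nucleus type

    def forward(self, feats):            # feats: (B, in_ch, H, W) from the shared decoder
        return self.seg_head(feats), self.hv_head(feats), self.cls_head(feats)

# Toy usage on a 64-channel decoder feature map.
if __name__ == "__main__":
    feats = torch.randn(1, 64, 256, 256)
    seg, hv, cls = ThreeBranchHead()(feats)
    print(seg.shape, hv.shape, cls.shape)
```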
Collapse
Affiliation(s)
- Tengfei Zhao
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
| | - Chong Fu
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
- Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, Shenyang 110819, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, China
| | - Yunjia Tian
- State Grid Liaoning Information and Communication Company, Shenyang 110006, China
| | - Wei Song
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
| | - Chiu-Wing Sham
- School of Computer Science, The University of Auckland, Auckland 1142, New Zealand
| |
Collapse
|
88
|
Ramalakshmi K, Srinivasa Raghavan V. Enhanced prediction using deep neural network-based image classification. THE IMAGING SCIENCE JOURNAL 2023. [DOI: 10.1080/13682199.2023.2183621] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/09/2023]
Affiliation(s)
- K. Ramalakshmi
- Electronics and Communication Engineering, P.S.R. Engineering College, Sivakasi, Tamil Nadu, India
| | - V. Srinivasa Raghavan
- Electronics and Communication Engineering, Theni Kammavar Sangam College of Technology, Theni, Tamil Nadu, India
| |
Collapse
|
89
|
Basu A, Senapati P, Deb M, Rai R, Dhal KG. A survey on recent trends in deep learning for nucleus segmentation from histopathology images. EVOLVING SYSTEMS 2023; 15:1-46. [PMID: 38625364 PMCID: PMC9987406 DOI: 10.1007/s12530-023-09491-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 02/13/2023] [Indexed: 03/08/2023]
Abstract
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets and is considered an intricate task in histopathology image analysis. Segmenting a nucleus is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it hard to separate and distinguish independent nuclei. Deep learning is swiftly gaining ground in nucleus segmentation, attracting many researchers, and the numerous published research articles indicate its efficacy in the field. This paper presents a systematic survey of nucleus segmentation using deep learning over the last five years (2017-2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and emerging research areas.
Collapse
Affiliation(s)
- Anusua Basu
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal India
| | - Pradip Senapati
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal India
| | - Mainak Deb
- Wipro Technologies, Pune, Maharashtra India
| | - Rebika Rai
- Department of Computer Applications, Sikkim University, Sikkim, India
| | - Krishna Gopal Dhal
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal India
| |
Collapse
|
90
|
Haq MM, Ma H, Huang J. NuSegDA: Domain adaptation for nuclei segmentation. Front Big Data 2023; 6:1108659. [PMID: 36936996 PMCID: PMC10018010 DOI: 10.3389/fdata.2023.1108659] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Accepted: 02/13/2023] [Indexed: 03/06/2023] Open
Abstract
The accurate segmentation of nuclei is crucial for cancer diagnosis and further clinical treatment. To successfully train a nuclei segmentation network in a fully supervised manner for a particular type of organ or cancer, we need a dataset with ground-truth annotations. However, such well-annotated nuclei segmentation datasets are rare, and manually labeling an unannotated dataset is an expensive, time-consuming, and tedious process. Consequently, we need a way to train the nuclei segmentation network on an unlabeled dataset. In this paper, we propose a model named NuSegUDA for nuclei segmentation on an unlabeled dataset (target domain). This is achieved by applying an Unsupervised Domain Adaptation (UDA) technique with the help of another labeled dataset (source domain) that may come from a different type of organ, cancer, or source. We apply the UDA technique in both the feature space and the output space. We additionally utilize a reconstruction network and incorporate adversarial learning into it so that source-domain images can be accurately translated to the target domain for further training of the segmentation network. We validate the proposed NuSegUDA on two public nuclei segmentation datasets and obtain significant improvements over the baseline methods. Extensive experiments also verify the contribution of the newly proposed image-reconstruction adversarial loss and the target-translated source supervised loss to the performance boost of NuSegUDA. Finally, considering the scenario in which a small number of annotations are available from the target domain, we extend our work and propose NuSegSSDA, a Semi-Supervised Domain Adaptation (SSDA)-based approach.
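A minimal PyTorch sketch of output-space adversarial adaptation in the spirit described above; the discriminator architecture, loss weighting, and the reconstruction network are not taken from the paper and are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutputSpaceDiscriminator(nn.Module):
    """Sketch of output-space adversarial adaptation (generic, not NuSegUDA's full model).

    The discriminator sees per-pixel class probability maps and tries to tell
    source-domain predictions from target-domain predictions; the segmenter is
    trained to fool it on target images, aligning the two output distributions.
    """
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),   # per-patch source/target logit
        )

    def forward(self, prob_maps):
        return self.net(prob_maps)

def adversarial_losses(disc, src_probs, tgt_probs):
    """Standard BCE-based GAN losses for the discriminator and the segmenter."""
    bce = F.binary_cross_entropy_with_logits
    d_src = disc(src_probs.detach())
    d_tgt = disc(tgt_probs.detach())
    loss_disc = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    d_tgt_for_seg = disc(tgt_probs)               # no detach: gradients reach the segmenter
    loss_seg_adv = bce(d_tgt_for_seg, torch.ones_like(d_tgt_for_seg))
    return loss_disc, loss_seg_adv

if __name__ == "__main__":
    disc = OutputSpaceDiscriminator(n_classes=2)
    src = torch.softmax(torch.randn(2, 2, 128, 128), dim=1)
    tgt = torch.softmax(torch.randn(2, 2, 128, 128), dim=1)
    print([round(l.item(), 3) for l in adversarial_losses(disc, src, tgt)])
```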
Collapse
Affiliation(s)
- Mohammad Minhazul Haq
- Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX, United States
| | - Hehuan Ma
- Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX, United States
| | - Junzhou Huang
- Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX, United States
| |
Collapse
|
91
|
Alkhalaf S, Alturise F, Bahaddad AA, Elnaim BME, Shabana S, Abdel-Khalek S, Mansour RF. Adaptive Aquila Optimizer with Explainable Artificial Intelligence-Enabled Cancer Diagnosis on Medical Imaging. Cancers (Basel) 2023; 15:cancers15051492. [PMID: 36900283 PMCID: PMC10001070 DOI: 10.3390/cancers15051492] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2023] [Revised: 02/19/2023] [Accepted: 02/20/2023] [Indexed: 03/08/2023] Open
Abstract
Explainable Artificial Intelligence (XAI) is a branch of AI that focuses on developing systems that provide understandable and clear explanations for their decisions. In the context of cancer diagnosis on medical imaging, an XAI system uses advanced image analysis methods such as deep learning (DL) to analyze medical images and make a diagnosis, while also providing a clear explanation of how it arrived at that diagnosis. This includes highlighting specific areas of the image that the system recognized as indicative of cancer, as well as describing the underlying AI algorithm and decision-making process. The objective of XAI is to give patients and doctors a better understanding of the system's decision-making process and to increase transparency and trust in the diagnostic method. This study therefore develops an Adaptive Aquila Optimizer with Explainable Artificial Intelligence Enabled Cancer Diagnosis (AAOXAI-CD) technique for medical imaging. The proposed AAOXAI-CD technique aims to accomplish effective colorectal and osteosarcoma cancer classification. To achieve this, the AAOXAI-CD technique first employs the Faster SqueezeNet model for feature vector generation, and the hyperparameters of the Faster SqueezeNet model are tuned using the AAO algorithm. For cancer classification, a majority weighted voting ensemble model is used with three DL classifiers, namely a recurrent neural network (RNN), a gated recurrent unit (GRU), and a bidirectional long short-term memory (BiLSTM) network. Furthermore, the AAOXAI-CD technique incorporates the XAI approach LIME for better understanding and explainability of the black-box method for accurate cancer detection. The AAOXAI-CD methodology was evaluated on medical cancer imaging databases, and the outcomes show that it performs better than other current approaches.
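A minimal NumPy sketch of a weighted majority (soft) voting ensemble such as the one described above; the base classifiers, their weights, and the toy data are assumptions, and the actual RNN/GRU/BiLSTM models are not reproduced.

```python
import numpy as np

def weighted_majority_vote(prob_list, weights):
    """Weighted soft-voting ensemble over per-classifier probability outputs.

    prob_list: list of (n_samples, n_classes) probability arrays, one per base
               classifier (e.g. RNN, GRU, BiLSTM outputs).
    weights:   per-classifier weights, e.g. proportional to validation accuracy.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                      # normalize to sum to 1
    stacked = np.stack(prob_list, axis=0)                  # (n_models, n_samples, n_classes)
    fused = np.tensordot(weights, stacked, axes=(0, 0))    # weighted average of probabilities
    return fused.argmax(axis=1), fused

# Toy usage with three hypothetical classifiers on 4 samples and 3 classes.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = [rng.dirichlet(np.ones(3), size=4) for _ in range(3)]
    labels, fused = weighted_majority_vote(probs, weights=[0.9, 0.85, 0.88])
    print(labels, fused.sum(axis=1))                       # fused rows sum to 1
```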
Collapse
Affiliation(s)
- Salem Alkhalaf
- Department of Computer, College of Science and Arts in Ar Rass, Qassim University, Ar Rass 58892, Saudi Arabia
- Correspondence:
| | - Fahad Alturise
- Department of Computer, College of Science and Arts in Ar Rass, Qassim University, Ar Rass 58892, Saudi Arabia
| | - Adel Aboud Bahaddad
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Bushra M. Elamin Elnaim
- Department of Computer Science, College of Science and Humanities in Al-Sulail, Prince Sattam Bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
| | - Samah Shabana
- Pharmacognosy Department, Faculty of Pharmaceutical Sciences and Drug Manufacturing, Misr University for Science and Technology (MUST), Giza 3236101, Egypt
| | - Sayed Abdel-Khalek
- Department of Mathematics, College of Science, Taif University, Taif 21944, Saudi Arabia
| | - Romany F. Mansour
- Department of Mathematics, Faculty of Science, New Valley University, El-Kharga 1064188, Egypt
| |
Collapse
|
92
|
Dash S, Parida P, Mohanty JR. Illumination robust deep convolutional neural network for medical image classification. Soft comput 2023. [DOI: 10.1007/s00500-023-07918-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/19/2023]
|
93
|
Using Deep Learning with Bayesian–Gaussian Inspired Convolutional Neural Architectural Search for Cancer Recognition and Classification from Histopathological Image Frames. JOURNAL OF HEALTHCARE ENGINEERING 2023. [DOI: 10.1155/2023/4597445] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/11/2023]
Abstract
We propose a neural architectural search model that examines histopathological images to detect the presence of cancer in both lung and colon tissues. In recent times, deep artificial neural networks have had a tremendous impact in healthcare. However, obtaining an optimal artificial neural network model that yields excellent performance during training, evaluation, and inference has been a bottleneck for researchers. Our method uses a Bayesian convolutional neural architectural search algorithm in collaboration with Gaussian processes to provide an efficient neural network architecture for colon and lung cancer classification and recognition. The proposed model uses a Gaussian process to estimate the optimal architectural values, choosing a set of model parameters by exploiting the expected improvement (EI) values, thereby minimizing the number of sampled trials and suggesting the best model architecture. Several experiments were conducted, and excellent performance was obtained on both validation and test data by evaluating the proposed model on a dataset consisting of 25,000 images in five different classes, using convergence and F1-score metrics.
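A minimal sketch of the expected-improvement (EI) acquisition that drives Gaussian-process-based search of this kind; the candidate scores and uncertainties below are assumptions, and the paper's full architecture encoding is not reproduced.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far, xi=0.01):
    """Expected improvement for a maximization objective.

    mu, sigma:   GP posterior mean and standard deviation at candidate points.
    best_so_far: best objective value observed so far (e.g. validation F1-score).
    xi:          small exploration margin.
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    improve = mu - best_so_far - xi
    z = np.divide(improve, sigma, out=np.zeros_like(improve), where=sigma > 0)
    ei = improve * norm.cdf(z) + sigma * norm.pdf(z)
    ei[sigma == 0] = 0.0                                   # no uncertainty, no expected gain
    return ei

# Toy usage: pick the candidate architecture with the highest EI to evaluate next.
if __name__ == "__main__":
    mu = np.array([0.90, 0.93, 0.91])       # hypothetical predicted validation scores
    sigma = np.array([0.02, 0.05, 0.00])    # posterior uncertainty per candidate
    print(expected_improvement(mu, sigma, best_so_far=0.92).argmax())
```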
Collapse
|
94
|
Liu Y, Zhang W. Design and simulation of precision marketing recommendation system based on the NSSVD++ algorithm. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08302-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/10/2023]
|
95
|
A Heuristic Machine Learning-Based Optimization Technique to Predict Lung Cancer Patient Survival. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2023; 2023:4506488. [PMID: 36776617 PMCID: PMC9911240 DOI: 10.1155/2023/4506488] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 08/26/2022] [Accepted: 11/24/2022] [Indexed: 02/05/2023]
Abstract
Cancer has long been a significant threat to human health and well-being and remains one of the greatest obstacles in the history of human disease. The high death rate in cancer patients is primarily due to the complexity of the disease and the wide range of clinical outcomes. Predicting the survival of cancer patients, and increasing the accuracy of that prediction, has become a key issue in cancer research. Many models have been proposed, but most of them use only genetic data or only clinical data to construct survival prediction models. Current survival studies place heavy emphasis on determining whether or not a patient will survive five years; the more personal question of how long an individual lung cancer patient will survive remains unanswered. The proposed technique, combining Naive Bayes and SSA, estimates the overall survival time of patients with lung cancer. Two machine learning challenges are derived from a single customized query. The first is a simple binary question: will the patient survive for more than five years? The second is to develop a five-year survival model using regression analysis. When forecasting how long a lung cancer patient will survive within five years, the technique's predictions are accurate to within a month in terms of mean absolute error (MAE). Several biomarker genes have been associated with lung cancers. The accuracy, recall, and precision achieved by this algorithm are 98.78%, 98.4%, and 98.6%, respectively.
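A minimal scikit-learn sketch of the two derived tasks described above (a binary five-year question plus a within-five-year regression); the Naive Bayes classifier matches the named method, but the SSA feature optimization is not reproduced, and the random-forest regressor and synthetic data are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Task 1 - binary: will the patient survive beyond five years?
# Task 2 - regression: for patients within five years, how many months will they survive?
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                         # hypothetical clinical/genomic features
months = rng.uniform(1, 120, size=200)                 # hypothetical survival time in months
five_year = (months > 60).astype(int)

clf = GaussianNB().fit(X, five_year)                   # task 1: Naive Bayes classifier
within = months <= 60
reg = RandomForestRegressor(random_state=0).fit(X[within], months[within])  # task 2: regression

pred_months = reg.predict(X[within])
print("5-year accuracy:", clf.score(X, five_year))
print("MAE (months):", mean_absolute_error(months[within], pred_months))
```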
Collapse
|
96
|
Zhang W, Zhang J, Wang X, Yang S, Huang J, Yang W, Wang W, Han X. Merging nucleus datasets by correlation-based cross-training. Med Image Anal 2023; 84:102705. [PMID: 36525843 DOI: 10.1016/j.media.2022.102705] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2022] [Revised: 11/16/2022] [Accepted: 11/24/2022] [Indexed: 12/12/2022]
Abstract
Fine-grained nucleus classification is challenging because of high inter-class similarity and intra-class variability. Therefore, a large amount of labeled data is required for training effective nucleus classification models. However, it is challenging to label a large-scale nucleus classification dataset comparable to ImageNet in natural images, considering that high-quality nucleus labeling requires specific domain knowledge. In addition, the existing publicly available datasets are often inconsistently labeled with divergent labeling criteria. Due to this inconsistency, conventional models have to be trained on each dataset separately and work independently to infer their own classification results, limiting their classification performance. To fully utilize all annotated datasets, we formulate the nucleus classification task as a multi-label problem with missing labels so that all datasets can be used in a unified framework. Specifically, we merge all datasets and combine their label sets as multiple labels; thus, each sample has one ground-truth label and several missing labels. We devise a base classification module that is trained on all data but sparsely supervised by the ground-truth labels only. We then exploit the correlation among the different label sets with a label correlation module. This yields two trained base modules, which we further cross-train using both ground-truth labels and pseudo labels for the missing ones. Importantly, data without any ground-truth labels can also be involved in our framework, since we can regard them as data with all labels missing and generate the corresponding pseudo labels. We carefully re-organized multiple publicly available nucleus classification datasets, converted them into a uniform format, and tested the proposed framework on them. Experimental results show substantial improvement compared to state-of-the-art methods. The code and data are available at https://w-h-zhang.github.io/projects/dataset_merging/dataset_merging.html.
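A minimal PyTorch sketch of sparse supervision over a merged label set, where the loss is computed only at known label positions; the full cross-training scheme, label correlation module, and pseudo-labeling are not reproduced, and the toy shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def masked_multilabel_bce(logits, targets, known_mask):
    """Binary cross-entropy over a merged label space, restricted to known labels.

    logits:     (B, L) raw scores over the union of all datasets' label sets.
    targets:    (B, L) 0/1 labels; entries at unknown positions may hold any value.
    known_mask: (B, L) 1 where a label is provided (ground truth or pseudo label), else 0.
    """
    per_label = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    masked = per_label * known_mask
    return masked.sum() / known_mask.sum().clamp(min=1)

# Toy usage: 2 samples, a merged space of 6 labels, each sample with one known label.
if __name__ == "__main__":
    logits = torch.randn(2, 6, requires_grad=True)
    targets = torch.zeros(2, 6)
    mask = torch.zeros(2, 6)
    targets[0, 1] = 1.0; mask[0, 1] = 1.0        # sample 0: only label 1 is annotated (positive)
    mask[1, 4] = 1.0                             # sample 1: label 4 is known to be negative
    loss = masked_multilabel_bce(logits, targets, mask)
    loss.backward()
    print(float(loss))
```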
Collapse
Affiliation(s)
- Wenhua Zhang
- Department of Computer Science, The University of Hong Kong, Hong Kong, China
| | - Jun Zhang
- Tencent AI Lab, Shenzhen, Guangdong, China
| | - Xiyue Wang
- Tencent AI Lab, Shenzhen, Guangdong, China
| | - Sen Yang
- Tencent AI Lab, Shenzhen, Guangdong, China
| | | | - Wei Yang
- Tencent AI Lab, Shenzhen, Guangdong, China
| | | | - Xiao Han
- Tencent AI Lab, Shenzhen, Guangdong, China.
| |
Collapse
|
97
|
Dual Consistency Semi-supervised Nuclei Detection via Global Regularization and Local Adversarial Learning. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
|
98
|
Mou T, Liang J, Vu TN, Tian M, Gao Y. A Comprehensive Landscape of Imaging Feature-Associated RNA Expression Profiles in Human Breast Tissue. SENSORS (BASEL, SWITZERLAND) 2023; 23:1432. [PMID: 36772473 PMCID: PMC9921444 DOI: 10.3390/s23031432] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 01/15/2023] [Accepted: 01/20/2023] [Indexed: 06/18/2023]
Abstract
The expression abundance of transcripts in nondiseased breast tissue varies among individuals, and association studies of genotypes and imaging phenotypes may help us understand this individual variation. Since existing reports mainly focus on tumors or lesion areas, the heterogeneity of pathological image features and their correlations with RNA expression profiles in nondiseased tissue remain unclear. The aim of this study is to discover the associations between nucleus features and transcriptome-wide RNAs. We analyzed both microscopic histology images and RNA-sequencing data of 456 breast tissues from the Genotype-Tissue Expression (GTEx) project and constructed an automatic computational framework. We classified all samples into four clusters based on their nucleus morphological features and discovered feature-specific gene sets. Biological pathway analysis was performed on each gene set. The proposed framework quantitatively evaluates the morphological characteristics of the cell nucleus and identifies the associated genes. We found image features that capture population variation in breast tissue and are associated with RNA expression, suggesting that variation in expression patterns contributes to population variation in the morphological traits of breast tissue. This study provides a comprehensive transcriptome-wide view of imaging-feature-specific RNA expression for healthy breast tissue. Such a framework could also be used to understand the connection between RNA expression and morphology in other tissues and organs. Pathway analysis indicated that the identified gene sets are involved in specific biological processes, such as immune processes.
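A minimal sketch of the cluster-then-associate idea described above, clustering samples by nucleus morphology and ranking genes whose expression differs across clusters; the morphology features, expression values, and test choice (Kruskal-Wallis) are assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import kruskal

rng = np.random.default_rng(0)
nucleus_features = rng.normal(size=(456, 8))          # hypothetical per-sample morphology features
expression = rng.normal(size=(456, 100))              # hypothetical per-sample RNA expression (100 genes)

# Group samples into four morphology clusters, as the study does.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(nucleus_features)

# Rank genes whose expression differs across the four morphology clusters.
pvals = np.array([
    kruskal(*[expression[clusters == c, g] for c in range(4)]).pvalue
    for g in range(expression.shape[1])
])
feature_specific_genes = np.argsort(pvals)[:10]        # candidate gene set for pathway analysis
print(feature_specific_genes, pvals[feature_specific_genes].round(3))
```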
Collapse
Affiliation(s)
- Tian Mou
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518000, China
| | - Jianwen Liang
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518000, China
| | - Trung Nghia Vu
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, SE 17177 Stockholm, Sweden
| | - Mu Tian
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518000, China
| | - Yi Gao
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518000, China
| |
Collapse
|
99
|
Karagoz MA, Akay B, Basturk A, Karaboga D, Nalbantoglu OU. An unsupervised transfer learning model based on convolutional auto encoder for non-alcoholic steatohepatitis activity scoring and fibrosis staging of liver histopathological images. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08252-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
|
100
|
Xie J, Luo X, Deng X, Tang Y, Tian W, Cheng H, Zhang J, Zou Y, Guo Z, Xie X. Advances in artificial intelligence to predict cancer immunotherapy efficacy. Front Immunol 2023; 13:1076883. [PMID: 36685496 PMCID: PMC9845588 DOI: 10.3389/fimmu.2022.1076883] [Citation(s) in RCA: 41] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2022] [Accepted: 12/09/2022] [Indexed: 01/05/2023] Open
Abstract
Tumor immunotherapy, particularly the use of immune checkpoint inhibitors, has yielded impressive clinical benefits. It is therefore critical to accurately screen individuals for immunotherapy sensitivity and to forecast its efficacy. With the application of artificial intelligence (AI) in the medical field in recent years, an increasing number of studies have indicated that the efficacy of immunotherapy can be better anticipated with the help of AI technology, moving toward precision medicine. This article focuses on current prediction models based on information from histopathological slides, imaging-omics, genomics, and proteomics, and reviews their research progress and applications. Furthermore, we discuss the challenges AI currently faces in the field of immunotherapy, as well as future directions for improvement, to provide a point of reference for the early implementation of AI-assisted diagnosis and treatment systems.
Collapse
Affiliation(s)
- Jindong Xie
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Xiyuan Luo
- School of Medicine, Sun Yat-sen University, Guangzhou, China
| | - Xinpei Deng
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Yuhui Tang
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Wenwen Tian
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Hui Cheng
- School of Medicine, Sun Yat-sen University, Guangzhou, China
| | - Junsheng Zhang
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Yutian Zou
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Zhixing Guo
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| | - Xiaoming Xie
- Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
| |
Collapse
|