1. Li B, Liu Z, Zhang S, Liu X, Sun C, Liu J, Qiu B, Tian J. NuHTC: A hybrid task cascade for nuclei instance segmentation and classification. Med Image Anal 2025;103:103595. [PMID: 40294567] [DOI: 10.1016/j.media.2025.103595]
Abstract
Nuclei instance segmentation and classification of hematoxylin and eosin (H&E) stained digital pathology images are essential for downstream cancer diagnosis and prognosis tasks. Previous work mainly focused on bottom-up methods that use a single-level feature map for segmenting nuclei instances, whereas multilevel feature maps are better suited to nuclei instances of various sizes and types. In this paper, we develop an effective top-down nuclei instance segmentation and classification framework (NuHTC) based on a hybrid task cascade (HTC). NuHTC has two new components: a watershed proposal network (WSPN) and a hybrid feature extractor (HFE). The WSPN provides additional proposals for the region proposal network, leading the model to predict bounding boxes more precisely. The HFE at the region of interest (RoI) alignment stage better utilizes both high-level global and low-level semantic features, guiding NuHTC to learn nuclei instance features with less intraclass variance. We conduct extensive experiments on four public multiclass nuclei instance segmentation datasets. The quantitative results demonstrate NuHTC's superiority in both instance segmentation and classification compared to other state-of-the-art methods.
Affiliation(s)
- Bao Li
- Center for Biomedical Imaging, University of Science and Technology of China, Hefei, Anhui 230026, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Zhenyu Liu
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Song Zhang
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiangyu Liu
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Caixia Sun
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing 100191, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing 100191, China
- Jiangang Liu
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing 100191, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing 100191, China
- Bensheng Qiu
- Center for Biomedical Imaging, University of Science and Technology of China, Hefei, Anhui 230026, China
- Jie Tian
- Center for Biomedical Imaging, University of Science and Technology of China, Hefei, Anhui 230026, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing 100191, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing 100191, China
2. Vong CK, Wang A, Dragunow M, Park TIH, Shim V. Brain tumour histopathology through the lens of deep learning: A systematic review. Comput Biol Med 2025;186:109642. [PMID: 39787663] [DOI: 10.1016/j.compbiomed.2024.109642]
Abstract
PROBLEM: Machine learning (ML)/deep learning (DL) techniques have evolved to tackle increasingly complex diseases, but they have been used relatively little in glioblastoma (GBM) histopathological studies, which could benefit greatly given the disease's complex pathogenesis. AIM: To conduct a systematic review investigating how ML/DL techniques have influenced the progression of brain tumour histopathological research, particularly in GBM. METHODS: 54 eligible studies were collected from the PubMed and ScienceDirect databases. From each, we extracted the types of brain tumour/s used, the types of -omics data combined with histopathological data, the origins of the data, the types of ML/DL and their training and evaluation methodologies, and the ML/DL task each study set out to perform, to identify trends in GBM-related ML/DL-based research. RESULTS: Only 8 of the eligible GBM-related studies utilised ML/DL methodologies to gain deeper insights into GBM pathogenesis by contextualising histological data with -omics data; notably, these studies have been published more recently. The most popular ML/DL models used in GBM-related research are the SVM classifier and ResNet-based CNN architectures. Still, a considerable number of studies failed to state their training and evaluation methodologies clearly. CONCLUSION: There is a growing trend towards using ML/DL approaches to uncover relationships between biological and histopathological data to bring new insights into GBM, thus pushing GBM research forward. Much work still needs to be done to properly report ML/DL methodologies, to showcase the models' robustness and generalizability, and to ensure the models are reproducible.
Affiliation(s)
- Chun Kiet Vong
- Auckland Bioengineering Institute, The University of Auckland, New Zealand; Centre for Brain Research, The University of Auckland, New Zealand
- Alan Wang
- Auckland Bioengineering Institute, The University of Auckland, New Zealand; Centre for Brain Research, The University of Auckland, New Zealand; Faculty of Medical and Health Sciences, The University of Auckland, New Zealand
- Mike Dragunow
- Centre for Brain Research, The University of Auckland, New Zealand; Department of Pharmacology, The Faculty of Medical and Health Sciences, The University of Auckland, New Zealand
- Thomas I-H Park
- Centre for Brain Research, The University of Auckland, New Zealand; Department of Pharmacology, The Faculty of Medical and Health Sciences, The University of Auckland, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, The University of Auckland, New Zealand
3. Jensen MP, Qiang Z, Khan DZ, Stoyanov D, Baldeweg SE, Jaunmuktane Z, Brandner S, Marcus HJ. Artificial intelligence in histopathological image analysis of central nervous system tumours: A systematic review. Neuropathol Appl Neurobiol 2024;50:e12981. [PMID: 38738494] [DOI: 10.1111/nan.12981]
Abstract
The convergence of digital pathology and artificial intelligence could assist histopathology image analysis by providing tools for rapid, automated morphological analysis. This systematic review explores the use of artificial intelligence for histopathological image analysis of digitised central nervous system (CNS) tumour slides. Comprehensive searches were conducted across EMBASE, Medline and the Cochrane Library up to June 2023 using relevant keywords. Sixty-eight suitable studies were identified and qualitatively analysed. The risk of bias was evaluated using the Prediction model Risk of Bias Assessment Tool (PROBAST) criteria. All the studies were retrospective and preclinical. Gliomas were the most frequently analysed tumour type. The majority of studies used convolutional neural networks or support vector machines, and the most common goal of the model was for tumour classification and/or grading from haematoxylin and eosin-stained slides. The majority of studies were conducted when legacy World Health Organisation (WHO) classifications were in place, which at the time relied predominantly on histological (morphological) features but have since been superseded by molecular advances. Overall, there was a high risk of bias in all studies analysed. Persistent issues included inadequate transparency in reporting the number of patients and/or images within the model development and testing cohorts, absence of external validation, and insufficient recognition of batch effects in multi-institutional datasets. Based on these findings, we outline practical recommendations for future work including a framework for clinical implementation, in particular, better informing the artificial intelligence community of the needs of the neuropathologist.
Affiliation(s)
- Melanie P Jensen
- Pathology Department, Charing Cross Hospital, Imperial College Healthcare NHS Trust, London, UK
- Briscoe Lab, The Francis Crick Institute, London, UK
- Zekai Qiang
- School of Medicine and Population Health, University of Sheffield Medical School, Sheffield, UK
- Danyal Z Khan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Computer Science, University College London, London, UK
- Danail Stoyanov
- Department of Computer Science, University College London, London, UK
- Stephanie E Baldeweg
- Department of Diabetes and Endocrinology, University College London Hospitals, London, UK
- Centre for Obesity and Metabolism, Department of Experimental and Translational Medicine, Division of Medicine, University College London, London, UK
- Zane Jaunmuktane
- Division of Neuropathology, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Neurodegenerative Disease, University College London Queen Square Institute of Neurology, London, UK
- Department of Clinical and Movement Neurosciences, University College London Queen Square Institute of Neurology, London, UK
- Sebastian Brandner
- Division of Neuropathology, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Neurodegenerative Disease, University College London Queen Square Institute of Neurology, London, UK
- Hani J Marcus
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Computer Science, University College London, London, UK
4. Yücel Z, Akal F, Oltulu P. Automated AI-based grading of neuroendocrine tumors using Ki-67 proliferation index: comparative evaluation and performance analysis. Med Biol Eng Comput 2024;62:1899-1909. [PMID: 38409645] [DOI: 10.1007/s11517-024-03045-8]
Abstract
Early detection is critical for successfully diagnosing cancer, and timely analysis of diagnostic tests is increasingly important. In the context of neuroendocrine tumors, the Ki-67 proliferation index serves as a fundamental biomarker, aiding pathologists in grading and diagnosing these tumors from histopathological images; the appropriate treatment plan for the patient is determined by the tumor grade. An artificial intelligence-based method is proposed to aid pathologists in the automated calculation and grading of the Ki-67 proliferation index. The proposed system first performs preprocessing to enhance image quality. Segmentation is then performed using the U-Net architecture, a deep learning algorithm, to separate the nuclei from the background. The identified nuclei are evaluated as Ki-67 positive or negative based on basic color space information and other features. The Ki-67 proliferation index is then calculated, and the neuroendocrine tumor is graded accordingly. The proposed system's performance was evaluated on a dataset obtained from the Department of Pathology at Meram Faculty of Medicine Hospital, Necmettin Erbakan University. Compared against the pathologist's report, the proposed system achieved an accuracy of 95% in tumor grading.
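The final steps of the pipeline described above (counting positive nuclei, computing the index, mapping it to a grade) reduce to simple arithmetic. A minimal sketch follows; it is not the authors' implementation, and the 3%/20% cut-offs are the commonly cited WHO thresholds for neuroendocrine tumors rather than values stated in the abstract:

```python
# Illustrative sketch only (not the paper's code). Once nuclei have been
# classified as Ki-67 positive or negative, the proliferation index and a
# grade follow directly. The 3% / 20% cut-offs are assumed WHO-style
# thresholds, not values taken from the study.

def ki67_index(positive_nuclei: int, total_nuclei: int) -> float:
    """Ki-67 proliferation index as a percentage of positive nuclei."""
    if total_nuclei == 0:
        raise ValueError("no nuclei detected")
    return 100.0 * positive_nuclei / total_nuclei

def grade(index_percent: float) -> str:
    """Map a Ki-67 index to a tumor grade (illustrative cut-offs)."""
    if index_percent < 3.0:
        return "G1"
    if index_percent <= 20.0:
        return "G2"
    return "G3"

print(ki67_index(45, 500))  # 9.0
print(grade(9.0))           # G2
```

In the paper's system, the positive/negative counts would come from the U-Net segmentation and color-based classification stages; here they are passed in as plain integers.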
Affiliation(s)
- Zehra Yücel
- Necmettin Erbakan University, Department of Computer Technologies, Konya, Turkey
- Hacettepe University, Graduate School of Science and Engineering, Ankara, Turkey
- Fuat Akal
- Hacettepe University, Faculty of Engineering, Department of Computer Engineering, Ankara, Turkey
- Pembe Oltulu
- Necmettin Erbakan University, Faculty of Medicine, Department of Pathology, Konya, Turkey
5. Peng Y, Yi X, Zhang D, Zhang L, Tian Y, Zhou Z. ConvMedSegNet: A multi-receptive field depthwise convolutional neural network for medical image segmentation. Comput Biol Med 2024;176:108559. [PMID: 38759586] [DOI: 10.1016/j.compbiomed.2024.108559]
Abstract
To achieve highly precise medical image segmentation, this paper presents ConvMedSegNet, a novel convolutional neural network with a U-shaped architecture that integrates two crucial modules: the multi-receptive field depthwise convolution module (MRDC) and the guided fusion module (GF). The MRDC module's primary function is to capture texture information of varying sizes through multi-scale convolutional layers; this information is then used to strengthen correlations in the global feature data by expanding the network's width. This strategy preserves the inherent inductive biases of convolution while amplifying the network's ability to model dependencies on global information. Meanwhile, the GF module implements multi-scale feature fusion by connecting the encoder and decoder components. It transfers information between features separated by substantial distances through guided fusion, effectively minimizing the loss of critical data. In experiments on public medical image datasets such as BUSI and ISIC2018, ConvMedSegNet outperforms several advanced competing methods. The code is available at https://github.com/csust-yixin/ConvMedSegNet.
Affiliation(s)
- Yuxu Peng
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Xin Yi
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Dengyong Zhang
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Yuehong Tian
- Changkuangao Beijing Technology Co., Ltd, Beijing 101100, China
- Zhifeng Zhou
- Wenzhou University Library, Wenzhou, 325035, China
6. Alahmari SS, Goldgof D, Hall LO, Mouton PR. A Review of Nuclei Detection and Segmentation on Microscopy Images Using Deep Learning With Applications to Unbiased Stereology Counting. IEEE Trans Neural Netw Learn Syst 2024;35:7458-7477. [PMID: 36327184] [DOI: 10.1109/tnnls.2022.3213407]
Abstract
The detection and segmentation of stained cells and nuclei are essential prerequisites for subsequent quantitative research for many diseases. Recently, deep learning has shown strong performance in many computer vision problems, including solutions for medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease with an emphasis on deep learning approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation of microscopic images from stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.
7. Xu Z, Lim S, Lu Y, Jung SW. Reversed domain adaptation for nuclei segmentation-based pathological image classification. Comput Biol Med 2024;168:107726. [PMID: 37984206] [DOI: 10.1016/j.compbiomed.2023.107726]
Abstract
Although digital pathology has provided a new paradigm for modern medicine, the insufficiency of annotations for training remains a significant challenge. Due to the weak generalization abilities of deep-learning models, their performance is notably constrained in domains without sufficient annotations. Our research aims to enhance the model's generalization ability through domain adaptation, improving prediction on target-domain data while using only source-domain labels for training. To further enhance classification performance, we introduce nuclei segmentation to provide the classifier with more diagnostically valuable nuclei information. In contrast to general domain adaptation, which generates source-like results in the target domain, we propose a reversed domain adaptation strategy that generates target-like results in the source domain, making the classification model more robust to inaccurate segmentation results. The proposed reversed unsupervised domain adaptation can effectively reduce the disparities in nuclei segmentation between the source and target domains without any target domain labels, leading to improved image classification performance in the target domain. The whole framework is designed in a unified manner so that the segmentation and classification modules can be trained jointly. Extensive experiments demonstrate that the proposed method significantly improves classification performance in the target domain and outperforms existing general domain adaptation methods.
Affiliation(s)
- Zhixin Xu
- Department of Electrical Engineering, Korea University, Seoul, Republic of Korea
- Seohoon Lim
- Department of Electrical Engineering, Korea University, Seoul, Republic of Korea
- Yucheng Lu
- Education and Research Center for Socialware IT, Korea University, Seoul, Republic of Korea
- Seung-Won Jung
- Department of Electrical Engineering, Korea University, Seoul, Republic of Korea
8. Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023;167:107617. [PMID: 37918261] [DOI: 10.1016/j.compbiomed.2023.107617]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information which can help us understand the working mechanisms of the brain. However, it is a challenging task to process and analyze these data because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in the cells and tissues, and imaging artifacts. Due to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges and they perform superiorly in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China; Research Institute of Hunan University in Chongqing, Chongqing, 401135, China
- Weixun Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
9. Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023;90:102969. [PMID: 37802010] [DOI: 10.1016/j.media.2023.102969]
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain in real applications. In this paper, we study a more realistic yet challenging UDA situation in which (unlabeled) target training data is limited, a setting previous work has seldom explored for cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist generator learning, and explore both single-directional and bidirectional task-augmented GANs for domain adaptation. We then further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation, in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance compared with the reference baseline, and it is superior to or on par with fully supervised models trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
10. Du G, Zhang P, Guo J, Pang X, Kan G, Zeng B, Chen X, Liang J, Zhan Y. MF-Net: Automated Muscle Fiber Segmentation From Immunofluorescence Images Using a Local-Global Feature Fusion Network. J Digit Imaging 2023;36:2411-2426. [PMID: 37714969] [PMCID: PMC10584774] [DOI: 10.1007/s10278-023-00890-1]
Abstract
Histological assessment of skeletal muscle slices is very important for the accurate evaluation of weightlessness-induced muscle atrophy, and accurate identification and segmentation of muscle fiber boundaries is an essential prerequisite for evaluating skeletal muscle fiber atrophy. However, segmenting muscle fibers from immunofluorescence images poses many challenges, including the low contrast of fiber boundaries and the influence of background noise. Traditional convolutional neural network-based segmentation methods are limited in capturing global information and therefore cannot achieve ideal segmentation results. In this paper, we propose a muscle fiber segmentation network (MF-Net) for effective segmentation of macaque muscle fibers in immunofluorescence images. The network adopts a dual encoder branch composed of convolutional neural networks and a transformer to capture local and global feature information in the immunofluorescence image, highlight foreground features, and suppress irrelevant background noise. In addition, a low-level feature decoder module is proposed to capture more global context by combining different image scales, supplementing missing detail pixels. In this study, comprehensive experiments were carried out on immunofluorescence datasets from six macaque weightlessness models and compared with state-of-the-art deep learning models. Five segmentation indices show that the proposed automatic segmentation method can be accurately and effectively applied to muscle fiber segmentation in shank immunofluorescence images.
Affiliation(s)
- Peng Zhang
- China Astronaut Research and Training Center, Beijing, 100094, People's Republic of China
- Jianzhong Guo
- Institute of Applied Acoustics, School of Physics and Information Technology, Shaanxi Normal University, Xi'an, 710062, China
- Xiangsheng Pang
- China Astronaut Research and Training Center, Beijing, 100094, People's Republic of China
- Guanghan Kan
- China Astronaut Research and Training Center, Beijing, 100094, People's Republic of China
- Bin Zeng
- China Astronaut Research and Training Center, Beijing, 100094, People's Republic of China
- Xiaoping Chen
- China Astronaut Research and Training Center, Beijing, 100094, People's Republic of China
- Jimin Liang
- School of Electronic Engineering, Xidian University, Xi'an, Shaanxi, 710071, China
- Yonghua Zhan
- School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi'an, Shaanxi, 710126, China
Collapse
|
11
|
Abu-Khudir R, Hafsa N, Badr BE. Identifying Effective Biomarkers for Accurate Pancreatic Cancer Prognosis Using Statistical Machine Learning. Diagnostics (Basel) 2023; 13:3091. [PMID: 37835833 PMCID: PMC10572229 DOI: 10.3390/diagnostics13193091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Revised: 09/08/2023] [Accepted: 09/26/2023] [Indexed: 10/15/2023] Open
Abstract
Pancreatic cancer (PC) has one of the lowest survival rates among all major types of cancer. Consequently, it is one of the leading causes of mortality worldwide. Serum biomarkers historically correlate well with the early prognosis of post-surgical complications of PC. However, attempts to identify an effective biomarker panel for the successful prognosis of PC were almost non-existent in the current literature. The current study investigated the roles of various serum biomarkers including carbohydrate antigen 19-9 (CA19-9), chemokine (C-X-C motif) ligand 8 (CXCL-8), procalcitonin (PCT), and other relevant clinical data for identifying PC progression, classified into sepsis, recurrence, and other post-surgical complications, among PC patients. The most relevant biochemical and clinical markers for PC prognosis were identified using a random-forest-powered feature elimination method. Using this informative biomarker panel, the selected machine-learning (ML) classification models demonstrated highly accurate results for classifying PC patients into three complication groups on independent test data. The superiority of the combined biomarker panel (Max AUC-ROC = 100%) was further established over using CA19-9 features exclusively (Max AUC-ROC = 75%) for the task of classifying PC progression. This novel study demonstrates the effectiveness of the combined biomarker panel in successfully diagnosing PC progression and other relevant complications among Egyptian PC survivors.
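The panel comparison above is reported in AUC-ROC terms (100% for the combined panel vs. 75% for CA19-9 alone). As a reminder of what that metric measures, a minimal sketch follows; it uses made-up labels and scores, not the study's data, and computes AUC as the probability that a randomly chosen positive case scores above a randomly chosen negative one (ties counted as one half):

```python
# Illustrative AUC-ROC sketch (not the study's code or data).
# AUC equals the probability that a random positive scores above a random
# negative, with tied scores contributing one half.

def auc_roc(labels, scores):
    """labels: 1 for positive, 0 for negative; scores: classifier outputs."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect separator ranks every positive above every negative (AUC = 1.0),
# as reported for the combined panel; imperfect separation yields less.
print(auc_roc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # 1.0
print(auc_roc([1, 0, 1, 0], [0.9, 0.8, 0.3, 0.1]))  # 0.75
```

In practice a library routine (e.g. a scikit-learn metric) would be used on real model scores; the point here is only the ranking interpretation of the reported percentages.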
Affiliation(s)
- Rasha Abu-Khudir
- Chemistry Department, College of Science, King Faisal University, P.O. Box 380, Hofuf 31982, Al-Ahsa, Saudi Arabia
- Chemistry Department, Biochemistry Branch, Faculty of Science, Tanta University, Tanta 31527, Egypt
- Noor Hafsa
- Computer Science Department, College of Computer Science and Information Technology, King Faisal University, P.O. Box 400, Hofuf 31982, Al-Ahsa, Saudi Arabia
- Badr E. Badr
- Egyptian Ministry of Labor, Training and Research Department, Tanta 31512, Egypt
- Botany Department, Microbiology Unit, Faculty of Science, Tanta University, Tanta 31527, Egypt
12
Meng X, Zou T. Clinical applications of graph neural networks in computational histopathology: A review. Comput Biol Med 2023; 164:107201. [PMID: 37517325 DOI: 10.1016/j.compbiomed.2023.107201] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Revised: 06/10/2023] [Accepted: 06/19/2023] [Indexed: 08/01/2023]
Abstract
Pathological examination is the optimal approach for diagnosing cancer, and advances in digital imaging technologies have spurred the emergence of computational histopathology. The objective of computational histopathology is to assist in clinical tasks through image processing and analysis techniques. In the early stages, histopathology images were analyzed by extracting mathematical features, but the performance of these models was unsatisfactory. With the development of artificial intelligence (AI) technologies, traditional machine learning methods were applied in this field. Although model performance improved, issues such as poor generalization and tedious manual feature extraction remained. Subsequently, the introduction of deep learning techniques effectively addressed these problems. However, models based on traditional convolutional architectures could not adequately capture the contextual information and deep biological features in histopathology images. Owing to their special structure, graphs are highly suitable for feature extraction from tissue histopathology images and have achieved promising performance in numerous studies. In this article, we review existing graph-based methods in computational histopathology and propose a novel and more comprehensive graph construction approach. Additionally, we categorize the methods and techniques in computational histopathology according to different learning paradigms. We summarize the common clinical applications of graph-based methods in computational histopathology. Furthermore, we discuss the core concepts in this field and highlight the current challenges and future research directions.
Affiliation(s)
- Xiangyan Meng
- Xi'an Technological University, Xi'an, Shaanxi, 710021, China.
- Tonghui Zou
- Xi'an Technological University, Xi'an, Shaanxi, 710021, China.
13
Iqbal S, Qureshi AN, Alhussein M, Aurangzeb K, Kadry S. A Novel Heteromorphous Convolutional Neural Network for Automated Assessment of Tumors in Colon and Lung Histopathology Images. Biomimetics (Basel) 2023; 8:370. [PMID: 37622975 PMCID: PMC10452605 DOI: 10.3390/biomimetics8040370] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 07/31/2023] [Accepted: 08/03/2023] [Indexed: 08/26/2023] Open
Abstract
The automated assessment of tumors in medical image analysis encounters challenges due to the resemblance of colon and lung tumors to non-mitotic nuclei and their heteromorphic characteristics. An accurate assessment of tumor nuclei presence is crucial for determining tumor aggressiveness and grading. This paper proposes a new method called ColonNet, a heteromorphous convolutional neural network (CNN) with a feature grafting methodology categorically configured for analyzing mitotic nuclei in colon and lung histopathology images. The ColonNet model consists of two stages: first, identifying potential mitotic patches within the histopathological imaging areas, and second, categorizing these patches into squamous cell carcinomas, adenocarcinomas (lung), benign (lung), benign (colon), and adenocarcinomas (colon) based on the model's guidelines. We develop and employ our deep CNNs, each capturing distinct structural, textural, and morphological properties of tumor nuclei, to construct the heteromorphous deep CNN. The performance of the proposed ColonNet model is analyzed by comparison with state-of-the-art CNNs. The results demonstrate that our model surpasses others on the test set, achieving an impressive F1 score of 0.96, sensitivity and specificity of 0.95, and an area under the accuracy curve of 0.95. These outcomes underscore our hybrid model's superior performance, excellent generalization, and accuracy, highlighting its potential as a valuable tool to support pathologists in diagnostic activities.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore 54000, Pakistan
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore 54000, Pakistan
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
14
Liu Y, Lawson BC, Huang X, Broom BM, Weinstein JN. Prediction of Ovarian Cancer Response to Therapy Based on Deep Learning Analysis of Histopathology Images. Cancers (Basel) 2023; 15:4044. [PMID: 37627071 PMCID: PMC10452505 DOI: 10.3390/cancers15164044] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2023] [Revised: 08/06/2023] [Accepted: 08/07/2023] [Indexed: 08/27/2023] Open
Abstract
BACKGROUND Ovarian cancer remains the leading gynecological cause of cancer mortality. Predicting the sensitivity of ovarian cancer to chemotherapy at the time of pathological diagnosis is a goal of precision medicine research that we have addressed in this study using a novel deep-learning neural network framework to analyze the histopathological images. METHODS We have developed a method based on the Inception V3 deep learning algorithm that complements other methods for predicting response to standard platinum-based therapy of the disease. For the study, we used histopathological H&E images (pre-treatment) of high-grade serous carcinoma from The Cancer Genome Atlas (TCGA) Genomic Data Commons portal to train the Inception V3 convolutional neural network system to predict whether cancers had independently been labeled as sensitive or resistant to subsequent platinum-based chemotherapy. The trained model was then tested using data from patients left out of the training process. We used receiver operating characteristic (ROC) and confusion matrix analyses to evaluate model performance and Kaplan-Meier survival analysis to correlate the predicted probability of resistance with patient outcome. Finally, occlusion sensitivity analysis was piloted as a start toward correlating histopathological features with response. RESULTS The study dataset consisted of 248 patients with stage 2 to 4 serous ovarian cancer. For a held-out test set of forty patients, the trained deep learning network model distinguished sensitive from resistant cancers with an area under the curve (AUC) of 0.846 ± 0.009 (SE). The probability of resistance calculated from the deep-learning network was also significantly correlated with patient survival and progression-free survival. In confusion matrix analysis, the network classifier achieved an overall predictive accuracy of 85% with a sensitivity of 73% and specificity of 90% for this cohort based on the Youden-J cut-off. Stage, grade, and patient age were not statistically significant for this cohort size. Occlusion sensitivity analysis suggested histopathological features learned by the network that may be associated with sensitivity or resistance to the chemotherapy, but multiple marker studies will be necessary to follow up on those preliminary results. CONCLUSIONS This type of analysis has the potential, if further developed, to improve the prediction of response to therapy of high-grade serous ovarian cancer and perhaps be useful as a factor in deciding between platinum-based and other therapies. More broadly, it may increase our understanding of the histopathological variables that predict response and may be adaptable to other cancer types and imaging modalities.
Affiliation(s)
- Yuexin Liu
- Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Barrett C. Lawson
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Xuelin Huang
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Bradley M. Broom
- Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- John N. Weinstein
- Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Department of Systems Biology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
15
Martos O, Hoque MZ, Keskinarkaus A, Kemi N, Näpänkangas J, Eskuri M, Pohjanen VM, Kauppila JH, Seppänen T. Optimized detection and segmentation of nuclei in gastric cancer images using stain normalization and blurred artifact removal. Pathol Res Pract 2023; 248:154694. [PMID: 37494804 DOI: 10.1016/j.prp.2023.154694] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/04/2023] [Revised: 07/03/2023] [Accepted: 07/13/2023] [Indexed: 07/28/2023]
Abstract
Histological analysis with microscopy is the gold standard for diagnosing and staging cancer, in which slides or whole-slide images are analyzed by pathologists for cell morphological and spatial features. The nuclei of cancerous cells are characterized by nonuniform chromatin distribution, irregular shapes, and varying sizes. As nucleus area and shape alone carry prognostic value, detection and segmentation of nuclei are among the most important steps in disease grading. However, evaluation of nuclei is a laborious, time-consuming, and subjective process with large variation among pathologists. Recent advances in digital pathology have enabled significant applications in nuclei detection, segmentation, and classification, but automated image analysis is greatly affected by staining factors, scanner variability, and imaging artifacts, requiring robust image preprocessing, normalization, and segmentation methods for clinically satisfactory results. In this paper, we aimed to evaluate and compare the digital image analysis techniques used in clinical pathology and research in the setting of gastric cancer. A literature review was conducted to evaluate potential methods of improving nuclei detection. Digitized images of 35 patients from a retrospective cohort of gastric adenocarcinoma at Oulu University Hospital in 1987-2016 were annotated for nuclei (n = 9085) by expert pathologists, and 14 images of different cancer types from the public TCGA dataset with annotated nuclei (n = 7000) were used as a comparison to evaluate applicability to other cancer types. The detection and segmentation accuracy achieved with the selected color normalization and stain separation techniques was compared between the methods. The extracted information can be supplemented by the patient's medical data and fed to existing statistical clinical tools or subjected to subsequent AI-assisted classification and prediction models. The performance of each method is evaluated by several metrics against the annotations made by expert pathologists. An F1-measure of 0.854 ± 0.068 is achieved with color normalization for the gastric cancer dataset, and 0.907 ± 0.044 with color deconvolution for the public dataset, showing results comparable to earlier state-of-the-art works. The developed techniques serve as a basis for further research on the application and interpretability of AI-assisted tools for gastric cancer diagnosis.
Affiliation(s)
- Oleg Martos
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Md Ziaul Hoque
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Anja Keskinarkaus
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Niko Kemi
- Department of Pathology, Oulu University Hospital, Finland, and University of Oulu, Finland
- Juha Näpänkangas
- Department of Pathology, Oulu University Hospital, Finland, and University of Oulu, Finland
- Maarit Eskuri
- Department of Pathology, Oulu University Hospital, Finland, and University of Oulu, Finland
- Vesa-Matti Pohjanen
- Department of Pathology, Oulu University Hospital, Finland, and University of Oulu, Finland
- Joonas H Kauppila
- Department of Surgery, Oulu University Hospital, Finland, and University of Oulu, Finland
- Tapio Seppänen
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
16
Moscalu M, Moscalu R, Dascălu CG, Țarcă V, Cojocaru E, Costin IM, Țarcă E, Șerban IL. Histopathological Images Analysis and Predictive Modeling Implemented in Digital Pathology-Current Affairs and Perspectives. Diagnostics (Basel) 2023; 13:2379. [PMID: 37510122 PMCID: PMC10378281 DOI: 10.3390/diagnostics13142379] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2023] [Revised: 07/11/2023] [Accepted: 07/12/2023] [Indexed: 07/30/2023] Open
Abstract
In modern clinical practice, digital pathology plays an essential role and has become a technological necessity in pathological anatomy laboratories. The development of information technology has greatly facilitated the management of digital images and their sharing for clinical use; methods for analyzing digital histopathological images, based on artificial intelligence techniques and specific models, quantify the required information with significantly higher consistency and precision than optical microscopy. In parallel, unprecedented advances in machine learning enable, through the synergy of artificial intelligence and digital pathology, diagnosis based on image analysis, previously limited to only certain specialties. Therefore, the integration of digital images into the study of pathology, combined with advanced algorithms and computer-assisted diagnostic techniques, extends the boundaries of the pathologist's vision beyond the microscopic image and allows the specialist to apply and integrate his or her knowledge and experience adequately. We conducted a search in PubMed on the topic of digital pathology and its applications to quantify the current state of knowledge. We found that computer-aided image analysis has superior potential to identify, extract, and quantify features in more detail than evaluation by a human pathologist; it performs tasks that exceed manual capacity and can produce new diagnostic algorithms and prediction models, applicable in translational research, that are able to identify new characteristics of diseases based on changes at the cellular and molecular level.
Affiliation(s)
- Mihaela Moscalu
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Roxana Moscalu
- Wythenshawe Hospital, Manchester University NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester M139PT, UK
- Cristina Gena Dascălu
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Viorel Țarcă
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Elena Cojocaru
- Department of Morphofunctional Sciences I, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Ioana Mădălina Costin
- Faculty of Medicine, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Elena Țarcă
- Department of Surgery II-Pediatric Surgery, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Ionela Lăcrămioara Șerban
- Department of Morpho-Functional Sciences II, Faculty of Medicine, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
17
Li H, Zhong J, Lin L, Chen Y, Shi P. Semi-supervised nuclei segmentation based on multi-edge features fusion attention network. PLoS One 2023; 18:e0286161. [PMID: 37228137 DOI: 10.1371/journal.pone.0286161] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 05/09/2023] [Indexed: 05/27/2023] Open
Abstract
The morphology of the nuclei carries most of the clinical pathological information, and nuclei segmentation is a vital step in current automated histopathological image analysis. Supervised machine learning-based segmentation models have already achieved outstanding performance with sufficiently precise human annotations. Nevertheless, outlining such labels on numerous nuclei demands professional expertise and is time-consuming. Automatic nuclei segmentation with minimal manual intervention is highly needed to promote the effectiveness of clinical pathological research. Semi-supervised learning greatly reduces the dependence on labeled samples while ensuring sufficient accuracy. In this paper, we propose a Multi-Edge Feature Fusion Attention Network (MEFFA-Net) with three feature inputs, including image, pseudo-mask, and edge, which enhances its learning ability by considering multiple features. Only a few labeled nuclei boundaries are used to train annotation of the remaining, mostly unlabeled, data. The MEFFA-Net creates more precise boundary masks for nucleus segmentation based on pseudo-masks, which greatly reduces the dependence on manual labeling. The MEFFA-Block focuses on the nuclei outline and selects features conducive to segmentation, making full use of the multiple features. Experimental results on public multi-organ databases including MoNuSeg, CPM-17 and CoNSeP show that the proposed model achieves mean IoU segmentation scores of 0.706, 0.751, and 0.722, respectively. The model also achieves better results than some cutting-edge methods while the labeling work is reduced to 1/8 of that of common supervised strategies. Our method provides a more efficient and accurate basis for nuclei segmentation and further quantification in pathological research.
Affiliation(s)
- Huachang Li
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
- Jing Zhong
- Department of Radiology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Liyan Lin
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Yanping Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Peng Shi
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
18
Islam Sumon R, Bhattacharjee S, Hwang YB, Rahman H, Kim HC, Ryu WS, Kim DM, Cho NH, Choi HK. Densely Convolutional Spatial Attention Network for nuclei segmentation of histological images for computational pathology. Front Oncol 2023; 13:1009681. [PMID: 37305563 PMCID: PMC10248729 DOI: 10.3389/fonc.2023.1009681] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 05/05/2023] [Indexed: 06/13/2023] Open
Abstract
Introduction Automatic nuclear segmentation in digital microscopic tissue images can aid pathologists in extracting high-quality features for nuclear morphometrics and other analyses. However, image segmentation is a challenging task in medical image processing and analysis. This study aimed to develop a deep learning-based method for nuclei segmentation of histological images for computational pathology. Methods The original U-Net model sometimes has a caveat in exploring significant features. Herein, we present the Densely Convolutional Spatial Attention Network (DCSA-Net) model, based on U-Net, to perform the segmentation task. Furthermore, the developed model was tested on an external multi-tissue dataset, MoNuSeg. To develop deep learning algorithms that segment nuclei well, a large quantity of data is mandatory, which is expensive and often infeasible. We collected hematoxylin and eosin-stained image data sets from two hospitals to train the model with a variety of nuclear appearances. Because of the limited number of annotated pathology images, we introduced a small publicly accessible data set of prostate cancer (PCa) with more than 16,000 labeled nuclei. To construct our proposed model, we developed the DCSA module, an attention mechanism for capturing useful information from raw images. We also used several other artificial intelligence-based segmentation methods and tools to compare their results to our proposed technique. Results To assess the performance of nuclei segmentation, we evaluated the model's outputs based on the Accuracy, Dice coefficient (DC), and Jaccard coefficient (JC) scores. The proposed technique outperformed the other methods and achieved superior nuclei segmentation with accuracy, DC, and JC of 96.4% (95% confidence interval [CI]: 96.2-96.6), 81.8 (95% CI: 80.8-83.0), and 69.3 (95% CI: 68.2-70.0), respectively, on the internal test data set. Conclusion Our proposed method demonstrates superior performance in segmenting cell nuclei of histological images from internal and external datasets, and outperforms many standard segmentation algorithms used for comparative analysis.
Affiliation(s)
- Rashadul Islam Sumon
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Subrata Bhattacharjee
- Department of Computer Engineering, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Yeong-Byn Hwang
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Hafizur Rahman
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Hee-Cheol Kim
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Wi-Sun Ryu
- Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
- Dong Min Kim
- Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
- Nam-Hoon Cho
- Department of Pathology, Yonsei University Hospital, Seoul, Republic of Korea
- Heung-Kook Choi
- Department of Computer Engineering, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
19
Shan T, Ying Y, Song G. Automatic Kidney Segmentation Method Based on an Enhanced Generative Adversarial Network. Diagnostics (Basel) 2023; 13:diagnostics13071358. [PMID: 37046576 PMCID: PMC10093289 DOI: 10.3390/diagnostics13071358] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Revised: 02/17/2023] [Accepted: 02/20/2023] [Indexed: 04/14/2023] Open
Abstract
When deciding on a kidney tumor's diagnosis and treatment, it is critical to take its morphometry into account. It is challenging to undertake a quantitative analysis of the association between kidney tumor morphology and clinical outcomes due to a paucity of data and the need for the time-consuming manual measurement of imaging variables. To address this issue, an autonomous kidney segmentation technique, namely SegTGAN, is proposed in this paper, which is based on a conventional generative adversarial network model. Its core framework includes a discriminator network with multi-scale feature extraction and a fully convolutional generator network made up of densely linked blocks. For qualitative and quantitative comparisons with the SegTGAN technique, the widely used and related medical image segmentation networks U-Net, FCN, and SegAN are used. The experimental results show that the Dice similarity coefficient (DSC), volumetric overlap error (VOE), accuracy (ACC), and average surface distance (ASD) of SegTGAN on the Kits19 dataset reach 92.28%, 16.17%, 97.28%, and 0.61 mm, respectively. SegTGAN outscores all the other neural networks, which indicates that our proposed model has the potential to improve the accuracy of CT-based kidney segmentation.
Affiliation(s)
- Tian Shan
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Yuhan Ying
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Guoli Song
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
20
Ke J, Lu Y, Shen Y, Zhu J, Zhou Y, Huang J, Yao J, Liang X, Guo Y, Wei Z, Liu S, Huang Q, Jiang F, Shen D. ClusterSeg: A crowd cluster pinpointed nucleus segmentation framework with cross-modality datasets. Med Image Anal 2023; 85:102758. [PMID: 36731275 DOI: 10.1016/j.media.2023.102758] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Revised: 11/27/2022] [Accepted: 01/18/2023] [Indexed: 01/26/2023]
Abstract
The detection and segmentation of individual cells or nuclei is an indispensable prerequisite in image analysis across a variety of biology and biomedical applications. However, the ubiquitous presence of crowded clusters with morphological variations often hinders successful instance segmentation. In this paper, nuclei-cluster-focused annotation strategies and frameworks are proposed to overcome this challenging practical problem. Specifically, we design a nucleus segmentation framework, namely ClusterSeg, to tackle nuclei clusters, which consists of a convolutional-transformer hybrid encoder and a 2.5-path decoder for precise predictions of nuclei instance masks, contours, and clustered edges. Additionally, an annotation-efficient clustered-edge pointed strategy pinpoints the salient and error-prone boundaries, where a partially-supervised PS-ClusterSeg is presented using ClusterSeg as the segmentation backbone. The framework is evaluated with four privately curated image sets and two public sets characterized by severely clustered nuclei across a wide range of image modalities, e.g., microscope, cytopathology, and histopathology images. The proposed ClusterSeg and PS-ClusterSeg are modality-independent and generalizable, and empirically superior to current state-of-the-art approaches on multiple metrics. Our collected data, the elaborate annotations for both the public and private sets, as well as the source code, are released publicly at https://github.com/lu-yizhou/ClusterSeg.
Affiliation(s)
- Jing Ke
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; School of Computer Science and Engineering, University of New South Wales, Sydney, Australia.
- Yizhou Lu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiqing Shen
- Department of Computer Science, Johns Hopkins University, MD, USA
- Junchao Zhu
- School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Yijin Zhou
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
- Jinghan Huang
- Department of Biomedical Engineering, National University of Singapore, Singapore
- Jieteng Yao
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaoyao Liang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yi Guo
- School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, Australia
- Zhonghua Wei
- Department of Pathology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Sheng Liu
- Department of Thyroid Breast and Vascular Surgery, Shanghai Fourth People's Hospital, School of Medicine, Tongji University, Shanghai, China
- Qin Huang
- Department of Pathology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fusong Jiang
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
21
Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal 2023; 85:102762. [PMID: 36738650 PMCID: PMC10010286 DOI: 10.1016/j.media.2023.102762] [Citation(s) in RCA: 54] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 01/18/2023] [Accepted: 01/27/2023] [Indexed: 02/01/2023]
Abstract
The Transformer, one of the latest technological advances in deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to inquire about the status quo of Transformers in medical imaging and to ask: can Transformer models transform medical imaging? In this paper, we attempt to answer this question. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and a summary of the key defining properties that characterize Transformers, we offer a comprehensive review of state-of-the-art Transformer-based approaches for medical imaging and survey current research progress in medical image segmentation, recognition, detection, registration, reconstruction, enhancement, and related areas. In particular, what distinguishes our review is its organization around the Transformer's key defining properties, which are mostly derived from comparing the Transformer and the CNN, and around architecture type, which specifies the manner in which Transformer and CNN components are combined, all helping readers to best understand the rationale behind the reviewed approaches. We conclude with a discussion of future perspectives.
Affiliation(s)
- Jun Li
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Junyu Chen
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
- Yucheng Tang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Ce Wang
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Bennett A Landman
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- S Kevin Zhou
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; School of Biomedical Engineering & Suzhou Institute for Advanced Research, Center for Medical Imaging, Robotics, and Analytic Computing & Learning (MIRACLE), University of Science and Technology of China, Suzhou 215123, China
22
Rothman JS, Borges-Merjane C, Holderith N, Jonas P, Silver RA. Validation of a stereological method for estimating particle size and density from 2D projections with high accuracy. PLoS One 2023; 18:e0277148. [PMID: 36930689 PMCID: PMC10022809 DOI: 10.1371/journal.pone.0277148]
Abstract
Stereological methods for estimating the 3D particle size and density from 2D projections are essential to many research fields. These methods are, however, prone to errors arising from undetected particle profiles due to sectioning and limited resolution, known as 'lost caps'. A potential solution developed by Keiding, Jensen, and Ranek in 1972, which we refer to as the Keiding model, accounts for lost caps by quantifying the smallest detectable profile in terms of its limiting 'cap angle' (ϕ), a size-independent measure of a particle's distance from the section surface. However, this simple solution has not been widely adopted nor tested. Rather, model-independent design-based stereological methods, which do not explicitly account for lost caps, have come to the fore. Here, we provide the first experimental validation of the Keiding model by comparing the size and density of particles estimated from 2D projections with direct measurement from 3D EM reconstructions of the same tissue. We applied the Keiding model to estimate the size and density of somata, nuclei and vesicles in the cerebellum of mice and rats, where high packing density can be problematic for design-based methods. Our analysis reveals a Gaussian distribution for ϕ rather than a single value. Nevertheless, curve fits of the Keiding model to the 2D diameter distribution accurately estimate the mean ϕ and 3D diameter distribution. While systematic testing using simulations revealed an upper limit to determining ϕ, our analysis shows that estimated ϕ can be used to determine the 3D particle density from the 2D density under a wide range of conditions, and this method is potentially more accurate than minimum-size-based lost-cap corrections and disector methods. Our results show the Keiding model provides an efficient means of accurately estimating the size and density of particles from 2D projections even under conditions of a high density.
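The lost-caps geometry for spheres of a single size can be illustrated with a short Monte Carlo sketch (ours, not the authors' code; the section thickness, particle diameter, and cap angle below are arbitrary choices). A sphere whose center lies within R·cos(ϕ) of a section face still shows a detectable cap of profile radius ≥ R·sin(ϕ), so the detected fraction of intersecting spheres should match the Keiding-style band (t + D·cos ϕ)/(t + D):

```python
import numpy as np

def lost_caps_fraction(t, diameter, phi_deg, n=200_000, seed=0):
    """Monte Carlo sketch of the 'lost caps' effect for monodisperse spheres.

    Spheres whose centres lie within R*cos(phi) of a section face still show
    a detectable cap; smaller caps are 'lost'. Returns the simulated fraction
    of intersecting spheres that are detected, for comparison with the
    closed-form band (t + D*cos(phi)) / (t + D).
    """
    rng = np.random.default_rng(seed)
    r = diameter / 2.0
    phi = np.radians(phi_deg)
    # centres of all spheres that touch the section slab [0, t]
    z = rng.uniform(-r, t + r, size=n)
    inside = (z >= 0) & (z <= t)            # equator inside slab: always seen
    cap_depth = np.where(z < 0, -z, z - t)  # centre's distance outside a face
    detected = inside | (cap_depth <= r * np.cos(phi))
    return detected.mean()
```

With ϕ = 60° (cos ϕ = 0.5), t = 2 and D = 1 the detected fraction converges to 2.5/3 ≈ 0.833, i.e. one sixth of intersecting spheres are lost as undetectable caps.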
Affiliation(s)
- Jason Seth Rothman
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Noemi Holderith
- Laboratory of Cellular Neurophysiology, Institute of Experimental Medicine, Budapest, Hungary
- Peter Jonas
- Cellular Neuroscience, Institute of Science and Technology Austria, Klosterneuburg, Austria
- R. Angus Silver
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
23
Göb S, Sawant S, Erick F, Schmidkonz C, Ramming A, Lang E, Wittenberg T, Götz T. Comparing ensemble methods combined with different aggregating models using micrograph cell segmentation as an initial application example. J Pathol Inform 2023; 14:100304. [PMID: 36967835 PMCID: PMC10034515 DOI: 10.1016/j.jpi.2023.100304]
Abstract
Strategies such as ensemble learning and averaging techniques try to reduce the variance of single deep neural networks. This study focuses on ensemble averaging techniques that fuse the results of differently initialized and trained networks. Using micrograph cell segmentation as an application example, various ensembles were initialized and formed during network training with the following methods: (a) random seeds, (b) L1-norm pruning, (c) variable numbers of training examples, and (d) a combination of the latter two. Furthermore, several averaging methods in common use were evaluated: the mean, the median, the location parameter of an alpha-stable distribution fit to the histograms of class membership probabilities (CMPs), and a majority vote of the ensemble members. The performance of these methods is demonstrated and evaluated on a micrograph cell segmentation use case, employing a state-of-the-art deep convolutional neural network (DCNN) based on the common VGG architecture. The study demonstrates that, for this dataset, the choice of ensemble averaging method has only a marginal influence on the evaluation metrics (accuracy and Dice coefficient) used to measure segmentation performance. Nevertheless, for practical applications, a simple and fast estimate of the mean of the distribution is highly competitive with the most sophisticated representation of the CMP distributions by an alpha-stable distribution, and hence seems the most appropriate ensemble averaging method for this application.
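Three of the fusion rules compared in this study (mean, median, and majority vote; the alpha-stable fit is omitted) can be sketched as follows. This is a minimal illustration with array shapes and a function name of our choosing, not the study's code:

```python
import numpy as np

def ensemble_average(cmps, method="mean"):
    """Fuse per-member class membership probabilities (CMPs).

    cmps: array-like of shape (n_members, n_pixels, n_classes).
    Returns per-pixel class labels of shape (n_pixels,).
    """
    cmps = np.asarray(cmps, dtype=float)
    if method == "mean":
        fused = cmps.mean(axis=0)                  # average the CMPs
    elif method == "median":
        fused = np.median(cmps, axis=0)            # member-wise median CMP
    elif method == "vote":
        # majority vote over each member's hard (argmax) prediction
        votes = cmps.argmax(axis=2)                # (n_members, n_pixels)
        n_classes = cmps.shape[2]
        fused = np.stack([(votes == c).sum(axis=0)
                          for c in range(n_classes)], axis=1)
    else:
        raise ValueError(f"unknown method: {method}")
    return fused.argmax(axis=1)
```

For well-separated CMPs the three rules agree, which is consistent with the study's finding that the choice of fusion rule has only a marginal effect.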
24
Cunha C, Narotamo H, Monteiro A, Silveira M. Detection and measurement of butterfly eyespot and spot patterns using convolutional neural networks. PLoS One 2023; 18:e0280998. [PMID: 36780440 PMCID: PMC9925015 DOI: 10.1371/journal.pone.0280998]
Abstract
Butterflies are increasingly becoming model insects for investigating basic questions surrounding the diversity of their color patterns. Some of these color patterns consist of simple spots and eyespots. To accelerate the pace of research on these discrete, circular pattern elements, we trained distinct convolutional neural networks (CNNs) for the detection and measurement of butterfly spots and eyespots in digital images of butterfly wings. We compared the automatically detected and segmented spot/eyespot areas with those manually annotated. These methods were able to identify and distinguish marginal eyespots from spots, as well as distinguish these patterns from less symmetrical patches of color. In addition, the measurements of an eyespot's central area and surrounding rings were comparable with the manual measurements. These CNNs offer improved eyespot/spot detection and measurement relative to previous methods because it is not necessary to mathematically define the feature of interest; all that is needed is to point out the images that have those features to train the CNN.
Affiliation(s)
- Carolina Cunha
- Institute for Systems and Robotics (ISR), Instituto Superior Técnico (IST), University of Lisbon, Lisbon, Portugal
- Hemaxi Narotamo
- Institute for Systems and Robotics (ISR), Instituto Superior Técnico (IST), University of Lisbon, Lisbon, Portugal
- Antónia Monteiro
- Biological Sciences, National University of Singapore, Singapore, Singapore
- Margarida Silveira
- Institute for Systems and Robotics (ISR), Instituto Superior Técnico (IST), University of Lisbon, Lisbon, Portugal
25
A Heuristic Machine Learning-Based Optimization Technique to Predict Lung Cancer Patient Survival. Comput Intell Neurosci 2023; 2023:4506488. [PMID: 36776617 PMCID: PMC9911240 DOI: 10.1155/2023/4506488]
Abstract
Cancer has been a significant threat to human health and well-being, posing the biggest obstacle in the history of human disease. The high death rate in cancer patients is primarily due to the complexity of the disease and the wide range of clinical outcomes. Increasing prediction accuracy is as crucial as predicting the survival rate itself, which has become a key issue in cancer research. Many models have been suggested to date; however, most of them simply use single genetic or clinical data sources to construct prediction models for cancer survival. Present survival studies place much emphasis on determining whether or not a patient will survive five years; the more individual question of how long a lung cancer patient will survive remains unanswered. The proposed technique, combining Naive Bayes and SSA, estimates the overall survival time of patients with lung cancer. Two machine learning challenges are derived from a single customized query. The first is a simple binary question: will a patient survive for more than five years? The second is to develop a five-year survival model using regression analysis. When forecasting how long a lung cancer patient will survive within five years, the mean absolute error (MAE) of this technique's predictions is within one month. Several biomarker genes have been associated with lung cancers. The accuracy, recall, and precision achieved by this algorithm are 98.78%, 98.4%, and 98.6%, respectively.
26
Sun L, Tian H, Ge H, Tian J, Lin Y, Liang C, Liu T, Zhao Y. Cross-attention multi-branch CNN using DCE-MRI to classify breast cancer molecular subtypes. Front Oncol 2023; 13:1107850. [PMID: 36959806 PMCID: PMC10028183 DOI: 10.3389/fonc.2023.1107850]
Abstract
Purpose: The aim of this study is to improve the accuracy of classifying luminal versus non-luminal subtypes of breast cancer using computer algorithms based on DCE-MRI, and to validate the diagnostic efficacy of the model by considering the patient's age at menarche and nodule size. Methods: DCE-MRI images of patients with non-specific invasive breast cancer admitted to the Second Affiliated Hospital of Dalian Medical University were collected. There were 160 cases in total: 84 cases of luminal type (luminal A and luminal B) and 76 cases of non-luminal type (HER2-overexpressing and triple-negative). Patients were grouped according to thresholds of nodule size of 20 mm and age at menarche of 14 years. A cross-attention multi-branch network (CAMBNET) was proposed on this dataset to predict the molecular subtypes of breast cancer. Diagnostic performance was assessed by accuracy, sensitivity, specificity, F1 score, and area under the ROC curve (AUC), and the model was visualized with Grad-CAM. Results: Several classical deep learning models were included for diagnostic performance comparison. Using 5-fold cross-validation on the test dataset, all results of CAMBNET were significantly higher than those of the compared deep learning models. The average prediction recall, accuracy, precision, and AUC for luminal versus non-luminal types were 89.11%, 88.44%, 88.52%, and 96.10%, respectively. For patients with tumor size <20 mm, CAMBNET had an AUC of 83.45% and an accuracy of 90.29% for detecting triple-negative breast cancer. When classifying luminal from non-luminal subtypes for patients with age at menarche of 14 years, our CAMBNET model achieved an accuracy of 92.37%, precision of 92.42%, recall of 93.33%, F1 of 92.33%, and AUC of 99.95%. Conclusions: CAMBNET can be applied to molecular subtype classification of breast cancer. For patients with menarche at 14 years old, our model can yield more accurate results when classifying luminal and non-luminal subtypes. For patients with tumor sizes ≤20 mm, our model can yield more accurate results in detecting triple-negative breast cancer to improve patient prognosis and survival.
Affiliation(s)
- Liang Sun
- The College of Computer Science and Technology, Dalian University of Technology, Dalian, Liaoning, China
- Haowen Tian
- The College of Computer Science and Technology, Dalian University of Technology, Dalian, Liaoning, China
- Hongwei Ge
- The College of Computer Science and Technology, Dalian University of Technology, Dalian, Liaoning, China
- Juan Tian
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
- Yuxin Lin
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
- Chang Liang
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
- Tang Liu
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
- *Correspondence: Tang Liu; Yiping Zhao
- Yiping Zhao
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
- *Correspondence: Tang Liu; Yiping Zhao
27
Juhong A, Li B, Yao CY, Yang CW, Agnew DW, Lei YL, Huang X, Piyawattanametha W, Qiu Z. Super-resolution and segmentation deep learning for breast cancer histopathology image analysis. Biomed Opt Express 2023; 14:18-36. [PMID: 36698665 PMCID: PMC9841988 DOI: 10.1364/boe.463839]
Abstract
Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images. However, such images are typically enormous, making them inconvenient to manage and transfer across a computer network or to store in a limited computer storage system. As a result, image compression is commonly used to reduce image size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) for both super-resolution enhancement of low-resolution images and characterization of cells and nuclei in hematoxylin and eosin (H&E) stained breast cancer histopathological images, using a combination of generator and discriminator networks, a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt), to facilitate cancer diagnosis in low-resource settings. The results show a large enhancement in image quality: the peak signal-to-noise ratio and structural similarity of our network's results are over 30 dB and 0.93, respectively. This performance is superior to the results obtained from both bicubic interpolation and the well-known SRGAN deep learning method. In addition, another custom CNN performs image segmentation on the high-resolution breast cancer images generated by our model, with an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 for the H&E image segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-Net models, which use the weights of the individually trained SRGAN-ResNeXt and Inception U-Net models as pre-trained weights for transfer learning. The jointly trained model's results improve progressively and are promising. We anticipate these custom CNNs can help resolve the inaccessibility of advanced microscopes or whole slide imaging (WSI) systems by recovering high-resolution images from low-performance microscopes located in remote, resource-constrained settings.
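For reference, the peak signal-to-noise ratio quoted above is computed from the mean squared error between reference and reconstructed images; a minimal sketch (our helper, not the authors' code):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio (in dB) between two images.

    data_range is the maximum possible pixel value (1.0 for normalized
    images, 255 for 8-bit images).
    """
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

A uniform error of 0.1 on a unit-range image gives an MSE of 0.01 and hence a PSNR of 20 dB; the paper's >30 dB corresponds to an MSE below 0.001.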
Affiliation(s)
- Aniwat Juhong
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Bo Li
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Cheng-You Yao
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Department of Biomedical Engineering, Michigan State University, East Lansing, MI 48824, USA
- Chia-Wei Yang
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Department of Chemistry, Michigan State University, East Lansing, MI 48824, USA
- Dalen W. Agnew
- College of Veterinary Medicine, Michigan State University, East Lansing, MI 48824, USA
- Yu Leo Lei
- Department of Periodontics Oral Medicine, University of Michigan, Ann Arbor, MI 48104, USA
- Xuefei Huang
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Department of Biomedical Engineering, Michigan State University, East Lansing, MI 48824, USA
- Department of Chemistry, Michigan State University, East Lansing, MI 48824, USA
- Wibool Piyawattanametha
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Department of Biomedical Engineering, School of Engineering, King Mongkut’s Institute of Technology Ladkrabang (KMITL), Bangkok 10520, Thailand
- Zhen Qiu
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Department of Biomedical Engineering, Michigan State University, East Lansing, MI 48824, USA
28
A Deep-Learning-Based Artificial Intelligence System for the Pathology Diagnosis of Uterine Smooth Muscle Tumor. Life (Basel) 2022; 13:life13010003. [PMID: 36675952 PMCID: PMC9864148 DOI: 10.3390/life13010003]
Abstract
We aimed to develop an artificial intelligence (AI) diagnosis system for uterine smooth muscle tumors (UMTs) by using deep learning. We analyzed the morphological features of UMTs on whole-slide images (233, 108, and 30 digital slides of leiomyosarcomas, leiomyomas, and smooth muscle tumors of uncertain malignant potential stained with hematoxylin and eosin, respectively). Aperio ImageScope software randomly selected ≥10 areas of the total field of view. Pathologists randomly selected a marked region in each section that was no smaller than the total area of 10 high-power fields, in which necrotic, vascular, collagenous, and mitotic areas were labeled. We constructed an automatic identification algorithm for cytological atypia and necrosis using ResNet, and an automatic mitosis-detection algorithm using YOLOv5. A logical evaluation algorithm was then designed to obtain an automatic UMT diagnostic aid that can "study and synthesize" a pathologist's experience. The precision, recall, and F1 score all exceeded 0.920. The detection network accurately detected mitoses (0.913 precision, 0.893 recall). For predictive ability, the AI system had a precision of 0.90. An AI-assisted system for diagnosing UMTs in routine practice scenarios is feasible and can improve the accuracy and efficiency of diagnosis.
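The precision/recall figures reported above combine into the F1 score in the usual way; a small helper illustrating the relationship (ours, not part of the paper):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 score from raw detection counts.

    tp: true positives, fp: false positives, fn: false negatives.
    F1 is the harmonic mean of precision and recall.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For example, 8 correct detections with 2 false alarms and 2 misses give precision = recall = F1 = 0.8.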
29
Taher F, Shoaib MR, Emara HM, Abdelwahab KM, Abd El-Samie FE, Haweel MT. Efficient framework for brain tumor detection using different deep learning techniques. Front Public Health 2022; 10:959667. [PMID: 36530682 PMCID: PMC9752904 DOI: 10.3389/fpubh.2022.959667]
Abstract
A brain tumor is an aggressive malignancy caused by unregulated cell division. Tumors are classified using a biopsy, which is normally performed after brain surgery. Advances in deep learning have assisted health professionals in medical imaging for the diagnosis of several conditions. In this paper, transfer-learning-based models, in addition to a Convolutional Neural Network (CNN) called BRAIN-TUMOR-net trained from scratch, are introduced to classify brain magnetic resonance images into tumor or normal cases. A comparison between the pre-trained InceptionResNetV2, InceptionV3, and ResNet50 models and the proposed BRAIN-TUMOR-net is presented. The performance of the proposed model is tested on three publicly available Magnetic Resonance Imaging (MRI) datasets. The simulation results show that BRAIN-TUMOR-net achieves the highest accuracy compared to the other models, reaching 100%, 97%, and 84.78% on the three MRI datasets. In addition, k-fold cross-validation is used to ensure robust classification. Moreover, three different unsupervised clustering techniques are utilized for segmentation.
Affiliation(s)
- Fatma Taher
- College of Technological Innovative, Zayed University, Abu Dhabi, United Arab Emirates
- Mohamed R. Shoaib
- Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Heba M. Emara
- Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- *Correspondence: Heba M. Emara
- Fathi E. Abd El-Samie
- Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt; Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Mohammad T. Haweel
- Department of Electrical Engineering, Shaqra University, Shaqraa, Saudi Arabia
30
Wu H, Souedet N, Jan C, Clouchoux C, Delzescaux T. A general deep learning framework for neuron instance segmentation based on Efficient UNet and morphological post-processing. Comput Biol Med 2022; 150:106180. [PMID: 36244305 DOI: 10.1016/j.compbiomed.2022.106180]
Abstract
Recent studies have demonstrated the superiority of deep learning in medical image analysis, especially in cell instance segmentation, a fundamental step for many biological studies. However, the excellent performance of neural networks requires training on large, unbiased datasets and annotations, which is labor-intensive and demands expertise. This paper presents an end-to-end framework to automatically detect and segment NeuN-stained neuronal cells on histological images using only point annotations. Unlike traditional nuclei segmentation with point annotation, we propose using point annotation and binary segmentation to synthesize pixel-level annotations. The synthetic masks are used as ground truth to train the neural network, a U-Net-like architecture with a state-of-the-art network, EfficientNet, as the encoder. Validation results show the superiority of our model compared to other recent methods. In addition, we investigated multiple post-processing schemes and propose an original strategy to convert the probability map into segmented instances using ultimate erosion and dynamic reconstruction. This approach is easy to configure and outperforms other classical post-processing techniques. This work aims to develop a robust and efficient framework for analyzing neurons using optical microscopy data, which can be used in preclinical biological studies and, more specifically, in the context of neurodegenerative diseases. Code is available at: https://github.com/MIRCen/NeuronInstanceSeg.
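The paper's exact ultimate-erosion/dynamic-reconstruction scheme is in the linked repository; a common marker-based alternative for turning a probability map into labelled instances can be sketched with SciPy (function name and parameters are ours, not the paper's):

```python
import numpy as np
from scipy import ndimage as ndi

def probability_to_instances(prob, threshold=0.5, min_distance=3):
    """Convert a pixel-wise probability map into labelled instances.

    Sketch of a marker-based scheme: threshold the map, seed one marker
    per distance-transform peak, then grow the markers by watershed over
    the inverted distance map. Returns (label image, number of instances).
    """
    mask = prob > threshold
    dist = ndi.distance_transform_edt(mask)
    # local maxima of the distance map act as one seed per instance
    size = 2 * min_distance + 1
    peaks = (dist == ndi.maximum_filter(dist, size=size)) & mask
    markers, n = ndi.label(peaks)
    # watershed_ift floods from the markers over the inverted distance map
    elevation = (dist.max() - dist).astype(np.uint16)
    labels = ndi.watershed_ift(elevation, markers)
    labels[~mask] = 0  # keep labels only inside the foreground mask
    return labels, n
```

The paper's ultimate-erosion approach is reported to outperform such classical post-processing; this sketch only illustrates the instance-splitting step being replaced.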
Affiliation(s)
- Huaqian Wu
- CEA-CNRS-UMR 9199, MIRCen, Fontenay-aux-Roses, France
- Caroline Jan
- CEA-CNRS-UMR 9199, MIRCen, Fontenay-aux-Roses, France
31
Breast cancer image analysis using deep learning techniques – a survey. Health and Technology 2022. [DOI: 10.1007/s12553-022-00703-5]
32
Hossain MS, Syeed MMM, Fatema K, Hossain MS, Uddin MF. Singular Nuclei Segmentation for Automatic HER2 Quantification Using CISH Whole Slide Images. Sensors (Basel) 2022; 22:7361. [PMID: 36236459 PMCID: PMC9571354 DOI: 10.3390/s22197361]
Abstract
Human epidermal growth factor receptor 2 (HER2) quantification is performed routinely for all breast cancer patients to determine their suitability for HER2-targeted therapy. Fluorescence in situ hybridization (FISH) and chromogenic in situ hybridization (CISH) are the US Food and Drug Administration (FDA) approved tests for HER2 quantification, in which at least 20 cancer-affected singular nuclei are quantified for HER2 grading. CISH is more advantageous than FISH in terms of cost, time, and practical usability. In clinical practice, nuclei suitable for HER2 quantification are selected manually by pathologists, which is time-consuming and laborious. Previously, a method was proposed for automatic HER2 quantification using a support vector machine (SVM) to detect suitable singular nuclei from CISH slides. However, the SVM-based method occasionally failed to detect singular nuclei, resulting in inaccurate results. Therefore, it is necessary to develop a robust nuclei detection method for reliable automatic HER2 quantification. In this paper, we propose a robust U-Net-based singular-nuclei detection method with complementary color correction and color deconvolution, adapted for accurate HER2 grading using CISH whole slide images (WSIs). The efficacy of the proposed method was demonstrated for automatic HER2 quantification in a comparison with the SVM-based approach.
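The colour-deconvolution step mentioned above follows the Beer-Lambert optical-density model (Ruifrok & Johnston); a minimal sketch using standard illustrative stain vectors, not the paper's calibration, and omitting its complementary colour correction:

```python
import numpy as np

def color_deconvolution(rgb, stain_matrix):
    """Separate stains from an RGB image via optical-density unmixing.

    rgb: float array (H, W, 3) with values in (0, 1].
    stain_matrix: (n_stains, 3) rows of unit-length stain OD vectors.
    Returns per-stain concentration maps of shape (H, W, n_stains).
    """
    # Beer-Lambert: optical density is linear in stain concentration
    od = -np.log(np.clip(rgb, 1e-6, 1.0))
    # least-squares unmixing of od = concentrations @ stain_matrix
    return od @ np.linalg.pinv(stain_matrix)
```

A pixel synthesized from a single pure stain unmixes back to unit concentration of that stain and zero of the other, which is a quick sanity check for any chosen stain matrix.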
Affiliation(s)
- Md Shakhawat Hossain
- Department of CS, American International University-Bangladesh, Dhaka 1229, Bangladesh
- RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh
- M. M. Mahbubul Syeed
- RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh
- Department of CSE, Independent University, Bangladesh, Dhaka 1229, Bangladesh
- Kaniz Fatema
- RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh
- Department of CSE, Independent University, Bangladesh, Dhaka 1229, Bangladesh
- Md Sakir Hossain
- Department of CS, American International University-Bangladesh, Dhaka 1229, Bangladesh
- Mohammad Faisal Uddin
- RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh
- Department of CSE, Independent University, Bangladesh, Dhaka 1229, Bangladesh
33
Wu H, Pang KKY, Pang GKH, Au-Yeung RKH. A soft-computing based approach to overlapped cells analysis in histopathology images with genetic algorithm. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109279]
34
Yin H, Zhang F, Yang X, Meng X, Miao Y, Noor Hussain MS, Yang L, Li Z. Research trends of artificial intelligence in pancreatic cancer: a bibliometric analysis. Front Oncol 2022; 12:973999. [PMID: 35982967 PMCID: PMC9380440 DOI: 10.3389/fonc.2022.973999]
Abstract
Purpose: We evaluated research on artificial intelligence (AI) in pancreatic cancer (PC) through bibliometric analysis and explored the research hotspots and current status from 1997 to 2021. Methods: Publications related to AI in PC were retrieved from the Web of Science Core Collection (WoSCC) for 1997-2021. The bibliometrix package for R 4.0.3 and VOSviewer were used for the bibliometric analysis. Results: A total of 587 publications in this field were retrieved from the WoSCC database. After 2018, the number of publications grew rapidly. The United States and Johns Hopkins University were the most influential country and institution, respectively. A total of 2805 keywords were investigated, 81 of which appeared more than 10 times. Co-occurrence analysis categorized these keywords into five clusters: (1) AI in the biology of PC, (2) AI in the pathology and radiology of PC, (3) AI in the therapy of PC, (4) AI in the risk assessment of PC, and (5) AI in endoscopic ultrasonography (EUS) of PC. Trend topics and thematic maps show that the keywords "diagnosis", "survival", "classification", and "management" are the research hotspots in this field. Conclusion: Research related to AI in pancreatic cancer is still at an early stage. Currently, AI is widely studied in the biology, diagnosis, treatment, risk assessment, and EUS of pancreatic cancer. This bibliometric study provides insight into AI research in PC and helps researchers identify new research directions.
Affiliation(s)
- Hua Yin
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- Postgraduate Training Base in Shanghai Gongli Hospital, Ningxia Medical University, Shanghai, China
- Feixiong Zhang
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- Xiaoli Yang
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- Xiangkun Meng
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- Yu Miao
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- Li Yang
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- *Correspondence: Zhaoshen Li; Li Yang
- Zhaoshen Li
- Postgraduate Training Base in Shanghai Gongli Hospital, Ningxia Medical University, Shanghai, China
- Clinical Medical College, Ningxia Medical University, Yinchuan, China
- *Correspondence: Zhaoshen Li; Li Yang
35
Chand S. Semantic segmentation of human cell nucleus using deep U-Net and other versions of U-Net models. Network (Bristol, England) 2022; 33:167-186. [PMID: 35822269 DOI: 10.1080/0954898x.2022.2096938] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/06/2021] [Revised: 04/04/2022] [Accepted: 06/27/2022] [Indexed: 06/15/2023]
Abstract
Deep learning models play an essential role in many areas, including medical image analysis, extracting important features without human intervention. In this paper, we propose a deep convolutional neural network, named the deep U-Net model, for segmentation of the cell nucleus, a critical functional unit that determines the function and structure of the body. The nucleus contains the DNA, RNA, chromosomes, and genes governing all life activities, and its disorders may lead to diseases such as cancer, heart disease, diabetes, and Alzheimer's. If the nucleus structure is known correctly, diseases due to nuclear disorders may be detected early; knowing the shape and size of the nucleus may also reduce drug discovery time. We evaluate the performance of the proposed model on the nucleus segmentation dataset used in the Data Science Bowl 2018 competition hosted by Kaggle. We compare its performance with that of the U-Net, Attention U-Net, R2U-Net, Attention R2U-Net, and both versions of U-Net++ (with and without deep supervision) in terms of loss, Dice coefficient, Dice loss, intersection over union, and accuracy. Our model performs better than the existing models.
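The Dice coefficient and intersection over union used in such comparisons are simple overlap ratios between predicted and ground-truth masks. A minimal pure-Python sketch, representing binary masks as sets of foreground pixel coordinates (the masks below are toy values, not the paper's data):

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and intersection-over-union for binary masks.

    `pred` and `truth` are sets of foreground pixel coordinates; the same
    definitions apply to flattened 0/1 arrays.
    """
    inter = len(pred & truth)
    union = len(pred | truth)
    dice = 2 * inter / (len(pred) + len(truth)) if pred or truth else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Toy 2x2 neighbourhood: two of three predicted pixels are correct.
pred = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 0), (0, 1), (1, 1)}
dice, iou = dice_and_iou(pred, truth)  # dice = 4/6, iou = 2/4
```

Note that Dice weights the intersection twice, so it is always at least as large as IoU on the same pair of masks.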
Affiliation(s)
- Satish Chand
- School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India
36
Khairandish M, Sharma M, Jain V, Chatterjee J, Jhanjhi N. A Hybrid CNN-SVM Threshold Segmentation Approach for Tumor Detection and Classification of MRI Brain Images. Ing Rech Biomed 2022; 43:290-299. [DOI: 10.1016/j.irbm.2021.06.003] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
37
Ghaznavi A, Rychtáriková R, Saberioon M, Štys D. Cell segmentation from telecentric bright-field transmitted light microscopy images using a Residual Attention U-Net: A case study on HeLa line. Comput Biol Med 2022; 147:105805. [PMID: 35809410 DOI: 10.1016/j.compbiomed.2022.105805] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Revised: 06/03/2022] [Accepted: 06/26/2022] [Indexed: 11/20/2022]
Abstract
Living cell segmentation from bright-field light microscopy images is challenging due to image complexity and temporal changes in the living cells. Recently developed deep learning (DL)-based methods have become popular in medical and microscopy image segmentation tasks due to their success and promising outcomes. The main objective of this paper is to develop a deep learning, U-Net-based method to segment the living cells of the HeLa line in bright-field transmitted light microscopy. To find the most suitable architecture for our datasets, a residual attention U-Net was proposed and compared with an attention U-Net and a simple U-Net architecture. The attention mechanism highlights the salient features and suppresses activations in irrelevant image regions. The residual mechanism overcomes the vanishing gradient problem. The mean IoU score on our datasets reaches 0.9505, 0.9524, and 0.9530 for the simple, attention, and residual attention U-Net, respectively. The most accurate semantic segmentation results in the mean IoU and Dice metrics were achieved by applying the residual and attention mechanisms together. Applying the watershed method to this best (residual attention) semantic segmentation result yielded an instance segmentation with specific information for each cell.
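The final watershed step turns one semantic foreground mask into per-cell instances by growing regions from seed markers. The sketch below substitutes a simpler nearest-marker assignment for true watershed flooding — a toy stand-in, not the authors' implementation — but it conveys the same idea of splitting touching cells from seed points:

```python
def split_instances(mask, markers):
    """Assign every foreground pixel to its nearest marker.

    A toy stand-in for watershed: real watershed floods a distance map
    from the markers rather than using plain nearest-marker distance,
    but both turn one semantic mask into per-cell instance labels.
    `mask` is a set of (row, col) foreground pixels; `markers` is a list
    of (row, col) seed points, e.g. local maxima of a distance transform.
    """
    labels = {}
    for (r, c) in mask:
        dists = [(r - mr) ** 2 + (c - mc) ** 2 for mr, mc in markers]
        labels[(r, c)] = min(range(len(markers)), key=lambda i: dists[i])
    return labels

# Two touching "cells" on a 1x6 strip with seeds at the two ends.
mask = {(0, x) for x in range(6)}
labels = split_instances(mask, markers=[(0, 0), (0, 5)])
```

In practice one would use a library routine such as `skimage.segmentation.watershed` on the negated distance transform, with the semantic mask as the flooding constraint.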
Affiliation(s)
- Ali Ghaznavi
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Institute of Complex Systems, University of South Bohemia in České Budějovice, Zámek 136, 373 33, Nové Hrady, Czech Republic.
- Renata Rychtáriková
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Institute of Complex Systems, University of South Bohemia in České Budějovice, Zámek 136, 373 33, Nové Hrady, Czech Republic.
- Mohammadmehdi Saberioon
- Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing and Geoinformatics, Telegrafenberg, Potsdam 14473, Germany.
- Dalibor Štys
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Institute of Complex Systems, University of South Bohemia in České Budějovice, Zámek 136, 373 33, Nové Hrady, Czech Republic.
38
Computational Methods for Neuron Segmentation in Two-Photon Calcium Imaging Data: A Survey. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12146876] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Calcium imaging has rapidly become a methodology of choice for real-time in vivo neuron analysis. Its application to large sets of data requires automated tools to annotate and segment cells, allowing scalable image segmentation under reproducible criteria. In this paper, we review and summarize the most recent methods for computational segmentation of calcium imaging. The contributions of the paper are three-fold: we provide an overview of the main algorithms taxonomized in three categories (signal processing, matrix factorization and machine learning-based approaches), we highlight the main advantages and disadvantages of each category and we provide a summary of the performance of the methods that have been tested on public benchmarks (with links to the public code when available).
39
Zhang S, Zhu L, Gao Y. An efficient deep equilibrium model for medical image segmentation. Comput Biol Med 2022; 148:105831. [PMID: 35849947 DOI: 10.1016/j.compbiomed.2022.105831] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 04/25/2022] [Accepted: 07/03/2022] [Indexed: 11/03/2022]
Abstract
In this paper, we propose an effective method that combines the advantages of classical methods and deep learning for medical image segmentation by modeling the neural network as a fixed-point iteration that seeks the system equilibrium through an added feedback loop. In particular, nuclei segmentation in medical images is used as an example to demonstrate the proposed method, which successfully meets the challenge of segmenting nuclei from cells in different histopathological images. Specifically, nuclei segmentation is formulated as a dynamic process that searches for the system equilibrium. Starting from an initial segmentation generated either by a classic algorithm or a pre-trained deep learning model, a sequence of segmentation outputs is created and combined with the original image to dynamically drive the segmentation towards the expected value. This dynamical extension to neural networks requires little change to the backbone deep neural network while significantly increasing model accuracy, generalizability, and stability, as demonstrated by extensive experimental results on pathological images of different tissue types across different open datasets.
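The fixed-point formulation can be illustrated independently of any network: iterate a map until its output stops changing. A minimal sketch in which a scalar contraction stands in for the feedback-driven segmentation update (the function and tolerances below are illustrative, not the paper's):

```python
import math

def fixed_point(f, x0, tol=1e-8, max_iter=1000):
    """Iterate x <- f(x) until the update falls below `tol`.

    In the paper's setting, `f` would map the current segmentation (plus
    the input image) to a refined segmentation; inference stops when the
    output no longer changes, i.e. at a fixed point f(x*) = x*.
    """
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy contraction: f(x) = cos(x) has a unique fixed point near 0.739.
x_star = fixed_point(math.cos, x0=1.0)
```

Convergence is guaranteed here because cos is a contraction on [0, 1]; deep-equilibrium models rely on an analogous stability property of the learned update.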
Affiliation(s)
- Sai Zhang
- The School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China.
- Liangjia Zhu
- An Individual Researcher, Shenzhen, Guangdong, 518060, China.
- Yi Gao
- The School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China; Shenzhen Key Laboratory of Precision Medicine for Hematological Malignancies, Shenzhen 518060, China; Marshall Laboratory of Biomedical Engineering, Shenzhen 518060, China; Pengcheng Laboratory, Shenzhen 518066, China.
40
Graph-Embedded Online Learning for Cell Detection and Tumour Proportion Score Estimation. ELECTRONICS 2022. [DOI: 10.3390/electronics11101642] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Cell detection in microscopy images can provide useful clinical information. Most methods based on deep learning for cell detection are fully supervised. Without enough labelled samples, the accuracy of these methods would drop rapidly. To handle limited annotations and massive unlabelled data, semi-supervised learning methods have been developed. However, many of these are trained off-line, and are unable to process new incoming data to meet the needs of clinical diagnosis. Therefore, we propose a novel graph-embedded online learning network (GeoNet) for cell detection. It can locate and classify cells with dot annotations, saving considerable manpower. Trained by both historical data and reliable new samples, the online network can predict nuclear locations for upcoming new images while being optimized. To be more easily adapted to open data, it engages dynamic graph regularization and learns the inherent nonlinear structures of cells. Moreover, GeoNet can be applied to downstream tasks such as quantitative estimation of tumour proportion score (TPS), which is a useful indicator for lung squamous cell carcinoma treatment and prognostics. Experimental results for five large datasets with great variability in cell type and morphology validate the effectiveness and generalizability of the proposed method. For the lung squamous cell carcinoma (LUSC) dataset, the detection F1-scores of GeoNet for negative and positive tumour cells are 0.734 and 0.769, respectively, and the relative error of GeoNet for TPS estimation is 11.1%.
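A TPS estimate follows directly from the detector's two counts, since TPS is defined as the percentage of viable tumour cells that stain positive. A minimal sketch with hypothetical counts, together with the relative-error measure such a study would report against a manual reference:

```python
def tumour_proportion_score(n_positive, n_negative):
    """TPS = positive tumour cells / all viable tumour cells x 100.

    With a cell detector that counts positive and negative tumour cells
    separately, the score follows directly from the two counts.
    """
    total = n_positive + n_negative
    return 100.0 * n_positive / total if total else 0.0

def relative_error(estimate, reference):
    """Relative error of an automatic TPS against a manual reference TPS."""
    return abs(estimate - reference) / reference

# Hypothetical counts: 300 positive and 700 negative tumour cells.
tps = tumour_proportion_score(n_positive=300, n_negative=700)  # 30.0
```

Detection errors enter the score through both counts, which is why the paper's 11.1% relative TPS error is larger than one might expect from the per-class F1-scores alone.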
41
Alom Z, Asari VK, Parwani A, Taha TM. Microscopic nuclei classification, segmentation, and detection with improved deep convolutional neural networks (DCNN). Diagn Pathol 2022; 17:38. [PMID: 35436941 PMCID: PMC9017017 DOI: 10.1186/s13000-022-01189-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Accepted: 12/30/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Nuclei classification, segmentation, and detection from pathological images are challenging tasks due to cellular heterogeneity in Whole Slide Images (WSI). METHODS In this work, we propose advanced DCNN models for nuclei classification, segmentation, and detection tasks. The Densely Connected Neural Network (DCNN) and Densely Connected Recurrent Convolutional Network (DCRN) models are applied for the nuclei classification tasks. The Recurrent Residual U-Net (R2U-Net) and an R2U-Net-based regression model named the University of Dayton Net (UD-Net) are applied for nuclei segmentation and detection tasks, respectively. The experiments are conducted on publicly available datasets: the Routine Colon Cancer (RCC) dataset for classification and detection, and the 2018 Nuclei Segmentation Challenge dataset for segmentation. The experimental results were evaluated with five-fold cross-validation, and the average testing results are compared against existing approaches in terms of precision, recall, Dice coefficient (DC), mean squared error (MSE), F1-score, and overall testing accuracy using pixel- and cell-level analysis. RESULTS The results demonstrate around 2.6% and 1.7% higher performance in terms of F1-score for nuclei classification and detection tasks, respectively, when compared to a recently published DCNN-based method. Also, for nuclei segmentation, the R2U-Net shows around 91.90% average testing accuracy in terms of DC, which is around 1.54% higher than the U-Net model. CONCLUSION The proposed methods demonstrate robustness with better quantitative and qualitative results in three different tasks for analyzing WSI.
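The headline metrics can be reproduced from per-fold detection counts. A minimal sketch of precision, recall, and F1 averaged over five cross-validation folds (the counts are invented for illustration, not taken from the paper):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from detection counts for one fold."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical (tp, fp, fn) counts for five cross-validation folds.
folds = [(90, 10, 10), (85, 15, 10), (88, 12, 9), (92, 8, 11), (87, 13, 12)]
scores = [prf1(*fold) for fold in folds]
mean_f1 = sum(f1 for _, _, f1 in scores) / len(scores)
```

Averaging the per-fold scores (rather than pooling the counts first) is the usual convention for reporting k-fold cross-validation results.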
Affiliation(s)
- Zahangir Alom
- Department of Pathology, St. Jude Children's Research Hospital, Memphis, TN, USA.
- Vijayan K Asari
- Department of Electrical and Computer Engineering, University of Dayton, Dayton, OH, USA
- Anil Parwani
- Department of Pathology, The Ohio State University, Columbus, OH, USA
- Tarek M Taha
- Department of Electrical and Computer Engineering, University of Dayton, Dayton, OH, USA
42
Pantelis AG, Panagopoulou PA, Lapatsanis DP. Artificial Intelligence and Machine Learning in the Diagnosis and Management of Gastroenteropancreatic Neuroendocrine Neoplasms-A Scoping Review. Diagnostics (Basel) 2022; 12:874. [PMID: 35453922 PMCID: PMC9027316 DOI: 10.3390/diagnostics12040874] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2022] [Revised: 03/27/2022] [Accepted: 03/29/2022] [Indexed: 12/21/2022] Open
Abstract
Neuroendocrine neoplasms (NENs) and tumors (NETs) are rare neoplasms that may affect any part of the gastrointestinal system. In this scoping review, we map existing evidence on the role of artificial intelligence, machine learning, and deep learning in the diagnosis and management of NENs of the gastrointestinal system. After applying inclusion and exclusion criteria, we retrieved 44 studies with 53 outcome analyses. We then classified the papers according to the type of NET studied (26 pan-NETs, 59.1%; 3 metastatic liver NETs, 6.8%; 2 small intestinal NETs, 4.5%; colorectal, rectal, non-specified gastroenteropancreatic, and non-specified gastrointestinal NETs had 1 study each, 2.3%). The most frequently used AI algorithms were Support Vector Classification/Machine (14 analyses, 29.8%), Convolutional Neural Network and Random Forest (10 analyses each, 21.3%), Random Forest (9 analyses, 19.1%), Logistic Regression (8 analyses, 17.0%), and Decision Tree (6 analyses, 12.8%). There was high heterogeneity in the description of the prediction models, the structure of the datasets, and the performance metrics, and the majority of studies did not report any external validation set. Future studies should aim at incorporating a uniform structure in accordance with existing guidelines for purposes of reproducibility and research quality, which are prerequisites for integration into clinical practice.
Affiliation(s)
- Athanasios G. Pantelis
- 4th Department of Surgery, Evaggelismos General Hospital of Athens, 10676 Athens, Greece
- Dimitris P. Lapatsanis
- 4th Department of Surgery, Evaggelismos General Hospital of Athens, 10676 Athens, Greece
43
Zhu X, Wu Y, Hu H, Zhuang X, Yao J, Ou D, Li W, Song M, Feng N, Xu D. Medical lesion segmentation by combining multi‐modal images with modality weighted UNet. Med Phys 2022; 49:3692-3704. [PMID: 35312077 DOI: 10.1002/mp.15610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 02/25/2022] [Accepted: 03/04/2022] [Indexed: 11/09/2022] Open
Affiliation(s)
- Xiner Zhu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yichao Wu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Haoji Hu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Xianwei Zhuang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Jincao Yao
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Di Ou
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Wei Li
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Mei Song
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Na Feng
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Dong Xu
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
44
Guo Z, Lin X, Hui Y, Wang J, Zhang Q, Kong F. Circulating Tumor Cell Identification Based on Deep Learning. Front Oncol 2022; 12:843879. [PMID: 35252012 PMCID: PMC8889528 DOI: 10.3389/fonc.2022.843879] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Accepted: 01/21/2022] [Indexed: 12/18/2022] Open
Abstract
As a major driver of tumor metastasis, the circulating tumor cell (CTC) is one of the critical biomarkers for cancer diagnosis and prognosis. On the one hand, the CTC count is closely related to the prognosis of tumor patients; on the other hand, as a simple blood test with the advantages of safety, low cost, and repeatability, the CTC test has important reference value for determining clinical outcomes and studying the mechanisms of drug resistance. However, determining CTCs usually requires considerable effort from pathologists and is error-prone owing to inexperience and fatigue. In this study, we developed a novel convolutional neural network (CNN) method to automatically detect CTCs in patients' peripheral blood based on immunofluorescence in situ hybridization (imFISH) images. We collected the peripheral blood of 776 patients from Chifeng Municipal Hospital in China, and then used the Cyttel method to deplete leukocytes and enrich CTCs. CTCs were identified by imFISH with CD45+ and DAPI+ immunofluorescence staining and a chromosome 8 centromeric probe (CEP8+). The sensitivity and specificity of the traditional CNN prediction were 95.3% and 91.7%, respectively, and the sensitivity and specificity of transfer learning were 97.2% and 94.0%, respectively. The traditional CNN model and transfer learning method introduced in this paper can detect CTCs with high sensitivity, which has clinical reference value for judging prognosis and diagnosing metastasis.
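Sensitivity and specificity summarize the binary CTC/non-CTC confusion matrix. A minimal sketch with hypothetical counts chosen to mirror the transfer-learning figures (the counts themselves are invented):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).

    These are the two metrics reported for CTC classification, e.g.
    97.2% / 94.0% for the transfer-learning model in the study.
    """
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

# Hypothetical confusion counts for cells labelled CTC vs non-CTC.
sens, spec = sensitivity_specificity(tp=97, fn=3, tn=94, fp=6)
```

Sensitivity controls how many true CTCs are missed, which matters most here because CTCs are rare events in peripheral blood.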
Affiliation(s)
- Zhifeng Guo
- Department of Oncology, Chifeng Municipal Hospital, Chifeng, China
- Xiaoxi Lin
- Department of Oncology, Chifeng Municipal Hospital, Chifeng, China
- Yan Hui
- Department of Oncology, Chifeng Municipal Hospital, Chifeng, China
- Jingchun Wang
- Department of Oncology, Chifeng Municipal Hospital, Chifeng, China
- Qiuli Zhang
- Department of Oncology, Chifeng Municipal Hospital, Chifeng, China
- Fanlong Kong
- Department of Oncology, Chifeng Municipal Hospital, Chifeng, China
45
Wu Y, Cheng M, Huang S, Pei Z, Zuo Y, Liu J, Yang K, Zhu Q, Zhang J, Hong H, Zhang D, Huang K, Cheng L, Shao W. Recent Advances of Deep Learning for Computational Histopathology: Principles and Applications. Cancers (Basel) 2022; 14:1199. [PMID: 35267505 PMCID: PMC8909166 DOI: 10.3390/cancers14051199] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 02/16/2022] [Accepted: 02/22/2022] [Indexed: 01/10/2023] Open
Abstract
With the remarkable success of digital histopathology, we have witnessed a rapid expansion of the use of computational methods for the analysis of digital pathology and biopsy image patches. However, the unprecedented scale and heterogeneous patterns of histopathological images have presented critical computational bottlenecks requiring new computational histopathology tools. Recently, deep learning technology has been extremely successful in the field of computer vision, which has also boosted considerable interest in digital pathology applications. Deep learning and its extensions have opened several avenues to tackle many challenging histopathological image analysis problems including color normalization, image segmentation, and the diagnosis/prognosis of human cancers. In this paper, we provide a comprehensive up-to-date review of the deep learning methods for digital H&E-stained pathology image analysis. Specifically, we first describe recent literature that uses deep learning for color normalization, one essential research direction for H&E-stained histopathological image analysis. Following the discussion of color normalization, we review applications of deep learning methods to various H&E-stained image analysis tasks such as nuclei and tissue segmentation. We also summarize several key clinical studies that use deep learning for the diagnosis and prognosis of human cancers from H&E-stained histopathological images. Finally, online resources and open research problems on pathological image analysis are also provided in this review for the convenience of researchers who are interested in this exciting field.
Affiliation(s)
- Yawen Wu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Michael Cheng
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
- Shuo Huang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Zongxiang Pei
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Yingli Zuo
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Jianxin Liu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Kai Yang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Qi Zhu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Jie Zhang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
- Honghai Hong
- Department of Clinical Laboratory, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510006, China
- Daoqiang Zhang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Kun Huang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
- Liang Cheng
- Departments of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Wei Shao
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
46
Yao K, Sun J, Huang K, Jing L, Liu H, Huang D, Jude C. Analyzing Cell-Scaffold Interaction through Unsupervised 3D Nuclei Segmentation. Int J Bioprint 2022; 8:495. [PMID: 35187282 PMCID: PMC8852265 DOI: 10.18063/ijb.v8i1.495] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 12/07/2021] [Indexed: 11/23/2022] Open
Abstract
Fibrous scaffolds have been extensively used in three-dimensional (3D) cell culture systems to establish in vitro models in cell biology, tissue engineering, and drug screening. It is a common practice to characterize cell behaviors on such scaffolds using confocal laser scanning microscopy (CLSM). As a noninvasive technology, CLSM images can be utilized to describe cell-scaffold interaction under varied morphological features, biomaterial composition, and internal structure. Unfortunately, such information has not been fully translated and delivered to researchers due to the lack of effective cell segmentation methods. We developed herein an end-to-end model called Aligned Disentangled Generative Adversarial Network (AD-GAN) for 3D unsupervised nuclei segmentation of CLSM images. AD-GAN utilizes representation disentanglement to separate content representation (the underlying nuclei spatial structure) from style representation (the rendering of the structure) and align the disentangled content in the latent space. The CLSM images collected from fibrous scaffold-based cultures of A549, 3T3, and HeLa cells were utilized for the nuclei segmentation study. Compared with existing methods such as Squassh and CellProfiler, our AD-GAN can effectively and efficiently distinguish nuclei with the preserved shape and location information. Building on such information, we can rapidly screen cell-scaffold interaction in terms of adhesion, migration, and proliferation, so as to improve scaffold design.
Affiliation(s)
- Kai Yao
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University, 111 Ren'ai Road, Suzhou, Jiangsu 215123, China; School of Engineering, University of Liverpool, The Quadrangle, Brownlow Hill, L69 3GH, UK
- Jie Sun
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University, 111 Ren'ai Road, Suzhou, Jiangsu 215123, China
- Kaizhu Huang
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University, 111 Ren'ai Road, Suzhou, Jiangsu 215123, China
- Linzhi Jing
- National University of Singapore (Suzhou) Research Institute, 377 Linquan Street, Suzhou, Jiangsu 215123, China
- Hang Liu
- Department of Food Science and Technology, National University of Singapore, 3 Science Drive 2, 117542, Singapore
- Dejian Huang
- National University of Singapore (Suzhou) Research Institute, 377 Linquan Street, Suzhou, Jiangsu 215123, China; Department of Food Science and Technology, National University of Singapore, 3 Science Drive 2, 117542, Singapore
- Curran Jude
- School of Engineering, University of Liverpool, The Quadrangle, Brownlow Hill, L69 3GH, UK
47
Adamson PM, Bhattbhatt V, Principi S, Beriwal S, Strain LS, Offe M, Wang AS, Vo N, Schmidt TG, Jordan P. Technical note: Evaluation of a V‐Net autosegmentation algorithm for pediatric CT scans: Performance, generalizability and application to patient‐specific CT dosimetry. Med Phys 2022; 49:2342-2354. [PMID: 35128672 PMCID: PMC9007850 DOI: 10.1002/mp.15521] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2021] [Revised: 12/23/2021] [Accepted: 01/08/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE This study developed and evaluated a fully convolutional network (FCN) for pediatric CT organ segmentation and investigated the generalizability of the FCN across image heterogeneities such as CT scanner model protocols and patient age. We also evaluated the autosegmentation models as part of a software tool for patient-specific CT dose estimation. METHODS A collection of 359 pediatric CT datasets with expert organ contours were used for model development and evaluation. Autosegmentation models were trained for each organ using a modified FCN 3D V-Net. An independent test set of 60 patients was withheld for testing. To evaluate the impact of CT scanner model protocol and patient age heterogeneities, separate models were trained using a subset of scanner model protocols and pediatric age groups. Train and test sets were split to answer questions about the generalizability of pediatric FCN autosegmentation models to unseen age groups and scanner model protocols, as well as the merit of scanner model protocol or age-group-specific models. Finally, the organ contours resulting from the autosegmentation models were applied to patient-specific dose maps to evaluate the impact of segmentation errors on organ dose estimation. RESULTS Results demonstrate that the autosegmentation models generalize to CT scanner acquisition and reconstruction methods which were not present in the training dataset. While models are not equally generalizable across age groups, age-group-specific models do not hold any advantage over combining heterogeneous age groups into a single training set. Dice similarity coefficient (DSC) and mean surface distance results are presented for 19 organ structures, for example, median DSC of 0.52 (duodenum), 0.74 (pancreas), 0.92 (stomach), and 0.96 (heart). The FCN models achieve a mean dose error within 5% of expert segmentations for all 19 organs except for the spinal canal, where the mean error was 6.31%. 
CONCLUSIONS Overall, these results are promising for the adoption of FCN autosegmentation models for pediatric CT, including applications for patient-specific CT dose estimation.
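The abstract above reports per-organ Dice similarity coefficient (DSC) values. As a point of reference, this is a minimal sketch of how DSC is computed for a pair of binary segmentation masks; the toy masks below are illustrative only, not data from the study:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 3x3 masks standing in for one slice of an organ contour
pred = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
truth = np.array([[1, 1, 0],
                  [0, 0, 0],
                  [0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*2 / (3+2) = 0.8
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which is why the duodenum's 0.52 signals a much harder structure than the heart's 0.96.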
Collapse
Affiliation(s)
- Sara Principi
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53201, United States
- Linda S. Strain
- Department of Radiology, Children's Wisconsin and Medical College of Wisconsin, Milwaukee, WI 53226, United States
- Michael Offe
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53201, United States
- Adam S. Wang
- Department of Radiology, Stanford University, Stanford, CA 94305, United States
- Nghia‐Jack Vo
- Department of Radiology, Children's Wisconsin and Medical College of Wisconsin, Milwaukee, WI 53226, United States
- Taly Gilat Schmidt
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53201, United States
- Petr Jordan
- Varian Medical Systems, Palo Alto, CA 94304, United States
Collapse
|
48
|
McGenity C, Wright A, Treanor D. AIM in Surgical Pathology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
49
|
Götz T, Göb S, Sawant S, Erick X, Wittenberg T, Schmidkonz C, Tomé A, Lang E, Ramming A. Number of necessary training examples for Neural Networks with different number of trainable parameters. J Pathol Inform 2022; 13:100114. [PMID: 36268092 PMCID: PMC9577052 DOI: 10.1016/j.jpi.2022.100114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/12/2021] [Indexed: 11/03/2022] Open
Abstract
The aim of this work was to reduce network complexity with a concomitant reduction in the number of necessary training examples. The focus was thus on how proper evaluation metrics depend on the number of adjustable parameters of the considered deep neural network. The data set comprised hematoxylin and eosin (H&E) stained cell images provided by various clinics. We used a deep convolutional neural network to determine the relation between a model's complexity, its concomitant set of parameters, and the size of the training sample necessary to achieve a certain classification accuracy. The complexity of the deep neural networks was reduced by pruning a certain fraction of filters in the network. As expected, the unpruned neural network showed the best performance. The network with the highest number of trainable parameters achieved, within the estimated standard error of the optimized cross-entropy loss, the best results up to 30% pruning. Strongly pruned networks are hardly viable, and their classification accuracy declines quickly with a decreasing number of training patterns. However, up to a pruning ratio of 40%, we found comparable performance of pruned and unpruned deep convolutional neural networks (DCNN) and densely connected convolutional networks (DCCN).
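The abstract above varies model complexity by pruning a fraction of filters, which shrinks the trainable-parameter count. A minimal sketch of that relation for a small convolutional stack follows; the layer widths, 3-channel input, and simple rounding rule are assumptions for illustration, not the authors' architecture:

```python
def conv_params(in_ch: int, out_ch: int, k: int = 3, bias: bool = True) -> int:
    """Trainable parameters of a 2D conv layer with k x k kernels."""
    return out_ch * (in_ch * k * k + (1 if bias else 0))

def pruned_params(channels: list[int], prune_ratio: float, k: int = 3) -> int:
    """Total conv parameters after removing a fraction of filters from
    every layer. Pruning layer i's output filters also shrinks the
    input channels of layer i+1, so savings compound across layers."""
    kept = [max(1, int(c * (1 - prune_ratio))) for c in channels]
    total = 0
    in_ch = 3  # assumed RGB input, e.g. an H&E image patch
    for out_ch in kept:
        total += conv_params(in_ch, out_ch, k)
        in_ch = out_ch
    return total

layers = [32, 64, 128]  # hypothetical filter counts per layer
for ratio in (0.0, 0.3, 0.4):
    print(f"prune {ratio:.0%}: {pruned_params(layers, ratio)} parameters")
```

Because pruning shrinks both a layer's filters and the next layer's inputs, a 40% filter-pruning ratio removes well over half of the parameters in this toy stack, which is why the parameter count, not the pruning ratio alone, is the quantity the abstract relates to training-set size.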
Collapse
|
50
|
Liang H, Cheng Z, Zhong H, Qu A, Chen L. A region-based convolutional network for nuclei detection and segmentation in microscopy images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103276] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|