1
Du Y, Chen X, Fu Y. Multiscale transformers and multi-attention mechanism networks for pathological nuclei segmentation. Sci Rep 2025; 15:12549. [PMID: 40221423 PMCID: PMC11993704 DOI: 10.1038/s41598-025-90397-2]
Abstract
Pathology nuclei segmentation is crucial for computer-aided diagnosis in pathology. However, the high density of nuclei, complex backgrounds, and blurred cell boundaries make pathology cell segmentation a challenging problem. In this paper, we propose a network model for pathology image segmentation based on a multi-scale Transformer multi-attention mechanism. To address the difficulty of extracting features caused by the high density of cell nuclei and the complexity of the background, a dense attention module is embedded in the encoder, which improves the learning of target cell information and minimizes target information loss. Additionally, to address the poor segmentation accuracy caused by blurred cell boundaries, a Multi-scale Transformer Attention module is embedded between the encoder and decoder, which improves the transfer of boundary feature information and makes the segmented cell boundaries more accurate. Experimental results on the MoNuSeg, GlaS and CoNSeP datasets demonstrate the network's superior accuracy.
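The attention-gating idea described in this abstract can be illustrated with a minimal, framework-free sketch. This is not the authors' implementation: the gate here is hand-crafted (a sigmoid of the feature value itself) rather than learned, and all names and shapes are invented for illustration. The point is only the mechanism: per-pixel weights in (0, 1) rescale a feature map so strong target responses pass through and weak background responses are suppressed.

```python
import math

def sigmoid(x):
    """Logistic function, mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def attention_gate(features):
    """Toy spatial attention: derive a per-pixel gate from the feature map
    and apply it multiplicatively. Learned attention modules compute these
    weights from trained parameters; here the gate is just sigmoid(value)."""
    return [[sigmoid(v) * v for v in row] for row in features]

# A 2x2 "feature map": strong, negative, zero, and very strong activations.
feats = [[2.0, -2.0], [0.0, 4.0]]
out = attention_gate(feats)
# Strong activations are kept almost unchanged; weak/negative ones shrink.
```

A learned gate would replace `sigmoid(v)` with a small network over local context, but the multiplicative rescaling step is the same.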
Affiliation(s)
- Yongzhao Du
- College of Engineering, Huaqiao University, Fujian, 362021, China
- College of Internet of Things Industry, Huaqiao University, Fujian, 362021, China
- Xin Chen
- College of Engineering, Huaqiao University, Fujian, 362021, China
- Yuqing Fu
- College of Engineering, Huaqiao University, Fujian, 362021, China
- College of Internet of Things Industry, Huaqiao University, Fujian, 362021, China
2
Kim D, Sundling KE, Virk R, Thrall MJ, Alperstein S, Bui MM, Chen-Yost H, Donnelly AD, Lin O, Liu X, Madrigal E, Michelow P, Schmitt FC, Vielh PR, Zakowski MF, Parwani AV, Jenkins E, Siddiqui MT, Pantanowitz L, Li Z. Digital cytology part 2: artificial intelligence in cytology: a concept paper with review and recommendations from the American Society of Cytopathology Digital Cytology Task Force. J Am Soc Cytopathol 2024; 13:97-110. [PMID: 38158317 DOI: 10.1016/j.jasc.2023.11.005]
Abstract
Digital cytology and artificial intelligence (AI) are gaining greater adoption in the cytology laboratory. However, peer-reviewed real-world data and literature are lacking regarding the current clinical landscape. The American Society of Cytopathology, in conjunction with the International Academy of Cytology and the Digital Pathology Association, established a special task force comprising 20 members with expertise and/or interest in digital cytology. The aim of the group was to investigate the feasibility of incorporating digital cytology, specifically cytology whole slide scanning and AI applications, into the workflow of the laboratory. In turn, the impact on cytopathologists, cytologists (cytotechnologists), and cytology departments was also assessed. The task force reviewed existing literature on digital cytology, conducted a worldwide survey, and held a virtual roundtable discussion on digital cytology and AI with multiple industry corporate representatives. This white paper, presented in 2 parts, summarizes the current state of digital cytology and AI in global cytology practice. Part 1, presented as a separate paper, details a review and best practice recommendations for incorporating digital cytology into practice. Part 2, presented here, provides a comprehensive review of AI in cytology practice along with best practice recommendations and legal considerations. Additionally, the results of the global cytology survey, highlighting current AI practices and attitudes in various laboratories, are reported.
Affiliation(s)
- David Kim
- Department of Pathology & Laboratory Medicine, Memorial Sloan-Kettering Cancer Center, New York, New York
- Kaitlin E Sundling
- The Wisconsin State Laboratory of Hygiene and Department of Pathology and Laboratory Medicine, University of Wisconsin-Madison, Madison, Wisconsin
- Renu Virk
- Department of Pathology and Cell Biology, Columbia University, New York, New York
- Michael J Thrall
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Houston, Texas
- Susan Alperstein
- Department of Pathology and Laboratory Medicine, New York Presbyterian-Weill Cornell Medicine, New York, New York
- Marilyn M Bui
- The Department of Pathology, Moffitt Cancer Center & Research Institute, Tampa, Florida
- Amber D Donnelly
- Diagnostic Cytology Education, University of Nebraska Medical Center, College of Allied Health Professions, Omaha, Nebraska
- Oscar Lin
- Department of Pathology & Laboratory Medicine, Memorial Sloan-Kettering Cancer Center, New York, New York
- Xiaoying Liu
- Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire
- Emilio Madrigal
- Department of Pathology, Massachusetts General Hospital, Boston, Massachusetts
- Pamela Michelow
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa; Department of Pathology, National Health Laboratory Services, Johannesburg, South Africa
- Fernando C Schmitt
- Department of Pathology, Medical Faculty of Porto University, Porto, Portugal
- Philippe R Vielh
- Department of Pathology, Medipath and American Hospital of Paris, Paris, France
- Anil V Parwani
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Momin T Siddiqui
- Department of Pathology and Laboratory Medicine, New York Presbyterian-Weill Cornell Medicine, New York, New York
- Liron Pantanowitz
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
- Zaibo Li
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
3
Song Y, Zhang A, Zhou J, Luo Y, Lin Z, Zhou T. Overlapping cytoplasms segmentation via constrained multi-shape evolution for cervical cancer screening. Artif Intell Med 2024; 148:102756. [PMID: 38325933 DOI: 10.1016/j.artmed.2023.102756]
Abstract
Segmenting overlapping cytoplasms in cervical smear images is a clinically essential task for quantitatively measuring cell-level features to screen cervical cancer. This task, however, remains rather challenging, mainly due to the deficiency of intensity (or color) information in the overlapping region. Although shape prior-based models that compensate for intensity deficiency by introducing prior shape information about the cytoplasm are firmly established, they often yield visually implausible results, as they model shape priors only by limited shape hypotheses about the cytoplasm, exploit cytoplasm-level shape priors alone, and impose no shape constraint on the resulting shape of the cytoplasm. In this paper, we present an effective shape prior-based approach, called constrained multi-shape evolution, that segments all overlapping cytoplasms in the clump simultaneously by jointly evolving each cytoplasm's shape guided by the modeled shape priors. We model local shape priors (cytoplasm-level) by an infinitely large shape hypothesis set which contains all possible shapes of the cytoplasm. In the shape evolution, we compensate for intensity deficiency in the segmentation by introducing not only the modeled local shape priors but also global shape priors (clump-level) modeled by considering mutual shape constraints of cytoplasms in the clump. We also constrain the resulting shape in each evolution to be in the built shape hypothesis set, further reducing implausible segmentation results. We evaluated the proposed method on two typical cervical smear datasets, and the extensive experimental results confirm its effectiveness.
Affiliation(s)
- Youyi Song
- School of Biomedical Engineering, Shenzhen University, Shenzhen, 518060, China
- Ao Zhang
- School of Biomedical Engineering, Shenzhen University, Shenzhen, 518060, China
- Jinglin Zhou
- School of Philosophy, Fudan University, Shanghai, 200433, China
- Yu Luo
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, 510006, China
- Zhizhe Lin
- School of Information and Communication Engineering, Hainan University, Haikou, 570228, China
- Teng Zhou
- School of Cyberspace Security, Hainan University, Haikou, 570228, China
4
Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. [PMID: 37918261 DOI: 10.1016/j.compbiomed.2023.107617]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information which can help us understand the working mechanisms of the brain. However, it is a challenging task to process and analyze these data because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in the cells and tissues, and imaging artifacts. Due to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges and they perform superiorly in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China; Research Institute of Hunan University in Chongqing, Chongqing, 401135, China
- Weixun Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
5
Rasheed A, Shirazi SH, Umar AI, Shahzad M, Yousaf W, Khan Z. Cervical cell's nucleus segmentation through an improved UNet architecture. PLoS One 2023; 18:e0283568. [PMID: 37788295 PMCID: PMC10547184 DOI: 10.1371/journal.pone.0283568]
Abstract
Precise segmentation of the nucleus is vital for computer-aided diagnosis (CAD) in cervical cytology. Automated delineation of the cervical nucleus faces notorious challenges due to clumped cells, color variation, noise, and fuzzy boundaries. Owing to its standout performance in medical image analysis, deep learning has gained attention over competing techniques. We have proposed a deep learning model, namely C-UNet (Cervical-UNet), to segment cervical nuclei from overlapped, fuzzy, and blurred cervical cell smear images. Cross-scale feature integration based on a bi-directional feature pyramid network (BiFPN) and a wide context unit are used in the encoder of the classic UNet architecture to learn spatial and local features. The decoder of the improved network has two inter-connected decoders that mutually optimize and integrate these features to produce segmentation masks. Each component of the proposed C-UNet is extensively evaluated to judge its effectiveness on a complex cervical cell dataset. Different data augmentation techniques were employed to enhance the proposed model's training. Experimental results have shown that the proposed model outperformed extant models, i.e., CGAN (Conditional Generative Adversarial Network), DeepLabv3, Mask-RCNN (Region-Based Convolutional Neural Network), and FCN (Fully Convolutional Network), on the dataset employed in this study as well as on the ISBI-2014 and ISBI-2015 (International Symposium on Biomedical Imaging) datasets. The C-UNet achieved an object-level accuracy of 93%, pixel-level accuracy of 92.56%, object-level recall of 95.32%, pixel-level recall of 92.27%, Dice coefficient of 93.12%, and F1-score of 94.96% on the complex cervical image dataset.
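The pixel-level accuracy, recall, and Dice coefficient reported above are standard mask-overlap metrics. A minimal sketch of how they are computed from binary masks (illustrative only, not the C-UNet evaluation code; masks are given as flat lists of 0/1 pixels):

```python
def seg_metrics(pred, truth):
    """Pixel-level accuracy, recall, and Dice coefficient for binary masks.
    `pred` and `truth` are equal-length flat lists of 0/1 pixel labels."""
    assert len(pred) == len(truth)
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    accuracy = (tp + tn) / len(pred)                          # all correct pixels
    recall = tp / (tp + fn) if tp + fn else 0.0               # found foreground
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0  # overlap score
    return accuracy, recall, dice

pred  = [1, 1, 0, 0, 1, 0]   # predicted nucleus mask (flattened)
truth = [1, 0, 0, 0, 1, 1]   # ground-truth mask
acc, rec, dice = seg_metrics(pred, truth)
```

Object-level variants of these metrics are computed per connected component rather than per pixel, but follow the same counting logic.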
Affiliation(s)
- Assad Rasheed
- Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Syed Hamad Shirazi
- Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Arif Iqbal Umar
- Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Muhammad Shahzad
- Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Waqas Yousaf
- Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
- Zakir Khan
- Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
6
Sauter D, Lodde G, Nensa F, Schadendorf D, Livingstone E, Kukuk M. Deep learning in computational dermatopathology of melanoma: A technical systematic literature review. Comput Biol Med 2023; 163:107083. [PMID: 37315382 DOI: 10.1016/j.compbiomed.2023.107083]
Abstract
Deep learning (DL) has become one of the major approaches in computational dermatopathology, evidenced by a significant increase in publications on this topic in the current literature. We aim to provide a structured and comprehensive overview of peer-reviewed publications on DL applied to dermatopathology focused on melanoma. In comparison to well-published DL methods on non-medical images (e.g., classification on ImageNet), this field of application comprises a specific set of challenges, such as staining artifacts, large gigapixel images, and various magnification levels. Thus, we are particularly interested in the pathology-specific technical state of the art. We also aim to summarize the best performances achieved thus far with respect to accuracy, along with an overview of self-reported limitations. Accordingly, we conducted a systematic literature review of peer-reviewed journal and conference articles published between 2012 and 2022 in the databases ACM Digital Library, Embase, IEEE Xplore, PubMed, and Scopus, expanded by forward and backward searches, to identify 495 potentially eligible studies. After screening for relevance and quality, a total of 54 studies were included. We qualitatively summarized and analyzed these studies from technical, problem-oriented, and task-oriented perspectives. Our findings suggest that the technical aspects of DL for histopathology in melanoma can be further improved. DL methodology was adopted later in this field, which still lacks the wider adoption of DL methods already shown to be effective for other applications. We also discuss upcoming trends toward ImageNet-based feature extraction and larger models. While DL has achieved human-competitive accuracy in routine pathological tasks, its performance on advanced tasks is still inferior to wet-lab testing, for example. Finally, we discuss the challenges impeding the translation of DL methods to clinical practice and provide insight into future research directions.
Affiliation(s)
- Daniel Sauter
- Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
- Georg Lodde
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Felix Nensa
- Institute for AI in Medicine (IKIM), University Hospital Essen, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, 45147 Essen, Germany
- Dirk Schadendorf
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Markus Kukuk
- Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
7
Liang Y, Feng S, Liu Q, Kuang H, Liu J, Liao L, Du Y, Wang J. Exploring Contextual Relationships for Cervical Abnormal Cell Detection. IEEE J Biomed Health Inform 2023; 27:4086-4097. [PMID: 37192032 DOI: 10.1109/jbhi.2023.3276919]
Abstract
Cervical abnormal cell detection is a challenging task because the morphological discrepancies between abnormal and normal cells are usually subtle. To determine whether a cervical cell is normal or abnormal, cytopathologists always take surrounding cells as references to identify its abnormality. To mimic this behavior, we propose to explore contextual relationships to boost the performance of cervical abnormal cell detection. Specifically, both contextual relationships between cells and between cells and the global image are exploited to enhance the features of each region of interest (RoI) proposal. Accordingly, two modules, dubbed the RoI-relationship attention module (RRAM) and the global RoI attention module (GRAM), are developed, and their combination strategies are also investigated. We establish a strong baseline by using Double-Head Faster R-CNN with a feature pyramid network (FPN) and integrate RRAM and GRAM into it to validate the effectiveness of the proposed modules. Experiments conducted on a large cervical cell detection dataset reveal that introducing either RRAM or GRAM achieves better average precision (AP) than the baseline methods. Moreover, when cascading RRAM and GRAM, our method outperforms the state-of-the-art (SOTA) methods. Furthermore, we show that the proposed feature-enhancing scheme can facilitate image- and smear-level classification.
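The core idea of enhancing each RoI with attention over other RoIs can be sketched, in heavily simplified form, as dot-product attention among RoI feature vectors. This is an illustrative toy under assumed names and shapes, not the RRAM/GRAM implementation: each RoI's feature is augmented with an attention-weighted sum of all RoI features, mimicking a candidate cell "looking at" its neighbours.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def enhance_rois(rois):
    """For each RoI feature vector (query), compute attention weights from
    dot-product similarity with every RoI (keys), form the weighted context
    vector, and add it to the original feature (residual-style)."""
    enhanced = []
    for q in rois:
        weights = softmax([dot(q, k) for k in rois])
        context = [sum(w * k[i] for w, k in zip(weights, rois))
                   for i in range(len(q))]
        enhanced.append([qi + ci for qi, ci in zip(q, context)])
    return enhanced

# Three toy RoI feature vectors: two similar cells and one dissimilar cell.
rois = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
out = enhance_rois(rois)
```

A real module would use learned query/key/value projections and operate on high-dimensional pooled RoI features, but the weighting-and-aggregation step is the same.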
8
Moscalu M, Moscalu R, Dascălu CG, Țarcă V, Cojocaru E, Costin IM, Țarcă E, Șerban IL. Histopathological Images Analysis and Predictive Modeling Implemented in Digital Pathology-Current Affairs and Perspectives. Diagnostics (Basel) 2023; 13:2379. [PMID: 37510122 PMCID: PMC10378281 DOI: 10.3390/diagnostics13142379]
Abstract
In modern clinical practice, digital pathology has an essential role, being a technological necessity for work in pathological anatomy laboratories. The development of information technology has greatly facilitated the management of digital images and their sharing for clinical use; methods to analyze digital histopathological images, based on artificial intelligence techniques and specific models, quantify the required information with significantly higher consistency and precision than optical microscopy. In parallel, unprecedented advances in machine learning facilitate, through the synergy of artificial intelligence and digital pathology, diagnosis based on image analysis, previously limited only to certain specialties. Therefore, the integration of digital images into the study of pathology, combined with advanced algorithms and computer-assisted diagnostic techniques, extends the boundaries of the pathologist's vision beyond the microscopic image and allows the specialist to apply and integrate his knowledge and experience adequately. We conducted a search in PubMed on the topic of digital pathology and its applications to quantify the current state of knowledge. We found that computer-aided image analysis has superior potential to identify, extract, and quantify features in more detail than a human pathologist can; it performs tasks that exceed manual capacity and can produce new diagnostic algorithms and prediction models, applicable in translational research, that are able to identify new characteristics of diseases based on changes at the cellular and molecular level.
Affiliation(s)
- Mihaela Moscalu
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Roxana Moscalu
- Wythenshawe Hospital, Manchester University NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester M139PT, UK
- Cristina Gena Dascălu
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Viorel Țarcă
- Department of Preventive Medicine and Interdisciplinarity, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Elena Cojocaru
- Department of Morphofunctional Sciences I, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Ioana Mădălina Costin
- Faculty of Medicine, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Elena Țarcă
- Department of Surgery II-Pediatric Surgery, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
- Ionela Lăcrămioara Șerban
- Department of Morpho-Functional Sciences II, Faculty of Medicine, "Grigore T. Popa" University of Medicine and Pharmacy, 700115 Iassy, Romania
9
Miranda Ruiz F, Lahrmann B, Bartels L, Krauthoff A, Keil A, Härtel S, Tao AS, Ströbel P, Clarke MA, Wentzensen N, Grabe N. CNN stability training improves robustness to scanner and IHC-based image variability for epithelium segmentation in cervical histology. Front Med (Lausanne) 2023; 10:1173616. [PMID: 37476610 PMCID: PMC10354251 DOI: 10.3389/fmed.2023.1173616]
Abstract
Background: In digital pathology, image properties such as color, brightness, contrast and blurriness may vary based on the scanner and sample preparation. Convolutional Neural Networks (CNNs) are sensitive to these variations and may underperform on images from a different domain than the one used for training. Robustness to these image property variations is required to enable the use of deep learning in clinical practice and large-scale clinical research.
Aims: CNN Stability Training (CST) is proposed and evaluated as a method to increase CNN robustness to scanner and immunohistochemistry (IHC)-based image variability.
Methods: CST was applied to segment epithelium in immunohistological cervical Whole Slide Images (WSIs). CST randomly distorts input tiles and factors the difference between the CNN predictions for the original and distorted inputs into the loss function. CNNs were trained using 114 p16-stained WSIs from the same scanner, and evaluated on 6 WSI test sets, each with 23 to 24 WSIs of the same tissue but different scanner/IHC combinations. Relative robustness (rAUC) was measured as the difference between the AUC on the training-domain test set (i.e., baseline test set) and the remaining test sets.
Results: Across all test sets, the AUC of CST models outperformed "No CST" models (AUC: 0.940-0.989 vs. 0.905-0.986, p < 1e-8), and obtained improved robustness (rAUC: [-0.038, -0.003] vs. [-0.081, -0.002]). At the WSI level, CST models showed an increase in performance in 124 of the 142 WSIs. CST models also outperformed models trained with random on-the-fly data augmentation (DA) in all test sets ([0.002, 0.021], p < 1e-6).
Conclusion: CST offers a path to improve CNN performance without the need for more data and allows customizing distortions to specific use cases. A Python implementation of CST is publicly available at https://github.com/TIGACenter/CST_v1.
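The CST objective described above, a task loss plus a penalty on the disagreement between predictions for clean and distorted inputs, can be sketched with a toy model. Everything here is an illustrative assumption, not the authors' code: the "model" is a one-parameter linear map standing in for the segmentation CNN, and the distortion is a uniform perturbation standing in for brightness/color/blur changes.

```python
import random

def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def model(x, w):
    """Toy 'CNN': elementwise linear map with a single weight."""
    return [w * xi for xi in x]

def distort(x, rng, strength=0.1):
    """Random perturbation standing in for scanner/staining variability."""
    return [xi + rng.uniform(-strength, strength) for xi in x]

def stability_loss(x, y, w, rng, alpha=1.0):
    """CST-style objective: task loss on the clean input plus alpha times
    the disagreement between predictions on clean and distorted inputs."""
    pred_clean = model(x, w)
    pred_dist = model(distort(x, rng), w)
    return mse(pred_clean, y) + alpha * mse(pred_clean, pred_dist)

rng = random.Random(0)
x, y = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
# With w = 2.0 the task term is zero, so only the small stability term remains.
loss = stability_loss(x, y, w=2.0, rng=rng)
```

Minimizing the second term pushes the model toward producing the same output for an image regardless of the applied distortion, which is the robustness property measured by rAUC in the paper.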
Affiliation(s)
- Felipe Miranda Ruiz
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Bernd Lahrmann
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Liam Bartels
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Medical Oncology Department, National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Alexandra Krauthoff
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Medical Oncology Department, National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Andreas Keil
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Steffen Härtel
- Medical Faculty, Center of Medical Informatics and Telemedicine (CIMT), University of Chile, Santiago, Chile
- Amy S. Tao
- Division of Cancer Epidemiology and Genetics, US National Cancer Institute (NCI), Bethesda, MD, United States
- Philipp Ströbel
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Megan A. Clarke
- Division of Cancer Epidemiology and Genetics, US National Cancer Institute (NCI), Bethesda, MD, United States
- Nicolas Wentzensen
- Division of Cancer Epidemiology and Genetics, US National Cancer Institute (NCI), Bethesda, MD, United States
- Niels Grabe
- Institute of Pathology, University Medical Center Göttingen UMG, Göttingen, Germany
- Hamamatsu Tissue Imaging and Analysis Center (TIGA), BIOQUANT Center, Heidelberg University, Heidelberg, Germany
- Medical Oncology Department, National Center for Tumor Diseases (NCT), Heidelberg, Germany
10
Hwang JH, Lim M, Han G, Park H, Kim YB, Park J, Jun SY, Lee J, Cho JW. Preparing pathological data to develop an artificial intelligence model in the nonclinical study. Sci Rep 2023; 13:3896. [PMID: 36890209 PMCID: PMC9994413 DOI: 10.1038/s41598-023-30944-x]
Abstract
Artificial intelligence (AI)-based analysis has recently been adopted in the examination of histological slides via the digitization of glass slides using a digital scanner. In this study, we examined the effect of varying the staining color tone and magnification level of a dataset on AI model predictions in hematoxylin and eosin stained whole slide images (WSIs). WSIs of liver tissues with fibrosis were used as an example, and three different datasets (N20, B20, and B10) were prepared with different color tones and magnifications. Using these datasets, we built five models by training the Mask R-CNN algorithm on a single dataset or on mixtures of N20, B20, and B10. We evaluated model performance on the test sets of the three datasets. Models trained on mixed datasets (models B20/N20 and B10/B20), which consist of different color tones or magnifications, performed better than the models trained on a single dataset. Consequently, the superior performance of the mixed models was confirmed by the actual prediction results on the test images. We suggest that training the algorithm with datasets of various staining color tones and multiple image scales would be better optimized for consistently strong performance in predicting pathological lesions of interest.
Affiliation(s)
- Ji-Hee Hwang
- Toxicologic Pathology Research Group, Department of Advanced Toxicology Research, Korea Institute of Toxicology, Daejeon, 34114, Korea
- Minyoung Lim
- Toxicologic Pathology Research Group, Department of Advanced Toxicology Research, Korea Institute of Toxicology, Daejeon, 34114, Korea
- Gyeongjin Han
- Toxicologic Pathology Research Group, Department of Advanced Toxicology Research, Korea Institute of Toxicology, Daejeon, 34114, Korea
- Heejin Park
- Toxicologic Pathology Research Group, Department of Advanced Toxicology Research, Korea Institute of Toxicology, Daejeon, 34114, Korea
- Yong-Bum Kim
- Department of Advanced Toxicology Research, Korea Institute of Toxicology, Daejeon, 34114, Korea
- Jinseok Park
- Research and Development Team, LAC Inc., Seoul, 07807, Korea
- Sang-Yeop Jun
- Research and Development Team, LAC Inc., Seoul, 07807, Korea
- Jaeku Lee
- Research and Development Team, LAC Inc., Seoul, 07807, Korea
- Jae-Woo Cho
- Toxicologic Pathology Research Group, Department of Advanced Toxicology Research, Korea Institute of Toxicology, Daejeon, 34114, Korea
11
Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. [PMID: 36455333 DOI: 10.1016/j.media.2022.102691]
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing, concerned with analyzing digitized cytology images by computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, boosting publications of cytological studies. In this article, we survey more than 120 publications on DL-based cytology image analysis to investigate advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nuclei or cell detection and segmentation. Finally, we discuss current challenges and potential research directions of computational cytology.
Collapse
|
12
|
Juhong A, Li B, Yao CY, Yang CW, Agnew DW, Lei YL, Huang X, Piyawattanametha W, Qiu Z. Super-resolution and segmentation deep learning for breast cancer histopathology image analysis. BIOMEDICAL OPTICS EXPRESS 2023; 14:18-36. [PMID: 36698665 PMCID: PMC9841988 DOI: 10.1364/boe.463839] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Revised: 09/16/2022] [Accepted: 09/19/2022] [Indexed: 06/17/2023]
Abstract
Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images. However, such images are typically tremendous in size; therefore, they are not conveniently managed and transferred across a computer network or stored in a limited computer storage system. As a result, image compression is commonly used to reduce image size, resulting in poor image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) for both super-resolution image enhancement from low-resolution images and characterization of both cells and nuclei from hematoxylin and eosin (H&E)-stained breast cancer histopathological images, using a combination of generator and discriminator networks, the so-called super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt), to facilitate cancer diagnosis in low-resource settings. The results show high enhancement in image quality, where the peak signal-to-noise ratio and structural similarity of our network results are over 30 dB and 0.93, respectively. The derived performance is superior to the results obtained from both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN is used to perform image segmentation on the high-resolution breast cancer images generated by our model, with an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 for the H&E image segmentation results. Finally, we propose the jointly trained SRGAN-ResNeXt and Inception U-Net models, which apply the weights from the individually trained SRGAN-ResNeXt and Inception U-Net models as pre-trained weights for transfer learning. The jointly trained model's results are progressively improved and promising.
We anticipate these custom CNNs can help resolve the inaccessibility of advanced microscopes or whole slide imaging (WSI) systems by acquiring high-resolution images from low-performance microscopes located in resource-constrained remote settings.
Collapse
Affiliation(s)
- Aniwat Juhong
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
| | - Bo Li
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
| | - Cheng-You Yao
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Department of Biomedical Engineering, Michigan State University, East Lansing, MI 48824, USA
| | - Chia-Wei Yang
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Department of Chemistry, Michigan State University, East Lansing, MI 48824, USA
| | - Dalen W. Agnew
- College of Veterinary Medicine, Michigan State University, East Lansing, MI 48824, USA
| | - Yu Leo Lei
- Department of Periodontics Oral Medicine, University of Michigan, Ann Arbor, MI 48104, USA
| | - Xuefei Huang
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Department of Biomedical Engineering, Michigan State University, East Lansing, MI 48824, USA
- Department of Chemistry, Michigan State University, East Lansing, MI 48824, USA
| | - Wibool Piyawattanametha
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Department of Biomedical Engineering, School of Engineering, King Mongkut’s Institute of Technology Ladkrabang (KMITL), Bangkok 10520, Thailand
| | - Zhen Qiu
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA
- Department of Biomedical Engineering, Michigan State University, East Lansing, MI 48824, USA
| |
Collapse
|
13
|
Chowdary GJ, G S, M P, Yogarajah P. Nucleus segmentation and classification using residual SE-UNet and feature concatenation approach in cervical cytopathology cell images. Technol Cancer Res Treat 2023; 22:15330338221134833. [PMID: 36744768 PMCID: PMC9905035 DOI: 10.1177/15330338221134833] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Accepted: 09/30/2022] [Indexed: 02/07/2023] Open
Abstract
Introduction: The Pap smear is considered the primary examination for the diagnosis of cervical cancer. But the analysis of Pap smear slides is a time-consuming and tedious task, as it requires manual intervention. The diagnostic efficiency depends on the medical expertise of the pathologist, and human error often hinders the diagnosis. Automated segmentation and classification of cervical nuclei will help diagnose cervical cancer at earlier stages. Materials and Methods: The proposed methodology includes three models: a Residual-Squeeze-and-Excitation-module based segmentation model, a fusion-based feature extraction model, and a Multi-layer Perceptron classification model. In the fusion-based feature extraction model, three sets of deep features are extracted from the segmented nuclei using the pre-trained and fine-tuned VGG19, VGG-F, and CaffeNet models, and two hand-crafted descriptors, Bag-of-Features and Linear-Binary-Patterns, are extracted for each image. For this work, the Herlev, SIPaKMeD, and ISBI2014 datasets are used for evaluation. The Herlev dataset is used for evaluating both the segmentation and classification models, whereas the SIPaKMeD and ISBI2014 datasets are used for evaluating the classification model and the segmentation model, respectively. Results: The segmentation network enhanced the precision and ZSI by 2.04% and 2.00% on the Herlev dataset, and the precision and recall by 0.68% and 2.59% on the ISBI2014 dataset. The classification approach enhanced the accuracy, recall, and specificity by 0.59%, 0.47%, and 1.15% on the Herlev dataset, and by 0.02%, 0.15%, and 0.22% on the SIPaKMeD dataset. Conclusion: The experiments demonstrate that the proposed work achieves promising performance on segmentation and classification in cervical cytopathology cell images.
Collapse
Affiliation(s)
| | - Suganya G
- Vellore Institute of Technology, Chennai, India
| | | | | |
Collapse
|
14
|
Wu H, Souedet N, Jan C, Clouchoux C, Delzescaux T. A general deep learning framework for neuron instance segmentation based on Efficient UNet and morphological post-processing. Comput Biol Med 2022; 150:106180. [PMID: 36244305 DOI: 10.1016/j.compbiomed.2022.106180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 09/21/2022] [Accepted: 10/01/2022] [Indexed: 11/03/2022]
Abstract
Recent studies have demonstrated the superiority of deep learning in medical image analysis, especially in cell instance segmentation, a fundamental step for many biological studies. However, the excellent performance of neural networks requires training on large, unbiased datasets and annotations, which is labor-intensive and expertise-demanding. This paper presents an end-to-end framework to automatically detect and segment NeuN-stained neuronal cells in histological images using only point annotations. Unlike traditional nuclei segmentation with point annotation, we propose using point annotation and binary segmentation to synthesize pixel-level annotations. The synthetic masks are used as the ground truth to train the neural network, a U-Net-like architecture with a state-of-the-art network, EfficientNet, as the encoder. Validation results show the superiority of our model compared to other recent methods. In addition, we investigated multiple post-processing schemes and proposed an original strategy to convert the probability map into segmented instances using ultimate erosion and dynamic reconstruction. This approach is easy to configure and outperforms other classical post-processing techniques. This work aims to develop a robust and efficient framework for analyzing neurons using optical microscopic data, which can be used in preclinical biological studies and, more specifically, in the context of neurodegenerative diseases. Code is available at: https://github.com/MIRCen/NeuronInstanceSeg.
Collapse
Affiliation(s)
- Huaqian Wu
- CEA-CNRS-UMR 9199, MIRCen, Fontenay-aux-Roses, France
| | | | - Caroline Jan
- CEA-CNRS-UMR 9199, MIRCen, Fontenay-aux-Roses, France
| | | | | |
Collapse
|
15
|
Liu G, Ding Q, Luo H, Sha M, Li X, Ju M. Cx22: A new publicly available dataset for deep learning-based segmentation of cervical cytology images. Comput Biol Med 2022; 150:106194. [PMID: 37859287 DOI: 10.1016/j.compbiomed.2022.106194] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Revised: 09/12/2022] [Accepted: 10/09/2022] [Indexed: 11/24/2022]
Abstract
The segmentation of cervical cytology images plays an important role in the automatic analysis of cervical cytology screening. Although deep learning-based segmentation methods are well-developed in other image segmentation areas, their application to the segmentation of cervical cytology images is still at an early stage. The most important reason for the slow progress is the lack of publicly available, high-quality datasets, and the study of deep learning-based segmentation methods may be hampered by the present datasets, which are either artificial or plagued by the issue of false-negative objects. In this paper, we develop a new dataset of cervical cytology images named Cx22, which consists of completely annotated labels of the cellular instances based on the open-source images released by our institute previously. Firstly, we meticulously delineate the contours of 14,946 cellular instances in 1320 images that are generated by our proposed ROI-based label cropping algorithm. Then, we propose baseline methods for the deep learning-based semantic and instance segmentation tasks based on Cx22. Finally, through the experiments, we validate the task suitability of Cx22, and the results reveal the impact of false-negative objects on the performance of the baseline methods. Based on our work, Cx22 can provide a foundation for fellow researchers to develop high-performance deep learning-based methods for the segmentation of cervical cytology images. Other detailed information and step-by-step guidance on accessing the dataset are made available to fellow researchers at https://github.com/LGQ330/Cx22.
Collapse
Affiliation(s)
- Guangqi Liu
- Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang, 110016, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, 110169, China; University of Chinese Academy of Sciences, Beijing, 100049, China.
| | - Qinghai Ding
- Space Star Technology Co, Ltd., Beijing, 100086, China.
| | - Haibo Luo
- Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang, 110016, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, 110169, China.
| | - Min Sha
- Archives of NEU, Northeastern University, Shenyang, 110819, China.
| | - Xiang Li
- Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang, 110016, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, 110169, China; University of Chinese Academy of Sciences, Beijing, 100049, China.
| | - Moran Ju
- College of Information Science and Technology, Dalian Maritime University, Dalian, 116026, China.
| |
Collapse
|
16
|
Spectral features and optimal Hierarchical attention networks for pulmonary abnormality detection from the respiratory sound signals. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103905] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
17
|
Cervical Cell Segmentation Method Based on Global Dependency and Local Attention. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12157742] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
Abstract
The refined segmentation of nuclei and the cytoplasm is the most challenging task in the automation of cervical cell screening. The U-shape network structure has demonstrated great superiority in the field of biomedical imaging. However, the classical U-Net network cannot effectively utilize mixed-domain information and contextual information, and fails to achieve satisfactory results in this task. To address these problems, a module based on global dependency and local attention (GDLA) for contextual information modeling and feature refinement is proposed in this study. It consists of three components computed in parallel: the global dependency module, the spatial attention module, and the channel attention module. The global dependency module models global contextual information to capture a priori knowledge of cervical cells, such as the positional dependence of the nuclei and cytoplasm, and the closure and uniqueness of the nuclei. The spatial attention module combines contextual information to extract cell boundary information and refine target boundaries. The channel and spatial attention modules are used to provide adaptation of the input information and make it easy to identify subtle but dominant differences between similar objects. Comparative and ablation experiments are conducted on the Herlev dataset, and the experimental results demonstrate the effectiveness of the proposed method, which surpasses the most popular existing channel attention, hybrid attention, and context networks in terms of nuclei and cytoplasm segmentation metrics, achieving better segmentation performance than most previous advanced methods.
Collapse
|
18
|
Ghaznavi A, Rychtáriková R, Saberioon M, Štys D. Cell segmentation from telecentric bright-field transmitted light microscopy images using a Residual Attention U-Net: A case study on HeLa line. Comput Biol Med 2022; 147:105805. [PMID: 35809410 DOI: 10.1016/j.compbiomed.2022.105805] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Revised: 06/03/2022] [Accepted: 06/26/2022] [Indexed: 11/20/2022]
Abstract
Living cell segmentation from bright-field light microscopy images is challenging due to image complexity and temporal changes in the living cells. Recently developed deep learning (DL)-based methods have become popular in medical and microscopy image segmentation tasks due to their success and promising outcomes. The main objective of this paper is to develop a deep learning, U-Net-based method to segment living cells of the HeLa line in bright-field transmitted light microscopy. To find the most suitable architecture for our datasets, a residual attention U-Net was proposed and compared with an attention U-Net and a simple U-Net architecture. The attention mechanism highlights salient features and suppresses activations in irrelevant image regions. The residual mechanism overcomes the vanishing gradient problem. The Mean-IoU score for our datasets reaches 0.9505, 0.9524, and 0.9530 for the simple, attention, and residual attention U-Net, respectively. The most accurate semantic segmentation results in the Mean-IoU and Dice metrics were achieved by applying the residual and attention mechanisms together. The watershed method applied to this best (residual attention) semantic segmentation result produced an instance segmentation with specific information for each cell.
Collapse
Affiliation(s)
- Ali Ghaznavi
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Institute of Complex Systems, University of South Bohemia in České Budějovice, Zámek 136, 373 33, Nové Hrady, Czech Republic.
| | - Renata Rychtáriková
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Institute of Complex Systems, University of South Bohemia in České Budějovice, Zámek 136, 373 33, Nové Hrady, Czech Republic.
| | - Mohammadmehdi Saberioon
- Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing and Geoinformatics, Telegrafenberg, Potsdam 14473, Germany.
| | - Dalibor Štys
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Institute of Complex Systems, University of South Bohemia in České Budějovice, Zámek 136, 373 33, Nové Hrady, Czech Republic.
| |
Collapse
|
19
|
Zhang S, Zhu L, Gao Y. An efficient deep equilibrium model for medical image segmentation. Comput Biol Med 2022; 148:105831. [PMID: 35849947 DOI: 10.1016/j.compbiomed.2022.105831] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 04/25/2022] [Accepted: 07/03/2022] [Indexed: 11/03/2022]
Abstract
In this paper, we propose an effective method that combines the advantages of classical methods and deep learning technology for medical image segmentation by modeling the neural network as a fixed-point iteration seeking system equilibrium through an added feedback loop. In particular, nuclei segmentation of medical images is used as an example to demonstrate the proposed method, which successfully completes the challenge of segmenting nuclei from cells in different histopathological images. Specifically, nuclei segmentation is formulated as a dynamic process searching for the system equilibrium. Starting from an initial segmentation generated either by a classic algorithm or a pre-trained deep learning model, a sequence of segmentation outputs is created and combined with the original image to dynamically drive the segmentation towards the expected value. This dynamical extension to neural networks requires little extra change to the backbone deep neural network while significantly increasing model accuracy, generalizability, and stability, as demonstrated by intensive experimental results from pathological images of different tissue types across different open datasets.
Collapse
Affiliation(s)
- Sai Zhang
- The School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China.
| | - Liangjia Zhu
- An Individual Researcher, Shenzhen, Guangdong, 518060, China.
| | - Yi Gao
- The School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China; Shenzhen Key Laboratory of Precision Medicine for Hematological Malignancies, Shenzhen 518060, China; Marshall Laboratory of Biomedical Engineering, Shenzhen 518060, China; Pengcheng Laboratory, Shenzhen 518066, China.
| |
Collapse
|
20
|
D'Amato M, Szostak P, Torben-Nielsen B. A Comparison Between Single- and Multi-Scale Approaches for Classification of Histopathology Images. Front Public Health 2022; 10:892658. [PMID: 35859771 PMCID: PMC9289164 DOI: 10.3389/fpubh.2022.892658] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Accepted: 06/13/2022] [Indexed: 01/31/2023] Open
Abstract
Whole slide images (WSIs) are digitized histopathology images. WSIs are stored in a pyramidal data structure that contains the same images at multiple magnification levels. In digital pathology, most algorithmic approaches to analyze WSIs use a single magnification level. However, images at different magnification levels may reveal relevant and distinct properties in the image, such as global context or detailed spatial arrangement. Given their high resolution, WSIs cannot be processed as a whole and are broken down into smaller pieces called tiles. Then, a prediction at the tile-level is made for each tile in the larger image. As many classification problems require a prediction at a slide-level, there exist common strategies to integrate the tile-level insights into a slide-level prediction. We explore two approaches to tackle this problem, namely a multiple instance learning framework and a representation learning algorithm (the so-called “barcode approach”) based on clustering. In this work, we apply both approaches in a single- and multi-scale setting and compare the results in a multi-label histopathology classification task to show the promises and pitfalls of multi-scale analysis. Our work shows a consistent improvement in performance of the multi-scale models over single-scale ones. Using multiple instance learning and the barcode approach we achieved a 0.06 and 0.06 improvement in F1 score, respectively, highlighting the importance of combining multiple scales to integrate contextual and detailed information.
Collapse
Affiliation(s)
- Marina D'Amato
- Roche Information Solutions, F. Hoffmann-La Roche AG, Basel, Switzerland
| | - Przemysław Szostak
- Personalized Healthcare, Billennium by Order of Roche Polska Sp. z o.o., Warsaw, Poland
| | - Benjamin Torben-Nielsen
- Roche Information Solutions, F. Hoffmann-La Roche AG, Basel, Switzerland
- *Correspondence: Benjamin Torben-Nielsen
| |
Collapse
|
21
|
Rojas F, Hernandez S, Lazcano R, Laberiano-Fernandez C, Parra ER. Multiplex Immunofluorescence and the Digital Image Analysis Workflow for Evaluation of the Tumor Immune Environment in Translational Research. Front Oncol 2022; 12:889886. [PMID: 35832550 PMCID: PMC9271766 DOI: 10.3389/fonc.2022.889886] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 05/27/2022] [Indexed: 11/13/2022] Open
Abstract
A robust understanding of the tumor immune environment has important implications for cancer diagnosis, prognosis, research, and immunotherapy. Traditionally, immunohistochemistry (IHC) has been regarded as the standard method for detecting proteins in situ, but this technique allows for the evaluation of only one cell marker per tissue sample at a time. Multiplexed imaging technologies, however, enable the multiparametric analysis of a tissue section at the same time. Also, through the curation of specific antibody panels, these technologies enable researchers to study the cell subpopulations within a single immunological cell group. Thus, multiplexed imaging gives investigators the opportunity to better understand tumor cells, immune cells, and the interactions between them. In the multiplexed imaging technology workflow, once the protocol for a tumor immune microenvironment study has been defined, histological slides are digitized to produce high-resolution images in which regions of interest are selected for the interrogation of simultaneously expressed immunomarkers (including those co-expressed by the same cell) using image analysis software and algorithms. Most currently available image analysis software packages use similar machine learning approaches, in which tissue segmentation first defines the different components that make up the regions of interest, and cell segmentation then defines the different parameters, such as the nucleus and cytoplasm, that the software must utilize to segment single cells. Image analysis tools have driven dramatic evolution in the field of digital pathology over the past several decades and provided the data necessary for translational research and the discovery of new therapeutic targets.
The next step in the growth of digital pathology is the optimization and standardization of the different tasks in cancer research, including image analysis algorithm creation, to increase the amount and accuracy of the data generated in a short time. The aim of this review is to describe this process, including image analysis algorithm creation for multiplex immunofluorescence analysis, as an essential part of that optimization and standardization.
Collapse
|
22
|
Rasoolijaberi M, Babaei M, Riasatian A, Hemati S, Ashrafi P, Gonzalez R, Tizhoosh HR. Multi-Magnification Image Search in Digital Pathology. IEEE J Biomed Health Inform 2022; 26:4611-4622. [PMID: 35687644 DOI: 10.1109/jbhi.2022.3181531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This paper investigates the effect of magnification on content-based image search in digital pathology archives and proposes to use multi-magnification image representation. Image search in large archives of digital pathology slides provides researchers and medical professionals with an opportunity to match records of current and past patients and learn from evidently diagnosed and treated cases. When working with microscopes, pathologists switch between different magnification levels while examining tissue specimens to find and evaluate various morphological features. Inspired by the conventional pathology workflow, we have investigated several magnification levels in digital pathology and their combinations to minimize the gap between AI-enabled image search methods and clinical settings. The proposed searching framework does not rely on any regional annotation and potentially applies to millions of unlabelled (raw) whole slide images. This paper suggests two approaches for combining magnification levels and compares their performance. The first approach obtains a single-vector deep feature representation for a digital slide, whereas the second approach works with a multi-vector deep feature representation. We report the search results of 20×, 10×, and 5× magnifications and their combinations on a subset of The Cancer Genome Atlas (TCGA) repository. The experiments verify that cell-level information at the highest magnification is essential for searching for diagnostic purposes. In contrast, low-magnification information may improve this assessment depending on the tumor type. Our multi-magnification approach achieved up to 11% F1-score improvement in searching among the urinary tract and brain tumor subtypes compared to the single-magnification image search.
Collapse
|
23
|
Devaraj S, Madian N, Suresh S. Mathematical approach for segmenting chromosome clusters in metaspread images. Exp Cell Res 2022; 418:113251. [PMID: 35691379 DOI: 10.1016/j.yexcr.2022.113251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Revised: 05/15/2022] [Accepted: 06/06/2022] [Indexed: 11/04/2022]
Abstract
Karyotyping is an examination that helps in detecting chromosomal abnormalities. Chromosome analysis is a very challenging task that requires various steps to obtain a karyotype. The main challenges associated with chromosome analysis are overlapping and touching chromosomes. The input considered for chromosome analysis is metaspread G-band chromosomes. The proposed work mainly focuses on separating overlapped and touching chromosomes, which is considered the major challenge in karyotyping. Various research contributions to chromosome analysis are in progress, including both low-level (machine learning) and high-level (deep learning) methods. This paper proposes a mathematics-based approach that is very effective in the segmentation of clustered chromosomes. The accuracy of segmentation is robust compared to high-level approaches.
Collapse
Affiliation(s)
| | - Nirmala Madian
- Department of BME, Dr.N.G.P Institute of Technology, Coimbatore, India.
| | - S Suresh
- Mediscan Systems, Chennai, India
| |
Collapse
|
24
|
MITNET: a novel dataset and a two-stage deep learning approach for mitosis recognition in whole slide images of breast cancer tissue. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07441-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Mitosis assessment of breast cancer has strong prognostic importance and is visually evaluated by pathologists. The inter- and intra-observer variability of this assessment is high. In this paper, a two-stage deep learning approach, named MITNET, has been applied to automatically detect nuclei and classify mitoses in whole slide images (WSIs) of breast cancer. Moreover, this paper introduces two new datasets. The first dataset is used to detect nuclei in the WSIs and contains 139,124 annotated nuclei in 1749 patches extracted from 115 WSIs of breast cancer tissue; the second dataset consists of 4908 mitotic and 4908 non-mitotic cell image samples extracted from 214 WSIs and is used for mitosis classification. The created datasets are used to train the MITNET network, which consists of two deep learning architectures, called MITNET-det and MITNET-rec, to isolate nuclei and identify mitoses in WSIs. In the MITNET-det architecture, CSPDarknet and a Path Aggregation Network (PANet) are used to extract features from nucleus images and fuse them, and then a You Only Look Once detection strategy (scaled-YOLOv4) is employed to detect nuclei at three different scales. In the classification part, the detected isolated nucleus images are passed through the proposed MITNET-rec deep learning architecture to identify the mitoses in the WSIs. Various deep learning classifiers and the proposed classifier are trained on publicly available mitosis datasets (MIDOG and ATYPIA) and then validated on our created dataset. The results verify that deep learning-based classifiers trained on MIDOG and ATYPIA have difficulty recognizing mitosis on our dataset, which shows that the created mitosis dataset has unique features and characteristics. Besides this, the proposed classifier outperforms the state-of-the-art classifiers significantly, achieving a 68.7% F1-score on the MIDOG dataset and a 49.0% F1-score on the created mitosis dataset. Moreover, the experimental results reveal that the overall proposed MITNET framework detects nuclei in WSIs with high detection rates and recognizes mitotic cells with a high F1-score, which improves the accuracy of pathologists' decisions.
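Both MITNET stages are scored with the F1 measure. As a refresher on how that metric combines precision and recall from raw detection counts, a minimal sketch (the function name and example counts are illustrative, not from the paper):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A 68.7% F1 can arise from balanced precision and recall of 0.687 each.
print(round(f1_score(tp=687, fp=313, fn=313), 3))  # → 0.687
```

Because F1 is a harmonic mean, it is pulled toward the weaker of precision and recall, penalizing detectors that trade many false positives for sensitivity.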
Collapse
|
25
|
Hoorali F, Khosravi H, Moradi B. Automatic microscopic diagnosis of diseases using an improved UNet++ architecture. Tissue Cell 2022; 76:101816. [DOI: 10.1016/j.tice.2022.101816] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 05/02/2022] [Accepted: 05/04/2022] [Indexed: 12/01/2022]
|
26
|
Zhang S, Zhou Y, Tang D, Ni M, Zheng J, Xu G, Peng C, Shen S, Zhan Q, Wang X, Hu D, Li WJ, Wang L, Lv Y, Zou X. A deep learning-based segmentation system for rapid onsite cytologic pathology evaluation of pancreatic masses: A retrospective, multicenter, diagnostic study. EBioMedicine 2022; 80:104022. [PMID: 35512608 PMCID: PMC9079232 DOI: 10.1016/j.ebiom.2022.104022] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 04/08/2022] [Accepted: 04/08/2022] [Indexed: 11/30/2022] Open
Abstract
Background We aimed to develop a deep learning-based segmentation system for rapid on-site cytopathology evaluation (ROSE) to improve the diagnostic efficiency of endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) biopsy. Methods A retrospective, multicenter, diagnostic study was conducted using 5345 cytopathological slide images from 194 patients who underwent EUS-FNA. These patients were from Nanjing Drum Tower Hospital (109 patients), Wuxi People's Hospital (30 patients), Wuxi Second People's Hospital (25 patients), and The Second Affiliated Hospital of Soochow University (30 patients). A deep convolutional neural network (DCNN) system was developed to segment cell clusters and identify cancer cell clusters with cytopathological slide images. Internal testing, external testing, subgroup analysis, and human–machine competition were used to evaluate the performance of the system. Findings The DCNN system segmented stained cells from the background in cytopathological slides with an F1-score of 0.929 and 0.899–0.938 in internal and external testing, respectively. For cancer identification, the DCNN system identified images containing cancer clusters with AUCs of 0.958 and 0.948–0.976 in internal and external testing, respectively. The generalizable and robust performance of the DCNN system was validated in sensitivity analysis (AUC > 0.900) and was superior to that of trained endoscopists and comparable to cytopathologists on our testing datasets. Interpretation The DCNN system is feasible and robust for identifying sample adequacy and pancreatic cancer cell clusters. Prospective studies are warranted to evaluate the clinical significance of the system. Funding Jiangsu Natural Science Foundation; Nanjing Medical Science and Technology Development Funding; National Natural Science Foundation of China.
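The segmentation arm of the system is reported as a pixel-level F1-score, which for binary masks coincides with the Dice coefficient. A minimal numpy sketch of that metric (a hypothetical helper, not the authors' code):

```python
import numpy as np

def pixel_f1(pred: np.ndarray, truth: np.ndarray) -> float:
    """Pixel-level F1 (Dice) between two binary masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * tp / denom if denom else 1.0  # both empty: perfect match

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(round(pixel_f1(pred, truth), 3))  # 2*2/(3+3) → 0.667
```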
Collapse
Affiliation(s)
- Song Zhang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, Jiangsu 210008, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu 210008, China
| | - Yangfan Zhou
- National Institute of Healthcare Data Science at Nanjing University, Nanjing, Jiangsu 210008, China; National Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University, Nanjing, Jiangsu 210008, China
| | - Dehua Tang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu 210008, China
| | - Muhan Ni
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu 210008, China
| | - Jinyu Zheng
- Department of Pathology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu 210008, China
| | - Guifang Xu
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, Jiangsu 210008, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu 210008, China
| | - Chunyan Peng
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, Jiangsu 210008, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu 210008, China
| | - Shanshan Shen
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, Jiangsu 210008, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu 210008, China
| | - Qiang Zhan
- Department of Gastroenterology, Wuxi People's Hospital Affiliated to Nanjing Medical University, Wuxi, Jiangsu 214023, China
| | - Xiaoyun Wang
- Department of Gastroenterology, Wuxi Second People's Hospital Affiliated to Nanjing Medical University, Wuxi, Jiangsu 214002, China
| | - Duanmin Hu
- Department of Gastroenterology, The Second Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215004, China
| | - Wu-Jun Li
- National Institute of Healthcare Data Science at Nanjing University, Nanjing, Jiangsu 210008, China; National Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University, Nanjing, Jiangsu 210008, China; Center for Medical Big Data, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu 210008, China.
| | - Lei Wang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, Jiangsu 210008, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu 210008, China.
| | - Ying Lv
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, Jiangsu 210008, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu 210008, China.
| | - Xiaoping Zou
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, Jiangsu 210008, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu 210008, China.
| |
Collapse
|
27
|
Zhao Y, Fu C, Xu S, Cao L, Ma HF. LFANet: Lightweight feature attention network for abnormal cell segmentation in cervical cytology images. Comput Biol Med 2022; 145:105500. [PMID: 35421793 DOI: 10.1016/j.compbiomed.2022.105500] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 03/16/2022] [Accepted: 04/04/2022] [Indexed: 11/19/2022]
Abstract
With computer-aided diagnosis techniques widely applied in cervical cancer screening, cell segmentation has become a necessary step in determining the progression of cervical cancer. Traditional manual methods alleviate, to a certain extent, the dilemma caused by the shortage of medical resources. Unfortunately, with their low segmentation accuracy for abnormal cells, their complex process cannot realize automatic diagnosis. In addition, deep learning methods can automatically extract image features with high accuracy and small error, making artificial intelligence increasingly popular in computer-aided diagnosis. However, such methods are not suitable for clinical practice because their complicated models carry many redundant network parameters. To address the above problems, a lightweight feature attention network (LFANet), which extracts differentially abundant feature information from objects at various resolutions, is proposed in this study. The model can accurately segment both the nucleus and cytoplasm regions in cervical images. Specifically, a lightweight feature extraction module is designed as an encoder to extract abundant features from input images, combining depth-wise separable convolution, residual connections, and an attention mechanism. In addition, a feature layer attention module is added to precisely recover pixel locations; it employs global high-level information as a guide for the low-level features, capturing dependencies among channel features. Finally, our LFANet model is evaluated on four independent datasets. The experimental results demonstrate that, compared with other advanced methods, our proposed network achieves state-of-the-art performance with low computational complexity.
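LFANet's lightness comes partly from depth-wise separable convolution, which factors a standard convolution into a per-channel spatial filter plus a 1×1 pointwise channel mix. A back-of-the-envelope parameter count (ignoring biases; the channel sizes are illustrative, not LFANet's actual configuration) shows the saving:

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """One k x k filter per input channel, then a 1x1 pointwise mix."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3
standard = conv_params(c_in, c_out, k)                   # 64*128*9 = 73728
separable = depthwise_separable_params(c_in, c_out, k)   # 64*9 + 64*128 = 8768
print(standard, separable, round(standard / separable, 1))  # ~8.4x fewer weights
```

The ratio approaches k² + a channel-dependent term, which is why 3×3 separable blocks routinely cut parameters by close to an order of magnitude.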
Collapse
Affiliation(s)
- Yanli Zhao
- School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China; School of Electrical Information Engineering, Ningxia Institute of Technology, Shizuishan, 753000, China
| | - Chong Fu
- School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110819, China; Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, China.
| | - Sen Xu
- General Hospital of Northern Theatre Command, Shenyang, 110016, China
| | - Lin Cao
- School of Information and Communication Engineering, Beijing Information Science and Technology University, Beijing, 100101, China
| | - Hong-Feng Ma
- Dopamine Group Ltd., Auckland, 1542, New Zealand
| |
Collapse
|
28
|
Su CH, Chung PC, Lin SF, Tsai HW, Yang TL, Su YC. Multi-Scale Attention Convolutional Network for Masson Stained Bile Duct Segmentation from Liver Pathology Images. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22072679. [PMID: 35408293 PMCID: PMC9003085 DOI: 10.3390/s22072679] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Revised: 03/24/2022] [Accepted: 03/29/2022] [Indexed: 05/07/2023]
Abstract
In clinical practice, the Ishak scoring system is adopted to grade and stage hepatitis according to whether portal areas show fibrous expansion, bridging with other portal areas, or bridging with central veins. Based on these staging criteria, it is necessary to identify portal areas and central veins when performing Ishak staging. Bile ducts have variant types and are very difficult to detect under a single magnification, so pathologists must observe bile ducts at different magnifications to obtain sufficient information. In routine clinical practice, however, this makes the pathologic examination labor-intensive and expensive. Automatic quantitative analysis for pathologic examination has therefore seen increasing demand and has attracted significant attention recently. A multi-scale-input attention convolutional network is proposed in this study to simulate pathologists' procedure of observing bile ducts under different magnifications in liver biopsy. The proposed multi-scale attention network integrates cell-level information and adjacent structural feature information for bile duct segmentation. In addition, the attention mechanism of the proposed model enables the network to focus the segmentation task on the high-magnification input, reducing the influence of the low-magnification input while still providing a wider field of surrounding information. In comparison with existing models, including FCN, U-Net, SegNet, DeepLabv3, and DeepLabv3-plus, the experimental results demonstrated that the proposed model improved segmentation performance on the Masson bile duct segmentation task with 72.5% IoU and 84.1% F1-score.
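The attention mechanism described above weights the high-magnification branch more heavily while retaining low-magnification context. A minimal numpy sketch of one plausible fusion step, softmax attention over same-sized per-scale feature maps (shapes, values, and function names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_scales(features: np.ndarray, scale_logits: np.ndarray) -> np.ndarray:
    """Blend S same-sized feature maps (S, H, W) by softmax attention weights."""
    weights = softmax(scale_logits)                 # (S,), sums to 1
    return np.tensordot(weights, features, axes=1)  # weighted sum → (H, W)

features = np.stack([np.full((2, 2), 1.0),   # high-magnification branch
                     np.full((2, 2), 0.0)])  # low-magnification branch
fused = fuse_scales(features, np.array([2.0, 0.0]))  # logits favor first scale
print(fused[0, 0] > 0.5)  # high-magnification branch dominates the blend
```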
Collapse
Affiliation(s)
- Chun-Han Su
- Institute of Computer and Communication Engineering, National Cheng Kung University, Tainan City 701, Taiwan; (C.-H.S.); (P.-C.C.)
| | - Pau-Choo Chung
- Institute of Computer and Communication Engineering, National Cheng Kung University, Tainan City 701, Taiwan; (C.-H.S.); (P.-C.C.)
| | - Sheng-Fung Lin
- Division of Hematology and Oncology, Department of Internal Medicine, E-Da Hospital, Kaohsiung 824, Taiwan;
| | - Hung-Wen Tsai
- Department of Pathology, National Cheng Kung University Hospital, Tainan City 704, Taiwan;
| | - Tsung-Lung Yang
- Kaohsiung Veterans General Hospital, Kaohsiung 813414, Taiwan;
| | - Yu-Chieh Su
- Division of Hematology and Oncology, Department of Internal Medicine, E-Da Hospital, Kaohsiung 824, Taiwan;
- School of Medicine, I-Shou University, Kaohsiung 824, Taiwan
- Correspondence:
| |
Collapse
|
29
|
Wilm F, Benz M, Bruns V, Baghdadlian S, Dexl J, Hartmann D, Kuritcyn P, Weidenfeller M, Wittenberg T, Merkel S, Hartmann A, Eckstein M, Geppert CI. Fast whole-slide cartography in colon cancer histology using superpixels and CNN classification. J Med Imaging (Bellingham) 2022; 9:027501. [PMID: 35300344 PMCID: PMC8920491 DOI: 10.1117/1.jmi.9.2.027501] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Accepted: 02/17/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: Automatic outlining of different tissue types in digitized histological specimens provides a basis for follow-up analyses and can potentially guide subsequent medical decisions. The immense size of whole-slide images (WSIs), however, poses a challenge in terms of computation time. In this regard, the analysis of nonoverlapping patches outperforms pixelwise segmentation approaches but still leaves room for optimization. Furthermore, the division into patches, regardless of the biological structures they contain, is a drawback due to the loss of local dependencies. Approach: We propose to subdivide the WSI into coherent regions prior to classification by grouping visually similar adjacent pixels into superpixels. Afterward, only a random subset of patches per superpixel is classified and patch labels are combined into a superpixel label. We propose a metric for identifying superpixels with an uncertain classification and evaluate two medical applications, namely tumor area and invasive margin estimation and tumor composition analysis. Results: The algorithm has been developed on 159 hand-annotated WSIs of colon resections and its performance is compared with an analysis without prior segmentation. The algorithm shows an average speed-up of 41% and an increase in accuracy from 93.8% to 95.7%. By assigning a rejection label to uncertain superpixels, we further increase the accuracy by 0.4%. While tumor area estimation shows high concordance to the annotated area, the analysis of tumor composition highlights limitations of our approach. Conclusion: By combining superpixel segmentation and patch classification, we designed a fast and accurate framework for whole-slide cartography that is AI-model agnostic and provides the basis for various medical endpoints.
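The aggregation step, combining patch labels into a superpixel label and rejecting uncertain superpixels, can be sketched as majority voting with an agreement threshold. The threshold value and the "reject" label below are illustrative assumptions, not the paper's exact uncertainty metric:

```python
from collections import Counter

def superpixel_label(patch_labels: list[str], min_agreement: float = 0.6) -> str:
    """Majority vote over patch labels; reject if the vote is too split."""
    label, votes = Counter(patch_labels).most_common(1)[0]
    if votes / len(patch_labels) < min_agreement:
        return "reject"
    return label

print(superpixel_label(["tumor", "tumor", "stroma", "tumor"]))           # tumor (0.75 agreement)
print(superpixel_label(["tumor", "stroma", "mucosa", "tumor", "stroma"]))  # reject (0.4 agreement)
```

Classifying only a random subset of patches per superpixel and voting like this is what buys the reported speed-up: the per-patch classifier runs far fewer times than a dense tiling would require.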
Collapse
Affiliation(s)
- Frauke Wilm
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany.,Friedrich-Alexander-University, Erlangen-Nuremberg, Department of Computer Science, Erlangen, Germany
| | - Michaela Benz
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - Volker Bruns
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - Serop Baghdadlian
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - Jakob Dexl
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - David Hartmann
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - Petr Kuritcyn
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - Martin Weidenfeller
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - Thomas Wittenberg
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany.,Friedrich-Alexander-University, Erlangen-Nuremberg, Department of Computer Science, Erlangen, Germany
| | - Susanne Merkel
- University Hospital Erlangen, Department of Surgery, FAU Erlangen-Nuremberg, Erlangen, Germany.,University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany
| | - Arndt Hartmann
- University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany.,University Hospital Erlangen, Institute of Pathology, FAU Erlangen-Nuremberg, Erlangen, Germany
| | - Markus Eckstein
- University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany.,University Hospital Erlangen, Institute of Pathology, FAU Erlangen-Nuremberg, Erlangen, Germany
| | - Carol Immanuel Geppert
- University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany.,University Hospital Erlangen, Institute of Pathology, FAU Erlangen-Nuremberg, Erlangen, Germany
| |
Collapse
|
30
|
Jiang H, Li S, Li H. Parallel ‘same’ and ‘valid’ convolutional block and input-collaboration strategy for histopathological image classification. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108417] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
31
|
Astono IP, Welsh JS, Rowe CW, Jobling P. Objective quantification of nerves in immunohistochemistry specimens of thyroid cancer utilising deep learning. PLoS Comput Biol 2022; 18:e1009912. [PMID: 35226665 PMCID: PMC8912900 DOI: 10.1371/journal.pcbi.1009912] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/10/2022] [Accepted: 02/10/2022] [Indexed: 11/18/2022] Open
Abstract
Accurate quantification of nerves in cancer specimens is important to understand cancer behaviour. Typically, nerves are manually detected and counted in digitised images of thin tissue sections from excised tumours using immunohistochemistry. However, the images are of a large size and nerves have substantial variation in morphology, which renders accurate and objective quantification difficult using existing manual and automated counting techniques. Manual counting is precise but time-consuming, susceptible to inconsistency, and has a high rate of false negatives. Existing automated techniques using digitised tissue sections and colour filters are sensitive but have a high rate of false positives. In this paper we develop a new automated nerve detection approach, based on a deep learning model with an augmented classification structure. This approach involves pre-processing to extract the image patches for the deep learning model, followed by pixel-level nerve detection utilising the proposed deep learning model. Outcomes assessed were a) the sensitivity of the model in detecting manually identified nerves (expert annotations), and b) the precision of additional model-detected nerves. The proposed deep learning model based approach results in a sensitivity of 89% and a precision of 75%. The code and pre-trained model are publicly available at https://github.com/IA92/Automated_Nerves_Quantification. The study of nerves as a prognostic marker for cancer is becoming increasingly important. However, accurate quantification of nerves in cancer specimens is difficult to achieve due to limitations in the existing manual and automated quantification methods. Manual quantification is time-consuming and subject to bias, whilst automated quantification, in general, has a high rate of false detections that makes it somewhat unreliable.
In this paper, we propose an automated nerve quantification approach based on a novel deep learning model structure for objective nerve quantification in immunohistochemistry specimens of thyroid cancer. We evaluate the performance of the proposed approach by comparing it with existing manual and automated quantification methods. We show that our proposed approach is superior to the existing manual and automated quantification methods. The proposed approach is shown to have a high precision as well as being able to detect a significant number of nerves not detected by the experts in manual counting.
Collapse
Affiliation(s)
- Indriani P. Astono
- School of Engineering, The University of Newcastle, Newcastle, Australia
- * E-mail:
| | - James S. Welsh
- School of Engineering, The University of Newcastle, Newcastle, Australia
| | - Christopher W. Rowe
- School of Medicine and Public Health, The University of Newcastle, Newcastle, Australia
| | - Phillip Jobling
- School of Biomedical Sciences and Pharmacy, The University of Newcastle, Newcastle, Australia
| |
Collapse
|
32
|
Wu Y, Cheng M, Huang S, Pei Z, Zuo Y, Liu J, Yang K, Zhu Q, Zhang J, Hong H, Zhang D, Huang K, Cheng L, Shao W. Recent Advances of Deep Learning for Computational Histopathology: Principles and Applications. Cancers (Basel) 2022; 14:1199. [PMID: 35267505 PMCID: PMC8909166 DOI: 10.3390/cancers14051199] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 02/16/2022] [Accepted: 02/22/2022] [Indexed: 01/10/2023] Open
Abstract
With the remarkable success of digital histopathology, we have witnessed a rapid expansion of the use of computational methods for the analysis of digital pathology and biopsy image patches. However, the unprecedented scale and heterogeneous patterns of histopathological images have presented critical computational bottlenecks requiring new computational histopathology tools. Recently, deep learning technology has been extremely successful in the field of computer vision, which has also boosted considerable interest in digital pathology applications. Deep learning and its extensions have opened several avenues to tackle many challenging histopathological image analysis problems including color normalization, image segmentation, and the diagnosis/prognosis of human cancers. In this paper, we provide a comprehensive up-to-date review of the deep learning methods for digital H&E-stained pathology image analysis. Specifically, we first describe recent literature that uses deep learning for color normalization, which is one essential research direction for H&E-stained histopathological image analysis. Following the discussion of color normalization, we review applications of the deep learning method for various H&E-stained image analysis tasks such as nuclei and tissue segmentation. We also summarize several key clinical studies that use deep learning for the diagnosis and prognosis of human cancers from H&E-stained histopathological images. Finally, online resources and open research problems on pathological image analysis are also provided in this review for the convenience of researchers who are interested in this exciting field.
Collapse
Affiliation(s)
- Yawen Wu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Michael Cheng
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; (M.C.); (J.Z.); (K.H.)
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
| | - Shuo Huang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Zongxiang Pei
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Yingli Zuo
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Jianxin Liu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Kai Yang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Qi Zhu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Jie Zhang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; (M.C.); (J.Z.); (K.H.)
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
| | - Honghai Hong
- Department of Clinical Laboratory, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510006, China;
| | - Daoqiang Zhang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Kun Huang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; (M.C.); (J.Z.); (K.H.)
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
| | - Liang Cheng
- Departments of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
| | - Wei Shao
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| |
Collapse
|
34
|
Pathologic liver tumor detection using feature aligned multi-scale convolutional network. Artif Intell Med 2022; 125:102244. [DOI: 10.1016/j.artmed.2022.102244] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Revised: 11/03/2021] [Accepted: 01/03/2022] [Indexed: 11/20/2022]
|
35
|
Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. INFORMATICS IN MEDICINE UNLOCKED 2022. [DOI: 10.1016/j.imu.2022.100911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022] Open
|
36
|
Luo D, Kang H, Long J, Zhang J, Chen L, Quan T, Liu X. Dual supervised sampling networks for real-time segmentation of cervical cell nucleus. Comput Struct Biotechnol J 2022; 20:4360-4368. [PMID: 36051871 PMCID: PMC9411584 DOI: 10.1016/j.csbj.2022.08.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2022] [Revised: 08/09/2022] [Accepted: 08/09/2022] [Indexed: 12/24/2022] Open
|
37
|
Xie X, Wang X, Liang Y, Yang J, Wu Y, Li L, Sun X, Bing P, He B, Tian G, Shi X. Evaluating Cancer-Related Biomarkers Based on Pathological Images: A Systematic Review. Front Oncol 2021; 11:763527. [PMID: 34900711 PMCID: PMC8660076 DOI: 10.3389/fonc.2021.763527] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Accepted: 10/18/2021] [Indexed: 12/12/2022] Open
Abstract
Many diseases are accompanied by changes in certain biochemical indicators called biomarkers in cells or tissues. A variety of biomarkers, including proteins, nucleic acids, antibodies, and peptides, have been identified. Tumor biomarkers have been widely used in cancer risk assessment, early screening, diagnosis, prognosis, treatment, and progression monitoring. For example, the number of circulating tumor cells (CTCs) is a prognostic indicator of breast cancer overall survival, and tumor mutation burden (TMB) can be used to predict the efficacy of immune checkpoint inhibitors. Currently, clinical methods such as polymerase chain reaction (PCR) and next generation sequencing (NGS) are mainly adopted to evaluate these biomarkers, which are time-consuming and expensive. Pathological image analysis is an essential tool in medical research, disease diagnosis and treatment, functioning by extracting important physiological and pathological information or knowledge from medical images. Recently, deep learning-based analysis of pathological images and morphology to predict tumor biomarkers has attracted great attention from both the medical image and machine learning communities, as this combination not only reduces the burden on pathologists but also saves high costs and time. Therefore, it is necessary to summarize the current process of processing pathological images and the key steps and methods used in each stage, including: (1) pre-processing of pathological images, (2) image segmentation, (3) feature extraction, and (4) feature model construction. This will help people choose better and more appropriate medical image processing methods when predicting tumor biomarkers.
Collapse
Affiliation(s)
- Xiaoliang Xie
- Department of Colorectal Surgery, General Hospital of Ningxia Medical University, Yinchuan, China.,College of Clinical Medicine, Ningxia Medical University, Yinchuan, China
| | - Xulin Wang
- Department of Oncology Surgery, Central Hospital of Jia Mu Si City, Jia Mu Si, China
| | - Yuebin Liang
- Geneis Beijing Co., Ltd., Beijing, China.,Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
| | - Jingya Yang
- Geneis Beijing Co., Ltd., Beijing, China.,Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China.,School of Electrical and Information Engineering, Anhui University of Technology, Ma'anshan, China
| | - Yan Wu
- Geneis Beijing Co., Ltd., Beijing, China.,Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
| | - Li Li
- Beijing Shanghe Jiye Biotech Co., Ltd., Beijing, China
| | - Xin Sun
- Department of Medical Affairs, Central Hospital of Jia Mu Si City, Jia Mu Si, China
| | - Pingping Bing
- Academician Workstation, Changsha Medical University, Changsha, China
| | - Binsheng He
- Academician Workstation, Changsha Medical University, Changsha, China
| | - Geng Tian
- Geneis Beijing Co., Ltd., Beijing, China.,Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China.,IBMC-BGI Center, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
| | - Xiaoli Shi
- Geneis Beijing Co., Ltd., Beijing, China.,Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
| |
Collapse
|
38
|
Zhong Y, Piao Y, Zhang G. Dilated and soft attention-guided convolutional neural network for breast cancer histology images classification. Microsc Res Tech 2021; 85:1248-1257. [PMID: 34859543 DOI: 10.1002/jemt.23991] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Revised: 10/03/2021] [Accepted: 10/18/2021] [Indexed: 01/22/2023]
Abstract
Breast cancer is one of the most common types of cancer in women, and histopathological imaging is considered the gold standard for its diagnosis. However, the great complexity of histopathological images and the considerable workload make this work extremely time-consuming, and the results may be affected by the subjectivity of the pathologist. Therefore, the development of an accurate, automated method for analysis of histopathological images is critical to this field. In this article, we propose a deep learning method guided by the attention mechanism for fast and effective classification of haematoxylin and eosin-stained breast biopsy images. First, this method takes advantage of DenseNet and uses the feature map's information. Second, we introduce dilated convolution to produce a larger receptive field. Finally, spatial attention and channel attention are used to guide the extraction of the most useful visual features. With the use of fivefold cross-validation, the best model obtained an accuracy of 96.47% on the BACH2018 dataset. We also evaluated our method on other datasets, and the experimental results demonstrated that our model has reliable performance. This study indicates that our histopathological image classifier with a soft attention-guided deep learning model for breast cancer shows significantly better results than the latest methods. It has great potential as an effective tool for automatic evaluation of digital histopathological microscopic images for computer-aided diagnosis.
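As a framework-free illustration of why dilation produces the larger receptive field described above (the kernel and input values here are invented for illustration, not taken from the paper), a three-tap 1-D convolution with dilation 2 spans five input positions instead of three:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1-D convolution with a dilated kernel."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field of one layer
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(8, dtype=float)
dense = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=1)   # each output sees 3 inputs
sparse = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2)  # each output sees a 5-wide span
```

With the same number of kernel weights, the dilated variant covers a wider context, which is the effect the authors exploit to capture larger tissue structures without extra parameters.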
Collapse
Affiliation(s)
- Yutong Zhong
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, China
| | - Yan Piao
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, China
| | - Guohui Zhang
- Pneumoconiosis Diagnosis and Treatment Center, Occupational Preventive and Treatment Hospital in Jilin Province, Changchun, China
| |
Collapse
|
39
|
Mu X, Cui Y, Bian R, Long L, Zhang D, Wang H, Shen Y, Wu J, Zou G. In-depth learning of automatic segmentation of shoulder joint magnetic resonance images based on convolutional neural networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 211:106325. [PMID: 34536635 DOI: 10.1016/j.cmpb.2021.106325] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Accepted: 07/25/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE Magnetic resonance imaging (MRI) is gradually replacing computed tomography (CT) in the examination of bones and joints. Accurate, automatic segmentation of the bone structure in shoulder joint MRI is essential for the measurement and diagnosis of bone injuries and diseases. Existing bone segmentation algorithms cannot achieve automatic segmentation without prior knowledge, and their versatility and accuracy are relatively low. For this reason, an automatic segmentation algorithm based on the combination of image blocks and convolutional neural networks is proposed. METHODS First, we establish 4 segmentation models: 3 U-Net-based bone segmentation models (a humeral segmentation model, a joint bone segmentation model, and a combined humeral head and articular bone segmentation model) and a block-based AlexNet segmentation model. Then we use the 4 segmentation models to obtain candidate bone areas and accurately detect the locations of the humerus and joint bone by voting. Finally, the AlexNet segmentation model is applied within the detected bone area to segment the bone edge with pixel-level accuracy. RESULTS The experimental data were obtained from 8 groups of patients in the orthopedics department of our hospital. Each scan sequence includes about 100 images, which were segmented and labeled. Five groups of patients were used for training and five-fold cross-validation, and three groups were used to test the actual segmentation effect. The average Dice coefficient, positive predictive value (PPV), and sensitivity reached 0.91 ± 0.02, 0.95 ± 0.03, and 0.95 ± 0.02, respectively. CONCLUSIONS Using only deep learning on 2D medical images from a small sample of patient data, the method in this paper obtains very accurate shoulder joint segmentation results that can provide clinical diagnostic guidance in orthopedics. At the same time, the proposed framework has a degree of versatility and is suitable for the precise segmentation of specific organs and tissues in MRI from small-sample data.
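The three reported metrics (Dice coefficient, PPV, sensitivity) can be reproduced for any pair of binary masks from the confusion-matrix counts; a minimal NumPy sketch, with example masks invented for illustration:

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, positive predictive value and sensitivity for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    ppv = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return dice, ppv, sensitivity

pred = np.array([[1, 1, 0], [0, 1, 0]])   # predicted mask (toy example)
gt   = np.array([[1, 1, 0], [0, 0, 1]])   # ground-truth mask (toy example)
dice, ppv, sens = overlap_metrics(pred, gt)
```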
Collapse
Affiliation(s)
- Xinhong Mu
- Yancheng First Hospital, Affiliated Hospital of Nanjing University Medical School, 166 Yulong Road West, Tinghu District, China; The First People's Hospital of Yancheng, 166 Yulong Road West, Tinghu District, China.
| | - Yi Cui
- Yancheng First Hospital, Affiliated Hospital of Nanjing University Medical School, 166 Yulong Road West, Tinghu District, China; The First People's Hospital of Yancheng, 166 Yulong Road West, Tinghu District, China
| | - Rongpeng Bian
- Yancheng First Hospital, Affiliated Hospital of Nanjing University Medical School, 166 Yulong Road West, Tinghu District, China; The First People's Hospital of Yancheng, 166 Yulong Road West, Tinghu District, China
| | - Long Long
- Yancheng First Hospital, Affiliated Hospital of Nanjing University Medical School, 166 Yulong Road West, Tinghu District, China; The First People's Hospital of Yancheng, 166 Yulong Road West, Tinghu District, China
| | - Daliang Zhang
- Yancheng First Hospital, Affiliated Hospital of Nanjing University Medical School, 166 Yulong Road West, Tinghu District, China; The First People's Hospital of Yancheng, 166 Yulong Road West, Tinghu District, China
| | - Huawen Wang
- Yancheng First Hospital, Affiliated Hospital of Nanjing University Medical School, 166 Yulong Road West, Tinghu District, China; The First People's Hospital of Yancheng, 166 Yulong Road West, Tinghu District, China
| | - Yidong Shen
- Yancheng First Hospital, Affiliated Hospital of Nanjing University Medical School, 166 Yulong Road West, Tinghu District, China; The First People's Hospital of Yancheng, 166 Yulong Road West, Tinghu District, China
| | - Jingjing Wu
- Yancheng First Hospital, Affiliated Hospital of Nanjing University Medical School, 166 Yulong Road West, Tinghu District, China; The First People's Hospital of Yancheng, 166 Yulong Road West, Tinghu District, China
| | - Guoyou Zou
- Yancheng First Hospital, Affiliated Hospital of Nanjing University Medical School, 166 Yulong Road West, Tinghu District, China; The First People's Hospital of Yancheng, 166 Yulong Road West, Tinghu District, China
| |
Collapse
|
40
|
Li X, Xu Z, Shen X, Zhou Y, Xiao B, Li TQ. Detection of Cervical Cancer Cells in Whole Slide Images Using Deformable and Global Context Aware Faster RCNN-FPN. Curr Oncol 2021; 28:3585-3601. [PMID: 34590614 PMCID: PMC8482136 DOI: 10.3390/curroncol28050307] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Revised: 09/06/2021] [Accepted: 09/12/2021] [Indexed: 01/16/2023] Open
Abstract
Cervical cancer is a worldwide public health problem with high rates of illness and mortality among women. In this study, we proposed a novel framework based on the Faster RCNN-FPN architecture for the detection of abnormal cervical cells in cytology images from a cancer screening test. We extended the Faster RCNN-FPN model by infusing deformable convolution layers into the feature pyramid network (FPN) to improve scalability. Furthermore, we introduced a global context aware module alongside the Region Proposal Network (RPN) to enhance the spatial correlation between the background and the foreground. Extensive experiments with the proposed deformable and global context aware (DGCA) RCNN were carried out using the cervical image dataset of the "Digital Human Body" Vision Challenge from the Alibaba Cloud TianChi Company. Performance evaluation based on the mean average precision (mAP) and the receiver operating characteristic (ROC) curve demonstrated considerable advantages of the proposed framework. In particular, when combined with tagging of negative image samples using traditional computer-vision techniques, a 6-9% increase in mAP was achieved. The proposed DGCA-RCNN model has the potential to become a clinically useful AI tool for automated detection of cervical cancer cells in whole slide images of Pap smears.
Collapse
Affiliation(s)
- Xia Li
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
| | - Zhenhao Xu
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
| | - Xi Shen
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
| | - Yongxia Zhou
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
| | - Binggang Xiao
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
| | - Tie-Qiang Li
- Institute of Information Engineering, China Jiliang University, Hangzhou 310018, China; (X.L.); (Z.X.); (X.S.); (Y.Z.); (B.X.)
- Department of Clinical Science, Intervention and Technology, Karolinska Institutet, S-17177 Stockholm, Sweden
- Department of Medical Radiation and Nuclear Medicine, Karolinska University Hospital, S-14186 Stockholm, Sweden
| |
Collapse
|
41
|
Rastghalam R, Danyali H, Helfroush MS, Celebi ME, Mokhtari M. Skin Melanoma Detection in Microscopic Images Using HMM-Based Asymmetric Analysis and Expectation Maximization. IEEE J Biomed Health Inform 2021; 25:3486-3497. [PMID: 34003756 DOI: 10.1109/jbhi.2021.3081185] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Melanoma is one of the deadliest types of skin cancer with increasing incidence. The most definitive diagnosis method is the histopathological examination of the tissue sample. In this paper, a melanoma detection algorithm is proposed based on decision-level fusion and a Hidden Markov Model (HMM), whose parameters are optimized using Expectation Maximization (EM) and asymmetric analysis. The texture heterogeneity of the samples is determined using asymmetric analysis. A fusion-based HMM classifier trained using EM is introduced. For this purpose, a novel texture feature is extracted based on two local binary patterns, namely local difference pattern (LDP) and statistical histogram features of the microscopic image. Extensive experiments demonstrate that the proposed melanoma detection algorithm yields a total error of less than 0.04%.
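The local difference pattern used above builds on the classic local binary pattern (LBP). As a sketch of the standard 8-neighbour LBP, under one common clockwise bit-ordering convention (not necessarily the one used in the paper, and with an invented example patch):

```python
import numpy as np

def lbp_code(patch):
    """Classic 8-neighbour local binary pattern code of a 3x3 patch."""
    center = patch[1, 1]
    # clockwise neighbours starting at the top-left corner
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, c] >= center else 0 for r, c in offsets]
    # pack the 8 threshold bits into one integer texture code
    return sum(b << i for i, b in enumerate(bits))

patch = np.array([[5, 9, 1],
                  [3, 6, 7],
                  [8, 2, 4]])
code = lbp_code(patch)
```

Sliding this over an image and histogramming the resulting codes yields the kind of texture-histogram feature the authors combine with their statistical histogram features.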
Collapse
|
42
|
Li D, Tang P, Zhang R, Sun C, Li Y, Qian J, Liang Y, Yang J, Zhang L. Robust Blood Cell Image Segmentation Method Based on Neural Ordinary Differential Equations. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:5590180. [PMID: 34413897 PMCID: PMC8369191 DOI: 10.1155/2021/5590180] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 05/10/2021] [Accepted: 07/27/2021] [Indexed: 11/17/2022]
Abstract
For the analysis of medical images, one of the most basic methods is to diagnose diseases by examining blood smears through a microscope to check the morphology, number, and ratio of red blood cells and white blood cells. Accurate segmentation of blood cell images is therefore essential for cell counting and identification. The aim of this paper is to perform blood smear image segmentation by combining neural ordinary differential equations (NODEs) with U-Net networks to improve the accuracy of image segmentation. To study the effect of the ODE solver on the speed and accuracy of the network, an ODE-block module was added to the nine convolutional layers of the U-Net network. First, the blood cell images are preprocessed to enhance the contrast between the regions to be segmented; second, the same dataset is used for the training and testing sets to evaluate segmentation results. Based on the experimental results, we select where the ordinary differential equation block (ODE-block) module is added and choose an appropriate error tolerance, balancing computation time against segmentation accuracy to achieve the best performance; finally, the error tolerance of the ODE-block is adjusted to increase the network depth, and the trained NODEs-UNet network model is used for cell image segmentation. Using our proposed network model to segment blood cell images in the testing set, we achieve 95.3% pixel accuracy and 90.61% mean intersection over union. Compared with the U-Net and ResNet networks, the pixel accuracy of our model is higher by 0.88% and 0.46%, respectively, and the mean intersection over union is higher by 2.18% and 1.13%, respectively. Our proposed network model improves the accuracy of blood cell image segmentation and reduces the computational cost of the network.
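The two evaluation metrics quoted here, pixel accuracy and mean intersection over union (mIoU), follow from a pair of integer label maps; a minimal NumPy sketch with invented toy label maps:

```python
import numpy as np

def pixel_accuracy_and_miou(pred, gt, num_classes):
    """Pixel accuracy and mean IoU from two integer label maps."""
    acc = (pred == gt).mean()
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(acc), float(np.mean(ious))

pred = np.array([[0, 0, 1], [1, 1, 0]])   # predicted labels (toy example)
gt   = np.array([[0, 1, 1], [1, 1, 0]])   # ground-truth labels (toy example)
acc, miou = pixel_accuracy_and_miou(pred, gt, num_classes=2)
```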
Collapse
Affiliation(s)
- Dongming Li
- School of Information Technology, Jilin Agricultural University, Changchun 130118, China
- College of Opto-Electronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
| | - Peng Tang
- School of Information Technology, Jilin Agricultural University, Changchun 130118, China
| | - Run Zhang
- College of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin 130012, China
| | | | - Yong Li
- College of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin 130012, China
| | - Jingning Qian
- College of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin 130012, China
| | - Yan Liang
- School of Information Technology, Jilin Agricultural University, Changchun 130118, China
| | - Jinhua Yang
- College of Opto-Electronic Engineering, Changchun University of Science and Technology, Changchun 130022, China
| | - Lijuan Zhang
- College of Computer Science and Engineering, Changchun University of Technology, Changchun, Jilin 130012, China
| |
Collapse
|
43
|
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021; 11:1373. [PMID: 34441307 PMCID: PMC8393354 DOI: 10.3390/diagnostics11081373] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 07/25/2021] [Accepted: 07/27/2021] [Indexed: 12/13/2022] Open
Abstract
The increasing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes, and the corresponding demands on physicians' time and attention, have encouraged the development of deep learning (DL) models as constructive and effective support. DL has developed exponentially in recent years, with a major impact on the interpretation of medical images, and this has in turn driven the development, diversification, and quality improvement of scientific data, of knowledge-construction methods, and of the DL models used in medical applications. Existing research papers focus on describing, highlighting, or classifying individual constituent elements of the DL models used in the interpretation of medical images, and do not provide a unified picture of the importance and impact of each constituent on model performance. The novelty of our paper lies primarily in its unitary approach to the constituent elements of DL models, namely the data, the tools used by DL architectures, and specifically constructed combinations of DL architectures, highlighting their "key" features for completing tasks in current applications in medical image interpretation. The use of the "key" characteristics specific to each constituent of DL models, and the correct determination of their correlations, may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.
Collapse
Affiliation(s)
- Tudor Florin Ursuleanu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
| | - Andreea Roxana Luca
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
| | - Liliana Gheorghe
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
| | - Roxana Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
| | - Stefan Iancu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
| | - Maria Hlusneac
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
| | - Cristina Preda
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
| | - Alexandru Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; (T.F.U.); (R.G.); (S.I.); (M.H.); (C.P.); (A.G.)
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
| |
Collapse
|
44
|
Liew XY, Hameed N, Clos J. A Review of Computer-Aided Expert Systems for Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:2764. [PMID: 34199444 PMCID: PMC8199592 DOI: 10.3390/cancers13112764] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 05/25/2021] [Accepted: 05/28/2021] [Indexed: 11/18/2022] Open
Abstract
A computer-aided diagnosis (CAD) expert system is a powerful tool to efficiently assist a pathologist in achieving an early diagnosis of breast cancer. This process identifies the presence of cancer in breast tissue samples and the distinct stages of cancer. In a standard CAD system, the main pipeline involves image pre-processing, segmentation, feature extraction, feature selection, classification, and performance evaluation. In this review paper, we survey the existing state-of-the-art machine learning approaches applied at each stage, covering both conventional and deep learning methods, compare the methods, and provide technical details with their advantages and disadvantages. The aims are to investigate the impact of CAD systems using histopathology images, to identify deep learning methods that outperform conventional methods, and to provide a summary for future researchers to analyse and improve the existing techniques. Lastly, we discuss the research gaps in existing machine learning approaches and propose future direction guidelines for upcoming researchers.
Collapse
Affiliation(s)
- Xin Yu Liew
- Jubilee Campus, University of Nottingham, Wollaton Road, Nottingham NG8 1BB, UK; (N.H.); (J.C.)
| | | | | |
Collapse
|
45
|
Segmentation and analysis of Pap smear microscopic images using the K-means and J48 algorithms. JURNAL TEKNOLOGI DAN SISTEM KOMPUTER 2021. [DOI: 10.14710/jtsiskom.2021.13943] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
A Pap smear is used for the early detection of cervical cancer. This study proposes a segmentation and analysis method for Pap smear cell images using the K-means algorithm, so that cytoplasmic cells, nuclear cells, and inflammatory cells can be segmented automatically. The features extracted from the cytoplasmic, nuclear, and inflammatory cell images were classified using the J48 algorithm with 37 training samples. The training resulted in an accuracy of 94.594%, a precision of 95%, and a sensitivity of 94.6%. The classification of 24 testing images resulted in an accuracy of 91.6%, a precision of 92.5%, and a sensitivity of 91.7%.
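K-means clustering of pixel intensities, as used in this entry, can be sketched in a few lines of NumPy; the quantile-based initialization and the synthetic intensity values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def kmeans_1d(values, k, iters=20):
    """Lloyd's K-means on scalar intensities; returns labels and centers."""
    # spread the initial centers across the intensity range (deterministic)
    centers = np.quantile(values, np.linspace(0, 1, k))
    for _ in range(iters):
        # assign each pixel to its nearest center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers

# synthetic intensities: dark nuclei (~20), cytoplasm (~120), background (~230)
img = np.array([20, 25, 18, 120, 115, 125, 230, 228, 235], dtype=float)
labels, centers = kmeans_1d(img, k=3)
```

In a real pipeline the three clusters would then be mapped to nucleus, cytoplasm, and background regions before feature extraction.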
Collapse
|
46
|
Sobhani F, Robinson R, Hamidinekoo A, Roxanis I, Somaiah N, Yuan Y. Artificial intelligence and digital pathology: Opportunities and implications for immuno-oncology. Biochim Biophys Acta Rev Cancer 2021; 1875:188520. [PMID: 33561505 PMCID: PMC9062980 DOI: 10.1016/j.bbcan.2021.188520] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Revised: 01/04/2021] [Accepted: 01/30/2021] [Indexed: 02/08/2023]
Abstract
The field of immuno-oncology has expanded rapidly over the past decade, but key questions remain. How does tumour-immune interaction regulate disease progression? How can we prospectively identify patients who will benefit from immunotherapy? Identifying measurable features of the tumour immune-microenvironment which have prognostic or predictive value will be key to making meaningful gains in these areas. Recent developments in deep learning enable big-data analysis of pathological samples. Digital approaches allow data to be acquired, integrated and analysed far beyond what is possible with conventional techniques, and to do so efficiently and at scale. This has the potential to reshape what can be achieved in terms of volume, precision and reliability of output, enabling data for large cohorts to be summarised and compared. This review examines applications of artificial intelligence (AI) to important questions in immuno-oncology (IO). We discuss general considerations that need to be taken into account before AI can be applied in any clinical setting. We describe AI methods that have been applied to the field of IO to date and present several examples of their use.
Collapse
Affiliation(s)
- Faranak Sobhani
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
| | - Ruth Robinson
- Division of Radiotherapy and Imaging, Institute of Cancer Research, The Royal Marsden NHS Foundation Trust, London, UK.
| | - Azam Hamidinekoo
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
| | - Ioannis Roxanis
- The Breast Cancer Now Toby Robins Research Centre, The Institute of Cancer Research, London, UK.
| | - Navita Somaiah
- Division of Radiotherapy and Imaging, Institute of Cancer Research, The Royal Marsden NHS Foundation Trust, London, UK.
| | - Yinyin Yuan
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
| |
Collapse
|
47
|
Ke J, Shen Y, Lu Y, Deng J, Wright JD, Zhang Y, Huang Q, Wang D, Jing N, Liang X, Jiang F. Quantitative analysis of abnormalities in gynecologic cytopathology with deep learning. J Transl Med 2021; 101:513-524. [PMID: 33526806 DOI: 10.1038/s41374-021-00537-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Revised: 12/21/2020] [Accepted: 01/04/2021] [Indexed: 12/19/2022] Open
Abstract
Cervical cancer is one of the most frequent cancers in women worldwide, yet the early detection and treatment of lesions via regular cervical screening have led to a drastic reduction in the mortality rate. However, routine screening as a regular health checkup for women is time-consuming and labor-intensive, and lacks a characteristic phenotypic profile and quantitative analysis. In this research, through the analysis of a privately collected and manually annotated dataset of 130 cytological whole-slide images, the authors propose a deep-learning diagnostic system to localize, grade, and quantify squamous cell abnormalities. The system can distinguish abnormalities at the morphology level, namely atypical squamous cells of undetermined significance, low-grade squamous intraepithelial lesion, high-grade squamous intraepithelial lesion, and squamous cell carcinoma, as well as differential phenotypes of normal cells. The case study covered 51 positive and 79 negative digital gynecologic cytology slides collected from 2016 to 2018. Our automatic diagnostic system demonstrated a sensitivity of 100% for slide-level abnormality prediction, confirmed by three pathologists who performed slide-level diagnosis and training sample annotations. In the cellular-level classification, we achieved an accuracy of 94.5% in the binary classification between normality and abnormality, and the AUC was above 85% for each subtype of epithelial abnormality. Although final confirmation from pathologists is often a must, computer-aided methods are empirically capable of effective extraction, interpretation, and quantification of morphological features, while also making the process more objective and reproducible.
Collapse
Affiliation(s)
- Jing Ke
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China.
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia.
| | - Yiqing Shen
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Yizhou Lu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Junwei Deng
- School of Information, University of Michigan, Ann Arbor, MI, USA
| | - Jason D Wright
- Department of Obstetrics and Gynecology, Columbia University, New York, NY, USA
| | - Yan Zhang
- Department of Pathology, Shanghai Tongshu Medical Laboratory Co.Ltd, Shanghai, China
| | - Qin Huang
- Department of Endocrinology and Metabolism, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai Clinical Center for Diabetes, Shanghai, China
| | - Dadong Wang
- Quantitative Imaging, Data61 CSIRO, Sydney, NSW, Australia
| | - Naifeng Jing
- Department of Micro-Nano Electronics, Shanghai Jiao Tong University, Shanghai, China
| | - Xiaoyao Liang
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Biren Research, Shanghai, China
| | - Fusong Jiang
- Department of Endocrinology and Metabolism, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai Clinical Center for Diabetes, Shanghai, China
| |
Collapse
|
48
|
Yang Y, Yang J, Liang Y, Liao B, Zhu W, Mo X, Huang K. Identification and Validation of Efficacy of Immunological Therapy for Lung Cancer From Histopathological Images Based on Deep Learning. Front Genet 2021; 12:642981. [PMID: 33633793 PMCID: PMC7900553 DOI: 10.3389/fgene.2021.642981] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 01/18/2021] [Indexed: 12/26/2022] Open
Abstract
Cancer immunotherapy, a novel treatment against cancer metastasis and recurrence, has brought a significantly promising and effective therapy to cancer treatment. At present, programmed death 1 (PD-1) and programmed cell death-ligand 1 (PD-L1) blockade for lung cancer is primarily recognized as immune checkpoint inhibitor (ICI) therapy with an anti-tumor effect; however, its efficacy remains uncertain. Tumor mutation burden (TMB) has been recognized as a potentially predictive marker for immune therapy, but measuring it is invasive and costly. Therefore, discovering more immune-related biomarkers that can guide immunotherapy is a crucial step in its development. In our study, we propose a deep convolutional neural network (CNN)-based framework, DeepLRHE, which can efficiently analyze stained pathological images of lung cancer tissue and identify and explore pathogenesis relevant to immunological treatment in the clinical field. We used 180 whole slide images (WSIs) of lung cancer downloaded from TCGA for model training and validation. After two rounds of cross-validation, we compared the area under the curve (AUC) for multiple mutated genes: TP53 had the highest AUC, reaching 0.87, while EGFR, DNMT3A, PBRM1, and STK11 ranged from 0.71 to 0.84. The results show that deep learning can be used to assist health professionals with targeted therapy as well as immunotherapy, thereby improving disease prognosis.
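The per-gene AUC values compared in this entry can be computed without an explicit ROC sweep, since AUC equals the probability that a randomly chosen positive sample scores above a randomly chosen negative one (the Mann-Whitney form); the scores and labels below are invented for illustration:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # count positive-vs-negative wins; ties contribute half a win
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.6, 0.2]   # predicted mutation probabilities (toy)
labels = [1, 0, 1, 1, 0]             # true mutation status (toy)
auc = roc_auc(scores, labels)
```

An AUC of 0.5 corresponds to random ranking, and 1.0 to a predictor that scores every mutated case above every non-mutated one.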
Collapse
Affiliation(s)
- Yachao Yang
- Key Laboratory of Computational Science and Application of Hainan Province, Haikou, China.
- Key Laboratory of Data Science and Intelligence Education (Hainan Normal University) Ministry of Education, Haikou, China.
- School of Mathematics and Statistics, Hainan Normal University, Haikou, China
| | - Jialiang Yang
- Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China.
- Geneis (Beijing) Co., Ltd., Beijing, China
| | - Yuebin Liang
- Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China.
- Geneis (Beijing) Co., Ltd., Beijing, China
| | - Bo Liao
- Key Laboratory of Computational Science and Application of Hainan Province, Haikou, China.
- Key Laboratory of Data Science and Intelligence Education (Hainan Normal University) Ministry of Education, Haikou, China.
- School of Mathematics and Statistics, Hainan Normal University, Haikou, China
| | - Wen Zhu
- Key Laboratory of Computational Science and Application of Hainan Province, Haikou, China.
- Key Laboratory of Data Science and Intelligence Education (Hainan Normal University) Ministry of Education, Haikou, China.
- School of Mathematics and Statistics, Hainan Normal University, Haikou, China
| | - Xiaofei Mo
- Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China.
- Geneis (Beijing) Co., Ltd., Beijing, China
| | - Kaimei Huang
- Key Laboratory of Computational Science and Application of Hainan Province, Haikou, China.
- Key Laboratory of Data Science and Intelligence Education (Hainan Normal University) Ministry of Education, Haikou, China.
- School of Mathematics and Statistics, Hainan Normal University, Haikou, China
| |
Collapse
|
49
|
Lin H, Chen H, Wang X, Wang Q, Wang L, Heng PA. Dual-path network with synergistic grouping loss and evidence driven risk stratification for whole slide cervical image analysis. Med Image Anal 2021; 69:101955. [PMID: 33588122 DOI: 10.1016/j.media.2021.101955] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Revised: 12/28/2020] [Accepted: 01/02/2021] [Indexed: 12/26/2022]
Abstract
Cervical cancer has been one of the most lethal cancers threatening women's health. Nevertheless, its incidence can be effectively minimized with preventive clinical management strategies, including vaccines and regular screening examinations. Screening cervical smears under the microscope is a widely used routine in regular examinations, but it consumes a large amount of cytologists' time and labour. Computerized cytology analysis caters to this imperative need, alleviating cytologists' workload and reducing the potential misdiagnosis rate. However, automatic analysis of cervical smears via digitalized whole slide images (WSIs) remains a challenging problem, due to the extremely large image resolution, the existence of tiny lesions, noisy datasets, and the intricate clinical definition of classes with fuzzy boundaries. In this paper, we design an efficient deep convolutional neural network (CNN) with a dual-path (DP) encoder for lesion retrieval, which ensures inference efficiency and sensitivity to both tiny and large lesions. Incorporating a synergistic grouping loss (SGL), the network can be effectively trained on noisy datasets with fuzzy inter-class boundaries. Inspired by the cytologists' clinical diagnostic criteria, a novel smear-level classifier, i.e., rule-based risk stratification (RRS), is proposed for accurate smear-level classification and risk stratification, aligning well with the intricate cytological definition of the classes. Extensive experiments on the largest dataset to date, including 19,303 WSIs from multiple medical centers, validate the robustness of our method. With a high sensitivity of 0.907 and a specificity of 0.80, our method shows the potential to reduce the workload of cytologists in routine practice.
Collapse
Affiliation(s)
- Huangjing Lin
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
| | - Hao Chen
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Xi Wang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Qiong Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
| | - Liansheng Wang
- Department of Computer Science, Xiamen University, Xiamen, China
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| |
Collapse
|
50
|
Zormpas-Petridis K, Noguera R, Ivankovic DK, Roxanis I, Jamin Y, Yuan Y. SuperHistopath: A Deep Learning Pipeline for Mapping Tumor Heterogeneity on Low-Resolution Whole-Slide Digital Histopathology Images. Front Oncol 2021; 10:586292. [PMID: 33552964 PMCID: PMC7855703 DOI: 10.3389/fonc.2020.586292] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Accepted: 11/30/2020] [Indexed: 12/27/2022] Open
Abstract
The high computational cost associated with digital pathology image analysis approaches is a challenge to their translation into the routine pathology clinic. Here, we propose a computationally efficient framework (SuperHistopath) designed to map global context features reflecting the rich morphological heterogeneity of tumors. SuperHistopath efficiently combines (i) a segmentation approach using the simple linear iterative clustering (SLIC) superpixels algorithm, applied directly to whole-slide images at low resolution (5x magnification) to adhere to region boundaries and form homogeneous tissue-level spatial units, with (ii) classification of the superpixels using a convolutional neural network (CNN). To demonstrate the versatility of SuperHistopath across histopathology tasks, we classified tumor tissue, stroma, necrosis, lymphocyte clusters, differentiating regions, fat, hemorrhage, and normal tissue in 127 melanomas, 23 triple-negative breast cancers, and 73 samples from transgenic mouse models of high-risk childhood neuroblastoma, with high accuracy (98.8%, 93.1%, and 98.3%, respectively). Furthermore, SuperHistopath enabled the discovery of significant differences in the tumor phenotypes of neuroblastoma mouse models emulating genomic variants of high-risk disease, and the stratification of melanoma patients (a high ratio of lymphocyte-to-tumor superpixels (p = 0.015) and a low stroma-to-tumor ratio (p = 0.028) were associated with a favorable prognosis). Finally, SuperHistopath is efficient for annotation of ground-truth datasets (as there is no need for boundary delineation), training, and application (~5 min for classifying a whole-slide image and as little as ~30 min for network training). These attributes make SuperHistopath particularly attractive for research on rich datasets and could also facilitate its adoption in the clinic, accelerating the pathologist workflow through the quantification of phenotypes and predictive/prognostic markers.
Collapse
Affiliation(s)
| | - Rosa Noguera
- Department of Pathology, Medical School, University of Valencia-INCLIVA Biomedical Health Research Institute, Valencia, Spain.
- Low Prevalence Tumors, Centro de Investigación Biomédica en Red de Cáncer (CIBERONC), Instituto de Salud Carlos III, Madrid, Spain
| | | | - Ioannis Roxanis
- Breast Cancer Now Toby Robins Research Centre, The Institute of Cancer Research, London, United Kingdom
| | - Yann Jamin
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, United Kingdom
| | - Yinyin Yuan
- Division of Molecular Pathology, The Institute of Cancer Research, London, United Kingdom
| |
Collapse
|