1. Xiao H, Wang Y, Xiong S, Ren Y, Zhang H. CUAMT: A MRI semi-supervised medical image segmentation framework based on contextual information and mixed uncertainty. Computer Methods and Programs in Biomedicine 2025;267:108755. [PMID: 40306001] [DOI: 10.1016/j.cmpb.2025.108755]
Abstract
BACKGROUND AND OBJECTIVE Semi-supervised medical image segmentation is a class of machine learning paradigms that trains and runs segmentation models on both labeled and unlabeled medical images, which can effectively reduce the data labeling workload. However, existing consistency-based semi-supervised segmentation models mainly investigate ever more complex consistency strategies and make inefficient use of volumetric contextual information. This leaves the model with a vague or uncertain understanding of the boundary between object and background, resulting in ambiguous or even erroneous boundary segmentations. METHODS This study therefore proposes CUAMT, a hybrid uncertainty network based on contextual information. A contextual information extraction module (CIE) learns connections between image contexts by extracting semantic features at different scales and guides the model to better exploit contextual information. In addition, a hybrid uncertainty module (HUM) guides the model to focus on boundary information by combining the global and local uncertainty estimates of two different networks, improving segmentation performance at boundaries. RESULTS Validation experiments were conducted on left atrial and brain tumor segmentation datasets. Our model achieves 89.84%, 79.89%, and 8.73 on the Dice, Jaccard, and 95HD metrics, respectively, significantly outperforming several current SOTA semi-supervised methods. These results confirm that the CIE and HUM strategies are effective. CONCLUSION A semi-supervised segmentation framework is proposed for medical image segmentation.
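Editor's note: the abstract describes the general recipe shared by this family of methods, consistency training gated by an uncertainty estimate. The PyTorch sketch below illustrates that recipe in a mean-teacher setup; the entropy cutoff and loss form are assumptions for illustration, not the paper's exact CIE/HUM modules.

```python
import torch

def uncertainty_masked_consistency(student_logits, teacher_logits, thresh=0.5):
    """Consistency loss that down-weights pixels the teacher is uncertain about.

    Generic sketch of uncertainty-gated consistency in mean-teacher style
    semi-supervised segmentation; not the paper's exact HUM formulation.
    """
    with torch.no_grad():  # no gradient flows through the teacher
        p_teacher = torch.softmax(teacher_logits, dim=1)
        # Predictive entropy as a per-pixel uncertainty proxy.
        entropy = -(p_teacher * torch.log(p_teacher + 1e-8)).sum(dim=1)
        mask = (entropy < thresh).float()  # keep confident pixels only
    p_student = torch.softmax(student_logits, dim=1)
    per_pixel = ((p_student - p_teacher) ** 2).mean(dim=1)  # MSE per pixel
    return (per_pixel * mask).sum() / (mask.sum() + 1e-8)
```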
Affiliation(s)
- Hanguang Xiao, Yangjian Wang, Shidong Xiong, Yanjun Ren, Hongmin Zhang: School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
2. Nicke T, Schäfer JR, Höfener H, Feuerhake F, Merhof D, Kießling F, Lotz J. Tissue concepts: Supervised foundation models in computational pathology. Computers in Biology and Medicine 2025;186:109621. [PMID: 39793348] [DOI: 10.1016/j.compbiomed.2024.109621]
Abstract
Due to the increasing workload of pathologists, the need for automation to support diagnostic tasks and quantitative biomarker evaluation is becoming more and more apparent. Foundation models have the potential to improve generalizability within and across centers and to serve as starting points for data-efficient development of specialized yet robust AI models. However, training foundation models is usually very expensive in terms of data, computation, and time. This paper proposes a supervised training method that drastically reduces these expenses. The proposed method uses multi-task learning to train a joint encoder, combining 16 different classification, segmentation, and detection tasks on a total of 912,000 patches. Since the encoder is capable of capturing the properties of the samples, we term it the Tissue Concepts encoder. To evaluate the performance and generalizability of the Tissue Concepts encoder across centers, classification of whole slide images from four of the most prevalent solid cancers - breast, colon, lung, and prostate - was used. The experiments show that the Tissue Concepts model achieves performance comparable to models trained with self-supervision while requiring only 6% of the training patches. Furthermore, the Tissue Concepts encoder outperforms an ImageNet pre-trained encoder on both in-domain and out-of-domain data. The pre-trained models will be made available at https://github.com/FraunhoferMEVIS/MedicalMultitaskModeling.
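Editor's note: the core design is a single encoder shared across many supervised tasks. A minimal PyTorch sketch of such a shared-encoder multi-task setup follows; module names and the task dictionary are illustrative, not the authors' code.

```python
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    """One shared encoder with a lightweight head per supervised task."""

    def __init__(self, encoder: nn.Module, feat_dim: int, task_classes: dict):
        super().__init__()
        self.encoder = encoder  # shared backbone producing pooled features
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, n) for name, n in task_classes.items()}
        )

    def forward(self, x, task: str):
        feat = self.encoder(x)  # (B, feat_dim) pooled features
        return self.heads[task](feat)

# Training cycles over tasks so the shared encoder sees all supervision:
# model = SharedEncoderMultiTask(backbone, 512, {"tumor_cls": 2, "tissue_cls": 9})
# for (x, y), task in task_sampler:
#     loss = criterion[task](model(x, task), y)
```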
Affiliation(s)
- Till Nicke, Jan Raphael Schäfer, Henning Höfener, Johannes Lotz: Fraunhofer Institute for Digital Medicine MEVIS, Bremen/Lübeck/Aachen, Germany
- Friedrich Feuerhake: Institute for Pathology, Hannover Medical School, Hannover, Germany; Institute of Neuropathology, Medical Center - University of Freiburg, Freiburg, Germany
- Dorit Merhof: Fraunhofer Institute for Digital Medicine MEVIS, Germany; Institute of Image Analysis and Computer Vision, University of Regensburg, Regensburg, Germany
- Fabian Kießling: Fraunhofer Institute for Digital Medicine MEVIS, Germany; Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany
3. Zhu M, Zhai Z, Wang Y, Chen F, Liu R, Yang X, Zhao G. Advancements in the application of artificial intelligence in the field of colorectal cancer. Frontiers in Oncology 2025;15:1499223. [PMID: 40071094] [PMCID: PMC11893421] [DOI: 10.3389/fonc.2025.1499223]
Abstract
Colorectal cancer (CRC) is a prevalent malignant tumor in the digestive system. As reported in the 2020 global cancer statistics, CRC accounted for more than 1.9 million new cases and 935,000 deaths, making it the third most common cancer worldwide in terms of incidence and the second leading cause of cancer-related deaths globally. This poses a significant threat to global public health. Early screening methods, such as fecal occult blood tests, colonoscopies, and imaging techniques, are crucial for detecting early lesions and enabling timely intervention before cancer becomes invasive. Early detection greatly enhances treatment possibilities, such as surgery, radiation therapy, and chemotherapy, with surgery being the main approach for treating early-stage CRC. In this context, artificial intelligence (AI) has shown immense potential in revolutionizing CRC management, serving as one of the most effective screening tools. AI, utilizing machine learning (ML) and deep learning (DL) algorithms, improves early detection, diagnosis, and treatment by processing large volumes of medical data, uncovering hidden patterns, and forecasting disease development. DL, a more advanced form of ML, simulates the brain's processing power, enhancing the accuracy of tumor detection, differentiation, and prognosis predictions. These innovations offer the potential to revolutionize cancer care by boosting diagnostic accuracy, refining treatment approaches, and ultimately enhancing patient outcomes.
Affiliation(s)
- Mengying Zhu, Ruibin Liu: Liaoning University of Traditional Chinese Medicine, Shenyang, China; Department of General Surgery, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China
- Zhenzhu Zhai: Liaoning University of Traditional Chinese Medicine, Shenyang, China
- Yue Wang, Xiaoquan Yang, Guohua Zhao: Department of General Surgery, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China
- Fang Chen: Department of Gynecology, People’s Hospital of Liaoning Province, Shenyang, China
4. Zhang Q, Li Y, Xue C, Wang H, Li X. GlandSAM: Injecting Morphology Knowledge Into Segment Anything Model for Label-Free Gland Segmentation. IEEE Transactions on Medical Imaging 2025;44:1070-1082. [PMID: 39378253] [DOI: 10.1109/tmi.2024.3476176]
Abstract
This paper presents GlandSAM, a label-free gland segmentation method that achieves performance comparable to supervised methods while requiring no labels during training or inference. We observe that the Segment Anything Model produces sub-optimal results on gland datasets: it either over-segments a gland into many fractions or under-segments gland regions by confusing many of them with the background, owing to the complex morphology of glands and the lack of sufficient labels. To address this challenge, GlandSAM injects two clues about gland morphology into SAM to guide the segmentation process: (1) heterogeneity within glands and (2) similarity with the background. Initially, we leverage these clues to decompose the intricate glands by selectively extracting a proposal for each gland sub-region of heterogeneous appearance. We then inject the morphology clues into SAM by fine-tuning with a novel morphology-aware semantic grouping module that explicitly groups the high-level semantics of gland sub-regions. In this way, GlandSAM captures comprehensive knowledge about gland morphology and produces well-delineated, complete segmentation results. Extensive experiments on the GlaS and CRAG datasets reveal that GlandSAM outperforms state-of-the-art label-free methods by a significant margin. Notably, GlandSAM even surpasses several fully-supervised methods that require pixel-wise labels for training, which highlights its remarkable performance and potential in the realm of gland segmentation.
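Editor's note: the "decompose, then regroup" step can be pictured with plain connected-component merging. The sketch below is only a geometric stand-in, since GlandSAM performs the grouping inside SAM's feature space with a learned module; the `min_area` filter is an assumption.

```python
import numpy as np
from scipy import ndimage

def merge_subregion_proposals(proposal_masks, min_area=50):
    """Union heterogeneous sub-region proposals, then re-split into instances.

    Geometric illustration only: GlandSAM's actual grouping is learned on
    high-level SAM features, not done on binary masks.
    """
    union = np.zeros_like(proposal_masks[0], dtype=bool)
    for m in proposal_masks:
        union |= m.astype(bool)  # merge all sub-region proposals
    labeled, n = ndimage.label(union)  # connected components = gland instances
    instances = []
    for i in range(1, n + 1):
        inst = labeled == i
        if inst.sum() >= min_area:  # drop tiny fragments
            instances.append(inst)
    return instances
```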
5. Nunes JD, Montezuma D, Oliveira D, Pereira T, Cardoso JS. A survey on cell nuclei instance segmentation and classification: Leveraging context and attention. Medical Image Analysis 2025;99:103360. [PMID: 39383642] [DOI: 10.1016/j.media.2024.103360]
Abstract
Nuclear-derived morphological features and biomarkers provide relevant insights regarding the tumour microenvironment, while also allowing diagnosis and prognosis in specific cancer types. However, manually annotating nuclei from gigapixel Haematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs) is a laborious and costly task, meaning automated algorithms for cell nuclei instance segmentation and classification could alleviate the workload of pathologists and clinical researchers while facilitating automatic extraction of clinically interpretable features for artificial intelligence (AI) tools. But due to the high intra- and inter-class variability of nuclear morphological and chromatic features, as well as the susceptibility of H&E stains to artefacts, state-of-the-art algorithms do not yet detect and classify instances with the necessary performance. In this work, we hypothesize that context and attention inductive biases in artificial neural networks (ANNs) could increase the performance and generalization of algorithms for cell nuclei instance segmentation and classification. To understand the advantages, use-cases, and limitations of context- and attention-based mechanisms in instance segmentation and classification, we start by reviewing works in computer vision and medical imaging. We then conduct a thorough survey of context and attention methods for cell nuclei instance segmentation and classification from H&E-stained microscopy imaging, with a comprehensive discussion of the challenges being tackled with context and attention. We also illustrate some limitations of current approaches and present ideas for future research. As a case study, we extend both a general (Mask-RCNN) and a customized (HoVer-Net) instance segmentation and classification method with context- and attention-based mechanisms, and perform a comparative analysis on a multicentre dataset for colon nuclei identification and counting. Although pathologists rely on context at multiple levels while paying attention to specific Regions of Interest (RoIs) when analysing and annotating WSIs, our findings suggest that translating this domain knowledge into algorithm design is no trivial task; to fully exploit these mechanisms in ANNs, the scientific understanding of these methods should first be deepened.
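Editor's note: as one concrete instance of the channel-attention mechanisms this survey covers, here is a standard squeeze-and-excitation block in PyTorch. This specific block is an example drawn from the surveyed literature, not a method the survey itself proposes.

```python
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel attention: global pooling injects image-level context into
    local CNN features, loosely mimicking how pathologists use surrounding
    tissue when reading a RoI."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):  # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))  # squeeze: one context vector per channel
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w  # excite: reweight local features by global context
```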
Affiliation(s)
- João D Nunes, Jaime S Cardoso: INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, R. Dr. Roberto Frias, Porto, 4200-465, Portugal; University of Porto - Faculty of Engineering, R. Dr. Roberto Frias, Porto, 4200-465, Portugal
- Diana Montezuma: IMP Diagnostics, Praça do Bom Sucesso, 4150-146 Porto, Portugal; Cancer Biology and Epigenetics Group, Research Center of IPO Porto (CI-IPOP)/[RISE@CI-IPOP], Portuguese Oncology Institute of Porto (IPO Porto)/Porto Comprehensive Cancer Center (Porto.CCC), R. Dr. António Bernardino de Almeida, 4200-072, Porto, Portugal; Doctoral Programme in Medical Sciences, School of Medicine and Biomedical Sciences - University of Porto (ICBAS-UP), Porto, Portugal
- Tania Pereira: INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, R. Dr. Roberto Frias, Porto, 4200-465, Portugal; FCTUC - Faculty of Science and Technology, University of Coimbra, Coimbra, 3004-516, Portugal
6. Illarionova S, Hamoudi R, Zapevalina M, Fedin I, Alsahanova N, Bernstein A, Burnaev E, Alferova V, Khrameeva E, Shadrin D, Talaat I, Bouridane A, Sharaev M. A hierarchical algorithm with randomized learning for robust tissue segmentation and classification in digital pathology. Information Sciences 2025;686:121358. [DOI: 10.1016/j.ins.2024.121358]
7. Barua B, Chyrmang G, Bora K, Saikia MJ. Optimizing colorectal cancer segmentation with MobileViT-UNet and multi-criteria decision analysis. PeerJ Computer Science 2024;10:e2633. [PMID: 39896394] [PMCID: PMC11784762] [DOI: 10.7717/peerj-cs.2633]
Abstract
Colorectal cancer represents a significant health challenge as one of the deadliest forms of malignancy. Manual examination methods are subjective, leading to inconsistent interpretations among examiners and compromising reliability. The process is also time-consuming and labor-intensive, necessitating the development of computer-aided diagnostic systems. This study investigates the segmentation of colorectal cancer regions covering normal tissue, polyps, high-grade intraepithelial neoplasia, low-grade intraepithelial neoplasia, adenocarcinoma, and serrated adenoma, using four proposed segmentation models: VGG16-UNet, ResNet50-UNet, MobileNet-UNet, and MobileViT-UNet. This is the first study to integrate MobileViT as a UNet encoder. Each model was trained with two distinct loss functions, binary cross-entropy and Dice loss, and evaluated using metrics including Dice ratio, Jaccard index, precision, and recall. MobileViT-UNet with Dice loss emerged as the leading model in colorectal histopathology segmentation, consistently achieving high scores across all evaluation metrics: a Dice ratio of 0.944 ± 0.030, a Jaccard index of 0.897 ± 0.049, precision of 0.955 ± 0.046, and recall of 0.939 ± 0.038 across all classes. To identify the best-performing model overall, we employed multi-criteria decision analysis (MCDA) using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). This analysis revealed that the MobileViT-UNet+Dice model achieved the highest TOPSIS score of 1, attaining the top rank among all models. Benchmarking against existing work highlights that our best-performing model (MobileViT-UNet+Dice) significantly outperforms existing models, showcasing its potential to enhance the accuracy and efficiency of colorectal cancer segmentation.
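Editor's note: TOPSIS, used here to rank the models, is a small, well-defined algorithm. A minimal NumPy sketch follows, assuming all four metrics are benefit criteria with equal weights; the paper's exact weighting may differ.

```python
import numpy as np

def topsis(scores, weights=None):
    """Rank alternatives (rows) on benefit criteria (columns) with TOPSIS."""
    X = np.asarray(scores, dtype=float)
    w = np.ones(X.shape[1]) / X.shape[1] if weights is None else np.asarray(weights)
    V = X / np.linalg.norm(X, axis=0) * w       # vector-normalize, then weight
    ideal, anti = V.max(axis=0), V.min(axis=0)  # all criteria treated as benefits
    d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)    # distance to the anti-ideal
    return d_neg / (d_pos + d_neg)              # closeness in [0, 1]; 1 is best

# e.g. rows = models, columns = (Dice, Jaccard, precision, recall):
# closeness = topsis([[0.944, 0.897, 0.955, 0.939], ...])
```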
Affiliation(s)
- Barun Barua, Genevieve Chyrmang, Kangkana Bora: Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, India
- Manob Jyoti Saikia: Electrical and Computer Engineering Department, University of Memphis, Memphis, TN, United States of America; Biomedical Sensors & Systems Lab, University of Memphis, Memphis, TN, United States of America
8. Wang Q, Deng X, Huang P, Ma Q, Zhao L, Feng Y, Wang Y, Zhao Y, Chen Y, Zhong P, He P, Ma M, Feng P, Xiao H. Prediction of PD-L1 tumor positive score in lung squamous cell carcinoma with H&E staining images and deep learning. Frontiers in Artificial Intelligence 2024;7:1452563. [PMID: 39759385] [PMCID: PMC11695341] [DOI: 10.3389/frai.2024.1452563]
Abstract
Background Detecting programmed death ligand 1 (PD-L1) expression based on immunohistochemical (IHC) staining is an important guide for treating lung cancer with immune checkpoint inhibitors. However, this method suffers from high staining costs, tumor heterogeneity, and subjective differences among pathologists. Applying deep learning models to segment and quantitatively predict PD-L1 expression in digital sections of hematoxylin and eosin (H&E)-stained lung squamous cell carcinoma is therefore of great significance. Methods We constructed a dataset comprising H&E-stained digital sections of lung squamous cell carcinoma and used a Transformer Unet (TransUnet) deep learning network with an encoder-decoder design to segment PD-L1 negative and positive regions and quantitatively predict the tumor cell positive score (TPS). Results The Dice similarity coefficient (DSC) and intersection over union (IoU) for PD-L1 expression segmentation were 80% and 72%, respectively, better than those of seven other cutting-edge segmentation models. The root mean square error (RMSE) of the quantitative TPS prediction was 26.8, and the intra-group correlation coefficient with the gold standard was 0.92 (95% CI: 0.90-0.93), better than the consistency between the results of five pathologists and the gold standard. Conclusion The deep learning model can segment and quantitatively predict PD-L1 expression in H&E-stained digital sections of lung squamous cell carcinoma, with significant implications for the application and guidance of immune checkpoint inhibitor treatments. The code is available at https://github.com/Baron-Huang/PD-L1-prediction-via-HE-image.
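Editor's note: the reported DSC and IoU are standard mask-overlap metrics, and a TPS can be approximated from segmented regions. A NumPy sketch follows; the pixel-area TPS proxy is an assumption, since TPS proper is defined over cell counts rather than areas.

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice and IoU for binary masks, the two segmentation metrics reported."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = (pred & gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum() + 1e-8)
    iou = inter / ((pred | gt).sum() + 1e-8)
    return dice, iou

def tps_from_masks(pos_mask, neg_mask):
    """Pixel-area proxy for the tumor positive score (TPS), in percent.

    Assumes segmented PD-L1 positive / negative tumor regions; real TPS is
    a cell-count ratio, so area is only an approximation.
    """
    pos, neg = pos_mask.sum(), neg_mask.sum()
    return 100.0 * pos / (pos + neg + 1e-8)
```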
Affiliation(s)
- Qiushi Wang, Qiang Ma, Lianhua Zhao, Yangyang Feng, Yiying Wang, Yuan Zhao, Yan Chen, Peng Zhong, Hualiang Xiao: Department of Pathology, Daping Hospital, Army Medical University, Chongqing, China
- Xixiang Deng, Pan Huang, Peng He, Peng Feng: The Key Lab of Optoelectronic Technology and Systems, Ministry of Education, Chongqing University, Chongqing, China
- Mingrui Ma: Department of Information, Affiliated Tumor Hospital of Xinjiang Medical University, Urumchi, China
9. Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. Journal of Pathology Informatics 2024;15:100357. [PMID: 38420608] [PMCID: PMC10900832] [DOI: 10.1016/j.jpi.2023.100357]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that develops computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop infrastructure and workflows for digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question about the direction and trends of CPath. In this article we provide a comprehensive review of more than 800 papers, addressing the challenges faced from problem design all the way to application and implementation. We catalogue each paper into a model card by examining the key works and challenges, to lay out the current landscape of CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked to address the challenges of such a multidisciplinary science, and we overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub; an updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini, Sina Maghsoudlou, Fatemeh Chaji: Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh: Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan, Danial Hasan, Xingwen Li, Stephen Yang, Taehyo Kim, Haochen Zhang, Theodore Wu, Kajanan Chinniah, Ryan Zhang, Jiadai Zhu, Samir Khaki, Konstantinos N. Plataniotis: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin: Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Ala Salehi: Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen: University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras: Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
10. Zhang W, Yang S, Luo M, He C, Li Y, Zhang J, Wang X, Wang F. Keep it accurate and robust: An enhanced nuclei analysis framework. Computational and Structural Biotechnology Journal 2024;24:699-710. [PMID: 39650700] [PMCID: PMC11621583] [DOI: 10.1016/j.csbj.2024.10.046]
Abstract
Accurate segmentation and classification of nuclei in histology images is critical but challenging due to nuclei heterogeneity, staining variations, and tissue complexity. Existing methods often struggle with limited dataset variability, with patches extracted from similar whole slide images (WSIs), making models prone to falling into local optima. Here we propose a new framework to address this limitation and enable robust nuclear analysis. Our method leverages dual-level ensemble modeling to overcome issues stemming from limited dataset variation: intra-ensembling applies diverse transformations to individual samples, while inter-ensembling combines networks of different scales. We also introduce enhancements to the HoVer-Net architecture, including updated encoders, nested dense decoding, and a model regularization strategy. We achieve state-of-the-art results on public benchmarks, including 1st place for nuclear composition prediction and 3rd place for segmentation/classification in the 2022 Colon Nuclei Identification and Counting (CoNIC) Challenge, validating our approach for accurate histological nuclei analysis. Extensive experiments and ablation studies provide insights into optimal network design choices and training techniques. In conclusion, this work proposes an improved framework advancing the state of the art in nuclei analysis. We will release our code and models as a toolkit for the community.
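Editor's note: intra-ensembling over transformed views of a sample can be read as test-time augmentation. A PyTorch sketch of that reading follows, using flips only; the paper's transformation set is richer than this.

```python
import torch

def intra_ensemble_predict(model, image):
    """Average softmax predictions over flipped views of one sample.

    A test-time-augmentation reading of 'diverse transformations on
    individual samples'; illustrative, not the authors' exact scheme.
    """
    views = [image, image.flip(-1), image.flip(-2), image.flip(-1).flip(-2)]
    undo = [lambda p: p,
            lambda p: p.flip(-1),
            lambda p: p.flip(-2),
            lambda p: p.flip(-1).flip(-2)]  # map each prediction back
    preds = [u(torch.softmax(model(v), dim=1)) for v, u in zip(views, undo)]
    return torch.stack(preds).mean(dim=0)
```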
Affiliation(s)
- Wenhua Zhang: Institute of Artificial Intelligence, Shanghai University, Shanghai 200444, China
- Sen Yang, Yuchen Li, Xiyue Wang: Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Chuan He: Shanghai Aitrox Technology Corporation Limited, Shanghai 200444, China
- Jun Zhang: Tencent AI Lab, Shenzhen 518057, China
- Fang Wang: Department of Pathology, The Affiliated Yantai Yuhuangding Hospital of Qingdao University, Yantai 264000, China
11. Fiorin A, López Pablo C, Lejeune M, Hamza Siraj A, Della Mea V. Enhancing AI Research for Breast Cancer: A Comprehensive Review of Tumor-Infiltrating Lymphocyte Datasets. Journal of Imaging Informatics in Medicine 2024;37:2996-3008. [PMID: 38806950] [PMCID: PMC11612116] [DOI: 10.1007/s10278-024-01043-8]
Abstract
The field of immunology is fundamental to our understanding of the intricate dynamics of the tumor microenvironment. In particular, tumor-infiltrating lymphocyte (TIL) assessment emerges as an essential aspect of breast cancer cases. To gain comprehensive insights, quantification of TILs through computer-assisted pathology (CAP) tools has become a prominent approach, employing advanced artificial intelligence models based on deep learning techniques. Successful recognition of TILs requires the models to be trained, a process that demands access to annotated datasets. Unfortunately, this task is hampered not only by the scarcity of such datasets but also by the time-consuming annotation phase required to create them. Our review examines publicly accessible datasets pertaining to the TIL domain and thereby aims to become a valuable resource for the TIL community. The overall aim of the present review is thus to make it easier to train and validate current and upcoming CAP tools for TIL assessment by inspecting and evaluating existing publicly available online datasets.
Affiliation(s)
- Alessio Fiorin, Carlos López Pablo, Marylène Lejeune: Oncological Pathology and Bioinformatics Research Group, Institut d'Investigació Sanitària Pere Virgili (IISPV), C/Esplanetes no 14, 43500, Tortosa, Spain; Department of Pathology, Hospital de Tortosa Verge de la Cinta (HTVC), Institut Català de la Salut (ICS), Tortosa, Spain; Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili (URV), Tarragona, Spain
- Ameer Hamza Siraj, Vincenzo Della Mea: Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
12. You C, Dai W, Liu F, Min Y, Dvornek NC, Li X, Clifton DA, Staib L, Duncan JS. Mine Your Own Anatomy: Revisiting Medical Image Segmentation With Extremely Limited Labels. IEEE Transactions on Pattern Analysis and Machine Intelligence 2024;PP:11136-11151. [PMID: 39269798] [PMCID: PMC11903367] [DOI: 10.1109/tpami.2024.3461321]
Abstract
Recent studies on contrastive learning have achieved remarkable performance solely by leveraging few labels in medical image segmentation. Existing methods mainly focus on instance discrimination and invariant mapping (i.e., pulling positive samples closer and pushing negative samples apart in the feature space). However, they face three common pitfalls: (1) tailness: medical image data usually follows an implicit long-tail class distribution, so blindly leveraging all pixels in training can lead to data imbalance and deteriorated performance; (2) consistency: it remains unclear whether a segmentation model has learned meaningful yet consistent anatomical features, given the intra-class variation between different anatomical features; and (3) diversity: the intra-slice correlations within the entire dataset have received significantly less attention. This motivates us to seek a principled approach for strategically using the dataset itself to discover similar yet distinct samples from different anatomical views. In this paper, we introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA) and make three contributions. First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features, mainly due to the lack of supervision signal. We show two simple solutions for learning invariances: stronger data augmentations and nearest neighbors. Second, we construct a set of objectives that encourage the model to decompose medical images into a collection of anatomical features in an unsupervised manner. Lastly, we demonstrate, both empirically and theoretically, the efficacy of MONA on three benchmark datasets, achieving a new state of the art under different labeled semi-supervised settings. MONA makes minimal assumptions about domain expertise and hence constitutes a practical and versatile solution for medical image analysis. We provide PyTorch-like pseudo-code in the supplementary material.
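Editor's note: the "mine similar anatomy from the dataset itself" idea can be sketched as InfoNCE with nearest-neighbor positives drawn from a feature bank. The PyTorch sketch below is a simplification of MONA's full objective; in practice anchors and the bank would come from different augmented views to avoid trivial self-matches.

```python
import torch.nn.functional as F

def nn_contrastive_loss(anchors, bank, temperature=0.07):
    """InfoNCE where each anchor's positive is its nearest neighbor in a bank.

    Simplified sketch of nearest-neighbor contrastive learning; MONA adds
    augmentation-invariance and decomposition objectives on top of this.
    """
    a = F.normalize(anchors, dim=1)   # (N, D) anchor embeddings
    b = F.normalize(bank, dim=1)      # (M, D) memory-bank embeddings
    sim = a @ b.t() / temperature     # (N, M) scaled cosine similarities
    targets = sim.argmax(dim=1)       # nearest neighbor acts as the positive
    return F.cross_entropy(sim, targets)
```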
13. Wang R, Yang S, Li Q, Zhong D. CytoGAN: Unpaired staining transfer by structure preservation for cytopathology image analysis. Computers in Biology and Medicine 2024;180:108942. [PMID: 39096614] [DOI: 10.1016/j.compbiomed.2024.108942]
Abstract
With the development of digital pathology, deep learning is increasingly applied to endometrial cell morphology analysis for cancer screening. Cytology images with different staining, however, may degrade the performance of these analysis algorithms. To address the impact of staining patterns, many strategies have been proposed for transferring hematoxylin and eosin (H&E) images to other staining styles. However, none of the existing methods generate realistic cytological images with preserved cellular layout, and much clinically important structural information is lost. To address these issues, we propose a staining transformation model, CytoGAN, which can quickly and realistically generate images in different staining styles. It includes a novel structure preservation module that preserves cell structure well, even when resolution or cell size does not match between the source and target domains. Meanwhile, a stain adaptive module helps the model generate realistic, high-quality endometrial cytology images. We compared our model with ten state-of-the-art stain transformation models, with evaluation by two pathologists. Furthermore, in a downstream endometrial cancer classification task, our algorithm improves the robustness of the classification model on multimodal datasets, with more than 20% improvement in accuracy. We found that generating specific stains from existing H&E images improves the diagnosis of endometrial cancer. Our code will be available on GitHub.
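Editor's note: structure preservation can be approximated by penalizing changes in edge maps between the source image and its stain-transferred output. The PyTorch sketch below is a finite-difference stand-in; CytoGAN's actual structure preservation module is learned, not hand-crafted like this.

```python
import torch.nn.functional as F

def structure_preservation_loss(src, gen):
    """Penalize differences in image structure between source and generated image.

    Gradient-map L1 proxy for structure preservation; an illustrative
    stand-in, not CytoGAN's learned module.
    """
    def gray(x):  # (B, 3, H, W) -> (B, 1, H, W)
        return x.mean(dim=1, keepdim=True)

    def grad(x):  # finite-difference edge maps
        gx = x[..., :, 1:] - x[..., :, :-1]
        gy = x[..., 1:, :] - x[..., :-1, :]
        return gx, gy

    sx, sy = grad(gray(src))
    tx, ty = grad(gray(gen))
    return F.l1_loss(tx, sx) + F.l1_loss(ty, sy)  # layout kept, stain free to change
```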
Affiliation(s)
- Ruijie Wang: School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, PR China
- Sicheng Yang: School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, PR China
- Qiling Li: Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi 710049, PR China
- Dexing Zhong: School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, PR China; Pazhou Laboratory, Guangzhou 510335, PR China; Research Institute of Xi'an Jiaotong University, Zhejiang 311215, PR China
14. Schäfer R, Nicke T, Höfener H, Lange A, Merhof D, Feuerhake F, Schulz V, Lotz J, Kiessling F. Overcoming data scarcity in biomedical imaging with a foundational multi-task model. Nature Computational Science 2024;4:495-509. [PMID: 39030386] [PMCID: PMC11288886] [DOI: 10.1038/s43588-024-00662-z]
Abstract
Foundational models, pretrained on a large scale, have demonstrated substantial success across non-medical domains. However, training these models typically requires large, comprehensive datasets, which contrasts with the smaller and more specialized datasets common in biomedical imaging. Here we propose a multi-task learning strategy that decouples the number of training tasks from memory requirements. We trained a universal biomedical pretrained model (UMedPT) on a multi-task database including tomographic, microscopic and X-ray images, with various labeling strategies such as classification, segmentation and object detection. The UMedPT foundational model outperformed ImageNet pretraining and previous state-of-the-art models. For classification tasks related to the pretraining database, it maintained its performance with only 1% of the original training data and without fine-tuning. For out-of-domain tasks it required only 50% of the original training data. In an external independent validation, imaging features extracted using UMedPT proved to set a new standard for cross-center transferability.
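Editor's note: one plausible reading of decoupling the number of training tasks from memory requirements is to backpropagate tasks one at a time, accumulating gradients on the shared encoder. The PyTorch sketch below illustrates that reading under stated assumptions; it is not necessarily the authors' exact mechanism.

```python
def multitask_step(encoder, heads, batches, optimizer):
    """One optimizer step over many tasks, with memory roughly independent
    of task count.

    Illustrative: each task's loss is backpropagated immediately, so only one
    task's activation graph is alive at a time while gradients accumulate on
    the shared encoder. `heads` maps task name -> output module; `batches`
    maps task name -> (inputs, targets, criterion).
    """
    optimizer.zero_grad()
    for task, (x, y, criterion) in batches.items():
        loss = criterion(heads[task](encoder(x)), y)
        loss.backward()      # frees this task's graph before the next task
    optimizer.step()         # single update on the shared weights
```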
Affiliation(s)
- Raphael Schäfer, Till Nicke, Henning Höfener, Annkristin Lange, Johannes Lotz: Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Dorit Merhof: Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany; Institute of Image Analysis and Computer Vision, Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany
- Friedrich Feuerhake: Institute for Pathology, Hannover Medical School, Hanover, Germany; Institute for Neuropathology, Medical Center, University of Freiburg, Freiburg, Germany
- Volkmar Schulz, Fabian Kiessling: Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany; Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany
15. Villanueva-Miranda I, Rong R, Quan P, Wen Z, Zhan X, Yang DM, Chi Z, Xie Y, Xiao G. Enhancing Medical Imaging Segmentation with GB-SAM: A Novel Approach to Tissue Segmentation Using Granular Box Prompts. Cancers 2024;16:2391. [PMID: 39001452] [PMCID: PMC11240495] [DOI: 10.3390/cancers16132391]
Abstract
Recent advances in foundation models have revolutionized model development in digital pathology, reducing dependence on the extensive manual annotations required by traditional methods. The ability of foundation models to generalize well with few-shot learning addresses critical barriers to adapting models to diverse medical imaging tasks. This work presents the Granular Box Prompt Segment Anything Model (GB-SAM), an improved version of the Segment Anything Model (SAM) fine-tuned using granular box prompts with limited training data. GB-SAM aims to reduce dependency on expert pathologist annotators by enhancing the efficiency of the automated annotation process. Granular box prompts are small box regions derived from ground-truth masks, conceived to replace the conventional approach of using a single large box covering the entire H&E-stained image patch. This method allows localized, detailed analysis of gland morphology, enhancing the segmentation accuracy of individual glands and reducing the ambiguity that larger boxes can introduce in morphologically complex regions. We compared GB-SAM against U-Net trained on different fractions of the CRAG dataset and evaluated the models across histopathological datasets, including CRAG, GlaS, and Camelyon16. GB-SAM consistently outperformed U-Net as training data was reduced, showing less degradation in segmentation performance. Specifically, on the CRAG dataset, GB-SAM achieved a Dice coefficient of 0.885 versus U-Net's 0.857 when trained on 25% of the data. GB-SAM also demonstrated segmentation stability on the CRAG test set and superior generalization across unseen datasets, including challenging lymph node segmentation in Camelyon16, where it achieved a Dice coefficient of 0.740 versus U-Net's 0.491. Compared with SAM-Path and Med-SAM, GB-SAM showed competitive performance: GB-SAM achieved a Dice score of 0.900 on CRAG, while SAM-Path achieved 0.884; on GlaS, Med-SAM reported a Dice score of 0.956, whereas GB-SAM achieved 0.885 with significantly less training data. These results highlight GB-SAM's advanced segmentation capabilities and reduced dependency on large datasets, indicating its potential for practical deployment in digital pathology, particularly in settings with limited annotated data.
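Editor's note: granular box prompts can be derived from a ground-truth mask by taking one tight box per connected component. A SciPy sketch follows; the one-component-per-gland assumption and the `max_boxes` cap are simplifications of the paper's procedure.

```python
import numpy as np
from scipy import ndimage

def granular_box_prompts(gt_mask, max_boxes=32):
    """Derive small per-gland box prompts from a ground-truth mask.

    One tight (x0, y0, x1, y1) box per connected component, instead of a
    single large box over the whole patch; illustrative simplification.
    """
    labeled, n = ndimage.label(gt_mask.astype(bool))
    boxes = []
    for i in range(1, min(n, max_boxes) + 1):
        ys, xs = np.where(labeled == i)           # pixels of one gland
        boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return boxes
```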
Affiliation(s)
- Ismael Villanueva-Miranda, Ruichen Rong, Peiran Quan, Zhuoyu Wen, Xiaowei Zhan, Donghan M. Yang: Quantitative Biomedical Research Center, Peter O’Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Zhikai Chi: Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Yang Xie, Guanghua Xiao: Quantitative Biomedical Research Center, Peter O’Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
16. Zhang S, Yuan Z, Zhou X, Wang H, Chen B, Wang Y. VENet: Variational energy network for gland segmentation of pathological images and early gastric cancer diagnosis of whole slide images. Computer Methods and Programs in Biomedicine 2024;250:108178. [PMID: 38652995] [DOI: 10.1016/j.cmpb.2024.108178]
Abstract
BACKGROUND AND OBJECTIVE Gland segmentation of pathological images is an essential but challenging step for adenocarcinoma diagnosis. Although deep learning methods have recently made tremendous progress in gland segmentation, they have not given satisfactory boundary and region segmentation results for adjacent glands. These glands usually differ greatly in glandular appearance, and the statistical distributions of the training and test sets are inconsistent. These problems prevent networks from generalizing well to the test dataset, complicating gland segmentation and early cancer diagnosis. METHODS To address these problems, we propose a Variational Energy Network, VENet, with a traditional variational energy loss (Lv) for gland segmentation of pathological images and early gastric cancer detection in whole slide images (WSIs). It effectively integrates a variational mathematical model with the data adaptability of deep learning methods to balance boundary and region segmentation. Furthermore, it can effectively segment and classify glands in large WSIs using reliable nucleus-width and nucleus-to-cytoplasm-ratio features. RESULTS VENet was evaluated on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset, the Colorectal Adenocarcinoma Glands (CRAG) dataset, and a self-collected Nanfang Hospital dataset. Compared with state-of-the-art methods, our method achieved excellent performance on GlaS Test A (object Dice 0.9562, object F1 0.9271, object Hausdorff distance 73.13), GlaS Test B (object Dice 0.9495, object F1 0.9560, object Hausdorff distance 59.63), and CRAG (object Dice 0.9508, object F1 0.9294, object Hausdorff distance 28.01). On the Nanfang Hospital dataset, our method achieved a kappa of 0.78, an accuracy of 0.90, a sensitivity of 0.98, and a specificity of 0.80 on the classification of 69 test WSIs. CONCLUSIONS The experimental results show that the proposed model accurately predicts boundaries and outperforms state-of-the-art methods. It can be applied to the early diagnosis of gastric cancer by detecting regions of high-grade gastric intraepithelial neoplasia in WSIs, assisting pathologists in analyzing large WSIs and making accurate diagnostic decisions.
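Editor's note: a variational energy that balances a region term against a boundary-length term can be written as an active-contour-style loss. The PyTorch sketch below is an active-contour-inspired stand-in, not the authors' exact Lv formulation.

```python
def region_boundary_loss(probs, gt, mu=1.0):
    """Variational-style loss: region fidelity plus a contour-length penalty.

    `probs` are per-pixel foreground probabilities, `gt` the binary target;
    `mu` trades boundary smoothness against region fit (assumed weighting).
    """
    region = ((probs - gt) ** 2).mean()            # region fidelity term
    dx = probs[..., :, 1:] - probs[..., :, :-1]    # horizontal gradient
    dy = probs[..., 1:, :] - probs[..., :-1, :]    # vertical gradient
    length = dx.abs().mean() + dy.abs().mean()     # total-variation contour length
    return region + mu * length
```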
Affiliation(s)
- Shuchang Zhang, Xianchen Zhou, Hongxia Wang: Department of Mathematics, National University of Defense Technology, Changsha, China
- Ziyang Yuan: Academy of Military Sciences of the People's Liberation Army, Beijing, China
- Bo Chen: Suzhou Research Center, Institute of Automation, Chinese Academy of Sciences, Suzhou, China
- Yadong Wang: Department of Laboratory Pathology, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou, China
17. Lin H, Falahkheirkhah K, Kindratenko V, Bhargava R. INSTRAS: INfrared Spectroscopic imaging-based TRAnsformers for medical image Segmentation. Machine Learning with Applications 2024;16:100549. [PMID: 39036499] [PMCID: PMC11258863] [DOI: 10.1016/j.mlwa.2024.100549]
Abstract
Infrared (IR) spectroscopic imaging is of potentially wide use in medical imaging applications due to its ability to capture both chemical and spatial information. This complexity of the data both necessitates machine intelligence and presents an opportunity to harness a high-dimensionality dataset that offers far more information than today's manually interpreted images. While convolutional neural networks (CNNs), including the well-known U-Net model, have demonstrated impressive performance in image segmentation, the inherent locality of convolution limits the effectiveness of these models for encoding IR data, resulting in suboptimal performance. In this work, we propose INSTRAS, an INfrared Spectroscopic imaging-based TRAnsformer for medical image Segmentation. This novel model leverages the strength of transformer encoders to segment IR breast images effectively. Incorporating skip connections and transformer encoders, INSTRAS overcomes the limitations of purely convolutional models, such as the difficulty of capturing long-range dependencies. To evaluate the performance of our model and existing convolutional models, we trained various encoder-decoder models on a breast dataset of IR images. INSTRAS, using 9 spectral bands for segmentation, achieved a remarkable AUC score of 0.9788, underscoring its superior capabilities compared to purely convolutional models. These experimental results attest to INSTRAS's advanced segmentation abilities for IR imaging.
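Editor's note: feeding IR spectral bands to a transformer is naturally done by treating the bands as input channels in the patch embedding, so each token mixes chemical (band) and spatial (patch) information. The PyTorch sketch below assumes the 9-band input mentioned above; layer sizes are illustrative, not the paper's configuration.

```python
import torch.nn as nn

class SpectralPatchEmbed(nn.Module):
    """Embed multi-band IR image patches as transformer tokens."""

    def __init__(self, bands=9, patch=16, dim=256):
        super().__init__()
        # Strided conv = non-overlapping patch projection over all bands.
        self.proj = nn.Conv2d(bands, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                          # x: (B, 9, H, W)
        tokens = self.proj(x)                      # (B, dim, H/16, W/16)
        return tokens.flatten(2).transpose(1, 2)   # (B, N, dim) token sequence

# encoder = nn.TransformerEncoder(
#     nn.TransformerEncoderLayer(256, 8, batch_first=True), num_layers=6)
# feats = encoder(SpectralPatchEmbed()(ir_batch))
```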
Affiliation(s)
- Hangzheng Lin: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, IL, United States
- Volodymyr Kindratenko: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, IL, United States; Center for Artificial Intelligence Innovation, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, IL, United States
- Rohit Bhargava: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, IL, United States; Beckman Institute, University of Illinois at Urbana-Champaign, IL, United States; Departments of Bioengineering, Mechanical Science and Engineering and Cancer Center at Illinois, University of Illinois at Urbana-Champaign, IL, United States
18. Zhou C, Ye L, Peng H, Liu Z, Wang J, Ramírez-De-Arellano A. A Parallel Convolutional Network Based on Spiking Neural Systems. International Journal of Neural Systems 2024;34:2450022. [PMID: 38487872] [DOI: 10.1142/s0129065724500229]
Abstract
Deep convolutional neural networks have shown advanced performance in accurately segmenting images. In this paper, an SNP-like convolutional neuron structure is introduced, abstracted from the nonlinear mechanism in nonlinear spiking neural P (NSNP) systems. A U-shaped convolutional neural network named SNP-like parallel-convolutional network, or SPC-Net, is then constructed for segmentation tasks. Dual-convolution concatenate (DCC) and dual-convolution addition (DCA) network blocks are designed for the encoder and decoder stages, respectively. The two blocks employ parallel convolutions with different kernel sizes to improve feature representation and make full use of spatial detail information, and their features are fused with different strategies to achieve feature complementarity and augmentation. Furthermore, a dual-scale pooling (DSP) module in the bottleneck improves feature extraction: it captures multi-scale contextual information and reduces information loss while extracting salient features. SPC-Net is applied to medical image segmentation tasks and compared with several recent segmentation methods on the GlaS and CRAG datasets. The proposed SPC-Net achieves a Dice coefficient of 90.77%, an IoU score of 83.76%, an F1 score of 83.93%, an ObjDice coefficient of 86.33%, and an Obj-Hausdorff distance of 135.60. The experimental results show that the proposed model achieves good segmentation performance.
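Editor's note: the DCC block's parallel different-kernel convolutions are easy to picture in PyTorch. A minimal sketch follows; the channel split and activation choice are assumptions, and the DCA variant would add the two branch outputs instead of concatenating them.

```python
import torch
import torch.nn as nn

class DualConvConcat(nn.Module):
    """Parallel convolutions with different kernel sizes, fused by concatenation.

    Minimal take on the DCC idea: a 3x3 branch for fine detail and a 5x5
    branch for wider context, concatenated channel-wise.
    """

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch3 = nn.Conv2d(in_ch, out_ch // 2, 3, padding=1)  # fine detail
        self.branch5 = nn.Conv2d(in_ch, out_ch // 2, 5, padding=2)  # wider context
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        return self.act(y)
```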
Collapse
Affiliation(s)
- Chi Zhou
- School of Computer and Software Engineering, Xihua University, Chengdu 610039, P. R. China
- Lulin Ye
- School of Computer and Software Engineering, Xihua University, Chengdu 610039, P. R. China
- Hong Peng
- School of Computer and Software Engineering, Xihua University, Chengdu 610039, P. R. China
- Zhicai Liu
- School of Computer and Software Engineering, Xihua University, Chengdu 610039, P. R. China
- Jun Wang
- School of Electrical Engineering and Electronic Information, Xihua University, Chengdu 610039, P. R. China
- Antonio Ramírez-De-Arellano
- Research Group of Natural Computing, Department of Computer Science and Artificial Intelligence, University of Seville, Sevilla 41012, Spain
19
Zhou J, Xiong H, Liu Q. A novel Dual-Branch Asymmetric Encoder-Decoder Segmentation Network for accurate colonic crypt segmentation. Comput Biol Med 2024; 173:108354. [PMID: 38522251 DOI: 10.1016/j.compbiomed.2024.108354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2023] [Revised: 03/04/2024] [Accepted: 03/19/2024] [Indexed: 03/26/2024]
Abstract
Colorectal cancer (CRC) is a leading cause of cancer-related deaths, with colonic crypts (CC) being crucial in its development. Accurate segmentation of CC is essential for CRC-related clinical decisions and for developing diagnostic strategies. However, the blurred boundaries and morphological diversity of colonic crypts pose substantial challenges for automatic segmentation. To mitigate this problem, we proposed the Dual-Branch Asymmetric Encoder-Decoder Segmentation Network (DAUNet), a novel and efficient model tailored for confocal laser endomicroscopy (CLE) CC images. In DAUNet, we crafted a dual-branch feature extraction module (DFEM), employing Focus operations and dense depth-wise separable convolution (DDSC) to extract multiscale features, boosting semantic understanding and coping with the morphological diversity of CC. We also introduced the feature fusion guided module (FFGM) to adaptively combine features from both branches using cross-group spatial and channel attention, improving the model's ability to focus on specific lesion features. These modules are seamlessly integrated into the encoder for effective multiscale information extraction and fusion, and DDSC is further introduced in the decoder to provide rich representations for precise segmentation. Moreover, the local multi-layer perceptron (LMLP) module is designed to decouple and recalibrate features through a local linear transformation that filters out noise and refines features to provide an edge-enriched representation. Experimental evaluations on two datasets demonstrate that the proposed method achieves Intersection over Union (IoU) scores of 81.54% and 84.83%, respectively, on par with state-of-the-art methods, demonstrating its effectiveness for CC segmentation. The proposed method holds great potential in assisting physicians with precise lesion localization and region analysis, thereby improving the diagnostic accuracy of CRC.
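The DDSC building block named above rests on depthwise separable convolution; a generic minimal version (an assumption, not the published implementation) looks like this:

```python
# Depthwise separable convolution: per-channel spatial filtering followed by
# a 1x1 pointwise mix, which is much cheaper than a full convolution.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

y = DepthwiseSeparableConv(8, 16)(torch.randn(1, 8, 32, 32))  # (1, 16, 32, 32)
```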
Affiliation(s)
- Jingjun Zhou
- School of Biomedical Engineering, Hainan University, Haikou, 570228, China.
- Hong Xiong
- School of Biomedical Engineering, Hainan University, Haikou, 570228, China.
- Qian Liu
- School of Biomedical Engineering, Hainan University, Haikou, 570228, China; Key Laboratory of Biomedical Engineering of Hainan Province, Hainan University, Haikou, 570228, China.
20
Lambert B, Forbes F, Doyle S, Dehaene H, Dojat M. Trustworthy clinical AI solutions: A unified review of uncertainty quantification in Deep Learning models for medical image analysis. Artif Intell Med 2024; 150:102830. [PMID: 38553168 DOI: 10.1016/j.artmed.2024.102830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Revised: 02/28/2024] [Accepted: 03/01/2024] [Indexed: 04/02/2024]
Abstract
Clinical acceptance of Deep Learning (DL) models remains rather low relative to the quantity of high-performing solutions reported in the literature. End users are particularly reluctant to rely on the opaque predictions of DL models. Uncertainty quantification methods have been proposed in the literature as a potential solution, to reduce the black-box effect of DL models and increase the interpretability and acceptability of the results for the end user. In this review, we propose an overview of the existing methods to quantify uncertainty associated with DL predictions. We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality of images and their variable quality, as well as constraints associated with real-world clinical routine. Moreover, we discuss the concept of structural uncertainty, a corpus of methods to facilitate the alignment of segmentation uncertainty estimates with clinical attention. We then discuss the evaluation protocols used to validate the relevance of uncertainty estimates. Finally, we highlight the open challenges for uncertainty quantification in the medical field.
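One widely used baseline that reviews of this kind cover is Monte Carlo dropout: keep dropout active at inference time and use the spread of repeated stochastic forward passes as an uncertainty map. A minimal sketch with a toy segmentation network:

```python
# Monte Carlo dropout for per-pixel uncertainty on a toy segmentation net.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.5),                     # stays active during MC sampling
    nn.Conv2d(16, 2, 3, padding=1),
)

def mc_dropout_predict(model, x, samples=20):
    model.train()                            # keeps dropout on at "test" time
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=1) for _ in range(samples)])
    mean = probs.mean(dim=0)                 # averaged prediction
    # Predictive entropy as a per-pixel uncertainty estimate.
    entropy = -(mean * mean.clamp_min(1e-8).log()).sum(dim=1)
    return mean, entropy

mean, unc = mc_dropout_predict(net, torch.randn(1, 1, 64, 64))
```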
Affiliation(s)
- Benjamin Lambert
- Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut des Neurosciences, Grenoble, 38000, France; Pixyl Research and Development Laboratory, Grenoble, 38000, France
- Florence Forbes
- Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, Grenoble, 38000, France
- Senan Doyle
- Pixyl Research and Development Laboratory, Grenoble, 38000, France
- Harmonie Dehaene
- Pixyl Research and Development Laboratory, Grenoble, 38000, France
- Michel Dojat
- Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut des Neurosciences, Grenoble, 38000, France.
21
Li J, Cheng J, Meng L, Yan H, He Y, Shi H, Guan T, Han A. DeepTree: Pathological Image Classification Through Imitating Tree-Like Strategies of Pathologists. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:1501-1512. [PMID: 38090840 DOI: 10.1109/tmi.2023.3341846] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/04/2024]
Abstract
Digitization of pathological slides has promoted research on computer-aided diagnosis, in which artificial intelligence analysis of pathological images deserves attention. Appropriate deep learning techniques from natural images have been extended to computational pathology, yet they seldom take into account prior knowledge in pathology, especially pathologists' process for analyzing lesion morphology. Inspired by the diagnostic decisions of pathologists, we design a novel deep learning architecture based on tree-like strategies called DeepTree. It imitates pathological diagnosis methods, is designed as a binary tree structure to conditionally learn the correlations among tissue morphologies, and optimizes its branches to further fine-tune performance. To validate and benchmark DeepTree, we build a dataset of frozen lung cancer tissues and design experiments on a public dataset of breast tumor subtypes and our dataset. Results show that this tree-like architecture makes pathological image classification more accurate, transparent, and convincing. Simultaneously, prior knowledge based on diagnostic strategies yields superior representation ability compared with alternative methods. Our proposed methodology helps improve pathologists' trust in artificial intelligence analysis and promotes the practical clinical application of pathology-assisted diagnosis.
22
Vanea C, Džigurski J, Rukins V, Dodi O, Siigur S, Salumäe L, Meir K, Parks WT, Hochner-Celnikier D, Fraser A, Hochner H, Laisk T, Ernst LM, Lindgren CM, Nellåker C. Mapping cell-to-tissue graphs across human placenta histology whole slide images using deep learning with HAPPY. Nat Commun 2024; 15:2710. [PMID: 38548713 PMCID: PMC10978962 DOI: 10.1038/s41467-024-46986-2] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 03/15/2024] [Indexed: 04/01/2024] Open
Abstract
Accurate placenta pathology assessment is essential for managing maternal and newborn health, but the placenta's heterogeneity and temporal variability pose challenges for histology analysis. To address this issue, we developed the 'Histology Analysis Pipeline.PY' (HAPPY), a deep learning hierarchical method for quantifying the variability of cells and micro-anatomical tissue structures across placenta histology whole slide images. HAPPY differs from patch-based feature or segmentation approaches by following an interpretable biological hierarchy, representing cells and cellular communities within tissues at single-cell resolution across whole slide images. We present a set of quantitative metrics from healthy term placentas as a baseline for future assessments of placenta health, and we show how these metrics deviate in placentas with clinically significant placental infarction. HAPPY's cell and tissue predictions closely replicate those from independent clinical experts and the placental biology literature.
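The hierarchy described above starts from per-cell predictions and connects nearby cells into a graph before tissue-level reasoning. One common way to build such a graph is k-nearest-neighbour edges over cell centroids; the sketch below is illustrative and not necessarily HAPPY's exact construction.

```python
# Build k-nearest-neighbour edges over detected cell centroids.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 1000, size=(200, 2))   # toy (x, y) cell positions

def knn_edges(points, k=5):
    tree = cKDTree(points)
    # Query k+1 neighbours because each point's nearest neighbour is itself.
    _, idx = tree.query(points, k=k + 1)
    edges = [(i, j) for i, row in enumerate(idx) for j in row[1:]]
    return np.array(edges)                        # (num_edges, 2) index pairs

edges = knn_edges(centroids)
print(edges.shape)                                # (1000, 2) for k=5
```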
Affiliation(s)
- Claudia Vanea
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK.
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK.
- Omri Dodi
- Faculty of Medicine, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Siim Siigur
- Department of Pathology, Tartu University Hospital, Tartu, Estonia
- Liis Salumäe
- Department of Pathology, Tartu University Hospital, Tartu, Estonia
- Karen Meir
- Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- W Tony Parks
- Department of Laboratory Medicine & Pathobiology, University of Toronto, Toronto, Canada
- Abigail Fraser
- Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
- MRC Integrative Epidemiology Unit at the University of Bristol, Bristol, UK
- Hagit Hochner
- Braun School of Public Health, Hebrew University of Jerusalem, Jerusalem, Israel
- Triin Laisk
- Institute of Genomics, University of Tartu, Tartu, Estonia
- Linda M Ernst
- Department of Pathology and Laboratory Medicine, NorthShore University HealthSystem, Chicago, USA
- Department of Pathology, University of Chicago Pritzker School of Medicine, Chicago, USA
- Cecilia M Lindgren
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
- Centre for Human Genetics, Nuffield Department, University of Oxford, Oxford, UK
- Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Christoffer Nellåker
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK.
- Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK.
23
Luna M, Chikontwe P, Park SH. Enhanced Nuclei Segmentation and Classification via Category Descriptors in the SAM Model. Bioengineering (Basel) 2024; 11:294. [PMID: 38534568 DOI: 10.3390/bioengineering11030294] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2024] [Revised: 03/13/2024] [Accepted: 03/19/2024] [Indexed: 03/28/2024] Open
Abstract
Segmenting and classifying nuclei in H&E histopathology images is often limited by the long-tailed distribution of nuclei types. However, the strong generalization ability of image segmentation foundation models like the Segment Anything Model (SAM) can help improve the detection quality of rare types of nuclei. In this work, we introduce category descriptors to perform nuclei segmentation and classification by prompting the SAM model. We close the domain gap between histopathology and natural scene images by aligning features in low-level space while preserving the high-level representations of SAM. We performed extensive experiments on the Lizard dataset, validating the ability of our model to perform automatic nuclei segmentation and classification, especially for rare nuclei types, where it achieved a significant detection improvement of up to 12% in F1 score. Our model also maintains compatibility with manual point prompts for interactive refinement during inference without requiring any additional training.
Affiliation(s)
- Miguel Luna
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- Philip Chikontwe
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- Sang Hyun Park
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- AI Graduate School, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
24
Ren J, Che J, Gong P, Wang X, Li X, Li A, Xiao C. Cross comparison representation learning for semi-supervised segmentation of cellular nuclei in immunofluorescence staining. Comput Biol Med 2024; 171:108102. [PMID: 38350398 DOI: 10.1016/j.compbiomed.2024.108102] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2023] [Revised: 01/29/2024] [Accepted: 02/04/2024] [Indexed: 02/15/2024]
Abstract
The morphological analysis of cells from optical images is vital for interpreting brain function in disease states. Extracting comprehensive cell morphology from intricate backgrounds, common in neural and some medical images, poses a significant challenge. Due to the huge workload of manual recognition, automated neuron cell segmentation using deep learning algorithms with labeled data is integral to neural image analysis tools. To combat the high cost of acquiring labeled data, we propose a novel semi-supervised cell segmentation algorithm for immunofluorescence-stained cell image datasets (ISC), utilizing a mean-teacher semi-supervised learning framework. We include a "cross comparison representation learning block" to enhance the teacher-student model comparison on high-dimensional channels, thereby improving feature compactness and separability, which results in the extraction of higher-dimensional features from unlabeled data. We also suggest a new network, the Multi Pooling Layer Attention Dense Network (MPAD-Net), serving as the backbone of the student model to augment segmentation accuracy. Evaluations on the immunofluorescence staining datasets and the public CRAG dataset show that our method surpasses other top semi-supervised learning methods, achieving average Jaccard, Dice and Normalized Surface Dice (NSD) indicators of 83.22%, 90.95% and 81.90% with only 20% labeled data. The datasets and code are available at https://github.com/Brainsmatics/CCRL.
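The mean-teacher framework this method builds on can be summarized in a few lines: the teacher is an exponential moving average (EMA) of the student, and a consistency loss ties their predictions on perturbed unlabeled inputs. A simplified sketch (not the CCRL code):

```python
# Mean-teacher mechanics: EMA teacher plus a consistency loss on unlabeled data.
import copy
import torch
import torch.nn.functional as F

student = torch.nn.Conv2d(1, 2, 3, padding=1)   # toy segmentation "network"
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)                      # teacher is never backpropped

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)

x_unlabeled = torch.randn(4, 1, 32, 32)
noisy = x_unlabeled + 0.1 * torch.randn_like(x_unlabeled)  # input perturbation
consistency = F.mse_loss(student(noisy).softmax(1),
                         teacher(x_unlabeled).softmax(1))
consistency.backward()        # combined with a supervised loss in practice
ema_update(teacher, student)  # after each optimizer step
```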
Affiliation(s)
- Jianran Ren
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Jingyi Che
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Peicong Gong
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Xiaojun Wang
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Xiangning Li
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Anan Li
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China; Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Chi Xiao
- State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China.
25
Huang Y, Yang X, Liu L, Zhou H, Chang A, Zhou X, Chen R, Yu J, Chen J, Chen C, Liu S, Chi H, Hu X, Yue K, Li L, Grau V, Fan DP, Dong F, Ni D. Segment anything model for medical images? Med Image Anal 2024; 92:103061. [PMID: 38086235 DOI: 10.1016/j.media.2023.103061] [Citation(s) in RCA: 30] [Impact Index Per Article: 30.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2023] [Revised: 09/28/2023] [Accepted: 12/05/2023] [Indexed: 01/12/2024]
Abstract
The Segment Anything Model (SAM) is the first foundation model for general image segmentation. It has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging because of the complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-range object scales. To fully validate SAM's performance on medical data, we collected and sorted 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks. We comprehensively analyzed different models and strategies on the so-called COSMOS 1050K dataset. Our findings mainly include the following: (1) SAM showed remarkable performance on some specific objects but was unstable, imperfect, or even totally failed in other situations. (2) SAM with the large ViT-H showed better overall performance than that with the small ViT-B. (3) SAM performed better with manual hints, especially box prompts, than in Everything mode. (4) SAM could help human annotation with high labeling quality and less time. (5) SAM was sensitive to randomness in the center point and tight box prompts and may suffer from a serious performance drop. (6) SAM performed better than interactive methods with one or a few points but will be outpaced as the number of points increases. (7) SAM's performance correlated with different factors, including boundary complexity, intensity differences, etc. (8) Finetuning SAM on specific medical tasks could improve its average DICE performance by 4.39% and 6.68% for ViT-B and ViT-H, respectively. Codes and models are available at: https://github.com/yuhoo0302/Segment-Anything-Model-for-Medical-Images. We hope that this comprehensive report can help researchers explore the potential of SAM applications in MIS and guide how to appropriately use and develop SAM.
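Finding (3), that box prompts work well, can be tried directly with the open-source segment-anything package; the sketch below assumes the package is installed and a ViT-B checkpoint has been downloaded (the file path is a placeholder).

```python
# Prompt SAM with a tight bounding box on a 2D medical image slice.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
predictor = SamPredictor(sam)

image = np.zeros((256, 256, 3), dtype=np.uint8)   # stand-in for an RGB slice
predictor.set_image(image)                        # expects HxWx3 uint8 RGB

# A tight bounding box around the target structure, in xyxy pixel coords.
box = np.array([60, 60, 180, 200])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print(masks.shape, scores)                        # (1, 256, 256), IoU estimate
```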
Affiliation(s)
- Yuhao Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xin Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Lian Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Han Zhou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Ao Chang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xinrui Zhou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Rusi Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Junxuan Yu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Jiongquan Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Chaoyu Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Sijing Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xindi Hu
- Shenzhen RayShape Medical Technology Co., Ltd, Shenzhen, China
- Kejuan Yue
- Hunan First Normal University, Changsha, China
- Lei Li
- Department of Engineering Science, University of Oxford, Oxford, UK
- Vicente Grau
- Department of Engineering Science, University of Oxford, Oxford, UK
- Deng-Ping Fan
- Computer Vision Lab (CVL), ETH Zurich, Zurich, Switzerland
- Fajin Dong
- Ultrasound Department, the Second Clinical Medical College, Jinan University, China; First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, China.
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China.
26
Roetzer-Pejrimovsky T, Nenning KH, Kiesel B, Klughammer J, Rajchl M, Baumann B, Langs G, Woehrer A. Deep learning links localized digital pathology phenotypes with transcriptional subtype and patient outcome in glioblastoma. Gigascience 2024; 13:giae057. [PMID: 39185700 PMCID: PMC11345537 DOI: 10.1093/gigascience/giae057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Revised: 05/13/2024] [Accepted: 07/20/2024] [Indexed: 08/27/2024] Open
Abstract
BACKGROUND Deep learning has revolutionized medical image analysis in cancer pathology, where it has had a substantial clinical impact by supporting the diagnosis and prognostic rating of cancer. Glioblastoma, the most common and most fatal brain cancer, is among the first brain cancers for which digital resources became available. At the histologic level, glioblastoma is characterized by abundant phenotypic variability that is poorly linked with patient prognosis. At the transcriptional level, 3 molecular subtypes are distinguished, with mesenchymal-subtype tumors being associated with increased immune cell infiltration and worse outcome. RESULTS We address genotype-phenotype correlations by applying an Xception convolutional neural network to a discovery set of 276 digital hematoxylin and eosin (H&E) slides with molecular subtype annotation and an independent The Cancer Genome Atlas-based validation cohort of 178 cases. Using this approach, we achieve high accuracy in H&E-based mapping of molecular subtypes (area under the curve for classical, mesenchymal, and proneural = 0.84, 0.81, and 0.71, respectively; P < 0.001) and regions associated with worse outcome (univariable survival model P < 0.001, multivariable P = 0.01). The latter were characterized by higher tumor cell density (P < 0.001), phenotypic variability of tumor cells (P < 0.001), and decreased T-cell infiltration (P = 0.017). CONCLUSIONS We modify a well-known convolutional neural network architecture for glioblastoma digital slides to accurately map the spatial distribution of transcriptional subtypes and regions predictive of worse outcome, thereby showcasing the relevance of artificial intelligence-enabled image mining in brain cancer.
Affiliation(s)
- Thomas Roetzer-Pejrimovsky
- Division of Neuropathology and Neurochemistry, Department of Neurology, Medical University of Vienna, 1090 Vienna, Austria
- Comprehensive Center for Clinical Neurosciences and Mental Health, Medical University of Vienna, 1090 Vienna, Austria
- Karl-Heinz Nenning
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY 10962, USA
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, 1090 Vienna, Austria
- Barbara Kiesel
- Department of Neurosurgery, Medical University of Vienna, 1090 Vienna, Austria
- Johanna Klughammer
- Gene Center and Department of Biochemistry, Ludwig-Maximilians-Universität München, 80539 Munich, Germany
- Martin Rajchl
- Department of Computing and Medicine, Imperial College London, London SW7 2AZ, UK
- Bernhard Baumann
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, 1090 Vienna, Austria
- Georg Langs
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, 1090 Vienna, Austria
- Adelheid Woehrer
- Division of Neuropathology and Neurochemistry, Department of Neurology, Medical University of Vienna, 1090 Vienna, Austria
- Comprehensive Center for Clinical Neurosciences and Mental Health, Medical University of Vienna, 1090 Vienna, Austria
- Department of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, 6020 Innsbruck, Austria
27
Bashir RMS, Qaiser T, Raza SEA, Rajpoot NM. Consistency regularisation in varying contexts and feature perturbations for semi-supervised semantic segmentation of histology images. Med Image Anal 2024; 91:102997. [PMID: 37866169 DOI: 10.1016/j.media.2023.102997] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Revised: 10/05/2023] [Accepted: 10/06/2023] [Indexed: 10/24/2023]
Abstract
Semantic segmentation of various tissue and nuclei types in histology images is fundamental to many downstream tasks in the area of computational pathology (CPath). In recent years, Deep Learning (DL) methods have been shown to perform well on segmentation tasks, but DL methods generally require a large amount of pixel-wise annotated data. Pixel-wise annotation often requires expert knowledge and time, making it laborious and costly to obtain. In this paper, we present a consistency-based semi-supervised learning (SSL) approach that can help mitigate this challenge by exploiting a large amount of unlabelled data for model training, thus alleviating the need for a large annotated dataset. However, SSL models might also be susceptible to changing contexts and feature perturbations, exhibiting poor generalisation due to the limited training data. We propose an SSL method that learns robust features from both labelled and unlabelled images by enforcing consistency against varying contexts and feature perturbations. The proposed method incorporates context-aware consistency by contrasting pairs of overlapping images in a pixel-wise manner from changing contexts, resulting in robust and context-invariant features. We show that cross-consistency training makes the encoder features invariant to different perturbations and improves the prediction confidence. Finally, entropy minimisation is employed to further boost the confidence of the final prediction maps from unlabelled data. We conduct an extensive set of experiments on two publicly available large datasets (BCSS and MoNuSeg) and show superior performance compared to the state-of-the-art methods.
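Of the ingredients named above, entropy minimisation on unlabelled predictions is the most compact; a generic sketch (not the paper's implementation):

```python
# Entropy minimisation: push softmax predictions on unlabelled pixels toward
# confident (low-entropy) distributions.
import torch

def entropy_loss(logits):
    """Mean per-pixel entropy of softmax predictions, logits: (B, C, H, W)."""
    p = logits.softmax(dim=1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()

logits_unlabelled = torch.randn(2, 4, 64, 64, requires_grad=True)
loss = entropy_loss(logits_unlabelled)   # added to the consistency objective
loss.backward()
```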
Affiliation(s)
- Talha Qaiser
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom.
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom.
- Nasir M Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, United Kingdom; The Alan Turing Institute, London, United Kingdom; Histofy Ltd, United Kingdom; Department of Pathology, University Hospitals Coventry & Warwickshire, United Kingdom.
28
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509 DOI: 10.1016/j.neunet.2023.11.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/21/2023] [Accepted: 11/04/2023] [Indexed: 11/19/2023]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely, the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNN) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The ultimate goal of this paper is to provide comprehensive and insightful information to researchers who have a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
- School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India.
- Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India.
- Bunil Kumar Balabantaray
- Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India.
- M Tanveer
- Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India.
- Rajashree Nayak
- School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India.
29
Pocevičiūtė M, Eilertsen G, Lundström C. Benefits of spatial uncertainty aggregation for segmentation in digital pathology. J Med Imaging (Bellingham) 2024; 11:017501. [PMID: 38234584 PMCID: PMC10790788 DOI: 10.1117/1.jmi.11.1.017501] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Revised: 12/11/2023] [Accepted: 12/14/2023] [Indexed: 01/19/2024] Open
Abstract
Purpose Uncertainty estimation has gained significant attention in recent years for its potential to enhance the performance of deep learning (DL) algorithms in medical applications and even potentially address domain shift challenges. However, it is not straightforward to incorporate uncertainty estimation with a DL system to achieve a tangible positive effect. The objective of our work is to evaluate whether the proposed spatial uncertainty aggregation (SUA) framework may improve the effectiveness of uncertainty estimation in segmentation tasks. We evaluate whether SUA boosts the observed correlation between the uncertainty estimates and false negative (FN) predictions. We also investigate whether the observed benefits can translate to tangible improvements in segmentation performance. Approach Our SUA framework processes negative prediction regions from a segmentation algorithm and detects FNs based on an aggregated uncertainty score. It can be utilized with many existing uncertainty estimation methods to boost their performance. We compare the SUA framework with a baseline that processes each individual pixel's uncertainty independently. Results The results demonstrate that SUA is able to detect FN regions. It achieved an Fβ=0.5 score of 0.92 on the in-domain test data and 0.85 on the domain-shift test data, compared with 0.81 and 0.48, respectively, for the baseline uncertainty. We also demonstrate that SUA yields improved general segmentation performance compared with utilizing the baseline uncertainty. Conclusions We propose the SUA framework for incorporating and utilizing uncertainty estimates for FN detection in DL segmentation algorithms for histopathology. The evaluation confirms the benefits of our approach compared with assessing pixel uncertainty independently.
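The aggregation step is straightforward to sketch: instead of thresholding per-pixel uncertainty, average it over each connected negative-prediction region and flag high-scoring regions as likely FNs. The thresholds below are illustrative assumptions, not the paper's settings.

```python
# Aggregate per-pixel uncertainty over connected negative-prediction regions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
pred_negative = rng.random((128, 128)) > 0.7     # pixels predicted background
uncertainty = rng.random((128, 128))             # per-pixel uncertainty map

labels, n_regions = ndimage.label(pred_negative) # connected negative regions
region_scores = ndimage.mean(uncertainty, labels=labels,
                             index=np.arange(1, n_regions + 1))

fn_candidates = np.flatnonzero(region_scores > 0.6) + 1   # region ids
fn_mask = np.isin(labels, fn_candidates)         # regions flagged as likely FN
print(n_regions, fn_candidates[:5])
```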
Affiliation(s)
- Milda Pocevičiūtė
- Linköping University, Center for Medical Image Science and Visualization, Linköping, Sweden
- Linköping University, Department of Science and Technology, Linköping, Sweden
- Gabriel Eilertsen
- Linköping University, Center for Medical Image Science and Visualization, Linköping, Sweden
- Linköping University, Department of Science and Technology, Linköping, Sweden
- Claes Lundström
- Linköping University, Center for Medical Image Science and Visualization, Linköping, Sweden
- Linköping University, Department of Science and Technology, Linköping, Sweden
- Sectra AB, Linköping, Sweden
30
Deshpande S, Dawood M, Minhas F, Rajpoot N. SynCLay: Interactive synthesis of histology images from bespoke cellular layouts. Med Image Anal 2024; 91:102995. [PMID: 37898050 DOI: 10.1016/j.media.2023.102995] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Revised: 09/27/2023] [Accepted: 10/02/2023] [Indexed: 10/30/2023]
Abstract
Automated synthesis of histology images has several potential applications in computational pathology. However, no existing method can generate realistic tissue images with a bespoke cellular layout or user-defined histology parameters. In this work, we propose a novel framework called SynCLay (Synthesis from Cellular Layouts) that can construct realistic and high-quality histology images from user-defined cellular layouts along with annotated cellular boundaries. Tissue image generation based on bespoke cellular layouts through the proposed framework allows users to generate different histological patterns from arbitrary topological arrangements of different types of cells (e.g., neutrophils, lymphocytes, epithelial cells and others). SynCLay-generated synthetic images can be helpful in studying the role of different types of cells present in the tumor microenvironment. Additionally, they can assist in balancing the distribution of cellular counts in tissue images for designing accurate cellular composition predictors by minimizing the effects of data imbalance. We train SynCLay in an adversarial manner and integrate a nuclear segmentation and classification model in its training to refine nuclear structures and generate nuclear masks in conjunction with synthetic images. During inference, we combine the model with another parametric model for generating colon images and associated cellular counts as annotations, given the grade of differentiation and cellularities (cell densities) of different cells. We assess the generated images quantitatively using the Fréchet Inception Distance and report feedback from trained pathologists who assigned realism scores to a set of images generated by the framework. The average realism score across all pathologists was as high for the synthetic images as for the real images. Moreover, with assistance from pathologists, we showcase that the generated images can be used to accurately differentiate between benign and malignant tumors, thus reinforcing their reliability. We demonstrate that the proposed framework can be used to add new cells to a tissue image and alter cellular positions. We also show that augmenting limited real data with the synthetic data generated by our framework can significantly boost prediction performance on the cellular composition prediction task. The implementation of the proposed SynCLay framework is available at https://github.com/Srijay/SynCLay-Framework.
Affiliation(s)
- Srijay Deshpande
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK.
- Muhammad Dawood
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, London, UK; Department of Pathology, University Hospitals Coventry & Warwickshire, UK; Histofy Ltd, Birmingham, UK.
31
Griem J, Eich ML, Schallenberg S, Pryalukhin A, Bychkov A, Fukuoka J, Zayats V, Hulla W, Munkhdelger J, Seper A, Tsvetkov T, Mukhopadhyay A, Sanner A, Stieber J, Fuchs M, Babendererde N, Schömig-Markiefka B, Klein S, Buettner R, Quaas A, Tolkach Y. Artificial Intelligence-Based Tool for Tumor Detection and Quantitative Tissue Analysis in Colorectal Specimens. Mod Pathol 2023; 36:100327. [PMID: 37683932 DOI: 10.1016/j.modpat.2023.100327] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2023] [Revised: 08/11/2023] [Accepted: 08/31/2023] [Indexed: 09/10/2023]
Abstract
Digital pathology adoption allows for applying computational algorithms to routine pathology tasks. Our study aimed to develop a clinical-grade artificial intelligence (AI) tool for precise multiclass tissue segmentation in colorectal specimens (resections and biopsies) and clinically validate the tool for tumor detection in biopsy specimens. The training data set included 241 precisely manually annotated whole-slide images (WSIs) from multiple institutions. The algorithm was trained for semantic segmentation of 11 tissue classes with an additional module for biopsy WSI classification. Six case cohorts from 5 pathology departments (4 countries) were used for formal and clinical validation, digitized by 4 different scanning systems. The developed algorithm showed high precision of segmentation of different tissue classes in colorectal specimens, with a composite multiclass Dice score of up to 0.895 and pixel-wise tumor detection specificity and sensitivity of up to 0.958 and 0.987, respectively. In the clinical validation study on multiple external cohorts, the AI tool reached a sensitivity of 1.0 and specificity of up to 0.969 for tumor detection in biopsy WSIs. The AI tool analyzes most biopsy cases in less than 1 minute, allowing effective integration into the clinical routine. We developed and extensively validated a highly accurate, clinical-grade tool for assistive diagnostic processing of colorectal specimens. This tool allows for quantitative deciphering of colorectal cancer tissue for the development of prognostic and predictive biomarkers and personalization of oncologic care. This study is a foundation for the SemiCOL computational challenge. We open-source multiple manually annotated and weakly labeled test data sets, representing a significant contribution to the colorectal cancer computational pathology field.
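The composite multiclass Dice reported above can be computed per class and averaged; a plain-numpy sketch for integer label maps:

```python
# Per-class Dice averaged over the classes present in prediction or target.
import numpy as np

def multiclass_dice(pred, target, n_classes, eps=1e-8):
    scores = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        if t.sum() == 0 and p.sum() == 0:
            continue                       # class absent from both, skip it
        scores.append(2.0 * np.logical_and(p, t).sum()
                      / (p.sum() + t.sum() + eps))
    return float(np.mean(scores))

pred = np.random.default_rng(0).integers(0, 11, size=(512, 512))
target = np.random.default_rng(1).integers(0, 11, size=(512, 512))
print(multiclass_dice(pred, target, n_classes=11))  # 11 tissue classes
```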
Affiliation(s)
- Johanna Griem
- Institute of Pathology, University Hospital Cologne, Cologne, Germany
- Marie-Lisa Eich
- Institute of Pathology, University Hospital Cologne, Cologne, Germany
- Alexey Pryalukhin
- Institute of Pathology, State Hospital Wiener Neustadt, Wiener Neustadt, Austria
- Andrey Bychkov
- Department of Pathology Informatics, Nagasaki University Graduate School of Biomedical Sciences, Nagasaki, Japan; Department of Pathology, Kameda Medical Center, Kamogawa, Japan
- Junya Fukuoka
- Department of Pathology Informatics, Nagasaki University Graduate School of Biomedical Sciences, Nagasaki, Japan; Department of Pathology, Kameda Medical Center, Kamogawa, Japan
- Vitaliy Zayats
- Laboratory for Medical Artificial Intelligence, The Resource Center for Universal Design and Rehabilitation Technologies (RCUD and RT), Moscow, Russia
- Wolfgang Hulla
- Institute of Pathology, State Hospital Wiener Neustadt, Wiener Neustadt, Austria
- Alexander Seper
- Danube Private University, Medical Faculty, Krems-Stein, Austria
- Tsvetan Tsvetkov
- Institute of Pathology, University Hospital Cologne, Cologne, Germany
- Moritz Fuchs
- Technical University Darmstadt, Darmstadt, Germany
- Sebastian Klein
- Institute of Pathology, University Hospital Cologne, Cologne, Germany
- Reinhard Buettner
- Institute of Pathology, University Hospital Cologne, Cologne, Germany
- Alexander Quaas
- Institute of Pathology, University Hospital Cologne, Cologne, Germany
- Yuri Tolkach
- Institute of Pathology, University Hospital Cologne, Cologne, Germany.
32
Li S, Shi S, Fan Z, He X, Zhang N. Deep information-guided feature refinement network for colorectal gland segmentation. Int J Comput Assist Radiol Surg 2023; 18:2319-2328. [PMID: 36934367 DOI: 10.1007/s11548-023-02857-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Accepted: 02/22/2023] [Indexed: 03/20/2023]
Abstract
PURPOSE Reliable quantification of colorectal histopathological images depends on precise gland segmentation, which is challenging: glandular morphology varies widely across histological grades, malignant glands and non-gland tissues can be too similar to distinguish, and tightly connected glands are easily mis-segmented as a single gland. METHODS A deep information-guided feature refinement network is proposed to improve gland segmentation. Specifically, the backbone deepens the network structure to obtain effective features while maximizing the retained information, and a Multi-Scale Fusion module is proposed to increase the receptive field. In addition, to segment dense glands individually, a Multi-Scale Edge-Refined module is designed to strengthen gland boundaries. RESULTS Comparative experiments against eight recently proposed deep learning methods demonstrate that our network has better overall performance and is more competitive on Test B. The F1 scores on Test A and Test B are 0.917 and 0.876, respectively; the object-level Dice scores are 0.921 and 0.884; and the object-level Hausdorff distances are 43.428 and 87.132, respectively. CONCLUSION The proposed colorectal gland segmentation network effectively extracts features with high representational ability and enhances edge features while retaining as much detail as possible, dramatically improving segmentation performance on malignant glands and yielding better results for multi-scale and closely connected glands.
Affiliation(s)
- Sheng Li
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Shuling Shi
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Zhenbang Fan
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Xiongxiong He
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Ni Zhang
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China.
33
Kataria T, Rajamani S, Ayubi AB, Bronner M, Jedrzkiewicz J, Knudsen BS, Elhabian SY. Automating Ground Truth Annotations for Gland Segmentation Through Immunohistochemistry. Mod Pathol 2023; 36:100331. [PMID: 37716506 DOI: 10.1016/j.modpat.2023.100331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 08/14/2023] [Accepted: 09/08/2023] [Indexed: 09/18/2023]
Abstract
Microscopic evaluation of glands in the colon is of utmost importance in the diagnosis of inflammatory bowel disease and cancer. When properly trained, deep learning pipelines can provide a systematic, reproducible, and quantitative assessment of disease-related changes in glandular tissue architecture. The training and testing of deep learning models require large amounts of manual annotations, which are difficult, time-consuming, and expensive to obtain. Here, we propose a method for automated generation of ground truth in digital hematoxylin and eosin (H&E)-stained slides using immunohistochemistry (IHC) labels. The image processing pipeline generates annotations of glands in H&E histopathology images from colon biopsy specimens by transferring gland masks from KRT8/18, CDX2, or EPCAM IHC. The IHC gland outlines are transferred to coregistered H&E images for training of deep learning models. We compared the performance of the deep learning models to that of manual annotations using an internal held-out set of biopsy specimens as well as 2 public data sets. Our results show that EPCAM IHC provides gland outlines that closely match manual gland annotations (Dice = 0.89) and are resilient to damage by inflammation. In addition, we propose a simple data sampling technique that allows models trained on data from several sources to be adapted to a new data source using just a few newly annotated samples. The best performing models achieved average Dice scores of 0.902 and 0.89 on the Gland Segmentation (GlaS) and Colorectal Adenocarcinoma Glands (CRAG) public colon cancer data sets, respectively, when trained with only 10% of annotated cases from either public cohort. Altogether, the performance of our models indicates that automated annotations using cell type-specific IHC markers can safely replace manual annotations. Automated IHC labels from single-institution cohorts can be combined with small numbers of hand-annotated cases from multi-institutional cohorts to train models that generalize well to diverse data sources.
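The two core steps, deriving a gland mask from the IHC stain and carrying it onto the coregistered H&E image, can be sketched as below; the colour-deconvolution threshold and the affine transform are stand-ins for the paper's more elaborate pipeline and are purely illustrative assumptions.

```python
# Threshold the DAB channel of an IHC tile, then warp the mask into H&E
# coordinates with a (here: assumed) coregistration affine.
import numpy as np
from skimage.color import rgb2hed
from scipy.ndimage import affine_transform

ihc_rgb = np.random.rand(256, 256, 3)            # stand-in for an IHC tile
dab = rgb2hed(ihc_rgb)[..., 2]                   # DAB (brown) stain channel
ihc_mask = (dab > dab.mean() + dab.std()).astype(float)

# Assume coregistration produced an affine (identity plus a small shift).
matrix = np.eye(2)
shift = np.array([3.0, -2.0])
he_mask = affine_transform(ihc_mask, matrix, offset=shift, order=0) > 0.5
print(he_mask.shape)                             # gland mask in H&E coordinates
```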
Affiliation(s)
- Tushar Kataria
- Kahlert School of Computing, University of Utah, Salt Lake City, Utah; Kahlert School of Computing, Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Saradha Rajamani
- Kahlert School of Computing, University of Utah, Salt Lake City, Utah; Kahlert School of Computing, Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Abdul Bari Ayubi
- Department of Pathology, University of Utah, Salt Lake City, Utah
- Mary Bronner
- Department of Pathology, University of Utah, Salt Lake City, Utah; Department of Pathology, ARUP Laboratories, Salt Lake City, Utah
- Jolanta Jedrzkiewicz
- Department of Pathology, University of Utah, Salt Lake City, Utah; Department of Pathology, ARUP Laboratories, Salt Lake City, Utah
- Beatrice S Knudsen
- Kahlert School of Computing, Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah; Department of Pathology, University of Utah, Salt Lake City, Utah.
- Shireen Y Elhabian
- Kahlert School of Computing, University of Utah, Salt Lake City, Utah; Kahlert School of Computing, Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah.
34
Sun M, Wang J, Gong Q, Huang W. Enhancing gland segmentation in colon histology images using an instance-aware diffusion model. Comput Biol Med 2023; 166:107527. [PMID: 37778210 DOI: 10.1016/j.compbiomed.2023.107527] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 08/17/2023] [Accepted: 09/19/2023] [Indexed: 10/03/2023]
Abstract
In pathological image analysis, determination of gland morphology in histology images of the colon is essential to determine the grade of colon cancer. However, manual segmentation of glands is extremely challenging, and there is a need to develop automatic methods for segmenting gland instances. Recently, owing to its powerful noise-to-image denoising pipeline, the diffusion model has become a hot spot in computer vision research and has been explored in the field of image segmentation. In this paper, we propose an instance segmentation method based on the diffusion model that can perform automatic gland instance segmentation. First, we model the instance segmentation process for colon histology images as a denoising process based on a diffusion model. Second, to recover details lost during denoising, we use Instance Aware Filters and a multi-scale Mask Branch to construct a global mask instead of predicting only local masks. Third, to improve the distinction between the object and the background, we apply Conditional Encoding to enhance the intermediate features with the original image encoding. To objectively validate the proposed method, we compared several state-of-the-art deep learning models on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset (165 images), the Colorectal Adenocarcinoma Glands (CRAG) dataset (213 images), and the RINGS dataset (1500 images). Our proposed method obtains significantly improved results for CRAG (Object F1 0.853 ± 0.054, Object Dice 0.906 ± 0.043), GlaS Test A (Object F1 0.941 ± 0.039, Object Dice 0.939 ± 0.060), GlaS Test B (Object F1 0.893 ± 0.073, Object Dice 0.889 ± 0.069), and the RINGS dataset (Precision 0.893 ± 0.096, Dice 0.904 ± 0.091). The experimental results show that our method significantly improves segmentation accuracy and demonstrate its efficacy.
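The denoising formulation rests on the standard DDPM forward process: a clean mask x0 is noised to x_t and a network is trained to predict the injected noise (the real model additionally conditions on the histology image). A generic sketch of that training objective, not the paper's instance-segmentation model:

```python
# DDPM forward noising and the noise-prediction training objective.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)    # cumulative signal level

def q_sample(x0, t, noise):
    """Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps."""
    a = alpha_bar[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise

eps_model = torch.nn.Conv2d(1, 1, 3, padding=1)  # toy noise predictor
x0 = torch.rand(4, 1, 64, 64).round()            # binary gland masks
t = torch.randint(0, T, (4,))
noise = torch.randn_like(x0)
loss = F.mse_loss(eps_model(q_sample(x0, t, noise)), noise)
loss.backward()
```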
Affiliation(s)
- Mengxue Sun
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Jiale Wang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Qingtao Gong
- Ulsan Ship and Ocean College, Ludong University, Yantai, 264025, China
- Wenhui Huang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China.
35
Neary-Zajiczek L, Beresna L, Razavi B, Pawar V, Shaw M, Stoyanov D. Minimum resolution requirements of digital pathology images for accurate classification. Med Image Anal 2023; 89:102891. [PMID: 37536022 DOI: 10.1016/j.media.2023.102891] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Revised: 05/22/2023] [Accepted: 07/06/2023] [Indexed: 08/05/2023]
Abstract
Digitization of pathology has been proposed as an essential mitigation strategy for the severe staffing crisis facing most pathology departments. Despite its benefits, several barriers have prevented widespread adoption of digital workflows, including cost and pathologist reluctance due to subjective image quality concerns. In this work, we quantitatively determine the minimum image quality requirements for binary classification of histopathology images of breast tissue in terms of spatial and sampling resolution. We train an ensemble of deep learning classifier models on publicly available datasets to obtain a baseline accuracy and computationally degrade these images according to our derived theoretical model to identify the minimum resolution necessary for acceptable diagnostic accuracy. Our results show that images can be degraded significantly below the resolution of most commercial whole-slide imaging systems while maintaining reasonable accuracy, demonstrating that macroscopic features are sufficient for binary classification of stained breast tissue. A rapid low-cost imaging system capable of identifying healthy tissue not requiring human assessment could serve as a triage system for reducing caseloads and alleviating the significant strain on the current workforce.
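The degradation protocol, reducing sampling resolution and returning to the original grid before classification, can be emulated with a down/upsampling round trip; the factors below are illustrative, not the paper's derived thresholds.

```python
# Simulate lower sampling resolution with a down/up interpolation round trip.
import torch
import torch.nn.functional as F

def degrade(image, factor):
    """Downsample by `factor`, then upsample back to the original grid."""
    small = F.interpolate(image, scale_factor=1.0 / factor, mode="bilinear",
                          align_corners=False)
    return F.interpolate(small, size=image.shape[-2:], mode="bilinear",
                         align_corners=False)

tile = torch.rand(1, 3, 512, 512)                # stand-in histology tile
for factor in (2, 4, 8):
    blurred = degrade(tile, factor)              # feed to the classifier
    print(factor, blurred.shape)
```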
Affiliation(s)
- Lydia Neary-Zajiczek
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Charles Bell House, 43-45 Foley Street, Fitzrovia, London, W1W 7TS, United Kingdom; Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, United Kingdom.
| | - Linas Beresna
- Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, United Kingdom
| | - Benjamin Razavi
- University College London Medical School, 74 Huntley Street, London, WC1E 6BT, United Kingdom
| | - Vijay Pawar
- Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, United Kingdom
| | - Michael Shaw
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Charles Bell House, 43-45 Foley Street, Fitzrovia, London, W1W 7TS, United Kingdom; Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, United Kingdom; National Physical Laboratory, Hampton Road, Teddington, TW11 0LW, United Kingdom
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Charles Bell House, 43-45 Foley Street, Fitzrovia, London, W1W 7TS, United Kingdom; Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, United Kingdom
| |
|
36
|
Sun J, Zhang X, Li X, Liu R, Wang T. DARMF-UNet: A dual-branch attention-guided refinement network with multi-scale features fusion U-Net for gland segmentation. Comput Biol Med 2023; 163:107218. [PMID: 37393784 DOI: 10.1016/j.compbiomed.2023.107218] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 06/08/2023] [Accepted: 06/25/2023] [Indexed: 07/04/2023]
Abstract
Accurate gland segmentation is critical in determining adenocarcinoma. Current automatic gland segmentation methods suffer from challenges such as inaccurate edge segmentation, frequent mis-segmentation, and incomplete segmentation. To address these problems, this paper proposes a novel gland segmentation network, the Dual-branch Attention-guided Refinement and Multi-scale Features Fusion U-Net (DARMF-UNet), which fuses multi-scale features using deep supervision. At the first three feature-concatenation layers, a Coordinate Parallel Attention (CPA) module is proposed to guide the network to focus on key regions. A Dense Atrous Convolution (DAC) block is used in the fourth feature-concatenation layer to perform multi-scale feature extraction and capture global information. A hybrid loss function computes the loss of each of the network's segmentation outputs to achieve deep supervision and improve segmentation accuracy. Finally, the segmentation results at different scales in each part of the network are fused to obtain the final gland segmentation. Experimental results on the Warwick-QU and CRAG gland datasets show that the network improves on the F1 Score, Object Dice, and Object Hausdorff metrics, and its segmentation results surpass state-of-the-art network models.
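A Dense Atrous Convolution block commonly takes the form of parallel dilated convolutions over the same input; the sketch below assumes that pattern (the exact branch layout in DARMF-UNet may differ):

    import torch
    import torch.nn as nn

    class DenseAtrousConv(nn.Module):
        def __init__(self, channels, dilations=(1, 3, 5)):
            super().__init__()
            # one 3x3 branch per dilation rate; padding = dilation keeps the spatial size
            self.branches = nn.ModuleList(
                nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
                for d in dilations
            )
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            # sum multi-scale responses with a residual path for global context
            return self.relu(x + sum(b(x) for b in self.branches))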
Affiliation(s)
- Junmei Sun
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
| | - Xin Zhang
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
| | - Xiumei Li
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China.
| | - Ruyu Liu
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
| | - Tianyang Wang
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
| |
|
37
|
Das R, Bose S, Chowdhury RS, Maulik U. Dense Dilated Multi-Scale Supervised Attention-Guided Network for histopathology image segmentation. Comput Biol Med 2023; 163:107182. [PMID: 37379615 DOI: 10.1016/j.compbiomed.2023.107182] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Revised: 05/24/2023] [Accepted: 06/13/2023] [Indexed: 06/30/2023]
Abstract
Over the last couple of decades, the introduction and proliferation of whole-slide scanners have led to increasing interest in digital pathology research. Although manual analysis of histopathological images is still the gold standard, the process is often tedious and time-consuming and suffers from intra- and interobserver variability. Separating structures or grading morphological changes can be difficult due to the architectural variability of these images. Deep learning techniques have shown great potential in histopathology image segmentation, drastically reducing the time needed for downstream analysis tasks and supporting accurate diagnosis; however, few algorithms have seen clinical implementation. In this paper, we propose a new deep learning model, the Dense Dilated Multiscale Supervised Attention-Guided (D2MSA) Network, for histopathology image segmentation that makes use of deep supervision coupled with a hierarchical system of novel attention mechanisms. The proposed model surpasses state-of-the-art performance while using similar computational resources. Its performance has been evaluated on gland segmentation and nuclei instance segmentation, both clinically relevant tasks for assessing the state and progression of malignancy, using histopathology image datasets for three different types of cancer. We have also performed extensive ablation tests and hyperparameter tuning to ensure the validity and reproducibility of the model's performance. The proposed model is available at www.github.com/shirshabose/D2MSA-Net.
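Deep supervision, as used here, attaches auxiliary losses to intermediate decoder outputs; a generic sketch (the weights and number of side outputs are assumptions, not the D2MSA configuration):

    import torch
    import torch.nn.functional as F

    def deep_supervision_loss(side_outputs, target, weights=(0.25, 0.5, 0.75, 1.0)):
        """side_outputs: list of (B, C, h_i, w_i) logits, shallow to deep; target: (B, H, W) labels."""
        total = 0.0
        for w, logits in zip(weights, side_outputs):
            # resize each side output to the ground-truth resolution
            logits = F.interpolate(logits, size=target.shape[-2:],
                                   mode="bilinear", align_corners=False)
            total = total + w * F.cross_entropy(logits, target)
        return total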
Affiliation(s)
- Rangan Das
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
| | - Shirsha Bose
- Department of Informatics, Technical University of Munich, Munich, Bavaria 85748, Germany.
| | - Ritesh Sur Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
| | - Ujjwal Maulik
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
| |
|
38
|
Li Y, Zhang Y, Liu JY, Wang K, Zhang K, Zhang GS, Liao XF, Yang G. Global Transformer and Dual Local Attention Network via Deep-Shallow Hierarchical Feature Fusion for Retinal Vessel Segmentation. IEEE TRANSACTIONS ON CYBERNETICS 2023; 53:5826-5839. [PMID: 35984806 DOI: 10.1109/tcyb.2022.3194099] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Clinically, retinal vessel segmentation is a significant step in the diagnosis of fundus diseases. However, recent methods generally neglect the difference in semantic information between deep and shallow features and thus fail to capture global and local characterizations of fundus images simultaneously, limiting segmentation performance for fine vessels. In this article, a global transformer (GT) and dual local attention (DLA) network via deep-shallow hierarchical feature fusion (GT-DLA-dsHFF) is investigated to overcome these limitations. First, the GT is developed to integrate global information in the retinal image, effectively capturing long-distance dependences between pixels and alleviating discontinuity of blood vessels in the segmentation results. Second, DLA, constructed using dilated convolutions with varied dilation rates, unsupervised edge detection, and a squeeze-excitation block, is proposed to extract local vessel information and consolidate edge details in the segmentation result. Finally, a novel deep-shallow hierarchical feature fusion (dsHFF) algorithm is studied to fuse features at different scales in the deep learning framework, mitigating the attenuation of valid information during feature fusion. We verified GT-DLA-dsHFF on four typical fundus image datasets. The experimental results demonstrate that our GT-DLA-dsHFF achieves superior performance against current methods, and detailed discussions verify the efficacy of the three proposed modules. Segmentation results on diseased images show the robustness of the proposed GT-DLA-dsHFF. Implementation code will be available at https://github.com/YangLibuaa/GT-DLA-dsHFF.
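The DLA branch combines dilated convolutions and edge cues with a squeeze-excitation block; for reference, a standard squeeze-excitation block looks like the following (a generic sketch, not the authors' exact configuration):

    import torch
    import torch.nn as nn

    class SqueezeExcitation(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels), nn.Sigmoid(),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(x.mean(dim=(2, 3)))    # squeeze: global average pooling
            return x * w.view(b, c, 1, 1)      # excite: per-channel re-weighting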
|
39
|
Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. [PMID: 37314068 DOI: 10.1002/gcc.23177] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 05/18/2023] [Accepted: 05/20/2023] [Indexed: 06/15/2023] Open
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have resulted in automated models for the prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Affiliation(s)
- Michael Cooper
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
| | - Zongliang Ji
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
| | - Rahul G Krishnan
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
| |
|
40
|
Rauf Z, Khan AR, Sohail A, Alquhayz H, Gwak J, Khan A. Lymphocyte detection for cancer analysis using a novel fusion block based channel boosted CNN. Sci Rep 2023; 13:14047. [PMID: 37640739 PMCID: PMC10462751 DOI: 10.1038/s41598-023-40581-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 08/13/2023] [Indexed: 08/31/2023] Open
Abstract
Tumor-infiltrating lymphocytes, specialized immune cells, are considered an important biomarker in cancer analysis. Automated lymphocyte detection is challenging due to their heterogeneous morphology, variable distribution, and the presence of artifacts. In this work, we propose a novel Boosted Channels Fusion-based CNN, "BCF-Lym-Detector", for lymphocyte detection in multiple cancer histology images. The proposed network initially selects candidate lymphocytic regions at the tissue level and then detects lymphocytes at the cellular level. The "BCF-Lym-Detector" generates diverse boosted channels by utilizing the feature learning capability of different CNN architectures. In this connection, a new adaptive fusion block is developed to combine and select the most relevant lymphocyte-specific features from the enriched feature space. Multi-level feature learning is used to retain lymphocytic spatial information and detect lymphocytes with variable appearances. The assessment of the proposed "BCF-Lym-Detector" shows substantial improvement in terms of F-score (0.93 and 0.84 on LYSTO and NuClick, respectively), which suggests that diverse feature extraction and dynamic feature selection enhance the network's feature learning capacity. Moreover, the proposed technique generalizes to unseen test sets with good recall (0.75) and F-score (0.73), showing its potential to assist pathologists.
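The idea of "boosted channels" can be pictured as concatenating feature maps from different pretrained backbones and learning a fusion over them. A hedged sketch, where the choice of backbones and the fusion width are assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models

    class BoostedChannelFusion(nn.Module):
        def __init__(self, out_channels=256):
            super().__init__()
            resnet = models.resnet18(weights=None)
            self.backbone_a = nn.Sequential(*list(resnet.children())[:-2])  # 512-ch maps
            self.backbone_b = models.densenet121(weights=None).features     # 1024-ch maps
            self.fuse = nn.Conv2d(512 + 1024, out_channels, kernel_size=1)  # adaptive 1x1 fusion

        def forward(self, x):
            fa = self.backbone_a(x)
            fb = self.backbone_b(x)
            # align spatial sizes before channel-wise concatenation
            fb = F.interpolate(fb, size=fa.shape[-2:], mode="bilinear", align_corners=False)
            return self.fuse(torch.cat([fa, fb], dim=1))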
Affiliation(s)
- Zunaira Rauf
- Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
- PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
| | - Abdul Rehman Khan
- Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
| | - Anabia Sohail
- Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
- Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi, UAE
| | - Hani Alquhayz
- Department of Computer Science and Information, College of Science in Zulfi, Majmaah University, 11952, Al-Majmaah, Saudi Arabia
| | - Jeonghwan Gwak
- Department of Software, Korea National University of Transportation, Chungju, 27469, Republic of Korea.
| | - Asifullah Khan
- Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan.
- PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan.
- Center for Mathematical Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan.
| |
|
41
|
Bashir RMS, Shephard AJ, Mahmood H, Azarmehr N, Raza SEA, Khurram SA, Rajpoot NM. A digital score of peri-epithelial lymphocytic activity predicts malignant transformation in oral epithelial dysplasia. J Pathol 2023; 260:431-442. [PMID: 37294162 PMCID: PMC10952946 DOI: 10.1002/path.6094] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 04/15/2023] [Accepted: 05/02/2023] [Indexed: 06/10/2023]
Abstract
Oral squamous cell carcinoma (OSCC) is amongst the most common cancers, with more than 377,000 new cases worldwide each year. OSCC prognosis remains poor, largely because the cancer often presents at a late stage, underscoring the need for early detection to improve patient outcomes. OSCC is often preceded by a premalignant state known as oral epithelial dysplasia (OED), which is diagnosed and graded using subjective histological criteria, leading to variability and prognostic unreliability. In this work, we propose a deep learning approach for the development of prognostic models for malignant transformation and their association with clinical outcomes in histology whole slide images (WSIs) of OED tissue sections. We train a weakly supervised method on OED cases (n = 137) with malignant transformation (n = 50) and a mean malignant transformation time of 6.51 years (±5.35 SD). Stratified five-fold cross-validation achieved an average area under the receiver-operator characteristic curve (AUROC) of 0.78 for predicting malignant transformation in OED. Hotspot analysis revealed various features of nuclei in the epithelium and peri-epithelial tissue to be significant prognostic factors for malignant transformation, including the count of peri-epithelial lymphocytes (PELs) (p < 0.05), epithelial layer nuclei count (NC) (p < 0.05), and basal layer NC (p < 0.05). In our univariate analysis, progression-free survival (PFS) models using the epithelial layer NC (p < 0.05, C-index = 0.73), basal layer NC (p < 0.05, C-index = 0.70), and PELs count (p < 0.05, C-index = 0.73) all associated these features with a high risk of malignant transformation. Our work shows, for the first time, the application of deep learning for the prognostication and prediction of PFS in OED, and offers potential to aid patient management. Further evaluation and testing on multi-centre data are required for validation and translation to clinical practice. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
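The evaluation protocol (stratified five-fold cross-validation scored by AUROC) can be reproduced generically with scikit-learn; the sketch below uses placeholder case-level features and a simple linear classifier, whereas the study itself scores whole-slide images with a weakly supervised model:

    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(137, 16))        # placeholder per-case features
    y = rng.integers(0, 2, size=137)      # 1 = malignant transformation

    aucs = []
    for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
    print(f"mean AUROC over folds: {np.mean(aucs):.2f}")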
Affiliation(s)
| | - Adam J Shephard
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
| | - Hanya Mahmood
- Academic Unit of Oral & Maxillofacial Surgery, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
| | - Neda Azarmehr
- Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
| | - Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
| | - Syed Ali Khurram
- Academic Unit of Oral & Maxillofacial Surgery, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
| | - Nasir M Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
| |
|
42
|
Jain Y, Godwin LL, Ju Y, Sood N, Quardokus EM, Bueckle A, Longacre T, Horning A, Lin Y, Esplin ED, Hickey JW, Snyder MP, Patterson NH, Spraggins JM, Börner K. Segmentation of human functional tissue units in support of a Human Reference Atlas. Commun Biol 2023; 6:717. [PMID: 37468557 PMCID: PMC10356924 DOI: 10.1038/s42003-023-04848-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 04/17/2023] [Indexed: 07/21/2023] Open
Abstract
The Human BioMolecular Atlas Program (HuBMAP) aims to compile a Human Reference Atlas (HRA) for the healthy adult body at the cellular level. Functional tissue units (FTUs), relevant for HRA construction, are of pathobiological significance. Manual segmentation of FTUs does not scale; highly accurate, performant, open-source machine-learning algorithms are needed. We designed and hosted a Kaggle competition that focused on the development of such algorithms, in which 1200 teams from 60 countries participated. We present the competition outcomes and an expanded analysis of the winning algorithms on additional kidney and colon tissue data, and conduct a pilot study to understand the spatial location and density of FTUs across the kidney. The top algorithm from the competition, Tom, outperforms other algorithms in the expanded study while using fewer computational resources. Tom was added to the HuBMAP infrastructure to run kidney FTU segmentation at scale, showcasing the value of Kaggle competitions for advancing research.
Affiliation(s)
- Yashvardhan Jain
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA.
| | - Leah L Godwin
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
| | - Yingnan Ju
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
| | - Naveksha Sood
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
| | - Ellen M Quardokus
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
| | - Andreas Bueckle
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
| | - Teri Longacre
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Aaron Horning
- Thermo Fisher Scientific, South San Francisco, CA, 94080, USA
| | - Yiing Lin
- Department of Surgery, Washington University School of Medicine, St. Louis, MO, 63110, USA
| | - Edward D Esplin
- Department of Genetics, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - John W Hickey
- Department of Microbiology & Immunology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Michael P Snyder
- Department of Genetics, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | | | - Jeffrey M Spraggins
- Mass Spectrometry Research Center, Vanderbilt University, Nashville, TN, 37232, USA
- Department of Cell and Developmental Biology, Vanderbilt University, Nashville, TN, 37232, USA
| | - Katy Börner
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA.
| |
|
43
|
Meng Z, Wang G, Su F, Liu Y, Wang Y, Yang J, Luo J, Cao F, Zhen P, Huang B, Yin Y, Zhao Z, Guo L. A Deep Learning-Based System Trained for Gastrointestinal Stromal Tumor Screening Can Identify Multiple Types of Soft Tissue Tumors. THE AMERICAN JOURNAL OF PATHOLOGY 2023; 193:899-912. [PMID: 37068638 DOI: 10.1016/j.ajpath.2023.03.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Revised: 03/26/2023] [Accepted: 03/28/2023] [Indexed: 04/19/2023]
Abstract
The accuracy and timeliness of the pathologic diagnosis of soft tissue tumors (STTs) critically affect treatment decisions and patient prognosis. It is therefore crucial to make a preliminary judgement on whether a tumor is benign or malignant from hematoxylin and eosin-stained images. A deep learning-based system, Soft Tissue Tumor Box (STT-BOX), is presented herein that uses only hematoxylin and eosin-stained images to identify malignant STTs among benign STTs with similar histopathology. STT-BOX takes gastrointestinal stromal tumor as a baseline for malignant STT evaluation and distinguished gastrointestinal stromal tumor from leiomyoma and schwannoma with 100% area under the curve in patients from three hospitals, exceeding the accuracy of interpretation by experienced pathologists. Notably, the system performed well on six common types of malignant STTs from The Cancer Genome Atlas data set, accurately highlighting the malignant mass lesion, and was able to distinguish ovarian malignant sex-cord stromal tumors without any fine-tuning. This study included mesenchymal tumors originating from the digestive system, bone and soft tissues, and the reproductive system, where the high accuracy achieved under such transfer may reveal the morphologic similarity of the nine types of malignant tumors. Further evaluation in a pan-STT setting is a promising prospect that could obviate the overuse of immunohistochemistry and molecular tests and provide a practical basis for timely clinical treatment selection.
Affiliation(s)
- Zhu Meng
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
| | - Guangxi Wang
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
| | - Fei Su
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China; Beijing Key Laboratory of Network System and Network Culture, Beijing, China
| | - Yan Liu
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
| | - Yuxiang Wang
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
| | - Jing Yang
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
| | - Jianyuan Luo
- Department of Medical Genetics, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
| | - Fang Cao
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
| | - Panpan Zhen
- Department of Pathology, Beijing Luhe Hospital, Capital Medical University, Beijing, China
| | - Binhua Huang
- Department of Pathology, Dongguan Houjie Hospital, Dongguan, China
| | - Yuxin Yin
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China
| | - Zhicheng Zhao
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China; Beijing Key Laboratory of Network System and Network Culture, Beijing, China.
| | - Limei Guo
- Beijing University of Posts and Telecommunications and Department of Pathology, Peking University Third Hospital, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing, China.
| |
|
44
|
AlGhamdi R, Asar TO, Assiri FY, Mansouri RA, Ragab M. Al-Biruni Earth Radius Optimization with Transfer Learning Based Histopathological Image Analysis for Lung and Colon Cancer Detection. Cancers (Basel) 2023; 15:3300. [PMID: 37444410 PMCID: PMC10340056 DOI: 10.3390/cancers15133300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Revised: 06/13/2023] [Accepted: 06/14/2023] [Indexed: 07/15/2023] Open
Abstract
An early diagnosis of lung and colon cancer (LCC) is critical for improved patient outcomes and effective treatment. Histopathological image (HSI) analysis has emerged as a robust tool for cancer diagnosis. HSI analysis for an LCC diagnosis involves examining tissue samples obtained from the LCC to recognize lesions or cancerous cells. It plays a significant role in the staging and diagnosis of the tumor, aiding prognosis and treatment planning, but manual image analysis is subject to human error and is time-consuming. Therefore, a computer-aided approach is needed for the detection of LCC using HSI. Transfer learning (TL) leverages pretrained deep learning (DL) models that have been trained on larger datasets to extract relevant features from the HSI, which are then used to train a classifier for tumor diagnosis. This manuscript offers the design of the Al-Biruni Earth Radius Optimization with Transfer Learning-based Histopathological Image Analysis for Lung and Colon Cancer Detection (BERTL-HIALCCD) technique. The purpose of the study is to detect LCC effectively in histopathological images. To this end, the BERTL-HIALCCD method follows the concepts of computer vision (CV) and transfer learning for accurate LCC detection. In the BERTL-HIALCCD technique, an improved ShuffleNet model is applied for feature extraction, and its hyperparameters are chosen by the BER system. For the effective recognition of LCC, a deep convolutional recurrent neural network (DCRNN) model is applied. Finally, the coati optimization algorithm (COA) is exploited for parameter selection in the DCRNN approach. To examine the efficacy of the BERTL-HIALCCD technique, a comprehensive set of experiments was conducted on a large dataset of histopathological images. The experimental outcomes demonstrate that the combination of the BER and COA algorithms attains improved performance in cancer detection over the compared models.
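The transfer-learning backbone described above can be sketched with torchvision's ShuffleNet: freeze the pretrained features and train a new classification head. The two-class head, optimizer, and learning rate are illustrative; the paper's BER hyperparameter search and DCRNN classifier are not reproduced here:

    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.shufflenet_v2_x1_0(
        weights=models.ShuffleNet_V2_X1_0_Weights.DEFAULT)
    for p in backbone.parameters():
        p.requires_grad = False                          # freeze pretrained features
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new trainable head (2-class, assumed)

    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        optimizer.zero_grad()
        loss = criterion(backbone(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()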
Affiliation(s)
- Rayed AlGhamdi
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Turky Omar Asar
- Department of Biology, College of Science and Arts at Alkamil, University of Jeddah, Jeddah, Saudi Arabia
| | - Fatmah Y. Assiri
- Department of Software Engineering, College of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia
| | - Rasha A. Mansouri
- Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Department of Biochemistry, Faculty of Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Mahmoud Ragab
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Mathematics Department, Faculty of Science, Al-Azhar University, Naser City 11884, Cairo, Egypt
- King Abdulaziz University-University of Oxford Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah 21589, Saudi Arabia
| |
|
45
|
Bokhorst JM, Nagtegaal ID, Fraggetta F, Vatrano S, Mesker W, Vieth M, van der Laak J, Ciompi F. Deep learning for multi-class semantic segmentation enables colorectal cancer detection and classification in digital pathology images. Sci Rep 2023; 13:8398. [PMID: 37225743 PMCID: PMC10209185 DOI: 10.1038/s41598-023-35491-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Accepted: 05/18/2023] [Indexed: 05/26/2023] Open
Abstract
In colorectal cancer (CRC), artificial intelligence (AI) can alleviate the laborious task of characterizing and reporting on resected biopsies, including polyps, whose numbers are increasing as a result of CRC population screening programs under way in many countries around the globe. Here, we present an approach that addresses two major challenges in the automated assessment of CRC histopathology whole-slide images. We present an AI-based method to segment multiple tissue compartments in the H&E-stained whole-slide image, which provides a different, more perceptible picture of tissue morphology and composition. We test and compare a panel of state-of-the-art loss functions available for segmentation models, and provide indications about their use in histopathology image segmentation, based on the analysis of (a) a multi-centric cohort of CRC cases from five medical centers in the Netherlands and Germany, and (b) two publicly available datasets on segmentation in CRC. We used the best performing AI model as the basis for a computer-aided diagnosis system that classifies colon biopsies into four main pathologically relevant categories. We report the performance of this system on an independent cohort of more than 1000 patients. The results show that, with a good segmentation network as a base, a tool can be developed to support pathologists in the risk stratification of colorectal cancer patients, among other possible uses. We have made the segmentation model available for research use at https://grand-challenge.org/algorithms/colon-tissue-segmentation/.
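A representative member of the kind of loss-function panel compared in such studies is a soft-Dice term blended with cross-entropy; a generic sketch (the weighting and the best-performing loss in the paper may differ):

    import torch
    import torch.nn.functional as F

    def dice_ce_loss(logits, target, eps=1e-6, ce_weight=0.5):
        """logits: (B, C, H, W); target: (B, H, W) integer class labels."""
        num_classes = logits.shape[1]
        probs = logits.softmax(dim=1)
        onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
        inter = (probs * onehot).sum(dim=(0, 2, 3))
        denom = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
        dice = 1 - ((2 * inter + eps) / (denom + eps)).mean()   # soft Dice over classes
        return ce_weight * F.cross_entropy(logits, target) + (1 - ce_weight) * dice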
Affiliation(s)
- John-Melle Bokhorst
- Department of pathology, Radboud University Medical Center, Nijmegen, The Netherlands.
| | - Iris D Nagtegaal
- Department of pathology, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Filippo Fraggetta
- Pathology Unit Gravina Hospital, Gravina Hospital, Caltagirone, Italy
| | - Simona Vatrano
- Pathology Unit Gravina Hospital, Gravina Hospital, Caltagirone, Italy
| | - Wilma Mesker
- Leids Universitair Medisch Centrum, Leiden, The Netherlands
| | - Michael Vieth
- Klinikum Bayreuth, Friedrich-Alexander-University Erlangen-Nuremberg, Bayreuth, Germany
| | - Jeroen van der Laak
- Department of pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
| | - Francesco Ciompi
- Department of pathology, Radboud University Medical Center, Nijmegen, The Netherlands
| |
|
46
|
A scale and region-enhanced decoding network for nuclei classification in histology image. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104626] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
|
47
|
Li K, Qian Z, Han Y, Chang EIC, Wei B, Lai M, Liao J, Fan Y, Xu Y. Weakly supervised histopathology image segmentation with self-attention. Med Image Anal 2023; 86:102791. [PMID: 36933385 DOI: 10.1016/j.media.2023.102791] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Revised: 01/09/2023] [Accepted: 02/24/2023] [Indexed: 03/13/2023]
Abstract
Accurate pixel-level segmentation in histopathology images plays a critical role in the digital pathology workflow. The development of weakly supervised methods for histopathology image segmentation liberates pathologists from time-consuming and labor-intensive work, opening up possibilities for further automated quantitative analysis of whole-slide histopathology images. As an effective subgroup of weakly supervised methods, multiple instance learning (MIL) has achieved great success in histopathology images. In this paper, we treat pixels as instances, so that the histopathology image segmentation task is transformed into an instance prediction task in MIL. However, the lack of relations between instances in MIL limits further improvement of segmentation performance. Therefore, we propose a novel weakly supervised method called SA-MIL for pixel-level segmentation in histopathology images. SA-MIL introduces a self-attention mechanism into the MIL framework, capturing global correlations among all instances. In addition, we use deep supervision to make the best use of the information available from limited annotations in the weakly supervised setting. Our approach compensates for the independence of instances in MIL by aggregating global contextual information. We demonstrate state-of-the-art results compared to other weakly supervised methods on two histopathology image datasets. The high performance on both tissue and cell histopathology datasets shows that our approach generalizes well, and it has potential for various applications in medical imaging.
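The core idea, pixels as MIL instances with self-attention relating them, can be sketched as follows; the embedding width, head count, and single-image "bag" are assumptions (note that full self-attention over H*W instances is quadratic in the pixel count):

    import torch
    import torch.nn as nn

    class InstanceSelfAttention(nn.Module):
        def __init__(self, dim=64, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.score = nn.Linear(dim, 1)            # per-instance (per-pixel) logit

        def forward(self, feats):
            """feats: (B, C, H, W) pixel embeddings from any encoder."""
            b, c, h, w = feats.shape
            inst = feats.flatten(2).transpose(1, 2)   # (B, H*W, C): pixels as instances
            ctx, _ = self.attn(inst, inst, inst)      # global correlation among instances
            return self.score(ctx).view(b, 1, h, w)   # reshape logits back to a mask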
Affiliation(s)
- Kailu Li
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China.
| | - Ziniu Qian
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China.
| | - Yingnan Han
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China.
| | | | | | - Maode Lai
- Department of Pathology, School of Medicine, Zhejiang University, Hangzhou 310027, China.
| | - Jing Liao
- Department of Computer Science, City University of Hong Kong, 999077, Hong Kong SAR, China.
| | - Yubo Fan
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China.
| | - Yan Xu
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China; Microsoft Research, Beijing 100080, China.
| |
|
48
|
Guo R, Xie K, Pagnucco M, Song Y. SAC-Net: Learning with weak and noisy labels in histopathology image segmentation. Med Image Anal 2023; 86:102790. [PMID: 36878159 DOI: 10.1016/j.media.2023.102790] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2022] [Revised: 11/24/2022] [Accepted: 02/23/2023] [Indexed: 03/06/2023]
Abstract
Deep convolutional neural networks have been highly effective in segmentation tasks. However, segmentation becomes more difficult when training images include many complex instances to segment, as in the task of nuclei segmentation in histopathology images. Weakly supervised learning can reduce the need for large-scale, high-quality ground truth annotations by involving non-expert annotators or algorithms to generate supervision information for segmentation. However, there is still a significant performance gap between weakly supervised and fully supervised learning approaches. In this work, we propose a weakly supervised nuclei segmentation method trained in two stages that requires only annotation of the nuclear centroids. First, we generate boundary- and superpixel-based masks as pseudo ground truth labels to train our SAC-Net, a segmentation network enhanced by a constraint network and an attention network to effectively address the problems caused by noisy labels. Then, we refine the pseudo labels at the pixel level based on Confident Learning to train the network again. Our method shows highly competitive performance on cell nuclei segmentation in histopathology images on three public datasets. Code will be available at: https://github.com/RuoyuGuo/MaskGA_Net.
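One simple way to turn centroid-only annotations into pseudo masks, roughly in the spirit of the first stage, is to assign pixels to their nearest centroid within a fixed radius; the radius is an assumption, and the paper's boundary/superpixel construction and Confident Learning refinement are more involved:

    import numpy as np
    from scipy import ndimage

    def centroid_pseudo_mask(shape, centroids, radius=12):
        """shape: (H, W); centroids: list of (row, col); returns an int32 label mask."""
        seeds = np.zeros(shape, dtype=np.int32)
        for i, (r, c) in enumerate(centroids, start=1):
            seeds[r, c] = i                           # one id per annotated nucleus
        dist, idx = ndimage.distance_transform_edt(seeds == 0, return_indices=True)
        labels = seeds[idx[0], idx[1]]                # id of the nearest centroid
        labels[dist > radius] = 0                     # keep only a disc around each seed
        return labels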
Affiliation(s)
- Ruoyu Guo
- School of Computer Science and Engineering, University of New South Wales, Australia
| | - Kunzi Xie
- School of Computer Science and Engineering, University of New South Wales, Australia
| | - Maurice Pagnucco
- School of Computer Science and Engineering, University of New South Wales, Australia
| | - Yang Song
- School of Computer Science and Engineering, University of New South Wales, Australia.
| |
|
49
|
Dabass M, Dabass J. An Atrous Convolved Hybrid Seg-Net Model with residual and attention mechanism for gland detection and segmentation in histopathological images. Comput Biol Med 2023; 155:106690. [PMID: 36827788 DOI: 10.1016/j.compbiomed.2023.106690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 02/06/2023] [Accepted: 02/14/2023] [Indexed: 02/21/2023]
Abstract
PURPOSE A clinically compatible computerized segmentation model is presented that aims to supply clinically informative gland details by capturing small and intricate variations in medical images, offering a second opinion and reducing human error. APPROACH The model has an enhanced learning capability that extracts denser multi-scale gland-specific features, recovers the semantic gap during concatenation, and effectively handles resolution degradation and vanishing-gradient problems. It contains three proposed modules: an Atrous Convolved Residual Learning Module in the encoder and decoder, a Residual Attention Module in the skip-connection paths, and an Atrous Convolved Transitional Module as the transitional and output layer. Pre-processing techniques such as patch sampling, stain normalization, and augmentation are employed to develop its generalization capability. To verify its robustness and its invariance to digital variability, extensive experiments were carried out on three public datasets, i.e., GlaS (Gland Segmentation Challenge), CRAG (Colorectal Adenocarcinoma Gland), and LC-25000 (Lung Colon-25000), and a private HosC (Hospital Colon) dataset. RESULTS The presented model accomplished competitive gland detection outcomes with F1-score (GlaS (Test A (0.957), Test B (0.926)), CRAG (0.935), LC-25000 (0.922), HosC (0.963)); and gland segmentation results with Object-Dice Index (GlaS (Test A (0.961), Test B (0.933)), CRAG (0.961), LC-25000 (0.940), HosC (0.929)) and Object-Hausdorff Distance (GlaS (Test A (21.77), Test B (69.74)), CRAG (87.63), LC-25000 (95.85), HosC (83.29)). In addition, validation scores (GlaS (Test A (0.945), Test B (0.937)), CRAG (0.934), LC-25000 (0.911), HosC (0.928)) supplied by proficient pathologists for the final segmentation results corroborate the model's applicability and appropriateness for assistance in clinical-level applications. CONCLUSION The proposed system can assist pathologists in devising precise diagnoses by offering a referential perspective during morphology assessment of colon histopathology images.
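Of the pre-processing steps listed, patch sampling is the most mechanical; a minimal sketch with illustrative patch size and stride (the paper's values may differ):

    import numpy as np

    def sample_patches(image, patch=256, stride=128):
        """image: (H, W, 3) array; returns a list of (row, col, patch) tuples."""
        h, w = image.shape[:2]
        patches = []
        for r in range(0, h - patch + 1, stride):
            for c in range(0, w - patch + 1, stride):
                patches.append((r, c, image[r:r + patch, c:c + patch]))
        return patches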
Affiliation(s)
- Manju Dabass
- EECE Dept., The NorthCap University, Gurugram, India.
| | - Jyoti Dabass
- DBT Centre of Excellence Biopharmaceutical Technology, IIT, Delhi, India
| |
|
50
|
Chaitanya K, Erdil E, Karani N, Konukoglu E. Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation. Med Image Anal 2023; 87:102792. [PMID: 37054649 DOI: 10.1016/j.media.2023.102792] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Revised: 11/25/2022] [Accepted: 03/02/2023] [Indexed: 03/13/2023]
Abstract
Supervised deep learning-based methods yield accurate results for medical image segmentation. However, they require large labeled datasets for this, and obtaining them is a laborious task that requires clinical expertise. Semi/self-supervised learning-based approaches address this limitation by exploiting unlabeled data along with limited annotated data. Recent self-supervised learning methods use contrastive loss to learn good global level representations from unlabeled images and achieve high performance in classification tasks on popular natural image datasets like ImageNet. In pixel-level prediction tasks such as segmentation, it is crucial to also learn good local level representations along with global representations to achieve better accuracy. However, the impact of the existing local contrastive loss-based methods remains limited for learning good local representations because similar and dissimilar local regions are defined based on random augmentations and spatial proximity; not based on the semantic label of local regions due to lack of large-scale expert annotations in the semi/self-supervised setting. In this paper, we propose a local contrastive loss to learn good pixel level features useful for segmentation by exploiting semantic label information obtained from pseudo-labels of unlabeled images alongside limited annotated images with ground truth (GT) labels. In particular, we define the proposed contrastive loss to encourage similar representations for the pixels that have the same pseudo-label/GT label while being dissimilar to the representation of pixels with different pseudo-label/GT label in the dataset. We perform pseudo-label based self-training and train the network by jointly optimizing the proposed contrastive loss on both labeled and unlabeled sets and segmentation loss on only the limited labeled set. We evaluated the proposed approach on three public medical datasets of cardiac and prostate anatomies, and obtain high segmentation performance with a limited labeled set of one or two 3D volumes. Extensive comparisons with the state-of-the-art semi-supervised and data augmentation methods and concurrent contrastive learning methods demonstrate the substantial improvement achieved by the proposed method. The code is made publicly available at https://github.com/krishnabits001/pseudo_label_contrastive_training.
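A compact sketch of a label-aware local contrastive loss in this spirit: pixel embeddings with the same pseudo-/GT label attract, those with different labels repel. The subsampling, temperature, and normalization details are assumptions; the paper's exact formulation may differ:

    import torch
    import torch.nn.functional as F

    def local_contrastive_loss(emb, labels, temperature=0.1, n_samples=256):
        """emb: (N, D) pixel embeddings; labels: (N,) pseudo- or GT labels."""
        idx = torch.randperm(emb.shape[0])[:n_samples]    # subsample pixels for tractability
        z = F.normalize(emb[idx], dim=1)
        y = labels[idx]
        sim = z @ z.t() / temperature                     # scaled cosine similarities
        pos = (y[:, None] == y[None, :]).float()
        pos.fill_diagonal_(0.0)                           # exclude self-pairs from positives
        logits = sim - sim.max(dim=1, keepdim=True).values.detach()   # numerical stability
        self_mask = 1.0 - torch.eye(len(y), device=z.device)
        log_prob = logits - (logits.exp() * self_mask).sum(dim=1, keepdim=True).log()
        n_pos = pos.sum(dim=1).clamp(min=1.0)             # guard rows with no positives
        return -((pos * log_prob).sum(dim=1) / n_pos).mean()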
Affiliation(s)
- Krishna Chaitanya
- Computer Vision Laboratory, ETH Zurich, Sternwartstrasse 7, Zurich 8092, Switzerland.
| | - Ertunc Erdil
- Computer Vision Laboratory, ETH Zurich, Sternwartstrasse 7, Zurich 8092, Switzerland
| | - Neerav Karani
- Computer Vision Laboratory, ETH Zurich, Sternwartstrasse 7, Zurich 8092, Switzerland
| | - Ender Konukoglu
- Computer Vision Laboratory, ETH Zurich, Sternwartstrasse 7, Zurich 8092, Switzerland
| |
|