51
Tan L, Li H, Yu J, Zhou H, Wang Z, Niu Z, Li J, Li Z. Colorectal cancer lymph node metastasis prediction with weakly supervised transformer-based multi-instance learning. Med Biol Eng Comput 2023; 61:1565-1580. [PMID: 36809427] [PMCID: PMC10182132] [DOI: 10.1007/s11517-023-02799-x]
Abstract
Lymph node metastasis, examined via the resected lymph nodes, is considered one of the most important prognostic factors for colorectal cancer (CRC). However, its assessment requires careful and comprehensive inspection by expert pathologists. To relieve pathologists' burden and speed up the diagnostic process, we develop a deep learning system that solves the CRC lymph node classification task using only binary positive/negative labels for the lymph nodes. The multi-instance learning (MIL) framework is adopted to handle whole-slide images (WSIs), gigapixels in size, at once and to dispense with labor-intensive, time-consuming detailed annotations. First, a transformer-based MIL model, DT-DSMIL, is proposed based on a deformable-transformer backbone and the dual-stream MIL (DSMIL) framework. Local-level image features are extracted and aggregated with the deformable transformer, global-level image features are obtained with the DSMIL aggregator, and the final classification decision is based on both. After demonstrating the effectiveness of DT-DSMIL by comparing its performance with its predecessors, a diagnostic system is developed to detect, crop, and identify single lymph nodes within the slides using DT-DSMIL and the Faster R-CNN model. The diagnostic model is trained and tested on a clinically collected CRC lymph node metastasis dataset of 843 slides (864 metastatic and 1415 non-metastatic lymph nodes), achieving an accuracy of 95.3% and an area under the receiver operating characteristic curve (AUC) of 0.9762 (95% confidence interval [CI]: 0.9607-0.9891) for single-lymph-node classification. For lymph nodes with micro-metastasis and macro-metastasis, the system achieves AUCs of 0.9816 (95% CI: 0.9659-0.9935) and 0.9902 (95% CI: 0.9787-0.9983), respectively.
Moreover, the system shows reliable localization of diagnostic regions: it consistently identifies the most likely metastases, regardless of the model's predictions or the manual labels, showing great potential for avoiding false negatives and discovering incorrectly labeled slides in actual clinical use.
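The attention-style aggregation that MIL systems like this build on can be illustrated without any deep learning framework. The sketch below is a generic attention-pooling step over per-patch (instance) features in plain Python; it is not the DT-DSMIL architecture, and the feature vectors and linear scoring rule are invented for illustration.

```python
import math

def attention_mil_pool(instance_feats, w_score):
    """Aggregate per-patch (instance) feature vectors into one bag vector.

    Each instance gets a scalar relevance score (dot product with w_score);
    a softmax over scores yields attention weights, and the bag embedding
    is the attention-weighted sum of instance features. The linear scoring
    rule is illustrative, not the DT-DSMIL aggregator itself.
    """
    scores = [sum(w * x for w, x in zip(w_score, f)) for f in instance_feats]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]          # attention weights sum to 1
    dim = len(instance_feats[0])
    bag = [sum(a * f[d] for a, f in zip(attn, instance_feats))
           for d in range(dim)]
    return bag, attn

# Three invented 2-D "patch features"; the second dominates the scoring direction.
feats = [[0.1, 0.2], [2.0, 1.5], [0.0, 0.3]]
bag, attn = attention_mil_pool(feats, w_score=[1.0, 1.0])
```

A slide-level classifier would then operate on `bag`, while `attn` indicates which patches drove the decision, which is the basis of the region-localizing behaviour described above.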
Affiliation(s)
- Luxin Tan
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Huan Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Jinze Yu
- Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, 100191, China; School of Computer Science and Engineering, Beihang University, Beijing, 100191, China; Shenyuan Honors College, Beihang University, Beijing, 100191, China
- Haoyi Zhou
- Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, 100191, China; College of Software, Beihang University, Beijing, 100191, China
- Zhi Wang
- Blot Info & Tech (Beijing) Co. Ltd, Beijing, China
- Zhiyong Niu
- Blot Info & Tech (Beijing) Co. Ltd, Beijing, China
- Jianxin Li
- Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, 100191, China; School of Computer Science and Engineering, Beihang University, Beijing, 100191, China
- Zhongwu Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
52
Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. [PMID: 36455333] [DOI: 10.1016/j.media.2022.102691]
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing, concerned with analyzing digitized cytology images via computer-aided technologies for cancer screening. Recently, a growing number of deep learning (DL) approaches have made significant achievements in medical image analysis, driving a surge of publications in cytological studies. In this article, we survey more than 120 publications on DL-based cytology image analysis to investigate advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. Then, we systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nuclei or cell detection and segmentation. Finally, we discuss current challenges and potential research directions in computational cytology.
53
Wang Y, Tian S, Yu L, Wu W, Zhang D, Wang J, Cheng J. FSOU-Net: Feature supplement and optimization U-Net for 2D medical image segmentation. Technol Health Care 2023; 31:181-195. [PMID: 35754242] [DOI: 10.3233/thc-220174]
Abstract
BACKGROUND: The results of medical image segmentation can provide reliable evidence for clinical diagnosis and treatment. The previously proposed U-Net has been widely used in medical image segmentation. Its encoder extracts semantic features at different scales at different stages but applies no scale-specific processing to them. OBJECTIVE: To improve the feature expression ability and segmentation performance of U-Net, we propose a feature supplement and optimization U-Net (FSOU-Net). METHODS: First, we put forward the view that semantic features of different scales should be treated differently. Based on this view, we classify the semantic features automatically extracted by the encoder into two categories: shallow semantic features and deep semantic features. Then, we propose the shallow feature supplement module (SFSM), which obtains fine-grained semantic features through up-sampling to supplement shallow semantic information. Finally, we propose the deep feature optimization module (DFOM), which uses expansive (dilated) convolutions with different receptive fields to obtain multi-scale features and then fuses them to optimize deep semantic information. RESULTS: The proposed model was evaluated on three public medical image segmentation datasets, and the experimental results support the proposed idea. The model's segmentation performance exceeds that of advanced medical image segmentation models: compared with the baseline U-Net, the Dice index is 0.75% higher on the RITE dataset, 2.3% higher on the Kvasir-SEG dataset, and 0.24% higher on the GlaS dataset. CONCLUSIONS: The proposed method can greatly improve the feature representation ability and segmentation performance of the model.
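The Dice index used to compare these models is a standard overlap metric between a predicted and a reference mask. A minimal plain-Python sketch (generic metric code, not the paper's evaluation pipeline):

```python
def dice(pred, target):
    """Dice coefficient between two binary masks given as flat lists of 0/1.

    Dice = 2*|P intersect T| / (|P| + |T|); by convention here, two empty
    masks score 1.0 (perfect agreement on "nothing present").
    """
    inter = sum(p & t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * inter / total

pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
score = dice(pred, target)  # 2*2 / (3+3) = 0.666...
```

In practice the metric is computed per image over 2-D masks and averaged; the flat-list form above keeps the arithmetic visible.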
Affiliation(s)
- Yongtao Wang
- College of Software Engineering, Xinjiang University, Urumqi, Xinjiang, China; Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, Xinjiang, China
- Shengwei Tian
- College of Software Engineering, Xinjiang University, Urumqi, Xinjiang, China; Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, Xinjiang, China
- Long Yu
- College of Information Science and Engineering, Xinjiang University, Urumqi, Xinjiang, China
- Weidong Wu
- People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang, China; Xinjiang Key Laboratory of Dermatology Research (XJYS1707), Urumqi, Xinjiang, China
- Dezhi Zhang
- People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang, China
- Junwen Wang
- College of Software Engineering, Xinjiang University, Urumqi, Xinjiang, China; Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, Xinjiang, China
- Junlong Cheng
- College of Computer Science, Sichuan University, Chengdu, Sichuan, China
54
Gao Z, Hong B, Li Y, Zhang X, Wu J, Wang C, Zhang X, Gong T, Zheng Y, Meng D, Li C. A semi-supervised multi-task learning framework for cancer classification with weak annotation in whole-slide images. Med Image Anal 2023; 83:102652. [PMID: 36327654] [DOI: 10.1016/j.media.2022.102652]
Abstract
Cancer region detection (CRD) and subtyping are two fundamental tasks in digital pathology image analysis. The development of data-driven models for CRD and subtyping on whole-slide images (WSIs) would mitigate the burden on pathologists and improve their diagnostic accuracy. However, existing models face two major limitations. First, they typically require large-scale datasets with precise annotations, which contradicts the original intention of reducing labor. Second, for the subtyping task, non-cancerous regions within a WSI are treated the same as cancerous regions, which confuses a subtyping model during training. To tackle the latter limitation, previous research proposed performing CRD first to rule out non-cancerous regions and then training a subtyping model on the remaining cancerous patches. However, training the two tasks separately ignores their interaction and propagates errors from the CRD task to the subtyping task. To address these issues and concurrently improve performance on both tasks, we propose a semi-supervised multi-task learning (MTL) framework for cancer classification. Our framework consists of a backbone feature extractor, two task-specific classifiers, and a weight control mechanism. The backbone feature extractor is shared by the two task-specific classifiers, so that the interaction of the CRD and subtyping tasks can be captured. The weight control mechanism preserves the sequential relationship of the two tasks and guarantees error back-propagation from the subtyping task to the CRD task under the MTL framework. We train the overall framework in a semi-supervised setting, where the datasets involve only small quantities of annotations produced by our minimal point-based (min-point) annotation strategy.
Extensive experiments on four large datasets with different cancer types demonstrate the effectiveness of the proposed framework in both accuracy and generalization.
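The general shape of such a two-task objective, where one loss is phased in relative to the other so the sequential dependency is respected, can be caricatured in a few lines. The linear warm-up schedule below is an invented stand-in, not the paper's weight control mechanism:

```python
def multitask_loss(loss_crd, loss_subtype, epoch, warmup_epochs=5):
    """Combine a detection (CRD) loss and a subtyping loss with a simple
    schedule: the subtyping term is phased in only after the CRD task has
    had a warm-up, loosely mimicking a sequential-task weight control.
    The linear ramp and warmup_epochs value are illustrative assumptions."""
    lam = min(1.0, epoch / warmup_epochs)  # ramps 0 -> 1 over the warm-up
    return loss_crd + lam * loss_subtype

early = multitask_loss(1.0, 2.0, epoch=0)   # subtyping term ignored: 1.0
late  = multitask_loss(1.0, 2.0, epoch=10)  # fully weighted: 1.0 + 2.0 = 3.0
```

Because both task heads share one backbone, gradients from the (weighted) subtyping loss still flow into the shared features, which is what lets the two tasks interact during training.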
Affiliation(s)
- Zeyu Gao
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Bangyang Hong
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Yang Li
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Xianli Zhang
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Jialun Wu
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Chunbao Wang
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Department of Pathology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an 710061, China
- Xiangrong Zhang
- School of Artificial Intelligence, Xidian University, Xi'an 710071, China
- Tieliang Gong
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Yefeng Zheng
- Tencent Jarvis Lab, Shenzhen, Guangdong 518075, China
- Deyu Meng
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China
- Chen Li
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
55
Graham S, Vu QD, Jahanifar M, Raza SEA, Minhas F, Snead D, Rajpoot N. One model is all you need: Multi-task learning enables simultaneous histology image segmentation and classification. Med Image Anal 2023; 83:102685. [PMID: 36410209] [DOI: 10.1016/j.media.2022.102685]
Abstract
The recent surge in performance in image analysis of digitised pathology slides can largely be attributed to advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly as the number of tasks we wish the model to handle grows. Supervised deep learning models are also data hungry, relying on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our Cerberus model on a large corpus of data, consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million nuclei, 900 thousand glands and 2.1 million lumina. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.
Affiliation(s)
- Simon Graham
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom; Histofy Ltd, United Kingdom
- Quoc Dang Vu
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- Mostafa Jahanifar
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom
- David Snead
- Histofy Ltd, United Kingdom; Department of Pathology, University Hospitals Coventry & Warwickshire, United Kingdom
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, United Kingdom; Histofy Ltd, United Kingdom; Department of Pathology, University Hospitals Coventry & Warwickshire, United Kingdom
56
Barmpoutis P, Waddingham W, Yuan J, Ross C, Kayhanian H, Stathaki T, Alexander DC, Jansen M. A digital pathology workflow for the segmentation and classification of gastric glands: Study of gastric atrophy and intestinal metaplasia cases. PLoS One 2022; 17:e0275232. [PMID: 36584163] [PMCID: PMC9803139] [DOI: 10.1371/journal.pone.0275232]
Abstract
Gastric cancer is one of the most frequent causes of cancer-related deaths worldwide. Gastric atrophy (GA) and gastric intestinal metaplasia (IM) of the stomach mucosa have been found to increase the risk of gastric cancer and are considered precancerous lesions. The early detection of GA and IM may therefore have a valuable role in histopathological risk assessment. However, GA and IM are difficult to confirm endoscopically and, following the Sydney protocol, their diagnosis depends on the analysis of glandular morphology and on the identification of at least one well-defined goblet cell in a set of hematoxylin and eosin (H&E)-stained biopsy samples. To this end, the precise segmentation and classification of glands from histological images play an important role in the diagnostic confirmation of GA and IM. In this paper, we propose an end-to-end digital pathology workflow for gastric gland segmentation and classification. The proposed GAGL-VTNet first extracts both global and local features, combining multi-scale feature maps for gland segmentation, and then adopts a vision transformer that exploits the visual dependencies of the segmented glands for their classification. For the analysis of gastric tissues, the mucosa is segmented by an unsupervised model combining energy minimization with a U-Net model; features of the segmented glands and mucosa are then extracted and analyzed. To evaluate the efficiency of the proposed methodology we created the GAGL dataset, consisting of 85 WSIs collected from 20 patients. The results demonstrate significant differences in the extracted features between normal, GA and IM cases.
The proposed approach achieves object-level Dice scores of 0.908 and 0.967 for gland and mucosa segmentation respectively, and an F1 score of 0.94 for gland classification, showing great potential for the automated quantification and analysis of gastric biopsies.
Affiliation(s)
- Panagiotis Barmpoutis
- Department of Computer Science, Centre for Medical Image Computing, University College London, London, United Kingdom
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
- William Waddingham
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
- Jing Yuan
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Christopher Ross
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
- Hamzeh Kayhanian
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
- Tania Stathaki
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Daniel C. Alexander
- Department of Computer Science, Centre for Medical Image Computing, University College London, London, United Kingdom
- Marnix Jansen
- Department of Pathology, UCL Cancer Institute, University College London, London, United Kingdom
57
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5]
58
Nan Y, Tang P, Zhang G, Zeng C, Liu Z, Gao Z, Zhang H, Yang G. Unsupervised Tissue Segmentation via Deep Constrained Gaussian Network. IEEE Trans Med Imaging 2022; 41:3799-3811. [PMID: 35905069] [DOI: 10.1109/tmi.2022.3195123]
Abstract
Tissue segmentation is the mainstay of pathological examination, yet manual delineation is unduly burdensome. To assist this time-consuming and subjective manual step, researchers have devised methods to automatically segment structures in pathological images. Recently, machine learning and deep learning based methods have come to dominate tissue segmentation research. However, most such approaches are supervised and are developed using large numbers of training samples, for which pixel-wise annotations are expensive and sometimes impossible to obtain. This paper introduces a novel unsupervised learning paradigm that integrates an end-to-end deep mixture model with a constrained indicator to obtain accurate semantic tissue segmentation. The constraint centralises the components of the deep mixture model during optimisation, greatly reducing the redundant or empty class issues common in current unsupervised learning methods. Validated on both public and in-house datasets, the proposed deep constrained Gaussian network achieves significantly better performance (Wilcoxon signed-rank test) on tissue segmentation, with average Dice scores of 0.737 and 0.735 respectively and improved stability and robustness, compared with other existing unsupervised segmentation approaches. Furthermore, the proposed method performs comparably (p-value > 0.05) to the fully supervised U-Net.
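The mixture-model intuition behind such unsupervised segmentation (assign each observation to the Gaussian component that best explains it) can be shown with a toy 1-D EM fit in pure Python. The paper's deep constrained network is of course far richer than this sketch; the data, initialisation and iteration count below are invented for illustration.

```python
import math, random

def em_gmm_1d(xs, iters=50):
    """Fit a 2-component 1-D Gaussian mixture by expectation-maximisation."""
    mu = [min(xs), max(xs)]   # crude initialisation at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate mixture weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk + 1e-6
    return mu, var, pi

random.seed(0)
# Two synthetic "tissue intensity" clusters around 0.0 and 5.0
data = ([random.gauss(0.0, 0.5) for _ in range(200)]
        + [random.gauss(5.0, 0.5) for _ in range(200)])
mu, var, pi = em_gmm_1d(data)
```

Segmentation then amounts to labelling each value with its most responsible component; the paper's "constrained indicator" addresses exactly the failure mode where one component collapses to an empty class during this kind of optimisation.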
59
Gao E, Jiang H, Zhou Z, Yang C, Chen M, Zhu W, Shi F, Chen X, Zheng J, Bian Y, Xiang D. Automatic multi-tissue segmentation in pancreatic pathological images with selected multi-scale attention network. Comput Biol Med 2022; 151:106228. [PMID: 36306579] [DOI: 10.1016/j.compbiomed.2022.106228]
Abstract
The morphology of tissues in pathological images is routinely used by pathologists to assess the degree of malignancy of pancreatic ductal adenocarcinoma (PDAC). Automatic and accurate segmentation of tumor cells and their surrounding tissues is often a crucial step toward reliable morphological statistics, but it remains challenging because of the great variation in appearance and morphology. In this paper, a selected multi-scale attention network (SMANet) is proposed to segment tumor cells, blood vessels, nerves, islets and ducts in pancreatic pathological images. The selected multi-scale attention module enhances effective information, supplements useful information and suppresses redundant information at different scales from the encoder and decoder. It comprises a selection unit (SU) module, which can effectively filter features, and a multi-scale attention (MA) module, which enhances effective information through spatial and channel attention and combines features from different levels to supplement useful information. This helps the network learn information from different receptive fields and improves the segmentation of tumor cells, blood vessels and nerves. An original-feature fusion unit is also proposed to supplement original image information and reduce the under-segmentation of small tissues such as islets and ducts. The proposed method outperforms state-of-the-art deep learning algorithms on our PDAC pathological images and achieves competitive results on the GlaS challenge dataset, reaching an mDice of 0.769 and an mIoU of 0.665 on our PDAC dataset.
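The mIoU figure quoted above averages per-class intersection-over-union across the tissue classes. A minimal sketch, using one common convention (classes absent from both prediction and ground truth are skipped); this is generic metric code, not the paper's evaluation pipeline:

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes, for flat label lists.

    Classes absent from both prediction and ground truth are skipped so
    they do not distort the average (one common convention; others exist).
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 6-pixel "image" with three classes
pred   = [0, 0, 1, 1, 2, 2]
target = [0, 1, 1, 1, 2, 0]
miou = mean_iou(pred, target, num_classes=3)  # (1/3 + 2/3 + 1/2) / 3 = 0.5
```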
Affiliation(s)
- Enting Gao
- School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China
- Hui Jiang
- Department of Pathology, Changhai Hospital, The Navy Military Medical University, Shanghai, China
- Zhibang Zhou
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Changxing Yang
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Muyang Chen
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Weifang Zhu
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Fei Shi
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Xinjian Chen
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
- Jian Zheng
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Jiangsu 215163, China
- Yun Bian
- Department of Radiology, Changhai Hospital, The Navy Military Medical University, Shanghai, China
- Dehui Xiang
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China
60
Dolezal JM, Srisuwananukorn A, Karpeyev D, Ramesh S, Kochanny S, Cody B, Mansfield AS, Rakshit S, Bansal R, Bois MC, Bungum AO, Schulte JJ, Vokes EE, Garassino MC, Husain AN, Pearson AT. Uncertainty-informed deep learning models enable high-confidence predictions for digital histopathology. Nat Commun 2022; 13:6572. [PMID: 36323656] [PMCID: PMC9630455] [DOI: 10.1038/s41467-022-34025-x]
Abstract
A model's ability to express its own predictive uncertainty is an essential attribute for maintaining clinical user confidence as computational biomarkers are deployed into real-world medical settings. In the domain of cancer digital histopathology, we describe a clinically oriented approach to uncertainty quantification for whole-slide images, estimating uncertainty using dropout and calculating thresholds on training data to establish cutoffs for low- and high-confidence predictions. We train models to distinguish lung adenocarcinoma from squamous cell carcinoma and show that high-confidence predictions outperform predictions without uncertainty, both in cross-validation and in testing on two large external datasets spanning multiple institutions. Our testing strategy closely approximates real-world application, with predictions generated on unsupervised, unannotated slides using predetermined thresholds. Furthermore, we show that uncertainty thresholding remains reliable under domain shift, yielding accurate high-confidence adenocarcinoma vs. squamous cell carcinoma predictions for out-of-distribution, non-lung cancer cohorts.
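The general recipe here (repeat stochastic forward passes with dropout active, take the spread as uncertainty, and act only on low-uncertainty cases) can be sketched without any deep learning library. The simulated "models" and the threshold value below are placeholders, not the paper's networks or its training-data-derived cutoffs:

```python
import random, statistics

def mc_predict(stochastic_model, x, passes=30):
    """Monte Carlo estimate: mean prediction and its standard deviation
    over repeated stochastic forward passes (dropout left active)."""
    preds = [stochastic_model(x) for _ in range(passes)]
    return statistics.mean(preds), statistics.stdev(preds)

def confident_model(x):   # stand-in for a network that is sure about x
    return 0.9 + random.gauss(0, 0.01)

def uncertain_model(x):   # stand-in for a network that wavers about x
    return 0.5 + random.gauss(0, 0.2)

random.seed(1)
THRESHOLD = 0.05          # placeholder cutoff; the paper derives its cutoffs from training data
mean1, std1 = mc_predict(confident_model, None)
mean2, std2 = mc_predict(uncertain_model, None)
report1 = "high-confidence" if std1 < THRESHOLD else "abstain"
report2 = "high-confidence" if std2 < THRESHOLD else "abstain"
```

Only predictions whose spread falls below the cutoff are reported as high-confidence; the rest are flagged for review, which is what protects accuracy under domain shift.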
Affiliation(s)
- James M Dolezal
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
- Siddhi Ramesh
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
- Sara Kochanny
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
- Brittany Cody
- Department of Pathology, University of Chicago, Chicago, IL, USA
- Sagar Rakshit
- Division of Medical Oncology, Mayo Clinic, Rochester, MN, USA
- Radhika Bansal
- Division of Medical Oncology, Mayo Clinic, Rochester, MN, USA
- Melanie C Bois
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA
- Aaron O Bungum
- Divisions of Pulmonary Medicine and Critical Care, Mayo Clinic, Rochester, MN, USA
- Jefree J Schulte
- Department of Pathology and Laboratory Medicine, University of Wisconsin at Madison, Madison, WI, USA
- Everett E Vokes
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
- Marina Chiara Garassino
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
- Aliya N Husain
- Department of Pathology, University of Chicago, Chicago, IL, USA
- Alexander T Pearson
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
61
Ben Hamida A, Devanne M, Weber J, Truntzer C, Derangère V, Ghiringhelli F, Forestier G, Wemmert C. Weakly supervised learning using attention gates for colon cancer histopathological image segmentation. Artif Intell Med 2022; 133:102407. [PMID: 36328667] [DOI: 10.1016/j.artmed.2022.102407]
Abstract
Recently, Artificial Intelligence, and Deep Learning methods in particular, have revolutionized a wide range of domains and applications. Digital Pathology, in turn, plays a major role in the diagnosis and prognosis of tumors. However, the characteristics of Whole Slide Images, namely their gigapixel size, high resolution, and the shortage of richly labeled samples, have hindered the efficiency of classical Machine Learning methods, which also generalize poorly across different tasks and data contents. Given the success of Deep Learning in large-scale applications, we resort to such models for histopathological image segmentation tasks. First, we review and compare the classical UNet and Att-UNet models for colon cancer WSI segmentation in a sparsely annotated data scenario. Then, we introduce novel enhanced variants of the Att-UNet in which different schemes are proposed for the skip connections and the positions of the spatial attention gates in the network. Spatial attention gates assist the training process and keep the model from learning irrelevant features. Alternating the presence of such modules, as in our Alter-AttUNet model, adds robustness and ensures better segmentation results. To cope with the lack of richly annotated data in our AiCOLO colon cancer dataset, we suggest a multi-step training strategy that also addresses the sparse WSI annotations and class imbalance. All proposed methods outperform state-of-the-art approaches, but Alter-AttUNet offers the best compromise between accurate results and a light network, achieving 95.88% accuracy on our sparse AiCOLO colon cancer dataset. Finally, to evaluate and validate our proposed architectures we turn to publicly available WSI data: the NCT-CRC-HE-100K, CRC-5000 and Warwick colon cancer histopathological datasets.
Respective accuracies of 99.65%, 99.73% and 79.03% were reached. A comparison with state-of-the-art approaches is provided to survey the key solutions for histopathological image segmentation.
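The spatial attention gates described in this abstract can be illustrated with the usual additive gating scheme: project the skip-connection features and the decoder's gating signal into a shared space, squash them to a per-pixel coefficient in (0, 1), and re-weight the skip features. The following numpy toy is a minimal sketch of that generic mechanism, not the paper's Att-UNet code; all shapes and weight names are illustrative.

```python
import numpy as np

def attention_gate(x, g, W_x, W_g, psi):
    """Additive spatial attention gate, numpy sketch.

    x   : skip-connection features, shape (C, H, W)
    g   : gating features from the decoder, shape (Cg, H, W)
    W_x : (F, C) and W_g : (F, Cg) 1x1 projections to a shared dim F
    psi : (F,) vector collapsing the shared features to one channel
    Returns x re-weighted by a per-pixel attention map in (0, 1).
    """
    # 1x1 convolutions are per-pixel matrix products
    inter = np.einsum('fc,chw->fhw', W_x, x) + np.einsum('fc,chw->fhw', W_g, g)
    inter = np.maximum(inter, 0.0)                                     # ReLU
    alpha = 1.0 / (1.0 + np.exp(-np.einsum('f,fhw->hw', psi, inter)))  # sigmoid
    return x * alpha[None, :, :], alpha

rng = np.random.default_rng(0)
C, Cg, F, H, W = 4, 8, 6, 5, 5
x = rng.standard_normal((C, H, W))
g = rng.standard_normal((Cg, H, W))
out, alpha = attention_gate(x, g, rng.standard_normal((F, C)),
                            rng.standard_normal((F, Cg)), rng.standard_normal(F))
```

Because the gate multiplies the skip features element-wise, irrelevant regions are attenuated before concatenation in the decoder, which is the behavior the abstract credits for avoiding irrelevant feature learning.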
Affiliation(s)
- M Devanne
- IRIMAS, University of Haute-Alsace, France
- J Weber
- IRIMAS, University of Haute-Alsace, France
- C Truntzer
- Platform of Transform in Biological Oncology, Dijon, France
- V Derangère
- Platform of Transform in Biological Oncology, Dijon, France
- F Ghiringhelli
- Platform of Transform in Biological Oncology, Dijon, France
- C Wemmert
- ICube, University of Strasbourg, France
|
62
|
Zhou S, Xu X, Bai J, Bragin M. Combining multi-view ensemble and surrogate lagrangian relaxation for real-time 3D biomedical image segmentation on the edge. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
63
|
Dabass M, Vashisth S, Vig R. MTU: A multi-tasking U-net with hybrid convolutional learning and attention modules for cancer classification and gland Segmentation in Colon Histopathological Images. Comput Biol Med 2022; 150:106095. [PMID: 36179516 DOI: 10.1016/j.compbiomed.2022.106095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 08/31/2022] [Accepted: 09/10/2022] [Indexed: 11/17/2022]
Abstract
A clinically comparable multi-tasking computerized deep U-Net-based model is demonstrated in this paper. It is intended to offer clinical gland morphometric information and cancer grade classification as referential opinions for pathologists in order to abate human error. It embraces enhanced feature-learning capability that aids the extraction of potent multi-scale features; efficacious semantic-gap recovery during feature concatenation; and successful interception of resolution-degradation and vanishing-gradient problems while performing moderate computations. The model integrates three novel structural components into the traditional U-Net architecture: Hybrid Convolutional Learning Units in the encoder and decoder, Attention Learning Units in the skip connections, and a Multi-Scalar Dilated Transitional Unit as the transitional layer. These units combine multi-level convolutional learning through conventional, atrous, residual, depth-wise, and point-wise convolutions, further incorporated with target-specific attention learning and an enlarged effective receptive field. Pre-processing techniques such as patch sampling, augmentation (color and morphological), and stain normalization are employed to improve generalizability. To build network invariance to digital variability, exhaustive experiments are conducted using three public datasets (the Colorectal Adenocarcinoma Gland (CRAG), Gland Segmentation (GlaS) challenge, and Lung Colon-25000 (LC-25K) datasets), and robustness is then verified on an in-house private Hospital Colon (HosC) dataset. For cancer classification, the proposed model achieved Accuracy (CRAG (95%), GlaS (97.5%), LC-25K (99.97%), HosC (99.45%)), Precision (CRAG (0.9678), GlaS (0.9768), LC-25K (1), HosC (1)), F1-score (CRAG (0.968), GlaS (0.977), LC-25K (0.9997), HosC (0.9965)), and Recall (CRAG (0.9677), GlaS (0.9767), LC-25K (0.9994), HosC (0.9931)).
For gland detection and segmentation, the proposed model achieved competitive results in F1-score (CRAG (0.924), GlaS (Test A (0.949), Test B (0.918)), LC-25K (0.916), HosC (0.959)); Object-Dice Index (CRAG (0.959), GlaS (Test A (0.956), Test B (0.909)), LC-25K (0.929), HosC (0.922)); and Object-Hausdorff Distance (CRAG (90.47), GlaS (Test A (23.17), Test B (71.53)), LC-25K (96.28), HosC (85.45)). In addition, activation mappings testing the interpretability of the classification decision-making process are reported using Local Interpretable Model-Agnostic Explanations, Occlusion Sensitivity, and Gradient-Weighted Class Activation Mappings. This provides further evidence of the model's ability to learn, without any prerequisite annotations, the patterns that pathologists consider relevant. These activation-mapping visualizations were evaluated by proficient pathologists, who assigned them a class-path validation score of (CRAG (9.31), GlaS (9.25), LC-25K (9.05), HosC (9.85)). Furthermore, a seg-path validation score of (GlaS (Test A (9.40), Test B (9.25)), CRAG (9.27), LC-25K (9.01), HosC (9.19)) given by multiple pathologists is included for the final segmented outcomes to substantiate their clinical relevance and suitability for facilitation at the clinical level. The proposed model will aid pathologists in formulating an accurate diagnosis by providing a referential opinion during the morphology assessment of histopathology images, reducing unintentional human error in cancer diagnosis and consequently enhancing patient survival rates.
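The depth-wise and point-wise convolutions this abstract mentions factor a standard convolution into a per-channel spatial filter followed by a 1x1 channel-mixing step, cutting parameters and compute. The following numpy sketch shows the generic factorization only; it is not the MTU model's code, and the naive loops are for clarity, not speed.

```python
import numpy as np

def depthwise_separable_conv(x, depth_k, point_w):
    """x: (C, H, W); depth_k: (C, k, k), one spatial kernel per channel;
    point_w: (Cout, C) 1x1 channel mixing. 'Same' padding, stride 1."""
    C, H, W = x.shape
    k = depth_k.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    dw = np.zeros_like(x)
    for c in range(C):                 # depth-wise: each channel filtered alone
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * depth_k[c])
    return np.einsum('oc,chw->ohw', point_w, dw)   # point-wise 1x1 mixing
```

A full k×k convolution with C input and Cout output channels needs C·Cout·k² weights; the factored version needs only C·k² + C·Cout, which is why such units keep the "moderate computations" the abstract claims.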
Affiliation(s)
- Manju Dabass
- EECE Department, The NorthCap University, Gurugram, 122017, India
- Sharda Vashisth
- EECE Department, The NorthCap University, Gurugram, 122017, India
- Rekha Vig
- EECE Department, The NorthCap University, Gurugram, 122017, India
|
64
|
Liang P, Zhang Y, Ding Y, Chen J, Madukoma CS, Weninger T, Shrout JD, Chen DZ. H-EMD: A Hierarchical Earth Mover's Distance Method for Instance Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2582-2597. [PMID: 35446762 DOI: 10.1109/tmi.2022.3169449] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Deep learning (DL) based semantic segmentation methods have achieved excellent performance in biomedical image segmentation, producing high quality probability maps to allow extraction of rich instance information to facilitate good instance segmentation. While numerous efforts were put into developing new DL semantic segmentation models, less attention was paid to a key issue of how to effectively explore their probability maps to attain the best possible instance segmentation. We observe that probability maps by DL semantic segmentation models can be used to generate many possible instance candidates, and accurate instance segmentation can be achieved by selecting from them a set of "optimized" candidates as output instances. Further, the generated instance candidates form a well-behaved hierarchical structure (a forest), which allows selecting instances in an optimized manner. Hence, we propose a novel framework, called hierarchical earth mover's distance (H-EMD), for instance segmentation in biomedical 2D+time videos and 3D images, which judiciously incorporates consistent instance selection with semantic-segmentation-generated probability maps. H-EMD contains two main stages: (1) instance candidate generation: capturing instance-structured information in probability maps by generating many instance candidates in a forest structure; (2) instance candidate selection: selecting instances from the candidate set for final instance segmentation. We formulate a key instance selection problem on the instance candidate forest as an optimization problem based on the earth mover's distance (EMD), and solve it by integer linear programming. Extensive experiments on eight biomedical video or 3D datasets demonstrate that H-EMD consistently boosts DL semantic segmentation models and is highly competitive with state-of-the-art methods.
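The candidate-forest idea in this abstract can be made concrete: thresholding a semantic probability map at several levels yields connected components, and because a component at a higher threshold always lies inside exactly one component at a lower threshold, set inclusion organizes the candidates into a forest. The sketch below shows only this candidate-generation step in plain numpy; the EMD-based selection by integer linear programming that H-EMD performs on top of it is not reproduced here, and the helper names are mine.

```python
import numpy as np
from collections import deque

def components(mask):
    """4-connected components of a boolean mask, as frozensets of pixels."""
    H, W = mask.shape
    seen = np.zeros((H, W), dtype=bool)
    comps = []
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                queue, comp = deque([(i, j)]), set()
                seen[i, j] = True
                while queue:                      # breadth-first flood fill
                    a, b = queue.popleft()
                    comp.add((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < H and 0 <= nb < W and mask[na, nb] and not seen[na, nb]:
                            seen[na, nb] = True
                            queue.append((na, nb))
                comps.append(frozenset(comp))
    return comps

def candidate_forest(prob, thresholds):
    """Instance candidates at several probability thresholds; nesting by
    set inclusion gives the hierarchical (forest) structure."""
    return {t: components(prob >= t) for t in thresholds}
```

Selecting one candidate per root-to-leaf path of this forest is exactly the combinatorial choice the paper formulates as an optimization problem.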
|
65
|
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675 PMCID: PMC9677480 DOI: 10.1093/bib/bbac367] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 07/27/2022] [Accepted: 08/05/2022] [Indexed: 12/14/2022] Open
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology research allows using computational technologies to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates using the most popular image data, hematoxylin-eosin stained tissue slide images, to find a strategic solution for the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology has in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences
- Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences
|
66
|
Wang X, Yang S, Zhang J, Wang M, Zhang J, Yang W, Huang J, Han X. Transformer-based unsupervised contrastive learning for histopathological image classification. Med Image Anal 2022; 81:102559. [DOI: 10.1016/j.media.2022.102559] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Revised: 06/24/2022] [Accepted: 07/25/2022] [Indexed: 10/16/2022]
|
67
|
A Novel Method Based on GAN Using a Segmentation Module for Oligodendroglioma Pathological Image Generation. SENSORS 2022; 22:s22103960. [PMID: 35632368 PMCID: PMC9144585 DOI: 10.3390/s22103960] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Revised: 04/22/2022] [Accepted: 05/20/2022] [Indexed: 02/05/2023]
Abstract
Digital pathology analysis using deep learning has been the subject of several studies. As with other medical data, pathological data are not easily obtained. Because deep learning-based image analysis requires large amounts of data, augmentation techniques are used to increase the size of pathological datasets. This study proposes a novel method for synthesizing brain tumor pathology data using a generative model. For image synthesis, we used embedding features extracted from a segmentation module in a general generative model. We also introduce a simple solution for training a segmentation model in an environment in which the mask labels of the training dataset are not supplied. In our experiments, the proposed method showed only modest gains on quantitative metrics, but it improved the confusion rate in a study with more than 70 subjects as well as the quality of the visual output.
|
68
|
Pocevičiūtė M, Eilertsen G, Jarkman S, Lundström C. Generalisation effects of predictive uncertainty estimation in deep learning for digital pathology. Sci Rep 2022; 12:8329. [PMID: 35585087 PMCID: PMC9117245 DOI: 10.1038/s41598-022-11826-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Accepted: 04/27/2022] [Indexed: 01/20/2023] Open
Abstract
Deep learning (DL) has shown great potential in digital pathology applications. The robustness of a diagnostic DL-based solution is essential for safe clinical deployment. In this work we evaluate whether adding uncertainty estimates to DL predictions in digital pathology could increase their value for clinical applications, either by boosting general predictive performance or by detecting mispredictions. We compare the effectiveness of model-integrated methods (MC dropout and Deep ensembles) with a model-agnostic approach (Test time augmentation, TTA). Moreover, four uncertainty metrics are compared. Our experiments focus on two domain-shift scenarios: a shift to a different medical center and a shift to an underrepresented subtype of cancer. Our results show that uncertainty estimates increase reliability by reducing a model's sensitivity to classification-threshold selection as well as by detecting between 70 and 90% of the mispredictions made by the model. Overall, the deep ensembles method achieved the best performance, closely followed by TTA.
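Of the three approaches compared in this abstract, test-time augmentation is the simplest to sketch because it is model-agnostic: run the same model on several transformed copies of the input and treat the spread of the predictions as an uncertainty signal. The numpy toy below illustrates that generic recipe with the 8 dihedral transforms of a square image; it is not the paper's evaluation code, and `predict_fn` stands in for any trained classifier.

```python
import numpy as np

def tta_predict(predict_fn, image):
    """Model-agnostic test-time augmentation.

    predict_fn : maps a 2D image to a vector of class probabilities
    image      : square 2D array
    Returns the mean prediction over the 8 flip/rotation views and the
    per-class standard deviation across views as an uncertainty estimate.
    """
    views = []
    for k in range(4):
        r = np.rot90(image, k)
        views += [r, np.fliplr(r)]          # 4 rotations x optional mirror
    preds = np.stack([predict_fn(v) for v in views])   # (8, n_classes)
    return preds.mean(axis=0), preds.std(axis=0)
```

MC dropout and deep ensembles follow the same aggregate-and-measure-spread pattern, but average over stochastic forward passes or independently trained models instead of input transforms.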
Affiliation(s)
- Milda Pocevičiūtė
- Department of Science and Technology, Linköping University, Linköping, Sweden; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Gabriel Eilertsen
- Department of Science and Technology, Linköping University, Linköping, Sweden; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Sofia Jarkman
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden; Department of Clinical Pathology, and Department of Biomedical and Clinical Sciences, Linköping University, Linköping, Sweden
- Claes Lundström
- Department of Science and Technology, Linköping University, Linköping, Sweden; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden; Sectra AB, Linköping, Sweden
|
69
|
Wen Y, Chen L, Deng Y, Zhang Z, Zhou C. Pixel-wise triplet learning for enhancing boundary discrimination in medical image segmentation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108424] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
70
|
ECAU-Net: Efficient channel attention U-Net for fetal ultrasound cerebellum segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103528] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
71
|
Sharma P, Balabantaray BK, Bora K, Mallik S, Kasugai K, Zhao Z. An Ensemble-Based Deep Convolutional Neural Network for Computer-Aided Polyps Identification From Colonoscopy. Front Genet 2022; 13:844391. [PMID: 35559018 PMCID: PMC9086187 DOI: 10.3389/fgene.2022.844391] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Accepted: 03/14/2022] [Indexed: 01/16/2023] Open
Abstract
Colorectal cancer (CRC) is the third leading cause of cancer death globally. Early detection and removal of precancerous polyps can significantly reduce the chance of CRC patient death. Currently, the polyp detection rate mainly depends on the skill and expertise of gastroenterologists, and over time unidentified polyps can develop into cancer. Machine learning has recently emerged as a powerful method for assisting clinical diagnosis. Several classification models have been proposed to identify polyps, but their performance has not yet matched that of an expert endoscopist. Here, we propose a multiple-classifier consultation strategy to create an effective and powerful classifier for polyp identification. This strategy benefits from recent findings that different classification models can better learn and extract various kinds of information within the image, so our ensemble classifier can reach a more consequential decision than each individual classifier. The combined information inherits ResNet's advantage of residual connections, while the depth-wise separable convolution layers of the Xception model help extract objects even when covered by occlusions. We applied our strategy to still frames extracted from a colonoscopy video; it outperformed other state-of-the-art techniques with a performance measure greater than 95% on each of the evaluation metrics. Our method will help researchers and gastroenterologists develop clinically applicable, computationally guided tools for colonoscopy screening, and it may be extended to other clinical diagnoses that rely on images.
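The "consultation" among classifiers this abstract describes is commonly realized as soft voting: average the per-model class probabilities (optionally weighted) and take the argmax. The numpy sketch below shows that generic fusion step under the assumption that each base model exposes raw logits; it is not the authors' ensemble code.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_vote(logit_list, weights=None):
    """Fuse several classifiers by averaging their softmax probabilities.

    logit_list : one logit vector per base model, each of shape (n_classes,)
    weights    : optional per-model reliability weights
    Returns the fused class index and the fused probability vector.
    """
    probs = np.stack([softmax(l) for l in logit_list])   # (n_models, n_classes)
    w = np.ones(len(logit_list)) if weights is None else np.asarray(weights, float)
    fused = np.einsum('m,mc->c', w / w.sum(), probs)
    return int(fused.argmax()), fused
```

Averaging probabilities rather than hard labels lets a very confident model outvote several lukewarm ones, which is usually the desired behavior when the base models capture complementary image cues.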
Affiliation(s)
- Pallabi Sharma
- Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Bunil Kumar Balabantaray
- Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Kangkana Bora
- Computer Science and Information Technology, Cotton University, Guwahati, India
- Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Kunio Kasugai
- Department of Gastroenterology, Aichi Medical University, Nagakute, Japan
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Human Genetics Center, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX, United States
- MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, United States
|
72
|
Zhao Y, Fu C, Xu S, Cao L, Ma HF. LFANet: Lightweight feature attention network for abnormal cell segmentation in cervical cytology images. Comput Biol Med 2022; 145:105500. [PMID: 35421793 DOI: 10.1016/j.compbiomed.2022.105500] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 03/16/2022] [Accepted: 04/04/2022] [Indexed: 11/19/2022]
Abstract
With the wide application of computer-aided diagnosis techniques in cervical cancer screening, cell segmentation has become a necessary step in determining the progression of cervical cancer. Traditional manual methods alleviate the dilemma caused by the shortage of medical resources to a certain extent, but their low segmentation accuracy for abnormal cells and complex workflow preclude fully automatic diagnosis. In addition, deep learning methods can automatically extract image features with high accuracy and small error, making artificial intelligence increasingly popular in computer-aided diagnosis; however, many such models are unsuitable for clinical practice because their complexity introduces redundant network parameters. To address these problems, this study proposes a lightweight feature attention network (LFANet) that extracts differentially abundant feature information from objects at various resolutions. The model can accurately segment both the nucleus and cytoplasm regions in cervical images. Specifically, a lightweight feature extraction module is designed as an encoder to extract abundant features of input images, combining depth-wise separable convolution, residual connections and an attention mechanism. Besides, a feature-layer attention module is added to precisely recover pixel locations; it employs the global high-level information as a guide for the low-level features, capturing dependencies among channel features. Finally, our LFANet model is evaluated on four independent datasets. The experimental results demonstrate that, compared with other advanced methods, our proposed network achieves state-of-the-art performance with low computational complexity.
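The channel-dependency mechanism this abstract alludes to is typically built as a squeeze-and-excitation-style gate: pool each channel to a single statistic, pass the vector through a small bottleneck, and scale every channel by a sigmoid weight. The numpy sketch below shows only that generic pattern, not LFANet's actual module; shapes and the reduction ratio are illustrative.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style channel attention.

    x  : feature map of shape (C, H, W)
    w1 : (C//r, C) reduction weights of the bottleneck MLP
    w2 : (C, C//r) expansion weights
    Returns x with every channel scaled by a learned gate in (0, 1).
    """
    s = x.mean(axis=(1, 2))                  # squeeze: per-channel statistic
    h = np.maximum(w1 @ s, 0.0)              # bottleneck + ReLU
    g = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # excite: per-channel sigmoid gate
    return x * g[:, None, None]
```

Because the gate depends only on globally pooled statistics, it adds a negligible number of parameters, which is consistent with the "lightweight" design goal stated above.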
Affiliation(s)
- Yanli Zhao
- School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China; School of Electrical Information Engineering, Ningxia Institute of Technology, Shizuishan, 753000, China
- Chong Fu
- School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110819, China; Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, China
- Sen Xu
- General Hospital of Northern Theatre Command, Shenyang, 110016, China
- Lin Cao
- School of Information and Communication Engineering, Beijing Information Science and Technology University, Beijing, 100101, China
- Hong-Feng Ma
- Dopamine Group Ltd., Auckland, 1542, New Zealand
|
73
|
Zhang Z, Tian C, Bai HX, Jiao Z, Tian X. Discriminative Error Prediction Network for Semi-supervised Colon Gland Segmentation. Med Image Anal 2022; 79:102458. [DOI: 10.1016/j.media.2022.102458] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 04/10/2022] [Accepted: 04/11/2022] [Indexed: 10/18/2022]
|
74
|
Deep Learning on Histopathological Images for Colorectal Cancer Diagnosis: A Systematic Review. Diagnostics (Basel) 2022; 12:diagnostics12040837. [PMID: 35453885 PMCID: PMC9028395 DOI: 10.3390/diagnostics12040837] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 03/22/2022] [Accepted: 03/25/2022] [Indexed: 02/04/2023] Open
Abstract
Colorectal cancer (CRC) is the second most common cancer in women and the third most common in men, with an increasing incidence. Pathology diagnosis complemented with prognostic and predictive biomarker information is the first step toward personalized treatment. The increased diagnostic load in the pathology laboratory, combined with the reported intra- and inter-observer variability in the assessment of biomarkers, has prompted the quest for reliable machine-based methods that can be incorporated into routine practice. Recently, Artificial Intelligence (AI) has made significant progress in the medical field, showing potential for clinical applications. Herein, we aim to systematically review the current research on AI in CRC image analysis. In histopathology, algorithms based on Deep Learning (DL) have the potential to assist in diagnosis, predict clinically relevant molecular phenotypes and microsatellite instability, identify histological features related to prognosis and correlated with metastasis, and assess the specific components of the tumor microenvironment.
|
75
|
Kumar I, Bhatt C, Vimal V, Qamar S. Automated white corpuscles nucleus segmentation using deep neural network from microscopic blood smear. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-189773] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
Segmenting white corpuscle (white blood cell) nuclei in microscopic blood images is a major step in diagnosing blood-related diseases. An accurate and fast segmentation system helps hematologists identify diseases and make appropriate decisions for better treatment. Therefore, a fully automated white corpuscle nucleus segmentation model using a deep convolutional neural network is proposed in the present study. The proposed model is trained with the binary cross-entropy loss and the Adam optimizer, which maintains a per-weight adaptive learning rate. To validate the potential and capability of the proposed solution, the ALL-IDB2 dataset is used. The complete set of images is partitioned into training and testing sets, and extensive experiments have been performed. The best-performing model is selected, with training and testing accuracies of 98.69% and 99.02%, respectively. The model is further evaluated using sensitivity, specificity, Jaccard index, Dice coefficient, accuracy and structural similarity index. Its capability is compared with the region-based contour and fuzzy-based level-set methods on the same set of images, leading to the conclusion that the proposed model is more accurate and effective for clinical purposes.
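The binary cross-entropy loss named in this abstract is easy to state explicitly for a binary nucleus mask: average, over pixels, the negative log-likelihood of the true label under the predicted foreground probability. A minimal numpy sketch (not the authors' training code):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean pixel-wise binary cross-entropy.

    y_true : binary ground-truth mask (any shape)
    y_pred : predicted foreground probabilities, same shape
    eps    : clip bound that avoids log(0)
    """
    p = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p)))
```

A completely uninformative predictor that outputs 0.5 everywhere incurs a loss of ln 2 ≈ 0.693, a useful sanity-check baseline when training a segmentation network with this loss.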
Affiliation(s)
- Indrajeet Kumar
- Graphic Era Hill University, CSE Department, Dehradun, India
- Vrince Vimal
- Graphic Era Hill University, CSE Department, Dehradun, India
- Shamimul Qamar
- College of Science and Arts Dhahran Al Janub, King Khalid University, Abha, Saudi Arabia
|
76
|
Xie Y, Zhang J, Liao Z, Verjans J, Shen C, Xia Y. Intra- and Inter-Pair Consistency for Semi-Supervised Gland Segmentation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:894-905. [PMID: 34951847 DOI: 10.1109/tip.2021.3136716] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Accurate gland segmentation in histology tissue images is a critical but challenging task. Although deep models have demonstrated superior performance in medical image segmentation, they commonly require a large amount of annotated data, which are hard to obtain due to the extensive labor and expertise required. In this paper, we propose an intra- and inter-pair consistency-based semi-supervised (I2CS) model that can be trained on both labeled and unlabeled histology images for gland segmentation. Considering that each image contains glands and hence different images could potentially share consistent semantics in the feature space, we introduce a novel intra- and inter-pair consistency module to explore such consistency for learning with unlabeled data. It first characterizes the pixel-level relation between a pair of images in the feature space to create an attention map that highlights the regions with the same semantics but on different images. Then, it imposes a consistency constraint on the attention maps obtained from multiple image pairs, and thus filters low-confidence attention regions to generate refined attention maps that are then merged with the original features to improve their representation ability. In addition, we design an object-level Obj-Dice loss to address the issues caused by touching glands. We evaluated our model against several recent gland segmentation methods and three typical semi-supervised methods on the GlaS and CRAG datasets. Our results not only demonstrate the effectiveness of the proposed consistency module and Obj-Dice loss, but also indicate that the proposed I2CS model achieves state-of-the-art gland segmentation performance on both benchmarks.
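The pixel-level relation between an image pair that this abstract describes is usually computed as a cross-image attention map: for every pixel of one feature map, a softmax-normalized similarity to every pixel of the other. The numpy sketch below shows that generic construction only; it is not the I2CS implementation, and the dot-product similarity with 1/sqrt(C) scaling is an assumption on my part.

```python
import numpy as np

def cross_image_attention(fa, fb):
    """Pixel-to-pixel attention between two feature maps.

    fa, fb : (C, H, W) features of two images from a shared encoder.
    Returns att of shape (H, W, H, W): att[i, j] is a distribution over
    the pixels of image B that are semantically similar to pixel (i, j)
    of image A.
    """
    C, H, W = fa.shape
    A = fa.reshape(C, -1).T            # (H*W, C) pixels of image A
    B = fb.reshape(C, -1)              # (C, H*W) pixels of image B
    sim = (A @ B) / np.sqrt(C)         # scaled pairwise similarity
    sim -= sim.max(axis=1, keepdims=True)
    att = np.exp(sim)
    att /= att.sum(axis=1, keepdims=True)   # softmax over image-B pixels
    return att.reshape(H, W, H, W)
```

Comparing such maps across several pairings of the same image is what makes a consistency constraint possible: regions that attend stably across pairs are trusted, the rest are filtered.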
|
77
|
Wang H, Xian M, Vakanski A. TA-Net: Topology-Aware Network for Gland Segmentation. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION 2022; 2022:3241-3249. [PMID: 35509894 PMCID: PMC9063467 DOI: 10.1109/wacv51458.2022.00330] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Gland segmentation is a critical step to quantitatively assess the morphology of glands in histopathology image analysis. However, it is challenging to separate densely clustered glands accurately. Existing deep learning-based approaches attempted to use contour-based techniques to alleviate this issue but only achieved limited success. To address this challenge, we propose a novel topology-aware network (TA-Net) to accurately separate densely clustered and severely deformed glands. The proposed TA-Net has a multitask learning architecture and enhances the generalization of gland segmentation by learning shared representation from two tasks: instance segmentation and gland topology estimation. The proposed topology loss computes gland topology using gland skeletons and markers. It drives the network to generate segmentation results that comply with the true gland topology. We validate the proposed approach on the GlaS and CRAG datasets using three quantitative metrics, F1-score, object-level Dice coefficient, and object-level Hausdorff distance. Extensive experiments demonstrate that TA-Net achieves state-of-the-art performance on the two datasets. TA-Net outperforms other approaches in the presence of densely clustered glands.
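One of the three metrics this abstract reports, the object-level Hausdorff distance, builds on the plain symmetric Hausdorff distance between two boundary point sets: the largest distance from any point of one set to its nearest point in the other. A minimal numpy sketch of the underlying distance (not the paper's evaluation code, which additionally matches predicted glands to ground-truth glands before averaging):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (n, 2) and b (m, 2)."""
    # pairwise Euclidean distances between every point of a and every point of b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # farthest nearest-neighbor distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Because it is driven by the single worst-matched boundary point, this metric penalizes exactly the failure mode TA-Net targets: two touching glands merged into one blob.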
|
78
|
Hosseinzadeh Kassani S, Hosseinzadeh Kassani P, Wesolowski MJ, Schneider KA, Deters R. Deep transfer learning based model for colorectal cancer histopathology segmentation: A comparative study of deep pre-trained models. Int J Med Inform 2021; 159:104669. [PMID: 34979435 DOI: 10.1016/j.ijmedinf.2021.104669] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2021] [Revised: 11/28/2021] [Accepted: 12/16/2021] [Indexed: 11/19/2022]
Abstract
Colorectal cancer is one of the leading causes of cancer-related death worldwide. Early detection of suspicious tissues can significantly improve the survival rate. In this study, the performance of a wide variety of deep learning-based architectures is evaluated for automatic tumor segmentation of colorectal tissue samples. The proposed approach highlights the utility of incorporating convolutional neural network modules and transfer learning in the encoder part of a segmentation architecture for histopathology image analysis. A comparative and extensive experiment was conducted on a challenging histopathological segmentation task to demonstrate the effectiveness of incorporating deep modules in the segmentation encoder-decoder network, as well as the contributions of its components. Experimental results demonstrate that the shared DenseNet and LinkNet architecture is promising, achieving state-of-the-art performance and outperforming other methods with a Dice similarity index of 82.74%±1.77, accuracy of 87.07%±1.56, and F1-score of 82.79%±1.79.
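The Dice similarity index this abstract reports is the standard overlap measure for binary segmentation masks: twice the intersection divided by the total mass of both masks. A minimal numpy sketch of the metric itself (not the authors' evaluation pipeline; the smoothing term `eps` is a common convention to avoid division by zero on empty masks):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks of the same shape."""
    inter = float(np.sum(pred * target))
    return (2.0 * inter + eps) / (float(pred.sum()) + float(target.sum()) + eps)
```

Dice is 1.0 for a perfect match and 0.0 for disjoint masks; unlike plain pixel accuracy, it is insensitive to the large true-negative background that dominates histopathology tiles.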
Affiliation(s)
- Ralph Deters
- Department of Computer Science, University of Saskatchewan, Canada
79
SAFRON: Stitching Across the Frontier Network for Generating Colorectal Cancer Histology Images. Med Image Anal 2021; 77:102337. [PMID: 35016078] [DOI: 10.1016/j.media.2021.102337]
Abstract
Automated synthesis of histology images has several potential applications, including the development of data-efficient deep learning algorithms. In the field of computational pathology, where histology images are large in size and visual context is crucial, synthesis of large high-resolution images via generative modeling is an important but challenging task due to memory and computational constraints. To address this challenge, we propose a novel framework called SAFRON (Stitching Across the FROntier Network) to construct realistic, large high-resolution tissue images conditioned on input tissue component masks. The main novelty in the framework is the integration of stitching into its loss function, which enables generation of images of arbitrarily large sizes after training on relatively small image patches while preserving morphological features with minimal boundary artifacts. We have used the proposed framework for generating, to the best of our knowledge, the largest-sized synthetic histology images to date (up to 11K×8K pixels). Compared to existing approaches, our framework is efficient in terms of the memory required for training and the computations needed for synthesizing large high-resolution images. The quality of generated images was assessed quantitatively using the Fréchet Inception Distance as well as by 7 trained pathologists, who assigned a realism score to a set of images generated by SAFRON. The average realism score across all pathologists for synthetic images was as high as that of real images. We also show that training with additional synthetic data generated by SAFRON can significantly boost the prediction performance of gland segmentation and cancer detection algorithms in colorectal cancer histology images.
80
Cianci P, Restini E. Artificial intelligence in colorectal cancer management. Artif Intell Cancer 2021; 2:79-89. [DOI: 10.35713/aic.v2.i6.79]
Abstract
Artificial intelligence (AI) is a new branch of computer science involving many disciplines and technologies. Since its application in the medical field, it has been constantly studied and developed. AI includes machine learning and neural networks to create new technologies or to improve existing ones. Various AI supporting systems are available for a personalized and novel strategy for the management of colorectal cancer (CRC). This mini-review aims to summarize the progress of research and possible clinical applications of AI in the investigation, early diagnosis, treatment, and management of CRC, to offer elements of knowledge as a starting point for new studies and future applications.
Affiliation(s)
- Pasquale Cianci
- Department of Surgery and Traumatology, ASL BAT, Lorenzo Bonomo Hospital, Andria 76123, Puglia, Italy
- Enrico Restini
- Department of Surgery and Traumatology, ASL BAT, Lorenzo Bonomo Hospital, Andria 76123, Puglia, Italy
81
Zhang J, Zhang Y, Qiu H, Xie W, Yao Z, Yuan H, Jia Q, Wang T, Shi Y, Huang M, Zhuang J, Xu X. Pyramid-Net: Intra-layer Pyramid-Scale Feature Aggregation Network for Retinal Vessel Segmentation. Front Med (Lausanne) 2021; 8:761050. [PMID: 34950679] [PMCID: PMC8688400] [DOI: 10.3389/fmed.2021.761050]
Abstract
Retinal vessel segmentation plays an important role in the diagnosis of eye-related diseases and biomarker discovery. Existing works perform multi-scale feature aggregation in an inter-layer manner, namely inter-layer feature aggregation. However, such an approach only fuses features at either a lower scale or a higher scale, which may result in limited segmentation performance, especially on thin vessels. This observation motivates us to fuse multi-scale features within each layer (intra-layer feature aggregation) to mitigate the problem. Therefore, in this paper, we propose Pyramid-Net for accurate retinal vessel segmentation, which features intra-layer pyramid-scale aggregation blocks (IPABs). At each layer, IPABs generate two associated branches at a higher scale and a lower scale, respectively, which operate together with the main branch at the current scale in a pyramid-scale manner. Three further enhancements, including pyramid input enhancement, deep pyramid supervision, and pyramid skip connections, are proposed to boost the performance. We have evaluated Pyramid-Net on three public retinal fundus photography datasets (DRIVE, STARE, and CHASE-DB1). The experimental results show that Pyramid-Net can effectively improve segmentation performance, especially on thin vessels, and outperforms the current state-of-the-art methods on all three datasets. In addition, our method is more efficient than existing methods, with a large reduction in computational cost. We have released the source code at https://github.com/JerRuy/Pyramid-Net.
Affiliation(s)
- Jiawei Zhang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China
- Yanchun Zhang
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, China
- College of Engineering and Science, Victoria University, Melbourne, VIC, Australia
- Hailong Qiu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Wen Xie
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Zeyang Yao
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Haiyun Yuan
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Qianjun Jia
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Tianchen Wang
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- Yiyu Shi
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
- Meiping Huang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Jian Zhuang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Xiaowei Xu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
82
Deep Learning Approaches to Colorectal Cancer Diagnosis: A Review. Appl Sci (Basel) 2021. [DOI: 10.3390/app112210982]
Abstract
Unprecedented breakthroughs in the development of graphical processing systems have led to great potential for deep learning (DL) algorithms in analyzing visual anatomy from high-resolution medical images. Recently, in digital pathology, the use of DL technologies has drawn a substantial amount of attention for use in the effective diagnosis of various cancer types, especially colorectal cancer (CRC), which is regarded as one of the dominant causes of cancer-related deaths worldwide. This review provides an in-depth perspective on recently published research articles on DL-based CRC diagnosis and prognosis. Overall, we provide a retrospective synopsis of simple image-processing-based and machine learning (ML)-based computer-aided diagnosis (CAD) systems, followed by a comprehensive appraisal of use cases with different types of state-of-the-art DL algorithms for detecting malignancies. We first list multiple standardized and publicly available CRC datasets from two imaging types: colonoscopy and histopathology. Secondly, we categorize the studies based on the different types of CRC detected (tumor tissue, microsatellite instability, and polyps), and we assess the data preprocessing steps and the adopted DL architectures before presenting the optimum diagnostic results. CRC diagnosis with DL algorithms is still in the preclinical phase, and therefore, we point out some open issues and provide some insights into the practicability and development of robust diagnostic systems in future health care and oncology.
83
Pal A, Xue Z, Desai K, Aina F Banjo A, Adepiti CA, Long LR, Schiffman M, Antani S. Deep multiple-instance learning for abnormal cell detection in cervical histopathology images. Comput Biol Med 2021; 138:104890. [PMID: 34601391] [PMCID: PMC11977668] [DOI: 10.1016/j.compbiomed.2021.104890]
Abstract
Cervical cancer is a disease of significant concern affecting women's health worldwide. Early detection and treatment at the precancerous stage can help reduce mortality. High-grade cervical abnormalities and precancer are confirmed using microscopic analysis of cervical histopathology. However, manual analysis of cervical biopsy slides is time-consuming, requires expert pathologists, and suffers from reader-variability errors. Prior work in the literature has suggested using automated image analysis algorithms for analyzing cervical histopathology images captured with whole-slide digital scanners (e.g., Aperio, Hamamatsu). However, whole-slide digital tissue scanners with good optical magnification and acceptable imaging quality are cost-prohibitive and difficult to acquire in low- and middle-resource regions. Hence, the development of low-cost imaging systems and automated image analysis algorithms is of critical importance. Motivated by this, we conduct an experimental study to assess the feasibility of developing a low-cost diagnostic system with an H&E-stained cervical tissue image analysis algorithm. In our imaging system, image acquisition is performed by a smartphone affixed to the top of a commonly available light microscope that magnifies the cervical tissues. The images are not captured at a constant optical magnification, and, unlike whole-slide scanners, our imaging system is unable to record the magnification. The images are megapixel images and are labeled based on the presence of abnormal cells. Our dataset contains a total of 1331 images (train: 846, validation: 116, test: 369). We formulate the classification task as a deep multiple-instance learning problem and quantitatively evaluate the classification performance of four different types of multiple-instance learning algorithms trained with five different architectures designed with varying instance sizes. Finally, we design a sparse attention-based multiple-instance learning framework that produces a maximum classification accuracy of 84.55% on the test set.
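Attention-based multiple-instance pooling of the kind underlying such frameworks can be sketched in a few lines of NumPy. In practice the parameters V and w are learned by backpropagation; the shapes and names here are illustrative, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling (Ilse et al. style sketch).

    H : (n_instances, d) instance embeddings for one bag (slide/image)
    V : (d_attn, d) and w : (d_attn,) are (normally learned) attention weights.
    Returns the bag embedding and the per-instance attention weights.
    """
    scores = w @ np.tanh(V @ H.T)        # (n_instances,) attention logits
    a = np.exp(scores - scores.max())
    a /= a.sum()                         # softmax over the bag's instances
    return a @ H, a                      # attention-weighted sum of instances

# toy bag: 5 instance embeddings of dimension 8
H = rng.normal(size=(5, 8))
V = rng.normal(size=(4, 8))
w = rng.normal(size=4)
bag, attn = attention_mil_pool(H, V, w)
```

The attention weights double as an instance-level explanation: high-weight patches are the ones driving the bag-level (image-level) decision.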
Affiliation(s)
- Anabik Pal
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Zhiyun Xue
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Kanan Desai
- National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- L Rodney Long
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Mark Schiffman
- National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Sameer Antani
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
84
Pati P, Jaume G, Foncubierta-Rodríguez A, Feroce F, Anniciello AM, Scognamiglio G, Brancati N, Fiche M, Dubruc E, Riccio D, Di Bonito M, De Pietro G, Botti G, Thiran JP, Frucci M, Goksel O, Gabrani M. Hierarchical graph representations in digital pathology. Med Image Anal 2021; 75:102264. [PMID: 34781160] [DOI: 10.1016/j.media.2021.102264]
Abstract
Cancer diagnosis, prognosis, and therapy response predictions from tissue specimens highly depend on the phenotype and topological distribution of the constituting histological entities. Thus, adequate tissue representations for encoding histological entities are imperative for computer-aided cancer patient care. To this end, several approaches have leveraged cell-graphs, capturing the cell microenvironment, to depict the tissue. These allow for utilizing graph theory and machine learning to map the tissue representation to tissue functionality and quantify their relationship. Though cellular information is crucial, it alone is insufficient to comprehensively characterize complex tissue structure. We herein treat the tissue as a hierarchical composition of multiple types of histological entities from fine to coarse level, capturing multivariate tissue information at multiple levels. We propose a novel multi-level hierarchical entity-graph representation of tissue specimens to model the hierarchical compositions that encode histological entities as well as their intra- and inter-entity-level interactions. Subsequently, a hierarchical graph neural network is proposed to operate on the hierarchical entity-graph and map the tissue structure to tissue functionality. Specifically, for input histology images, we utilize well-defined cells and tissue regions to build HierArchical Cell-to-Tissue (HACT) graph representations, and devise HACT-Net, a message-passing graph neural network, to classify the HACT representations. As part of this work, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of Haematoxylin & Eosin stained breast tumor regions of interest, to evaluate and benchmark our proposed methodology against pathologists and state-of-the-art computer-aided diagnostic approaches. Through comparative assessment and ablation studies, our proposed method is demonstrated to yield superior classification results compared to alternative methods as well as individual pathologists. The code, data, and models can be accessed at https://github.com/histocartography/hact-net.
Affiliation(s)
- Pushpak Pati
- IBM Zurich Research Lab, Zurich, Switzerland; Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland
- Guillaume Jaume
- IBM Zurich Research Lab, Zurich, Switzerland; Signal Processing Laboratory 5, EPFL, Lausanne, Switzerland
- Florinda Feroce
- National Cancer Institute - IRCCS-Fondazione Pascale, Naples, Italy
- Nadia Brancati
- Institute for High Performance Computing and Networking - CNR, Naples, Italy
- Maryse Fiche
- Aurigen - Centre de Pathologie, Lausanne, Switzerland
- Daniel Riccio
- Institute for High Performance Computing and Networking - CNR, Naples, Italy
- Giuseppe De Pietro
- Institute for High Performance Computing and Networking - CNR, Naples, Italy
- Gerardo Botti
- National Cancer Institute - IRCCS-Fondazione Pascale, Naples, Italy
- Maria Frucci
- Institute for High Performance Computing and Networking - CNR, Naples, Italy
- Orcun Goksel
- Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland; Department of Information Technology, Uppsala University, Sweden
85
Cao X, Chen H, Li Y, Peng Y, Wang S, Cheng L. Dilated densely connected U-Net with uncertainty focus loss for 3D ABUS mass segmentation. Comput Methods Programs Biomed 2021; 209:106313. [PMID: 34364182] [DOI: 10.1016/j.cmpb.2021.106313]
Abstract
BACKGROUND AND OBJECTIVE: Accurate segmentation of breast masses in 3D automated breast ultrasound (ABUS) images plays an important role in qualitative and quantitative ABUS image analysis. Yet this task is challenging due to the low signal-to-noise ratio and serious artifacts in ABUS images, the large shape and size variation of breast masses, and the small training dataset compared with natural images. The purpose of this study is to address these difficulties by designing a dilated densely connected U-Net (D2U-Net) together with an uncertainty focus loss. METHODS: A lightweight yet effective densely connected segmentation network is constructed to extensively explore feature representations in the small ABUS dataset. To deal with the high variation in shape and size of breast masses, a set of hybrid dilated convolutions is integrated into the dense blocks of the D2U-Net. We further propose an uncertainty focus loss that puts more attention on unreliable network predictions, especially the ambiguous mass boundaries caused by the low signal-to-noise ratio and artifacts. Our segmentation algorithm is evaluated on an ABUS dataset of 170 volumes from 107 patients. Ablation analysis and comparison with existing methods are conducted to verify the effectiveness of the proposed method. RESULTS: Experimental results demonstrate that the proposed algorithm outperforms existing methods on 3D ABUS mass segmentation tasks, with a Dice similarity coefficient, Jaccard index, and 95% Hausdorff distance of 69.02%, 56.61%, and 4.92 mm, respectively. CONCLUSIONS: The proposed method is effective in segmenting breast masses on our small ABUS dataset, especially breast masses with large shape and size variations.
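The idea of an uncertainty-focused loss can be illustrated with a small NumPy sketch: voxels whose predicted probability is near 0.5 (e.g., ambiguous mass boundaries) are up-weighted relative to confidently predicted voxels. This is an assumption-laden illustration of the concept only, not the paper's exact formulation; the function name and the `gamma` weighting are invented for the example.

```python
import numpy as np

def uncertainty_focused_bce(p, y, gamma=2.0, eps=1e-7):
    """Hypothetical uncertainty-weighted binary cross-entropy.

    p, y : arrays of predicted probabilities and binary targets.
    The factor u = (1 - |2p - 1|)**gamma is 1 where p = 0.5 (maximally
    uncertain) and 0 where p is 0 or 1, so ambiguous voxels dominate.
    """
    p = np.clip(p, eps, 1 - eps)
    bce = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    u = (1.0 - np.abs(2.0 * p - 1.0)) ** gamma
    return np.mean((1.0 + u) * bce)  # base term keeps confident voxels learning too
```

With confident, correct predictions the loss stays near plain BCE; as predictions drift toward 0.5, both the BCE term and the uncertainty weight grow.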
Affiliation(s)
- Xuyang Cao
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Houjin Chen
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Yanfeng Li
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Yahui Peng
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Shu Wang
- Peking University People's Hospital, Beijing 100044, China
- Lin Cheng
- Peking University People's Hospital, Beijing 100044, China
86
Aatresh AA, Yatgiri RP, Chanchal AK, Kumar A, Ravi A, Das D, Bs R, Lal S, Kini J. Efficient deep learning architecture with dimension-wise pyramid pooling for nuclei segmentation of histopathology images. Comput Med Imaging Graph 2021; 93:101975. [PMID: 34461375] [DOI: 10.1016/j.compmedimag.2021.101975]
Abstract
Image segmentation remains one of the most vital tasks in computer vision, and even more so in medical image processing. Segmentation quality is the metric most often considered, with memory and computational efficiency overlooked, limiting the practical use of power-hungry models. In this paper, we propose a novel framework (Kidney-SegNet) that combines the effectiveness of an attention-based encoder-decoder architecture and atrous spatial pyramid pooling with highly efficient dimension-wise convolutions. The segmentation results of the proposed Kidney-SegNet architecture are shown to outperform existing state-of-the-art deep learning methods on two publicly available kidney and TNBC breast H&E-stained histopathology image datasets. Further, our simulation experiments reveal that the computational complexity and memory requirements of our proposed architecture are very efficient compared to existing state-of-the-art deep learning methods for nuclei segmentation of H&E-stained histopathology images. The source code of our implementation will be available at https://github.com/Aaatresh/Kidney-SegNet.
Affiliation(s)
- Anirudh Ashok Aatresh
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India
- Rohit Prashant Yatgiri
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India
- Amit Kumar Chanchal
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India
- Aman Kumar
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India
- Akansh Ravi
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India
- Devikalyan Das
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India
- Raghavendra Bs
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India
- Shyam Lal
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India
- Jyoti Kini
- Department of Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, India
87
Senousy Z, Abdelsamea MM, Gaber MM, Abdar M, Acharya UR, Khosravi A, Nahavandi S. MCUa: Multi-level Context and Uncertainty aware Dynamic Deep Ensemble for Breast Cancer Histology Image Classification. IEEE Trans Biomed Eng 2021; 69:818-829. [PMID: 34460359] [DOI: 10.1109/tbme.2021.3107446]
Abstract
Breast histology image classification is a crucial step in the early diagnosis of breast cancer. In breast pathological diagnosis, convolutional neural networks (CNNs) have demonstrated great success using digitized histology slides. However, tissue classification remains challenging due to the high visual variability of the large-sized digitized samples and the lack of contextual information. In this paper, we propose a novel CNN-based model, the Multi-level Context and Uncertainty aware (MCUa) dynamic deep learning ensemble. The MCUa model consists of several multi-level context-aware models that learn the spatial dependency between image patches in a layer-wise fashion, and it exploits sensitivity to multi-level contextual information through an uncertainty quantification component to form a novel dynamic ensemble. The MCUa model achieved a high accuracy of 98.11% on a breast cancer histology image dataset. Experimental results show the superior effectiveness of the proposed solution compared to state-of-the-art histology classification models.
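A common ingredient of uncertainty-aware ensembles such as the one described above is quantifying how confident the combined prediction is. A minimal sketch using the entropy of the averaged predictive distribution follows; MCUa's actual dynamic model-selection mechanism is more involved, so treat this only as an illustration of the uncertainty-quantification building block.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution across ensemble members.

    probs : (n_models, n_classes) softmax outputs for one image.
    Higher entropy means the ensemble as a whole is less certain, which a
    dynamic ensemble can use to include or down-weight members.
    """
    mean_p = np.clip(probs.mean(axis=0), 1e-12, 1.0)
    return float(-(mean_p * np.log(mean_p)).sum())
```

An ensemble whose members agree confidently yields near-zero entropy; members that disagree push the averaged distribution toward uniform and the entropy toward log(n_classes).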
88
Shan D, Zheng J, Klimowicz A, Panzenbeck M, Liu Z, Feng D. Deep learning for discovering pathological continuum of crypts and evaluating therapeutic effects: An implication for in vivo preclinical study. PLoS One 2021; 16:e0252429. [PMID: 34125849] [PMCID: PMC8202954] [DOI: 10.1371/journal.pone.0252429]
Abstract
Applying deep learning to the field of preclinical in vivo studies is a new and exciting prospect with the potential to unlock decades' worth of underutilized data. As a proof of concept, we performed a feasibility study on a colitis model treated with sulfasalazine, a drug used in the therapeutic care of inflammatory bowel disease. We aimed to evaluate the colonic mucosa improvement associated with the recovery response of the crypts, a complex histologic structure reflecting tissue homeostasis and repair in response to inflammation. Our approach requires robust image segmentation of objects of interest from whole-slide images, a composite low-dimensional representation of the typical or novel morphological variants of the segmented objects, and exploration of image features of significance for biology and treatment efficacy. Both interpretable features (e.g., counts, area, distance, and angle) and statistical texture features calculated using Gray-Level Co-Occurrence Matrices (GLCMs) are shown to be significant in the analysis. Ultimately, this analytic framework of supervised image segmentation, unsupervised learning, and feature analysis can be applied generally to preclinical data. We hope our report will inspire more efforts to utilize deep learning in preclinical in vivo studies and ultimately make the field more innovative and efficient.
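The GLCM texture features mentioned above can be computed from first principles. The sketch below builds a co-occurrence matrix for a single pixel offset and derives the standard contrast statistic; the offset, the number of gray levels, and the choice of statistic are illustrative (libraries such as scikit-image provide equivalents).

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).

    img : 2D array of integer gray levels in [0, levels).
    Returns a (levels, levels) matrix of normalized co-occurrence counts:
    entry (i, j) is the probability that a pixel with level i has a
    neighbor at offset (dx, dy) with level j.
    """
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(m):
    """Contrast statistic: sum_{i,j} (i - j)^2 * P(i, j)."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())
```

A flat image yields zero contrast (all co-occurrence mass on the diagonal), while an alternating pattern concentrates mass off-diagonal and gives high contrast.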
Affiliation(s)
- Dechao Shan
- Global Computational Biology and Digital Sciences, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Jie Zheng
- Immunology and Respiratory Disease Research, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Alexander Klimowicz
- Immunology and Respiratory Disease Research, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Mark Panzenbeck
- Immunology and Respiratory Disease Research, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Zheng Liu
- Global Computational Biology and Digital Sciences, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Di Feng
- Global Computational Biology and Digital Sciences, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
89
Cao B, Zhang KC, Wei B, Chen L. Status quo and future prospects of artificial neural network from the perspective of gastroenterologists. World J Gastroenterol 2021; 27:2681-2709. [PMID: 34135549] [PMCID: PMC8173384] [DOI: 10.3748/wjg.v27.i21.2681]
Abstract
Artificial neural networks (ANNs) are one of the primary types of artificial intelligence and have been rapidly developed and applied in many fields. In recent years, there has been a sharp increase in research concerning ANNs in gastrointestinal (GI) diseases. This state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Competitions between ANNs and GI experts suggest that efficiency and accuracy might be compatible by virtue of technical advancements. However, the shortcomings of ANNs are not negligible and may induce alterations in many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize the current achievements of ANNs in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize the clinical potential of ANNs. In consideration of barriers to interdisciplinary knowledge, sophisticated concepts are discussed using plain words and metaphors to make this review more easily understood by medical practitioners and the general public.
Affiliation(s)
- Bo Cao
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Ke-Cheng Zhang
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Bo Wei
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
- Lin Chen
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
90
Khened M, Kori A, Rajkumar H, Krishnamurthi G, Srinivasan B. A generalized deep learning framework for whole-slide image segmentation and analysis. Sci Rep 2021; 11:11579. [PMID: 34078928] [PMCID: PMC8172839] [DOI: 10.1038/s41598-021-90444-8]
Abstract
Histopathology tissue analysis is considered the gold standard in cancer diagnosis and prognosis. Whole-slide imaging (WSI), i.e., the scanning and digitization of entire histology slides, are now being adopted across the world in pathology labs. Trained histopathologists can provide an accurate diagnosis of biopsy specimens based on WSI data. Given the dimensionality of WSIs and the increase in the number of potential cancer cases, analyzing these images is a time-consuming process. Automated segmentation of tumorous tissue helps in elevating the precision, speed, and reproducibility of research. In the recent past, deep learning-based techniques have provided state-of-the-art results in a wide variety of image analysis tasks, including the analysis of digitized slides. However, deep learning-based solutions pose many technical challenges, including the large size of WSI data, heterogeneity in images, and complexity of features. In this study, we propose a generalized deep learning-based framework for histopathology tissue analysis to address these challenges. Our framework is, in essence, a sequence of individual techniques in the preprocessing-training-inference pipeline which, in conjunction, improve the efficiency and the generalizability of the analysis. The combination of techniques we have introduced includes an ensemble segmentation model, division of the WSI into smaller overlapping patches while addressing class imbalances, efficient techniques for inference, and an efficient, patch-based uncertainty estimation framework. Our ensemble consists of DenseNet-121, Inception-ResNet-V2, and DeeplabV3Plus, where all the networks were trained end to end for every task. We demonstrate the efficacy and improved generalizability of our framework by evaluating it on a variety of histopathology tasks including breast cancer metastases (CAMELYON), colon cancer (DigestPath), and liver cancer (PAIP). 
Our proposed framework has state-of-the-art performance across all these tasks and is currently ranked within the top 5 for the challenges based on these datasets. The entire framework, along with the trained models and related documentation, is made freely available on GitHub and PyPI. Our framework is expected to aid histopathologists in accurate and efficient initial diagnosis. Moreover, the estimated uncertainty maps will help clinicians make informed decisions in further treatment planning or analysis.
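The division of a WSI into smaller overlapping patches, as this abstract describes, can be sketched in a few lines. The patch size, overlap, and clamping of the final row/column below are illustrative assumptions, not the settings from the paper's released framework.

```python
def tile_coords(height, width, patch=512, overlap=128):
    """Top-left (y, x) coordinates of overlapping patches covering an image.

    The stride is patch - overlap; the last row/column is clamped so the
    final patch still lies fully inside the image. Assumes the image is at
    least one patch in each dimension.
    """
    stride = patch - overlap
    ys = list(range(0, height - patch + 1, stride))
    xs = list(range(0, width - patch + 1, stride))
    if ys[-1] != height - patch:
        ys.append(height - patch)  # clamp the final row to the image edge
    if xs[-1] != width - patch:
        xs.append(width - patch)   # clamp the final column likewise
    return [(y, x) for y in ys for x in xs]

# A 1024 x 1536 region tiled into 512 x 512 patches with 128 px overlap.
coords = tile_coords(1024, 1536, patch=512, overlap=128)
```

Each coordinate pair indexes a crop that can be fed to the segmentation ensemble; overlapping predictions are then typically averaged when stitching the result back together.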
Affiliation(s)
- Mahendra Khened
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India
- Avinash Kori
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India
- Haran Rajkumar
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India
- Ganapathy Krishnamurthi
- Department of Engineering Design, Indian Institute of Technology Madras, Chennai, 600036, India.
- Balaji Srinivasan
- Department of Mechanical Engineering, Indian Institute of Technology Madras, Chennai, 600036, India
91
Kobayashi S, Saltz JH, Yang VW. State of machine and deep learning in histopathological applications in digestive diseases. World J Gastroenterol 2021; 27:2545-2575. [PMID: 34092975 PMCID: PMC8160628 DOI: 10.3748/wjg.v27.i20.2545] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 03/27/2021] [Accepted: 04/29/2021] [Indexed: 02/06/2023] Open
Abstract
Machine learning (ML)- and deep learning (DL)-based imaging modalities have exhibited the capacity to handle extremely high dimensional data for a number of computer vision tasks. While these approaches have been applied to numerous data types, this capacity can be especially leveraged by application on histopathological images, which capture cellular and structural features with their high-resolution, microscopic perspectives. Already, these methodologies have demonstrated promising performance in a variety of applications like disease classification, cancer grading, structure and cellular localizations, and prognostic predictions. A wide range of pathologies requiring histopathological evaluation exist in gastroenterology and hepatology, indicating these as disciplines highly targetable for integration of these technologies. Gastroenterologists have also already been primed to consider the impact of these algorithms, as development of real-time endoscopic video analysis software has been an active and popular field of research. This heightened clinical awareness will likely be important for future integration of these methods and to drive interdisciplinary collaborations on emerging studies. To provide an overview on the application of these methodologies for gastrointestinal and hepatological histopathological slides, this review will discuss general ML and DL concepts, introduce recent and emerging literature using these methods, and cover challenges moving forward to further advance the field.
Affiliation(s)
- Soma Kobayashi
- Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
- Joel H Saltz
- Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
- Vincent W Yang
- Department of Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
- Department of Physiology and Biophysics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
92
Chen Z, Chen Z, Liu J, Zheng Q, Zhu Y, Zuo Y, Wang Z, Guan X, Wang Y, Li Y. Weakly Supervised Histopathology Image Segmentation With Sparse Point Annotations. IEEE J Biomed Health Inform 2021; 25:1673-1685. [PMID: 32931437 DOI: 10.1109/jbhi.2020.3024262] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Digital histopathology image segmentation can facilitate computer-assisted cancer diagnostics. Given the difficulty of obtaining manual annotations, weak supervision is more suitable for the task than full supervision is. However, most weakly supervised models are not ideal for handling severe intra-class heterogeneity and inter-class homogeneity in histopathology images. Therefore, we propose a novel end-to-end weakly supervised learning framework named WESUP. With only sparse point annotations, it performs accurate segmentation and exhibits good generalizability. The training phase comprises two major parts, hierarchical feature representation and deep dynamic label propagation. The former uses superpixels to capture local details and global context from the convolutional feature maps obtained via transfer learning. The latter recognizes the manifold structure of the hierarchical features and identifies potential targets with the sparse annotations. Moreover, these two parts are trained jointly to improve the performance of the whole framework. To further boost test performance, pixel-wise inference is adopted for finer prediction. As demonstrated by experimental results, WESUP is able to largely resolve the confusion between histological foreground and background. It outperforms several state-of-the-art weakly supervised methods on a variety of histopathology datasets with minimal annotation efforts. Trained by very sparse point annotations, WESUP can even beat an advanced fully supervised segmentation network.
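The idea of spreading a handful of point annotations to unlabeled samples over a similarity graph, which this abstract's "deep dynamic label propagation" builds on, can be illustrated with a generic label-propagation sketch. This is a textbook formulation over toy 1-D features, not WESUP's actual propagation rule or feature hierarchy.

```python
import numpy as np

def propagate_labels(features, labels, iters=20, sigma=1.0):
    """Spread sparse point labels over a Gaussian similarity graph.

    `labels` uses -1 for unlabeled samples; annotated points stay clamped
    to their given class at every iteration.
    """
    n = len(features)
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    W /= W.sum(axis=1, keepdims=True)      # row-normalized transition matrix
    Y = np.zeros((n, 2))
    for i, l in enumerate(labels):
        if l >= 0:
            Y[i, l] = 1.0                  # one-hot for annotated points
    F = Y.copy()
    for _ in range(iters):
        F = W @ F                          # diffuse label mass to neighbors
        F[labels >= 0] = Y[labels >= 0]    # keep sparse annotations fixed
    return F.argmax(axis=1)

# Two well-separated clusters, each with a single annotated point.
feats = np.array([[0.0], [0.1], [0.2], [2.0], [2.1], [2.2]])
labels = np.array([0, -1, -1, -1, -1, 1])
pred = propagate_labels(feats, labels)
```

In WESUP the analogous propagation runs over superpixel-level hierarchical features and is trained jointly with the feature extractor, rather than applied post hoc as here.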
93
van der Laak J, Litjens G, Ciompi F. Deep learning in histopathology: the path to the clinic. Nat Med 2021; 27:775-784. [PMID: 33990804 DOI: 10.1038/s41591-021-01343-4] [Citation(s) in RCA: 361] [Impact Index Per Article: 90.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Accepted: 03/31/2021] [Indexed: 02/08/2023]
Abstract
Machine learning techniques have great potential to improve medical diagnostics, offering ways to improve accuracy, reproducibility and speed, and to ease workloads for clinicians. In the field of histopathology, deep learning algorithms have been developed that perform similarly to trained pathologists for tasks such as tumor detection and grading. However, despite these promising results, very few algorithms have reached clinical implementation, challenging the balance between hope and hype for these new techniques. This Review provides an overview of the current state of the field, as well as describing the challenges that still need to be addressed before artificial intelligence in histopathology can achieve clinical value.
Affiliation(s)
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden.
- Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
94
Wen Z, Feng R, Liu J, Li Y, Ying S. GCSBA-Net: Gabor-Based and Cascade Squeeze Bi-Attention Network for Gland Segmentation. IEEE J Biomed Health Inform 2021; 25:1185-1196. [PMID: 32780703 DOI: 10.1109/jbhi.2020.3015844] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Colorectal cancer is the second and the third most common cancer in women and men, respectively. Pathological diagnosis is the "gold standard" for tumor diagnosis. Accurate segmentation of glands from tissue images is a crucial step in assisting pathologists in their diagnosis. Typical methods for gland segmentation form a dense image representation, ignoring its texture and multi-scale attention information. Therefore, we utilize a Gabor-based module to extract texture information at different scales and directions in histopathology images. This paper also designs a Cascade Squeeze Bi-Attention (CSBA) module. Specifically, we add an Atrous Cascade Spatial Pyramid (ACSP), a Squeeze Position Attention (SPA) module and a Squeeze Channel Attention (SCA) module to model semantic correlation and maintain multi-level aggregation on the spatial pyramid with different dilations. Besides, to address imbalanced data distribution and boundary blur, we propose a hybrid loss function to better delineate object boundaries. The experimental results show that the proposed method achieves state-of-the-art performance on both the GlaS challenge dataset and the CRAG colorectal adenocarcinoma dataset.
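The Gabor-based module above builds on a classical idea: a bank of Gabor kernels tuned to different scales and orientations responds selectively to texture. A minimal sketch using the standard textbook formulation (a Gaussian envelope times a cosine carrier) follows; the kernel sizes and scales are illustrative, not the paper's settings.

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=0.5):
    """Real part of a Gabor kernel oriented at angle theta.

    lam is the carrier wavelength, sigma the Gaussian envelope width,
    gamma the spatial aspect ratio.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

# A small bank over 4 orientations and 2 scales, echoing the abstract's
# extraction of texture at different scales and directions.
bank = [gabor_kernel(size=15, theta=t, lam=l, sigma=l / 2)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for l in (4.0, 8.0)]
```

Convolving an H&E image with each kernel in the bank yields one response map per scale/orientation pair, which such modules then feed into the learned network alongside the dense convolutional features.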
95
Faster Mean-shift: GPU-accelerated clustering for cosine embedding-based cell segmentation and tracking. Med Image Anal 2021; 71:102048. [PMID: 33872961 DOI: 10.1016/j.media.2021.102048] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Revised: 10/15/2020] [Accepted: 03/20/2021] [Indexed: 01/08/2023]
Abstract
Recently, single-stage embedding-based deep learning algorithms have gained increasing attention in cell segmentation and tracking. Compared with the traditional "segment-then-associate" two-stage approach, a single-stage algorithm not only achieves consistent instance cell segmentation and tracking simultaneously but also gains superior performance when distinguishing ambiguous pixels on boundaries and overlaps. However, the deployment of embedding-based algorithms is restricted by slow inference speed (e.g., ≈1-2 min per frame). In this study, we propose a novel Faster Mean-shift algorithm, which tackles the computational bottleneck of embedding-based cell segmentation and tracking. Different from previous GPU-accelerated fast mean-shift algorithms, a new online seed optimization policy (OSOP) is introduced to adaptively determine the minimal number of seeds, accelerate computation, and save GPU memory. With both embedding simulation and empirical validation on the four cohorts from the ISBI cell tracking challenge, the proposed Faster Mean-shift algorithm achieved a 7-10 times speedup compared to the state-of-the-art embedding-based cell instance segmentation and tracking algorithm. Our Faster Mean-shift algorithm also achieved the highest computational speed compared to other GPU benchmarks, with optimized memory consumption. Faster Mean-shift is a plug-and-play model, which can be employed in other pixel-embedding-based clustering inference for medical image analysis. (The model is publicly available: https://github.com/masqm/Faster-Mean-Shift).
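The baseline this paper accelerates is plain mean-shift: every point repeatedly moves to the mean of its neighbors within a bandwidth until points from the same cluster converge to a common mode. The sketch below shows that baseline with a flat kernel on toy 2-D points; it includes none of the paper's seed optimization or GPU batching.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50):
    """Flat-kernel mean-shift: shift each point to the mean of the
    original points lying within `bandwidth` of it, repeatedly."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            dist = np.linalg.norm(points - p, axis=1)
            shifted[i] = points[dist <= bandwidth].mean(axis=0)
    return shifted

# Two well-separated groups of embedding vectors.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0]])
modes = mean_shift(pts, bandwidth=1.0)
```

Points that converge to the same mode form one cluster (here, one cell instance in the embedding space); the O(n²) neighbor search in the inner loop is exactly the cost that seeded, GPU-parallel variants attack.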
96
Vu QD, Kim K, Kwak JT. Unsupervised Tumor Characterization via Conditional Generative Adversarial Networks. IEEE J Biomed Health Inform 2021; 25:348-357. [PMID: 32396112 DOI: 10.1109/jbhi.2020.2993560] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Grading of cancer, based upon the degree of cancer differentiation, plays a major role in describing the characteristics and behavior of the cancer and determining the treatment plan for patients. The grade is determined by a subjective and qualitative assessment of tissue under a microscope, which suffers from high inter- and intra-observer variability among pathologists. Digital pathology offers an alternative means to automate the procedure as well as to improve the accuracy and robustness of cancer grading. However, most such methods tend to mimic or reproduce the cancer grade determined by human experts. Herein, we propose an alternative, quantitative means of assessing and characterizing cancers in an unsupervised manner. The proposed method utilizes conditional generative adversarial networks to characterize tissues. The proposed method is evaluated using whole slide images (WSIs) and tissue microarrays (TMAs) of colorectal cancer specimens. The results suggest that the proposed method holds potential for quantifying cancer characteristics and improving cancer pathology.
97
Nguyen HG, Blank A, Dawson HE, Lugli A, Zlobec I. Classification of colorectal tissue images from high throughput tissue microarrays by ensemble deep learning methods. Sci Rep 2021; 11:2371. [PMID: 33504830 PMCID: PMC7840737 DOI: 10.1038/s41598-021-81352-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Accepted: 01/05/2021] [Indexed: 12/13/2022] Open
Abstract
Tissue microarray (TMA) core images are a treasure trove for artificial intelligence applications. However, a common problem with TMAs is multiple sectioning, which can change the content of the intended tissue core and requires re-labelling. Here, we investigate different ensemble methods for colorectal tissue classification using high-throughput TMAs. Hematoxylin and Eosin (H&E) core images of 0.6 mm or 1.0 mm diameter from three international cohorts were extracted from 54 digital slides (n = 15,150 cores). After TMA core extraction and color enhancement, five different flows of independent and ensemble deep learning were applied. Training and testing data, with 2144 and 13,006 cores, included three classes: tumor, normal or "other" tissue. Ground-truth data were collected from 30 ngTMA slides (n = 8689 cores). Test-time augmentation was applied to reduce uncertain predictions. The predictive accuracy of the best method, a soft-voting ensemble of one VGG and one CapsNet model, was 0.982, 0.947 and 0.939 for normal, "other" and tumor, respectively, outperforming independent or ensemble learning with a single base estimator. Our high-accuracy algorithm for colorectal tissue classification in high-throughput TMAs is amenable to images from different institutions, core sizes and stain intensities. It helps to reduce errors in TMA core evaluations with previously given labels.
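The soft-voting combination rule this abstract names is simply the average of the base models' class-probability outputs followed by an argmax. A minimal sketch, with made-up probabilities standing in for the VGG and CapsNet outputs:

```python
import numpy as np

def soft_vote(prob_list):
    """Soft-voting ensemble: average class probabilities across models,
    then take the argmax per sample."""
    avg = np.mean(prob_list, axis=0)
    return avg.argmax(axis=1), avg

# Two hypothetical base models scoring 3 cores over (tumor, normal, other).
vgg_probs = np.array([[0.7, 0.2, 0.1],
                      [0.4, 0.5, 0.1],
                      [0.2, 0.3, 0.5]])
caps_probs = np.array([[0.6, 0.3, 0.1],
                       [0.2, 0.7, 0.1],
                       [0.1, 0.2, 0.7]])
labels, avg = soft_vote([vgg_probs, caps_probs])
```

Because averaging uses the full probability vectors rather than each model's hard label, a confident model can outvote an uncertain one on a given core, which is why soft voting often edges out hard (majority) voting.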
Affiliation(s)
- Huu-Giao Nguyen
- Institute of Pathology, University of Bern, Murtenstrasse 31, 3008, Bern, Switzerland
- Annika Blank
- Institute of Pathology, University of Bern, Murtenstrasse 31, 3008, Bern, Switzerland
- Institute of Pathology, Triemli City Hospital, Birmensdorferstrasse 497, 8063, Zurich, Switzerland
- Heather E Dawson
- Institute of Pathology, University of Bern, Murtenstrasse 31, 3008, Bern, Switzerland
- Alessandro Lugli
- Institute of Pathology, University of Bern, Murtenstrasse 31, 3008, Bern, Switzerland
- Inti Zlobec
- Institute of Pathology, University of Bern, Murtenstrasse 31, 3008, Bern, Switzerland.
98
Salvi M, Acharya UR, Molinari F, Meiburger KM. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Comput Biol Med 2021; 128:104129. [DOI: 10.1016/j.compbiomed.2020.104129] [Citation(s) in RCA: 105] [Impact Index Per Article: 26.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2020] [Accepted: 11/13/2020] [Indexed: 12/12/2022]
99
Srinidhi CL, Ciga O, Martel AL. Deep neural network models for computational histopathology: A survey. Med Image Anal 2021; 67:101813. [PMID: 33049577 PMCID: PMC7725956 DOI: 10.1016/j.media.2020.101813] [Citation(s) in RCA: 255] [Impact Index Per Article: 63.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 05/12/2020] [Accepted: 08/09/2020] [Indexed: 12/14/2022]
Abstract
Histopathological images contain rich phenotypic information that can be used to monitor underlying mechanisms contributing to disease progression and patient survival outcomes. Recently, deep learning has become the mainstream methodological choice for analyzing and interpreting histology images. In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis. From the survey of over 130 papers, we review the field's progress based on the methodological aspect of different machine learning strategies such as supervised, weakly supervised, unsupervised, transfer learning and various other sub-variants of these methods. We also provide an overview of deep learning based survival models that are applicable for disease-specific prognosis tasks. Finally, we summarize several existing open datasets and highlight critical challenges and limitations with current deep learning approaches, along with possible avenues for future research.
Affiliation(s)
- Chetan L Srinidhi
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada.
- Ozan Ciga
- Department of Medical Biophysics, University of Toronto, Canada
- Anne L Martel
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
100
Pan X, Phan TL, Adel M, Fossati C, Gaidon T, Wojak J, Guedj E. Multi-View Separable Pyramid Network for AD Prediction at MCI Stage by 18F-FDG Brain PET Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:81-92. [PMID: 32894711 DOI: 10.1109/tmi.2020.3022591] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Alzheimer's Disease (AD), one of the main causes of death in elderly people, is characterized by Mild Cognitive Impairment (MCI) at the prodromal stage. Nevertheless, only a subset of MCI subjects progress to AD. The main objective of this paper is thus to identify, among MCI patients, those who will develop a dementia of the AD type. 18F-FluoroDeoxyGlucose Positron Emission Tomography (18F-FDG PET) serves as a neuroimaging modality for early diagnosis, as it can reflect neural activity by measuring glucose uptake at resting state. In this paper, we design a deep network on the 18F-FDG PET modality to address the problem of AD identification at the early MCI stage. To this end, a Multi-view Separable Pyramid Network (MiSePyNet) is proposed, in which representations are learned from axial, coronal and sagittal views of PET scans so as to offer complementary information, and then combined to make a decision jointly. Different from the widely and naturally used 3D convolution operations for 3D images, the proposed architecture is deployed with separable convolutions, from slice-wise to spatial-wise successively, which can retain spatial information and reduce training parameters compared to 2D and 3D networks, respectively. Experiments on the ADNI dataset show that the proposed method yields better performance than both traditional and deep learning-based algorithms for predicting the progression of Mild Cognitive Impairment, with a classification accuracy of 83.05%.
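The parameter saving from factorizing a full 3D convolution into a slice-wise (1 x k x k) plus spatial-wise (k x 1 x 1) pair, as this abstract describes, can be checked with simple arithmetic. The channel bookkeeping below is an illustrative assumption; MiSePyNet's exact factorization may allocate channels differently.

```python
def conv3d_params(cin, cout, k):
    """Weights of a full 3D convolution with k*k*k kernels (bias omitted)."""
    return cin * cout * k ** 3

def separable_params(cin, cout, k):
    """Slice-wise (1 x k x k) conv mapping cin -> cout channels, followed
    by a spatial-wise (k x 1 x 1) conv over the cout channels."""
    return cin * cout * k ** 2 + cout * cout * k

# Example layer: 32 -> 64 channels with 3 x 3 x 3 kernels.
full = conv3d_params(32, 64, 3)        # 32 * 64 * 27
factored = separable_params(32, 64, 3)  # 32 * 64 * 9 + 64 * 64 * 3
```

For this layer the factored form needs roughly 55% of the full 3D weights, and the gap widens with larger kernels, which is the "reduce training parameters compared to ... 3D networks" claim in the abstract.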