1. Wang S, Sun M, Sun J, Wang Q, Wang G, Wang X, Meng X, Wang Z, Yu H. Advancing musculoskeletal tumor diagnosis: Automated segmentation and predictive classification using deep learning and radiomics. Comput Biol Med 2024; 175:108502. [PMID: 38678943] [DOI: 10.1016/j.compbiomed.2024.108502]
Abstract
OBJECTIVES Musculoskeletal (MSK) tumors, given their high mortality rate and heterogeneity, necessitate precise examination and diagnosis to guide clinical treatment effectively. Magnetic resonance imaging (MRI) is pivotal in detecting MSK tumors, as it offers exceptional image contrast between bone and soft tissue. This study aims to enhance the speed of detection and the diagnostic accuracy of MSK tumors through automated segmentation and grading utilizing MRI. MATERIALS AND METHODS The research included 170 patients (mean age, 58 years ± 12 [standard deviation]; 84 men) with MSK lesions who underwent MRI scans from April 2021 to May 2023. We proposed a deep learning (DL) segmentation model, MSAPN, based on multi-scale attention and pixel-level reconstruction, and compared it with existing algorithms. Radiomic features were then extracted from the MSAPN-segmented lesions to classify tumors as benign or malignant. RESULTS Compared to state-of-the-art segmentation algorithms, MSAPN demonstrates better performance. The Dice similarity coefficients (DSC) are 0.871 and 0.815 in the testing set and independent validation set, respectively. The radiomics model for classifying benign and malignant lesions achieves an accuracy of 0.890. Moreover, there is no statistically significant difference between the radiomics models based on manual segmentation and MSAPN segmentation. CONCLUSION This research contributes to the advancement of MSK tumor diagnosis through automated segmentation and predictive classification. The integration of DL algorithms and radiomics shows promising results, and the visualization analysis of feature maps enhances clinical interpretability.
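The Dice similarity coefficient (DSC) reported above measures overlap between a predicted segmentation mask and a reference mask. A minimal sketch (function name and toy masks are illustrative, not from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 values."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as perfect overlap.
    return 2.0 * intersection / total if total else 1.0

# Toy flattened masks standing in for segmentation maps: 2*2/(3+3) = 0.667.
score = dice_coefficient([0, 1, 1, 1, 0, 0], [0, 1, 1, 0, 0, 1])
```

A DSC of 0.871, as reported for the testing set, thus means the predicted and manual masks share roughly 87% of their combined area.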
Affiliation(s)
- Shuo Wang: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, 300072, China
- Man Sun: Radiology Department, Tianjin University Tianjin Hospital, Tianjin, 300299, China
- Jinglai Sun: The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, 300072, China
- Qingsong Wang: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, China
- Guangpu Wang: The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, 300072, China
- Xiaolin Wang: The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, 300072, China
- Xianghong Meng: Radiology Department, Tianjin University Tianjin Hospital, Tianjin, 300299, China
- Zhi Wang: Radiology Department, Tianjin University Tianjin Hospital, Tianjin, 300299, China
- Hui Yu: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, China; State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, 300072, China; The School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, 300072, China
2. Chelebian E, Avenel C, Ciompi F, Wählby C. DEPICTER: Deep representation clustering for histology annotation. Comput Biol Med 2024; 170:108026. [PMID: 38308865] [DOI: 10.1016/j.compbiomed.2024.108026]
Abstract
Automatic segmentation of histopathology whole-slide images (WSIs) usually involves supervised training of deep learning models with pixel-level labels to classify each pixel of the WSI into tissue regions such as benign or cancerous. However, fully supervised segmentation requires large-scale data manually annotated by experts, which can be expensive and time-consuming to obtain. Non-fully supervised methods, ranging from semi-supervised to unsupervised, have been proposed to address this issue and have been successful in WSI segmentation tasks. However, these methods have focused mainly on advancing algorithmic performance rather than on developing practical tools that pathologists or researchers could use in real-world scenarios. In contrast, we present DEPICTER (Deep rEPresentatIon ClusTERing), an interactive segmentation tool for histopathology annotation that produces a patch-wise dense segmentation map at the WSI level. DEPICTER leverages self- and semi-supervised learning approaches to let the user participate in the segmentation, producing reliable results while reducing the workload. DEPICTER consists of three steps: first, a pretrained model is used to compute embeddings from image patches. Next, the user selects a number of benign and cancerous patches from the multi-resolution image. Finally, guided by the deep representations, label propagation is achieved using our novel seeded iterative clustering method or by directly interacting with the embedding space via feature space gating. We report real-time interaction results with three pathologists and evaluate performance on three public cancer classification benchmark datasets through simulations. The code and demos of DEPICTER are publicly available at https://github.com/eduardchelebian/depicter.
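The seeded label-propagation idea, a handful of user-labeled patches whose labels spread to all patch embeddings, can be sketched as a seeded clustering loop. This toy version (all names and data are illustrative; the paper's seeded iterative clustering operates on pretrained-model embeddings and differs in detail):

```python
import math

def seeded_cluster(embeddings, seed_labels, iters=10):
    """Propagate sparse user labels to every patch embedding by iteratively
    refining one centroid per class; seed patches keep their labels."""
    classes = sorted(set(seed_labels.values()))
    centroids = {c: _mean([embeddings[i] for i, l in seed_labels.items() if l == c])
                 for c in classes}
    labels = []
    for _ in range(iters):
        # Assign every patch to its nearest class centroid.
        labels = [min(classes, key=lambda c: _dist(e, centroids[c]))
                  for e in embeddings]
        for i, l in seed_labels.items():      # user-provided seeds never move
            labels[i] = l
        centroids = {c: _mean([e for e, l in zip(embeddings, labels) if l == c])
                     for c in classes}
    return labels

def _mean(vectors):
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy 2-D "embeddings": two tight groups, one labeled seed patch in each.
embeddings = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
labels = seeded_cluster(embeddings, {0: "benign", 2: "cancer"})
```

Because seeds are re-imposed on every iteration, each class always retains at least one member and the centroids stay anchored to the user's intent.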
Affiliation(s)
- Eduard Chelebian: Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden
- Christophe Avenel: Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden
- Francesco Ciompi: Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Carolina Wählby: Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden
3. Zhang X, Yu X, Liang W, Zhang Z, Zhang S, Xu L, Zhang H, Feng Z, Song M, Zhang J, Feng S. Deep learning-based accurate diagnosis and quantitative evaluation of microvascular invasion in hepatocellular carcinoma on whole-slide histopathology images. Cancer Med 2024; 13:e7104. [PMID: 38488408] [PMCID: PMC10941532] [DOI: 10.1002/cam4.7104]
Abstract
BACKGROUND Microvascular invasion (MVI) is an independent prognostic factor associated with early recurrence and poor survival after resection of hepatocellular carcinoma (HCC). However, the traditional pathology approach to diagnosing MVI is relatively subjective, time-consuming, and heterogeneous. The aim of this study was to develop a deep-learning model that could significantly improve the efficiency and accuracy of MVI diagnosis. MATERIALS AND METHODS We collected H&E-stained slides from 753 patients with HCC at the First Affiliated Hospital of Zhejiang University. An external validation set of 358 patients was selected from The Cancer Genome Atlas database. The deep-learning model was trained by simulating the method used by pathologists to diagnose MVI. Model performance was evaluated with accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic curve. RESULTS We developed an MVI artificial intelligence diagnostic model (MVI-AIDM) that achieved an accuracy of 94.25% in the independent external validation set. The MVI-positive detection rate of MVI-AIDM was significantly higher than that of pathologists. Visualization results demonstrated the recognition of micro-MVI foci that are difficult to identify by traditional pathology. Additionally, the model provided automatic quantification of the number of cancer cells and spatial information regarding MVI. CONCLUSIONS We developed a deep-learning diagnostic model that performed well and improved the efficiency and accuracy of MVI diagnosis. The model provided spatial information on MVI that is essential to accurately predict HCC recurrence after surgery.
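The evaluation metrics listed (accuracy, precision, recall, F1) all derive from the binary confusion matrix. A minimal sketch with hypothetical labels, not the study's data:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (1 = positive)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": (tp + tn) / len(pairs), "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical MVI labels: 3 positives, 2 negatives; one positive missed.
m = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```

Here precision is perfect (no false positives) while recall drops to 2/3, illustrating why the study reports both rather than accuracy alone.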
Affiliation(s)
- Xiuming Zhang: Department of Pathology, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, P. R. China
- Xiaotian Yu: Department of Computer Science and Technology, Zhejiang University, Hangzhou, P. R. China
- Wenjie Liang: Department of Radiology, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, P. R. China
- Zhongliang Zhang: School of Management, Hangzhou Dianzi University, Hangzhou, P. R. China
- Shengxuming Zhang: Department of Computer Science and Technology, Zhejiang University, Hangzhou, P. R. China
- Linjie Xu: Department of Pathology, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, P. R. China
- Han Zhang: Department of Pathology, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, P. R. China
- Zunlei Feng: Department of Computer Science and Technology, Zhejiang University, Hangzhou, P. R. China
- Mingli Song: Department of Computer Science and Technology, Zhejiang University, Hangzhou, P. R. China
- Jing Zhang: Department of Pathology, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, P. R. China
- Shi Feng: Department of Pathology, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, P. R. China
4. Mahbub T, Obeid A, Javed S, Dias J, Hassan T, Werghi N. Center-Focused Affinity Loss for Class Imbalance Histology Image Classification. IEEE J Biomed Health Inform 2024; 28:952-963. [PMID: 37999960] [DOI: 10.1109/jbhi.2023.3336372]
Abstract
Early-stage cancer diagnosis potentially improves the chances of survival for many cancer patients worldwide. Manual examination of Whole Slide Images (WSIs) for tumor-microenvironment analysis is a time-consuming task. To overcome this limitation, deep learning combined with computational pathology has been proposed to assist pathologists in efficiently assessing cancerous spread. Nevertheless, existing deep learning methods are ill-equipped to handle fine-grained histopathology datasets: these models are constrained by the conventional softmax loss function, which does not encourage them to learn distinct representational embeddings for similarly textured WSIs with imbalanced data distributions. To address this problem, we propose a novel center-focused affinity loss (CFAL) function that 1) constructs uniformly distributed class prototypes in the feature space, 2) penalizes difficult samples, 3) minimizes intra-class variations, and 4) places greater emphasis on learning minority-class features. We evaluated the proposed CFAL loss function on two publicly available breast and colon cancer datasets with varying levels of class imbalance. CFAL shows better discrimination ability than popular loss functions such as ArcFace, CosFace, and focal loss. Moreover, it outperforms several state-of-the-art (SOTA) methods for histology image classification on both datasets.
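A rough sketch of the general idea, a prototype-based, frequency-weighted loss. This is in the spirit of CFAL but is not the paper's formulation; the prototype positions, weighting scheme, and all values below are illustrative assumptions:

```python
import math

def prototype_affinity_loss(embedding, label, prototypes, class_counts):
    """Softmax over negative distances to fixed class prototypes, weighted
    inversely by class frequency so minority-class samples contribute more.
    A sketch in the spirit of CFAL, not the paper's exact loss."""
    exps = {c: math.exp(-math.dist(embedding, p)) for c, p in prototypes.items()}
    p_true = exps[label] / sum(exps.values())
    # Inverse-frequency weight: >1 for minority classes, <1 for majority ones.
    weight = sum(class_counts.values()) / (len(class_counts) * class_counts[label])
    return -weight * math.log(p_true)

protos = {0: (0.0, 0.0), 1: (4.0, 0.0)}   # hypothetical class prototypes
counts = {0: 90, 1: 10}                   # imbalanced class frequencies
loss_major = prototype_affinity_loss((0.1, 0.0), 0, protos, counts)
loss_minor = prototype_affinity_loss((3.9, 0.0), 1, protos, counts)
# Same geometry, but the minority-class sample is weighted 9x more heavily.
```

The weighting term is what gives minority classes "greater emphasis" during training, echoing point 4) of the abstract.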
5. Li H, Xie J, Song J, Jin C, Xin H, Pan X, Ke J, Yuan Y, Shen H, Ning G. CRCS: An automatic image processing pipeline for hormone level analysis of Cushing's disease. Methods 2024; 222:28-40. [PMID: 38159688] [DOI: 10.1016/j.ymeth.2023.12.003]
Abstract
Due to the abnormal secretion of adrenocorticotropic hormone (ACTH) by tumors, Cushing's disease leads to hypercortisolemia, a precursor to a series of metabolic disorders and serious complications. Cushing's disease has a high recurrence rate, a short time to recurrence, and poorly understood causes of recurrence after surgical resection. Qualitative or quantitative automatic image analysis of histology images can potentially provide insights into Cushing's disease, but to the best of our knowledge no software has been available for this purpose. In this study, we propose a quantitative image analysis-based pipeline, CRCS, which aims to explore the relationship between the expression level of ACTH in normal cell tissues adjacent to tumor cells and the postoperative prognosis of patients. CRCS mainly consists of image-level clustering, cluster-level multi-modal image registration, patch-level image classification, and pixel-level image segmentation on whole slide images (WSIs). On both the image registration and classification tasks, CRCS achieves state-of-the-art performance compared to recently published methods on our collected benchmark dataset. In addition, CRCS achieves an accuracy of 0.83 for the postoperative prognosis of 12 cases. CRCS demonstrates great potential for supporting automatic diagnosis and treatment of Cushing's disease.
Affiliation(s)
- Haiyue Li: Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China
- Jing Xie: Department of Pathology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Ruijin 2nd Road, Shanghai 200025, China
- Jialin Song: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiao Tong University, Xi'an 710049, China
- Cheng Jin: Medical Robot Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Hongyi Xin: University of Michigan - Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Xiaoyong Pan: Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China
- Jing Ke: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Ye Yuan: Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China
- Hongbin Shen: Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China
- Guang Ning: State Key Laboratory of Medical Genomes, National Clinical Research Center for Endocrine and Metabolic Diseases, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Laboratory of Endocrinology and Metabolism, Institute of Health Sciences, Shanghai Institutes for Biological Sciences (SIBS), Chinese Academy of Sciences (CAS) & Shanghai Jiao Tong University School of Medicine (SJTUSM), Shanghai, China
6. Xu L, Ma H, Guan Y, Liu J, Huang H, Zhang Y, Tian L. A Siamese Network With Node Convolution for Individualized Predictions Based on Connectivity Maps Extracted From Resting-State fMRI Data. IEEE J Biomed Health Inform 2023; 27:5418-5429. [PMID: 37578917] [DOI: 10.1109/jbhi.2023.3304974]
Abstract
Deep learning has demonstrated great potential for objective diagnosis of neuropsychiatric disorders based on neuroimaging data, including resting-state functional magnetic resonance imaging (RS-fMRI). However, insufficient sample size has long been a bottleneck for training deep models for this purpose. In this study, we proposed a Siamese network with node convolution (SNNC) for individualized predictions based on RS-fMRI data. Because the Siamese network takes a sample pair (rather than a single sample) as input, the problem of insufficient sample size can largely be alleviated. To adapt to connectivity maps extracted from RS-fMRI data, we applied node convolution to each of the two branches of the Siamese network. For regression purposes, we replaced the contrastive loss of the classic Siamese network with a mean square error loss, enabling the network to quantitatively predict label differences. The label of a test sample can be predicted from any training sample by adding that training sample's label to the predicted label difference between the two. The final prediction for a test sample in this study was the average of the predictions based on each of the training samples. The performance of the proposed SNNC was evaluated with age and IQ predictions on a public dataset (Cam-CAN). The results indicated that SNNC can make effective predictions even with a sample size as small as 40, and it achieved state-of-the-art accuracy among a variety of deep models and standard machine learning approaches.
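The pairwise inference scheme described, predicting a test label by averaging (training label + predicted difference) over every training sample, can be sketched as follows. The difference regressor here is a toy stand-in, not the trained SNNC:

```python
def predict_from_pairs(diff_model, train_x, train_y, test_x):
    """Pairwise inference: predict the test label as the average of
    (training label + predicted label difference) over all training samples."""
    preds = [y + diff_model(test_x, x) for x, y in zip(train_x, train_y)]
    return sum(preds) / len(preds)

# Toy stand-in for a trained difference regressor: labels follow y = 2x,
# so the true difference between samples a and b is 2*(a - b).
diff_model = lambda a, b: 2 * (a - b)
estimate = predict_from_pairs(diff_model, [1.0, 2.0, 3.0], [2.0, 4.0, 6.0], 5.0)
```

Note how one test sample yields as many pair predictions as there are training samples, which is exactly how the Siamese pairing multiplies the effective training and inference data.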
7. Chen J, Xue Y, Ren L, Lv K, Du P, Cheng H, Sun S, Hua L, Xie Q, Wu R, Gong Y. Predicting meningioma grades and pathologic marker expression via deep learning. Eur Radiol 2023. [PMID: 37853176] [DOI: 10.1007/s00330-023-10258-2]
Abstract
OBJECTIVES To establish a deep learning (DL) model for predicting tumor grades and expression of pathologic markers of meningioma. METHODS A total of 1192 meningioma patients from two centers who underwent surgical resection between September 2018 and December 2021 were retrospectively included. The pathological data and post-contrast T1-weighted images for each patient were collected. The patients from institute I were subdivided into training, validation, and testing sets, while the patients from institute II served as the external testing cohort. A fine-tuned ResNet50 model based on transfer learning was adopted to classify WHO grade in the whole cohort and to predict the Ki-67 index, H3K27me3, and progesterone receptor (PR) status of grade 1 meningiomas. Predictive performance was evaluated with the accuracy and loss curves, confusion matrices, receiver operating characteristic (ROC) curves, and the area under the curve (AUC). RESULTS The DL prediction model for each label achieved high predictive performance in both cohorts. For WHO grade prediction, the AUC was 0.966 (95%CI 0.957-0.975) in the internal testing set and 0.669 (95%CI 0.643-0.695) in the external validation cohort. The AUCs for predicting Ki-67 index, H3K27me3, and PR status were 0.905 (95%CI 0.895-0.915), 0.773 (95%CI 0.760-0.786), and 0.771 (95%CI 0.750-0.792) in the internal testing set and 0.591 (95%CI 0.562-0.620), 0.658 (95%CI 0.648-0.668), and 0.703 (95%CI 0.674-0.732) in the external validation cohort, respectively. CONCLUSION DL models can preoperatively predict meningioma grades and pathologic marker expression with favorable predictive performance. CLINICAL RELEVANCE STATEMENT Our DL model could predict meningioma grades and the expression of pathologic markers, identifying high-risk patients with WHO grade 1 meningioma who would warrant a more aggressive operative intervention preoperatively and a more frequent follow-up schedule postoperatively.
KEY POINTS WHO grades and some pathologic markers of meningioma were associated with therapeutic strategies and clinical outcomes. A deep learning-based approach was employed to develop a model for predicting meningioma grades and the expression of pathologic markers. Preoperative prediction of meningioma grades and the expression of pathologic markers was beneficial for clinical decision-making.
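The many AUC values reported above have a simple probabilistic reading: the chance that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal Mann-Whitney-style sketch (scores and labels are illustrative):

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney formulation: the probability that a random
    positive scores higher than a random negative (ties count as 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 3 of the 4 positive/negative pairs are ordered correctly -> AUC = 0.75.
auc = roc_auc([0.9, 0.2, 0.8, 0.3], [1, 0, 0, 1])
```

Under this reading, the drop from 0.966 internally to 0.669 externally means the model's ranking of high- versus low-grade cases degrades substantially on unseen-center data.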
Affiliation(s)
- Jiawei Chen: Department of Neurosurgery of Huashan Hospital, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Fudan University, Shanghai, China
- Yanping Xue: Department of Neurosurgery of Huashan Hospital, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Fudan University, Shanghai, China
- Leihao Ren: Department of Neurosurgery of Huashan Hospital, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Fudan University, Shanghai, China
- Kun Lv: Department of Radiology, Huashan Hospital, Fudan University, Shanghai, China
- Peng Du: Department of Radiology, Huashan Hospital, Fudan University, Shanghai, China
- Haixia Cheng: Department of Pathology, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China
- Shuchen Sun: Department of Neurosurgery, Shanghai International Hospital, Shanghai, China; Department of Neurosurgery, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Lingyang Hua: Department of Neurosurgery of Huashan Hospital, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Fudan University, Shanghai, China
- Qing Xie: Department of Neurosurgery of Huashan Hospital, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Fudan University, Shanghai, China
- Ruiqi Wu: Department of Neurosurgery of Huashan Hospital, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Fudan University, Shanghai, China
- Ye Gong: Department of Neurosurgery of Huashan Hospital, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Fudan University, Shanghai, China; Department of Critical Care Medicine, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China
8. Yang J, Cao Y, Zhou F, Li C, Lv J, Li P. Combined deep-learning MRI-based radiomic models for preoperative risk classification of endometrial endometrioid adenocarcinoma. Front Oncol 2023; 13:1231497. [PMID: 37909025] [PMCID: PMC10613647] [DOI: 10.3389/fonc.2023.1231497]
Abstract
Background Differences exist between high- and low-risk endometrial cancer (EC) in terms of whether lymph node dissection is performed. Factors in the European Society for Medical Oncology (ESMO), European SocieTy for Radiotherapy & Oncology (ESTRO), and European Society of Gynaecological Oncology (ESGO) risk classification, such as tumor grade, myometrial invasion (MDI), and lymphovascular space invasion (LVSI), can often only be accurately assessed postoperatively. The aim of our study was to estimate the risk classification of patients with endometrial endometrioid adenocarcinoma before surgery and to offer individualized treatment plans based on their risk classification. Methods Clinical information and the last preoperative pelvic magnetic resonance imaging (MRI) examination of patients whose endometrial endometrioid adenocarcinoma was confirmed by postoperative pathology were collected retrospectively. Regions of interest (ROIs) were then delineated on T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and diffusion-weighted imaging (DWI) MRI scans, and traditional radiomics features and deep-learning image features were extracted. A final radiomics nomogram integrating traditional radiomics features, deep-learning image features, and clinical information was constructed to distinguish between low- and high-risk patients (based on the 2020 ESMO-ESGO-ESTRO guidelines). The efficacy of the model was evaluated in the training and validation sets. Results We included 168 patients treated from January 1, 2020 to July 29, 2021, of whom the 95 patients from 2021 formed the training set and the 73 patients from 2020 formed the validation set. The area under the curve (AUC) of the radiomics nomogram was 0.923 (95%CI: 0.865-0.980) in the training set and 0.842 (95%CI: 0.762-0.923) in the validation set. The nomogram performed better than both the traditional radiomics model and the deep-learning radiomics model. Conclusion MRI-based radiomics models can be useful for preoperative risk classification of patients with endometrial endometrioid adenocarcinoma.
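A nomogram of this kind ultimately reduces to a weighted combination of scores passed through a logistic link. The sketch below is purely illustrative: the weights, bias, and feature names are hypothetical placeholders, not the coefficients fitted in the study:

```python
import math

def nomogram_score(radiomics, deep, clinical,
                   weights=(1.2, 0.9, 0.6), bias=-1.0):
    """Logistic combination of a traditional radiomics score, a deep-learning
    image score and a clinical covariate. Weights and bias are illustrative
    placeholders, not the study's fitted coefficients."""
    z = bias + weights[0] * radiomics + weights[1] * deep + weights[2] * clinical
    return 1.0 / (1.0 + math.exp(-z))
```

The returned probability can then be thresholded to assign a patient to the low- or high-risk group, which is how such a nomogram supports preoperative decisions.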
Affiliation(s)
- Pu Li: Clinical School of Obstetrics and Gynecology Center, Tianjin Medical University, Tianjin, China
9. Huang X, Bajaj R, Cui W, Hendricks MJ, Wang Y, Yap NAL, Ramasamy A, Maung S, Cap M, Zhou H, Torii R, Dijkstra J, Bourantas CV, Zhang Q. CARDIAN: a novel computational approach for real-time end-diastolic frame detection in intravascular ultrasound using bidirectional attention networks. Front Cardiovasc Med 2023; 10:1250800. [PMID: 37868778] [PMCID: PMC10588184] [DOI: 10.3389/fcvm.2023.1250800]
Abstract
Introduction Changes in coronary artery luminal dimensions during the cardiac cycle can impact the accurate quantification of volumetric analyses in intravascular ultrasound (IVUS) image studies. Accurate end-diastolic (ED) frame detection is pivotal for guiding interventional decisions, optimizing therapeutic interventions, and ensuring standardized volumetric analysis in research studies. Images acquired at different phases of the cardiac cycle may also lead to inaccurate quantification of atheroma volume due to the longitudinal motion of the catheter in relation to the vessel. As IVUS images are acquired throughout the cardiac cycle, end-diastolic frames are typically identified retrospectively by human analysts to minimize motion artefacts and enable more accurate and reproducible volumetric analysis. Methods In this paper, a novel neural network-based approach for accurate end-diastolic frame detection in IVUS sequences is proposed, trained using electrocardiogram (ECG) signals acquired synchronously during IVUS acquisition. The framework integrates dedicated motion encoders and a bidirectional attention recurrent network (BARNet) with a temporal difference encoder to extract frame-by-frame motion features corresponding to the phases of the cardiac cycle. In addition, a spatiotemporal rotation encoder is included to capture the IVUS catheter's rotational movement with respect to the coronary artery. Results With a prediction tolerance of 66.7 ms, the proposed approach found 71.9%, 67.8%, and 69.9% of end-diastolic frames in the left anterior descending, left circumflex, and right coronary arteries, respectively, when tested against ECG estimations. When compared with the estimations of two expert analysts, the approach achieved superior performance. Discussion These findings indicate that the developed methodology is accurate and fully reproducible, and it should therefore be preferred over expert annotation for end-diastolic frame detection in IVUS sequences.
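The tolerance-based evaluation, counting an ED frame as "found" when a prediction lands within 66.7 ms of the ECG-derived reference, can be sketched as a simple matching rate (all timestamps below are toy values, not the study's data):

```python
def detection_rate(predicted_ms, reference_ms, tol_ms=66.7):
    """Fraction of reference end-diastolic timestamps that have at least one
    predicted frame within +/- tol_ms (66.7 ms, as in the reported evaluation)."""
    hits = sum(1 for r in reference_ms
               if any(abs(p - r) <= tol_ms for p in predicted_ms))
    return hits / len(reference_ms)

# Toy ECG-derived ED times vs. model predictions, in milliseconds:
# hits at 0, 800 and 2400 ms; the 1600 ms beat is missed by 100 ms.
rate = detection_rate([30, 790, 1500, 2410], [0, 800, 1600, 2400])
```

The reported per-artery figures (71.9%, 67.8%, 69.9%) correspond to this rate computed against the synchronously recorded ECG.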
Affiliation(s)
- Xingru Huang: School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom; School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, China
- Retesh Bajaj: Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, London, United Kingdom; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, United Kingdom
- Weiwei Cui: School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
- Yaqi Wang: College of Media Engineering, Zhejiang University of Media and Communications, Hangzhou, China
- Nathan A. L. Yap: Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, London, United Kingdom; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, United Kingdom
- Anantharaman Ramasamy: Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, London, United Kingdom; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, United Kingdom
- Soe Maung: Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, London, United Kingdom; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, United Kingdom
- Murat Cap: Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, London, United Kingdom; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, United Kingdom
- Huiyu Zhou: School of Computing and Mathematical Sciences, University of Leicester, Leicester, United Kingdom
- Ryo Torii: Department of Mechanical Engineering, University College London, London, United Kingdom
- Christos V. Bourantas: Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, London, United Kingdom; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, United Kingdom
- Qianni Zhang: School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
10. Zheng T, Chen W, Li S, Quan H, Zou M, Zheng S, Zhao Y, Gao X, Cui X. Learning how to detect: A deep reinforcement learning method for whole-slide melanoma histopathology images. Comput Med Imaging Graph 2023; 108:102275. [PMID: 37567046] [DOI: 10.1016/j.compmedimag.2023.102275]
Abstract
Cutaneous melanoma represents one of the most life-threatening malignancies. Histopathological image analysis serves as a vital tool for early melanoma detection. Deep neural network (DNN) models are frequently employed to aid pathologists in enhancing the efficiency and accuracy of diagnoses. However, due to the paucity of well-annotated, high-resolution, whole-slide histopathology image (WSI) datasets, WSIs are typically fragmented into numerous patches during the model training and testing stages. This process disregards the inherent interconnectedness among patches, potentially impeding the models' performance. Additionally, the presence of excess, non-contributing patches extends processing times and introduces substantial computational burdens. To mitigate these issues, we draw inspiration from the clinical decision-making processes of dermatopathologists to propose an innovative, weakly supervised deep reinforcement learning framework, titled Fast medical decision-making in melanoma histopathology images (FastMDP-RL). This framework expedites model inference by reducing the number of irrelevant patches identified within WSIs. FastMDP-RL integrates two DNN-based agents: the search agent (SeAgent) and the decision agent (DeAgent). The SeAgent initiates actions, steered by the image features observed in the current viewing field at various magnifications. Simultaneously, the DeAgent provides labeling probabilities for each patch. We utilize multi-instance learning (MIL) to construct a teacher-guided model (MILTG), serving a dual purpose: rewarding the SeAgent and guiding the DeAgent. Our evaluations were conducted using two melanoma datasets: the publicly accessible TCIA-CM dataset and the proprietary MELSC dataset. Our experimental findings affirm FastMDP-RL's ability to expedite inference and accurately predict WSIs, even in the absence of pixel-level annotations. 
Moreover, our research investigates the WSI-based interactive environment, encompassing the design of agents, state and reward functions, and feature extractors suitable for melanoma tissue images. This investigation offers valuable insights and references for researchers engaged in related studies. The code is available at: https://github.com/titizheng/FastMDP-RL.
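The teacher-guided multi-instance learning (MIL) step that rewards the SeAgent and guides the DeAgent rests on the standard MIL assumption: a slide (bag) is positive if any of its patches (instances) is. A minimal sketch of that assumption with max-pooled patch scores (toy feature dimensions and a linear scorer of our own choosing, not the authors' MILTG model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mil_bag_score(patch_features, w, b):
    """Score each patch with a linear classifier, then max-pool to a bag score.

    patch_features: (n_patches, d) array of patch embeddings (hypothetical).
    Returns (bag_probability, per_patch_probabilities).
    """
    patch_probs = sigmoid(patch_features @ w + b)  # per-patch label probability
    bag_prob = patch_probs.max()                   # MIL: bag positive if any patch is
    return bag_prob, patch_probs

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))   # 8 toy patches, 4-dim features
w, b = rng.normal(size=4), 0.0
bag_p, patch_p = mil_bag_score(feats, w, b)
```

The per-patch probabilities play the role of the weak pseudo-labels that supervise patch-level decisions when only slide-level labels exist.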
Affiliation(s)
- Tingting Zheng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Weixing Chen
- Shenzhen College of Advanced Technology, University of the Chinese Academy of Sciences, Beijing, China
- Shuqin Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Hao Quan
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Mingchen Zou
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Song Zheng
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xinghua Gao
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
11
Rai HM. Cancer detection and segmentation using machine learning and deep learning techniques: a review. Multimed Tools Appl 2023. [DOI: 10.1007/s11042-023-16520-5]
12
Tran TO, Vo TH, Le NQK. Omics-based deep learning approaches for lung cancer decision-making and therapeutics development. Brief Funct Genomics 2023:elad031. [PMID: 37519050] [DOI: 10.1093/bfgp/elad031]
Abstract
Lung cancer is the most common cancer and the leading cause of cancer deaths globally. Besides clinicopathological observations and traditional molecular tests, the advent of robust and scalable techniques for nucleic acid analysis has revolutionized biological research and medicinal practice in lung cancer treatment. In response to the demands for minimally invasive procedures and technology development over the past decade, many types of multi-omics data at various genome levels have been generated. As omics data grow, artificial intelligence models, particularly deep learning, are prominent in developing more rapid and effective methods to potentially improve lung cancer patient diagnosis, prognosis and treatment strategy. This decade has seen genome-based deep learning models thriving in various lung cancer tasks, including cancer prediction, subtype classification, prognosis estimation, identification of cancer molecular signatures, treatment response prediction and biomarker development. In this study, we summarized available data sources for deep-learning-based lung cancer mining and provided an update on recent deep learning models in lung cancer genomics. Subsequently, we reviewed the current issues and discussed future research directions of deep-learning-based lung cancer genomics research.
Affiliation(s)
- Thi-Oanh Tran
- International Ph.D. Program in Cell Therapy and Regenerative Medicine, College of Medicine, Taipei Medical University, No 250 Wuxing Street, 110, Taipei, Taiwan
- AIBioMed Research Group, Taipei Medical University, No 250 Wuxing Street, 110, Taipei, Taiwan
- Hematology and Blood Transfusion Center, Bach Mai Hospital, No 78 Giai Phong Street, Hanoi, Viet Nam
- Thanh Hoa Vo
- Department of Science, School of Science and Computing, South East Technological University, Waterford X91 K0EK, Ireland
- Pharmaceutical and Molecular Biotechnology Research Center (PMBRC), South East Technological University, Waterford X91 K0EK, Ireland
- Nguyen Quoc Khanh Le
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, 250 Wuxing Street, 110, Taipei, Taiwan
- AIBioMed Research Group, Taipei Medical University, No 250 Wuxing Street, 110, Taipei, Taiwan
- Research Center for Artificial Intelligence in Medicine, Taipei Medical University, 250 Wuxing Street, 110, Taipei, Taiwan
- Translational Imaging Research Center, Taipei Medical University Hospital, 252 Wuxing Street, 110, Taipei, Taiwan
13
Sfayyih AH, Sulaiman N, Sabry AH. A review on lung disease recognition by acoustic signal analysis with deep learning networks. J Big Data 2023;10:101. [PMID: 37333945] [PMCID: PMC10259357] [DOI: 10.1186/s40537-023-00762-z]
Abstract
Recently, deep learning and machine learning have made assistive diagnosis in healthcare increasingly viable. Through auditory analysis and medical imaging, they also improve predictive accuracy for prompt, early disease detection. Medical professionals welcome such technological support because it helps them manage more patients despite the shortage of skilled human resources. Alongside serious illnesses such as lung cancer, the prevalence of respiratory disorders is steadily rising and endangering society. Because early prediction and immediate treatment are crucial for respiratory disorders, chest X-rays and respiratory sound audio are proving to be quite helpful together. In contrast to the many review studies on lung disease classification and detection using deep learning on images, only two review studies based on signal analysis for lung disease diagnosis have been conducted, in 2011 and 2018. This work therefore provides a review of lung disease recognition by acoustic signal analysis with deep learning networks. We anticipate that physicians and researchers working with sound-signal-based machine learning will find this material beneficial.
Affiliation(s)
- Alyaa Hamel Sfayyih
- Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang, Malaysia
- Nasri Sulaiman
- Department of Electrical and Electronic Engineering, Faculty of Engineering, Universiti Putra Malaysia, 43400 Serdang, Malaysia
- Ahmad H. Sabry
- Department of Computer Engineering, Al-Nahrain University, Al Jadriyah Bridge, 64074 Baghdad, Iraq
14
Zhu Y, Hu P, Li X, Tian Y, Bai X, Liang T, Li J. An End-to-End Data-Adaptive Pancreas Segmentation System with an Image Quality Control Toolbox. Journal of Healthcare Engineering 2023;2023:1-12. [DOI: 10.1155/2023/3617318]
Abstract
With the development of radiology and computer technology, diagnosis by medical imaging is heading toward precision and automation. Due to complex anatomy around the pancreatic tissue and high demands for clinical experience, the assisted pancreas segmentation system will greatly promote clinical efficiency. However, the existing segmentation model suffers from poor generalization among images from multiple hospitals. In this paper, we propose an end-to-end data-adaptive pancreas segmentation system to tackle the problems of lack of annotations and model generalizability. The system employs adversarial learning to transfer features from labeled domains to unlabeled domains, seeking a dynamic balance between domain discrimination and unsupervised segmentation. The image quality control toolbox is embedded in the system, which standardizes image quality in terms of intensity, field of view, and so on, to decrease heterogeneity among image domains. In addition, the system implements a data-adaptive process end-to-end without complex operations by doctors. The experiments are conducted on an annotated public dataset and an unannotated in-hospital dataset. The results indicate that after data adaptation, the segmentation performance measured by the dice similarity coefficient on unlabeled images improves from 58.79% to 75.43%, with a gain of 16.64%. Furthermore, the system preserves quantitatively structured information such as the pancreas’ size and volume, as well as objective and accurate visualized images, which assists clinicians in diagnosing and formulating treatment plans in a timely and accurate manner.
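The dice similarity coefficient quoted above (improving from 58.79% to 75.43% after adaptation) measures the overlap between a predicted and a reference mask, 2|A∩B| / (|A|+|B|). A minimal sketch of the computation (illustrative, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy masks: the prediction recovers 3 of 4 labeled pixels and adds 1 false positive.
pred   = np.array([[1, 1, 0], [1, 1, 0]])
target = np.array([[1, 1, 0], [1, 0, 1]])
score = dice_coefficient(pred, target)   # 2*3 / (4 + 4) = 0.75
```

The small `eps` keeps the ratio defined when both masks are empty.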
15
Foucart A, Debeir O, Decaestecker C. Shortcomings and areas for improvement in digital pathology image segmentation challenges. Comput Med Imaging Graph 2023;103:102155. [PMID: 36525770] [DOI: 10.1016/j.compmedimag.2022.102155]
Abstract
Digital pathology image analysis challenges have been organised regularly since 2010, often with events hosted at major conferences and results published in high-impact journals. These challenges mobilise a lot of energy from organisers, participants, and expert annotators (especially for image segmentation challenges). This study reviews image segmentation challenges in digital pathology and the top-ranked methods, with a particular focus on how reference annotations are generated and how the methods' predictions are evaluated. We found important shortcomings in the handling of inter-expert disagreement and the relevance of the evaluation process chosen. We also noted key problems with the quality control of various challenge elements that can lead to uncertainties in the published results. Our findings show the importance of greatly increasing transparency in the reporting of challenge results, and the need to make publicly available the evaluation codes, test set annotations and participants' predictions. The aim is to properly ensure the reproducibility and interpretation of the results and to increase the potential for exploitation of the substantial work done in these challenges.
Affiliation(s)
- Adrien Foucart
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium
- Olivier Debeir
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium
- Christine Decaestecker
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium
16
Gao Z, Hong B, Li Y, Zhang X, Wu J, Wang C, Zhang X, Gong T, Zheng Y, Meng D, Li C. A semi-supervised multi-task learning framework for cancer classification with weak annotation in whole-slide images. Med Image Anal 2023;83:102652. [PMID: 36327654] [DOI: 10.1016/j.media.2022.102652]
Abstract
Cancer region detection (CRD) and subtyping are two fundamental tasks in digital pathology image analysis. The development of data-driven models for CRD and subtyping on whole-slide images (WSIs) would mitigate the burden of pathologists and improve their accuracy in diagnosis. However, the existing models face two major limitations. Firstly, they typically require large-scale datasets with precise annotations, which contradicts the original intention of reducing labor effort. Secondly, for the subtyping task, non-cancerous regions within a WSI are treated the same as cancerous regions, which confuses a subtyping model during training. To tackle the latter limitation, previous research proposed performing CRD first to rule out non-cancerous regions, then training a subtyping model on the remaining cancerous patches. However, training the two stages separately ignores the interaction between the tasks and propagates errors from the CRD task to the subtyping task. To address these issues and concurrently improve the performance on both CRD and subtyping tasks, we propose a semi-supervised multi-task learning (MTL) framework for cancer classification. Our framework consists of a backbone feature extractor, two task-specific classifiers, and a weight control mechanism. The backbone feature extractor is shared by the two task-specific classifiers, such that the interaction of the CRD and subtyping tasks can be captured. The weight control mechanism preserves the sequential relationship of these two tasks and guarantees error back-propagation from the subtyping task to the CRD task under the MTL framework. We train the overall framework in a semi-supervised setting, where the datasets involve only small quantities of annotations produced by our minimal point-based (min-point) annotation strategy. Extensive experiments on four large datasets with different cancer types demonstrate the effectiveness of the proposed framework in both accuracy and generalization.
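The sequential relationship between the two tasks can be illustrated with a toy combined loss: the subtyping loss is evaluated only on patches marked cancerous, then added to the CRD loss with a weight. This is a hypothetical weighting of our own, not the paper's exact weight-control mechanism:

```python
import numpy as np

def cross_entropy(probs, labels, eps=1e-12):
    """Mean negative log-likelihood of the true class."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def multi_task_loss(crd_probs, crd_labels, sub_probs, sub_labels, w_sub=0.5):
    """CRD loss plus a subtyping loss restricted to cancerous patches,
    so non-cancerous regions cannot confuse the subtyping head."""
    loss_crd = cross_entropy(crd_probs, crd_labels)
    cancer = crd_labels == 1                      # reference cancerous patches
    loss_sub = (cross_entropy(sub_probs[cancer], sub_labels[cancer])
                if cancer.any() else 0.0)
    return loss_crd + w_sub * loss_sub

# Three toy patches: one non-cancerous, two cancerous with subtype labels 1 and 2.
crd_probs  = np.array([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7]])
crd_labels = np.array([0, 1, 1])
sub_probs  = np.array([[0.6, 0.3, 0.1], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]])
sub_labels = np.array([0, 1, 2])
loss = multi_task_loss(crd_probs, crd_labels, sub_probs, sub_labels)
```

In the actual framework both heads share one backbone, so this combined loss back-propagates subtyping error into the CRD features as well.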
Affiliation(s)
- Zeyu Gao
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Bangyang Hong
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Yang Li
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Xianli Zhang
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Jialun Wu
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Chunbao Wang
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Department of Pathology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an 710061, China
- Xiangrong Zhang
- School of Artificial Intelligence, Xidian University, Xi'an 710071, China
- Tieliang Gong
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Yefeng Zheng
- Tencent Jarvis Lab, Shenzhen, Guangdong 518075, China
- Deyu Meng
- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China
- Chen Li
- School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; Shaanxi Provincial Key Laboratory of Big Data Knowledge Engineering, Xi'an Jiaotong University, Xi'an 710049, China
17
Xu R, Wang Z, Liu Z, Han C, Yan L, Lin H, Xu Z, Feng Z, Liang C, Chen X, Pan X, Liu Z, Lu C. Histopathological Tissue Segmentation of Lung Cancer with Bilinear CNN and Soft Attention. BioMed Research International 2022;2022:1-10. [PMID: 35845926] [PMCID: PMC9283032] [DOI: 10.1155/2022/7966553]
Abstract
Automatic tissue segmentation in whole-slide images (WSIs) is a critical task in hematoxylin and eosin- (H&E-) stained histopathological images for accurate diagnosis and risk stratification of lung cancer. Classifying patches and stitching the classification results enables fast tissue segmentation of WSIs. However, due to tumour heterogeneity, large intraclass variability and small interclass variability make the classification task challenging. In this paper, we propose a novel bilinear convolutional neural network- (Bilinear-CNN-) based model with a bilinear convolutional module and a soft attention module to tackle this problem. The method investigates intraclass semantic correspondence and focuses on the more distinguishable features, so that feature variations between classes remain relatively large. The performance of the Bilinear-CNN-based model is compared with other state-of-the-art methods on a histopathological classification dataset consisting of 107.7 k patches of lung cancer. We further evaluate our proposed algorithm on an additional dataset from colorectal cancer. Extensive experiments show that the performance of our proposed method is superior to that of previous state-of-the-art ones, and the interpretability of our proposed method is demonstrated by Grad-CAM.
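The bilinear module at the heart of a bilinear CNN captures pairwise feature interactions by taking the outer product of two feature maps at each spatial position, average-pooling over positions, and applying the usual signed-square-root and L2 normalisation. A minimal sketch with toy dimensions (not the paper's network):

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling of two feature maps of shape (C, H, W):
    average outer product of channel vectors over spatial positions,
    flattened, then signed-sqrt and L2 normalised."""
    c_a, h, w = feat_a.shape
    c_b = feat_b.shape[0]
    a = feat_a.reshape(c_a, h * w)
    b = feat_b.reshape(c_b, h * w)
    outer = a @ b.T / (h * w)                 # (C_a, C_b) average outer product
    v = outer.reshape(-1)
    v = np.sign(v) * np.sqrt(np.abs(v))       # signed square root
    return v / (np.linalg.norm(v) + 1e-12)    # L2 normalisation

rng = np.random.default_rng(1)
fa = rng.normal(size=(8, 4, 4))   # toy feature maps: 8 channels, 4x4 spatial
fb = rng.normal(size=(6, 4, 4))
v = bilinear_pool(fa, fb)         # 8*6 = 48-dimensional descriptor
```

Feeding a patch classifier this second-order descriptor, rather than plain pooled features, is what lets the model model fine intraclass correspondences.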
18
Zhang H, Liu J, Wang P, Yu Z, Liu W, Chen H. Cross-Boosted Multi-Target Domain Adaptation for Multi-Modality Histopathology Image Translation and Segmentation. IEEE J Biomed Health Inform 2022;26:3197-3208. [PMID: 35196252] [DOI: 10.1109/jbhi.2022.3153793]
Abstract
Recent digital pathology workflows mainly focus on mono-modality histopathology image analysis, ignoring the complementarity between haematoxylin and eosin (H&E)-stained and immunohistochemistry (IHC)-stained images, which together can provide a comprehensive gold standard for cancer diagnosis. To resolve this issue, we propose a cross-boosted multi-target domain adaptation pipeline for multi-modality histopathology images, which contains a Cross-frequency Style-auxiliary Translation Network (CSTN) and a Dual Cross-boosted Segmentation Network (DCSN). First, CSTN achieves one-to-many translation from fluorescence microscopy images to H&E and IHC images to provide source-domain training data. To generate images with realistic color and texture, a Cross-frequency Feature Transfer Module (CFTM) is developed to restructure and normalize high-frequency content and low-frequency style features from different domains. Then, DCSN fulfills multi-target domain adaptive segmentation, where a dual-branch encoder is introduced and a Bidirectional Cross-domain Boosting Module (BCBM) is designed to implement cross-modality information complementation through bidirectional inter-domain collaboration. Finally, we establish the Multi-modality Thymus Histopathology (MThH) dataset, the largest publicly available H&E and IHC image benchmark. Experiments on the MThH dataset and several public datasets show that the proposed pipeline outperforms state-of-the-art methods on both histopathology image translation and segmentation.
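The general idea behind treating low-frequency components as "style" and high-frequency components as "content" can be sketched with a Fourier-domain amplitude swap, in the spirit of frequency-based domain adaptation. This is an illustration of the general technique, not the paper's CFTM:

```python
import numpy as np

def swap_low_freq_style(content, style, beta=0.1):
    """Replace the low-frequency amplitude of `content` with that of `style`,
    keeping content's phase. `beta` sets the half-width of the swapped
    low-frequency square as a fraction of image size (hypothetical default)."""
    fc = np.fft.fftshift(np.fft.fft2(content))
    fs = np.fft.fftshift(np.fft.fft2(style))
    amp_c, phase_c = np.abs(fc), np.angle(fc)
    amp_s = np.abs(fs)
    h, w = content.shape
    bh, bw = max(1, int(beta * h)), max(1, int(beta * w))
    cy, cx = h // 2, w // 2
    # Swap only the centred low-frequency block of the amplitude spectrum.
    amp_c[cy - bh:cy + bh, cx - bw:cx + bw] = amp_s[cy - bh:cy + bh, cx - bw:cx + bw]
    mixed = amp_c * np.exp(1j * phase_c)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

rng = np.random.default_rng(2)
content = rng.random((32, 32))   # toy grayscale tiles
style = rng.random((32, 32))
out = swap_low_freq_style(content, style)
```

Because phase (and the high-frequency amplitude) is preserved, the structural content survives while the coarse appearance shifts toward the style image.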
19
Jain DK, Lakshmi KM, Varma KP, Ramachandran M, Bharati S, Sharma K. Lung Cancer Detection Based on Kernel PCA-Convolution Neural Network Feature Extraction and Classification by Fast Deep Belief Neural Network in Disease Management Using Multimedia Data Sources. Computational Intelligence and Neuroscience 2022;2022:1-12. [PMID: 35669646] [PMCID: PMC9167006] [DOI: 10.1155/2022/3149406]
Abstract
In lung cancer, tumor histology is a significant predictor of treatment response and prognosis. Although pathologist review of tissue samples is the most pertinent approach for histology classification, current advances in DL for medical image analysis point to the importance of radiologic data in further characterizing disease and stratifying risk. Cancer is a complex global health problem that has seen an increase in death rates in recent years. The rapid growth of high-throughput technology and of the many ML techniques that have emerged in recent years has enabled more significant and exact disease diagnosis from subsets of traits. As a result, advanced ML approaches that can successfully distinguish lung cancer patients from healthy people are of major importance. This paper proposes lung tumor detection based on histopathological image analysis using deep learning architectures. The input histopathological image is preprocessed for noise removal, resizing, and enhancement. Image features are then extracted using Kernel PCA integrated with a convolutional neural network (KPCA-CNN), in which KPCA is used in the feature extraction layer of the CNN. The extracted features are classified using a Fast Deep Belief Neural Network (FDBNN). Finally, the output distinguishes tumorous from nontumorous lung cells in the input histopathological image. The experimental analysis covers various histopathological image datasets, reporting accuracy, precision, recall, and F-measure; the confusion matrix gives the actual and predicted tumor classes for an input image. Comparative analysis shows that the proposed technique outperforms an existing methodology across the various datasets.
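The kernel PCA step can be illustrated with a small self-contained RBF kernel PCA: build the kernel matrix, double-centre it in feature space, and project onto the leading eigenvectors. This is a toy stand-in for the KPCA feature-extraction stage with hypothetical parameters, not the KPCA-CNN itself:

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: kernel matrix -> double centring ->
    projection of training points onto the leading components."""
    sq = np.sum(X**2, axis=1)
    sq_dists = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * sq_dists)
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n   # centre in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)                # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_components]       # take the largest
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas                                   # projected coordinates

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 5))     # 20 toy feature vectors of dimension 5
Z = rbf_kernel_pca(X, n_components=2, gamma=0.5)
```

In the paper's pipeline these nonlinear components would replace or augment a convolutional feature layer before the FDBNN classifier.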
20
Khalil MA, Lee YC, Lien HC, Jeng YM, Wang CW. Fast Segmentation of Metastatic Foci in H&E Whole-Slide Images for Breast Cancer Diagnosis. Diagnostics (Basel) 2022;12. [PMID: 35454038] [DOI: 10.3390/diagnostics12040990]
Abstract
Breast cancer is the leading cause of death for women globally. In clinical practice, pathologists visually scan enormous gigapixel microscopic tissue slide images, which is a tedious and challenging task. In breast cancer diagnosis, micro-metastases and especially isolated tumor cells are extremely difficult to detect and are easily neglected, because tiny metastatic foci can be missed in visual examinations by medical doctors. However, the literature poorly explores the detection of isolated tumor cells, which could serve as a viable marker for determining the prognosis of T1N0M0 breast cancer patients. To address these issues, we present a deep learning-based framework for efficient and robust lymph node metastasis segmentation in routinely used hematoxylin and eosin-stained (H&E) whole-slide images (WSI) in minutes. A quantitative evaluation is conducted using 188 WSIs, containing 94 pairs of H&E-stained WSIs and immunohistochemical CK(AE1/AE3)-stained WSIs, which are used to produce a reliable and objective reference standard. The quantitative results demonstrate that the proposed method achieves 89.6% precision, 83.8% recall, 84.4% F1-score, and 74.9% mIoU, and that it performs significantly better than eight deep learning approaches, including two recently published models (v3_DCNN and Xception-65), three variants of Deeplabv3+ with three different backbones, and U-Net, SegNet, and FCN, in precision, recall, F1-score, and mIoU (p<0.001). Importantly, the proposed system is shown to be capable of identifying tiny metastatic foci in challenging cases with a high probability of misdiagnosis on visual inspection, where the baseline approaches tend to fail. For computational time comparison, the proposed method takes 2.4 min to process a WSI using four NVIDIA GeForce GTX 1080 Ti GPU cards and 9.6 min using a single card, and is notably faster than the baseline methods (4 times faster than U-Net and SegNet, 5 times faster than FCN, 2 times faster than the three variants of Deeplabv3+, 1.4 times faster than v3_DCNN, and 41 times faster than Xception-65).
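The precision, recall, F1-score, and IoU figures reported above all derive from pixel-level true/false positive and negative counts against the reference masks. A minimal sketch of the computation for a binary mask (illustrative, not the paper's evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Pixel-level precision, recall, F1 and IoU for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, f1, iou

# Toy flattened masks: tp=3, fp=1, fn=1.
pred   = np.array([1, 1, 1, 0, 0, 1])
target = np.array([1, 1, 0, 0, 1, 1])
p, r, f1, iou = segmentation_metrics(pred, target)
```

mIoU is then the IoU averaged over classes (or images), which is why it is typically the lowest of the four numbers.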
21
Wang CW, Lee YC, Chang CC, Lin YJ, Liou YA, Hsu PC, Chang CC, Sai AKO, Wang CH, Chao TK. A Weakly Supervised Deep Learning Method for Guiding Ovarian Cancer Treatment and Identifying an Effective Biomarker. Cancers (Basel) 2022;14:1651. [PMID: 35406422] [PMCID: PMC8996991] [DOI: 10.3390/cancers14071651]
Abstract
Ovarian cancer is a common malignant gynecological disease. Molecular target therapy, i.e., antiangiogenesis with bevacizumab, was found to be effective in some patients with epithelial ovarian cancer (EOC). Although careful patient selection is essential, no biomarkers are currently available for routine therapeutic use. To the authors' best knowledge, this is the first automated precision oncology framework to effectively identify and select EOC and peritoneal serous papillary carcinoma (PSPC) patients with a positive therapeutic effect. From March 2013 to January 2021, we assembled a database containing four kinds of immunohistochemical tissue samples (AIM2, C3, C5 and NLRP3) from patients diagnosed with EOC or PSPC and treated with bevacizumab in a hospital-based retrospective study. We developed a hybrid deep learning framework and weakly supervised deep learning models for each potential biomarker. The experimental results show that the proposed model in combination with AIM2 achieves accuracy 0.92, recall 0.97, F-measure 0.93 and AUC 0.97 in the first experiment (66% training, 34% testing), and accuracy 0.86 ± 0.07, precision 0.9 ± 0.07, recall 0.85 ± 0.06, F-measure 0.87 ± 0.06 and AUC 0.91 ± 0.05 in the second experiment using five-fold cross validation. Both Kaplan-Meier PFS analysis and Cox proportional hazards model analysis further confirmed that the proposed AIM2-DL model is able to distinguish patients gaining positive therapeutic effects with low cancer recurrence from patients with disease progression after treatment (p < 0.005).
Affiliation(s)
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Yu-Ching Lee
- Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Cheng-Chang Chang
- Department of Gynecology and Obstetrics, Tri-Service General Hospital, Taipei 11490, Taiwan
- Graduate Institute of Medical Sciences, National Defense Medical Center, Taipei 11490, Taiwan
- Yi-Jia Lin
- Department of Pathology, Tri-Service General Hospital, Taipei 11490, Taiwan
- Institute of Pathology and Parasitology, National Defense Medical Center, Taipei 11490, Taiwan
- Yi-An Liou
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Po-Chao Hsu
- Department of Gynecology and Obstetrics, Tri-Service General Hospital, Taipei 11490, Taiwan
- Graduate Institute of Medical Sciences, National Defense Medical Center, Taipei 11490, Taiwan
- Chun-Chieh Chang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Aung-Kyaw-Oo Sai
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Chih-Hung Wang
- Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, Taipei 11490, Taiwan
- Department of Otolaryngology-Head and Neck Surgery, National Defense Medical Center, Taipei 11490, Taiwan
- Tai-Kuang Chao
- Department of Pathology, Tri-Service General Hospital, Taipei 11490, Taiwan
- Institute of Pathology and Parasitology, National Defense Medical Center, Taipei 11490, Taiwan
|
22
Alawad M, Aljouie A, Alamri S, Alghamdi M, Alabdulkader B, Alkanhal N, Almazroa A. Machine Learning and Deep Learning Techniques for Optic Disc and Cup Segmentation – A Review. Clin Ophthalmol 2022; 16:747-764. [PMID: 35300031] [PMCID: PMC8923700] [DOI: 10.2147/opth.s348479]
Abstract
Background Globally, glaucoma is the second leading cause of blindness. Detecting glaucoma in its early stages is essential to avoid the disease complications that lead to blindness, so computer-aided diagnosis systems are powerful tools for overcoming the shortage of glaucoma screening programs. Methods A systematic search of public databases, including PubMed, Google Scholar, and other sources, was performed to identify relevant studies and to overview the publicly available fundus image datasets used to train, validate, and test machine learning and deep learning methods. Additionally, existing machine learning and deep learning methods for optic cup and disc segmentation were surveyed and critically reviewed. Results Eight publicly available fundus image datasets were found, comprising 15,445 images labeled as glaucoma or non-glaucoma with manually annotated optic disc and cup boundaries. Five metrics were identified for evaluating the developed models, and three main deep learning architectural designs were commonly used for optic disc and optic cup segmentation. Conclusion We provide future research directions for formulating robust optic cup and disc segmentation systems. Deep learning can be utilized in clinical settings for this task, but many challenges need to be addressed before this strategy can be used in clinical trials. Finally, architectural designs such as U-Net and its variants have been the most widely adopted.
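Among the evaluation metrics such reviews identify, the Dice coefficient and the IoU (Jaccard index) are the most common for disc and cup segmentation. A minimal numpy sketch on hypothetical binary masks (assumes at least one mask is non-empty; not tied to any specific reviewed method):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice similarity coefficient and IoU (Jaccard) for two binary masks.
    Assumes at least one mask contains foreground pixels."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return float(dice), float(iou)
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why segmentation papers often report only one of them.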
23
Zhang X, Zhang Y, Zhang G, Qiu X, Tan W, Yin X, Liao L. Deep Learning With Radiomics for Disease Diagnosis and Treatment: Challenges and Potential. Front Oncol 2022; 12:773840. [PMID: 35251962] [PMCID: PMC8891653] [DOI: 10.3389/fonc.2022.773840]
Abstract
The high-throughput extraction of quantitative imaging features from medical images for radiomic analysis, i.e., radiomics in a broad sense, is a rapidly developing and emerging research field that has been attracting increasing interest, particularly in multimodality and multi-omics studies. In this context, the quantitative analysis of multidimensional data plays an essential role in assessing the spatio-temporal characteristics of different tissues and organs and their microenvironment. Herein, recent developments in this method are reviewed, including manually defined features, data acquisition and preprocessing, lesion segmentation, feature extraction, feature selection and dimension reduction, statistical analysis, and model construction. In addition, deep learning-based techniques for automatic segmentation and radiomic analysis are analyzed with respect to limitations such as rigid workflows, manual/semi-automatic lesion annotation, inadequate feature criteria, and the lack of multicenter validation. Furthermore, a summary of the current state-of-the-art applications of this technology in disease diagnosis, treatment response, and prognosis prediction is presented from the perspective of radiology images, multimodality images, histopathology images, and three-dimensional dose distribution data, particularly in oncology. The potential and value of radiomics in diagnostic and therapeutic strategies are further analyzed, and, for the first time, the advances and challenges associated with dosiomics in radiotherapy are summarized, highlighting the latest progress in radiomics.
Finally, a robust framework for radiomic analysis is presented, and challenges and recommendations for future development are discussed, including but not limited to the factors that affect model stability (medical big data, multitype data, and expert medical knowledge), the limitations of data-driven processes (reproducibility and interpretability of studies, differing treatment alternatives across institutions, and the need for prospective research and clinical trials), and thoughts on future directions (the capability to achieve clinical application and an open platform for radiomics analysis).
24
Arlova A, Jin C, Wong-Rolle A, Chen ES, Lisle C, Brown GT, Lay N, Choyke PL, Turkbey B, Harmon S, Zhao C. Artificial Intelligence-based Tumor Segmentation in Mouse Models of Lung Adenocarcinoma. J Pathol Inform 2022; 13:100007. [PMID: 35242446] [PMCID: PMC8860735] [DOI: 10.1016/j.jpi.2022.100007]
Abstract
BACKGROUND Mouse models are highly effective for studying the pathophysiology of lung adenocarcinoma and evaluating new treatment strategies. Treatment efficacy is primarily determined by the total tumor burden measured on excised tumor specimens, a measurement process that is time-consuming and prone to human error. To address this issue, we developed a novel deep learning model to segment lung tumor foci on digitally scanned hematoxylin and eosin (H&E) histology slides. METHODS Digital slides of 239 mice from 9 experimental cohorts were split into training (n=137), validation (n=37), and testing (n=65) cohorts. Image patches of 500×500 pixels were extracted at 5× and 10× magnifications, along with binary masks of expert annotations representing ground-truth tumor regions. Deep learning models using the DeepLabV3+ and UNet architectures were trained for binary segmentation of tumor foci under varying stain normalization conditions. Segmentation performance was assessed by Dice coefficient, and detection was evaluated by sensitivity and positive predictive value (PPV). RESULTS The best model on patch-based validation was DeepLabV3+ with a ResNet-50 backbone, which achieved Dice coefficients of 0.890 and 0.873 on the validation and testing cohorts, respectively, corresponding to a sensitivity of 91.3 and PPV of 51.0 in the validation cohort and a sensitivity of 93.7 and PPV of 51.4 in the testing cohort. False positives could be reduced 10-fold by thresholding the artificial intelligence (AI)-predicted output by area, without negatively impacting the Dice coefficient. Evaluation of various stain normalization strategies did not demonstrate improvement over the baseline model. CONCLUSIONS A robust AI-based algorithm for detecting and segmenting lung tumor foci in pre-clinical mouse models was developed. The output of this algorithm is compatible with open-source software commonly used by researchers.
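The area-thresholding step described above (discarding tiny predicted foci to cut false positives) can be illustrated with a plain-Python connected-component sketch. The function name and the 4-connectivity choice are illustrative, not the authors' implementation:

```python
from collections import deque

def remove_small_components(mask, min_area):
    """Zero out 4-connected foreground components smaller than min_area.
    mask is a list of rows of 0/1 values; returns a new, filtered mask."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if out[i][j] and not seen[i][j]:
                # Breadth-first search to collect one component.
                comp, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and out[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) < min_area:          # too small: likely false positive
                    for y, x in comp:
                        out[y][x] = 0
    return out
```

Because only components below the area cutoff are removed, genuine tumor foci (which are large) keep their pixels, which is why the Dice coefficient is unaffected while small spurious detections disappear.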
25
Dholey M, Sarkar A, Giri A, Sadhu A, Chaudhury K, Chatterjee J. Pixel-Based Nuclei Segmentation in Fine Needle Aspiration Cytology of Lung Lesions. Advances in Intelligent Systems and Computing 2022:1-12. [DOI: 10.1007/978-981-16-4369-9_1]
26
Zhao L, Xu X, Hou R, Zhao W, Zhong H, Teng H, Han Y, Fu X, Sun J, Zhao J. Lung cancer subtype classification using histopathological images based on weakly supervised multi-instance learning. Phys Med Biol 2021; 66. [PMID: 34794136] [DOI: 10.1088/1361-6560/ac3b32]
Abstract
Objective. Subtype classification plays a guiding role in the clinical diagnosis and treatment of non-small-cell lung cancer (NSCLC). However, due to the gigapixel size of whole slide images (WSIs) and the absence of definitive morphological features, most automatic subtype classification methods for NSCLC require manually delineated regions of interest (ROIs) on WSIs. Approach. In this paper, a weakly supervised framework is proposed for accurate subtype classification while freeing pathologists from pixel-level annotation. With respect to the characteristics of histopathological images, we design a two-stage structure with ROI localization and subtype classification. We first develop a method called multi-resolution expectation-maximization convolutional neural network (MR-EM-CNN) to locate ROIs for subsequent subtype classification. The EM algorithm is introduced to select discriminative image patches for training a patch-wise network, with only WSI-wise labels available. A multi-resolution mechanism is designed for fine localization, similar to the coarse-to-fine process of manual pathological analysis. In the second stage, we build a novel hierarchical attention multi-scale network (HMS) for subtype classification. HMS can flexibly capture multi-scale features driven by the attention module and implements hierarchical feature interaction. Results. Experimental results on the 1002-patient Cancer Genome Atlas dataset achieved an AUC of 0.9602 for ROI localization and an AUC of 0.9671 for subtype classification. Significance. The proposed method shows superiority over other algorithms in the subtype classification of NSCLC, and the proposed framework can also be extended to other classification tasks with WSIs.
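The EM-driven patch selection described above amounts, in its simplest hard form, to keeping the patches the current network scores highest for the slide-level label and retraining on those. A hedged numpy sketch of that selection step for one slide (hypothetical probabilities; the actual MR-EM-CNN alternates this with retraining across resolutions):

```python
import numpy as np

def select_discriminative_patches(patch_probs, k):
    """Hard E-step: return (sorted) indices of the k patches whose
    predicted probability for the slide-level label is highest.
    These patches would then be used to retrain the patch-wise network."""
    order = np.argsort(np.asarray(patch_probs))[::-1]  # highest score first
    return np.sort(order[:k])
```

In the full algorithm this selection and the network update repeat until the chosen patch set stabilizes, so the network gradually focuses on diagnostically relevant regions without any pixel-level labels.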
27
Zheng Y, Jiang Z, Shi J, Xie F, Zhang H, Luo W, Hu D, Sun S, Jiang Z, Xue C. Encoding histopathology whole slide images with location-aware graphs for diagnostically relevant regions retrieval. Med Image Anal 2021; 76:102308. [PMID: 34856455] [DOI: 10.1016/j.media.2021.102308]
Abstract
Content-based histopathological image retrieval (CBHIR) has become popular in recent years in histopathological image analysis. CBHIR systems provide auxiliary diagnostic information for pathologists by searching a pre-established database for regions whose content is similar to the region of interest (ROI). Retrieving diagnostically relevant regions from a database of histopathological whole slide images (WSIs) is challenging yet significant for clinical applications. In this paper, we propose a novel framework for region retrieval from a WSI database based on location-aware graphs and deep hashing techniques. Compared with existing CBHIR frameworks, both the structural information and the global location information of ROIs in the WSI are preserved by graph convolution and self-attention operations, which makes the retrieval framework more sensitive to regions that are similar in tissue distribution. Moreover, benefiting from the graph structure, the proposed framework scales well with both the size and the shape variation of ROIs: it allows the pathologist to define query regions using free curves according to the appearance of the tissue. Thirdly, retrieval is achieved with a hashing technique, which keeps the framework efficient and adequate for practical large-scale WSI databases. The proposed method was evaluated on an in-house endometrium dataset with 2650 WSIs and on the public ACDC-LungHP dataset. The experimental results demonstrate that the proposed method achieved a mean average precision above 0.667 on the endometrium dataset and above 0.869 on the ACDC-LungHP dataset in the task of irregular region retrieval, superior to state-of-the-art methods. The average retrieval time from a database containing 1855 WSIs is 0.752 ms. The source code is available at https://github.com/zhengyushan/lagenet.
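Hash-based retrieval as described above reduces region search to comparing short binary codes, and the core operation is ranking database entries by Hamming distance to the query code. A small numpy sketch with made-up codes (not the hashes produced by the paper's network):

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database entries by Hamming distance to a binary query code.
    query_code: shape (bits,); db_codes: shape (n_entries, bits).
    Returns (indices sorted nearest-first, per-entry distances)."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    order = np.argsort(dists, kind="stable")  # stable: ties keep db order
    return order, dists
```

Because Hamming distance over packed bits is a handful of XOR/popcount instructions per entry, this comparison stays fast even for databases with thousands of WSIs, which is what makes the sub-millisecond retrieval time reported above plausible.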
28
Wang CW, Liou YA, Lin YJ, Chang CC, Chu PH, Lee YC, Wang CH, Chao TK. Artificial intelligence-assisted fast screening cervical high grade squamous intraepithelial lesion and squamous cell carcinoma diagnosis and treatment planning. Sci Rep 2021; 11:16244. [PMID: 34376717] [DOI: 10.1038/s41598-021-95545-y]
Abstract
Every year cervical cancer affects more than 300,000 people, and on average one woman is diagnosed with cervical cancer every minute. Early diagnosis and classification of cervical lesions greatly boost the chance of successful treatment, so automated diagnosis and classification of cervical lesions from Papanicolaou (Pap) smear images are in high demand. To the authors' best knowledge, this is the first study of fully automated cervical lesion analysis on whole slide images (WSIs) of conventional Pap smear samples. The presented deep learning-based cervical lesion diagnosis system is demonstrated to be able not only to detect high-grade squamous intraepithelial lesions (HSILs) or higher (squamous cell carcinoma; SQCC), which usually indicate that patients must be referred to colposcopy immediately, but also to process WSIs rapidly enough for practical clinical usage. We evaluate this framework at scale on a dataset of 143 whole slide images, and the proposed method achieves a high precision of 0.93, recall of 0.90, F-measure of 0.88, and Jaccard index of 0.84, showing that the proposed system is capable of segmenting HSILs or higher (SQCC) with high precision and reaches sensitivity comparable to the reference standard produced by pathologists. Based on Fisher's Least Significant Difference (LSD) test (P < 0.0001), the proposed method performs significantly better than two state-of-the-art benchmark methods (U-Net and SegNet) in precision, F-measure, and Jaccard index. In the run-time analysis, the proposed method takes only 210 seconds to process a WSI, 20 times faster than U-Net and 19 times faster than SegNet. In summary, the proposed method is demonstrated to be able both to detect HSILs or higher (SQCC), which indicate patients for further treatment, including colposcopy and surgery to remove the lesion, and to process WSIs rapidly for practical clinical usage.
29
Schüffler PJ, Yarlagadda DVK, Vanderbilt C, Fuchs TJ. Overcoming an Annotation Hurdle: Digitizing Pen Annotations from Whole Slide Images. J Pathol Inform 2021; 12:9. [PMID: 34012713] [PMCID: PMC8112348] [DOI: 10.4103/jpi.jpi_85_20]
Abstract
Background: The development of artificial intelligence (AI) in pathology frequently relies on digitally annotated whole slide images (WSIs). The creation of these annotations, manually drawn by pathologists in digital slide viewers, is time-consuming and expensive. At the same time, pathologists routinely annotate glass slides with a pen to outline cancerous regions, for example, for molecular assessment of the tissue. These pen annotations are currently considered artifacts and excluded from computational modeling. Methods: We propose a novel method to segment and fill hand-drawn pen annotations and convert them into a digital format accessible to computational models. Our method is implemented in Python as an open-source, publicly available software tool. Results: Our method extracts pen annotations from WSIs and saves them as annotation masks. On a dataset of 319 WSIs with pen markers, we validate our algorithm, segmenting the annotations with an overall Dice metric of 0.942, precision of 0.955, and recall of 0.943. Processing all images takes 15 min, in contrast to 5 h of manual digital annotation time. Further, the approach is robust against text annotations. Conclusions: We envision that our method can take advantage of already pen-annotated slides in scenarios in which the annotations would be helpful for training computational models. We conclude that, considering the large archives of many pathology departments that are currently being digitized, our method will help to collect large numbers of training samples from those data.
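One simple way to separate pen ink from pale H&E tissue, in the spirit of the method above though not its actual algorithm, is a per-pixel color heuristic: pen strokes are strongly saturated (blue or green marker) or nearly black, while counterstained tissue is pale. A hedged numpy sketch with illustrative thresholds (the function name and cutoff values are assumptions, not the paper's):

```python
import numpy as np

def pen_mask(rgb, spread_thresh=60, dark_thresh=80):
    """Rough pen-ink mask for an RGB image array of shape (H, W, 3).
    Flags pixels that are strongly colored (large max-min channel spread,
    e.g. blue/green marker) or nearly black; pale H&E tissue has a small
    spread and high brightness. Thresholds are illustrative, not tuned."""
    rgb = rgb.astype(np.int16)                    # avoid uint8 underflow
    spread = rgb.max(axis=-1) - rgb.min(axis=-1)  # crude saturation proxy
    dark = rgb.max(axis=-1) < dark_thresh         # near-black strokes
    return (spread > spread_thresh) | dark
```

A real pipeline would follow such a raw mask with morphological cleanup and hole filling to turn the thin outline into a solid annotation region, which is the "segment and fill" step the abstract describes.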