1
Huang L, Feng B, Yang Z, Feng ST, Liu Y, Xue H, Shi J, Chen Q, Zhou T, Chen X, Wan C, Chen X, Long W. A Transfer Learning Radiomics Nomogram to Predict the Postoperative Recurrence of Advanced Gastric Cancer. J Gastroenterol Hepatol 2025; 40:844-854. [PMID: 39730209 DOI: 10.1111/jgh.16863]
Abstract
BACKGROUND AND AIM In this study, a transfer learning (TL) algorithm was used to predict postoperative recurrence of advanced gastric cancer (AGC) and to evaluate its value in a small-sample clinical study. METHODS A total of 431 cases of AGC from three centers were included in this retrospective study. First, TL signatures (TLSs) were constructed from different source domains: whole-slide images (TLS-WSI) and natural images (TLS-ImageNet). A clinical model and a non-TL signature (non-TLS) based on CT images were constructed in parallel. Second, a TL radiomics model (TLRM) was constructed by combining the optimal TLS with clinical factors. Finally, model performance was evaluated by ROC analysis, and clinical utility was assessed using the integrated discrimination improvement (IDI) and decision curve analysis (DCA). RESULTS TLS-WSI significantly outperformed TLS-ImageNet, the non-TLS, and the clinical model (p < 0.05). The AUC of TLS-WSI was 0.9459 (95% CI: 0.9054-0.9863) in the training cohort and ranged from 0.8050 (95% CI: 0.7130-0.8969) to 0.8984 (95% CI: 0.8420-0.9547) in the validation cohorts. TLS-WSI and a nodular or irregular outer layer of the gastric wall were selected to construct the TLRM. The AUC of the TLRM was 0.9643 (95% CI: 0.9349-0.9936) in the training cohort and ranged from 0.8561 (95% CI: 0.7571-0.9552) to 0.9195 (95% CI: 0.8670-0.9721) in the validation cohorts. IDI and DCA showed that the TLRM outperformed the other models. CONCLUSION TLS-WSI can be used to predict postoperative recurrence in AGC, and the TLRM is more effective still. TL can effectively improve the performance of clinical research models with small sample sizes.
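The small-sample recipe the abstract describes (pretrain on a richer source domain, then adapt to the small target cohort) follows the standard transfer-learning pattern sketched below in PyTorch. The ResNet-18 backbone, ImageNet weights, frozen layers, and two-class head are illustrative assumptions, not the authors' implementation; the TLS-WSI variant would instead start from weights pretrained on whole-slide-image data.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # hypothetical setup: recurrence vs. no recurrence

# ImageNet weights approximate the TLS-ImageNet source domain; a TLS-WSI
# variant would load a whole-slide-image checkpoint (path is a placeholder).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
# backbone.load_state_dict(torch.load("wsi_pretrained.pth"))  # TLS-WSI case

# Freeze the pretrained layers so the small target cohort only adapts the
# head, a common small-sample transfer-learning recipe.
for param in backbone.parameters():
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)  # trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on target-domain CT crops."""
    loss = criterion(backbone(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch standing in for preprocessed CT inputs.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))))
```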
Affiliation(s)
- Liebin Huang: Department of Medical Imaging Center, The First Affiliated Hospital of Jinan University, Guangzhou, China; Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
- Bao Feng: Department of Radiology, Jiangmen Central Hospital, Jiangmen, China; Laboratory of Intelligent Detection and Information Processing, Guilin University of Aerospace Technology, Guilin, China
- Zhiqi Yang: Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Shi-Ting Feng: Department of Radiology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Yu Liu: Laboratory of Intelligent Detection and Information Processing, Guilin University of Aerospace Technology, Guilin, China
- Huimin Xue: Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
- Jiangfeng Shi: Laboratory of Intelligent Detection and Information Processing, Guilin University of Aerospace Technology, Guilin, China
- Qinxian Chen: Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
- Tao Zhou: Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
- Xiangguang Chen: Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Cuixia Wan: Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Xiaofeng Chen: Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Wansheng Long: Department of Medical Imaging Center, The First Affiliated Hospital of Jinan University, Guangzhou, China; Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
2
Djahnine A, Jupin-Delevaux E, Nempont O, Si-Mohamed SA, Craighero F, Cottin V, Douek P, Popoff A, Boussel L. Weakly-supervised learning-based pathology detection and localization in 3D chest CT scans. Med Phys 2024; 51:8272-8282. [PMID: 39140793 DOI: 10.1002/mp.17302]
Abstract
BACKGROUND Recent advancements in anomaly detection have paved the way for novel radiological reading assistance tools that support the identification of findings, aimed at saving time. The clinical adoption of such applications requires a low rate of false positives while maintaining high sensitivity. PURPOSE In light of recent interest and development in multi-pathology identification, we present a novel method, based on a recent contrastive self-supervised approach, for identifying multiple chest-related abnormalities, including low lung density areas ("LLDA"), consolidation ("CONS"), nodules ("NOD"), and interstitial pattern ("IP"). Our approach alerts radiologists to abnormal regions within a computed tomography (CT) scan by providing 3D localization. METHODS We introduce a new method for the classification and localization of multiple chest pathologies in 3D chest CT scans. Our goal is to distinguish four common chest-related abnormalities ("LLDA", "CONS", "NOD", "IP") from "NORMAL". The method is based on a 3D patch-based classifier with a ResNet backbone encoder pretrained using a recent contrastive self-supervised approach and a fine-tuned classification head. We leverage the SimCLR contrastive framework for pretraining on an unannotated dataset of randomly selected patches and then fine-tune it on a labeled dataset. During inference, this classifier generates probability maps for each abnormality across the CT volume, which are aggregated to produce a multi-label patient-level prediction. We compare different training strategies, including random initialization, ImageNet weight initialization, frozen SimCLR pretrained weights, and fine-tuned SimCLR pretrained weights. Each training strategy is evaluated on a validation set for hyperparameter selection and tested on a test set. Additionally, we explore the fine-tuned SimCLR pretrained classifier for 3D pathology localization and conduct a qualitative evaluation. RESULTS Validated on 111 chest scans for hyperparameter selection and subsequently tested on 251 chest scans with multiple abnormalities, our method achieves an AUROC of 0.931 (95% confidence interval [CI]: [0.9034, 0.9557], p < 0.001) and 0.963 (95% CI: [0.952, 0.976], p < 0.001) in the multi-label and binary (i.e., normal versus abnormal) settings, respectively. Notably, our method surpasses the area under the receiver operating characteristic curve (AUROC) threshold of 0.9 for two abnormalities, IP (0.974) and LLDA (0.952), while achieving values of 0.853 and 0.791 for NOD and CONS, respectively. Furthermore, our results highlight the superiority of incorporating contrastive pretraining within the patch classifier, outperforming ImageNet pretraining weights and non-pretrained counterparts with uninitialized weights (F1 score = 0.943, 0.792, and 0.677, respectively). Qualitatively, the method achieved a satisfactory 88.8% completeness rate in localization and maintained an 88.3% accuracy rate against false positives. CONCLUSIONS The proposed method integrates self-supervised learning algorithms for pretraining, utilizes a patch-based approach for 3D pathology localization, and develops an aggregation method for multi-label prediction at the patient level. It shows promise in efficiently detecting and localizing multiple anomalies within a single scan.
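A short sketch can illustrate the inference-time aggregation step. Max-pooling each abnormality's probability over all patches is an assumed aggregation rule (the paper develops its own method), and the shapes and threshold are hypothetical:

```python
import torch

ABNORMALITIES = ["LLDA", "CONS", "NOD", "IP"]  # as defined in the abstract

def patient_level_prediction(patch_probs: torch.Tensor,
                             threshold: float = 0.5) -> dict:
    """Aggregate per-patch abnormality probabilities into one multi-label
    patient-level prediction.

    patch_probs: (n_patches, 4) tensor of per-class probabilities from the
    fine-tuned patch classifier, one row per 3D patch of the CT volume.
    """
    per_class_max = patch_probs.max(dim=0).values  # strongest patch per class
    return {name: bool(per_class_max[i] > threshold)
            for i, name in enumerate(ABNORMALITIES)}

# Example: 1000 random patch scores standing in for one CT volume.
print(patient_level_prediction(torch.rand(1000, 4)))
```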
Affiliation(s)
- Aissam Djahnine: CREATIS UMR5220, INSERM U1044, Claude Bernard University Lyon 1, INSA, Lyon, France; Philips Health Technology Innovation, Paris, France
- Salim Aymeric Si-Mohamed: CREATIS UMR5220, INSERM U1044, Claude Bernard University Lyon 1, INSA, Lyon, France; Department of Radiology, Hospices Civils de Lyon, Lyon, France
- Vincent Cottin: National Reference Center for Rare Pulmonary Diseases, Louis Pradel Hospital, Lyon, France; Claude Bernard University Lyon 1, Lyon, France
- Philippe Douek: CREATIS UMR5220, INSERM U1044, Claude Bernard University Lyon 1, INSA, Lyon, France; Department of Radiology, Hospices Civils de Lyon, Lyon, France
- Loic Boussel: CREATIS UMR5220, INSERM U1044, Claude Bernard University Lyon 1, INSA, Lyon, France; Department of Radiology, Hospices Civils de Lyon, Lyon, France
3
Xu T, Liu X, Chen Y, Wang S, Jiang C, Gong J. CT-based deep learning radiomics biomarker for programmed cell death ligand 1 expression in non-small cell lung cancer. BMC Med Imaging 2024; 24:196. [PMID: 39085788 PMCID: PMC11292915 DOI: 10.1186/s12880-024-01380-8]
Abstract
BACKGROUND Programmed cell death ligand 1 (PD-L1) is a reliable predictive biomarker that plays an important role in guiding immunotherapy of lung cancer. This study investigated the value of a CT-based deep learning radiomics signature for predicting PD-L1 expression in non-small cell lung cancers (NSCLCs). METHODS A total of 259 consecutive patients with pathologically confirmed NSCLC were retrospectively collected and divided into a training cohort and a validation cohort in chronological order. Univariate and multivariate analyses were used to build the clinical model. Radiomics and deep learning features were extracted from preoperative non-contrast CT images. After feature selection, the radiomics score (Rad-score) and deep learning radiomics score (DLR-score) were calculated as a linear combination of the selected features and their coefficients. Predictive performance for PD-L1 expression was evaluated via the area under the receiver operating characteristic curve (AUC), calibration curves, and decision curve analysis. RESULTS The clinical model based on cytokeratin 19 fragment and lobulated shape obtained an AUC of 0.767 (95% CI: 0.673-0.860) in the training cohort and 0.604 (95% CI: 0.477-0.731) in the validation cohort. Eleven radiomics features and 15 deep learning features were selected by LASSO regression. AUCs of the Rad-score were 0.849 (95% CI: 0.783-0.914) and 0.717 (95% CI: 0.607-0.826) in the training and validation cohorts, respectively. AUCs of the DLR-score were 0.938 (95% CI: 0.899-0.977) and 0.818 (95% CI: 0.727-0.910) in the training and validation cohorts, respectively, and were significantly higher than those of the Rad-score and the clinical model. CONCLUSION The CT-based deep learning radiomics signature achieved clinically acceptable predictive performance for PD-L1 expression, showing potential as a surrogate imaging biomarker or a complement to immunohistochemistry assessment.
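The score construction (LASSO selects features, and the score is the linear combination of the selected features weighted by their coefficients) can be sketched with scikit-learn. The arrays below are synthetic stand-ins for real radiomics features, and the plain linear LassoCV is an assumption; the authors may have used a logistic formulation:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins: 180 training patients, 100 candidate radiomics
# features, binary PD-L1 label. Real inputs would come from feature
# extraction on non-contrast CT.
rng = np.random.default_rng(0)
X = rng.normal(size=(180, 100))
y = rng.integers(0, 2, size=180)

X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)

selected = np.flatnonzero(lasso.coef_)  # features with non-zero weight
print(f"{selected.size} features selected")

# Rad-score = intercept + sum(coefficient * feature value), i.e. the
# linear combination described in the abstract.
rad_score = lasso.intercept_ + X_std @ lasso.coef_
```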
Affiliation(s)
- Ting Xu: The Second Clinical Medical College of Jinan University, Shenzhen, 518020, China
- Xiaowen Liu: The Second Clinical Medical College of Jinan University, Shenzhen, 518020, China
- Yaxi Chen: The Second Clinical Medical College of Jinan University, Shenzhen, 518020, China
- Shuxing Wang: The Second Clinical Medical College of Jinan University, Shenzhen, 518020, China
- Changsi Jiang: Department of Radiology, Shenzhen People's Hospital (The Second Clinical Medical College of Jinan University, The First Affiliated Hospital of Southern University of Science and Technology), 1F, Building 4, No. 1017 Dongmen North Road, Shenzhen, 518020, China
- Jingshan Gong: Department of Radiology, Shenzhen People's Hospital (The Second Clinical Medical College of Jinan University, The First Affiliated Hospital of Southern University of Science and Technology), 1F, Building 4, No. 1017 Dongmen North Road, Shenzhen, 518020, China
4
Chen JX, Shen YC, Peng SL, Chen YW, Fang HY, Lan JL, Shih CT. Pattern classification of interstitial lung diseases from computed tomography images using a ResNet-based network with a split-transform-merge strategy and split attention. Phys Eng Sci Med 2024; 47:755-767. [PMID: 38436886 DOI: 10.1007/s13246-024-01404-1]
Abstract
In patients with interstitial lung disease (ILD), accurate pattern assessment from computed tomography (CT) images can help track lung abnormalities and evaluate treatment efficacy. Owing to their excellent image classification performance, convolutional neural networks (CNNs) have been extensively investigated for classifying and labeling pathological patterns in the CT images of ILD patients. However, previous studies rarely considered the three-dimensional (3D) structure of the pathological patterns of ILD and used two-dimensional network inputs. In addition, ResNet-based networks with high classification performance, such as SE-ResNet and ResNeXt, have not been used for pattern classification of ILD. This study proposed SE-ResNeXt-SA-18 for classifying pathological patterns of ILD. The SE-ResNeXt-SA-18 integrates the multipath design of ResNeXt and the feature weighting of the squeeze-and-excitation network with split attention. Its classification performance was compared with that of ResNet-18 and SE-ResNeXt-18, and the influence of input patch size was also evaluated. Results show that classification accuracy increased with patch size. With a 32 × 32 × 16 input, the SE-ResNeXt-SA-18 presented the highest performance, with average accuracy, sensitivity, and specificity of 0.991, 0.979, and 0.994, respectively. High-weight regions in the class activation maps of the SE-ResNeXt-SA-18 also matched the specific pattern features. The performance of the SE-ResNeXt-SA-18 is superior to that of previously reported CNNs for classifying ILD patterns. We conclude that the SE-ResNeXt-SA-18 could help track or monitor the progress of ILD through accurate pattern classification.
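The squeeze-and-excitation feature weighting that SE-ResNeXt-SA-18 builds on can be written as a compact 3D PyTorch module. This is a generic SE block, not the full published architecture, which additionally combines the ResNeXt multipath design and split attention; the channel count and reduction ratio are illustrative:

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """3D squeeze-and-excitation: global-average-pool each channel
    ("squeeze"), then learn per-channel gates ("excitation")."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = x.mean(dim=(2, 3, 4))           # squeeze: (b, c)
        w = self.fc(w).view(b, c, 1, 1, 1)  # excitation: per-channel gate
        return x * w                        # reweight feature maps

# A 32 x 32 x 16 patch (NCDHW layout here) with 64 feature channels.
out = SEBlock3D(64)(torch.randn(2, 64, 16, 32, 32))
print(out.shape)
```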
Affiliation(s)
- Jian-Xun Chen: Department of Thoracic Surgery, China Medical University Hospital, Taichung, Taiwan
- Yu-Cheng Shen: Department of Thoracic Surgery, China Medical University Hospital, Taichung, Taiwan
- Shin-Lei Peng: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan
- Yi-Wen Chen: x-Dimension Center for Medical Research and Translation, China Medical University Hospital, Taichung, Taiwan; Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan; Graduate Institute of Biomedical Sciences, China Medical University, Taichung, Taiwan
- Hsin-Yuan Fang: x-Dimension Center for Medical Research and Translation, China Medical University Hospital, Taichung, Taiwan; School of Medicine, China Medical University, Taichung, Taiwan
- Joung-Liang Lan: School of Medicine, China Medical University, Taichung, Taiwan; Rheumatology and Immunology Center, China Medical University Hospital, Taichung, Taiwan
- Cheng-Ting Shih: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan; x-Dimension Center for Medical Research and Translation, China Medical University Hospital, Taichung, Taiwan
5
Lu T, Ma J, Zou J, Jiang C, Li Y, Han J. CT-based intratumoral and peritumoral deep transfer learning features prediction of lymph node metastasis in non-small cell lung cancer. J Xray Sci Technol 2024; 32:597-609. [PMID: 38578874 DOI: 10.3233/xst-230326]
Abstract
BACKGROUND The main metastatic route for lung cancer is lymph node metastasis, and studies have shown that non-small cell lung cancer (NSCLC) carries a high risk of lymph node infiltration. OBJECTIVE This study aimed to compare the performance of handcrafted radiomics (HR) features and deep transfer learning (DTL) features from computed tomography (CT) of intratumoral and peritumoral regions in predicting the metastatic status of NSCLC lymph nodes across different machine learning classifier models. METHODS We retrospectively collected data from 199 patients with pathologically confirmed NSCLC, divided into training (n = 159) and validation (n = 40) cohorts. The best HR and DTL features were extracted and selected from the intratumoral and peritumoral regions. Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Light Gradient Boosting Machine (LightGBM), Multilayer Perceptron (MLP), and Logistic Regression (LR) models were constructed, and their performance was evaluated. RESULTS Among the five models in the training and validation cohorts, the LR classifier performed best for both HR and DTL features. Its AUCs were 0.841 (95% CI: 0.776-0.907) and 0.955 (95% CI: 0.926-0.983) in the training cohort, and 0.812 (95% CI: 0.677-0.948) and 0.893 (95% CI: 0.795-0.991) in the validation cohort, respectively. The DTL signature was superior to the handcrafted radiomics signature. CONCLUSIONS Compared with the radiomics signature, the DTL signature constructed from intratumoral and peritumoral CT regions better predicts NSCLC lymph node metastasis.
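A minimal sketch of the DTL-feature pipeline (a pretrained CNN used as a frozen feature extractor feeding a classical classifier) might look as follows; the ResNet-18 backbone, input size, and synthetic labels are assumptions standing in for the paper's actual networks and intratumoral/peritumoral CT crops:

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

# Pretrained backbone as a frozen feature extractor; replacing the final
# fc layer with identity exposes the 512-d penultimate features.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = nn.Identity()
resnet.eval()

@torch.no_grad()
def extract_features(batch: torch.Tensor) -> np.ndarray:
    """batch: (n, 3, 224, 224) intratumoral or peritumoral CT crops."""
    return resnet(batch).numpy()

# Synthetic stand-ins for 159 training crops and node-status labels.
X_train = extract_features(torch.randn(159, 3, 224, 224))
y_train = np.random.randint(0, 2, size=159)

# LR was the best-performing of the five classifiers compared in the paper.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```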
Affiliation(s)
- Tianyu Lu: Department of Radiology, The First Hospital of Jiaxing (The Affiliated Hospital of Jiaxing University), Jiaxing, China
- Jianbing Ma: Department of Radiology, The First Hospital of Jiaxing (The Affiliated Hospital of Jiaxing University), Jiaxing, China
- Jiajun Zou: Department of Radiology, The First Hospital of Jiaxing (The Affiliated Hospital of Jiaxing University), Jiaxing, China
- Chenxu Jiang: Department of Radiology, The First Hospital of Jiaxing (The Affiliated Hospital of Jiaxing University), Jiaxing, China
- Yangyang Li: Department of Radiology, The First Hospital of Jiaxing (The Affiliated Hospital of Jiaxing University), Jiaxing, China
- Jun Han: Department of Radiology, The First Hospital of Jiaxing (The Affiliated Hospital of Jiaxing University), Jiaxing, China
6
B A, Sarkar A, Behera PR, Shukla J. Multi-source transfer learning for facial emotion recognition using multivariate correlation analysis. Sci Rep 2023; 13:21004. [PMID: 38017241 PMCID: PMC10684585 DOI: 10.1038/s41598-023-48250-x]
Abstract
Deep learning techniques have proven effective for the facial emotion recognition (FER) problem, but they demand a significant amount of supervised data, which is often unavailable due to privacy and ethical concerns. In this paper, we present a novel approach to the FER problem using multi-source transfer learning. The proposed method leverages knowledge from multiple data sources of similar domains to inform the model on a related task. The approach optimizes the aggregate multivariate correlation among the source tasks trained on the source datasets, thus controlling the transfer of information to the target task. The hypothesis is validated on benchmark datasets for facial emotion recognition and image classification tasks, and the results demonstrate the effectiveness of the proposed method in capturing the group correlation among features, as well as its robustness to negative transfer and strong performance in few-shot multi-source adaptation. With respect to the state-of-the-art methods MCW and DECISION, our approach shows improvements of 7% and approximately 15%, respectively.
Affiliation(s)
- Ashwini B: Human-Machine Interaction Lab, Indraprastha Institute of Information Technology, New Delhi, India
- Arka Sarkar: Human-Machine Interaction Lab, Indraprastha Institute of Information Technology, New Delhi, India
- Pruthivi Raj Behera: Human-Machine Interaction Lab, Indraprastha Institute of Information Technology, New Delhi, India
- Jainendra Shukla: Human-Machine Interaction Lab, Indraprastha Institute of Information Technology, New Delhi, India
7
Waseem Sabir M, Farhan M, Almalki NS, Alnfiai MM, Sampedro GA. FibroVit: Vision transformer-based framework for detection and classification of pulmonary fibrosis from chest CT images. Front Med (Lausanne) 2023; 10:1282200. [PMID: 38020169 PMCID: PMC10666764 DOI: 10.3389/fmed.2023.1282200]
Abstract
Pulmonary fibrosis (PF) is an incurable respiratory condition distinguished by permanent fibrotic alterations in the pulmonary tissue. Hence, it is crucial to diagnose PF swiftly and precisely. Existing research on deep learning-based pulmonary fibrosis detection has limitations, including small dataset sample sizes and a lack of standardization in data preprocessing and evaluation metrics. This study presents a comparative analysis of four vision transformers regarding their efficacy in accurately detecting and classifying patients with pulmonary fibrosis and their ability to localize abnormalities within images obtained from computed tomography (CT) scans. The dataset consisted of 13,486 samples selected from 24,647 in the Pulmonary Fibrosis dataset, including both PF-positive and normal CT images that underwent preprocessing. The preprocessed images were divided into training (80%), validation (10%), and test (10%) sets. The vision transformer models, including ViT, MobileViT2, ViTMSN, and BEiT, were subjected to training and validation, during which hyperparameters such as the learning rate and batch size were fine-tuned. The overall performance of the optimized architectures was assessed using various performance metrics to showcase the consistent performance of the fine-tuned models. ViT showed superior validation and testing accuracy and loss minimization for CT images when trained for a single epoch with a tuned learning rate of 0.0001: validation accuracy of 99.85%, testing accuracy of 100%, training loss of 0.0075, and validation loss of 0.0047. The experimental evaluation on independently collected data gives empirical evidence that the optimized Vision Transformer (ViT) architecture outperformed all other optimized architectures, achieving a flawless score of 1.0 on various standard performance metrics, including sensitivity, specificity, accuracy, F1-score, precision, recall, Matthews correlation coefficient (MCC), precision-recall area under the curve (AUC-PR), and area under the receiver operating characteristic curve (ROC-AUC). Therefore, the optimized ViT functions as a reliable diagnostic tool for the automated categorization of individuals with pulmonary fibrosis using chest CT scans.
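Fine-tuning a pretrained vision transformer for the binary PF-versus-normal task takes only a few lines of PyTorch. In this sketch, torchvision's ViT-B/16 and the Adam optimizer are assumptions; only the 0.0001 learning rate is taken from the abstract:

```python
import torch
import torch.nn as nn
from torchvision import models

# torchvision's ViT-B/16 stands in for the paper's ViT; the classification
# head is swapped for a 2-way output (PF-positive vs. normal).
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
vit.heads.head = nn.Linear(vit.heads.head.in_features, 2)

# Learning rate 1e-4 follows the tuned value reported in the abstract.
optimizer = torch.optim.Adam(vit.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune_epoch(loader):
    """One pass over preprocessed CT slices resized to 224 x 224."""
    vit.train()
    for images, labels in loader:   # images: (b, 3, 224, 224)
        loss = criterion(vit(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# fine_tune_epoch(train_loader)  # train_loader: DataLoader of (image, label)
```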
Affiliation(s)
- Muhammad Farhan: Department of Computer Science, COMSATS University Islamabad, Sahiwal, Pakistan
- Nabil Sharaf Almalki: Department of Special Education, College of Education, King Saud University, Riyadh, Saudi Arabia
- Mrim M. Alnfiai: Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Gabriel Avelino Sampedro: Faculty of Information and Communication Studies, University of the Philippines Open University, Los Baños, Philippines; Center for Computational Imaging and Visual Innovations, De La Salle University, Manila, Philippines
8
Suman G, Koo CW. Recent Advancements in Computed Tomography Assessment of Fibrotic Interstitial Lung Diseases. J Thorac Imaging 2023; 38:S7-S18. [PMID: 37015833 DOI: 10.1097/rti.0000000000000705]
Abstract
Interstitial lung disease (ILD) is a heterogeneous group of disorders with complex and varied imaging manifestations and prognoses. High-resolution computed tomography (HRCT) is the current standard-of-care imaging tool for ILD assessment. However, visual evaluation of HRCT is limited by interobserver variation and poor sensitivity for subtle changes. Such challenges have led to tremendous recent research interest in objective and reproducible methods to examine ILDs. Computer-aided CT analysis, including texture analysis and machine learning methods, has recently been shown to be a viable supplement to traditional visual assessment through improved characterization and quantification of ILDs. These quantitative tools not only correlate well with pulmonary function tests and patient outcomes but are also useful in disease diagnosis, surveillance, and management. In this review, we provide an overview of recent computer-aided tools for the diagnosis, prognosis, and longitudinal evaluation of fibrotic ILDs, while outlining the pitfalls and challenges that have precluded further advancement of these tools, as well as potential solutions and future directions.
Affiliation(s)
- Garima Suman: Division of Thoracic Imaging, Mayo Clinic, Rochester, MN
9
Khan SH, Iqbal J, Hassnain SA, Owais M, Mostafa SM, Hadjouni M, Mahmoud A. COVID-19 detection and analysis from lung CT images using novel channel boosted CNNs. Expert Syst Appl 2023; 229:120477. [PMID: 37220492 PMCID: PMC10186852 DOI: 10.1016/j.eswa.2023.120477]
Abstract
COVID-19, which emerged in Wuhan, China, in December 2019, grew into a global pandemic affecting human life and the worldwide economy, so an efficient diagnostic system is required to control its spread. However, automatic diagnosis poses challenges given the limited amount of labeled data, minor contrast variation, and high structural similarity between infection and background. In this regard, a new two-phase deep convolutional neural network (CNN) based diagnostic system is proposed to detect minute irregularities and analyze COVID-19 infection. In the first phase, a novel SB-STM-BRNet CNN is developed, incorporating a new channel Squeezed and Boosted (SB) and dilated convolutional-based Split-Transform-Merge (STM) block to detect COVID-19-infected lung CT images. The new STM blocks perform multi-path region-smoothing and boundary operations, which help to learn minor contrast variation and global COVID-19-specific patterns. Furthermore, diverse boosted channels are achieved using the SB and transfer learning concepts in STM blocks to learn texture variation between COVID-19-specific and healthy images. In the second phase, COVID-19-infected images are provided to the novel COVID-CB-RESeg segmentation CNN to identify and analyze COVID-19 infectious regions. The proposed COVID-CB-RESeg methodically employs region-homogeneity and heterogeneity operations in each encoder-decoder block and a boosted decoder using auxiliary channels to simultaneously learn the low illumination and boundaries of the COVID-19-infected region. The proposed diagnostic system yields good performance, with accuracy of 98.21%, F-score of 98.24%, Dice similarity of 96.40%, and IoU of 98.85% for the COVID-19-infected region. It would reduce the burden on radiologists and strengthen their decisions for fast and accurate COVID-19 diagnosis.
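The split-transform-merge idea with dilated convolutions can be illustrated by a minimal block in which parallel paths with different dilation rates capture region smoothing and boundary detail before a 1x1 convolution merges them. This is a generic sketch, not the SB-STM-BRNet block itself:

```python
import torch
import torch.nn as nn

class STMBlock(nn.Module):
    """Split-transform-merge sketch: parallel dilated-conv paths see
    different receptive fields, and their outputs are merged."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        self.merge = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split into paths, transform each, then merge by concatenation.
        return self.merge(torch.cat([p(x) for p in self.paths], dim=1))

# A single-channel CT slice crop passes through unchanged in spatial size.
print(STMBlock(1, 32)(torch.randn(2, 1, 128, 128)).shape)
```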
Affiliation(s)
- Saddam Hussain Khan: Department of Computer Systems Engineering, University of Engineering and Applied Science, Swat 19060, Pakistan
- Javed Iqbal: Department of Computer Systems Engineering, University of Engineering and Applied Science, Swat 19060, Pakistan
- Syed Agha Hassnain: Ocean College, Zhejiang University, Zheda Road 1, Zhoushan, Zhejiang 316021, China
- Muhammad Owais: KUCARS and C2PS, Department of Electrical Engineering and Computer Science, Khalifa University, UAE
- Samih M Mostafa: Computer Science Department, Faculty of Computers and Information, South Valley University, Qena 83523, Egypt; Faculty of Industry and Energy Technology, New Assiut Technological University (N.A.T.U.), New Assiut City, Egypt
- Myriam Hadjouni: Department of Computer Sciences, College of Computer and Information Science, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Amena Mahmoud: Department of Computer Science, Faculty of Computers and Information, Kafr El-Sheikh University, Egypt
10
Rauf Z, Khan AR, Sohail A, Alquhayz H, Gwak J, Khan A. Lymphocyte detection for cancer analysis using a novel fusion block based channel boosted CNN. Sci Rep 2023; 13:14047. [PMID: 37640739 PMCID: PMC10462751 DOI: 10.1038/s41598-023-40581-z]
Abstract
Tumor-infiltrating lymphocytes, specialized immune cells, are considered an important biomarker in cancer analysis. Automated lymphocyte detection is challenging due to their heterogeneous morphology, variable distribution, and the presence of artifacts. In this work, we propose a novel Boosted Channels Fusion-based CNN, "BCF-Lym-Detector", for lymphocyte detection in multiple cancer histology images. The proposed network initially selects candidate lymphocytic regions at the tissue level and then detects lymphocytes at the cellular level. The "BCF-Lym-Detector" generates diverse boosted channels by utilizing the feature learning capability of different CNN architectures. In this connection, a new adaptive fusion block is developed to combine and select the most relevant lymphocyte-specific features from the generated enriched feature space. Multi-level feature learning is used to retain lymphocytic spatial information and detect lymphocytes with variable appearances. The assessment of the proposed "BCF-Lym-Detector" shows substantial improvement in terms of F-score (0.93 and 0.84 on LYSTO and NuClick, respectively), which suggests that diverse feature extraction and dynamic feature selection enhanced the feature learning capacity of the proposed network. Moreover, the proposed technique generalizes to unseen test sets with good recall (0.75) and F-score (0.73), showing its potential to assist pathologists.
Affiliation(s)
- Zunaira Rauf: Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
- Abdul Rehman Khan: Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
- Anabia Sohail: Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan; Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi, UAE
- Hani Alquhayz: Department of Computer Science and Information, College of Science in Zulfi, Majmaah University, 11952, Al-Majmaah, Saudi Arabia
- Jeonghwan Gwak: Department of Software, Korea National University of Transportation, Chungju, 27469, Republic of Korea
- Asifullah Khan: Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan; PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan; Center for Mathematical Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, 45650, Islamabad, Pakistan
11
Cai GW, Liu YB, Feng QJ, Liang RH, Zeng QS, Deng Y, Yang W. Semi-Supervised Segmentation of Interstitial Lung Disease Patterns from CT Images via Self-Training with Selective Re-Training. Bioengineering (Basel) 2023; 10:830. [PMID: 37508857 PMCID: PMC10375953 DOI: 10.3390/bioengineering10070830]
Abstract
Accurate segmentation of interstitial lung disease (ILD) patterns from computed tomography (CT) images is an essential prerequisite to treatment and follow-up. However, it is highly time-consuming for radiologists to segment ILD patterns pixel by pixel in CT scans with hundreds of slices. Consequently, it is hard to obtain large amounts of well-annotated data, which poses a huge challenge for data-driven deep learning-based methods. To alleviate this problem, we propose an end-to-end semi-supervised learning framework for the segmentation of ILD patterns (ESSegILD) from CT images via self-training with selective re-training. The proposed ESSegILD model is trained using a large CT dataset with slice-wise sparse annotations, i.e., only a few slices in each CT volume are labeled with ILD patterns. Specifically, we adopt a popular semi-supervised framework, Mean-Teacher, which consists of a teacher model and a student model and uses consistency regularization to encourage consistent outputs from the two models under different perturbations. Furthermore, we introduce the latest self-training technique with a selective re-training strategy to select reliable pseudo-labels generated by the teacher model, which are used to expand the training samples and promote the student model during iterative training. By leveraging consistency regularization and self-training with selective re-training, our proposed ESSegILD can effectively utilize unlabeled data from a partially annotated dataset to progressively improve segmentation performance. Experiments on a dataset of 67 pneumonia patients with incomplete annotations, containing over 11,000 CT images with eight different ILD patterns, indicate that our proposed method is superior to state-of-the-art methods.
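The Mean-Teacher mechanics, an exponential-moving-average (EMA) teacher plus a consistency term between differently perturbed views, can be sketched as follows; the toy network, Gaussian-noise perturbation, and MSE consistency loss are illustrative assumptions:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal stand-in segmentation net (real work would use a U-Net-style model);
# 9 output channels = 8 ILD patterns + background.
student = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 9, 1))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

def ema_update(alpha: float = 0.99):
    """Teacher weights track an exponential moving average of the student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

def loss_fn(x_lab, y_lab, x_unlab):
    """Supervised CE on sparsely labeled slices plus consistency between
    student and teacher on differently perturbed unlabeled slices."""
    sup = F.cross_entropy(student(x_lab), y_lab)
    with torch.no_grad():
        target = teacher(x_unlab + 0.1 * torch.randn_like(x_unlab)).softmax(1)
    pred = student(x_unlab + 0.1 * torch.randn_like(x_unlab)).softmax(1)
    return sup + F.mse_loss(pred, target)

loss = loss_fn(torch.randn(2, 1, 64, 64),
               torch.randint(0, 9, (2, 64, 64)),
               torch.randn(2, 1, 64, 64))
loss.backward()
ema_update()
```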
Affiliation(s)
- Guang-Wei Cai: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yun-Bi Liu: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qian-Jin Feng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Rui-Hong Liang: Department of Medical Imaging Center, Nanfang Hospital of Southern Medical University, Guangzhou 510515, China
- Qing-Si Zeng: Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
- Yu Deng: Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
- Wei Yang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
12
Ji Y, Gao Y, Bao R, Li Q, Liu D, Sun Y, Ye Y. Prediction of COVID-19 Patients' Emergency Room Revisit using Multi-Source Transfer Learning. IEEE Int Conf Healthc Inform 2023; 2023:138-144. [PMID: 38486663 PMCID: PMC10939709 DOI: 10.1109/ichi57859.2023.00028]
Abstract
The coronavirus disease 2019 (COVID-19) has led to a global pandemic of significant severity. In addition to its high level of contagiousness, COVID-19 can have a heterogeneous clinical course, ranging from asymptomatic carriers to severe and potentially life-threatening health complications. Many patients have to revisit the emergency room (ER) within a short time after discharge, which significantly increases the workload for medical staff. Early identification of such patients is crucial for helping physicians focus on treating life-threatening cases. In this study, we obtained Electronic Health Records (EHRs) of 3,210 encounters from 13 affiliated ERs within the University of Pittsburgh Medical Center between March 2020 and January 2021. We leveraged a natural language processing tool, ScispaCy, to extract clinical concepts and used the 1,001 most frequent concepts to develop 7-day revisit models for COVID-19 patients in ERs. Because the data came from 13 ERs, distributional differences among sites could affect model development. To address this issue, we employed a classic deep transfer learning method, the Domain Adversarial Neural Network (DANN), and evaluated different modeling strategies: the Multi-DANN algorithm (which considers source differences), the Single-DANN algorithm (which does not), and three baselines (using only source data, only target data, and a mixture of source and target data). Results showed that the Multi-DANN models outperformed the Single-DANN models and baseline models in predicting revisits of COVID-19 patients to the ER within 7 days after discharge (median AUROC = 0.8 vs. 0.5). Notably, the Multi-DANN strategy effectively addressed the heterogeneity among multiple source domains and improved the adaptation of source data to the target domain. Moreover, the high performance of the Multi-DANN models indicates that EHRs are informative for developing a prediction model to identify COVID-19 patients who are very likely to revisit an ER within 7 days after discharge.
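The core of a DANN is a gradient reversal layer: the feature extractor learns to fool a domain classifier while still serving the task head, yielding domain-invariant features. In the minimal sketch below, the 1,001 concept features, binary revisit label, and 13 ER source domains come from the abstract; everything else is assumed:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; gradient multiplied by -lambda on
    the backward pass, the core DANN trick."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

feature_net = nn.Sequential(nn.Linear(1001, 256), nn.ReLU())  # 1001 concepts
label_head = nn.Linear(256, 2)     # 7-day revisit: yes / no
domain_head = nn.Linear(256, 13)   # 13 ERs as source domains (Multi-DANN)

def dann_loss(x, y_label, y_domain, lam=1.0):
    feats = feature_net(x)
    task = nn.functional.cross_entropy(label_head(feats), y_label)
    dom = nn.functional.cross_entropy(
        domain_head(GradReverse.apply(feats, lam)), y_domain)
    # Minimizing both drives revisit-predictive yet ER-invariant features.
    return task + dom

loss = dann_loss(torch.randn(8, 1001),
                 torch.randint(0, 2, (8,)), torch.randint(0, 13, (8,)))
loss.backward()
```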
Affiliation(s)
- Yuelyu Ji: Department of Information Science, School of Computing and Information, University of Pittsburgh, Pittsburgh, USA
- Yuhe Gao: Department of Biomedical Informatics, School of Medicine, University of Pittsburgh, Pittsburgh, USA
- Runxue Bao: Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, USA
- Qi Li: School of Business, State University of New York at New Paltz, New Paltz, USA
- Disheng Liu: Department of Information Science, School of Computing and Information, University of Pittsburgh, Pittsburgh, USA
- Yiming Sun: Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, USA
- Ye Ye: Department of Biomedical Informatics, School of Medicine, University of Pittsburgh, Pittsburgh, USA
13
Asif S, Zhao M, Chen X, Zhu Y. BMRI-NET: A Deep Stacked Ensemble Model for Multi-class Brain Tumor Classification from MRI Images. Interdiscip Sci 2023. [PMID: 37171681 DOI: 10.1007/s12539-023-00571-1]
Abstract
Brain tumors are one of the most dangerous health problems for adults and children in many countries, and any failure in their diagnosis may shorten human life. Accurate and timely diagnosis provides appropriate treatment and increases the patient's chances of survival. Owing to the differing characteristics of tumors, classifying the three types of brain tumors is a challenging problem. With the advent of deep learning (DL) models, three-class brain tumor classification has been addressed, but its accuracy still requires significant improvement. The main goal of this article is to design a new method for classifying the three types of brain tumors with extremely high accuracy. We propose a novel deep stacked ensemble model called "BMRI-NET" that can detect brain tumors from MR images with high accuracy and recall. The proposed stacked ensemble adapts three pre-trained models, namely DenseNet201, ResNet152V2, and InceptionResNetV2, to improve generalization, and combines their decisions using the stacking technique to obtain final results that are much more accurate than the individual models. The efficacy of the proposed model is evaluated on the Figshare brain MRI dataset of three types of brain tumors, consisting of 3064 images. The experimental results clearly highlight the robustness of the proposed BMRI-NET model, which achieves an overall classification accuracy of 98.69% and an average recall, F1-score, and MCC of 98.33%, 98.40%, and 97.95%, respectively. The results indicate that BMRI-NET is superior to existing methods and can assist healthcare professionals in the diagnosis of brain tumors.
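The stacking mechanics can be shown compactly with scikit-learn. The three classical base learners below are stand-ins for the paper's fine-tuned DenseNet201, ResNet152V2, and InceptionResNetV2, and the data are synthetic; only the stacking pattern itself carries over:

```python
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Base learners whose probabilistic outputs are combined by a meta-learner.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
]

# The meta-learner turns base-model predictions into the final 3-class
# decision, mirroring how BMRI-NET stacks its three CNNs.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           stack_method="predict_proba", cv=5)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 64)), rng.integers(0, 3, size=300)
stack.fit(X, y)
print(stack.predict(X[:5]))
```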
Affiliation(s)
- Sohaib Asif: School of Computer Science and Engineering, Central South University, Changsha, China
- Ming Zhao: School of Computer Science and Engineering, Central South University, Changsha, China
- Xuehan Chen: School of Computer Science and Engineering, Central South University, Changsha, China
- Yusen Zhu: School of Mathematics, Hunan University, Changsha, China
14
Farhan AMQ, Yang S. Automatic lung disease classification from the chest X-ray images using hybrid deep learning algorithm. Multimed Tools Appl 2023:1-27. [PMID: 37362647 PMCID: PMC10030349 DOI: 10.1007/s11042-023-15047-z]
Abstract
Chest X-ray images cost-effectively provide vital information about lung congestion. We propose a novel Hybrid Deep Learning Algorithm (HDLA) framework for automatic lung disease classification from chest X-ray images. The model comprises pre-processing of chest X-ray images, automatic feature extraction, and detection. The pre-processing step improves the quality of raw chest X-ray images through a combination of optimal filtering without data loss. A robust convolutional neural network (CNN) based on a pre-trained model is proposed for automatic lung feature extraction; we employ a 2D CNN for optimal feature extraction with minimal time and space requirements. The proposed 2D CNN ensures robust feature learning with highly efficient 1D feature estimation from the pre-processed input image. Because the extracted 1D features exhibit significant scale variation, we normalize them using min-max scaling. We classify the CNN features using different machine learning classifiers such as AdaBoost, Support Vector Machine (SVM), Random Forest (RF), Backpropagation Neural Network (BNN), and Deep Neural Network (DNN). The experimental results show that the proposed model improves overall accuracy by 3.1% and reduces computational complexity by 16.91% compared to state-of-the-art methods.
Affiliation(s)
- Abobaker Mohammed Qasem Farhan: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Shangming Yang: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
15
Haubold J, Zeng K, Farhand S, Stalke S, Steinberg H, Bos D, Meetschen M, Kureishi A, Zensen S, Goeser T, Maier S, Forsting M, Nensa F. AI co-pilot: content-based image retrieval for the reading of rare diseases in chest CT. Sci Rep 2023; 13:4336. [PMID: 36928759 PMCID: PMC10020154 DOI: 10.1038/s41598-023-29949-3]
Abstract
The aim of the study was to evaluate the impact of the newly developed Similar Patient Search (SPS) web service, which supports the reading of complex lung diseases in computed tomography (CT), on the diagnostic accuracy of residents. SPS is an image-based search engine for pre-diagnosed cases along with related clinical reference content ( https://eref.thieme.de ). The reference database was constructed from 13,658 annotated regions of interest (ROIs) from 621 patients, comprising 69 lung diseases. For validation, 50 CT scans were evaluated by five radiology residents without SPS and, three months later, with SPS. The residents could give a maximum of three diagnoses per case, with a maximum of 3 points awarded when the correct diagnosis was provided without any additional diagnoses. The residents achieved an average score of 17.6 ± 5.0 points without SPS. With SPS, they increased their score by 81.8% to 32.0 ± 9.5 points, a highly significant per-case improvement (p = 0.0001). The residents required an average of 205.9 ± 350.6 s per case (a 21.9% increase) when SPS was used; however, in the second half of the cases, after the residents had become more familiar with SPS, this increase dropped to 7%. In summary, residents' average score in reading complex chest CT scans improved by 81.8% when the AI-driven SPS with integrated clinical reference content was used, while the increase in reading time per case was minimal.
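At its core, such a similar-patient search is nearest-neighbor retrieval over ROI embeddings. A minimal sketch with scikit-learn follows, where the 256-dimensional random embeddings are placeholders for the system's actual CNN features and the ROI/disease counts are taken from the abstract:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical reference index: one embedding per annotated ROI, with the
# pre-diagnosed disease label attached (13,658 ROIs, 69 diseases above).
rng = np.random.default_rng(0)
ref_embeddings = rng.normal(size=(13658, 256))
ref_labels = rng.integers(0, 69, size=13658)

index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(ref_embeddings)

def similar_patient_search(query_embedding: np.ndarray) -> np.ndarray:
    """Return labels of the most similar pre-diagnosed ROIs; a real system
    would embed the query ROI with the same CNN used to build the index
    and link each hit to its clinical reference content."""
    _, idx = index.kneighbors(query_embedding.reshape(1, -1))
    return ref_labels[idx[0]]

print(similar_patient_search(rng.normal(size=256)))
```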
Affiliation(s)
- Johannes Haubold: Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany; Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Ke Zeng: Siemens Medical Solutions Inc., Malvern, PA, USA
- Hannah Steinberg: Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Denise Bos: Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Mathias Meetschen: Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Anisa Kureishi: Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Sebastian Zensen: Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Tim Goeser: Department of Radiology and Neuroradiology, Kliniken Maria Hilf, Viersener Str. 450, 41063, Mönchengladbach, NRW, Germany
- Sandra Maier: Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Michael Forsting: Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Felix Nensa: Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany; Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
16
Afriyie Y, Weyori BA, Opoku AA. A scaling up approach: a research agenda for medical imaging analysis with applications in deep learning. J Exp Theor Artif Intell 2023. [DOI: 10.1080/0952813x.2023.2165721]
Affiliation(s)
- Yaw Afriyie: Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana; Department of Computer Science, Faculty of Information and Communication Technology, SD Dombo University of Business and Integrated Development Studies, Wa, Ghana
- Benjamin A. Weyori: Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
- Alex A. Opoku: Department of Mathematics & Statistics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
17
Alhares H, Tanha J, Balafar MA. AMTLDC: a new adversarial multi-source transfer learning framework to diagnosis of COVID-19. Evol Syst 2023; 14:1-15. [PMID: 38625255 PMCID: PMC9838404 DOI: 10.1007/s12530-023-09484-2]
Abstract
In recent years, deep learning techniques have been widely used to diagnose diseases. However, in some tasks, such as COVID-19 diagnosis, insufficient data prevent the model from being properly trained, and its generalizability consequently decreases: a model trained on one CT scan dataset and tested on another predicts near-random results. To address this, data from several different sources can be combined using transfer learning, taking into account the intrinsic and natural differences among existing datasets obtained with different medical imaging tools and approaches. In this paper, to improve transfer learning and achieve better generalizability across multiple data sources, we propose an adversarial multi-source transfer learning model, AMTLDC, which learns representations that are similar across the sources. In other words, the extracted representations are general and not dependent on a particular dataset domain. We apply AMTLDC to predict COVID-19 from medical images using a convolutional neural network and show that accuracy can be improved using the AMTLDC framework, surpassing the results of current successful transfer learning approaches. In particular, we show that AMTLDC works well when different dataset domains are used or when data are insufficient.
Affiliation(s)
- Hadi Alhares: Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471, Iran
- Jafar Tanha: Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471, Iran
- Mohammad Ali Balafar: Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471, Iran
18
Rahman T, Akinbi A, Chowdhury MEH, Rashid TA, Şengür A, Khandakar A, Islam KR, Ismael AM. COV-ECGNET: COVID-19 detection using ECG trace images with deep convolutional neural network. Health Inf Sci Syst 2022; 10:1. [PMID: 35096384 PMCID: PMC8785028 DOI: 10.1007/s13755-021-00169-1]
Abstract
Reliable and rapid identification of COVID-19 has become crucial to prevent the rapid spread of the disease, ease lockdown restrictions, and reduce pressure on public health infrastructures. Recently, several methods and techniques have been proposed to detect the SARS-CoV-2 virus using different images and data. However, this is the first study to explore the possibility of using deep convolutional neural network (CNN) models to detect COVID-19 from electrocardiogram (ECG) trace images. In this work, COVID-19 and other cardiovascular diseases (CVDs) were detected using deep learning techniques. A public dataset of 1937 ECG images from five distinct categories, normal, COVID-19, myocardial infarction (MI), abnormal heartbeat (AHB), and recovered myocardial infarction (RMI), was used. Six deep CNN models (ResNet18, ResNet50, ResNet101, InceptionV3, DenseNet201, and MobileNetv2) were used to investigate three classification schemes: (i) two-class classification (normal vs. COVID-19); (ii) three-class classification (normal, COVID-19, and other CVDs); and (iii) five-class classification (normal, COVID-19, MI, AHB, and RMI). For two-class and three-class classification, DenseNet201 outperformed the other networks with accuracies of 99.1% and 97.36%, respectively, while for five-class classification, InceptionV3 performed best with an accuracy of 97.83%. ScoreCAM visualization confirms that the networks learn from the relevant areas of the trace images. Since the proposed method uses ECG trace images, which can be captured with smartphones and are readily available in low-resource countries, this study will help enable faster computer-aided diagnosis of COVID-19 and other cardiac abnormalities.
Affiliation(s)
- Tawsifur Rahman: Department of Electrical Engineering, Qatar University, 2713 Doha, Qatar
- Alex Akinbi: School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool, UK
- Tarik A. Rashid: Computer Science and Engineering Department, School of Science and Engineering, University of Kurdistan Hewler, Erbīl, KRG, Iraq
- Abdulkadir Şengür: Electrical-Electronics Engineering Department, Technology Faculty, Firat University, Elazig, Turkey
- Amith Khandakar: Department of Electrical Engineering, Qatar University, 2713 Doha, Qatar
- Aras M. Ismael: Information Technology Department, College of Informatics, Sulaimani Polytechnic University, Sulaymaniyah, Iraq
19
Nicholson M, Agrahari R, Conran C, Assem H, Kelleher JD. The interaction of normalisation and clustering in sub-domain definition for multi-source transfer learning based time series anomaly detection. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109894]
20
Mi J, Wang L, Liu Y, Zhang J. KDE-GAN: A multimodal medical image-fusion model based on knowledge distillation and explainable AI modules. Comput Biol Med 2022; 151:106273. [PMID: 36368109 DOI: 10.1016/j.compbiomed.2022.106273]
Abstract
BACKGROUND As medical images contain sensitive patient information, finding a publicly accessible dataset with patient permission is challenging. Furthermore, few large-scale datasets suitable for training image-fusion models are available. To address this issue, we propose a medical image-fusion model based on knowledge distillation (KD) and an explainable AI module: a generative adversarial network with dual discriminators (KDE-GAN). METHOD KD reduces the size of the datasets required for training by distilling a complex image-fusion model into a simple model with the same feature-extraction capabilities. The images generated by the explainable AI module show whether the discriminator can distinguish true images from false ones. When the discriminator judges images accurately on the basis of their key features, training can be stopped early, reducing overfitting and the amount of data required. RESULTS Trained on only small-scale datasets, KDE-GAN can generate clear fused images. The KDE-GAN fusion results were evaluated quantitatively using five metrics: spatial frequency, structural similarity, edge information transfer factor, normalized mutual information, and nonlinear correlation information entropy. CONCLUSION Experimental results show that the fused images generated by KDE-GAN are superior to those of state-of-the-art methods, both subjectively and objectively.
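The distillation step can be illustrated with a schematic feature-level sketch: a small student encoder is trained to match a frozen teacher's feature maps, so the student keeps the teacher's feature-extraction capability with far fewer parameters. The toy encoders below are stand-ins, not the paper's architectures.

```python
# Schematic feature-level knowledge distillation with a frozen teacher.
import torch
import torch.nn as nn

teacher = nn.Sequential(  # stand-in for a large pretrained encoder
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
)
student = nn.Sequential(  # much smaller encoder with matching output width
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 64, 1),
)
for p in teacher.parameters():
    p.requires_grad = False  # teacher is fixed during distillation

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
distill_loss = nn.MSELoss()

x = torch.randn(8, 1, 128, 128)  # a batch of (unlabeled) medical images
with torch.no_grad():
    t_feat = teacher(x)
s_feat = student(x)
loss = distill_loss(s_feat, t_feat)  # student mimics teacher features
loss.backward()
opt.step()
```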
Collapse
Affiliation(s)
- Jia Mi
- The Key Laboratory of Biomedical Imaging and Imaging on Big Data, North University of China, Taiyuan, 030051, China
| | - LiFang Wang
- The Key Laboratory of Biomedical Imaging and Imaging on Big Data, North University of China, Taiyuan, 030051, China.
| | - Yang Liu
- The Key Laboratory of Biomedical Imaging and Imaging on Big Data, North University of China, Taiyuan, 030051, China
| | - Jiong Zhang
- The Key Laboratory of Biomedical Imaging and Imaging on Big Data, North University of China, Taiyuan, 030051, China
| |
Collapse
|
21
|
Jiang L, Li M, Jiang H, Tao L, Yang W, Yuan H, He B. Development of an Artificial Intelligence Model for Analyzing the Relationship between Imaging Features and Glucocorticoid Sensitivity in Idiopathic Interstitial Pneumonia. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:13099. [PMID: 36293674 PMCID: PMC9602820 DOI: 10.3390/ijerph192013099] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 09/29/2022] [Accepted: 10/10/2022] [Indexed: 06/16/2023]
Abstract
High-resolution CT (HRCT) imaging features of idiopathic interstitial pneumonia (IIP) patients are related to glucocorticoid sensitivity. This study aimed to develop an artificial intelligence model to assess glucocorticoid efficacy according to the HRCT imaging features of IIP. The medical records and chest HRCT images of 150 patients with IIP were analyzed retrospectively. The U-net framework was used to create a model for recognizing different imaging features, including ground glass opacities, reticulations, honeycombing, and consolidations. Then, the area ratio of those imaging features was calculated automatically. Forty-five patients were treated with glucocorticoids and, according to the drug efficacy, were divided into a glucocorticoid-sensitive group and a glucocorticoid-insensitive group. Models assessing the correlation between imaging features and glucocorticoid sensitivity were established using the k-nearest neighbor (KNN) algorithm. The total accuracy (ACC) and mean intersection over union (mIoU) of the U-net model were 0.9755 and 0.4296, respectively. Out of the 45 patients treated with glucocorticoids, 34 and 11 were placed in the glucocorticoid-sensitive and glucocorticoid-insensitive groups, respectively. The KNN-based model had an accuracy of 0.82. An artificial intelligence model was successfully developed for recognizing the different imaging features of IIP, and a preliminary model for assessing the correlation between imaging features and glucocorticoid sensitivity in IIP patients was established.
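The downstream analysis combines segmentation-derived features with a classical classifier. A small sketch, assuming the U-net outputs an integer label map with codes 1-4 for the four patterns; the codes and the synthetic stand-in data are illustrative only.

```python
# Sketch: per-patient area ratios from a segmentation label map,
# then a k-nearest-neighbor classifier for glucocorticoid sensitivity.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

PATTERNS = {1: "ground_glass", 2: "reticulation", 3: "honeycombing", 4: "consolidation"}

def area_ratios(label_map: np.ndarray) -> np.ndarray:
    """Fraction of abnormal pixels assigned to each pattern (0 = background)."""
    abnormal = label_map > 0
    total = max(abnormal.sum(), 1)
    return np.array([(label_map == k).sum() / total for k in PATTERNS])

# One 4-dimensional feature vector per patient (random stand-in masks here).
rng = np.random.default_rng(0)
X = np.stack([area_ratios(rng.integers(0, 5, size=(256, 256))) for _ in range(45)])
y = rng.integers(0, 2, size=45)  # 1 = glucocorticoid-sensitive (synthetic labels)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict(X[:3]))
```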
Collapse
Affiliation(s)
- Ling Jiang
- Department of Respiratory and Critical Care Medicine, Peking University Third Hospital, Beijing 100191, China
| | - Meijiao Li
- Department of Radiology, Peking University Third Hospital, Beijing 100191, China
| | - Han Jiang
- OpenBayes (Tianjin) IT Co., Ltd., Beijing 100027, China
| | - Liyuan Tao
- Research Center of Clinical Epidemiology, Peking University Third Hospital, Beijing 100191, China
| | - Wei Yang
- Department of Respiratory and Critical Care Medicine, Peking University Third Hospital, Beijing 100191, China
| | - Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Beijing 100191, China
| | - Bei He
- Department of Respiratory and Critical Care Medicine, Peking University Third Hospital, Beijing 100191, China
| |
Collapse
|
22
|
Deep Learning Assisted Automated Assessment of Thalassaemia from Haemoglobin Electrophoresis Images. Diagnostics (Basel) 2022; 12:diagnostics12102405. [PMID: 36292094 PMCID: PMC9600204 DOI: 10.3390/diagnostics12102405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2022] [Revised: 07/18/2022] [Accepted: 07/23/2022] [Indexed: 11/16/2022] Open
Abstract
Haemoglobin (Hb) electrophoresis is a blood test used to detect thalassaemia. However, interpreting the result of the electrophoresis test itself is a complex task. Expert haematologists, particularly in developing countries, are relatively few in number and usually overburdened. To assist them with their workload, in this paper we present a novel method for the automated assessment of thalassaemia using Hb electrophoresis images. Moreover, in this study we compile a large Hb electrophoresis image dataset, consisting of 103 strips containing 524 electrophoresis images, with a clear consensus on the quality of electrophoresis, obtained from 824 subjects. The proposed methodology is split into two parts: (1) single-patient electrophoresis image segmentation by means of a lane extraction technique, and (2) binary classification (normal or abnormal) of the electrophoresis images using state-of-the-art deep convolutional neural networks (CNNs) and transfer learning. Image processing techniques, including filtering and morphological operations, are applied for object detection and lane extraction to automatically separate the lanes, which are then classified using CNN models. Seven different CNN models (ResNet18, ResNet50, ResNet101, InceptionV3, DenseNet201, SqueezeNet and MobileNetV2) were investigated in this study. InceptionV3 outperformed the other CNNs in detecting thalassaemia from Hb electrophoresis images, with an accuracy, precision, recall, F1-score, and specificity of 95.8%, 95.84%, 95.8%, 95.8% and 95.8%, respectively. MobileNetV2 demonstrated an accuracy, precision, recall, F1-score, and specificity of 95.72%, 95.73%, 95.72%, 95.7% and 95.72%, respectively, a performance comparable to that of the best-performing model, InceptionV3. Since it is a lightweight network, MobileNetV2 also provides the lowest latency in processing a single-patient image and is therefore well suited to mobile applications. The proposed approach, which has shown very high classification accuracy, will assist in the rapid and robust detection of thalassaemia from Hb electrophoresis images.
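The lane extraction front end can be sketched with standard OpenCV operations (Otsu thresholding, morphological opening, connected components). The file name, kernel size, and area threshold below are assumptions for illustration, not the paper's exact parameters.

```python
# Rough sketch of lane extraction from a Hb electrophoresis strip:
# threshold, clean up with morphological opening, then take bounding
# boxes of connected components as candidate lanes.
import cv2
import numpy as np

strip = cv2.imread("strip.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
_, binary = cv2.threshold(strip, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 25))  # tall, narrow lanes
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
lanes = []
for c in sorted(contours, key=lambda c: cv2.boundingRect(c)[0]):  # left to right
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 500:  # skip specks
        lanes.append(strip[y:y + h, x:x + w])  # one lane = one patient
print(f"extracted {len(lanes)} candidate lanes")
```

Each extracted lane crop would then be resized and passed to the fine-tuned CNN for the normal/abnormal decision.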
Collapse
|
23
|
Gil EM, Keppler M, Boretsky A, Yakovlev VV, Bixler JN. Segmentation of laser induced retinal lesions using deep learning (December 2021). Lasers Surg Med 2022; 54:1130-1142. [PMID: 35781887 PMCID: PMC9464686 DOI: 10.1002/lsm.23578] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 05/18/2022] [Accepted: 06/13/2022] [Indexed: 11/08/2022]
Abstract
OBJECTIVE Detection of retinal laser lesions is necessary both in evaluating the extent of damage from high-power laser sources and in validating treatments involving the placement of laser lesions. However, such lesions are difficult to detect using color fundus cameras alone. Deep learning-based segmentation can remedy this by highlighting potential lesions in the image. METHODS A unique database of images collected at the Air Force Research Laboratory over the past 30 years was used to train deep learning models for classifying images with lesions and for subsequent segmentation. We investigate whether transferring weights from models trained for classification improves the performance of the segmentation models. We use Pearson's correlation coefficient between the initial and final training phases to reveal how the networks transfer features. RESULTS The segmentation models effectively segment a broad range of lesions and imaging conditions. CONCLUSION Deep learning-based segmentation can effectively highlight laser lesions, making it a useful tool for aiding clinicians.
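The weight-transfer experiment can be sketched as initializing a segmentation network from a trained classifier's encoder, then comparing a layer's weights before and after fine-tuning with Pearson's correlation. The toy architectures below are stand-ins, not the study's networks.

```python
# Sketch: reuse a classifier's encoder in a segmentation network, and
# measure how far the transferred weights move during fine-tuning.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
classifier = nn.Sequential(encoder, nn.AdaptiveAvgPool2d(1),
                           nn.Flatten(), nn.Linear(64, 2))
# ... train `classifier` on lesion / no-lesion labels first ...

seg_head = nn.Conv2d(64, 1, 1)  # per-pixel lesion logit
segmenter = nn.Sequential(encoder, seg_head)  # encoder weights carry over

# Pearson correlation between a layer's initial and final weights, as a
# crude measure of how much the transferred features changed.
w0 = encoder[0].weight.detach().clone().flatten()
# ... fine-tune `segmenter` on pixel-level lesion masks ...
w1 = encoder[0].weight.detach().clone().flatten()
r = torch.corrcoef(torch.stack([w0, w1]))[0, 1]
print(f"weight correlation before/after fine-tuning: {r:.3f}")
```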
Collapse
Affiliation(s)
- Eddie M Gil
- Department of Biomedical Engineering, Texas A&M University, College Station, Texas, USA
- SAIC, JBSA Fort Sam, Houston, Texas, USA
| | - Mark Keppler
- Department of Biomedical Engineering, Texas A&M University, College Station, Texas, USA
- SAIC, JBSA Fort Sam, Houston, Texas, USA
| | | | - Vladislav V Yakovlev
- Department of Biomedical Engineering, Texas A&M University, College Station, Texas, USA
| | - Joel N Bixler
- Air Force Research Laboratory, JBSA Fort Sam, Houston, Texas, USA
| |
Collapse
|
24
|
Draelos RL, Carin L. Explainable multiple abnormality classification of chest CT volumes. Artif Intell Med 2022; 132:102372. [DOI: 10.1016/j.artmed.2022.102372] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 06/09/2022] [Accepted: 07/28/2022] [Indexed: 12/20/2022]
|
25
|
Oh AS, Lynch DA. Interstitial Lung Abnormality—Why Should I Care and What Should I Do About It? Radiol Clin North Am 2022; 60:889-899. [DOI: 10.1016/j.rcl.2022.06.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
26
|
Xia X, Zhang R, Yao X, Huang G, Tang T. A novel lung nodule accurate detection of computerized tomography images based on convolutional neural network and probability graph model. Comput Intell 2022. [DOI: 10.1111/coin.12531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/09/2022]
Affiliation(s)
- Xunpeng Xia
- School of Optical‐Electrical and Computer Engineering University of Shanghai for Science and Technology Shanghai China
| | - Rongfu Zhang
- School of Optical‐Electrical and Computer Engineering University of Shanghai for Science and Technology Shanghai China
| | - Xufeng Yao
- College of Medical Imaging Shanghai University of Medicine and Health Sciences Shanghai China
| | - Gang Huang
- College of Medical Imaging Shanghai University of Medicine and Health Sciences Shanghai China
- Shanghai Key Laboratory of Molecular Imaging Zhoupu Hospital, Shanghai University of Medicine and Health Sciences Shanghai China
- Shanghai Key Laboratory of Molecular Imaging Jiading District Central Hospital Affiliated Shanghai University of Medicine and Health Sciences Shanghai China
| | - Tiequn Tang
- School of Optical‐Electrical and Computer Engineering University of Shanghai for Science and Technology Shanghai China
| |
Collapse
|
27
|
Ma Y, Peng Y, Wu TY. Transfer learning model for false positive reduction in lymph node detection via sparse coding and deep learning. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-219312] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Transfer learning is widely employed for medical image classification tasks. Here, based on a convolutional neural network (CNN) and a sparse coding process, we present a new deep transfer learning architecture for false-positive reduction in the lymph node detection task. We first convert the linear combination of the deep transferred features into pre-trained filter banks. Next, a new point-wise-filter-based CNN branch is introduced to automatically integrate the transferred features for classifying true and false positives. To reduce the scale of the proposed architecture, we apply sparse coding to the fixed transferred convolution filter banks. On this basis, a two-stage training strategy with grouped sparse connections is presented to train the model efficiently. The model is validated on a lymph node dataset for false-positive reduction and shows encouraging performance: our method reaches sensitivities of 71%/85% at 3 FP/vol. and 82%/91% at 6 FP/vol. in the abdomen and mediastinum, respectively, which compares competitively with previous approaches.
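The point-wise integration branch can be sketched as a trainable 1x1 convolution head over frozen transferred feature maps. The ResNet18 backbone below is an illustrative torchvision choice standing in for the paper's fixed filter banks; the sparse coding step is omitted.

```python
# Sketch: frozen transferred features recombined by a trainable
# 1x1 ("point-wise") convolution branch for false-positive reduction.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")
features = nn.Sequential(*list(backbone.children())[:-2])  # fixed filter banks
for p in features.parameters():
    p.requires_grad = False

pointwise_branch = nn.Sequential(
    nn.Conv2d(512, 64, kernel_size=1), nn.ReLU(),  # mixes channels only
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 2),  # true lymph node vs false positive
)

x = torch.randn(4, 3, 224, 224)  # candidate patches
logits = pointwise_branch(features(x))
print(logits.shape)  # torch.Size([4, 2])
```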
Collapse
Affiliation(s)
- Yingran Ma
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
| | - Yanjun Peng
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
- Shandong Province Key Laboratory of Wisdom Mining Information Technology, Shandong University of Science and Technology, Qingdao, China
| | - Tsu-Yang Wu
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
| |
Collapse
|
28
|
Aliboni L, Pennati F, Gelmini A, Colombo A, Ciuni A, Milanese G, Sverzellati N, Magnani S, Vespro V, Blasi F, Aliverti A, Aliberti S. Detection and Classification of Bronchiectasis Through Convolutional Neural Networks. J Thorac Imaging 2022; 37:100-108. [PMID: 33758127 DOI: 10.1097/rti.0000000000000588] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE Bronchiectasis is a chronic disease characterized by an irreversible dilatation of bronchi leading to chronic infection, airway inflammation, and progressive lung damage. Three specific patterns of bronchiectasis are distinguished in clinical practice: cylindrical, varicose, and cystic. The predominance and extension of the type of bronchiectasis provide important clinical information. However, characterization is often challenging and subject to high interobserver variability. The aim of this study is to provide an automatic tool for the detection and classification of bronchiectasis through convolutional neural networks. MATERIALS AND METHODS Two distinct approaches were adopted: (i) a direct network performing multilabel classification of 32×32 regions of interest (ROIs) into four classes (healthy, cylindrical, cystic, and varicose) and (ii) a two-network serial approach, where the first network performed binary classification between normal tissue and bronchiectasis and the second classified the ROIs containing abnormal bronchi into one of the three bronchiectasis typologies. The performance of the networks was compared with other architectures presented in the literature. RESULTS Computed tomography scans from healthy individuals (n=9, age=47±6, FEV1%pred=109±17, FVC%pred=116±17) and bronchiectasis patients (n=21, age=59±15, FEV1%pred=74±25, FVC%pred=91±22) were collected. A total of 19,059 manually selected ROIs were used for training and testing. The serial approach provided the best results, with an accuracy and an average F1 score of 0.84. Slightly lower performances were observed for the direct network (accuracy=0.81 and average F1 score=0.82). On the test set, cylindrical bronchiectasis was the subtype classified with the highest accuracy, while most misclassifications involved the varicose pattern, mainly confused with the cylindrical class. CONCLUSION The developed networks accurately detect and classify bronchiectasis, allowing quantitative information to be collected regarding the radiologic severity and the topographical distribution of bronchiectasis subtypes.
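The two-network serial approach amounts to a screening stage followed by a subtype stage. A minimal sketch with placeholder CNNs, reusing the paper's 32x32 ROI size; the layer sizes are illustrative.

```python
# Sketch of the serial pipeline: network A screens ROIs for
# bronchiectasis vs normal tissue, and only positive ROIs are passed
# to network B for subtype classification.
import torch
import torch.nn as nn

def make_cnn(num_classes: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(32, num_classes),
    )

net_a = make_cnn(2)  # normal vs bronchiectasis
net_b = make_cnn(3)  # cylindrical vs varicose vs cystic

SUBTYPES = ["cylindrical", "varicose", "cystic"]

def classify_roi(roi: torch.Tensor) -> str:
    """roi: (1, 1, 32, 32) CT patch, matching the paper's ROI size."""
    if net_a(roi).argmax(1).item() == 0:
        return "healthy"
    return SUBTYPES[net_b(roi).argmax(1).item()]

print(classify_roi(torch.randn(1, 1, 32, 32)))
```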
Collapse
Affiliation(s)
- Lorenzo Aliboni
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano
| | - Francesca Pennati
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano
| | - Alice Gelmini
- Respiratory Unit and Cystic Fibrosis Adult Center, Fondazione IRCCS Ca'Granda Ospedale Maggiore Policlinico
- Department of Pathophysiology and Transplantation, Università degli Studi di Milano
| | - Alessandra Colombo
- Respiratory Unit and Cystic Fibrosis Adult Center, Fondazione IRCCS Ca'Granda Ospedale Maggiore Policlinico
- Department of Pathophysiology and Transplantation, Università degli Studi di Milano
| | - Andrea Ciuni
- Department of Clinical Sciences, Section of Radiology, University of Parma, Parma
| | - Gianluca Milanese
- Department of Clinical Sciences, Section of Radiology, University of Parma, Parma
| | - Nicola Sverzellati
- Department of Clinical Sciences, Section of Radiology, University of Parma, Parma
| | - Sandro Magnani
- Department of Radiology, ASST Lodi, Ospedale Maggiore di Lodi, Lodi, Italy
| | - Valentina Vespro
- Department of Radiology, Fondazione IRCCS Ca'Granda Ospedale Maggiore Policlinico Milan, University of Milan, Milan
| | - Francesco Blasi
- Respiratory Unit and Cystic Fibrosis Adult Center, Fondazione IRCCS Ca'Granda Ospedale Maggiore Policlinico
- Department of Pathophysiology and Transplantation, Università degli Studi di Milano
| | - Andrea Aliverti
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano
| | - Stefano Aliberti
- Respiratory Unit and Cystic Fibrosis Adult Center, Fondazione IRCCS Ca'Granda Ospedale Maggiore Policlinico
- Department of Pathophysiology and Transplantation, Università degli Studi di Milano
| |
Collapse
|
29
|
AutoCellANLS: An Automated Analysis System for Mycobacteria-Infected Cells Based on Unstained Micrograph. Biomolecules 2022; 12:biom12020240. [PMID: 35204741 PMCID: PMC8961542 DOI: 10.3390/biom12020240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 01/25/2022] [Accepted: 01/25/2022] [Indexed: 11/17/2022] Open
Abstract
The detection of Mycobacterium tuberculosis (Mtb) infection plays an important role in the control of tuberculosis (TB), one of the leading infectious diseases in the world. Recent advances in artificial intelligence-aided cellular image processing and analytical techniques have shown great promise in automated Mtb detection. However, current cell imaging protocols often involve costly and time-consuming fluorescence staining, which has become a major bottleneck for procedural automation. To solve this problem, we have developed a novel automated system (AutoCellANLS) for cell detection and the recognition of morphological features in phase-contrast micrographs using unsupervised machine learning (UML) approaches and deep convolutional neural networks (CNNs). The detection algorithm can adaptively and automatically detect single cells in the cell population with an improved level-set segmentation model incorporating the circular Hough transform (CHT). In addition, we have designed a Cell-net that uses transfer learning strategies (TLS) to classify the virulence-specific cellular morphological changes that would otherwise be indistinguishable to the naked eye. The novel system can simultaneously classify and segment microscopic images of cell populations, achieving an average accuracy of 95.13% for cell detection and 95.94% for morphological classification, with a sensitivity of 94.87% and a specificity of 96.61%. AutoCellANLS is able to detect significant morphological differences between infected and uninfected mammalian cells throughout the infection period (2 hpi/12 hpi/24 hpi). Furthermore, it has overcome the drawback of manual intervention and increased accuracy by more than 11% compared with our previous work, which used AI-aided imaging analysis to detect mycobacterial infection in macrophages. AutoCellANLS is also efficient and versatile when tailored to different cell line datasets (RAW264.7 and THP-1 cells). This proof-of-concept study provides a novel venue for investigating bacterial pathogenesis at a macroscopic level and offers great promise for the diagnosis of bacterial infections.
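The circular Hough transform front end can be sketched with OpenCV; the level-set refinement is omitted, and the file name and Hough parameters are illustrative assumptions that would need tuning per magnification.

```python
# Sketch: circular Hough transform proposes single-cell candidates in
# a phase-contrast micrograph.
import cv2
import numpy as np

img = cv2.imread("phase_contrast.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
blur = cv2.medianBlur(img, 5)  # suppress speckle before the transform
circles = cv2.HoughCircles(
    blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
    param1=100, param2=30, minRadius=8, maxRadius=40,
)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (int(x), int(y)), int(r), 255, 1)  # mark detected cell
    print(f"detected {circles.shape[1]} candidate cells")
```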
Collapse
|
30
|
Soffer S, Morgenthau AS, Shimon O, Barash Y, Konen E, Glicksberg BS, Klang E. Artificial Intelligence for Interstitial Lung Disease Analysis on Chest Computed Tomography: A Systematic Review. Acad Radiol 2022; 29 Suppl 2:S226-S235. [PMID: 34219012 DOI: 10.1016/j.acra.2021.05.014] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 05/10/2021] [Accepted: 05/11/2021] [Indexed: 12/22/2022]
Abstract
RATIONALE AND OBJECTIVES High-resolution computed tomography (HRCT) is paramount in the assessment of interstitial lung disease (ILD). Yet, HRCT interpretation of ILDs may be hampered by inter- and intra-observer variability. Recently, artificial intelligence (AI) has revolutionized medical image analysis. This technology has the potential to advance patient care in ILD. We aimed to systematically evaluate the application of AI for the analysis of ILD in HRCT. MATERIALS AND METHODS We searched MEDLINE/PubMed databases for original publications of deep learning for ILD analysis on chest CT. The search included studies published up to March 1, 2021. The risk of bias evaluation included a tailored Quality Assessment of Diagnostic Accuracy Studies and the modified Joanna Briggs Institute Critical Appraisal checklist. RESULTS Data were extracted from 19 retrospective studies. Deep learning techniques included detection, segmentation, and classification of ILD on HRCT. Most studies focused on the classification of ILD into different morphological patterns. Accuracies of 78%-91% were achieved. Two studies demonstrated near-expert performance for the diagnosis of idiopathic pulmonary fibrosis (IPF). The Quality Assessment of Diagnostic Accuracy Studies tool identified a high risk of bias in 15/19 (78.9%) of the studies. CONCLUSION AI has the potential to contribute to the radiologic diagnosis and classification of ILD. However, the accuracy performance is still not satisfactory, and research is limited by a small number of retrospective studies. Hence, the existing published data may not be sufficiently reliable. Only well-designed prospective controlled studies can accurately assess the value of existing AI tools for ILD evaluation.
Collapse
|
31
|
Feng B, Huang L, Liu Y, Chen Y, Zhou H, Yu T, Xue H, Chen Q, Zhou T, Kuang Q, Yang Z, Chen X, Chen X, Peng Z, Long W. A Transfer Learning Radiomics Nomogram for Preoperative Prediction of Borrmann Type IV Gastric Cancer From Primary Gastric Lymphoma. Front Oncol 2022; 11:802205. [PMID: 35087761 PMCID: PMC8789309 DOI: 10.3389/fonc.2021.802205] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 12/20/2021] [Indexed: 12/12/2022] Open
Abstract
Objective This study aims to differentiate preoperative Borrmann type IV gastric cancer (GC) from primary gastric lymphoma (PGL) by a transfer learning radiomics nomogram (TLRN) with whole slide images of GC as source domain data. Materials and Methods This study retrospectively enrolled 438 patients with histopathologic diagnoses of Borrmann type IV GC and PGL. They received CT examinations from three hospitals. Quantitative transfer learning features were extracted by the proposed transfer learning radiopathomic network and used to construct transfer learning radiomics signatures (TLRS). A TLRN, which integrates TLRS, clinical factors, and CT subjective findings, was developed by multivariate logistic regression. The diagnostic TLRN performance was assessed by clinical usefulness in the independent validation set. Results The TLRN was built from the TLRS and a high enhanced serosa sign, and showed good agreement on the calibration curve. The TLRN performance was superior to that of the clinical model and the TLRS. Its areas under the curve (AUC) were 0.958 (95% confidence interval [CI], 0.883–0.991), 0.867 (95% CI, 0.794–0.922), and 0.921 (95% CI, 0.860–0.960) in the internal and two external validation cohorts, respectively. Decision curve analysis (DCA) showed that the TLRN was better than any other model. The TLRN has potential generalization ability, as shown in the stratification analysis. Conclusions The proposed TLRN based on gastric WSIs may help preoperatively differentiate PGL from Borrmann type IV GC. Keywords: Borrmann type IV gastric cancer; primary gastric lymphoma; transfer learning; whole slide image; deep learning.
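The final nomogram step is an ordinary multivariate logistic regression over the transfer learning signature (a continuous score) and the binary CT sign. A sketch on synthetic data; the coefficients, sample size, and variable names are invented for illustration.

```python
# Sketch: logistic-regression nomogram combining a continuous
# signature score with a binary CT finding.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
tlrs = rng.normal(size=n)                  # transfer learning signature score
serosa_sign = rng.integers(0, 2, size=n)   # high enhanced serosa sign (0/1)
logit = 1.5 * tlrs + 1.0 * serosa_sign - 0.5
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = Borrmann type IV GC

X = np.column_stack([tlrs, serosa_sign])
nomogram = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, nomogram.predict_proba(X)[:, 1])
print(f"coefficients={nomogram.coef_[0].round(2)}, training AUC={auc:.3f}")
```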
Collapse
Affiliation(s)
- Bao Feng
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China.,School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
| | - Liebin Huang
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
| | - Yu Liu
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
| | - Yehang Chen
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
| | - Haoyang Zhou
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
| | - Tianyou Yu
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, China
| | - Huimin Xue
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
| | - Qinxian Chen
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
| | - Tao Zhou
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
| | - Qionglian Kuang
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
| | - Zhiqi Yang
- Department of Radiology, Meizhou People's Hospital, Meizhou, China
| | - Xiangguang Chen
- Department of Radiology, Meizhou People's Hospital, Meizhou, China
| | - Xiaofeng Chen
- Department of Radiology, Meizhou People's Hospital, Meizhou, China
| | - Zhenpeng Peng
- Department of Radiology, The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
| | - Wansheng Long
- Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
| |
Collapse
|
32
|
AI-aided general clinical diagnoses verified by third-parties with dynamic uncertain causality graph extended to also include classification. Artif Intell Rev 2022; 55:4485-4521. [PMID: 35125607 PMCID: PMC8800413 DOI: 10.1007/s10462-021-10109-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/22/2021] [Indexed: 02/06/2023]
Abstract
Artificial intelligence (AI)-aided general clinical diagnosis is helpful to primary clinicians. Machine learning approaches suffer from problems with generalization, interpretability, and the like. The Dynamic Uncertain Causality Graph (DUCG), based on uncertain causal knowledge provided by clinical experts, does not have these problems. This paper extends DUCG to include the representation and inference algorithm for non-causal classification relationships. As part of general clinical diagnosis, six knowledge bases corresponding to six chief complaints (arthralgia, dyspnea, cough and expectoration, epistaxis, fever with rash, and abdominal pain) were constructed by building subgraphs relevant to each chief complaint separately and synthesizing them into the knowledge base of that chief complaint. A subgraph represents the variables and causalities related to a single disease that may cause the chief complaint, regardless of which hospital department the disease belongs to. Verified independently by two groups of third-party hospitals, the total diagnostic precision of the six knowledge bases ranged from 96.5% to 100%, and the precision for every disease was no less than 80%.
Collapse
|
33
|
COVID-19 Detection in Chest X-ray Images Using a New Channel Boosted CNN. Diagnostics (Basel) 2022; 12:diagnostics12020267. [PMID: 35204358 PMCID: PMC8871483 DOI: 10.3390/diagnostics12020267] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Revised: 01/07/2022] [Accepted: 01/16/2022] [Indexed: 02/01/2023] Open
Abstract
COVID-19 is a respiratory illness that has affected a large population worldwide and continues to have devastating consequences. It is imperative to detect COVID-19 at the earliest opportunity to limit the spread of infection. In this work, we developed a new CNN architecture, STM-RENet, to interpret the radiographic patterns in X-ray images. The proposed STM-RENet is a block-based CNN that employs the idea of split–transform–merge in a new way. In this regard, we have proposed a new convolutional block, STM, that implements region- and edge-based operations both separately and jointly. The systematic use of region and edge implementations in combination with convolutional operations helps in exploring region homogeneity, intensity inhomogeneity, and boundary-defining features. The learning capacity of STM-RENet is further enhanced by developing a new CB-STM-RENet that exploits channel boosting and learns textural variations to effectively screen X-ray images for COVID-19 infection. The idea of channel boosting is exploited by generating auxiliary channels from two additional transfer-learned CNNs, which are then concatenated to the original channels of the proposed STM-RENet. The proposed CB-STM-RENet shows a significant performance improvement over standard CNNs on three datasets, especially on the stringent CoV-NonCoV-15k dataset. The good detection rate (97%), accuracy (96.53%), and reasonable F-score (95%) of the proposed technique suggest that it can be adapted to detect COVID-19-infected patients.
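Channel boosting reduces to concatenating auxiliary feature channels from transfer-learned CNNs with the base network's channels before the classification head. A minimal PyTorch sketch with stand-in backbones; the channel counts are illustrative.

```python
# Sketch of channel boosting: base and auxiliary feature maps are
# concatenated along the channel axis before classification.
import torch
import torch.nn as nn

class ChannelBoosted(nn.Module):
    def __init__(self, base, aux1, aux2, channels, num_classes=2):
        super().__init__()
        self.base, self.aux1, self.aux2 = base, aux1, aux2
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, num_classes))

    def forward(self, x):
        boosted = torch.cat([self.base(x), self.aux1(x), self.aux2(x)], dim=1)
        return self.head(boosted)

conv = lambda c: nn.Sequential(nn.Conv2d(1, c, 3, padding=1), nn.ReLU())
model = ChannelBoosted(conv(32), conv(16), conv(16), channels=64)
print(model(torch.randn(2, 1, 64, 64)).shape)  # torch.Size([2, 2])
```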
Collapse
|
34
|
Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. Multimedia Systems 2022; 28:881-914. [PMID: 35079207 PMCID: PMC8776556 DOI: 10.1007/s00530-021-00884-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Accepted: 12/23/2021] [Indexed: 05/07/2023]
Abstract
Medical images are a rich source of invaluable information for clinicians. Recent technologies have introduced many advances for making the most of this information and using it to generate better analyses. Deep learning (DL) techniques have been applied extensively to medical image analysis in computer-assisted imaging contexts, offering many solutions and improvements for the analysis of these images by radiologists and other specialists. In this paper, we present a survey of DL techniques used for a variety of tasks across different medical imaging modalities, providing a critical review of recent developments in this direction. We have organized our paper to present the significant traits of deep learning and explain its concepts, which in turn is helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, detection, etc.) commonly used for clinical purposes at different anatomical sites, and we also present the main key terms for DL attributes such as basic architectures, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to become the core of medical image analysis. We conclude our paper by addressing some research challenges and the solutions for them suggested in the literature, as well as future promises and directions for further development.
Collapse
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229 Himachal Pradesh India
| | - Gaurav Gupta
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229 Himachal Pradesh India
| | - Nabhan Yousef
- Electronics and Communication Engineering, Marwadi University, Rajkot, Gujrat India
| | - Manju Khari
- Jawaharlal Nehru University, New Delhi, India
| |
Collapse
|
35
|
Gour M, Jain S. Uncertainty-aware convolutional neural network for COVID-19 X-ray images classification. Comput Biol Med 2022; 140:105047. [PMID: 34847386 PMCID: PMC8609674 DOI: 10.1016/j.compbiomed.2021.105047] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2021] [Revised: 11/15/2021] [Accepted: 11/15/2021] [Indexed: 12/16/2022]
Abstract
Deep learning (DL) has shown great success in the field of medical image analysis. In the wake of the current SARS-CoV-2 pandemic, a few pioneering DL-based works have made significant progress in the automated screening of COVID-19 disease from chest X-ray (CXR) images. But these DL models have no inherent way of expressing the uncertainty associated with their predictions, which is very important in medical image analysis. Therefore, in this paper, we develop an uncertainty-aware convolutional neural network model, named UA-ConvNet, for the automated detection of COVID-19 disease from CXR images, with an estimation of the uncertainty associated with the model's predictions. The proposed approach utilizes the EfficientNet-B3 model and Monte Carlo (MC) dropout, where an EfficientNet-B3 model has been fine-tuned on the CXR images. During inference, MC dropout is applied over M forward passes to obtain the posterior predictive distribution. The mean and entropy of the obtained predictive distribution then give the mean prediction and the model uncertainty. The proposed method is evaluated on three different datasets of chest X-ray images, namely the COVID19CXr, X-ray image, and Kaggle datasets. The proposed UA-ConvNet model achieves a G-mean of 98.02% (with a confidence interval (CI) of 97.99-98.07) and a sensitivity of 98.15% for the multi-class classification task on the COVID19CXr dataset. For binary classification, the proposed model achieves a G-mean of 99.16% (with a CI of 98.81-99.19) and a sensitivity of 99.30% on the X-ray image dataset. Our proposed approach shows its superiority over existing methods for diagnosing COVID-19 cases from CXR images.
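MC-dropout inference is simple to reproduce: keep dropout stochastic at test time, average M softmax outputs, and use the entropy of the mean as the uncertainty. A sketch with a toy backbone standing in for the fine-tuned EfficientNet-B3; M=30 and the dropout rate are illustrative.

```python
# Sketch of MC-dropout inference: predictive mean and its entropy.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(), nn.Linear(224 * 224, 128), nn.ReLU(),
    nn.Dropout(p=0.3), nn.Linear(128, 3),
)

def mc_dropout_predict(model, x, m=30):
    model.train()  # keeps Dropout stochastic during inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(m)])
    mean = probs.mean(0)                                    # predictive mean
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(1)  # uncertainty
    return mean, entropy

x = torch.randn(4, 1, 224, 224)  # a batch of CXR-sized inputs
mean, unc = mc_dropout_predict(model, x)
print(mean.argmax(1), unc)
```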
Collapse
Affiliation(s)
- Mahesh Gour
- Maulana Azad National Institute of Technology, Bhopal, MP, 462003, India.
| | - Sweta Jain
- Maulana Azad National Institute of Technology, Bhopal, MP, 462003, India
| |
Collapse
|
36
|
Suzuki Y, Kido S, Mabu S, Yanagawa M, Tomiyama N, Sato Y. Segmentation of Diffuse Lung Abnormality Patterns on Computed Tomography Images using Partially Supervised Learning. ADVANCED BIOMEDICAL ENGINEERING 2022. [DOI: 10.14326/abe.11.25] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
Affiliation(s)
- Yuki Suzuki
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine
| | - Shoji Kido
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine
| | - Shingo Mabu
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University
| | - Masahiro Yanagawa
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine
| | - Noriyuki Tomiyama
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine
| | - Yoshinobu Sato
- Division of Information Science, Graduate School of Science and Technology, Nara Institute of Science and Technology
| |
Collapse
|
38
|
Xie F, Yuan H, Ning Y, Ong MEH, Feng M, Hsu W, Chakraborty B, Liu N. Deep learning for temporal data representation in electronic health records: A systematic review of challenges and methodologies. J Biomed Inform 2021; 126:103980. [PMID: 34974189 DOI: 10.1016/j.jbi.2021.103980] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Revised: 11/07/2021] [Accepted: 12/20/2021] [Indexed: 12/21/2022]
Abstract
OBJECTIVE Temporal electronic health records (EHRs) contain a wealth of information for secondary uses, such as clinical events prediction and chronic disease management. However, challenges exist for temporal data representation. We therefore sought to identify these challenges and evaluate novel methodologies for addressing them through a systematic examination of deep learning solutions. METHODS We searched five databases (PubMed, Embase, the Institute of Electrical and Electronics Engineers [IEEE] Xplore Digital Library, the Association for Computing Machinery [ACM] Digital Library, and Web of Science) complemented with hand-searching in several prestigious computer science conference proceedings. We sought articles that reported deep learning methodologies on temporal data representation in structured EHR data from January 1, 2010, to August 30, 2020. We summarized and analyzed the selected articles from three perspectives: nature of time series, methodology, and model implementation. RESULTS We included 98 articles related to temporal data representation using deep learning. Four major challenges were identified, including data irregularity, heterogeneity, sparsity, and model opacity. We then studied how deep learning techniques were applied to address these challenges. Finally, we discuss some open challenges arising from deep learning. CONCLUSION Temporal EHR data present several major challenges for clinical prediction modeling and data utilization. To some extent, current deep learning solutions can address these challenges. Future studies may consider designing comprehensive and integrated solutions. Moreover, researchers should incorporate clinical domain knowledge into study designs and enhance model interpretability to facilitate clinical implementation.
Collapse
Affiliation(s)
- Feng Xie
- Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore; Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
| | - Han Yuan
- Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
| | - Yilin Ning
- Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
| | - Marcus Eng Hock Ong
- Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore; Department of Emergency Medicine, Singapore General Hospital, Singapore
| | - Mengling Feng
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore
| | - Wynne Hsu
- School of Computing, National University of Singapore, Singapore; Institute of Data Science, National University of Singapore, Singapore
| | - Bibhas Chakraborty
- Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore; Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore; Department of Statistics and Data Science, National University of Singapore, Singapore; Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, United States
| | - Nan Liu
- Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore; Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore; Institute of Data Science, National University of Singapore, Singapore; SingHealth AI Health Program, Singapore Health Services, Singapore.
| |
Collapse
|
39
|
Ren Q, Zhou B, Tian L, Guo W. Detection of COVID-19 with CT Images using Hybrid Complex Shearlet Scattering Networks. IEEE J Biomed Health Inform 2021; 26:194-205. [PMID: 34855604 DOI: 10.1109/jbhi.2021.3132157] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
With the ongoing worldwide coronavirus disease 2019 (COVID-19) pandemic, it is desirable to develop effective algorithms for the automatic detection of COVID-19 with chest computed tomography (CT) images. As deep learning has achieved breakthrough results in numerous computer vision and image understanding tasks, a good choice is to consider diagnosis models based on deep learning. Recently, a considerable number of methods have indeed been proposed. However, training an accurate deep learning model requires a large-scale chest CT dataset, which is hard to collect due to the high contagiousness of COVID-19. To achieve improved COVID-19 detection performance, this paper proposes a hybrid framework that fuses the complex shearlet scattering transform (CSST) and a suitable convolutional neural network into a single model. The introduced CSST cascades complex shearlet transforms with modulus nonlinearities and low-pass filter convolutions to compute a sparse and locally invariant image representation. The features computed from the input chest CT images are discriminative for the detection of COVID-19. Furthermore, a wide residual network with a redesigned residual block (WR2N) is developed to learn more granular multiscale representations by applying it to scattering features. The combination of the model-based CSST and data-driven WR2N leads to a more convenient neural network for image representation, where the idea is to learn only the image parts that the CSST cannot handle instead of all parts. The experimental results obtained on two public chest CT datasets for COVID-19 detection demonstrate the superiority of the proposed method. We can obtain more accurate results than several state-of-the-art COVID-19 classification methods in terms of measures such as accuracy, the F1-score, and the area under the receiver operating characteristic curve.
Collapse
|
40
|
Wang W, Mohseni P, Kilgore KL, Najafizadeh L. Cuff-less Blood Pressure Estimation from Photoplethysmography via Visibility Graph and Transfer Learning. IEEE J Biomed Health Inform 2021; 26:2075-2085. [PMID: 34784289 DOI: 10.1109/jbhi.2021.3128383] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
This paper presents a new solution that enables the use of transfer learning for cuff-less blood pressure (BP) monitoring from short segments of the photoplethysmogram (PPG). The proposed method estimates BP with a low computational budget by 1) creating images from segments of the PPG via the visibility graph (VG), which preserves the temporal information of the PPG waveform; 2) using a pre-trained deep convolutional neural network (CNN) to extract feature vectors from the VG images; and 3) solving for the weights and bias between the feature vectors and the reference BPs with ridge regression. Using the University of California Irvine (UCI) database consisting of 348 records, the proposed method achieves a best error performance of 0.00±8.46 mmHg for systolic blood pressure (SBP) and -0.04±5.36 mmHg for diastolic blood pressure (DBP), in terms of the mean error (ME) and the standard deviation (SD) of error, ranking grade B for SBP and grade A for DBP under the British Hypertension Society (BHS) protocol. Our novel data-driven method offers a computationally efficient end-to-end solution for rapid and user-friendly cuff-less PPG-based BP estimation.
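The natural visibility graph construction is well defined: each PPG sample is a node, and two samples are linked if the straight line between them passes above every intermediate sample. A reference O(n^2) implementation; the segment length and noise level of the demo signal are illustrative.

```python
# Natural visibility graph of a 1-D signal (Lacasa et al. criterion).
import numpy as np

def natural_visibility_graph(sig: np.ndarray) -> np.ndarray:
    n = len(sig)
    adj = np.zeros((n, n), dtype=bool)
    for a in range(n):
        for b in range(a + 1, n):
            # visible iff every sample c in (a, b) lies below the chord a-b
            visible = all(
                sig[c] < sig[b] + (sig[a] - sig[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            adj[a, b] = adj[b, a] = visible
    return adj

ppg = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.05 * np.random.randn(64)
adj = natural_visibility_graph(ppg)
print(f"{adj.sum() // 2} edges")  # the adjacency image can feed a pretrained CNN
```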
Collapse
|
41
|
Wong A, Lu J, Dorfman A, McInnis P, Famouri M, Manary D, Lee JRH, Lynch M. Fibrosis-Net: A Tailored Deep Convolutional Neural Network Design for Prediction of Pulmonary Fibrosis Progression From Chest CT Images. Front Artif Intell 2021; 4:764047. [PMID: 34805974 PMCID: PMC8596329 DOI: 10.3389/frai.2021.764047] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Accepted: 10/11/2021] [Indexed: 01/02/2023] Open
Abstract
Pulmonary fibrosis is a devastating chronic lung disease that causes irreparable lung tissue scarring and damage, resulting in a progressive loss of lung capacity, and it has no known cure. A critical step in the treatment and management of pulmonary fibrosis is the assessment of lung function decline, with computed tomography (CT) imaging being a particularly effective method for determining the extent of lung damage caused by pulmonary fibrosis. Motivated by this, we introduce Fibrosis-Net, a deep convolutional neural network design tailored for the prediction of pulmonary fibrosis progression from chest CT images. More specifically, machine-driven design exploration was leveraged to determine a strong architectural design for CT lung analysis, upon which we build a customized network design tailored for predicting forced vital capacity (FVC) based on a patient's CT scan, initial spirometry measurement, and clinical metadata. Finally, we leverage an explainability-driven performance validation strategy to study the decision-making behavior of Fibrosis-Net and verify that predictions are based on relevant visual indicators in CT images. Experiments using a patient cohort from the OSIC Pulmonary Fibrosis Progression Challenge showed that the proposed Fibrosis-Net achieves a significantly higher modified Laplace Log Likelihood score than the winning solutions in the challenge. Furthermore, explainability-driven performance validation demonstrated that Fibrosis-Net exhibits correct decision-making behavior by leveraging clinically relevant visual indicators in CT images when making predictions on pulmonary fibrosis progression. Fibrosis-Net is available to the general public in an open-source and open-access manner as part of the OpenMedAI initiative. While Fibrosis-Net is not yet a production-ready clinical assessment solution, we hope that its release will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon it.
Collapse
Affiliation(s)
- Alexander Wong
- Vision and Image Processing Research Group, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- DarwinAI Corp., Waterloo, ON, Canada
| | - Jack Lu
- DarwinAI Corp., Waterloo, ON, Canada
| | | | | | | | | | | | | |
Collapse
|
42
|
Si X, Zhang X, Zhou Y, Chao Y, Lim SN, Sun Y, Yin S, Jin W, Zhao X, Li Q, Ming D. White matter structural connectivity as a biomarker for detecting juvenile myoclonic epilepsy by transferred deep convolutional neural networks with varying transfer rates. J Neural Eng 2021; 18. [PMID: 34507303 DOI: 10.1088/1741-2552/ac25d8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2021] [Accepted: 09/10/2021] [Indexed: 11/12/2022]
Abstract
Objective. By detecting abnormal white matter changes, diffusion magnetic resonance imaging (MRI) contributes to the detection of juvenile myoclonic epilepsy (JME). In addition, deep learning has greatly improved the detection performance for various brain disorders. However, almost no previous study has effectively detected JME with a deep learning approach applied to diffusion MRI.Approach. In this study, the white matter structural connectivity was generated by tracking the white matter fibers in detail based on Q-ball imaging and neurite orientation dispersion and density imaging. Four advanced deep convolutional neural networks (CNNs) were deployed using the transfer learning approach, in which a transfer rate searching strategy was proposed to achieve the best detection performance.Main results. Our results showed: (a) Compared to normal controls, the neurite density of the white matter in JME was significantly decreased. The most significantly abnormal fiber tracts between the two groups were found to be cortico-cortical connection tracts. (b) The proposed transfer rate searching approach helped to find each CNN's best performance, in which the best JME detection accuracy of 92.2% was achieved using the Inception_resnet_v2 network with a 16% transfer rate.Significance. The results revealed: (a) Through detection of the abnormal white matter changes, white matter structural connectivity can be used as a useful biomarker for detecting JME, which helps to characterize the pathophysiology of epilepsy. (b) The proposed transfer rate, as a new hyperparameter, promotes the transfer learning performance of CNNs in detecting JME.
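The transfer rate can be read as the fraction of a pretrained network that is re-trained while the rest stays frozen. One plausible sketch of that idea, assuming the rate is counted over parameter tensors from the output end (the paper's exact layer-counting convention is not given here), with a torchvision backbone standing in for Inception_resnet_v2:

```python
# Sketch: freeze all but the last `rate` fraction of a pretrained CNN.
import torch
from torchvision import models

def apply_transfer_rate(model: torch.nn.Module, rate: float):
    params = list(model.parameters())
    cutoff = int(len(params) * (1 - rate))  # freeze the earliest tensors
    for i, p in enumerate(params):
        p.requires_grad = i >= cutoff
    return model

model = models.resnet50(weights="IMAGENET1K_V1")
model = apply_transfer_rate(model, rate=0.16)  # e.g., the paper's best 16%
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```

The searching strategy would then sweep `rate` over a grid and keep the value with the best validation score.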
Collapse
Affiliation(s)
- Xiaopeng Si
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China.,Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China.,Institute of Applied Psychology, Tianjin University, Tianjin 300350, People's Republic of China
| | - Xingjian Zhang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China.,Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
| | - Yu Zhou
- School of Microelectronics, Tianjin University, Tianjin 300072, People's Republic of China
| | - Yiping Chao
- Graduate Institute of Biomedical Engineering, Chang Gung University, Taoyuan 33302, Taiwan
| | - Siew-Na Lim
- Department of Neurology, Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
| | - Yulin Sun
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China.,Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
| | - Shaoya Yin
- Department of Neurosurgery, Huanhu Hospital, Tianjin University, Tianjin 300072, People's Republic of China
| | - Weipeng Jin
- Department of Neurosurgery, Huanhu Hospital, Tianjin University, Tianjin 300072, People's Republic of China
| | - Xin Zhao
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China.,Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
| | - Qiang Li
- School of Microelectronics, Tianjin University, Tianjin 300072, People's Republic of China
| | - Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China.,Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
| |
Collapse
|
43
|
Dehkharghanian T, Rahnamayan S, Riasatian A, Bidgoli AA, Kalra S, Zaveri M, Babaie M, Seyed Sajadi MS, Gonzalez R, Diamandis P, Pantanowitz L, Huang T, Tizhoosh HR. Selection, Visualization, and Interpretation of Deep Features in Lung Adenocarcinoma and Squamous Cell Carcinoma. Am J Pathol 2021; 191:2172-2183. [PMID: 34508689 DOI: 10.1016/j.ajpath.2021.08.013] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Revised: 08/09/2021] [Accepted: 08/20/2021] [Indexed: 12/18/2022]
Abstract
Although deep learning networks applied to digital images have shown impressive results for many pathology-related tasks, their black-box nature and limited interpretability are significant obstacles to widespread clinical utility. This study investigates the visualization of deep features (DFs) to characterize two lung cancer subtypes, adenocarcinoma and squamous cell carcinoma. It demonstrates that a subset of DFs, termed prominent DFs, exists that can accurately distinguish these two cancer subtypes. Visualizing such individual DFs allows a better understanding of the histopathologic patterns, at both the whole-slide and patch levels, that discriminate these cancer types. These DFs were visualized at the whole-slide-image level through DF-specific heatmaps and at the tissue-patch level through generated activation maps. In addition, we show that these prominent DFs contain information that can distinguish carcinomas of organs other than the lung. This framework may serve as a platform for evaluating the interpretability of any deep network for diagnostic decision making.
Collapse
Affiliation(s)
- Taher Dehkharghanian
- Nature Inspired Computer Intelligence (NICI) Lab, Ontario Tech University, Oshawa, Ontario, Canada; Department of Pathology and Molecular Medicine, McMaster University, Hamilton, Ontario, Canada
| | - Shahryar Rahnamayan
- Nature Inspired Computer Intelligence (NICI) Lab, Ontario Tech University, Oshawa, Ontario, Canada
| | - Abtin Riasatian
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
| | - Azam A Bidgoli
- Nature Inspired Computer Intelligence (NICI) Lab, Ontario Tech University, Oshawa, Ontario, Canada
| | - Shivam Kalra
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
| | - Manit Zaveri
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
| | - Morteza Babaie
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
| | - Mahjabin S Seyed Sajadi
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
- Ricardo Gonzalez
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada
| | - Phedias Diamandis
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
| | - Liron Pantanowitz
- Department of Pathology, University of Michigan, Ann Arbor, Michigan
| | - Tao Huang
- Department of Pathology, University of Michigan, Ann Arbor, Michigan
| | - Hamid R Tizhoosh
- KIMIA (Laboratory for Knowledge Inference in Medical Image Analysis) Lab, University of Waterloo, Waterloo, Ontario, Canada.
| |
Collapse
|
44
|
Zero-small sample classification method with model structure self-optimization and its application in capability evaluation. APPL INTELL 2021. [DOI: 10.1007/s10489-021-02686-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
|
45
|
Bidirectional cross-modality unsupervised domain adaptation using generative adversarial networks for cardiac image segmentation. Comput Biol Med 2021; 136:104726. [PMID: 34371318 DOI: 10.1016/j.compbiomed.2021.104726] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Revised: 07/29/2021] [Accepted: 07/30/2021] [Indexed: 11/22/2022]
Abstract
BACKGROUND A novel Generative Adversarial Network (GAN)-based bidirectional cross-modality unsupervised domain adaptation (GBCUDA) framework is developed for cardiac image segmentation, which effectively tackles the degradation of a network's segmentation performance when adapting to a target domain without ground-truth labels. METHOD GBCUDA uses a GAN for image alignment and applies adversarial learning to extract image features, gradually enhancing the domain invariance of the extracted features. The shared encoder performs an end-to-end learning task in which features that differ between the two domains complement each other. A self-attention mechanism is incorporated into the GAN, which can generate details based on cues from all feature positions. Furthermore, spectral normalization is implemented to stabilize GAN training, and a knowledge distillation loss is introduced to process high-level feature maps in order to better complete the cross-modality segmentation task. RESULTS The effectiveness of the proposed unsupervised domain adaptation framework is tested on the Multi-Modality Whole Heart Segmentation (MM-WHS) Challenge 2017 dataset. The proposed method improves the average Dice from 74.1% to 81.5% for the four cardiac substructures and reduces the average symmetric surface distance (ASD) from 7.0 to 5.8 on CT images. For MRI images, the framework trained on CT images achieves an average Dice of 59.2% and reduces the average ASD from 5.7 to 4.9. CONCLUSIONS The evaluation results demonstrate the method's effectiveness at domain adaptation and its superiority over current state-of-the-art domain adaptation methods.
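The GBCUDA framework itself is not reproduced here; a minimal PyTorch sketch of two ingredients the abstract names, spectral normalization for stable GAN training and a SAGAN-style self-attention layer that draws on cues from all feature positions, could look like the following. The layer widths and the toy single-block discriminator are assumptions for illustration.

```python
# Sketch: spectral normalization and SAGAN-style self-attention, two ingredients
# named in the abstract. Channel counts are illustrative assumptions.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SelfAttention(nn.Module):
    """Attend over all spatial positions of a feature map (SAGAN style)."""
    def __init__(self, channels):
        super().__init__()
        self.query = spectral_norm(nn.Conv2d(channels, channels // 8, 1))
        self.key   = spectral_norm(nn.Conv2d(channels, channels // 8, 1))
        self.value = spectral_norm(nn.Conv2d(channels, channels, 1))
        self.gamma = nn.Parameter(torch.zeros(1))   # starts as an identity mapping

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)            # (b, hw, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

# Toy spectrally normalized discriminator for 1-channel medical images.
discriminator = nn.Sequential(
    spectral_norm(nn.Conv2d(1, 64, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
    SelfAttention(64),
    spectral_norm(nn.Conv2d(64, 1, 4)),
)
```

Spectral normalization constrains each layer's Lipschitz constant, which is what stabilizes the adversarial training the abstract refers to.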
Collapse
|
46
|
Yu H, Yang LT, Zhang Q, Armstrong D, Deen MJ. Convolutional neural networks for medical image analysis: State-of-the-art, comparisons, improvement and perspectives. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.04.157] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
47
|
Wang Y, Feng Y, Zhang L, Wang Z, Lv Q, Yi Z. Deep adversarial domain adaptation for breast cancer screening from mammograms. Med Image Anal 2021; 73:102147. [PMID: 34246849 DOI: 10.1016/j.media.2021.102147] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Revised: 11/10/2020] [Accepted: 06/23/2021] [Indexed: 02/05/2023]
Abstract
The early detection of breast cancer greatly increases the chances that the right decision for a successful treatment plan will be made. Deep learning approaches are used in breast cancer screening and have achieved promising results when a large-scale labeled dataset is available for training. However, they may suffer from a dramatic decrease in performance when annotated data are limited. In this paper, we propose a method called deep adversarial domain adaptation (DADA) to improve the performance of breast cancer screening using mammography. Our aim is to extract knowledge from a public dataset (the source domain) and transfer it to improve detection performance on the target dataset (the target domain). Because the source and target domains have different distributions, the proposed method adopts an adversarial learning technique to perform domain adaptation between the two domains; specifically, the adversarial procedure is trained by taking advantage of the disagreement of two classifiers. To evaluate the proposed method, the well-labeled public image-level dataset, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), is employed as the source domain. Mammography samples from West China Hospital were collected to construct our target domain dataset and were annotated at the case level based on the corresponding pathological reports. The experimental results demonstrate the effectiveness of the proposed method compared with several other state-of-the-art automatic breast cancer screening approaches.
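The abstract's key mechanism, training the adversarial procedure on the disagreement of two classifiers, is in the spirit of maximum classifier discrepancy; a minimal sketch under that assumption might be the following, where the toy feature extractor, the shapes, and the L1 discrepancy measure are all illustrative.

```python
# Sketch: classifier-disagreement adversarial adaptation (maximum classifier
# discrepancy spirit). Network sizes and the L1 measure are assumptions.
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(nn.Flatten(),
                                  nn.Linear(224 * 224, 256), nn.ReLU())
clf1 = nn.Linear(256, 2)   # two task classifiers over the shared features
clf2 = nn.Linear(256, 2)

def discrepancy(logits1, logits2):
    """L1 distance between the two classifiers' softmax outputs."""
    return (logits1.softmax(dim=1) - logits2.softmax(dim=1)).abs().mean()

# Adversarial step on unlabeled target images: the classifiers are trained to
# maximize their disagreement on target features, then the feature extractor
# is trained to minimize it, pushing target features toward regions of the
# feature space where the two classifiers agree.
x_target = torch.randn(8, 1, 224, 224)          # stand-in for target mammograms
f = feature_extractor(x_target)
loss_disc = discrepancy(clf1(f), clf2(f))
```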
Collapse
Affiliation(s)
- Yan Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China; Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
| | - Yangqin Feng
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
| | - Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China.
| | - Zizhou Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
| | - Qing Lv
- Department of Galactophore Surgery, West China Hospital, Sichuan University, Chengdu 610041, PR China
| | - Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
| |
Collapse
|
48
|
Rahman T, Khandakar A, Qiblawey Y, Tahir A, Kiranyaz S, Abul Kashem SB, Islam MT, Al Maadeed S, Zughaier SM, Khan MS, Chowdhury ME. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Comput Biol Med 2021; 132:104319. [PMID: 33799220 PMCID: PMC7946571 DOI: 10.1016/j.compbiomed.2021.104319] [Citation(s) in RCA: 275] [Impact Index Per Article: 68.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Revised: 03/03/2021] [Accepted: 03/04/2021] [Indexed: 02/06/2023]
Abstract
Computer-aided diagnosis for the reliable and fast detection of coronavirus disease (COVID-19) has become a necessity during the pandemic, both to prevent the spread of the virus and to ease the burden on the healthcare system. Chest X-ray (CXR) imaging has several advantages over other imaging and detection techniques. Numerous works have reported COVID-19 detection from smaller sets of original X-ray images; however, the effect of image enhancement and lung segmentation on COVID-19 detection in a large dataset had not been reported in the literature. We compiled a large X-ray dataset (COVQU) consisting of 18,479 CXR images, with 8851 normal, 6012 non-COVID lung infection, and 3616 COVID-19 CXR images and their corresponding ground-truth lung masks. To the best of our knowledge, this is the largest public COVID-19-positive database with lung masks. Five different image enhancement techniques, namely histogram equalization (HE), contrast limited adaptive histogram equalization (CLAHE), image complement, gamma correction, and the balance contrast enhancement technique (BCET), were used to investigate the effect of image enhancement on COVID-19 detection. A novel U-Net model was proposed and compared with the standard U-Net model for lung segmentation. Six different pretrained convolutional neural networks (ResNet18, ResNet50, ResNet101, InceptionV3, DenseNet201, and CheXNet) and a shallow CNN model were investigated on the plain and segmented lung CXR images. The novel U-Net model achieved an accuracy, intersection over union (IoU), and Dice coefficient of 98.63%, 94.3%, and 96.94%, respectively, for lung segmentation. The gamma correction-based enhancement technique outperformed the other techniques in detecting COVID-19 from both the plain and the segmented lung CXR images. Classification performance on plain CXR images was slightly better than on segmented lung CXR images; however, the reliability of network performance was significantly improved for the segmented lung images, as observed using a visualization technique. The accuracy, precision, sensitivity, F1-score, and specificity for the segmented lung images were 95.11%, 94.55%, 94.56%, 94.53%, and 95.59%, respectively. The proposed approach, with its reliable and comparable performance, will boost fast and robust COVID-19 detection using chest X-ray images.
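Two of the compared enhancements, gamma correction and CLAHE, are easy to sketch with OpenCV; the gamma value, the CLAHE clip limit and tile grid, and the file name below are illustrative assumptions rather than the paper's tuned settings.

```python
# Sketch: gamma correction and CLAHE on a grayscale CXR with OpenCV.
# gamma=0.8, clip=2.0, tiles=(8, 8), and the file name are assumptions.
import cv2
import numpy as np

def gamma_correct(img, gamma=0.8):
    """Remap intensities with a power-law lookup table."""
    lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    return cv2.LUT(img, lut)

def clahe_enhance(img, clip=2.0, tiles=(8, 8)):
    """Contrast-limited adaptive histogram equalization."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    return clahe.apply(img)

cxr = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
enhanced = clahe_enhance(gamma_correct(cxr))
```

In the paper's comparison each enhancement was applied on its own before classification; they are chained here only to keep the sketch short.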
Collapse
Affiliation(s)
- Tawsifur Rahman
- Department of Electrical Engineering, Qatar University, Doha, 2713, Qatar
| | - Amith Khandakar
- Department of Electrical Engineering, Qatar University, Doha, 2713, Qatar
| | - Yazan Qiblawey
- Department of Electrical Engineering, Qatar University, Doha, 2713, Qatar
| | - Anas Tahir
- Department of Electrical Engineering, Qatar University, Doha, 2713, Qatar
| | - Serkan Kiranyaz
- Department of Electrical Engineering, Qatar University, Doha, 2713, Qatar
| | - Saad Bin Abul Kashem
- Faculty of Robotics and Advanced Computing, Qatar Armed Forces Academic Bridge Program, Qatar Foundation, Doha, 24404, Qatar
| | - Mohammad Tariqul Islam
- Department of Electrical, Electronics and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi, Selangor, 43600, Malaysia
| | - Somaya Al Maadeed
- Department of Computer Science and Engineering, Qatar University, Doha, 2713, Qatar
| | - Susu M. Zughaier
- Department of Basic Medical Sciences, College of Medicine, Biomedical and Pharmaceutical Research Unit, QU Health, Qatar University, Doha, 2713, Qatar
| | - Muhammad Salman Khan
- Department of Electrical Engineering (JC), University of Engineering and Technology, Peshawar, Pakistan
| | - Muhammad E.H. Chowdhury
- Department of Electrical Engineering, Qatar University, Doha, 2713, Qatar. Corresponding author.
| |
Collapse
|
49
|
Altaf F, Islam SMS, Janjua NK. A novel augmented deep transfer learning for classification of COVID-19 and other thoracic diseases from X-rays. Neural Comput Appl 2021; 33:14037-14048. [PMID: 33948047 PMCID: PMC8083924 DOI: 10.1007/s00521-021-06044-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Accepted: 04/13/2021] [Indexed: 12/24/2022]
Abstract
Deep learning has provided numerous breakthroughs in natural imaging tasks. However, its successful application to medical images is severely handicapped by the limited amount of annotated training data. Transfer learning is commonly adopted for medical imaging tasks; however, a large covariate shift between the source domain of natural images and the target domain of medical images results in poor transfer learning, and the scarcity of annotated medical data causes further problems for effective transfer. To address these problems, we develop an augmented ensemble transfer learning technique that leads to significant performance gains over conventional transfer learning. Our technique uses an ensemble of deep learning models in which the architecture of each network is modified with extra layers to account for the dimensionality change between the images of the source and target data domains. Moreover, each model is hierarchically tuned to the target domain with augmented training data. Along with the network ensemble, we also utilize an ensemble of dictionaries based on features extracted from the augmented models; the dictionary ensemble provides an additional performance boost to our method. We first establish the effectiveness of our technique on the challenging ChestX-ray14 radiography dataset. Our experimental results show more than a 50% reduction in the error rate with our method compared to the baseline transfer learning technique. We then apply our technique to a recent COVID-19 dataset for binary and multi-class classification tasks, achieving 99.49% accuracy for binary classification and 99.24% for multi-class classification.
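The full ensemble is not reproduced here; a minimal sketch of one element of the recipe, prepending an adapter layer so a network pretrained on 3-channel natural images accepts 1-channel X-rays and then tuning hierarchically, might look like the following. The DenseNet121 backbone, the 15-class head, and the freezing schedule are assumptions, not the paper's exact design.

```python
# Sketch: adapter layer for the source/target dimensionality change plus
# hierarchical (head-first) fine-tuning. Backbone and schedule are assumptions.
import torch.nn as nn
from torchvision import models

class AdaptedNet(nn.Module):
    def __init__(self, num_classes=15):
        super().__init__()
        # 1x1 conv maps 1-channel X-rays to the 3 channels the backbone expects.
        self.adapter = nn.Conv2d(1, 3, kernel_size=1)
        self.backbone = models.densenet121(
            weights=models.DenseNet121_Weights.DEFAULT)
        self.backbone.classifier = nn.Linear(
            self.backbone.classifier.in_features, num_classes)

    def forward(self, x):
        return self.backbone(self.adapter(x))

model = AdaptedNet()
# Stage 1: train only the adapter and the new classification head.
for p in model.backbone.features.parameters():
    p.requires_grad = False
# Stage 2 (later epochs): unfreeze deeper blocks and continue at a lower LR.
```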
Collapse
Affiliation(s)
- Fouzia Altaf
- School of Science, Edith Cowan University, Joondalup, WA, Australia
| | - Syed M. S. Islam
- School of Science, Edith Cowan University, Joondalup, WA, Australia
| |
Collapse
|
50
|
Chen H, Guo S, Hao Y, Fang Y, Fang Z, Wu W, Liu Z, Li S. Auxiliary Diagnosis for COVID-19 with Deep Transfer Learning. J Digit Imaging 2021; 34:231-241. [PMID: 33634413 PMCID: PMC7906243 DOI: 10.1007/s10278-021-00431-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Revised: 01/21/2021] [Accepted: 02/02/2021] [Indexed: 12/30/2022] Open
Abstract
To assist physicians in identifying COVID-19 and its manifestations, we performed automatic COVID-19 recognition and classification in chest CT images with deep transfer learning. In this retrospective study, the chest CT image dataset covered 422 subjects: 72 confirmed COVID-19 subjects (260 studies, 30,171 images); 252 other-pneumonia subjects (252 studies, 26,534 images), comprising 158 viral pneumonia subjects and 94 pulmonary tuberculosis subjects; and 98 normal subjects (98 studies, 29,838 images). In the experiment, subjects were split into training (70%), validation (15%), and testing (15%) sets. We utilized the convolutional blocks of ResNets pretrained on public social image collections and modified the top fully connected layer to suit our task (COVID-19 recognition). In addition, we tested the proposed method on a fine-grained classification task: the COVID-19 images were further split into three main manifestations (ground-glass opacity, 12,924 images; consolidation, 7418 images; and fibrotic streaks, 7338 images), with the same 70%-15%-15% data partitioning strategy. The best performance, obtained by the pretrained ResNet50 model, was 94.87% sensitivity, 88.46% specificity, and 91.21% accuracy for COVID-19 versus all other groups, with an overall accuracy of 89.01% for the three-category classification in the testing set. Consistent performance was observed on the COVID-19 manifestation classification task on an image basis, where the best overall accuracy of 94.08% and an AUC of 0.993 were obtained by the pretrained ResNet18 (P < 0.05). All the proposed models achieved satisfying performance and are thus promising for practical application. Transfer learning is worth exploring for the recognition and classification of COVID-19 on CT images with limited training data: it not only achieved higher sensitivity (COVID-19 vs the rest) but also took far less time than radiologists, and is therefore expected to provide auxiliary diagnosis and reduce radiologists' workload.
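The fine-tuning setup the abstract describes, a pretrained ResNet with its top fully connected layer replaced for the new task, is straightforward to sketch; the discriminative learning rates below are an assumption for illustration, not the authors' reported settings.

```python
# Sketch: pretrained ResNet50 with the fc head replaced for a 3-class task
# (COVID-19 / other pneumonia / normal). Learning rates are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)   # new task-specific head

optimizer = torch.optim.Adam([
    # Freshly initialized head: higher learning rate.
    {"params": model.fc.parameters(), "lr": 1e-3},
    # Pretrained convolutional trunk: much lower learning rate.
    {"params": [p for n, p in model.named_parameters()
                if not n.startswith("fc.")], "lr": 1e-5},
])
criterion = nn.CrossEntropyLoss()
```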
Collapse
Affiliation(s)
- Hongtao Chen
- The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
| | - Shuanshuan Guo
- The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
| | - Yanbin Hao
- School of Data Science, University of Science and Technology of China, Hefei, 230026, Anhui, China.
- Department of Computer Science, City University of Hong Kong, Hong Kong, 999077, China.
| | - Yijie Fang
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, 519000, Guangdong, China
| | - Zhaoxiong Fang
- The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
| | - Wenhao Wu
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, 519000, Guangdong, China
| | - Zhigang Liu
- The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
| | - Shaolin Li
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, 519000, Guangdong, China.
| |
Collapse
|