151
Zhou J, Pan F, Li W, Hu H, Wang W, Huang Q. Feature Fusion for Diagnosis of Atypical Hepatocellular Carcinoma in Contrast-Enhanced Ultrasound. IEEE Trans Ultrason Ferroelectr Freq Control 2022; 69:114-123. [PMID: 34487493 DOI: 10.1109/tuffc.2021.3110590]
Abstract
Contrast-enhanced ultrasound (CEUS) is widely employed for the diagnosis of focal liver lesions (FLLs). Among FLLs, atypical hepatocellular carcinoma (HCC) is difficult to distinguish from focal nodular hyperplasia (FNH) in CEUS video, so we propose and evaluate a feature fusion method to address this problem. The proposed algorithm extracts a set of hand-crafted features and deep features from CEUS cine clip data. The hand-crafted features include a spatial-temporal feature based on a novel descriptor called the Velocity-Similarity and Dissimilarity Matching Local Binary Pattern (V-SDMLBP), and the deep features come from a 3-D convolutional neural network (3D-CNN). The two types of features are then fused, and a classifier is employed to diagnose HCC or FNH. Several classifiers achieved excellent performance, demonstrating the superiority of the fused features. In addition, compared with general CNNs, the proposed fused features have better interpretability.
152
Cheng X, Wen H, You H, Hua L, Xiaohua W, Qiuting C, Jiabao L. Recognition of Peripheral Lung Cancer and Focal Pneumonia on Chest Computed Tomography Images Based on Convolutional Neural Network. Technol Cancer Res Treat 2022; 21:15330338221085375. [PMID: 35293240 PMCID: PMC8935416 DOI: 10.1177/15330338221085375]
Abstract
Introduction: Chest computed tomography (CT) is important for the early screening of lung diseases and for clinical diagnosis, particularly during the COVID-19 pandemic. We propose a method for classifying peripheral lung cancer and focal pneumonia on chest CT images and evaluate 5 window settings to study their effect on the results of artificial intelligence processing. Methods: CT images were retrospectively collected from 357 patients with peripheral lung cancer presenting as a solitary solid nodule or with focal pneumonia presenting as a solitary consolidation. We segmented and aligned the lung parenchyma using morphological methods and cropped the region of lung parenchyma with its minimum 3D bounding box. Using these cropped 3D volumes of all cases, we designed a 3D neural network to classify them into 2 categories. We also compared the classification results of 3 physicians with different experience levels on the same dataset. Results: We conducted experiments using 5 window settings. After cropping and alignment based on an automatic preprocessing procedure, our neural network achieved an average classification accuracy of 91.596% under 5-fold cross-validation in the full window, with an area under the curve (AUC) of 0.946. The classification accuracy and AUC were 90.48% and 0.957 for the junior physician, 94.96% and 0.989 for the intermediate physician, and 96.92% and 0.980 for the senior physician, respectively. After removing the error predictions, the accuracy improved significantly, reaching 98.79% in the self-defined window2. Conclusion: In separating peripheral lung cancer from focal pneumonia in chest CT data, the proposed neural network achieved an accuracy competitive with that of a junior physician. In a data ablation study, the proposed 3D CNN achieved slightly higher accuracy than the senior physician on the same subset. The self-defined window2 was the best for data training and evaluation.
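The 5-fold cross-validated accuracy reported in this entry (and in several others below) follows a standard protocol that is easy to sketch. The Python snippet below is purely illustrative: the fold construction, seed, and toy threshold classifier are invented for demonstration and are not drawn from the cited paper.

```python
import random

def five_fold_indices(n, k=5, seed=0):
    """Shuffle sample indices reproducibly and split them into k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(samples, labels, train_fn, k=5):
    """Mean held-out accuracy over k folds.

    train_fn(X, y) must return a callable that predicts a label
    for a single sample.
    """
    folds = five_fold_indices(len(samples), k)
    accs = []
    for held_out in folds:
        # Train on every fold except the held-out one.
        train = [j for fold in folds if fold is not held_out for j in fold]
        model = train_fn([samples[j] for j in train],
                         [labels[j] for j in train])
        hits = sum(model(samples[j]) == labels[j] for j in held_out)
        accs.append(hits / len(held_out))
    return sum(accs) / len(accs)

# Toy usage: a fixed-threshold "classifier" on scalar samples.
samples = [i / 10 for i in range(10)]
labels = [s > 0.5 for s in samples]
mean_acc = cross_validate(samples, labels,
                          lambda X, y: (lambda x: x > 0.5))
```

Averaging the held-out accuracy over all folds, as above, is what produces figures such as the 91.596% quoted in this entry.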
Affiliation(s)
- Xiaoyue Cheng
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- He Wen
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Hao You
- Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Li Hua
- Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Wu Xiaohua
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Cao Qiuting
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Liu Jiabao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
153
Suganyadevi S, Seethalakshmi V, Balasamy K. A review on deep learning in medical image analysis. Int J Multimed Inf Retr 2022; 11:19-38. [PMID: 34513553 PMCID: PMC8417661 DOI: 10.1007/s13735-021-00218-1]
Abstract
Ongoing improvements in AI, particularly in deep learning techniques, are helping to identify, classify, and quantify patterns in clinical images. Deep learning is the fastest-growing field in artificial intelligence and has recently been applied effectively in numerous areas, including medicine. A brief outline is given of studies organized by region of application: neuro, retinal, pulmonary, digital pathology, breast, cardiac, bone, abdominal, and musculoskeletal. For information exploration, knowledge deployment, and knowledge-based prediction, deep learning networks can be successfully applied to big data. This paper presents fundamental information and state-of-the-art approaches with deep learning in the field of medical image processing and analysis. Its primary goals are to present research on medical image processing and to define and implement the key guidelines identified and addressed.
Affiliation(s)
- S. Suganyadevi
- Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- V. Seethalakshmi
- Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- K. Balasamy
- Department of IT, Dr. Mahalingam College of Engineering and Technology, Coimbatore, India
154
Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. Inform Med Unlocked 2022. [DOI: 10.1016/j.imu.2022.100911]
155
Model architecture and tile size selection for convolutional neural network training for non-small cell lung cancer detection on whole slide images. Inform Med Unlocked 2022. [DOI: 10.1016/j.imu.2022.100850]
156
Avetisian M, Burenko I, Egorov K, Kokh V, Nesterov A, Nikolaev A, Ponomarchuk A, Sokolova E, Tuzhilin A, Umerenkov D. CoRSAI: A System for Robust Interpretation of CT Scans of COVID-19 Patients Using Deep Learning. ACM Trans Manag Inf Syst 2021. [DOI: 10.1145/3467471]
Abstract
Analysis of chest CT scans can be used in detecting parts of lungs that are affected by infectious diseases such as COVID-19. Determining the volume of lungs affected by lesions is essential for formulating treatment recommendations and prioritizing patients by severity of the disease. In this article we adopted an approach based on using an ensemble of deep convolutional neural networks for segmentation of slices of lung CT scans. Using our models, we are able to segment the lesions, evaluate patients’ dynamics, estimate relative volume of lungs affected by lesions, and evaluate the lung damage stage. Our models were trained on data from different medical centers. We compared predictions of our models with those of six experienced radiologists, and our segmentation model outperformed most of them. On the task of classification of disease severity, our model outperformed all the radiologists.
Affiliation(s)
- Aleksandr Nikolaev
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies, Russia
- Alex Tuzhilin
- Sberbank AI Laboratory and New York University, New York, USA
157
Kumar A, Dhara AK, Thakur SB, Sadhu A, Nandi D. Special Convolutional Neural Network for Identification and Positioning of Interstitial Lung Disease Patterns in Computed Tomography Images. Pattern Recognit Image Anal 2021. [PMCID: PMC8711684 DOI: 10.1134/s1054661821040027]
Abstract
In this paper, automated detection of interstitial lung disease patterns in high-resolution computed tomography images is achieved by developing a faster region-based convolutional network detector with GoogLeNet as the backbone. GoogLeNet is simplified by removing a few inception modules. The proposed framework detects several interstitial lung disease patterns without lung field segmentation, covering the five most prevalent patterns: fibrosis, emphysema, consolidation, micronodules and ground-glass opacity, as well as normal tissue. Five-fold cross-validation was used to avoid bias and reduce over-fitting. The framework's performance, measured in terms of F-score on the publicly available MedGIFT database, outperforms state-of-the-art techniques. Detection is performed at the slice level and could be used for screening and differential diagnosis of interstitial lung disease patterns on high-resolution computed tomography images.
Affiliation(s)
- Abhishek Kumar
- School of Computer and Information Sciences, University of Hyderabad, 500046 Hyderabad, India
- Ashis Kumar Dhara
- Electrical Engineering, National Institute of Technology, 713209 Durgapur, India
- Sumitra Basu Thakur
- Department of Chest and Respiratory Care Medicine, Medical College, 700073 Kolkata, India
- Anup Sadhu
- EKO Diagnostic, Medical College, 700073 Kolkata, India
- Debashis Nandi
- Computer Science and Engineering, National Institute of Technology, 713209 Durgapur, India
158
Zhao J, Wang H, Zhang Y, Wang R, Liu Q, Li J, Li X, Huang H, Zhang J, Zeng Z, Zhang J, Yi Z, Zeng F. Deep learning radiomics model related with genomics phenotypes for lymph node metastasis prediction in colorectal cancer. Radiother Oncol 2021; 167:195-202. [PMID: 34968471 DOI: 10.1016/j.radonc.2021.12.031]
Abstract
BACKGROUND AND PURPOSE Preoperative lymph node (LN) status is important for the treatment of colorectal cancer (CRC). Here, we established and validated a deep learning (DPL) model for predicting lymph node metastasis (LNM) in CRC. MATERIALS AND METHODS A total of 423 CRC patients were divided into cohort 1 (training set, n = 238; testing set, n = 101) and cohort 2 (validation set, n = 84). Tumour tissues from 84 of these patients were collected for RNA sequencing. DPL features were extracted from enhanced venous-phase computed tomography of CRC using an autoencoder, and a DPL model was constructed with the least absolute shrinkage and selection operator (LASSO) algorithm. Carcinoembryonic antigen and carbohydrate antigen 19-9 were incorporated into the DPL model to construct a combined model. Model performance was assessed by receiver operating characteristic curves, calibration curves and decision curves. Correlations between the selected DPL features and genes were analysed by Spearman correlation, and the genes correlated with DPL features were used for transcriptomic analysis. RESULTS The DPL model, integrating 20 DPL features, showed good discrimination in predicting LNM, with areas under the curve (AUCs) of 0.79, 0.73 and 0.70 in the training, testing and validation sets, respectively. The combined model performed better, with AUCs of 0.81, 0.77 and 0.73 in the three sets, respectively. Decision curve analysis confirmed the clinical application value of the DPL model and the combined model. Furthermore, catabolic processes and immune-related pathways were identified and related to the selected DPL features. CONCLUSION This study presents a DPL model and a combined model for LNM prediction and explores the potential genomic phenotypes related to DPL features. The model could potentially be utilized to facilitate individualized prediction of LNM in CRC.
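The AUCs quoted in this and other entries can be computed directly from predicted scores via the rank-sum (Mann-Whitney) formulation of the ROC area. Below is a minimal stdlib Python sketch; the example labels and scores are invented for illustration and are unrelated to the cited study's data.

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen positive outscores a
    randomly chosen negative, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented example: two positives, two negatives.
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

An AUC of 0.79, as in the training set above, therefore means a random LNM-positive case outscores a random LNM-negative case 79% of the time.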
Affiliation(s)
- Jiaojiao Zhao
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, Sichuan, China
- Han Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Yin Zhang
- Oncology Department, Dazhou Central Hospital, Dazhou, Sichuan, China
- Rui Wang
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, Sichuan, China
- Qin Liu
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, Sichuan, China
- Jie Li
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, Sichuan, China
- Xue Li
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, Sichuan, China
- Hanyu Huang
- Department of Clinical Medicine, North Sichuan Medical College, Nanchong, Sichuan, China
- Jie Zhang
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, Sichuan, China
- Zhaoping Zeng
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, Sichuan, China
- Jun Zhang
- General Surgery, Dazhou Central Hospital, Dazhou, Sichuan, China
- Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Fanxin Zeng
- Department of Clinical Research Center, Dazhou Central Hospital, Dazhou, Sichuan, China
159
Bu R, Xiang W, Cao S. COVID-19 Interpretable Diagnosis Algorithm Based on a Small Number of Chest X-Ray Samples. J Shanghai Jiaotong Univ (Sci) 2021; 27:81-89. [PMID: 34975264 PMCID: PMC8710817 DOI: 10.1007/s12204-021-2393-2]
Abstract
Early in the research on COVID-19, medical diagnosis based on an individual's chest X-ray (CXR) was difficult to achieve because CXR data from infected individuals were scarce. Meanwhile, the combination of artificial intelligence and medical diagnosis has advanced and become popular. To address these difficulties, interpretability analysis of an AI model was used to explore the pathological characteristics of CXR samples infected with COVID-19 and to assist medical diagnosis. The dataset was expanded by data augmentation to avoid overfitting. Transfer learning was used to test different pre-trained models, and unique output layers were designed to complete model training with few samples. In this study, the outputs of four pre-trained models were compared across three different output layers, and the results after data augmentation were compared with those on the original dataset. The control variable method was used to conduct 24 independent test groups. Finally, 99.23% accuracy and a 98% recall rate were obtained, and visual results of the CXR interpretability analysis were displayed. The network of the COVID-19 interpretable diagnosis algorithm is highly generalizable and lightweight, so it can be quickly applied to other urgent tasks with insufficient experimental data. At the same time, interpretability analysis brings new possibilities for medical diagnosis.
Affiliation(s)
- Ran Bu
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission (Southwest Minzu University), Chengdu, 610041 China
- Wei Xiang
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission (Southwest Minzu University), Chengdu, 610041 China
- Shitong Cao
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission (Southwest Minzu University), Chengdu, 610041 China
160
Rezaeijo SM, Ghorvei M, Abedi-Firouzjah R, Mojtahedi H, Entezari Zarch H. Detecting COVID-19 in chest images based on deep transfer learning and machine learning algorithms. Egypt J Radiol Nucl Med 2021. [PMCID: PMC8193170 DOI: 10.1186/s43055-021-00524-y]
Abstract
Background
This study aimed to propose an automatic prediction of COVID-19 disease using chest CT images based on deep transfer learning models and machine learning (ML) algorithms.
Results
The dataset consisted of 5480 samples in two classes: 2740 chest CT images of patients with confirmed COVID-19 and 2740 images of suspected cases. The DenseNet201 model obtained the highest training accuracy, 100%. In combining pre-trained models with ML algorithms, the DenseNet201 model with the KNN algorithm gave the best performance, with an accuracy of 100%. The t-SNE map created from the DenseNet201 features showed no points clustered with the wrong class.
Conclusions
The mentioned models can be used in remote places, in low- and middle-income countries, and in laboratories with limited equipment and resources to overcome a shortage of radiologists.
161
Adaptive Localizing Region-Based Level Set for Segmentation of Maxillary Sinus Based on Convolutional Neural Networks. Comput Intell Neurosci 2021; 2021:4824613. [PMID: 34804142 PMCID: PMC8601823 DOI: 10.1155/2021/4824613]
Abstract
In this paper, we propose a novel method, an adaptive localizing region-based level set using a convolutional neural network, for improving the performance of maxillary sinus segmentation. A healthy sinus without lesions inside is easy for conventional algorithms; in practice, however, most cases are filled with lesions of great heterogeneity, which leads to lower accuracy. We therefore provide a strategy to keep the active contour from being trapped in a non-target area. First, features of the lesion and maxillary sinus are learned using a convolutional neural network (CNN) with two convolutional and three fully connected layers. In addition, the outputs of the CNN are used to estimate the probability that the zero level set is located close to a lesion. Finally, the method estimates stable points on the contour by an iterative process: if a point is located in a lesion, it is given a speed compensation based on the probability value from the CNN, helping it escape from the local minimum; if not, the point preserves its current status until convergence. The capabilities of our method have been demonstrated on a dataset of 200 CT images with possible lesions. To illustrate its strength, we evaluated it against the state-of-the-art methods FLS and CRF-FCN. For all cases, our method, as assessed by the Dice similarity coefficient, performed significantly better than the currently available methods, with average Dice improvements of 0.25 over FLS and 0.12 over CRF-FCN.
162
Zhao Q, He Y, Wu Y, Huang D, Wang Y, Sun C, Ju J, Wang J, Mahr JJL. Vocal cord lesions classification based on deep convolutional neural network and transfer learning. Med Phys 2021; 49:432-442. [PMID: 34813114 DOI: 10.1002/mp.15371]
Abstract
PURPOSE Laryngoscopy, the most common diagnostic method for vocal cord lesions (VCLs), is based mainly on the subjective visual inspection of otolaryngologists. This study aimed to establish a highly objective computer-aided VCL diagnosis system based on a deep convolutional neural network (DCNN) and transfer learning. METHODS To classify VCLs, our method combined a DCNN backbone with transfer learning, fine-tuned specifically on a laryngoscopy image dataset. A laryngoscopy image database was collected to train the proposed system, and its diagnostic performance was compared with other DCNN-based models. Analyses of F1 scores and receiver operating characteristic curves were conducted to evaluate performance. RESULTS Beyond existing VCL diagnosis methods, the proposed system achieved an overall accuracy of 80.23%, an F1 score of 0.7836, and an area under the curve (AUC) of 0.9557 for four fine-grained classes of VCLs: normal, polyp, keratinization, and carcinoma. It also demonstrated robust capacity for separating urgent (keratinization, carcinoma) from non-urgent (normal, polyp) cases, with an overall accuracy of 0.939, a sensitivity of 0.887, a specificity of 0.993, and an AUC of 0.9828. The proposed method also outperformed clinicians in classifying normal, polyp, and carcinoma cases at an extremely low time cost. CONCLUSION The VCL diagnosis system succeeded in using a DCNN to distinguish the most common VCLs from normal cases, holding practical potential for improving overall diagnostic efficacy in VCL examinations. It could be appropriately integrated into the conventional workflow of VCL laryngoscopy as a highly objective auxiliary method.
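The accuracy, sensitivity, and specificity figures reported above all derive from the same binary confusion matrix. As an illustrative aside (not the authors' code), they can be computed with stdlib Python; the example labels and predictions below are invented:

```python
def binary_metrics(labels, preds):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from paired ground-truth labels and predictions."""
    tp = sum(y == 1 and p == 1 for y, p in zip(labels, preds))
    tn = sum(y == 0 and p == 0 for y, p in zip(labels, preds))
    fp = sum(y == 0 and p == 1 for y, p in zip(labels, preds))
    fn = sum(y == 1 and p == 0 for y, p in zip(labels, preds))
    return {
        "accuracy": (tp + tn) / len(labels),
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    }

# Invented example: 3 urgent (1) and 2 non-urgent (0) cases.
m = binary_metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

Read against the entry above, a sensitivity of 0.887 with specificity of 0.993 means the system rarely misses an urgent lesion and almost never flags a non-urgent one.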
Affiliation(s)
- Qian Zhao
- Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Yuqing He
- Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Yanda Wu
- Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Dongyan Huang
- National Clinical Research Center for Otolaryngologic Diseases, College of Otolaryngology-Head and Neck Surgery, Chinese PLA General Hospital, Beijing, China
- Yang Wang
- National Clinical Research Center for Otolaryngologic Diseases, College of Otolaryngology-Head and Neck Surgery, Chinese PLA General Hospital, Beijing, China
- Cai Sun
- National Clinical Research Center for Otolaryngologic Diseases, College of Otolaryngology-Head and Neck Surgery, Chinese PLA General Hospital, Beijing, China
- Jun Ju
- National Clinical Research Center for Otolaryngologic Diseases, College of Otolaryngology-Head and Neck Surgery, Chinese PLA General Hospital, Beijing, China
- Jiasen Wang
- National Clinical Research Center for Otolaryngologic Diseases, College of Otolaryngology-Head and Neck Surgery, Chinese PLA General Hospital, Beijing, China
163
Aliboni L, Dias OM, Baldi BG, Sawamura MVY, Chate RC, Carvalho CRR, de Albuquerque ALP, Aliverti A, Pennati F. A Convolutional Neural Network Approach to Quantify Lung Disease Progression in Patients with Fibrotic Hypersensitivity Pneumonitis (HP). Acad Radiol 2021; 29:e149-e156. [PMID: 34794883 DOI: 10.1016/j.acra.2021.10.005]
Abstract
Rationale and Objectives: To evaluate associations between longitudinal changes in quantitative CT parameters and spirometry in patients with fibrotic hypersensitivity pneumonitis (HP). Materials and Methods: Serial CT images and spirometric data were retrospectively collected in a group of 25 fibrotic HP patients. Quantitative CT analysis included histogram parameters (median, interquartile range, skewness, and kurtosis) and a pretrained convolutional neural network (CNN)-based textural analysis, aimed at quantifying the extent of consolidation (C), fibrosis (F), ground-glass opacity (GGO), low attenuation areas (LAA) and healthy tissue (H). Results: At baseline, FVC was 61 (44-70) %pred. The median follow-up period was 1.4 (0.8-3.2) years, with 3 (2-4) visits per patient. Over the study, 8 patients (32%) showed an FVC decline of more than 5%, a significant worsening of all histogram parameters (p≤0.015) and an increased extent of fibrosis via CNN (p=0.038). On histogram analysis, decreased skewness and kurtosis were the parameters most strongly associated with worsened FVC (r2=0.63 and r2=0.54, respectively; p<0.001). On CNN classification, increased extents of fibrosis and consolidation were the measures most strongly correlated with FVC decline (r2=0.54 and r2=0.44, p<0.001). Conclusion: CT histogram and CNN measurements provide sensitive measures of functional change in fibrotic HP patients over time. Increased fibrosis was associated with FVC decline, providing an index of disease progression. CNN analysis may help improve fibrotic HP follow-up as a sensitive tool for progressive interstitial changes, potentially contributing to clinical decisions for individualized disease management.
164
Liu L, Feng X, Li H, Cheng Li S, Qian Q, Wang Y. Deep learning model reveals potential risk genes for ADHD, especially Ephrin receptor gene EPHA5. Brief Bioinform 2021; 22:bbab207. [PMID: 34109382 PMCID: PMC8575025 DOI: 10.1093/bib/bbab207]
Abstract
Attention deficit hyperactivity disorder (ADHD) is a common neurodevelopmental disorder. Although genome-wide association studies (GWAS) identify risk ADHD-associated variants and genes with significant P-values, they may neglect the combined effect of multiple variants with insignificant P-values. Here, we proposed a convolutional neural network (CNN) to classify 1033 individuals diagnosed with ADHD from 950 healthy controls according to their genomic data. The model takes the 764 single nucleotide polymorphism (SNP) loci with P-values ≤ 1×10^-3 as inputs, and achieved an accuracy of 0.9018, an AUC of 0.9570, a sensitivity of 0.8980 and a specificity of 0.9055. By incorporating saliency analysis of the deep learning network, a total of 96 candidate genes were found, of which 14 have been reported in previous ADHD-related studies. Furthermore, joint Gene Ontology enrichment and expression Quantitative Trait Loci analysis identified a potential risk gene for ADHD, EPHA5, with the variant rs4860671. Overall, our CNN deep learning model exhibited high accuracy for ADHD classification and demonstrated that it can capture the combined effect of variants with insignificant P-values, where GWAS fails. To the best of our knowledge, our model is the first deep learning method for the classification of ADHD from SNP data.
Affiliation(s)
- Lu Liu
- Peking University Sixth Hospital/Institute of Mental Health, National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital) & the Key Laboratory of Mental Health, Ministry of Health (Peking University), 100191, Beijing, China
- Xikang Feng
- School of Software, Northwestern Polytechnical University, Xi’an, 710072, Shaanxi, China
- Haimei Li
- Peking University Sixth Hospital/Institute of Mental Health, National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital) & the Key Laboratory of Mental Health, Ministry of Health (Peking University), 100191, Beijing, China
- Shuai Cheng Li
- Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong, China
- Qiujin Qian
- Peking University Sixth Hospital/Institute of Mental Health, National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital) & the Key Laboratory of Mental Health, Ministry of Health (Peking University), 100191, Beijing, China
- Yufeng Wang
- Peking University Sixth Hospital/Institute of Mental Health, National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital) & the Key Laboratory of Mental Health, Ministry of Health (Peking University), 100191, Beijing, China
165
Wong A, Lu J, Dorfman A, McInnis P, Famouri M, Manary D, Lee JRH, Lynch M. Fibrosis-Net: A Tailored Deep Convolutional Neural Network Design for Prediction of Pulmonary Fibrosis Progression From Chest CT Images. Front Artif Intell 2021; 4:764047. [PMID: 34805974 PMCID: PMC8596329 DOI: 10.3389/frai.2021.764047]
Abstract
Pulmonary fibrosis is a devastating chronic lung disease that causes irreparable lung tissue scarring and damage, resulting in progressive loss in lung capacity and has no known cure. A critical step in the treatment and management of pulmonary fibrosis is the assessment of lung function decline, with computed tomography (CT) imaging being a particularly effective method for determining the extent of lung damage caused by pulmonary fibrosis. Motivated by this, we introduce Fibrosis-Net, a deep convolutional neural network design tailored for the prediction of pulmonary fibrosis progression from chest CT images. More specifically, machine-driven design exploration was leveraged to determine a strong architectural design for CT lung analysis, upon which we build a customized network design tailored for predicting forced vital capacity (FVC) based on a patient's CT scan, initial spirometry measurement, and clinical metadata. Finally, we leverage an explainability-driven performance validation strategy to study the decision-making behavior of Fibrosis-Net as to verify that predictions are based on relevant visual indicators in CT images. Experiments using a patient cohort from the OSIC Pulmonary Fibrosis Progression Challenge showed that the proposed Fibrosis-Net is able to achieve a significantly higher modified Laplace Log Likelihood score than the winning solutions on the challenge. Furthermore, explainability-driven performance validation demonstrated that the proposed Fibrosis-Net exhibits correct decision-making behavior by leveraging clinically-relevant visual indicators in CT images when making predictions on pulmonary fibrosis progress. Fibrosis-Net is able to achieve a significantly higher modified Laplace Log Likelihood score than the winning solutions on the OSIC Pulmonary Fibrosis Progression Challenge, and has been shown to exhibit correct decision-making behavior when making predictions. 
Fibrosis-Net is available to the general public in an open-source and open access manner as part of the OpenMedAI initiative. While Fibrosis-Net is not yet a production-ready clinical assessment solution, we hope that its release will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon it.
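For context, the modified Laplace Log Likelihood used to score FVC predictions can be sketched as below. This is an illustrative implementation of the public OSIC challenge metric (with its 70 mL confidence floor and 1000 mL error cap), not code from Fibrosis-Net itself, and the sample FVC values are made up:

```python
import math

def laplace_log_likelihood(fvc_true, fvc_pred, sigma):
    """Modified Laplace Log Likelihood (OSIC challenge metric).

    sigma is the model's stated confidence; it is clipped below at 70 mL
    and the absolute error is clipped above at 1000 mL so that no single
    scan dominates the score. Higher (less negative) is better.
    """
    sigma_c = max(sigma, 70.0)
    delta = min(abs(fvc_true - fvc_pred), 1000.0)
    return -math.sqrt(2.0) * delta / sigma_c - math.log(math.sqrt(2.0) * sigma_c)

# A perfect prediction with the minimum allowed uncertainty gives the
# best attainable score; a 500 mL error with the same sigma scores worse.
best = laplace_log_likelihood(2800, 2800, 70)
worse = laplace_log_likelihood(2800, 2300, 70)
```

A per-patient score is the mean of this quantity over all scored visits, which is why overconfident (small sigma) but wrong predictions are penalized heavily.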
Collapse
Affiliation(s)
- Alexander Wong
- Vision and Image Processing Research Group, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- DarwinAI Corp., Waterloo, ON, Canada
- Jack Lu
- DarwinAI Corp., Waterloo, ON, Canada
Collapse
|
166
|
Pennati F, Aliboni L, Antoniazza A, Beretta D, Dias O, Baldi BG, Sawamura M, Chate RC, De Carvalho CRR, Albuquerque A, Aliverti A. Texture-based classification of lung disease patterns in chronic hypersensitivity pneumonitis and comparison to clinical outcomes. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3427-3430. [PMID: 34891976 DOI: 10.1109/embc46164.2021.9630247] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Computer-aided detection algorithms applied to CT lung imaging have the potential to objectively quantify pulmonary pathology. We aim to develop an automatic classification method based on textural features that can classify healthy and pathological patterns on CT lung images and quantify the extent of each disease pattern in a group of patients with chronic hypersensitivity pneumonitis (cHP), in comparison to pulmonary function tests (PFTs). Twenty-seven cHP patients were scanned via high-resolution CT (HRCT) at full inspiration. Regions of interest (ROIs) were extracted and labeled as normal (NOR), ground glass opacity (GGO), reticulation (RET), consolidation (C), honeycombing (HB) and air trapping (AT). For each ROI, statistical, morphological and fractal parameters were computed. For automatic classification, we compared two classification methods (Bayesian and Support Vector Machine) and three ROI sizes. The classifier was then applied to the overall CT images, and the extent of each class was calculated and compared to PFTs. The best classification accuracy, 92.1±2.7%, was found for the Bayesian classifier and the 16x16 ROI size. The extent of GGO, HB and NOR significantly correlated with forced vital capacity (FVC), and the extent of NOR with carbon monoxide diffusing capacity (DLCO). Clinical Relevance: Texture analysis can differentiate and objectively quantify pathological classes in the lung parenchyma and may represent a quantitative diagnostic tool in cHP.
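As a toy illustration of the statistical branch of such texture descriptors, the sketch below computes first-order statistics (mean, variance, skewness, kurtosis) for one flattened 16x16 ROI. The feature set and the simulated attenuation values are illustrative assumptions; the paper additionally uses morphological and fractal parameters and a Bayesian/SVM classifier not reproduced here:

```python
import math

def roi_statistics(roi):
    """First-order statistical texture features for one ROI, given as a
    flat list of attenuation values. Illustrative stand-ins for the
    statistical descriptors used in texture-based lung classification."""
    n = len(roi)
    mean = sum(roi) / n
    var = sum((v - mean) ** 2 for v in roi) / n
    std = math.sqrt(var)
    skew = (sum((v - mean) ** 3 for v in roi) / n) / std ** 3 if std else 0.0
    kurt = (sum((v - mean) ** 4 for v in roi) / n) / var ** 2 if var else 0.0
    return {"mean": mean, "variance": var, "skewness": skew, "kurtosis": kurt}

# A simulated 16x16 ROI of HU-like values, flattened row by row.
roi = [-800 + (i % 16) * 5 for i in range(256)]
feats = roi_statistics(roi)
```

Each labeled ROI would yield one such feature vector, which the classifier then maps to a tissue class (NOR, GGO, RET, C, HB or AT).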
Collapse
|
167
|
Blais MA, Akhloufi MA. Deep Learning and Binary Relevance Classification of Multiple Diseases using Chest X-Ray Images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2794-2797. [PMID: 34891829 DOI: 10.1109/embc46164.2021.9629846] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Disease detection using chest X-ray (CXR) images is one of the most popular radiology methods to diagnose diseases through a visual inspection of abnormal symptoms in the lung region. A wide variety of diseases such as pneumonia, heart failure and lung cancer can be detected using CXRs. Although CXRs can show the symptoms of a variety of diseases, detecting and manually classifying those diseases can be difficult and time-consuming, adding to clinicians' workload. Research shows that nearly 90% of mistakes made in lung cancer diagnosis involve chest radiography. A variety of algorithms and computer-assisted diagnosis (CAD) tools have been proposed to assist radiologists in the interpretation of medical images and reduce diagnostic errors. In this work, we propose a deep learning approach to screen multiple diseases using more than 220,000 images from the CheXpert dataset. The proposed binary relevance approach using Deep Convolutional Neural Networks (CNNs) achieves high performance and outperforms past published work in this area. Clinical relevance: This application can be used to support physicians and speed up diagnostic work. The proposed CAD can increase confidence in the diagnosis or suggest a second opinion, and can also be used in emergency situations when a radiologist is not immediately available.
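The binary relevance strategy mentioned above decomposes a multi-label problem into one independent binary classifier per disease label. A minimal sketch, assuming a toy nearest-class-mean base classifier in place of the paper's deep CNNs:

```python
class MeanThreshold:
    """Toy binary classifier: assign each 1-D sample to whichever class
    mean (positive or negative) it is nearer to. Stands in for a CNN."""
    def fit(self, X, y):
        pos = [x[0] for x, t in zip(X, y) if t]
        neg = [x[0] for x, t in zip(X, y) if not t]
        self.pos_mean = sum(pos) / len(pos)
        self.neg_mean = sum(neg) / len(neg)
        return self

    def predict(self, X):
        return [int(abs(x[0] - self.pos_mean) < abs(x[0] - self.neg_mean))
                for x in X]

class BinaryRelevance:
    """One independent binary classifier per disease label; any object
    with fit(X, y) / predict(X) can serve as the base classifier."""
    def __init__(self, make_clf, n_labels):
        self.clfs = [make_clf() for _ in range(n_labels)]

    def fit(self, X, Y):
        # Y holds multi-hot label vectors; column j trains classifier j.
        for j, clf in enumerate(self.clfs):
            clf.fit(X, [row[j] for row in Y])
        return self

    def predict(self, X):
        cols = [clf.predict(X) for clf in self.clfs]
        return [list(labels) for labels in zip(*cols)]

# Two mutually exclusive toy labels learned from 1-D features.
X = [[0.1], [0.2], [0.8], [0.9]]
Y = [[0, 1], [0, 1], [1, 0], [1, 0]]
model = BinaryRelevance(MeanThreshold, n_labels=2).fit(X, Y)
preds = model.predict([[0.05], [0.95]])
```

The appeal of binary relevance is exactly this independence: each label can be trained, tuned and thresholded separately, at the cost of ignoring label correlations.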
Collapse
|
168
|
Hoang-Thi TN, Vakalopoulou M, Christodoulidis S, Paragios N, Revel MP, Chassagnon G. Deep learning for lung disease segmentation on CT: Which reconstruction kernel should be used? Diagn Interv Imaging 2021; 102:691-695. [PMID: 34686464 DOI: 10.1016/j.diii.2021.10.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Revised: 09/30/2021] [Accepted: 10/01/2021] [Indexed: 12/30/2022]
Abstract
PURPOSE The purpose of this study was to determine whether a single reconstruction kernel or both high and low frequency kernels should be used for training deep learning models for the segmentation of diffuse lung disease on chest computed tomography (CT). MATERIALS AND METHODS Two annotated datasets of COVID-19 pneumonia (323,960 slices) and interstitial lung disease (ILD) (4,284 slices) were used. Annotated CT images were used to train a U-Net architecture to segment disease. All CT slices were reconstructed using both a lung kernel (LK) and a mediastinal kernel (MK). Three different trainings, resulting in three different models, were compared for each disease: training on LK images only, MK images only, or LK+MK images. Dice similarity coefficients (DSC) were compared using the Wilcoxon signed-rank test. RESULTS Models trained only on LK images performed better on LK images than on MK images (median DSC = 0.62 [interquartile range (IQR): 0.54, 0.69] vs. 0.60 [IQR: 0.50, 0.70], P < 0.001 for COVID-19 and median DSC = 0.62 [IQR: 0.56, 0.69] vs. 0.50 [IQR: 0.43, 0.57], P < 0.001 for ILD). Similarly, models trained only on MK images performed better on MK images (median DSC = 0.62 [IQR: 0.53, 0.68] vs. 0.54 [IQR: 0.47, 0.63], P < 0.001 for COVID-19 and 0.69 [IQR: 0.61, 0.73] vs. 0.63 [IQR: 0.53, 0.70], P < 0.001 for ILD). Models trained on both kernels performed better than, or similarly to, those trained on only one kernel. For COVID-19, median DSC was 0.67 (IQR: 0.59, 0.73) when applied to LK images and 0.67 (IQR: 0.60, 0.74) when applied to MK images (P < 0.001 for both). For ILD, median DSC was 0.69 (IQR: 0.63, 0.73) when applied to LK images (P = 0.006) and 0.68 (IQR: 0.62, 0.72) when applied to MK images (P > 0.99). CONCLUSION Reconstruction kernels impact the performance of deep learning-based models for lung disease segmentation. Training on both LK and MK images improves performance.
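The Dice similarity coefficient compared above can be sketched as follows; the binary masks are made-up examples, and the per-slice DSC values from two models would then feed a paired test such as the Wilcoxon signed-rank test (e.g. via scipy.stats.wilcoxon):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks given as
    flat 0/1 lists: DSC = 2|A ∩ B| / (|A| + |B|). 1.0 is a perfect
    overlap; an empty pair is conventionally scored 1.0 here."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

# Toy 6-voxel masks: 2 of 3 lesion voxels recovered, 1 false positive.
truth = [1, 1, 1, 0, 0, 0]
pred  = [1, 1, 0, 1, 0, 0]
score = dice(pred, truth)
```

Because each slice yields one DSC per model, the comparison between kernels is naturally paired, which is why a signed-rank test rather than an unpaired test is appropriate.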
Collapse
Affiliation(s)
- Trieu-Nghi Hoang-Thi
- Université de Paris, Faculté de Médecine, 75006 Paris, France; Department of Radiology, Hôpital Cochin, AP-HP.centre, 75014 Paris, France
- Maria Vakalopoulou
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, 3 91190 Gif-sur-Yvette, France
- Stergios Christodoulidis
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, 3 91190 Gif-sur-Yvette, France
- Nikos Paragios
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, 3 91190 Gif-sur-Yvette, France; TheraPanacea, 75014 Paris, France
- Marie-Pierre Revel
- Université de Paris, Faculté de Médecine, 75006 Paris, France; Department of Radiology, Hôpital Cochin, AP-HP.centre, 75014 Paris, France
- Guillaume Chassagnon
- Université de Paris, Faculté de Médecine, 75006 Paris, France; Department of Radiology, Hôpital Cochin, AP-HP.centre, 75014 Paris, France.
Collapse
|
169
|
Tschuchnig ME, Zillner D, Romanelli P, Hercher D, Heimel P, Oostingh GJ, Couillard-Després S, Gadermayr M. Quantification of anomalies in rats' spinal cords using autoencoders. Comput Biol Med 2021; 138:104939. [PMID: 34656872 DOI: 10.1016/j.compbiomed.2021.104939] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Revised: 10/05/2021] [Accepted: 10/09/2021] [Indexed: 10/20/2022]
Abstract
Computed tomography (CT) scans and magnetic resonance imaging (MRI) of spines are state-of-the-art for the evaluation of spinal cord lesions. This paper analyses micro-CT scans of rat spinal cords with the aim of quantifying lesion progression through the aggregation of anomaly-based scores. Since reliable labelling in spinal cords is only reasonable for the healthy class in the form of untreated spines, semi-supervised deviation-based anomaly detection algorithms are identified as powerful approaches. The main contribution of this paper is a large evaluation of different autoencoders and variational autoencoders for aggregated lesion quantification, and a resulting spinal cord lesion quantification method that generates highly correlating quantifications. The conducted experiments showed that several models were able to generate 3D lesion quantifications of the data. These quantifications correlated with the weakly labelled ground truth, with one model reaching an average correlation of 0.83. We also introduced an area-based model, which correlated with a mean of 0.84. The possibility of the complementary use of the autoencoder-based method and the area feature is also discussed. In addition to improving medical diagnostics, we anticipate that features built on these quantifications will be useful for further applications such as clustering into different lesions.
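The deviation-based scoring idea can be illustrated with a degenerate "autoencoder" that reconstructs every patch as the mean healthy patch; the paper's real models are (variational) autoencoders trained only on healthy spines, but the anomaly-scoring and aggregation logic is the same. All data below are made up:

```python
class MeanReconstructor:
    """Degenerate stand-in for an autoencoder fit on healthy data only:
    it 'reconstructs' every patch as the mean healthy patch. Patches the
    model cannot reconstruct well receive high anomaly scores."""
    def fit(self, healthy_patches):
        n = len(healthy_patches)
        dim = len(healthy_patches[0])
        self.mean = [sum(p[i] for p in healthy_patches) / n
                     for i in range(dim)]
        return self

    def anomaly_score(self, patch):
        # Per-patch mean squared reconstruction error.
        return sum((v - m) ** 2
                   for v, m in zip(patch, self.mean)) / len(patch)

healthy = [[0.0, 0.1, 0.0], [0.1, 0.0, 0.1]]
model = MeanReconstructor().fit(healthy)
normal_score = model.anomaly_score([0.05, 0.05, 0.05])
lesion_score = model.anomaly_score([1.0, 1.0, 1.0])
# Summing or averaging per-patch scores over a whole volume yields the
# aggregated lesion quantification described above.
```

The key property, preserved even in this toy version, is that only healthy examples are needed for training, matching the semi-supervised setting of the paper.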
Collapse
Affiliation(s)
- Dominic Zillner
- Salzburg University of Applied Sciences, Urstein Süd 1, Puch, 5412, Salzburg, Austria
- Pasquale Romanelli
- Institute of Experimental Neuroregeneration, Spinal Cord Injury and Tissue Regeneration Center Salzburg, Strubergasse 21, Salzburg, 5020, Salzburg, Austria; Austrian Cluster for Tissue Regeneration, Donaueschingenstr 13, Vienna, 1200, Vienna, Austria
- David Hercher
- Austrian Cluster for Tissue Regeneration, Donaueschingenstr 13, Vienna, 1200, Vienna, Austria
- Patrick Heimel
- Austrian Cluster for Tissue Regeneration, Donaueschingenstr 13, Vienna, 1200, Vienna, Austria; Core Facility Hard Tissue and Biomaterial Research, Karl Donath Laboratory, University Clinic of Dentistry, Medical University Vienna, Spitalgasse 23, Wien, 1090, Wien, Austria
- Gertie J Oostingh
- Salzburg University of Applied Sciences, Urstein Süd 1, Puch, 5412, Salzburg, Austria
- Sébastien Couillard-Després
- Institute of Experimental Neuroregeneration, Spinal Cord Injury and Tissue Regeneration Center Salzburg, Strubergasse 21, Salzburg, 5020, Salzburg, Austria; Austrian Cluster for Tissue Regeneration, Donaueschingenstr 13, Vienna, 1200, Vienna, Austria
- Michael Gadermayr
- Salzburg University of Applied Sciences, Urstein Süd 1, Puch, 5412, Salzburg, Austria
Collapse
|
170
|
Saadi SB, Ranjbarzadeh R, Ozeir kazemi, Amirabadi A, Ghoushchi SJ, Kazemi O, Azadikhah S, Bendechache M. Osteolysis: A Literature Review of Basic Science and Potential Computer-Based Image Processing Detection Methods. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:4196241. [PMID: 34646317 PMCID: PMC8505126 DOI: 10.1155/2021/4196241] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Revised: 07/30/2021] [Accepted: 09/14/2021] [Indexed: 12/22/2022]
Abstract
Osteolysis is one of the most prominent reasons for revision surgery in total joint arthroplasty. This biological phenomenon is induced by wear particles and corrosion products that stimulate an inflammatory biological response in the surrounding tissues. The eventual result of osteolysis is the activation of macrophages, leading to bone resorption and prosthesis failure. Various factors are involved in the initiation of osteolysis, ranging from biological issues, design, material specifications, and model of the prosthesis to the health condition of the patient. Nevertheless, the factors leading to osteolysis are sometimes preventable. Changes in implant design and polyethylene manufacturing are striving to improve overall wear. Osteolysis is clinically asymptomatic and can be diagnosed and analyzed during follow-up sessions through various imaging modalities and methods, such as serial radiography, CT scan, MRI, and image processing-based methods, especially with the use of artificial neural network algorithms. Deep learning algorithms with a variety of neural network structures such as CNN, U-Net, and Seg-UNet have proved to be efficient for medical image processing, specifically in the field of orthopedics for the detection and segmentation of tumors. These deep learning algorithms can effectively detect and analyze osteolytic lesions well in advance during follow-up sessions so that proper treatment can be administered before a critical point is reached. Osteolysis can be treated surgically or nonsurgically with medications; however, revision surgery is the only solution for progressive osteolysis. In this literature review, the underlying causes, mechanisms, and treatments of osteolysis are discussed, with the main focus on the possible computer-based methods and algorithms that can be effectively employed for the detection of osteolysis.
Collapse
Affiliation(s)
- Soroush Baseri Saadi
- Department of Electrical Engineering, Islamic Azad University, South Tehran Branch, Tehran, Iran
- Ramin Ranjbarzadeh
- Department of Telecommunications Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran
- Ozeir kazemi
- PPD - Global Pharmaceutical Contract Research Organization, Central Lab, Zaventem, Belgium
- Amir Amirabadi
- Department of Electrical Engineering, Islamic Azad University, South Tehran Branch, Tehran, Iran
- Sonya Azadikhah
- R.E.D. Laboratories N.V./S.A., Z.1 Researchpark, Zellik, Belgium
- Malika Bendechache
- School of Computing, Faculty of Engineering and Computing, Dublin City University, Dublin, Ireland
Collapse
|
171
|
Liu C, Xie H, Zhang Y. Self-Supervised Attention Mechanism for Pediatric Bone Age Assessment With Efficient Weak Annotation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2685-2697. [PMID: 33351757 DOI: 10.1109/tmi.2020.3046672] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Pediatric bone age assessment (BAA) is a common clinical practice for investigating endocrine, genetic, and growth disorders in children. Different specific bone parts are extracted as anatomical Regions of Interest (RoIs) during this task, since their morphological characteristics are important biological indicators of skeletal maturity. Following this clinical prior knowledge, recently developed deep learning methods address BAA with an RoI-based attention mechanism, which segments or detects the discriminative RoIs for meticulous analysis. Great strides have been made; however, these methods strictly require large and precise RoI annotations, which limits their real-world clinical value. To overcome the severe requirements on RoI annotations, in this paper we propose a novel self-supervised learning mechanism to effectively discover the informative RoIs without extra knowledge or precise annotation; image-level weak annotation is all we require. Our model, termed PEAR-Net for Part Extracting and Age Recognition Network, consists of one Part Extracting (PE) agent for discriminative RoI discovery and one Age Recognition (AR) agent for age assessment. Without precise supervision, the PE agent is designed to discover and extract RoIs fully automatically. The proposed RoIs are then fed into the AR agent for feature learning and age recognition. Furthermore, we utilize the self-consistency of RoIs to optimize the PE agent to understand part relations and select the most useful RoIs. With this self-supervised design, the PE and AR agents can reinforce each other mutually. To the best of our knowledge, this is the first end-to-end bone age assessment method that can discover RoIs automatically with only image-level annotation. We conduct extensive experiments on the public RSNA 2017 dataset and achieve state-of-the-art performance with an MAE of 3.99 months. The project is available at http://imcc.ustc.edu.cn/project/ssambaa/.
Collapse
|
172
|
Gupta V, Vasudev M, Doegar A, Sambyal N. Breast cancer detection from histopathology images using modified residual neural networks. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.08.011] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
173
|
Zhang YN, XIA KR, LI CY, WEI BL, Zhang B. Review of Breast Cancer Pathological Image Processing. BIOMED RESEARCH INTERNATIONAL 2021; 2021:1994764. [PMID: 34595234 PMCID: PMC8478535 DOI: 10.1155/2021/1994764] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/12/2021] [Accepted: 08/24/2021] [Indexed: 11/17/2022]
Abstract
Breast cancer is one of the most common malignancies. Pathological image processing of the breast has become an important means for the early diagnosis of breast cancer. Using medical image processing to assist doctors in detecting potential breast cancer as early as possible has always been a hot topic in the field of medical image diagnosis. In this paper, a breast cancer recognition method based on image processing is systematically expounded from four aspects: breast cancer detection, image segmentation, image registration, and image fusion. The achievements and application scope of supervised learning, unsupervised learning, deep learning, CNNs, and so on in breast cancer examination are expounded. The prospects of unsupervised learning and transfer learning for breast cancer diagnosis are considered. Finally, the privacy protection of breast cancer patients is discussed.
Collapse
Affiliation(s)
- Ya-nan Zhang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
- HRG International Institute (Hefei) of Research and Innovation, Hefei 230000, China
- Ke-rui XIA
- HRG International Institute (Hefei) of Research and Innovation, Hefei 230000, China
- Chang-yi LI
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
- Ben-li WEI
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
- Bing Zhang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
Collapse
|
174
|
Guo Y, Song Q, Jiang M, Guo Y, Xu P, Zhang Y, Fu CC, Fang Q, Zeng M, Yao X. Histological Subtypes Classification of Lung Cancers on CT Images Using 3D Deep Learning and Radiomics. Acad Radiol 2021; 28:e258-e266. [PMID: 32622740 DOI: 10.1016/j.acra.2020.06.010] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2020] [Revised: 06/05/2020] [Accepted: 06/05/2020] [Indexed: 12/24/2022]
Abstract
RATIONALE AND OBJECTIVES Histological subtypes of lung cancers are critical for clinical treatment decisions. In this study, we attempt to use 3D deep learning and radiomics methods to automatically distinguish lung adenocarcinomas (ADC), squamous cell carcinomas (SCC), and small cell lung cancers (SCLC) on computed tomography images, and then compare their performance. MATERIALS AND METHODS 920 patients (mean age 61.2 years; range 17-87; 340 female and 580 male) with lung cancer, including 554 patients with ADC, 175 patients with lung SCC and 191 patients with SCLC, were included in this retrospective study from January 2013 to August 2018. Histopathologic analysis was available for every patient. The classification models based on 3D deep learning (named the ProNet) and radiomics (named com_radNet) were designed to classify lung cancers into the three types mentioned above according to histopathologic results. The training, validation, and testing cohorts comprised 70%, 15%, and 15% of the whole dataset, respectively. RESULTS The ProNet model used to classify the three types of lung cancers achieved F1-scores of 90.0%, 72.4%, and 83.7% for ADC, SCC, and SCLC respectively, and a weighted average F1-score of 73.2%. For com_radNet, the F1-scores were 83.1%, 75.4%, and 85.1% for ADC, SCC, and SCLC, and the weighted average F1-score was 72.2%. The areas under the receiver operating characteristic curve of the ProNet model and com_radNet were 0.840 and 0.789, and the accuracies were 71.6% and 74.7%, respectively. CONCLUSION The ProNet and com_radNet models we developed can achieve high performance in distinguishing ADC, SCC, and SCLC and may be promising approaches for the non-invasive prediction of histological subtypes of lung cancers.
Collapse
|
175
|
Liu M, Li H, Li Y, Jin L, Huang Z. From WASD to BLS with application to pattern classification. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107455] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
176
|
Prokop-Piotrkowska M, Marszałek-Dziuba K, Moszczyńska E, Szalecki M, Jurkiewicz E. Traditional and New Methods of Bone Age Assessment-An Overview. J Clin Res Pediatr Endocrinol 2021; 13:251-262. [PMID: 33099993 PMCID: PMC8388057 DOI: 10.4274/jcrpe.galenos.2020.2020.0091] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/01/2022] Open
Abstract
Bone age is one of the biological indicators of maturity used in clinical practice and is a very important parameter in a child’s assessment, especially in paediatric endocrinology. The most widely used method of bone age assessment is a hand and wrist radiograph analysed with the Greulich-Pyle or Tanner-Whitehouse atlases, even though about 60 years have passed since they were published. Owing to progress in the area of Computer-Aided Diagnosis and the application of artificial intelligence in medicine, numerous programs for automatic bone age assessment have lately been created. Most of them have been verified in clinical studies against traditional methods, showing good precision while eliminating inter- and intra-rater variability and significantly reducing the time of assessment. Additionally, methods are available that assess bone age without X-ray exposure, using modalities such as ultrasound or magnetic resonance imaging.
Collapse
Affiliation(s)
- Monika Prokop-Piotrkowska
- Children’s Memorial Health Institute, Department of Endocrinology and Diabetology, Warsaw, Poland
- Kamila Marszałek-Dziuba
- Children’s Memorial Health Institute, Department of Endocrinology and Diabetology, Warsaw, Poland
- Elżbieta Moszczyńska
- Children’s Memorial Health Institute, Department of Endocrinology and Diabetology, Warsaw, Poland
- Elżbieta Jurkiewicz
- Children’s Memorial Health Institute, Department of Diagnostic Imaging, Warsaw, Poland
Collapse
|
177
|
Gammatonegram based triple classification of lung sounds using deep convolutional neural network with transfer learning. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102947] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
|
178
|
Li M, Lian F, Wang C, Guo S. Dual adversarial convolutional networks with multilevel cues for pancreatic segmentation. Phys Med Biol 2021; 66. [PMID: 34271564 DOI: 10.1088/1361-6560/ac155f] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Accepted: 07/16/2021] [Indexed: 12/11/2022]
Abstract
Accurate organ segmentation is a relatively challenging subject in medical imaging, especially for the pancreas, whose morphological characteristics are subtle but variable. In this paper, a novel dual adversarial convolutional network with multilevel cues (DACN-MC) is proposed to segment the pancreas in computed tomography (CT) images. DACN-MC first involves a duplex adversarial network built around a conventional model for biomedical image segmentation, which ensures the veracity of the predicted probability volumes and ultimately enhances the quality of the obtained maps. Specifically, one of the adversarial networks helps the predicted maps resemble the ground truths by importing extra guidance into the original loss functions. The other adversarial network further judges whether the obtained maps are well segmented and improves the image quality once again. Then, a multilevel cue collection module (MCCM) is introduced to gather useful details for pancreas segmentation. In other words, we collect several sets of material formed by features from different layers and pick out the group with optimal performance for use in the ultimate algorithm. The experimental results show that the dual adversarial convolutional networks together with multilevel cue collection help our proposed algorithm achieve competitive segmentation performance, based on several evaluation indexes.
Collapse
Affiliation(s)
- Meiyu Li
- College of Electronic Science and Engineering, Jilin University, Changchun 130012, People's Republic of China
- Fenghui Lian
- School of Aviation Operations and Services, Air Force Aviation University, Changchun 130000, People's Republic of China
- Chunyu Wang
- School of Aviation Operations and Services, Air Force Aviation University, Changchun 130000, People's Republic of China
- Shuxu Guo
- College of Electronic Science and Engineering, Jilin University, Changchun 130012, People's Republic of China
Collapse
|
179
|
Nawaz M, Mehmood Z, Nazir T, Naqvi RA, Rehman A, Iqbal M, Saba T. Skin cancer detection from dermoscopic images using deep learning and fuzzy k-means clustering. Microsc Res Tech 2021; 85:339-351. [PMID: 34448519 DOI: 10.1002/jemt.23908] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2021] [Revised: 07/09/2021] [Accepted: 07/25/2021] [Indexed: 11/09/2022]
Abstract
Melanoma skin cancer is the most life-threatening and fatal disease among the family of skin cancer diseases. Modern technological developments and research methodologies have made it possible to detect and identify this kind of skin cancer more effectively; however, the automated localization and segmentation of skin lesions at earlier stages is still a challenging task due to the low contrast between melanoma moles and skin and a high level of color similarity between melanoma-affected and non-affected areas. In this paper, we present a fully automated method for segmenting skin melanoma at its earliest stage by employing a deep-learning-based approach, namely faster region-based convolutional neural networks (RCNN) along with fuzzy k-means clustering (FKM). Several clinical images are utilized to test the presented method so that it may help the dermatologist in diagnosing this life-threatening disease at its earliest stage. The presented method first preprocesses the dataset images to remove noise and illumination problems and enhance the visual information before applying the faster-RCNN to obtain a feature vector of fixed length. After that, FKM is employed to segment the melanoma-affected portion of skin with variable size and boundaries. The performance of the presented method is evaluated on three standard datasets, namely ISBI-2016, ISIC-2017, and PH2, and the results show that it outperforms the state-of-the-art approaches, attaining an average accuracy of 95.40, 93.1, and 95.6% on the ISIC-2016, ISIC-2017, and PH2 datasets, respectively, demonstrating its robustness in skin lesion recognition and segmentation.
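The fuzzy k-means (FKM) step can be sketched in one dimension as below. This is the standard fuzzy c-means update with fuzzifier m=2 applied to toy intensity values, not the paper's actual pipeline, where it operates on lesion regions localized by the faster-RCNN:

```python
def fkm(points, centers, m=2.0, iters=25):
    """Minimal 1-D fuzzy k-means (fuzzy c-means). Each point receives a
    soft membership in every cluster rather than a hard assignment."""
    for _ in range(iters):
        # Membership update: u_ic = 1 / sum_k (d_ic / d_ik)^(2/(m-1)).
        U = []
        for x in points:
            d = [abs(x - c) or 1e-12 for c in centers]  # avoid /0
            U.append([1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(len(centers)))
                      for i in range(len(centers))])
        # Centre update: membership-weighted mean with weights u^m.
        centers = [sum(U[j][i] ** m * points[j] for j in range(len(points)))
                   / sum(U[j][i] ** m for j in range(len(points)))
                   for i in range(len(centers))]
    return centers, U

# Toy pixel intensities forming two groups (lesion-like vs. skin-like).
pts = [0.0, 0.1, 0.2, 0.8, 0.9, 1.0]
centers, U = fkm(pts, centers=[0.3, 0.7])
```

The soft memberships are what make FKM attractive for lesions with fuzzy, variable boundaries: a pixel near the lesion border can belong partially to both clusters instead of being forced into one.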
Collapse
Affiliation(s)
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Tahira Nazir
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul, South Korea
- Amjad Rehman
- Artificial Intelligence & Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Munwar Iqbal
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Tanzila Saba
- Artificial Intelligence & Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
Collapse
|
180
|
Chen X, Chen DG, Zhao Z, Zhan J, Ji C, Chen J. Artificial image objects for classification of schizophrenia with GWAS-selected SNVs and convolutional neural network. PATTERNS (NEW YORK, N.Y.) 2021; 2:100303. [PMID: 34430925 PMCID: PMC8369164 DOI: 10.1016/j.patter.2021.100303] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Revised: 03/17/2021] [Accepted: 06/07/2021] [Indexed: 01/08/2023]
Abstract
In this article, we propose a new approach to analyze large genomics data. We considered individual genetic variants as pixels in an image and transformed a collection of variants into an artificial image object (AIO), which could be classified as a regular image by CNN algorithms. Using schizophrenia as a case study, we demonstrate the principles and their applications with 3 datasets. With 4,096 SNVs, the CNN models achieved an accuracy of 0.678 ± 0.007 and an AUC of 0.738 ± 0.008 for the diagnosis phenotype. With 44,100 SNVs, the models achieved class-specific accuracies of 0.806 ± 0.032 and 0.820 ± 0.049, and AUCs of 0.930 ± 0.017 and 0.867 ± 0.040 for the bottom and top classes stratified by the patient's polygenic risk scores. These results suggest that, once transformed to images, large genomics data can be analyzed effectively with image classification algorithms.
- Introduce a technique to transform genomics data into AIOs
- Apply CNN algorithms to classify genomics-derived AIOs
- Showcase the technique with GWAS-selected SNVs to classify schizophrenia diagnosis
Genome-wide association studies have discovered many genetic variants that contribute to human diseases. However, it remains a challenge to effectively utilize these variants to facilitate early and accurate diagnosis and treatment. In this report, we propose a new approach that transforms genetic data into AIOs so that they can be classified by advanced artificial intelligence and machine learning algorithms. Using schizophrenia as a case study, we demonstrate that genetic variants can be transformed into AIOs and that the AIOs can be classified by CNN algorithms consistently. Our approach can be applied to other omics data and combine them to jointly model disease risks and treatment responses.
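As a sketch of the AIO idea described above, a vector of SNV genotypes can be packed into a 2-D pixel grid that standard image classifiers accept. The function name, the genotype coding (0/1/2 minor-allele counts), the intensity scaling, and the row-major layout are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

def snvs_to_aio(genotypes, side):
    """Pack a 1-D vector of SNV genotypes (coded 0/1/2) into a square
    'artificial image object' by scaling to 0-255 pixel intensities.
    Row-major fill and linear scaling are assumed for illustration;
    unused trailing pixels stay zero."""
    img = np.zeros(side * side, dtype=np.uint8)
    g = np.asarray(genotypes, dtype=float)
    img[: g.size] = np.round(g / 2.0 * 255).astype(np.uint8)  # 0,1,2 -> 0,128,255
    return img.reshape(side, side)
```

The resulting 2-D array can be fed to any off-the-shelf CNN image classifier unchanged.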
Affiliation(s)
- Xiangning Chen: 410 AI, LLC, 10 Plummer Ct, Germantown, MD 20876, USA; A3.AI INC., 10530 Stevenson Road, Stevenson, MD 21153, USA
- Daniel G Chen: 410 AI, LLC, 10 Plummer Ct, Germantown, MD 20876, USA
- Zhongming Zhao: Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, USA; Department of Psychiatry and Behavioral Sciences, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Justin Zhan: Department of Computer Science and Computer Engineering, University of Arkansas, Fayetteville, AR 72701, USA
- Changrong Ji: A3.AI INC., 10530 Stevenson Road, Stevenson, MD 21153, USA
- Jingchun Chen: Nevada Institute of Personalized Medicine, University of Nevada Las Vegas, Las Vegas, NV 89154, USA
|
181
|
Azuri I, Rosenhek-Goldian I, Regev-Rudzki N, Fantner G, Cohen SR. The role of convolutional neural networks in scanning probe microscopy: a review. Beilstein Journal of Nanotechnology 2021; 12:878-901. [PMID: 34476169] [PMCID: PMC8372315] [DOI: 10.3762/bjnano.12.66]
Abstract
Progress in computing capabilities has enhanced science in many ways. In recent years, various branches of machine learning have been the key facilitators in forging new paths, ranging from categorizing big data to instrument control, and from materials design to image analysis. Deep learning has the ability to identify abstract characteristics embedded within a data set, subsequently using that association to categorize, identify, and isolate subsets of the data. Scanning probe microscopy measures multimodal surface properties, combining morphology with electronic, mechanical, and other characteristics. In this review, we focus on a subset of deep learning algorithms, namely convolutional neural networks, and how they are transforming the acquisition and analysis of scanning probe data.
Affiliation(s)
- Ido Azuri: Weizmann Institute of Science, Department of Life Sciences Core Facilities, Rehovot 76100, Israel
- Irit Rosenhek-Goldian: Weizmann Institute of Science, Department of Chemical Research Support, Rehovot 76100, Israel
- Neta Regev-Rudzki: Weizmann Institute of Science, Department of Biomolecular Sciences, Rehovot 76100, Israel
- Georg Fantner: École Polytechnique Fédérale de Lausanne, Laboratory for Bio- and Nano-Instrumentation, CH1015 Lausanne, Switzerland
- Sidney R Cohen: Weizmann Institute of Science, Department of Chemical Research Support, Rehovot 76100, Israel
|
182
|
Automatic Classification of A-Lines in Intravascular OCT Images Using Deep Learning and Estimation of Attenuation Coefficients. Applied Sciences (Basel) 2021. [DOI: 10.3390/app11167412]
Abstract
Intravascular Optical Coherence Tomography (IVOCT) images provide important insight into every aspect of atherosclerosis. Specifically, the extent of plaque and its type, which are indicative of the patient's condition, are better assessed by OCT images than by other in vivo modalities. The large amount of imaging data per patient requires automatic methods for rapid results. An effective step towards automatic plaque detection and characterization is classification of axial lines (A-lines) into normal and various plaque types. In this work, a novel automatic method for A-line classification is proposed. The method employs convolutional neural networks (CNNs) for classification at its core and comprises two pre-processing steps, arterial wall segmentation and an OCT-specific (depth-resolved) transformation, plus a post-processing step based on majority voting over the classifications. The key step is the OCT-specific transformation, which is based on the estimation of the attenuation coefficient in every pixel of the OCT image. The dataset used for training and testing consisted of 183 images from 33 patients. In these images, four different plaque types were delineated. The method was evaluated by cross-validation. The mean values of accuracy, sensitivity and specificity were 74.73%, 87.78%, and 61.45%, respectively, when classifying A-lines into plaque and normal. When plaque A-lines were classified into fibrolipidic and fibrocalcific, the overall accuracy was 83.47% for A-lines of OCT-specific transformed images and 74.94% for A-lines of the original images. This large improvement in accuracy indicates the advantage of using attenuation coefficients when characterizing plaque types. The proposed automatic deep-learning pipeline constitutes a positive contribution to the accurate classification of A-lines in intravascular OCT images.
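The depth-resolved transformation is the pipeline's key step. A minimal sketch of a commonly used per-pixel attenuation estimator (the single-scattering model of Vermeer et al.) is shown below; the exact estimator and boundary handling used in the paper may differ.

```python
import numpy as np

def attenuation_coefficients(a_line, pixel_size_mm):
    """Depth-resolved attenuation estimate for one OCT A-line using the
    widely used single-scattering model: mu[i] ~ I[i] / (2*dz*sum(I[i+1:])).
    The last pixels, whose deeper-signal sum vanishes, are guarded with a
    tiny epsilon; this boundary choice is an assumption."""
    I = np.asarray(a_line, dtype=float)
    tail = np.cumsum(I[::-1])[::-1] - I          # intensity summed over pixels deeper than i
    tail[tail <= 0] = np.finfo(float).eps        # guard the deepest pixels
    return I / (2.0 * pixel_size_mm * tail)
```

Applying this to every A-line yields the "OCT-specific transformed image" that the classifier then consumes in place of raw intensities.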
|
183
|
Adu K, Yu Y, Cai J, Dela Tattrah V, Adu Ansere J, Tashi N. S-CCCapsule: Pneumonia detection in chest X-ray images using skip-connected convolutions and capsule neural network. Journal of Intelligent & Fuzzy Systems 2021. [DOI: 10.3233/jifs-202638]
Abstract
The squash function used in the dynamic routing of capsule networks (CapsNets) is less capable of discriminating non-informative capsules, which leads to an abnormal distribution of capsule activation values. In this paper, we propose vertical squash (VSquash), which improves the original squash of activation values in the primary capsule layer so as to shrink non-informative capsules, promote discriminative capsules and avoid high information sensitivity. Furthermore, new CapsNet-based neural networks in which VSquash is applied in the dynamic routing are presented: (i) skip-connected convolutional capsule (S-CCCapsule), (ii) integrated skip-connected convolutional capsules (ISCC) and (iii) ensemble skip-connected convolutional capsules (ESCC). In order to achieve a uniform distribution of the coupling-coefficient probabilities between capsules, we use the sigmoid function rather than the softmax function. Experiments on the Guangzhou Women and Children's Medical Center (GWCMC), Radiological Society of North America (RSNA) and Mendeley CXR pneumonia datasets were performed to validate the effectiveness of the proposed methods. We found that our proposed methods produce better accuracy than other methods based on model evaluation metrics such as the confusion matrix, sensitivity, specificity and area under the curve (AUC). Our method for pneumonia detection performs better than practicing radiologists. It minimizes human error and reduces diagnosis time.
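To make the routing change concrete, the sketch below shows the standard CapsNet squash alongside sigmoid-based coupling coefficients (the paper's substitution for softmax). The VSquash function itself is not reproduced here; only the baseline squash and the sigmoid coupling are shown, with assumed array shapes.

```python
import numpy as np

def squash(s, axis=-1):
    """Standard CapsNet squash: ||s||^2 / (1 + ||s||^2) * s / ||s||.
    VSquash modifies this to suppress non-informative primary capsules;
    its exact form is not reproduced here."""
    n2 = np.sum(np.square(s), axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + 1e-9)

def coupling_sigmoid(logits):
    """Coupling coefficients via an element-wise sigmoid instead of softmax,
    giving a more uniform distribution across capsules as the paper proposes."""
    return 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
```

With softmax, one capsule's coefficient growing forces the others toward zero; the element-wise sigmoid removes that competition, which is the stated motivation for the substitution.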
Affiliation(s)
- Kwabena Adu: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yongbin Yu: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jingye Cai: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- James Adu Ansere: College of Internet of Things Engineering, Hohai University, China
- Nyima Tashi: School of Information Science and Technology, Tibet University, Lhasa, China
|
184
|
Amin J, Sharif M, Gul N, Kadry S, Chakraborty C. Quantum Machine Learning Architecture for COVID-19 Classification Based on Synthetic Data Generation Using Conditional Adversarial Neural Network. Cognit Comput 2021; 14:1677-1688. [PMID: 34394762] [PMCID: PMC8353617] [DOI: 10.1007/s12559-021-09926-6]
Abstract
Background: COVID-19 is a novel virus that affects the upper respiratory tract as well as the lungs. The scale of the global COVID-19 pandemic, its spreading rate, and deaths are increasing regularly. Computed tomography (CT) scans can be used to detect and analyze COVID-19 cases. In CT images/scans, ground-glass opacity (GGO) is found in the early stages of infection, while in later stages there is superimposed pulmonary consolidation. Methods: This research investigates quantum machine learning (QML) and classical machine learning (CML) approaches for the analysis of COVID-19 images. The recent developments in quantum computing have led researchers to explore new ideas and approaches using QML. The proposed approach consists of two phases: in phase I, synthetic CT images are generated through a conditional generative adversarial network (CGAN) to increase the size of the dataset for accurate training and testing. In phase II, the classification of COVID-19/healthy images is performed, for which two models are proposed: CML and QML. Results: The proposed model achieved 0.94 precision (Pn), 0.94 accuracy (Ac), 0.94 recall (Rl), and 0.94 F1-score (Fe) on the POF Hospital dataset, and 0.96 Pn, 0.96 Ac, 0.95 Rl, and 0.96 Fe on the UCSD-AI4H dataset. Conclusion: The proposed method achieved better results than the latest published work in this domain.
Affiliation(s)
- Javaria Amin: Department of Computer Science, University of Wah, 47040, Wah Cantt, Pakistan
- Muhammad Sharif: Department of Computer Science, COMSATS University Islamabad, Wah Campus, 47040, Wah Cantt, Pakistan
- Nadia Gul: MBBS, FCPS Diagnostic Radiology, Consultant Radiologist, POF Hospital, and Associate Professor of Radiology, Wah Medical College, Wah Cantt, Pakistan
- Seifedine Kadry: Faculty of Applied Computing and Technology, Noroff University College, Kristiansand, Norway
|
185
|
Liu Z, Ni S, Yang C, Sun W, Huang D, Su H, Shu J, Qin N. Axillary lymph node metastasis prediction by contrast-enhanced computed tomography images for breast cancer patients based on deep learning. Comput Biol Med 2021; 136:104715. [PMID: 34388460] [DOI: 10.1016/j.compbiomed.2021.104715]
Abstract
When doctors use contrast-enhanced computed tomography (CECT) images to predict the metastasis of axillary lymph nodes (ALN) for breast cancer patients, the prediction performance could be degraded by subjective factors such as experience, psychological factors, and degree of fatigue. This study aims to exploit efficient deep learning schemes to predict the metastasis of ALN automatically via CECT images. A new construction called deformable sampling module (DSM) was meticulously designed as a plug-and-play sampling module in the proposed deformable attention VGG19 (DA-VGG19). A dataset of 800 samples labeled from 800 CECT images of 401 breast cancer patients retrospectively enrolled in the last three years was adopted to train, validate, and test the deep convolutional neural network models. By comparing the accuracy, positive predictive value, negative predictive value, sensitivity and specificity indices, the performance of the proposed model is analyzed in detail. The best-performing DA-VGG19 model achieved an accuracy of 0.9088, which is higher than that of other classification neural networks. As such, the proposed intelligent diagnosis algorithm can provide doctors with daily diagnostic assistance and advice and reduce the workload of doctors. The source code mentioned in this article will be released later.
Affiliation(s)
- Ziyi Liu: Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Sijie Ni: Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Chunmei Yang: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, 646000, China
- Weihao Sun: Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Deqing Huang: Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Hu Su: Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
- Jian Shu: Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, 646000, China
- Na Qin: Institute of Systems Science and Technology, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, 611756, China
|
186
|
Sarvamangala DR, Kulkarni RV. Grading of Knee Osteoarthritis Using Convolutional Neural Networks. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10529-3]
|
187
|
Tsujimoto M, Teramoto A, Dosho M, Tanahashi S, Fukushima A, Ota S, Inui Y, Matsukiyo R, Obama Y, Toyama H. Automated classification of increased uptake regions in bone single-photon emission computed tomography/computed tomography images using three-dimensional deep convolutional neural network. Nucl Med Commun 2021; 42:877-883. [PMID: 33741850] [DOI: 10.1097/mnm.0000000000001409]
Abstract
OBJECTIVE: This study proposes an automated classification of regions of increased uptake in bone single-photon emission computed tomography/computed tomography (SPECT/CT) as benign or malignant, using a three-dimensional deep convolutional neural network (3D-DCNN). METHODS: We examined 100 regions from 35 patients with bone SPECT/CT that had been classified as benign or malignant by other examinations and follow-ups. First, SPECT and CT images were extracted at the same coordinates in a cube with an edge length twice the diameter of the high-uptake region in the SPECT images. Next, we input the extracted images to the DCNN and obtained the probability of benignity and malignancy. Integrating the outputs of the DCNN for the SPECT and CT images provided the overall result. To validate the efficacy of the proposed method, the malignancy of all images was assessed using the leave-one-out cross-validation method, and the overall classification accuracy was evaluated. Furthermore, we compared the analysis results of SPECT/CT, SPECT alone, CT alone, and whole-body planar scintigraphy of the same high-uptake sites. RESULTS: The extracted volumes of interest comprised 50 benign and 50 malignant regions. The overall classification accuracy of SPECT alone and CT alone was 73% and 68%, respectively, while that of whole-body planar analysis of the same sites was 74%. When SPECT/CT images were used, the overall classification accuracy was the highest (80%), with classification accuracies of 82% for malignant and 78% for benign regions. CONCLUSIONS: This study suggests that a DCNN can directly classify benign and malignant regions without explicitly extracting features of SPECT/CT accumulation patterns.
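The integration step, which combines the per-modality DCNN outputs into one SPECT/CT decision, could be as simple as averaging the two probabilities. Since the abstract only states that the outputs were integrated, the averaging rule and the 0.5 threshold below are assumptions.

```python
def fuse_probabilities(p_spect, p_ct):
    """Combine the SPECT-branch and CT-branch malignancy probabilities
    into a single SPECT/CT decision by simple averaging (assumed rule)."""
    p = 0.5 * (p_spect + p_ct)
    return "malignant" if p >= 0.5 else "benign"
```

For example, a confident SPECT branch can outvote an uncertain CT branch, which matches the reported gain of the fused SPECT/CT result over either modality alone.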
Affiliation(s)
- Seiichiro Ota: Department of Radiology, School of Medicine, Fujita Health University, Toyoake, Japan
- Yoshitaka Inui: Department of Radiology, School of Medicine, Fujita Health University, Toyoake, Japan
- Ryo Matsukiyo: Department of Radiology, School of Medicine, Fujita Health University, Toyoake, Japan
- Yuuki Obama: Department of Radiology, School of Medicine, Fujita Health University, Toyoake, Japan
- Hiroshi Toyama: Department of Radiology, School of Medicine, Fujita Health University, Toyoake, Japan
|
188
|
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021; 11:1373. [PMID: 34441307] [PMCID: PMC8393354] [DOI: 10.3390/diagnostics11081373]
Abstract
The increasing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes demands ever more of the doctor's time and attention, which has encouraged the development of deep learning (DL) models as constructive and effective support. DL has experienced exponential growth in recent years, with a major impact on the interpretation of medical images. This has influenced the development, diversification and quality of scientific data, the development of knowledge-construction methods and the improvement of DL models used in medical applications. Most research papers focus on describing, highlighting or classifying a single constituent element of the DL models used in the interpretation of medical images, and do not provide a unified picture of the importance and impact of each constituent on the performance of DL models. The novelty of our paper consists primarily in its unitary approach to the constituent elements of DL models, namely data, the tools used by DL architectures, and specifically constructed DL architecture combinations, highlighting their "key" features for completing tasks in current applications in the interpretation of medical images. The use of "key" characteristics specific to each constituent of DL models and the correct determination of their correlations may be the subject of future research, with the aim of increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania; Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Stefan Iancu: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Maria Hlusneac: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristina Preda: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
|
189
|
Cheng X, Zhang W, Wu M, Jiang N, Guo Z, Leng X, Song J, Jin H, Sun X, Zhang F, Qin J, Yan X, Cai Z, Luo Y, Yang Y, Liu J. A prediction of hematoma expansion in hemorrhagic patients using a novel dual-modal machine learning strategy. Physiol Meas 2021; 42. [PMID: 34198278] [DOI: 10.1088/1361-6579/ac10ab]
Abstract
Objective. Hematoma expansion is closely associated with adverse functional outcomes in patients with intracerebral hemorrhage (ICH). Prediction of hematoma expansion would therefore be of great clinical significance. We therefore attempted to predict hematoma expansion using a dual-modal machine learning (ML) strategy which combines information from non-contrast computed tomography (NCCT) images and multiple clinical variables. Approach. We retrospectively identified 140 ICH patients (57 with hematoma expansion) with 5616 NCCT images of hematoma (2635 with hematoma expansion) and 10 clinical variables. The dual-modal ML strategy consists of two steps. The first step is to derive a mono-modal predictor from a deep convolutional neural network using solely the NCCT images. The second step is to achieve a dual-modal predictor by combining the mono-modal predictor with the 10 clinical variables to predict hematoma growth using a multi-layer perceptron network. Main results. For the mono-modal predictor, the best performance was merely 69.5% in accuracy with solely the NCCT images, whereas the dual-modal predictor boosted the accuracy greatly to 86.5% by incorporating the clinical variables. Significance. To our knowledge, this is the best performance achieved by ML in predicting hematoma expansion. The approach could be potentially useful as a screening tool for high-risk patients with ICH, though further clinical tests would be necessary to demonstrate its performance on a larger cohort of patients.
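The two-step strategy can be sketched as follows: a mono-modal CNN score is computed from the NCCT images, then appended to the 10 clinical variables to form the input of the multi-layer perceptron. The logistic stand-in for the CNN and its random weights are placeholders for illustration only, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=4)  # placeholder weights standing in for the trained CNN

def mono_modal_score(image_features):
    """Step 1: stand-in for the CNN image predictor, mapping NCCT-derived
    features to a probability of hematoma expansion (illustrative only)."""
    return 1.0 / (1.0 + np.exp(-float(np.dot(W, image_features))))

def dual_modal_input(image_features, clinical_vars):
    """Step 2: append the mono-modal score to the 10 clinical variables,
    forming the input vector for the dual-modal MLP predictor."""
    return np.concatenate([[mono_modal_score(image_features)], clinical_vars])
```

The design point is that the image branch is collapsed to a single score before fusion, so the MLP sees an 11-dimensional vector rather than raw images.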
Affiliation(s)
- Xinpeng Cheng: Stroke Center, Department of Neurology, The First Hospital of Jilin University, Changchun, Jilin, 130021, People's Republic of China; Department of Neurology 2, Brain Hospital, Weifang People's Hospital, Weifang, 261021, Shandong, People's Republic of China
- Wei Zhang: Shenzhen Institutes of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, 518000, People's Republic of China
- Menglu Wu: Shenzhen Institutes of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, 518000, People's Republic of China
- Nan Jiang: Stroke Center, Department of Neurology, The First Hospital of Jilin University, Changchun, Jilin, 130021, People's Republic of China
- Zhenni Guo: Clinical Trial and Research Center for Stroke, Department of Neurology, The First Hospital of Jilin University, Changchun, Jilin, 130021, People's Republic of China
- Xinyi Leng: Department of Medicine and Therapeutics, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, 999077, People's Republic of China
- Jianing Song: Shenzhen Institutes of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, 518000, People's Republic of China
- Hang Jin: Stroke Center, Department of Neurology, The First Hospital of Jilin University, Changchun, Jilin, 130021, People's Republic of China
- Xin Sun: Stroke Center, Department of Neurology, The First Hospital of Jilin University, Changchun, Jilin, 130021, People's Republic of China
- Fuliang Zhang: Stroke Center, Department of Neurology, The First Hospital of Jilin University, Changchun, Jilin, 130021, People's Republic of China
- Jing Qin: Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region, 999077, People's Republic of China
- Xiuli Yan: Stroke Center, Department of Neurology, The First Hospital of Jilin University, Changchun, Jilin, 130021, People's Republic of China
- Zhenyu Cai: Department of Radiology, Fuwai Hospital, Chinese Academy of Medical Sciences, Shenzhen, Guangdong, 518000, People's Republic of China
- Ying Luo: Department of Radiology, Fuwai Hospital, Chinese Academy of Medical Sciences, Shenzhen, Guangdong, 518000, People's Republic of China
- Yi Yang: Stroke Center, Department of Neurology, The First Hospital of Jilin University, Changchun, Jilin, 130021, People's Republic of China; Clinical Trial and Research Center for Stroke, Department of Neurology, The First Hospital of Jilin University, Changchun, Jilin, 130021, People's Republic of China
- Jia Liu: Shenzhen Institutes of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, 518000, People's Republic of China; Shenzhen Key Laboratory for Exascale Engineering and Scientific Computing, Shenzhen, 518000, People's Republic of China
|
190
|
Elhage SA, Deerenberg EB, Ayuso SA, Murphy KJ, Shao JM, Kercher KW, Smart NJ, Fischer JP, Augenstein VA, Colavita PD, Heniford BT. Development and Validation of Image-Based Deep Learning Models to Predict Surgical Complexity and Complications in Abdominal Wall Reconstruction. JAMA Surg 2021; 156:933-940. [PMID: 34232255] [DOI: 10.1001/jamasurg.2021.3012]
Abstract
Importance Image-based deep learning models (DLMs) have been used in other disciplines, but this method has yet to be used to predict surgical outcomes. Objective To apply image-based deep learning to predict complexity, defined as need for component separation, and pulmonary and wound complications after abdominal wall reconstruction (AWR). Design, Setting, and Participants This quality improvement study was performed at an 874-bed hospital and tertiary hernia referral center from September 2019 to January 2020. A prospective database was queried for patients with ventral hernias who underwent open AWR by experienced surgeons and had preoperative computed tomography images containing the entire hernia defect. An 8-layer convolutional neural network was generated to analyze image characteristics. Images were batched into training (approximately 80%) or test sets (approximately 20%) to analyze model output. Test sets were blinded from the convolutional neural network until training was completed. For the surgical complexity model, a separate validation set of computed tomography images was evaluated by a blinded panel of 6 expert AWR surgeons and the surgical complexity DLM. Analysis started February 2020. Exposures Image-based DLM. Main Outcomes and Measures The primary outcome was model performance as measured by the area under the receiver operating characteristic (ROC) curve (AUC), calculated for each model; accuracy with accompanying sensitivity and specificity were also calculated. Measures were DLM prediction of surgical complexity, using need for component separation techniques as a surrogate, and prediction of postoperative surgical site infection and pulmonary failure. The DLM for predicting surgical complexity was compared against the prediction of 6 expert AWR surgeons. Results A total of 369 patients and 9303 computed tomography images were used. The mean (SD) age of patients was 57.9 (12.6) years, 232 (62.9%) were female, and 323 (87.5%) were White. The surgical complexity DLM performed well (AUC = 0.744; P < .001) and, when compared with surgeon prediction on the validation set, performed better with an accuracy of 81.3% compared with 65.0% (P < .001). Surgical site infection was predicted successfully with an AUC of 0.898 (P < .001). However, the DLM for predicting pulmonary failure was less effective, with an AUC of 0.545 (P = .03). Conclusions and Relevance An image-based DLM using routine, preoperative computed tomography images was successful in predicting surgical complexity and more accurate than expert surgeon judgment. An additional DLM accurately predicted the development of surgical site infection.
Affiliation(s)
- Sharbel Adib Elhage: Department of Surgery, Franciscus Gasthuis en Vlietland, Rotterdam, the Netherlands
- Sullivan Armando Ayuso: Division of Gastrointestinal and Minimally Invasive Surgery, Department of Surgery, Carolinas Medical Center, Charlotte, North Carolina
- Jenny Meng Shao: Department of Surgery, University of Pennsylvania, Philadelphia
- Kent Williams Kercher: Division of Gastrointestinal and Minimally Invasive Surgery, Department of Surgery, Carolinas Medical Center, Charlotte, North Carolina
- Neil James Smart: Department of Colorectal Surgery, Royal Devon and Exeter NHS Foundation Trust, Royal Devon and Exeter Hospital, Exeter, United Kingdom
- John Patrick Fischer: Division of Plastic Surgery, Department of Surgery, Perelman School of Medicine, Philadelphia, Pennsylvania
- Vedra Abdomerovic Augenstein: Division of Gastrointestinal and Minimally Invasive Surgery, Department of Surgery, Carolinas Medical Center, Charlotte, North Carolina
- Paul Dominick Colavita: Division of Gastrointestinal and Minimally Invasive Surgery, Department of Surgery, Carolinas Medical Center, Charlotte, North Carolina
- B Todd Heniford: Division of Gastrointestinal and Minimally Invasive Surgery, Department of Surgery, Carolinas Medical Center, Charlotte, North Carolina
|
191
|
Lin M, Wynne JF, Zhou B, Wang T, Lei Y, Curran WJ, Liu T, Yang X. Artificial intelligence in tumor subregion analysis based on medical imaging: A review. J Appl Clin Med Phys 2021; 22:10-26. [PMID: 34164913] [PMCID: PMC8292694] [DOI: 10.1002/acm2.13321]
Abstract
Medical imaging is widely used in the diagnosis and treatment of cancer, and artificial intelligence (AI) has achieved tremendous success in medical image analysis. This paper reviews AI-based tumor subregion analysis in medical imaging. We summarize the latest AI-based methods for tumor subregion analysis and their applications. Specifically, we categorize the AI-based methods by training strategy: supervised and unsupervised. A detailed review of each category is presented, highlighting important contributions and achievements. Specific challenges and potential applications of AI in tumor subregion analysis are discussed.
Collapse
Affiliation(s)
- Mingquan Lin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Jacob F. Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Boran Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| |
Collapse
|
192
|
Sharifrazi D, Alizadehsani R, Roshanzamir M, Joloudari JH, Shoeibi A, Jafari M, Hussain S, Sani ZA, Hasanzadeh F, Khozeimeh F, Khosravi A, Nahavandi S, Panahiazar M, Zare A, Islam SMS, Acharya UR. Fusion of convolution neural network, support vector machine and Sobel filter for accurate detection of COVID-19 patients using X-ray images. Biomed Signal Process Control 2021; 68:102622. [PMID: 33846685 PMCID: PMC8026268 DOI: 10.1016/j.bspc.2021.102622] [Citation(s) in RCA: 74] [Impact Index Per Article: 18.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2020] [Revised: 03/22/2021] [Accepted: 04/04/2021] [Indexed: 02/02/2023]
Abstract
The coronavirus disease (COVID-19) is currently the most widespread contagious disease in the world. The main challenge of this disease is primary diagnosis, to prevent secondary infections and person-to-person spread. It is therefore essential to use an automatic diagnosis system alongside clinical procedures for rapid diagnosis of COVID-19. Artificial intelligence techniques using computed tomography (CT) images of the lungs and chest radiography have the potential to achieve high diagnostic performance for COVID-19. In this study, a fusion of a convolutional neural network (CNN), a support vector machine (SVM), and a Sobel filter is proposed to detect COVID-19 from X-ray images. A new X-ray image dataset was collected and passed through a Sobel high-pass filter to obtain the edges of the images. These edge images are then fed to a CNN deep learning model followed by an SVM classifier with a ten-fold cross-validation strategy. The method is designed to learn from limited data. Our results show that the proposed CNN-SVM with Sobel filter (CNN-SVM + Sobel) achieved the highest classification accuracy, sensitivity, and specificity of 99.02%, 100%, and 95.23%, respectively, in automated detection of COVID-19, showing that the Sobel filter can improve CNN performance. Unlike most other studies, this method does not use a pre-trained network. We also validated the developed model on six public databases and obtained the highest performance. Hence, the developed model is ready for clinical application.
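As a concrete illustration of the high-pass preprocessing step this abstract describes (a minimal NumPy sketch, not the authors' code; the helper names are hypothetical):

```python
import numpy as np

# Standard Sobel operators for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """'Valid' 2-D cross-correlation of a grayscale image with a small kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edges(img):
    """Edge-magnitude map: the high-pass image that would be fed to the CNN."""
    gx = conv2d(img, SOBEL_X)
    gy = conv2d(img, SOBEL_Y)
    return np.hypot(gx, gy)
```

On a synthetic step image the response is strong only in the columns straddling the discontinuity and zero in flat regions, which is exactly the edge information the abstract reports passing on to the CNN-SVM stage.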
Collapse
Affiliation(s)
- Danial Sharifrazi
- Department of Computer Engineering, School of Technical and Engineering, Shiraz Branch, Islamic Azad University, Shiraz, Iran
| | - Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovations (IISRI), Deakin University, Geelong, Australia
| | - Mohamad Roshanzamir
- Department of Computer Engineering, Faculty of Engineering, Fasa University, 74617-81189, Fasa, Iran
| | | | - Afshin Shoeibi
- Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- Faculty of Electrical and Computer Engineering, Biomedical Data Acquisition Lab, K. N. Toosi University of Technology, Tehran, Iran
| | - Mahboobeh Jafari
- Electrical and Computer Engineering Faculty, Semnan University, Semnan, Iran
| | - Sadiq Hussain
- System Administrator, Dibrugarh University, Assam, 786004, India
| | - Zahra Alizadeh Sani
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Omid Hospital, Iran University of Medical Sciences, Tehran, Iran
| | | | - Fahime Khozeimeh
- Institute for Intelligent Systems Research and Innovations (IISRI), Deakin University, Geelong, Australia
| | - Abbas Khosravi
- Institute for Intelligent Systems Research and Innovations (IISRI), Deakin University, Geelong, Australia
| | - Saeid Nahavandi
- Institute for Intelligent Systems Research and Innovations (IISRI), Deakin University, Geelong, Australia
| | - Maryam Panahiazar
- Institute for Computational Health Sciences, University of California, San Francisco, USA
| | - Assef Zare
- Faculty of Electrical Engineering, Gonabad Branch, Islamic Azad University, Gonabad, Iran
| | - Sheikh Mohammed Shariful Islam
- Institute for Physical Activity and Nutrition, Deakin University, Melbourne, Australia
- Cardiovascular Division, The George Institute for Global Health, Australia
- Sydney Medical School, University of Sydney, Australia
| | - U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore
- Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore
- Department of Bioinformatics and Medical Engineering, Asia University, Taiwan
| |
Collapse
|
193
|
Tian Q, Wu Y, Ren X, Razmjooy N. A New optimized sequential method for lung tumor diagnosis based on deep learning and converged search and rescue algorithm. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102761] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
194
|
Zhang L, Zhang Y, Wang L, Wang J, Liu Y. Diagnosis of gastric lesions through a deep convolutional neural network. Dig Endosc 2021; 33:788-796. [PMID: 32961597 DOI: 10.1111/den.13844] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/10/2020] [Revised: 08/27/2020] [Accepted: 09/14/2020] [Indexed: 12/13/2022]
Abstract
BACKGROUND AND AIMS A deep convolutional neural network (CNN) was used to achieve fast and accurate artificial intelligence (AI)-assisted diagnosis of early gastric cancer (GC) and other gastric lesions based on endoscopic images. METHODS A CNN-based diagnostic system built on a ResNet34 residual network structure and a DeepLabv3 structure was constructed and trained using 21,217 gastroendoscopic images of five gastric conditions: peptic ulcer (PU), early gastric cancer (EGC) and high-grade intraepithelial neoplasia (HGIN), advanced gastric cancer (AGC), gastric submucosal tumors (SMTs), and normal gastric mucosa without lesions. The trained CNN was evaluated using a test dataset of 1091 images. The accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the CNN were calculated, and the CNN diagnoses were compared with those of 10 endoscopists with over 8 years of experience in endoscopic diagnosis. RESULTS The diagnostic specificity and PPV of the CNN were higher than those of the endoscopists for the EGC and HGIN images (specificity: 91.2% vs. 86.7%, by 4.5%, 95% CI 2.8-7.2%; PPV: 55.4% vs. 41.7%, by 13.7%, 95% CI 11.2-16.8%), and the diagnostic accuracy of the CNN was close to that of the endoscopists for the lesion-free, EGC and HGIN, PU, AGC, and SMT images. The CNN had an image recognition time of 42 s for all the test set images. CONCLUSION The constructed CNN system could be used as a rapid auxiliary diagnostic instrument to detect EGC and HGIN, as well as other gastric lesions, reducing the workload of endoscopists.
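The five summary statistics this abstract reports all follow directly from a binary confusion matrix; a minimal sketch (function name hypothetical):

```python
def binary_diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test metrics from binary confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # recall on the disease-positive class
        "specificity": tn / (tn + fp),  # recall on the disease-negative class
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

For example, `binary_diagnostic_metrics(tp=45, fp=20, tn=80, fn=5)` gives a sensitivity of 0.90 and a specificity of 0.80 (illustrative counts, not the paper's data).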
Collapse
Affiliation(s)
- Liming Zhang
- Department of Gastroenterology, Peking University People's Hospital, Beijing, China
| | - Yang Zhang
- Internet Medical Department of Love Life Insurance Company, Beijing, China
| | - Li Wang
- Department of Gastroenterology, Peking University People's Hospital, Beijing, China
| | - Jiangyuan Wang
- Department of Gastroenterology, Peking University People's Hospital, Beijing, China
| | - Yulan Liu
- Department of Gastroenterology, Peking University People's Hospital, Beijing, China
| |
Collapse
|
195
|
Deep learning based torsional nystagmus detection for dizziness and vertigo diagnosis. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102616] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
196
|
Zheng Y, Wang S, Chen Y, Du HQ. Deep learning with a convolutional neural network model to differentiate renal parenchymal tumors: a preliminary study. Abdom Radiol (NY) 2021; 46:3260-3268. [PMID: 33656574 DOI: 10.1007/s00261-021-02981-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2020] [Revised: 01/22/2021] [Accepted: 02/09/2021] [Indexed: 12/19/2022]
Abstract
PURPOSE With advancements in medical imaging, more renal tumors are detected early, but it remains a challenge for radiologists to accurately distinguish subtypes of renal parenchymal tumors. We aimed to establish a novel deep convolutional neural network (CNN) model and investigate its effect on identifying subtypes of renal parenchymal tumors in T2-weighted fat saturation sequence magnetic resonance (MR) images. METHODS This retrospective study included 199 patients with pathologically confirmed renal parenchymal tumors: 77, 46, 34, and 42 patients with clear cell renal cell carcinoma (ccRCC), chromophobe renal cell carcinoma (chRCC), angiomyolipoma (AML), and papillary renal cell carcinoma (pRCC), respectively. All enrolled patients underwent kidney MR scans at a field strength of 1.5 Tesla (T) or 3.0 T before surgery. We selected T2-weighted fat saturation sequence images of all patients and built a deep learning model to determine the type of renal tumor. A receiver operating characteristic (ROC) curve was plotted to estimate the performance of the CNN model; the accuracy, precision, sensitivity, specificity, F1-score, and area under the curve (AUC) were calculated. One-way analysis of variance and χ2 tests of independent samples were used to analyze the variables. RESULTS The experimental results demonstrated that the model had a 60.4% overall accuracy, a 61.7% average accuracy, and a macro-average AUC of 0.82. The AUCs for ccRCC, chRCC, AML, and pRCC were 0.94, 0.78, 0.80, and 0.76, respectively. CONCLUSION A deep CNN model based on T2-weighted fat saturation sequence MR images was useful for classifying the subtypes of renal parenchymal tumors with a relatively high diagnostic accuracy.
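The macro-average AUC reported above is the unweighted mean of per-class one-vs-rest AUCs. A sketch using the Mann-Whitney rank formulation (function names hypothetical; assumes class labels are the column indices of the probability matrix):

```python
import numpy as np

def binary_auc(y_true, scores):
    """AUC as the probability that a positive outranks a negative (ties count 0.5)."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true], scores[~y_true]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def macro_ovr_auc(labels, prob_matrix):
    """Unweighted mean of one-vs-rest AUCs over the classes present."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    return float(np.mean([binary_auc(labels == c, prob_matrix[:, c]) for c in classes]))
```

With four classes, as in this study, each subtype contributes one one-vs-rest AUC and the macro average weights them equally regardless of class size.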
Collapse
Affiliation(s)
- Yao Zheng
- Department of Diagnostic Imaging, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
| | - Shuai Wang
- School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China
| | - Yan Chen
- Department of Diagnostic Imaging, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China.
| | - Hui-Qian Du
- School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China.
| |
Collapse
|
197
|
Yu H, Yang LT, Zhang Q, Armstrong D, Deen MJ. Convolutional neural networks for medical image analysis: State-of-the-art, comparisons, improvement and perspectives. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.04.157] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
198
|
Montgomery MK, David J, Zhang H, Ram S, Deng S, Premkumar V, Manzuk L, Jiang ZK, Giddabasappa A. Mouse lung automated segmentation tool for quantifying lung tumors after micro-computed tomography. PLoS One 2021; 16:e0252950. [PMID: 34138905 PMCID: PMC8211241 DOI: 10.1371/journal.pone.0252950] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 05/25/2021] [Indexed: 12/14/2022] Open
Abstract
Unlike the majority of cancers, survival for lung cancer has not shown much improvement since the early 1970s and survival rates remain low. Genetically engineered mouse tumor models are of high translational relevance because they can carry the tissue-specific mutations observed in lung cancer patients. Since these tumors cannot be detected and quantified by traditional methods, we use micro-computed tomography (microCT) imaging for longitudinal evaluation and to measure response to therapy. Conventionally, we analyze microCT images of lung cancer via manual segmentation, which is time-consuming and sensitive to intra- and inter-analyst variation. To overcome these limitations, we set out to develop a fully automated alternative, the Mouse Lung Automated Segmentation Tool (MLAST). MLAST locates the thoracic region of interest, then thresholds and categorizes the lung field into three tissue categories: soft tissue, intermediate, and lung. An increase in tumor burden is measured as a decrease in lung volume with a simultaneous increase in soft and intermediate tissue quantities. MLAST segmentation was validated against three methods: manual scoring, manual segmentation, and histology. MLAST was applied in an efficacy trial using a Kras/Lkb1 non-small cell lung cancer model and demonstrated adequate precision and sensitivity in quantifying tumor growth inhibition after drug treatment. Implementation of MLAST has considerably accelerated microCT data analysis, allowing for larger study sizes and mid-study readouts. This study illustrates how automated image analysis tools for large datasets can be used in preclinical imaging to deliver high-throughput, quantitative results.
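The three-way threshold categorization described in this abstract can be sketched as follows; the two Hounsfield-unit cut-offs here are illustrative placeholders, not values from the MLAST tool:

```python
import numpy as np

def tissue_fractions(hu, lung_max=-500.0, soft_min=-100.0):
    """Fraction of voxels in each category: lung (below lung_max), soft tissue
    (above soft_min), and intermediate (between the two cut-offs).
    Cut-off values are illustrative, not the tool's actual thresholds."""
    hu = np.asarray(hu, dtype=float)
    n = hu.size
    lung = np.count_nonzero(hu < lung_max)
    soft = np.count_nonzero(hu > soft_min)
    return {
        "lung": lung / n,
        "soft": soft / n,
        "intermediate": (n - lung - soft) / n,
    }
```

Under this scheme, a rising tumor burden would show as a falling "lung" fraction with rising "soft" and "intermediate" fractions, matching the readout the abstract describes.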
Collapse
Affiliation(s)
| | - John David
- Comparative Medicine, Pfizer Inc., La Jolla, CA, United States of America
| | - Haikuo Zhang
- Oncology Research Unit, Pfizer Inc., La Jolla, CA, United States of America
| | - Sripad Ram
- Drug Safety Research Unit, Pfizer Inc., La Jolla, CA, United States of America
| | - Shibing Deng
- Early Clinical Development, Pfizer Inc., La Jolla, CA, United States of America
| | - Vidya Premkumar
- Comparative Medicine, Pfizer Inc., La Jolla, CA, United States of America
| | - Lisa Manzuk
- Comparative Medicine, Pfizer Inc., La Jolla, CA, United States of America
| | - Ziyue Karen Jiang
- Comparative Medicine, Pfizer Inc., La Jolla, CA, United States of America
| | - Anand Giddabasappa
- Comparative Medicine, Pfizer Inc., La Jolla, CA, United States of America
| |
Collapse
|
199
|
Arumuga Maria Devi T, Mebin Jose VI. Three Stream Network Model for Lung Cancer Classification in the CT Images. OPEN COMPUTER SCIENCE 2021. [DOI: 10.1515/comp-2020-0145] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
Lung cancer is considered one of the deadly diseases that threaten human survival. Identifying lung cancer at an early stage from medical images is challenging because of the ambiguity in the lung regions. This paper proposes a new architecture to detect lung cancer from CT images. The proposed architecture has a three-stream network that extracts manual and automated features from the images. Automated feature extraction and classification are performed by a residual deep neural network and a custom deep neural network, whereas the manual stream uses handcrafted features obtained from high- and low-frequency sub-bands in the frequency domain, classified with a support vector machine. This makes the architecture robust enough to capture all the important features required to classify lung cancer from the input image, so no feature information is missed. Finally, all the obtained prediction scores are combined by weighted fusion. The experimental results show 98.2% classification accuracy, which is relatively higher than that of other existing methods.
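The final weighted fusion of the three streams' prediction scores can be sketched as a convex combination of per-stream class-score vectors (weights and names hypothetical, not the paper's values):

```python
import numpy as np

def fuse_scores(score_lists, weights):
    """Late fusion: weighted average of per-stream class-score vectors,
    followed by argmax to pick the predicted class."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a convex combination
    stacked = np.stack([np.asarray(s, dtype=float) for s in score_lists])
    fused = (weights[:, None] * stacked).sum(axis=0)
    return fused, int(np.argmax(fused))
```

Here each stream (residual network, custom network, SVM) would contribute one score vector, and the weights set how much each stream influences the final decision.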
Collapse
|
200
|
A combined microfluidic deep learning approach for lung cancer cell high throughput screening toward automatic cancer screening applications. Sci Rep 2021; 11:9804. [PMID: 33963232 PMCID: PMC8105370 DOI: 10.1038/s41598-021-89352-8] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2020] [Accepted: 04/26/2021] [Indexed: 02/07/2023] Open
Abstract
Lung cancer is a leading cause of cancer death in both men and women worldwide. The high mortality rate in lung cancer is partly due to late-stage diagnosis as well as the spread of cancer cells to other organs and tissues by metastasis. Automated lung cancer detection and classification of its sub-types from cell images play a crucial role in early-stage cancer prognosis and more individualized therapy. The rapid development of machine learning techniques, especially deep learning algorithms, has attracted much interest in their application to medical image problems. In this study, to develop a reliable Computer-Aided Diagnosis (CAD) system for accurately distinguishing between cancer and healthy cells, we grew popular non-small cell lung cancer lines in a microfluidic chip, stained them with phalloidin, and obtained images using an IX-81 inverted Olympus fluorescence microscope. We designed and tested a deep learning image analysis workflow for classification of lung cancer cell-line images into six classes: five different cancer cell-lines (P-C9, SK-LU-1, H-1975, A-427, and A-549) and a normal cell-line (16-HBE). Our results demonstrate that ResNet18, a residual learning convolutional neural network, is an efficient and promising method for lung cancer cell-line categorization, with a classification accuracy of 98.37% and an F1-score of 97.29%. The proposed workflow is also able to successfully distinguish normal versus cancerous cell-lines, with a remarkable average accuracy of 99.77% and an F1-score of 99.87%. The proposed CAD system completely eliminates the need for extensive user intervention, enabling the processing of large amounts of image data with robust and highly accurate results.
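The residual learning that makes ResNet18 effective here comes down to adding a skip connection around each transformation. A toy fully-connected forward pass (shapes and names illustrative, not the network's actual layers):

```python
import numpy as np

def residual_block(x, w1, w2):
    """Toy residual block: out = relu(x + W2 @ relu(W1 @ x)).
    The skip connection lets the block default to the identity mapping
    when the learned transformation contributes nothing."""
    relu = lambda t: np.maximum(t, 0.0)
    return relu(x + w2 @ relu(w1 @ x))
```

With all weights at zero the block reduces to `relu(x)`, illustrating why residual networks are easy to optimize: each block only has to learn a correction to the identity.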
Collapse
|