51
Pugashetti JV, Khanna D, Kazerooni EA, Oldham J. Clinically Relevant Biomarkers in Connective Tissue Disease-Associated Interstitial Lung Disease. Immunol Allergy Clin North Am 2023; 43:411-433. [PMID: 37055096] [PMCID: PMC10584384] [DOI: 10.1016/j.iac.2023.01.012]
Abstract
Interstitial lung disease (ILD) complicates connective tissue disease (CTD) with variable incidence and is a leading cause of death in these patients. To improve CTD-ILD outcomes, early recognition and management of ILD are critical. Blood-based and radiologic biomarkers that assist in the diagnosis of CTD-ILD have long been studied. Recent studies, including -omic investigations, have also begun to identify biomarkers that may help prognosticate these patients. This review provides an overview of clinically relevant biomarkers in patients with CTD-ILD, highlighting recent advances that assist in the diagnosis and prognostication of CTD-ILD.
Affiliation(s)
- Janelle Vu Pugashetti
- Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, University of Michigan
- Dinesh Khanna
- Scleroderma Program, Division of Rheumatology, Department of Internal Medicine, University of Michigan
- Ella A Kazerooni
- Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, University of Michigan; Division of Cardiothoracic Radiology, Department of Radiology, University of Michigan
- Justin Oldham
- Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, University of Michigan; Department of Epidemiology, University of Michigan
52
Choi H, Sunwoo L, Cho SJ, Baik SH, Bae YJ, Choi BS, Jung C, Kim JH. A Nationwide Web-Based Survey of Neuroradiologists' Perceptions of Artificial Intelligence Software for Neuro-Applications in Korea. Korean J Radiol 2023; 24:454-464. [PMID: 37133213] [PMCID: PMC10157324] [DOI: 10.3348/kjr.2022.0905]
Abstract
OBJECTIVE We aimed to investigate current expectations and clinical adoption of artificial intelligence (AI) software among neuroradiologists in Korea. MATERIALS AND METHODS In April 2022, a 30-item online survey was conducted by neuroradiologists from the Korean Society of Neuroradiology (KSNR) to assess current user experiences, perceptions, attitudes, and future expectations regarding AI for neuro-applications. Respondents with experience in AI software were further investigated in terms of the number and type of software used, period of use, clinical usefulness, and future scope. Results were compared between respondents with and without experience with AI software through multivariable logistic regression and mediation analyses. RESULTS The survey was completed by 73 respondents, accounting for 21.9% (73/334) of the KSNR members; 72.6% (53/73) were familiar with AI and 58.9% (43/73) had used AI software, with approximately 86% (37/43) using 1-3 AI software programs and 51.2% (22/43) having up to one year of experience with AI software. Among AI software types, brain volumetry software was the most common (62.8% [27/43]). Although 52.1% (38/73) assumed that AI is currently useful in practice, 86.3% (63/73) expected it to be useful for clinical practice within 10 years. The main expected benefits were reducing the time spent on repetitive tasks (91.8% [67/73]) and improving reading accuracy and reducing errors (72.6% [53/73]). Those who experienced AI software were more familiar with AI (adjusted odds ratio, 7.1 [95% confidence interval, 1.81-27.81]; P = 0.005). More than half of the respondents with AI software experience (55.8% [24/43]) agreed that AI should be included in training curriculums, while almost all (95.3% [41/43]) believed that radiologists should coordinate to improve its performance. 
CONCLUSION A majority of respondents had experience with AI software and showed a proactive attitude toward adopting AI in clinical practice, suggesting that AI should be incorporated into training and that active participation in AI development should be encouraged.
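The adjusted odds ratio above comes from multivariable logistic regression on the survey responses; as a simpler illustration of the statistic itself, an unadjusted odds ratio with a Wald 95% confidence interval can be computed from a 2×2 table (the counts below are hypothetical, not taken from the survey):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for the 2x2 table [[a, b], [c, d]] with a Wald CI.

    a: exposed cases, b: exposed non-cases,
    c: unexposed cases, d: unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: AI familiarity among users vs. non-users of AI software.
or_, lo, hi = odds_ratio_ci(35, 8, 18, 12)
```

An adjusted odds ratio, as in the paper, would instead come from a fitted multivariable logistic model.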
Affiliation(s)
- Hyunsu Choi
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
- Leonard Sunwoo
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
- Center for Artificial Intelligence in Healthcare, Seoul National University Bundang Hospital, Seongnam, Korea
- Se Jin Cho
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
- Sung Hyun Baik
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
- Yun Jung Bae
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
- Byung Se Choi
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
- Cheolkyu Jung
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
- Jae Hyoung Kim
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
53
An Q, Rahman S, Zhou J, Kang JJ. A Comprehensive Review on Machine Learning in Healthcare Industry: Classification, Restrictions, Opportunities and Challenges. Sensors (Basel) 2023; 23:4178. [PMID: 37177382] [PMCID: PMC10180678] [DOI: 10.3390/s23094178]
Abstract
Recently, various sophisticated methods, including machine learning and artificial intelligence, have been employed to examine health-related data. Medical professionals are gaining enhanced diagnostic and treatment capabilities by applying machine learning in the healthcare domain, and medical data have been used by many researchers to detect diseases and identify patterns. However, few studies in the current literature address how machine learning algorithms can improve the accuracy and efficiency of healthcare data handling. We examined the effectiveness of machine learning algorithms in improving time-series healthcare metrics for heart rate data transmission, in terms of both accuracy and efficiency. In this paper, we review several machine learning algorithms in healthcare applications. After a comprehensive overview and investigation of supervised and unsupervised machine learning algorithms, we also demonstrate time-series prediction tasks based on past values, reviewing their feasibility for both small and large datasets.
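The closing point above, predicting a time series from its past values, can be sketched with a minimal AR(1) forecast fitted by ordinary least squares (synthetic series and lag order chosen for illustration; the review does not prescribe a specific model):

```python
def ar1_forecast(series):
    """Fit x[t] = c + phi * x[t-1] by ordinary least squares, forecast one step."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
           / sum((a - mx) ** 2 for a in xs))
    c = my - phi * mx
    return c + phi * series[-1]

# Synthetic series following the exact AR(1) law x[t+1] = 10 + 0.5 * x[t],
# so the fitted model should recover the next value almost exactly.
series = [40.0]
for _ in range(12):
    series.append(10 + 0.5 * series[-1])

next_value = ar1_forecast(series)
```

On real heart rate data the fit would not be exact, and higher lag orders or nonlinear models may be warranted.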
Affiliation(s)
- Qi An
- School of Information Technology, Faculty of Science, Engineering and Built Environment, Deakin University, Geelong, VIC 3216, Australia
- Saifur Rahman
- School of Information Technology, Faculty of Science, Engineering and Built Environment, Deakin University, Geelong, VIC 3216, Australia
- Jingwen Zhou
- School of Information Technology, Faculty of Science, Engineering and Built Environment, Deakin University, Geelong, VIC 3216, Australia
- James Jin Kang
- Computing and Security, School of Science, Edith Cowan University, Joondalup, WA 6027, Australia
54
Bhattacharjee V, Priya A, Kumari N, Anwar S. DeepCOVNet Model for COVID-19 Detection Using Chest X-Ray Images. Wireless Personal Communications 2023; 130:1399-1416. [PMID: 37168437] [PMCID: PMC10088652] [DOI: 10.1007/s11277-023-10336-0]
Abstract
COVID-19 is an epidemic disease that has threatened people worldwide and eventually became a pandemic. Differentiating COVID-19-affected patients from healthy populations is a crucial task, and the need for technology-enabled solutions is pertinent. This paper proposes a deep learning model for the detection of COVID-19 using chest X-ray (CXR) images. In this research work, we provide insights on how to build robust deep learning models that classify COVID-19 CXR images against normal and pneumonia-affected CXR images, along with a methodical guide to preparing the data needed to produce a robust model. We prepared datasets by refactoring images from several recently published datasets to improve training of the deep model; these datasets enable us to build our own model and compare it against pre-trained models. The experiments show that the proposed approach effectively classifies COVID-19 patients from CXR images. The empirical work, which uses a three-convolutional-layer deep neural network called "DeepCOVNet" to classify CXR images into three classes (COVID-19, normal, and pneumonia), yielded an accuracy of 96.77% and an F1-score of 0.96 on two different combinations of datasets.
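A three-class classifier head like the one described above ends in a softmax over class scores; a minimal sketch of that final step (toy logits, not the trained network's outputs):

```python
import math

CLASSES = ["COVID-19", "Normal", "Pneumonia"]

def softmax(logits):
    """Numerically stable softmax over a list of class scores."""
    m = max(logits)  # subtract the max so exp() cannot overflow
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.1, 0.3, -0.5]  # toy scores from a final dense layer
probs = softmax(logits)
predicted = CLASSES[probs.index(max(probs))]
```

In the full model these logits would be produced by the convolutional layers and trained with cross-entropy loss.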
Affiliation(s)
- Ankita Priya
- Birla Institute of Technology Mesra, Ranchi, 835215, India
- Nandini Kumari
- Birla Institute of Technology Mesra, Ranchi, 835215, India
- Department of Data Science & Computer Application, Manipal Institute of Technology, Manipal, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
- Shamama Anwar
- Birla Institute of Technology Mesra, Ranchi, 835215, India
55
Chen CC, Huang JF, Lin WC, Cheng CT, Chen SC, Fu CY, Lee MS, Liao CH, Chung CY. The Feasibility and Performance of Total Hip Replacement Prediction Deep Learning Algorithm with Real World Data. Bioengineering (Basel) 2023; 10:458. [PMID: 37106645] [PMCID: PMC10136253] [DOI: 10.3390/bioengineering10040458]
Abstract
(1) Background: Degenerative hip disease is a common geriatric disorder and one of the main causes of total hip replacement (THR). The surgical timing of THR is crucial for post-operative recovery. Deep learning (DL) algorithms can be used to detect anomalies in medical images and predict the need for THR. Real-world data (RWD) have been used to validate artificial intelligence and DL algorithms in medicine, but no previous study has validated their use for THR prediction. (2) Methods: We designed a sequential two-stage deep learning algorithm to identify, from plain pelvic radiographs (PXRs), hip joints likely to require THR within three months. We also collected RWD to validate the performance of this algorithm. (3) Results: The RWD comprised 3766 PXRs from 2018 to 2019. The overall accuracy of the algorithm was 0.9633, sensitivity 0.9450, specificity 1.000, and precision 1.000. The negative predictive value was 0.9009, the false negative rate was 0.0550, and the F1 score was 0.9717. The area under the curve was 0.972 (95% confidence interval, 0.953-0.987). (4) Conclusions: This DL algorithm provides an accurate and reliable method for detecting hip degeneration and predicting the need for THR. RWD offered an alternative means of validating the algorithm, saving time and cost.
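The scalar metrics reported above all follow from a binary confusion matrix; a minimal sketch of their definitions (hypothetical counts, not the study's raw data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # recall / true-positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)                 # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "npv": npv,
            "accuracy": accuracy, "f1": f1}

# Hypothetical counts; with fp = 0, specificity and precision are both 1.0,
# mirroring the pattern of the figures quoted in the abstract.
m = binary_metrics(tp=90, fp=0, tn=100, fn=10)
```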
Affiliation(s)
- Chih-Chi Chen
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Jen-Fu Huang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Wei-Cheng Lin
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Department of Electrical Engineering, Chang Gung University, Taoyuan 33302, Taiwan
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Shann-Ching Chen
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chih-Yuan Fu
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Mel S. Lee
- Department of Orthopaedic Surgery, Pao-Chien Hospital, Pingtung 90078, Taiwan
- Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chia-Ying Chung
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
56
Chen X, Balko JM, Ling F, Jin Y, Gonzalez A, Zhao Z, Chen J. Convolutional neural network for biomarker discovery for triple negative breast cancer with RNA sequencing data. Heliyon 2023; 9:e14819. [PMID: 37025902] [PMCID: PMC10070674] [DOI: 10.1016/j.heliyon.2023.e14819]
Abstract
Triple negative breast cancers (TNBCs) are tumors with a poor treatment response and prognosis. In this study, we propose a new approach, candidate extraction from convolutional neural network (CNN) elements (CECE), for the discovery of biomarkers for TNBCs. We used the GSE96058 and GSE81538 datasets to build a CNN model to classify TNBCs and non-TNBCs and used the model to make TNBC predictions for two additional datasets: The Cancer Genome Atlas (TCGA) breast cancer RNA sequencing data and data from Fudan University Shanghai Cancer Center (FUSCC). Using correctly predicted TNBCs from the GSE96058 and TCGA datasets, we calculated saliency maps for these subjects and extracted the genes that the CNN model used to separate TNBCs from non-TNBCs. Among the TNBC signature patterns that the CNN models learned from the training data, we found a set of 21 genes that can classify TNBCs into two major classes, or CECE subtypes, with distinct overall survival rates (P = 0.0074). We replicated this subtype classification in the FUSCC dataset using the same 21 genes, and the two subtypes showed a similar difference in overall survival (P = 0.0490). When all TNBCs from the three datasets were combined, the CECE II subtype had a hazard ratio of 1.94 (95% CI, 1.25-3.01; P = 0.0032). The results demonstrate that the spatial patterns learned by CNN models can be used to discover interacting biomarkers that traditional approaches are otherwise unlikely to identify.
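Saliency-based extraction of influential genes can be illustrated with a toy differentiable classifier: for a logistic model, the gradient of the predicted probability with respect to each input is available in closed form, and its magnitude ranks input importance (toy weights and expression values below; the paper computes gradients through a trained CNN instead):

```python
import math

def logistic_saliency(w, b, x):
    """|dp/dx_i| for p = sigmoid(w . x + b): closed-form input saliency."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    # dp/dx_i = w_i * p * (1 - p); take the magnitude as the saliency score.
    return [abs(wi * p * (1.0 - p)) for wi in w]

# Toy 5-gene model: genes 0 and 3 carry the largest weights,
# so they should receive the highest saliency.
w = [2.0, 0.1, -0.2, -1.5, 0.05]
x = [1.0, 1.0, 1.0, 1.0, 1.0]
sal = logistic_saliency(w, b=0.0, x=x)
top_genes = sorted(range(len(sal)), key=sal.__getitem__, reverse=True)[:2]
```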
Affiliation(s)
- Justin M. Balko
- Department of Medicine, Vanderbilt-Ingram Cancer Center, Vanderbilt University Medical Center, 2101 W End Ave, Nashville, TN, 37240, USA
- Breast Cancer Research Program, Vanderbilt-Ingram Cancer Center, Vanderbilt University Medical Center, 2101 W End Ave, Nashville, TN, 37240, USA
- Departments of Pathology, Microbiology, and Immunology, Vanderbilt-Ingram Cancer Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Fei Ling
- School of Biology and Biological Engineering, South China University of Technology, Guangzhou, Guangdong, China
- Yabin Jin
- Clinical Research Institute, The First People's Hospital of Foshan, Foshan, China
- Anneliese Gonzalez
- Department of Internal Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Department of Psychiatry and Behavioral Sciences, McGovern Medical School, The University of Texas, Houston, TX 77030, USA
- Jingchun Chen
- Nevada Institute of Personalized Medicine, University of Nevada Las Vegas, Las Vegas, NV 89154, USA
57
Generative adversarial feature learning for glomerulopathy histological classification. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104562]
58
Farhan AMQ, Yang S. Automatic lung disease classification from the chest X-ray images using hybrid deep learning algorithm. Multimedia Tools and Applications 2023:1-27. [PMID: 37362647] [PMCID: PMC10030349] [DOI: 10.1007/s11042-023-15047-z]
Abstract
Chest X-ray images provide vital diagnostic information cost-effectively. We propose a novel Hybrid Deep Learning Algorithm (HDLA) framework for automatic lung disease classification from chest X-ray images. The model consists of steps including pre-processing of chest X-ray images, automatic feature extraction, and detection. In the pre-processing step, our goal is to improve the quality of raw chest X-ray images using a combination of optimal filters without data loss. A robust Convolutional Neural Network (CNN) based on a pre-trained model is proposed for automatic lung feature extraction. We employed the 2D CNN model for optimal feature extraction with minimal time and space requirements. The proposed 2D CNN model ensures robust feature learning with highly efficient 1D feature estimation from the pre-processed input image. As the extracted 1D features suffer from significant scale variation, we normalized them using min-max scaling. We classify the CNN features using different machine learning classifiers, such as AdaBoost, Support Vector Machine (SVM), Random Forest (RF), Backpropagation Neural Network (BNN), and Deep Neural Network (DNN). The experimental results show that the proposed model improves overall accuracy by 3.1% and reduces computational complexity by 16.91% compared to state-of-the-art methods.
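The min-max scaling step mentioned above maps each feature vector into [0, 1]; a one-function sketch (synthetic feature values, not the paper's CNN outputs):

```python
def min_max_scale(values, eps=1e-12):
    """Rescale a 1D feature vector into [0, 1]; eps guards a constant vector."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo + eps) for v in values]

features = [3.0, 7.0, 1.0, 9.0]
scaled = min_max_scale(features)  # minimum maps to 0, maximum to ~1
```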
Affiliation(s)
- Abobaker Mohammed Qasem Farhan
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Shangming Yang
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
59
Haubold J, Zeng K, Farhand S, Stalke S, Steinberg H, Bos D, Meetschen M, Kureishi A, Zensen S, Goeser T, Maier S, Forsting M, Nensa F. AI co-pilot: content-based image retrieval for the reading of rare diseases in chest CT. Sci Rep 2023; 13:4336. [PMID: 36928759] [PMCID: PMC10020154] [DOI: 10.1038/s41598-023-29949-3]
Abstract
The aim of the study was to evaluate the impact of the newly developed Similar Patient Search (SPS) web service, which supports the reading of complex lung diseases in computed tomography (CT), on the diagnostic accuracy of residents. SPS is an image-based search engine for pre-diagnosed cases along with related clinical reference content ( https://eref.thieme.de ). The reference database was constructed using 13,658 annotated regions of interest (ROIs) from 621 patients, comprising 69 lung diseases. For validation, 50 CT scans were evaluated by five radiology residents without SPS, and three months later with SPS. The residents could give a maximum of three diagnoses per case. A maximum of 3 points was awarded if the correct diagnosis was provided without any additional diagnoses. The residents achieved an average score of 17.6 ± 5.0 points without SPS. With SPS, the residents increased their score by 81.8% to 32.0 ± 9.5 points. The improvement in score per case was highly significant (p = 0.0001). The residents required an average of 205.9 ± 350.6 s per case (a 21.9% increase) when SPS was used. However, in the second half of the cases, after the residents became more familiar with SPS, this increase dropped to 7%. Residents' average score in reading complex chest CT scans improved by 81.8% when the AI-driven SPS with integrated clinical reference content was used. The increase in time per case due to the use of SPS was minimal.
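At its core, content-based retrieval of the kind SPS performs is nearest-neighbour search over image feature vectors; a hedged sketch using cosine similarity (hand-made toy embeddings; the abstract does not describe the real system's features or index at this level):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query, database, k=2):
    """Indices of the k database embeddings most similar to the query."""
    ranked = sorted(range(len(database)),
                    key=lambda i: cosine(query, database[i]), reverse=True)
    return ranked[:k]

# Toy embeddings for four pre-diagnosed cases; the query resembles case 2.
database = [
    [1.0, 0.0, 0.2],
    [0.0, 1.0, 0.1],
    [0.3, 0.3, 1.0],
    [0.9, 0.9, 0.0],
]
query = [0.25, 0.35, 0.95]
top = retrieve(query, database, k=2)
```

A production system would use learned CNN embeddings and an approximate-nearest-neighbour index rather than exhaustive comparison.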
Affiliation(s)
- Johannes Haubold
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Ke Zeng
- Siemens Medical Solutions Inc., Malvern, PA, USA
- Hannah Steinberg
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Denise Bos
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Mathias Meetschen
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Anisa Kureishi
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Sebastian Zensen
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Tim Goeser
- Department of Radiology and Neuroradiology, Kliniken Maria Hilf, Viersener Str. 450, 41063, Mönchengladbach, NRW, Germany
- Sandra Maier
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Michael Forsting
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Felix Nensa
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
60
Liu J, Feng Q, Miao Y, He W, Shi W, Jiang Z. COVID-19 disease identification network based on weakly supervised feature selection. Mathematical Biosciences and Engineering 2023; 20:9327-9348. [PMID: 37161245] [DOI: 10.3934/mbe.2023409]
Abstract
The coronavirus disease 2019 (COVID-19) outbreak has resulted in countless infections and deaths worldwide, posing increasing challenges for the health care system. Using artificial intelligence to assist in diagnosis not only achieves high accuracy but also saves time and effort during a sudden outbreak, when doctors and medical equipment are scarce. This study proposes a weakly supervised COVID-19 classification network (W-COVNet), divided into three main modules: a weakly supervised feature selection module (W-FS), a deep learning bilinear feature fusion module (DBFF), and a Grad-CAM++-based network visualization module (Grad-V). The first module, W-FS, removes redundant background features from computed tomography (CT) images, performs feature selection, and retains core feature regions. The second module, DBFF, uses two symmetric networks to extract different features and thus obtain rich complementary features. The third module, Grad-V, allows the visualization of lesions in unlabeled images. A fivefold cross-validation experiment showed an average classification accuracy of 85.3%, and a comparison with seven advanced classification models showed that the proposed network performs better.
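The fivefold cross-validation behind the 85.3% figure is index bookkeeping that needs no ML library; a minimal sketch (synthetic sample count, not the study's dataset):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Partition n sample indices into k (train, validation) splits."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)       # deterministic shuffle for reproducibility
    folds = [idx[i::k] for i in range(k)]  # round-robin assignment into k folds
    splits = []
    for i in range(k):
        val = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        splits.append((train, val))
    return splits

splits = k_fold_indices(100, 5)  # 5 splits of 80 train / 20 validation indices
```

The reported accuracy would then be the mean of the per-fold validation accuracies.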
Affiliation(s)
- Jingyao Liu
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, Jilin 130022, China
- School of Computer and Information Engineering, Chuzhou University, Chuzhou 239000, China
- Qinghe Feng
- School of Intelligent Engineering, Henan Institute of Technology, Xinxiang 453003, China
- Yu Miao
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, Jilin 130022, China
- Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China
- Wei He
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, Jilin 130022, China
- Weili Shi
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, Jilin 130022, China
- Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China
- Zhengang Jiang
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, Jilin 130022, China
- Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China
61
Krinski BA, Ruiz DV, Laroca R, Todt E. DACov: a deeper analysis of data augmentation on the computed tomography segmentation problem. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2023. [DOI: 10.1080/21681163.2023.2183807]
Affiliation(s)
- Bruno A. Krinski
- Department of Informatics, Federal University of Paraná, Curitiba, Brazil
- Daniel V. Ruiz
- Department of Informatics, Federal University of Paraná, Curitiba, Brazil
- Rayson Laroca
- Department of Informatics, Federal University of Paraná, Curitiba, Brazil
- Eduardo Todt
- Department of Informatics, Federal University of Paraná, Curitiba, Brazil
62
Li Z, Chen W, Ju Y, Chen Y, Hou Z, Li X, Jiang Y. Bone age assessment based on deep neural networks with annotation-free cascaded critical bone region extraction. Front Artif Intell 2023; 6:1142895. [PMID: 36937708] [PMCID: PMC10017763] [DOI: 10.3389/frai.2023.1142895]
Abstract
Bone age assessment (BAA) from hand radiographs is crucial for diagnosing endocrine disorders in adolescents and supporting therapeutic investigation. In practice, because conventional clinical assessment is a subjective estimation, the accuracy of BAA relies heavily on the pediatrician's professionalism and experience. Recently, many deep learning methods have been proposed for the automatic estimation of bone age, with good results. However, these methods either do not exploit sufficient discriminative information or require additional manual annotations of critical bone regions, which are important biological identifiers of skeletal maturity; this may restrict their clinical application. In this research, we propose a novel two-stage deep learning method for BAA without any manual region annotation, consisting of a cascaded critical bone region extraction network and a gender-assisted bone age estimation network. First, the cascaded critical bone region extraction network automatically and sequentially locates two discriminative bone regions via visual heat maps. Second, to obtain an accurate BAA, the extracted critical bone regions are fed into the gender-assisted bone age estimation network. The proposed method achieved a mean absolute error (MAE) of 5.45 months on the public Radiological Society of North America (RSNA) dataset and 3.34 months on our private dataset.
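The headline metric, mean absolute error in months, is a direct average of per-subject errors; a minimal sketch (hypothetical predictions, not the paper's results):

```python
def mae_months(predicted, actual):
    """Mean absolute error between predicted and reference bone ages (months)."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

predicted = [120.0, 96.5, 150.0, 132.0]   # months
reference = [126.0, 95.0, 148.0, 130.0]
error = mae_months(predicted, reference)
```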
Affiliation(s)
- Zhangyong Li
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Wang Chen
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Yang Ju
- Department of Mechanical Science and Engineering, Graduate School of Engineering, Nagoya University, Nagoya, Japan
- Yong Chen
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Zhengjun Hou
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Xinwei Li
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Yuhao Jiang
- Chongqing Engineering Research Center of Medical Electronics and Information Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
63
Wang J, Dou J, Han J, Li G, Tao J. A population-based study to assess two convolutional neural networks for dental age estimation. BMC Oral Health 2023; 23:109. [PMID: 36803132] [PMCID: PMC9938587] [DOI: 10.1186/s12903-023-02817-2]
Abstract
BACKGROUND Dental age (DA) estimation using two convolutional neural networks (CNNs), VGG16 and ResNet101, remains unexplored. In this study, we aimed to investigate the feasibility of artificial intelligence-based methods in an eastern Chinese population. METHODS A total of 9586 orthopantomograms (OPGs) (4054 boys and 5532 girls) of the Chinese Han population aged from 6 to 20 years were collected. DAs were automatically calculated using the two CNN model strategies. Accuracy, recall, precision, and F1 score of the models were used to evaluate VGG16 and ResNet101 for age estimation. An age threshold was also employed to evaluate the two CNN models. RESULTS The VGG16 network outperformed the ResNet101 network in terms of prediction performance, although VGG16 performed less well in the 15-17 age group than in other age ranges. The VGG16 network's predictions for the younger age groups were acceptable: in the 6- to 8-year-old group, the accuracy of the VGG16 model reached 93.63%, higher than the 88.73% accuracy of the ResNet101 network. The age-threshold evaluation also implies that VGG16 has a smaller age-difference error. CONCLUSIONS This study demonstrated that, overall, VGG16 performed better than the ResNet101 network for DA estimation from OPGs. CNNs such as VGG16 hold great promise for future use in clinical practice and the forensic sciences.
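The age-threshold evaluation can be read as the share of estimates falling within a tolerance of the true age; a hedged sketch (synthetic ages; the paper's exact threshold definition may differ):

```python
def within_threshold_accuracy(predicted, actual, threshold=1.0):
    """Share of estimates whose absolute error is at most `threshold` years."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= threshold)
    return hits / len(predicted)

predicted = [7.2, 8.9, 14.0, 16.5, 12.1]  # estimated ages in years
actual = [7.0, 8.0, 16.0, 16.0, 12.0]     # chronological ages
accuracy = within_threshold_accuracy(predicted, actual)  # 4 of 5 within 1 year
```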
Collapse
Affiliation(s)
- Jian Wang
- Department of General Dentistry, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, Shanghai, 200011, China; National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai, 200011, China
| | - Jiawei Dou
- School of Software, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Jiaxuan Han
- Department of General Dentistry, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, Shanghai, 200011, China; National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai, 200011, China
| | - Guoqiang Li
- School of Software, Shanghai Jiao Tong University, Shanghai, 200240, China.
| | - Jiang Tao
- Department of General Dentistry, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, Shanghai, 200011, China; National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai Research Institute of Stomatology, Shanghai, 200011, China.
| |
Collapse
|
64
|
Jin Y, Lu H, Zhu W, Huo W. Deep learning based classification of multi-label chest X-ray images via dual-weighted metric loss. Comput Biol Med 2023; 157:106683. [PMID: 36905869 DOI: 10.1016/j.compbiomed.2023.106683] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 10/17/2022] [Accepted: 11/06/2022] [Indexed: 02/17/2023]
Abstract
Thoracic disease, like many other diseases, can lead to complications. Existing multi-label medical image learning problems typically include rich pathological information, such as images, attributes, and labels, which are crucial for supplementary clinical diagnosis. However, the majority of contemporary efforts focus exclusively on regression from input to binary labels, ignoring the relationship between visual features and the semantic vectors of labels. In addition, the amount of data is imbalanced across diseases, which frequently causes intelligent diagnostic systems to make erroneous disease predictions. Therefore, we aim to improve the accuracy of multi-label classification of chest X-ray images. The ChestX-ray14 images were utilized as the multi-label dataset for the experiments in this study. By fine-tuning the ConvNeXt network, we obtained visual vectors, which we combined with semantic vectors encoded by BioBERT to map the two different forms of features into a common metric space, making the semantic vectors the prototype of each class in that space. The metric relationship between images and labels is then considered at the image level and the disease-category level, respectively, and a new dual-weighted metric loss function is proposed. Finally, the average AUC score achieved in the experiment reached 0.826, and our model outperformed the comparison models.
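The core idea above (image embeddings scored by distance to per-class semantic prototypes, with per-class weights countering label imbalance) can be sketched in a simplified numpy form. This is an illustrative stand-in, not the paper's implementation: the function names, the softmax-over-negative-distance scoring, and the weighting scheme are all assumptions.

```python
import numpy as np

def prototype_logits(img_feats, protos):
    """Negative Euclidean distance from each image embedding to each class
    prototype (here standing in for the label's semantic vector) as logit."""
    d = np.linalg.norm(img_feats[:, None, :] - protos[None, :, :], axis=-1)
    return -d

def weighted_metric_loss(img_feats, protos, targets, class_weights):
    """Class-weighted cross-entropy over the distance-based logits; rarer
    disease classes can be given larger weights to counter imbalance."""
    logits = prototype_logits(img_feats, protos)
    z = logits - logits.max(axis=1, keepdims=True)      # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    nll = -np.log(p[np.arange(len(targets)), targets] + 1e-12)
    return float(np.mean(np.asarray(class_weights)[targets] * nll))
```

With embeddings near their correct prototypes the loss is low; assigning the same embeddings to the wrong classes raises it, which is the behavior a metric loss is meant to enforce.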
Collapse
Affiliation(s)
- Yufei Jin
- College of Information Engineering, China Jiliang University, Hangzhou, China.
| | - Huijuan Lu
- College of Information Engineering, China Jiliang University, Hangzhou, China.
| | - Wenjie Zhu
- College of Information Engineering, China Jiliang University, Hangzhou, China.
| | - Wanli Huo
- College of Information Engineering, China Jiliang University, Hangzhou, China.
| |
Collapse
|
65
|
Dash S, Parida P, Mohanty JR. Illumination robust deep convolutional neural network for medical image classification. Soft comput 2023. [DOI: 10.1007/s00500-023-07918-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/19/2023]
|
66
|
Huang YS, Wang TC, Huang SZ, Zhang J, Chen HM, Chang YC, Chang RF. An improved 3-D attention CNN with hybrid loss and feature fusion for pulmonary nodule classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 229:107278. [PMID: 36463674 DOI: 10.1016/j.cmpb.2022.107278] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Revised: 11/17/2022] [Accepted: 11/24/2022] [Indexed: 06/17/2023]
Abstract
BACKGROUND AND OBJECTIVE Lung cancer has the highest cancer-related mortality worldwide, and lung nodules usually present with no symptoms. Low-dose computed tomography (LDCT) is an important tool for lung cancer detection and diagnosis, providing a complete three-dimensional (3-D) chest image at high resolution. Recently, convolutional neural networks (CNNs) have flourished, and CNN-based computer-aided diagnosis (CADx) systems have been shown to extract features and help radiologists make a preliminary diagnosis. Therefore, a 3-D ResNeXt-based CADx system was proposed in this study to assist radiologists with diagnosis. METHODS The proposed CADx system consists of image preprocessing and a 3-D CNN-based classification model for pulmonary nodule classification. First, image preprocessing was executed to generate a normalized volume of interest (VOI) including only the nodule and a few surrounding tissues. Then, the extracted VOI was forwarded to the 3-D nodule classification model. In the classification model, ResNeXt was employed as the backbone, and an attention scheme was embedded to focus on the important features. Moreover, a multi-level feature fusion network incorporating feature information at different scales was used to enhance the prediction accuracy for small malignant nodules. Finally, a hybrid loss based on channel optimization, which makes the network learn more detailed information, was employed to replace the binary cross-entropy (BCE) loss. RESULTS A total of 880 low-dose CT images, including 440 benign and 440 malignant nodules from the American National Lung Screening Trial (NLST), were used for system evaluation. The results showed that the system achieved an accuracy of 85.3%, a sensitivity of 86.8%, a specificity of 83.9%, and an area-under-the-curve (AUC) value of 0.9042, confirming that the designed system has good diagnostic ability.
CONCLUSION In this study, a CADx system composed of image preprocessing and a 3-D nodule classification model with an attention scheme, feature fusion, and a hybrid loss was proposed for pulmonary nodule classification in LDCT. The results indicated that the proposed CADx system has the potential to achieve high performance in classifying lung nodules as benign or malignant.
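The "normalized VOI" preprocessing step can be sketched as a Hounsfield-unit windowing followed by scaling to [0, 1]. The window bounds below are assumed values typical for chest CT, not taken from the paper.

```python
import numpy as np

def normalize_voi(voi, hu_min=-1000.0, hu_max=400.0):
    """Clip CT intensities (Hounsfield units) to an assumed chest window
    and rescale to [0, 1] before feeding the volume to the 3-D CNN."""
    v = np.clip(np.asarray(voi, dtype=float), hu_min, hu_max)
    return (v - hu_min) / (hu_max - hu_min)
```

Values below the window floor map to 0 and values above the ceiling map to 1, so extreme intensities (air, metal) cannot dominate the network's input range.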
Collapse
Affiliation(s)
- Yao-Sian Huang
- Department of Computer Science and Information Engineering, National Changhua University of Education, Changhua, Taiwan, ROC
| | - Teh-Chen Wang
- Department of Medical Imaging, Taipei City Hospital Yangming Branch, Taipei, Taiwan, ROC
| | - Sheng-Zhi Huang
- Graduate Institute of Network and Multimedia, National Taiwan University, Taipei, Taiwan, ROC
| | - Jun Zhang
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan, ROC
| | - Hsin-Ming Chen
- Department of Medical Imaging, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu, Taiwan, ROC
| | - Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 10617, Taiwan, ROC.
| | - Ruey-Feng Chang
- Graduate Institute of Network and Multimedia, National Taiwan University, Taipei, Taiwan, ROC; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan, ROC; Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan, ROC; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, Taiwan, ROC.
| |
Collapse
|
67
|
Zhou H, Liu Z, Li T, Chen Y, Huang W, Zhang Z. Classification of precancerous lesions based on fusion of multiple hierarchical features. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 229:107301. [PMID: 36516661 DOI: 10.1016/j.cmpb.2022.107301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/07/2022] [Revised: 12/01/2022] [Accepted: 12/05/2022] [Indexed: 06/17/2023]
Abstract
PURPOSE To investigate an identification method for gastric precancerous lesions based on the fusion of superficial and deep features of gastroscopic images. The purpose of this study is to make the most of superficial and deep features to provide clinicians with clinical decision support, assisting the diagnosis of gastric precancerous diseases and reducing the workload of doctors. METHODS According to the nature of gastroscopic images, 75-dimensional shallow features were manually designed, including histogram features, texture features, and high-order features of the image. Then, based on constructed convolutional neural networks such as ResNet and GoogLeNet, a fully connected layer was added before the output layer to serve as the deep feature of the image; to keep the feature weights consistent, the number of neurons in this fully connected layer was also set to 75. The superficial and deep features of the image were then concatenated, and a machine learning classifier was used to identify three types of gastric precancerous disease: gastric polyps, gastric ulcers, and gastric erosions. RESULTS A dataset of 420 images was collected for each disease and divided into a training set and a test set with a ratio of 5:1. Based on this dataset, three approaches were evaluated: traditional machine learning, deep learning, and feature fusion. For traditional machine learning and feature fusion, SVM, random forest (RF), and BP neural network classifiers were trained and tested; for deep learning, GoogLeNet, ResNet, and ResNeXt were implemented.
The results on the test set show that the recognition accuracy of the proposed feature fusion method (SVM: 85.18%; RF: 83.42%; BPNN: 85.18%) is better than both the traditional machine learning method (SVM: 80.17%; RF: 82.37%; BPNN: 84.12%) and the deep learning method (GoogLeNet: 82.54%; ResNet-18: 81.67%; ResNet-50: 81.67%; ResNeXt-50: 82.11%), demonstrating a clear advantage. CONCLUSION This study provides a new strategy for identifying gastric precancerous lesions, improving the efficiency and accuracy of their identification, and we hope it provides substantial practical help for the identification of gastric precancerous diseases.
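The fusion step itself is simple: because both feature vectors are 75-dimensional, a plain concatenation gives each source equal representation before the classifier. A minimal sketch (the function name and shape checks are illustrative, not the authors' code):

```python
import numpy as np

def fuse_features(shallow, deep):
    """Concatenate 75-dim handcrafted features with 75-dim deep features so
    both contribute equally many dimensions to the downstream classifier."""
    shallow = np.atleast_2d(np.asarray(shallow, dtype=float))
    deep = np.atleast_2d(np.asarray(deep, dtype=float))
    assert shallow.shape[1] == deep.shape[1] == 75, "paper uses 75 + 75 dims"
    return np.concatenate([shallow, deep], axis=1)  # shape (n, 150)
```

The fused 150-dimensional vectors would then be fed to an SVM, random forest, or BP neural network as described above.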
Collapse
Affiliation(s)
- Huijun Zhou
- Department of Gastroenterology and Urology, Hunan Cancer Hospital/The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan 410013, China
| | - Zhenyang Liu
- Department of Gastroenterology and Urology, Hunan Cancer Hospital/The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan 410013, China
| | - Ting Li
- Department of Gastroenterology and Urology, Hunan Cancer Hospital/The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan 410013, China
| | - Yifei Chen
- Department of Endoscopic Diagnosis and Treatment Center, Hunan Cancer Hospital/The Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan 410013, China
| | - Wei Huang
- Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha 410008, China; National Clinical Research Center of Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China; Research Center of Carcinogenesis and Targeted Therapy, Xiangya Hospital, Central South University, Changsha, Hunan, China.
| | - Zijian Zhang
- Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha 410008, China; National Clinical Research Center of Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China.
| |
Collapse
|
68
|
Tseng CC, Lim V, Jyung RW. Use of artificial intelligence for the diagnosis of cholesteatoma. Laryngoscope Investig Otolaryngol 2023; 8:201-211. [PMID: 36846416 PMCID: PMC9948563 DOI: 10.1002/lio2.1008] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 12/07/2022] [Accepted: 12/30/2022] [Indexed: 01/19/2023] Open
Abstract
Objectives Accurate diagnosis of cholesteatomas is crucial. However, cholesteatomas can easily be missed in routine otoscopic exams. Convolutional neural networks (CNNs) have performed well in medical image classification, so we evaluated their use for detecting cholesteatomas in otoscopic images. Study Design Design and evaluation of artificial intelligence driven workflow for cholesteatoma diagnosis. Methods Otoscopic images collected from the faculty practice of the senior author were deidentified and labeled by the senior author as cholesteatoma, abnormal non-cholesteatoma, or normal. An image classification workflow was developed to automatically differentiate cholesteatomas from other possible tympanic membrane appearances. Eight pretrained CNNs were trained on our otoscopic images, then tested on a withheld subset of images to evaluate their final performance. CNN intermediate activations were also extracted to visualize important image features. Results A total of 834 otoscopic images were collected, further categorized into 197 cholesteatoma, 457 abnormal non-cholesteatoma, and 180 normal. Final trained CNNs demonstrated strong performance, achieving accuracies of 83.8%-98.5% for differentiating cholesteatoma from normal, 75.6%-90.1% for differentiating cholesteatoma from abnormal non-cholesteatoma, and 87.0%-90.4% for differentiating cholesteatoma from non-cholesteatoma (abnormal non-cholesteatoma + normal). DenseNet201 (100% sensitivity, 97.1% specificity), NASNetLarge (100% sensitivity, 88.2% specificity), and MobileNetV2 (94.1% sensitivity, 100% specificity) were among the best performing CNNs in distinguishing cholesteatoma versus normal. Visualization of intermediate activations showed robust detection of relevant image features by the CNNs. 
Conclusion While further refinement and more training images are needed to improve performance, artificial intelligence-driven analysis of otoscopic images shows great promise as a diagnostic tool for detecting cholesteatomas. Level of Evidence 3.
Collapse
Affiliation(s)
- Christopher C. Tseng
- Department of Otolaryngology – Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA
| | - Valerie Lim
- Department of Otolaryngology – Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA
| | - Robert W. Jyung
- Department of Otolaryngology – Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA
| |
Collapse
|
69
|
Deep Transfer Learning Techniques-Based Automated Classification and Detection of Pulmonary Fibrosis from Chest CT Images. Processes (Basel) 2023. [DOI: 10.3390/pr11020443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023] Open
Abstract
Pulmonary Fibrosis (PF) is a non-curable chronic lung disease, so a quick and accurate PF diagnosis is imperative. In the present study, we compare the performance of six state-of-the-art deep transfer learning techniques for accurately classifying patients and localizing abnormalities in Computed Tomography (CT) scan images. A total of 2299 samples comprising normal and PF-positive CT images were preprocessed and split into training (75%), validation (15%), and test (10%) data. The transfer learning models were trained and validated by optimizing hyperparameters such as the learning rate and the number of epochs, and the optimized architectures were evaluated with different performance metrics to demonstrate the consistency of the optimized model. At epoch 26, using an optimized learning rate of 0.0000625, the ResNet50v2 model achieved the highest training and validation accuracy (training = 99.92%, validation = 99.22%) and minimum loss (training = 0.00428, validation = 0.00683) for CT images. Evaluation on the independent test data confirms that the optimized ResNet50v2 outperformed every other optimized architecture under consideration, achieving a perfect score of 1.0 on each of the standard performance measures: accuracy, precision, recall, F1-score, Matthews Correlation Coefficient (MCC), area under the receiver operating characteristic curve (ROC-AUC), and area under the precision-recall curve (AUC-PR). We therefore propose the optimized ResNet50v2 as a reliable diagnostic model for automatically classifying PF-positive patients using chest CT images.
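Among the metrics reported above, the Matthews Correlation Coefficient is the easiest to mis-implement; a minimal reference implementation from binary confusion-matrix counts (standard definition, not code from the cited work):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from a binary confusion matrix:
    +1 = perfect prediction, 0 = chance level, -1 = total disagreement."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc(10, 10, 0, 0))  # 1.0 -- the "perfect score" case in the abstract
```

Unlike accuracy, MCC stays informative under class imbalance, which is why it is commonly reported alongside ROC-AUC and AUC-PR.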
Collapse
|
70
|
Yang P, Guo X, Mu C, Qi S, Li G. Detection of vertical root fractures by cone-beam computed tomography based on deep learning. Dentomaxillofac Radiol 2023; 52:20220345. [PMID: 36802858 PMCID: PMC9944014 DOI: 10.1259/dmfr.20220345] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 01/31/2023] [Accepted: 01/31/2023] [Indexed: 02/23/2023] Open
Abstract
OBJECTIVES This study aims to evaluate the performance of ResNet models in the detection of in vitro and in vivo vertical root fractures (VRF) in cone-beam computed tomography (CBCT) images. METHODS A CBCT image dataset consisting of 28 teeth (14 intact and 14 with VRF; 1641 slices) from 14 patients, and another dataset containing 60 teeth (30 intact and 30 with VRF; 3665 slices) from an in vitro model, were used to establish VRF convolutional neural network (CNN) models. ResNet, a popular CNN architecture, was fine-tuned at different depths for the detection of VRF. Sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC) for the VRF slices classified by the CNN in the test set were compared. Two oral and maxillofacial radiologists independently reviewed all CBCT images of the test set, and intraclass correlation coefficients (ICCs) were calculated to assess their interobserver agreement. RESULTS The AUCs of the models on the patient data were 0.827 (ResNet-18), 0.929 (ResNet-50), and 0.882 (ResNet-101), improving on the mixed data to 0.927 (ResNet-18), 0.936 (ResNet-50), and 0.893 (ResNet-101). The maximum AUCs, 0.929 (95% CI 0.908-0.950) on the patient data and 0.936 (95% CI 0.924-0.948) on the mixed data, both from ResNet-50, are comparable to the AUCs obtained by the two oral and maxillofacial radiologists (0.937 and 0.950 on the patient data; 0.915 and 0.935 on the mixed data, respectively). CONCLUSIONS Deep-learning models showed high accuracy in the detection of VRF using CBCT images. The data obtained from the in vitro VRF model increase the data scale, which is beneficial to the training of deep-learning models.
Collapse
Affiliation(s)
| | | | | | - Senrong Qi
- Department of Oral and Maxillofacial Radiology, Beijing Stomatology Hospital, School of Stomatology, Capital Medical University, Beijing, China
| | - Gang Li
- Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, Beijing, China
| |
Collapse
|
71
|
Ghaffar Nia N, Kaplanoglu E, Nasab A. Evaluation of artificial intelligence techniques in disease diagnosis and prediction. DISCOVER ARTIFICIAL INTELLIGENCE 2023. [PMCID: PMC9885935 DOI: 10.1007/s44163-023-00049-5] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
Abstract
A broad range of medical diagnoses is based on analyzing disease images obtained through high-tech digital devices. The application of artificial intelligence (AI) in the assessment of medical images has enabled accurate evaluations to be performed automatically, which in turn has reduced the workload of physicians, decreased errors and time in diagnosis, and improved performance in the prediction and detection of various diseases. AI techniques based on medical image processing are an essential area of research that uses advanced computer algorithms for prediction, diagnosis, and treatment planning, with a remarkable impact on decision-making procedures. Machine Learning (ML) and Deep Learning (DL), two advanced AI techniques, are the main subfields applied in the healthcare system to diagnose diseases, discover medications, and identify patient risk factors. The advancement of electronic medical records and big data technologies in recent years has accompanied the success of ML and DL algorithms. ML includes neural networks and fuzzy logic algorithms with various applications in automating forecasting and diagnosis processes. DL is an ML technique that, unlike classical neural network algorithms, does not rely on expert feature extraction. DL algorithms with high-performance computation give promising results in medical image analysis tasks such as fusion, segmentation, registration, and classification. The Support Vector Machine (SVM), an ML method, and the Convolutional Neural Network (CNN), a DL method, are usually the most widely used techniques for analyzing and diagnosing diseases. This review aims to cover recent AI techniques for diagnosing and predicting numerous diseases, such as cancers and heart, lung, skin, genetic, and neural disorders, which perform more precisely than specialists and without human error. AI's existing challenges and limitations in the medical area are also discussed and highlighted.
Collapse
Affiliation(s)
- Nafiseh Ghaffar Nia
- College of Engineering and Computer Science, The University of Tennessee at Chattanooga, Chattanooga, TN 37403 USA
| | - Erkan Kaplanoglu
- College of Engineering and Computer Science, The University of Tennessee at Chattanooga, Chattanooga, TN 37403 USA
| | - Ahad Nasab
- College of Engineering and Computer Science, The University of Tennessee at Chattanooga, Chattanooga, TN 37403 USA
| |
Collapse
|
72
|
Afriyie Y, Weyori BA, Opoku AA. A scaling up approach: a research agenda for medical imaging analysis with applications in deep learning. J EXP THEOR ARTIF IN 2023. [DOI: 10.1080/0952813x.2023.2165721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Affiliation(s)
- Yaw Afriyie
- Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
- Department of Computer Science, Faculty of Information and Communication Technology, SD Dombo University of Business and Integrated Development Studies, Wa, Ghana
| | - Benjamin A. Weyori
- Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
| | - Alex A. Opoku
- Department of Mathematics & Statistics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
| |
Collapse
|
73
|
Artificial intelligence in cancer research and precision medicine: Applications, limitations and priorities to drive transformation in the delivery of equitable and unbiased care. Cancer Treat Rev 2023; 112:102498. [PMID: 36527795 DOI: 10.1016/j.ctrv.2022.102498] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Revised: 12/03/2022] [Accepted: 12/06/2022] [Indexed: 12/14/2022]
Abstract
Artificial intelligence (AI) has experienced explosive growth in oncology and related specialties in recent years. The improved expertise in data capture, the increased capacity for data aggregation and analytic power, along with decreasing costs of genome sequencing and related biologic "omics", set the foundation and need for novel tools that can meaningfully process these data from multiple sources and of varying types. These advances provide value across biomedical discovery, diagnosis, prognosis, treatment, and prevention, in a multimodal fashion. However, while big data and AI tools have already revolutionized many fields, medicine has partially lagged due to its complexity and multi-dimensionality, leading to technical challenges in developing and validating solutions that generalize to diverse populations. Indeed, inner biases and miseducation of algorithms, in view of their implementation in daily clinical practice, are increasingly relevant concerns; critically, it is possible for AI to mirror the unconscious biases of the humans who generated these algorithms. Therefore, to avoid worsening existing health disparities, it is critical to employ a thoughtful, transparent, and inclusive approach that involves addressing bias in algorithm design and implementation along the cancer care continuum. In this review, a broad landscape of major applications of AI in cancer care is provided, with a focus on cancer research and precision medicine. Major challenges posed by the implementation of AI in the clinical setting will be discussed. Potentially feasible solutions for mitigating bias are provided, in the light of promoting cancer health equity.
Collapse
|
74
|
Seo H, Hwang J, Jung YH, Lee E, Nam OH, Shin J. Deep focus approach for accurate bone age estimation from lateral cephalogram. J Dent Sci 2023; 18:34-43. [PMID: 36643224 PMCID: PMC9831852 DOI: 10.1016/j.jds.2022.07.018] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Revised: 07/22/2022] [Accepted: 07/22/2022] [Indexed: 01/18/2023] Open
Abstract
Background/purpose Bone age is a useful indicator of children's growth and development. Recently, the rapid development of deep-learning technique has shown promising results in estimating bone age. This study aimed to devise a deep-learning approach for accurate bone-age estimation by focusing on the cervical vertebrae on lateral cephalograms of growing children using image segmentation. Materials and methods We included 900 participants, aged 4-18 years, who underwent lateral cephalogram and hand-wrist radiograph on the same day. First, cervical vertebrae segmentation was performed from the lateral cephalogram using DeepLabv3+ architecture. Second, after extracting the region of interest from the segmented image for preprocessing, bone age was estimated through transfer learning using a regression model based on Inception-ResNet-v2 architecture. The dataset was divided into train:test sets in a ratio of 4:1; five-fold cross-validation was performed at each step. Results The segmentation model possessed average accuracy, intersection over union, and mean boundary F1 scores of 0.956, 0.913, and 0.895, respectively, for the segmentation of cervical vertebrae from lateral cephalogram. The regression model for estimating bone age from segmented cervical vertebrae images yielded average mean absolute error and root mean squared error values of 0.300 and 0.390 years, respectively. The coefficient of determination of the proposed method for the actual and estimated bone age was 0.983. Our method visualized important regions on cervical vertebral images to make a prediction using the gradient-weighted regression activation map technique. Conclusion Results showed that our proposed method can estimate bone age by lateral cephalogram with sufficiently high accuracy.
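The "extracting the region of interest from the segmented image" step above can be sketched as a padded bounding-box crop around the segmentation mask. The function name and padding value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def crop_roi(image, mask, pad=8):
    """Bounding-box crop around the segmented cervical vertebrae (plus a
    small assumed margin) to feed the downstream regression model."""
    ys, xs = np.nonzero(mask)            # pixel coordinates of the segment
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, image.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, image.shape[1])
    return image[y0:y1, x0:x1]
```

In a two-stage pipeline like the one described (DeepLabv3+ segmentation, then Inception-ResNet-v2 regression), cropping to the mask keeps the regression network focused on the vertebrae rather than the whole cephalogram.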
Collapse
Affiliation(s)
- Hyejun Seo
- Department of Pediatric Dentistry, School of Dentistry, Pusan National University, Yangsan, South Korea; Department of Dentistry, Ulsan University Hospital, Ulsan, South Korea
| | - JaeJoon Hwang
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Yangsan, South Korea; Dental and Life Science Institute & Dental Research Institute, School of Dentistry, Pusan National University, Yangsan, South Korea
| | - Yun-Hoa Jung
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Yangsan, South Korea; Dental and Life Science Institute & Dental Research Institute, School of Dentistry, Pusan National University, Yangsan, South Korea
| | - Eungyung Lee
- Department of Pediatric Dentistry, School of Dentistry, Pusan National University, Yangsan, South Korea; Dental and Life Science Institute & Dental Research Institute, School of Dentistry, Pusan National University, Yangsan, South Korea
| | - Ok Hyung Nam
- Department of Pediatric Dentistry, School of Dentistry, Kyung Hee University, Seoul, South Korea. Corresponding author: Department of Pediatric Dentistry, Kyung Hee University School of Dentistry, 26 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, South Korea.
| | - Jonghyun Shin
- Department of Pediatric Dentistry, School of Dentistry, Pusan National University, Yangsan, South Korea; Dental and Life Science Institute & Dental Research Institute, School of Dentistry, Pusan National University, Yangsan, South Korea. Corresponding author: Department of Pediatric Dentistry, School of Dentistry, Pusan National University, Geumo-ro 20, Mulgeum-eup, Yangsan-si, 50612, South Korea.
| |
Collapse
|
75
|
Zhou Z, Gao Y, Zhang W, Bo K, Zhang N, Wang H, Wang R, Du Z, Firmin D, Yang G, Zhang H, Xu L. Artificial intelligence-based full aortic CT angiography imaging with ultra-low-dose contrast medium: a preliminary study. Eur Radiol 2023; 33:678-689. [PMID: 35788754 DOI: 10.1007/s00330-022-08975-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 05/16/2022] [Accepted: 06/20/2022] [Indexed: 11/29/2022]
Abstract
OBJECTIVES To further reduce the contrast medium (CM) dose of full aortic CT angiography (ACTA) imaging using the augmented cycle-consistent adversarial framework (Au-CycleGAN) algorithm. METHODS We prospectively enrolled 150 consecutive patients with suspected aortic disease. All received ACTA scans with an ultra-low-dose CM (ULDCM) protocol and a low-dose CM (LDCM) protocol. These data were randomly assigned to training (n = 100) and validation (n = 50) datasets. The ULDCM images were reconstructed by the Au-CycleGAN algorithm, and the AI-based ULDCM images were then compared with the LDCM images in terms of image quality and diagnostic accuracy. RESULTS The mean image quality score at each location in the AI-based ULDCM group was higher than in the ULDCM group but slightly lower than in the LDCM group (all p < 0.05). All AI-based ULDCM images met the diagnostic requirements (score ≥ 3). Except for image noise, the AI-based ULDCM images had higher attenuation values than the ULDCM and LDCM images, as well as higher SNR and CNR, at all analyzed locations of the aorta (all p < 0.05). Similar results were seen in obese patients (BMI > 25, all p < 0.05). Using the findings of the LDCM images as the reference, the AI-based ULDCM images showed good diagnostic parameters and no significant differences in any of the analyzed aortic disease diagnoses (all kappa values > 0.80, p < 0.05). CONCLUSIONS The required dose of CM for full ACTA imaging can be reduced to one-third of the LDCM protocol's CM dose while maintaining image quality and diagnostic accuracy using the Au-CycleGAN algorithm. KEY POINTS • The required dose of contrast medium (CM) for full ACTA imaging can be reduced to one-third of the CM dose of the low-dose contrast medium (LDCM) protocol using the Au-CycleGAN algorithm. • Except for image noise, the AI-based ultra-low-dose contrast medium (ULDCM) images had better quantitative image-quality parameters than the ULDCM and LDCM images.
• No significant diagnostic differences were noted between the AI-based ULDCM and LDCM images regarding all the analyzed aortic disease diagnoses.
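The Au-CycleGAN implementation itself is not given in this listing; as a minimal sketch of the cycle-consistency objective that CycleGAN-style models such as this one optimize (function name and image shapes are illustrative assumptions):

```python
import numpy as np

def cycle_consistency_loss(real: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean absolute (L1) difference between an input image and its
    round-trip reconstruction G_BA(G_AB(real)): the core CycleGAN term."""
    return float(np.mean(np.abs(real - reconstructed)))

# With identity generators the round trip is perfect and the loss is zero:
img = np.random.default_rng(0).random((64, 64))
loss = cycle_consistency_loss(img, img)
```

In training, this term is minimized alongside the adversarial losses so that translated images (here, ULDCM enhanced toward LDCM appearance) stay anatomically faithful to their inputs.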
Collapse
Affiliation(s)
- Zhen Zhou
- Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, No. 2, Anzhen Road, Chaoyang District, Beijing, 100029, China
| | - Yifeng Gao
- Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, No. 2, Anzhen Road, Chaoyang District, Beijing, 100029, China
| | - Weiwei Zhang
- School of Biomedical Engineering, Sun Yat-Sen University, Guangzhou, China
| | - Kairui Bo
- Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, No. 2, Anzhen Road, Chaoyang District, Beijing, 100029, China
| | - Nan Zhang
- Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, No. 2, Anzhen Road, Chaoyang District, Beijing, 100029, China
| | - Hui Wang
- Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, No. 2, Anzhen Road, Chaoyang District, Beijing, 100029, China
| | - Rui Wang
- Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, No. 2, Anzhen Road, Chaoyang District, Beijing, 100029, China
| | - Zhiqiang Du
- Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, No. 2, Anzhen Road, Chaoyang District, Beijing, 100029, China
| | - David Firmin
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK.,National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK
| | - Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK.,National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK
| | - Heye Zhang
- School of Biomedical Engineering, Sun Yat-Sen University, Guangzhou, China
| | - Lei Xu
- Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, No. 2, Anzhen Road, Chaoyang District, Beijing, 100029, China.
| |
Collapse
|
76
|
Wang S, Xu X, Du H, Chen Y, Mei W. Attention feature fusion methodology with additional constraint for ovarian lesion diagnosis on magnetic resonance images. Med Phys 2023; 50:297-310. [PMID: 35975618 DOI: 10.1002/mp.15937] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Revised: 06/25/2022] [Accepted: 07/24/2022] [Indexed: 01/25/2023] Open
Abstract
PURPOSE It is challenging for radiologists and gynecologists to identify the type of an ovarian lesion by reading magnetic resonance (MR) images. Recently developed convolutional neural networks (CNNs) have made great progress in computer vision, but their architectures still need modification for processing medical images. This study aims to improve the feature extraction capability of CNNs, thus promoting diagnostic performance in discriminating between benign and malignant ovarian lesions. METHODS We introduce a feature fusion architecture and insert attention models into the neural network. The features extracted from different middle layers are integrated with reoptimized spatial and channel weights. We add a loss function to constrain the additional probability vector generated from the integrated features, thus guiding the middle layers to emphasize useful information. We analyzed 159 lesions imaged by dynamic contrast-enhanced MR imaging (DCE-MRI), including 73 benign lesions and 86 malignant lesions. Senior radiologists selected and labeled the tumor regions based on the pathology reports. The tumor regions were then cropped into 7494 non-overlapping image patches for training and testing. The type of a single tumor was determined by the average probability score of the image patches belonging to it. RESULTS We implemented fivefold cross-validation to characterize our proposed method and report the distribution of performance metrics. For all the test image patches, the average accuracy of our method is 70.5% with an average area under the curve (AUC) of 0.785, versus 69.4% and 0.773 for the baseline; for the diagnosis of single tumors, our model achieved an average accuracy of 82.4% and an average AUC of 0.916, better than the baseline (81.8% and 0.899). Moreover, we evaluated the performance of our proposed method using different CNN backbones and different attention mechanisms.
CONCLUSIONS The texture features extracted from different middle layers are crucial for ovarian lesion diagnosis. Our proposed method can enhance the feature extraction capabilities of different layers of the network, thereby improving diagnostic performance.
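The patch-to-tumor aggregation described above (a tumor's type is decided by the average probability score of its patches) can be sketched as follows; the function name and decision threshold are illustrative assumptions, not the paper's code:

```python
import numpy as np

def classify_tumor(patch_probs: np.ndarray, threshold: float = 0.5) -> str:
    """Aggregate per-patch malignancy probabilities into one tumor-level
    call by averaging, as the abstract describes."""
    return "malignant" if float(np.mean(patch_probs)) >= threshold else "benign"

# Four patches from one tumor; mean probability 0.675:
call = classify_tumor(np.array([0.9, 0.8, 0.3, 0.7]))
```

Averaging patch scores smooths out individual misclassified patches, which is consistent with the tumor-level accuracy (82.4%) exceeding the patch-level accuracy (70.5%).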
Collapse
Affiliation(s)
- Shuai Wang
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
| | - Xiaojuan Xu
- Department of Diagnostic Imaging, National Cancer Center, National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Huiqian Du
- School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
| | - Yan Chen
- Department of Diagnostic Imaging, National Cancer Center, National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Wenbo Mei
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
| |
Collapse
|
77
|
Li Z, Li X, Jin Z, Shen L. Learning from pseudo-lesion: a self-supervised framework for COVID-19 diagnosis. Neural Comput Appl 2023; 35:10717-10731. [PMID: 37155461 PMCID: PMC10038387 DOI: 10.1007/s00521-023-08259-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Accepted: 01/06/2023] [Indexed: 05/10/2023]
Abstract
The Coronavirus disease 2019 (COVID-19) has spread rapidly all over the world since its first report in December 2019, and thoracic computed tomography (CT) has become one of the main tools for its diagnosis. In recent years, deep learning-based approaches have shown impressive performance in myriad image recognition tasks; however, they usually require a large amount of annotated data for training. Inspired by ground-glass opacity, a common finding in COVID-19 patients' CT scans, we propose in this paper a novel self-supervised pretraining method based on pseudo-lesion generation and restoration for COVID-19 diagnosis. We used Perlin noise, a gradient-noise-based mathematical model, to generate lesion-like patterns, which were then randomly pasted onto the lung regions of normal CT images to generate pseudo-COVID-19 images. The pairs of normal and pseudo-COVID-19 images were then used to train an encoder-decoder architecture-based U-Net for image restoration, which does not require any labeled data. The pretrained encoder was then fine-tuned using labeled data for the COVID-19 diagnosis task. Two public COVID-19 diagnosis datasets of CT images were employed for evaluation. Comprehensive experimental results demonstrate that the proposed self-supervised learning approach extracts better feature representations for COVID-19 diagnosis, and the accuracy of the proposed method outperformed a supervised model pretrained on large-scale images by 6.57% and 3.03% on the SARS-CoV-2 dataset and the Jinan COVID-19 dataset, respectively.
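As a simplified stand-in for the pseudo-lesion pasting step described above (the sketch uses a uniform-noise blob rather than true Perlin noise, and all names, shapes, and intensity ranges are illustrative assumptions):

```python
import numpy as np

def paste_pseudo_lesion(ct, center, radius, rng):
    """Paste a soft, GGO-like noise blob onto a normal CT slice
    (uniform noise here; the paper uses Perlin noise)."""
    out = ct.copy()
    h, w = ct.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    noise = rng.uniform(0.3, 0.6, size=ct.shape)  # hazy, low-intensity values
    out[mask] = np.maximum(out[mask], noise[mask])
    return out

rng = np.random.default_rng(42)
normal = np.zeros((32, 32))          # stand-in for a normal lung region
pseudo = paste_pseudo_lesion(normal, center=(16, 16), radius=5, rng=rng)
```

The (normal, pseudo-lesion) pairs would then supervise an image-restoration network, giving a pretraining signal without any manual labels.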
Collapse
Affiliation(s)
- Zhongliang Li
- AI Research Center for Medical Image Analysis and Diagnosis, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060 Guangdong China
| | - Xuechen Li
- National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University, Shenzhen, 518060 Guangdong China
| | - Zhihao Jin
- AI Research Center for Medical Image Analysis and Diagnosis, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060 Guangdong China
| | - Linlin Shen
- AI Research Center for Medical Image Analysis and Diagnosis, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060 Guangdong China
| |
Collapse
|
78
|
Chen Z, Wang Y, Zhang H, Yin H, Hu C, Huang Z, Tan Q, Song B, Deng L, Xia Q. Deep Learning Models for Severity Prediction of Acute Pancreatitis in the Early Phase From Abdominal Nonenhanced Computed Tomography Images. Pancreas 2023; 52:e45-e53. [PMID: 37378899 DOI: 10.1097/mpa.0000000000002216] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 06/29/2023]
Abstract
OBJECTIVES To develop and validate deep learning (DL) models for predicting the severity of acute pancreatitis (AP) using abdominal nonenhanced computed tomography (CT) images. METHODS The study included 978 AP patients admitted within 72 hours of onset who underwent abdominal CT on admission. The image DL model was built with convolutional neural networks; the combined model was developed by integrating CT images and clinical markers. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). RESULTS The clinical, image DL, and combined DL models were developed in 783 AP patients and validated in 195 AP patients. The combined model achieved predictive accuracies of 90.0%, 32.4%, and 74.2% for mild, moderately severe, and severe AP, respectively. It outperformed the clinical and image DL models, with an AUC of 0.820 (95% confidence interval, 0.759-0.871), a sensitivity of 84.76%, and a specificity of 66.67% for predicting mild AP, and an AUC of 0.920 (95% confidence interval, 0.873-0.954), a sensitivity of 90.32%, and a specificity of 82.93% for predicting severe AP. CONCLUSIONS DL technology makes nonenhanced CT images a novel tool for predicting the severity of AP.
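The abstract does not specify how the CT images and clinical markers are integrated; one common way to build such a "combined" model is late fusion, sketched here under that assumption (feature dimensions and weights are invented for illustration only):

```python
import numpy as np

def combined_score(image_feats, clinical, w, b=0.0):
    """Late fusion: concatenate CNN image features with clinical markers,
    then apply a single logistic unit to get a severity probability."""
    x = np.concatenate([image_feats, clinical])
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# Hypothetical 3 image features + 2 clinical markers:
p = combined_score(np.array([0.2, 0.7, 0.1]), np.array([1.0, 0.0]),
                   w=np.array([0.5, 1.0, -0.3, 0.8, 0.2]))
```

In practice the weights would be learned jointly with (or on top of) the CNN, and the logistic unit could be replaced by a small multilayer head.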
Collapse
Affiliation(s)
- Zhiyao Chen
- From the Pancreatitis Center, Center of Integrated Traditional Chinese and Western Medicine, Sichuan Provincial Pancreatitis Centre, West China Hospital, Sichuan University, Chengdu, China
| | - Yi Wang
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
| | - Huiling Zhang
- Infervision Medical Technology Co., Ltd, Beijing, China
| | - Hongkun Yin
- Infervision Medical Technology Co., Ltd, Beijing, China
| | - Cheng Hu
- From the Pancreatitis Center, Center of Integrated Traditional Chinese and Western Medicine, Sichuan Provincial Pancreatitis Centre, West China Hospital, Sichuan University, Chengdu, China
| | - Zixing Huang
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
| | - Qingyuan Tan
- From the Pancreatitis Center, Center of Integrated Traditional Chinese and Western Medicine, Sichuan Provincial Pancreatitis Centre, West China Hospital, Sichuan University, Chengdu, China
| | | | - Lihui Deng
- From the Pancreatitis Center, Center of Integrated Traditional Chinese and Western Medicine, Sichuan Provincial Pancreatitis Centre, West China Hospital, Sichuan University, Chengdu, China
| | - Qing Xia
- From the Pancreatitis Center, Center of Integrated Traditional Chinese and Western Medicine, Sichuan Provincial Pancreatitis Centre, West China Hospital, Sichuan University, Chengdu, China
| |
Collapse
|
79
|
Asteris PG, Kokoris S, Gavriilaki E, Tsoukalas MZ, Houpas P, Paneta M, Koutzas A, Argyropoulos T, Alkayem NF, Armaghani DJ, Bardhan A, Cavaleri L, Cao M, Mansouri I, Mohammed AS, Samui P, Gerber G, Boumpas DT, Tsantes A, Terpos E, Dimopoulos MA. Early prediction of COVID-19 outcome using artificial intelligence techniques and only five laboratory indices. Clin Immunol 2023; 246:109218. [PMID: 36586431 PMCID: PMC9797218 DOI: 10.1016/j.clim.2022.109218] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 10/25/2022] [Accepted: 12/21/2022] [Indexed: 12/29/2022]
Abstract
We aimed to develop a prediction model for intensive care unit (ICU) hospitalization of Coronavirus disease-19 (COVID-19) patients using artificial neural networks (ANN). We assessed 25 laboratory parameters at first from 248 consecutive adult COVID-19 patients for database creation, training, and development of ANN models. We developed a new alpha-index to assess the association of each parameter with outcome. We used 166 records for training of computational simulations (training), 41 for documentation of computational simulations (validation), and 41 for reliability check of computational simulations (testing). The first five laboratory indices ranked by importance were Neutrophil-to-lymphocyte ratio, Lactate Dehydrogenase, Fibrinogen, Albumin, and D-Dimers. The best ANN based on these indices achieved accuracy 95.97%, precision 90.63%, sensitivity 93.55%, and F1-score 92.06%, verified in the validation cohort. Our preliminary findings reveal for the first time an ANN to predict ICU hospitalization accurately and early, using only 5 easily accessible laboratory indices.
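The accuracy, precision, sensitivity, and F1-score reported above all follow from a model's confusion-matrix counts; a minimal sketch of the formulas (the counts below are hypothetical, chosen only to illustrate the computation, and are not taken from the paper):

```python
def ann_metrics(tp, fp, fn, tn):
    """Accuracy, precision, sensitivity (recall), and F1-score
    from the four confusion-matrix counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, precision, sensitivity, f1

# Hypothetical counts that happen to reproduce the reported figures
# to two decimal places:
acc, prec, sens, f1 = ann_metrics(tp=29, fp=3, fn=2, tn=90)
```

F1 is the harmonic mean of precision and sensitivity, so it always lies between the two, consistent with the 92.06% reported between 90.63% and 93.55%.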
Collapse
Affiliation(s)
- Panagiotis G. Asteris
- Computational Mechanics Laboratory, School of Pedagogical and Technological Education, Athens, Greece
| | - Styliani Kokoris
- Laboratory of Hematology and Hospital Blood Transfusion Department, University General Hospital "Attikon", National and Kapodistrian University of Athens, Medical School, Greece.
| | - Eleni Gavriilaki
- Hematology Department – BMT Unit, G Papanicolaou Hospital, Thessaloniki, Greece
| | - Markos Z. Tsoukalas
- Computational Mechanics Laboratory, School of Pedagogical and Technological Education, Athens, Greece
| | - Panagiotis Houpas
- Computational Mechanics Laboratory, School of Pedagogical and Technological Education, Athens, Greece
| | - Maria Paneta
- Fourth Department of Internal Medicine, University General Hospital "Attikon", National and Kapodistrian University of Athens, Medical School, Greece
| | | | | | - Nizar Faisal Alkayem
- Jiangxi Province Key Laboratory of Environmental Geotechnical Engineering and Hazards Control, Jiangxi University of Science and Technology, Ganzhou 341000, China
| | - Danial J. Armaghani
- Department of Urban Planning, Engineering Networks and Systems, Institute of Architecture and Construction, South Ural State University, 76, Lenin Prospect, Chelyabinsk 454080, Russian Federation
| | - Abidhan Bardhan
- Civil Engineering Department, National Institute of Technology Patna, Bihar, India
| | - Liborio Cavaleri
- Department of Civil, Environmental, Aerospace and Materials Engineering, University of Palermo, Palermo, Italy
| | - Maosen Cao
- Jiangxi Province Key Laboratory of Environmental Geotechnical Engineering and Hazards Control, Jiangxi University of Science and Technology, Ganzhou 341000, China
| | - Iman Mansouri
- Department of Civil and Environmental Engineering, Princeton University Princeton, Princeton, NJ 08544, USA
| | - Ahmed Salih Mohammed
- Engineering Department, American University of Iraq, Sulaimani, Kurdistan-Region, Iraq
| | - Pijush Samui
- Civil Engineering Department, National Institute of Technology Patna, Bihar, India
| | - Gloria Gerber
- Hematology Division, Johns Hopkins University, Baltimore, USA
| | - Dimitrios T. Boumpas
- "Attikon" University Hospital of Athens, Rheumatology and Clinical Immunology, Medical School, National and Kapodistrian University of Athens, Athens, Attica, Greece
| | - Argyrios Tsantes
- Laboratory of Hematology and Hospital Blood Transfusion Department, University General Hospital "Attikon", National and Kapodistrian University of Athens, Medical School, Greece
| | - Evangelos Terpos
- Department of Clinical Therapeutics, Medical School, Faculty of Medicine, National Kapodistrian University of Athens, Athens, Greece
| | - Meletios A. Dimopoulos
- Department of Clinical Therapeutics, Medical School, Faculty of Medicine, National Kapodistrian University of Athens, Athens, Greece
| |
Collapse
|
80
|
Bhatele KR, Jha A, Tiwari D, Bhatele M, Sharma S, Mithora MR, Singhal S. COVID-19 Detection: A Systematic Review of Machine and Deep Learning-Based Approaches Utilizing Chest X-Rays and CT Scans. Cognit Comput 2022; 16:1-38. [PMID: 36593991 PMCID: PMC9797382 DOI: 10.1007/s12559-022-10076-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Accepted: 11/15/2022] [Indexed: 12/30/2022]
Abstract
This review study presents the state-of-the-art machine and deep learning-based COVID-19 detection approaches utilizing chest X-rays or computed tomography (CT) scans. It aims to systematically scrutinize, and to discuss the challenges and limitations of, the existing state-of-the-art research published in this domain from March 2020 to August 2021. It also presents a comparative analysis of the performance of four widely used deep transfer learning (DTL) models (VGG16, VGG19, ResNet50, and DenseNet) on a local COVID-19 CT scan dataset and a global chest X-ray dataset. A brief illustration of the major chest X-ray and CT scan datasets of COVID-19 patients utilized in state-of-the-art detection approaches is also presented for future research. Research databases including IEEE Xplore, PubMed, and Web of Science were searched exhaustively for this survey. For the comparative analysis, the four DTL models were fine-tuned and trained on the augmented local CT scan and global chest X-ray datasets to observe their performance. The review summarizes major findings (AI technique employed, type of classification performed, datasets used, results in terms of accuracy, specificity, sensitivity, F1-score, etc.), along with limitations and future work for COVID-19 detection, in a tabular manner for conciseness. The performance analysis affirms that the Visual Geometry Group 19 (VGG19) model delivered the best performance on both the local COVID-19 CT scan dataset and the global chest X-ray dataset.
Collapse
Affiliation(s)
| | - Anand Jha
- RJIT BSF Academy, Tekanpur, Gwalior India
| | | | | | | | | | | |
Collapse
|
81
|
Kumar R, Singh D, Srinivasan K, Hu YC. AI-Powered Blockchain Technology for Public Health: A Contemporary Review, Open Challenges, and Future Research Directions. Healthcare (Basel) 2022; 11:healthcare11010081. [PMID: 36611541 PMCID: PMC9819078 DOI: 10.3390/healthcare11010081] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 12/14/2022] [Accepted: 12/20/2022] [Indexed: 12/29/2022] Open
Abstract
Blockchain technology has grown at a substantial rate over the last decade. Introduced as the backbone of cryptocurrencies such as Bitcoin, it soon found application in other fields because of its security and privacy features. Blockchain has been used in the healthcare industry for several purposes, including secure data logging, transactions, and maintenance using smart contracts. Considerable work has been carried out to make blockchain smart by integrating Artificial Intelligence (AI), combining the best features of the two technologies. This review covers the conceptual and functional aspects of, and innovations in, blockchain and artificial intelligence, lays down a strong foundational understanding of each domain individually, and rigorously discusses the various ways AI has been used alongside blockchain in the healthcare industry, including areas of great importance such as electronic health record (EHR) management, remote patient monitoring and telemedicine, genomics, drug research and testing, specialized imaging, and outbreak prediction. It compiles various supervised and unsupervised machine learning algorithms, deep learning algorithms such as convolutional and recurrent neural networks, and numerous platforms currently used in AI-powered blockchain systems, and discusses their applications. The review also presents the challenges these systems inherit from the AI and blockchain algorithms at their core, and the scope of future work.
Collapse
Affiliation(s)
- Ritik Kumar
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
| | - Divyangi Singh
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
| | - Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
| | - Yuh-Chung Hu
- Department of Mechanical and Electromechanical Engineering, National ILan University, Yilan 26047, Taiwan
| |
Collapse
|
82
|
Liu K, Ning X, Liu S. Medical Image Classification Based on Semi-Supervised Generative Adversarial Network and Pseudo-Labelling. SENSORS (BASEL, SWITZERLAND) 2022; 22:9967. [PMID: 36560335 PMCID: PMC9783368 DOI: 10.3390/s22249967] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 12/08/2022] [Accepted: 12/15/2022] [Indexed: 06/17/2023]
Abstract
Deep learning has substantially improved the state of the art in object detection and image classification. It usually requires large-scale labelled datasets for training; however, owing to restrictions on medical data sharing and accessibility and the high cost of labelling, the application of deep learning to medical image classification has been dramatically hindered. In this study, we propose a novel method that leverages semi-supervised adversarial learning and pseudo-labelling to incorporate unlabelled images into model learning. We validate the proposed method on two public databases: ChestX-ray14 for lung disease classification and BreakHis for breast cancer histopathological image diagnosis. The results show that our method achieved an accuracy of 93.15% while using only 30% of the labelled samples, comparable to the state-of-the-art accuracy for chest X-ray classification; it also outperformed current methods in multi-class breast cancer histopathological image classification with a high accuracy of 96.87%.
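A minimal sketch of confidence-threshold pseudo-labelling, the general technique named above (the threshold, shapes, and function name are illustrative; this is not the paper's exact procedure, which additionally uses a semi-supervised GAN):

```python
import numpy as np

def pseudo_label(probs: np.ndarray, threshold: float = 0.95):
    """Keep only unlabelled samples the model is confident about and
    assign each its predicted class as a pseudo-label."""
    confident = probs.max(axis=1) >= threshold
    return np.flatnonzero(confident), probs.argmax(axis=1)[confident]

probs = np.array([[0.98, 0.02],   # confident: pseudo-labelled class 0
                  [0.60, 0.40],   # uncertain: discarded
                  [0.01, 0.99]])  # confident: pseudo-labelled class 1
kept_idx, labels = pseudo_label(probs)
```

The pseudo-labelled samples are then folded into the labelled training set, and the cycle of prediction and relabelling repeats as the model improves.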
Collapse
Affiliation(s)
- Kun Liu
- School of Information Engineering, Shanghai Maritime University, Shanghai 200135, China
| | - Xiaolin Ning
- School of Information Engineering, Shanghai Maritime University, Shanghai 200135, China
| | - Sidong Liu
- Australia Institute of Health Innovation, Macquarie University, Sydney 2113, Australia
| |
Collapse
|
83
|
Combined CNN and Pixel Feature Image for Fatty Liver Ultrasound Image Classification. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:9385734. [PMID: 36561737 PMCID: PMC9767727 DOI: 10.1155/2022/9385734] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Revised: 11/17/2022] [Accepted: 11/19/2022] [Indexed: 12/15/2022]
Abstract
Recent revolutionary results of deep learning indicate the advent of reliable classifiers for difficult tasks in medical diagnosis. Fatty liver is a common liver disease and one of the major challenges in disease prevention; it causes many complications, which need to be found and treated in time. Automatic diagnosis of fatty liver from ultrasound images faces two problems: limited data and the similarity of pathological images across severity grades. This paper therefore proposes a classification method that combines a convolutional neural network with differential image patches based on pixel-level features for fatty liver ultrasound images. It can automatically distinguish ultrasound images of normal liver, low-grade fatty liver, moderate-grade fatty liver, and severe fatty liver. The proposed method not only mitigates the limited-data problem but also improves classification accuracy. The experimental results show that our method achieves better accuracy than other deep learning methods and traditional classification methods.
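Patch-based methods like this one start by splitting each image into non-overlapping patches; a generic sketch of that step (not the paper's own code, and the differential pixel-feature computation is omitted):

```python
import numpy as np

def extract_patches(image: np.ndarray, size: int) -> np.ndarray:
    """Split a 2-D image into non-overlapping size x size patches,
    discarding partial border patches."""
    h, w = image.shape
    rows, cols = h // size, w // size
    cropped = image[: rows * size, : cols * size]
    return (cropped.reshape(rows, size, cols, size)
                   .swapaxes(1, 2)
                   .reshape(-1, size, size))

patches = extract_patches(np.zeros((100, 130)), size=32)  # 3 x 4 full patches
```

Patching multiplies the number of training samples per image, which is one standard remedy for the small-dataset problem the abstract highlights.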
Collapse
|
84
|
Khanna NN, Maindarkar MA, Viswanathan V, Fernandes JFE, Paul S, Bhagawati M, Ahluwalia P, Ruzsa Z, Sharma A, Kolluri R, Singh IM, Laird JR, Fatemi M, Alizad A, Saba L, Agarwal V, Sharma A, Teji JS, Al-Maini M, Rathore V, Naidu S, Liblik K, Johri AM, Turk M, Mohanty L, Sobel DW, Miner M, Viskovic K, Tsoulfas G, Protogerou AD, Kitas GD, Fouda MM, Chaturvedi S, Kalra MK, Suri JS. Economics of Artificial Intelligence in Healthcare: Diagnosis vs. Treatment. Healthcare (Basel) 2022; 10:2493. [PMID: 36554017 PMCID: PMC9777836 DOI: 10.3390/healthcare10122493] [Citation(s) in RCA: 76] [Impact Index Per Article: 25.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 12/03/2022] [Accepted: 12/07/2022] [Indexed: 12/14/2022] Open
Abstract
Motivation: The price of medical treatment continues to rise due to (i) an increasing population; (ii) an aging population; (iii) disease prevalence; (iv) a rise in the frequency with which patients utilize health care services; and (v) rising prices. Objective: Artificial Intelligence (AI) is already well-known for its superiority in various healthcare applications, including the segmentation of lesions in images, speech recognition, smartphone personal assistants, navigation, ride-sharing apps, and many more. Our study is based on two hypotheses: (i) AI offers more economical solutions than conventional methods; (ii) AI-based treatment offers stronger economics than AI-based diagnosis. This novel study aims to evaluate AI technology in the context of healthcare costs, namely in the areas of diagnosis and treatment, and to compare it with traditional, non-AI-based approaches. Methodology: PRISMA was used to select the 200 best studies on AI in healthcare with a primary focus on cost reduction, especially in diagnosis and treatment. We defined the diagnosis and treatment architectures, investigated their characteristics, and categorized the roles AI plays in the diagnostic and therapeutic paradigms. We experimented with various combinations of assumptions by integrating AI and comparing the result against conventional costs. Lastly, we dwell on four powerful future concepts for AI: pruning, bias reduction, explainability, and regulatory approval of AI systems. Conclusions: The model shows tremendous cost savings from using AI tools in diagnosis and treatment. The economics of AI can be improved by incorporating pruning, reducing AI bias, improving explainability, and obtaining regulatory approvals.
Collapse
Affiliation(s)
- Narendra N. Khanna
- Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110001, India
| | - Mahesh A. Maindarkar
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Department of Biomedical Engineering, North Eastern Hill University, Shillong 793022, India
| | | | | | - Sudip Paul
- Department of Biomedical Engineering, North Eastern Hill University, Shillong 793022, India
| | - Mrinalini Bhagawati
- Department of Biomedical Engineering, North Eastern Hill University, Shillong 793022, India
| | - Puneet Ahluwalia
- Max Institute of Cancer Care, Max Super Specialty Hospital, New Delhi 110017, India
| | - Zoltan Ruzsa
- Invasive Cardiology Division, Faculty of Medicine, University of Szeged, 6720 Szeged, Hungary
| | - Aditya Sharma
- Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA 22904, USA
| | - Raghu Kolluri
- Ohio Health Heart and Vascular, Columbus, OH 43214, USA
| | - Inder M. Singh
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
| | - John R. Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St. Helena, CA 94574, USA
| | - Mostafa Fatemi
- Department of Physiology & Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
| | - Azra Alizad
- Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
| | - Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria, 40138 Cagliari, Italy
| | - Vikas Agarwal
- Department of Immunology, SGPGIMS, Lucknow 226014, India
| | - Aman Sharma
- Department of Immunology, SGPGIMS, Lucknow 226014, India
| | - Jagjit S. Teji
- Ann and Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL 60611, USA
| | - Mustafa Al-Maini
- Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON L4Z 4C4, Canada
| | | | - Subbaram Naidu
- Electrical Engineering Department, University of Minnesota, Duluth, MN 55812, USA
| | - Kiera Liblik
- Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada
| | - Amer M. Johri
- Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada
| | - Monika Turk
- The Hanse-Wissenschaftskolleg Institute for Advanced Study, 27753 Delmenhorst, Germany
| | - Lopamudra Mohanty
- Department of Computer Science, ABES Engineering College, Ghaziabad 201009, India
| | - David W. Sobel
- Rheumatology Unit, National Kapodistrian University of Athens, 15772 Athens, Greece
| | - Martin Miner
- Men’s Health Centre, Miriam Hospital Providence, Providence, RI 02906, USA
| | - Klaudija Viskovic
- Department of Radiology and Ultrasound, University Hospital for Infectious Diseases, 10000 Zagreb, Croatia
| | - George Tsoulfas
- Department of Surgery, Aristoteleion University of Thessaloniki, 54124 Thessaloniki, Greece
| | - Athanasios D. Protogerou
- Cardiovascular Prevention and Research Unit, Department of Pathophysiology, National & Kapodistrian University of Athens, 15772 Athens, Greece
| | - George D. Kitas
- Academic Affairs, Dudley Group NHS Foundation Trust, Dudley DY1 2HQ, UK
- Arthritis Research UK Epidemiology Unit, Manchester University, Manchester M13 9PL, UK
| | - Mostafa M. Fouda
- Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
| | - Seemant Chaturvedi
- Department of Neurology & Stroke Program, University of Maryland School of Medicine, Baltimore, MD 21201, USA
| | | | - Jasjit S. Suri
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
| |
Collapse
|
85
|
Choe J, Lee SM, Hwang HJ, Lee SM, Yun J, Kim N, Seo JB. Artificial Intelligence in Lung Imaging. Semin Respir Crit Care Med 2022; 43:946-960. [PMID: 36174647 DOI: 10.1055/s-0042-1755571] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Recently, interest and advances in artificial intelligence (AI) including deep learning for medical images have surged. As imaging plays a major role in the assessment of pulmonary diseases, various AI algorithms have been developed for chest imaging. Some of these have been approved by governments and are now commercially available in the marketplace. In the field of chest radiology, there are various tasks and purposes that are suitable for AI: initial evaluation/triage of certain diseases, detection and diagnosis, quantitative assessment of disease severity and monitoring, and prediction for decision support. While AI is a powerful technology that can be applied to medical imaging and is expected to improve our current clinical practice, some obstacles must be addressed for the successful implementation of AI in workflows. Understanding and becoming familiar with the current status and potential clinical applications of AI in chest imaging, as well as remaining challenges, would be essential for radiologists and clinicians in the era of AI. This review introduces the potential clinical applications of AI in chest imaging and also discusses the challenges for the implementation of AI in daily clinical practice and future directions in chest imaging.
Collapse
Affiliation(s)
- Jooae Choe
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
| | - Sang Min Lee
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
| | - Hye Jeon Hwang
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
| | - Sang Min Lee
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
| | - Jihye Yun
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
| | - Namkug Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea.,Department of Convergence Medicine, Biomedical Engineering Research Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
| | - Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
| |
Collapse
|
86
|
Yao X, Wang X, Wang SH, Zhang YD. A comprehensive survey on convolutional neural network in medical image analysis. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:41361-41405. [DOI: 10.1007/s11042-020-09634-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Revised: 07/30/2020] [Accepted: 08/13/2020] [Indexed: 08/30/2023]
|
87
|
Liu F, Demosthenes P. Real-world data: a brief review of the methods, applications, challenges and opportunities. BMC Med Res Methodol 2022; 22:287. [PMID: 36335315 PMCID: PMC9636688 DOI: 10.1186/s12874-022-01768-6] [Citation(s) in RCA: 158] [Impact Index Per Article: 52.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Accepted: 10/22/2022] [Indexed: 11/07/2022] Open
Abstract
Background
The increased adoption of the internet, social media, wearable devices, e-health services, and other technology-driven services in medicine and healthcare has led to the rapid generation of various types of digital data, providing a valuable data source beyond the confines of traditional clinical trials, epidemiological studies, and lab-based experiments.
Methods
We provide a brief overview of the types and sources of real-world data and the common models and approaches used to utilize and analyze real-world data. We discuss the challenges and opportunities of using real-world data for evidence-based decision making. This review does not aim to be comprehensive or to cover all aspects of the intriguing topic of real-world data (RWD), from both the research and practical perspectives, but serves as a primer and provides useful sources for readers who are interested in this topic.
Results and Conclusions
Real-world data hold great potential for generating real-world evidence for designing and conducting confirmatory trials and for answering questions that may not be addressed otherwise. The voluminosity and complexity of real-world data also call for the development of more appropriate, sophisticated, and innovative data processing and analysis techniques, while maintaining scientific rigor in research findings and attention to data ethics, to harness the power of real-world data.
Collapse
|
88
|
Addo D, Zhou S, Jackson JK, Nneji GU, Monday HN, Sarpong K, Patamia RA, Ekong F, Owusu-Agyei CA. EVAE-Net: An Ensemble Variational Autoencoder Deep Learning Network for COVID-19 Classification Based on Chest X-ray Images. Diagnostics (Basel) 2022; 12:2569. [PMID: 36359413 PMCID: PMC9689048 DOI: 10.3390/diagnostics12112569] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Revised: 10/13/2022] [Accepted: 10/18/2022] [Indexed: 09/08/2024] Open
Abstract
The COVID-19 pandemic has had a significant impact on many lives and the economies of many countries since late December 2019. Early detection with high accuracy is essential to help break the chain of transmission. Several radiological methodologies, such as CT scan and chest X-ray, have been employed in diagnosing and monitoring COVID-19 disease. Still, these methodologies are time-consuming and require trial and error. Machine learning techniques are currently being applied by several studies to deal with COVID-19. This study exploits the latent embeddings of variational autoencoders combined with ensemble techniques to propose three effective EVAE-Net models to detect COVID-19 disease. Two encoders are trained on chest X-ray images to generate two feature maps. The feature maps are concatenated and passed to either a combined or individual reparameterization phase to generate latent embeddings by sampling from a distribution. The latent embeddings are concatenated and passed to a classification head for classification. The COVID-19 Radiography Dataset from Kaggle is the source of chest X-ray images. The performances of the three models are evaluated. The proposed model shows satisfactory performance, with the best model achieving 99.19% and 98.66% accuracy on four classes and three classes, respectively.
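The ensemble pipeline described in this abstract — two encoders producing latent distributions, individual reparameterization, and concatenation of the sampled embeddings before a classification head — can be sketched in NumPy. The latent dimension, batch size, and helper name below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (the VAE reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical outputs of the two encoders for a batch of 4 chest X-rays:
# each encoder predicts a mean and log-variance over a 32-dim latent space.
mu1, log_var1 = rng.standard_normal((4, 32)), rng.standard_normal((4, 32))
mu2, log_var2 = rng.standard_normal((4, 32)), rng.standard_normal((4, 32))

# Individual reparameterization: sample a latent embedding from each encoder ...
z1 = reparameterize(mu1, log_var1, rng)
z2 = reparameterize(mu2, log_var2, rng)

# ... then concatenate the embeddings before passing them to the classification head.
z = np.concatenate([z1, z2], axis=1)  # shape (4, 64)
```

Sampling through `mu + sigma * eps` rather than drawing directly from the distribution is what keeps the latent layer differentiable during training.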
Collapse
Affiliation(s)
- Daniel Addo
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
| | - Shijie Zhou
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
| | - Jehoiada Kofi Jackson
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
| | - Grace Ugochi Nneji
- Department of Computing, Oxford Brookes College of Chengdu University of Technology, Chengdu 610059, China
| | - Happy Nkanta Monday
- Department of Computing, Oxford Brookes College of Chengdu University of Technology, Chengdu 610059, China
| | - Kwabena Sarpong
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
| | - Rutherford Agbeshi Patamia
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
| | - Favour Ekong
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
| | | |
Collapse
|
89
|
Liu Y, Wang H, Song K, Sun M, Shao Y, Xue S, Li L, Li Y, Cai H, Jiao Y, Sun N, Liu M, Zhang T. CroReLU: Cross-Crossing Space-Based Visual Activation Function for Lung Cancer Pathology Image Recognition. Cancers (Basel) 2022; 14:5181. [PMID: 36358598 PMCID: PMC9657127 DOI: 10.3390/cancers14215181] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2022] [Revised: 10/14/2022] [Accepted: 10/19/2022] [Indexed: 08/13/2023] Open
Abstract
Lung cancer is one of the most common malignant tumors in human beings. It is highly fatal, as its early symptoms are not obvious. In clinical medicine, physicians rely on the information provided by pathology tests as an important reference for the final diagnosis of many diseases. Therefore, pathology diagnosis is known as the gold standard for disease diagnosis. However, the complexity of the information contained in pathology images and the increase in the number of patients far outpace the number of pathologists, especially for the treatment of lung cancer in less developed countries. To address this problem, we propose a plug-and-play visual activation function (AF), CroReLU, based on a priori knowledge of pathology, which makes it possible to use deep learning models for precision medicine. To the best of our knowledge, this work is the first to optimize deep learning models for pathology image diagnosis from the perspective of AFs. By adopting a unique crossover window design for the activation layer of the neural network, CroReLU is equipped with the ability to model spatial information and capture histological morphological features of lung cancer such as papillary, micropapillary, and tubular alveoli. To test the effectiveness of this design, 776 lung cancer pathology images were collected as experimental data. When CroReLU was inserted into the SeNet network (SeNet_CroReLU), the diagnostic accuracy reached 98.33%, which was significantly better than that of common neural network models at this stage. The generalization ability of the proposed method was validated on the LC25000 dataset with completely different data distribution and recognition tasks in the face of practical clinical needs. The experimental results show that CroReLU has the ability to recognize inter- and intra-class differences in cancer pathology images, and that the recognition accuracy exceeds the extant research work on the complex design of network layers.
Collapse
Affiliation(s)
- Yunpeng Liu
- Department of Thoracic Surgery, The First Hospital of Jilin University, Changchun 130012, China
| | - Haoran Wang
- School of Instrument and Electrical Engineering, Jilin University, Changchun 130012, China
| | - Kaiwen Song
- School of Instrument and Electrical Engineering, Jilin University, Changchun 130012, China
| | - Mingyang Sun
- School of Instrument and Electrical Engineering, Jilin University, Changchun 130012, China
| | - Yanbin Shao
- School of Instrument and Electrical Engineering, Jilin University, Changchun 130012, China
| | - Songfeng Xue
- School of Instrument and Electrical Engineering, Jilin University, Changchun 130012, China
| | - Liyuan Li
- School of Instrument and Electrical Engineering, Jilin University, Changchun 130012, China
| | - Yuguang Li
- School of Instrument and Electrical Engineering, Jilin University, Changchun 130012, China
| | - Hongqiao Cai
- Department of Hepatobiliary and Pancreatic Surgery, The First Hospital, Jilin University, 71 Xinmin Street, Changchun 130021, China
| | - Yan Jiao
- Department of Hepatobiliary and Pancreatic Surgery, The First Hospital, Jilin University, 71 Xinmin Street, Changchun 130021, China
| | - Nao Sun
- Center for Reproductive Medicine and Center for Prenatal Diagnosis, The First Hospital of Jilin University, Changchun 130012, China
| | - Mingyang Liu
- School of Instrument and Electrical Engineering, Jilin University, Changchun 130012, China
| | - Tianyu Zhang
- School of Instrument and Electrical Engineering, Jilin University, Changchun 130012, China
| |
Collapse
|
90
|
Jiang L, Li M, Jiang H, Tao L, Yang W, Yuan H, He B. Development of an Artificial Intelligence Model for Analyzing the Relationship between Imaging Features and Glucocorticoid Sensitivity in Idiopathic Interstitial Pneumonia. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:13099. [PMID: 36293674 PMCID: PMC9602820 DOI: 10.3390/ijerph192013099] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 09/29/2022] [Accepted: 10/10/2022] [Indexed: 06/16/2023]
Abstract
High-resolution CT (HRCT) imaging features of idiopathic interstitial pneumonia (IIP) patients are related to glucocorticoid sensitivity. This study aimed to develop an artificial intelligence model to assess glucocorticoid efficacy according to the HRCT imaging features of IIP. The medical records and chest HRCT images of 150 patients with IIP were analyzed retrospectively. The U-net framework was used to create a model for recognizing different imaging features, including ground glass opacities, reticulations, honeycombing, and consolidations. Then, the area ratio of those imaging features was calculated automatically. Forty-five patients were treated with glucocorticoids, and according to the drug efficacy, they were divided into a glucocorticoid-sensitive group and a glucocorticoid-insensitive group. Models assessing the correlation between imaging features and glucocorticoid sensitivity were established using the k-nearest neighbor (KNN) algorithm. The total accuracy (ACC) and mean intersection over union (mIoU) of the U-net model were 0.9755 and 0.4296, respectively. Out of the 45 patients treated with glucocorticoids, 34 and 11 were placed in the glucocorticoid-sensitive and glucocorticoid-insensitive groups, respectively. The KNN-based model had an accuracy of 0.82. An artificial intelligence model was successfully developed for recognizing different imaging features of IIP and a preliminary model for assessing the correlation between imaging features and glucocorticoid sensitivity in IIP patients was established.
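The second stage described above — a k-nearest-neighbor classifier mapping imaging-feature area ratios to glucocorticoid sensitivity — can be sketched with scikit-learn. The synthetic feature matrix and labels below are illustrative stand-ins, not the study's data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for the study's features: per-patient area ratios of
# ground glass opacities, reticulation, honeycombing, and consolidation.
n = 45
X = rng.random((n, 4))
# Hypothetical labels: 1 = glucocorticoid-sensitive, 0 = insensitive.
y = (X[:, 0] > X[:, 2]).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
acc = knn.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

With only 45 labeled patients, a distance-based model like KNN is a reasonable choice over deeper architectures, which is consistent with the study's design.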
Collapse
Affiliation(s)
- Ling Jiang
- Department of Respiratory and Critical Care Medicine, Peking University Third Hospital, Beijing 100191, China
| | - Meijiao Li
- Department of Radiology, Peking University Third Hospital, Beijing 100191, China
| | - Han Jiang
- OpenBayes (Tianjin) IT Co., Ltd., Beijing 100027, China
| | - Liyuan Tao
- Research Center of Clinical Epidemiology, Peking University Third Hospital, Beijing 100191, China
| | - Wei Yang
- Department of Respiratory and Critical Care Medicine, Peking University Third Hospital, Beijing 100191, China
| | - Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Beijing 100191, China
| | - Bei He
- Department of Respiratory and Critical Care Medicine, Peking University Third Hospital, Beijing 100191, China
| |
Collapse
|
91
|
Draelos RL, Carin L. Explainable multiple abnormality classification of chest CT volumes. Artif Intell Med 2022; 132:102372. [DOI: 10.1016/j.artmed.2022.102372] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 06/09/2022] [Accepted: 07/28/2022] [Indexed: 12/20/2022]
|
92
|
Development of Deep Learning-based Automatic Scan Range Setting Model for Lung Cancer Screening Low-dose CT Imaging. Acad Radiol 2022; 29:1541-1551. [PMID: 35131147 DOI: 10.1016/j.acra.2021.12.001] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2021] [Revised: 12/02/2021] [Accepted: 12/03/2021] [Indexed: 12/14/2022]
Abstract
RATIONALE AND OBJECTIVES To develop a deep learning-based system for automatically setting the scan range of low-dose computed tomography (CT) lung cancer screening and to compare its efficiency with radiographers' performance. MATERIALS AND METHODS This retrospective study was performed using 1984 lung cancer screening low-dose CT scans obtained between November 2019 and May 2020. Among the 1984 CT scans, 600 were considered suitable for an observational study exploring the relationship between the scout landmarks and the actual lung boundaries. A further 1144 CT scans were used for the development of a deep learning-based algorithm. This data set was split in an 8:2 ratio into a training set (80%, n = 915) and a validation set (20%, n = 229). The performance of the deep learning algorithm was evaluated on the test set (n = 240) using the actual lung boundaries and the radiographers' scan ranges. RESULTS The mean differences between the upper and lower boundaries of the deep learning-based algorithm and the actual lung boundaries were 4.72 ± 3.15 mm and 16.50 ± 14.06 mm, respectively. The accuracy and over-scanning rates of the scan ranges generated by the system were 97.08% (233/240) and 0% (0/240) for the upper boundary, and 96.25% (231/240) and 29.58% (71/240) for the lower boundary. CONCLUSION The developed deep learning-based algorithm can effectively predict the lung cancer screening low-dose CT scan range with high accuracy using only the frontal scout.
Collapse
|
93
|
Deep multi-scale resemblance network for the sub-class differentiation of adrenal masses on computed tomography images. Artif Intell Med 2022; 132:102374. [DOI: 10.1016/j.artmed.2022.102374] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Revised: 03/23/2022] [Accepted: 04/22/2022] [Indexed: 11/21/2022]
|
94
|
Xiao A, Shen B, Shi X, Zhang Z, Zhang Z, Tian J, Ji N, Hu Z. Intraoperative Glioma Grading Using Neural Architecture Search and Multi-Modal Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2570-2581. [PMID: 35404810 DOI: 10.1109/tmi.2022.3166129] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Glioma grading during surgery can help clinical treatment planning and prognosis, but intraoperative pathological examination of frozen sections is limited by the long processing time and complex procedures. Near-infrared fluorescence imaging provides chances for fast and accurate real-time diagnosis. Recently, deep learning techniques have been actively explored for medical image analysis and disease diagnosis. However, issues of near-infrared fluorescence images, including small-scale, noise, and low-resolution, increase the difficulty of training a satisfying network. Multi-modal imaging can provide complementary information to boost model performance, but simultaneously designing a proper network and utilizing the information of multi-modal data is challenging. In this work, we propose a novel neural architecture search method DLS-DARTS to automatically search for network architectures to handle these issues. DLS-DARTS has two learnable stems for multi-modal low-level feature fusion and uses a modified perturbation-based derivation strategy to improve the performance on the area under the curve and accuracy. White light imaging and fluorescence imaging in the first near-infrared window (650-900 nm) and the second near-infrared window (1,000-1,700 nm) are applied to provide multi-modal information on glioma tissues. In the experiments on 1,115 surgical glioma specimens, DLS-DARTS achieved an area under the curve of 0.843 and an accuracy of 0.634, which outperformed manually designed convolutional neural networks including ResNet, PyramidNet, and EfficientNet, and a state-of-the-art neural architecture search method for multi-modal medical image classification. Our study demonstrates that DLS-DARTS has the potential to help neurosurgeons during surgery, showing high prospects in medical image analysis.
Collapse
|
95
|
Avuçlu E. COVID-19 detection using X-ray images and statistical measurements. MEASUREMENT : JOURNAL OF THE INTERNATIONAL MEASUREMENT CONFEDERATION 2022; 201:111702. [PMID: 35942188 PMCID: PMC9349030 DOI: 10.1016/j.measurement.2022.111702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/10/2021] [Revised: 07/14/2022] [Accepted: 07/30/2022] [Indexed: 06/15/2023]
Abstract
The COVID-19 pandemic spread all over the world, starting in China in late 2019, and significantly affected life in all aspects. As seen in the SARS, MERS, and COVID-19 outbreaks, coronaviruses pose a great threat to world health. The COVID-19 epidemic, which caused a pandemic all over the world, continues to seriously threaten people's lives. Due to the rapid spread of COVID-19, many countries' healthcare sectors were caught off guard, placing a burden on doctors and healthcare professionals that they could not handle. Studies on COVID-19 in the literature aim to help experts recognize COVID-19 more accurately and to apply more accurate diagnosis and appropriate treatment methods. Alleviating this workload will be possible by developing computer-aided early and accurate diagnosis systems with machine learning. Diagnosis and evaluation of pneumonia on computed tomography images provide significant benefits in investigating possible complications and in case follow-up. Pneumonia and lesions occurring in the lungs should be carefully examined, as this aids the diagnostic process during the pandemic period. For this reason, the first diagnosis and medications are very important to prevent the disease from progressing. In this study, a new image preprocessing process was proposed and applied to a dataset consisting of Pneumonia and Normal images. The preprocessed images were reduced to 15x15 pixels, and their features were extracted according to their RGB values. Experimental studies were carried out both with the full set of these features and with feature reduction. The RGB values of the images were used in the training and testing processes for five different machine learning algorithms (MLAs): Multi-Class Support Vector Machine (MC-SVM), k-Nearest Neighbor (k-NN), Decision Tree (DT), Multinomial Logistic Regression (MLR), and Naive Bayes (NB).
The following accuracy rates were obtained in the training operations, respectively: 1, 1, 1, 0.746377, and 0.963768. The accuracy results in the testing operations were 0.87755, 0.857143, 0.857143, 0.877551, and 0.938776, respectively.
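The preprocessing step this abstract describes — reducing each image to 15x15 and using its RGB values as the feature vector — can be sketched in NumPy. The block-averaging resize and the input image size are illustrative assumptions; the paper's exact preprocessing pipeline may differ:

```python
import numpy as np

def rgb_features(img, size=15):
    """Downsample an HxWx3 image to size x size by block averaging,
    then flatten the RGB values into a 1-D feature vector."""
    h, w, _ = img.shape
    bh, bw = h // size, w // size
    # Crop to an exact multiple of the block size, then average each block.
    small = (img[: bh * size, : bw * size]
             .reshape(size, bh, size, bw, 3)
             .mean(axis=(1, 3)))
    return small.ravel()  # length size*size*3 = 675

img = np.random.default_rng(1).random((300, 300, 3))  # stand-in chest image
feat = rgb_features(img)
```

A 675-dimensional vector of this kind is small enough to feed directly into classical classifiers such as SVM, k-NN, or Naive Bayes, which matches the study's comparison of five MLAs.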
Collapse
Affiliation(s)
- Emre Avuçlu
- Department of Software Engineering, Faculty of Engineering, Aksaray University, Aksaray TURKEY
| |
Collapse
|
96
|
Anterior Cruciate Ligament Tear Detection Based on Deep Convolutional Neural Network. Diagnostics (Basel) 2022; 12:diagnostics12102314. [PMID: 36292003 PMCID: PMC9600338 DOI: 10.3390/diagnostics12102314] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 09/02/2022] [Accepted: 09/07/2022] [Indexed: 11/30/2022] Open
Abstract
Anterior cruciate ligament (ACL) tear is very common in football players, volleyball players, sprinters, runners, etc. It occurs frequently due to extra stretching and sudden movement and causes extreme pain to the patient. Various computer vision-based techniques have been employed for ACL tear detection, but the performance of most of these systems is challenging because of the complex structure of knee ligaments. This paper presents a three-layered compact parallel deep convolutional neural network (CPDCNN) to enhance the feature distinctiveness of the knee MRI images for anterior cruciate ligament (ACL) tear detection in knee MRI images. The performance of the proposed approach is evaluated for the MRNet knee images dataset using accuracy, recall, precision, and the F1 score. The proposed CPDCNN offers an overall accuracy of 96.60%, a recall rate of 0.9668, a precision of 0.9654, and an F1 score of 0.9582, which shows superiority over the existing state-of-the-art methods for knee tear detection.
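The evaluation metrics quoted above (accuracy, recall, precision, F1 score) all derive from a binary confusion matrix, as in this sketch; the counts are illustrative, not the paper's results:

```python
# Illustrative confusion-matrix counts for a binary tear / no-tear classifier.
tp, fp, fn, tn = 90, 3, 4, 103

accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction of all cases correct
precision = tp / (tp + fp)                   # of predicted tears, how many are real
recall = tp / (tp + fn)                      # of real tears, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
```

Reporting all four together matters in medical imaging because accuracy alone can look high on an imbalanced dataset while recall (missed tears) stays poor.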
Collapse
|
97
|
Punitha S, Al-Turjman F, Stephan T. A novel e-healthcare diagnosing system for COVID-19 via whale optimization algorithm. J EXP THEOR ARTIF IN 2022. [DOI: 10.1080/0952813x.2022.2125079] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Affiliation(s)
- S. Punitha
- Department of Computer Science and Engineering, Graphics Era Deemed to be University, Dehradun, India
| | - Fadi Al-Turjman
- Artificial Intelligence Engineering Department of AI and Robotics Institute, Near East University, Nicosia, Turkey
- Research Center for AI and IoT, Faculty of Engineering, University of Kyrenia, Kyrenia, Turkey
| | - Thompson Stephan
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, M. S. Ramaiah University of Applied Sciences, Bangalore, India
| |
Collapse
|
98
|
Integrating patient symptoms, clinical readings, and radiologist feedback with computer-aided diagnosis system for detection of infectious pulmonary disease: a feasibility study. Med Biol Eng Comput 2022; 60:2549-2565. [DOI: 10.1007/s11517-022-02611-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Accepted: 06/07/2022] [Indexed: 10/17/2022]
|
99
|
Küstner T, Vogel J, Hepp T, Forschner A, Pfannenberg C, Schmidt H, Schwenzer NF, Nikolaou K, la Fougère C, Seith F. Development of a Hybrid-Imaging-Based Prognostic Index for Metastasized-Melanoma Patients in Whole-Body 18F-FDG PET/CT and PET/MRI Data. Diagnostics (Basel) 2022; 12:2102. [PMID: 36140504 PMCID: PMC9498091 DOI: 10.3390/diagnostics12092102] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 08/19/2022] [Accepted: 08/25/2022] [Indexed: 11/17/2022] Open
Abstract
Besides tremendous treatment success in advanced melanoma patients, the rapid development of oncologic treatment options comes with increasingly high costs and can cause severe life-threatening side effects. For this purpose, predictive baseline biomarkers are becoming increasingly important for risk stratification and personalized treatment planning. Thus, the aim of this pilot study was the development of a prognostic tool for the risk stratification of the treatment response and mortality based on PET/MRI and PET/CT, including a convolutional neural network (CNN) for metastasized-melanoma patients before systemic-treatment initiation. The evaluation was based on 37 patients (19 f, 62 ± 13 y/o) with unresectable metastasized melanomas who underwent whole-body 18F-FDG PET/MRI and PET/CT scans on the same day before the initiation of therapy with checkpoint inhibitors and/or BRAF/MEK inhibitors. The overall survival (OS), therapy response, metastatically involved organs, number of lesions, total lesion glycolysis, total metabolic tumor volume (TMTV), peak standardized uptake value (SULpeak), diameter (Dmlesion) and mean apparent diffusion coefficient (ADCmean) were assessed. For each marker, a Kaplan−Meier analysis and the statistical significance (Wilcoxon test, paired t-test and Bonferroni correction) were assessed. Patients were divided into high- and low-risk groups depending on the OS and treatment response. The CNN segmentation and prediction utilized multimodality imaging data for a complementary in-depth risk analysis per patient. The following parameters correlated with longer OS: a TMTV < 50 mL; no metastases in the brain, bone, liver, spleen or pleura; ≤4 affected organ regions; no metastases; a Dmlesion > 37 mm or SULpeak < 1.3; a range of the ADCmean < 600 mm2/s. However, none of the parameters correlated significantly with the stratification of the patients into the high- or low-risk groups. 
For the CNN, the sensitivity, specificity, PPV and accuracy were 92%, 96%, 92% and 95%, respectively. Imaging biomarkers such as the metastatic involvement of specific organs, a high tumor burden, the presence of at least one large lesion or a high range of intermetastatic diffusivity were negative predictors for the OS, but the identification of high-risk patients was not feasible with the handcrafted parameters. In contrast, the proposed CNN supplied risk stratification with high specificity and sensitivity.
Collapse
Affiliation(s)
- Thomas Küstner
- MIDAS.Lab, Department of Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
| | - Jonas Vogel
- Nuclear Medicine and Clinical Molecular Imaging, Department of Radiology, University Hospital Tübingen, 72076 Tubingen, Germany
| | - Tobias Hepp
- MIDAS.Lab, Department of Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
| | - Andrea Forschner
- Department of Dermatology, University Hospital of Tübingen, 72070 Tubingen, Germany
| | - Christina Pfannenberg
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
| | - Holger Schmidt
- Faculty of Medicine, Eberhard-Karls-University Tübingen, 72076 Tubingen, Germany
- Siemens Healthineers, 91052 Erlangen, Germany
| | - Nina F. Schwenzer
- Faculty of Medicine, Eberhard-Karls-University Tübingen, 72076 Tubingen, Germany
| | - Konstantin Nikolaou
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
- Cluster of Excellence iFIT (EXC 2180) Image-Guided and Functionally Instructed Tumor Therapies, Eberhard Karls University, 72076 Tubingen, Germany
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Tübingen, 72076 Tubingen, Germany
| | - Christian la Fougère
- Nuclear Medicine and Clinical Molecular Imaging, Department of Radiology, University Hospital Tübingen, 72076 Tubingen, Germany
- Cluster of Excellence iFIT (EXC 2180) Image-Guided and Functionally Instructed Tumor Therapies, Eberhard Karls University, 72076 Tubingen, Germany
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Tübingen, 72076 Tubingen, Germany
| | - Ferdinand Seith
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
| |
Collapse
|
100
|
Aerial Separation and Receiver Arrangements on Identifying Lung Syndromes Using the Artificial Neural Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:7298903. [PMID: 36052039 PMCID: PMC9427225 DOI: 10.1155/2022/7298903] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 06/29/2022] [Accepted: 07/29/2022] [Indexed: 11/17/2022]
Abstract
Lung disease has long been one of the most harmful diseases and remains so today. Early detection is one of the most crucial ways to prevent a human from developing these types of diseases. Many researchers are involved in finding various techniques for improving the accuracy of disease prediction. Because machine learning algorithms could not match the accuracy of deep learning techniques, this work proposes enhanced artificial neural network approaches for classifying lung diseases. Here, the discrete Fourier transform and the Burg auto-regression technique are used for extracting features from the computed tomography (CT) scan images, and feature reduction is performed using principal component analysis (PCA). This work used 120 subject datasets from public landmarks, with and without lung diseases. The dataset is trained using an enhanced artificial neural network (ANN), and preprocessing is handled using a Gaussian filter; thus, the proposed approach provides enhanced classification accuracy. Finally, the proposed method is compared with existing machine learning approaches on the basis of accuracy.
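The feature pipeline this abstract outlines — spectral features followed by PCA reduction before the classifier — might look like the NumPy sketch below. The signal length, number of FFT bins, and number of retained components are illustrative assumptions, and the Burg auto-regression step is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in data: 120 "subjects", each a 1-D intensity profile from a CT slice.
signals = rng.random((120, 256))

# Discrete Fourier transform features: magnitude of the first 64 FFT bins.
spectra = np.abs(np.fft.rfft(signals, axis=1))[:, :64]

# PCA by SVD: center the features, decompose, keep the top 10 principal components.
centered = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:10]
reduced = centered @ components.T  # shape (120, 10), fed to the ANN classifier
```

Reducing 64 spectral features to 10 principal components keeps the downstream ANN small, which is the usual motivation for inserting PCA between feature extraction and classification.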
Collapse
|