1. Chai Z, Wu Z, Zhang C, Song J. Automated detection of anterior crossbite on intraoral images and videos utilizing deep learning. Int J Comput Dent 2024;0:0. PMID: 38700086. DOI: 10.3290/j.ijcd.b5290567.
Abstract
AIM Malocclusion has emerged as a burgeoning global public health concern. Individuals with an anterior crossbite face an elevated risk of exhibiting characteristics such as a concave facial profile, negative overjet, and poor masticatory efficiency. In response to this issue, we proposed a convolutional neural network (CNN)-based model designed for the automated detection and classification of intraoral images and videos. MATERIALS AND METHODS A total of 1865 intraoral images were included in this study, 1493 (80%) of which were allocated for training and 372 (20%) for testing the CNN. Additionally, we tested the models on 10 videos, spanning a cumulative duration of 124 seconds. To assess the performance of our predictions, metrics including accuracy, sensitivity, specificity, precision, F1-score, area under the precision-recall (AUPR) curve, and area under the receiver operating characteristic (ROC) curve (AUC) were employed. RESULTS The trained model exhibited commendable classification performance, achieving an accuracy of 0.965 and an AUC of 0.986. Moreover, it demonstrated superior specificity (0.992 vs. 0.978 and 0.956, P < 0.05) in comparison to assessments by two orthodontists. Conversely, the CNN model displayed diminished sensitivity (0.89 vs. 0.96 and 0.92, P < 0.05) relative to the orthodontists. Notably, the CNN model accomplished a perfect classification rate, successfully identifying 100% of the videos in the test set. CONCLUSION The deep learning (DL) model exhibited remarkable classification accuracy in identifying anterior crossbite through both intraoral images and videos. This proficiency holds the potential to expedite the detection of severe malocclusions, facilitating timely classification for appropriate treatment and, consequently, mitigating the risk of complications.
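As a side note for readers reproducing such evaluations, the metrics this abstract reports (accuracy, sensitivity, specificity, precision, F1-score, AUC) can be computed from binary predictions in a few lines of Python. The labels and scores below are toy values, not the study's data:

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

# Toy ground truth (1 = anterior crossbite) and model scores; illustrative only.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_score = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.35, 0.1, 0.05, 0.15])
y_pred = (y_score >= 0.5).astype(int)

tp = int(np.sum((y_pred == 1) & (y_true == 1)))
tn = int(np.sum((y_pred == 0) & (y_true == 0)))
fp = int(np.sum((y_pred == 1) & (y_true == 0)))
fn = int(np.sum((y_pred == 0) & (y_true == 1)))

accuracy = (tp + tn) / len(y_true)
sensitivity = tp / (tp + fn)          # recall on the positive (crossbite) class
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
f1 = f1_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_score)  # threshold-free ranking quality
```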
2. Aasem M, Javed Iqbal M. Toward explainable AI in radiology: Ensemble-CAM for effective thoracic disease localization in chest X-ray images using weak supervised learning. Front Big Data 2024;7:1366415. PMID: 38756502. PMCID: PMC11096460. DOI: 10.3389/fdata.2024.1366415.
Abstract
Chest X-ray (CXR) imaging is widely employed by radiologists to diagnose thoracic diseases. Recently, many deep learning techniques have been proposed as computer-aided diagnostic (CAD) tools to assist radiologists in minimizing the risk of incorrect diagnosis. From an application perspective, these models have exhibited two major challenges: (1) They require large volumes of annotated data at the training stage and (2) They lack explainable factors to justify their outcomes at the prediction stage. In the present study, we developed a class activation mapping (CAM)-based ensemble model, called Ensemble-CAM, to address both of these challenges via weakly supervised learning by employing explainable AI (XAI) functions. Ensemble-CAM utilizes class labels to predict the location of disease in association with interpretable features. The proposed work leverages ensemble and transfer learning with class activation functions to achieve three objectives: (1) minimizing the dependency on strongly annotated data when locating thoracic diseases, (2) enhancing confidence in predicted outcomes by visualizing their interpretable features, and (3) optimizing cumulative performance via fusion functions. Ensemble-CAM was trained on three CXR image datasets and evaluated through qualitative and quantitative measures via heatmaps and Jaccard indices. The results reflect the enhanced performance and reliability in comparison to existing standalone and ensembled models.
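For readers unfamiliar with class activation mapping, the core operation behind a CAM heatmap is a channel-weighted sum of the last convolutional feature maps, and localization against a ground-truth region can then be scored with a Jaccard index, as this paper does. A minimal sketch with random stand-in tensors (shapes and values are illustrative assumptions, not Ensemble-CAM itself):

```python
import numpy as np

# Toy stand-ins: an 8-channel 7x7 feature-map stack from a last conv layer,
# and the classifier weights for one predicted class.
rng = np.random.default_rng(0)
features = rng.random((8, 7, 7))   # (channels, H, W)
class_weights = rng.random(8)      # one weight per channel for the class

# CAM: channel-weighted sum of feature maps, then min-max normalised
cam = np.tensordot(class_weights, features, axes=1)  # -> (7, 7)
cam = (cam - cam.min()) / (cam.max() - cam.min())

# Binarise the heatmap and score localisation with a Jaccard index
pred_region = cam >= 0.5
true_region = np.zeros((7, 7), dtype=bool)
true_region[2:5, 2:5] = True       # hypothetical ground-truth box
jaccard = (pred_region & true_region).sum() / (pred_region | true_region).sum()
```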
Affiliation(s)
- Muhammad Aasem
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
3. Wan C, Mao Y, Xi W, Zhang Z, Wang J, Yang W. DBPF-net: dual-branch structural feature extraction reinforcement network for ocular surface disease image classification. Front Med (Lausanne) 2024;10:1309097. PMID: 38239621. PMCID: PMC10794599. DOI: 10.3389/fmed.2023.1309097.
Abstract
Pterygium and subconjunctival hemorrhage are two common types of ocular surface diseases that can cause distress and anxiety in patients. In this study, 2855 ocular surface images were collected in four categories: normal ocular surface, subconjunctival hemorrhage, pterygium to be observed, and pterygium requiring surgery. We propose a diagnostic classification model for ocular surface diseases, dual-branch network reinforced by PFM block (DBPF-Net), which adopts the conformer model with two-branch architectural properties as the backbone of a four-way classification model for ocular surface diseases. In addition, we propose a block composed of a patch merging layer and a FReLU layer (PFM block) for extracting spatial structure features to further strengthen the feature extraction capability of the model. In practice, only the ocular surface images need to be input into the model to discriminate automatically between the disease categories. We also trained the VGG16, ResNet50, EfficientNetB7, and Conformer models, and evaluated and analyzed the results of all models on the test set. The main evaluation indicators were sensitivity, specificity, F1-score, area under the receiver operating characteristics curve (AUC), kappa coefficient, and accuracy. The accuracy and kappa coefficient of the proposed diagnostic model in several experiments were averaged at 0.9789 and 0.9681, respectively. The sensitivity, specificity, F1-score, and AUC were, respectively, 0.9723, 0.9836, 0.9688, and 0.9869 for diagnosing pterygium to be observed, and, respectively, 0.9210, 0.9905, 0.9292, and 0.9776 for diagnosing pterygium requiring surgery. The proposed method has high clinical reference value for recognizing these four types of ocular surface images.
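The kappa coefficient reported alongside accuracy here corrects for chance agreement between predicted and true labels; a minimal sketch for a four-class problem (hypothetical labels, not the study's data):

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical 4-class labels (0 = normal, 1 = subconjunctival hemorrhage,
# 2 = pterygium to be observed, 3 = pterygium requiring surgery).
y_true = [0, 0, 1, 1, 2, 2, 2, 3, 3, 3]
y_pred = [0, 0, 1, 2, 2, 2, 2, 3, 3, 1]

acc = accuracy_score(y_true, y_pred)      # raw agreement
kappa = cohen_kappa_score(y_true, y_pred) # agreement corrected for chance
```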
Affiliation(s)
- Cheng Wan
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Yulong Mao
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Wenqun Xi
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Zhe Zhang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Jiantao Wang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Weihua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
4. Yoon J, Han J, Ko J, Choi S, Park JI, Hwang JS, Han JM, Hwang DDJ. Developing and Evaluating an AI-Based Computer-Aided Diagnosis System for Retinal Disease: Diagnostic Study for Central Serous Chorioretinopathy. J Med Internet Res 2023;25:e48142. PMID: 38019564. PMCID: PMC10719821. DOI: 10.2196/48142.
Abstract
BACKGROUND Although previous research has made substantial progress in developing high-performance artificial intelligence (AI)-based computer-aided diagnosis (AI-CAD) systems in various medical domains, little attention has been paid to developing and evaluating AI-CAD systems in ophthalmology, particularly for diagnosing retinal diseases using optical coherence tomography (OCT) images. OBJECTIVE This diagnostic study aimed to determine the usefulness of a proposed AI-CAD system in assisting ophthalmologists with the diagnosis of central serous chorioretinopathy (CSC), which is known to be difficult to diagnose, using OCT images. METHODS For the training and evaluation of the proposed deep learning model, 1693 OCT images were collected and annotated. The data set included 929 and 764 cases of acute and chronic CSC, respectively. In total, 66 ophthalmologists (2 groups: 36 retina and 30 nonretina specialists) participated in the observer performance test. To evaluate the deep learning algorithm used in the proposed AI-CAD system, the training, validation, and test sets were split in an 8:1:1 ratio. Further, 100 randomly sampled OCT images from the test set were used for the observer performance test, and the participants were instructed to select a CSC subtype for each of these images. Each image was provided under different conditions: (1) without AI assistance, (2) with AI assistance with a probability score, and (3) with AI assistance with a probability score and visual evidence heatmap. The sensitivity, specificity, and area under the receiver operating characteristic curve were used to measure the diagnostic performance of the model and ophthalmologists. RESULTS The proposed system achieved a high detection performance for CSC (area under the curve of 0.99), outperforming the 66 ophthalmologists who participated in the observer performance test. In both groups, ophthalmologists with the support of AI assistance with a probability score and visual evidence heatmap achieved the highest mean diagnostic performance compared with that of those subjected to other conditions (without AI assistance or with AI assistance with a probability score). Nonretina specialists achieved expert-level diagnostic performance with the support of the proposed AI-CAD system. CONCLUSIONS Our proposed AI-CAD system improved the diagnosis of CSC by ophthalmologists, which may support decision-making regarding retinal disease detection and alleviate the workload of ophthalmologists.
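The 8:1:1 split described in the methods can be reproduced with two successive random splits; a sketch using stand-in indices for the 1693 OCT images (scikit-learn assumed, seed arbitrary):

```python
from sklearn.model_selection import train_test_split

# Stand-in indices for the 1693 annotated OCT images.
indices = list(range(1693))

# First carve off ~10% for test, then 1/9 of the remainder for validation,
# which yields the paper's 8:1:1 train/validation/test ratio.
train_val, test = train_test_split(indices, test_size=0.1, random_state=42)
train, val = train_test_split(train_val, test_size=1 / 9, random_state=42)
```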
Affiliation(s)
- Jeewoo Yoon
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Republic of Korea
- Raondata, Seoul, Republic of Korea
- Jinyoung Han
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Republic of Korea
- Department of Human-Artificial Intelligence Interaction, Sungkyunkwan University, Seoul, Republic of Korea
- Junseo Ko
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Republic of Korea
- Raondata, Seoul, Republic of Korea
- Seong Choi
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Republic of Korea
- Raondata, Seoul, Republic of Korea
- Ji In Park
- Department of Medicine, Kangwon National University School of Medicine, Kangwon National University Hospital, Chuncheon, Republic of Korea
- Jeong Mo Han
- Seoul Bombit Eye Clinic, Sejong, Republic of Korea
- Daniel Duck-Jin Hwang
- Department of Ophthalmology, Hangil Eye Hospital, Incheon, Republic of Korea
- Lux Mind, Incheon, Republic of Korea
5. Ni M, Chen W, Zhao Q, Zhao Y, Yuan H. Deep Learning Approach for MRI in the Classification of Anterior Talofibular Ligament Injuries. J Magn Reson Imaging 2023;58:1544-1556. PMID: 36807381. DOI: 10.1002/jmri.28649.
Abstract
BACKGROUND Diagnosing anterior talofibular ligament (ATFL) injuries differs among radiologists. Further assessment of ATFL tears is valuable for clinical decision-making. PURPOSE To establish a deep learning method for classifying ATFL injuries based on magnetic resonance imaging (MRI). STUDY TYPE Retrospective. POPULATION One thousand seventy-three patients from a single center with ankle MRI within 1 month of reference standard arthroscopy (in-group dataset) were divided into training, validation, and test sets in a ratio of 8:1:1. Additionally, 167 patients from another center were used as an independent out-group dataset. FIELD STRENGTH/SEQUENCE Fat-saturation proton density-weighted fast spin-echo sequence at 1.5/3.0 T. ASSESSMENT Patients were divided into normal, strain and degeneration, partial tear, and complete tear groups (groups 0-3). The complete tear group was divided into five sub-groups by location and the potential avulsion fracture (groups 3.1-3.5). All images were input into AlexNet, VGG11, Small-Sample-Attention Net (SSA-Net), and SSA-Net + Weight Loss for classification. The results were compared with those of four radiologists with 5-30 years of experience. STATISTICAL TESTS Model performance was evaluated by the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC), among other metrics. McNemar's test was used to compare performance among the different models, and between the radiologists and models. The intraclass correlation coefficient (ICC) was used to assess the reliability of the radiologists. P < 0.05 was considered statistically significant. RESULTS The average AUC of AlexNet, VGG11, SSA-Net, and SSA-Net + Weight Loss was 0.95, 0.99, 0.99, and 0.99 in groups 0-3 and 0.96, 0.99, 0.99, and 0.99 in groups 3.1-3.5. The performance of SSA-Net + Weight Loss was similar to that of SSA-Net but better than that of AlexNet and VGG11. In the out-group test set, the AUC of SSA-Net + Weight Loss ranged from 0.89 to 0.99. The ICC of the radiologists was 0.97-1.00. SSA-Net + Weight Loss performed better than each radiologist in both the in-group and out-group test sets. DATA CONCLUSION Deep learning has potential to be used for classifying ATFL injuries. SSA-Net + Weight Loss has a better diagnostic effect than radiologists with different experience levels. LEVEL OF EVIDENCE 4. TECHNICAL EFFICACY Stage 2.
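McNemar's test, used above to compare the model against each radiologist, operates on the 2x2 table of paired correct/incorrect outcomes over the same cases; a sketch with hypothetical counts (statsmodels assumed):

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired outcomes on the same test cases: rows index whether
# the model was correct, columns whether the radiologist was correct.
table = np.array([[80, 5],    # both correct / model only correct
                  [15, 7]])   # radiologist only correct / both wrong
result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
```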
Affiliation(s)
- Ming Ni
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Wen Chen
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Qiang Zhao
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Yuqing Zhao
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Beijing, China
6. Kalejahi BK, Meshgini S, Danishvar S. Segmentation of Brain Tumor Using a 3D Generative Adversarial Network. Diagnostics (Basel) 2023;13:3344. PMID: 37958240. PMCID: PMC10649332. DOI: 10.3390/diagnostics13213344.
Abstract
Images of brain tumors may appear in only a small subset of scans, so important details can be missed. Further, because labeling is typically a labor-intensive and time-consuming task, only a small number of medical imaging datasets are available for analysis. This research focuses on MRI images of the human brain and proposes a method for the accurate segmentation of these images to identify the correct location of tumors. In this study, a GAN is utilized as a classification network to detect and segment 3D MRI images. The 3D GAN network model provides dense connectivity, followed by rapid network convergence and improved information extraction. Mutual training in a generative adversarial network can bring the segmentation results closer to the labeled data, improving image segmentation. The BraTS 2021 dataset of 3D images was used to compare the two experimental models.
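Segmentation quality in studies like this one is commonly scored with the Dice coefficient; a minimal sketch on toy 3D masks (illustrative values, not BraTS data):

```python
import numpy as np

# Toy 3D masks standing in for predicted and ground-truth tumour
# segmentations on a volumetric scan.
pred = np.zeros((4, 4, 4), dtype=bool)
truth = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True     # 8 predicted voxels
truth[1:3, 1:3, 0:3] = True    # 12 ground-truth voxels

# Dice = 2|A ∩ B| / (|A| + |B|): voxel overlap normalised by total mass
intersection = np.logical_and(pred, truth).sum()
dice = 2.0 * intersection / (pred.sum() + truth.sum())
```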
Affiliation(s)
- Behnam Kiani Kalejahi
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz 385Q+246, Iran;
- Saeed Meshgini
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz 385Q+246, Iran
- Sebelan Danishvar
- Department of Electronic and Computer Engineering, Brunel University, London UB8 3PH, UK
7. Chu X, Wang X, Zhang C, Liu H, Li F, Li G, Zhao S. A deep learning-based model for automatic segmentation and evaluation of corneal neovascularization using slit-lamp anterior segment images. Quant Imaging Med Surg 2023;13:6778-6788. PMID: 37869308. PMCID: PMC10585580. DOI: 10.21037/qims-23-99.
Abstract
Background Corneal neovascularization (CoNV) is a common sign in anterior segment eye diseases, and its level can indicate changes in the condition. Current CoNV evaluation methods are time-consuming, and some rely on equipment that is not widely available in hospitals. Thus, a fast and efficient evaluation method is urgently required. In this study, a deep learning (DL)-based model was developed to automatically segment and evaluate CoNV using anterior segment images from a slit-lamp microscope. Methods A total of 80 cornea slit-lamp photographs (from 80 patients) with clinically manifested CoNV were collected from December 2021 to July 2022 at Tianjin Medical University Eye Hospital. Of these, 60 images were manually labelled by ophthalmologists using ImageJ software to train the vessel segmentation network IterNet. To evaluate the performance of this automated model, evaluation metrics including accuracy, precision, area under the receiver operating characteristic (ROC) curve (AUC), and F1 score were calculated between the manually labelled ground truth and the automatic segmentations of CoNV on 20 anterior segment images. Furthermore, the vessel pixel count was automatically calculated and compared with the manually labelled results to evaluate the clinical usability of the automated segmentation network. Results The IterNet model achieved an AUC of 0.989, accuracy of 0.988, sensitivity of 0.879, specificity of 0.993, area under the precision-recall curve of 0.921, and F1 score of 0.879. The Bland-Altman plot between the manually labelled ground truth and the automated segmentation results produced a concordance correlation coefficient of 0.989 with 95% limits of agreement between 865.4 and -562.4, and the Pearson coefficient of correlation for the vessel pixel count was 0.981 (P<0.01). Conclusions The fully automated network model IterNet provides a time-saving and efficient method for the quantitative evaluation of CoNV using slit-lamp anterior segment images. This method demonstrates great value and clinical application potential for patient care and future research.
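The Bland-Altman limits of agreement quoted above are the mean difference between the two measurements plus or minus 1.96 standard deviations; a sketch with hypothetical paired pixel counts (scipy assumed):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired vessel pixel counts: manual vs automated segmentation.
manual = np.array([1200, 950, 1800, 640, 2100, 1500], dtype=float)
auto = np.array([1150, 1000, 1750, 700, 2050, 1450], dtype=float)

diff = manual - auto
bias = diff.mean()                            # mean difference
loa_upper = bias + 1.96 * diff.std(ddof=1)    # Bland-Altman 95% limits
loa_lower = bias - 1.96 * diff.std(ddof=1)    # of agreement
r, p = pearsonr(manual, auto)                 # Pearson correlation
```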
Affiliation(s)
- Xiaoran Chu
- Department of Cornea and Refractive Surgery, Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
- Xin Wang
- School of Electronics and Information Engineering, Tiangong University, Tianjin, China
- Chen Zhang
- Department of Cornea and Refractive Surgery, Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
- Hui Liu
- Department of Cornea and Refractive Surgery, Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
- Fei Li
- Department of Cornea and Refractive Surgery, Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
- Guangxu Li
- School of Electronics and Information Engineering, Tiangong University, Tianjin, China
- Tianjin Optoelectronic Detection Technology and System Laboratory, Tianjin, China
- Shaozhen Zhao
- Department of Cornea and Refractive Surgery, Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
8. Gomes Ataide EJ, Jabaraj MS, Schenke S, Petersen M, Haghghi S, Wuestemann J, Illanes A, Friebe M, Kreissl MC. Thyroid Nodule Detection and Region Estimation in Ultrasound Images: A Comparison between Physicians and an Automated Decision Support System Approach. Diagnostics (Basel) 2023;13:2873. PMID: 37761240. PMCID: PMC10529523. DOI: 10.3390/diagnostics13182873.
Abstract
BACKGROUND Thyroid nodules are very common. In most cases they are benign, but a small percentage are malignant. The accurate assessment of these nodules is critical to choosing the next diagnostic steps and potential treatment. Ultrasound (US) imaging, the primary modality for assessing these nodules, can lack objectivity due to varying expertise among physicians. This leads to observer variability, potentially affecting patient outcomes. PURPOSE This study aims to assess the potential of a Decision Support System (DSS) in reducing these variabilities for thyroid nodule detection and region estimation using US images, particularly among less experienced physicians. METHODS Three physicians with varying levels of experience evaluated thyroid nodules on US images, focusing on nodule detection and estimating cystic and solid regions. Their outcomes were compared with those obtained from the DSS. Metrics such as classification match percentage and variance percentage were used to quantify differences. RESULTS Notable disparities exist between the physician evaluations and the DSS assessments: the overall classification match percentage was just 19.2%. Individually, Physicians 1, 2, and 3 had match percentages of 57.6%, 42.3%, and 46.1% with the DSS, respectively. Variances in assessments highlight the subjectivity and observer variability associated with physician experience levels. CONCLUSIONS The evident variability among physician evaluations underscores the need for supplementary decision-making tools. Given its consistency, the DSS offers potential as a reliable "second opinion" tool, minimizing human-induced variability in the critical diagnostic process of thyroid nodules using US images. Future integration of such systems could bolster diagnostic precision and improve patient outcomes.
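The classification match percentage used here is simply the share of cases on which two readers assign the same label; a sketch with hypothetical per-nodule labels (categories are illustrative assumptions):

```python
# Hypothetical per-nodule region labels from the DSS and one physician
# (e.g. 0 = solid, 1 = cystic, 2 = mixed); values are illustrative only.
dss = [0, 1, 2, 2, 0, 1, 1, 0, 2, 0]
physician = [0, 1, 2, 0, 0, 2, 1, 0, 1, 0]

# Count agreements and express them as a percentage of all cases
matches = sum(d == p for d, p in zip(dss, physician))
match_percentage = 100.0 * matches / len(dss)
```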
Affiliation(s)
- Elmer Jeto Gomes Ataide
- Division of Nuclear Medicine, Department of Radiology and Nuclear Medicine, University Hospital Magdeburg, 39120 Magdeburg, Germany; (S.S.); (M.C.K.)
- Simone Schenke
- Division of Nuclear Medicine, Department of Radiology and Nuclear Medicine, University Hospital Magdeburg, 39120 Magdeburg, Germany; (S.S.); (M.C.K.)
- Department of Nuclear Medicine, Klinikum Bayreuth, 95445 Bayreuth, Germany
- Manuela Petersen
- Department of General, Visceral, Vascular and Transplant Surgery, University Hospital Magdeburg, 39120 Magdeburg, Germany
- Sarvar Haghghi
- Division of Nuclear Medicine, Department of Radiology and Nuclear Medicine, University Hospital Magdeburg, 39120 Magdeburg, Germany; (S.S.); (M.C.K.)
- Department of Nuclear Medicine, University Hospital Frankfurt, 60590 Frankfurt, Germany
- Jan Wuestemann
- Division of Nuclear Medicine, Department of Radiology and Nuclear Medicine, University Hospital Magdeburg, 39120 Magdeburg, Germany; (S.S.); (M.C.K.)
- Michael Friebe
- Surag Medical GmbH, 39118 Magdeburg, Germany
- Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, 30-059 Krakow, Poland
- Center for Innovation, Business Development and Entrepreneurship (CIBE), FOM University of Applied Science, 45127 Essen, Germany
- Michael C. Kreissl
- Division of Nuclear Medicine, Department of Radiology and Nuclear Medicine, University Hospital Magdeburg, 39120 Magdeburg, Germany; (S.S.); (M.C.K.)
- STIMULATE Research Campus, 39106 Magdeburg, Germany
- Center for Advanced Medical Engineering (CAME), Otto-von-Guericke University Magdeburg, 39106 Magdeburg, Germany
9. Zhou Z, Qiu Q, Liu H, Ge X, Li T, Xing L, Yang R, Yin Y. Automatic Detection of Brain Metastases in T1-Weighted Contrast-Enhanced MRI Using Deep Learning Model. Cancers (Basel) 2023;15:4443. PMID: 37760413. PMCID: PMC10526374. DOI: 10.3390/cancers15184443.
Abstract
As a complication of malignant tumors, brain metastasis (BM) seriously threatens patients' survival and quality of life. Accurate detection of BM before determining radiation therapy plans is a paramount task. Due to the small size and variable number of BMs, their manual diagnosis faces enormous challenges. Thus, MRI-based artificial intelligence-assisted BM diagnosis is significant. Most existing deep learning (DL) methods for automatic BM detection try to ensure a good trade-off between precision and recall. However, due to objective factors of the models, higher recall is often accompanied by a higher number of false positive results. In real clinical auxiliary diagnosis, radiation oncologists must spend much effort reviewing these false positive results. In order to reduce false positive results while retaining high accuracy, a modified YOLOv5 algorithm is proposed in this paper. First, to focus on the important channels of the feature map, we add a convolutional block attention module to the neck structure. Furthermore, an additional prediction head is introduced for detecting small-size BMs. Finally, to distinguish between cerebral vessels and small-size BMs, a Swin transformer block is embedded into the smallest prediction head. With the introduction of the F2-score index to determine the most appropriate confidence threshold, the proposed method achieves a precision of 0.612 and recall of 0.904. Compared with existing methods, our proposed method shows superior performance with fewer false positive results. It is anticipated that the proposed method could reduce the workload of radiation oncologists in real clinical auxiliary diagnosis.
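The F2-score weighs recall four times as heavily as precision, which suits detection tasks where missed metastases are costlier than false alarms; a sketch of confidence-threshold selection over hypothetical detection scores (not the paper's data):

```python
import numpy as np

# Hypothetical detection scores for candidate boxes and whether each box
# hit a true metastasis (1) or was a false positive (0).
scores = np.array([0.95, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3])
is_tp = np.array([1, 1, 1, 0, 1, 0, 1, 0])
total_gt = 6                       # total ground-truth metastases

best_f2, best_thr = -1.0, None
for thr in scores:                 # try each score as a confidence threshold
    kept = scores >= thr
    tp = int(is_tp[kept].sum())
    precision = tp / int(kept.sum())
    recall = tp / total_gt
    # F-beta with beta=2: (1 + 4) * P * R / (4 * P + R)
    f2 = 5 * precision * recall / (4 * precision + recall) if tp else 0.0
    if f2 > best_f2:
        best_f2, best_thr = f2, thr
```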
Affiliation(s)
- Zichun Zhou
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
- Qingtao Qiu
- Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
- Huiling Liu
- Department of Oncology, Binzhou People’s Hospital, Binzhou 256610, China
- Third Clinical Medical College, Xinjiang Medical University, Urumqi 830011, China
- Xuanchu Ge
- Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Tengxiang Li
- Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Ligang Xing
- Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Runtao Yang
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
- Yong Yin
- Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
10. Yahyatabar M, Jouvet P, Cheriet F. Joint classification and segmentation for an interpretable diagnosis of acute respiratory distress syndrome from chest x-rays. J Med Imaging (Bellingham) 2023;10:054504. PMID: 37854097. PMCID: PMC10581023. DOI: 10.1117/1.jmi.10.5.054504.
Abstract
Purpose Acute respiratory distress syndrome (ARDS) is a life-threatening condition that can cause a dramatic drop in blood oxygen levels due to widespread lung inflammation. Chest radiography is widely used as a primary modality to detect ARDS because of its crucial role in diagnosing the syndrome and because x-ray images can be obtained promptly. However, despite the extensive literature on chest x-ray (CXR) image analysis, there is limited research on ARDS diagnosis due to the scarcity of ARDS-labeled datasets. Additionally, many machine learning-based approaches achieve high performance in pulmonary disease diagnosis, but their decisions are often not easily interpretable, which can hinder their clinical acceptance. This work aims to develop a method for detecting signs of ARDS in CXR images that is clinically interpretable. Approach To achieve this goal, an ARDS-labeled dataset of chest radiography images was gathered and annotated for training and evaluation of the proposed approach. The proposed deep classification-segmentation model, Dense-Ynet, provides an interpretable framework for automatically diagnosing ARDS in CXR images. The model takes advantage of lung segmentation in diagnosing ARDS. By definition, ARDS causes bilateral diffuse infiltrates throughout the lungs. To consider the local involvement of lung areas, each lung is divided into upper and lower halves, and our model classifies the resulting lung quadrants. Results The quadrant-based classification strategy yields an area under the receiver operating characteristic curve of 95.1% (95% CI 93.5 to 96.1), which provides a reference for the model's predictions. In terms of segmentation, the model accurately identifies lung regions in CXR images even when lung boundaries are unclear in abnormal images. Conclusions This study provides an interpretable decision system for diagnosing ARDS by following the definition used by clinicians for the diagnosis of ARDS from CXR images.
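Dividing each segmented lung into upper and lower halves, as the approach section describes, amounts to splitting the mask at the midpoint of its row extent; a sketch on a toy 2D mask (assumed shapes, not Dense-Ynet code):

```python
import numpy as np

# Toy mask for one segmented lung; the paper splits each lung into upper
# and lower halves so that four quadrants are classified in total.
lung_mask = np.zeros((8, 6), dtype=bool)
lung_mask[1:7, 1:5] = True

# Midpoint of the rows that actually contain lung pixels
rows = np.where(lung_mask.any(axis=1))[0]
mid = (rows.min() + rows.max() + 1) // 2

upper = lung_mask.copy()
upper[mid:, :] = False     # keep only rows above the midline
lower = lung_mask.copy()
lower[:mid, :] = False     # keep only rows at or below the midline
```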
Affiliation(s)
- Mohammad Yahyatabar: Polytechnique Montréal, Department of Computer and Software Engineering, Montreal, Quebec, Canada
- Philippe Jouvet: University of Montréal, Department of Pediatrics, Faculty of Medicine, Montréal, Quebec, Canada
- Farida Cheriet: Polytechnique Montréal, Department of Computer and Software Engineering, Montreal, Quebec, Canada
11
Ashrafinia S, Dalaie P, Schindler TH, Pomper MG, Rahmim A. Standardized Radiomics Analysis of Clinical Myocardial Perfusion Stress SPECT Images to Identify Coronary Artery Calcification. Cureus 2023; 15:e43343. [PMID: 37700937] [PMCID: PMC10493172] [DOI: 10.7759/cureus.43343]
Abstract
PURPOSE Myocardial perfusion (MP) stress single-photon emission computed tomography (SPECT) is an established diagnostic test for patients suspected of coronary artery disease (CAD). Meanwhile, coronary artery calcification (CAC) scoring obtained from diagnostic CT is a highly sensitive test, offering incremental diagnostic information in identifying patients with significant CAD yet normal MP stress SPECT (MPSS) scans. However, after decades of wide utilization of MPSS, CAC scoring is neither commonly reimbursed (e.g., by the CMS) nor widely deployed in community settings. We studied the potential of complementary information deduced from radiomics analysis of normal MPSS scans in predicting the CAC score. METHODS We collected data from 428 patients with normal (non-ischemic) MPSS (99mTc-sestamibi; consensus reading). A nuclear medicine physician verified iteratively reconstructed images (attenuation-corrected) to be free from fixed perfusion defects and artifactual attenuation. Three-dimensional images were automatically segmented into four regions of interest (ROIs), including the myocardium and three vascular segments (left anterior descending [LAD], left circumflex [LCX], and right coronary artery [RCA]). We used our software package, standardized environment for radiomics analysis (SERA), to extract 487 radiomic features in compliance with the image biomarker standardization initiative (IBSI). Isotropic cubic voxels were discretized using fixed bin-number discretization (eight schemes). We first performed blind-to-outcome feature selection focusing on a priori usefulness, dynamic range, and redundancy of features. Subsequently, we performed univariate and multivariate machine learning analyses to predict CAC scores from i) selected radiomic features, ii) 10 clinical features, and iii) combined radiomic + clinical features. Univariate analysis invoked Spearman correlation with Benjamini-Hochberg false-discovery correction.
The multivariate analysis incorporated stepwise linear regression, where we randomly selected a 15% test set and divided the remaining 85% of the data into 70% training and 30% validation sets. Training started from a constant (intercept) model, iteratively adding or removing features (stepwise regression) and invoking the Akaike information criterion (AIC) to discourage overfitting. Validation was run similarly, except that the training output model was used as the initial model. We randomized the training/validation sets 20 times, selecting the best model by log-likelihood for evaluation in the test set. Assessment in the test set was performed thoroughly by running the entire operation 50 times, subsequently employing Fisher's method to verify the significance of the independent tests. RESULTS Unsupervised feature selection significantly reduced the 8×487 features to 56. In univariate analysis, no feature survived the false-discovery rate (FDR) correction to directly correlate with CAC scores. Applying Fisher's method to the multivariate regression results demonstrated that combining radiomics with the clinical features enhances the significance of the prediction model across all cardiac segments. CONCLUSIONS Our standardized and statistically robust multivariate analysis demonstrated significant prediction of the CAC score for all cardiac segments when combining MPSS radiomic features with clinical features, suggesting that radiomics analysis can add diagnostic or prognostic value to standard MPSS for wide clinical usage.
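The stepwise regression with AIC described above can be illustrated with a minimal pure-Python sketch (forward selection only, whereas the study also removes features; all names are hypothetical, and the OLS solver is a plain normal-equations implementation):

```python
import math

def ols_rss(X, y):
    """Ordinary least squares via the normal equations (Gaussian elimination
    with partial pivoting); returns the residual sum of squares."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    b = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return sum((y[i] - sum(X[i][c] * beta[c] for c in range(p))) ** 2 for i in range(n))

def aic(rss, n, k):
    """Gaussian AIC up to a constant: n*log(RSS/n) + 2k penalizes extra features."""
    return n * math.log(rss / n + 1e-12) + 2 * k

def forward_stepwise(features, y):
    """Start from the intercept-only model and greedily add the feature that
    most lowers AIC, stopping when no addition improves it."""
    n = len(y)
    chosen = []
    def design(names):
        return [[1.0] + [features[f][i] for f in names] for i in range(n)]
    best = aic(ols_rss(design(chosen), y), n, 1)
    improved = True
    while improved:
        improved = False
        for f in sorted(set(features) - set(chosen)):
            cand = aic(ols_rss(design(chosen + [f]), y), n, len(chosen) + 2)
            if cand < best - 1e-9:
                best, pick, improved = cand, f, True
        if improved:
            chosen.append(pick)
    return chosen
```

Because AIC adds a 2k penalty per parameter, an irrelevant feature that barely reduces the residual is rejected, which is how the procedure discourages overfitting.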
Affiliation(s)
- Saeed Ashrafinia: Radiology, Johns Hopkins University School of Medicine, Baltimore, USA
- Pejman Dalaie: Radiology, Johns Hopkins University School of Medicine, Baltimore, USA
- Martin G Pomper: Radiology, Johns Hopkins University School of Medicine, Baltimore, USA
- Arman Rahmim: Physics and Astronomy, University of British Columbia, Vancouver, Canada
12
Sethanan K, Pitakaso R, Srichok T, Khonjun S, Weerayuth N, Prasitpuriprecha C, Preeprem T, Jantama SS, Gonwirat S, Enkvetchakul P, Kaewta C, Nanthasamroeng N. Computer-aided diagnosis using embedded ensemble deep learning for multiclass drug-resistant tuberculosis classification. Front Med (Lausanne) 2023; 10:1122222. [PMID: 37441685] [PMCID: PMC10333053] [DOI: 10.3389/fmed.2023.1122222]
Abstract
Introduction This study aims to develop a web application, TB-DRD-CXR, for the categorization of tuberculosis (TB) patients into subgroups based on their level of drug resistance. The application utilizes an ensemble deep learning model that classifies TB strains into five subtypes: drug-sensitive TB (DS-TB), drug-resistant TB (DR-TB), multidrug-resistant TB (MDR-TB), pre-extensively drug-resistant TB (pre-XDR-TB), and extensively drug-resistant TB (XDR-TB). Methods The ensemble deep learning model employed in the TB-DRD-CXR web application incorporates novel fusion techniques, image segmentation, data augmentation, and various learning-rate strategies. The performance of the proposed model is compared with state-of-the-art techniques and standard homogeneous CNN architectures documented in the literature. Results Computational results indicate that the suggested method outperforms existing methods reported in the literature, providing a 4.0%-33.9% increase in accuracy. Moreover, the proposed model demonstrates superior performance compared to standard CNN models, including DenseNet201, NASNetMobile, EfficientNetB7, EfficientNetV2B3, EfficientNetV2M, and ConvNeXtSmall, with accuracy improvements of 28.8%, 93.4%, 2.99%, 48.0%, 4.4%, and 7.6%, respectively. Conclusion The TB-DRD-CXR web application was developed and tested with 33 medical staff. The computational results showed a high accuracy rate of 96.7%, a time-based efficiency (ET) of 4.16 goals/minute, and an overall relative efficiency (ORE) of 100%. The system usability scale (SUS) score of the proposed application is 96.7%, which, based on the previous literature, indicates user satisfaction and a likelihood that users would recommend the TB-DRD-CXR application to others.
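The abstract does not detail the fusion technique; one common way to fuse an ensemble of CNN classifiers, assumed here purely for illustration, is weighted soft voting over the five class-probability vectors:

```python
def soft_vote(member_probs, weights=None):
    """Fuse class-probability vectors from ensemble members by (optionally
    weighted) averaging; returns (winning class index, fused probabilities)."""
    n_members = len(member_probs)
    n_classes = len(member_probs[0])
    weights = weights or [1.0] * n_members
    total = sum(weights)
    fused = [sum(w * m[c] for w, m in zip(weights, member_probs)) / total
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__), fused

# Three hypothetical members scoring the five TB resistance classes
# (DS-TB, DR-TB, MDR-TB, pre-XDR-TB, XDR-TB):
members = [[0.1, 0.2, 0.5, 0.1, 0.1],
           [0.0, 0.1, 0.6, 0.2, 0.1],
           [0.2, 0.2, 0.4, 0.1, 0.1]]
cls, fused = soft_vote(members)
print(cls)  # -> 2, i.e. MDR-TB in this toy ordering
```

Weighting members by their validation accuracy is a frequent refinement of this scheme.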
Affiliation(s)
- Kanchana Sethanan: Department of Industrial Engineering, Faculty of Engineering, Research Unit on System Modelling for Industry, Khon Kaen University, Khon Kaen, Thailand
- Rapeepan Pitakaso: Department of Industrial Engineering, Faculty of Engineering, Artificial Intelligence Optimization SMART Laboratory, Ubon Ratchathani University, Ubon Ratchathani, Thailand
- Thanatkij Srichok: Department of Industrial Engineering, Faculty of Engineering, Artificial Intelligence Optimization SMART Laboratory, Ubon Ratchathani University, Ubon Ratchathani, Thailand
- Surajet Khonjun: Department of Industrial Engineering, Faculty of Engineering, Artificial Intelligence Optimization SMART Laboratory, Ubon Ratchathani University, Ubon Ratchathani, Thailand
- Nantawatana Weerayuth: Department of Mechanical Engineering, Faculty of Engineering, Ubon Ratchathani University, Ubon Ratchathani, Thailand
- Chutinun Prasitpuriprecha: Division of Biopharmacy, Faculty of Pharmaceutical Sciences, Ubon Ratchathani University, Ubon Ratchathani, Thailand
- Thanawadee Preeprem: Division of Biopharmacy, Faculty of Pharmaceutical Sciences, Ubon Ratchathani University, Ubon Ratchathani, Thailand
- Sirima Suvarnakuta Jantama: Division of Biopharmacy, Faculty of Pharmaceutical Sciences, Ubon Ratchathani University, Ubon Ratchathani, Thailand
- Sarayut Gonwirat: Department of Industrial Engineering, Faculty of Engineering, Artificial Intelligence Optimization SMART Laboratory, Ubon Ratchathani University, Ubon Ratchathani, Thailand; Department of Computer Engineering and Automation, Faculty of Engineering, Kalasin University, Kalasin, Thailand
- Prem Enkvetchakul: Department of Industrial Engineering, Faculty of Engineering, Artificial Intelligence Optimization SMART Laboratory, Ubon Ratchathani University, Ubon Ratchathani, Thailand; Department of Information Technology, Faculty of Sciences, Buriram Rajabhat University, Buriram, Thailand
- Chutchai Kaewta: Department of Industrial Engineering, Faculty of Engineering, Artificial Intelligence Optimization SMART Laboratory, Ubon Ratchathani University, Ubon Ratchathani, Thailand; Department of Computer Science, Faculty of Computer Sciences, Ubon Ratchathani Rajabhat University, Ubon Ratchathani, Thailand
- Natthapong Nanthasamroeng: Department of Industrial Engineering, Faculty of Engineering, Artificial Intelligence Optimization SMART Laboratory, Ubon Ratchathani University, Ubon Ratchathani, Thailand; Department of Engineering Technology, Faculty of Industrial Technology, Ubon Ratchathani Rajabhat University, Ubon Ratchathani, Thailand
13
Jiang J, Qiu J, Yin J, Wang J, Jiang X, Yi Z, Chen Y, Zhou X, Sima X. Automated detection of hippocampal sclerosis using real-world clinical MRI images. Front Neurosci 2023; 17:1180679. [PMID: 37255750] [PMCID: PMC10225575] [DOI: 10.3389/fnins.2023.1180679]
Abstract
Background Hippocampal sclerosis (HS) is the most common pathological type of temporal lobe epilepsy (TLE) and one of its important surgical markers. Currently, HS is mainly diagnosed manually by radiologists based on visual inspection of MRI, which relies greatly on MRI quality and physician experience. In clinical practice, non-thin-slice MRI scans are often used because of the time and efficiency needed for acquisition. However, these scans can be difficult for junior physicians to interpret accurately. Thus, the rapid and accurate diagnosis of HS using real-world MRI images in clinical settings is a challenging task. Objective Our aim was to explore the feasibility of using computer vision methods to diagnose HS on real-world clinical MRI images and to provide a reference for future clinical applications of artificial intelligence methods to aid in detecting HS. Methods We proposed a deep learning algorithm called "HS-Net" to discriminate HS using real-world clinical MRI images. First, we delineated and segmented a region of interest (ROI) around the hippocampus. Then, we utilized the fractional differential (FD) method to enhance the textures of the ROIs. Finally, we used a small-sample image classification method based on transfer learning to fine-tune the feature-extraction part of a pretrained model, adding two fully connected layers and an output layer. In the study, 96 TLE patients with HS confirmed by postoperative pathology and 89 healthy controls were retrospectively enrolled. All subjects were cross-validated, and the models were evaluated for performance, robustness, and clinical utility. Results The HS-Net model achieved an area under the curve (AUC) of 0.894, an accuracy of 82.88%, and an F1-score of 84.08% in the test cohort based on real, routine, clinical T2-weighted fluid-attenuated inversion recovery (FLAIR) MRI images. Additionally, the AUC, accuracy, and F1-score of our model all increased by around 3 percentage points when the inputs were augmented with ROIs whose textures had been enhanced by the FD method. Conclusions Our computational model has the potential to be used for the diagnosis of HS in real clinical MRI images, which could assist physicians, particularly junior physicians, in improving the accuracy of discrimination.
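The fractional differential (FD) texture enhancement can be illustrated with a one-dimensional Grünwald-Letnikov sketch; 2-D image implementations apply such masks along several directions, and the order v and mask length used here are illustrative assumptions, not the paper's parameters:

```python
def gl_coefficients(v, n):
    """First n Grünwald-Letnikov coefficients of fractional order v:
    c_0 = 1, c_k = c_{k-1} * (k - 1 - v) / k."""
    coeffs = [1.0]
    for k in range(1, n):
        coeffs.append(coeffs[-1] * (k - 1 - v) / k)
    return coeffs

def fractional_diff_1d(signal, v=0.5, taps=5):
    """Apply the GL fractional-differential mask along one row of pixels;
    the first taps-1 samples are left unchanged for simplicity."""
    c = gl_coefficients(v, taps)
    out = list(signal)
    for i in range(taps - 1, len(signal)):
        out[i] = sum(c[k] * signal[i - k] for k in range(taps))
    return out
```

Because the mask coefficients do not sum to zero (unlike an integer-order derivative), flat regions are attenuated rather than zeroed while edges are amplified, which is why fractional differentiation is used for texture enhancement.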
Affiliation(s)
- Jingwen Jiang: Department of Neurosurgery and West China Biomedical Big Data Center, West China Hospital of Sichuan University, Chengdu, China; Med-X Center for Informatics, Sichuan University, Chengdu, China
- Jiajun Qiu: Department of Neurosurgery and West China Biomedical Big Data Center, West China Hospital of Sichuan University, Chengdu, China; Med-X Center for Informatics, Sichuan University, Chengdu, China
- Jin Yin: Department of Neurosurgery and West China Biomedical Big Data Center, West China Hospital of Sichuan University, Chengdu, China; Med-X Center for Informatics, Sichuan University, Chengdu, China
- Junren Wang: Department of Neurosurgery and West China Biomedical Big Data Center, West China Hospital of Sichuan University, Chengdu, China; Med-X Center for Informatics, Sichuan University, Chengdu, China
- Xinyue Jiang: Department of Radiology, Chengdu Second People's Hospital, Chengdu, China
- Zuo Yi: Department of Computer Science and Technology, College of Computer Science, Sichuan University, Chengdu, China
- Yang Chen: Department of Neurosurgery and West China Biomedical Big Data Center, West China Hospital of Sichuan University, Chengdu, China
- Xiaobo Zhou: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Xiutian Sima: Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
14
Xu D, Xu Q, Nhieu K, Ruan D, Sheng K. An Efficient and Robust Method for Chest X-ray Rib Suppression That Improves Pulmonary Abnormality Diagnosis. Diagnostics (Basel) 2023; 13:1652. [PMID: 37175044] [PMCID: PMC10177861] [DOI: 10.3390/diagnostics13091652]
Abstract
BACKGROUND Suppression of thoracic bone shadows on chest X-rays (CXRs) can improve the diagnosis of pulmonary disease. Previous approaches can be categorized as either unsupervised physical models or supervised deep learning models. Physical models can remove the entire ribcage and preserve the morphological lung details but are impractical due to their extremely long processing time. Machine learning (ML) methods are computationally efficient but are limited by the available ground truth (GT) for effective and robust training, resulting in suboptimal results. PURPOSE To improve bone shadow suppression, we propose a generalizable yet efficient workflow for CXR rib suppression that combines physical and ML methods. MATERIALS AND METHODS Our pipeline consists of two stages: (1) pair generation, with GT bone shadows eliminated by a physical model operating in spatially transformed gradient fields; and (2) a fully supervised image-denoising network trained on the stage-one datasets for fast rib removal from incoming CXRs. For stage two, we designed a densely connected network called SADXNet, combined with a peak signal-to-noise ratio (PSNR) and multi-scale structural similarity index measure (MS-SSIM) loss function, to suppress the bony structures. SADXNet organizes its spatial filters in a U shape and preserves the feature-map dimension throughout the network flow. RESULTS Visually, SADXNet can suppress the rib edges near the lung wall/vertebra without compromising vessel/abnormality conspicuity. Quantitatively, it achieves an RMSE of ~0 compared with the GTs generated by the physical model, producing each prediction in <1 s at test time. Downstream tasks, including lung nodule detection as well as common lung disease classification and localization, are used to provide task-specific evaluations of our rib suppression mechanism. We observed AUC increases of 3.23% and 6.62%, as well as absolute false-positive decreases of 203 (1273 to 1070) and 385 (3029 to 2644), for lung nodule detection and common lung disease localization, respectively. CONCLUSION By learning from image pairs generated by the physical model, the proposed SADXNet can make robust sub-second predictions without losing fidelity. Quantitative outcomes from the downstream validation further underpin the superiority of SADXNet and of training ML-based rib suppression approaches on the dataset yielded by the physical model. The training images and SADXNet are provided in the manuscript.
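The PSNR component of SADXNet's loss can be sketched in plain Python; the MS-SSIM term is omitted for brevity, and in training one would maximize PSNR (e.g., by minimizing its negative). This is an illustrative sketch of the metric, not the paper's implementation:

```python
import math

def mse(a, b):
    """Mean squared error between two flattened images of equal length."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means the predicted
    bone-suppressed image is closer to the ground truth."""
    m = mse(pred, target)
    if m == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / m)
```

An RMSE of ~0, as reported above, corresponds to PSNR diverging toward infinity, which is why PSNR is a sensitive training signal near convergence.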
Affiliation(s)
- Di Xu: Department of Radiation Oncology, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Qifan Xu: Department of Radiation Oncology, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Kevin Nhieu: Department of Radiation Oncology, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Dan Ruan: Department of Radiation Oncology, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Ke Sheng: Department of Radiation Oncology, University of California at San Francisco, San Francisco, CA 94115, USA
15
Papadomanolakis TN, Sergaki ES, Polydorou AA, Krasoudakis AG, Makris-Tsalikis GN, Polydorou AA, Afentakis NM, Athanasiou SA, Vardiambasis IO, Zervakis ME. Tumor Diagnosis against Other Brain Diseases Using T2 MRI Brain Images and CNN Binary Classifier and DWT. Brain Sci 2023; 13:348. [PMID: 36831891] [PMCID: PMC9954603] [DOI: 10.3390/brainsci13020348]
Abstract
PURPOSE Brain tumors are diagnosed and classified manually and noninvasively by radiologists using Magnetic Resonance Imaging (MRI) data. The risk of misdiagnosis exists due to human factors such as lack of time, fatigue, and relatively low experience. Deep learning methods have become increasingly important in MRI classification. To improve diagnostic accuracy, researchers emphasize the need to develop Computer-Aided Diagnosis (CAD) systems based on artificial intelligence (AI), using deep learning methods such as convolutional neural networks (CNN) and improving CNN performance by combining them with other data-analysis tools such as the wavelet transform. In this study, a novel diagnostic framework based on CNN and discrete wavelet transform (DWT) data analysis is developed for the diagnosis of glioma tumors in the brain, among other tumors and diseases, using T2-SWI MRI scans. It is a binary CNN classifier that treats the disease "glioma tumor" as positive and the other pathologies as negative, resulting in a very unbalanced binary problem. The study includes a comparative analysis of a CNN trained on the wavelet-transform data of the MRIs instead of their pixel intensity values, in order to demonstrate the increased performance of combined CNN and DWT analysis in diagnosing brain gliomas. The results of the proposed CNN architecture are also compared with a deep CNN pre-trained via transfer learning on the VGG16 network and with the SVM machine learning method using DWT knowledge. METHODS To improve the accuracy of the CNN classifier, the proposed CNN model uses as its input the spatial and temporal features extracted by converting the original MRI images to the frequency domain through Discrete Wavelet Transformation (DWT), instead of the traditionally used original scans in the form of pixel intensities. Moreover, no pre-processing was applied to the original images. The images used are T2-SWI MRI sequences parallel to the axial plane.
Firstly, a compression step is applied to each MRI scan by performing DWT up to three levels of decomposition. These data are used to train a 2D CNN to classify the scans as showing glioma or not. The proposed CNN model is trained on MRI slices originating from 382 male and female adult patients, including healthy and pathological images covering a selection of conditions (glioma, meningioma, pituitary tumor, necrosis, edema, non-enhancing tumor, hemorrhagic foci, ischemic changes, cystic areas, etc.). The images are provided by the Medical Image Computing and Computer-Assisted Intervention (MICCAI) Brain Tumor Segmentation (BraTS) challenges 2016 and 2017, by the Ischemic Stroke Lesion Segmentation (ISLES) challenge, and by the numerous records kept at the public general hospital of Chania, Crete, "Saint George". RESULTS The proposed frameworks are experimentally evaluated on MRI slices originating from 190 different patients (not included in the training set), of which 56% show gliomas with the longest two axes less than 2 cm and 44% show other pathological effects or healthy cases. The results show convincing performance when the features extracted from the DWT decomposition are used instead of the original scans. With the proposed CNN model and data in DWT format, we achieved the following statistics: accuracy 0.97, sensitivity (recall) 1, specificity 0.93, precision 0.95, FNR 0, and FPR 0.07. These numbers are higher for this data format (accuracy by 6%, recall by 11%, specificity by 7%, precision by 5%, FNR by 0.1%, and FPR the same) than they would be had we used the intensity values of the MRIs as input data instead of their DWT analysis.
Additionally, our study showed that when our CNN uses transfer learning from the existing VGG network, the performance values are lower: accuracy 0.87, sensitivity (recall) 0.91, specificity 0.84, precision 0.86, FNR 0.08, and FPR 0.14. CONCLUSIONS The experimental results show the outperformance of the CNN that is not based on transfer learning but instead uses the MRI brain scans decomposed into DWT information rather than the pixel intensities of the original scans. The results are promising for the proposed DWT-based CNN to serve for binary diagnosis of glioma tumors among other tumors and diseases. Moreover, the SVM learning model using DWT data analysis performs with higher accuracy and sensitivity than when using pixel values.
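A three-level DWT compression like the one described above can be illustrated with an orthonormal Haar transform in one dimension; the abstract does not name the wavelet, so Haar is an assumption chosen for simplicity (2-D image DWT applies the same step along rows and columns):

```python
def haar_step(signal):
    """One Haar DWT level: orthonormal pairwise averages (approximation)
    and differences (detail). Input length must be even."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_multilevel(signal, levels=3):
    """Decompose the approximation band recursively up to `levels`; returns
    the final approximation plus the detail bands, coarsest first. Each level
    halves the approximation length, the compression effect used above."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.insert(0, d)
    return approx, details
```

After three levels an 8-sample row is summarized by one approximation coefficient plus 1 + 2 + 4 detail coefficients, so the representation is the same size as the input but concentrates energy in few coefficients, which is what makes it an effective CNN input.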
Affiliation(s)
- Eleftheria S. Sergaki: School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Greece (corresponding author)
- Andreas A. Polydorou: Areteio Hospital, 2nd University Department of Surgery, Medical School, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Alexios A. Polydorou: Medical School, National and Kapodistrian University of Athens, 11528 Athens, Greece
- Nikolaos M. Afentakis: Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
- Sofia A. Athanasiou: Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
- Ioannis O. Vardiambasis: Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece (corresponding author)
- Michail E. Zervakis: School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Greece
16
Cantone M, Marrocco C, Tortorella F, Bria A. Convolutional Networks and Transformers for Mammography Classification: An Experimental Study. Sensors (Basel) 2023; 23:1229. [PMID: 36772268] [PMCID: PMC9921468] [DOI: 10.3390/s23031229]
Abstract
Convolutional Neural Networks (CNN) have received a large share of research in mammography image analysis due to their capability of extracting hierarchical features directly from raw data. Recently, Vision Transformers have been emerging as a viable alternative to CNNs in medical imaging, in some cases performing on par with or better than their convolutional counterparts. In this work, we conduct an extensive experimental study to compare the most recent CNN and Vision Transformer architectures for whole-mammogram classification. We selected, trained, and tested 33 different models, 19 convolutional and 14 transformer-based, on the largest publicly available mammography image database, OMI-DB. We also analyzed performance at eight different image resolutions and considered each individual lesion category in isolation (masses, calcifications, focal asymmetries, architectural distortions). Our findings confirm the potential of Vision Transformers, which performed on par with traditional CNNs like ResNet, but at the same time show the superiority of modern convolutional networks like EfficientNet.
Affiliation(s)
- Marco Cantone: Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
- Claudio Marrocco: Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
- Francesco Tortorella: Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, SA, Italy
- Alessandro Bria: Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
17
Alam M, Zhao EJ, Lam CK, Rubin DL. Segmentation-Assisted Fully Convolutional Neural Network Enhances Deep Learning Performance to Identify Proliferative Diabetic Retinopathy. J Clin Med 2023; 12. [PMID: 36615186] [DOI: 10.3390/jcm12010385]
Abstract
With the progression of diabetic retinopathy (DR) from the non-proliferative (NPDR) to the proliferative (PDR) stage, the possibility of vision impairment increases significantly. Therefore, it is clinically important to detect the progression to the PDR stage for proper intervention. We propose a segmentation-assisted DR classification methodology that builds on (and improves) current methods by using a fully convolutional network (FCN) to segment retinal neovascularizations (NV) in retinal images prior to image classification. This study utilizes the Kaggle EyePACS dataset, containing retinal photographs from patients with varying degrees of DR (mild, moderate, and severe NPDR, and PDR). Two graders (a board-certified ophthalmologist and a trained medical student) annotated the NV. Segmentation was performed by training an FCN to locate neovascularization on 669 retinal fundus photographs labeled with PDR status according to NV presence. The trained segmentation model was then used to locate probable NV in images from the classification dataset. Finally, a CNN was trained to classify the combined images and probability maps into categories of PDR. The mean accuracy of segmentation-assisted classification was 87.71% on the test set (SD = 7.71%). Segmentation-assisted classification of PDR achieved accuracy 7.74% better than classification alone. Our study shows that segmentation assistance improves identification of the most severe stage of diabetic retinopathy and has the potential to improve deep learning performance in other imaging problems with limited data availability.
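The abstract says the CNN classifies "the combined images and probability maps"; one common realization, assumed here for illustration, is to append the segmentation probability map to the image as an extra input channel:

```python
def stack_probability_channel(image, prob_map):
    """Combine a grayscale image with a same-sized NV probability map into a
    two-channel input, so the classifier sees both raw pixels and candidate
    neovascularization locations. Inputs are 2-D lists of floats."""
    assert len(image) == len(prob_map) and len(image[0]) == len(prob_map[0])
    return [[[image[r][c], prob_map[r][c]] for c in range(len(image[0]))]
            for r in range(len(image))]
```

Feeding the map as a channel lets the downstream CNN weigh segmentation evidence per pixel instead of as a single scalar score.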
18
Guo J, Cao W, Nie B, Qin Q. Unsupervised Learning Composite Network to Reduce Training Cost of Deep Learning Model for Colorectal Cancer Diagnosis. IEEE J Transl Eng Health Med 2022; 11:54-59. [PMID: 36544891] [PMCID: PMC9762730] [DOI: 10.1109/jtehm.2022.3224021]
Abstract
Deep learning facilitates complex medical data analysis and is increasingly being explored in colorectal cancer diagnostics. However, the training cost of deep learning models limits their real-world medical utility. In this study, we present a composite network that combines deep learning with the unsupervised K-means clustering algorithm (RK-net) for automatic processing of medical images. RK-net was more efficient at image refinement than manual screening and annotation. The training of a deep learning model for colorectal cancer diagnosis was accelerated twofold using RK-net-processed images, with better training loss and accuracy achievement as well. RK-net could be useful for refining the ever-expanding quantity of medical images and assisting in the subsequent construction of artificial intelligence models.
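The unsupervised K-means component of RK-net can be illustrated with a minimal pure-Python version clustering scalar features (e.g., pixel intensities); the initialization, feature choice, and iteration count are illustrative assumptions, not the authors' configuration:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain K-means on scalar values: repeatedly assign each point to its
    nearest center, then move each center to its cluster mean.
    Returns the centers sorted ascending."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # keep an empty cluster's center where it was
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

Because no labels are needed, this kind of clustering can pre-partition or refine raw images cheaply before the (label-hungry) deep model ever sees them, which is where RK-net's training-cost savings come from.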
Affiliation(s)
- Jirui Guo: Department of Colorectal Surgery, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou 510655, China
- Wuteng Cao: Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou 510655, China
- Bairun Nie: School of Electrical, Computer and Telecommunications Engineering, University of Wollongong, Wollongong, NSW 2522, Australia
- Qiyuan Qin: Department of Colorectal Surgery, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou 510655, China
19
Zheng H, Xiao Z, Luo S, Wu S, Huang C, Hong T, He Y, Guo Y, Du G. Improve follicular thyroid carcinoma diagnosis using computer aided diagnosis system on ultrasound images. Front Oncol 2022; 12:939418. [PMID: 36465352] [PMCID: PMC9709400] [DOI: 10.3389/fonc.2022.939418]
Abstract
OBJECTIVE We aim to leverage deep learning to develop a computer-aided diagnosis (CAD) system to help radiologists diagnose follicular thyroid carcinoma (FTC) on thyroid ultrasonography. METHODS A dataset of 1159 images, consisting of 351 images from 138 FTC patients and 808 images from 274 patients with benign follicular-pattern nodules, was divided into a balanced and an unbalanced dataset and used to train and test the CAD system, which is based on transfer learning with a residual network. Six radiologists participated in the experiments to verify whether, and by how much, the proposed CAD system helps improve their performance. RESULTS On the balanced dataset, the CAD system achieved an area under the ROC curve (AUC) of 0.892. The accuracy, recall, precision, and F1-score of the CAD method were 84.66%, 84.66%, 84.77%, and 84.65%, while those of the junior and senior radiologists were 56.82%, 56.82%, 56.95%, 56.62% and 64.20%, 64.20%, 64.35%, 64.11%, respectively. With the help of CAD, the metrics of the junior and senior radiologists improved to 62.81%, 62.81%, 62.85%, 62.79% and 73.86%, 73.86%, 74.00%, 73.83%. These results were largely reproduced on the unbalanced dataset. They show that the proposed CAD approach can not only achieve better performance than the radiologists but also significantly improve the radiologists' diagnosis of FTC. CONCLUSIONS The performance of the CAD system indicates that it is a reliable reference for the preoperative diagnosis of FTC and might assist the development of a fast, accessible screening method for FTC.
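The accuracy, recall, precision, and F1 comparisons above all derive from confusion counts; a minimal helper shows the standard definitions (the counts in the example are made up for illustration):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, recall (sensitivity), precision, and F1-score from the four
    confusion-matrix counts of a binary classifier."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return {"accuracy": accuracy, "recall": recall,
            "precision": precision, "f1": f1}

# Hypothetical counts: 8 true positives, 2 false positives,
# 85 true negatives, 5 false negatives out of 100 cases.
print(classification_metrics(8, 2, 85, 5)["accuracy"])  # -> 0.93
```

Note that on a balanced dataset these four numbers can nearly coincide, which matches the closely clustered percentages reported above.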
Affiliation(s)
- Huan Zheng: Department of Ultrasound, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Zebin Xiao: Department of Pathology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Siwei Luo: Department of Ultrasound, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Suqing Wu: Department of Ultrasound, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Chuxin Huang: Department of Ultrasound, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Tingting Hong: Department of Ultrasound, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Yan He: Department of Ultrasound, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Yanhui Guo: Department of Computer Science, University of Illinois Springfield, Springfield, IL, United States
- Guoqing Du: Department of Ultrasound, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
20
Meddeb A, Kossen T, Bressem KK, Molinski N, Hamm B, Nagel SN. Two-Stage Deep Learning Model for Automated Segmentation and Classification of Splenomegaly. Cancers (Basel) 2022; 14:5476. [PMID: 36428569] [DOI: 10.3390/cancers14225476]
Abstract
Splenomegaly is a common cross-sectional imaging finding with a variety of differential diagnoses. This study aimed to evaluate whether a deep learning model could automatically segment the spleen and identify the cause of splenomegaly in patients with cirrhotic portal hypertension versus patients with lymphoma. This retrospective study included 149 patients with splenomegaly on computed tomography (CT) images (77 patients with cirrhotic portal hypertension, 72 patients with lymphoma) who underwent a CT scan between October 2020 and July 2021. The dataset was divided into a training (n = 99), a validation (n = 25), and a test cohort (n = 25). In the first stage, the spleen was automatically segmented using a modified U-Net architecture. In the second stage, the CT images were classified into two groups using a 3D DenseNet to discriminate between the causes of splenomegaly, first using the whole abdominal CT and second using only the spleen segmentation mask. Classification performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE). Occlusion sensitivity maps were applied to the whole abdominal CT images to illustrate which regions were important for the prediction. When trained on the whole abdominal CT volume, the DenseNet was able to differentiate between lymphoma and liver cirrhosis in the test cohort with an AUC of 0.88 and an ACC of 0.88. When the model was trained on the spleen segmentation mask, the performance decreased (AUC = 0.81, ACC = 0.76). Our model was able to accurately segment splenomegaly and recognize the underlying cause. Training on whole-abdomen scans outperformed training on the segmentation mask. Nonetheless, considering the performance, a broader and more general application to differentiate other causes of splenomegaly is also conceivable.
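The occlusion sensitivity maps used above can be sketched generically: slide an occluding patch across the input and record how much the model's score drops. Here `score_fn` is only a stand-in for a trained classifier such as the 3D DenseNet, and all names are illustrative:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4, stride=4, fill=0.0):
    """Slide an occluding patch over the image and record the drop in
    the model's score; large drops mark regions the model relies on."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i * stride:i * stride + patch,
                     j * stride:j * stride + patch] = fill
            heat[i, j] = base - score_fn(occluded)
    return heat
```

For a real 3D volume the same loop would run over three axes; the 2D version above keeps the idea readable.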
21
Liu Y, Zhu Y, Wang W, Zheng B, Qin X, Wang P. Multi-scale discriminative network for prostate cancer lesion segmentation in multiparametric MR images. Med Phys 2022; 49:7001-7015. [PMID: 35851482] [DOI: 10.1002/mp.15861]
Abstract
PURPOSE The accurate and reliable segmentation of prostate cancer (PCa) lesions using multiparametric magnetic resonance imaging (mpMRI) sequences is crucial to the image-guided intervention and treatment of prostate disease. For PCa lesion segmentation, it is essential to reliably combine local and global information to retain the features of small targets at multiple scales. Therefore, this study proposes a multi-scale segmentation network with a cascading pyramid convolution module (CPCM) and a double-input channel attention module (DCAM) for the automated and accurate segmentation of PCa lesions using mpMRI. METHODS First, the region of interest was extracted from the data by clipping to enlarge the target region and reduce background noise interference. Next, four CPCMs with large convolution kernels in their skip-connection paths were designed to improve the feature extraction capability of the network for small targets. At the same time, a convolution decomposition was applied to reduce the computational complexity. Finally, the DCAM was adopted in the decoder to provide bottom-up semantic discriminative guidance; it uses the semantic information of the network's deep features to guide the shallow output of features with a higher discriminant ability. A residual refinement module (RRM) was also designed to strengthen the recognition ability of each stage; the feature maps of both the skip connections and the decoder pass through the RRM. RESULTS For the Initiative for Collaborative Computer Vision Benchmarking (I2CVB) dataset, our proposed model achieved a Dice similarity coefficient (DSC) of 79.31% and an average boundary distance (ABD) of 4.15 mm. For the Prostate Multiparametric MRI (PROMM) dataset, our method greatly improved the DSC to 82.11% and obtained an ABD of 3.64 mm. CONCLUSIONS The experimental results on two different mpMRI prostate datasets demonstrate that our model is more accurate and reliable on small targets. In addition, it outperforms other state-of-the-art methods.
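The Dice similarity coefficient (DSC) reported above measures the overlap between a predicted lesion mask and the ground truth; a minimal NumPy sketch:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ truth| / (|pred| + |truth|), in [0, 1]."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```

The `eps` term only guards against division by zero when both masks are empty.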
Affiliation(s)
- Yatong Liu: School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Yu Zhu: School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China; Shanghai Engineering Research Center of Internet of Things for Respiratory Medicine, Shanghai, P. R. China
- Wei Wang: Department of Radiology, Tongji Hospital, Tongji University School of Medicine, Shanghai, P. R. China
- Bingbing Zheng: School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Xiangxiang Qin: School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Peijun Wang: Department of Radiology, Tongji Hospital, Tongji University School of Medicine, Shanghai, P. R. China
22
Termine A, Fabrizio C, Caltagirone C, Petrosini L. A Reproducible Deep-Learning-Based Computer-Aided Diagnosis Tool for Frontotemporal Dementia Using MONAI and Clinica Frameworks. Life (Basel) 2022; 12:947. [PMID: 35888037] [PMCID: PMC9323676] [DOI: 10.3390/life12070947]
Abstract
Despite Artificial Intelligence (AI) being a leading technology in biomedical research, real-life implementation of AI-based Computer-Aided Diagnosis (CAD) tools in the clinical setting is still remote due to unstandardized practices during development. Moreover, few or no attempts have been made to propose a reproducible CAD development workflow for 3D MRI data. In this paper, we present the development of an easily reproducible and reliable CAD tool using the Clinica and MONAI frameworks, which were developed to introduce standardized practices in medical imaging. A Deep Learning (DL) algorithm was trained to detect frontotemporal dementia (FTD) on data from the NIFD database to ensure reproducibility. The DL model yielded an accuracy of 0.80 (95% confidence interval: 0.64-0.91), a sensitivity of 1.00, a specificity of 0.60, an F1-score of 0.83, and an AUC of 0.86, achieving performance comparable with other FTD classification approaches. Explainable AI methods were applied to understand the model's behavior and to identify the image regions where the DL model errs. Attention maps highlighted that its decisions were driven by hallmark brain areas of FTD and helped us understand how to improve FTD detection. The proposed standardized methodology could be useful for benchmark comparison in FTD classification. AI-based CAD tools should be developed with the goal of standardizing pipelines, as varying pre-processing and training methods, along with the absence of model-behavior explanations, negatively impact regulators' attitudes towards CAD. The adoption of common best practices for neuroimaging data analysis is a step toward fast evaluation of the efficacy and safety of CAD and may accelerate the adoption of AI products in the healthcare system.
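A 95% confidence interval around a test-set accuracy, such as the 0.80 (0.64, 0.91) above, is typically derived from the binomial distribution. One common choice is the Wilson score interval; the paper's exact method is not stated here, so this is only an illustrative sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion,
    e.g. classification accuracy on a test set of n cases; z=1.96
    gives the usual 95% interval."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half
```

Unlike the naive normal approximation, the Wilson interval stays inside [0, 1] even for proportions near 0 or 1, which matters for small test sets.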
Affiliation(s)
- Andrea Termine: Data Science Unit, IRCCS Santa Lucia Foundation, 00143 Rome, Italy
- Carlo Fabrizio: Data Science Unit, IRCCS Santa Lucia Foundation, 00143 Rome, Italy
- Carlo Caltagirone: Department of Clinical and Behavioral Neurology, IRCCS Santa Lucia Foundation, 00179 Rome, Italy
- Laura Petrosini: Experimental and Behavioral Neurophysiology, IRCCS Santa Lucia Foundation, 00143 Rome, Italy
23
Pavlova M, Terhljan N, Chung AG, Zhao A, Surana S, Aboutalebi H, Gunraj H, Sabri A, Alaref A, Wong A. COVID-Net CXR-2: An Enhanced Deep Convolutional Neural Network Design for Detection of COVID-19 Cases From Chest X-ray Images. Front Med (Lausanne) 2022; 9:861680. [PMID: 35755067] [PMCID: PMC9226387] [DOI: 10.3389/fmed.2022.861680]
Abstract
As the COVID-19 pandemic devastates globally, the use of chest X-ray (CXR) imaging as a complementary screening strategy to RT-PCR testing continues to grow, given its routine clinical use for respiratory complaints. As part of the COVID-Net open-source initiative, we introduce COVID-Net CXR-2, an enhanced deep convolutional neural network design for COVID-19 detection from CXR images, built using a greater quantity and diversity of patients than the original COVID-Net. We also introduce a new benchmark dataset composed of 19,203 CXR images from a multinational cohort of 16,656 patients from at least 51 countries, making it the largest, most diverse COVID-19 CXR dataset in open access form. The COVID-Net CXR-2 network achieves a sensitivity and positive predictive value of 95.5% and 97.0%, respectively, and was audited in a transparent and responsible manner. Explainability-driven performance validation was used during auditing to gain deeper insights into its decision-making behavior and to ensure clinically relevant factors are leveraged for improving trust in its usage. Radiologist validation was also conducted, where select cases were reviewed and reported on by two board-certified radiologists with over 10 and 19 years of experience, respectively, and showed that the critical factors leveraged by COVID-Net CXR-2 are consistent with radiologist interpretations.
Affiliation(s)
- Maya Pavlova: Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Naomi Terhljan: Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Audrey G. Chung: Waterloo AI Institute, University of Waterloo, Waterloo, ON, Canada; DarwinAI Corp., Waterloo, ON, Canada
- Andy Zhao: Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Siddharth Surana: Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, Canada
- Hossein Aboutalebi: Waterloo AI Institute, University of Waterloo, Waterloo, ON, Canada; Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, Canada
- Hayden Gunraj: Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Ali Sabri: Department of Radiology, McMaster University, Hamilton, ON, Canada; Niagara Health System, St. Catharines, ON, Canada
- Amer Alaref: Department of Diagnostic Imaging, Northern Ontario School of Medicine, Thunder Bay, ON, Canada; Department of Diagnostic Radiology, Thunder Bay Regional Health Sciences Centre, Thunder Bay, ON, Canada
- Alexander Wong: Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo AI Institute, University of Waterloo, Waterloo, ON, Canada; DarwinAI Corp., Waterloo, ON, Canada
24
D’Antoni F, Russo F, Ambrosio L, Bacco L, Vollero L, Vadalà G, Merone M, Papalia R, Denaro V. Artificial Intelligence and Computer Aided Diagnosis in Chronic Low Back Pain: A Systematic Review. Int J Environ Res Public Health 2022; 19:5971. [PMID: 35627508] [PMCID: PMC9141006] [DOI: 10.3390/ijerph19105971]
Abstract
Low Back Pain (LBP) is currently the leading cause of disability in the world, with a significant socioeconomic burden. Diagnosis and treatment of LBP often involve a multidisciplinary, individualized approach consisting of several outcome measures and imaging data along with emerging technologies. The increased amount of data generated in this process has led to the development of methods related to artificial intelligence (AI), and to computer-aided diagnosis (CAD) in particular, which aim to assist and improve the diagnosis and treatment of LBP. In this manuscript, we have systematically reviewed the available literature on the use of CAD in the diagnosis and treatment of chronic LBP. A systematic search of the PubMed, Scopus, and Web of Science electronic databases was performed. The search strategy was set as combinations of the following keywords: “Artificial Intelligence”, “Machine Learning”, “Deep Learning”, “Neural Network”, “Computer Aided Diagnosis”, “Low Back Pain”, “Lumbar”, “Intervertebral Disc Degeneration”, “Spine Surgery”, etc. The search returned a total of 1536 articles. After duplicate removal and evaluation of the abstracts, 1386 were excluded, and a further 93 papers were excluded after full-text examination, taking the number of eligible articles to 57. The main applications of CAD in LBP included classification and regression. Classification is used to identify or categorize a disease, whereas regression is used to produce a numerical output as a quantitative evaluation of some measure. The best-performing systems were developed to diagnose degenerative changes of the spine from imaging data, with average accuracy rates >80%. However, notable outcomes were also reported for CAD tools executing different tasks, including analysis of clinical, biomechanical, electrophysiological, and functional imaging data. Further studies are needed to better define the role of CAD in LBP care.
Affiliation(s)
- Federico D’Antoni: Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo, 21, 00128 Rome, Italy
- Fabrizio Russo (corresponding author): Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo, 200, 00128 Rome, Italy
- Luca Ambrosio: Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo, 200, 00128 Rome, Italy
- Luca Bacco: Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo, 21, 00128 Rome, Italy; ItaliaNLP Lab, Istituto di Linguistica Computazionale “Antonio Zampolli”, National Research Council, Via Giuseppe Moruzzi, 1, 56124 Pisa, Italy; Webmonks S.r.l., Via del Triopio, 5, 00178 Rome, Italy
- Luca Vollero: Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo, 21, 00128 Rome, Italy
- Gianluca Vadalà: Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo, 200, 00128 Rome, Italy
- Mario Merone (corresponding author): Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo, 21, 00128 Rome, Italy
- Rocco Papalia: Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo, 200, 00128 Rome, Italy
- Vincenzo Denaro: Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo, 200, 00128 Rome, Italy
25
Au C, Reeves R, Li Z, Gingold E, Halpern E, Sundaram B. Impact of multidetector computed tomography scan parameters, novel reconstruction settings, and lung nodule characteristics on nodule diameter measurements: A Phantom Study. Med Phys 2022; 49:3936-3943. [PMID: 35358333] [DOI: 10.1002/mp.15639]
Abstract
PURPOSE Novel CT reconstruction techniques strive to maintain image quality and processing efficiency. The purpose of this study is to investigate the impact of a newer hybrid iterative reconstruction technique, Adaptive Statistical Iterative Reconstruction-V (ASIR-V), in combination with various CT scan parameters, on the semi-automated diameter measurement of various lung nodules. METHODS A chest phantom embedded with eight spherical objects was scanned using varying CT parameters, such as tube current and ASIR-V level. We calculated the absolute percentage error (APE) and mean APE (MAPE) from the differences between the semi-automated measured diameters and the known dimensions. Predictive variables were assessed using a multivariable general linear model. The linear regression slope coefficients (β) are reported to demonstrate effect size and directionality. RESULTS The APE of the semi-automated measured diameters was higher in ground-glass than in solid nodules (β = 9.000, p < 0.001). APE had an inverse relationship with nodule diameter (mm; β = -3.499, p < 0.001) and tube current (mA; β = -0.006, p < 0.001). MAPE did not vary based on the ASIR-V level (range: 5.7-13.1%). CONCLUSION Error is dominated by nodule characteristics, with a small effect of tube current. Regardless of phantom size, nodule size accuracy is not affected by tube voltage or ASIR-V level, maintaining accuracy while maximizing radiation dose reduction.
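The APE and MAPE metrics above are simple to state precisely; a sketch of both against known phantom dimensions (illustrative, not the study's code):

```python
def ape(measured, truth):
    """Absolute percentage error of one measurement against its known value."""
    return abs(measured - truth) / truth * 100.0

def mape(measured, truths):
    """Mean absolute percentage error over paired measurements and truths."""
    return sum(ape(m, t) for m, t in zip(measured, truths)) / len(measured)
```

With a phantom, `truths` are the manufactured nodule diameters, so APE isolates measurement error from biological variability.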
Affiliation(s)
- Cherry Au: Department of Internal Medicine, Rush University Medical Center, 1620 W Harrison St, Chicago, IL 60612
- Russell Reeves: Department of Radiology, Thomas Jefferson University Hospital, 111 S 11th St, Philadelphia, PA 19107
- Zhenteng Li: Department of Radiology, Thomas Jefferson University Hospital, 111 S 11th St, Philadelphia, PA 19107; The Vascular Center, St. Luke's Anderson Campus Medical Office Building, 1700 St. Luke's Boulevard, Suite 301, Easton, PA
- Eric Gingold: Department of Radiology, Thomas Jefferson University Hospital, 111 S 11th St, Philadelphia, PA 19107
- Ethan Halpern: Department of Radiology, Thomas Jefferson University Hospital, 111 S 11th St, Philadelphia, PA 19107
- Baskaran Sundaram: Department of Radiology, Thomas Jefferson University Hospital, 111 S 11th St, Philadelphia, PA 19107
26
Stępień P, Kawa J, Sitek EJ, Wieczorek D, Sikorski R, Dąbrowska M, Sławek J, Pietka E. Computer Aided Written Character Feature Extraction in Progressive Supranuclear Palsy and Parkinson's Disease. Sensors (Basel) 2022; 22:1688. [PMID: 35214587] [PMCID: PMC8880639] [DOI: 10.3390/s22041688]
Abstract
Parkinson's disease (PD) and progressive supranuclear palsy (PSP) are neurodegenerative movement disorders associated with cognitive dysfunction. Luria's Alternating Series Test (LAST) is a clinical tool sensitive to both graphomotor problems and perseverative tendencies that may suggest dysfunction of prefrontal and/or frontostriatal areas, and it may be used in PD and PSP assessment. It requires the participant to draw a series of alternating triangles and rectangles. In the study, two clinical groups (51 patients with PD and 22 patients with PSP) were compared to 32 neurologically intact seniors. Participants underwent neuropsychological assessment. The LAST was administered in a paper-and-pencil version, then scanned and preprocessed. The series was automatically divided into characters, and the shapes were recognized as rectangles or triangles. In the feature extraction step, each rectangle and triangle was regarded both as an image and as a two-dimensional signal, separately and as part of the series. Standard and novel features were extracted and normalized using characters written by the examiner. Out of 71 proposed features, 51 differentiated the groups (p < 0.05). A classifier showed an accuracy of 70.5% for distinguishing the three groups.
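Normalizing each extracted feature against the examiner-written reference characters, as described above, could be as simple as a per-feature ratio; the paper's exact normalization scheme is not specified here, so this sketch is only illustrative, and all names are assumptions:

```python
def normalize_features(subject_feats, examiner_feats):
    """Express each feature of a subject-drawn character relative to the
    mean of the same feature over the examiner's reference characters.
    Both arguments are lists of per-character feature vectors."""
    n = len(examiner_feats)
    means = [sum(row[i] for row in examiner_feats) / n
             for i in range(len(examiner_feats[0]))]
    return [[f / m if m else 0.0 for f, m in zip(row, means)]
            for row in subject_feats]
```

The point of such normalization is to cancel scanner and layout effects: a value of 1.0 means "same as the examiner's template", independent of the raw feature's units.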
Affiliation(s)
- Paula Stępień: Faculty of Biomedical Engineering, Silesian University of Technology, 41-800 Zabrze, Poland
- Jacek Kawa (corresponding author): Faculty of Biomedical Engineering, Silesian University of Technology, 41-800 Zabrze, Poland
- Emilia J. Sitek: Division of Neurological and Psychiatric Nursing, Faculty of Health Sciences, Medical University of Gdansk, 80-211 Gdansk, Poland; Department of Neurology, St. Adalbert Hospital, Copernicus PL Ltd., 80-462 Gdansk, Poland
- Dariusz Wieczorek: Department of Rehabilitation, Faculty of Health Sciences, Medical University of Gdansk, 80-219 Gdansk, Poland
- Rafał Sikorski: Department of Rehabilitation, Saint Vincent a Paulo Hospital, Pomeranian Hospitals Ltd., 81-519 Gdynia, Poland
- Magda Dąbrowska: Department of Neurology, St. Adalbert Hospital, Copernicus PL Ltd., 80-462 Gdansk, Poland
- Jarosław Sławek: Division of Neurological and Psychiatric Nursing, Faculty of Health Sciences, Medical University of Gdansk, 80-211 Gdansk, Poland; Department of Neurology, St. Adalbert Hospital, Copernicus PL Ltd., 80-462 Gdansk, Poland
- Ewa Pietka: Faculty of Biomedical Engineering, Silesian University of Technology, 41-800 Zabrze, Poland
27
Mahmood T, Li J, Pei Y, Akhtar F, Imran A, Yaqub M. An Automatic Detection and Localization of Mammographic Microcalcifications ROI with Multi-Scale Features Using the Radiomics Analysis Approach. Cancers (Basel) 2021; 13:5916. [PMID: 34885026] [PMCID: PMC8657253] [DOI: 10.3390/cancers13235916]
Abstract
Microcalcifications in breast tissue can be an early sign of breast cancer and play a crucial role in breast cancer screening. This study proposes a radiomics approach based on advanced machine learning algorithms for diagnosing pathological microcalcifications in mammogram images, providing radiologists with a valuable decision support system. An adaptive enhancement method based on the contourlet transform is proposed to enhance microcalcifications and effectively suppress background and noise. Textural and statistical features are extracted from each wavelet layer's high-frequency coefficients to detect microcalcification regions. The top-hat morphological operator and wavelet transform segment the microcalcifications, yielding their exact locations. Finally, the proposed radiomic fusion algorithm is employed to classify the selected features into benign and malignant. The proposed model's diagnostic performance was evaluated on the MIAS dataset and compared with traditional machine learning models, such as the support vector machine, K-nearest neighbor, and random forest, using different evaluation parameters. Our proposed approach outperformed existing models in diagnosing microcalcifications, achieving an area under the curve of 0.90, a sensitivity of 0.98, and an accuracy of 0.98. The experimental findings concur with expert observations, indicating that the proposed approach is effective and practical for the early diagnosis of breast microcalcifications, substantially improving the work efficiency of physicians.
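The white top-hat operator mentioned above subtracts a morphological opening (erosion followed by dilation) from the image, leaving only bright structures smaller than the structuring element, which is why it highlights small bright spots such as microcalcifications. A plain-NumPy sketch with a flat square element (illustrative, not the authors' implementation):

```python
import numpy as np

def local_filter(x, size, fn):
    """Apply fn (e.g. np.min or np.max) over a size x size sliding window."""
    pad = size // 2
    padded = np.pad(x, pad, mode="edge")
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = fn(padded[i:i + size, j:j + size])
    return out

def white_tophat(image, size=3):
    """White top-hat: image minus its morphological opening. For a flat
    structuring element, erosion is a local min and dilation a local max,
    so the opening removes bright details smaller than the element."""
    opened = local_filter(local_filter(image, size, np.min), size, np.max)
    return image - opened
```

An isolated bright pixel survives the top-hat untouched, while a bright region larger than the element is suppressed to zero, which is exactly the behavior that isolates small calcifications from broad bright tissue.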
Affiliation(s)
- Tariq Mahmood: The School of Software Engineering, Beijing University of Technology, Beijing 100024, China; Division of Science and Technology, University of Education, Lahore 54000, Pakistan
- Jianqiang Li: The School of Software Engineering, Beijing University of Technology, Beijing 100024, China; Beijing Engineering Research Center for IoT Software and Systems, Beijing 100124, China
- Yan Pei (corresponding author): Computer Science Division, University of Aizu, Aizuwakamatsu 965-8580, Japan
- Faheem Akhtar: Department of Computer Science, Sukkur IBA University, Sukkur 65200, Pakistan
- Azhar Imran: Department of Creative Technologies, Air University, Islamabad 44000, Pakistan
- Muhammad Yaqub: The School of Software Engineering, Beijing University of Technology, Beijing 100024, China
28
Combès B, Kerbrat A, Pasquier G, Commowick O, Le Bon B, Galassi F, L'Hostis P, El Graoui N, Chouteau R, Cordonnier E, Edan G, Ferré JC. A Clinically-Compatible Workflow for Computer-Aided Assessment of Brain Disease Activity in Multiple Sclerosis Patients. Front Med (Lausanne) 2021; 8:740248. [PMID: 34805206] [PMCID: PMC8595265] [DOI: 10.3389/fmed.2021.740248]
Abstract
Over the last 10 years, the number of approved disease-modifying drugs acting on the focal inflammatory process in Multiple Sclerosis (MS) has increased from 3 to 10. This wide choice offers the opportunity for personalized medicine, with the objective of no clinical and radiological activity for each patient. This new paradigm requires optimizing the detection of new FLAIR lesions on longitudinal MRI. In this paper, we describe a complete workflow, which we developed, implemented, deployed, and evaluated, to facilitate the monitoring of new FLAIR lesions on longitudinal MRI of MS patients. This workflow has been designed to be usable by both hospital and private neurologists and radiologists in France. It consists of three main components: (i) a software component that allows for automated and secured anonymization and transfer of MRI data from the clinical Picture Archive and Communication System (PACS) to a processing server (and vice versa); (ii) a fully automated segmentation core that enables detection of focal longitudinal changes in patients from T1-weighted, T2-weighted, and FLAIR brain MRI scans; and (iii) a dedicated web viewer that provides an intuitive visualization of new lesions to radiologists and neurologists. We first present these different components. Then, we evaluate the workflow on 54 pairs of longitudinal MRI scans that were analyzed by 3 experts (1 neuroradiologist, 1 radiologist, and 1 neurologist) with and without the proposed workflow. We show that our workflow provided a valuable aid to clinicians in detecting new MS lesions, both in terms of accuracy (mean number of detected lesions per patient and per expert: 1.8 without the workflow vs. 2.3 with the workflow, p = 5 × 10⁻⁴) and of time dedicated by the experts (mean time difference 2′45″, p = 10⁻⁴). This increase in the number of detected lesions has implications for the classification of MS patients as stable or active, even for the most experienced neuroradiologist (mean sensitivity was 0.74 without the workflow and 0.90 with the workflow, p-value for no difference = 0.003). It therefore has potential consequences for the therapeutic management of MS patients.
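Once baseline and follow-up scans are co-registered, the core idea of new-lesion detection can be caricatured as a voxelwise mask comparison. This deliberately minimal sketch is not the workflow's actual segmentation core, and `min_voxels` is an assumed threshold; it also shows how a stable/active label could follow:

```python
import numpy as np

def new_lesion_voxels(baseline, followup):
    """Voxels lesional at follow-up but not at baseline: candidate new
    FLAIR lesions, assuming the two scans are already co-registered."""
    return np.logical_and(followup.astype(bool), ~baseline.astype(bool))

def is_active(baseline, followup, min_voxels=1):
    """Flag a patient as radiologically 'active' if enough new voxels appear."""
    return int(new_lesion_voxels(baseline, followup).sum()) >= min_voxels
```

A real pipeline would additionally group new voxels into connected components and filter by lesion size, but the stable-versus-active decision ultimately rests on a comparison of this kind.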
Affiliation(s)
- Benoit Combès: Univ Rennes, Inria, CNRS, Inserm IRISA UMR 6074, Empenn ERL U 1228, Rennes, France
- Anne Kerbrat: Univ Rennes, Inria, CNRS, Inserm IRISA UMR 6074, Empenn ERL U 1228, Rennes, France; Neurology Department, Rennes University Hospital, Rennes, France
- Olivier Commowick: Univ Rennes, Inria, CNRS, Inserm IRISA UMR 6074, Empenn ERL U 1228, Rennes, France
- Brandon Le Bon: Univ Rennes, Inria, CNRS, Inserm IRISA UMR 6074, Empenn ERL U 1228, Rennes, France
- Francesca Galassi: Univ Rennes, Inria, CNRS, Inserm IRISA UMR 6074, Empenn ERL U 1228, Rennes, France
- Nora El Graoui: Univ Rennes, Inria, CNRS, Inserm IRISA UMR 6074, Empenn ERL U 1228, Rennes, France; CHU Rennes, Department of Neuroradiology, Rennes, France
- Raphael Chouteau: Neurology Department, Rennes University Hospital, Rennes, France
- Gilles Edan: Univ Rennes, Inria, CNRS, Inserm IRISA UMR 6074, Empenn ERL U 1228, Rennes, France; Neurology Department, Rennes University Hospital, Rennes, France
- Jean-Christophe Ferré: Univ Rennes, Inria, CNRS, Inserm IRISA UMR 6074, Empenn ERL U 1228, Rennes, France; CHU Rennes, Department of Neuroradiology, Rennes, France
29
Kha QH, Le VH, Hung TNK, Le NQK. Development and Validation of an Efficient MRI Radiomics Signature for Improving the Predictive Performance of 1p/19q Co-Deletion in Lower-Grade Gliomas. Cancers (Basel) 2021; 13:5398. [PMID: 34771562] [PMCID: PMC8582370] [DOI: 10.3390/cancers13215398]
Abstract
Simple Summary: Low-grade gliomas (LGGs) with the 1p/19q co-deletion have been shown to carry a better survival prognosis and response to treatment than tumors without the mutation. Identifying this mutation plays a vital role in managing LGG patients; however, the current diagnostic gold standard, brain-tissue biopsy or surgical resection of the tumor, remains highly invasive and time-consuming. We proposed a model based on the eXtreme Gradient Boosting (XGBoost) classifier to detect the 1p/19q co-deletion from non-invasive medical images. Our model achieved 87% and 82.8% accuracy on the training and external test sets, respectively. Notably, the prediction was based on only seven optimal wavelet radiomics features extracted from brain magnetic resonance (MR) images. We believe this model can assist clinicians in the rapid diagnosis of 1p/19q co-deletion status, thereby improving the treatment prognosis of LGG patients.
Abstract: The prognosis and treatment plans of patients diagnosed with low-grade gliomas (LGGs) may be significantly improved if there is evidence of chromosome 1p/19q co-deletion. Many studies have shown that 1p/19q co-deletion status enhances the sensitivity of the tumor to different types of therapeutics. However, the current clinical gold standard for detecting this chromosomal mutation remains invasive and poses implicit risks to patients. Radiomics features derived from medical images have been used as a new approach for non-invasive diagnosis and clinical decision-making. This study proposed an eXtreme Gradient Boosting (XGBoost)-based model to predict 1p/19q co-deletion status as a binary classification task. We trained our model on a public dataset from The Cancer Imaging Archive (TCIA) comprising 159 LGG patients with known 1p/19q co-deletion status. XGBoost was the baseline algorithm, and we combined it with SHapley Additive exPlanations (SHAP) analysis to select the seven most informative radiomics features for the final predictive model. Our final model achieved an accuracy of 87% and 82.8% on the training set and external test set, respectively. With seven wavelet radiomics features, our XGBoost-based model can identify the 1p/19q co-deletion status of LGG-diagnosed patients for better management, addressing the drawbacks of invasive gold-standard tests in clinical practice.
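The SHAP-based selection step described above amounts to ranking candidate radiomics features by their mean absolute attribution across samples and keeping the strongest seven. A minimal, library-free sketch of that ranking (the feature names and attribution values below are hypothetical; in the study the attributions would come from a SHAP explainer over the trained XGBoost model):

```python
def top_k_features(attributions, k=7):
    """Rank features by mean |attribution| across samples; keep the top k."""
    mean_abs = {name: sum(abs(v) for v in vals) / len(vals)
                for name, vals in attributions.items()}
    return sorted(mean_abs, key=mean_abs.get, reverse=True)[:k]

# Hypothetical per-sample attribution values for three candidate features.
attr = {"wavelet_LLH_glcm": [0.30, -0.25],
        "shape_volume": [0.02, 0.01],
        "wavelet_HHL_firstorder": [0.40, 0.35]}
print(top_k_features(attr, k=2))
```

Only the retained features would then be passed to the final classifier.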
Affiliation(s)
- Quang-Hien Kha
- International Master/Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan
- Viet-Huan Le
- International Master/Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan
- Department of Thoracic Surgery, Khanh Hoa General Hospital, Nha Trang City 65000, Vietnam
- Truong Nguyen Khanh Hung
- International Master/Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan
- Department of Orthopedic and Trauma, Cho Ray Hospital, Ho Chi Minh City 70000, Vietnam
- Nguyen Quoc Khanh Le
- International Master/Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei 106, Taiwan
- Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei 106, Taiwan
- Translational Imaging Research Center, Taipei Medical University Hospital, Taipei 110, Taiwan
- Correspondence: ; Tel.: +886-02-663-82736-1992
30
D’Antoni F, Russo F, Ambrosio L, Vollero L, Vadalà G, Merone M, Papalia R, Denaro V. Artificial Intelligence and Computer Vision in Low Back Pain: A Systematic Review. Int J Environ Res Public Health 2021; 18:10909. [PMID: 34682647] [PMCID: PMC8535895] [DOI: 10.3390/ijerph182010909] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Received: 09/07/2021] [Revised: 10/04/2021] [Accepted: 10/09/2021] [Indexed: 12/16/2022]
Abstract
Chronic Low Back Pain (LBP) is a symptom that may be caused by several diseases, and it is currently the leading cause of disability worldwide. The increased amount of digital images in orthopaedics has led to the development of methods related to artificial intelligence, and to computer vision in particular, which aim to improve the diagnosis and treatment of LBP. In this manuscript, we have systematically reviewed the available literature on the use of computer vision in the diagnosis and treatment of LBP. A systematic search of the PubMed electronic database was performed. The search strategy was set as the combinations of the following keywords: "Artificial Intelligence", "Feature Extraction", "Segmentation", "Computer Vision", "Machine Learning", "Deep Learning", "Neural Network", "Low Back Pain", "Lumbar". The search returned a total of 558 articles. After careful evaluation of the abstracts, 358 were excluded, whereas 124 papers were excluded after full-text examination, taking the number of eligible articles to 76. The main applications of computer vision in LBP include feature extraction and segmentation, which are usually followed by further tasks. Most recent methods use deep learning models rather than digital image processing techniques. The best-performing methods for segmentation of vertebrae, intervertebral discs, spinal canal and lumbar muscles achieve Sørensen-Dice scores greater than 90%, whereas studies focusing on localization and identification of structures collectively showed an accuracy greater than 80%. Future advances in artificial intelligence are expected to increase systems' autonomy and reliability, thus providing even more effective tools for the diagnosis and treatment of LBP.
Affiliation(s)
- Federico D’Antoni
- Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 21, 00128 Rome, Italy
- Fabrizio Russo
- Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy
- Correspondence: (F.R.); (M.M.)
- Luca Ambrosio
- Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy
- Luca Vollero
- Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 21, 00128 Rome, Italy
- Gianluca Vadalà
- Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy
- Mario Merone
- Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 21, 00128 Rome, Italy
- Correspondence: (F.R.); (M.M.)
- Rocco Papalia
- Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy
- Vincenzo Denaro
- Department of Orthopaedic Surgery, Università Campus Bio-Medico di Roma, Via Alvaro Del Portillo 200, 00128 Rome, Italy
31
Gudigar A, Nayak S, Samanth J, Raghavendra U, A J A, Barua PD, Hasan MN, Ciaccio EJ, Tan RS, Rajendra Acharya U. Recent Trends in Artificial Intelligence-Assisted Coronary Atherosclerotic Plaque Characterization. Int J Environ Res Public Health 2021; 18:10003. [PMID: 34639303] [DOI: 10.3390/ijerph181910003] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 08/09/2021] [Revised: 09/12/2021] [Accepted: 09/17/2021] [Indexed: 01/21/2023]
Abstract
Coronary artery disease is a major cause of morbidity and mortality worldwide. Its underlying histopathology is the atherosclerotic plaque, which comprises lipid, fibrous and—when chronic—calcium components. Intravascular ultrasound (IVUS) and intravascular optical coherence tomography (IVOCT) performed during invasive coronary angiography are reference standards for characterizing the atherosclerotic plaque. Fine image spatial resolution attainable with contemporary coronary computed tomographic angiography (CCTA) has enabled noninvasive plaque assessment, including identifying features associated with vulnerable plaques known to presage acute coronary events. Manual interpretation of IVUS, IVOCT and CCTA images demands scarce physician expertise and high time cost. This has motivated recent research into and development of artificial intelligence (AI)-assisted methods for image processing, feature extraction, plaque identification and characterization. We performed parallel searches of the medical and technical literature from 1995 to 2021 focusing respectively on human plaque characterization using various imaging modalities and the use of AI-assisted computer aided diagnosis (CAD) to detect and classify atherosclerotic plaques, including their composition and the presence of high-risk features denoting vulnerable plaques. A total of 122 publications were selected for evaluation and the analysis was summarized in terms of data sources, methods—machine versus deep learning—and performance metrics. Trends in AI-assisted plaque characterization are detailed and prospective research challenges discussed. Future directions for the development of accurate and efficient CAD systems to characterize plaque noninvasively using CCTA are proposed.
32
Fiani B, Pasko KBD, Sarhadi K, Covarrubias C. Current uses, emerging applications, and clinical integration of artificial intelligence in neuroradiology. Rev Neurosci 2021; 33:383-395. [PMID: 34506699] [DOI: 10.1515/revneuro-2021-0101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/30/2021] [Accepted: 08/18/2021] [Indexed: 11/15/2022]
Abstract
Artificial intelligence (AI) is a branch of computer science with a variety of subfields and techniques, exploited to serve as a deductive tool that performs tasks originally requiring human cognition. AI tools and their subdomains are being incorporated into healthcare delivery to improve the interpretation of medical data encompassing clinical management, diagnostics, and prognostic outcomes. In the field of neuroradiology, AI, manifested through deep learning and convolutional neural networks (CNNs), has demonstrated remarkable accuracy in identifying pathology and aiding diagnosis and prognostication in several areas of neurology and neurosurgery. In this literature review, we survey the available clinical data highlighting the utilization of AI in the field of neuroradiology across multiple neurological and neurosurgical subspecialties. In addition, we discuss the emerging role of AI in neuroradiology, its strengths and limitations, and future needs for strengthening its role in clinical practice. Our review evaluated data across several subspecialties of neurology and neurosurgery, including vascular neurology, spinal pathology, traumatic brain injury (TBI), neuro-oncology, multiple sclerosis, Alzheimer's disease, and epilepsy. AI has established a strong presence within the realm of neuroradiology as a successful and largely supportive technology aiding in the interpretation, diagnosis, and even prognostication of various pathologies. More research is warranted to establish its full scientific validity and determine its maximum potential to aid in optimizing and providing the most accurate imaging interpretation.
Affiliation(s)
- Brian Fiani
- Department of Neurosurgery, Desert Regional Medical Center, 1150 N Indian Canyon Dr, Palm Springs, CA 92262, USA
- Kory B Dylan Pasko
- School of Medicine, Georgetown University, 3900 Reservoir Rd NW, Washington, DC 20007, USA
- Kasra Sarhadi
- Department of Neurology, University of Washington, Main Hospital, 325 9th Ave, Seattle, WA 98104, USA
- Claudia Covarrubias
- School of Medicine, Universidad Anáhuac Querétaro, Cto. Universidades I, Fracción 2, 76246 Querétaro, Mexico
33
Kim CI, Hwang SM, Park EB, Won CH, Lee JH. Computer-Aided Diagnosis Algorithm for Classification of Malignant Melanoma Using Deep Neural Networks. Sensors (Basel) 2021; 21:5551. [PMID: 34450993] [PMCID: PMC8400855] [DOI: 10.3390/s21165551] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 07/20/2021] [Revised: 08/09/2021] [Accepted: 08/13/2021] [Indexed: 11/19/2022]
Abstract
Malignant melanoma accounts for about 1–3% of all malignancies in the West, especially in the United States, where more than 9000 people die from it each year. In general, it is difficult to characterize a skin lesion from a photograph. In this paper, we propose a deep learning-based computer-aided diagnostic algorithm for classifying malignant melanoma and benign skin tumors from RGB-channel skin images. The proposed pipeline comprises a tumor lesion segmentation model and a malignant melanoma classification model. First, a U-Net was used to segment skin lesions in dermoscopy images. We then implemented a convolutional neural network to classify malignant melanoma and benign tumors using the skin lesion images and expert labeling results. The U-Net model achieved a Dice similarity coefficient of 81.1% against the expert labels, and the classification accuracy for malignant melanoma reached 80.06%. The proposed AI algorithm is therefore expected to serve as a computer-aided diagnostic tool to help in the early detection of malignant melanoma.
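The segmentation metric reported above, the Dice similarity coefficient, measures overlap between the predicted lesion mask and the expert label. A minimal sketch over binary masks represented as sets of pixel indices (the masks below are toy values, not study data):

```python
def dice(pred, truth):
    """Sørensen-Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy 1-D pixel indices standing in for segmentation masks.
predicted = {3, 4, 5, 6}
expert = {4, 5, 6, 7}
print(dice(predicted, expert))  # 2*3 / (4+4) = 0.75
```

In practice the masks would be flattened boolean image arrays, but the formula is identical.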
Affiliation(s)
- Chan-Il Kim
- Department of Biomedical Engineering, Keimyung University, Daegu 42601, Korea
- Seok-Min Hwang
- Department of Biomedical Engineering, Keimyung University, Daegu 42601, Korea
- Eun-Bin Park
- Department of Biomedical Engineering, Keimyung University, Daegu 42601, Korea
- Chang-Hee Won
- Department of Electrical and Computer Engineering, Temple University, Philadelphia, PA 19122, USA
- Jong-Ha Lee
- Department of Electrical and Computer Engineering, Temple University, Philadelphia, PA 19122, USA
- Correspondence: ; Tel.: +82-10-8968-8769
34
Liew XY, Hameed N, Clos J. A Review of Computer-Aided Expert Systems for Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:2764. [PMID: 34199444] [PMCID: PMC8199592] [DOI: 10.3390/cancers13112764] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 04/26/2021] [Revised: 05/25/2021] [Accepted: 05/28/2021] [Indexed: 11/18/2022]
Abstract
A computer-aided diagnosis (CAD) expert system is a powerful tool for efficiently assisting a pathologist in achieving an early diagnosis of breast cancer, identifying the presence of cancer in breast tissue samples and the distinct cancer stages. In a standard CAD system, the main pipeline involves image pre-processing, segmentation, feature extraction, feature selection, classification, and performance evaluation. In this review paper, we survey the existing state-of-the-art machine learning approaches applied at each stage, covering both conventional and deep learning methods, compare the methods, and provide technical details along with their advantages and disadvantages. The aims are to investigate the impact of CAD systems using histopathology images, to identify deep learning methods that outperform conventional ones, and to provide a summary for future researchers to analyse and improve the existing techniques. Lastly, we discuss the research gaps in existing machine learning approaches and propose directions for future research.
Affiliation(s)
- Xin Yu Liew
- Jubilee Campus, University of Nottingham, Wollaton Road, Nottingham NG8 1BB, UK
35
Giannini V, Mazzetti S, Cappello G, Doronzio VM, Vassallo L, Russo F, Giacobbe A, Muto G, Regge D. Computer-Aided Diagnosis Improves the Detection of Clinically Significant Prostate Cancer on Multiparametric-MRI: A Multi-Observer Performance Study Involving Inexperienced Readers. Diagnostics (Basel) 2021; 11:973. [PMID: 34071215] [PMCID: PMC8227686] [DOI: 10.3390/diagnostics11060973] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 04/29/2021] [Revised: 05/17/2021] [Accepted: 05/26/2021] [Indexed: 11/17/2022]
Abstract
Recently, Computer-Aided Diagnosis (CAD) systems have been proposed to help radiologists detect and characterize Prostate Cancer (PCa). However, few studies have evaluated the performance of these systems in a clinical setting, especially when used by non-experienced readers. The main aim of this study is to assess the diagnostic performance of non-experienced readers when reporting assisted by the likelihood map generated by a CAD system, and to compare the results with unassisted interpretation. Three resident radiologists were asked to review multiparametric MRI of patients with and without PCa, both unassisted and assisted by a CAD system. In both reading sessions, residents recorded all positive cases, and sensitivity, specificity, and negative and positive predictive values were computed and compared. The dataset comprised 90 patients (45 with at least one clinically significant biopsy-confirmed PCa). Sensitivity significantly increased in the CAD-assisted mode for patients with at least one clinically significant lesion (GS > 6) (68.7% vs. 78.1%, p = 0.018). Overall specificity was not statistically different between unassisted and assisted sessions (94.8% vs. 89.6%, p = 0.072). The use of the CAD system significantly increases the per-patient sensitivity of inexperienced readers in the detection of clinically significant PCa, without negatively affecting specificity, while significantly reducing overall reporting time.
Affiliation(s)
- Valentina Giannini
- Department of Surgical Sciences, University of Turin, 10126 Turin, Italy
- Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Simone Mazzetti
- Department of Surgical Sciences, University of Turin, 10126 Turin, Italy
- Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Giovanni Cappello
- Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Valeria Maria Doronzio
- Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Lorenzo Vassallo
- Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Filippo Russo
- Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Giovanni Muto
- Department of Urology, Humanitas University, 10153 Turin, Italy
- Daniele Regge
- Department of Surgical Sciences, University of Turin, 10126 Turin, Italy
- Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
36
Yeo M, Kok HK, Kutaiba N, Maingard J, Thijs V, Tahayori B, Russell J, Jhamb A, Chandra RV, Brooks M, Barras CD, Asadi H. Artificial intelligence in clinical decision support and outcome prediction - applications in stroke. J Med Imaging Radiat Oncol 2021; 65:518-528. [PMID: 34050596] [DOI: 10.1111/1754-9485.13193] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 01/10/2021] [Accepted: 04/29/2021] [Indexed: 01/19/2023]
Abstract
Artificial intelligence (AI) is making a profound impact in healthcare, with the number of AI applications in medicine increasing substantially over the past five years. In acute stroke, it is playing an increasingly important role in clinical decision-making. Contemporary advances have increased the amount of information - both clinical and radiological - which clinicians must consider when managing patients. In the time-critical setting of acute stroke, AI offers the tools to rapidly evaluate and consolidate available information, extracting specific predictions from rich, noisy data. It has been applied to the automatic detection of stroke lesions on imaging and can guide treatment decisions through the prediction of tissue outcomes and long-term functional outcomes. This review examines the current state of AI applications in stroke, exploring their potential to reform stroke care through clinical decision support, as well as the challenges and limitations which must be addressed to facilitate their acceptance and adoption for clinical use.
Affiliation(s)
- Melissa Yeo
- School of Medicine, University of Melbourne, Melbourne, Victoria, Australia
- Hong Kuan Kok
- Interventional Radiology Service, Department of Radiology, Northern Health, Melbourne, Victoria, Australia
- School of Medicine, Faculty of Health, Deakin University, Burwood, Victoria, Australia
- Numan Kutaiba
- Department of Radiology, Austin Hospital, Melbourne, Victoria, Australia
- Julian Maingard
- School of Medicine, Faculty of Health, Deakin University, Burwood, Victoria, Australia
- Interventional Neuroradiology Unit, Monash Health, Clayton, Victoria, Australia
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, Victoria, Australia
- Vincent Thijs
- Stroke Theme, Florey Institute of Neuroscience and Mental Health, Melbourne, Victoria, Australia
- Department of Neurology, Austin Health, Melbourne, Victoria, Australia
- Bahman Tahayori
- Department of Biomedical Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- IBM Research Australia, Melbourne, Victoria, Australia
- Jeremy Russell
- Department of Neurosurgery, Austin Hospital, Melbourne, Victoria, Australia
- Ashu Jhamb
- Department of Radiology, St Vincent's Hospital, Melbourne, Victoria, Australia
- Ronil V Chandra
- Interventional Neuroradiology Unit, Monash Health, Clayton, Victoria, Australia
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, Victoria, Australia
- Mark Brooks
- School of Medicine, University of Melbourne, Melbourne, Victoria, Australia
- School of Medicine, Faculty of Health, Deakin University, Burwood, Victoria, Australia
- Stroke Theme, Florey Institute of Neuroscience and Mental Health, Melbourne, Victoria, Australia
- Interventional Neuroradiology Service, Department of Radiology, Austin Hospital, Melbourne, Victoria, Australia
- Christen D Barras
- South Australian Institute of Health and Medical Research, Adelaide, South Australia, Australia
- School of Medicine, The University of Adelaide, Adelaide, South Australia, Australia
- Hamed Asadi
- School of Medicine, Faculty of Health, Deakin University, Burwood, Victoria, Australia
- Interventional Neuroradiology Unit, Monash Health, Clayton, Victoria, Australia
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, Victoria, Australia
- Stroke Theme, Florey Institute of Neuroscience and Mental Health, Melbourne, Victoria, Australia
- Department of Radiology, St Vincent's Hospital, Melbourne, Victoria, Australia
- Interventional Neuroradiology Service, Department of Radiology, Austin Hospital, Melbourne, Victoria, Australia
37
Gang GJ, Deshpande R, Stayman JW. Standardization of histogram- and GLCM-based radiomics in the presence of blur and noise. Phys Med Biol 2021; 66:074004. [PMID: 33721845] [PMCID: PMC8607458] [DOI: 10.1088/1361-6560/abeea5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/12/2022]
Abstract
Radiomics have been extensively investigated as quantitative biomarkers that can enhance the utility of imaging studies and aid the clinical decision-making process. A major challenge to the clinical translation of radiomics is their variability as a result of different imaging and reconstruction protocols. In this work, we present a novel radiomics standardization framework capable of modeling and recovering the underlying radiomic feature in images that have been corrupted by the effects of spatial resolution and noise. We focus on two classes of radiomics based on pixel-value distributions, i.e., histograms and gray-level co-occurrence matrices (GLCMs). We developed a model that predicts these distributions in the presence of system blur and noise, and used that model to invert these physical effects and recover the underlying distributions. Specifically, the effect of blur on the histogram and GLCM is highly image-dependent, while additive noise convolves the histogram/GLCM of the noiseless image with those of the noise. The recovery method therefore consists of two deconvolution operations: the first in the image domain to remove the effect of system blur, and the second in the histogram/GLCM domain to remove the effect of noise. The performance of the proposed recovery strategy was investigated using a set of texture phantoms and an emulated CT imaging chain with a range of realistic blur and noise levels. The proposed method obtained histogram and GLCM estimates that closely resemble the ground truth. It performed well across imaging conditions and significantly lowered the variability associated with different imaging protocols. This improvement also translated to better classification accuracy, where recovered radiomic values result in greater separation of radiomic clusters for two different texture phantoms compared to values derived from the original blurred and noisy images.
In summary, the novel radiomics standardization framework demonstrates high potential for mitigating radiomic variability as a result of the imaging system and can potentially be integrated as a preprocessing step towards more robust and reproducible radiomic models.
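The noise part of the forward model above, where additive independent noise convolves the noiseless image's histogram with the noise histogram, can be sketched directly; the paper's recovery step is the corresponding deconvolution. The histograms below are toy probability mass functions over integer gray levels, not data from the study:

```python
def convolve_hists(signal_hist, noise_hist):
    """Discrete convolution of two PMFs over integer gray levels starting at 0."""
    out = [0.0] * (len(signal_hist) + len(noise_hist) - 1)
    for i, s in enumerate(signal_hist):
        for j, n in enumerate(noise_hist):
            out[i + j] += s * n  # P(signal=i) * P(noise=j) contributes to level i+j
    return out

# A two-level image histogram spread out by symmetric two-level noise.
noisy = convolve_hists([0.5, 0.5], [0.5, 0.5])
print(noisy)  # [0.25, 0.5, 0.25]
```

The convolution preserves total probability, which is why the recovery step can be posed as a deconvolution in the histogram domain.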
Affiliation(s)
- Grace Jianan Gang
- Department of Biomedical Engineering, Johns Hopkins University, Traylor Research Building, 720 Rutland Avenue, Baltimore, MD 21205, USA
- Radhika Deshpande
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Joseph Webster Stayman
- Department of Biomedical Engineering, Johns Hopkins University, 720 Rutland Avenue, Baltimore, MD 21205, USA
38
Calderon-Ramirez S, Yang S, Moemeni A, Colreavy-Donnelly S, Elizondo DA, Oala L, Rodriguez-Capitan J, Jimenez-Navarro M, Lopez-Rubio E, Molina-Cabello MA. Improving Uncertainty Estimation With Semi-Supervised Deep Learning for COVID-19 Detection Using Chest X-Ray Images. IEEE Access 2021; 9:85442-85454. [PMID: 34812397] [PMCID: PMC8545186] [DOI: 10.1109/access.2021.3085418] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Received: 05/14/2021] [Accepted: 05/24/2021] [Indexed: 05/02/2023]
Abstract
In this work, we implement a COVID-19 infection detection system based on chest X-ray images with uncertainty estimation. Uncertainty estimation is vital for the safe use of computer-aided diagnosis tools in medical applications: model predictions with high uncertainty should be carefully analyzed by a trained radiologist. We aim to improve uncertainty estimates using unlabelled data through the MixMatch semi-supervised framework. We test popular uncertainty estimation approaches, comprising Softmax scores, Monte Carlo dropout, and deterministic uncertainty quantification. To compare the reliability of the uncertainty estimates, we propose using the Jensen-Shannon distance between the uncertainty distributions of correct and incorrect predictions. This metric is statistically relevant, unlike most previously used metrics, which often ignore the distribution of the uncertainty estimates. Our test results show a significant improvement in uncertainty estimates when using unlabelled data, with the best results obtained using the Monte Carlo dropout method.
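The proposed reliability metric, the Jensen-Shannon distance between the uncertainty distributions of correct and incorrect predictions, is straightforward to compute once those distributions are binned into histograms. A minimal sketch using base-2 logarithms so the distance lies in [0, 1] (the input distributions below are toy values, not the paper's data):

```python
import math

def js_distance(p, q):
    """Jensen-Shannon distance between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms.
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return math.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# Identical distributions sit at distance 0; disjoint ones at distance 1.
print(js_distance([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(js_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

A larger distance means correct and incorrect predictions are easier to tell apart by their uncertainty, i.e., the uncertainty estimate is more useful for flagging cases for radiologist review.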
Affiliation(s)
- Saul Calderon-Ramirez
- School of Computer Science and Informatics, De Montfort University, Leicester LE1 9BH, U.K.
- Instituto Tecnologico de Costa Rica, Cartago 30101, Costa Rica
- Shengxiang Yang
- School of Computer Science and Informatics, De Montfort University, Leicester LE1 9BH, U.K.
- Armaghan Moemeni
- School of Computer Science, University of Nottingham, Nottingham NG8 1BB, U.K.
- David A Elizondo
- School of Computer Science and Informatics, De Montfort University, Leicester LE1 9BH, U.K.
- Luis Oala
- XAI Group, Artificial Intelligence Department, Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany
- Jorge Rodriguez-Capitan
- CIBERCV, Hospital Universitario Virgen de la Victoria, 29010 Málaga, Spain
- Instituto de Investigación Biomédica de Málaga (IBIMA), 29010 Málaga, Spain
- Manuel Jimenez-Navarro
- CIBERCV, Hospital Universitario Virgen de la Victoria, 29010 Málaga, Spain
- Instituto de Investigación Biomédica de Málaga (IBIMA), 29010 Málaga, Spain
- Ezequiel Lopez-Rubio
- Department of Computer Languages and Computer Science, University of Málaga, 29071 Málaga, Spain
- Instituto de Investigación Biomédica de Málaga (IBIMA), 29010 Málaga, Spain
- Miguel A Molina-Cabello
- Department of Computer Languages and Computer Science, University of Málaga, 29071 Málaga, Spain
- Instituto de Investigación Biomédica de Málaga (IBIMA), 29010 Málaga, Spain
39
Li J, Wu X, Mao N, Zheng G, Zhang H, Mou Y, Jia C, Mi J, Song X. Computed Tomography-Based Radiomics Model to Predict Central Cervical Lymph Node Metastases in Papillary Thyroid Carcinoma: A Multicenter Study. Front Endocrinol (Lausanne) 2021; 12:741698. [PMID: 34745008] [PMCID: PMC8567994] [DOI: 10.3389/fendo.2021.741698] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Received: 07/15/2021] [Accepted: 10/04/2021] [Indexed: 01/04/2023]
Abstract
OBJECTIVES This study aimed to develop a computed tomography (CT)-based radiomics model to predict central lymph node metastases (CLNM) preoperatively in patients with papillary thyroid carcinoma (PTC). METHODS In this retrospective study, 678 patients with PTC were enrolled from Yantai Yuhuangding Hospital (n=605) and the Affiliated Hospital of Binzhou Medical University (n=73) between August 2010 and December 2020. The patients were randomly divided into a training set (n=423), an internal test set (n=182), and an external test set (n=73). Radiomics features were extracted from each patient's preoperative plain-scan and contrast-enhanced CT images (arterial and venous phases). One-way analysis of variance (ANOVA) and the least absolute shrinkage and selection operator (LASSO) algorithm were used for feature selection. K-nearest neighbor, logistic regression, decision tree, linear support vector machine (linear-SVM), Gaussian-SVM, and polynomial-SVM algorithms were used to establish radiomics models for CLNM prediction. Clinical risk factors were selected by ANOVA and multivariate logistic regression. Incorporating these clinical risk factors, a combined radiomics model was established for the preoperative prediction of CLNM in patients with PTC. The performance of the combined radiomics model was evaluated using receiver operating characteristic (ROC) and calibration curves in the training and test sets. Clinical usefulness was evaluated through decision curve analysis (DCA). RESULTS A total of 4227 radiomics features were extracted from the CT images of each patient, and 14 non-zero-coefficient features associated with CLNM were selected. Four clinical variables (sex, age, tumor diameter, and CT-reported lymph node status) were significantly associated with CLNM. Linear-SVM yielded the best prediction model, which incorporated radiomics features and clinical risk factors.
Areas under the ROC curves of 0.747 (95% confidence interval [CI] 0.706-0.782), 0.710 (95% CI 0.634-0.786), and 0.764 (95% CI 0.654-0.875) were obtained in the training, internal, and external test sets, respectively. With the combined radiomics model, the linear-SVM algorithm also showed better sensitivity (0.702 [95% CI 0.600-0.790] vs. 0.477 [95% CI 0.409-0.545]) and accuracy (0.670 [95% CI 0.600-0.738] vs. 0.642 [95% CI 0.569-0.712]) than an experienced radiologist in the internal test set. The calibration plot reflected favorable agreement between the actual and estimated probabilities of CLNM. The DCA indicated the clinical usefulness of the combined radiomics model. CONCLUSION The combined radiomics model is a non-invasive preoperative tool that incorporates radiomics features and clinical risk factors to predict CLNM in patients with PTC.
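The two-stage feature selection plus linear-SVM pipeline described in this abstract can be sketched on synthetic data. This is an illustrative scikit-learn reconstruction, not the authors' code; the matrix dimensions, filter size, and all settings below are placeholder assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a radiomics matrix: 200 "patients" x 50 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: univariate ANOVA filter keeps the top-scoring features.
anova = SelectKBest(f_classif, k=20).fit(X_train, y_train)
Xa_train, Xa_test = anova.transform(X_train), anova.transform(X_test)

# Stage 2: LASSO keeps only the features with non-zero coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(Xa_train, y_train)
keep = lasso.coef_ != 0

# Stage 3: linear SVM trained on the surviving features.
svm = SVC(kernel="linear", random_state=0).fit(Xa_train[:, keep], y_train)
auc = roc_auc_score(y_test, svm.decision_function(Xa_test[:, keep]))
```

As in the study, both selection stages are fitted on the training split only, so no test-set information leaks into the selected feature subset.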
Affiliation(s)
- Jingjing Li
  - Second Clinical Medicine College, Binzhou Medical University, Yantai, China
  - Department of Otorhinolaryngology-Head and Neck Surgery, Yantai Yuhuangding Hospital, Qingdao University, Yantai, China
- Xinxin Wu
  - Department of Otorhinolaryngology-Head and Neck Surgery, Yantai Yuhuangding Hospital, Qingdao University, Yantai, China
- Ning Mao
  - Department of Radiology, Yantai Yuhuangding Hospital, Yantai, China
  - Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Yantai, China
  - Shandong Provincial Clinical Research Center for Otorhinolaryngologic Diseases, Yantai, China
- Guibin Zheng
  - Department of Thyroid Surgery, Yantai Yuhuangding Hospital, Yantai, China
- Haicheng Zhang
  - Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Yantai, China
- Yakui Mou
  - Department of Otorhinolaryngology-Head and Neck Surgery, Yantai Yuhuangding Hospital, Qingdao University, Yantai, China
  - Shandong Provincial Clinical Research Center for Otorhinolaryngologic Diseases, Yantai, China
- Chuanliang Jia
  - Department of Otorhinolaryngology-Head and Neck Surgery, Yantai Yuhuangding Hospital, Qingdao University, Yantai, China
  - Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Yantai, China
  - Shandong Provincial Clinical Research Center for Otorhinolaryngologic Diseases, Yantai, China
- Jia Mi
  - Precision Medicine Research Center, Binzhou Medical University, Yantai, China
  - *Correspondence: Xicheng Song; Jia Mi
- Xicheng Song
  - Department of Otorhinolaryngology-Head and Neck Surgery, Yantai Yuhuangding Hospital, Qingdao University, Yantai, China
  - Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Yantai, China
  - Shandong Provincial Clinical Research Center for Otorhinolaryngologic Diseases, Yantai, China
40
Folego G, Weiler M, Casseb RF, Pires R, Rocha A. Alzheimer's Disease Detection Through Whole-Brain 3D-CNN MRI. Front Bioeng Biotechnol 2020; 8:534592. [PMID: 33195111] [PMCID: PMC7661929] [DOI: 10.3389/fbioe.2020.534592]
Abstract
The projected burden of dementia by Alzheimer's disease (AD) represents a looming healthcare crisis as the population of most countries grows older. Although there is currently no cure, it is possible to treat symptoms of dementia. Early diagnosis is paramount to the development and success of interventions, and neuroimaging represents one of the most promising areas for early detection of AD. We aimed to deploy advanced deep learning methods to determine whether they can extract useful AD biomarkers from structural magnetic resonance imaging (sMRI) and classify brain images into AD, mild cognitive impairment (MCI), and cognitively normal (CN) groups. We tailored and trained Convolutional Neural Networks (CNNs) on sMRIs of the brain from datasets available in online databases. Our proposed method, ADNet, was evaluated on the CADDementia challenge and outperformed several approaches in the prior art. The method's configuration with machine-learning domain adaptation, ADNet-DA, reached 52.3% accuracy. Contributions of our study include devising a deep learning system that is entirely automatic and comparatively fast, presenting competitive results without relying on any domain-specific knowledge about the disease. We were able to implement an end-to-end CNN system to classify subjects into AD, MCI, or CN groups, reflecting the identification of distinctive elements in brain images. In this context, our system represents a promising tool in finding biomarkers to help with the diagnosis of AD and, eventually, many other diseases.
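As a minimal illustration of the core operation such a whole-brain 3D-CNN applies to sMRI volumes, a naive single-channel 3D convolution can be written directly in NumPy. The volume, kernel, and sizes below are toy values, not ADNet's architecture.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive single-channel 3D convolution ('valid' padding), the basic
    operation a 3D-CNN layer applies across a brain volume."""
    d, h, w = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z + d, y:y + h, x:x + w] * kernel)
    return out

# Toy 8x8x8 "volume" with a step edge along the x-axis.
vol = np.zeros((8, 8, 8))
vol[:, :, 4:] = 1.0

# Crude 3x3x3 gradient filter: responds where intensity changes along x.
kern = np.zeros((3, 3, 3))
kern[:, :, 0], kern[:, :, 2] = -1.0, 1.0

feat = conv3d_valid(vol, kern)  # 6x6x6 feature map, peaking at the edge
```

Real 3D-CNN frameworks implement this with many learned kernels per layer plus non-linearities and pooling; the loop above only shows what one filter response is.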
Affiliation(s)
- Guilherme Folego
  - Institute of Computing, University of Campinas, Campinas, Brazil
  - CPQD, Campinas, Brazil
- Marina Weiler
  - Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Intramural Research Program (NIA/NIH/IRP), Baltimore, MD, United States
- Raphael F Casseb
  - Seaman Family MR Research Center, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Ramon Pires
  - Institute of Computing, University of Campinas, Campinas, Brazil
- Anderson Rocha
  - Institute of Computing, University of Campinas, Campinas, Brazil
41
Gomes Ataide EJ, Ponugoti N, Illanes A, Schenke S, Kreissl M, Friebe M. Thyroid Nodule Classification for Physician Decision Support Using Machine Learning-Evaluated Geometric and Morphological Features. Sensors (Basel) 2020; 20:E6110. [PMID: 33121054] [PMCID: PMC7663034] [DOI: 10.3390/s20216110]
Abstract
The classification of thyroid nodules using ultrasound (US) imaging is done using the Thyroid Imaging Reporting and Data System (TIRADS) guidelines, which classify nodules based on visual and textural characteristics: composition, shape, size, echogenicity, calcifications, margins, and vascularity. This work aims to reduce subjectivity in the current diagnostic process by using geometric and morphological (G-M) features that represent the visual characteristics of thyroid nodules to provide physicians with decision support. A total of 27 G-M features were extracted from images obtained from an open-access US thyroid nodule image database. Eleven significant features in accordance with TIRADS were selected from this global feature set. Each nodule was labeled (0 = benign, 1 = malignant) and the performance of the selected features was evaluated using machine learning (ML). G-M features together with ML resulted in the classification of thyroid nodules with high accuracy, sensitivity, and specificity. The results were compared against state-of-the-art methods and perform well in comparison. Furthermore, this method can act as a computer-aided diagnosis (CAD) system for physicians by providing them with a validation of the TIRADS visual characteristics used for the classification of thyroid nodules in US images.
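A few geometric descriptors in this spirit, for example the taller-than-wide aspect ratio that TIRADS treats as suspicious, can be computed directly from a binary nodule mask. This is a hedged sketch with a toy mask; it does not reproduce the paper's exact 27 G-M features.

```python
import numpy as np

def geometric_features(mask):
    """Simple geometric descriptors of a binary nodule mask:
    area, bounding-box aspect ratio, and extent (fill of the box)."""
    ys, xs = np.nonzero(mask)
    area = int(mask.sum())
    height = int(ys.max() - ys.min() + 1)
    width = int(xs.max() - xs.min() + 1)
    return {
        "area": area,
        "aspect_ratio": height / width,  # > 1 means "taller than wide"
        "extent": area / (height * width),
    }

# Toy binary mask: a 4-wide x 6-tall rectangle ("taller than wide").
mask = np.zeros((10, 10), dtype=int)
mask[2:8, 3:7] = 1
feats = geometric_features(mask)
```

On real US images the mask would come from a segmentation step, and many more shape and margin descriptors would be stacked into the feature vector before classification.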
Affiliation(s)
- Elmer Jeto Gomes Ataide
  - Clinic for Radiology and Nuclear Medicine, Department of Nuclear Medicine, Otto-von-Guericke University Medical Faculty, 39120 Magdeburg, Germany
  - INKA-Application Driven Research, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany
- Nikhila Ponugoti
  - INKA-Application Driven Research, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany
- Alfredo Illanes
  - INKA-Application Driven Research, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany
- Simone Schenke
  - Clinic for Radiology and Nuclear Medicine, Department of Nuclear Medicine, Otto-von-Guericke University Medical Faculty, 39120 Magdeburg, Germany
- Michael Kreissl
  - Clinic for Radiology and Nuclear Medicine, Department of Nuclear Medicine, Otto-von-Guericke University Medical Faculty, 39120 Magdeburg, Germany
- Michael Friebe
  - INKA-Application Driven Research, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany
  - IDTM GmbH, 45657 Recklinghausen, Germany
42
Yin R, Jiang M, Lv WZ, Jiang F, Li J, Hu B, Cui XW, Dietrich CF. Study Processes and Applications of Ultrasomics in Precision Medicine. Front Oncol 2020; 10:1736. [PMID: 33014858] [PMCID: PMC7494734] [DOI: 10.3389/fonc.2020.01736]
Abstract
Ultrasomics is the science of transforming digitally encrypted medical ultrasound images that hold information related to tumor pathophysiology into mineable high-dimensional data. Ultrasomics data have the potential to uncover disease characteristics that are not found with the naked eye. The task of ultrasomics is to quantify the state of diseases using distinctive imaging algorithms and thereby provide valuable information for personalized medicine. Ultrasomics is a powerful tool in oncology but can also be applied to other medical problems for which a disease is imaged. To date there is no comprehensive review focusing on ultrasomics. Here, we describe how ultrasomics works and its capability in diagnosing disease in different organs, including breast, liver, and thyroid. Its pitfalls, challenges and opportunities are also discussed.
Affiliation(s)
- Rui Yin
  - Department of Ultrasound, Affiliated Renhe Hospital of China Three Gorges University, Yichang, China
- Meng Jiang
  - Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Wen-Zhi Lv
  - Department of Artificial Intelligence, Julei Technology, Wuhan, China
- Fan Jiang
  - Department of Ultrasound, The Second Affiliated Hospital of Anhui Medical University, Hefei, China
- Jun Li
  - Department of Ultrasound, The First Affiliated Hospital, School of Medicine, Shihezi University, Shihezi, China
- Bing Hu
  - Department of Ultrasound, Affiliated Renhe Hospital of China Three Gorges University, Yichang, China
- Xin-Wu Cui
  - Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
43
Wang L, Wang H, Xia C, Wang Y, Tang Q, Li J, Zhou XH. Toward standardized premarket evaluation of computer aided diagnosis/detection products: insights from FDA-approved products. Expert Rev Med Devices 2020; 17:899-918. [PMID: 32842797] [DOI: 10.1080/17434440.2020.1813566]
Abstract
INTRODUCTION Computer-aided detection and diagnosis (CADe and CADx) products are an emerging branch of the medical device industry. However, few technical standards have been developed for product verification and validation. It is therefore helpful to investigate the current practice of preclinical and clinical evaluation of approved products and provide insights for future standardization. AREAS COVERED A document review was conducted on 56 products approved by the United States Food and Drug Administration, including Summaries of Safety and Effectiveness Data, 510(k) decisions, and de novo decision summaries. Key parameters describing product characteristics, preclinical studies, and clinical studies were collected. Evaluation strategies for CADe/CADx products were analyzed and assessed. EXPERT OPINION Preclinical studies were widely adopted in the verification of CADe/CADx products. Standalone performance testing was a common procedure, but the selection of testing datasets and performance metrics showed significant variability and flexibility among manufacturers. Clinical studies were reported for all class III products and some class II products, and a Multi-Reader Multi-Case design was commonly used. However, statistical analysis and the presentation and interpretation of results were often incomplete. To resolve these issues, systematic development of CADe/CADx standards is encouraged, implemented at different stages of the product lifecycle.
Affiliation(s)
- Lu Wang
  - Beijing International Center for Mathematical Research, Peking University, Beijing, China
- Hao Wang
  - Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Chen Xia
  - Institute of Advanced Research, Beijing Infervision Technology Limited Liability Company, Beijing, China
- Yao Wang
  - Department of Biosciences, University of Chicago, Chicago, Illinois, USA
- Qiaohong Tang
  - Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Jiage Li
  - Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Xiao-Hua Zhou
  - Beijing International Center for Mathematical Research, Peking University, Beijing, China
  - Department of Biostatistics, School of Public Health, Peking University, Beijing, China
44
Abedi V, Khan A, Chaudhary D, Misra D, Avula V, Mathrawala D, Kraus C, Marshall KA, Chaudhary N, Li X, Schirmer CM, Scalzo F, Li J, Zand R. Using artificial intelligence for improving stroke diagnosis in emergency departments: a practical framework. Ther Adv Neurol Disord 2020; 13:1756286420938962. [PMID: 32922515] [PMCID: PMC7453441] [DOI: 10.1177/1756286420938962]
Abstract
Stroke is the fifth leading cause of death in the United States and a major cause of severe disability worldwide. Yet, recognizing the signs of stroke in an acute setting is still challenging and leads to loss of opportunity to intervene, given the narrow therapeutic window. A decision support system using artificial intelligence (AI) and clinical data from electronic health records combined with patients' presenting symptoms can be designed to support emergency department providers in stroke diagnosis and subsequently reduce the treatment delay. In this article, we present a practical framework to develop a decision support system using AI by reflecting on the various stages, which could eventually improve patient care and outcome. We also discuss the technical, operational, and ethical challenges of the process.
Affiliation(s)
- Vida Abedi
  - Department of Molecular and Functional Genomics, Geisinger Health System, Danville, PA, USA
  - Biocomplexity Institute, Virginia Tech, Blacksburg, VA, USA
- Ayesha Khan
  - Neuroscience Institute, Geisinger Health System, Danville, PA, USA
- Debdipto Misra
  - Division of Informatics, Geisinger Health System, Danville, PA, USA
- Venkatesh Avula
  - Department of Molecular and Functional Genomics, Geisinger Health System, Danville, PA, USA
- Dhruv Mathrawala
  - Division of Informatics, Geisinger Health System, Danville, PA, USA
- Chadd Kraus
  - Department of Emergency Medicine, Geisinger Health System, Danville, PA, USA
- Kyle A. Marshall
  - Department of Emergency Medicine, Geisinger Health System, Danville, PA, USA
- Xiao Li
  - Genentech/Roche Inc., South San Francisco, CA, USA
- Fabien Scalzo
  - Department of Neurology, University of California, Los Angeles, CA, USA
  - Department of Computer Science, University of California, Los Angeles, CA, USA
- Jiang Li
  - Department of Molecular and Functional Genomics, Geisinger Health System, Danville, PA, USA
- Ramin Zand
  - Neuroscience Institute, Geisinger Health System, Stroke Program, Geisinger Northeast Region, GRA Stroke Task Force, American Heart Association, Department of Neurosciences, 100 N Academy Ave, Danville, PA 17822-2101, USA
45
Cem Birbiri U, Hamidinekoo A, Grall A, Malcolm P, Zwiggelaar R. Investigating the Performance of Generative Adversarial Networks for Prostate Tissue Detection and Segmentation. J Imaging 2020; 6:jimaging6090083. [PMID: 34460740] [PMCID: PMC8321056] [DOI: 10.3390/jimaging6090083]
Abstract
The manual delineation of regions of interest (RoIs) in 3D magnetic resonance imaging (MRI) of the prostate is time-consuming and subjective. Correct identification of prostate tissue helps define a precise RoI for use in CAD systems in clinical practice during diagnostic imaging, radiotherapy, and monitoring of disease progression. Conditional GAN (cGAN), cycleGAN, and U-Net models were studied for the detection and segmentation of prostate tissue in 3D multi-parametric MRI scans. The models were trained and evaluated on MRI data from 40 patients with biopsy-proven prostate cancer. Due to the limited amount of available training data, three augmentation schemes were proposed to artificially increase the number of training samples. The models were tested on a clinical dataset annotated for this study and on a public dataset (PROMISE12). The cGAN model outperformed the U-Net and cycleGAN predictions owing to its paired-image supervision, achieving Dice scores of 0.78 and 0.75 on the private and PROMISE12 public datasets, respectively.
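The Dice score used to report these segmentation results is straightforward to compute from two binary masks; a minimal NumPy version with toy segmentations:

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

# Two overlapping toy segmentations, 16 pixels each, 9 pixels shared.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
score = dice(a, b)  # 2*9 / (16+16) = 0.5625
```

A Dice of 1.0 means the predicted and reference masks coincide exactly; 0.0 means no overlap, which is why it is the standard headline metric for segmentation papers such as this one.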
Affiliation(s)
- Ufuk Cem Birbiri
  - Department of Computer Engineering, Middle East Technical University, Ankara 06800, Turkey
- Azam Hamidinekoo
  - Division of Molecular Pathology, Institute of Cancer Research (ICR), London SM2 5NG, UK
- Paul Malcolm
  - Department of Radiology, Norfolk & Norwich University Hospital, Norwich NR4 7UY, UK
- Reyer Zwiggelaar
  - Department of Computer Science, Aberystwyth University, Aberystwyth SY23 3DB, UK
  - Correspondence:
46
Jebamony J, Jacob D. Classification of Benign and Malignant Breast Masses on Mammograms for Large Datasets using Core Vector Machines. Curr Med Imaging 2020; 16:703-710. [PMID: 32723242] [DOI: 10.2174/1573405615666190801121506]
Abstract
BACKGROUND Breast cancer is one of the leading causes of cancer death among women. Early detection increases the survival rate of affected women. Machine learning approaches used for the classification of breast cancer usually require substantial processing time during training. This paper proposes a machine learning approach for breast cancer detection in mammograms whose training cost does not depend on the number of training samples. OBJECTIVES The paper aims to develop a core vector machine-based diagnosis system for breast cancer detection using data from the MIAS database. The main motivation behind this system is to reduce the computational and memory requirements for large training data and to improve classification accuracy. METHODS The proposed method has four stages: 1) pre-processing, in which the breast region is extracted using global thresholding and enhanced using histogram equalization; 2) identification of potential masses using Otsu thresholding; 3) feature extraction using Laws texture energy measures; and 4) mass detection using a core vector machine (CVM) classifier. RESULTS Comparative analysis was done with different existing algorithms: artificial neural network (ANN), support vector machine (SVM), and fuzzy support vector machine (FSVM). The results illustrate that the proposed CVM classifier produced promising results in terms of sensitivity (96.9%), misclassification rate (0.0443), and accuracy (95.89%). The time taken for the training process was 0.0443, which is less than that of the other machine learning algorithms compared. CONCLUSION Performance analysis shows that the CVM classifier is superior to other classifiers such as ANN, SVM, and FSVM. The computational time of the CVM classifier during training was also analysed and found to be better than that of the other algorithms discussed. The results show that the CVM classifier is the best of the algorithms considered for breast mass detection in mammograms.
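The reported sensitivity, accuracy, and misclassification rate all derive from the four confusion-matrix counts. The counts below are hypothetical, chosen only to illustrate the arithmetic; they are not the study's data.

```python
# Hypothetical confusion-matrix counts for a binary mass / no-mass classifier.
tp, fn, tn, fp = 94, 3, 92, 6

sensitivity = tp / (tp + fn)                      # recall on malignant masses
specificity = tn / (tn + fp)                      # recall on benign cases
accuracy = (tp + tn) / (tp + fn + tn + fp)
misclassification_rate = 1.0 - accuracy           # complement of accuracy
```

With these counts, sensitivity is 94/97 (about 0.969) and accuracy 186/195 (about 0.954), close in spirit to the figures the abstract reports.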
Affiliation(s)
- Dheeba Jacob
  - School of Computer Science and Engineering, Vellore Institute of Technology, Vellore Campus, Katpadi, India
47
Aziz S, Khan MU, Alhaisoni M, Akram T, Altaf M. Phonocardiogram Signal Processing for Automatic Diagnosis of Congenital Heart Disorders through Fusion of Temporal and Cepstral Features. Sensors (Basel) 2020; 20:E3790. [PMID: 32640710] [DOI: 10.3390/s20133790]
Abstract
Congenital heart disease (CHD) is a heart disorder associated with devastating manifestations that result in increased mortality, increased morbidity, increased healthcare expenditure, and decreased quality of life. Ventricular septal defects (VSDs) and atrial septal defects (ASDs) are the most common types of CHD. With early diagnosis, CHD can be controlled before it reaches a serious phase. The phonocardiogram (PCG), or heart sound auscultation, is a simple and non-invasive technique that may reveal obvious variations of different CHDs. Diagnosis based on heart sounds is difficult and requires a high level of medical training and skill, owing to human hearing limitations and the non-stationary nature of PCGs. An automated computer-aided system may boost the diagnostic objectivity and consistency of PCG analysis in the detection of CHDs. The objective of this research was to assess the effects of various pattern recognition modalities in designing an automated system that effectively differentiates normal, ASD, and VSD categories using short PCG time series. The proposed model adopts three-stage processing: pre-processing, feature extraction, and classification. Empirical mode decomposition (EMD) was used to denoise the raw PCG signals acquired from subjects. One-dimensional local ternary patterns (1D-LTPs) and Mel-frequency cepstral coefficients (MFCCs) were extracted from the denoised PCG signals for precise representation of data from different classes. In the final stage, the fused feature vector of 1D-LTPs and MFCCs was fed to a support vector machine (SVM) classifier using 10-fold cross-validation. The PCG signals were acquired from subjects admitted to local hospitals and classified in various experiments. The proposed methodology achieves a mean accuracy of 95.24% in classifying ASD, VSD, and normal subjects.
The proposed model can be put into practice and serve as a second opinion for cardiologists by providing more objective and faster interpretations of PCG signals.
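A 1D local ternary pattern compares each sample's neighbours against a tolerance band around the centre value and splits the ternary codes into two binary patterns. The two-neighbour version below is a simplified sketch, not the authors' exact 1D-LTP implementation; the signal and tolerance are toy values.

```python
import numpy as np

def ltp_1d(signal, t=0.1):
    """Minimal 1-D local ternary pattern: each interior sample's left and
    right neighbours are coded +1 / 0 / -1 against a +-t band around the
    centre sample, then split into 'upper' and 'lower' binary patterns."""
    s = np.asarray(signal, dtype=float)
    centre = s[1:-1]
    neigh = np.stack([s[:-2], s[2:]])  # row 0: left, row 1: right neighbours
    ternary = np.where(neigh > centre + t, 1,
                       np.where(neigh < centre - t, -1, 0))
    upper = (ternary == 1).astype(int)   # upper half of the LTP decomposition
    lower = (ternary == -1).astype(int)  # lower half
    return upper, lower

sig = [0.0, 0.5, 0.1, 0.05, -0.4, 0.0]
upper, lower = ltp_1d(sig, t=0.1)
```

In the pipeline described above, histograms of such patterns (computed with a larger neighbourhood) would be fused with MFCCs before the SVM stage.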
48
Li Y, Zhang EL, Li WJ, Lang N, Yuan HS. [Applications of Artificial Intelligence in Musculoskeletal System Imaging]. Zhongguo Yi Xue Ke Xue Yuan Xue Bao 2020; 42:242-246. [PMID: 32385032] [DOI: 10.3881/j.issn.1000-503x.11614]
Abstract
Artificial intelligence (AI) represents the latest wave of the computer revolution and is considered a revolutionary technology in many industries, including healthcare. AI has been applied in medical imaging mainly owing to improvements in computational learning, big data mining, and innovations in neural network architecture. AI can improve the efficiency and accuracy of imaging diagnosis and reduce medical costs; it can also be used to predict disease risk. In this article, we summarize and analyze the applications of AI in musculoskeletal imaging.
Affiliation(s)
- Yuan Li
  - Department of Radiology, Peking University Third Hospital, Beijing 100191, China
- En-Long Zhang
  - Department of Radiology, Peking University International Hospital, Beijing 102206, China
- Wen-Juan Li
  - Department of Radiology, Peking University Third Hospital, Beijing 100191, China
- Ning Lang
  - Department of Radiology, Peking University Third Hospital, Beijing 100191, China
- Hui-Shu Yuan
  - Department of Radiology, Peking University Third Hospital, Beijing 100191, China
49
Molder A, Balaban DV, Jinga M, Molder CC. Current Evidence on Computer-Aided Diagnosis of Celiac Disease: Systematic Review. Front Pharmacol 2020; 11:341. [PMID: 32372947] [PMCID: PMC7179080] [DOI: 10.3389/fphar.2020.00341]
Abstract
Celiac disease (CD) is a chronic autoimmune disease that occurs in genetically predisposed individuals in whom the ingestion of gluten leads to damage of the small bowel. It is estimated to affect 1 in 100 people worldwide but is severely underdiagnosed. Currently available guidelines require CD-specific serology and atrophic histology in duodenal biopsy samples for the diagnosis of adult CD. In pediatric CD, and in recent years in adults as well, non-bioptic diagnostic strategies have become increasingly popular. In this setting, to increase the diagnostic rate of this pathology, endoscopy itself has been considered a case-finding strategy through the use of digital image processing techniques. Research on computer-aided decision support has used databases of video capsule endoscopy, conventional endoscopy, and even duodenal biopsy images. Early automated methods for the diagnosis of celiac disease used feature extraction methods such as spatial domain features, transform domain features, scale-invariant features, and spatio-temporal features. Recent artificial intelligence (AI) techniques, including deep learning (DL) methods such as convolutional neural networks (CNNs), support vector machines (SVMs), and Bayesian inference, have emerged as breakthrough computer technologies that can be used for the computer-aided diagnosis of celiac disease. In this review, we summarize the methods used in clinical studies for the classification of CD, from feature extraction methods to AI techniques.
Affiliation(s)
- Adriana Molder
  - Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
  - Center of Excellence in Robotics and Autonomous Systems, Military Technical Academy Ferdinand I, Bucharest, Romania
- Daniel Vasile Balaban
  - Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
  - Gastroenterology Department, Dr. Carol Davila Central Military Emergency University Hospital, Bucharest, Romania
  - *Correspondence: Daniel Vasile Balaban
- Mariana Jinga
  - Carol Davila University of Medicine and Pharmacy, Bucharest, Romania
  - Gastroenterology Department, Dr. Carol Davila Central Military Emergency University Hospital, Bucharest, Romania
- Cristian-Constantin Molder
  - Center of Excellence in Robotics and Autonomous Systems, Military Technical Academy Ferdinand I, Bucharest, Romania
50
Ruiz E, Ramírez J, Górriz JM, Casillas J. Alzheimer's Disease Computer-Aided Diagnosis: Histogram-Based Analysis of Regional MRI Volumes for Feature Selection and Classification. J Alzheimers Dis 2019; 65:819-842. [PMID: 29966190] [DOI: 10.3233/jad-170514]
Abstract
This paper proposes a novel fully automatic computer-aided diagnosis (CAD) system for the early detection of Alzheimer's disease (AD) based on supervised machine learning methods. The novelty of the approach, which is based on histogram analysis, is twofold: 1) a feature extraction process that aims to detect differences in brain regions of interest (ROIs) relevant for the recognition of subjects with AD and 2) an original greedy algorithm that predicts the severity of the effects of AD on these regions. This algorithm takes account of the progressive nature of AD that affects the brain structure with different levels of severity, i.e., the loss of gray matter in AD is found first in memory-related areas of the brain such as the hippocampus. Moreover, the proposed feature extraction process generates a reduced set of attributes which allows the use of general-purpose classification machine learning algorithms. In particular, the proposed feature extraction approach assesses the ROI image separability between classes in order to identify the ones with greater discriminant power. These regions will have the highest influence in the classification decision at the final stage. Several experiments were carried out on segmented magnetic resonance images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) in order to show the benefits of the overall method. The proposed CAD system achieved competitive classification results in a highly efficient and straightforward way.
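The histogram-based ROI analysis described above can be illustrated by turning each region's intensity values into a normalised histogram plus summary statistics. The bin count and toy "ROIs" below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def roi_histogram_features(intensities, bins=16, vrange=(0.0, 1.0)):
    """Normalised intensity histogram of one ROI plus two summary
    statistics, a simple stand-in for histogram-based ROI descriptors."""
    hist, _ = np.histogram(intensities, bins=bins, range=vrange)
    hist = hist / hist.sum()
    return np.concatenate([hist, [intensities.mean(), intensities.std()]])

rng = np.random.default_rng(1)
# Two toy "ROIs": one darker (as an atrophied region might appear) and
# one brighter; their histograms and means separate them clearly.
roi_a = rng.uniform(0.1, 0.4, size=500)
roi_b = rng.uniform(0.5, 0.9, size=500)
fa = roi_histogram_features(roi_a)
fb = roi_histogram_features(roi_b)
```

Ranking ROIs by how well such per-region descriptors separate the diagnostic classes is the kind of discriminant-power assessment the abstract describes, although the paper's actual separability measure and greedy selection are more involved.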