1. Sarvutiene J, Ramanavicius A, Ramanavicius S, Prentice U. Advances in Duchenne Muscular Dystrophy: Diagnostic Techniques and Dystrophin Domain Insights. Int J Mol Sci 2025; 26:3579. [PMID: 40332074; PMCID: PMC12027135; DOI: 10.3390/ijms26083579]
Abstract
Abnormalities in the X chromosome, either numerical or structural, cause X-linked disorders such as Duchenne muscular dystrophy (DMD). Recent molecular and cytogenetic techniques can help identify DMD gene mutations. Accurate diagnosis of Duchenne is crucial, as it directly impacts patient treatment and management, genetic counselling, and the establishment of effective prevention strategies. This review provides an overview of the X-chromosomal disorders underlying Duchenne and discusses how mutations in dystrophin domains can impact detection accuracy. Firstly, cytogenetic and molecular techniques have become increasingly important and efficient tools for the genetic diagnosis of Duchenne disease. Secondly, artificial intelligence (AI) will be instrumental in developing future therapies by enabling the aggregation and synthesis of extensive and heterogeneous datasets, thereby elucidating underlying molecular mechanisms. However, despite advances in diagnostic technology, understanding the role of dystrophin in Duchenne disease remains a challenge. This review therefore synthesizes this complex information to advance the understanding of DMD and how it could affect patient care.
Affiliation(s)
- Julija Sarvutiene: State Research Institute Center for Physical Sciences and Technology (FTMC), Sauletekio Av. 3, LT-10257 Vilnius, Lithuania
- Arunas Ramanavicius: State Research Institute Center for Physical Sciences and Technology (FTMC), Sauletekio Av. 3, LT-10257 Vilnius, Lithuania; Department of Physical Chemistry, Institute of Chemistry, Faculty of Chemistry and Geosciences, Vilnius University, Naugarduko St. 24, LT-03225 Vilnius, Lithuania
- Simonas Ramanavicius: State Research Institute Center for Physical Sciences and Technology (FTMC), Sauletekio Av. 3, LT-10257 Vilnius, Lithuania
- Urte Prentice: State Research Institute Center for Physical Sciences and Technology (FTMC), Sauletekio Av. 3, LT-10257 Vilnius, Lithuania; Department of Physical Chemistry, Institute of Chemistry, Faculty of Chemistry and Geosciences, Vilnius University, Naugarduko St. 24, LT-03225 Vilnius, Lithuania; Department of Personalised Medicine, State Research Institute Center for Innovative Medicine, Santariskiu St. 5, LT-08410 Vilnius, Lithuania
2. Shamas M, Tauseef H, Ahmad A, Raza A, Ghadi YY, Mamyrbayev O, Momynzhanova K, Alahmadi TJ. Classification of pulmonary diseases from chest radiographs using deep transfer learning. PLoS One 2025; 20:e0316929. [PMID: 40096069; PMCID: PMC11913265; DOI: 10.1371/journal.pone.0316929]
Abstract
Pulmonary diseases are leading causes of disability and death worldwide, and early diagnosis can reduce the fatality rate. Chest radiographs are commonly used to diagnose pulmonary diseases, but diagnosis in clinical practice is challenging due to overlapping and complex anatomical structures, variability in radiographs, and variable image quality, so a medical specialist with extensive professional experience is often required. With the use of convolutional neural networks in the medical field, diagnosis can be improved by automatically detecting and classifying these diseases. This paper explores the effectiveness of convolutional neural networks and transfer learning in improving the prediction of fifteen different pulmonary diseases from chest radiographs. Our proposed deep transfer learning-based computational model achieved promising results compared to existing state-of-the-art methods, reporting an overall specificity of 97.92%, a sensitivity of 97.30%, a precision of 97.94%, and an area under the curve of 97.61%. These results suggest that the proposed model can be a valuable decision-support tool for practitioners in efficiently diagnosing various pulmonary diseases.
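The paper does not include an implementation, but the transfer-learning recipe it describes is standard. Below is a minimal, hypothetical PyTorch sketch of fine-tuning a pretrained backbone for multi-class chest-radiograph classification; the backbone choice (ResNet-50), frozen-feature strategy, and data layout are assumptions for illustration, not details taken from the study.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 15  # fifteen pulmonary disease labels, per the abstract

# Load an ImageNet-pretrained backbone and replace its classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False          # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # trainable head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of radiographs (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```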
Affiliation(s)
- Muneeba Shamas: Department of Computer Science, Lahore College for Women University, Lahore, Pakistan
- Huma Tauseef: Department of Computer Science, Lahore College for Women University, Lahore, Pakistan
- Ashfaq Ahmad: Department of Computer Science, MY University, Islamabad, Pakistan
- Ali Raza: Department of Computer Science, MY University, Islamabad, Pakistan
- Yazeed Yasin Ghadi: Department of Computer Science, Al Ain University, Abu Dhabi, United Arab Emirates
- Orken Mamyrbayev: Institute of Information and Computational Technologies, Almaty, Kazakhstan
- Tahani Jaser Alahmadi: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
3. Shen R, Hu Y, Wu Q, Zhu J, Xu W, Pan B, Shao W, Wang B, Guo W. Achieving a New Artificial Intelligence System for Serum Protein Electrophoresis to Recognize M-Spikes. ACS Omega 2025; 10:5770-5777. [PMID: 39989836; PMCID: PMC11840632; DOI: 10.1021/acsomega.4c09327]
Abstract
PURPOSE: To accurately identify low-concentration M-spikes in serum protein electrophoresis (SPE) patterns, a new artificial intelligence (AI) system was explored. METHODS: 166,003 SPE data sets, divided equally into 4 training sets and 1 optimization set, were utilized to establish and evaluate the AI system, named "AIRSPE". 10,014 internal test samples and 1,861 external test samples, with immunofixation electrophoresis (IFE) results as the gold standard, were used to assess the performance of AIRSPE, including sensitivity, negative predictive value, and concordance. In the internal test group, stratified by M-spike concentration, the consistency of AIRSPE and of manual interpretation with IFE-positive results was compared. RESULTS: AIRSPE selected MobileNetV2, which performed with an F1-score of 84.60%, a precision of 76.20%, a recall of 95.20%, a loss of 26.80%, an accuracy of 89.48%, and an interpretation time of 14 ms. On the internal test sets, the sensitivity and negative predictive value of AIRSPE were 95.21% and 97.65%, respectively, with no significant difference in performance compared to the external test set (P > 0.05). AIRSPE and IFE results showed a concordance (k = 0.832) implying almost perfect agreement, higher than that between manual interpretation and IFE (k = 0.699). The IFE-positive M-spikes in the internal test data that were detected by AIRSPE but missed by manual interpretation were mainly concentrated in the γ-fraction, at M-spike concentrations below 0.5 g/L. CONCLUSIONS: AIRSPE, established through AI deep learning and validated against IFE results, significantly outperforms manual interpretation in detecting low-concentration M-spikes, demonstrating its potential to assist clinical screening for M-spikes.
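The headline comparison in this abstract is agreement with IFE measured by Cohen's kappa (0.832 for AIRSPE versus 0.699 for manual reading). A small sketch of how such a comparison can be computed with scikit-learn follows; the label arrays are hypothetical stand-ins, not study data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical example labels: 1 = M-spike present, 0 = absent.
ife_gold = [1, 0, 1, 1, 0, 0, 1, 0]   # IFE gold standard
ai_calls = [1, 0, 1, 1, 0, 0, 1, 1]   # AI system interpretation
manual   = [1, 0, 0, 1, 0, 0, 1, 1]   # manual interpretation

# Cohen's kappa corrects raw agreement for chance; values above 0.8
# are conventionally read as "almost perfect" agreement.
print("AI vs IFE:    ", cohen_kappa_score(ife_gold, ai_calls))
print("manual vs IFE:", cohen_kappa_score(ife_gold, manual))
```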
Affiliation(s)
- Ruojian Shen: Department of Laboratory Medicine, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Yuyi Hu: Department of Laboratory Medicine, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Qun Wu: Department of Laboratory Medicine, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Jing Zhu: Department of Laboratory Medicine, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Wen Xu: Department of Laboratory Medicine, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Baishen Pan: Department of Laboratory Medicine, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Wenqi Shao: Department of Laboratory Medicine, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Beili Wang: Department of Laboratory Medicine, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Wei Guo: Department of Laboratory Medicine, Zhongshan Hospital, Fudan University, Shanghai 200032, China
4. Kim J, Kwak CW, Uhmn S, Lee J, Yoo S, Cho MC, Son H, Jeong H, Choo MS. A Novel Deep Learning-based Artificial Intelligence System for Interpreting Urolithiasis in Computed Tomography. Eur Urol Focus 2024; 10:1049-1054. [PMID: 38997836; DOI: 10.1016/j.euf.2024.07.003]
Abstract
BACKGROUND AND OBJECTIVE: Our aim was to develop an artificial intelligence (AI) system for detection of urolithiasis in computed tomography (CT) images using advanced deep learning, capable of real-time calculation of stone parameters such as volume and density, which are essential for treatment decisions. The performance of the system was compared to that of urologists in emergency room (ER) scenarios. METHODS: Axial CT images for patients who underwent stone surgery between August 2022 and July 2023 comprised the data set, which was divided into 70% for training, 10% for internal validation, and 20% for testing. Two urologists and an AI specialist annotated stones using LabelImg for ground-truth data. The YOLOv4 architecture was used for training, with acceleration via an RTX 4900 graphics processing unit (GPU). External validation was performed using CT images for 100 patients with suspected urolithiasis. KEY FINDINGS AND LIMITATIONS: The AI system was trained on 39,433 CT images, of which 9.1% were positive. The system achieved accuracy of 95%, peaking with a 1:2 positive-to-negative sample ratio. In a validation set of 5,736 images (482 positive), accuracy remained at 95%. Misses (2.6%) were mainly irregular stones. False positives (3.4%) were often due to artifacts or calcifications. External validation using 100 CT images from the ER revealed accuracy of 94%; the missed cases were mostly ureterovesical junction stones, which were not included in the training set. The AI system surpassed human specialists in speed, analyzing 150 CT images in 13 s, versus 38.6 s for evaluation by urologists and 23 h for formal reading. The AI system calculated stone volume in 0.2 s, versus 77 s for calculation by urologists. CONCLUSIONS AND CLINICAL IMPLICATIONS: Our AI system, which uses advanced deep learning, assists in diagnosing urolithiasis with 94% accuracy in real clinical settings and has potential for rapid diagnosis using standard consumer-grade GPUs. PATIENT SUMMARY: We developed a new AI (artificial intelligence) system that can quickly and accurately detect kidney stones in CT (computed tomography) scans. Testing showed that this system is highly effective, with accuracy of 94% for real cases in the emergency department. It is much faster than traditional methods and provides rapid and reliable results to help doctors in making better treatment decisions for their patients.
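A detail worth unpacking is the real-time stone-volume calculation: once a stone is delineated on CT, its volume is simply the voxel count scaled by the per-voxel volume, and mean density is the average attenuation inside the mask. The NumPy sketch below illustrates this under assumed voxel spacing; it is not the authors' code.

```python
import numpy as np

def stone_metrics(ct_hu: np.ndarray, stone_mask: np.ndarray,
                  spacing_mm=(0.7, 0.7, 3.0)):
    """Volume (mm^3) and mean density (HU) of a segmented stone.

    ct_hu      -- CT volume in Hounsfield units, shape (Z, Y, X)
    stone_mask -- boolean mask of the detected stone, same shape
    spacing_mm -- assumed (x, y, z) voxel spacing; in practice this
                  is read from the DICOM header
    """
    voxel_volume = float(np.prod(spacing_mm))     # mm^3 per voxel
    volume_mm3 = stone_mask.sum() * voxel_volume
    mean_hu = float(ct_hu[stone_mask].mean())     # mean attenuation
    return volume_mm3, mean_hu
```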
Affiliation(s)
- Jin Kim: Department of Computer Engineering, Hallym University, Chuncheon, South Korea
- Chan Woo Kwak: Land Combat R&D Center, Hanwha Systems, Gumi, South Korea
- Saangyong Uhmn: Department of Computer Engineering, Hallym University, Chuncheon, South Korea
- Junghoon Lee: Department of Urology, Boramae Medical Center, Seoul Metropolitan Government-Seoul National University, Seoul, South Korea
- Sangjun Yoo: Department of Urology, Boramae Medical Center, Seoul Metropolitan Government-Seoul National University, Seoul, South Korea
- Min Chul Cho: Department of Urology, Boramae Medical Center, Seoul Metropolitan Government-Seoul National University, Seoul, South Korea
- Hwancheol Son: Department of Urology, Boramae Medical Center, Seoul Metropolitan Government-Seoul National University, Seoul, South Korea
- Hyeon Jeong: Department of Urology, Boramae Medical Center, Seoul Metropolitan Government-Seoul National University, Seoul, South Korea
- Min Soo Choo: Department of Urology, Boramae Medical Center, Seoul Metropolitan Government-Seoul National University, Seoul, South Korea
5. Li S, Xie J, Liu J, Wu Y, Wang Z, Cao Z, Wen D, Zhang X, Wang B, Yang Y, Lu L, Dong X. Prognostic Value of a Combined Nomogram Model Integrating 3-Dimensional Deep Learning and Radiomics for Head and Neck Cancer. J Comput Assist Tomogr 2024; 48:498-507. [PMID: 38438336; DOI: 10.1097/rct.0000000000001584]
Abstract
OBJECTIVE: The preoperative prediction of the overall survival (OS) status of patients with head and neck cancer (HNC) is of significant value for their individualized treatment and prognosis. This study aims to evaluate the impact of adding 3D deep learning features to radiomics models for predicting 5-year OS status. METHODS: Two hundred twenty cases from The Cancer Imaging Archive public dataset were included in this study; 2,212 radiomics features and 304 deep features were extracted from each case. The features were selected by univariate analysis and the least absolute shrinkage and selection operator (LASSO), and then grouped into a radiomics model containing a positron emission tomography/computed tomography (PET/CT) radiomics feature score, a deep model containing a deep feature score, and a combined model containing the PET/CT radiomics feature score plus the 3D deep feature score. A TumorStage model was also constructed using the initial patient tumor-node-metastasis stage, to compare against the performance of the combined model. A nomogram was constructed to analyze the influence of the deep features on model performance. Ten-fold cross-validation of the average area under the receiver operating characteristic curve (AUC) and calibration curves were used to evaluate performance, and Shapley Additive exPlanations (SHAP) analysis was developed for interpretation. RESULTS: The TumorStage model, the radiomics model, the deep model, and the combined model achieved AUCs of 0.604, 0.851, 0.840, and 0.895 on the training set and 0.571, 0.849, 0.832, and 0.900 on the test set. The combined model predicted the 5-year OS status of HNC patients better than the radiomics model and the deep model, provided a favorable fit in calibration curves, and was clinically useful in decision curve analysis. The SHAP summary plot and SHAP force plot visually interpreted the influence of the deep and radiomics features on the model results. CONCLUSIONS: In predicting 5-year OS status in patients with HNC, 3D deep features provided richer information for the combined model, which outperformed the radiomics model and the deep model.
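The feature-selection step described here, univariate filtering followed by LASSO over pooled radiomics and deep features, maps directly onto standard scikit-learn components. A hypothetical sketch follows; the feature matrix, labels, and regularisation grid are placeholders, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(220, 2212 + 304))   # pooled radiomics + deep features
y = rng.integers(0, 2, size=220)         # 5-year OS status (placeholder)

# L1-penalised logistic regression performs LASSO-style selection:
# features whose coefficients shrink to zero drop out of the score.
model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(penalty="l1", solver="saga", Cs=10,
                         cv=10, max_iter=5000, scoring="roc_auc"),
)
model.fit(X, y)
kept = np.flatnonzero(model[-1].coef_.ravel() != 0)
print(f"{kept.size} features retained out of {X.shape[1]}")
```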
Affiliation(s)
- Jiayi Xie: Department of Automation, Tsinghua University, Beijing, China
- Zhongxiao Wang: Hebei International Research Center for Medical-Engineering
- Zhendong Cao: Department of Radiology, The Affiliated Hospital of Chengde Medical University, Chengde, Hebei
- Dong Wen: Institute of Artificial Intelligence, University of Science and Technology Beijing
- Xiaolei Zhang: Hebei International Research Center for Medical-Engineering
- Yifan Yang: Faculty of Environment and Life, Beijing University of Technology, Beijing, China
- Lijun Lu: School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou
6. Fink A, Tran H, Reisert M, Rau A, Bayer J, Kotter E, Bamberg F, Russe MF. A deep learning approach for projection and body-side classification in musculoskeletal radiographs. Eur Radiol Exp 2024; 8:23. [PMID: 38353812; PMCID: PMC10866807; DOI: 10.1186/s41747-023-00417-x]
Abstract
BACKGROUND: The growing prevalence of musculoskeletal diseases increases radiologic workload, highlighting the need for optimized workflow management and automated metadata classification systems. We developed a large-scale, well-characterized dataset of musculoskeletal radiographs and trained deep learning neural networks to classify radiographic projection and body side. METHODS: In this IRB-approved retrospective single-center study, a dataset of musculoskeletal radiographs from 2011 to 2019 was retrieved and manually labeled for one of 45 possible radiographic projections and the depicted body side. Two classification networks were trained for the respective tasks using the Xception architecture with a custom network top and pretrained weights. Performance was evaluated on a hold-out test sample, and gradient-weighted class activation mapping (Grad-CAM) heatmaps were computed to visualize the influential image regions for network predictions. RESULTS: A total of 13,098 studies comprising 23,663 radiographs were included with a patient-level dataset split, resulting in 19,183 training, 2,145 validation, and 2,335 test images. Focusing on paired body regions, training for side detection included 16,319 radiographs (13,284 training, 1,443 validation, and 1,592 test images). The models achieved an overall accuracy of 0.975 for projection and 0.976 for body-side classification on the respective hold-out test sample. Errors were primarily observed in projections with seamless anatomical transitions or non-orthograde adjustment techniques. CONCLUSIONS: The deep learning neural networks demonstrated excellent performance in classifying radiographic projection and body side across a wide range of musculoskeletal radiographs. These networks have the potential to serve as presorting algorithms, optimizing radiologic workflow and enhancing patient care. RELEVANCE STATEMENT: The developed networks excel at classifying musculoskeletal radiographs, providing valuable tools for research data extraction, standardized image sorting, and minimizing misclassifications in artificial intelligence systems, ultimately enhancing radiology workflow efficiency and patient care. KEY POINTS: • A large-scale, well-characterized dataset was developed, covering a broad spectrum of musculoskeletal radiographs. • Deep learning neural networks achieved high accuracy in classifying radiographic projection and body side. • Grad-CAM heatmaps provided insight into network decisions, contributing to their interpretability and trustworthiness. • The trained models can help optimize radiologic workflow and manage large amounts of data.
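The architecture described, an Xception backbone with pretrained weights and a custom classification top, can be assembled in a few lines of Keras. The sketch below is a plausible reconstruction for the 45-projection task, not the authors' released code; the input size, dropout rate, and head layout are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_PROJECTIONS = 45  # possible radiographic projections, per the abstract

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))

# Custom network top on pooled Xception features.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),                        # assumed regularisation
    layers.Dense(NUM_PROJECTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The body-side classifier would reuse the same backbone with a two-class head.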
Affiliation(s)
- Anna Fink: Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Breisacher Str. 64, 79106 Freiburg, Germany
- Hien Tran: Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marco Reisert: Department of Stereotactic and Functional Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; Medical Physics, Department of Diagnostic and Interventional Radiology, Medical Center, University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Alexander Rau: Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; Department of Neuroradiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Jörg Bayer: Department of Trauma and Orthopaedic Surgery, Schwarzwald-Baar Hospital, Villingen-Schwenningen, Germany
- Elmar Kotter: Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Fabian Bamberg: Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Maximilian F. Russe: Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
7. Sorace L, Raju N, O'Shaughnessy J, Kachel S, Jansz K, Yang N, Lim RP. Assessment of inspiration and technical quality in anteroposterior thoracic radiographs using machine learning. Radiography (Lond) 2024; 30:107-115. [PMID: 37918335; DOI: 10.1016/j.radi.2023.10.014]
Abstract
INTRODUCTION: Chest radiographs are the most commonly performed radiographic procedure, but suboptimal technical factors can impact clinical interpretation. A deep learning model was developed to assess the technical and inspiratory adequacy of anteroposterior chest radiographs. METHODS: Adult anteroposterior chest radiographs (n = 2375) were assessed for technical adequacy and, if otherwise technically adequate, for adequacy of inspiration. Images were labelled by an experienced radiologist with one of three ground-truth labels: inadequate technique (n = 605, 25.5%), adequate inspiration (n = 900, 37.9%), and inadequate inspiration (n = 870, 36.6%). A convolutional neural network was then iteratively trained to predict these labels and evaluated using recall, precision, F1 and micro-F1, and Gradient-weighted Class Activation Mapping analysis on a hold-out test set. The impact of kyphosis on model accuracy was also assessed. RESULTS: The model performed best for radiographs with adequate technique and worst for images with inadequate technique. Recall was highest (89%) for radiographs with both adequate technique and inspiration, compared with 81% for images with adequate technique and inadequate inspiration and 60% for images with inadequate technique, although precision was highest (85%) for this last category. Per-class F1 was 80%, 81%, and 70% for adequate inspiration, inadequate inspiration, and inadequate technique, respectively; weighted F1 and micro-F1 scores were both 78%. The presence or absence of kyphosis had no significant impact on model accuracy in images with adequate technique. CONCLUSION: This study demonstrates the promising performance of a machine learning algorithm for assessing inspiratory adequacy and overall technical adequacy of anteroposterior chest radiograph acquisition. IMPLICATIONS FOR PRACTICE: With further refinement, machine learning can contribute to education and quality improvement in radiology departments.
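The metrics quoted here (per-class recall, precision, and F1, plus micro- and weighted-averaged F1) are exactly what scikit-learn's classification report produces for a three-class problem. A small illustration with made-up labels:

```python
from sklearn.metrics import classification_report, f1_score

CLASSES = ["adequate inspiration", "inadequate inspiration",
           "inadequate technique"]

# Hypothetical ground-truth and predicted class indices for a hold-out set.
y_true = [0, 0, 1, 1, 1, 2, 2, 0, 1, 2]
y_pred = [0, 1, 1, 1, 0, 2, 0, 0, 1, 2]

print(classification_report(y_true, y_pred, target_names=CLASSES))
print("micro-F1:   ", f1_score(y_true, y_pred, average="micro"))
print("weighted-F1:", f1_score(y_true, y_pred, average="weighted"))
```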
Affiliation(s)
- L Sorace: Department of Radiology, Austin Hospital, Heidelberg, Australia
- N Raju: Department of Radiology, Austin Hospital, Heidelberg, Australia
- J O'Shaughnessy: Department of Radiology, Austin Hospital, Heidelberg, Australia
- S Kachel: Department of Radiology, Austin Hospital, Heidelberg, Australia; The University of Melbourne, Parkville, Australia; Columbia University, New York, NY, USA
- K Jansz: Department of Radiology, Austin Hospital, Heidelberg, Australia
- N Yang: Department of Radiology, Austin Hospital, Heidelberg, Australia; The University of Melbourne, Parkville, Australia
- R P Lim: Department of Radiology, Austin Hospital, Heidelberg, Australia; The University of Melbourne, Parkville, Australia
8. Teague J, Socia D, An G, Badylak S, Johnson S, Jiang P, Vodovotz Y, Cockrell RC. Artificial Intelligence Optical Biopsy for Evaluating the Functional State of Wounds. J Surg Res 2023; 291:683-690. [PMID: 37562230; DOI: 10.1016/j.jss.2023.07.017]
Abstract
INTRODUCTION: The clinical characterization of the functional status of active wounds, in terms of their driving cellular and molecular biology, remains a considerable challenge that currently requires excision via a tissue biopsy. In this pilot study, we use a convolutional Siamese neural network (SNN) architecture to predict the functional state of a wound using digital photographs of wounds in a canine model of volumetric muscle loss (VML). METHODS: Digital images of VML injuries and tissue biopsies were obtained in a standardized fashion from an established canine model of VML. Gene expression profiles for each biopsy site were obtained using RNA sequencing. These profiles were converted to functional profiles by a manual review of validated gene ontology databases, in which we determined a hierarchical representation of gene functions based on functional specificity. An SNN was trained to regress functional profile expression values, informed by an image segment showing the surface of a small tissue biopsy. RESULTS: The SNN was able to predict the functional expression of a range of functions with error ranging from ∼5% to ∼30%; the functions most closely associated with the early state of wound healing were predicted best. CONCLUSIONS: These initial results suggest promise for further research on this novel use of machine learning regression on medical images. Regressing functional profiles, as opposed to specific genes, both addresses the challenge of genetic redundancy and gives deeper insight into the mechanistic configuration of a region of tissue in wounds.
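As a rough illustration of the core idea, regressing a functional-expression profile from a wound photograph, here is a minimal PyTorch CNN regressor. The profile length, image size, and architecture are invented for the sketch, and the Siamese pairing used in the paper is omitted for brevity.

```python
import torch
import torch.nn as nn

PROFILE_DIM = 32  # hypothetical number of aggregated gene-function values

class WoundProfileRegressor(nn.Module):
    """Maps an RGB image segment to a functional-profile vector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, PROFILE_DIM)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = WoundProfileRegressor()
images = torch.randn(4, 3, 128, 128)        # batch of biopsy-site segments
profiles = torch.randn(4, PROFILE_DIM)      # matched expression targets
loss = nn.functional.mse_loss(model(images), profiles)  # regression loss
loss.backward()
```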
Affiliation(s)
- Joe Teague: Department of Surgery, University of Vermont, Burlington, Vermont
- Damien Socia: Department of Surgery, University of Vermont, Burlington, Vermont
- Gary An: Department of Surgery, University of Vermont, Burlington, Vermont
- Stephen Badylak: McGowan Institute of Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Scott Johnson: McGowan Institute of Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Peng Jiang: Center for Gene Regulation in Health and Disease (GRHD), Cleveland State University, Cleveland, Ohio
- Yoram Vodovotz: McGowan Institute of Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania
- R Chase Cockrell: Department of Surgery, University of Vermont, Burlington, Vermont
9. Weaver JK, Logan J, Broms R, Antony M, Rickard M, Erdman L, Edwins R, Pominville R, Hannick J, Woo L, Viteri B, D'Souza N, Viswanath SE, Flask C, Lorenzo A, Fan Y, Tasian GE. Deep learning of renal scans in children with antenatal hydronephrosis. J Pediatr Urol 2023; 19:514.e1-514.e7. [PMID: 36775719; DOI: 10.1016/j.jpurol.2022.12.017]
Abstract
INTRODUCTION: Antenatal hydronephrosis (ANH) is one of the most common anomalies identified on prenatal ultrasound, found in up to 4.5% of all pregnancies. Children with ANH are surveilled with repeated renal ultrasound, and when there is high suspicion for a ureteropelvic junction obstruction on renal ultrasound, a mercaptoacetyltriglycine (MAG3) Lasix renal scan is performed to evaluate for obstruction. However, the challenging interpretation of MAG3 renal scans places patients at risk of misdiagnosis. OBJECTIVE: Our objective was to analyze MAG3 renal scans using machine learning to predict renal complications. We hypothesized that our deep learning model would extract features from MAG3 renal scans that can predict renal complications in children with ANH. STUDY DESIGN: We performed a case-control study of MAG3 studies drawn from a population of children with ANH concerning for ureteropelvic junction obstruction evaluated at our institution from January 2009 until June 2021. The outcome was renal complications occurring ≥6 months after an equivocal MAG3 renal scan. We created two machine learning models: a deep learning model using the radiotracer concentration-versus-time data from the kidney of interest, and a random forest model created using clinical data. The performance of the models was assessed using measures of diagnostic accuracy. RESULTS: We identified 152 eligible patients with available images, of whom 62 were cases and 90 were controls. The deep learning model predicted future renal complications with an overall accuracy of 73% (95% confidence interval [CI] 68-76%) and an AUC of 0.78 (95% CI 0.70, 0.84). The random forest model had an accuracy of 62% (95% CI 60-66%) and an AUC of 0.67 (95% CI 0.64, 0.72). DISCUSSION: Our deep learning model identified patients at high risk of developing renal complications following an equivocal renal scan and discriminated those at low risk with moderately high accuracy (73%). The deep learning model outperformed the clinical model built from the clinical features classically used by urologists for surgical decision making. CONCLUSION: Our models have the potential to influence clinical decision making by providing supplemental analytical data from MAG3 scans that would not otherwise be available to urologists. Future multi-institutional retrospective and prospective trials are needed to validate our model.
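The deep model here takes the renogram, the kidney's radiotracer-concentration-versus-time curve, as a one-dimensional signal. A minimal PyTorch sketch of that idea follows; the curve length, channel counts, and layer sizes are assumptions for illustration, not the study's architecture.

```python
import torch
import torch.nn as nn

CURVE_LEN = 180  # assumed number of time points in the renogram

# A small 1D CNN mapping a time-activity curve to P(renal complication).
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 1),
)

curves = torch.randn(8, 1, CURVE_LEN)       # batch of renogram curves
logits = model(curves).squeeze(1)
probs = torch.sigmoid(logits)               # risk of future complication
print(probs.shape)                          # torch.Size([8])
```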
Affiliation(s)
- J K Weaver: Division of Urology, Rainbow Babies and Children's Hospital/Case Western Reserve University School of Medicine, Cleveland, OH, USA
- J Logan: Division of Urology, Children's Hospital of Philadelphia, PA, USA; Department of Biostatistics, Epidemiology and Informatics and Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- R Broms: Division of Urology, Children's Hospital of Philadelphia, PA, USA
- M Antony: Division of Urology, Children's Hospital of Philadelphia, PA, USA
- M Rickard: Division of Urology, The Hospital for Sick Children, Toronto, ON, Canada
- L Erdman: Division of Urology, The Hospital for Sick Children, Toronto, ON, Canada
- R Edwins: Division of Urology, Rainbow Babies and Children's Hospital/Case Western Reserve University School of Medicine, Cleveland, OH, USA
- R Pominville: Division of Urology, Rainbow Babies and Children's Hospital/Case Western Reserve University School of Medicine, Cleveland, OH, USA
- J Hannick: Division of Urology, Rainbow Babies and Children's Hospital/Case Western Reserve University School of Medicine, Cleveland, OH, USA
- L Woo: Division of Urology, Rainbow Babies and Children's Hospital/Case Western Reserve University School of Medicine, Cleveland, OH, USA
- B Viteri: Division of Nephrology, Children's Hospital of Philadelphia, PA, USA
- N D'Souza: Division of Urology, Children's Hospital of Philadelphia, PA, USA
- S E Viswanath: Department of Radiology, Case Western Reserve University School of Medicine, Cleveland, OH, USA
- C Flask: Department of Radiology, Case Western Reserve University School of Medicine, Cleveland, OH, USA
- A Lorenzo: Division of Urology, The Hospital for Sick Children, Toronto, ON, Canada
- Y Fan: Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- G E Tasian: Division of Urology, Children's Hospital of Philadelphia, PA, USA; Department of Biostatistics, Epidemiology and Informatics and Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
10. Del Real Mata C, Jeanne O, Jalali M, Lu Y, Mahshid S. Nanostructured-Based Optical Readouts Interfaced with Machine Learning for Identification of Extracellular Vesicles. Adv Healthc Mater 2023; 12:e2202123. [PMID: 36443009; DOI: 10.1002/adhm.202202123]
Abstract
Extracellular vesicles (EVs) are shed from cancer cells into body fluids, enclosing molecular information about the underlying disease, with the potential to serve as target cancer biomarkers in emerging diagnostic approaches such as liquid biopsy. Still, the study of EVs presents major challenges due to their heterogeneity, complexity, and scarcity. Recently, liquid biopsy platforms have allowed the study of tumor-derived materials, holding great promise for early-stage diagnosis and monitoring of cancer when interfaced with novel adaptations of optical readouts and advanced machine learning analysis. Here, recent advances in labeled and label-free optical techniques, such as fluorescence-, plasmonic-, and chromogenic-based systems, interfaced with nanostructured sensors like nanoparticles, nanoholes, and nanowires, together with diverse machine learning analyses, are reviewed. The adaptability of the different optical methods is compared, and insights are provided into prospective avenues for translating these technological approaches to cancer diagnosis. The inherent augmented properties of nanostructures are discussed as key enhancers of the sensitivity of EV detection. The review concludes with recent integrations of nanostructure-based optical readouts with diverse machine learning models, novel analysis ventures that could raise the capability of these methods to the point of translation into diagnostic applications.
Affiliation(s)
- Olivia Jeanne: McGill University, Department of Bioengineering, Montreal, QC, H3A 0E9, Canada
- Mahsa Jalali: McGill University, Department of Bioengineering, Montreal, QC, H3A 0E9, Canada
- Yao Lu: McGill University, Department of Bioengineering, Montreal, QC, H3A 0E9, Canada
- Sara Mahshid: McGill University, Department of Bioengineering, Montreal, QC, H3A 0E9, Canada
11. Yu X, Jia X, Zhang Z, Fu Y, Zhai J, Chen N, Cao Q, Zhu Z, Dai Q. Meibomian gland morphological changes in ocular herpes zoster patients based on AI analysis. Front Cell Dev Biol 2022; 10:1094044. [DOI: 10.3389/fcell.2022.1094044]
Abstract
Varicella-zoster virus (VZV) infection results in a series of ophthalmic complications. Clinically, we also observed that the proportion of dry eye symptoms was significantly higher in patients with herpes zoster ophthalmicus (HZO) than in healthy individuals. Meibomian gland dysfunction (MGD) is one of the main causes of dry eye. We therefore hypothesized that HZO may be associated with MGD, affecting meibomian gland (MG) morphology through immune response and inflammation. The purpose of this study was to retrospectively analyze the effect of HZO with craniofacial herpes zoster on dry eye and MG morphology using an artificial intelligence (AI) MG morphology analytic system. In this study, 26 patients were diagnosed with HZO based on a history of craniofacial herpes zoster accompanied by abnormal ocular signs. The average height of all MGs in the upper eyelid and in both eyelids was significantly lower in the HZO group than in the normal control group (p < 0.05 for all), whereas the average width and tortuosity of all MGs in both the upper and lower eyelids did not differ significantly between the two groups. The MG density of the upper eyelid and of both eyelids was also significantly lower in the HZO group than in the normal control group (p = 0.020 and p = 0.022). HZO may therefore lead to dry eye, coupled with morphological changes of the MGs, mainly a reduction in MG density and height. Early and timely control of HZO is important and could prevent potential long-term severe ocular surface injury.
12. Kim M, Park SK, Kubota Y, Lee S, Park K, Kong DS. Applying a deep convolutional neural network to monitor the lateral spread response during microvascular surgery for hemifacial spasm. PLoS One 2022; 17:e0276378. [PMID: 36322573; PMCID: PMC9629649; DOI: 10.1371/journal.pone.0276378]
Abstract
BACKGROUND: Intraoperative neurophysiological monitoring is essential in neurosurgical procedures. In this study, we built and evaluated a deep neural network that uses intraoperatively acquired electromyography images to differentiate between the presence and absence of a lateral spread response, which provides critical information during microvascular decompression surgery for the treatment of hemifacial spasm. METHODS AND FINDINGS: A total of 3,674 image screenshots of monitoring devices from 50 patients were prepared, preprocessed, and then divided into training and validation sets. A deep neural network was constructed using current-standard, off-the-shelf tools. The neural network correctly differentiated 50 test images (accuracy, 100%; area under the curve, 0.96) collected from 25 patients whose data were never exposed to the network during training or validation. The accuracy of the network was equivalent to that of the neuromonitoring technologists (p = 0.3013) and higher than that of neurosurgeons experienced in hemifacial spasm (p < 0.0001). Heatmaps computed to highlight the key regions of interest reached a level similar to that of trained human professionals. Provisional clinical application showed that the neural network was preferable as an auxiliary tool. CONCLUSIONS: A deep neural network trained on a dataset of intraoperatively collected electromyography data could classify the presence and absence of the lateral spread response with performance equivalent to that of human professionals. Well-designed applications based upon the neural network may provide useful auxiliary tools for surgical teams during operations.
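The heatmaps mentioned in the findings are typically produced with Gradient-weighted Class Activation Mapping (Grad-CAM). A compact, hypothetical PyTorch sketch of the core computation follows, using a torchvision ResNet as a stand-in backbone rather than the authors' network.

```python
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()  # stand-in classifier
activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["a"] = output                 # feature maps of last block

def bwd_hook(module, grad_input, grad_output):
    gradients["g"] = grad_output[0]           # gradients w.r.t. those maps

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)               # EMG screenshot as a tensor
score = model(x)[0, 1]                        # logit of "LSR present" class
score.backward()

# Grad-CAM: weight each feature map by its mean gradient, then ReLU.
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * activations["a"]).sum(dim=1)).squeeze(0)
print(cam.shape)   # coarse 7x7 heatmap; upsample to overlay on the image
```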
Affiliation(s)
- Minsoo Kim: Department of Neurosurgery, Gangneung Asan Hospital, Gangneung, Korea; Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea; Department of Medicine, Graduate School, Yonsei University College of Medicine, Seoul, Korea
- Sang-Ku Park: Department of Neurosurgery, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul, Korea
- Seunghoon Lee: Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Kwan Park: Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea; Department of Neurosurgery, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul, Korea
- Doo-Sik Kong: Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
13. Yamamuro M, Asai Y, Hashimoto N, Yasuda N, Kimura H, Yamada T, Nemoto M, Kimura Y, Handa H, Yoshida H, Abe K, Tada M, Habe H, Nagaoka T, Nin S, Ishii K, Kondo Y. Utility of U-Net for the objective segmentation of the fibroglandular tissue region on clinical digital mammograms. Biomed Phys Eng Express 2022; 8. [PMID: 35728581; DOI: 10.1088/2057-1976/ac7ada]
Abstract
This study investigates the equivalence or compatibility between U-Net and visual segmentations of fibroglandular tissue regions by mammography experts for calculating the breast density and mean glandular dose (MGD). A total of 703 mediolateral oblique-view mammograms were used for segmentation. Two region types were set as the ground truth (determined visually): (1) one type included only the region where fibroglandular tissue was identifiable (called the 'dense region'); (2) the other type included the region where the fibroglandular tissue may have existed in the past, provided that apparent adipose-only parts, such as the retromammary space, are excluded (the 'diffuse region'). U-Net was trained to segment the fibroglandular tissue region with an adaptive moment estimation optimiser, five-fold cross-validated with 400 training and 100 validation mammograms, and tested with 203 mammograms. The breast density and MGD were calculated using the van Engeland and Dance formulas, respectively, and compared between U-Net and the ground truth with the Dice similarity coefficient and Bland-Altman analysis. Dice similarity coefficients between U-Net and the ground truth were 0.895 and 0.939 for the dense and diffuse regions, respectively. In the Bland-Altman analysis, no proportional or fixed errors were discovered in either the dense or diffuse region for breast density, whereas a slight proportional error was discovered in both regions for the MGD (the slopes of the regression lines were -0.0299 and -0.0443 for the dense and diffuse regions, respectively). Consequently, the U-Net and ground truth were deemed equivalent (interchangeable) for breast density and compatible (interchangeable following four simple arithmetic operations) for MGD. U-Net-based segmentation of the fibroglandular tissue region was satisfactory for both regions, providing reliable segmentation for breast density and MGD calculations. U-Net will be useful in developing a reliable individualised screening-mammography programme, instead of relying on the visual judgement of mammography experts.
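The agreement figures in this abstract rest on the Dice similarity coefficient: twice the overlap of two masks divided by the sum of their areas. A short NumPy sketch, with random masks standing in for the U-Net output and the expert ground truth:

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

rng = np.random.default_rng(42)
pred = rng.random((512, 512)) > 0.5    # stand-in U-Net segmentation
truth = rng.random((512, 512)) > 0.5   # stand-in expert ground truth
print(f"Dice = {dice(pred, truth):.3f}")
```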
Affiliation(s)
- Mika Yamamuro: Radiology Center, Kindai University Hospital, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan; Graduate School of Health Sciences, Niigata University, 2-746, Asahimachidori, Chuouku, Niigata 951-8518, Japan
- Yoshiyuki Asai: Radiology Center, Kindai University Hospital, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Naomi Hashimoto: Radiology Center, Kindai University Hospital, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Nao Yasuda: Radiology Center, Kindai University Hospital, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Hiroto Kimura: Radiology Center, Kindai University Hospital, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Takahiro Yamada: Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Mitsutaka Nemoto: Department of Computational Systems Biology, Kindai University Faculty of Biology-Oriented Science and Technology, 930, Nishimitani, Kinokawa, Wakayama 649-6433, Japan
- Yuichi Kimura: Department of Computational Systems Biology, Kindai University Faculty of Biology-Oriented Science and Technology, 930, Nishimitani, Kinokawa, Wakayama 649-6433, Japan
- Hisashi Handa: Department of Informatics, Kindai University Faculty of Science and Engineering, 3-4-1, Kowakae, Higashi-osaka, Osaka 577-8502, Japan
- Hisashi Yoshida: Department of Computational Systems Biology, Kindai University Faculty of Biology-Oriented Science and Technology, 930, Nishimitani, Kinokawa, Wakayama 649-6433, Japan
- Koji Abe: Department of Informatics, Kindai University Faculty of Science and Engineering, 3-4-1, Kowakae, Higashi-osaka, Osaka 577-8502, Japan
- Masahiro Tada: Department of Informatics, Kindai University Faculty of Science and Engineering, 3-4-1, Kowakae, Higashi-osaka, Osaka 577-8502, Japan
- Hitoshi Habe: Department of Informatics, Kindai University Faculty of Science and Engineering, 3-4-1, Kowakae, Higashi-osaka, Osaka 577-8502, Japan
- Takashi Nagaoka: Department of Computational Systems Biology, Kindai University Faculty of Biology-Oriented Science and Technology, 930, Nishimitani, Kinokawa, Wakayama 649-6433, Japan
- Seiun Nin: Department of Radiology, Kindai University Faculty of Medicine, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Kazunari Ishii: Department of Radiology, Kindai University Faculty of Medicine, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Yohan Kondo: Graduate School of Health Sciences, Niigata University, 2-746, Asahimachidori, Chuouku, Niigata 951-8518, Japan
14. Suganyadevi S, Seethalakshmi V. CVD-HNet: Classifying Pneumonia and COVID-19 in Chest X-ray Images Using Deep Network. Wireless Personal Communications 2022; 126:3279-3303. [PMID: 35756172; PMCID: PMC9206838; DOI: 10.1007/s11277-022-09864-y]
Abstract
The use of computer-assisted analysis to improve image interpretation has been a long-standing challenge in the medical imaging industry. In terms of image comprehension, continuous advances in artificial intelligence (AI), predominantly in deep learning (DL) techniques, are supporting the classification, detection, and quantification of anomalies in medical images. DL techniques are the most rapidly evolving branch of AI and have recently been applied successfully in a variety of fields, including medicine. This paper provides a classification method for COVID-19-infected X-ray images based on a novel deep CNN model. For COVID-19-specific pneumonia analysis, two new customized CNN architectures, CVD-HNet1 (COVID-HybridNetwork1) and CVD-HNet2 (COVID-HybridNetwork2), have been designed. The suggested method utilizes boundary- and region-based operations, as well as convolution processes, in a systematic manner. In comparison to existing CNNs, the suggested classification method achieves an excellent accuracy of 98%, an F-score of 0.99, and an MCC of 0.97. These results indicate impressive classification accuracy on a limited dataset; with more training examples, much better results can be achieved. Overall, our CVD-HNet model could be a useful tool for radiologists in detecting and diagnosing COVID-19 cases early.
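Alongside accuracy and F-score, the abstract reports the Matthews correlation coefficient (MCC), a balanced measure that stays informative under class imbalance. A quick, hypothetical illustration with scikit-learn:

```python
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

# Hypothetical labels: 1 = COVID-19, 0 = other pneumonia / normal.
y_true = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0, 0, 1]

print("accuracy:", accuracy_score(y_true, y_pred))
print("F1:      ", f1_score(y_true, y_pred))
print("MCC:     ", matthews_corrcoef(y_true, y_pred))
```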
Affiliation(s)
- S. Suganyadevi: Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore, Tamilnadu 641 407, India
- V. Seethalakshmi: Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore, Tamilnadu 641 407, India
15. Deep Neural Networks and Machine Learning Radiomics Modelling for Prediction of Relapse in Mantle Cell Lymphoma. Cancers (Basel) 2022; 14:2008. [PMID: 35454914; PMCID: PMC9028737; DOI: 10.3390/cancers14082008]
Abstract
Simple Summary: Mantle cell lymphoma (MCL) is an aggressive lymphoid tumour with a poor prognosis. There exist no routine biomarkers for the early prediction of relapse. Our study compared the potential of radiomics-based machine learning and 3D deep learning models as non-invasive biomarkers to risk-stratify MCL patients, thus promoting precision imaging in clinical oncology. Abstract: Mantle cell lymphoma (MCL) is a rare lymphoid malignancy with a poor prognosis characterised by frequent relapse and short durations of treatment response. Most patients present with aggressive disease, but there exist indolent subtypes without the need for immediate intervention. The very heterogeneous behaviour of MCL is genetically characterised by the translocation t(11;14)(q13;q32), leading to Cyclin D1 overexpression with distinct clinical and biological characteristics and outcomes. There is still an unfulfilled need for precise MCL prognostication in real time. Machine learning and deep learning neural networks are rapidly advancing technologies with promising results in numerous fields of application. This study develops and compares the performance of deep learning (DL) algorithms and radiomics-based machine learning (ML) models to predict MCL relapse on baseline CT scans. Five classification algorithms were used, including three deep learning models (3D SEResNet50, 3D DenseNet, and an optimised 3D CNN) and two machine learning models based on K-nearest neighbours (KNN) and random forest (RF). The best performing method, our optimised 3D CNN, predicted MCL relapse with 70% accuracy, better than the 3D SEResNet50 (62%) and the 3D DenseNet (59%). The second-best performing method was the KNN-based machine learning model (64%) after principal component analysis for improved accuracy. Once prospectively tested in clinical trials with a larger sample size, our proposed 3D deep learning model could facilitate clinical management by precision imaging in MCL.
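The runner-up model here, KNN after principal component analysis, is a two-step pipeline that is straightforward to reproduce in scikit-learn. A hypothetical sketch with placeholder radiomics features; the component count and neighbour count are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 500))        # placeholder radiomics feature matrix
y = rng.integers(0, 2, size=100)       # relapse vs. no relapse (placeholder)

# PCA compresses correlated radiomics features before distance-based KNN.
pipe = make_pipeline(StandardScaler(), PCA(n_components=20),
                     KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.2f}")
```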
16. Lee BD, Gitter A, Greene CS, Raschka S, Maguire F, Titus AJ, Kessler MD, Lee AJ, Chevrette MG, Stewart PA, Britto-Borges T, Cofer EM, Yu KH, Carmona JJ, Fertig EJ, Kalinin AA, Signal B, Lengerich BJ, Triche TJ, Boca SM. Ten quick tips for deep learning in biology. PLoS Comput Biol 2022; 18:e1009803. [PMID: 35324884; PMCID: PMC8946751; DOI: 10.1371/journal.pcbi.1009803]
Affiliation(s)
- Benjamin D. Lee: In-Q-Tel Labs, Arlington, Virginia, United States of America; School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts, United States of America; Department of Genetics, Harvard Medical School, Boston, Massachusetts, United States of America
- Anthony Gitter: Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, Wisconsin, United States of America; Morgridge Institute for Research, Madison, Wisconsin, United States of America
- Casey S. Greene: Department of Systems Pharmacology and Translational Therapeutics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America; Department of Biochemistry and Molecular Genetics, University of Colorado School of Medicine, Aurora, Colorado, United States of America; Center for Health AI, University of Colorado School of Medicine, Aurora, Colorado, United States of America
- Sebastian Raschka: Department of Statistics, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- Finlay Maguire: Faculty of Computer Science, Dalhousie University, Halifax, Nova Scotia, Canada
- Alexander J. Titus: University of New Hampshire, Manchester, New Hampshire, United States of America; Bioeconomy.XYZ, Manchester, New Hampshire, United States of America
- Michael D. Kessler: Department of Oncology, Johns Hopkins University, Baltimore, Maryland, United States of America; Institute for Genome Sciences, University of Maryland School of Medicine, Baltimore, Maryland, United States of America
- Alexandra J. Lee: Department of Systems Pharmacology and Translational Therapeutics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America; Genomics and Computational Biology Graduate Program, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Marc G. Chevrette: Wisconsin Institute for Discovery and Department of Plant Pathology, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- Paul Allen Stewart: Department of Biostatistics and Bioinformatics, Moffitt Cancer Center, Tampa, Florida, United States of America
- Thiago Britto-Borges: Section of Bioinformatics and Systems Cardiology, Klaus Tschira Institute for Integrative Computational Cardiology, University Hospital Heidelberg, Heidelberg, Germany; Department of Internal Medicine III (Cardiology, Angiology, and Pneumology), University Hospital Heidelberg, Heidelberg, Germany
- Evan M. Cofer: Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, New Jersey, United States of America; Graduate Program in Quantitative and Computational Biology, Princeton University, Princeton, New Jersey, United States of America
- Kun-Hsing Yu: Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America; Department of Pathology, Brigham and Women's Hospital, Boston, Massachusetts, United States of America
- Juan Jose Carmona: Philips Healthcare, Cambridge, Massachusetts, United States of America
- Elana J. Fertig: Department of Oncology, Johns Hopkins University, Baltimore, Maryland, United States of America; Department of Biomedical Engineering, Department of Applied Mathematics and Statistics, Convergence Institute, Johns Hopkins University, Baltimore, Maryland, United States of America
- Alexandr A. Kalinin: Medical Big Data Group, Shenzhen Research Institute of Big Data, Shenzhen, China; Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, Michigan, United States of America
- Brandon Signal: School of Medicine, College of Health and Medicine, University of Tasmania, Hobart, Australia
- Benjamin J. Lengerich: Computer Science Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Timothy J. Triche: Center for Epigenetics, Van Andel Research Institute, Grand Rapids, Michigan, United States of America; Department of Pediatrics, College of Human Medicine, Michigan State University, East Lansing, Michigan, United States of America; Department of Translational Genomics, Keck School of Medicine, University of Southern California, Los Angeles, California, United States of America
- Simina M. Boca: Innovation Center for Biomedical Informatics, Georgetown University Medical Center, Washington, DC, United States of America; Department of Oncology, Georgetown University Medical Center, Washington, DC, United States of America; Department of Biostatistics, Bioinformatics and Biomathematics, Georgetown University Medical Center, Washington, DC, United States of America; Cancer Prevention and Control Program, Lombardi Comprehensive Cancer Center, Washington, DC, United States of America
17
|
Yi PH, Arun A, Hafezi-Nejad N, Choy G, Sair HI, Hui FK, Fritz J. Can AI distinguish a bone radiograph from photos of flowers or cars? Evaluation of bone age deep learning model on inappropriate data inputs. Skeletal Radiol 2022; 51:401-406. [PMID: 34351456 PMCID: PMC8339162 DOI: 10.1007/s00256-021-03880-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Revised: 07/15/2021] [Accepted: 07/25/2021] [Indexed: 02/02/2023]
Abstract
OBJECTIVE To evaluate the behavior of a publicly available deep convolutional neural network (DCNN) bone age algorithm when presented with inappropriate data inputs in both radiological and non-radiological domains. METHODS We evaluated a publicly available DCNN-based bone age application. The DCNN was trained on 12,612 pediatric hand radiographs and won the 2017 RSNA Pediatric Bone Age Challenge (concordance of 0.991 with radiologist ground-truth). We used the application to analyze 50 left-hand radiographs (appropriate data inputs) and seven classes of inappropriate data inputs in radiological (e.g., chest radiographs) and non-radiological (e.g., images of street numbers) domains. For each image, we noted (1) whether the application distinguished between appropriate and inappropriate data inputs and (2) the inference time per image. Mean inference times were compared using ANOVA. RESULTS The 16Bit Bone Age application calculated bone age for all pediatric hand radiographs with a mean inference time of 1.1 s. The application did not distinguish between pediatric hand radiographs and inappropriate image types in either the radiological or non-radiological domain. The application inappropriately calculated bone age for all inappropriate image types, with a mean inference time of 1.1 s for all categories (p = 1). CONCLUSION A publicly available DCNN-based bone age application failed to distinguish between appropriate and inappropriate data inputs and calculated bone age for inappropriate images. Awareness that inappropriate inputs yield confidently computed but meaningless outputs is important if tasks such as bone age determination are automated, emphasizing the need for appropriate oversight at the data input and verification stage to avoid unrecognized erroneous results.
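The input-verification step called for above can be prototyped with a simple confidence gate. The sketch below (all thresholds and class counts hypothetical, not part of the paper's application) flags inputs whose softmax distribution is too flat to trust; note that softmax confidence alone is known to be unreliable for out-of-domain detection, which is precisely the failure mode the study documents, so a dedicated in-domain/out-of-domain classifier would be a more robust choice.

```python
import numpy as np

def entropy_gate(probs: np.ndarray, max_entropy_frac: float = 0.5) -> bool:
    """Return True if the softmax distribution is peaked enough to trust."""
    probs = np.clip(probs, 1e-12, 1.0)
    entropy = -(probs * np.log(probs)).sum()
    max_entropy = np.log(len(probs))       # entropy of a uniform distribution
    return entropy <= max_entropy_frac * max_entropy

# A hand radiograph might yield a peaked distribution over output bins...
print(entropy_gate(np.array([0.90, 0.05, 0.05])))  # True  -> proceed
# ...while a photo of flowers often yields a flatter one.
print(entropy_gate(np.array([0.40, 0.35, 0.25])))  # False -> reject input
```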
Collapse
Affiliation(s)
- Paul H. Yi
- University of Maryland Intelligent Imaging (UMII) Center, Department of Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD USA
| | - Anirudh Arun
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD USA
| | - Nima Hafezi-Nejad
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD USA
| | - Garry Choy
- Department of Radiology, Veterans Affairs Palo Alto Health Care System, Palo Alto, CA USA
| | - Haris I. Sair
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD USA
| | - Ferdinand K. Hui
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD USA
| | - Jan Fritz
- Department of Radiology, New York University Grossman School of Medicine, New York, NY USA
| |
Collapse
|
18
|
Suganyadevi S, Seethalakshmi V, Balasamy K. A review on deep learning in medical image analysis. INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL 2022; 11:19-38. [PMID: 34513553 PMCID: PMC8417661 DOI: 10.1007/s13735-021-00218-1] [Citation(s) in RCA: 95] [Impact Index Per Article: 31.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Revised: 08/06/2021] [Accepted: 08/09/2021] [Indexed: 05/02/2023]
Abstract
Ongoing improvements in AI, particularly in deep learning techniques, are helping to identify, classify, and quantify patterns in clinical images. Deep learning is the fastest-developing field in artificial intelligence and has recently been applied effectively in numerous areas, including medicine. A brief outline is given of studies carried out in the main application areas: neurological, brain, retinal, pulmonary, digital pathology, breast, cardiac, bone, abdominal, and musculoskeletal imaging. For information exploration, knowledge deployment, and knowledge-based prediction, deep learning networks can be successfully applied to big data. This paper presents fundamental information and state-of-the-art deep learning approaches in the field of medical image processing and analysis. Its primary goals are to present research on medical image processing and to define and implement the key guidelines that are identified and addressed.
Collapse
Affiliation(s)
- S. Suganyadevi
- Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
| | - V. Seethalakshmi
- Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
| | - K. Balasamy
- Department of IT, Dr. Mahalingam College of Engineering and Technology, Coimbatore, India
| |
Collapse
|
19
|
Schumaker G, Becker A, An G, Badylak S, Johnson S, Jiang P, Vodovotz Y, Cockrell RC. Optical Biopsy Using a Neural Network to Predict Gene Expression From Photos of Wounds. J Surg Res 2021; 270:547-554. [PMID: 34826690 DOI: 10.1016/j.jss.2021.10.017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 09/16/2021] [Accepted: 10/09/2021] [Indexed: 01/02/2023]
Abstract
BACKGROUND The clinical characterization of the biological status of complex wounds remains a considerable challenge. Digital photography provides a non-invasive means of obtaining wound information and is currently employed to assess wounds qualitatively. Advances in machine learning (ML) image processing provide a means of identifying "hidden" features in pictures. This pilot study trains a convolutional neural network (CNN) to predict gene expression based on digital photographs of wounds in a canine model of volumetric muscle loss (VML). MATERIALS AND METHODS Images of volumetric muscle loss injuries and tissue biopsies were obtained in a canine model of VML. A CNN was trained to regress gene expression values as a function of the extracted image segment (color and spatial distribution). Performance of the CNN was assessed in a held-out test set of images using Mean Absolute Percentage Error (MAPE). RESULTS The CNN was able to predict the gene expression of certain genes based on digital images, with a MAPE ranging from ∼10% to ∼30%, indicating the presence of distinct, identifiable patterns in gene expression throughout the wound. CONCLUSIONS These initial results suggest promise for further research regarding this novel use of ML regression on medical images. Specifically, the use of CNNs to determine the mechanistic biological state of a VML wound could aid both the design of future mechanistic interventions and the design of trials to test those therapies. Future work will expand the CNN training and/or test set, with potential expansion to predicting functional gene modules.
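As an illustration of the regression setup described here, the following minimal PyTorch sketch pairs a small CNN with a linear output head and the MAPE metric used for evaluation. The architecture, gene count, and image size are placeholders, not the authors' model.

```python
import torch
import torch.nn as nn

class WoundExpressionCNN(nn.Module):
    """Toy CNN that regresses the expression of n_genes from an image patch."""
    def __init__(self, n_genes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_genes)   # linear output for regression

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def mape(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean Absolute Percentage Error, the held-out-set metric above."""
    return 100.0 * ((pred - target).abs() / target.abs().clamp(min=1e-8)).mean()

model = WoundExpressionCNN()
x = torch.randn(4, 3, 64, 64)    # four dummy 64x64 RGB patches
y = torch.rand(4, 8) + 0.5       # dummy positive expression values
print(mape(model(x), y).item())
```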
Collapse
Affiliation(s)
- Grant Schumaker
- Department of Surgery, University of Vermont, Burlington, Vermont
| | - Andrew Becker
- Department of Surgery, University of Vermont, Burlington, Vermont
| | - Gary An
- Department of Surgery, University of Vermont, Burlington, Vermont
| | - Stephen Badylak
- McGowan Institute of Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
| | - Scott Johnson
- McGowan Institute of Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
| | - Peng Jiang
- Center for Gene Regulation in Health and Disease (GRHD), Department of Biological, Geological and Environmental Sciences (BGES), Cleveland State University, Cleveland, OH
| | - Yoram Vodovotz
- McGowan Institute of Regenerative Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Surgery, University of Pittsburgh, W944 Biomedical Sciences Tower, Pittsburgh, Pennsylvania
| | - R Chase Cockrell
- Department of Surgery, University of Vermont, Burlington, Vermont.
| |
Collapse
|
20
|
Zulfiqar R, Majeed F, Irfan R, Rauf HT, Benkhelifa E, Belkacem AN. Abnormal Respiratory Sounds Classification Using Deep CNN Through Artificial Noise Addition. Front Med (Lausanne) 2021; 8:714811. [PMID: 34869413 PMCID: PMC8635523 DOI: 10.3389/fmed.2021.714811] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Accepted: 10/07/2021] [Indexed: 11/29/2022] Open
Abstract
Respiratory sound (RS) attributes and their analysis form a fundamental part of pulmonary pathology, providing diagnostic information about a patient's lungs. For decades, physicians have relied on hearing to distinguish diagnostic signs in lung sounds using the standard stethoscope, which is generally considered a cheap and safe method of examining patients. Lung disease is the third most common cause of death worldwide, so accurate classification of RS abnormalities is essential to reducing mortality. In this research, we applied Fourier analysis for the visual inspection of abnormal respiratory sounds. Spectrum analysis was performed through Artificial Noise Addition (ANA) in conjunction with different deep convolutional neural networks (CNNs) to classify seven abnormal respiratory sounds, both continuous (CAS) and discontinuous (DAS). The proposed framework contains an adaptive mechanism that adds a similar type of noise to unhealthy respiratory sounds. ANA makes sound features sufficiently distinct to be identified more accurately than respiratory sounds without ANA. The results obtained using the proposed framework are superior to previous techniques, since we simultaneously considered the seven different abnormal respiratory sound classes.
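The core of the ANA idea, controlled noise added to the waveform before spectral analysis, can be sketched as follows. This generic SNR-controlled mixer is an assumption on our part; the paper's mechanism adaptively matches the noise type to each class of unhealthy sound.

```python
import numpy as np

def add_noise_at_snr(signal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the mixture has the requested signal-to-noise ratio."""
    noise = noise[: len(signal)]
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return signal + scale * noise

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
breath = np.sin(2 * np.pi * 200 * t)     # stand-in for a recorded lung sound
augmented = add_noise_at_snr(breath, rng.normal(size=8000), snr_db=10.0)
```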
Collapse
Affiliation(s)
- Rizwana Zulfiqar
- Faculty of Computing and Information Technology, University of Gujrat, Gujrat, Pakistan
| | - Fiaz Majeed
- Faculty of Computing and Information Technology, University of Gujrat, Gujrat, Pakistan
| | - Rizwana Irfan
- Department of Information Technology, College of Computing and Information Technology at Khulais, University of Jeddah, Jeddah, Saudi Arabia
| | | | - Elhadj Benkhelifa
- Cloud Computing and Applications Reseach Lab, Staffordshire University, Stoke-on-Trent, United Kingdom
| | - Abdelkader Nasreddine Belkacem
- Department of Computer and Network Engineering, College of Information Technology, UAE University, Al Ain, United Arab Emirates
| |
Collapse
|
21
|
Zhang Y. Computer-Aided Diagnosis for Pneumoconiosis Staging Based on Multi-scale Feature Mapping. INT J COMPUT INT SYS 2021. [DOI: 10.1007/s44196-021-00046-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
In this research, we explored a method of multi-scale feature mapping to pre-screen radiographs quickly and accurately in the aided diagnosis of pneumoconiosis staging. We utilized an open dataset and a self-collected dataset as research datasets. We proposed a multi-scale feature mapping model based on deep learning feature extraction technology for detecting pulmonary fibrosis and a discrimination method for pneumoconiosis staging. Diagnostic accuracy was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. The AUC value of our model was 0.84, the best performance compared with previous work on these datasets. The diagnosis results indicated that our method was highly consistent with that of clinical experts on real patients. Furthermore, the AUC values obtained for categories I–IV on the testing dataset demonstrated that categories I (AUC = 0.86) and IV (AUC = 0.82) achieved the best performance, reaching the level of clinician categorization. Our research could be applied to the pre-screening and diagnosis of pneumoconiosis in the clinic.
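The per-stage AUC evaluation reported here is straightforward to reproduce with scikit-learn; the labels and scores below are purely illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-film scores for one binarized stage (e.g., category I vs. rest).
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                   # expert staging
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])  # model output
print(roc_auc_score(y_true, y_score))                          # per-stage AUC
```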
Collapse
|
22
|
Yuan KC, Tsai LW, Lai KS, Teng ST, Lo YS, Peng SJ. Using Transfer Learning Method to Develop an Artificial Intelligence Assisted Triaging for Endotracheal Tube Position on Chest X-ray. Diagnostics (Basel) 2021; 11:diagnostics11101844. [PMID: 34679542 PMCID: PMC8534985 DOI: 10.3390/diagnostics11101844] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2021] [Revised: 09/21/2021] [Accepted: 09/28/2021] [Indexed: 11/16/2022] Open
Abstract
Endotracheal tubes (ETTs) provide a vital connection between the ventilator and patient; however, improper placement can hinder ventilation efficiency or injure the patient. Chest X-ray (CXR) is the most common approach to confirming ETT placement; however, technicians require considerable expertise in the interpretation of CXRs, and formal reports are often delayed. In this study, we developed an artificial intelligence-based triage system to enable the automated assessment of ETT placement in CXRs. Three intensivists performed a review of 4293 CXRs obtained from 2568 ICU patients. The CXRs were labeled "CORRECT" or "INCORRECT" in accordance with ETT placement. A region of interest (ROI) was also cropped out, including the bilateral head of the clavicle, the carina, and the tip of the ETT. Transfer learning was used to train four pre-trained models (VGG16, INCEPTION_V3, RESNET, and DENSENET169) and two models developed in the current study (VGG16_Tensor Projection Layer and CNN_Tensor Projection Layer) with the aim of differentiating the placement of ETTs. Only VGG16 based on ROI images presented acceptable performance (AUROC = 92%, F1 score = 0.87). The results obtained in this study demonstrate the feasibility of using the transfer learning method in the development of AI models by which to assess the placement of ETTs in CXRs.
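A minimal sketch of the transfer-learning recipe the abstract describes, freezing ImageNet-pretrained VGG16 features and retraining a two-class head on ROI crops, is shown below (assuming a recent torchvision; the frozen-versus-finetuned split and hyperparameters are our assumptions, not the authors' exact setup).

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet weights are downloaded on first use.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False               # freeze convolutional features
vgg.classifier[6] = nn.Linear(4096, 2)    # new CORRECT/INCORRECT head

optimizer = torch.optim.Adam(vgg.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

roi_batch = torch.randn(2, 3, 224, 224)   # dummy ROI crops
labels = torch.tensor([0, 1])             # 0 = correct, 1 = incorrect placement
loss = criterion(vgg(roi_batch), labels)
loss.backward()
optimizer.step()
```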
Collapse
Affiliation(s)
- Kuo-Ching Yuan
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei 10675, Taiwan;
- Department of Surgery, DA CHIEN General Hospital, Miaoli 36052, Taiwan
| | - Lung-Wen Tsai
- Department of Medicine Education, Taipei Medical University Hospital, Taipei 110301, Taiwan;
| | - Kevin S. Lai
- Division of Critical Care Medicine, Department of Emergency and Critical Care Medicine, Taipei Medical University Hospital, Taipei 110301, Taiwan; (K.S.L.); (S.-T.T.)
| | - Sing-Teck Teng
- Division of Critical Care Medicine, Department of Emergency and Critical Care Medicine, Taipei Medical University Hospital, Taipei 110301, Taiwan; (K.S.L.); (S.-T.T.)
| | - Yu-Sheng Lo
- Institute of Biomedical Informatics, Taipei Medical University, Taipei 110301, Taiwan
- Correspondence: (Y.-S.L.); (S.-J.P.); Tel.: +886-2-66382736 (Y.-S.L. & S.-J.P.); Fax: +886-2-87320395 (Y.-S.L.); +886-2-27321956 (S.-J.P.)
| | - Syu-Jyun Peng
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei 10675, Taiwan;
- Correspondence: (Y.-S.L.); (S.-J.P.); Tel.: +886-2-66382736 (Y.-S.L. & S.-J.P.); Fax: +886-2-87320395 (Y.-S.L.); +886-2-27321956 (S.-J.P.)
| |
Collapse
|
23
|
Çallı E, Sogancioglu E, van Ginneken B, van Leeuwen KG, Murphy K. Deep learning for chest X-ray analysis: A survey. Med Image Anal 2021; 72:102125. [PMID: 34171622 DOI: 10.1016/j.media.2021.102125] [Citation(s) in RCA: 126] [Impact Index Per Article: 31.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 05/17/2021] [Accepted: 05/27/2021] [Indexed: 12/14/2022]
Abstract
Recent advances in deep learning have led to a promising performance in many medical image analysis tasks. As the most commonly performed radiological exam, chest radiographs are a particularly important modality for which a variety of applications have been researched. The release of multiple, large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications. In this paper, we review all studies using deep learning on chest radiographs published before March 2021, categorizing works by task: image-level prediction (classification and regression), segmentation, localization, image generation and domain adaptation. Detailed descriptions of all publicly available datasets are included and commercial systems in the field are described. A comprehensive discussion of the current state of the art is provided, including caveats on the use of public datasets, the requirements of clinically useful systems and gaps in the current literature.
Collapse
Affiliation(s)
- Erdi Çallı
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands.
| | - Ecem Sogancioglu
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
| | - Bram van Ginneken
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
| | - Kicky G van Leeuwen
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
| | - Keelin Murphy
- Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
| |
Collapse
|
24
|
Ghosh T, Tanwar S, Chumber S, Vani K. Classification of chest radiographs using general purpose cloud-based automated machine learning: pilot study. THE EGYPTIAN JOURNAL OF RADIOLOGY AND NUCLEAR MEDICINE 2021. [DOI: 10.1186/s43055-021-00499-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
Background
Widespread implementation of machine learning models in diagnostic imaging is restricted by dearth of expertise and resources. General purpose automated machine learning offers a possible solution.
This study aims to provide a proof of concept that a general purpose automated machine learning platform can be utilized to train a CNN to classify chest radiographs.
In a retrospective study, more than 2000 postero-anterior chest radiographs were assessed for quality, contrast, position, and pathology. A selected dataset of 637 radiographs was used to train a CNN on a reinforcement-learning-based automated machine learning platform. Accuracy metrics for each label were calculated and model performance was compared to previous studies.
Results
The auPRC (area under the precision-recall curve) was 0.616. The model achieved a precision of 70.8% and recall of 60.7% (P > 0.05) for detection of “Normal” radiographs. Detection of “Pathology” by the model had a precision of 75.6% and recall of 75.6% (P > 0.05). The F1 scores were 0.65 and 0.75, respectively.
Conclusion
Automated machine learning platforms may provide viable alternatives to developing custom CNN models for classification of chest radiographs. However, the accuracy achieved is lower than that of a comparable, traditionally developed neural network model.
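As a quick sanity check, the reported F1 scores follow directly from the stated precision and recall (small discrepancies, e.g., 0.76 versus the reported 0.75 for "Pathology", reflect rounding in the published figures):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.708, 0.607), 2))  # 0.65 for the "Normal" label
print(round(f1(0.756, 0.756), 2))  # 0.76 (reported as 0.75) for "Pathology"
```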
Collapse
|
25
|
Liu J, Li M, Luo Y, Yang S, Li W, Bi Y. Alzheimer's disease detection using depthwise separable convolutional neural networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 203:106032. [PMID: 33713959 DOI: 10.1016/j.cmpb.2021.106032] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/22/2020] [Accepted: 02/25/2021] [Indexed: 05/02/2023]
Abstract
To diagnose Alzheimer's disease (AD), neuroimaging methods such as magnetic resonance imaging have been employed. Recent progress in computer vision with deep learning (DL) has further inspired research focused on machine learning algorithms. However, a few limitations of these algorithms, such as the requirement for a large number of training images and the necessity for powerful computers, still hinder the extensive use of machine-learning-based AD diagnosis. In addition, the large number of training parameters and heavy computation make DL systems difficult to integrate with mobile embedded devices such as mobile phones. For AD detection using DL, most current research has focused solely on improving classification performance, while few studies have sought a more compact model with less complexity and relatively high recognition accuracy. To solve this problem and improve the efficiency of the DL algorithm, a deep separable convolutional neural network model is proposed for AD classification in this paper. Depthwise separable convolution (DSC) is used in this work to replace conventional convolution. Compared with conventional neural networks, the parameters and computational costs of the proposed network are significantly reduced. With its low power consumption, the proposed model is particularly suitable for embedded mobile devices. Experimental findings show that the DSC algorithm, based on the OASIS magnetic resonance imaging dataset, is very successful for AD detection. Moreover, transfer learning is employed in this work to improve model performance. Two trained models with complex networks, namely AlexNet and GoogLeNet, are used for transfer learning, with average classification rates of 91.40% and 93.02%, respectively, and lower power consumption.
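The parameter savings from depthwise separable convolution are easy to demonstrate. The sketch below (channel sizes chosen for illustration, not taken from the paper) compares a standard 3x3 convolution with its depthwise-plus-pointwise factorization:

```python
import torch.nn as nn

def param_count(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

# Standard 3x3 convolution mapping 64 -> 128 channels.
standard = nn.Conv2d(64, 128, kernel_size=3, padding=1)

# Depthwise separable equivalent: per-channel 3x3, then 1x1 channel mixing.
separable = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64),  # depthwise
    nn.Conv2d(64, 128, kernel_size=1),                        # pointwise
)

print(param_count(standard))   # 73,856
print(param_count(separable))  # 8,960 -- roughly an 8x reduction
```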
Collapse
Affiliation(s)
- Junxiu Liu
- School of Electronic Engineering, Guangxi Normal University, Guilin, 541004, China
| | - Mingxing Li
- School of Electronic Engineering, Guangxi Normal University, Guilin, 541004, China
| | - Yuling Luo
- School of Electronic Engineering, Guangxi Normal University, Guilin, 541004, China.
| | - Su Yang
- School of Computing and Engineering, University of West London, London, United Kingdom
| | - Wei Li
- Academy for Engineering and Technology, Fudan University, Shanghai, China
| | - Yifei Bi
- College of Foreign Languages, University of Shanghai for Science and Technology, Shanghai, China
| |
Collapse
|
26
|
Gaur L, Bhatia U, Jhanjhi NZ, Muhammad G, Masud M. Medical image-based detection of COVID-19 using Deep Convolution Neural Networks. MULTIMEDIA SYSTEMS 2021; 29:1729-1738. [PMID: 33935377 PMCID: PMC8079233 DOI: 10.1007/s00530-021-00794-6] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/09/2020] [Accepted: 04/05/2021] [Indexed: 05/08/2023]
Abstract
The demand for automatic detection of Novel Coronavirus or COVID-19 is increasing across the globe. The exponential rise in cases burdens healthcare facilities, and a vast amount of multimedia healthcare data is being explored to find a solution. This study presents a practical solution for detecting COVID-19 from chest X-rays while distinguishing them from normal chest X-rays and those impacted by viral pneumonia, via deep convolutional neural networks (CNNs). In this study, three pre-trained CNN models (EfficientNetB0, VGG16, and InceptionV3) are evaluated through transfer learning. The rationale for selecting these specific models is their balance of accuracy and efficiency, with fewer parameters suitable for mobile applications. The dataset used for the study is publicly available and compiled from different sources. This study uses deep learning techniques and performance metrics (accuracy, recall, specificity, precision, and F1 scores). The results show that the proposed approach produced a high-quality model, with an overall accuracy of 92.93% and a COVID-19 sensitivity of 94.79%. The work indicates that computer vision designs can feasibly be implemented to enable effective detection and screening measures.
Collapse
Affiliation(s)
- Loveleen Gaur
- Amity International Business School, Amity University, Noida, India
| | - Ujwal Bhatia
- Amity International Business School, Amity University, Noida, India
| | - N. Z. Jhanjhi
- School of Computer Science and Engineering SCE, Taylor’s University, Subang Jaya, Malaysia
| | - Ghulam Muhammad
- Research Chair of Pervasive and Mobile Computing, King Saud University, Riyadh 11543, Saudi Arabia
- Computer Engineering Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
| | - Mehedi Masud
- Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif, 21944 Saudi Arabia
| |
Collapse
|
27
|
Kim TK, Yi PH, Wei J, Shin JW, Hager G, Hui FK, Sair HI, Lin CT. Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs. J Digit Imaging 2021; 32:925-930. [PMID: 30972585 DOI: 10.1007/s10278-019-00208-0] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
Ensuring correct radiograph view labeling is important for machine learning algorithm development and quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the NIH ChestX-ray14 database, a publicly available CXR database performed in adult (106,179 (95%)) and pediatric (5941 (5%)) patients consisting of 44,810 (40%) AP and 67,310 (60%) PA views. CXRs were used to train, validate, and test the ResNet-18 DCNN for classification of radiographs into anteroposterior and posteroanterior views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49%) AP and 3056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate the DCNN's performance on the test dataset. The DCNNs trained on the entire CXR dataset and pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracy of 99.6% and 98%, respectively, for distinguishing between AP and PA CXR. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset and 98% for both sensitivity and specificity for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs have high accuracy for classifying AP/PA orientation of frontal CXRs, with only slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by the DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.
Collapse
Affiliation(s)
- Tae Kyung Kim
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Paul H Yi
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Jinchi Wei
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Ji Won Shin
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Gregory Hager
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Ferdinand K Hui
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Haris I Sair
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Cheng Ting Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| |
Collapse
|
28
|
Guy S, Jacquet C, Tsenkoff D, Argenson JN, Ollivier M. Deep learning for the radiographic diagnosis of proximal femur fractures: Limitations and programming issues. Orthop Traumatol Surg Res 2021; 107:102837. [PMID: 33529731 DOI: 10.1016/j.otsr.2021.102837] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/11/2020] [Revised: 08/08/2020] [Accepted: 08/17/2020] [Indexed: 02/03/2023]
Abstract
INTRODUCTION Radiology is one of the domains where artificial intelligence (AI) yields encouraging results, with diagnostic accuracy that approaches that of experienced radiologists and physicians. Diagnostic errors in traumatology are rare but can have serious functional consequences. Using AI as a radiological diagnostic aid may be beneficial in the emergency room. Thus, effective, low-cost software that helps with making radiographic diagnoses would be a relevant tool for current clinical practice, although this concept has rarely been evaluated in orthopedics for proximal femur fractures (PFF). This led us to conduct a prospective study with the goals of: 1) programming deep learning software to help make the diagnosis of PFF on radiographs and 2) evaluating its performance. HYPOTHESIS It is possible to program effective deep learning software to help make the diagnosis of PFF based on a limited number of radiographs. METHODS Our database consisted of 1309 radiographs: 963 had a PFF, while 346 did not. The sample size was increased 8-fold (resulting in 10,472 radiographs) using a validated technique. Each radiograph was evaluated by an orthopedic surgeon using RectLabel™ software (https://rectlabel.com), by differentiating between healthy and fractured zones. Fractures were classified according to the AO system. The deep learning algorithm was programmed on Tensorflow™ software (Google Brain, Santa Clara, CA, USA, tensorflow.org). In all, 9425 annotated radiographs (90%) were used for the training phase and 1074 (10%) for the test phase. RESULTS The sensitivity of the algorithm was 61% for femoral neck fractures and 67% for trochanteric fractures. The specificity was 67% and 69%, the positive predictive value was 55% and 56%, while the negative predictive value was 74% and 78%, respectively. CONCLUSION Our results are not good enough for our algorithm to be used in current clinical practice. Programming deep learning software with sufficient diagnostic accuracy can only be done with several tens of thousands of radiographs, or by using transfer learning. LEVEL OF EVIDENCE III; Diagnostic studies, Study of nonconsecutive patients, without consistently applied reference "gold" standard.
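The four reported accuracy measures derive from a single confusion matrix per fracture type, as the sketch below shows (the counts are hypothetical, chosen only to mirror the order of magnitude of the reported sensitivity and specificity):

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for femoral neck fractures on a held-out test split.
print(screening_metrics(tp=61, fp=33, tn=67, fn=39))
```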
Collapse
Affiliation(s)
- Sylvain Guy
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France.
| | - Christophe Jacquet
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France
| | - Damien Tsenkoff
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France
| | - Jean-Noël Argenson
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France
| | - Matthieu Ollivier
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France
| |
Collapse
|
29
|
Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, Santamaría J, Fadhel MA, Al-Amidie M, Farhan L. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. JOURNAL OF BIG DATA 2021; 8:53. [PMID: 33816053 PMCID: PMC8010506 DOI: 10.1186/s40537-021-00444-8] [Citation(s) in RCA: 1028] [Impact Index Per Article: 257.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 03/22/2021] [Indexed: 05/04/2023]
Abstract
In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. Moreover, it has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks, matching or even beating human performance. One of the benefits of DL is the ability to learn from massive amounts of data. The DL field has grown quickly in the last few years and has been used extensively to successfully address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each tackled only one aspect of it, leading to an overall lack of comprehensive knowledge about the field. Therefore, in this contribution, we propose a more holistic approach in order to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to provide a more comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, this paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution network (HR.Net). Finally, we present the challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized, along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.
Collapse
Affiliation(s)
- Laith Alzubaidi
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000 Australia
- AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001 Iraq
| | - Jinglan Zhang
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000 Australia
| | - Amjad J. Humaidi
- Control and Systems Engineering Department, University of Technology, Baghdad, 10001 Iraq
| | - Ayad Al-Dujaili
- Electrical Engineering Technical College, Middle Technical University, Baghdad, 10001 Iraq
| | - Ye Duan
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211 USA
| | - Omran Al-Shamma
- AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001 Iraq
| | - J. Santamaría
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain
| | - Mohammed A. Fadhel
- College of Computer Science and Information Technology, University of Sumer, Thi Qar, 64005 Iraq
| | - Muthana Al-Amidie
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211 USA
| | - Laith Farhan
- School of Engineering, Manchester Metropolitan University, Manchester, M1 5GD UK
| |
Collapse
|
30
|
Wang Z, Xiao Y, Weng F, Li X, Zhu D, Lu F, Liu X, Hou M, Meng Y. R-JaunLab: Automatic Multi-Class Recognition of Jaundice on Photos of Subjects with Region Annotation Networks. J Digit Imaging 2021; 34:337-350. [PMID: 33634415 DOI: 10.1007/s10278-021-00432-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2020] [Revised: 07/01/2020] [Accepted: 02/09/2021] [Indexed: 12/21/2022] Open
Abstract
Jaundice occurs as a symptom of various diseases, such as hepatitis, liver cancer, and diseases of the gallbladder or pancreas. Clinical measurement with special equipment is therefore the common method used to identify the total serum bilirubin level in patients. Fully automated multi-class recognition of jaundice involves two key issues: (1) the critical difficulty of multi-class recognition approaches compared with binary classification, and (2) the subtle difficulties posed by extensive inter-individual variability in high-resolution photos of subjects, strong similarity between healthy controls and occult jaundice, and broadly inhomogeneous color distribution. We introduce a novel approach for multi-class recognition of jaundice that distinguishes occult jaundice, obvious jaundice, and healthy controls. First, a region annotation network is developed and trained to propose eye candidates. Subsequently, an efficient jaundice recognizer is proposed to learn similarities, context, localization features, and globalization characteristics from photos of subjects. Finally, both networks are unified through a shared convolutional layer. Evaluation of the structured model in a comparative study resulted in a significant performance boost (mean categorical accuracy 91.38%) over the independent human observer. Our model also exceeded the state-of-the-art convolutional neural network (96.85% and 90.06% on the training and validation subsets, respectively) and showed a remarkable mean categorical accuracy of 95.33% on the testing subset. The proposed network performs better than physicians. This work demonstrates the strength of our proposal in helping to bring an efficient tool for multi-class recognition of jaundice into clinical practice.
Collapse
Affiliation(s)
- Zheng Wang
- School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China
- Science and Engineering School, Hunan First Normal University, Changsha, 410205, China
| | - Ying Xiao
- Gastroenterology Department of Xiangya Hospital, Central South University, Changsha, 410083, China
| | - Futian Weng
- School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China
| | - Xiaojun Li
- Gastroenterology Department of Xiangya Hospital, Central South University, Changsha, 410083, China
| | - Danhua Zhu
- Department of Gastroenterology, Hunan Provincial People's Hospital, Changsha, 410002, China
| | - Fanggen Lu
- The Second Xiangya Hospital, Central South University, 410083, Changsha, China
| | - Xiaowei Liu
- Gastroenterology Department of Xiangya Hospital, Central South University, Changsha, 410083, China
| | - Muzhou Hou
- School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China.
| | - Yu Meng
- Department of Gastroenterology and Hepatology, Shenzhen University General Hospital, Shenzhen, 518055, China.
| |
Collapse
|
31
|
Yi PH, Singh D, Harvey SC, Hager GD, Mullen LA. DeepCAT: Deep Computer-Aided Triage of Screening Mammography. J Digit Imaging 2021; 34:27-35. [PMID: 33432446 PMCID: PMC7887113 DOI: 10.1007/s10278-020-00407-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Revised: 07/30/2020] [Accepted: 11/20/2020] [Indexed: 12/26/2022] Open
Abstract
Although much deep learning research has focused on mammographic detection of breast cancer, relatively little attention has been paid to mammography triage for radiologist review. The purpose of this study was to develop and test DeepCAT, a deep learning system for mammography triage based on suspicion of cancer. Specifically, we evaluate DeepCAT's ability to provide two augmentations to radiologists: (1) discarding images unlikely to have cancer from radiologist review and (2) prioritization of images likely to contain cancer. We used 1878 2D-mammographic images (CC & MLO) from the Digital Database for Screening Mammography to develop DeepCAT, a deep learning triage system composed of 2 components: (1) mammogram classifier cascade and (2) mass detector, which are combined to generate an overall priority score. This priority score is used to order images for radiologist review. Of 595 testing images, DeepCAT recommended low priority for 315 images (53%), of which none contained a malignant mass. In evaluation of prioritizing images according to likelihood of containing cancer, DeepCAT's study ordering required an average of 26 adjacent swaps to obtain perfect review order. Our results suggest that DeepCAT could substantially increase efficiency for breast imagers and effectively triage review of mammograms with malignant masses.
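The "adjacent swaps" metric used to score DeepCAT's review ordering is the number of inversions between the predicted and ideal orders, which insertion sort counts directly. A small sketch (the ranks are hypothetical):

```python
def adjacent_swaps_to_sort(ranks: list[int]) -> int:
    """Adjacent swaps needed to reach the ideal review order
    (equivalently, the number of inversions in the list)."""
    order = list(ranks)
    swaps = 0
    for i in range(1, len(order)):
        j = i
        while j > 0 and order[j - 1] > order[j]:  # insertion sort step
            order[j - 1], order[j] = order[j], order[j - 1]
            swaps += 1
            j -= 1
    return swaps

# Ideal ranks are 0..4; the triage system proposed this order instead.
print(adjacent_swaps_to_sort([1, 0, 3, 2, 4]))  # 2 swaps from perfect order
```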
Collapse
Affiliation(s)
- Paul H Yi
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA.
| | - Dhananjay Singh
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | | | - Gregory D Hager
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Lisa A Mullen
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| |
Collapse
|
32
|
Dai Q, Liu X, Lin X, Fu Y, Chen C, Yu X, Zhang Z, Li T, Liu M, Yang W, Ye J. A Novel Meibomian Gland Morphology Analytic System Based on a Convolutional Neural Network. IEEE ACCESS 2021; 9:23083-23094. [DOI: 10.1109/access.2021.3056234] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/30/2023]
|
33
|
Zheng Q, Yang L, Zeng B, Li J, Guo K, Liang Y, Liao G. Artificial intelligence performance in detecting tumor metastasis from medical radiology imaging: A systematic review and meta-analysis. EClinicalMedicine 2021; 31:100669. [PMID: 33392486 PMCID: PMC7773591 DOI: 10.1016/j.eclinm.2020.100669] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 11/14/2020] [Accepted: 11/17/2020] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Early diagnosis of tumor metastasis is crucial for clinical treatment. Artificial intelligence (AI) has shown great promise in the field of medicine. We therefore aimed to evaluate the diagnostic accuracy of AI algorithms in detecting tumor metastasis using medical radiology imaging. METHODS We searched PubMed and Web of Science for studies published from January 1, 1997, to January 30, 2020. Studies evaluating an AI model for the diagnosis of tumor metastasis from medical images were included. We excluded studies that used histopathology images or medical wave-form data and those focused on segmentation of the region of interest. Studies providing enough information to construct contingency tables were included in a meta-analysis. FINDINGS We identified 2620 studies, of which 69 were included. Among them, 34 studies were included in a meta-analysis with a pooled sensitivity of 82% (95% CI 79-84%), specificity of 84% (82-87%) and AUC of 0·90 (0·87-0·92). Analysis for different AI algorithms showed a pooled sensitivity of 87% (83-90%) for machine learning and 86% (82-89%) for deep learning, and a pooled specificity of 89% (82-93%) for machine learning and 87% (82-91%) for deep learning. INTERPRETATION AI algorithms may be used for the diagnosis of tumor metastasis using medical radiology imaging with performance equivalent or even superior to that of health-care professionals, in terms of sensitivity and specificity. At the same time, rigorous reporting standards with external validation and comparison to health-care professionals are urgently needed for AI application in the medical field. FUNDING College students' innovative entrepreneurial training plan program.
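For intuition about the pooling step, the sketch below applies simple fixed-effect inverse-variance pooling of per-study sensitivities on the logit scale. This is a deliberate simplification: diagnostic meta-analyses such as this one typically use bivariate random-effects models, and all counts here are invented.

```python
import numpy as np

def pooled_proportion(events: np.ndarray, totals: np.ndarray) -> float:
    """Fixed-effect inverse-variance pooling on the logit scale."""
    p = (events + 0.5) / (totals + 1.0)          # continuity correction
    logit = np.log(p / (1 - p))
    var = 1.0 / (totals * p * (1 - p))           # approximate logit variance
    w = 1.0 / var
    pooled_logit = (w * logit).sum() / w.sum()
    return 1.0 / (1.0 + np.exp(-pooled_logit))   # back-transform

# Hypothetical true-positive counts / diseased totals from three studies.
print(pooled_proportion(np.array([80, 45, 120]), np.array([100, 50, 150])))
```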
Collapse
Affiliation(s)
- Qiuhan Zheng
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| | - Le Yang
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| | - Bin Zeng
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| | - Jiahao Li
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| | - Kaixin Guo
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| | - Yujie Liang
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| | - Guiqing Liao
- Department of Oral and Maxillofacial Surgery, Guanghua School of Stomatology, Hospital of Stomatology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Stomatology, Guangzhou, China
| |
Collapse
|
34
|
Morid MA, Borjali A, Del Fiol G. A scoping review of transfer learning research on medical image analysis using ImageNet. Comput Biol Med 2021; 128:104115. [PMID: 33227578 DOI: 10.1016/j.compbiomed.2020.104115] [Citation(s) in RCA: 162] [Impact Index Per Article: 40.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2020] [Revised: 10/19/2020] [Accepted: 11/09/2020] [Indexed: 02/06/2023]
Abstract
OBJECTIVE Employing transfer learning (TL) with convolutional neural networks (CNNs), well-trained on the non-medical ImageNet dataset, has shown promising results for medical image analysis in recent years. We aimed to conduct a scoping review to identify these studies and summarize their characteristics in terms of the problem description, input, methodology, and outcome. MATERIALS AND METHODS To identify relevant studies, MEDLINE, IEEE, and the ACM digital library were searched for studies published between June 1st, 2012 and January 2nd, 2020. Two investigators independently reviewed articles to determine eligibility and to extract data according to a study protocol defined a priori. RESULTS After screening of 8421 articles, 102 met the inclusion criteria. Of 22 anatomical areas, eye (18%), breast (14%), and brain (12%) were the most commonly studied. Data augmentation was performed in 72% of fine-tuning TL studies versus 15% of the feature-extracting TL studies. Inception models were the most commonly used in breast-related studies (50%), while VGGNet was the most common in eye (44%), skin (50%) and tooth (57%) studies. AlexNet for brain (42%) and DenseNet for lung studies (38%) were the most frequently used models. Inception models were the most frequently used for studies that analyzed ultrasound (55%), endoscopy (57%), and skeletal system X-rays (57%). VGGNet was the most common for fundus (42%) and optical coherence tomography images (50%). AlexNet was the most frequent model for brain MRIs (36%) and breast X-rays (50%). 35% of the studies compared their model with other well-trained CNN models, and 33% of them provided visualization for interpretation. DISCUSSION This study identified the most prevalent tracks of implementation in the literature for data preparation, methodology selection, and output evaluation for various medical image analysis tasks. We also identified several critical research gaps in TL studies on medical image analysis. The findings of this scoping review can be used in future TL studies to guide the selection of appropriate research approaches, as well as to identify research gaps and opportunities for innovation.
Collapse
Affiliation(s)
- Mohammad Amin Morid
- Department of Information Systems and Analytics, Leavey School of Business, Santa Clara University, Santa Clara, CA, USA.
| | - Alireza Borjali
- Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA
- Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA
| | - Guilherme Del Fiol
- Department of Biomedical Informatics, University of Utah, Salt Lake City, UT, USA
| |
Collapse
|
35
|
Fang X, Harris L, Zhou W, Huo D. Generalized Radiographic View Identification with Deep Learning. J Digit Imaging 2020; 34:66-74. [PMID: 33263143 DOI: 10.1007/s10278-020-00408-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Revised: 07/13/2020] [Accepted: 11/20/2020] [Indexed: 01/19/2023] Open
Abstract
To explore the feasibility of an automatic machine-learning-algorithm-based quality control system for the practice of diagnostic radiography, we examined the performance of a convolutional neural network (CNN)-based algorithm for identifying radiographic (X-ray) views at different levels in a retrospective, HIPAA-compliant, IRB-approved study of 15,046 radiographic images acquired between 2013 and 2018 from nine clinical sites affiliated with our institution. Images were labeled according to four classification levels: level 1 (anatomy level, 25 classes), level 2 (laterality level, 41 classes), level 3 (projection level, 108 classes), and level 4 (detailed level, 143 classes). An Inception V3 model pre-trained on the ImageNet dataset was trained with transfer learning to classify images at all levels. Sensitivity and positive predictive value were reported for each class, and overall accuracy was reported for each level. Accuracy was also reported when we allowed for "reasonable errors". The overall accuracy was 0.96, 0.93, 0.90, and 0.86 at levels 1, 2, 3, and 4, respectively. Overall accuracy increased to 0.99, 0.97, 0.94, and 0.88 when "reasonable errors" were allowed. Machine learning algorithms resulted in reasonable model performance for identifying radiographic views with acceptable accuracy when "reasonable errors" were allowed. Our findings demonstrate the feasibility of building a quality-control program based on machine-learning algorithms to identify radiographic views with acceptable accuracy at the lower levels, which could be applied in a clinical setting.
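The "reasonable errors" accuracy can be computed by treating a small set of near-miss labels as correct, as in this sketch (the label names and allowed confusions are invented for illustration):

```python
def accuracy_with_allowed_errors(y_true, y_pred, allowed: dict[str, set[str]]) -> float:
    """Accuracy where predicting a class in allowed[true_label] also counts."""
    ok = sum(
        pred == true or pred in allowed.get(true, set())
        for true, pred in zip(y_true, y_pred)
    )
    return ok / len(y_true)

# Hypothetical: confusing the two oblique hand views is deemed reasonable.
allowed = {"hand_oblique_left": {"hand_oblique_right"}}
y_true = ["hand_pa", "hand_oblique_left", "chest_ap"]
y_pred = ["hand_pa", "hand_oblique_right", "chest_pa"]
print(accuracy_with_allowed_errors(y_true, y_pred, allowed))  # 2/3
```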
Collapse
Affiliation(s)
- Xiang Fang
- Sheldon B. Lubar School of Business, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
| | - Leah Harris
- Department of Radiology, UCHealth University of Colorado Hospital, Aurora, CO, USA
| | - Wei Zhou
- Department of Radiology, University of Colorado School of Medicine, Aurora, CO, USA
| | - Donglai Huo
- Department of Radiology, University of Colorado School of Medicine, Aurora, CO, USA.
| |
Collapse
|
36
|
Hussain L, Nguyen T, Li H, Abbasi AA, Lone KJ, Zhao Z, Zaib M, Chen A, Duong TQ. Machine-learning classification of texture features of portable chest X-ray accurately classifies COVID-19 lung infection. Biomed Eng Online 2020; 19:88. [PMID: 33239006 PMCID: PMC7686836 DOI: 10.1186/s12938-020-00831-x] [Citation(s) in RCA: 43] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Accepted: 11/17/2020] [Indexed: 12/21/2022] Open
Abstract
BACKGROUND The large volume and suboptimal image quality of portable chest X-rays (CXRs) as a result of the COVID-19 pandemic could pose significant challenges for radiologists and frontline physicians. Deep-learning artificial intelligence (AI) methods have the potential to help improve diagnostic efficiency and accuracy for reading portable CXRs. PURPOSE The study aimed at developing an AI imaging analysis tool to classify COVID-19 lung infection based on portable CXRs. MATERIALS AND METHODS Public datasets of COVID-19 (N = 130), bacterial pneumonia (N = 145), non-COVID-19 viral pneumonia (N = 145), and normal (N = 138) CXRs were analyzed. Texture and morphological features were extracted. Five supervised machine-learning AI algorithms were used to classify COVID-19 from other conditions. Two-class and multi-class classification were performed. Statistical analysis was done using unpaired two-tailed t tests with unequal variance between groups. Performance of classification models was assessed using receiver-operating characteristic (ROC) curve analysis. RESULTS For the two-class classification, the accuracy, sensitivity and specificity were, respectively, 100%, 100%, and 100% for COVID-19 vs normal; 96.34%, 95.35% and 97.44% for COVID-19 vs bacterial pneumonia; and 97.56%, 97.44% and 97.67% for COVID-19 vs non-COVID-19 viral pneumonia. For the multi-class classification, the combined accuracy and AUC were 79.52% and 0.87, respectively. CONCLUSION AI classification of texture and morphological features of portable CXRs accurately distinguishes COVID-19 lung infection in patients in multi-class datasets. Deep-learning methods have the potential to improve diagnostic efficiency and accuracy for portable CXRs.
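A texture-feature pipeline of the kind described, grey-level co-occurrence statistics fed to a classical learner, might look like the following (feature choices, image sizes, and labels are placeholders; the paper does not specify this exact implementation, and scikit-image >= 0.19 is assumed for the graycomatrix spelling):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(img: np.ndarray) -> np.ndarray:
    """Haralick-style texture descriptors from a grey-level co-occurrence matrix."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
X = np.array([glcm_features(rng.integers(0, 256, (64, 64), dtype=np.uint8))
              for _ in range(20)])          # dummy CXR patches
y = rng.integers(0, 2, 20)                  # dummy COVID/normal labels
clf = SVC(kernel="rbf").fit(X, y)           # one of several classical learners
print(clf.score(X, y))
```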
Collapse
Affiliation(s)
- Lal Hussain
- Department of Computer Science and IT, King Abdullah Campus, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, Azad Kashmir, Pakistan.
- Department of Computer Science and IT, Neelum Campus, University of Azad Jammu and Kashmir, Athmuqam, 13230, Azad Kashmir, Pakistan.
| | - Tony Nguyen
- Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
| | - Haifang Li
- Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
| | - Adeel A Abbasi
- Department of Computer Science and IT, King Abdullah Campus, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, Azad Kashmir, Pakistan
| | - Kashif J Lone
- Department of Computer Science and IT, King Abdullah Campus, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, Azad Kashmir, Pakistan
| | - Zirun Zhao
- Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
| | - Mahnoor Zaib
- Department of Computer Science and IT, Neelum Campus, University of Azad Jammu and Kashmir, Athmuqam, 13230, Azad Kashmir, Pakistan
| | - Anne Chen
- Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
| | - Tim Q Duong
- Department of Radiology, Renaissance School of Medicine at Stony Brook University, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
| |
Collapse
|
37
|
Yi PH, Lin A, Wei J, Yu AC, Sair HI, Hui FK, Hager GD, Harvey SC. Deep-Learning-Based Semantic Labeling for 2D Mammography and Comparison of Complexity for Machine Learning Tasks. J Digit Imaging 2020; 32:565-570. [PMID: 31197559 PMCID: PMC6646449 DOI: 10.1007/s10278-019-00244-w] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
Machine learning has several potential uses in medical imaging for semantic labeling of images to improve radiologist workflow and to triage studies for review. The purpose of this study was to (1) develop deep convolutional neural networks (DCNNs) for automated classification of 2D mammography views, determination of breast laterality, and assessment of breast tissue density; and (2) compare the performance of DCNNs on these tasks of varying complexity. We obtained 3034 2D-mammographic images from the Digital Database for Screening Mammography, annotated with mammographic view, image laterality, and breast tissue density. These images were used to train a DCNN to classify images for each of the three tasks. The DCNN trained to classify mammographic view achieved a receiver-operating-characteristic (ROC) area under the curve (AUC) of 1. The DCNN trained to classify breast image laterality initially misclassified right and left breasts (AUC 0.75); however, after discontinuing horizontal flips during data augmentation, the AUC improved to 0.93 (p < 0.0001). Breast density classification proved more difficult, with the DCNN achieving 68% accuracy. Automated semantic labeling of 2D mammography is feasible using DCNNs and can be performed with small datasets. However, automated classification of differences in breast density is more difficult, likely requiring larger datasets.
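The laterality finding above comes down to one augmentation choice; the fragment below (torchvision assumed; sizes illustrative) sketches why: a horizontal flip is label-preserving for the view task but corrupts left/right labels, so the laterality pipeline must omit it.

```python
from torchvision import transforms

# View classification: horizontal flips are label-preserving augmentation.
view_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])

# Laterality classification: no horizontal flips, otherwise a left breast
# becomes indistinguishable from a right one and the label is wrong.
laterality_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(degrees=5),
    transforms.ToTensor(),
])
```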
Collapse
Affiliation(s)
- Paul H Yi
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Abigail Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
| | - Jinchi Wei
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Alice C Yu
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
| | - Haris I Sair
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Ferdinand K Hui
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Gregory D Hager
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Susan C Harvey
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
| |
Collapse
|
38
|
Shah RF, Bini SA, Martinez AM, Pedoia V, Vail TP. Incremental inputs improve the automated detection of implant loosening using machine-learning algorithms. Bone Joint J 2020; 102-B:101-106. [DOI: 10.1302/0301-620x.102b6.bjj-2019-1577.r1] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Aims The aim of this study was to evaluate the ability of a machine-learning algorithm to diagnose prosthetic loosening from preoperative radiographs and to investigate the inputs that might improve its performance. Methods A group of 697 patients underwent a first-time revision of a total hip (THA) or total knee arthroplasty (TKA) at our institution between 2012 and 2018. Preoperative anteroposterior (AP) and lateral radiographs, and historical and comorbidity information were collected from their electronic records. Each patient was defined as having loose or fixed components based on the operation notes. We trained a series of convolutional neural network (CNN) models to predict a diagnosis of loosening at the time of surgery from the preoperative radiographs. We then added historical data about the patients to the best performing model to create a final model and tested it on an independent dataset. Results The convolutional neural network we built performed well when detecting loosening from radiographs alone. The first model built de novo with only the radiological image as input had an accuracy of 70%. The final model, which was built by fine-tuning a publicly available model named DenseNet, combining the AP and lateral radiographs, and incorporating information from the patient’s history, had an accuracy, sensitivity, and specificity of 88.3%, 70.2%, and 95.6% on the independent test dataset. It performed better for cases of revision THA with an accuracy of 90.1%, than for cases of revision TKA with an accuracy of 85.8%. Conclusion This study showed that machine learning can detect prosthetic loosening from radiographs. Its accuracy is enhanced when using highly trained public algorithms, and when adding clinical data to the algorithm. While this algorithm may not be sufficient in its present state of development as a standalone metric of loosening, it is currently a useful augment for clinical decision making. Cite this article: Bone Joint J 2020;102-B(6 Supple A):101–106.
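One way to read the "incremental inputs" design is late fusion of image and tabular features; the sketch below is an assumption-laden illustration (the authors' exact architecture is not given in this abstract): a DenseNet backbone embeds the AP and lateral views, and the pooled features are concatenated with history/comorbidity variables before a small binary head.

```python
import torch
import torch.nn as nn
from torchvision import models

class LooseningNet(nn.Module):
    """Late-fusion sketch: two radiograph views + tabular clinical features."""
    def __init__(self, n_tabular=10):
        super().__init__()
        backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
        n_img = backbone.classifier.in_features
        backbone.classifier = nn.Identity()   # expose pooled image features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(2 * n_img + n_tabular, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, ap, lateral, tabular):
        fused = torch.cat([self.backbone(ap), self.backbone(lateral), tabular], dim=1)
        return self.head(fused)               # logits for fixed vs. loose

net = LooseningNet()
logits = net(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224),
             torch.randn(2, 10))
```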
Collapse
Affiliation(s)
- Romil F. Shah
- Department of Orthopaedic Surgery, University of Texas, Austin, Texas, USA
- Department of Orthopedic Surgery, University of California, San Francisco, California, USA
| | - Stefano A. Bini
- Department of Orthopedic Surgery, University of California, San Francisco, California, USA
| | - Alejandro M. Martinez
- Musculoskeletal and Imaging Research Group, University of California, San Francisco, California, USA
| | - Valentina Pedoia
- Musculoskeletal and Imaging Research Group, University of California, San Francisco, California, USA
| | - Thomas P. Vail
- Department of Orthopedic Surgery, University of California, San Francisco, California, USA
| |
Collapse
|
39
|
Rajaraman S, Sornapudi S, Kohli M, Antani S. Assessment of an ensemble of machine learning models toward abnormality detection in chest radiographs. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:3689-3692. [PMID: 31946676 DOI: 10.1109/embc.2019.8856715] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Respiratory diseases account for a significant proportion of deaths and disabilities across the world. Chest X-ray (CXR) analysis remains a common diagnostic imaging modality for confirming intra-thoracic cardiopulmonary abnormalities. However, there remains an acute shortage of expert radiologists, particularly in under-resourced settings, resulting in severe interpretation delays. These issues can be mitigated by a computer-aided diagnostic (CADx) system to supplement decision-making and improve throughput while preserving and possibly improving the standard-of-care. Systems reported in the literature or popular media use handcrafted features and/or data-driven algorithms like deep learning (DL) to learn underlying data distributions. The remarkable success of convolutional neural networks (CNN) toward image recognition tasks has made them a promising choice for automated medical image analyses. However, CNNs suffer from high variance and may overfit due to their sensitivity to training data fluctuations. Ensemble learning helps to reduce this variance by combining predictions of multiple learning algorithms to construct complex, non-linear functions and improve robustness and generalization. This study aims to construct and assess the performance of an ensemble of machine learning (ML) models applied to the challenge of classifying normal and abnormal CXRs and significantly reducing the diagnostic load of radiologists and primary-care physicians.
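The variance-reduction argument above is the standard motivation for soft voting; a minimal scikit-learn sketch (member models and data are illustrative, not the study's) is:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=30, random_state=0)

# Soft voting averages member probabilities, smoothing out single-model variance.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")
print("ensemble AUC:", cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc").mean())
```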
Collapse
|
40
|
Yee WLS, Drum CL. Increasing Complexity to Simplify Clinical Care: High Resolution Mass Spectrometry as an Enabler of AI Guided Clinical and Therapeutic Monitoring. ADVANCED THERAPEUTICS 2020. [DOI: 10.1002/adtp.201900163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Wei Loong Sherman Yee
- Yong Loo Lin School of Medicine, Department of Medicine, National University of Singapore, Singapore 119077, Singapore
- Cardiovascular Research Institute (CVRI), National University Health System, Singapore 119228, Singapore
| | - Chester Lee Drum
- Yong Loo Lin School of Medicine, Department of Medicine, National University of Singapore, Singapore 119077, Singapore
- Cardiovascular Research Institute (CVRI), National University Health System, Singapore 119228, Singapore
- Yong Loo Lin School of Medicine, Department of Biochemistry, National University of Singapore, Singapore 119077, Singapore
- The N.1 Institute for Health (N.1), National University of Singapore, Singapore 119077, Singapore
| |
Collapse
|
41
|
Ellis R, Ellestad E, Elicker B, Hope MD, Tosun D. Impact of hybrid supervision approaches on the performance of artificial intelligence for the classification of chest radiographs. Comput Biol Med 2020; 120:103699. [PMID: 32217281 DOI: 10.1016/j.compbiomed.2020.103699] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 03/09/2020] [Accepted: 03/10/2020] [Indexed: 01/02/2023]
Abstract
PURPOSE To evaluate the impact of different supervision regimens on the training of artificial intelligence (AI) in the classification of chest radiographs as normal or abnormal in a moderately sized cohort of individuals more likely to be outpatients. MATERIALS AND METHODS In a retrospective study, 7000 consecutive two-view chest radiographs obtained from 2012 to 2015 were labeled as normal or abnormal based on clinical reports. A convolutional neural network (CNN) was trained on this dataset and then evaluated with an unseen subset of 500 radiographs. Five different training approaches were tested: (1) weak supervision and four hybrid approaches combining weak supervision and extra supervision with annotation in (2) an unbalanced set of normal and abnormal cases, (3) a set of only abnormal cases, (4) a set of only normal cases, and (5) a balanced set of normal and abnormal cases. Standard binary classification metrics were assessed. RESULTS The weakly supervised model achieved an accuracy of 82%, but yielded 75 false negative cases, at a sensitivity of 70.0% and a negative predictive value (NPV) of 75.5%. Extra supervision increased NPV at the expense of the false positive rate and overall accuracy. Extra supervision with training using a balance of abnormal and normal radiographs resulted in the greatest increase in NPV (87.2%), improved sensitivity (92.8%), and reduced the number of false negatives by more than fourfold (18 compared to 75 cases). CONCLUSION Extra supervision using a balance of annotated normal and abnormal cases applied to a weakly supervised model can minimize the number of false negative cases when classifying two-view chest radiographs. Further refinement of such hybrid training approaches for AI is warranted to refine models for practical clinical applications.
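Since the comparison above turns on sensitivity and negative predictive value, a small sketch of those confusion-matrix quantities may help (the counts are illustrative, not the study's):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # 1 = abnormal radiograph
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # fraction of abnormal studies caught
npv = tn / (tn + fn)           # the quantity balanced extra supervision improved most
print(f"sensitivity = {sensitivity:.3f}, NPV = {npv:.3f}")
```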
Collapse
Affiliation(s)
- Ryan Ellis
- San Francisco Veterans Affairs Medical Center, 4150 Clement St, San Francisco, CA, 94121, USA
| | - Erik Ellestad
- Department of Radiology and Biomedical Imaging, University of California - San Francisco, 505 Parnassus Avenue M-391, San Francisco, CA, 94143, USA
- San Francisco Veterans Affairs Medical Center, 4150 Clement St, San Francisco, CA, 94121, USA
| | - Brett Elicker
- Department of Radiology and Biomedical Imaging, University of California - San Francisco, 505 Parnassus Avenue M-391, San Francisco, CA, 94143, USA
| | - Michael D Hope
- Department of Radiology and Biomedical Imaging, University of California - San Francisco, 505 Parnassus Avenue M-391, San Francisco, CA, 94143, USA
- San Francisco Veterans Affairs Medical Center, 4150 Clement St, San Francisco, CA, 94121, USA
| | - Duygu Tosun
- Department of Radiology and Biomedical Imaging, University of California - San Francisco, 505 Parnassus Avenue M-391, San Francisco, CA, 94143, USA
- San Francisco Veterans Affairs Medical Center, 4150 Clement St, San Francisco, CA, 94121, USA
| |
Collapse
|
42
|
Crosby J, Rhines T, Li F, MacMahon H, Giger M. Deep convolutional neural networks in the classification of dual-energy thoracic radiographic views for efficient workflow: analysis on over 6500 clinical radiographs. J Med Imaging (Bellingham) 2020; 7:016501. [PMID: 32042858 DOI: 10.1117/1.jmi.7.1.016501] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2019] [Accepted: 01/07/2020] [Indexed: 11/14/2022] Open
Abstract
DICOM header information is frequently used to classify medical image types; however, if a header is missing fields or contains incorrect data, its utility is limited. To expedite image classification, we trained convolutional neural networks (CNNs) on two classification tasks for thoracic radiographic views obtained from dual-energy studies: (a) distinguishing between frontal, lateral, soft tissue, and bone images, and (b) distinguishing between posteroanterior (PA) and anteroposterior (AP) chest radiographs. CNNs with the AlexNet architecture were trained from scratch. For task (a), 1910 manually classified radiographs were used to train the network, which was then tested with an independent test set (3757 images). Frontal radiographs from the two datasets were combined to train a network for task (b), which was tested using an independent test set of 1000 radiographs. ROC analysis was performed for each trained CNN with the area under the curve (AUC) as the performance metric. Classification between frontal images (AP/PA) and other image types yielded an AUC of 0.997 [95% confidence interval (CI): 0.996, 0.998]. Classification between PA and AP radiographs resulted in an AUC of 0.973 (95% CI: 0.961, 0.981). The CNNs were able to rapidly classify thoracic radiographs with high accuracy, thus potentially contributing to an effective and efficient workflow.
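"Trained from scratch" here means random initialization rather than ImageNet weights; a sketch of task (a) under that reading (torchvision assumed, sizes illustrative):

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=None)        # random init, i.e. from scratch
model.classifier[6] = nn.Linear(4096, 4)    # frontal / lateral / soft tissue / bone

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
x = torch.randn(4, 3, 224, 224)             # placeholder dual-energy images
y = torch.tensor([0, 1, 2, 3])
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
optimizer.step()
```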
Collapse
Affiliation(s)
- Jennie Crosby
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Thomas Rhines
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Feng Li
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Heber MacMahon
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| | - Maryellen Giger
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
| |
Collapse
|
43
|
Li M, Tang H, Chan MD, Zhou X, Qian X. DC-AL GAN: Pseudoprogression and true tumor progression of glioblastoma multiform image classification based on DCGAN and AlexNet. Med Phys 2020; 47:1139-1150. [PMID: 31885094 DOI: 10.1002/mp.14003] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2019] [Revised: 12/16/2019] [Accepted: 12/17/2019] [Indexed: 12/19/2022] Open
Abstract
PURPOSE Pseudoprogression (PsP) occurs in 20-30% of patients with glioblastoma multiforme (GBM) after receiving the standard treatment. PsP exhibits similarities in shape and intensity to the true tumor progression (TTP) of GBM on follow-up magnetic resonance imaging (MRI). These similarities pose challenges to the differentiation of these types of progression and hence to the selection of the appropriate clinical treatment strategy. METHODS To address this challenge, we introduced a novel feature learning method based on a deep convolutional generative adversarial network (DCGAN) and AlexNet, termed DC-AL GAN, to discriminate between PsP and TTP in MRI images. Due to the adversarial relationship between the generator and the discriminator of the DCGAN, high-level discriminative features of PsP and TTP can be derived for the discriminator with AlexNet. We also constructed a multifeature selection module to concatenate features from different layers, contributing to more powerful features for effectively discriminating between PsP and TTP. Finally, these discriminative features from the discriminator are used for classification by a support vector machine (SVM). Tenfold cross-validation (CV) and the area under the receiver operating characteristic curve (AUC) were used to evaluate the performance of the developed algorithm. RESULTS The accuracy and AUC of DC-AL GAN for discriminating PsP and TTP after tenfold CV were 0.920 and 0.947, respectively. We also assessed the effects of different indicators (such as sensitivity and specificity) for features extracted from different layers to obtain the model with the best classification performance. CONCLUSIONS The proposed DC-AL GAN model is capable of learning discriminative representations from GBM datasets, and it achieves PsP and TTP classification performance superior to other state-of-the-art methods. Therefore, the developed model would be useful in the diagnosis of PsP and TTP for GBM.
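A compressed sketch of the feature-reuse step (the layer sizes and the discriminator itself are assumptions; the adversarial training loop is omitted): intermediate activations of a DCGAN-style discriminator serve as features for an SVM separating PsP from TTP.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2))
        self.head = nn.Conv2d(128, 1, kernel_size=16)  # real/fake logit for GAN training

    def forward(self, x, return_features=False):
        f = self.features(x)
        return f.flatten(1) if return_features else self.head(f).flatten(1)

disc = Discriminator().eval()          # pretend it was already trained adversarially
with torch.no_grad():
    feats = disc(torch.randn(20, 1, 64, 64), return_features=True).numpy()
labels = [0, 1] * 10                   # PsP vs. TTP (placeholder labels)
svm = SVC(kernel="rbf").fit(feats, labels)
```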
Collapse
Affiliation(s)
- Meiyu Li
- College of Electronic Science and Engineering, Jilin University, Changchun, 130012, China
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
| | - Hailiang Tang
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, 200040, China
| | - Michael D Chan
- Department of Radiology, Wake Forest School of Medicine, Winston-Salem, NC, 27157, USA
| | - Xiaobo Zhou
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA
| | - Xiaohua Qian
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Department of Radiology, Wake Forest School of Medicine, Winston-Salem, NC, 27157, USA
| |
Collapse
|
44
|
Fujiwara K, Fang W, Okino T, Sutherland K, Furusaki A, Sagawa A, Kamishima T. Quick and accurate selection of hand images among radiographs from various body parts using deep learning. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2020; 28:1199-1206. [PMID: 32925161 DOI: 10.3233/xst-200694] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
BACKGROUND Although rheumatoid arthritis (RA) causes destruction of articular cartilage, early treatment significantly improves symptoms and delays progression. It is important to detect subtle damage for an early diagnosis. Recent software programs are comparable with the conventional human scoring method regarding detectability of the radiographic progression of RA. Thus, automatic and accurate selection of relevant images (e.g. hand images) among radiographic images of various body parts is necessary for serial analysis on a large scale. OBJECTIVE In this study we examined whether deep learning can select target images from a large number of stored images retrieved from a picture archiving and communication system (PACS) containing miscellaneous body parts of patients. METHODS We selected 1,047 X-ray images including various body parts and divided them into two groups: 841 images for training and 206 images for testing. The training images were augmented and used to train a convolutional neural network (CNN) consisting of 4 convolution layers, 2 pooling layers and 2 fully connected layers. After training, we created software to classify the test images and examined the accuracy. RESULTS The image extraction accuracy was 0.952 and 0.979 for unilateral hand and both hands, respectively. In addition, all 206 test images were correctly classified as unilateral hand, both hands, or other. CONCLUSIONS Deep learning showed promise for efficient automatic selection of target X-ray images of RA patients.
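The stated layout (4 convolution, 2 pooling, 2 fully connected layers) is small enough to write out directly; the sketch below fills in the filter counts and input size as assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 128), nn.ReLU(),  # first fully connected layer
    nn.Linear(128, 3),   # unilateral hand / both hands / other body part
)
logits = model(torch.randn(2, 1, 64, 64))     # 64x64 -> 16x16 after two pools
```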
Collapse
Affiliation(s)
- Kohei Fujiwara
- Department of Health Sciences, Hokkaido University, Kita-ku, Sapporo, Japan
| | - Wanxuan Fang
- Faculty of Health Sciences, Hokkaido University, Kita-ku, Sapporo, Japan
| | - Taichi Okino
- Graduate School of Health Sciences, Hokkaido University, Kita-ku, Sapporo, Japan
| | - Kenneth Sutherland
- Global Center for Biomedical Science and Engineering, Hokkaido University, Kita-ku, Sapporo, Japan
| | - Akira Furusaki
- Sagawa Akira Rheumatology Clinic, Chuo-ku, Sapporo, Japan
| | - Akira Sagawa
- Sagawa Akira Rheumatology Clinic, Chuo-ku, Sapporo, Japan
| | - Tamotsu Kamishima
- Faculty of Health Sciences, Hokkaido University, Kita-ku, Sapporo, Japan
| |
Collapse
|
45
|
Yan L, Liu S, Qi J, Zhang Z, Zhong J, Li Q, Liu X, Zhu Q, Yao Z, Lu Y, Gu L. Three-dimensional reconstruction of internal fascicles and microvascular structures of human peripheral nerves. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING 2019; 35:e3245. [PMID: 31370097 DOI: 10.1002/cnm.3245] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/19/2019] [Revised: 07/15/2019] [Accepted: 07/28/2019] [Indexed: 06/10/2023]
Abstract
Biofabricated nanostructured and microstructured scaffolds have exhibited great potential for nerve tissue regeneration and functional restoration, and prevascularization and biotransportation within 3D fascicle structures are critical. Unfortunately, an ideal internal fascicle and microvascular model of human peripheral nerves is lacking. In this study, we used microcomputed tomography (microCT) to acquire high-resolution images of the human sciatic nerve. We propose a novel deep-learning network technique, called ResNetH3D-Unet, to segment fascicles and microvascular structures. We reconstructed 3D intraneural fascicles and microvascular topography to quantify the fascicle volume ratio (FVR), microvascular volume ratio (MVR), microvascular to fascicle volume ratio (MFVR), fascicle surface area to volume ratio (FSAVR), and microvascular surface area to volume ratio (MSAVR) of human samples. The frequency distributions of the fascicle diameter, microvascular diameter, and fascicle-to-microvasculature distance were analyzed. The obtained microCT analysis and reconstruction provided high-resolution microstructures of human peripheral nerves. Our proposed ResNetH3D-Unet method for fascicle and microvasculature segmentation yielded a mean intersection over union (IOU) of 92.1% (approximately 5% higher than the U-net IOU). The 3D reconstructed model showed that the internal microvasculature runs longitudinally within the internal epineurium and connects to the external vasculature at some points. Analysis of the 3D data indicated a 48.2 ± 3% FVR, 23.7 ± 1.8% MVR, 4.9 ± 0.5% MFVR, 7.26 ± 2.58 mm-1 FSAVR, and 1.52 ± 0.52 mm-1 MSAVR. A fascicle diameter of 0.98 mm, microvascular diameter of 0.125 mm, and microvasculature-to-fascicle distance of 0.196 mm were most frequent. This study provides fundamental data and structural references for designing bionic scaffolding constructs with 3D microvascular and fascicle distributions.
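The headline segmentation number above is intersection over union; for reference, a minimal IoU on binary masks (the masks below are synthetic):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two boolean masks (1.0 if both are empty)."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return 1.0 if union == 0 else float(inter / union)

pred = np.zeros((128, 128), dtype=bool)
pred[20:80, 20:80] = True
gt = np.zeros((128, 128), dtype=bool)
gt[30:90, 30:90] = True
print(f"IoU = {iou(pred, gt):.3f}")
```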
Collapse
Affiliation(s)
- Liwei Yan
- Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangzhou, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou, China
| | - Shouliang Liu
- School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China
- Guangdong Province Key Laboratory of Computational Science, Guangzhou, China
| | - Jian Qi
- Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangzhou, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou, China
| | - Zhongpu Zhang
- School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, Darlington, NSW, Australia
| | - Jingxiao Zhong
- School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, Darlington, NSW, Australia
| | - Qing Li
- School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, Darlington, NSW, Australia
| | - Xiaolin Liu
- Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangzhou, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou, China
| | - Qingtang Zhu
- Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangzhou, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou, China
| | - Zhi Yao
- Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangzhou, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou, China
| | - Yao Lu
- School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China
- Guangdong Province Key Laboratory of Computational Science, Guangzhou, China
| | - Liqiang Gu
- Department of Microsurgery and Orthopedic Trauma, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Center for Peripheral Nerve Tissue Engineering and Technology Research, Guangzhou, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou, China
| |
Collapse
|
46
|
Ogawa R, Kido T, Kido T, Mochizuki T. Effect of augmented datasets on deep convolutional neural networks applied to chest radiographs. Clin Radiol 2019; 74:697-701. [DOI: 10.1016/j.crad.2019.04.025] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2018] [Accepted: 04/09/2019] [Indexed: 10/26/2022]
|
47
|
|
48
|
Peng J, Kang S, Ning Z, Deng H, Shen J, Xu Y, Zhang J, Zhao W, Li X, Gong W, Huang J, Liu L. Residual convolutional neural network for predicting response of transarterial chemoembolization in hepatocellular carcinoma from CT imaging. Eur Radiol 2019; 30:413-424. [PMID: 31332558 PMCID: PMC6890698 DOI: 10.1007/s00330-019-06318-1] [Citation(s) in RCA: 109] [Impact Index Per Article: 18.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2019] [Revised: 05/21/2019] [Accepted: 06/11/2019] [Indexed: 02/07/2023]
Abstract
Background We attempted to train and validate a deep learning model for the preoperative prediction of the response of patients with intermediate-stage hepatocellular carcinoma (HCC) undergoing transarterial chemoembolization (TACE). Methods All computed tomography (CT) images were acquired for 562 patients from the Nan Fang Hospital (NFH), 89 patients from Zhu Hai Hospital Affiliated with Jinan University (ZHHAJU), and 138 patients from the Sun Yat-sen University Cancer Center (SYUCC). We built a predictive model from the outputs using transfer learning with a residual convolutional neural network (ResNet50). The prediction accuracy for each patch was re-evaluated in two independent validation cohorts. Results In the training set (NFH), the deep learning model had an accuracy of 84.3% and areas under the curve (AUCs) of 0.97, 0.96, 0.95, and 0.96 for complete response (CR), partial response (PR), stable disease (SD), and progressive disease (PD), respectively. In the two validation sets (ZHHAJU and SYUCC), the deep learning model had accuracies of 85.1% and 82.8% for CR, PR, SD, and PD. The ResNet50 model also had high AUCs for predicting the objective response to TACE therapy in patches and patients of all three cohorts. Decision curve analysis (DCA) showed that the ResNet50 model had a high net benefit in the two validation cohorts. Conclusion The deep learning model performed well in predicting the response to TACE therapy and could help clinicians better screen patients with HCC who can benefit from the interventional treatment. Key Points • Therapy response to TACE can be predicted by a deep learning model based on CT images. • The probability value from a trained or validated deep learning model showed significant correlation with different therapy responses. • Further improvement is necessary before clinical utilization.
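The transfer-learning recipe named here (ResNet50 pretrained on ImageNet, four response classes) looks roughly like the sketch below; freezing the backbone is our simplifying assumption, not necessarily the authors' choice.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                      # freeze pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 4)    # CR / PR / SD / PD

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
logits = model(torch.randn(2, 3, 224, 224))      # placeholder CT patches
```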
Collapse
Affiliation(s)
- Jie Peng
- Hepatology Unit and Department of Infectious Diseases, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Department of Oncology, The Second Affiliated Hospital of Guizhou Medical University, Kaili, China
| | - Shuai Kang
- Hepatology Unit and Department of Infectious Diseases, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
| | - Zhengyuan Ning
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Hangxia Deng
- Department of Minimal Invasive Interventional Therapy, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Guangzhou, 510000, China
| | - Jingxian Shen
- Department of Radiology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Guangzhou, China
| | - Yikai Xu
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Jing Zhang
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Wei Zhao
- Department of Interventional Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Xinling Li
- Department of Interventional Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Wuxing Gong
- Department of Oncology, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, China
| | - Jinhua Huang
- Department of Minimal Invasive Interventional Therapy, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Guangzhou, 510000, China.
| | - Li Liu
- Hepatology Unit and Department of Infectious Diseases, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China.
| |
Collapse
|
49
|
McCallum C, Riordon J, Wang Y, Kong T, You JB, Sanner S, Lagunov A, Hannam TG, Jarvi K, Sinton D. Deep learning-based selection of human sperm with high DNA integrity. Commun Biol 2019; 2:250. [PMID: 31286067 PMCID: PMC6610103 DOI: 10.1038/s42003-019-0491-6] [Citation(s) in RCA: 44] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2018] [Accepted: 06/05/2019] [Indexed: 12/13/2022] Open
Abstract
Despite the importance of sperm DNA to human reproduction, currently no method exists to assess individual sperm DNA quality prior to clinical selection. Traditionally, skilled clinicians select sperm based on a variety of morphological and motility criteria, but without direct knowledge of their DNA cargo. Here, we show how a deep convolutional neural network can be trained on a collection of ~1000 sperm cells of known DNA quality, to predict DNA quality from brightfield images alone. Our results demonstrate moderate correlation (bivariate correlation ~0.43) between a sperm cell image and DNA quality and the ability to identify higher DNA integrity cells relative to the median. This deep learning selection process is directly compatible with current, manual microscopy-based sperm selection and could assist clinicians, by providing rapid DNA quality predictions (under 10 ms per cell) and sperm selection within the 86th percentile from a given sample.
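Framed as regression, the task maps a single-channel brightfield image to a scalar quality score; the sketch below (backbone choice and input size are assumptions) shows that framing.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                        padding=3, bias=False)   # accept grayscale input
model.fc = nn.Linear(model.fc.in_features, 1)    # scalar DNA-quality score

images = torch.randn(8, 1, 224, 224)             # placeholder brightfield crops
scores = torch.rand(8)                           # placeholder ground-truth quality
loss = nn.MSELoss()(model(images).squeeze(1), scores)
loss.backward()
```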
Collapse
Affiliation(s)
- Christopher McCallum
- Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, ON Canada M5S 3G8
| | - Jason Riordon
- Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, ON Canada M5S 3G8
| | - Yihe Wang
- Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, ON Canada M5S 3G8
| | - Tian Kong
- Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, ON Canada M5S 3G8
| | - Jae Bem You
- Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, ON Canada M5S 3G8
| | - Scott Sanner
- Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, ON Canada M5S 3G8
| | - Alexander Lagunov
- Hannam Fertility Centre, 160 Bloor St. East, Toronto, ON Canada M4W 3R2
| | - Thomas G. Hannam
- Hannam Fertility Centre, 160 Bloor St. East, Toronto, ON Canada M4W 3R2
| | - Keith Jarvi
- Department of Surgery, Division of Urology, Mount Sinai Hospital, University of Toronto, 60 Murray Street, 6th Floor, Toronto, ON Canada M5T 3L9
| | - David Sinton
- Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, ON Canada M5S 3G8
| |
Collapse
|
50
|
Yi PH, Kim TK, Wei J, Shin J, Hui FK, Sair HI, Hager GD, Fritz J. Automated semantic labeling of pediatric musculoskeletal radiographs using deep learning. Pediatr Radiol 2019; 49:1066-1070. [PMID: 31041454 DOI: 10.1007/s00247-019-04408-2] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/10/2019] [Revised: 03/24/2019] [Accepted: 04/11/2019] [Indexed: 11/25/2022]
Abstract
BACKGROUND An automated method for identifying the anatomical region of an image independent of metadata labels could improve radiologist workflow (e.g., automated hanging protocols) and help facilitate the automated curation of large medical imaging data sets for machine learning purposes. Deep learning is a potential tool for this purpose. OBJECTIVE To develop and test the performance of deep convolutional neural networks (DCNNs) for the automated classification of pediatric musculoskeletal radiographs by anatomical area. MATERIALS AND METHODS We utilized a database of 250 pediatric bone radiographs (50 each of the shoulder, elbow, hand, pelvis and knee) to train 5 DCNNs, one to detect each anatomical region amongst the others, based on ResNet-18 pretrained on ImageNet (transfer learning). For each DCNN, the radiographs were randomly split into training (64%), validation (12%) and test (24%) data sets. The training and validation data sets were augmented 30 times using standard preprocessing methods. We also tested our DCNNs on a separate test set of 100 radiographs from a single institution. Receiver operating characteristic (ROC) curves with area under the curve (AUC) were used to evaluate DCNN performance. RESULTS All five DCNNs trained to classify the radiographs by anatomical region achieved an ROC AUC of 1 on both test sets. Classification of the test radiographs occurred at a rate of 33 radiographs per second. CONCLUSION DCNNs trained on a small set of images with 30-fold augmentation through standard processing techniques are able to automatically classify pediatric musculoskeletal radiographs into anatomical regions with near-perfect to perfect accuracy at superhuman speeds. This concept may apply to other body parts and radiographic views, with the potential to create an all-encompassing semantic-labeling DCNN.
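The "one detector per region" design reads as five one-vs-rest binary networks; a sketch under that reading (training loop and 30-fold augmentation omitted):

```python
import torch.nn as nn
from torchvision import models

REGIONS = ["shoulder", "elbow", "hand", "pelvis", "knee"]

detectors = {}
for region in REGIONS:
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)  # transfer learning
    net.fc = nn.Linear(net.fc.in_features, 2)    # this region vs. all others
    detectors[region] = net
```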
Collapse
Affiliation(s)
- Paul H Yi
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Tae Kyung Kim
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Jinchi Wei
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Jiwon Shin
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Ferdinand K Hui
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Haris I Sair
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Gregory D Hager
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Jan Fritz
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline St., Room 4223, Baltimore, MD, 21287, USA
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| |
Collapse
|