1
Cheng CT, Ooyang CH, Kang SC, Liao CH. Applications of Deep Learning in Trauma Radiology: A Narrative Review. Biomed J 2024:100743. PMID: 38679199; DOI: 10.1016/j.bj.2024.100743.
Abstract
Diagnostic imaging is essential in modern trauma care for initial evaluation and for identifying injuries requiring intervention. Deep learning (DL) has become mainstream in medical image analysis and has shown promising efficacy for classification, segmentation, and lesion detection. This narrative review provides the fundamental concepts for developing DL algorithms in trauma imaging and presents an overview of current progress in each modality. DL has been applied to detect free fluid on Focused Assessment with Sonography for Trauma (FAST), detect traumatic findings on chest and pelvic X-rays and computed tomography (CT) scans, identify intracranial hemorrhage on head CT, detect vertebral fractures, and identify injuries to organs such as the spleen, liver, and lungs on abdominal and chest CT. Future directions involve expanding dataset size and diversity through federated learning, enhancing model explainability and transparency to build clinician trust, and integrating multimodal data to provide more meaningful insights into traumatic injuries. Though some commercial artificial intelligence products are Food and Drug Administration-approved for clinical use in the trauma field, adoption remains limited, highlighting the need for multi-disciplinary teams to engineer practical, real-world solutions. Overall, DL shows immense potential to improve the efficiency and accuracy of trauma imaging, but thoughtful development and validation are critical to ensure these technologies positively impact patient care.
Affiliation(s)
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Chun-Hsiang Ooyang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Shih-Ching Kang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
2
Nowroozi A, Salehi MA, Shobeiri P, Agahi S, Momtazmanesh S, Kaviani P, Kalra MK. Artificial intelligence diagnostic accuracy in fracture detection from plain radiographs and comparing it with clinicians: a systematic review and meta-analysis. Clin Radiol 2024:S0009-9260(24)00200-9. PMID: 38772766; DOI: 10.1016/j.crad.2024.04.009.
Abstract
PURPOSE Fracture detection is one of the most commonly used and studied applications of artificial intelligence (AI) in medicine. In this systematic review and meta-analysis, we aimed to summarize the available literature on AI performance in fracture detection on plain radiographs and the factors affecting it. METHODS We systematically reviewed studies evaluating AI algorithms for detecting bone fractures on plain radiographs, combined their performance using meta-analysis (a bivariate regression approach), and compared it with that of clinicians. We also analyzed factors potentially affecting algorithm performance using meta-regression. RESULTS Our analysis included 100 studies. In 83 studies with confusion matrices, AI algorithms showed a sensitivity of 91.43% and a specificity of 92.12% (area under the summary receiver operating characteristic curve = 0.968). After adjustment and false discovery rate (FDR) correction, tibia/fibula (excluding ankle) fractures were associated with higher AI sensitivity (7.0%, p=0.004), while more recent publications (5.5%, p=0.003) and the Xception architecture (6.6%, p<0.001) were associated with higher specificity. Clinicians and AI showed similar specificity in fracture identification, although AI leaned toward higher sensitivity (7.6%, p=0.07). Radiologists, on the other hand, were more specific than AI overall and in several subgroups, and more sensitive to hip fractures before FDR correction. CONCLUSIONS Currently available AI aids could significantly improve care where radiologists are not readily available. Moreover, identifying factors affecting algorithm performance could guide AI development teams in optimizing their products.
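As a reader's aside (not from the study): the sensitivity and specificity pooled in this meta-analysis come from each study's 2×2 confusion matrix. A minimal sketch with hypothetical counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)  # fraction of actual fractures flagged
    specificity = tn / (tn + fp)  # fraction of non-fractures correctly cleared
    return sensitivity, specificity

# Hypothetical study: 91 of 100 fractures flagged, 8 of 100 normals flagged
sens, spec = diagnostic_metrics(tp=91, fp=8, fn=9, tn=92)
print(f"sensitivity={sens:.2%}, specificity={spec:.2%}")  # → sensitivity=91.00%, specificity=92.00%
```

The bivariate approach named in the abstract then pools these per-study pairs while modeling their correlation, which a simple average would ignore.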
Affiliation(s)
- A Nowroozi
- School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- M A Salehi
- School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- P Shobeiri
- School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- S Agahi
- School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- S Momtazmanesh
- School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- P Kaviani
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- M K Kalra
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
3
Cheng CT, Kuo LW, Ouyang CH, Hsu CP, Lin WC, Fu CY, Kang SC, Liao CH. Development and evaluation of a deep learning-based model for simultaneous detection and localization of rib and clavicle fractures in trauma patients' chest radiographs. Trauma Surg Acute Care Open 2024;9:e001300. PMID: 38646620; PMCID: PMC11029226; DOI: 10.1136/tsaco-2023-001300.
Abstract
Purpose To develop a rib and clavicle fracture detection model for chest radiographs in trauma patients using a deep learning (DL) algorithm. Materials and methods We retrospectively collected 56 145 chest X-rays (CXRs) from trauma patients in a trauma center between August 2008 and December 2016. A rib/clavicle fracture detection DL algorithm was trained on this dataset, with 991 (1.8%) images labeled by experts with fracture site locations. The algorithm was tested on an independent set of 300 CXRs collected in 2017. An external test set was also collected from hospitalized trauma patients in a regional hospital. The receiver operating characteristic curve with area under the curve (AUC), accuracy, sensitivity, specificity, precision, and negative predictive value of the model were evaluated on each test set, and the prediction probability on the images was visualized as heatmaps. Results The trained DL model achieved an AUC of 0.912 (95% CI 0.878 to 0.947) on the independent test set. The accuracy, sensitivity, and specificity at the chosen cut-off value were 83.7%, 86.8%, and 80.4%, respectively. On the external test set, the model had a sensitivity of 88.0% and an accuracy of 72.5%. While the model exhibited a slight decrease in accuracy on the external test set, it maintained its sensitivity in detecting fractures. Conclusion The algorithm detects rib and clavicle fractures concomitantly on trauma patients' CXRs and locates lesions with high accuracy through heatmap visualization.
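For context, the AUC reported above has a simple probabilistic reading: it is the chance that a randomly chosen fractured CXR receives a higher model score than a randomly chosen normal one. A minimal sketch with made-up scores (not the paper's data):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney formulation: the probability that a positive
    case outscores a negative case, counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for fractured vs. normal chest radiographs
print(roc_auc([0.9, 0.8, 0.7, 0.35], [0.6, 0.3, 0.2, 0.1]))  # → 0.9375
```

This pairwise definition is why AUC is threshold-free, whereas the accuracy/sensitivity/specificity figures quoted above each depend on the chosen cut-off.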
Affiliation(s)
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Ling-Wei Kuo
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chun-Hsiang Ouyang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chi-Po Hsu
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Wei-Cheng Lin
- Department of Electrical Engineering, Chang Gung University, Taoyuan, Taiwan
- Chih-Yuan Fu
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Shih-Ching Kang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
4
Binh LN, Nhu NT, Vy VPT, Son DLH, Hung TNK, Bach N, Huy HQ, Tuan LV, Le NQK, Kang JH. Multi-Class Deep Learning Model for Detecting Pediatric Distal Forearm Fractures Based on the AO/OTA Classification. J Imaging Inform Med 2024;37:725-733. PMID: 38308069; PMCID: PMC11031555; DOI: 10.1007/s10278-024-00968-4.
Abstract
Common pediatric distal forearm fractures necessitate precise detection. To support prompt treatment planning by clinicians, our study aimed to create a multi-class convolutional neural network (CNN) model for pediatric distal forearm fractures, guided by the AO Foundation/Orthopaedic Trauma Association (AO/OTA) classification system for pediatric fractures. The GRAZPEDWRI-DX dataset (2008-2018) of wrist X-ray images was used. We labeled images into four fracture classes (FRM, FUM, FRE, and FUE, with F, fracture; R, radius; U, ulna; M, metaphysis; and E, epiphysis) based on the pediatric AO/OTA classification. We performed multi-class classification by training a YOLOv4-based CNN object detection model with 7006 images from 1809 patients (80% for training and 20% for validation). An 88-image test set from 34 patients was used to evaluate model performance, which was then compared with the diagnostic performance of two readers, an orthopedist and a radiologist. The mean average precision levels on the validation set for the four classes were 0.97, 0.92, 0.95, and 0.94, respectively. On the test set, the model's performance included sensitivities of 0.86, 0.71, 0.88, and 0.89; specificities of 0.88, 0.94, 0.97, and 0.98; and area under the curve (AUC) values of 0.87, 0.83, 0.93, and 0.94, respectively. Among the two human readers and the model, the radiologist performed best, with a mean AUC of 0.922, followed by our model (0.892) and the orthopedist (0.830). Therefore, using the AO/OTA concept, our multi-class fracture detection model excelled in identifying pediatric distal forearm fractures.
Affiliation(s)
- Le Nguyen Binh
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Department of Orthopedics and Trauma, Cho Ray Hospital, Ho Chi Minh City, Vietnam
- AIBioMed Research Group, Taipei Medical University, Taipei, 11031, Taiwan
- Nguyen Thanh Nhu
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Faculty of Medicine, Can Tho University of Medicine and Pharmacy, Can Tho 94117, Vietnam
- Vu Pham Thao Vy
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Do Le Hoang Son
- Department of Orthopedics and Trauma, Cho Ray Hospital, Ho Chi Minh City, Vietnam
- Nguyen Bach
- Department of Orthopedics, University Medical Center Ho Chi Minh City, 201 Nguyen Chi Thanh Street, District 5, Ho Chi Minh City, Vietnam
- Hoang Quoc Huy
- Department of Orthopedics, University Medical Center Ho Chi Minh City, 201 Nguyen Chi Thanh Street, District 5, Ho Chi Minh City, Vietnam
- Le Van Tuan
- Department of Orthopedics and Trauma, Cho Ray Hospital, Ho Chi Minh City, Vietnam
- Nguyen Quoc Khanh Le
- AIBioMed Research Group, Taipei Medical University, Taipei, 11031, Taiwan
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Jiunn-Horng Kang
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Department of Physical Medicine and Rehabilitation, School of Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Department of Physical Medicine and Rehabilitation, Taipei Medical University Hospital, Taipei, 11031, Taiwan
- Graduate Institute of Nanomedicine and Medical Engineering, College of Biomedical Engineering, Taipei Medical University, Xinyi District, No.250, Wuxing Street, Taipei, 11031, Taiwan
5
Yuh WT, Khil EK, Yoon YS, Kim B, Yoon H, Lim J, Lee KY, Yoo YS, An KD. Deep Learning-Assisted Quantitative Measurement of Thoracolumbar Fracture Features on Lateral Radiographs. Neurospine 2024;21:30-43. PMID: 38569629; PMCID: PMC10992637; DOI: 10.14245/ns.2347366.683.
Abstract
OBJECTIVE This study aimed to develop and validate a deep learning (DL) algorithm for the quantitative measurement of thoracolumbar (TL) fracture features, and to evaluate its efficacy across varying levels of clinical expertise. METHODS Using the pretrained Mask Region-Based Convolutional Neural Network model, originally developed for vertebral body segmentation and fracture detection, we fine-tuned the model and added a new module for measuring fracture metrics from lumbar spine lateral radiographs: compression rate (CR), Cobb angle (CA), Gardner angle (GA), and sagittal index (SI). These metrics were derived from six-point labeling by 3 radiologists, forming the ground truth (GT). Training utilized 1,000 nonfractured and 318 fractured radiographs, while validation employed 213 internal and 200 external fractured radiographs. The accuracy of the DL algorithm in quantifying fracture features was evaluated against GT using the intraclass correlation coefficient. Additionally, 4 readers with varying expertise levels, including trainees and an attending spine surgeon, performed measurements with and without DL assistance, and their results were compared to GT and the DL model. RESULTS The DL algorithm demonstrated good to excellent agreement with GT for CR, CA, GA, and SI in both internal (0.860, 0.944, 0.932, and 0.779, respectively) and external (0.836, 0.940, 0.916, and 0.815, respectively) validation. DL-assisted measurement significantly improved agreement with GT for most metrics, particularly for trainees. CONCLUSION The DL algorithm was validated as an accurate tool for quantifying TL fracture features on radiographs. DL-assisted measurement is expected to expedite the diagnostic process and enhance reliability, particularly benefiting less experienced clinicians.
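As an illustration (not the authors' code): the Cobb angle measured here is the angle between two endplate lines, each defined by a pair of landmark points. A minimal sketch with hypothetical pixel coordinates:

```python
import math

def cobb_angle(p1, p2, q1, q2):
    """Angle in degrees between two endplate lines, each given by two (x, y) landmarks."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    deg = abs(math.degrees(a1 - a2)) % 180.0
    return min(deg, 180.0 - deg)  # lines are undirected; fold the angle into [0, 90]

# Hypothetical landmark coordinates (pixels) on a lateral radiograph
upper = ((10.0, 20.0), (60.0, 15.0))  # superior endplate of the vertebra above
lower = ((12.0, 80.0), (62.0, 90.0))  # inferior endplate of the vertebra below
print(round(cobb_angle(*upper, *lower), 1))  # → 17.0
```

The study's six-point labeling supplies exactly such landmark pairs, which is why an intraclass correlation against radiologist-derived points is a natural accuracy measure.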
Affiliation(s)
- Woon Tak Yuh
- Department of Neurosurgery, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
- Eun Kyung Khil
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
- Department of Radiology, Fastbone Orthopedic Hospital, Hwaseong, Korea
- Yu Sung Yoon
- Department of Radiology, Kyungpook National University Hospital, School of Medicine, Kyungpook National University, Daegu, Korea
- Jihe Lim
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
- Kyoung Yeon Lee
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
- Yeong Seo Yoo
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
- Kyeong Deuk An
- Department of Neurosurgery, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
6
Gitto S, Serpi F, Albano D, Risoleo G, Fusco S, Messina C, Sconfienza LM. AI applications in musculoskeletal imaging: a narrative review. Eur Radiol Exp 2024;8:22. PMID: 38355767; PMCID: PMC10866817; DOI: 10.1186/s41747-024-00422-8.
Abstract
This narrative review focuses on clinical applications of artificial intelligence (AI) in musculoskeletal imaging. A range of musculoskeletal disorders are discussed using a clinically based approach, including trauma, bone age estimation, osteoarthritis, bone and soft-tissue tumors, and orthopedic implant-related pathology. Several AI algorithms have been applied to fracture detection and classification, which are potentially helpful tools for radiologists and clinicians. In bone age assessment, AI methods have been applied to assist radiologists by automating workflow, thus reducing workload and inter-observer variability. AI may potentially aid radiologists in identifying and grading abnormal findings of osteoarthritis as well as predicting the onset or progression of this disease. Either alone or combined with radiomics, AI algorithms may potentially improve diagnosis and outcome prediction of bone and soft-tissue tumors. Finally, information regarding appropriate positioning of orthopedic implants and related complications may be obtained using AI algorithms. In conclusion, rather than replacing radiologists, the use of AI should instead help them to optimize workflow, augment diagnostic performance, and keep up with an ever-increasing workload. Relevance statement: This narrative review provides an overview of AI applications in musculoskeletal imaging. As the number of AI technologies continues to increase, it will be crucial for radiologists to play a role in their selection and application as well as to fully understand their potential value in clinical practice. Key points:
- AI may potentially assist musculoskeletal radiologists in several interpretative tasks.
- AI applications to trauma, age estimation, osteoarthritis, tumors, and orthopedic implants are discussed.
- AI should help radiologists to optimize workflow and augment diagnostic performance.
Affiliation(s)
- Salvatore Gitto
- Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, Milan, 20157, Italy
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Francesca Serpi
- Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, Milan, 20157, Italy
- Domenico Albano
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Dipartimento di Scienze Biomediche, Chirurgiche ed Odontoiatriche, Università degli Studi di Milano, Milan, Italy
- Giovanni Risoleo
- Scuola di Specializzazione in Radiodiagnostica, Università degli Studi di Milano, Milan, Italy
- Stefano Fusco
- Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, Milan, 20157, Italy
- Carmelo Messina
- Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, Milan, 20157, Italy
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Luca Maria Sconfienza
- Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, Milan, 20157, Italy
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
7
Hoy MK, Desai V, Mutasa S, Hoy RC, Gorniak R, Belair JA. Deep Learning-Assisted Identification of Femoroacetabular Impingement (FAI) on Routine Pelvic Radiographs. J Imaging Inform Med 2024;37:339-346. PMID: 38343231; DOI: 10.1007/s10278-023-00920-y.
Abstract
The aim of this study was to use a novel deep learning system to localize the hip joints and detect findings of cam-type femoroacetabular impingement (FAI). A retrospective search of hip/pelvis radiographs obtained in patients evaluated for FAI yielded 3050 studies. Each hip was classified separately by the original interpreting radiologist as follows: 724 hips had severe cam-type FAI morphology, 962 moderate, 846 mild, and 518 hips were normal. The anteroposterior (AP) view from each study was anonymized and extracted. After localization of the hip joints by a novel convolutional neural network (CNN) based on the focal loss principle, a second CNN classified the images of the hip as cam-positive or no FAI. Accuracy was 74% for diagnosing normal versus abnormal cam-type FAI morphology, with an aggregate sensitivity and specificity of 0.821 and 0.669, respectively, at the chosen operating point. The aggregate AUC was 0.736. A deep learning system can be applied to detect FAI-related changes on single-view pelvic radiographs. Deep learning is useful for quickly identifying and categorizing pathology on imaging, which may aid the interpreting radiologist.
Affiliation(s)
- Vishal Desai
- Thomas Jefferson University, Philadelphia, PA, USA
- Robert C Hoy
- Temple University Hospital, Philadelphia, PA, USA
8
Pham TD, Holmes SB, Coulthard P. A review on artificial intelligence for the diagnosis of fractures in facial trauma imaging. Front Artif Intell 2024;6:1278529. PMID: 38249794; PMCID: PMC10797131; DOI: 10.3389/frai.2023.1278529.
Abstract
Patients with facial trauma may suffer injuries such as broken bones, bleeding, swelling, bruising, lacerations, burns, and deformity of the face. Common causes of facial-bone fractures are road accidents, violence, and sports injuries. Surgery is needed when radiological findings indicate that the patient would otherwise be deprived of normal function or left with facial deformity. Although image reading by radiologists is useful for evaluating suspected facial fractures, human-based diagnostics faces certain challenges. Artificial intelligence (AI) is making a quantum leap in radiology, producing significant improvements in reports and workflows. Here, an updated literature review is presented on the impact of AI in facial trauma, with special reference to fracture detection in radiology. The purpose is to gain insights into current development and the demand for future research in facial trauma. This review also discusses limitations to be overcome and important issues for investigation in order to make AI applications to trauma more effective and realistic in practical settings. The publications selected for review were based on their clinical significance, journal metrics, and journal indexing.
Affiliation(s)
- Tuan D. Pham
- Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
9
Bachmann R, Gunes G, Hangaard S, Nexmann A, Lisouski P, Boesen M, Lundemann M, Baginski SG. Improving traumatic fracture detection on radiographs with artificial intelligence support: a multi-reader study. BJR Open 2024;6:tzae011. PMID: 38757067; PMCID: PMC11096271; DOI: 10.1093/bjro/tzae011.
Abstract
Objectives The aim of this study was to evaluate the diagnostic performance of nonspecialist readers, with and without the use of an artificial intelligence (AI) support tool, in detecting traumatic fractures on radiographs of the appendicular skeleton. Methods The design was a retrospective, fully crossed multi-reader, multi-case study on a balanced dataset of patients (≥2 years of age) with an AI tool as the diagnostic intervention. Fifteen readers assessed 340 radiographic exams, with and without the AI tool, in 2 different sessions, and the time spent was automatically recorded. The reference standard was established by 3 consultant radiologists. Sensitivity, specificity, and false positives per patient were calculated. Results Patient-wise sensitivity increased from 72% to 80% (P < .05) and patient-wise specificity increased from 81% to 85% (P < .05) in exams aided by the AI tool compared to the unaided exams. The increase in sensitivity corresponds to a relative reduction of missed fractures of 29%. The average rate of false positives per patient decreased from 0.16 to 0.14, a relative reduction of 21%. There was no significant difference in average reading time per exam. The largest gain in fracture detection performance with AI support, across all readers, was on nonobvious fractures, with a significant increase in sensitivity of 11 percentage points (60%-71%). Conclusions The diagnostic performance of nonspecialist readers in detecting traumatic fractures on radiographs of the appendicular skeleton improved when supported by the AI tool, with gains in both sensitivity and specificity and no negative effect on interpretation time. Advances in knowledge The separate analysis of obvious and nonobvious fractures is novel in AI reader-comparison studies such as this.
Affiliation(s)
- Stine Hangaard
- Department of Radiology, Herlev and Gentofte, Copenhagen University Hospital, Denmark
- Mikael Boesen
- Department of Radiology and Radiological AI Testcenter (RAIT) Denmark, Bispebjerg and Frederiksberg, Copenhagen University Hospital, Denmark
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Denmark
10
Wang M, Seibel MJ. Approach to the Patient With Bone Fracture: Making the First Fracture the Last. J Clin Endocrinol Metab 2023;108:3345-3352. PMID: 37290052; PMCID: PMC10655538; DOI: 10.1210/clinem/dgad345.
Abstract
The global burden of osteoporosis and osteoporotic fractures will increase significantly as populations age rapidly. Osteoporotic fractures lead to increased morbidity, mortality, and risk of subsequent fractures if left untreated. However, studies have shown that the majority of patients who suffer an osteoporotic fracture are not investigated or treated for osteoporosis, leading to an inexcusable "osteoporosis care gap." Systematic and coordinated models of care in secondary fracture prevention, known as fracture liaison services (FLS), have been established to streamline and improve the care of patients with osteoporotic fractures, and employ the core principles of identification, investigation, and initiation of treatment. Our approach to the multifaceted care of secondary fracture prevention at a hospital-based FLS is illustrated through several case vignettes.
Affiliation(s)
- Mawson Wang
- The University of Sydney, Bone Research Program, ANZAC Research Institute, Concord, NSW 2139, Australia
- Markus J Seibel
- The University of Sydney, Bone Research Program, ANZAC Research Institute, Concord, NSW 2139, Australia
11
Lu X, Chang EY, Du J, Yan A, McAuley J, Gentili A, Hsu CN. Robust Multi-View Fracture Detection in the Presence of Other Abnormalities Using HAMIL-Net. Mil Med 2023;188:590-597. PMID: 37948284; DOI: 10.1093/milmed/usad252.
Abstract
INTRODUCTION Foot and ankle fractures are the most common military health problem. Automated diagnosis can save time and personnel. It is crucial to distinguish fractures not only from normal healthy cases but also to remain robust against the presence of other orthopedic pathologies. Deep learning, a form of artificial intelligence (AI), has been shown to be promising. Previously, we developed HAMIL-Net to automatically detect orthopedic injuries of the upper extremity. In this research, we investigated the performance of HAMIL-Net for detecting foot and ankle fractures in the presence of other abnormalities. MATERIALS AND METHODS HAMIL-Net is a novel deep neural network consisting of a hierarchical attention layer followed by a multiple-instance learning layer. This design allows it to deal with imaging studies with multiple views. We used 148K musculoskeletal imaging studies of 51K Veterans at VA San Diego over the past 20 years to create datasets for this research. We annotated each study via a semi-automated pipeline that leveraged radiology reports written by board-certified radiologists, extracted findings with a natural language processing tool, and manually validated the annotations. RESULTS HAMIL-Net can be trained with study-level, multiple-view examples and detects foot and ankle fractures with a 0.87 area under the receiver operating characteristic curve, but performance dropped when tested on cases including other abnormalities. By integrating a fracture-specialized model with one that detects a broad range of abnormalities, HAMIL-Net's accuracy in detecting any abnormality improved from 0.53 to 0.77 and its F-score from 0.46 to 0.86. We also report HAMIL-Net's performance across study types, including for young (age 18-35) patients. CONCLUSIONS Automated fracture detection is promising, but to be deployed in clinical use, the presence of other abnormalities must be considered to deliver its full benefit. Our results with HAMIL-Net showed that considering other abnormalities improved fracture detection and allowed for incidental findings of other musculoskeletal abnormalities pertinent to or superimposed on fractures.
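As an illustration of the multiple-instance idea named in this abstract (not the actual HAMIL-Net layer, which is learned end to end): per-view scores from a multi-view study can be pooled into a single study-level score with softmax attention, so the most suspicious view dominates without discarding the others. A minimal sketch with hypothetical logits:

```python
import math

def attention_mil_pool(view_scores):
    """Combine per-view fracture scores into one study-level score.

    A minimal stand-in for a learned multiple-instance-learning layer:
    softmax weights over views, computed from the scores themselves.
    """
    weights = [math.exp(s) for s in view_scores]
    total = sum(weights)
    return sum((w / total) * s for w, s in zip(weights, view_scores))

# Hypothetical per-view logits for a three-view ankle study
print(round(attention_mil_pool([0.2, 2.5, 0.1]), 2))  # → 2.12
```

Pooling at the study level is what lets such a model train on study-level labels (e.g., from report text) without per-view annotation, which matches the semi-automated labeling pipeline described above.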
Affiliation(s)
- Xing Lu
- University of California, San Diego, La Jolla, CA 92093, USA
- Eric Y Chang
- University of California, San Diego, La Jolla, CA 92093, USA
- VA San Diego Healthcare System, San Diego, CA 92161, USA
- Jiang Du
- University of California, San Diego, La Jolla, CA 92093, USA
- An Yan
- University of California, San Diego, La Jolla, CA 92093, USA
- Julian McAuley
- University of California, San Diego, La Jolla, CA 92093, USA
- Amilcare Gentili
- University of California, San Diego, La Jolla, CA 92093, USA
- VA San Diego Healthcare System, San Diego, CA 92161, USA
- Chun-Nan Hsu
- University of California, San Diego, La Jolla, CA 92093, USA
- VA San Diego Healthcare System, San Diego, CA 92161, USA
- VA National Artificial Intelligence Institute, Washington, DC 20422, USA
| |
Collapse
|
12
|
Jo SW, Khil EK, Lee KY, Choi I, Yoon YS, Cha JG, Lee JH, Kim H, Lee SY. Deep learning system for automated detection of posterior ligamentous complex injury in patients with thoracolumbar fracture on MRI. Sci Rep 2023; 13:19017. [PMID: 37923853 PMCID: PMC10624679 DOI: 10.1038/s41598-023-46208-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Accepted: 10/29/2023] [Indexed: 11/06/2023] Open
Abstract
This study aimed to develop a deep learning (DL) algorithm for automated detection and localization of posterior ligamentous complex (PLC) injury on magnetic resonance imaging (MRI) in patients with acute thoracolumbar (TL) fracture, and to evaluate its diagnostic performance. In this retrospective multicenter study, using midline sagittal T2-weighted images with fracture (± PLC injury), a training dataset and internal and external validation sets of 300, 100, and 100 patients were constructed with equal numbers of injured and normal PLCs. The DL algorithm was developed in two steps (Attention U-net followed by Inception-ResNet-V2). We evaluated the diagnostic performance for PLC injury of the DL algorithm and of radiologists with different levels of experience. The areas under the curve (AUCs) generated by the DL algorithm were 0.928 and 0.916 for internal and external validation, respectively, and the AUCs of the two radiologists in the observer performance test were 0.930 and 0.830. Although no significant difference was found between the DL algorithm and the radiologists in diagnosing PLC injury, the DL algorithm trended toward a higher AUC than the radiology trainee. Notably, the radiology trainee's diagnostic performance improved significantly with DL algorithm assistance. The DL algorithm therefore exhibited high diagnostic performance in detecting PLC injuries in acute TL fractures.
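The abstract does not detail how its two steps connect; a generic sketch of a two-stage design of this kind — assuming stage one produces a localization mask that is used to crop the input for the stage-two classifier, which is one common arrangement but not necessarily the authors' exact pipeline — might look like this (all names and data are illustrative):

```python
import numpy as np

def roi_from_mask(mask):
    """Stage-1 output -> bounding box: extent of the segmented region."""
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def two_stage_predict(image, mask, classify):
    """Crop the image to the stage-1 region, then run the stage-2 classifier."""
    y0, y1, x0, x1 = roi_from_mask(mask)
    return classify(image[y0:y1, x0:x1])

# toy stand-ins: a 6x6 "image", a mask marking rows 2-3 / cols 1-4,
# and a trivial classifier thresholding the mean ROI intensity
img = np.arange(36.0).reshape(6, 6)
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:5] = True
pred = two_stage_predict(img, mask, lambda roi: float(roi.mean() > 10))
```

The benefit of the split is that the classifier only sees the anatomically relevant crop rather than the full sagittal slice.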
Affiliation(s)
- Sang Won Jo
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, 7, Keunjaebong-gil, Hwaseong-si, Republic of Korea
- Eun Kyung Khil
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, 7, Keunjaebong-gil, Hwaseong-si, Republic of Korea
- Department of Radiology, Fastbone Orthopedic Hospital, Hwaseong-si, Republic of Korea
- Kyoung Yeon Lee
- Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, 7, Keunjaebong-gil, Hwaseong-si, Republic of Korea
- Il Choi
- Department of Neurologic Surgery, Hallym University Dongtan Sacred Heart Hospital, Hwaseong-si, Republic of Korea
- Yu Sung Yoon
- Department of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon, Republic of Korea
- Department of Radiology, Kyungpook National University Hospital, Daegu, Republic of Korea
- Jang Gyu Cha
- Department of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon, Republic of Korea
|
13
|
Zech JR, Jaramillo D, Altosaar J, Popkin CA, Wong TT. Artificial intelligence to identify fractures on pediatric and young adult upper extremity radiographs. Pediatr Radiol 2023; 53:2386-2397. [PMID: 37740031 DOI: 10.1007/s00247-023-05754-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 08/09/2023] [Accepted: 08/21/2023] [Indexed: 09/24/2023]
Abstract
BACKGROUND Pediatric fractures are challenging to identify given the different response of the pediatric skeleton to injury compared to adults, and most artificial intelligence (AI) fracture detection work has focused on adults. OBJECTIVE To develop and transparently share an AI model capable of detecting a range of pediatric upper extremity fractures. MATERIALS AND METHODS In total, 58,846 upper extremity radiographs (finger/hand, wrist/forearm, elbow, humerus, shoulder/clavicle) from 14,873 pediatric and young adult patients were divided into train (n = 12,232 patients), tune (n = 1,307), internal test (n = 819), and external test (n = 515) splits. Fracture was determined by manual inspection of all test radiographs and of the subset of train/tune radiographs whose reports were classified fracture-positive by a rule-based natural language processing (NLP) algorithm. We trained an object detection model (Faster Region-based Convolutional Neural Network [R-CNN]; "strongly supervised") and an image classification model (EfficientNetV2-Small; "weakly supervised") to detect fractures using the train/tune data and evaluated them on the test data. AI fracture detection accuracy was compared with the accuracy of on-call residents on cases they preliminarily interpreted overnight. RESULTS The strongly supervised fracture detection AI model achieved an overall test area under the receiver operating characteristic curve (AUC) of 0.96 (95% CI 0.95-0.97), accuracy of 89.7% (95% CI 88.0-91.3%), sensitivity of 90.8% (95% CI 88.5-93.1%), and specificity of 88.7% (95% CI 86.4-91.0%), and outperformed the weakly supervised model (AUC 0.93, 95% CI 0.92-0.94, P < 0.0001). AI accuracy on cases preliminarily interpreted overnight was higher than resident accuracy (AI 89.4% vs. 85.1%, 95% CI 87.3-91.5% vs. 82.7-87.5%, P = 0.01). CONCLUSION An object detection AI model identified pediatric upper extremity fractures with high accuracy.
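The confidence intervals quoted alongside sensitivity and specificity in studies like this are binomial proportion intervals; one standard choice is the Wilson score interval, which can be computed directly from counts. A minimal sketch (the counts below are hypothetical, not taken from the study):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion, e.g. the
    sensitivity of a fracture detector estimated from n positive cases."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# hypothetical example: 166 of 182 fractures correctly detected
lo, hi = wilson_ci(166, 182)
```

Unlike the simpler Wald interval, the Wilson interval stays inside [0, 1] and behaves sensibly for proportions near 1, which is common for the high sensitivities reported in this literature.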
Affiliation(s)
- John R Zech
- Department of Radiology, Columbia University Irving Medical Center, 622 W. 168th St., New York, NY, 10032, USA
- Diego Jaramillo
- Department of Radiology, Columbia University Irving Medical Center, 622 W. 168th St., New York, NY, 10032, USA
- Charles A Popkin
- Department of Orthopedic Surgery, Columbia University Irving Medical Center, New York, NY, USA
- Tony T Wong
- Department of Radiology, Columbia University Irving Medical Center, 622 W. 168th St., New York, NY, 10032, USA
|
14
|
Su Z, Adam A, Nasrudin MF, Ayob M, Punganan G. Skeletal Fracture Detection with Deep Learning: A Comprehensive Review. Diagnostics (Basel) 2023; 13:3245. [PMID: 37892066 PMCID: PMC10606060 DOI: 10.3390/diagnostics13203245] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2023] [Revised: 10/12/2023] [Accepted: 10/13/2023] [Indexed: 10/29/2023] Open
Abstract
Deep learning models have shown great promise in diagnosing skeletal fractures from X-ray images. However, challenges remain that hinder progress in this field. A lack of clear definitions for the recognition, classification, detection, and localization tasks hampers the consistent development and comparison of methodologies, and existing reviews often lack technical depth or have limited scope. Additionally, the absence of explainability features undermines clinical application and expert confidence in the results. To address these issues, this comprehensive review analyzes and evaluates 40 of 337 recent papers identified in prestigious databases, including WOS, Scopus, and EI. The objectives of this review are threefold. First, precise definitions are established for the bone fracture recognition, classification, detection, and localization tasks within deep learning. Second, each study is summarized based on key aspects such as the bones involved, research objectives, dataset sizes, methods employed, results obtained, and concluding remarks, distilling the diverse approaches into a generalized processing framework or workflow. Third, the review identifies crucial areas for future research on deep learning models for bone fracture diagnosis, including enhancing network interpretability, integrating multimodal clinical information, providing therapeutic schedule recommendations, and developing advanced visualization methods for clinical application. By addressing these challenges, deep learning models can be made more intelligent and specialized in this domain. In conclusion, this review fills the gap in precise task definitions within deep learning for bone fracture diagnosis and provides a comprehensive analysis of the recent research. The findings serve as a foundation for future advancements, enabling improved interpretability, multimodal integration, clinical decision support, and advanced visualization techniques.
Affiliation(s)
- Zhihao Su
- Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Afzan Adam
- Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Mohammad Faidzul Nasrudin
- Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Masri Ayob
- Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Gauthamen Punganan
- Department of Orthopedics and Traumatology, Hospital Raja Permaisuri Bainun, Ipoh 30450, Perak, Malaysia
|
15
|
Gasmi I, Calinghen A, Parienti JJ, Belloy F, Fohlen A, Pelage JP. Comparison of diagnostic performance of a deep learning algorithm, emergency physicians, junior radiologists and senior radiologists in the detection of appendicular fractures in children. Pediatr Radiol 2023; 53:1675-1684. [PMID: 36877239 DOI: 10.1007/s00247-023-05621-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 11/21/2022] [Accepted: 01/30/2023] [Indexed: 03/07/2023]
Abstract
BACKGROUND Advances have been made in the use of artificial intelligence (AI) in diagnostic imaging, particularly in the detection of fractures on conventional radiographs. Few studies have examined fracture detection in the pediatric population, whose anatomical variations, and their evolution with the child's age, require studies specific to this population. Failure to diagnose fractures early in children may lead to serious consequences for growth. OBJECTIVE To evaluate the performance of an AI algorithm based on deep neural networks in detecting traumatic appendicular fractures in a pediatric population, and to compare the sensitivity, specificity, positive predictive value, and negative predictive value of different readers and the AI algorithm. MATERIALS AND METHODS This retrospective study of 878 patients younger than 18 years of age evaluated conventional radiographs obtained after recent non-life-threatening trauma. All radiographs of the shoulder, arm, elbow, forearm, wrist, hand, leg, knee, ankle and foot were evaluated. The diagnostic performance of a consensus of radiology experts in pediatric imaging (reference standard) was compared with those of pediatric radiologists, emergency physicians, senior residents and junior residents, and the predictions made by the AI algorithm were compared with the annotations made by the different physicians. RESULTS The algorithm predicted 174 of 182 fractures, corresponding to a sensitivity of 95.6%, a specificity of 91.64% and a negative predictive value of 98.76%. The AI predictions were close to those of pediatric radiologists (sensitivity 98.35%) and senior residents (95.05%) and were above those of emergency physicians (81.87%) and junior residents (90.1%). The algorithm identified 3 (1.6%) fractures not initially seen by pediatric radiologists. CONCLUSION This study suggests that deep learning algorithms can be useful in improving the detection of fractures in children.
Affiliation(s)
- Idriss Gasmi
- Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
- Arvin Calinghen
- Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
- Jean-Jacques Parienti
- GRAM 2.0 EA2656 UNICAEN Normandie, University Hospital, Caen, France
- Department of Clinical Research, Caen University Hospital, Caen, France
- Frederique Belloy
- Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
- Audrey Fohlen
- Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
- UNICAEN CEA CNRS ISTCT-CERVOxy, Normandie University, 14000, Caen, France
- Jean-Pierre Pelage
- Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
- UNICAEN CEA CNRS ISTCT-CERVOxy, Normandie University, 14000, Caen, France
|
16
|
Salimi M, Parry JA, Shahrokhi R, Mosalamiaghili S. Application of artificial intelligence in trauma orthopedics: Limitation and prospects. World J Clin Cases 2023; 11:4231-4240. [PMID: 37449222 PMCID: PMC10337008 DOI: 10.12998/wjcc.v11.i18.4231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 04/23/2023] [Accepted: 05/08/2023] [Indexed: 06/26/2023] Open
Abstract
The varieties and capabilities of artificial intelligence (AI) and machine learning in orthopedic surgery are expanding rapidly. One promising method is neural networks, which leverage big data and computer-based learning systems to develop a statistical fracture-detecting model. Such a model derives patterns and rules from vast amounts of data and uses them to analyze the probabilities of different outcomes on new sets of similar data. The sensitivity and specificity of machine learning in detecting fractures vary across previous studies. AI may be most promising in the diagnosis of less-obvious fractures that are more commonly missed. Future studies are necessary to develop more accurate and effective detection models that can be used clinically.
Affiliation(s)
- Maryam Salimi
- Department of Orthopaedic Surgery, Denver Health Medical Center, Denver, CO 80215, United States
- Joshua A Parry
- Department of Orthopaedic Surgery, Denver Health Medical Center, Denver, CO 80215, United States
- Raha Shahrokhi
- Student Research Committee, Shiraz University of Medical Sciences, Shiraz 7138433608, Iran
|
17
|
Ouyang CH, Chen CC, Tee YS, Lin WC, Kuo LW, Liao CA, Cheng CT, Liao CH. The Application of Design Thinking in Developing a Deep Learning Algorithm for Hip Fracture Detection. Bioengineering (Basel) 2023; 10:735. [PMID: 37370666 DOI: 10.3390/bioengineering10060735] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2023] [Revised: 06/05/2023] [Accepted: 06/12/2023] [Indexed: 06/29/2023] Open
Abstract
(1) Background: Design thinking is a problem-solving approach that has been applied in various sectors, including healthcare and medical education. While deep learning (DL) algorithms can assist in clinical practice, integrating them into clinical scenarios can be challenging. This study aimed to use design thinking steps to develop a DL algorithm that accelerates deployment in clinical practice and improves its performance to meet clinical requirements. (2) Methods: We applied the design thinking process, interviewing clinical doctors to gain insights, to develop and modify the DL algorithm to fit clinical scenarios, and we compared the algorithm's performance before and after the integration of design thinking. (3) Results: After empathizing with clinical doctors and defining their needs, we identified the unmet need of five trauma surgeons as "how to reduce the misdiagnosis of femoral fracture on pelvic plain film (PXR) at the initial emergency visit". We collected 4235 PXRs obtained at our hospital from 2008 to 2016, of which 2146 (51%) showed a hip fracture. Using these images, we developed hip fracture DL detection models based on the Xception convolutional neural network. By incorporating design thinking, we improved the diagnostic accuracy from 0.91 (0.84-0.96) to 0.95 (0.93-0.97), the sensitivity from 0.97 (0.89-1.00) to 0.97 (0.94-0.99), and the specificity from 0.84 (0.71-0.93) to 0.93 (0.90-0.97). (4) Conclusions: In summary, this study demonstrates that design thinking can ensure that DL solutions developed for trauma care are user-centered and meet the needs of patients and healthcare providers.
Affiliation(s)
- Chun-Hsiang Ouyang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chih-Chi Chen
- Department of Rehabilitation and Physical Medicine, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Yu-San Tee
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Wei-Cheng Lin
- Department of Electrical Engineering, Chang Gung University, Taoyuan 33327, Taiwan
- Ling-Wei Kuo
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chien-An Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
|
18
|
Agrawal A, Khatri GD, Khurana B, Sodickson AD, Liang Y, Dreizin D. A survey of ASER members on artificial intelligence in emergency radiology: trends, perceptions, and expectations. Emerg Radiol 2023; 30:267-277. [PMID: 36913061 PMCID: PMC10362990 DOI: 10.1007/s10140-023-02121-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2023] [Accepted: 02/28/2023] [Indexed: 03/14/2023]
Abstract
PURPOSE There is a growing body of diagnostic performance studies for emergency radiology-related artificial intelligence/machine learning (AI/ML) tools; however, little is known about user preferences, concerns, experiences, expectations, and the degree of penetration of AI tools in emergency radiology. Our aim is to conduct a survey of the current trends, perceptions, and expectations regarding AI among American Society of Emergency Radiology (ASER) members. METHODS An anonymous and voluntary online survey questionnaire was e-mailed to all ASER members, followed by two reminder e-mails. A descriptive analysis of the data was conducted, and results summarized. RESULTS A total of 113 members responded (response rate 12%). The majority were attending radiologists (90%) with greater than 10 years' experience (80%) and from an academic practice (65%). Most (55%) reported use of commercial AI CAD tools in their practice. Workflow prioritization based on pathology detection, injury or disease severity grading and classification, quantitative visualization, and auto-population of structured reports were identified as high-value tasks. Respondents overwhelmingly indicated a need for explainable and verifiable tools (87%) and the need for transparency in the development process (80%). Most respondents did not feel that AI would reduce the need for emergency radiologists in the next two decades (72%) or diminish interest in fellowship programs (58%). Negative perceptions pertained to potential for automation bias (23%), over-diagnosis (16%), poor generalizability (15%), negative impact on training (11%), and impediments to workflow (10%). CONCLUSION ASER member respondents are in general optimistic about the impact of AI in the practice of emergency radiology and its impact on the popularity of emergency radiology as a subspecialty. The majority expect to see transparent and explainable AI models with the radiologist as the decision-maker.
Affiliation(s)
- Anjali Agrawal
- New Delhi operations, Teleradiology Solutions, Delhi, India
- Garvit D Khatri
- Nuclear Medicine, Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA
- Bharti Khurana
- Emergency Radiology, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Aaron D Sodickson
- Emergency Radiology, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Yuanyuan Liang
- Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- David Dreizin
- Trauma and Emergency Radiology, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
|
19
|
Dreizin D, Staziaki PV, Khatri GD, Beckmann NM, Feng Z, Liang Y, Delproposto ZS, Klug M, Spann JS, Sarkar N, Fu Y. Artificial intelligence CAD tools in trauma imaging: a scoping review from the American Society of Emergency Radiology (ASER) AI/ML Expert Panel. Emerg Radiol 2023; 30:251-265. [PMID: 36917287 PMCID: PMC10640925 DOI: 10.1007/s10140-023-02120-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Accepted: 02/27/2023] [Indexed: 03/16/2023]
Abstract
BACKGROUND AI/ML CAD tools can potentially improve outcomes in the high-stakes, high-volume model of trauma radiology. No prior scoping review has been undertaken to comprehensively assess tools in this subspecialty. PURPOSE To map the evolution and current state of trauma radiology CAD tools along key dimensions of technology readiness. METHODS Following a search of databases, abstract screening, and full-text document review, CAD tool maturity was charted using elements of data curation, performance validation, outcomes research, explainability, user acceptance, and funding patterns. Descriptive statistics were used to illustrate key trends. RESULTS A total of 4052 records were screened, and 233 full-text articles were selected for content analysis. Twenty-one papers described FDA-approved commercial tools, and 212 reported algorithm prototypes. Works ranged from foundational research to multi-reader multi-case trials with heterogeneous external data. Scalable convolutional neural network-based implementations increased steeply after 2016 and were used in all commercial products; however, options for explainability were narrow. Of FDA-approved tools, 9/10 performed detection tasks. Dataset sizes ranged from < 100 to > 500,000 patients, and commercialization coincided with public dataset availability. Cross-sectional torso datasets were uniformly small. Data curation methods with ground truth labeling by independent readers were uncommon. No papers assessed user acceptance, and no method included human-computer interaction. The USA and China had the highest research output and frequency of research funding. CONCLUSIONS Trauma imaging CAD tools are likely to improve patient care but are currently in an early stage of maturity, with few FDA-approved products for a limited number of uses. The scarcity of high-quality annotated data remains a major barrier.
Affiliation(s)
- David Dreizin
- Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
- Pedro V Staziaki
- Cardiothoracic Imaging, Department of Radiology, Larner College of Medicine, University of Vermont, Burlington, VT, USA
- Garvit D Khatri
- Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA
- Nicholas M Beckmann
- Memorial Hermann Orthopedic & Spine Hospital, McGovern Medical School at UTHealth, Houston, TX, USA
- Zhaoyong Feng
- Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- Yuanyuan Liang
- Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- Zachary S Delproposto
- Division of Emergency Radiology, Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- J Stephen Spann
- Department of Radiology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL, USA
- Nathan Sarkar
- University of Maryland School of Medicine, Baltimore, MD, USA
- Yunting Fu
- Health Sciences and Human Services Library, University of Maryland, Baltimore, Baltimore, MD, USA
|
20
|
Dreizin D. The American Society of Emergency Radiology (ASER) AI/ML expert panel: inception, mandate, work products, and goals. Emerg Radiol 2023; 30:279-283. [PMID: 37071272 DOI: 10.1007/s10140-023-02135-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Accepted: 04/11/2023] [Indexed: 04/19/2023]
Affiliation(s)
- David Dreizin
- Emergency and Trauma Imaging, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA
|
21
|
Chen CC, Huang JF, Lin WC, Cheng CT, Chen SC, Fu CY, Lee MS, Liao CH, Chung CY. The Feasibility and Performance of Total Hip Replacement Prediction Deep Learning Algorithm with Real World Data. Bioengineering (Basel) 2023; 10:bioengineering10040458. [PMID: 37106645 PMCID: PMC10136253 DOI: 10.3390/bioengineering10040458] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Revised: 03/15/2023] [Accepted: 04/04/2023] [Indexed: 04/29/2023] Open
Abstract
(1) Background: Degenerative hip disorders are common geriatric diseases and a main cause of total hip replacement (THR). The surgical timing of THR is crucial for post-operative recovery. Deep learning (DL) algorithms can be used to detect anomalies in medical images and predict the need for THR. Real-world data (RWD) have been used to validate artificial intelligence and DL algorithms in medicine, but no previous study had demonstrated their use for THR prediction. (2) Methods: We designed a sequential two-stage hip replacement prediction DL algorithm to identify hip joints likely to require THR within three months from plain pelvic radiographs (PXR). We also collected RWD to validate the performance of this algorithm. (3) Results: The RWD comprised 3766 PXRs from 2018 to 2019. The overall accuracy of the algorithm was 0.9633; sensitivity was 0.9450; specificity was 1.000; and precision was 1.000. The negative predictive value was 0.9009, the false negative rate was 0.0550, and the F1 score was 0.9717. The area under the curve was 0.972 (95% confidence interval 0.953-0.987). (4) Conclusions: In summary, this DL algorithm provides an accurate and reliable method for detecting hip degeneration and predicting the need for THR. RWD offered an alternative means of validating the algorithm's function while saving time and cost.
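All of the figures reported in the results follow from a single confusion matrix; a small helper showing how they relate may be useful (the counts below are hypothetical, chosen only so that precision and specificity equal 1.000 as in the paper, not taken from the study):

```python
def binary_metrics(tp, fp, tn, fn):
    """Derive the usual binary-classification metrics (accuracy, sensitivity,
    specificity, precision, NPV, F1) from raw confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)                  # recall / true positive rate
    spec = tn / (tn + fp)                  # true negative rate
    prec = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    f1 = 2 * prec * sens / (prec + sens)
    return dict(accuracy=acc, sensitivity=sens, specificity=spec,
                precision=prec, npv=npv, f1=f1)

# hypothetical counts: zero false positives force precision = specificity = 1.0
m = binary_metrics(tp=172, fp=0, tn=100, fn=10)
```

Note how precision of exactly 1.000, as reported, implies zero false positives on the evaluation set; the false negative rate is simply 1 minus sensitivity.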
Affiliation(s)
- Chih-Chi Chen
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Jen-Fu Huang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Wei-Cheng Lin
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Department of Electrical Engineering, Chang Gung University, Taoyuan 33302, Taiwan
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Shann-Ching Chen
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chih-Yuan Fu
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Mel S Lee
- Department of Orthopaedic Surgery, Pao-Chien Hospital, Pingtung 90078, Taiwan
- Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chia-Ying Chung
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
|
22
|
Berson ER, Aboian MS, Malhotra A, Payabvash S. Artificial Intelligence for Neuroimaging and Musculoskeletal Radiology: Overview of Current Commercial Algorithms. Semin Roentgenol 2023; 58:178-183. [PMID: 37087138 PMCID: PMC10122717 DOI: 10.1053/j.ro.2023.03.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Revised: 03/05/2023] [Accepted: 03/08/2023] [Indexed: 04/03/2023]
Abstract
There is a rapidly increasing number of artificial intelligence (AI) products cleared by the Food and Drug Administration (FDA) for quantification, identification, and even diagnosis in clinical radiology. This review article summarizes the landscape of current commercial software products in neuroimaging and musculoskeletal radiology. We discuss key applications, provide an overview of currently FDA-cleared products, and summarize relevant peer-reviewed publications on these products where available.
Affiliation(s)
- Elisa R Berson
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT
- Mariam S Aboian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT
- Ajay Malhotra
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT
- Seyedmehdi Payabvash
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT
|
23
|
Naguib SM, Hamza HM, Hosny KM, Saleh MK, Kassem MA. Classification of Cervical Spine Fracture and Dislocation Using Refined Pre-Trained Deep Model and Saliency Map. Diagnostics (Basel) 2023; 13:diagnostics13071273. [PMID: 37046491 PMCID: PMC10093757 DOI: 10.3390/diagnostics13071273] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 03/23/2023] [Accepted: 03/25/2023] [Indexed: 03/30/2023] Open
Abstract
Cervical spine (CS) fractures or dislocations are medical emergencies that may lead to serious consequences, such as significant functional disability, permanent paralysis, or even death. Therefore, CS injuries should be diagnosed without delay. This paper proposes an accurate computer-aided diagnosis system based on deep learning (AlexNet and GoogleNet) for classifying CS injuries as fractures or dislocations. The proposed system aims to support physicians in diagnosing CS injuries, especially in emergency services. We trained the model on a dataset containing 2009 X-ray images (530 CS dislocation, 772 CS fracture, and 707 normal images). The results show 99.56%, 99.33%, 99.67%, and 99.33% for accuracy, sensitivity, specificity, and precision, respectively. Finally, a saliency map was used to measure the spatial support of a specific class inside an image. This work targets both research and clinical purposes: the designed software could be installed on the devices that capture CS images, so that each captured image is fed directly to the model for clinical decision support in emergencies.
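The saliency-map step described in this abstract measures the spatial support of a predicted class inside an image. A minimal, dependency-light illustration of the idea follows, using a toy linear scorer and finite-difference gradients rather than the authors' AlexNet/GoogleNet pipeline; all names and shapes here are illustrative assumptions:

```python
import numpy as np

def saliency_map(score_fn, image, eps=1e-3):
    """Approximate |d score / d pixel| by central finite differences.

    For a real CNN this gradient would come from backpropagation;
    finite differences keep this toy example dependency-free.
    """
    sal = np.zeros_like(image, dtype=float)
    it = np.nditer(image, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        bumped_up = image.copy()
        bumped_up[idx] += eps
        bumped_dn = image.copy()
        bumped_dn[idx] -= eps
        sal[idx] = abs(score_fn(bumped_up) - score_fn(bumped_dn)) / (2 * eps)
    return sal

# Toy "model": the class score is a weighted sum of pixels, so the
# saliency map should recover the magnitude of the weights.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))
score = lambda img: float((weights * img).sum())

img = rng.normal(size=(4, 4))
sal = saliency_map(score, img)
```

For the linear scorer the recovered map equals `|weights|` up to rounding, so the brightest saliency pixel coincides with the largest-magnitude weight, which is exactly the "spatial support" interpretation used in the paper.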
24
Manuel Román-Belmonte J, De la Corte-Rodríguez H, Adriana Rodríguez-Damiani B, Carlos Rodríguez-Merchán E. Artificial Intelligence in Musculoskeletal Conditions. ARTIF INTELL 2023. [DOI: 10.5772/intechopen.110696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/05/2023]
Abstract
Artificial intelligence (AI) refers to computer capabilities that resemble human intelligence. AI implies the ability to learn and perform tasks that have not been specifically programmed. Moreover, it is an iterative process involving the ability of computerized systems to capture information, transform it into knowledge, and process it to produce adaptive changes in the environment. A large labeled database is needed to train the AI system and generate a robust algorithm. Otherwise, the algorithm cannot be applied in a generalized way. AI can facilitate the interpretation and acquisition of radiological images. In addition, it can facilitate the detection of trauma injuries and assist in orthopedic and rehabilitative processes. The applications of AI in musculoskeletal conditions are promising and are likely to have a significant impact on the future management of these patients.
25
Anderson PG, Baum GL, Keathley N, Sicular S, Venkatesh S, Sharma A, Daluiski A, Potter H, Hotchkiss R, Lindsey RV, Jones RM. Deep Learning Assistance Closes the Accuracy Gap in Fracture Detection Across Clinician Types. Clin Orthop Relat Res 2023; 481:580-588. [PMID: 36083847 PMCID: PMC9928835 DOI: 10.1097/corr.0000000000002385] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Accepted: 08/05/2022] [Indexed: 01/31/2023]
Abstract
BACKGROUND Missed fractures are the most common diagnostic errors in musculoskeletal imaging and can result in treatment delays and preventable morbidity. Deep learning, a subfield of artificial intelligence, can be used to accurately detect fractures by training algorithms to emulate the judgments of expert clinicians. Deep learning systems that detect fractures are often limited to specific anatomic regions and require regulatory approval to be used in practice. Once these hurdles are overcome, deep learning systems have the potential to improve clinician diagnostic accuracy and patient care. QUESTIONS/PURPOSES This study aimed to evaluate whether a Food and Drug Administration-cleared deep learning system that identifies fractures in adult musculoskeletal radiographs would improve diagnostic accuracy for fracture detection across different types of clinicians. Specifically, this study asked: (1) What are the trends in musculoskeletal radiograph interpretation by different clinician types in the publicly available Medicare claims data? (2) Does the deep learning system improve clinician accuracy in diagnosing fractures on radiographs and, if so, is there a greater benefit for clinicians with limited training in musculoskeletal imaging? METHODS We used the publicly available Medicare Part B Physician/Supplier Procedure Summary data provided by the Centers for Medicare & Medicaid Services to determine the trends in musculoskeletal radiograph interpretation by clinician type. In addition, we conducted a multiple-reader, multiple-case study to assess whether clinician accuracy in diagnosing fractures on radiographs was superior when aided by the deep learning system compared with when unaided. 
Twenty-four clinicians (radiologists, orthopaedic surgeons, physician assistants, primary care physicians, and emergency medicine physicians) with a median (range) of 16 years (2 to 37) of experience postresidency each assessed 175 unique musculoskeletal radiographic cases under aided and unaided conditions (4200 total case-physician pairs per condition). These cases comprised radiographs from 12 different anatomic regions (ankle, clavicle, elbow, femur, forearm, hip, humerus, knee, pelvis, shoulder, tibia and fibula, and wrist) and were randomly selected from 12 hospitals and healthcare centers. The gold standard for fracture diagnosis was the majority opinion of three US board-certified orthopaedic surgeons or radiologists who independently interpreted the case. The clinicians' diagnostic accuracy was determined by the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, sensitivity, and specificity. Secondary analyses evaluated the fracture miss rate (1-sensitivity) by clinicians with and without extensive training in musculoskeletal imaging. RESULTS Medicare claims data revealed that physician assistants showed the greatest increase in interpretation of musculoskeletal radiographs within the analyzed time period (2012 to 2018), although clinicians with extensive training in imaging (radiologists and orthopaedic surgeons) still interpreted the majority of the musculoskeletal radiographs. Clinicians aided by the deep learning system had higher accuracy diagnosing fractures in radiographs compared with when unaided (unaided AUC: 0.90 [95% CI 0.89 to 0.92]; aided AUC: 0.94 [95% CI 0.93 to 0.95]; difference in least square mean per the Dorfman, Berbaum, Metz model AUC: 0.04 [95% CI 0.01 to 0.07]; p < 0.01).
Clinician sensitivity increased when aided compared with when unaided (aided: 90% [95% CI 88% to 92%]; unaided: 82% [95% CI 79% to 84%]), and specificity increased when aided compared with when unaided (aided: 92% [95% CI 91% to 93%]; unaided: 89% [95% CI 88% to 90%]). Clinicians with limited training in musculoskeletal imaging missed a higher percentage of fractures when unaided compared with radiologists (miss rate for clinicians with limited imaging training: 20% [95% CI 17% to 24%]; miss rate for radiologists: 14% [95% CI 9% to 19%]). However, when assisted by the deep learning system, clinicians with limited training in musculoskeletal imaging reduced their fracture miss rate, resulting in a similar miss rate to radiologists (miss rate for clinicians with limited imaging training: 9% [95% CI 7% to 12%]; miss rate for radiologists: 10% [95% CI 6% to 15%]). CONCLUSION Clinicians were more accurate at diagnosing fractures when aided by the deep learning system, particularly those clinicians with limited training in musculoskeletal image interpretation. Reducing the number of missed fractures may allow for improved patient care and increased patient mobility. LEVEL OF EVIDENCE Level III, diagnostic study.
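The metrics in this abstract follow standard definitions: sensitivity, specificity, and a miss rate of 1 - sensitivity. For concreteness, a small sketch of how these come out of raw reading counts; the counts below are made up for illustration, not the study's data:

```python
def reader_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and miss rate from confusion counts."""
    sensitivity = tp / (tp + fn)   # fraction of fracture cases caught
    specificity = tn / (tn + fp)   # fraction of normal cases called normal
    miss_rate = 1.0 - sensitivity  # fraction of fracture cases missed
    return sensitivity, specificity, miss_rate

# Hypothetical reader: 90 of 100 fracture cases caught,
# 92 of 100 normal cases correctly called normal.
sens, spec, miss = reader_metrics(tp=90, fn=10, tn=92, fp=8)
assert (sens, spec) == (0.90, 0.92)
assert abs(miss - 0.10) < 1e-12
```

This is why the abstract can report the aided improvement equivalently as "sensitivity rose from 82% to 90%" or "the miss rate fell from 18% to 10%": the two numbers are complements.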
Affiliation(s)
- Serge Sicular: Imagen Technologies, New York, NY, USA; The Mount Sinai Hospital, New York, NY, USA

26
Kim T, Goh TS, Lee JS, Lee JH, Kim H, Jung ID. Transfer learning-based ensemble convolutional neural network for accelerated diagnosis of foot fractures. Phys Eng Sci Med 2023; 46:265-277. [PMID: 36625995 DOI: 10.1007/s13246-023-01215-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Accepted: 01/02/2023] [Indexed: 01/11/2023]
Abstract
The complex shape of the foot, which consists of 26 bones and variable ligaments, tendons, and muscles, leads to misdiagnosis of foot fractures. Despite the introduction of artificial intelligence (AI) to diagnose fractures, the accuracy of foot fracture diagnosis is lower than that of conventional methods. We developed an AI assistant system that supports consistent diagnosis and helps interns or non-experts improve their diagnosis of foot fractures, and compared the effectiveness of the AI assistance across groups with different proficiency. Contrast-limited adaptive histogram equalization was used to improve the visibility of the original radiographs, and data augmentation was applied to prevent overfitting. Preprocessed radiographs were fed to an ensemble of transfer learning-based convolutional neural networks (CNNs) developed for foot fracture detection with three models: InceptionResNetV2, MobilenetV1, and ResNet152V2. After training the model, score class activation mapping was applied to visualize the fracture based on the model prediction. The prediction result was evaluated by the receiver operating characteristic (ROC) curve and its area under the curve (AUC), and the F1-Score. On the test set, the ensemble model exhibited better classification ability (F1-Score: 0.837, AUC: 0.95, Accuracy: 86.1%) than the single models, which showed an accuracy of 82.4%. With AI assistance, the accuracy of the orthopedic fellow, resident, intern, and student groups improved by 3.75%, 7.25%, 6.25%, and 7%, respectively, and diagnosis time was reduced by 21.9%, 14.7%, 24.4%, and 34.6%, respectively.
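The ensembling described in this abstract can be as simple as soft voting: average the per-class probabilities of the individual networks, then take the argmax. A dependency-light sketch; the three probability arrays below are made-up stand-ins for the InceptionResNetV2, MobilenetV1, and ResNet152V2 outputs:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Soft-voting ensemble: average class probabilities, then argmax."""
    avg = np.mean(prob_list, axis=0)   # shape (n_samples, n_classes)
    return avg, avg.argmax(axis=1)

# Probabilities for 2 radiographs x 2 classes (no fracture, fracture)
# from three hypothetical base models.
p1 = np.array([[0.8, 0.2], [0.4, 0.6]])
p2 = np.array([[0.7, 0.3], [0.3, 0.7]])
p3 = np.array([[0.6, 0.4], [0.6, 0.4]])

avg, labels = ensemble_predict([p1, p2, p3])
assert np.allclose(avg, [[0.7, 0.3], [1.3 / 3, 1.7 / 3]])
assert labels.tolist() == [0, 1]
```

Note how the second radiograph is called fracture-positive even though one base model disagrees: averaging smooths over individual-model errors, which is the usual motivation for ensembling.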
Affiliation(s)
- Taekyeong Kim: Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, Republic of Korea
- Tae Sik Goh: Department of Orthopaedic Surgery, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, 49241, Republic of Korea
- Jung Sub Lee: Department of Orthopaedic Surgery, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, 49241, Republic of Korea
- Ji Hyun Lee: Health Insurance Review & Assessment Service, Wonju, 26465, Republic of Korea
- Hayeol Kim: Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, Republic of Korea
- Im Doo Jung: Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, Republic of Korea

27
Kim S, Rebmann P, Tran PH, Kellner E, Reisert M, Steybe D, Bayer J, Bamberg F, Kotter E, Russe M. Multiclass datasets expand neural network utility: an example on ankle radiographs. Int J Comput Assist Radiol Surg 2023; 18:819-826. [PMID: 36729290 PMCID: PMC10113347 DOI: 10.1007/s11548-023-02839-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2022] [Accepted: 01/18/2023] [Indexed: 02/03/2023]
Abstract
PURPOSE Artificial intelligence in computer vision has been increasingly adopted in clinical applications since the implementation of neural networks, potentially providing incremental information beyond the mere detection of pathology. As its algorithmic approach propagates input variation, neural networks could be used to identify and evaluate relevant image features. In this study, we introduce a basic dataset structure and demonstrate a pertinent use case. METHODS A multidimensional classification of ankle x-rays (n = 1493) rating a variety of features, including fracture certainty, was used to confirm its usability for separating input variations. We trained a customized neural network on the task of fracture detection using a state-of-the-art preprocessing and training protocol. By grouping the radiographs into subsets according to their image features, the influence of selected features on model performance was evaluated via selective training. RESULTS The models trained on our dataset outperformed most comparable models in the current literature with an ROC AUC of 0.943. Excluding ankle x-rays with signs of surgery improved fracture classification performance (AUC 0.955), while limiting the training set to otherwise healthy ankles with and without fracture had no consistent effect. CONCLUSION Using multiclass datasets and comparing model performance, we were able to demonstrate signs of surgery as a confounding factor, which, following elimination, improved our model. In contrast, eliminating pathologies other than fracture had no effect on model performance, suggesting a beneficial influence of feature variability for robust model training. Thus, multiclass datasets allow for evaluation of distinct image features, deepening our understanding of pathology imaging.
Affiliation(s)
- Suam Kim: Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Medical Center-University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Philipp Rebmann: Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Medical Center-University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Phuong Hien Tran: Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Medical Center-University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Elias Kellner: Department of Medical Physics, Faculty of Medicine, Medical Center-University of Freiburg, University of Freiburg, Freiburg, Germany
- Marco Reisert: Department of Medical Physics, Faculty of Medicine, Medical Center-University of Freiburg, University of Freiburg, Freiburg, Germany
- David Steybe: Department of Oral and Maxillofacial Surgery, Faculty of Medicine, Medical Center-University of Freiburg, Freiburg, Germany
- Jörg Bayer: Department of Trauma and Orthopaedic Surgery, Schwarzwald-Baar Hospital, Villingen-Schwenningen, Germany
- Fabian Bamberg: Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Medical Center-University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Elmar Kotter: Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Medical Center-University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany
- Maximilian Russe: Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Medical Center-University of Freiburg, Hugstetter Str. 55, 79106, Freiburg, Germany

28
'Assessment of an artificial intelligence aid for the detection of appendicular skeletal fractures in children and young adults by senior and junior radiologists': reply to Sammer et al. Pediatr Radiol 2023; 53:341-342. [PMID: 36472646 DOI: 10.1007/s00247-022-05554-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 11/16/2022] [Accepted: 11/21/2022] [Indexed: 12/12/2022]
29
Detecting pediatric wrist fractures using deep-learning-based object detection. Pediatr Radiol 2023; 53:1125-1134. [PMID: 36650360 DOI: 10.1007/s00247-023-05588-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/04/2022] [Revised: 12/09/2022] [Accepted: 12/30/2022] [Indexed: 01/19/2023]
Abstract
BACKGROUND Missed fractures are the leading cause of diagnostic error in the emergency department, and fractures of pediatric bones, particularly subtle wrist fractures, can be misidentified because of their varying characteristics and responses to injury. OBJECTIVE This study evaluated the utility of an object detection deep learning framework for classifying pediatric wrist fractures as positive or negative for fracture, including subtle buckle fractures of the distal radius, and evaluated the performance of this algorithm as augmentation to trainee radiograph interpretation. MATERIALS AND METHODS We obtained 395 posteroanterior wrist radiographs from unique pediatric patients (65% positive for fracture, 30% positive for distal radial buckle fracture) and divided them into train (n = 229), tune (n = 41) and test (n = 125) sets. We trained a Faster R-CNN (region-based convolutional neural network) deep learning object-detection model. Two pediatric and two radiology residents evaluated radiographs initially without the artificial intelligence (AI) assistance, and then subsequently with access to the bounding box generated by the Faster R-CNN model. RESULTS The Faster R-CNN model demonstrated an area under the curve (AUC) of 0.92 (95% confidence interval [CI] 0.87-0.97), accuracy of 88% (n = 110/125; 95% CI 81-93%), sensitivity of 88% (n = 70/80; 95% CI 78-94%) and specificity of 89% (n = 40/45, 95% CI 76-96%) in identifying any fracture and identified 90% of buckle fractures (n = 35/39, 95% CI 76-97%). Access to Faster R-CNN model predictions significantly improved average resident accuracy from 80 to 93% in detecting any fracture (P < 0.001) and from 69 to 92% in detecting buckle fracture (P < 0.001). After accessing AI predictions, residents significantly outperformed AI in cases of disagreement (73% resident correct vs. 27% AI, P = 0.002). 
CONCLUSION An object-detection-based deep learning approach trained with only a few hundred examples identified radiographs containing pediatric wrist fractures with high accuracy. Access to model predictions significantly improved resident accuracy in diagnosing these fractures.
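An object-detection model like the Faster R-CNN in this study returns scored bounding boxes rather than a single label, so the radiograph-level fracture call (positive/negative) is typically derived by thresholding the box scores. A minimal sketch of that step; the boxes, the `(x1, y1, x2, y2, score)` convention, and the 0.5 threshold are illustrative assumptions, not the study's settings:

```python
def radiograph_positive(detections, score_threshold=0.5):
    """Call a radiograph fracture-positive if any box clears the threshold.

    `detections` is a list of (x1, y1, x2, y2, score) tuples, the shape
    of output produced by typical detection frameworks.
    """
    return any(score >= score_threshold for *_, score in detections)

# One confident box -> positive; only weak boxes, or none -> negative.
assert radiograph_positive([(10, 20, 60, 80, 0.91), (5, 5, 15, 15, 0.12)])
assert not radiograph_positive([(5, 5, 15, 15, 0.31)])
assert not radiograph_positive([])
```

Sweeping `score_threshold` over its range is what traces out the ROC curve whose AUC of 0.92 the abstract reports; the residents additionally saw the box location, which classification-only models cannot provide.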
30
Keller M, Guebeli A, Thieringer F, Honigmann P. Artificial intelligence in patient-specific hand surgery: a scoping review of literature. Int J Comput Assist Radiol Surg 2023:10.1007/s11548-023-02831-3. [PMID: 36633789 PMCID: PMC10363089 DOI: 10.1007/s11548-023-02831-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2022] [Accepted: 01/02/2023] [Indexed: 01/13/2023]
Abstract
PURPOSE The implementation of artificial intelligence in hand surgery and rehabilitation is gaining popularity. The purpose of this scoping review was to give an overview of implementations of artificial intelligence in hand surgery and rehabilitation and their current significance in clinical practice. METHODS A systematic literature search of the MEDLINE/PubMed and Cochrane Collaboration libraries was conducted. The review was conducted according to the framework outlined by the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Extension for Scoping Reviews. A narrative summary of the papers is presented to give an orienting overview of this rapidly evolving topic. RESULTS The primary search yielded 435 articles. After application of the inclusion/exclusion criteria and addition of a supplementary search, 235 articles were included in the final review. To facilitate navigation through this heterogeneous field, the articles were clustered into four groups of thematically related publications. The most common applications of artificial intelligence in hand surgery and rehabilitation target automated image analysis of anatomic structures, fracture detection and localization, and automated screening for other hand and wrist pathologies such as carpal tunnel syndrome, rheumatoid arthritis or osteoporosis. Compared to other medical subspecialties, the number of applications in hand surgery is still small. CONCLUSION Although various promising applications of artificial intelligence in hand surgery and rehabilitation show strong performance, their implementation mostly takes place within the context of experimental studies. Therefore, their use in daily clinical routine is still limited.
Affiliation(s)
- Marco Keller: Hand Surgery, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland, 4410, Liestal, Switzerland; Medical Additive Manufacturing Research Group, Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland
- Alissa Guebeli: Hand Surgery, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland, 4410, Liestal, Switzerland; Medical Additive Manufacturing Research Group, Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland; Department of Plastic and Hand Surgery, Kantonsspital Aarau, 5001, Aarau, Switzerland
- Florian Thieringer: Medical Additive Manufacturing Research Group, Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland; Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, Basel, Switzerland
- Philipp Honigmann: Hand Surgery, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland, 4410, Liestal, Switzerland; Medical Additive Manufacturing Research Group, Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland; Department of Biomedical Engineering and Physics, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands

31
Karanam SR, Srinivas Y, Chakravarty S. A statistical model approach based on the Gaussian Mixture Model for the diagnosis and classification of bone fractures. INTERNATIONAL JOURNAL OF HEALTHCARE MANAGEMENT 2023. [DOI: 10.1080/20479700.2022.2161146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
Affiliation(s)
- Y. Srinivas: Department of IT, GITAM University, Visakhapatnam, India
- S. Chakravarty: Centurion University of Technology and Management, Odisha, India

32
Ye P, Li S, Wang Z, Tian S, Luo Y, Wu Z, Zhuang Y, Zhang Y, Grzegorzek M, Hou Z. Development and validation of a deep learning-based model to distinguish acetabular fractures on pelvic anteroposterior radiographs. Front Physiol 2023; 14:1146910. [PMID: 37187961 PMCID: PMC10176114 DOI: 10.3389/fphys.2023.1146910] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Accepted: 04/12/2023] [Indexed: 05/17/2023] Open
Abstract
Objective: To develop and test a deep learning (DL) model to distinguish acetabular fractures (AFs) on pelvic anteroposterior radiographs (PARs) and compare its performance to that of clinicians. Materials and methods: A total of 1,120 patients from a large level-I trauma center were enrolled and allocated at a 3:1 ratio for the DL model's development and internal test. Another 86 patients from two independent hospitals were collected for external validation. A DL model for identifying AFs was constructed based on DenseNet. AFs were classified into types A, B, and C according to the three-column classification theory. Ten clinicians were recruited for AF detection. A potentially misdiagnosed case (PMC) was defined based on the clinicians' detection results. The detection performance of the clinicians and the DL model was evaluated and compared. The detection performance for different subtypes using DL was assessed using the area under the receiver operating characteristic curve (AUC). Results: The means of the 10 clinicians' sensitivity, specificity, and accuracy in identifying AFs were 0.750/0.735, 0.909/0.909, and 0.829/0.822 in the internal test/external validation set, respectively. The sensitivity, specificity, and accuracy of the DL detection model were 0.926/0.872, 0.978/0.988, and 0.952/0.930, respectively. The DL model identified type A fractures with an AUC of 0.963 [95% confidence interval (CI): 0.927-0.985]/0.950 (95% CI: 0.867-0.989); type B fractures with an AUC of 0.991 (95% CI: 0.967-0.999)/0.989 (95% CI: 0.930-1.000); and type C fractures with an AUC of 1.000 (95% CI: 0.975-1.000)/1.000 (95% CI: 0.897-1.000) in the test/validation set. The DL model correctly recognized 56.5% (26/46) of PMCs. Conclusion: A DL model for distinguishing AFs on PARs is feasible. In this study, the DL model achieved diagnostic performance comparable to or even superior to that of clinicians.
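The per-subtype AUCs quoted in this abstract have a useful probabilistic reading: the AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (ties counted as one half). A small pure-Python sketch of that rank-based computation, with toy scores rather than the study's data:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: P(score_pos > score_neg),
    counting ties as one half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Toy scores for fracture (positive) and no-fracture (negative) cases.
# One positive ties a negative at 0.4, contributing half a win.
auc = roc_auc([0.9, 0.8, 0.4], [0.4, 0.3, 0.2])
assert abs(auc - 8.5 / 9) < 1e-12
```

Under this reading, the type C AUC of 1.000 means every type C fracture in the test set was scored above every negative case, which is also why its confidence interval still dips below 1.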
Affiliation(s)
- Pengyu Ye: Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Sihe Li: University of Lübeck, Lübeck, Schleswig-Holstein, Germany
- Zhongzheng Wang: Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Siyu Tian: Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Yi Luo: Heidelberg University, Heidelberg, Baden-Württemberg, Germany
- Zhanyong Wu: Orthopedic Hospital of Xingtai, Xingtai, China
- Yan Zhuang: Xi’an Honghui Hospital, Xi’an, Shaanxi, China
- Yingze Zhang: Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Zhiyong Hou (corresponding author): Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China

33
Adaptive IoU Thresholding for Improving Small Object Detection: A Proof-of-Concept Study of Hand Erosions Classification of Patients with Rheumatic Arthritis on X-ray Images. Diagnostics (Basel) 2022; 13:diagnostics13010104. [PMID: 36611395 PMCID: PMC9818241 DOI: 10.3390/diagnostics13010104] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Revised: 12/08/2022] [Accepted: 12/28/2022] [Indexed: 12/31/2022] Open
Abstract
In recent years, much research has evaluated the radiographic destruction of finger joints in patients with rheumatoid arthritis (RA) using deep learning models. Unfortunately, most previous models were not clinically applicable due to the small size of the target regions and their close spatial relationships. More recently, a network architecture called RetinaNet, in combination with the focal loss function, has proven reliable for detecting even small objects. The study therefore aimed to raise recognition performance to a clinically valuable level by proposing an innovative approach with adaptive changes in intersection over union (IoU) values during training of RetinaNets with the focal loss error function. To this end, the erosion score was determined using the Sharp/van der Heijde (SvH) metric on 300 conventional radiographs from 119 patients with RA. Subsequently, a standard RetinaNet with different static IoU values as well as adaptively modified IoU values was trained, and the variants were compared in terms of accuracy, mean average precision (mAP), and IoU. With the proposed approach of adaptive IoU values during training, erosion detection accuracy could be improved to 94% with an mAP of 0.81 ± 0.18. In contrast, RetinaNets with static IoU values achieved only an accuracy of 80% and an mAP of 0.43 ± 0.24. Thus, adaptive adjustment of IoU values during training is a simple and effective method to increase the recognition accuracy of small objects such as finger and wrist joints.
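The core quantity in this approach is the intersection over union between a predicted and a ground-truth box, with the positive-match threshold changed adaptively over training. A minimal sketch of IoU plus one plausible schedule that starts permissive and tightens later; the linear ramp and the 0.3-to-0.5 range are illustrative assumptions, not the authors' exact rule:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def adaptive_iou_threshold(epoch, start=0.3, end=0.5, ramp_epochs=10):
    """Linearly raise the positive-match IoU threshold during training."""
    t = min(epoch / ramp_epochs, 1.0)
    return start + t * (end - start)

# Two 10x10 boxes overlapping on a 5x10 strip: IoU = 50 / 150.
assert abs(iou((0, 0, 10, 10), (5, 0, 15, 10)) - 50 / 150) < 1e-12
assert adaptive_iou_threshold(0) == 0.3
assert abs(adaptive_iou_threshold(10) - 0.5) < 1e-12
```

The intuition behind such a schedule: for tiny objects like finger-joint erosions, even a slightly offset anchor box has a low IoU, so demanding a strict threshold from epoch 0 starves the network of positive matches; relaxing it early and tightening it later keeps training signal while still ending with precise localization.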
34
Cellina M, Cè M, Irmici G, Ascenti V, Caloro E, Bianchi L, Pellegrino G, D’Amico N, Papa S, Carrafiello G. Artificial Intelligence in Emergency Radiology: Where Are We Going? Diagnostics (Basel) 2022; 12:diagnostics12123223. [PMID: 36553230 PMCID: PMC9777804 DOI: 10.3390/diagnostics12123223] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2022] [Revised: 12/11/2022] [Accepted: 12/16/2022] [Indexed: 12/23/2022] Open
Abstract
Emergency Radiology is a unique branch of imaging, as rapidity in the diagnosis and management of different pathologies is essential to saving patients' lives. Artificial Intelligence (AI) has many potential applications in emergency radiology: firstly, image acquisition can be facilitated by reducing acquisition times through automatic positioning and minimizing artifacts with AI-based reconstruction systems to optimize image quality, even in critical patients; secondly, it enables an efficient workflow (AI algorithms integrated with RIS-PACS workflow), by analyzing the characteristics and images of patients, detecting high-priority examinations and patients with emergent critical findings. Different machine and deep learning algorithms have been trained for the automated detection of different types of emergency disorders (e.g., intracranial hemorrhage, bone fractures, pneumonia), to help radiologists to detect relevant findings. AI-based smart reporting, summarizing patients' clinical data, and analyzing the grading of the imaging abnormalities, can provide an objective indicator of the disease's severity, resulting in quick and optimized treatment planning. In this review, we provide an overview of the different AI tools available in emergency radiology, to keep radiologists up to date on the current technological evolution in this field.
Affiliation(s)
- Michaela Cellina (corresponding author): Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Milano, Piazza Principessa Clotilde 3, 20121 Milan, Italy
- Maurizio Cè: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Giovanni Irmici: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Velio Ascenti: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Elena Caloro: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Lorenzo Bianchi: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Giuseppe Pellegrino: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
- Natascha D’Amico: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Sergio Papa: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Gianpaolo Carrafiello: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy; Radiology Department, Fondazione IRCCS Cà Granda, Policlinico di Milano Ospedale Maggiore, Via Sforza 35, 20122 Milan, Italy

35
Artificial Intelligence (AI) for Fracture Diagnosis: An Overview of Current Products and Considerations for Clinical Adoption, From the AJR Special Series on AI Applications. AJR Am J Roentgenol 2022; 219:869-878. [PMID: 35731103 DOI: 10.2214/ajr.22.27873] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Fractures are common injuries that can be difficult to diagnose, with missed fractures accounting for most misdiagnoses in the emergency department. Artificial intelligence (AI) and, specifically, deep learning have shown a strong ability to accurately detect fractures and augment the performance of radiologists in proof-of-concept research settings. Although the number of real-world AI products available for clinical use continues to increase, guidance for practicing radiologists in the adoption of this new technology is limited. This review describes how AI and deep learning algorithms can help radiologists to better diagnose fractures. The article also provides an overview of commercially available U.S. FDA-cleared AI tools for fracture detection as well as considerations for the clinical adoption of these tools by radiology practices.
Collapse
|
36
|
Hayashi D, Kompel AJ, Ventre J, Ducarouge A, Nguyen T, Regnard NE, Guermazi A. Automated detection of acute appendicular skeletal fractures in pediatric patients using deep learning. Skeletal Radiol 2022; 51:2129-2139. [PMID: 35522332 DOI: 10.1007/s00256-022-04070-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/16/2022] [Revised: 04/28/2022] [Accepted: 04/28/2022] [Indexed: 02/02/2023]
Abstract
OBJECTIVE We aimed to perform an external validation of an existing commercial AI software program (BoneView™) for the detection of acute appendicular fractures in pediatric patients. MATERIALS AND METHODS In our retrospective study, anonymized radiographic exams of extremities, with or without fractures, from pediatric patients (aged 2-21) were included. Three hundred exams (150 with fractures and 150 without fractures) were included, comprising 60 exams per body part (hand/wrist, elbow/upper arm, shoulder/clavicle, foot/ankle, leg/knee). The Ground Truth was defined by experienced radiologists. A deep learning algorithm interpreted the radiographs for fracture detection, and its diagnostic performance was compared against the Ground Truth, and receiver operating characteristic analysis was done. Statistical analyses included sensitivity per patient (the proportion of patients for whom all fractures were identified) and sensitivity per fracture (the proportion of fractures identified by the AI among all fractures), specificity per patient, and false-positive rate per patient. RESULTS There were 167 boys and 133 girls with a mean age of 10.8 years. For all fractures, sensitivity per patient (average [95% confidence interval]) was 91.3% [85.6, 95.3], specificity per patient was 90.0% [84.0,94.3], sensitivity per fracture was 92.5% [87.0, 96.2], and false-positive rate per patient in patients who had no fracture was 0.11. The patient-wise area under the curve was 0.93 for all fractures. AI diagnostic performance was consistently high across all anatomical locations and different types of fractures except for avulsion fractures (sensitivity per fracture 72.7% [39.0, 94.0]). CONCLUSION The BoneView™ deep learning algorithm provides high overall diagnostic performance for appendicular fracture detection in pediatric patients.
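The abstract distinguishes sensitivity per patient (all of a patient's fractures identified) from sensitivity per fracture (fractures identified among all fractures), which use different denominators and can therefore diverge. A minimal sketch of that distinction — the data and helper names are illustrative, not the study's code:

```python
# Hedged sketch: per-patient vs. per-fracture sensitivity, as defined in the
# abstract above. All data below is invented for illustration.

def sensitivity_per_patient(patients):
    """Proportion of fracture patients for whom ALL fractures were identified."""
    positives = [p for p in patients if p["truth"]]
    fully_found = [p for p in positives
                   if set(p["truth"]) <= set(p["detected"])]
    return len(fully_found) / len(positives)

def sensitivity_per_fracture(patients):
    """Proportion of individual fractures identified, pooled over patients."""
    total = sum(len(p["truth"]) for p in patients)
    found = sum(len(set(p["truth"]) & set(p["detected"])) for p in patients)
    return found / total

patients = [
    {"truth": ["radius", "ulna"], "detected": ["radius"]},    # partly missed
    {"truth": ["clavicle"],       "detected": ["clavicle"]},  # fully found
]
per_patient = sensitivity_per_patient(patients)    # 1 of 2 patients -> 0.5
per_fracture = sensitivity_per_fracture(patients)  # 2 of 3 fractures
```

With one fracture missed in a two-fracture patient, the per-patient figure (0.5) is stricter than the per-fracture figure (2/3), which mirrors why studies report both.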
Collapse
Affiliation(s)
- Daichi Hayashi
- Department of Radiology, Boston University School of Medicine, 820 Harrison Avenue, FGH Building, 3rd Floor, Boston, MA, 02118, USA.,Department of Radiology, Stony Brook University Renaissance School of Medicine, HSc Level 4, Room 120, Stony Brook, NY, 11794, USA.
| | - Andrew J Kompel
- Department of Radiology, Boston University School of Medicine, 820 Harrison Avenue, FGH Building, 3rd Floor, Boston, MA, 02118, USA
| | - Jeanne Ventre
- Gleamer, 117-119 Quai de Valmy, 75010, Paris, France
| | - Toan Nguyen
- Gleamer, 117-119 Quai de Valmy, 75010, Paris, France.,Service de Radiopédiatrie, Hôpital Armand-Trousseau, AP-HP, Médecine Sorbonne Université, 26 avenue du Docteur Arnold-Netter, 75012, Paris, France
| | - Nor-Eddine Regnard
- Gleamer, 117-119 Quai de Valmy, 75010, Paris, France.,Réseau d'Imagerie Sud Francilien, 2 avenue de Mousseau, 91000, Evry, France
| | - Ali Guermazi
- Department of Radiology, Boston University School of Medicine, 820 Harrison Avenue, FGH Building, 3rd Floor, Boston, MA, 02118, USA.,Department of Radiology, VA Boston Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA, 02132, USA
| |
Collapse
|
37
|
Nguyen T, Maarek R, Hermann AL, Kammoun A, Marchi A, Khelifi-Touhami MR, Collin M, Jaillard A, Kompel AJ, Hayashi D, Guermazi A, Le Pointe HD. Assessment of an artificial intelligence aid for the detection of appendicular skeletal fractures in children and young adults by senior and junior radiologists. Pediatr Radiol 2022; 52:2215-2226. [PMID: 36169667 DOI: 10.1007/s00247-022-05496-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 07/07/2022] [Accepted: 08/25/2022] [Indexed: 10/14/2022]
Abstract
BACKGROUND As the number of conventional radiographic examinations in pediatric emergency departments increases, so, too, does the number of reading errors by radiologists. OBJECTIVE The aim of this study is to investigate the ability of artificial intelligence (AI) to improve the detection of fractures by radiologists in children and young adults. MATERIALS AND METHODS A cohort of 300 anonymized radiographs performed for the detection of appendicular fractures in patients ages 2 to 21 years was collected retrospectively. The ground truth for each examination was established after an independent review by two radiologists with expertise in musculoskeletal imaging. Discrepancies were resolved by consensus with a third radiologist. Half of the 300 examinations showed at least 1 fracture. Radiographs were read by three senior pediatric radiologists and five radiology residents in the usual manner and then read again immediately after with the help of AI. RESULTS The mean sensitivity for all groups was 73.3% (110/150) without AI; it increased significantly by almost 10% (P<0.001) to 82.8% (125/150) with AI. For junior radiologists, it increased by 10.3% (P<0.001) and for senior radiologists by 8.2% (P=0.08). On average, there was no significant change in specificity (from 89.6% to 90.3% [+0.7%, P=0.28]); for junior radiologists, specificity increased from 86.2% to 87.6% (+1.4%, P=0.42) and for senior radiologists, it decreased from 95.1% to 94.9% (-0.2%, P=0.23). The stand-alone sensitivity and specificity of the AI were, respectively, 91% and 90%. CONCLUSION With the help of AI, sensitivity increased by an average of 10% without significantly decreasing specificity in fracture detection in a predominantly pediatric population.
Collapse
Affiliation(s)
- Toan Nguyen
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France.
| | - Richard Maarek
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
| | - Anne-Laure Hermann
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
| | - Amina Kammoun
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
| | - Antoine Marchi
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
| | - Mohamed R Khelifi-Touhami
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
| | - Mégane Collin
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
| | - Aliénor Jaillard
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
| | - Andrew J Kompel
- Department of Radiology, Boston University School of Medicine, Boston, MA, USA
| | - Daichi Hayashi
- Department of Radiology, Boston University School of Medicine, Boston, MA, USA.,Department of Radiology, Stony Brook University Renaissance School of Medicine, Stony Brook, NY, USA
| | - Ali Guermazi
- Department of Radiology, Boston University School of Medicine, Boston, MA, USA.,Department of Radiology, VA Boston Healthcare System, West Roxbury, MA, USA
| | - Hubert Ducou Le Pointe
- Department of Pediatric Radiology, Armand Trousseau Hospital, 26 Av. du Dr Arnold Netter, 75012, Paris, France
| |
Collapse
|
38
|
Hill BG, Krogue JD, Jevsevar DS, Schilling PL. Deep Learning and Imaging for the Orthopaedic Surgeon: How Machines "Read" Radiographs. J Bone Joint Surg Am 2022; 104:1675-1686. [PMID: 35867718 DOI: 10.2106/jbjs.21.01387] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
➤ In the not-so-distant future, orthopaedic surgeons will be exposed to machines that begin to automatically "read" medical imaging studies using a technology called deep learning. ➤ Deep learning has demonstrated remarkable progress in the analysis of medical imaging across a range of modalities that are commonly used in orthopaedics, including radiographs, computed tomographic scans, and magnetic resonance imaging scans. ➤ There is a growing body of evidence showing clinical utility for deep learning in musculoskeletal radiography, as evidenced by studies that use deep learning to achieve an expert or near-expert level of performance for the identification and localization of fractures on radiographs. ➤ Deep learning is currently in the very early stages of entering the clinical setting, involving validation and proof-of-concept studies for automated medical image interpretation. ➤ The success of deep learning in the analysis of medical imaging has been propelling the field forward so rapidly that now is the time for surgeons to pause and understand how this technology works at a conceptual level, before (not after) the technology ends up in front of us and our patients. That is the purpose of this article.
Collapse
Affiliation(s)
- Brandon G Hill
- Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire
| | - Justin D Krogue
- Google Health, Palo Alto, California.,Department of Orthopaedic Surgery, University of California San Francisco, San Francisco, California
| | - David S Jevsevar
- Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire.,The Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
| | - Peter L Schilling
- Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire.,The Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
| |
Collapse
|
39
|
Assessment of performances of a deep learning algorithm for the detection of limbs and pelvic fractures, dislocations, focal bone lesions, and elbow effusions on trauma X-rays. Eur J Radiol 2022; 154:110447. [DOI: 10.1016/j.ejrad.2022.110447] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2022] [Revised: 04/29/2022] [Accepted: 07/19/2022] [Indexed: 11/23/2022]
|
40
|
Hybrid SFNet Model for Bone Fracture Detection and Classification Using ML/DL. SENSORS 2022; 22:s22155823. [PMID: 35957380 PMCID: PMC9371081 DOI: 10.3390/s22155823] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/10/2022] [Revised: 07/28/2022] [Accepted: 08/02/2022] [Indexed: 02/05/2023]
Abstract
An expert performs bone fracture diagnosis using an X-ray image manually, which is a time-consuming process. The development of machine learning (ML), as well as deep learning (DL), has set a new path in medical image diagnosis. In this study, we proposed a novel multi-scale feature fusion of a convolutional neural network (CNN) and an improved Canny edge algorithm that segregates fracture and healthy bone images. The hybrid scale fracture network (SFNet) is a novel two-scale sequential DL model. This model is highly efficient for bone fracture diagnosis and takes less computation time compared to other state-of-the-art deep CNN models. The innovation behind this research is that it works with an improved Canny edge algorithm to obtain edges in the images that localize the fracture region. After that, grey images and their corresponding Canny edge images are fed to the proposed hybrid SFNet for training and evaluation. Furthermore, the performance is also compared with state-of-the-art deep CNN models on a bone image dataset. Our results showed that SFNet with Canny (SFNet + Canny) achieved the highest accuracy, F1-score and recall of 99.12%, 99% and 100%, respectively, for bone fracture diagnosis. This showed that using a Canny edge algorithm improves the performance of the CNN.
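The pipeline above pairs each grey image with an edge map before feeding both to the network. As a rough sketch of that preprocessing step — using a plain Sobel gradient magnitude as a stand-in for the paper's improved Canny algorithm, with all names and data illustrative:

```python
# Hedged sketch: an edge-map preprocessing step of the kind the abstract
# describes. Sobel gradients are a simplification of the paper's improved
# Canny algorithm, used here only to show the shape of the idea.
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map of a 2-D grey image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)  # strong values mark intensity discontinuities

# A vertical step edge produces strong responses along the boundary columns;
# the grey image and this edge map would then form the two network inputs.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
```

In the paper's setup the edge image localizes the fracture region, so the network sees both raw intensities and an explicit boundary channel.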
Collapse
|
41
|
Zhou X, Wang H, Feng C, Xu R, He Y, Li L, Tu C. Emerging Applications of Deep Learning in Bone Tumors: Current Advances and Challenges. Front Oncol 2022; 12:908873. [PMID: 35928860 PMCID: PMC9345628 DOI: 10.3389/fonc.2022.908873] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 06/15/2022] [Indexed: 12/12/2022] Open
Abstract
Deep learning is a subfield of state-of-the-art artificial intelligence (AI) technology, and multiple deep learning-based AI models have been applied to musculoskeletal diseases. Deep learning has shown the capability to assist clinical diagnosis and prognosis prediction in a spectrum of musculoskeletal disorders, including fracture detection, cartilage and spinal lesions identification, and osteoarthritis severity assessment. Meanwhile, deep learning has also been extensively explored in diverse tumors such as prostate, breast, and lung cancers. Recently, the application of deep learning emerges in bone tumors. A growing number of deep learning models have demonstrated good performance in detection, segmentation, classification, volume calculation, grading, and assessment of tumor necrosis rate in primary and metastatic bone tumors based on both radiological (such as X-ray, CT, MRI, SPECT) and pathological images, implicating a potential for diagnosis assistance and prognosis prediction of deep learning in bone tumors. In this review, we first summarized the workflows of deep learning methods in medical images and the current applications of deep learning-based AI for diagnosis and prognosis prediction in bone tumors. Moreover, the current challenges in the implementation of the deep learning method and future perspectives in this field were extensively discussed.
Collapse
Affiliation(s)
- Xiaowen Zhou
- Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
- Xiangya School of Medicine, Central South University, Changsha, China
| | - Hua Wang
- Xiangya School of Medicine, Central South University, Changsha, China
| | - Chengyao Feng
- Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
- Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital, Central South University, Changsha, China
| | - Ruilin Xu
- Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
- Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital, Central South University, Changsha, China
| | - Yu He
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, China
| | - Lan Li
- Department of Pathology, The Second Xiangya Hospital, Central South University, Changsha, China
| | - Chao Tu
- Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
- Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital, Central South University, Changsha, China
- *Correspondence: Chao Tu
| |
Collapse
|
42
|
Zhang X, Yang Y, Shen YW, Zhang KR, Jiang ZK, Ma LT, Ding C, Wang BY, Meng Y, Liu H. Diagnostic accuracy and potential covariates of artificial intelligence for diagnosing orthopedic fractures: a systematic literature review and meta-analysis. Eur Radiol 2022; 32:7196-7216. [PMID: 35754091 DOI: 10.1007/s00330-022-08956-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 05/07/2022] [Accepted: 06/08/2022] [Indexed: 02/05/2023]
Abstract
OBJECTIVES To systematically quantify the diagnostic accuracy and identify potential covariates affecting the performance of artificial intelligence (AI) in diagnosing orthopedic fractures. METHODS PubMed, Embase, Web of Science, and Cochrane Library were systematically searched for studies on AI applications in diagnosing orthopedic fractures from inception to September 29, 2021. Pooled sensitivity and specificity and the area under the receiver operating characteristic curves (AUC) were obtained. This study was registered in the PROSPERO database prior to initiation (CRD 42021254618). RESULTS Thirty-nine studies were eligible for quantitative analysis. The overall pooled AUC, sensitivity, and specificity were 0.96 (95% CI 0.94-0.98), 90% (95% CI 87-92%), and 92% (95% CI 90-94%), respectively. In subgroup analyses, multicenter designed studies yielded higher sensitivity (92% vs. 88%) and specificity (94% vs. 91%) than single-center studies. AI demonstrated higher sensitivity with transfer learning (with vs. without: 92% vs. 87%) or data augmentation (with vs. without: 92% vs. 87%). Utilizing plain X-rays as input images for AI achieved results comparable to CT (AUC 0.96 vs. 0.96). Moreover, AI achieved comparable results to humans (AUC 0.97 vs. 0.97) and better results than non-expert human readers (AUC 0.98 vs. 0.96; sensitivity 95% vs. 88%). CONCLUSIONS AI demonstrated high accuracy in diagnosing orthopedic fractures from medical images. Larger-scale studies with higher design quality are needed to validate our findings. KEY POINTS • Multicenter study design, application of transfer learning, and data augmentation are closely related to improving the performance of artificial intelligence models in diagnosing orthopedic fractures. • Utilizing plain X-rays as input images for AI to diagnose fractures achieved results comparable to CT (AUC 0.96 vs. 0.96). • AI achieved comparable results to humans (AUC 0.97 vs. 0.97) but was superior to non-expert human readers (AUC 0.98 vs. 0.96, sensitivity 95% vs. 88%) in diagnosing fractures.
Collapse
Affiliation(s)
- Xiang Zhang
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
| | - Yi Yang
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
| | - Yi-Wei Shen
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
| | - Ke-Rui Zhang
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
| | - Ze-Kun Jiang
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, 610000, China
| | - Li-Tai Ma
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
| | - Chen Ding
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
| | - Bei-Yu Wang
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
| | - Yang Meng
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China
| | - Hao Liu
- Department of Orthopedics, Orthopedic Research Institute, West China Hospital, Sichuan University, No. 37 Guo Xue Rd, Chengdu, 610041, China.
| |
Collapse
|
43
|
Gipson J, Tang V, Seah J, Kavnoudias H, Zia A, Lee R, Mitra B, Clements W. Diagnostic accuracy of a commercially available deep-learning algorithm in supine chest radiographs following trauma. Br J Radiol 2022; 95:20210979. [PMID: 35271382 PMCID: PMC10996416 DOI: 10.1259/bjr.20210979] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2021] [Revised: 03/01/2022] [Accepted: 03/04/2022] [Indexed: 01/17/2023] Open
Abstract
OBJECTIVES Trauma chest radiographs may contain subtle and time-critical pathology. Artificial intelligence (AI) may aid in accurate reporting, timely identification and worklist prioritisation. However, few AI programs have been externally validated. This study aimed to evaluate the performance of a commercially available deep convolutional neural network - Annalise CXR V1.2 (Annalise.ai) - for detection of traumatic injuries on supine chest radiographs. METHODS Chest radiographs with a CT performed within 24 h in the setting of trauma were retrospectively identified at a level one adult trauma centre between January 2009 and June 2019. Annalise.ai assessment of the chest radiograph was compared to the radiologist report of the chest radiograph. Contemporaneous CT report was taken as the ground truth. Agreement with CT was measured using Cohen's κ and sensitivity/specificity for both AI and radiologists were calculated. RESULTS There were 1404 cases identified with a median age of 52 (IQR 33-69) years, 949 males. AI demonstrated superior performance compared to radiologists in identifying pneumothorax (p = 0.007) and segmental collapse (p = 0.012) on chest radiograph. Radiologists performed better than AI for clavicle fracture (p = 0.002), humerus fracture (p < 0.0015) and scapula fracture (p = 0.014). No statistical difference was found for identification of rib fractures and pneumomediastinum. CONCLUSION The evaluated AI performed comparably to radiologists in interpreting chest radiographs. Further evaluation of this AI program has the potential to enable it to be safely incorporated in clinical processes. ADVANCES IN KNOWLEDGE Clinically useful AI programs represent promising decision support tools.
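The study above measures agreement with the CT ground truth using Cohen's κ, which corrects the observed agreement rate for the agreement expected by chance. A minimal sketch for two binary raters (data and names invented for illustration, not the study's code):

```python
# Hedged sketch: Cohen's kappa for two binary raters, e.g. AI findings vs.
# CT ground truth. The example labels below are illustrative only.

def cohens_kappa(a, b):
    """Cohen's kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n  # raw agreement rate
    pa1 = sum(a) / n                               # rater A positive rate
    pb1 = sum(b) / n                               # rater B positive rate
    p_exp = pa1 * pb1 + (1 - pa1) * (1 - pb1)      # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

ai_findings = [1, 1, 0, 0, 1, 0]   # 1 = injury flagged on the radiograph
ct_truth    = [1, 0, 0, 0, 1, 0]   # 1 = injury confirmed on CT
kappa = cohens_kappa(ai_findings, ct_truth)
```

Here five of six cases agree (raw agreement ≈ 0.83), but κ ≈ 0.67 because half of that agreement is expected by chance given each rater's positive rate.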
Collapse
Affiliation(s)
- Jacob Gipson
- Department of Radiology, Alfred Health, Melbourne, Victoria, Australia
| | - Victor Tang
- Department of Radiology, Alfred Health, Melbourne, Victoria, Australia
- Faculty of Medicine, University of Queensland, Brisbane, Queensland, Australia
| | - Jarrel Seah
- Department of Radiology, Alfred Health, Melbourne, Victoria, Australia
- Harrison.ai, Sydney, NSW, Australia
| | - Helen Kavnoudias
- Department of Radiology, Alfred Health, Melbourne, Victoria, Australia
- Department of Surgery, Monash University, Melbourne, Victoria, Australia
| | - Adil Zia
- Department of Radiology, Alfred Health, Melbourne, Victoria, Australia
| | - Robin Lee
- Department of Radiology, Alfred Health, Melbourne, Victoria, Australia
| | - Biswadev Mitra
- National Trauma Research Institute, Melbourne, Victoria, Australia
- Emergency & Trauma Centre, The Alfred Hospital, Melbourne, Victoria, Australia
- School of Public Health & Preventive Medicine, Monash University, Melbourne, Victoria, Australia
| | - Warren Clements
- Department of Radiology, Alfred Health, Melbourne, Victoria, Australia
- Department of Surgery, Monash University, Melbourne, Victoria, Australia
- National Trauma Research Institute, Melbourne, Victoria, Australia
| |
Collapse
|
44
|
Schwartz JT, Valliani AA, Arvind V, Cho BH, Geng E, Henson P, Riew KD, Lehman RA, Lenke LG, Cho SK, Kim JS. Identification of Anterior Cervical Spinal Instrumentation Using a Smartphone Application Powered by Machine Learning. Spine (Phila Pa 1976) 2022; 47:E407-E414. [PMID: 34269759 DOI: 10.1097/brs.0000000000004172] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
STUDY DESIGN Cross-sectional study. OBJECTIVE The purpose of this study is to develop and validate a machine learning algorithm for the automated identification of anterior cervical discectomy and fusion (ACDF) plates from smartphone images of anterior-posterior (AP) cervical spine radiographs. SUMMARY OF BACKGROUND DATA Identification of existing instrumentation is a critical step in planning revision surgery for ACDF. Machine learning algorithms that are known to be adept at image classification may be applied to the problem of ACDF plate identification. METHODS A total of 402 smartphone images containing 15 different types of ACDF plates were gathered. Two hundred seventy-five images (∼70%) were used to train and validate a convolutional neural network (CNN) for classification of images from radiographs. One hundred twenty-seven (∼30%) images were held out to test algorithm performance. RESULTS The algorithm performed with an overall accuracy of 94.4% and 85.8% for top-3 and top-1 accuracy, respectively. Overall positive predictive value, sensitivity, and F1-scores were 0.873, 0.858, and 0.855, respectively. CONCLUSION This algorithm demonstrates strong performance in the classification of ACDF plates from smartphone images and will be deployed as an accessible smartphone application for further evaluation, improvement, and eventual widespread use. Level of Evidence: 3.
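The top-3 vs. top-1 accuracies reported above reflect whether the true plate type appears among the model's three highest-scoring classes or is its single highest-scoring class. A minimal sketch of the metric (scores and labels are invented for illustration):

```python
# Hedged sketch: top-k accuracy for a multi-class classifier, the metric
# reported in the abstract above. The score matrix below is illustrative.

def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true class is among the k highest scores."""
    hits = 0
    for row, y in zip(scores, labels):
        # indices of the k classes with the largest scores for this sample
        top_k = sorted(range(len(row)), key=lambda c: row[c], reverse=True)[:k]
        hits += y in top_k
    return hits / len(labels)

scores = [[0.6, 0.3, 0.1],   # true class 0: top-1 hit
          [0.2, 0.5, 0.3],   # true class 2: top-1 miss, top-2 hit
          [0.1, 0.7, 0.2]]   # true class 0: miss until top-3
labels = [0, 2, 0]
```

By construction top-k accuracy is monotone in k, which is why the paper's top-3 figure (94.4%) exceeds its top-1 figure (85.8%).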
Collapse
Affiliation(s)
- John T Schwartz
- Department of Orthopedic Surgery, Mount Sinai Health System, New York, NY
| | - Aly A Valliani
- Department of Orthopedic Surgery, Mount Sinai Health System, New York, NY
| | - Varun Arvind
- Department of Orthopedic Surgery, Mount Sinai Health System, New York, NY
| | - Brian H Cho
- Department of Orthopedic Surgery, Mount Sinai Health System, New York, NY
| | - Eric Geng
- Department of Orthopedic Surgery, Mount Sinai Health System, New York, NY
| | - Philip Henson
- Department of Orthopedic Surgery, Mount Sinai Health System, New York, NY
| | - K Daniel Riew
- Department of Orthopedic Surgery, Columbia University Medical Center, New York, NY
| | - Ronald A Lehman
- Department of Orthopedic Surgery, Columbia University Medical Center, New York, NY
| | - Lawrence G Lenke
- Department of Orthopedic Surgery, Columbia University Medical Center, New York, NY
| | - Samuel K Cho
- Department of Orthopedic Surgery, Mount Sinai Health System, New York, NY
| | - Jun S Kim
- Department of Orthopedic Surgery, Mount Sinai Health System, New York, NY
| |
Collapse
|
45
|
Assessment of deep convolutional neural network models for mandibular fracture detection in panoramic radiographs. Int J Oral Maxillofac Surg 2022; 51:1488-1494. [DOI: 10.1016/j.ijom.2022.03.056] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2021] [Revised: 01/11/2022] [Accepted: 03/21/2022] [Indexed: 01/17/2023]
|
46
|
Addressing Motion Blurs in Brain MRI Scans Using Conditional Adversarial Networks and Simulated Curvilinear Motions. J Imaging 2022; 8:jimaging8040084. [PMID: 35448211 PMCID: PMC9027264 DOI: 10.3390/jimaging8040084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Revised: 03/16/2022] [Accepted: 03/21/2022] [Indexed: 11/27/2022] Open
Abstract
In-scanner head motion often leads to degradation in MRI scans and is a major source of error in diagnosing brain abnormalities. Researchers have explored various approaches, including blind and nonblind deconvolutions, to correct the motion artifacts in MRI scans. Inspired by the recent success of deep learning models in medical image analysis, we investigate the efficacy of employing generative adversarial networks (GANs) to address motion blurs in brain MRI scans. We cast the problem as a blind deconvolution task where a neural network is trained to guess a blurring kernel that produced the observed corruption. Specifically, our study explores a new approach under the sparse coding paradigm where every ground truth corrupting kernel is assumed to be a “combination” of a relatively small universe of “basis” kernels. This assumption is based on the intuition that, on small distance scales, patients’ moves follow simple curves and that complex motions can be obtained by combining a number of simple ones. We show that, with a suitably dense basis, a neural network can effectively guess the degrading kernel and reverse some of the damage in the motion-affected real-world scans. To this end, we generated 10,000 continuous and curvilinear kernels in random positions and directions that are likely to uniformly populate the space of corrupting kernels in real-world scans. We further generated a large dataset of 225,000 pairs of sharp and blurred MR images to facilitate training effective deep learning models. Our experimental results demonstrate the viability of the proposed approach evaluated using synthetic and real-world MRI scans. Our study further suggests there is merit in exploring separate models for the sagittal, axial, and coronal planes.
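The core assumption in the abstract above is that a complex motion-corrupting kernel can be expressed as a combination of simple basis kernels, applied to the image by convolution. A minimal sketch of that forward model (basis kernels and weights are illustrative, not the paper's actual universe of 10,000 curvilinear kernels):

```python
# Hedged sketch: modelling motion blur as convolution with a kernel that is
# a convex combination of simple "basis" motions. Everything here is a toy
# stand-in for the paper's learned/simulated kernels.
import numpy as np

def blur(img, kernel):
    """2-D convolution ('same' output size, zero padding)."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # flip the kernel for true convolution (vs. correlation)
            out[i, j] = (pad[i:i + kh, j:j + kw] * kernel[::-1, ::-1]).sum()
    return out

# Two simple basis motions: short horizontal and vertical streaks.
basis_h = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float) / 3
basis_v = basis_h.T
# A more complex motion as a weighted combination of the basis kernels;
# the weights are arbitrary but keep the kernel normalized (sums to 1).
kernel = 0.6 * basis_h + 0.4 * basis_v
blurred = blur(np.ones((5, 5)), kernel)
```

Because the combined kernel stays normalized, blurring preserves local intensity away from the image border; the deconvolution task is then to recover the weights (i.e., the kernel) from the corrupted scan alone.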
Collapse
|
47
|
A Progressive and Cross-Domain Deep Transfer Learning Framework for Wrist Fracture Detection. JOURNAL OF ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING RESEARCH 2022. [DOI: 10.2478/jaiscr-2022-0007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
There has been an amplified focus on and benefit from the adoption of artificial intelligence (AI) in medical imaging applications. However, deep learning approaches involve training with massive amounts of annotated data in order to guarantee generalization and achieve high accuracies. Gathering and annotating large sets of training images require expertise which is both expensive and time-consuming, especially in the medical field. Furthermore, in health care systems where mistakes can have catastrophic consequences, there is a general mistrust in the black-box aspect of AI models. In this work, we focus on improving the performance of medical imaging applications when limited data is available while focusing on the interpretability aspect of the proposed AI model. This is achieved by employing a novel transfer learning framework, progressive transfer learning, an automated annotation technique and a correlation analysis experiment on the learned representations.
Progressive transfer learning helps jump-start the training of deep neural networks while improving performance by gradually transferring knowledge from two source tasks into the target task. It is empirically tested on the wrist fracture detection application by first training a general radiology network, RadiNet, and using its weights to initialize RadiNet_wrist, which is then trained on wrist images to detect fractures. Experiments show that RadiNet_wrist achieves an accuracy of 87% and an AUC ROC of 94%, as opposed to 83% and 92% when it is pre-trained on the ImageNet dataset.
This improvement in performance is investigated within an explainable AI framework. More concretely, the deep representations learned by RadiNet_wrist are compared to those learned by the baseline model through a correlation analysis experiment. The results show that, when transfer learning is applied gradually, some features are learned earlier in the network. Moreover, the deep layers in the progressive transfer learning framework are shown to encode features that are not encountered when traditional transfer learning techniques are applied.
In addition to the empirical results, a clinical study is conducted in which the performance of RadiNet_wrist is compared to that of an expert radiologist. We found that RadiNet_wrist exhibited performance similar to that of radiologists with more than 20 years of experience.
This motivates follow-up research to train on more data to feasibly surpass radiologists’ performance, and to investigate the interpretability of AI models in the healthcare domain, where the decision-making process needs to be credible and transparent.
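The staged weight hand-off at the heart of this framework can be sketched in a few lines: weights learned on a broad source task seed the target-task model instead of a random initialization. The model below (plain logistic regression) and its tiny datasets are toy stand-ins, not the paper's RadiNet architecture or data.

```python
# Minimal sketch of progressive transfer: train on a source task, then
# reuse the learned weights to initialize training on the target task.
import math

def train_logistic(data, w=None, epochs=200, lr=0.1):
    """Logistic regression by per-sample gradient descent; `w` is the init."""
    dim = len(data[0][0])
    if w is None:
        w = [0.0] * dim  # "from scratch" initialization
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            for i in range(dim):
                w[i] += lr * (y - p) * x[i]
    return w

def accuracy(w, data):
    hits = sum(int((sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y == 1))
               for x, y in data)
    return hits / len(data)

# Stage 1: a broad "general radiology" stand-in task (separable on feature 0).
source = [([1.0, 0.1], 1), ([-1.0, 0.2], 0), ([0.9, -0.1], 1), ([-1.1, 0.0], 0)]
# Stage 2: a smaller, related "wrist" stand-in task.
target = [([0.8, 0.3], 1), ([-0.9, 0.1], 0)]

w_source = train_logistic(source)                               # source task
w_target = train_logistic(target, w=list(w_source), epochs=20)  # fine-tune
```

In the paper the hand-off is progressive across two source tasks (ImageNet, then general radiographs) before the wrist task; the sketch keeps only the core mechanic of initializing from previously learned weights.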
Collapse
|
48
|
Jang M, Kim M, Bae SJ, Lee SH, Koh JM, Kim N. Opportunistic Osteoporosis Screening Using Chest Radiographs With Deep Learning: Development and External Validation With a Cohort Dataset. J Bone Miner Res 2022; 37:369-377. [PMID: 34812546 DOI: 10.1002/jbmr.4477] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Revised: 11/05/2021] [Accepted: 11/17/2021] [Indexed: 01/02/2023]
Abstract
Osteoporosis is a common disease that remains silent until it is complicated by fractures, which are associated with morbidity and mortality. Although deep learning-based disease diagnosis on chest radiographs has yielded promising results over the past few years, osteoporosis screening remains unexplored. Paired data comprising 13,026 chest radiographs and dual-energy X-ray absorptiometry (DXA) results from the Health Screening and Promotion Center of Asan Medical Center, collected between 2012 and 2019, were used as the primary dataset in this study. For the external test, we additionally used the Asan osteoporosis cohort dataset (1089 chest radiographs, 2010 to 2017). Using a well-performing deep learning model, we trained the OsPor-screen model in a supervised manner with labels defined by DXA-based diagnosis of osteoporosis (lumbar spine, femoral neck, or total hip T-score ≤ -2.5). The OsPor-screen model was assessed in the internal and external test sets. We performed substudies to evaluate the effect of different anatomical subregions and sizes of input images. OsPor-screen model performance, including sensitivity, specificity, and area under the curve (AUC), was measured in the internal and external test sets. In addition, visual explanations of the model's predictions for each class were expressed as gradient-weighted class activation maps (Grad-CAMs). The OsPor-screen model showed promising performance. Osteoporosis screening with the OsPor-screen model achieved an AUC of 0.91 (95% confidence interval [CI], 0.90-0.92) and an AUC of 0.88 (95% CI, 0.85-0.90) in the internal and external test sets, respectively. Even though the medical relevance of these average Grad-CAMs is unclear, these results suggest that a deep learning-based model using chest radiographs could have the potential to be used for opportunistic automated screening of patients with osteoporosis in clinical settings. © 2021 American Society for Bone and Mineral Research (ASBMR).
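The Grad-CAM visualizations mentioned here reduce, at their core, to a weighted sum of channel activation maps followed by a ReLU. The sketch below implements only that combination step; the activation maps and channel weights are made-up toy values (in real Grad-CAM, the weights are the spatially pooled gradients of the class score with respect to each channel).

```python
# Core Grad-CAM combination: heatmap = ReLU(sum_k alpha_k * A_k), where
# A_k are channel activation maps and alpha_k their importance weights.

def grad_cam(activations, alphas):
    """Weighted sum of channel maps, then ReLU to keep positive evidence."""
    h, w = len(activations[0]), len(activations[0][0])
    heat = [[0.0] * w for _ in range(h)]
    for A, a in zip(activations, alphas):
        for r in range(h):
            for c in range(w):
                heat[r][c] += a * A[r][c]
    return [[max(0.0, v) for v in row] for row in heat]

maps = [
    [[1.0, 0.0], [0.0, 0.0]],  # channel firing on the upper-left region
    [[0.0, 0.0], [0.0, 1.0]],  # channel firing on the lower-right region
]
heat = grad_cam(maps, alphas=[0.9, -0.5])  # second channel argues against
```

The ReLU is what makes the map class-discriminative: regions whose channels push against the predicted class (negative weight, as with the second channel here) are zeroed out rather than shown as evidence.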
Collapse
Affiliation(s)
- Miso Jang
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.,Department of Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
| | - Mingyu Kim
- Department of Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
| | - Sung Jin Bae
- Department of Health Screening and Promotion Center, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Seung Hun Lee
- Division of Endocrinology and Metabolism, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Jung-Min Koh
- Division of Endocrinology and Metabolism, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Namkug Kim
- Department of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea.,Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
| |
Collapse
|
49
|
Laur O, Wang B. Musculoskeletal trauma and artificial intelligence: current trends and projections. Skeletal Radiol 2022; 51:257-269. [PMID: 34089338 DOI: 10.1007/s00256-021-03824-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Revised: 05/13/2021] [Accepted: 05/18/2021] [Indexed: 02/02/2023]
Abstract
Musculoskeletal trauma accounts for a significant fraction of emergency department visits and patients seeking urgent care, with a high financial cost to society. Diagnostic imaging is indispensable in the workup and management of trauma patients. However, diagnostic imaging represents a complex multifaceted system, with many aspects of its workflow prone to inefficiencies or human error. Recent technological innovations in artificial intelligence and machine learning have shown promise to revolutionize our systems for providing medical care to patients. This review will provide a general overview of the current state of artificial intelligence and machine learning applications in different aspects of trauma imaging and provide a vision for how such applications could be leveraged to enhance our diagnostic imaging systems and optimize patient outcomes.
Collapse
Affiliation(s)
- Olga Laur
- Division of Musculoskeletal Radiology, Department of Radiology, NYU Langone Health, 301 East 17th Street, 6th Floor, New York, NY, 10003, USA
| | - Benjamin Wang
- Division of Musculoskeletal Radiology, Department of Radiology, NYU Langone Health, 301 East 17th Street, 6th Floor, New York, NY, 10003, USA.
| |
Collapse
|
50
|
Dempsey N, Bassed R, Amarasiri R, Blau S. Exploring the use of machine learning for the assessment of skeletal fracture morphology and differentiation between impact mechanisms: A pilot study. J Forensic Sci 2022; 67:683-696. [DOI: 10.1111/1556-4029.14996] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Revised: 12/24/2021] [Accepted: 01/11/2022] [Indexed: 12/01/2022]
Affiliation(s)
- Nicholas Dempsey
- Department of Forensic Medicine, Monash University, Southbank, Victoria, Australia
| | - Richard Bassed
- Victorian Institute of Forensic Medicine, Department of Forensic Medicine, Monash University, Southbank, Victoria, Australia
| | - Rasika Amarasiri
- Victorian Institute of Forensic Medicine, Information, Communication & Technology, Southbank, Victoria, Australia
| | - Soren Blau
- Victorian Institute of Forensic Medicine, Department of Forensic Medicine, Monash University, Southbank, Victoria, Australia
| |
Collapse
|