1
Cai X, Lu Z, Peng Z, Xu Y, Huang J, Luo H, Zhao Y, Lou Z, Shen Z, Chen Z, Yang X, Wu Y, Lu S. A Neural Network Model for Intelligent Classification of Distal Radius Fractures Using Statistical Shape Model Extraction Features. Orthop Surg 2025; 17:1513-1524. PMID: 40180705; PMCID: PMC12050184; DOI: 10.1111/os.70034.
Abstract
OBJECTIVE Distal radius fractures account for 12%-17% of all fractures, and accurate classification is crucial for proper treatment planning. Studies have shown that in emergency settings the misdiagnosis rate of hand/wrist fractures can reach 29%, particularly among non-specialist physicians, owing to high workload and limited experience. Existing AI methods can detect fractures but typically require large training datasets and do not classify fracture type. There is therefore an urgent need for an efficient, accurate method that both detects and classifies distal radius fractures. The aim was to develop and validate an intelligent classifier for distal radius fractures by combining a statistical shape model (SSM) with a neural network (NN) based on CT imaging data. METHODS From August 2022 to May 2023, 80 CT scans were collected, comprising 43 normal radial bones and 37 distal radius fractures (17 Colles', 12 Barton's, and 8 Smith's). The distal radius SSM was established by combining mean shapes with principal component analysis (PCA) features, and six morphological indicators across four groups were proposed. The intelligent classifier (SSM + NN) was trained using SSM features as input and fracture type as output. Four-fold cross-validation was performed to verify the classifier's robustness. RESULTS The SSMs for both normal and fractured distal radius were successfully established from CT data. Analysis of variance revealed significant differences in all six morphological indicators among groups (p < 0.001). The classifier achieved optimal performance with the first 15 PCA-extracted features, whose cumulative variance contribution exceeded 75%. It demonstrated excellent discrimination, with a mean area under the curve (AUC) of 0.95 in four-fold cross-validation, and achieved an overall classification accuracy of 97.5% on the test set. The optimal prediction threshold range was 0.2-0.4. CONCLUSION The CT-based SSM + NN classifier demonstrated excellent performance in identifying and classifying different types of distal radius fractures. This approach provides an efficient, accurate, and automated tool for clinical fracture diagnosis and could improve diagnostic efficiency and treatment planning in orthopedic practice.
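The pipeline this abstract describes — PCA shape features feeding a small neural network classifier — can be sketched as below. This is a minimal illustration on synthetic landmark data, not the authors' implementation: only the 80-scan count, four classes, and 15-component cutoff come from the abstract; the landmark dimensionality, network size, and training setup are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_features(shapes, n_components=15):
    """Project mean-centered shape vectors onto the top principal components."""
    centered = shapes - shapes.mean(axis=0)
    # SVD of the centered data matrix yields the PCA basis (rows of Vt).
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T

# Synthetic stand-in for 80 flattened distal-radius landmark sets
# (4 classes: normal, Colles', Barton's, Smith's).
n_per_class, n_dims = 20, 60
shapes, labels = [], []
for c in range(4):
    base = rng.normal(size=n_dims)                      # class mean shape
    shapes.append(base + 0.1 * rng.normal(size=(n_per_class, n_dims)))
    labels += [c] * n_per_class
shapes, labels = np.vstack(shapes), np.array(labels)

feats = pca_features(shapes, n_components=15)

def train_nn(X, y, hidden=16, epochs=500, lr=0.5):
    """Tiny one-hidden-layer softmax network trained with gradient descent."""
    n, d = X.shape
    k = y.max() + 1
    W1 = rng.normal(scale=0.1, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(hidden, k)); b2 = np.zeros(k)
    Y = np.eye(k)[y]
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        logits = h @ W2 + b2
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        g = (p - Y) / n                                 # softmax cross-entropy grad
        W2 -= lr * h.T @ g; b2 -= lr * g.sum(axis=0)
        gh = (g @ W2.T) * (1 - h**2)                    # backprop through tanh
        W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)
    return W1, b1, W2, b2

W1, b1, W2, b2 = train_nn(feats, labels)
pred = np.argmax(np.tanh(feats @ W1 + b1) @ W2 + b2, axis=1)
accuracy = (pred == labels).mean()
```

On well-separated synthetic clusters the training accuracy is high; real fracture shapes overlap far more, which is why the paper's cross-validated AUC matters.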
Affiliation(s)
- Xing‑bo Cai
- Department of Orthopedic Surgery, The First People's Hospital of Yunnan Province, The Affiliated Hospital of Kunming University of Science and Technology, Kunming, Yunnan, China
- The Key Laboratory of Digital Orthopaedics of Yunnan Province, Kunming, Yunnan, China
- Department of Orthopedics, 920th Hospital of Joint Logistics Support Force, PLA, Kunming, China
- Ze‑hui Lu
- The Faculty of Medicine, Nursing, and Health Sciences, Monash University, Australia
- Zhi Peng
- Department of Orthopedic Surgery, The First People's Hospital of Yunnan Province, The Affiliated Hospital of Kunming University of Science and Technology, Kunming, Yunnan, China
- The Key Laboratory of Digital Orthopaedics of Yunnan Province, Kunming, Yunnan, China
- Yong‑qing Xu
- Department of Orthopedics, 920th Hospital of Joint Logistics Support Force, PLA, Kunming, China
- Jun‑shen Huang
- Key Lab of Statistical Modeling and Data Analysis of Yunnan, Yunnan University, Kunming, China
- Hao‑tian Luo
- Department of Orthopedic Surgery, The First People's Hospital of Yunnan Province, The Affiliated Hospital of Kunming University of Science and Technology, Kunming, Yunnan, China
- The Key Laboratory of Digital Orthopaedics of Yunnan Province, Kunming, Yunnan, China
- Yu Zhao
- Department of Orthopaedics, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Zhong‑qi Lou
- Department of Orthopedic Surgery, The First People's Hospital of Yunnan Province, The Affiliated Hospital of Kunming University of Science and Technology, Kunming, Yunnan, China
- The Key Laboratory of Digital Orthopaedics of Yunnan Province, Kunming, Yunnan, China
- Zi‑qi Shen
- Department of Orthopedic Surgery, The First People's Hospital of Yunnan Province, The Affiliated Hospital of Kunming University of Science and Technology, Kunming, Yunnan, China
- The Key Laboratory of Digital Orthopaedics of Yunnan Province, Kunming, Yunnan, China
- Zhang‑cong Chen
- Department of Orthopedic Surgery, The First People's Hospital of Yunnan Province, The Affiliated Hospital of Kunming University of Science and Technology, Kunming, Yunnan, China
- The Key Laboratory of Digital Orthopaedics of Yunnan Province, Kunming, Yunnan, China
- Xiong‑gang Yang
- Department of Orthopedic Surgery, The First People's Hospital of Yunnan Province, The Affiliated Hospital of Kunming University of Science and Technology, Kunming, Yunnan, China
- The Key Laboratory of Digital Orthopaedics of Yunnan Province, Kunming, Yunnan, China
- Ying Wu
- Key Lab of Statistical Modeling and Data Analysis of Yunnan, Yunnan University, Kunming, China
- Sheng Lu
- Department of Orthopedic Surgery, The First People's Hospital of Yunnan Province, The Affiliated Hospital of Kunming University of Science and Technology, Kunming, Yunnan, China
- The Key Laboratory of Digital Orthopaedics of Yunnan Province, Kunming, Yunnan, China
2
Lim J, Chang S, Kim K, Park HJ, Kim E, Hong SW. Machine learning-based prediction of the necessity for the surgical treatment of distal radius fractures. J Orthop Surg Res 2025; 20:419. PMID: 40287717; PMCID: PMC12032687; DOI: 10.1186/s13018-025-05830-z.
Abstract
BACKGROUND Treatment of distal radius fractures (DRFs) is determined by various factors, and quantitative or qualitative tools have been introduced to assist in deciding the treatment approach. This study aimed to develop a machine learning (ML) model that determines the need for surgical treatment in patients with DRFs by combining various clinical data with plain radiographs in the anteroposterior and lateral views. METHODS Radiographic and clinical data from 1,139 patients were collected and used to train the ML models. The proposed model comprised a U-Net-based image feature extractor for radiographs, a multilayer perceptron-based clinical feature extractor for clinical data, and a final classifier that combined the extracted features to predict the necessity of surgical treatment. To promote interpretability and support clinical adoption, Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to provide visual insights into the radiographic data, and SHapley Additive exPlanations (SHAP) were used to elucidate the contribution of each clinical feature to the model's predictions. RESULTS The model integrating image and clinical data achieved accuracy, sensitivity, and specificity of 92.98%, 93.28%, and 92.55%, respectively, in predicting the need for surgical treatment, outperforming the image-only model. Grad-CAM heatmaps highlighted key regions such as the radiocarpal joint and the volar and dorsal cortex of the radial metaphysis, indicating critical areas for model training. The SHAP results indicated that being female and having subsequent or concomitant fractures were strongly associated with the need for surgical treatment. CONCLUSIONS The proposed ML models may assist in assessing the need for surgical treatment in patients with DRFs. By improving the accuracy of treatment decisions, this model may enhance the success rate of fracture treatments, guiding clinical decisions and improving efficiency in clinical settings.
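The fusion step this study describes — concatenating an image-branch embedding with encoded clinical features before a final classifier — reduces to a simple operation. The sketch below is schematic only: the feature dimensions, the logistic head, and all values are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def fuse_and_classify(img_feat, clin_feat, W, b):
    """Concatenate image and clinical feature vectors, then apply a
    logistic head that outputs P(surgery needed)."""
    z = np.concatenate([img_feat, clin_feat])
    return 1.0 / (1.0 + np.exp(-(W @ z + b)))

img_feat = rng.normal(size=64)   # stand-in for a U-Net-derived image embedding
clin_feat = rng.normal(size=8)   # stand-in for MLP-encoded clinical variables
W = rng.normal(scale=0.1, size=72)
b = 0.0
p_surgery = fuse_and_classify(img_feat, clin_feat, W, b)
```

In the real model both branches and the head are trained jointly; the point here is only that late fusion is a concatenation followed by a classifier.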
Affiliation(s)
- Jongmin Lim
- Department of Computer Science and Engineering, Sungkyunkwan University College of Computing and Informatics, Suwon, South Korea
- Sehun Chang
- Department of Computer Science and Engineering, Sungkyunkwan University College of Computing and Informatics, Suwon, South Korea
- Kwangsu Kim
- Department of Computer Science and Engineering, Sungkyunkwan University College of Computing and Informatics, Suwon, South Korea
- Hee Jin Park
- Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, South Korea
- Eugene Kim
- Department of Orthopedic Surgery, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, South Korea
- Seok Woo Hong
- Department of Orthopedic Surgery, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, South Korea
3
Nakabayashi D, Inui A, Mifune Y, Yamaura K, Kato T, Furukawa T, Hayashi S, Matsumoto T, Matsushita T, Kuroda R. Quantitative Evaluation of Tendon Gliding Sounds and Their Classification Using Deep Learning Models. Cureus 2025; 17:e81790. PMID: 40330348; PMCID: PMC12054386; DOI: 10.7759/cureus.81790.
Abstract
This study aims to develop and evaluate a deep learning (DL) model for classifying tendon gliding sounds recorded using digital stethoscopes (Nexteto, ShareMedical, Nagoya, Japan). Specifically, we investigate whether differences in tendon excursion and biomechanics produce distinct acoustic signatures that can be identified through spectrogram analysis and machine learning (ML). Tendon disorders often present characteristic tactile and acoustic features, such as clicking or resistance during movement. In recent years, artificial intelligence (AI) and ML have achieved significant success in medical diagnostics, particularly through pattern recognition in medical imaging. Leveraging these advancements, we recorded tendon gliding sounds from the thumb and index finger in healthy volunteers and transformed the recordings into spectrograms for analysis. Although the sample size was small, classification based on the frequency characteristics of the spectrograms using DL models achieved high accuracy. These findings indicate that AI-based models can accurately distinguish between different tendon sounds and suggest their potential as a non-invasive diagnostic tool for musculoskeletal disorders such as tenosynovitis or carpal tunnel syndrome, potentially aiding early diagnosis and treatment planning.
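The waveform-to-spectrogram step underlying this approach is a short-time Fourier transform. A minimal numpy sketch follows; the window length, hop size, and the synthetic 440 Hz test tone are assumptions, not parameters from the study.

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(win)
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack([signal[i * hop:i * hop + win] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T   # (freq_bins, time_frames)

# Synthetic 1 s stand-in for a recorded gliding sound: 440 Hz tone at 8 kHz.
fs = 8000
t = np.arange(fs) / fs
spec = spectrogram(np.sin(2 * np.pi * 440 * t))

# The dominant frequency bin should sit near 440 Hz (bin width = fs / win).
peak_hz = spec.mean(axis=1).argmax() * fs / 256
```

Such 2-D time-frequency images are then fed to image-classification networks exactly as radiographs would be.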
Affiliation(s)
- Daiji Nakabayashi
- Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, JPN
- Atsuyuki Inui
- Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, JPN
- Yutaka Mifune
- Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, JPN
- Kohei Yamaura
- Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, JPN
- Tatsuo Kato
- Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, JPN
- Takahiro Furukawa
- Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, JPN
- Shinya Hayashi
- Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, JPN
- Tomoyuki Matsumoto
- Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, JPN
- Takehiko Matsushita
- Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, JPN
- Ryosuke Kuroda
- Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, JPN
4
Dubreucq Guerif E, Agut S, Rousseau A, Bompard R, Goulet H. Evaluation of the use of artificial intelligence in the detection of appendicular skeletal fractures in adult patients consulting in an emergency department, a retrospective study. Eur J Emerg Med 2025; 32:144-146. PMID: 40009538; DOI: 10.1097/mej.0000000000001193.
Affiliation(s)
- Alexandra Rousseau
- Unité de Recherche Clinique (URC) de l'Est Parisien, Hôpital Saint-Antoine, Assistance Publique-Hôpitaux de Paris, Paris, France
5
Binh LN, Nhu NT, Nhi PTU, Son DLH, Bach N, Huy HQ, Le NQK, Kang JH. Impact of deep learning on pediatric elbow fracture detection: a systematic review and meta-analysis. Eur J Trauma Emerg Surg 2025; 51:115. PMID: 39976732; DOI: 10.1007/s00068-025-02779-w.
Abstract
OBJECTIVES Pediatric elbow fractures are a common injury among children. Recent advancements in artificial intelligence (AI), particularly deep learning (DL), have shown promise in diagnosing these fractures. This study systematically evaluated the performance of DL models in detecting pediatric elbow fractures. MATERIALS AND METHODS A comprehensive search was conducted in PubMed (Medline), EMBASE, and IEEE Xplore for studies published up to October 20, 2023. Studies employing DL models for detecting elbow fractures in patients aged 0 to 16 years were included. Key performance metrics, including sensitivity, specificity, and area under the curve (AUC), were extracted. The study was registered in PROSPERO (ID: CRD42023470558). RESULTS The search identified 22 studies, of which six met the inclusion criteria for the meta-analysis. The pooled sensitivity of DL models for pediatric elbow fracture detection was 0.93 (95% CI: 0.91-0.96). Specificity values ranged from 0.84 to 0.92 across studies, with a pooled estimate of 0.89 (95% CI: 0.85-0.92). The AUC ranged from 0.91 to 0.99, with a pooled estimate of 0.95 (95% CI: 0.93-0.97). Further analysis highlighted the impact of preprocessing techniques and the choice of model backbone architecture on performance. CONCLUSION DL models demonstrate exceptional accuracy in detecting pediatric elbow fractures. For optimal performance, we recommend leveraging backbone architectures like ResNet, combined with manual preprocessing supervised by radiology and orthopedic experts.
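Pooled estimates like the sensitivity of 0.93 above are typically computed by inverse-variance weighting on the logit scale, so that studies with more cases count more. The sketch below uses made-up per-study sensitivities and sample sizes purely for illustration; they are not the six included studies' data.

```python
import math

def pool_logit(props, ns):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale."""
    logits, weights = [], []
    for p, n in zip(props, ns):
        logit = math.log(p / (1 - p))
        var = 1.0 / (n * p * (1 - p))   # delta-method variance of the logit
        logits.append(logit)
        weights.append(1.0 / var)
    pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled_logit))   # back-transform to a proportion

# Illustrative per-study sensitivities and case counts (not the review's data).
pooled = pool_logit([0.91, 0.94, 0.93, 0.95], [120, 85, 200, 60])
```

Random-effects models used in practice add a between-study variance term, but the weighting idea is the same.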
Affiliation(s)
- Le Nguyen Binh
- College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Department of Orthopedics and Trauma, Cho Ray Hospital, Ho Chi Minh City, Vietnam
- AIBioMed Research Group, Taipei Medical University, Taipei, 11031, Taiwan
- SBH Ortho Clinic, Ho Chi Minh City, Vietnam
- Nguyen Thanh Nhu
- College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Faculty of Medicine, Can Tho University of Medicine and Pharmacy, Can Tho, 94117, Vietnam
- Pham Thi Uyen Nhi
- Ho Chi Minh City Hospital of Dermato-Venereology, Ho Chi Minh City, Vietnam
- Do Le Hoang Son
- Department of Orthopedics and Trauma, Cho Ray Hospital, Ho Chi Minh City, Vietnam
- Faculty of Medicine, Can Tho University of Medicine and Pharmacy, Can Tho, 94117, Vietnam
- Nguyen Bach
- Faculty of Medicine, Can Tho University of Medicine and Pharmacy, Can Tho, 94117, Vietnam
- Department of Orthopedics, University Medical Center Ho Chi Minh City, 201 Nguyen Chi Thanh Street, District 5, Ho Chi Minh City, Vietnam
- Hoang Quoc Huy
- Faculty of Medicine, Can Tho University of Medicine and Pharmacy, Can Tho, 94117, Vietnam
- Department of Orthopedics, University Medical Center Ho Chi Minh City, 201 Nguyen Chi Thanh Street, District 5, Ho Chi Minh City, Vietnam
- Nguyen Quoc Khanh Le
- AIBioMed Research Group, Taipei Medical University, Taipei, 11031, Taiwan
- In-Service Master Program in Artificial Intelligence in Medicine, College of Medicine, Taiwan and AIBioMed Research Group, Taipei Medical University, Taipei, 11031, Taiwan
- Translational Imaging Research Center, Taipei Medical University Hospital, Taipei, 11031, Taiwan
- Jiunn-Horng Kang
- College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Department of Physical Medicine and Rehabilitation, School of Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Department of Physical Medicine and Rehabilitation, Taipei Medical University Hospital, Taipei, 11031, Taiwan
- Graduate Institute of Nanomedicine and Medical Engineering, College of Biomedical Engineering, Taipei Medical University, 250 Wuxing Street, Xinyi District, Taipei, 11031, Taiwan
6
Sharifi G, Hajibeygi R, Zamani SAM, Easa AM, Bahrami A, Eshraghi R, Moafi M, Ebrahimi MJ, Fathi M, Mirjafari A, Chan JS, Dixe de Oliveira Santo I, Anar MA, Rezaei O, Tu LH. Diagnostic performance of neural network algorithms in skull fracture detection on CT scans: a systematic review and meta-analysis. Emerg Radiol 2025; 32:97-111. PMID: 39680295; DOI: 10.1007/s10140-024-02300-7.
Abstract
BACKGROUND AND AIM The potential intricacy of skull fractures, as well as the complexity of the underlying anatomy, poses diagnostic hurdles for radiologists evaluating computed tomography (CT) scans. The shortage of radiologists and the growing demand for rapid, accurate fracture diagnosis highlight the need for automated diagnostic tools. Convolutional neural networks (CNNs) are a promising class of medical imaging technologies that use deep learning (DL) to improve diagnostic accuracy. The objective of this systematic review and meta-analysis is to assess how well CNN models diagnose skull fractures on CT images. METHODS PubMed, Scopus, and Web of Science were searched for studies published before February 2024 that used CNN models to detect skull fractures on CT scans. Meta-analyses were conducted for area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy. Egger's and Begg's tests were used to assess publication bias. RESULTS Meta-analysis was performed for 11 studies with 20,798 patients. The pooled average AUC for CNN models that used pre-training for transfer learning within their architectures was 0.96 ± 0.02. The pooled averages of the studies' sensitivity and specificity were 1.0 and 0.93, respectively, and the pooled accuracy was 0.92 ± 0.04. The studies showed heterogeneity, explained by differences in model topologies, training regimes, and validation techniques. No significant publication bias was detected. CONCLUSION CNN models perform well in identifying skull fractures on CT scans. Although there is considerable heterogeneity and possible publication bias, the results suggest that CNNs have the potential to improve diagnostic accuracy in the imaging of acute skull trauma. To further enhance these models' practical applicability, future studies could concentrate on the utility of DL models in prospective clinical trials.
Affiliation(s)
- Guive Sharifi
- Skull Base Research Center, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ramtin Hajibeygi
- Tehran University of Medical Sciences, School of Medicine, Tehran, Iran
- Ahmed Mohamedbaqer Easa
- Department of Radiology Technology, College of Health and Medical Technology, Al-Ayen Iraqi University, Thi-Qar, 64001, Iraq
- Maral Moafi
- Cell Biology and Anatomical Sciences, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohammad Javad Ebrahimi
- Cell Biology and Anatomical Sciences, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mobina Fathi
- Skull Base Research Center, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Arshia Mirjafari
- Department of Radiological Sciences, University of California, Los Angeles, CA, USA
- College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, Pomona, CA, USA
- Janine S Chan
- Keck School of Medicine of USC, Los Angeles, CA, USA
- Omidvar Rezaei
- Skull Base Research Center, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Long H Tu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, CT, USA
7
Murrad BG, Mohsin AN, Al-Obaidi RH, Albaaji GF, Ali AA, Hamzah MS, Abdulridha RN, Al-Sharifi HKR. An AI-Driven Framework for Detecting Bone Fractures in Orthopedic Therapy. ACS Biomater Sci Eng 2025; 11:577-585. PMID: 39648498; DOI: 10.1021/acsbiomaterials.4c01483.
Abstract
This study presents an advanced artificial intelligence-driven framework designed to enhance the speed and accuracy of bone fracture detection, addressing key limitations of traditional diagnostic approaches that rely on manual image analysis. The proposed framework integrates the YOLOv8 object detection model with a ResNet backbone to combine robust feature extraction with precise fracture classification, identifying and categorizing bone fractures within X-ray images to support reliable diagnostic outcomes. Evaluated on an extensive dataset, the model demonstrated a mean average precision of 0.9 and an overall classification accuracy of 90.5%, a substantial improvement over conventional methods. These results underscore the framework's potential to provide healthcare professionals with a powerful, automated tool for orthopedic diagnostics, enhancing diagnostic efficiency and accuracy in routine and emergency care settings. The study contributes to the field by offering an effective solution for automated fracture detection that aims to improve patient outcomes through timely and accurate intervention.
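Detection metrics such as the mean average precision quoted above rest on intersection-over-union (IoU) matching between predicted and ground-truth fracture boxes. A minimal sketch (the box coordinates and the 0.5 threshold convention are illustrative, not values from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# A prediction is commonly counted as a true positive when IoU >= 0.5;
# average precision is then computed over the resulting matches.
overlap = iou((10, 10, 50, 50), (30, 30, 70, 70))
```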
Affiliation(s)
- Abdulhadi Nadhim Mohsin
- Department of Computer Science, College of Education for Pure Sciences, Wasit University, Wasit 52001, Iraq
- R H Al-Obaidi
- Fuel and Energy Techniques Engineering Department, College of Engineering and Technologies, Al-Mustaqbal University, Babylon 51001, Iraq
- Ghassan Faisal Albaaji
- Machine Intelligence Research Laboratory, Department of Computer Science, University of Kerala, Thiruvananthapuram 695582, India
- Ahmed Adnan Ali
- Alnumaniyah General Hospital, Iraqi Ministry of Health, Wasit 52001, Iraq
- Mohamed Sachit Hamzah
- High Health Institute of Wasit, Republic of Iraq Ministry of Health, Kut 52001, Iraq
- Department of Medical Instrumentation Techniques Engineering, Kut University College, Wasit 52001, Iraq
8
Hermans S, Hu Z, Ball RL, Lin HM, Prevedello LM, Berger FH, Yusuf I, Rudie JD, Vazirabad M, Flanders AE, Shih G, Mongan J, Nicolaou S, Marinelli BS, Davis MA, Magudia K, Sejdić E, Colak E. RSNA 2023 Abdominal Trauma AI Challenge: Review and Outcomes. Radiol Artif Intell 2025; 7:e240334. PMID: 39503604; DOI: 10.1148/ryai.240334.
Abstract
Purpose To evaluate the performance of the winning machine learning models from the 2023 RSNA Abdominal Trauma Detection AI Challenge. Materials and Methods The competition was hosted on Kaggle and took place between July 26 and October 15, 2023. The multicenter competition dataset consisted of 4274 abdominal trauma CT scans, in which solid organs (liver, spleen, and kidneys) were annotated as healthy, low-grade, or high-grade injury. Studies were labeled as positive or negative for the presence of bowel and mesenteric injury and active extravasation. In this study, performances of the eight award-winning models were retrospectively assessed and compared using various metrics, including the area under the receiver operating characteristic curve (AUC), for each injury category. The reported mean values of these metrics were calculated by averaging the performance across all models for each specified injury type. Results The models exhibited strong performance in detecting solid organ injuries, particularly high-grade injuries. For binary detection of injuries, the models demonstrated mean AUC values of 0.92 (range, 0.90-0.94) for liver, 0.91 (range, 0.87-0.93) for splenic, and 0.94 (range, 0.93-0.95) for kidney injuries. The models achieved mean AUC values of 0.98 (range, 0.96-0.98) for high-grade liver, 0.98 (range, 0.97-0.99) for high-grade splenic, and 0.98 (range, 0.97-0.98) for high-grade kidney injuries. For the detection of bowel and mesenteric injuries and active extravasation, the models demonstrated mean AUC values of 0.85 (range, 0.74-0.93) and 0.85 (range, 0.79-0.89), respectively. Conclusion The award-winning models from the artificial intelligence challenge demonstrated strong performance in the detection of traumatic abdominal injuries on CT scans, particularly high-grade injuries. These models may serve as a performance baseline for future investigations and algorithms. 
Keywords: Abdominal Trauma, CT, American Association for the Surgery of Trauma, Machine Learning, Artificial Intelligence Supplemental material is available for this article. © RSNA, 2024.
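The AUC values used to compare the challenge models are equivalent to the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney statistic). A minimal sketch with illustrative scores and labels, not challenge data:

```python
def auc(scores, labels):
    """AUC as the fraction of correctly ordered positive/negative score pairs,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Every positive outscores every negative, so the AUC is perfect here.
value = auc([0.9, 0.8, 0.3, 0.2, 0.7], [1, 1, 0, 0, 1])
```

Averaging this statistic across the eight winning models per injury category gives the mean AUC values reported above.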
Affiliation(s)
- Sebastiaan Hermans
- From the Department of Medical Imaging, St Michael's Hospital, Unity Health Toronto, 30 Bond St, Toronto, ON, Canada M5B 1W8 (S.H., Z.H., H.M.L., I.Y., E.C.); Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada (Z.H., E.S.); The Jackson Laboratory, Bar Harbor, Me (R.L.B.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Medical Imaging, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada (F.H.B.); Department of Radiology, Scripps Clinic Medical Group and University of California San Diego, San Diego, Calif (J.D.R.); Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Department of Radiology, Weill Cornell Medicine, New York, NY (G.S.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.M.), Department of Radiology, Vancouver General Hospital, Vancouver, Canada (S.N.); Department of Radiology, Memorial Sloan-Kettering Cancer Center, New York, NY (B.S.M.); Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Conn (M.A.D.); Duke University School of Medicine, Durham, NC (K.M.); North York General Hospital, Toronto, Ontario, Canada (E.S.); and Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada (E.C.)
| | - Zixuan Hu
- From the Department of Medical Imaging, St Michael's Hospital, Unity Health Toronto, 30 Bond St, Toronto, ON, Canada M5B 1W8 (S.H., Z.H., H.M.L., I.Y., E.C.); Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada (Z.H., E.S.); The Jackson Laboratory, Bar Harbor, Me (R.L.B.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Medical Imaging, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada (F.H.B.); Department of Radiology, Scripps Clinic Medical Group and University of California San Diego, San Diego, Calif (J.D.R.); Radiological Society of North America, Oak Brook, Ill (M.V.); Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Department of Radiology, Weill Cornell Medicine, New York, NY (G.S.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.M.), Department of Radiology, Vancouver General Hospital, Vancouver, Canada (S.N.); Department of Radiology, Memorial Sloan-Kettering Cancer Center, New York, NY (B.S.M.); Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Conn (M.A.D.); Duke University School of Medicine, Durham, NC (K.M.); North York General Hospital, Toronto, Ontario, Canada (E.S.); and Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada (E.C.)
- Robyn L Ball
- Hui Ming Lin
- Luciano M Prevedello
- Ferco H Berger
- Ibrahim Yusuf
- Jeffrey D Rudie
- Maryam Vazirabad
- Adam E Flanders
- George Shih
- John Mongan
- Savvas Nicolaou
- Brett S Marinelli
- Melissa A Davis
- Kirti Magudia
- Ervin Sejdić
- Errol Colak
9
Gan K, Liu Y, Zhang T, Xu D, Lian L, Luo Z, Li J, Lu L. Deep Learning Model for Automatic Identification and Classification of Distal Radius Fracture. J Imaging Inform Med 2024; 37:2874-2882. [PMID: 38862852 PMCID: PMC11612100 DOI: 10.1007/s10278-024-01144-4] [Received: 01/05/2024] [Revised: 05/13/2024] [Accepted: 05/15/2024] [Indexed: 06/13/2024]
Abstract
Distal radius fracture (DRF) is one of the most common types of wrist fractures. We aimed to construct a model for the automatic segmentation of wrist radiographs using a deep learning approach and further perform automatic identification and classification of DRF. A total of 2240 participants with anteroposterior wrist radiographs from one hospital between January 2015 and October 2021 were included. The outcomes were automatic segmentation of wrist radiographs, identification of DRF, and classification of DRF (type A, type B, type C). The Unet model and Fast-RCNN model were used for automatic segmentation. The DenseNet121 model and ResNet50 model were applied to DRF identification. The DenseNet121 model, ResNet50 model, VGG-19 model, and InceptionV3 model were used for DRF classification. The area under the curve (AUC) with 95% confidence interval (CI), accuracy, precision, and F1-score were used to assess the effectiveness of the identification and classification models. Of these 2240 participants, 1440 (64.3%) had DRF, of which 701 (48.7%) were type A, 278 (19.3%) were type B, and 461 (32.0%) were type C. Both the Unet model and the Fast-RCNN model showed good segmentation of wrist radiographs. For DRF identification, the AUCs of the DenseNet121 model and the ResNet50 model in the testing set were 0.941 (95% CI: 0.926-0.965) and 0.936 (95% CI: 0.913-0.955), respectively. The AUCs of the DenseNet121 model (testing set) for classifying type A, type B, and type C were 0.96, 0.96, and 0.96, respectively. The DenseNet121 model may provide clinicians with a tool for interpreting wrist radiographs.
Affiliation(s)
- Kaifeng Gan
- Department of Orthopaedics, the Affiliated LiHuiLi Hospital of Ningbo University, No. 57 Xingning Road, Yinzhou District, Ningbo, 315211, Zhejiang, China
- Yunpeng Liu
- Ningbo University of Technology, Ningbo, 315100, Zhejiang, China
- Ting Zhang
- Department of Orthopaedics, the Affiliated LiHuiLi Hospital of Ningbo University, No. 57 Xingning Road, Yinzhou District, Ningbo, 315211, Zhejiang, China
- Dingli Xu
- Health Science Center, Ningbo University, Ningbo, 315000, Zhejiang, China
- Leidong Lian
- Health Science Center, Ningbo University, Ningbo, 315000, Zhejiang, China
- Zhe Luo
- Health Science Center, Ningbo University, Ningbo, 315000, Zhejiang, China
- Jin Li
- Department of Orthopaedics, the Affiliated LiHuiLi Hospital of Ningbo University, No. 57 Xingning Road, Yinzhou District, Ningbo, 315211, Zhejiang, China
- Liangjie Lu
- Department of Orthopaedics, the Affiliated LiHuiLi Hospital of Ningbo University, No. 57 Xingning Road, Yinzhou District, Ningbo, 315211, Zhejiang, China
10
Aydin Şimşek Ş, Aydin A, Say F, Cengiz T, Özcan C, Öztürk M, Okay E, Özkan K. Enhanced enchondroma detection from x-ray images using deep learning: A step towards accurate and cost-effective diagnosis. J Orthop Res 2024; 42:2826-2834. [PMID: 39007705 DOI: 10.1002/jor.25938] [Received: 05/05/2024] [Revised: 06/27/2024] [Accepted: 07/03/2024] [Indexed: 07/16/2024]
Abstract
This study investigates the automated detection of enchondromas, benign cartilage tumors, from x-ray images using deep learning techniques. Enchondromas pose diagnostic challenges due to their potential for malignant transformation and overlapping radiographic features with other conditions. Leveraging a data set comprising 1645 x-ray images from 1173 patients, a deep-learning model implemented with Detectron2 achieved an accuracy of 0.9899 in detecting enchondromas. The study employed rigorous validation processes and compared its findings with the existing literature, highlighting the superior performance of the deep learning approach. Results indicate the potential of machine learning in improving diagnostic accuracy and reducing healthcare costs associated with advanced imaging modalities. The study underscores the significance of early and accurate detection of enchondromas for effective patient management and suggests avenues for further research in musculoskeletal tumor detection.
Affiliation(s)
- Şafak Aydin Şimşek
- Department of Orthopedics and Traumatology, Faculty of Medicine, Ondokuz Mayis University, Samsun, Turkey
- Ayhan Aydin
- Department of Computer Engineering, Karabuk University, Karabük, Turkey
- Ferhat Say
- Department of Orthopedics and Traumatology, Faculty of Medicine, Ondokuz Mayis University, Samsun, Turkey
- Tolgahan Cengiz
- Clinic of Orthopedics and Traumatology, Inebolu State Hospital, Kastamonu, Turkey
- Caner Özcan
- Department of Software Engineering, Karabuk University, Karabük, Turkey
- Mesut Öztürk
- Department of Radiology, Faculty of Medicine, Samsun University, Samsun, Turkey
- Erhan Okay
- Department of Orthopedics and Traumatology, Istanbul Medeniyet University Goztepe Education and Research Hospital, İstanbul, Turkey
- Korhan Özkan
- Department of Orthopedics and Traumatology, Acıbadem Atasehir Hospital, Istanbul, Turkey
11
Spek RWA, Smith WJ, Sverdlov M, Broos S, Zhao Y, Liao Z, Verjans JW, Prijs J, To MS, Åberg H, Chiri W, IJpma FFA, Jadav B, White J, Bain GI, Jutte PC, van den Bekerom MPJ, Jaarsma RL, Doornberg JN, Ashkani S, Assink N, Colaris JW, der Gaast NV, Jayakumar P, Kim LJ, de Klerk HH, Kuipers J, Mallee WH, Meesters AML, Mennes SRJ, Oldhof MGE, Pijpker PAJ, Yiu Lau C, Wijffels MME, Wolf AD. Detection, classification, and characterization of proximal humerus fractures on plain radiographs. Bone Joint J 2024; 106-B:1348-1360. [PMID: 39481431 DOI: 10.1302/0301-620x.106b11.bjj-2024-0264.r1] [Indexed: 11/02/2024]
Abstract
Aims The purpose of this study was to develop a convolutional neural network (CNN) for fracture detection, classification, and identification of greater tuberosity displacement ≥ 1 cm, neck-shaft angle (NSA) ≤ 100°, shaft translation, and articular fracture involvement, on plain radiographs. Methods The CNN was trained and tested on radiographs sourced from 11 hospitals in Australia and externally validated on radiographs from the Netherlands. Each radiograph was paired with corresponding CT scans to serve as the reference standard, based on dual independent evaluation by trained researchers and attending orthopaedic surgeons. The presence of a fracture, classification (non- to minimally displaced; two-part, multipart, and glenohumeral dislocation), and four characteristics were determined on 2D and 3D CT scans and subsequently allocated to each series of radiographs. Fracture characteristics included greater tuberosity displacement ≥ 1 cm, NSA ≤ 100°, shaft translation (0% to < 75%, 75% to 95%, > 95%), and the extent of articular involvement (0% to < 15%, 15% to 35%, or > 35%). Results For detection and classification, the algorithm was trained on 1,709 radiographs (n = 803), tested on 567 radiographs (n = 244), and subsequently externally validated on 535 radiographs (n = 227). For characterization, healthy shoulders and glenohumeral dislocations were excluded. The overall accuracy was 94% for fracture detection (area under the receiver operating characteristic curve (AUC) = 0.98) and 78% for classification (AUC 0.68 to 0.93). The accuracy for detecting greater tuberosity fracture displacement ≥ 1 cm was 35.0% (AUC 0.57). The CNN did not recognize NSAs ≤ 100° (AUC 0.42), fractures with ≥ 75% shaft translation (AUC 0.51 to 0.53), or fractures with ≥ 15% articular involvement (AUC 0.48 to 0.49). For all objectives, the model's performance on the external dataset showed similar accuracy levels. Conclusion CNNs proficiently rule out proximal humerus fractures on plain radiographs.
Despite rigorous training methodology based on CT imaging with multi-rater consensus to serve as the reference standard, artificial intelligence-driven classification is insufficient for clinical implementation. The CNN exhibited poor diagnostic ability to detect greater tuberosity displacement ≥ 1 cm and failed to identify NSAs ≤ 100°, shaft translations, or articular fractures.
Affiliation(s)
- Reinier W A Spek
- Department of Orthopaedic Surgery, Flinders Medical Centre, and Flinders University, Adelaide, Australia
- Department of Orthopaedic Surgery, University Medical Center Groningen, and University of Groningen, Groningen, Netherlands
- Department of Orthopaedic Surgery, OLVG, Amsterdam, Netherlands
- William J Smith
- Department of Orthopaedic Surgery, Flinders Medical Centre, and Flinders University, Adelaide, Australia
- Marat Sverdlov
- Department of Orthopaedic Surgery, Flinders Medical Centre, and Flinders University, Adelaide, Australia
- Sebastiaan Broos
- Department of Orthopaedic Surgery, Flinders Medical Centre, and Flinders University, Adelaide, Australia
- Yang Zhao
- Australian Institute for Machine Learning, Adelaide, Australia
- Zhibin Liao
- Australian Institute for Machine Learning, Adelaide, Australia
- Johan W Verjans
- Australian Institute for Machine Learning, Adelaide, Australia
- Department of Cardiology, Royal Adelaide Hospital, Adelaide, Australia
- Adelaide Medical School, University of Adelaide, Adelaide, Australia
- Jasper Prijs
- Department of Orthopaedic Surgery, Flinders Medical Centre, and Flinders University, Adelaide, Australia
- Department of Orthopaedic Surgery, University Medical Center Groningen, and University of Groningen, Groningen, Netherlands
- Minh-Son To
- South Australia Medical Imaging, Flinders Medical Centre, Adelaide, Australia
- Henrik Åberg
- Department of Orthopaedic Surgery, Institution of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Wael Chiri
- Department of Orthopaedic Surgery, Flinders Medical Centre, and Flinders University, Adelaide, Australia
- Frank F A IJpma
- Department of Orthopaedic Surgery, University Medical Center Groningen, and University of Groningen, Groningen, Netherlands
- Bhavin Jadav
- Department of Orthopaedic Surgery, Flinders Medical Centre, and Flinders University, Adelaide, Australia
- John White
- Department of Orthopaedic Surgery, Flinders Medical Centre, and Flinders University, Adelaide, Australia
- Gregory I Bain
- Department of Orthopaedic Surgery, Flinders Medical Centre, and Flinders University, Adelaide, Australia
- Paul C Jutte
- Department of Orthopaedic Surgery, University Medical Center Groningen, and University of Groningen, Groningen, Netherlands
- Michel P J van den Bekerom
- Shoulder and Elbow Expertise Center, Department of Orthopaedic Surgery, OLVG, Amsterdam, Netherlands
- Department of Human Movement Sciences, Faculty of Behavioral and Movement Sciences, Vrije Universiteit, Amsterdam, Netherlands
- Ruurd L Jaarsma
- Department of Orthopaedic Surgery, Flinders Medical Centre, and Flinders University, Adelaide, Australia
- Job N Doornberg
- Department of Orthopaedic Surgery, Flinders Medical Centre, and Flinders University, Adelaide, Australia
- Department of Orthopaedic Surgery, University Medical Center Groningen, and University of Groningen, Groningen, Netherlands
12
Tian J, Wang K, Wu P, Li J, Zhang X, Wang X. Development of a deep learning model for detecting lumbar vertebral fractures on CT images: An external validation. Eur J Radiol 2024; 180:111685. [PMID: 39197270 DOI: 10.1016/j.ejrad.2024.111685] [Received: 03/23/2024] [Revised: 05/31/2024] [Accepted: 08/14/2024] [Indexed: 09/01/2024]
Abstract
OBJECTIVE To develop and externally validate a binary classification model for lumbar vertebral body fractures on CT images using deep learning methods. METHODS This study involved data collection from two hospitals for AI model training and external validation. In Cohort A from Hospital 1, CT images from 248 patients, comprising 1508 vertebrae, revealed that 20.9% had fractures (315 vertebrae) and 79.1% were non-fractured (1193 vertebrae). In Cohort B from Hospital 2, CT images from 148 patients, comprising 887 vertebrae, indicated that 14.8% had fractures (131 vertebrae) and 85.2% were non-fractured (756 vertebrae). The AI model for lumbar spine fractures comprised two stages: vertebral body segmentation and fracture classification. The first stage utilized a 3D V-Net convolutional deep neural network, which produced a 3D segmentation map. From this map, the region of each vertebral body was extracted and then input into the second stage of the algorithm. The second stage employed a 3D ResNet convolutional deep neural network to classify each proposed region as positive (fractured) or negative (not fractured). RESULTS The AI model's accuracy for detecting vertebral fractures in Cohort A's training set (n = 1199), validation set (n = 157), and test set (n = 152) was 100.0%, 96.2%, and 97.4%, respectively. For Cohort B (n = 148), the accuracy was 96.3%. The area under the receiver operating characteristic curve (AUC-ROC) values for the training, validation, and test sets of Cohort A, as well as Cohort B, with their 95% confidence intervals (CIs), were as follows: 1.000 (1.000, 1.000), 0.978 (0.944, 1.000), 0.986 (0.969, 1.000), and 0.981 (0.970, 0.992). The area under the precision-recall curve (AUC-PR) values were 1.000 (0.996, 1.000), 0.964 (0.927, 0.985), 0.907 (0.924, 0.984), and 0.890 (0.846, 0.971), respectively.
According to the DeLong test, there was no significant difference in the AUC-ROC values between the test set of Cohort A and Cohort B, both for the overall data and for each specific vertebral location (all P>0.05). CONCLUSION The developed model demonstrates promising diagnostic accuracy and applicability for detecting lumbar vertebral fractures.
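The per-cohort accuracies above are reported with 95% confidence intervals. As a hedged sketch of how such an interval can be computed (normal approximation for a proportion; the study may have used a different interval method, and the counts below are illustrative, not the study's):

```python
import math

def accuracy_ci(correct, total, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion,
    e.g., per-vertebra classification accuracy. Clamped to [0, 1]."""
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Illustrative only: 96 of 100 vertebrae classified correctly.
p, lo, hi = accuracy_ci(96, 100)
print(round(p, 3), round(lo, 3), round(hi, 3))  # 0.96 0.922 0.998
```

Note that near-perfect accuracies (as in the training set here) make the normal approximation degenerate; exact or bootstrap intervals behave better in that regime.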
Affiliation(s)
- Jingyi Tian
- Department of Radiology, Peking University First Hospital, Beijing, China; Department of Radiology, Beijing Water Conservancy Hospital, Beijing, China
- Kexin Wang
- School of Basic Medical Sciences, Capital Medical University, Beijing, China
- Pengsheng Wu
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Jialun Li
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, Beijing, China
13
Breu R, Avelar C, Bertalan Z, Grillari J, Redl H, Ljuhar R, Quadlbauer S, Hausner T. Artificial intelligence in traumatology. Bone Joint Res 2024; 13:588-595. [PMID: 39417424 PMCID: PMC11484119 DOI: 10.1302/2046-3758.1310.bjr-2023-0275.r3] [Indexed: 10/19/2024] Open
Abstract
Aims The aim of this study was to create artificial intelligence (AI) software to provide a second opinion to physicians to support distal radius fracture (DRF) detection, and to compare the fracture-detection accuracy of physicians with and without software support. Methods The dataset consisted of 26,121 anonymized anterior-posterior (AP) and lateral standard view radiographs of the wrist, with and without DRF. The convolutional neural network (CNN) model was trained to detect the presence of a DRF by comparing the radiographs containing a fracture to the inconspicuous ones. A total of 11 physicians (six surgeons in training and five hand surgeons) assessed 200 pairs of randomly selected digital radiographs of the wrist (AP and lateral) for the presence of a DRF. The same images were first evaluated without, and then with, the support of the CNN model, and the diagnostic accuracy of the two methods was compared. Results At the time of the study, the CNN model showed an area under the receiver operating characteristic curve of 0.97. AI assistance improved the physicians' sensitivity (correct fracture detection) from 80% to 87%, and specificity (correct fracture exclusion) from 91% to 95%. The overall error rate (combined false positives and false negatives) was reduced from 14% without AI to 9% with AI. Conclusion The use of a CNN model as a second opinion can improve the diagnostic accuracy of DRF detection in the study setting.
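Sensitivity, specificity, and the combined error rate quoted above all derive from the same confusion counts. A hedged sketch with illustrative counts (chosen only to mirror the with-AI reading on a hypothetical balanced 200-image set; these are not the study's raw numbers):

```python
def sens_spec_error(tp, fn, tn, fp):
    """Sensitivity, specificity, and overall error rate from confusion counts."""
    sens = tp / (tp + fn)          # correct fracture detection
    spec = tn / (tn + fp)          # correct fracture exclusion
    err = (fn + fp) / (tp + fn + tn + fp)  # combined false pos + false neg
    return sens, spec, err

# Illustrative: 100 fractured and 100 normal wrists, read with AI support.
print(sens_spec_error(tp=87, fn=13, tn=95, fp=5))  # (0.87, 0.95, 0.09)
```

The same function applied to the without-AI counts (e.g., tp=80, fn=20, tn=91, fp=9 on the same hypothetical set) reproduces the quoted 80%/91% and a ~14.5% error rate.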
Affiliation(s)
- Rosmarie Breu
- Orthopedic Hospital Vienna-Speising, Vienna, Austria
- AUVA Trauma Hospital Lorenz Böhler, Vienna, Austria
- Ludwig Boltzmann Institute for Traumatology, the Research Center in Cooperation with AUVA, Vienna, Austria
- Johannes Grillari
- Ludwig Boltzmann Institute for Traumatology, the Research Center in Cooperation with AUVA, Vienna, Austria
- Institute of Molecular Biotechnology, University of Natural Resources and Life Sciences, Vienna, Austria
- Austrian Cluster for Tissue Regeneration, Vienna, Austria
- Heinz Redl
- Ludwig Boltzmann Institute for Traumatology, the Research Center in Cooperation with AUVA, Vienna, Austria
- Austrian Cluster for Tissue Regeneration, Vienna, Austria
- Richard Ljuhar
- ImageBiopsy Lab, Vienna, Austria
- Institute of Molecular Biotechnology, University of Natural Resources and Life Sciences, Vienna, Austria
- Thomas Hausner
- AUVA Trauma Hospital Lorenz Böhler, Vienna, Austria
- Ludwig Boltzmann Institute for Traumatology, the Research Center in Cooperation with AUVA, Vienna, Austria
- Austrian Cluster for Tissue Regeneration, Vienna, Austria
- Department for Orthopedic Surgery and Traumatology, Paracelsus Medical University, Salzburg, Austria
14
Wu S, Kurugol S, Tsai A. Improving the radiographic image analysis of the classic metaphyseal lesion via conditional diffusion models. Med Image Anal 2024; 97:103284. [PMID: 39096843 PMCID: PMC11365766 DOI: 10.1016/j.media.2024.103284] [Received: 08/02/2023] [Revised: 06/06/2024] [Accepted: 07/19/2024] [Indexed: 08/05/2024]
Abstract
The classic metaphyseal lesion (CML) is a unique fracture highly specific for infant abuse. This fracture is often subtle in radiographic appearance and commonly occurs in the distal tibia. The development of an automated model that can accurately identify distal tibial radiographs with CMLs is important to assist radiologists in detecting these fractures. However, building such a model typically requires a large and diverse training dataset. To address this problem, we propose a novel diffusion model for data augmentation called masked conditional diffusion model (MaC-DM). In contrast to previous generative models, our approach produces a wide range of realistic-appearing synthetic images of distal tibial radiographs along with their associated segmentation masks. MaC-DM achieves this by incorporating weighted segmentation masks of the distal tibias and CML fracture sites as image conditions for guidance. The augmented images produced by MaC-DM significantly enhance the performance of various commonly used classification models, accurately distinguishing normal distal tibial radiographs from those with CMLs. Additionally, it substantially improves the performance of different segmentation models, accurately labeling areas of the CMLs on distal tibial radiographs. Furthermore, MaC-DM can control the size of the CML fracture in the augmented images.
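MaC-DM guides generation by supplying weighted segmentation masks as image conditions. A hedged, schematic sketch of the conditioning step only (channel-wise stacking of the image with weighted anatomy and fracture-site masks; the weights, names, and shapes here are illustrative assumptions, and the actual MaC-DM architecture is far more involved):

```python
import numpy as np

def build_conditioned_input(noisy_image, tibia_mask, cml_mask,
                            w_tibia=1.0, w_cml=2.0):
    """Stack the (noisy) radiograph with weighted segmentation masks so a
    denoising network sees the anatomy (distal tibia) and the fracture site
    (CML) as explicit conditioning channels. Weights are illustrative."""
    cond = np.stack(
        [noisy_image, w_tibia * tibia_mask, w_cml * cml_mask], axis=0
    )
    return cond  # shape: (3, H, W)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
tibia = (rng.random((64, 64)) > 0.5).astype(float)
cml = (rng.random((64, 64)) > 0.9).astype(float)
print(build_conditioned_input(img, tibia, cml).shape)  # (3, 64, 64)
```

The stacked tensor would then be fed to the diffusion model's denoiser at each timestep; weighting the CML channel more heavily is one plausible way to emphasize the fracture site during guidance.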
Affiliation(s)
- Shaoju Wu
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Sila Kurugol
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Andy Tsai
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
15
Zoulakis M, Axelsson KF, Litsne H, Johansson L, Lorentzon M. Real-world effectiveness of osteoporosis screening in older Swedish women (SUPERB). Bone 2024; 187:117204. [PMID: 39019129 DOI: 10.1016/j.bone.2024.117204] [Received: 04/10/2024] [Revised: 06/24/2024] [Accepted: 07/12/2024] [Indexed: 07/19/2024]
Abstract
Older women diagnosed with osteoporosis and referred to their general practitioners (GPs) exhibited significantly higher osteoporosis treatment rates and a reduced fracture risk compared with non-osteoporotic women who were not referred to their GPs. OBJECTIVE The objective of this study was to investigate treatment rates and fracture outcomes in older women from a population-based study who were 1) diagnosed with osteoporosis and subsequently referred to their general practitioner (GP), or 2) without osteoporosis and not referred to their GP. METHODS In total, 3028 women aged 75-80 years were included in the SUPERB cohort. At inclusion, 443 women were diagnosed with osteoporosis (bone mineral density (BMD) T-score ≤ -2.5) at the lumbar spine or hip, had no current or recent osteoporosis treatment, and were referred to their GP for evaluation (referral group). The remaining 2585 women without osteoporosis composed the control group. Sensitivity analyses were performed on subsets of the original groups. Adjusted Cox regression analyses (hazard ratios (HR) with 95% confidence intervals (CI)) were performed to investigate the risk of incident fractures and the incidence of osteoporosis treatment. RESULTS Cox regression models, adjusted for age, sex, body mass index (BMI), smoking, alcohol, glucocorticoid use, previous fracture, parent hip fracture, secondary osteoporosis, rheumatoid arthritis, and BMD at the femoral neck, revealed that the risk of major osteoporotic fracture (MOF) was significantly lower (HR = 0.81, 95% CI [0.67-0.99]) in the referral group than in the controls. Similarly, the risks of hip fracture (HR = 0.69, [0.48-0.98]) and any fracture (HR = 0.84, [0.70-1.00]) were lower in the referral group. During follow-up, there was a 5-fold increase (HR = 5.00, [4.39-5.74]) in the prescription of osteoporosis medication in the referral group compared with the control group.
CONCLUSION Screening older women for osteoporosis and referring those with an osteoporosis diagnosis to their GP was associated with substantially increased treatment rates and a reduced risk of any fracture, MOF, and hip fracture, compared with non-osteoporotic women who were not referred.
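The hazard ratios above come from Cox proportional hazards models. As a hedged illustration of the quantity being maximized (a minimal partial log-likelihood for a single binary covariate such as referral group vs control, with no tie handling; real analyses use packaged implementations with Breslow or Efron tie corrections and multiple covariates):

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Cox partial log-likelihood for one binary covariate x (0/1).
    times: follow-up times; events: 1 if a fracture occurred, 0 if censored.
    Assumes no tied event times (a simplifying assumption of this sketch)."""
    ll = 0.0
    for i, (t, d) in enumerate(zip(times, events)):
        if not d:
            continue  # censored observations contribute only via risk sets
        # Risk set: everyone still under observation at time t.
        risk = sum(math.exp(beta * x[j]) for j in range(len(times)) if times[j] >= t)
        ll += beta * x[i] - math.log(risk)
    return ll

# Tiny illustrative dataset: at beta = 0 the likelihood depends only on
# risk-set sizes (3, 2, 1), giving -log(6).
print(cox_partial_loglik(0.0, [1, 2, 3], [1, 1, 1], [0, 1, 0]))  # ≈ -1.792
```

Maximizing this function over beta and exponentiating gives the hazard ratio; HR = 0.81 corresponds to beta = log(0.81), i.e., a 19% lower hazard in the referral group.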
Affiliation(s)
- Michail Zoulakis
- Sahlgrenska Osteoporosis Centre, Department of Internal Medicine and Clinical Nutrition, Institute of Medicine, University of Gothenburg, Gothenburg, Sweden; Region Västra Götaland, Department of Geriatric Medicine, Sahlgrenska University Hospital, Mölndal, Sweden
- Kristian F Axelsson
- Sahlgrenska Osteoporosis Centre, Department of Internal Medicine and Clinical Nutrition, Institute of Medicine, University of Gothenburg, Gothenburg, Sweden; Region Västra Götaland, Närhälsan Norrmalm, Health Centre, Sweden
- Henrik Litsne
- Sahlgrenska Osteoporosis Centre, Department of Internal Medicine and Clinical Nutrition, Institute of Medicine, University of Gothenburg, Gothenburg, Sweden
- Lisa Johansson
- Sahlgrenska Osteoporosis Centre, Department of Internal Medicine and Clinical Nutrition, Institute of Medicine, University of Gothenburg, Gothenburg, Sweden; Region Västra Götaland, Department of Orthopedic Surgery, Sahlgrenska University Hospital, Mölndal, Sweden
- Mattias Lorentzon
- Sahlgrenska Osteoporosis Centre, Department of Internal Medicine and Clinical Nutrition, Institute of Medicine, University of Gothenburg, Gothenburg, Sweden; Region Västra Götaland, Department of Geriatric Medicine, Sahlgrenska University Hospital, Mölndal, Sweden; Mary MacKillop Institute for Health Research, Australian Catholic University, Melbourne, VIC, Australia
16
Herpe G, Nelken H, Vendeuvre T, Guenezan J, Giraud C, Mimoz O, Feydy A, Tasu JP, Guillevin R. Effectiveness of an Artificial Intelligence Software for Limb Radiographic Fracture Recognition in an Emergency Department. J Clin Med 2024; 13:5575. [PMID: 39337062 PMCID: PMC11433213 DOI: 10.3390/jcm13185575] [Received: 08/12/2024] [Revised: 09/08/2024] [Accepted: 09/12/2024] [Indexed: 09/30/2024] Open
Abstract
Objectives: To assess the impact of an Artificial Intelligence (AI) limb bone fracture diagnosis software (AIS) on emergency department (ED) workflow and diagnostic accuracy. Materials and Methods: A retrospective study was conducted in two phases: without AIS (Period 1: 1 January 2020-30 June 2020) and with AIS (Period 2: 1 January 2021-30 June 2021). Results: Among 3720 patients (1780 in Period 1; 1940 in Period 2), the discrepancy rate decreased by 17% (p = 0.04) after AIS implementation. Clinically relevant discrepancies showed no significant change (-1.8%, p = 0.99). The mean length of stay in the ED was reduced by 9 minutes (p = 0.03), and expert consultation rates decreased by 1% (p = 0.38). Conclusions: AIS implementation reduced the overall discrepancy rate and slightly decreased ED length of stay, although its impact on clinically relevant discrepancies remains inconclusive. Key Point: After AI software deployment, the rate of radiographic discrepancies decreased by 17% (p = 0.04), but the change in clinically relevant discrepancies was not significant (-2%, p = 0.99). Length of patient stay in the emergency department decreased by 5% with AI (p = 0.03). Bone fracture AI software is effective at reducing overall discrepancies, but its impact on clinically relevant errors remains to be demonstrated.
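The before/after discrepancy rates are compared as proportions between the two periods. The paper does not state its exact test, so as a hedged sketch only, the standard pooled two-proportion z-test looks like this:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-statistic, e.g., for comparing discrepancy
    rates between Period 1 and Period 2. x = discrepant reads, n = total."""
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (x1 / n1 - x2 / n2) / se

# Illustrative counts only (not the study's): equal rates give z = 0,
# a higher Period-1 rate gives z > 0.
print(two_proportion_z(10, 100, 10, 100))  # 0.0
print(two_proportion_z(20, 100, 10, 100) > 0)  # True
```

A two-sided p-value then follows from the standard normal distribution of z under the null hypothesis of equal rates.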
Affiliation(s)
- Guillaume Herpe
- Emergency Radiology Unit, University Hospital Center of Poitiers, 86000 Poitiers, France
- Laboratoire de Mathématiques Appliquées LMA, CNRS UMR 7348, 86021 Poitiers, France
- Helena Nelken
- Emergency Radiology Unit, University Hospital Center of Poitiers, 86000 Poitiers, France
- Tanguy Vendeuvre
- Emergency Department, University Hospital Center of Poitiers, 86000 Poitiers, France
- Jeremy Guenezan
- Emergency Department, University Hospital Center of Poitiers, 86000 Poitiers, France
- Clement Giraud
- Laboratoire de Mathématiques Appliquées LMA, CNRS UMR 7348, 86021 Poitiers, France
- Olivier Mimoz
- Emergency Department, University Hospital Center of Poitiers, 86000 Poitiers, France
- Antoine Feydy
- Department of Musculoskeletal Imaging, Cochin Hospital, AP-HP, 75014 Paris, France
- Jean-Pierre Tasu
- Department of Diagnostic and Interventional Radiology, Poitiers University Hospital, 86000 Poitiers, France
- Rémy Guillevin
- Emergency Department, University Hospital Center of Poitiers, 86000 Poitiers, France
- CHU de Poitiers Service de Radiologie, 86000 Poitiers, France
17
Alzubaidi L, Al-Dulaimi K, Salhi A, Alammar Z, Fadhel MA, Albahri AS, Alamoodi AH, Albahri OS, Hasan AF, Bai J, Gilliland L, Peng J, Branni M, Shuker T, Cutbush K, Santamaría J, Moreira C, Ouyang C, Duan Y, Manoufali M, Jomaa M, Gupta A, Abbosh A, Gu Y. Comprehensive review of deep learning in orthopaedics: Applications, challenges, trustworthiness, and fusion. Artif Intell Med 2024; 155:102935. [PMID: 39079201 DOI: 10.1016/j.artmed.2024.102935] [Received: 06/01/2023] [Revised: 03/18/2024] [Accepted: 07/22/2024] [Indexed: 08/24/2024]
Abstract
Deep learning (DL) in orthopaedics has gained significant attention in recent years. Previous studies have shown that DL can be applied to a wide variety of orthopaedic tasks, including fracture detection, bone tumour diagnosis, implant recognition, and evaluation of osteoarthritis severity. The utilisation of DL is expected to increase, owing to its ability to present accurate diagnoses more efficiently than traditional methods in many scenarios. This reduces the time and cost of diagnosis for patients and orthopaedic surgeons. To our knowledge, no exclusive study has comprehensively reviewed all aspects of DL currently used in orthopaedic practice. This review addresses this knowledge gap using articles from Science Direct, Scopus, IEEE Xplore, and Web of Science between 2017 and 2023. The authors begin with the motivation for using DL in orthopaedics, including its ability to enhance diagnosis and treatment planning. The review then covers various applications of DL in orthopaedics, including fracture detection, detection of supraspinatus tears using MRI, osteoarthritis, prediction of types of arthroplasty implants, bone age assessment, and detection of joint-specific soft tissue disease. We also examine the challenges for implementing DL in orthopaedics, including the scarcity of data to train DL and the lack of interpretability, as well as possible solutions to these common pitfalls. Our work highlights the requirements to achieve trustworthiness in the outcomes generated by DL, including the need for accuracy, explainability, and fairness in the DL models. We pay particular attention to fusion techniques as one of the ways to increase trustworthiness, which have also been used to address the common multimodality in orthopaedics. Finally, we have reviewed the approval requirements set forth by the US Food and Drug Administration to enable the use of DL applications. 
As such, we aim to have this review function as a guide for researchers to develop a reliable DL application for orthopaedic tasks from scratch for use in the market.
Affiliation(s)
- Laith Alzubaidi
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Khamael Al-Dulaimi
- Computer Science Department, College of Science, Al-Nahrain University, Baghdad, Baghdad 10011, Iraq; School of Electrical Engineering and Robotics, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Asma Salhi
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Zaenab Alammar
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Mohammed A Fadhel
- Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- A S Albahri
- Technical College, Imam Ja'afar Al-Sadiq University, Baghdad, Iraq
- A H Alamoodi
- Institute of Informatics and Computing in Energy, Universiti Tenaga Nasional, Kajang 43000, Malaysia
- O S Albahri
- Australian Technical and Management College, Melbourne, Australia
- Amjad F Hasan
- Faculty of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Jinshuai Bai
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Luke Gilliland
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Jing Peng
- Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Marco Branni
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Tristan Shuker
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Kenneth Cutbush
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Jose Santamaría
- Department of Computer Science, University of Jaén, Jaén 23071, Spain
- Catarina Moreira
- Data Science Institute, University of Technology Sydney, Australia
- Chun Ouyang
- School of Information Systems, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Ye Duan
- School of Computing, Clemson University, Clemson, 29631, SC, USA
- Mohamed Manoufali
- CSIRO, Kensington, WA 6151, Australia; School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD 4067, Australia
- Mohammad Jomaa
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Ashish Gupta
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Amin Abbosh
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD 4067, Australia
- Yuantong Gu
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia
18
Zhu X, Liu D, Liu L, Guo J, Li Z, Zhao Y, Wu T, Liu K, Liu X, Pan X, Qi L, Zhang Y, Cheng L, Chen B. Fully Automatic Deep Learning Model for Spine Refracture in Patients with OVCF: A Multi-Center Study. Orthop Surg 2024; 16:2052-2065. [PMID: 38952050 PMCID: PMC11293932 DOI: 10.1111/os.14155] [Received: 03/11/2024] [Revised: 06/06/2024] [Accepted: 06/09/2024] [Indexed: 07/03/2024] Open
Abstract
BACKGROUND Research on artificial intelligence (AI) models for predicting spinal refracture has been limited to bone mineral density, X-rays, and some conventional laboratory indicators, which have their own limitations. Moreover, such models lack indicators specific to osteoporosis and imaging factors that better reflect bone quality, such as computed tomography (CT). OBJECTIVE To construct a novel prediction model based on bone turnover markers and CT to identify patients at higher risk of spine refracture. METHODS CT images and clinical information of 383 patients (training set = 240 cases of osteoporotic vertebral compression fractures (OVCF), validation set = 63, test set = 80) were retrospectively collected from January 2015 to October 2022 at three medical centers. A U-Net model was adopted to automatically segment the region of interest (ROI). Three-dimensional (3D) cropping of all spine regions was used to obtain the final ROI regions, termed 3D_Full and 3D_RoiOnly. We used a DenseNet121-3D model to model the cropped regions and simultaneously built a T-NIPT prediction model. Diagnostic performance of the deep learning models was assessed by constructing ROC curves. We generated calibration curves to assess calibration performance. Additionally, decision curve analysis (DCA) was used to assess the clinical utility of the predictive models. RESULTS The performance of the model on the test set was comparable to its performance on the training set (a Dice coefficient of 0.798, an mIoU of 0.755, an SA of 0.767, and an OS of 0.017). Univariable and multivariable analyses indicated that T_P1NT was an independent risk factor for refracture. Comparing refracture prediction across ROI regions showed that the 3D_Full model exhibited the highest calibration performance, with a Hosmer-Lemeshow goodness-of-fit (HL) test statistic exceeding 0.05.
Analysis of the training and test sets showed that the 3D_Full model, which integrates clinical and deep learning results, demonstrated superior performance, with significant improvement (p < 0.05) over using clinical features alone or 3D_RoiOnly alone. CONCLUSION T_P1NT was an independent risk factor for refracture. Our 3D_Full model showed better performance in predicting populations at high risk of spine refracture than other models and junior doctors. This model is suited to real-world translation owing to its automatic segmentation and detection.
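The segmentation quality above is summarized by the Dice coefficient. As a hedged, minimal reference implementation for flat binary masks (the study computes it over 3D volumes, but the definition is identical):

```python
def dice(a, b):
    """Dice coefficient between two binary masks given as flat 0/1 sequences:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks by convention."""
    inter = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0

# Two 4-pixel masks overlapping on one pixel:
print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```

For a 3D volume the masks are simply flattened first; mIoU relates to Dice by IoU = Dice / (2 - Dice) for each class.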
Affiliation(s)
- Xuetao Zhu
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Dejian Liu
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Lian Liu
- Department of Emergency Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Jingxuan Guo
- Department of Anesthesiology, Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- Zedi Li
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Yixiang Zhao
- Department of Orthopaedic Surgery, Yantaishan Hospital, Yantai, China
- Tianhao Wu
- Department of Hepatopancreatobiliary Surgery, Graduate School of Dalian Medical University, Dalian, China
- Kaiwen Liu
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Xinyu Liu
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Xin Pan
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Lei Qi
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Yuanqiang Zhang
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Lei Cheng
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Bin Chen
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
19
Inoue K, Maki S, Yamaguchi S, Kimura S, Akagi R, Sasho T, Ohtori S, Orita S. Estimation of the Radiographic Parameters for Hallux Valgus From Photography of the Feet Using a Deep Convolutional Neural Network. Cureus 2024; 16:e65557. [PMID: 39192936 PMCID: PMC11348822 DOI: 10.7759/cureus.65557] [Accepted: 06/25/2024] [Indexed: 08/29/2024] Open
Abstract
BACKGROUND Hallux valgus (HV), also known as bunion deformity, is one of the most common forefoot deformities. Early diagnosis and proper evaluation of HV are important because timely management can improve symptoms and quality of life. Here, we propose a deep learning estimation of the radiographic measurements of HV based on a regression network, where the input to the algorithm is digital photographs of the forefoot and the radiographic measurements of HV are computed directly as output. The purpose of our study was to estimate the radiographic parameters of HV using deep learning, to classify the severity by grade, and to assess the agreement of the predicted measurements with the actual radiographic measurements. METHODS A total of 131 patients were enrolled in this study, and 248 radiographs and 337 photographs of the feet were acquired. Radiographic parameters, including the HV angle (HVA), M1-M2 angle, and M1-M5 angle, were measured. We constructed a convolutional neural network based on Xception and converted the classification model into a regression model. Then, we fine-tuned the model using images of the feet and the radiographic parameters. The coefficient of determination (R2) and root mean squared error (RMSE), as well as Cohen's kappa coefficient, were calculated to evaluate the performance of the model. RESULTS The radiographic parameters HVA, M1-M2 angle, and M1-M5 angle were predicted with R2 = 0.684, RMSE = 7.91; R2 = 0.573, RMSE = 3.29; and R2 = 0.381, RMSE = 5.80, respectively. CONCLUSION The present study demonstrated that our model could predict the radiographic parameters of HV from photographs. Moreover, the agreement between the predicted and actual grade of HV was substantial. This study shows a potential application of a convolutional neural network for the screening of HV.
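Model agreement above is reported as R2 and RMSE. A hedged, plain-Python computation of both (the angle values below are illustrative only, not the study's predictions):

```python
import math

def r2_rmse(y_true, y_pred):
    """Coefficient of determination (R2) and root mean squared error (RMSE)
    between measured and predicted values."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / len(y_true))
    return r2, rmse

# Illustrative hallux valgus angles in degrees (measured vs predicted):
r2, rmse = r2_rmse([10.0, 20.0, 30.0], [12.0, 18.0, 31.0])
print(round(r2, 3), round(rmse, 3))  # 0.955 1.732
```

Note that R2 is unitless while RMSE carries the units of the target (degrees here), which is why the three angles can share similar RMSEs yet very different R2 values.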
Affiliation(s)
- Kana Inoue: Department of Medical Engineering, Graduate School of Science and Engineering, Chiba University, Chiba, JPN
- Satoshi Maki: Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, Chiba, JPN; Center for Frontier Medical Engineering, Chiba University, Chiba, JPN
- Satoshi Yamaguchi: Graduate School of Global and Transdisciplinary Studies, College of Liberal Arts and Sciences, Chiba University, Chiba, JPN; Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, Chiba, JPN
- Seiji Kimura: Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, Chiba, JPN
- Ryuichiro Akagi: Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, Chiba, JPN
- Takahisa Sasho: Center for Preventive Medical Sciences, Chiba University, Chiba, JPN; Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, Chiba, JPN
- Seiji Ohtori: Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, Chiba, JPN
- Sumihisa Orita: Department of Orthopedic Surgery, Chiba University, Chiba, JPN; Center for Frontier Medical Engineering, Chiba University, Chiba, JPN

20
Nadeem SA, Comellas AP, Regan EA, Hoffman EA, Saha PK. Chest CT-based automated vertebral fracture assessment using artificial intelligence and morphologic features. Med Phys 2024; 51:4201-4218. [PMID: 38721977 PMCID: PMC11661457 DOI: 10.1002/mp.17072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2023] [Revised: 04/02/2024] [Accepted: 04/02/2024] [Indexed: 06/05/2024] Open
Abstract
BACKGROUND Spinal degeneration and vertebral compression fractures are common among the elderly and adversely affect their mobility, quality of life, lung function, and mortality. Assessment of vertebral fractures in chronic obstructive pulmonary disease (COPD) is important due to the high prevalence of osteoporosis and associated vertebral fractures in COPD. PURPOSE We present new automated methods for (1) segmentation and labelling of individual vertebrae in chest computed tomography (CT) images using deep learning (DL) and a multi-parametric freeze-and-grow (FG) algorithm, with separation of apparently fused vertebrae using intensity autocorrelation, and (2) vertebral deformity fracture detection using computed vertebral height features and parametric computational modelling of an established protocol outlined for trained human experts. METHODS A chest CT-based automated method was developed for quantitative deformity fracture assessment following the protocol by Genant et al. The computational method was accomplished in the following steps: (1) computation of a voxel-level vertebral body likelihood map from chest CT using a trained DL network; (2) delineation and labelling of individual vertebrae on the likelihood map using an iterative multi-parametric FG algorithm; (3) separation of apparently fused vertebrae in CT using intensity autocorrelation; (4) computation of vertebral heights using contour analysis on the central anterior-posterior (AP) plane of a vertebral body; (5) assessment of vertebral fracture status using ratio functions of vertebral heights and optimized thresholds. The method was applied to inspiratory or total lung capacity (TLC) chest scans from the multi-site Genetic Epidemiology of COPD (COPDGene) (ClinicalTrials.gov: NCT00608764) study, and the performance was examined (n = 3231).
One hundred and twenty scans randomly selected from this dataset were partitioned into training (n = 80) and validation (n = 40) datasets for the DL-based vertebral body classifier. Also, generalizability of the method to low dose CT imaging (n = 236) was evaluated. RESULTS The vertebral segmentation module achieved a Dice score of .984 as compared to manual outlining results as reference (n = 100); the segmentation performance was consistent across images with the minimum and maximum of Dice scores among images being .980 and .989, respectively. The vertebral labelling module achieved 100% accuracy (n = 100). For low dose CT, the segmentation module produced image-level minimum and maximum Dice scores of .995 and .999, respectively, as compared to standard dose CT as the reference; vertebral labelling at low dose CT was fully consistent with standard dose CT (n = 236). The fracture assessment method achieved overall accuracy, sensitivity, and specificity of 98.3%, 94.8%, and 98.5%, respectively, for 40,050 vertebrae from 3231 COPDGene participants. For generalizability experiments, fracture assessment from low dose CT was consistent with the reference standard dose CT results across all participants. CONCLUSIONS Our CT-based automated method for vertebral fracture assessment is accurate, and it offers a feasible alternative to manual expert reading, especially for large population-based studies, where automation is important for high efficiency. Generalizability of the method to low dose CT imaging further extends the scope of application of the method, particularly since the usage of low dose CT imaging in large population-based studies has increased to reduce cumulative radiation exposure.
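Step (5) of the method above scores deformity from vertebral height ratios. As an illustration only, here is a sketch using the height-loss cut-offs of Genant's semiquantitative protocol; the study tuned its own optimized thresholds, and its reference-height definition may differ from the simple maximum used here:

```python
def genant_grade(h_anterior, h_middle, h_posterior):
    """Grade vertebral deformity from anterior/middle/posterior body heights
    using Genant-style height-loss thresholds (0 = normal, 1 = mild,
    2 = moderate, 3 = severe). The largest of the three heights is used as
    a simple stand-in for the expected (unfractured) height."""
    reference = max(h_anterior, h_middle, h_posterior)
    loss = 1.0 - min(h_anterior, h_middle, h_posterior) / reference
    if loss < 0.20:
        return 0  # below the fracture threshold
    if loss < 0.25:
        return 1  # mild deformity (20-25% height loss)
    if loss < 0.40:
        return 2  # moderate deformity (25-40% height loss)
    return 3      # severe deformity (>40% height loss)
```

For example, an anterior height of 15 mm against a 20 mm posterior height (25% loss) falls at the mild/moderate boundary.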
Affiliation(s)
- Syed Ahmed Nadeem: Department of Radiology, Carver College of Medicine, The University of Iowa, Iowa City, Iowa, USA
- Alejandro P Comellas: Department of Internal Medicine, Carver College of Medicine, The University of Iowa, Iowa City, Iowa, USA
- Elizabeth A Regan: Department of Epidemiology, Colorado School of Public Health, University of Colorado, Aurora, Colorado, USA; Division of Rheumatology, National Jewish Health, Denver, Colorado, USA
- Eric A Hoffman: Department of Radiology, Carver College of Medicine, The University of Iowa, Iowa City, Iowa, USA; Department of Internal Medicine, Carver College of Medicine, The University of Iowa, Iowa City, Iowa, USA; Department of Biomedical Engineering, College of Engineering, The University of Iowa, Iowa City, Iowa, USA
- Punam K Saha: Department of Radiology, Carver College of Medicine, The University of Iowa, Iowa City, Iowa, USA; Department of Electrical and Computer Engineering, College of Engineering, The University of Iowa, Iowa City, Iowa, USA

21
Oeding JF, Kunze KN, Messer CJ, Pareek A, Fufa DT, Pulos N, Rhee PC. Diagnostic Performance of Artificial Intelligence for Detection of Scaphoid and Distal Radius Fractures: A Systematic Review. J Hand Surg Am 2024; 49:411-422. [PMID: 38551529 DOI: 10.1016/j.jhsa.2024.01.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/23/2023] [Revised: 01/19/2024] [Accepted: 01/31/2024] [Indexed: 05/05/2024]
Abstract
PURPOSE To review the existing literature to (1) determine the diagnostic efficacy of artificial intelligence (AI) models for detecting scaphoid and distal radius fractures and (2) compare their efficacy with that of human clinical experts. METHODS PubMed, OVID/Medline, and Cochrane libraries were queried for studies investigating the development, validation, and analysis of AI for the detection of scaphoid or distal radius fractures. Data regarding study design, AI model development and architecture, prediction accuracy/area under the receiver operating characteristic curve (AUROC), and imaging modalities were recorded. RESULTS A total of 21 studies were identified, of which 12 (57.1%) used AI to detect fractures of the distal radius, and nine (42.9%) used AI to detect fractures of the scaphoid. AI models demonstrated good diagnostic performance on average, with AUROC values ranging from 0.77 to 0.96 for scaphoid fractures and from 0.90 to 0.99 for distal radius fractures. Accuracy of AI models ranged from 72.0% to 90.3% and from 89.0% to 98.0% for scaphoid and distal radius fractures, respectively. When compared to clinical experts, 13 of 14 (92.9%) studies reported that AI models demonstrated comparable or better performance. The type of fracture influenced model performance, with worse overall performance on occult scaphoid fractures; however, models trained specifically on occult fractures demonstrated substantially improved performance when compared to humans. CONCLUSIONS AI models demonstrated excellent performance for detecting scaphoid and distal radius fractures, with the majority demonstrating comparable or better performance compared with human experts. Worse performance was demonstrated on occult fractures. However, when trained specifically on difficult fracture patterns, AI models demonstrated improved performance.
CLINICAL RELEVANCE AI models can help detect commonly missed occult fractures while enhancing workflow efficiency for distal radius and scaphoid fracture diagnoses. As performance varies based on fracture type, future studies focused on wrist fracture detection should clearly define whether the goal is to (1) identify difficult-to-detect fractures or (2) improve workflow efficiency by assisting in routine tasks.
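AUROC, the headline metric in this review, equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case (ties counting half). A minimal sketch with hypothetical labels and scores:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney formulation: fraction of positive/negative
    pairs where the positive outscores the negative (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical fracture labels (1 = fracture) and model scores
value = auroc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])  # 3 of 4 pairs correctly ordered
```

The O(n^2) pair loop is fine for illustration; production code would use a rank-based formulation instead.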
Affiliation(s)
- Jacob F Oeding: School of Medicine, Mayo Clinic Alix School of Medicine, Rochester, MN; Department of Orthopaedics, Institute of Clinical Sciences, The Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Kyle N Kunze: Department of Orthopaedic Surgery, Hospital for Special Surgery, New York, NY
- Caden J Messer: School of Medicine, Mayo Clinic Alix School of Medicine, Rochester, MN
- Ayoosh Pareek: Department of Orthopaedic Surgery, Hospital for Special Surgery, New York, NY
- Duretti T Fufa: Department of Orthopaedic Surgery, Hospital for Special Surgery, New York, NY
- Nicholas Pulos: Department of Orthopaedic Surgery, Mayo Clinic, Rochester, MN
- Peter C Rhee: Department of Orthopaedic Surgery, Mayo Clinic, Rochester, MN

22
Carriero A, Groenhoff L, Vologina E, Basile P, Albera M. Deep Learning in Breast Cancer Imaging: State of the Art and Recent Advancements in Early 2024. Diagnostics (Basel) 2024; 14:848. [PMID: 38667493 PMCID: PMC11048882 DOI: 10.3390/diagnostics14080848] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/29/2024] [Revised: 04/07/2024] [Accepted: 04/17/2024] [Indexed: 04/28/2024] Open
Abstract
The rapid advancement of artificial intelligence (AI) has significantly impacted various aspects of healthcare, particularly in the medical imaging field. This review focuses on recent developments in the application of deep learning (DL) techniques to breast cancer imaging. DL models, a subset of AI algorithms inspired by human brain architecture, have demonstrated remarkable success in analyzing complex medical images, enhancing diagnostic precision, and streamlining workflows. DL models have been applied to breast cancer diagnosis via mammography, ultrasonography, and magnetic resonance imaging. Furthermore, DL-based radiomic approaches may play a role in breast cancer risk assessment, prognosis prediction, and therapeutic response monitoring. Nevertheless, several challenges have limited the widespread adoption of AI techniques in clinical practice, emphasizing the importance of rigorous validation, interpretability, and technical considerations when implementing DL solutions. By examining fundamental concepts in DL techniques applied to medical imaging and synthesizing the latest advancements and trends, this narrative review aims to provide valuable and up-to-date insights for radiologists seeking to harness the power of AI in breast cancer care.
Affiliation(s)
- Léon Groenhoff: Radiology Department, Maggiore della Carità Hospital, 28100 Novara, Italy (affiliation shared by A.C., E.V., P.B., and M.A.)

23
Polzer C, Yilmaz E, Meyer C, Jang H, Jansen O, Lorenz C, Bürger C, Glüer CC, Sedaghat S. AI-based automated detection and stability analysis of traumatic vertebral body fractures on computed tomography. Eur J Radiol 2024; 173:111364. [PMID: 38364589 DOI: 10.1016/j.ejrad.2024.111364] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2023] [Revised: 12/29/2023] [Accepted: 02/08/2024] [Indexed: 02/18/2024]
Abstract
PURPOSE We developed and tested a neural network for automated detection and stability analysis of vertebral body fractures on computed tomography (CT). MATERIALS AND METHODS 257 patients who underwent CT were included in this Institutional Review Board (IRB) approved study. 463 fractured and 1883 non-fractured vertebral bodies were included, with 190 fractures unstable. Two readers identified vertebral body fractures and assessed their stability. A combination of a Hierarchical Convolutional Neural Network (hNet) and a fracture Classification Network (fNet) was used to build a neural network for the automated detection and stability analysis of vertebral body fractures on CT. Two final test settings were chosen: one with vertebral body levels C1/2 included and one with them excluded. RESULTS The mean age of the patients was 68 ± 14 years. 140 patients were female. The network showed a slightly higher diagnostic performance when excluding C1/2. In that setting, the network distinguished fractured from non-fractured vertebral bodies with a sensitivity of 75.8% and a specificity of 80.3%. Additionally, the network determined the stability of the vertebral bodies with a sensitivity of 88.4% and a specificity of 80.3%. The AUC was 87% and 91% for fracture detection and stability analysis, respectively. The sensitivity of our network in indicating the presence of at least one fracture / one unstable fracture within the whole spine reached 78.7% and 97.2%, respectively, when excluding C1/2. CONCLUSION The developed neural network can automatically detect vertebral body fractures and evaluate their stability concurrently with a high diagnostic performance.
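The per-vertebra sensitivity and specificity reported above derive from simple confusion-matrix counts. A minimal sketch; the counts below are hypothetical, not the study's:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # fraction of true fractures detected
    specificity = tn / (tn + fp)          # fraction of intact vertebrae cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 80 true positives, 10 false positives,
# 90 true negatives, 20 false negatives
sens, spec, acc = diagnostic_metrics(80, 10, 90, 20)
```

Reporting sensitivity and specificity separately matters here because the fractured and non-fractured classes are heavily imbalanced (463 vs. 1883 vertebrae).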
Affiliation(s)
- Constanze Polzer: Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Eren Yilmaz: Section Biomedical Imaging, Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany; Department of Computer Science, Ostfalia University of Applied Sciences, Wolfenbüttel, Germany
- Carsten Meyer: Department of Computer Science, Ostfalia University of Applied Sciences, Wolfenbüttel, Germany; Department of Computer Science, Faculty of Engineering, Kiel University, Kiel, Germany
- Hyungseok Jang: Department of Radiology, University of California San Diego, San Diego, USA
- Olav Jansen: Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Claus-Christian Glüer: Section Biomedical Imaging, Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Sam Sedaghat: Department of Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany

24
Bhatnagar A, Kekatpure AL, Velagala VR, Kekatpure A. A Review on the Use of Artificial Intelligence in Fracture Detection. Cureus 2024; 16:e58364. [PMID: 38756254 PMCID: PMC11097122 DOI: 10.7759/cureus.58364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2023] [Accepted: 04/16/2024] [Indexed: 05/18/2024] Open
Abstract
Artificial intelligence (AI) simulates intelligent behavior using computers with minimum human intervention. Recent advances in AI, especially deep learning, have made significant progress in perceptual operations, enabling computers to convey and comprehend complicated input more accurately. Fractures affect people of all ages in all regions of the world. One of the most prevalent causes of inaccurate diagnosis and medical lawsuits is overlooked fractures on radiographs taken in the emergency room, with reported miss rates of 2% to 9%. The workforce will soon be under a great deal of strain due to the growing demand for fracture detection on multiple imaging modalities. A dearth of radiologists, driven by hiring delays and a significant percentage of radiologists close to retirement, worsens this rise in demand. Additionally, the process of interpreting diagnostic images can sometimes be challenging and tedious. Integrating orthopedic radio-diagnosis with AI presents a promising solution to these problems. There has recently been a noticeable rise in the application of deep learning techniques, namely convolutional neural networks (CNNs), in medical imaging. In the field of orthopedic trauma, CNNs have been documented to operate at the proficiency of expert orthopedic surgeons and radiologists in the identification and categorization of fractures. CNNs can analyze vast amounts of data at a rate that surpasses that of human observation. In this review, we discuss the use of deep learning methods in fracture detection and classification, the integration of AI with various imaging modalities, and the benefits and disadvantages of integrating AI with radio-diagnostics.
Affiliation(s)
- Aayushi Bhatnagar: Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Aditya L Kekatpure: Orthopedic Surgery, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Vivek R Velagala: Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Aashay Kekatpure: Orthopedic Surgery, Narendra Kumar Prasadrao Salve Institute of Medical Sciences and Research, Nagpur, IND

25
Alzubaidi L, Salhi A, A.Fadhel M, Bai J, Hollman F, Italia K, Pareyon R, Albahri AS, Ouyang C, Santamaría J, Cutbush K, Gupta A, Abbosh A, Gu Y. Trustworthy deep learning framework for the detection of abnormalities in X-ray shoulder images. PLoS One 2024; 19:e0299545. [PMID: 38466693 PMCID: PMC10927121 DOI: 10.1371/journal.pone.0299545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2023] [Accepted: 02/12/2024] [Indexed: 03/13/2024] Open
Abstract
Musculoskeletal conditions affect an estimated 1.7 billion people worldwide, causing intense pain and disability. These conditions lead to 30 million emergency room visits yearly, and the numbers are only increasing. However, diagnosing musculoskeletal issues can be challenging, especially in emergencies where quick decisions are necessary. Deep learning (DL) has shown promise in various medical applications. However, previous methods for detecting shoulder abnormalities on X-ray images suffered from poor performance and a lack of transparency, owing to limited training data and inadequate feature representation. This often resulted in overfitting, poor generalisation, and potential bias in decision-making. To address these issues, a new trustworthy DL framework has been proposed to detect shoulder abnormalities (such as fractures, deformities, and arthritis) using X-ray images. The framework consists of two parts: same-domain transfer learning (TL) to mitigate the ImageNet domain mismatch, and feature fusion to reduce error rates and improve trust in the final result. Same-domain TL involves training pre-trained models on a large number of labelled X-ray images from various body parts and fine-tuning them on the target dataset of shoulder X-ray images. Feature fusion combines the features extracted by seven DL models to train several ML classifiers. The proposed framework achieved an excellent accuracy rate of 99.2%, an F1 score of 99.2%, and a Cohen's kappa of 98.5%. Furthermore, the results were validated using three visualisation tools: gradient-weighted class activation mapping (Grad-CAM), activation visualisation, and local interpretable model-agnostic explanations (LIME). The proposed framework outperformed previous DL methods and three orthopaedic surgeons invited to classify the test set, who obtained an average accuracy of 79.1%. The proposed framework has proven effective and robust, improving generalisation and increasing trust in the final results.
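The accuracy and F1 score quoted in this abstract follow directly from binary predictions; F1 is the harmonic mean of precision and recall. A minimal sketch with hypothetical labels:

```python
def f1_score(y_true, y_pred):
    """F1 score for binary labels: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical abnormal (1) / normal (0) labels vs. classifier output
f1 = f1_score([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

Unlike plain accuracy, F1 penalises both missed abnormalities (false negatives) and false alarms (false positives), which is why both metrics are reported.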
Affiliation(s)
- Laith Alzubaidi: School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia; Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia; Centre for Data Science, Queensland University of Technology, Brisbane, QLD, Australia; Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Asma Salhi: Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia; Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Jinshuai Bai: School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia; Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Freek Hollman: Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Kristine Italia: Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Roberto Pareyon: Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- A. S. Albahri: Technical College, Imam Ja’afar Al-Sadiq University, Baghdad, Iraq
- Chun Ouyang: School of Information Systems, Queensland University of Technology, Brisbane, QLD, Australia
- Jose Santamaría: Department of Computer Science, University of Jaén, Jaén, Spain
- Kenneth Cutbush: Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia; School of Medicine, The University of Queensland, Brisbane, QLD, Australia
- Ashish Gupta: Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia; Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia; Greenslopes Private Hospital, Brisbane, QLD, Australia
- Amin Abbosh: School of Information Technology and Electrical Engineering, Brisbane, QLD, Australia
- Yuantong Gu: School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia; Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia

26
Xie Y, Li X, Chen F, Wen R, Jing Y, Liu C, Wang J. Artificial intelligence diagnostic model for multi-site fracture X-ray images of extremities based on deep convolutional neural networks. Quant Imaging Med Surg 2024; 14:1930-1943. [PMID: 38415122 PMCID: PMC10895109 DOI: 10.21037/qims-23-878] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Accepted: 11/24/2023] [Indexed: 02/29/2024]
Abstract
Background The rapid and accurate diagnosis of fractures is crucial for timely treatment of trauma patients. Deep learning, one of the most widely used forms of artificial intelligence (AI), is now commonly employed in medical imaging for fracture detection. This study aimed to construct a deep learning model using big data to recognize multiple-fracture X-ray images of extremity bones. Methods Radiographic imaging data of extremities were retrospectively collected from five hospitals between January 2017 and September 2020. In total, 25,635 patients and 26,098 images were included. After labeling the lesions, the data were randomly split, with 90% used as the training set to develop the fracture detection model and the remaining 10% used as the validation set to verify the model. The Faster Region-based Convolutional Neural Network (Faster R-CNN) algorithm was adopted to construct diagnostic models for detection. The Dice coefficient was used to evaluate the image segmentation accuracy. The performances of detection models were evaluated with sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Results The free-response receiver operating characteristic (FROC) curve value was 0.886 and 0.843 for the detection of single and multiple fractures, respectively. Additionally, the effective identification AUC for all parts was higher than 0.920. Notably, the AUC for wrist fractures reached 0.952. The average accuracy in detecting bone fracture regions in the extremities was 0.865. When analyzing single and multiple lesions at the patient level, the sensitivity was 0.957 for patients with multiple lesions and 0.852 for those with single lesions. In the segmentation task, the Dice coefficient reached 0.996 on the training set and 0.975 on the validation set.
Conclusions The Faster R-CNN algorithm exhibits excellent performance in simultaneously identifying fractures in the hands, feet, wrists, ankles, radius and ulna, and tibia and fibula on X-ray images. It demonstrates high accuracy, low false-negative rates, and controllable false-positive rates. It can serve as a valuable screening tool.
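The Dice coefficient used above to score segmentation overlap is twice the intersection of two masks divided by the sum of their sizes. A minimal sketch for flat binary masks (toy masks, not study data):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat 0/1 sequences."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    return 2.0 * intersection / (sum(mask_a) + sum(mask_b))

# Toy 2x2 masks flattened to length-4 sequences: one pixel overlaps
score = dice([1, 1, 0, 0], [1, 0, 1, 0])
```

A Dice of 1.0 means the predicted and reference fracture regions coincide exactly; 0.975 on validation indicates near-complete overlap.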
Affiliation(s)
- Yanling Xie: Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Xiaoming Li: Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Fengxi Chen: Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Ru Wen: Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Yang Jing: Huiying Medical Technology Co., Ltd., Beijing, China
- Chen Liu: Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, China
- Jian Wang: Department of Radiology, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing, China

27
Russe MF, Rebmann P, Tran PH, Kellner E, Reisert M, Bamberg F, Kotter E, Kim S. AI-based X-ray fracture analysis of the distal radius: accuracy between representative classification, detection and segmentation deep learning models for clinical practice. BMJ Open 2024; 14:e076954. [PMID: 38262641 PMCID: PMC10823998 DOI: 10.1136/bmjopen-2023-076954] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Accepted: 12/21/2023] [Indexed: 01/25/2024] Open
Abstract
OBJECTIVES To aid in selecting the optimal artificial intelligence (AI) solution for clinical application, we directly compared performances of selected representative custom-trained or commercial classification, detection and segmentation models for fracture detection on musculoskeletal radiographs of the distal radius by aligning their outputs. DESIGN AND SETTING This single-centre retrospective study was conducted on a random subset of emergency department radiographs from 2008 to 2018 of the distal radius in Germany. MATERIALS AND METHODS An image set was created to be compatible with training and testing classification and segmentation models by annotating examinations for fractures and overlaying fracture masks, if applicable. Representative classification and segmentation models were trained on 80% of the data. After output binarisation, their derived fracture detection performances as well as that of a standard commercially available solution were compared on the remaining X-rays (20%) using mainly accuracy and area under the receiver operating characteristic (AUROC). RESULTS A total of 2856 examinations with 712 (24.9%) fractures were included in the analysis. Accuracies reached up to 0.97 for the classification model, 0.94 for the segmentation model and 0.95 for BoneView. Cohen's kappa was at least 0.80 in pairwise comparisons, while Fleiss' kappa was 0.83 for all models. Fracture predictions were visualised with all three methods at different levels of detail, ranking from downsampled image region for classification over bounding box for detection to single pixel-level delineation for segmentation. CONCLUSIONS All three investigated approaches reached high performances for detection of distal radius fractures with simple preprocessing and postprocessing protocols on the custom-trained models. 
Despite their underlying structural differences, selection of one's fracture analysis AI tool in the frame of this study reduces to the desired flavour of automation: automated classification, AI-assisted manual fracture reading or minimised false negatives.
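The pairwise Cohen's kappa of at least 0.80 reported above measures agreement between two raters (here, two models' binarised fracture calls) beyond what chance alone would produce. A minimal sketch with hypothetical ratings:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical ratings (lists of equal length)."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # observed agreement: fraction of items both raters label identically
    p_observed = sum(1 for a, b in zip(ratings_a, ratings_b) if a == b) / n
    # chance agreement: product of each rater's marginal frequencies per category
    p_expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical fracture (1) / no-fracture (0) calls from two models
kappa = cohens_kappa([1, 1, 0, 0], [1, 1, 0, 1])
```

Fleiss' kappa, also reported in the abstract, generalises this idea to more than two raters.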
Affiliation(s)
- Maximilian Frederik Russe: Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Philipp Rebmann: Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Phuong Hien Tran: Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Elias Kellner: Department of Medical Physics, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Marco Reisert: Department of Medical Physics, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Fabian Bamberg: Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Elmar Kotter: Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany
- Suam Kim: Department of Diagnostic and Interventional Radiology, Universitätsklinikum Freiburg Medizinische Universitätsklinik, Freiburg im Breisgau, Germany

28
Lee CL, Liu WJ, Tsai SF. Effects of AST-120 on mortality in patients with chronic kidney disease modeled by artificial intelligence or traditional statistical analysis. Sci Rep 2024; 14:738. [PMID: 38184721 PMCID: PMC10771424 DOI: 10.1038/s41598-024-51498-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2023] [Accepted: 01/05/2024] [Indexed: 01/08/2024] Open
Abstract
Chronic kidney disease (CKD) imposes a substantial burden, and patient prognosis remains grim. There is no consensus on the impact of AST-120 on the survival of CKD patients. This study aims to investigate the effects of AST-120 use on the survival of CKD patients and to explore the utility of artificial intelligence models for decision-making. We conducted a retrospective analysis of CKD patients receiving care in the pre-end-stage renal disease (ESRD) program at Taichung Veterans General Hospital from 2000 to 2019. We employed Cox regression models to evaluate the relationship between AST-120 use and patient survival, both before and after propensity score matching. We then used Deep Neural Network (DNN) and Extreme Gradient Boosting (XGBoost) models to assess their performance in predicting AST-120's impact on patient survival. Among the 2584 patients in our cohort, 2199 did not use AST-120 and 385 received AST-120. AST-120 users exhibited significantly lower mortality than non-users (13.51% vs. 37.88%, p < 0.0001) and a lower prevalence of ESRD (44.16% vs. 53.17%, p = 0.0005). After propensity score matching at 1:1 and 1:2, no significant between-group differences remained except for dialysis and all-cause mortality; AST-120 users had significantly lower all-cause mortality (p < 0.0001), with a hazard ratio (HR) of 0.395 (95% CI = 0.295-0.522), a difference that remained statistically significant after matching. In terms of model performance, the XGBoost model demonstrated the highest accuracy (0.72), specificity (0.90), and positive predictive value (0.48), while the logistic regression model showed the highest sensitivity (0.63) and negative predictive value (0.84). The area under the curve (AUC) values for logistic regression, DNN, and XGBoost were 0.73, 0.73, and 0.69, respectively, indicating similar predictive capabilities for mortality.
In this cohort of CKD patients, the use of AST-120 is significantly associated with reduced mortality. However, the performance of artificial intelligence models in predicting the impact of AST-120 is not superior to statistical analysis using the current architecture and algorithm.
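The abstract's head-to-head of a statistical model against boosted trees can be sketched in a few lines. The cohort below is synthetic, and scikit-learn's `GradientBoostingClassifier` stands in for XGBoost (both are gradient-boosted tree ensembles), so this reproduces neither the study's data nor its pipeline.

```python
# Sketch: fit a logistic regression and a boosted-tree model on the same
# (synthetic, hypothetical) cohort and compare AUC for mortality prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Cohort size mirrors the study (2584 patients); features are synthetic.
X, y = make_classification(n_samples=2584, n_features=20,
                           weights=[0.66, 0.34], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "boosted_trees": GradientBoostingClassifier(random_state=0),
}
aucs = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        for name, m in models.items()}
print(aucs)
```

On real tabular clinical data the two AUCs are often this close, which is the abstract's point: boosting does not automatically beat a well-specified statistical model.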
Grants
- TCVGH-1093605D, TCVGH-1097316C, TCVGH-1097327D, TCVGH-1103502C, TCVGH-1107305D, TCVGH-1117308C, TCVGH-1117305D, TCVGH-1113602C, TCVGH-1113602D and TCVGH-1103601D Taichung Veterans General Hospital
Affiliation(s)
- Chia-Lin Lee: Division of Endocrinology and Metabolism, Department of Internal Medicine, Taichung Veterans General Hospital, Taichung, Taiwan; Intelligent Data Mining Laboratory, Department of Medical Research, Taichung Veterans General Hospital, Taichung, Taiwan; Department of Public Health, College of Public Health, China Medical University, Taichung, Taiwan; School of Medicine, National Yang-Ming University, Taipei, Taiwan; Department of Post-Baccalaureate Medicine, College of Medicine, National Chung Hsing University, Taichung, Taiwan
- Wei-Ju Liu: Intelligent Data Mining Laboratory, Department of Medical Research, Taichung Veterans General Hospital, Taichung, Taiwan
- Shang-Feng Tsai: School of Medicine, National Yang-Ming University, Taipei, Taiwan; Department of Post-Baccalaureate Medicine, College of Medicine, National Chung Hsing University, Taichung, Taiwan; Division of Nephrology, Taichung Veterans General Hospital, 160, Sec. 3, Taiwan Boulevard, Taichung 407, Taiwan; Department of Life Science, Tunghai University, Taichung, Taiwan

29
Pham TD, Holmes SB, Coulthard P. A review on artificial intelligence for the diagnosis of fractures in facial trauma imaging. Front Artif Intell 2024; 6:1278529. PMID: 38249794; PMCID: PMC10797131; DOI: 10.3389/frai.2023.1278529.
Abstract
Patients with facial trauma may suffer injuries such as broken bones, bleeding, swelling, bruising, lacerations, burns, and facial deformity. Common causes of facial-bone fractures include road accidents, violence, and sports injuries. Surgery is needed when radiological findings indicate that the patient would otherwise lose normal function or be left with facial deformity. Although image reading by radiologists is useful for evaluating suspected facial fractures, human-based diagnostics faces certain challenges. Artificial intelligence (AI) is making a quantum leap in radiology, producing significant improvements in reporting and workflows. Here, an updated literature review is presented on the impact of AI in facial trauma, with special reference to fracture detection in radiology. The purpose is to gain insight into current developments and the demand for future research in facial trauma. This review also discusses limitations to be overcome and important open questions that must be addressed to make AI applications to trauma more effective and realistic in practical settings. The publications selected for review were chosen on the basis of their clinical significance, journal metrics, and journal indexing.
Affiliation(s)
- Tuan D. Pham: Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom

30
Tian CW, Chen XX, Zhu HY, Qin SB, Shi L, Rui YF. [Application and prospect of machine learning in orthopaedic trauma]. Zhongguo Xiu Fu Chong Jian Wai Ke Za Zhi (Chinese Journal of Reparative and Reconstructive Surgery) 2023; 37:1562-1568. Chinese. PMID: 38130202; PMCID: PMC10739668; DOI: 10.7507/1002-1892.202308064.
Abstract
OBJECTIVE To review the current applications of machine learning in orthopaedic trauma and anticipate its future role in clinical practice. METHODS A comprehensive literature review was conducted to assess the status of machine learning algorithms in orthopaedic trauma research, both nationally and internationally. RESULTS The rapid advancement of computer data processing and the growing convergence of medicine and industry have led to the widespread utilization of artificial intelligence in healthcare. Currently, machine learning plays a significant role in orthopaedic trauma, demonstrating high performance and accuracy in areas including fracture image recognition, diagnostic stratification, clinical decision-making, evaluation, perioperative considerations, and prognostic risk prediction. Nevertheless, challenges persist in the development and clinical implementation of machine learning, including limited database samples, difficulties in model interpretation, and variation in generalizability and individualization. CONCLUSION The expansion of clinical sample sizes and enhancements in algorithm performance hold significant promise for the extensive application of machine learning in supporting orthopaedic trauma diagnosis, guiding decision-making, devising individualized medical strategies, and optimizing the allocation of clinical resources.
Affiliation(s)
- Tian Chuwei: Department of Orthopedics, Zhongda Hospital Affiliated to Southeast University, Nanjing, Jiangsu 210009, P. R. China; School of Medicine, Southeast University, Nanjing, Jiangsu 210009, P. R. China
- Chen Xiangxu: Department of Orthopedics, Zhongda Hospital Affiliated to Southeast University, Nanjing, Jiangsu 210009, P. R. China
- Zhu Huanyi: Department of Orthopedics, Zhongda Hospital Affiliated to Southeast University, Nanjing, Jiangsu 210009, P. R. China; School of Medicine, Southeast University, Nanjing, Jiangsu 210009, P. R. China
- Qin Shengbo: Department of Orthopedics, Zhongda Hospital Affiliated to Southeast University, Nanjing, Jiangsu 210009, P. R. China
- Shi Liu: Department of Orthopedics, Zhongda Hospital Affiliated to Southeast University, Nanjing, Jiangsu 210009, P. R. China; School of Medicine, Southeast University, Nanjing, Jiangsu 210009, P. R. China; Trauma Center, Zhongda Hospital Affiliated to Southeast University, Nanjing, Jiangsu 210009, P. R. China
- Rui Yunfeng: Department of Orthopedics, Zhongda Hospital Affiliated to Southeast University, Nanjing, Jiangsu 210009, P. R. China; School of Medicine, Southeast University, Nanjing, Jiangsu 210009, P. R. China; Trauma Center, Zhongda Hospital Affiliated to Southeast University, Nanjing, Jiangsu 210009, P. R. China

31
Chatterjee S, Bhattacharya M, Pal S, Lee SS, Chakraborty C. ChatGPT and large language models in orthopedics: from education and surgery to research. J Exp Orthop 2023; 10:128. PMID: 38038796; PMCID: PMC10692045; DOI: 10.1186/s40634-023-00700-1.
Abstract
ChatGPT has rapidly gained popularity since its release in November 2022. Large language models (LLMs) and ChatGPT have now been applied across medical science, including in cardiology, nephrology, orthopedics, ophthalmology, gastroenterology, and radiology, and researchers are exploring their potential for clinicians and surgeons in every domain. This study discusses how ChatGPT can help orthopedic clinicians and surgeons perform various medical tasks. LLMs and ChatGPT can serve the patient community by providing suggestions and diagnostic guidelines. The use of LLMs and ChatGPT to enhance and expand the field of orthopedics, including orthopedic education, surgery, and research, is explored. Present LLMs have several shortcomings, which are discussed herein; however, next-generation and domain-specific LLMs are expected to be more powerful and to transform patients' quality of life.
Affiliation(s)
- Srijan Chatterjee: Institute for Skeletal Aging & Orthopaedic Surgery, Hallym University-Chuncheon Sacred Heart Hospital, Chuncheon-Si 24252, Gangwon-Do, Republic of Korea
- Manojit Bhattacharya: Department of Zoology, Fakir Mohan University, Vyasa Vihar, Balasore 756020, Odisha, India
- Soumen Pal: School of Mechanical Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Sang-Soo Lee: Institute for Skeletal Aging & Orthopaedic Surgery, Hallym University-Chuncheon Sacred Heart Hospital, Chuncheon-Si 24252, Gangwon-Do, Republic of Korea
- Chiranjib Chakraborty: Department of Biotechnology, School of Life Science and Biotechnology, Adamas University, Kolkata 700126, West Bengal, India

32
Keller G, Rachunek K, Springer F, Kraus M. Evaluation of a newly designed deep learning-based algorithm for automated assessment of scapholunate distance in wrist radiography as a surrogate parameter for scapholunate ligament rupture and the correlation with arthroscopy. Radiol Med 2023; 128:1535-1541. PMID: 37726593; PMCID: PMC10700195; DOI: 10.1007/s11547-023-01720-8.
Abstract
PURPOSE Undiagnosed or mistreated scapholunate ligament (SL) tears are a frequent cause of degenerative wrist arthritis. A newly developed deep learning (DL)-based automated assessment of the SL distance on radiographs may support clinicians in initial image interpretation. MATERIALS AND METHODS A pre-trained DL algorithm was fine-tuned on static and dynamic dorsopalmar wrist radiographs (training data set, n = 201) for automated assessment of the SL distance. The DL algorithm was then evaluated (evaluation data set, n = 364 patients with n = 1604 radiographs) and its results correlated with those of an experienced human reader and with arthroscopic findings. RESULTS The evaluation data set comprised arthroscopically diagnosed SL insufficiency according to Geissler's stages 0-4 (56.5%, 2.5%, 5.5%, 7.5%, 28.0%). Diagnostic accuracy of the DL algorithm on dorsopalmar radiographs regarding SL integrity was close to that of the human reader (e.g. differentiation of Geissler's stages ≤ 2 versus > 2 with a sensitivity of 74% and a specificity of 78%, compared to 77% and 80%), with a correlation coefficient of 0.81 (P < 0.01). CONCLUSION A DL algorithm like this might become a valuable tool supporting clinicians' initial decision-making on radiographs regarding SL integrity and the consequent triage for further patient management.
Affiliation(s)
- Gabriel Keller: Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, Eberhard Karls University Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany; Department of Diagnostic Radiology, BG Trauma Center Tübingen, Eberhard Karls University Tübingen, Tübingen, Germany
- Katarzyna Rachunek: Department of Hand, Plastic, Reconstructive and Burn Surgery, BG Trauma Center Tübingen, Eberhard Karls University of Tübingen, 72076 Tübingen, Germany
- Fabian Springer: Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, Eberhard Karls University Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany; Department of Diagnostic Radiology, BG Trauma Center Tübingen, Eberhard Karls University Tübingen, Tübingen, Germany
- Mathias Kraus: Institute of Information Systems, FAU Erlangen-Nuremberg, Nuremberg, Germany

33
Riazi Esfahani P, Guirgus M, Maalouf M, Mazboudi P, Reddy AJ, Sarsour RO, Hassan SS. Development of a Machine Learning-Based Model for Accurate Detection and Classification of Cervical Spine Fractures Using CT Imaging. Cureus 2023; 15:e47328. PMID: 38021776; PMCID: PMC10657145; DOI: 10.7759/cureus.47328.
Abstract
Cervical spine fractures represent a significant healthcare challenge, necessitating accurate detection for appropriate management and improved patient outcomes. This study aims to develop a machine learning-based model utilizing a computed tomography (CT) image dataset to detect and classify cervical spine fractures. Leveraging a large dataset of 4,050 CT images obtained from the Radiological Society of North America (RSNA) Cervical Spine Fracture dataset, we evaluate the potential of machine learning and deep learning algorithms in achieving accurate and reliable cervical spine fracture detection. The model demonstrates outstanding performance, achieving an average precision of 1 and 100% precision, recall, sensitivity, specificity, and accuracy values. These exceptional results highlight the potential of machine learning algorithms to enhance clinical decision-making and facilitate prompt treatment initiation for cervical spine fractures. However, further research and validation efforts are warranted to assess the model's generalizability across diverse populations and real-world clinical settings, ultimately contributing to improved patient outcomes in cervical spine fracture cases.
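The reported figures (100% precision, recall, sensitivity, specificity, and accuracy) all derive from the same confusion matrix. A minimal sketch with illustrative counts, not the study's data:

```python
# How the reported metrics relate to a confusion matrix:
# precision, recall (sensitivity), specificity, and accuracy
# from true/false positive and negative counts.
def classification_metrics(tp, fp, tn, fn):
    return {
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),          # recall == sensitivity
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# A perfect classifier (no false positives or negatives): every metric is 1.0,
# which is what the abstract reports. Counts here are illustrative.
perfect = classification_metrics(tp=50, fp=0, tn=50, fn=0)
print(perfect)  # all values 1.0
```

Uniformly perfect scores on a held-out set are rare in practice, which is why the abstract's own call for external validation matters.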
Affiliation(s)
- Monica Guirgus: Medicine, California University of Science and Medicine, Colton, USA
- Maya Maalouf: Medicine, California University of Science and Medicine, Colton, USA
- Pasha Mazboudi: Medicine, California University of Science and Medicine, Colton, USA
- Akshay J Reddy: Medicine, California University of Science and Medicine, Colton, USA
- Reem O Sarsour: Medicine, California University of Science and Medicine, Colton, USA
- Sherif S Hassan: Anatomy, Faculty of Medicine, Cairo University, Cairo, EGY; Medical Education, Anatomy, & Neuroanatomy, California University of Science and Medicine, Colton, USA

34
Achar S, Hwang D, Finkenstaedt T, Malis V, Bae WC. Deep-Learning-Aided Evaluation of Spondylolysis Imaged with Ultrashort Echo Time Magnetic Resonance Imaging. Sensors (Basel) 2023; 23:8001. PMID: 37766055; PMCID: PMC10538057; DOI: 10.3390/s23188001.
Abstract
Isthmic spondylolysis results in fracture of pars interarticularis of the lumbar spine, found in as many as half of adolescent athletes with persistent low back pain. While computed tomography (CT) is the gold standard for the diagnosis of spondylolysis, the use of ionizing radiation near reproductive organs in young subjects is undesirable. While magnetic resonance imaging (MRI) is preferable, it has lowered sensitivity for detecting the condition. Recently, it has been shown that ultrashort echo time (UTE) MRI can provide markedly improved bone contrast compared to conventional MRI. To take UTE MRI further, we developed supervised deep learning tools to generate (1) CT-like images and (2) saliency maps of fracture probability from UTE MRI, using ex vivo preparation of cadaveric spines. We further compared quantitative metrics of the contrast-to-noise ratio (CNR), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) between UTE MRI (inverted to make the appearance similar to CT) and CT and between CT-like images and CT. Qualitative results demonstrated the feasibility of successfully generating CT-like images from UTE MRI to provide easier interpretability for bone fractures thanks to improved image contrast and CNR. Quantitatively, the mean CNR of bone against defect-filled tissue was 35, 97, and 146 for UTE MRI, CT-like, and CT images, respectively, being significantly higher for CT-like than UTE MRI images. For the image similarity metrics using the CT image as the reference, CT-like images provided a significantly lower mean MSE (0.038 vs. 0.0528), higher mean PSNR (28.6 vs. 16.5), and higher SSIM (0.73 vs. 0.68) compared to UTE MRI images. Additionally, the saliency maps enabled quick detection of the location with probable pars fracture by providing visual cues to the reader. 
This proof-of-concept study is limited to the data from ex vivo samples, and additional work in human subjects with spondylolysis would be necessary to refine the models for clinical use. Nonetheless, this study shows that the utilization of UTE MRI and deep learning tools could be highly useful for the evaluation of isthmic spondylolysis.
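The similarity metrics compared in this study are standard image-quality measures. A minimal NumPy sketch of MSE and PSNR on synthetic images, with random arrays standing in for the CT reference and the two comparison images (SSIM, which needs a windowed computation, is omitted):

```python
# MSE and PSNR as used in the abstract, computed with NumPy.
# A lightly-noised copy of a reference image stands in for the "CT-like"
# image and a heavily-noised copy for the inverted UTE MRI; synthetic data.
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def psnr(ref, img, data_range=1.0):
    return float(10 * np.log10(data_range ** 2 / mse(ref, img)))

rng = np.random.default_rng(0)
ct = rng.random((64, 64))                                     # reference "CT"
ct_like = np.clip(ct + rng.normal(0, 0.05, ct.shape), 0, 1)   # closer to CT
ute = np.clip(ct + rng.normal(0, 0.15, ct.shape), 0, 1)       # noisier

print(mse(ct, ct_like) < mse(ct, ute))    # True: CT-like has lower MSE
print(psnr(ct, ct_like) > psnr(ct, ute))  # True: and higher PSNR
```

This mirrors the abstract's ordering: images closer to the CT reference have lower MSE and higher PSNR, exactly the direction reported for the CT-like images versus UTE MRI.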
Affiliation(s)
- Suraj Achar: Department of Family Medicine, University of California-San Diego, La Jolla, CA 92093, USA
- Dosik Hwang: Department of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Republic of Korea; Center for Healthcare Robotics, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea; Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul 03722, Republic of Korea; Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul 03722, Republic of Korea
- Tim Finkenstaedt: Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University Zurich, 8091 Zurich, Switzerland
- Vadim Malis: Department of Radiology, University of California-San Diego, La Jolla, CA 92093, USA
- Won C. Bae: Department of Radiology, University of California-San Diego, La Jolla, CA 92093, USA; Department of Radiology, VA San Diego Healthcare System, San Diego, CA 92161, USA

35
Shen L, Gao C, Hu S, Kang D, Zhang Z, Xia D, Xu Y, Xiang S, Zhu Q, Xu G, Tang F, Yue H, Yu W, Zhang Z. Using Artificial Intelligence to Diagnose Osteoporotic Vertebral Fractures on Plain Radiographs. J Bone Miner Res 2023; 38:1278-1287. PMID: 37449775; DOI: 10.1002/jbmr.4879.
Abstract
Osteoporotic vertebral fracture (OVF) is a risk factor for morbidity and mortality in the elderly population, and accurate diagnosis is important for improving treatment outcomes. OVF diagnosis suffers from high misdiagnosis and underdiagnosis rates, as well as a high workload. Deep learning methods applied to plain radiographs, a simple, fast, and inexpensive examination, might solve this problem. We developed and validated a deep-learning-based vertebral fracture diagnostic system using area loss ratio, which assisted a multitasking network in performing skeletal position detection and segmentation and in identifying and grading vertebral fractures. As the training set and internal validation set, we used 11,397 plain radiographs from six community centers in Shanghai. For the external validation set, 1276 participants were recruited from the outpatient clinic of the Shanghai Sixth People's Hospital (1276 plain radiographs). Radiologists reviewed all X-ray images and used the Genant semiquantitative tool for fracture diagnosis and grading as the ground truth data. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were used to evaluate diagnostic performance. The AI_OVF_SH system demonstrated high accuracy and computational speed in skeletal position detection and segmentation. In the internal validation set, the accuracy, sensitivity, and specificity of the AI_OVF_SH model were 97.41%, 84.08%, and 97.25%, respectively, for all fractures. The sensitivity and specificity for moderate fractures were 88.55% and 99.74%, respectively, and for severe fractures, 92.30% and 99.92%. In the external validation set, the accuracy, sensitivity, and specificity for all fractures were 96.85%, 83.35%, and 94.70%, respectively. For moderate fractures, the sensitivity and specificity were 85.61% and 99.85%, respectively, and 93.46% and 99.92% for severe fractures.
Therefore, the AI_OVF_SH system is an efficient tool to assist radiologists and clinicians in improving the diagnosis of vertebral fractures. © 2023 The Authors. Journal of Bone and Mineral Research published by Wiley Periodicals LLC on behalf of the American Society for Bone and Mineral Research (ASBMR).
Affiliation(s)
- Li Shen: Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China; Clinical Research Center, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Chao Gao: Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shundong Hu: Department of Radiology, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Dan Kang: Shanghai Jiyinghui Intelligent Technology Co, Shanghai, China
- Zhaogang Zhang: Shanghai Jiyinghui Intelligent Technology Co, Shanghai, China
- Dongdong Xia: Department of Orthopaedics, Ning Bo First Hospital, Zhejiang, China
- Yiren Xu: Department of Radiology, Ning Bo First Hospital, Zhejiang, China
- Shoukui Xiang: Department of Endocrinology and Metabolism, The First People's Hospital of Changzhou, Changzhou, China
- Qiong Zhu: Kangjian Community Health Service Center, Shanghai, China
- GeWen Xu: Kangjian Community Health Service Center, Shanghai, China
- Feng Tang: Jinhui Community Health Service Center, Shanghai, China
- Hua Yue: Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wei Yu: Department of Radiology, Peking Union Medical College Hospital, Beijing, China
- Zhenlin Zhang: Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China; Clinical Research Center, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China

36
Tang S, Jing C, Jiang Y, Yang K, Huang Z, Wu H, Cui C, Shi S, Ye X, Tian H, Song D, Xu J, Dong F. The effect of image resolution on convolutional neural networks in breast ultrasound. Heliyon 2023; 9:e19253. PMID: 37664701; PMCID: PMC10469557; DOI: 10.1016/j.heliyon.2023.e19253.
Abstract
PURPOSE The objective of this research was to investigate the efficacy of two convolutional neural network (CNN) models, MobileNet and DenseNet121, across input image resolutions ranging from 64×64 to 512×512 pixels for diagnosing breast cancer. MATERIALS AND METHODS In this retrospective multicenter study, two-dimensional ultrasound breast images were collected at two hospitals between June 2015 and November 2020, and the diagnostic performance of MobileNet and DenseNet121 was compared at the different resolutions. RESULTS MobileNet achieved its best breast cancer diagnostic performance at a 320×320-pixel resolution and DenseNet121 at a 448×448-pixel resolution. CONCLUSION Our study reveals a significant relationship between image resolution and breast cancer diagnostic accuracy. The comparison of MobileNet and DenseNet121 highlights that lightweight neural networks (LW-CNNs) can match or even slightly exceed the performance of heavyweight neural network models (HW-CNNs) on ultrasound images, while requiring less prediction time per image.
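Why input resolution matters so much for CNN cost can be seen from a back-of-the-envelope count: activation maps, and hence memory and compute, grow quadratically with the input side length. The single-layer shape below is illustrative, not MobileNet's or DenseNet121's actual configuration.

```python
# Back-of-the-envelope: number of activation elements after one stride-2
# convolution, at the resolutions discussed in the abstract. Compute and
# memory scale with this count, i.e. quadratically in the input side length.
def conv_output_elems(resolution, channels=32, stride=2):
    side = resolution // stride     # spatial side after one stride-2 conv
    return side * side * channels   # total elements in the activation map

for rez in (64, 224, 320, 448, 512):
    print(rez, conv_output_elems(rez))
```

Doubling the input side quadruples the activation count, which is why a lightweight network can afford the 320-448 pixel range where these models peaked while keeping per-image prediction time low.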
Affiliation(s)
- Shuzhen Tang: Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China
- Chen Jing: Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China; Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Yitao Jiang: Research and Development Department, Illuminate, LLC, Shenzhen 518000, Guangdong, China
- Keen Yang: Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China; Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Zhibin Huang: Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China; Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Huaiyu Wu: Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China; Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Chen Cui: Research and Development Department, Illuminate, LLC, Shenzhen 518000, Guangdong, China
- Siyuan Shi: Research and Development Department, Illuminate, LLC, Shenzhen 518000, Guangdong, China
- Xiuqin Ye: Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China; Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Hongtian Tian: Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China; Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Di Song: Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China; Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Jinfeng Xu: Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China
- Fajin Dong: Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China

37
Gasmi I, Calinghen A, Parienti JJ, Belloy F, Fohlen A, Pelage JP. Comparison of diagnostic performance of a deep learning algorithm, emergency physicians, junior radiologists and senior radiologists in the detection of appendicular fractures in children. Pediatr Radiol 2023; 53:1675-1684. PMID: 36877239; DOI: 10.1007/s00247-023-05621-w.
Abstract
BACKGROUND Advances have been made in the use of artificial intelligence (AI) in diagnostic imaging, particularly in the detection of fractures on conventional radiographs, but studies of fracture detection in the pediatric population are few. Anatomical variation and age-related skeletal development in children require studies specific to this population, and failure to diagnose fractures early in children may have serious consequences for growth. OBJECTIVE To evaluate the performance of an AI algorithm based on deep neural networks in detecting traumatic appendicular fractures in a pediatric population, and to compare the sensitivity, specificity, positive predictive value and negative predictive value of different readers and of the AI algorithm. MATERIALS AND METHODS This retrospective study of 878 patients younger than 18 years of age evaluated conventional radiographs obtained after recent non-life-threatening trauma. All radiographs of the shoulder, arm, elbow, forearm, wrist, hand, leg, knee, ankle and foot were evaluated. The diagnostic performance of a consensus of radiology experts in pediatric imaging (reference standard) was compared with that of pediatric radiologists, emergency physicians, senior residents and junior residents, and the predictions made by the AI algorithm were compared with the annotations made by the different physicians. RESULTS The algorithm predicted 174 of the 182 fractures, corresponding to a sensitivity of 95.6%, a specificity of 91.64% and a negative predictive value of 98.76%. The AI predictions were close to those of pediatric radiologists (sensitivity 98.35%) and senior residents (95.05%) and above those of emergency physicians (81.87%) and junior residents (90.1%). The algorithm identified 3 (1.6%) fractures not initially seen by pediatric radiologists. CONCLUSION This study suggests that deep learning algorithms can be useful in improving the detection of fractures in children.
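The reported sensitivity follows directly from the stated counts (174 of 182 fractures predicted). A quick arithmetic check; the false-positive and true-negative counts needed to recompute specificity and NPV are not given in the abstract, so only sensitivity is rederived here.

```python
# Rederive the abstract's sensitivity from its stated counts:
# 174 fractures detected out of 182 present.
tp, total_fractures = 174, 182
fn = total_fractures - tp            # 8 missed fractures
sensitivity = tp / (tp + fn)
print(round(100 * sensitivity, 1))   # 95.6, matching the abstract
```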
Affiliation(s)
- Idriss Gasmi: Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
- Arvin Calinghen: Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
- Jean-Jacques Parienti: GRAM 2.0 EA2656 UNICAEN Normandie, University Hospital, Caen, France; Department of Clinical Research, Caen University Hospital, Caen, France
- Frederique Belloy: Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
- Audrey Fohlen: Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France; UNICAEN CEA CNRS ISTCT-CERVOxy, Normandie University, 14000 Caen, France
- Jean-Pierre Pelage: Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France; UNICAEN CEA CNRS ISTCT-CERVOxy, Normandie University, 14000 Caen, France

38
Kang W, Lin L, Sun S, Wu S. Three-round learning strategy based on 3D deep convolutional GANs for Alzheimer's disease staging. Sci Rep 2023; 13:5750. [PMID: 37029214 PMCID: PMC10081988 DOI: 10.1038/s41598-023-33055-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Accepted: 04/06/2023] [Indexed: 04/09/2023] Open
Abstract
Accurate diagnosis of Alzheimer's disease (AD) and its early stages is critical for prompt treatment or potential intervention to delay the disease's progression. Convolutional neural network (CNN) models have shown promising results in structural MRI (sMRI)-based diagnosis, but their performance, particularly for 3D models, is constrained by the lack of labeled training samples. To address the overfitting caused by the insufficient training sample size, we propose a three-round learning strategy that combines transfer learning with generative adversarial learning. In the first round, a 3D deep convolutional generative adversarial network (DCGAN) model was trained with all available sMRI data to learn common sMRI features through unsupervised generative adversarial learning. The second round involved transfer and fine-tuning: the pre-trained discriminator (D) of the DCGAN learned more specific features for the classification task between AD and cognitively normal (CN) subjects. In the final round, the weights learned in the AD versus CN classification task were transferred to the MCI diagnosis. By highlighting brain regions with high prediction weights using 3D Grad-CAM, we further enhanced the model's interpretability. The proposed model achieved accuracies of 92.8%, 78.1%, and 76.4% in the classifications of AD versus CN, AD versus MCI, and MCI versus CN, respectively. The experimental results show that our proposed model avoids overfitting brought on by a paucity of sMRI data and enables the early detection of AD.
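The second-round step described above, reusing the pretrained discriminator's weights to initialize a supervised classifier, can be sketched schematically. This is a minimal illustration with plain dictionaries standing in for framework state dicts; the layer names are hypothetical, not taken from the paper:

```python
def transfer_matching_weights(pretrained, target):
    """Copy weights for every layer name present in both models; layers
    unique to the target (e.g. a new classification head) keep their
    fresh initialization."""
    transferred = []
    for name, weights in pretrained.items():
        if name in target:
            target[name] = weights
            transferred.append(name)
    return transferred

# Hypothetical layer names: conv blocks shared with the discriminator,
# plus a new 'fc_head' for the AD-vs-CN classification task.
discriminator = {"conv1": [0.2, -0.1], "conv2": [0.5], "gan_logit": [1.0]}
classifier    = {"conv1": [0.0, 0.0], "conv2": [0.0], "fc_head": [0.0]}

moved = transfer_matching_weights(discriminator, classifier)
print(moved)                  # ['conv1', 'conv2']
print(classifier["fc_head"])  # [0.0]  (head keeps its fresh weights)
```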
Affiliation(s)
- Wenjie Kang: Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing, 100124, China
- Lan Lin: Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing, 100124, China
- Shen Sun: Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing, 100124, China
- Shuicai Wu: Beijing International Platform for Scientific and Technological Cooperation, Department of Biomedical Engineering, Faculty of Environment and Life Sciences, Beijing University of Technology, Beijing, 100124, China

39
Manuel Román-Belmonte J, De la Corte-Rodríguez H, Adriana Rodríguez-Damiani B, Carlos Rodríguez-Merchán E. Artificial Intelligence in Musculoskeletal Conditions. ARTIF INTELL 2023. [DOI: 10.5772/intechopen.110696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/05/2023]
Abstract
Artificial intelligence (AI) refers to computer capabilities that resemble human intelligence. AI implies the ability to learn and perform tasks that have not been specifically programmed. Moreover, it is an iterative process involving the ability of computerized systems to capture information, transform it into knowledge, and process it to produce adaptive changes in the environment. A large labeled database is needed to train the AI system and generate a robust algorithm. Otherwise, the algorithm cannot be applied in a generalized way. AI can facilitate the interpretation and acquisition of radiological images. In addition, it can facilitate the detection of trauma injuries and assist in orthopedic and rehabilitative processes. The applications of AI in musculoskeletal conditions are promising and are likely to have a significant impact on the future management of these patients.
40
Zhang S, Zhao Z, Qiu L, Liang D, Wang K, Xu J, Zhao J, Sun J. Automatic vertebral fracture and three-column injury diagnosis with fracture visualization by a multi-scale attention-guided network. Med Biol Eng Comput 2023:10.1007/s11517-023-02805-2. [PMID: 36848011 DOI: 10.1007/s11517-023-02805-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Accepted: 02/08/2023] [Indexed: 03/01/2023]
Abstract
Deep learning methods have the potential to improve the efficiency of diagnosing vertebral fractures with computed tomography (CT) images. Most existing intelligent vertebral fracture diagnosis methods only provide dichotomized results at the patient level. However, a fine-grained and more nuanced outcome is clinically needed. This study proposed a novel network, a multi-scale attention-guided network (MAGNet), to diagnose vertebral fractures and three-column injuries with fracture visualization at the vertebra level. By imposing attention constraints through a disease attention map (DAM), a fusion of multi-scale spatial attention maps, the MAGNet can extract highly task-relevant features and localize fractures. A total of 989 vertebrae were studied. After four-fold cross-validation, the area under the ROC curve (AUC) of our model for vertebral fracture dichotomized diagnosis and three-column injury diagnosis was 0.884 ± 0.015 and 0.920 ± 0.104, respectively. The overall performance of our model outperformed classical classification models, attention models, visual explanation methods, and attention-guided methods based on class activation mapping. Our work can promote the clinical application of deep learning to diagnose vertebral fractures and provides a way to visualize and improve diagnosis results with attention constraints.
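The disease attention map described above fuses spatial attention maps computed at several scales. A minimal NumPy sketch of that fusion step, assuming nearest-neighbour upsampling and simple averaging (the paper's exact fusion operator may differ):

```python
import numpy as np

def fuse_attention_maps(maps, out_size):
    """Upsample each square spatial attention map to out_size x out_size
    (nearest neighbour, via a Kronecker product with a block of ones)
    and average them into a single fused map."""
    fused = np.zeros((out_size, out_size))
    for m in maps:
        factor = out_size // m.shape[0]
        fused += np.kron(m, np.ones((factor, factor)))
    return fused / len(maps)

coarse = np.array([[1.0, 0.0], [0.0, 0.0]])   # 2x2 attention map
fine = np.zeros((4, 4)); fine[0, 0] = 1.0     # 4x4 attention map
dam = fuse_attention_maps([coarse, fine], out_size=4)
print(dam[0, 0])  # 1.0 (both scales attend here)
print(dam[0, 1])  # 0.5 (only the coarse map attends here)
```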
Affiliation(s)
- Shunan Zhang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Ziqi Zhao: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Lu Qiu: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Duan Liang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Kun Wang: Renji Hospital, Shanghai, 200127, China
- Jun Xu: Shanghai Sixth People's Hospital, Shanghai, 200233, China
- Jun Zhao: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jianqi Sun: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China

41
Erne F, Dehncke D, Herath SC, Springer F, Pfeifer N, Eggeling R, Küper MA. Deep Learning in the Detection of Rare Fractures - Development of a "Deep Learning Convolutional Network" Model for Detecting Acetabular Fractures. ZEITSCHRIFT FUR ORTHOPADIE UND UNFALLCHIRURGIE 2023; 161:42-50. [PMID: 34311473 DOI: 10.1055/a-1511-8595] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
BACKGROUND Fracture detection by artificial intelligence, and especially by deep convolutional neural networks (DCNN), is a topic of growing interest in current orthopaedic and radiological research. As training a DCNN usually requires a large amount of training data, mostly common fractures on conventional X-rays are used. Therefore, less common fractures such as acetabular fractures (AF) are underrepresented in the literature. The aim of this pilot study was to establish a DCNN for the detection of AF using computed tomography (CT) scans. METHODS Patients with an acetabular fracture were identified from the monocentric consecutive pelvic injury registry at the BG Trauma Center XXX from 01/2003 to 12/2019. All patients with unilateral AF and CT scans available in DICOM format were included for further processing. All datasets were automatically anonymised and digitally post-processed. The relevant regions of interest were extracted, and data augmentation (DA) was implemented to artificially increase the number of training samples. A DCNN based on Med3D was used for autonomous fracture detection, using global average pooling (GAP) to reduce overfitting. RESULTS From a total of 2,340 patients with a pelvic fracture, 654 patients suffered from an AF. After screening and post-processing of the datasets, a total of 159 datasets were enrolled for training of the algorithm. A random assignment into training datasets (80%) and test datasets (20%) was performed. The techniques of bone area extraction, DA and GAP increased the accuracy of fracture detection from 58.8% (native DCNN) up to 82.8% despite the low number of datasets. CONCLUSION The accuracy of fracture detection of our trained DCNN is comparable to published values despite the low number of training datasets. The techniques of bone extraction, DA and GAP are useful for increasing the detection rates of rare fractures by a DCNN.
Based on the used DCNN in combination with the described techniques from this pilot study, the possibility of an automatic fracture classification of AF is under investigation in a multicentre study.
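Global average pooling, used above to curb overfitting on a small dataset, replaces a flatten-plus-dense head with one mean activation per feature channel, drastically cutting the number of trainable parameters. A minimal NumPy sketch; the shapes are illustrative, not from the paper:

```python
import numpy as np

def global_average_pool(features):
    """Reduce a (channels, depth, height, width) 3D feature volume to
    one mean activation per channel."""
    return features.mean(axis=(1, 2, 3))

features = np.ones((64, 4, 8, 8))   # hypothetical 3D CNN output
pooled = global_average_pool(features)
print(pooled.shape)  # (64,)

# Weight count of a 2-class dense layer on each representation:
flatten_params = 64 * 4 * 8 * 8 * 2  # dense on the flattened volume
gap_params = 64 * 2                  # dense on the pooled vector
print(flatten_params, gap_params)    # 32768 128
```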
Affiliation(s)
- Felix Erne: Department of Trauma and Reconstructive Surgery, Occupational Accident Clinic Tübingen, Tübingen, Germany
- Daniel Dehncke: Department of Informatics, Methods in Medical Informatics, Eberhard Karls University of Tübingen, Faculty of Mathematics and Natural Sciences, Tübingen, Germany
- Steven C Herath: Department of Trauma and Reconstructive Surgery, Occupational Accident Clinic Tübingen, Tübingen, Germany
- Fabian Springer: Department of Diagnostic & Interventional Radiology, University Hospital Tübingen, Tübingen, Germany; Department of Radiology, Occupational Accident Clinic Tübingen, Tübingen, Germany
- Nico Pfeifer: Department of Informatics, Methods in Medical Informatics, Eberhard Karls University of Tübingen, Faculty of Mathematics and Natural Sciences, Tübingen, Germany
- Ralf Eggeling: Department of Informatics, Methods in Medical Informatics, Eberhard Karls University of Tübingen, Faculty of Mathematics and Natural Sciences, Tübingen, Germany
- Markus Alexander Küper: Department of Trauma and Reconstructive Surgery, Occupational Accident Clinic Tübingen, Tübingen, Germany

42
Detecting pediatric wrist fractures using deep-learning-based object detection. Pediatr Radiol 2023; 53:1125-1134. [PMID: 36650360 DOI: 10.1007/s00247-023-05588-8] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/04/2022] [Revised: 12/09/2022] [Accepted: 12/30/2022] [Indexed: 01/19/2023]
Abstract
BACKGROUND Missed fractures are the leading cause of diagnostic error in the emergency department, and fractures of pediatric bones, particularly subtle wrist fractures, can be misidentified because of their varying characteristics and responses to injury. OBJECTIVE This study evaluated the utility of an object detection deep learning framework for classifying pediatric wrist fractures as positive or negative for fracture, including subtle buckle fractures of the distal radius, and evaluated the performance of this algorithm as augmentation to trainee radiograph interpretation. MATERIALS AND METHODS We obtained 395 posteroanterior wrist radiographs from unique pediatric patients (65% positive for fracture, 30% positive for distal radial buckle fracture) and divided them into train (n = 229), tune (n = 41) and test (n = 125) sets. We trained a Faster R-CNN (region-based convolutional neural network) deep learning object-detection model. Two pediatric and two radiology residents evaluated radiographs initially without the artificial intelligence (AI) assistance, and then subsequently with access to the bounding box generated by the Faster R-CNN model. RESULTS The Faster R-CNN model demonstrated an area under the curve (AUC) of 0.92 (95% confidence interval [CI] 0.87-0.97), accuracy of 88% (n = 110/125; 95% CI 81-93%), sensitivity of 88% (n = 70/80; 95% CI 78-94%) and specificity of 89% (n = 40/45, 95% CI 76-96%) in identifying any fracture and identified 90% of buckle fractures (n = 35/39, 95% CI 76-97%). Access to Faster R-CNN model predictions significantly improved average resident accuracy from 80 to 93% in detecting any fracture (P < 0.001) and from 69 to 92% in detecting buckle fracture (P < 0.001). After accessing AI predictions, residents significantly outperformed AI in cases of disagreement (73% resident correct vs. 27% AI, P = 0.002). 
CONCLUSION An object-detection-based deep learning approach trained with only a few hundred examples identified radiographs containing pediatric wrist fractures with high accuracy. Access to model predictions significantly improved resident accuracy in diagnosing these fractures.
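The binomial confidence intervals quoted above can be reproduced with a standard Wilson score interval; for the reported accuracy of 110/125 it returns roughly the 81-93% range in the abstract. The authors do not state which CI method they used, so this is an assumption:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Accuracy of 88% from 110 correct calls out of 125 radiographs:
lo, hi = wilson_ci(110, 125)
print(round(lo * 100), round(hi * 100))  # 81 93
```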
43
Morris MX, Rajesh A, Asaad M, Hassan A, Saadoun R, Butler CE. Deep Learning Applications in Surgery: Current Uses and Future Directions. Am Surg 2023; 89:36-42. [PMID: 35567312 DOI: 10.1177/00031348221101490] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Deep learning (DL) is a subset of machine learning that is rapidly gaining traction in surgical fields. Its tremendous capacity for powerful data-driven problem-solving has generated computational breakthroughs in many realms, with the fields of medicine and surgery becoming increasingly prominent avenues. Through its multi-layer architecture of interconnected neural networks, DL enables feature extraction and pattern recognition of highly complex and large-volume data. Across various surgical specialties, DL is being applied to optimize both preoperative planning and intraoperative performance in new and innovative ways. Surgeons are now able to integrate deep learning tools into their practice to improve patient safety and outcomes. Through this review, we explore the applications of deep learning in surgery and related subspecialties with an aim to shed light on the practical utilization of this technology in the present and near future.
Affiliation(s)
- Miranda X Morris: Duke University School of Medicine, Durham, NC, USA; Duke Pratt School of Engineering, Durham, NC, USA
- Aashish Rajesh: Department of Surgery, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Malke Asaad: Department of Plastic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Abbas Hassan: Department of Plastic Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Rakan Saadoun: Department of Plastic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Charles E Butler: Department of Plastic Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA

44
Tariciotti L, Ferlito D, Caccavella VM, Di Cristofori A, Fiore G, Remore LG, Giordano M, Remoli G, Bertani G, Borsa S, Pluderi M, Remida P, Basso G, Giussani C, Locatelli M, Carrabba G. A Deep Learning Model for Preoperative Differentiation of Glioblastoma, Brain Metastasis, and Primary Central Nervous System Lymphoma: An External Validation Study. NEUROSCI 2022; 4:18-30. [PMCID: PMC11605211 DOI: 10.3390/neurosci4010003] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Revised: 12/26/2022] [Accepted: 12/28/2022] [Indexed: 12/20/2024] Open
Abstract
(1) Background: Neuroimaging differentiation of glioblastoma, primary central nervous system lymphoma (PCNSL) and solitary brain metastasis (BM) represents a diagnostic and therapeutic challenge in neurosurgical practice, expanding the burden of care and exposing patients to additional risks related to further invasive procedures and treatment delays. In addition, atypical cases and overlapping features have not been entirely addressed by modern diagnostic research. The aim of this study was to validate a previously designed and internally validated ResNet101 deep learning model to differentiate glioblastomas, PCNSLs and BMs. (2) Methods: We enrolled 126 patients (glioblastoma: n = 64; PCNSL: n = 27; BM: n = 35) with preoperative T1Gd-MRI scans and histopathological confirmation. Each lesion was segmented, and all regions of interest were exported in a DICOM dataset. A pre-trained ResNet101 deep neural network model implemented in a previous work on 121 patients was externally validated on the current cohort to differentiate glioblastomas, PCNSLs and BMs on T1Gd-MRI scans. (3) Results: The model achieved optimal classification performance in distinguishing PCNSLs (AUC: 0.73; 95%CI: 0.62–0.85), glioblastomas (AUC: 0.78; 95%CI: 0.71–0.87) and moderate to low ability in differentiating BMs (AUC: 0.63; 95%CI: 0.52–0.76). The performance of expert neuro-radiologists on conventional plus advanced MR imaging, assessed by retrospectively reviewing the diagnostic reports of the selected cohort of patients, was found superior in accuracy for BMs (89.69%) and not inferior for PCNSL (82.90%) and glioblastomas (84.09%). (4) Conclusions: We investigated whether the previously published deep learning model was generalizable to an external population recruited at a different institution—this validation confirmed the consistency of the model and laid the groundwork for future clinical applications in brain tumour classification. 
This artificial intelligence-based model might represent a valuable educational resource and, if largely replicated on prospective data, help physicians differentiate glioblastomas, PCNSL and solitary BMs, especially in settings with limited resources.
Affiliation(s)
- Leonardo Tariciotti: Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico, Unit of Neurosurgery, 20122 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Davide Ferlito: Unit of Neurosurgery, Ospedale San Gerardo, Azienda Socio-Sanitaria Territoriale di Monza, 20900 Monza, Italy; School of Medicine and Surgery, University of Milano-Bicocca, 20900 Monza, Italy
- Andrea Di Cristofori: Unit of Neurosurgery, Ospedale San Gerardo, Azienda Socio-Sanitaria Territoriale di Monza, 20900 Monza, Italy
- Giorgio Fiore: Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico, Unit of Neurosurgery, 20122 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Luigi G. Remore: Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico, Unit of Neurosurgery, 20122 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Martina Giordano: Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Giulia Remoli: School of Medicine and Surgery, University of Milano-Bicocca, 20900 Monza, Italy
- Giulio Bertani: Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico, Unit of Neurosurgery, 20122 Milan, Italy
- Stefano Borsa: Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico, Unit of Neurosurgery, 20122 Milan, Italy
- Mauro Pluderi: Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico, Unit of Neurosurgery, 20122 Milan, Italy
- Paolo Remida: Unit of Neuroradiology, Ospedale San Gerardo, Azienda Socio-Sanitaria Territoriale di Monza, 20900 Monza, Italy
- Gianpaolo Basso: School of Medicine and Surgery, University of Milano-Bicocca, 20900 Monza, Italy; Unit of Neuroradiology, Ospedale San Gerardo, Azienda Socio-Sanitaria Territoriale di Monza, 20900 Monza, Italy
- Carlo Giussani: Unit of Neurosurgery, Ospedale San Gerardo, Azienda Socio-Sanitaria Territoriale di Monza, 20900 Monza, Italy; School of Medicine and Surgery, University of Milano-Bicocca, 20900 Monza, Italy
- Marco Locatelli: Fondazione IRCCS Cà Granda Ospedale Maggiore Policlinico, Unit of Neurosurgery, 20122 Milan, Italy; Department of Pathophysiology and Transplantation, University of Milan, 20122 Milan, Italy
- Giorgio Carrabba: Unit of Neurosurgery, Ospedale San Gerardo, Azienda Socio-Sanitaria Territoriale di Monza, 20900 Monza, Italy; School of Medicine and Surgery, University of Milano-Bicocca, 20900 Monza, Italy

45
Scala A, Borrelli A, Improta G. Predictive analysis of lower limb fractures in the orthopedic complex operative unit using artificial intelligence: the case study of AOU Ruggi. Sci Rep 2022; 12:22153. [PMID: 36550192 PMCID: PMC9780352 DOI: 10.1038/s41598-022-26667-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022] Open
Abstract
Hospital length of stay (LOS) is one of the main parameters for evaluating the management of a health facility and of its departments across the different specializations. Healthcare costs, as well as profit margins, are closely linked to this parameter. In the orthopedic field, predicting LOS is increasingly complex and fundamentally important for planning resources, estimating waiting times for scheduled interventions, and managing the department and its surgical activity. The purpose of this work is to predict and evaluate the LOS value using machine learning methods and multiple linear regression, starting from clinical data of patients hospitalized with lower limb fractures. The data were collected at the "San Giovanni di Dio e Ruggi d'Aragona" hospital in Salerno (Italy).
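The multiple-linear-regression step described above can be sketched with ordinary least squares. The predictor names and values below are hypothetical stand-ins for the study's clinical variables, and the toy targets are generated from a known linear rule so the fit is exact:

```python
import numpy as np

# Toy design matrix: [age, anesthesia_time_min] for 5 patients.
X = np.array([[70, 60], [82, 90], [65, 45], [78, 75], [90, 120]], float)
# LOS in days, generated as 1 + 0.1*age + 0.05*anesthesia_time.
y_los = np.array([11.0, 13.7, 9.75, 12.55, 16.0])

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y_los, rcond=None)

def predict_los(age, anesthesia_min):
    return coef[0] + coef[1] * age + coef[2] * anesthesia_min

print(round(predict_los(75, 80), 2))  # 12.5
```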
Affiliation(s)
- Arianna Scala: Department of Public Health, University of Naples "Federico II", Naples, Italy
- Anna Borrelli: "San Giovanni di Dio e Ruggi d'Aragona" University Hospital, Salerno, Italy
- Giovanni Improta: Department of Public Health, University of Naples "Federico II", Naples, Italy; Interdepartmental Center for Research in Healthcare Management and Innovation in Healthcare (CIRMIS), Naples, Italy

46
Cohen M, Puntonet J, Sanchez J, Kierszbaum E, Crema M, Soyer P, Dion E. Artificial intelligence vs. radiologist: accuracy of wrist fracture detection on radiographs. Eur Radiol 2022; 33:3974-3983. [PMID: 36515712 DOI: 10.1007/s00330-022-09349-3] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Revised: 09/05/2022] [Accepted: 11/29/2022] [Indexed: 12/15/2022]
Abstract
OBJECTIVE To compare the performance of artificial intelligence (AI) with that of radiologists in wrist fracture detection on radiographs. METHODS This retrospective study included 637 patients (1917 radiographs) with wrist trauma between January 2017 and December 2019. The AI software used a deep neural network algorithm. Ground truth was established by three senior musculoskeletal radiologists who compared the initial radiology reports (IRR) made by non-specialized radiologists, the results of AI, and the combination of AI and IRR (AI+IRR). RESULTS A total of 318 fractures were reported by the senior radiologists in 247 patients. Sensitivity of AI (83%; 95% CI: 78-87%) was significantly greater than that of IRR (76%; 95% CI: 70-81%) (p < 0.001). Specificities were similar for AI (96%; 95% CI: 93-97%) and for IRR (96%; 95% CI: 94-98%) (p = 0.80). The combination AI+IRR had a significantly greater sensitivity (88%; 95% CI: 84-92%) than AI and IRR alone (p < 0.001) and a lower specificity (92%; 95% CI: 89-95%) (p < 0.001). The sensitivity for scaphoid fracture detection was acceptable for AI (84%) and IRR (80%) but poor for fractures of the other carpal bones (41% for AI and 26% for IRR). CONCLUSIONS Performance of AI in wrist fracture detection on radiographs is better than that of non-specialized radiologists. The combination of AI and radiologist analysis yields the best performance. KEY POINTS • Artificial intelligence performs better in wrist fracture detection than non-expert radiologists in daily practice. • Performance of artificial intelligence differs greatly depending on the anatomical area. • Sensitivity of artificial intelligence for the detection of carpal bone fractures is 56%.
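Combining AI with the initial report behaves like an OR rule: a wrist is flagged if either reader flags it, which can only raise sensitivity and can only lower specificity, matching the direction of the results above. A small sketch with synthetic labels (not the study's data):

```python
def sens_spec(pred, truth):
    """Sensitivity and specificity for binary predictions."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum(not p and not t for p, t in zip(pred, truth))
    pos, neg = sum(truth), len(truth) - sum(truth)
    return tp / pos, tn / neg

# Synthetic example: 6 wrists, 4 with a true fracture.
truth = [1, 1, 1, 1, 0, 0]
ai    = [1, 1, 0, 1, 0, 1]   # misses case 3, one false positive
irr   = [1, 0, 1, 1, 0, 0]   # misses case 2

combined = [a or r for a, r in zip(ai, irr)]  # OR combination
print(sens_spec(ai, truth))        # (0.75, 0.5)
print(sens_spec(irr, truth))       # (0.75, 1.0)
print(sens_spec(combined, truth))  # (1.0, 0.5)
```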
Affiliation(s)
- Mathieu Cohen: Department of Radiology, Hotel Dieu Hospital, Assistance Publique-Hopitaux de Paris, Paris, France; Université Paris Cité, F-75006, Paris, France
- Julien Puntonet: Department of Radiology, Hotel Dieu Hospital, Assistance Publique-Hopitaux de Paris, Paris, France; Université Paris Cité, F-75006, Paris, France
- Julien Sanchez: Université Paris Cité, F-75006, Paris, France; Institute of Sports Imaging, French National Institute of Sports (INSEP), Paris, France
- Michel Crema: Department of Radiology, Hotel Dieu Hospital, Assistance Publique-Hopitaux de Paris, Paris, France; Institute of Sports Imaging, French National Institute of Sports (INSEP), Paris, France
- Philippe Soyer: Université Paris Cité, F-75006, Paris, France; Department of Radiology, Cochin Hospital, Assistance Publique-Hopitaux de Paris, 75014, Paris, France
- Elisabeth Dion: Department of Radiology, Hotel Dieu Hospital, Assistance Publique-Hopitaux de Paris, Paris, France; Université Paris Cité, F-75006, Paris, France

47
Yang L, Gao S, Li P, Shi J, Zhou F. Recognition and Segmentation of Individual Bone Fragments with a Deep Learning Approach in CT Scans of Complex Intertrochanteric Fractures: A Retrospective Study. J Digit Imaging 2022; 35:1681-1689. [PMID: 35711073 PMCID: PMC9712885 DOI: 10.1007/s10278-022-00669-w] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Revised: 05/04/2022] [Accepted: 06/07/2022] [Indexed: 10/18/2022] Open
Abstract
The characteristics of bone fragments are the main factors influencing the choice of treatment in intertrochanteric fractures. This study aimed to develop a deep learning algorithm for recognizing and segmenting individual fragments in CT images of complex intertrochanteric fractures for orthopedic surgeons. This retrospective study was based on 160 hip CT scans (43,510 images) of complex fractures of three types based on the Evans-Jensen classification: 40 cases of type 3 (IIA) fractures, 80 cases of type 4 (IIB) fractures, and 40 cases of type 5 (III) fractures. The images were randomly split to construct a training set of 120 CT scans (32,045 images) and a testing set of 40 CT scans (11,465 images). A deep learning model was built as a cascaded architecture composed of one convolutional neural network (CNN) for locating the fracture ROI and another CNN for recognizing and segmenting individual fragments within the ROI. The accuracy of object detection and the Dice coefficient of segmentation of individual fragments were used to evaluate model performance. The model yielded an average accuracy of 89.4% for individual fragment recognition and an average Dice coefficient of 90.5% for segmentation in CT images. The results demonstrated the feasibility of recognition and segmentation of individual fragments in complex intertrochanteric fractures with a deep learning approach. Altogether, these promising results suggest the potential of our model to be applied in many clinical scenarios.
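The Dice coefficient used above to score fragment segmentation measures the overlap between a predicted and a ground-truth mask: twice the intersection divided by the total mask area. A minimal NumPy version, assuming binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient 2|A n B| / (|A| + |B|) for binary masks; eps
    guards against division by zero when both masks are empty."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

pred   = np.array([[1, 1], [0, 0]])   # predicted fragment mask
target = np.array([[1, 0], [1, 0]])   # ground-truth fragment mask
print(round(dice(pred, target), 2))   # 0.5 (one overlapping pixel)
```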
Affiliation(s)
- Lv Yang: Department of Orthopedics, Peking University Third Hospital, Beijing, China
- Shan Gao: Department of Orthopedics, Peking University Third Hospital, Beijing, China
- Pengfei Li: Department of Orthopedics, Peking University Third Hospital, Beijing, China
- Jiancheng Shi: Department of Radiology, Peking University Third Hospital, Yanqing Hospital, Beijing, China
- Fang Zhou: Department of Orthopedics, Peking University Third Hospital, Beijing, China

48
Kumar V, Patel S, Baburaj V, Vardhan A, Singh PK, Vaishya R. Current understanding on artificial intelligence and machine learning in orthopaedics - A scoping review. J Orthop 2022; 34:201-206. [PMID: 36104993 PMCID: PMC9465367 DOI: 10.1016/j.jor.2022.08.020] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/20/2022] [Revised: 08/16/2022] [Accepted: 08/17/2022] [Indexed: 11/25/2022] Open
Abstract
BACKGROUND Artificial Intelligence (AI) has changed the way technological challenges are approached. Today, many problems can be treated as input-output systems rather than solved from first principles. The field of orthopaedics is not spared from this rapidly expanding technology. The recent surge in the use of AI can be attributed mainly to advancements in deep learning methodologies and computing resources. This review was conducted to outline the role of AI in orthopaedics. METHODS We developed a search strategy and looked for articles on PubMed, Scopus, and EMBASE. A total of 40 articles were selected for this study, covering tools for medical aid such as imaging solutions, implant management, and robotic surgery, as well as basic scientific questions. RESULTS A total of 40 studies were included in this review. The role of AI in various subspecialties, such as arthroplasty, trauma, orthopaedic oncology, and foot and ankle surgery, is discussed in detail. CONCLUSION AI has touched most aspects of orthopaedics. The increase in technological literacy, data management plans and hardware systems, combined with access to hand-held devices such as mobile phones and electronic pads, augurs well for exciting times ahead in this field. We discuss various technological breakthroughs that AI has achieved in orthopaedics, as well as the limitations of the black-box approach of modern AI algorithms, and we advocate for more interpretable algorithms that can help both patients and surgeons alike.
Affiliation(s)
- Vishal Kumar
- Department of Orthopaedics, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
- Sandeep Patel
- Department of Orthopaedics, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
- Vishnu Baburaj
- Department of Orthopaedics, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
- Aditya Vardhan
- Department of Orthopaedics, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
- Prasoon Kumar Singh
- Department of Orthopaedics, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
|
49
|
Dankelman LHM, Schilstra S, IJpma FFA, Doornberg JN, Colaris JW, Verhofstad MHJ, Wijffels MME, Prijs J. Artificial intelligence fracture recognition on computed tomography: review of literature and recommendations. Eur J Trauma Emerg Surg 2022; 49:681-691. [PMID: 36284017 PMCID: PMC10175338 DOI: 10.1007/s00068-022-02128-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Accepted: 10/02/2022] [Indexed: 11/26/2022]
Abstract
Purpose
The use of computed tomography (CT) in fracture assessment is time consuming, challenging, and suffers from poor inter-surgeon reliability. Convolutional neural networks (CNNs), a subset of artificial intelligence (AI), may overcome these shortcomings and reduce the clinical burden of detecting and classifying fractures. The aim of this review was to summarize the literature on CNNs for the detection and classification of fractures on CT scans, focusing on their accuracy, and to evaluate their potential role in daily practice.
Methods
A literature search was performed according to the PRISMA statement; the Embase, Medline ALL, Web of Science Core Collection, Cochrane Central Register of Controlled Trials, and Google Scholar databases were searched. Studies were eligible when they described the use of AI for the detection of fractures on CT scans. Quality assessment was performed with a modified version of the methodologic index for nonrandomized studies (MINORS), using a seven-item checklist. Performance of AI was reported as accuracy, F1-score, and area under the curve (AUC).
Results
Of the 1140 identified studies, 17 were included. Accuracy ranged from 69% to 99%, the F1-score from 0.35 to 0.94, and the AUC from 0.77 to 0.95. Based on ten studies, CNNs showed similar or improved diagnostic accuracy compared with clinical evaluation alone.
Conclusions
CNNs are applicable to the detection and classification of fractures on CT scans, which can improve both automated and clinician-aided diagnostics. Further research should focus on the additional value of CNNs applied to CT scans in daily clinical practice.
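The performance measures this review extracts (accuracy, F1-score) reduce to simple arithmetic over confusion-matrix counts. As a minimal sketch of how F1 relates to precision and recall (the counts below are invented for illustration, not drawn from any of the reviewed studies):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical fracture-detection result: 8 true positives, 2 false positives, 2 false negatives
p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=2)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```

Because F1 is a harmonic mean, it is pulled toward the weaker of precision and recall, which is why the review reports it alongside plain accuracy.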
Affiliation(s)
- Lente H. M. Dankelman
- Trauma Research Unit, Department of Surgery, Erasmus MC, University Medical Center Rotterdam, P.O. Box 2040, 3000 CA Rotterdam, The Netherlands
- Sanne Schilstra
- Department of Orthopedic Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Department of Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Frank F. A. IJpma
- Department of Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Job N. Doornberg
- Department of Orthopedic Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Department of Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Department of Orthopedic & Trauma Surgery, Flinders Medical Centre, Flinders University, Adelaide, Australia
- Joost W. Colaris
- Department of Orthopedics, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Michael H. J. Verhofstad
- Trauma Research Unit, Department of Surgery, Erasmus MC, University Medical Center Rotterdam, P.O. Box 2040, 3000 CA Rotterdam, The Netherlands
- Mathieu M. E. Wijffels
- Trauma Research Unit, Department of Surgery, Erasmus MC, University Medical Center Rotterdam, P.O. Box 2040, 3000 CA Rotterdam, The Netherlands
- Jasper Prijs
- Department of Orthopedic Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Department of Surgery, Groningen University Medical Centre, Groningen, The Netherlands
- Department of Orthopedic & Trauma Surgery, Flinders Medical Centre, Flinders University, Adelaide, Australia
|
50
|
Li Y, Yao Q, Yu H, Xie X, Shi Z, Li S, Qiu H, Li C, Qin J. Automated segmentation of vertebral cortex with 3D U-Net-based deep convolutional neural network. Front Bioeng Biotechnol 2022; 10:996723. [PMCID: PMC9626964 DOI: 10.3389/fbioe.2022.996723] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Accepted: 09/02/2022] [Indexed: 11/13/2022] Open
Abstract
Objectives: We developed a 3D U-Net-based deep convolutional neural network for the automatic segmentation of the vertebral cortex. The purpose of this study was to evaluate the accuracy of the 3D U-Net deep learning model. Methods: In this study, a fully automated vertebral cortical segmentation method based on 3D U-Net was developed, and ten-fold cross-validation was employed. Through data augmentation, we obtained 1,672 3D images of chest CT scans. Segmentation was performed using a conventional image-processing method and manually corrected by a senior radiologist to create the gold standard. To compare segmentation performance, 3D U-Net, Res U-Net, KiU-Net, and SegNet were used to segment the vertebral cortex in CT images. The performance of 3D U-Net and the other three deep learning algorithms was evaluated using the Dice similarity coefficient (DSC), mean intersection over union (mIoU), mean pixel accuracy (MPA), and frames per second (FPS). Results: The DSC, mIoU, and MPA of 3D U-Net were better than those of the other three strategies, reaching 0.71 ± 0.03, 0.74 ± 0.08, and 0.83 ± 0.02, respectively, indicating promising automated segmentation results. The FPS was slightly lower than that of SegNet (23.09 ± 1.26 vs. 30.42 ± 3.57). Conclusion: The cortical bone can be effectively segmented based on 3D U-Net.
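The overlap metrics this abstract reports (DSC and mIoU) are closely related set-overlap measures over predicted and gold-standard masks. A minimal sketch on flattened binary masks (the toy masks below are illustrative, not study data):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient: 2*|A∩B| / (|A| + |B|) on flat binary masks."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * intersection / total if total else 1.0

def iou(mask_a, mask_b):
    """Intersection over union (Jaccard index): |A∩B| / |A∪B| on flat binary masks."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return intersection / union if union else 1.0

# Toy 2x2 segmentations flattened to 1D: prediction vs. gold standard
pred, gold = [1, 1, 0, 0], [1, 0, 1, 0]
print(dice_coefficient(pred, gold), iou(pred, gold))
```

For the same pair of masks the two are interconvertible via DSC = 2·IoU / (1 + IoU); mIoU as usually reported averages the per-class IoU over classes.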
Affiliation(s)
- Yang Li
- Department of Radiology, The Second Affiliated Hospital of Shandong First Medical University, Tai’an, China
- Qianqian Yao
- Department of Radiology, The Second Affiliated Hospital of Shandong First Medical University, Tai’an, China
- Haitao Yu
- Mechanical and Electrical Engineering College, Hainan University, Haikou, China
- Xiaofeng Xie
- Mechanical and Electrical Engineering College, Hainan University, Haikou, China
- Zeren Shi
- Hangzhou Shimai Intelligent Technology Co., Ltd., Hangzhou, China
- Shanshan Li
- Department of Radiology, The Second Affiliated Hospital of Shandong First Medical University, Tai’an, China
- Hui Qiu
- Department of Radiology, The Second Affiliated Hospital of Shandong First Medical University, Tai’an, China
- Changqin Li
- Department of Radiology, The Second Affiliated Hospital of Shandong First Medical University, Tai’an, China
- Jian Qin
- Department of Radiology, The Second Affiliated Hospital of Shandong First Medical University, Tai’an, China (Correspondence: Jian Qin)
|