1
Heller MT, Maderbacher G, Schuster MF, Forchhammer L, Scharf M, Renkawitz T, Pagano S. Comparison of an AI-driven planning tool and manual radiographic measurements in total knee arthroplasty. Comput Struct Biotechnol J 2025;28:148-155. PMID: 40276217; PMCID: PMC12019206; DOI: 10.1016/j.csbj.2025.04.009.
Abstract
Background: Accurate preoperative planning in total knee arthroplasty (TKA) is essential. Traditional manual radiographic planning can be time-consuming and prone to inaccuracies. This retrospective observational study investigates the reliability and efficiency of an AI-based radiographic planning tool compared with manual measurements in patients undergoing TKA.
Methods: We retrospectively compared the Autoplan tool integrated within the mediCAD software (mediCAD Hectec GmbH, Altdorf, Germany), routinely used in our institutional workflow, with manual measurements performed by two orthopedic specialists on pre- and postoperative radiographs of 100 patients who underwent elective TKA. The following parameters were measured: leg length, mechanical axis deviation (MAD), mechanical lateral proximal femoral angle (mLPFA), anatomical mechanical angle (AMA), mechanical lateral distal femoral angle (mLDFA), joint line convergence angle (JLCA), mechanical medial proximal tibial angle (mMPTA), and mechanical tibiofemoral angle (mTFA). Intraclass correlation coefficients (ICCs) were calculated to assess measurement reliability, and the time required for each method was recorded.
Results: The Autoplan tool demonstrated high reliability (ICC > 0.90) compared with manual measurements for linear parameters (e.g., leg length and MAD). However, the angular measurements of mLPFA, JLCA, and AMA exhibited poor reliability (ICC < 0.50) among all raters. The Autoplan tool significantly reduced measurement time, with a mean saving of 44.3 seconds per case (95% CI: 43.5-45.1 seconds, p < 0.001).
Conclusion: AI-assisted tools such as the Autoplan tool in mediCAD offer substantial time savings and demonstrate reliable measurements for certain linear parameters in preoperative TKA planning. However, the low reliability observed for some measurements, even among experienced human raters, suggests inherent challenges in the radiographic assessment of angular parameters. Further development is needed to improve the accuracy of automated angular measurements and to address the inherent variability in their assessment.
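The reliability analysis above rests on intraclass correlation coefficients. As a minimal illustration (not the authors' code), a two-way random-effects, absolute-agreement ICC(2,1) in the Shrout-Fleiss sense can be computed directly with NumPy; the rater data below are hypothetical:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement ICC(2,1) (Shrout & Fleiss).

    ratings: (n_subjects, k_raters) array, e.g. one radiographic parameter
    measured by several raters on the same radiographs.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)          # between-subject mean square
    ms_cols = ss_cols / (k - 1)          # between-rater mean square
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical leg-length measurements (mm): 5 radiographs, 3 raters.
leg_length = np.array([
    [812.0, 813.1, 812.4],
    [790.5, 790.9, 791.0],
    [845.2, 844.8, 845.5],
    [801.3, 802.0, 801.1],
    [826.7, 826.2, 827.0],
])
print(round(icc2_1(leg_length), 3))  # close agreement across raters -> ICC near 1
```

Perfect agreement across raters yields an ICC of exactly 1; increasing rater disagreement relative to between-subject variance pulls the ICC toward 0, which is the pattern the abstract reports for the angular parameters.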
Affiliation(s)
- Marie Theres Heller, Guenther Maderbacher, Marie Farina Schuster, Lina Forchhammer, Markus Scharf, Tobias Renkawitz, Stefano Pagano
- Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum, Bad Abbach, Germany
2
Li J, Xia Y, Zhou T, Dong Q, Lin X, Gu L, Jiang S, Xu M, Wan X, Duan G, Zhu D, Chen R, Zhang Z, Xiang L, Fan L, Liu S. Accelerated Spine MRI with Deep Learning Based Image Reconstruction: A Prospective Comparison with Standard MRI. Acad Radiol 2025;32:2121-2132. PMID: 39580249; DOI: 10.1016/j.acra.2024.11.004.
Abstract
RATIONALE AND OBJECTIVES: To evaluate the performance of deep learning (DL) reconstructed MRI in terms of image acquisition time, overall image quality, and diagnostic interchangeability compared with standard-of-care (SOC) MRI.
MATERIALS AND METHODS: This prospective study recruited participants with spinal discomfort between July 2023 and August 2023. All participants underwent two separate MRI examinations (standard and accelerated scanning). Signal-to-noise ratios (SNR), contrast-to-noise ratios (CNR), and similarity metrics were calculated for quantitative evaluation. Four radiologists performed subjective quality and lesion-characteristic assessment. The Wilcoxon test was used to assess differences in SNR, CNR, and subjective image quality between DL and SOC. Various spinal lesions were also tested for interchangeability using the individual equivalence index. Interreader and intrareader agreement and concordance (κ, Kendall τ, and W statistics) were computed, and McNemar tests were performed for comprehensive evaluation.
RESULTS: 200 participants (107 male, mean age 46.56 ± 17.07 years) were included. Compared with SOC, DL reduced scan time by approximately 40%. The SNR and CNR of DL were significantly higher than those of SOC (P < 0.001). DL showed varying degrees of improvement (0-0.35) in each of the similarity metrics. All absolute individual equivalence indexes were less than 4%, indicating interchangeability between SOC and DL. Kappa and Kendall statistics showed good to near-perfect agreement (0.72-0.98). There was no difference between SOC and DL in subjective scoring or frequency of lesion detection.
CONCLUSION: Compared with SOC, DL provided high-quality images for diagnosis and reduced examination time for patients. DL was found to be interchangeable with SOC in detecting various spinal abnormalities.
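The SNR and CNR figures above come from region-of-interest statistics. A minimal sketch of one common definition (mean ROI signal over the standard deviation of a background/air ROI; the pixel values below are hypothetical, and the exact ROI convention varies by study):

```python
import numpy as np

def snr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """SNR as mean signal intensity over background noise (std of air ROI)."""
    return signal_roi.mean() / background_roi.std()

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, background_roi: np.ndarray) -> float:
    """CNR as absolute difference of mean tissue intensities over background noise."""
    return abs(roi_a.mean() - roi_b.mean()) / background_roi.std()

# Hypothetical pixel intensities from three regions of interest.
cord = np.array([10.0, 12.0])                 # mean 11
csf = np.array([4.0, 6.0])                    # mean 5
background = np.array([1.0, 3.0, 1.0, 3.0])   # population std 1

print(snr(cord, background))       # 11.0
print(cnr(cord, csf, background))  # 6.0
```

Higher values after DL reconstruction, as reported in the abstract, indicate less apparent noise relative to the tissue signal and tissue contrast.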
Affiliation(s)
- Jie Li, Yi Xia, Taohu Zhou, Xiaoqing Lin, Lingling Gu, Song Jiang, Meiling Xu, Xinyi Wan, Guangwen Duan, Dongqing Zhu, Rutan Chen, Li Fan, Shiyuan Liu
- Department of Radiology, Second Affiliated Hospital of Naval Medical University, No. 415 Fengyang Road, Shanghai 200003, PR China
- Jie Li, Xiaoqing Lin
- College of Health Sciences and Engineering, University of Shanghai for Science and Technology, No. 516 Jungong Road, Shanghai 200093, PR China
- Qian Dong
- Department of Radiology, University of Michigan Taubman Center, Room 2904, 1500 E. Medical Center Dr., SPC 5326, Ann Arbor, MI 48109, USA
- Zhihao Zhang, Lei Xiang
- Shentou Medical Inc, Room 1105, No. 938 Jinshajiang Road, Shanghai 200062, PR China
3
Cheng KY, Moazamian D, Namiranian B, Shaterian Mohammadi H, Alenezi S, Chung CB, Jerban S. Estimation of Trabecular Bone Volume with Dual-Echo Ultrashort Echo Time (UTE) Magnetic Resonance Imaging (MRI) Significantly Correlates with High-Resolution Computed Tomography (CT). J Imaging 2025;11:57. PMID: 39997559; PMCID: PMC11856473; DOI: 10.3390/jimaging11020057.
Abstract
Trabecular bone architecture has important implications for the mechanical strength of bone. Trabecular elements appear as a signal void on conventional magnetic resonance imaging (MRI) sequences. Ultrashort echo time (UTE) MRI can acquire high signal from trabecular bone, allowing quantitative evaluation. However, trabecular morphology is often disturbed in UTE-MRI by chemical shift artifacts caused by fat in the marrow. This study aimed to evaluate a UTE-MRI technique to estimate the trabecular bone volume fraction (BVTV) without requiring trabecular-level morphological assessment. A total of six cadaveric distal tibial diaphyseal trabecular bone cubes were scanned using a dual-echo UTE Cones sequence (TE = 0.03 and 2.2 ms) on a clinical 3T MRI scanner and on a micro-computed tomography (μCT) scanner. BVTV was calculated from 10 consecutive slices on both the MR and μCT images. BVTV calculated from the MR images correlated strongly and significantly with BVTV determined from the μCT images (R = 0.84, p < 0.01), suggesting that UTE-MRI is a feasible technique for assessing trabecular bone microarchitecture. This would allow non-invasive assessment of information regarding bone strength, and UTE-MRI may potentially serve as a novel tool for fracture-risk assessment.
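The reported R = 0.84 is a Pearson correlation between paired BVTV estimates from the two modalities. A minimal sketch of that validation step (the six paired values below are hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical paired BVTV estimates (bone volume fractions) for six
# bone cubes: one value from UTE-MRI, one from the uCT reference standard.
bvtv_mri = np.array([0.18, 0.22, 0.25, 0.30, 0.34, 0.40])
bvtv_uct = np.array([0.20, 0.21, 0.27, 0.29, 0.36, 0.38])

# Pearson correlation coefficient between the two methods.
r = np.corrcoef(bvtv_mri, bvtv_uct)[0, 1]
print(round(r, 2))  # near-linear agreement -> r close to 1
```

Note that a high Pearson r shows association, not agreement; method-comparison studies often supplement it with Bland-Altman analysis of the paired differences.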
Affiliation(s)
- Karen Y. Cheng, Dina Moazamian, Behnam Namiranian
- Department of Radiology, University of California San Diego, La Jolla, CA 92037, USA
- Salem Alenezi
- Research and Laboratories Sector, Saudi Food and Drug Authority, Riyadh 13513-7148, Saudi Arabia
- Christine B. Chung
- Department of Radiology, University of California San Diego, La Jolla, CA 92037, USA
- Department of Radiology, Veterans Affairs San Diego Healthcare System, La Jolla, CA 92161, USA
- Saeed Jerban
- Department of Radiology, University of California San Diego, La Jolla, CA 92037, USA
- Research Service, Veterans Affairs San Diego Healthcare System, La Jolla, CA 92161, USA
4
Lu Y, Yang L, Mulford K, Grove A, Kaji E, Pareek A, Levy B, Wyles CC, Camp CL, Krych AJ. AKIRA: Deep learning tool for image standardization, implant detection and arthritis grading to establish a radiographic registry in patients with anterior cruciate ligament injuries. Knee Surg Sports Traumatol Arthrosc 2025. PMID: 39925136; DOI: 10.1002/ksa.12618.
Abstract
PURPOSE: Developing large-scale, standardized radiographic registries for anterior cruciate ligament (ACL) injuries with artificial intelligence (AI) tools can advance personalized orthopaedics. We propose deploying Artificial Intelligence for Knee Imaging Registration and Analysis (AKIRA), a trio of deep learning (DL) algorithms, to automatically classify and annotate radiographs. We hypothesized that the algorithms could efficiently organize radiographs by laterality and projection, identify implants, and classify osteoarthritis (OA) grade.
METHODS: A collection of 20,836 knee radiographs from all time points of treatment (mean orthopaedic follow-up 70.7 months; interquartile range [IQR]: 6.8-172 months) was aggregated from 1628 ACL-injured patients (median age 26 years [IQR: 19-42], 57% male). Three DL algorithms (EfficientNet, YOLO [You Only Look Once], and Residual Network) were employed. Radiograph laterality and projection (anterior-posterior [AP], lateral, sunrise, posterior-anterior, hip-knee-ankle, and Camp-Coventry intercondylar [notch]) were labelled by a DL model. Manually provided labels of metal fixation implants were used to develop a DL object-detection algorithm. The degree of OA on standing AP radiographs, both as specific Kellgren-Lawrence (KL) grades and as a binarized OA label (KL grade ≥2), was classified using a DL algorithm. Individual model performances were evaluated on a subset of images before AKIRA was deployed to construct the registry from all ACL radiographs.
RESULTS: The classification algorithms showed excellent performance in classifying radiographic laterality (F1 score: 0.962-0.975) and projection (F1 score: 0.941-1.0). The object-detection algorithm achieved high precision-recall (area under the precision-recall curve: 0.695-0.992) for identifying various metal fixations. The KL classifier reached concordances of 0.39-0.40, improving to 0.81-0.82 for binary OA labels. Sequential deployment of AKIRA following internal validation processed and labelled all 20,836 images with the appropriate views, implants, and presence of OA within 88 min.
CONCLUSION: AKIRA effectively automated classification and object detection in a large radiograph cohort of ACL injuries, creating an AI-enabled radiographic registry with comprehensive details on laterality, projection, implants, and OA.
STUDY DESIGN: Cross-sectional study.
LEVEL OF EVIDENCE: Level IV.
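The per-class F1 scores reported for the view classifiers combine precision and recall. A minimal, framework-free sketch of that metric (the label arrays are hypothetical, not the study's data):

```python
import numpy as np

def f1_score_binary(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """F1 = harmonic mean of precision and recall for binary labels."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # true positives
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # false positives
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical view-classification results: 1 = "lateral", 0 = "other view".
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1])
print(round(f1_score_binary(y_true, y_pred), 3))  # 0.75
```

For a multi-class problem such as the six projections above, this is computed one class at a time (one-vs-rest), which is why the abstract reports F1 as a range.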
Affiliation(s)
- Yining Lu
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota, USA
- Orthopedic Surgery Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota, USA
- Linjun Yang, Kellen Mulford, Austin Grove, Ellie Kaji
- Orthopedic Surgery Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota, USA
- Ayoosh Pareek
- Department of Orthopaedic Surgery, Hospital for Special Surgery, New York, New York, USA
- Bruce Levy
- Orlando Health Jewett Orthopedic Institute, Orlando, Florida, USA
- Cody C Wyles
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota, USA
- Orthopedic Surgery Artificial Intelligence Laboratory, Mayo Clinic, Rochester, Minnesota, USA
- Christopher L Camp, Aaron J Krych
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota, USA
5
Horiuchi D, Tatekawa H, Oura T, Shimono T, Walston SL, Takita H, Matsushita S, Mitsuyama Y, Miki Y, Ueda D. ChatGPT's diagnostic performance based on textual vs. visual information compared to radiologists' diagnostic performance in musculoskeletal radiology. Eur Radiol 2025;35:506-516. PMID: 38995378; PMCID: PMC11632015; DOI: 10.1007/s00330-024-10902-5.
Abstract
OBJECTIVES: To compare the diagnostic accuracy of Generative Pre-trained Transformer (GPT)-4-based ChatGPT, GPT-4 with vision (GPT-4V)-based ChatGPT, and radiologists in musculoskeletal radiology.
MATERIALS AND METHODS: We included 106 "Test Yourself" cases from Skeletal Radiology published between January 2014 and September 2023. We input the medical history and imaging findings into GPT-4-based ChatGPT and the medical history and images into GPT-4V-based ChatGPT, and both generated a diagnosis for each case. Two radiologists (a radiology resident and a board-certified radiologist) independently provided diagnoses for all cases. Diagnostic accuracy rates were determined against the published ground truth, and chi-square tests were performed to compare the diagnostic accuracy of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and the radiologists.
RESULTS: GPT-4-based ChatGPT significantly outperformed GPT-4V-based ChatGPT (p < 0.001), with accuracy rates of 43% (46/106) and 8% (9/106), respectively. The radiology resident and the board-certified radiologist achieved accuracy rates of 41% (43/106) and 53% (56/106). The diagnostic accuracy of GPT-4-based ChatGPT was comparable to that of the radiology resident and lower than that of the board-certified radiologist, although neither difference was significant (p = 0.78 and 0.22, respectively). The diagnostic accuracy of GPT-4V-based ChatGPT was significantly lower than that of both radiologists (p < 0.001 for both).
CONCLUSION: GPT-4-based ChatGPT demonstrated significantly higher diagnostic accuracy than GPT-4V-based ChatGPT. While GPT-4-based ChatGPT's diagnostic performance was comparable to that of the radiology resident, it did not reach the level of the board-certified radiologist in musculoskeletal radiology.
CLINICAL RELEVANCE STATEMENT: GPT-4-based ChatGPT outperformed GPT-4V-based ChatGPT and was comparable to the radiology resident, but it did not reach the level of the board-certified radiologist in musculoskeletal radiology. Radiologists should understand ChatGPT's current performance as a diagnostic tool for optimal utilization.
KEY POINTS: This study compared the diagnostic performance of GPT-4-based ChatGPT, GPT-4V-based ChatGPT, and radiologists in musculoskeletal radiology. GPT-4-based ChatGPT was comparable to the radiology resident but did not reach the level of the board-certified radiologist. When utilizing ChatGPT, it is crucial to input appropriate descriptions of imaging findings rather than the images.
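The abstract's headline comparison (46/106 vs. 9/106 correct, p < 0.001) can be reproduced with a Pearson chi-square test on the 2x2 contingency table. A minimal NumPy sketch (statistic only, without continuity correction):

```python
import numpy as np

def chi2_2x2(table: np.ndarray) -> float:
    """Pearson chi-square statistic for a 2x2 contingency table."""
    # Expected counts under independence: outer product of margins / total.
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    return float(np.sum((table - expected) ** 2 / expected))

# Correct / incorrect diagnoses out of 106 cases each (counts from the abstract).
table = np.array([[46, 60],   # GPT-4-based ChatGPT
                  [9, 97]])   # GPT-4V-based ChatGPT
stat = chi2_2x2(table)
print(round(stat, 2))  # 33.61, far above the 10.83 cutoff for p < 0.001 at df = 1
```

The statistic of about 33.6 on 1 degree of freedom is consistent with the reported p < 0.001.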
Affiliation(s)
- Daisuke Horiuchi, Hiroyuki Tatekawa, Tatsushi Oura, Taro Shimono, Shannon L Walston, Hirotaka Takita, Shu Matsushita, Yasuhito Mitsuyama, Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Department of Artificial Intelligence, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
6
Chen J, Xu H, Zhou H, Wang Z, Li W, Guo J, Zhou Y. Knowledge mapping and bibliometric analysis of medical knee magnetic resonance imaging for knee osteoarthritis (2004-2023). Front Surg 2024;11:1387351. PMID: 39345660; PMCID: PMC11427760; DOI: 10.3389/fsurg.2024.1387351.
Abstract
Objectives: Magnetic resonance imaging (MRI) is increasingly used to detect knee osteoarthritis (KOA). In this study, we aimed to systematically examine the global research status of medical knee MRI in KOA, analyze research hotspots, explore future trends, and present the results as a knowledge graph.
Methods: The Web of Science core database was searched for studies on medical knee MRI scans in patients with KOA between 2004 and 2023. CiteSpace, SCImago Graphica, and VOSviewer were used for the country, institution, journal, author, reference, and keyword analyses.
Results: A total of 2,904 articles were included. The United States and European countries lead the field. Boston University is the main contributing institution, and Osteoarthritis and Cartilage is the principal journal. The most frequently co-cited article was "Radiological assessment of osteoarthrosis". Guermazi A was the author with the highest number of publications and total citations. The keywords most closely linked to MRI and KOA were "cartilage", "pain", and "injury".
Conclusions: The application of medical knee MRI in KOA falls into three areas: (1) MRI was used to assess the relationship between local tissue damage, pathological changes, and clinical symptoms; (2) risk factors for KOA were analyzed by MRI to support early diagnosis; and (3) MRI was used to evaluate the efficacy of interventions for KOA tissue damage (e.g., cartilage defects, bone marrow edema, bone marrow microfracture, and subchondral bone remodeling). Artificial intelligence, particularly deep learning, has become the focus of research on MRI applications for KOA.
Affiliation(s)
- Juntao Chen, Hang Zhou, Zheng Wang, Wanyu Li, Juan Guo
- College of Acupuncture and Tuina, Henan University of Chinese Medicine, Zhengzhou, China
- Hui Xu, Yunfeng Zhou
- College of Acupuncture and Tuina, Henan University of Chinese Medicine, Zhengzhou, China
- Tuina Department, The Third Affiliated Hospital of Henan University of Chinese Medicine, Zhengzhou, China
7
Zia-ur-Rehman, Awang MK, Rashid J, Ali G, Hamid M, Mahmoud SF, Saleh DI, Ahmad HI. Classification of Alzheimer disease using DenseNet-201 based on deep transfer learning technique. PLoS One 2024;19:e0304995. PMID: 39240975; PMCID: PMC11379170; DOI: 10.1371/journal.pone.0304995.
Abstract
Alzheimer's disease (AD) is a brain disorder that causes gradual memory loss. AD cannot be cured, so early detection is critical. Among the various AD diagnosis approaches used in this regard, Magnetic Resonance Imaging (MRI) provides the most helpful neuroimaging tool for detecting AD. In this paper, we employ a DenseNet-201-based transfer learning technique to diagnose the Alzheimer's stages Non-Demented (ND), Moderate Demented (MOD), Mild Demented (MD), Very Mild Demented (VMD), and Severe Demented (SD); the Alzheimer's MRI dataset is accordingly divided into five classes. Data augmentation methods were used to expand the size of the dataset and increase DenseNet-201's accuracy. The proposed strategy provides very high classification accuracy: this practical and reliable model delivers a success rate of 98.24%. The experiments demonstrate that the suggested deep learning approach is more accurate and performs well compared with existing techniques and state-of-the-art methods.
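The abstract mentions data augmentation without specifying the transforms. A minimal sketch of simple geometric augmentation with NumPy (the flip/rotation choices here are illustrative assumptions, not necessarily the authors' pipeline):

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Expand one image into simple geometric variants.

    The specific transforms (horizontal flip, 90- and 180-degree rotations)
    are illustrative; real pipelines often add crops, brightness shifts,
    and small random rotations.
    """
    return [
        image,                  # original
        np.fliplr(image),       # horizontal flip
        np.rot90(image, k=1),   # rotate 90 degrees
        np.rot90(image, k=2),   # rotate 180 degrees
    ]

# Tiny stand-in for one MRI slice.
img = np.arange(9, dtype=float).reshape(3, 3)
variants = augment(img)
print(len(variants))  # 4 training images derived from 1
```

Augmenting a small, class-imbalanced MRI dataset this way increases effective training-set size and helps reduce overfitting in a large backbone such as DenseNet-201.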
Affiliation(s)
- Zia-ur-Rehman, Mohd Khalid Awang
- Faculty of Informatics and Computing, Universiti Sultan Zainal Abidin (UniSZA), Terengganu, Malaysia
- Javed Rashid
- Information Technology Services, University of Okara, Okara, Pakistan
- Department of CS and SE, International Islamic University, Islamabad, Pakistan
- MLC Lab, Meharban House, Okara, Pakistan
- Ghulam Ali
- Department of CS, University of Okara, Okara, Pakistan
- Muhammad Hamid
- Department of Computer Science, Government College Women University, Sialkot, Pakistan
- Samy F. Mahmoud
- Department of Biotechnology, College of Science, Taif University, Taif, Saudi Arabia
- Dalia I. Saleh
- Department of Chemistry, College of Science, Taif University, Taif, Saudi Arabia
- Hafiz Ishfaq Ahmad
- Department of Animal Breeding and Genetics, Faculty of Veterinary and Animal Sciences, The Islamia University of Bahawalpur, Bahawalpur, Pakistan
8
Suojärvi N, Waris E. Radiographic measurements in distal radius fracture evaluation: a review of current techniques and a recommendation for standardization. Acta Radiol 2024;65:1065-1079. PMID: 39043232; DOI: 10.1177/02841851241266369.
Abstract
Radiographic measurements play a crucial role in evaluating the alignment of distal radius fractures (DRFs). Various manual methods have been used to perform the measurements, but they are susceptible to inaccuracies; recently, computer-aided methods have become available. This review explores the methods commonly used to assess DRFs: it introduces the different measurement techniques, discusses sources of measurement error and measurement reliability, and provides a recommendation for their use. As currently practiced, radiographic measurements used in the evaluation of DRFs are not reliable. Standardizing the measurement techniques is crucial to address this, and automated image analysis could help improve accuracy and reliability.
Affiliation(s)
- Nora Suojärvi, Eero Waris
- Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
9
Xie Y, Nie Y, Lundgren J, Yang M, Zhang Y, Chen Z. Cervical Spondylosis Diagnosis Based on Convolutional Neural Network with X-ray Images. Sensors (Basel) 2024;24:3428. PMID: 38894217; PMCID: PMC11174662; DOI: 10.3390/s24113428.
Abstract
The increase in cervical spondylosis cases and the expansion of the affected demographic to younger patients have escalated the demand for X-ray screening. Variability in imaging technology, differences in equipment specifications, and the diverse experience levels of clinicians collectively hinder diagnostic accuracy. In response, a deep learning approach utilizing a ResNet-34 convolutional neural network was developed. This model, trained on a comprehensive dataset of 1235 cervical spine X-ray images representing a wide range of projection angles, aims to mitigate these issues by providing a robust diagnostic tool. The model was validated on an independent set of 136 X-ray images, also varied in projection angle, to ensure efficacy across diverse clinical scenarios. It achieved a classification accuracy of 89.7%, significantly outperforming the traditional manual diagnostic approach, which has an accuracy of 68.3%. This advancement demonstrates that deep learning models can not only complement but enhance clinicians' ability to identify cervical spondylosis, offering a promising avenue for improving diagnostic accuracy and efficiency in clinical settings.
Affiliation(s)
- Yang Xie
- Department of Medical Imaging, China Rehabilitation Research Center and Capital Medical University School of Rehabilitation Medicine, Beijing 100068, China
- Yali Nie
- Department of Electronics Design, Mid Sweden University, 85170 Sundsvall, Sweden
- Jan Lundgren
- Department of Electronics Design, Mid Sweden University, 85170 Sundsvall, Sweden
- Mingliang Yang
- Department of Spinal and Neural Function Reconstruction, China Rehabilitation Research Center and Capital Medical University School of Rehabilitation Medicine, Beijing 100068, China
- Yuxuan Zhang
- Department of Electronics Design, Mid Sweden University, 85170 Sundsvall, Sweden
- Zhenbo Chen
- Department of Medical Imaging, China Rehabilitation Research Center and Capital Medical University School of Rehabilitation Medicine, Beijing 100068, China
10
Fu T, Viswanathan V, Attia A, Zerbib-Attal E, Kosaraju V, Barger R, Vidal J, Bittencourt LK, Faraji N. Assessing the Potential of a Deep Learning Tool to Improve Fracture Detection by Radiologists and Emergency Physicians on Extremity Radiographs. Acad Radiol 2024; 31:1989-1999. [PMID: 37993303 DOI: 10.1016/j.acra.2023.10.042] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2023] [Revised: 10/23/2023] [Accepted: 10/25/2023] [Indexed: 11/24/2023]
Abstract
RATIONALE AND OBJECTIVES To evaluate the standalone performance of a deep learning (DL) based fracture detection tool on extremity radiographs and assess the performance of radiologists and emergency physicians in identifying fractures of the extremities with and without the DL aid. MATERIALS AND METHODS The DL tool was previously developed using 132,000 appendicular skeletal radiographs divided into 87% training, 11% validation, and 2% test sets. Stand-alone performance was evaluated on 2626 de-identified radiographs from a single institution in Ohio, including at least 140 exams per body region. Consensus from three US board-certified musculoskeletal (MSK) radiologists served as ground truth. A multi-reader retrospective study was performed in which 24 readers (eight each of emergency physicians, non-MSK radiologists, and MSK radiologists) identified fractures in 186 cases during two independent sessions with and without DL aid, separated by a one-month washout period. The accuracy (area under the receiver operating curve), sensitivity, specificity, and reading time were compared with and without model aid. RESULTS The model achieved a stand-alone accuracy of 0.986, sensitivity of 0.987, and specificity of 0.885, and high accuracy (> 0.95) across stratification for body part, age, gender, radiographic views, and scanner type. With DL aid, reader accuracy increased by 0.047 (95% CI: 0.034, 0.061; p = 0.004) and sensitivity significantly improved from 0.865 (95% CI: 0.848, 0.881) to 0.955 (95% CI: 0.944, 0.964). Average reading time was shortened by 7.1 s (27%) per exam. When stratified by physician type, this improvement was greater for emergency physicians and non-MSK radiologists. CONCLUSION The DL tool demonstrated high stand-alone accuracy, aided physician diagnostic accuracy, and decreased interpretation time.
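The stand-alone sensitivity and specificity reported above follow directly from confusion-matrix counts. A minimal sketch; the counts below are hypothetical, chosen only so the outputs match the reported 0.987 and 0.885:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts that reproduce the reported stand-alone values.
sens, spec = sensitivity_specificity(tp=987, fn=13, tn=885, fp=115)
print(round(sens, 3), round(spec, 3))  # prints: 0.987 0.885
```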
Affiliation(s)
- Tianyuan Fu
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA
- Vidya Viswanathan
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA
- Alexandre Attia
- Azmed, 10 Rue d'Uzès, 75002, Paris, France
- Vijaya Kosaraju
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA
- Richard Barger
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA
- Julien Vidal
- Azmed, 10 Rue d'Uzès, 75002, Paris, France
- Leonardo K Bittencourt
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA
- Navid Faraji
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA
11
Cheng CT, Kuo LW, Ouyang CH, Hsu CP, Lin WC, Fu CY, Kang SC, Liao CH. Development and evaluation of a deep learning-based model for simultaneous detection and localization of rib and clavicle fractures in trauma patients' chest radiographs. Trauma Surg Acute Care Open 2024; 9:e001300. [PMID: 38646620 PMCID: PMC11029226 DOI: 10.1136/tsaco-2023-001300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/23/2024] Open
Abstract
Purpose To develop a rib and clavicle fracture detection model for chest radiographs in trauma patients using a deep learning (DL) algorithm. Materials and methods We retrospectively collected 56 145 chest X-rays (CXRs) from trauma patients in a trauma center between August 2008 and December 2016. A rib/clavicle fracture detection DL algorithm was trained using this data set, with 991 (1.8%) images labeled by experts with fracture site locations. The algorithm was tested on an independently collected set of 300 CXRs from 2017. An external test set was also collected from hospitalized trauma patients in a regional hospital for evaluation. The receiver operating characteristic curve with area under the curve (AUC), accuracy, sensitivity, specificity, precision, and negative predictive value of the model were evaluated on each test set. The prediction probability on the images was visualized as heatmaps. Results The trained DL model achieved an AUC of 0.912 (95% CI 0.878 to 0.947) on the independent test set. The accuracy, sensitivity, and specificity at the given cut-off value were 83.7%, 86.8%, and 80.4%, respectively. On the external test set, the model had a sensitivity of 88.0% and an accuracy of 72.5%. While the model exhibited a slight decrease in accuracy on the external test set, it maintained its sensitivity in detecting fractures. Conclusion The algorithm detects rib and clavicle fractures concomitantly in the CXRs of trauma patients with high accuracy, locating lesions through heatmap visualization.
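The AUC reported above can be read as the probability that a randomly chosen fracture image receives a higher classifier score than a randomly chosen non-fracture image. A minimal rank-based (Mann-Whitney) sketch, using made-up scores rather than anything from the study:

```python
def roc_auc(pos_scores, neg_scores):
    """ROC AUC via the Mann-Whitney statistic: estimates
    P(score of a positive > score of a negative), counting ties as 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy classifier scores (hypothetical, not from the study).
print(roc_auc([0.9, 0.8, 0.4], [0.3, 0.2, 0.4]))
```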
Affiliation(s)
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Ling-Wei Kuo
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chun-Hsiang Ouyang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chi-Po Hsu
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Wei-Cheng Lin
- Department of Electrical Engineering, Chang Gung University, Taoyuan, Taiwan
- Chih-Yuan Fu
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Shih-Ching Kang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Taoyuan, Taiwan
- Department of Medicine, Chang Gung University, Taoyuan, Taiwan
12
Wang XM, Zhang XJ. Role of radiomics in staging liver fibrosis: a meta-analysis. BMC Med Imaging 2024; 24:87. [PMID: 38609843 PMCID: PMC11010385 DOI: 10.1186/s12880-024-01272-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2023] [Accepted: 04/10/2024] [Indexed: 04/14/2024] Open
Abstract
BACKGROUND Fibrosis has important pathoetiological and prognostic roles in chronic liver disease. This study evaluates the role of radiomics in staging liver fibrosis. METHOD After literature search in electronic databases (Embase, Ovid, Science Direct, Springer, and Web of Science), studies were selected by following precise eligibility criteria. The quality of included studies was assessed, and meta-analyses were performed to achieve pooled estimates of area under receiver-operator curve (AUROC), accuracy, sensitivity, and specificity of radiomics in staging liver fibrosis compared to histopathology. RESULTS Fifteen studies (3718 patients; age 47 years [95% confidence interval (CI): 42, 53]; 69% [95% CI: 65, 73] males) were included. AUROC values of radiomics for detecting significant fibrosis (F2-4), advanced fibrosis (F3-4), and cirrhosis (F4) were 0.91 [95% CI: 0.89, 0.94], 0.92 [95% CI: 0.90, 0.95], and 0.94 [95% CI: 0.93, 0.96] in training cohorts and 0.89 [95% CI: 0.83, 0.91], 0.89 [95% CI: 0.83, 0.94], and 0.93 [95% CI: 0.91, 0.95] in validation cohorts, respectively. For diagnosing significant fibrosis, advanced fibrosis, and cirrhosis the sensitivity of radiomics was 84.0% [95% CI: 76.1, 91.9], 86.9% [95% CI: 76.8, 97.0], and 92.7% [95% CI: 89.7, 95.7] in training cohorts, and 75.6% [95% CI: 67.7, 83.5], 80.0% [95% CI: 70.7, 89.3], and 92.0% [95% CI: 87.8, 96.1] in validation cohorts, respectively. Respective specificity was 88.6% [95% CI: 83.0, 94.2], 88.4% [95% CI: 81.9, 94.8], and 91.1% [95% CI: 86.8, 95.5] in training cohorts, and 86.8% [95% CI: 83.3, 90.3], 94.0% [95% CI: 89.5, 98.4], and 88.3% [95% CI: 84.4, 92.2] in validation cohorts. Limitations included use of several methods for feature selection and classification, less availability of studies evaluating a particular radiological modality, lack of a direct comparison between radiology and radiomics, and lack of external validation. CONCLUSION Although radiomics offers good diagnostic accuracy in detecting liver fibrosis, its role in clinical practice is not as clear at present due to comparability and validation constraints.
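The pooled estimates above combine per-study values weighted by their precision. The meta-analysis's exact model is not restated here; as a minimal illustration, a fixed-effect inverse-variance pooling sketch with hypothetical AUROC values and standard errors:

```python
def inverse_variance_pool(estimates, std_errors):
    """Fixed-effect inverse-variance pooling: weight each study estimate
    by 1/SE^2; returns the pooled estimate and a 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical per-study AUROC values and standard errors.
est, ci = inverse_variance_pool([0.88, 0.92, 0.94], [0.02, 0.03, 0.02])
```

A random-effects model, which many imaging meta-analyses use instead, would additionally widen the weights by a between-study variance term.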
Affiliation(s)
- Xiao-Min Wang
- School of Medical Imaging, Tianjin Medical University, No. 1, Guangdong Road, Hexi District, Tianjin, 300203, China
- Xiao-Jing Zhang
- Department of Radiology, The First Medical Center, Chinese PLA General Hospital, Beijing, 100853, China
13
Xie Y, Li X, Hu Y, Liu C, Liang H, Nickel D, Fu C, Chen S, Tao H. Deep learning reconstruction for turbo spin echo to prospectively accelerate ankle MRI: A multi-reader study. Eur J Radiol 2024; 175:111451. [PMID: 38593573 DOI: 10.1016/j.ejrad.2024.111451] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2024] [Revised: 03/10/2024] [Accepted: 04/02/2024] [Indexed: 04/11/2024]
Abstract
PURPOSE To evaluate a deep learning reconstruction for turbo spin echo (DLR-TSE) sequence of ankle magnetic resonance imaging (MRI) in terms of acquisition time, image quality, and lesion detectability by comparing with conventional TSE. METHODS Between March 2023 and May 2023, patients with an indication for ankle MRI were prospectively enrolled. Each patient underwent a conventional TSE protocol and a prospectively undersampled DLR-TSE protocol. Four experienced radiologists independently assessed image quality using a 5-point scale and reviewed structural abnormalities. Image quality assessment included overall image quality, differentiation of anatomic details, diagnostic confidence, artifacts, and noise. Interchangeability analysis was performed to evaluate the equivalence of DLR-TSE relative to conventional TSE for detection of structural pathologies. RESULTS In total, 56 patients were included (mean age, 32.6 ± 10.6 years; 35 men). The DLR-TSE (233 s) protocol enabled a 57.4 % reduction in total acquisition time, compared with the conventional TSE protocol (547 s). DLR-TSE images had superior overall image quality, fewer artifacts, and less noise (all P < 0.05), compared with conventional TSE images, according to mean ratings by the four readers. Differentiation of anatomic details, diagnostic confidence, and assessments of structural abnormalities showed no differences between the two techniques (P > 0.05). Furthermore, DLR-TSE demonstrated diagnostic equivalence with conventional TSE, based on interchangeability analysis involving all analyzed structural abnormalities. CONCLUSION DLR can prospectively accelerate conventional TSE to a level comparable with a 4-minute comprehensive examination of the ankle, while providing superior image quality and similar lesion detectability in clinical practice.
Affiliation(s)
- Yuxue Xie
- Department of Radiology & Institute of Medical Functional and Molecular Imaging, Huashan Hospital, Fudan University, Shanghai, China
- Xiangwen Li
- Department of Radiology & Institute of Medical Functional and Molecular Imaging, Huashan Hospital, Fudan University, Shanghai, China
- Yiwen Hu
- Department of Radiology & Institute of Medical Functional and Molecular Imaging, Huashan Hospital, Fudan University, Shanghai, China
- Changyan Liu
- Department of Radiology & Institute of Medical Functional and Molecular Imaging, Huashan Hospital, Fudan University, Shanghai, China
- Haoyu Liang
- Department of Radiology & Institute of Medical Functional and Molecular Imaging, Huashan Hospital, Fudan University, Shanghai, China
- Dominik Nickel
- MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
- Caixia Fu
- MR Collaboration, Siemens (Shenzhen) Magnetic Resonance Ltd., Shenzhen, China
- Shuang Chen
- Department of Radiology & Institute of Medical Functional and Molecular Imaging, Huashan Hospital, Fudan University, Shanghai, China; National Clinical Research Center for Aging and Medicine, China
- Hongyue Tao
- Department of Radiology & Institute of Medical Functional and Molecular Imaging, Huashan Hospital, Fudan University, Shanghai, China
14
Granata V, Fusco R, Coluccino S, Russo C, Grassi F, Tortora F, Conforti R, Caranci F. Preliminary data on artificial intelligence tool in magnetic resonance imaging assessment of degenerative pathologies of lumbar spine. LA RADIOLOGIA MEDICA 2024; 129:623-630. [PMID: 38349415 DOI: 10.1007/s11547-024-01791-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/02/2023] [Accepted: 01/15/2024] [Indexed: 04/17/2024]
Abstract
PURPOSE To evaluate the ability of an artificial intelligence (AI) tool in magnetic resonance imaging (MRI) assessment of degenerative pathologies of the lumbar spine, using radiologist evaluation as the gold standard. METHODS Patients with degenerative pathologies of the lumbar spine evaluated with MRI were enrolled in a retrospective study approved by the local ethics committee. A comprehensive software solution (CoLumbo; SmartSoft Ltd., Varna, Bulgaria), designed to label the segments of the lumbar spine and to detect a broad spectrum of degenerative pathologies using a convolutional neural network (CNN) with automatic segmentation, was employed. The AI tool's output was compared with that of a senior neuroradiologist who employed a semiquantitative score. The chi-square test was used to assess differences among groups, and Spearman's rank correlation coefficient was calculated between the grading assigned by the radiologist and the grading produced by the software; agreement between the two was also assessed. RESULTS Ninety patients (58 men; 32 women) with degenerative pathologies of the lumbar spine, aged 60 to 81 years (mean, 66 years), were analyzed. Significant correlations between radiologist and software grading were observed at every level, but a good correlation (coefficient, 0.72) was found only at L2-L3. The best agreement was obtained at L1-L2 (81.1%) and L2-L3 (72.2%); the lowest (51.1%) was at L4-L5. For canal stenosis and compression, the highest agreement was obtained at the L5-S1 level. CONCLUSIONS The AI solution is an effective and useful tool for assessing degenerative pathologies of the lumbar spine and can improve radiologist workflow.
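Spearman's rank correlation, used above to compare radiologist and software grading, is the Pearson correlation computed on ranks (with tied values receiving their average rank). A minimal pure-Python sketch, not the study's implementation:

```python
def rankdata(x):
    """Ranks of x (1-based), with tied values assigned their average rank."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical radiologist vs. software grades for five discs.
rho = spearman([1, 2, 2, 3, 4], [1, 2, 3, 3, 4])
```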
Affiliation(s)
- Vincenza Granata
- Division of Radiology, "Istituto Nazionale Tumori IRCCS Fondazione Pascale - IRCCS di Napoli", Naples, Italy
- Simone Coluccino
- Division of Radiology, "Università degli Studi della Campania Luigi Vanvitelli", Naples, Italy
- Carmela Russo
- Unit of Neuroradiology, Department of Neurosciences, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Francesca Grassi
- Division of Radiology, "Università degli Studi della Campania Luigi Vanvitelli", Naples, Italy
- Fabio Tortora
- Neuroradiology Unit, Department of Advanced Biomedical Sciences, University "Federico II", Naples, Italy
- Renata Conforti
- Division of Radiology, "Università degli Studi della Campania Luigi Vanvitelli", Naples, Italy
- Ferdinando Caranci
- Division of Radiology, "Università degli Studi della Campania Luigi Vanvitelli", Naples, Italy
- Unit of Neuroradiology, Department of Neurosciences, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Neuroradiology Unit, Department of Advanced Biomedical Sciences, University "Federico II", Naples, Italy
- Italian Society of Medical and Interventional Radiology (SIRM), SIRM Foundation, Via della Signora 2, 20122, Milan, Italy
15
Gifani P, Shalbaf A. Transfer Learning with Pretrained Convolutional Neural Network for Automated Gleason Grading of Prostate Cancer Tissue Microarrays. JOURNAL OF MEDICAL SIGNALS & SENSORS 2024; 14:4. [PMID: 38510670 PMCID: PMC10950311 DOI: 10.4103/jmss.jmss_42_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2022] [Revised: 12/20/2022] [Accepted: 03/22/2023] [Indexed: 03/22/2024]
Abstract
Background The Gleason grading system has been the most effective predictor for prostate cancer patients. It makes it possible to assess the aggressiveness of prostate cancer and is therefore an important factor in stratification and therapeutic decisions. However, determining the Gleason grade requires highly trained pathologists, is time-consuming and tedious, and suffers from inter-pathologist variability. To remedy these limitations, this paper introduces an automatic methodology based on transfer learning with pretrained convolutional neural networks (CNNs) for automatic Gleason grading of prostate cancer tissue microarrays (TMAs). Methods Fifteen pretrained CNNs (EfficientNet B0-B5, NASNetLarge, NASNetMobile, InceptionV3, ResNet-50, SE-ResNet-50, Xception, DenseNet121, ResNeXt50, and Inception-ResNet-v2) were fine-tuned on a dataset of prostate carcinoma TMA images. Six pathologists separately identified benign and cancerous areas for each prostate TMA image by assigning a benign label or Gleason grade 3, 4, or 5 for 244 patients; the dataset was labeled by these pathologists, and a majority vote over the pixel-wise annotations yielded a unified label. Results The NASNetLarge architecture was the best model among them, classifying the prostate TMA images of the 244 patients with an accuracy of 0.93 and an area under the curve of 0.98. Conclusion Our study can act as a highly trained pathologist to categorize prostate cancer stages with more objective and reproducible results.
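The unified-labeling step described above (a majority vote over the pathologists' pixel-wise annotations) can be sketched as follows. The flat label lists and the deterministic tie-break rule are illustrative assumptions, not details from the paper:

```python
from collections import Counter

def pixelwise_majority_vote(annotations):
    """Fuse per-annotator label maps (equal-length flat lists of labels such
    as 'benign', 3, 4, 5) into one map by pixel-wise majority vote."""
    fused = []
    for pixel_labels in zip(*annotations):
        counts = Counter(pixel_labels)
        top = max(counts.values())
        winners = sorted((lab for lab, c in counts.items() if c == top), key=str)
        fused.append(winners[0])  # deterministic tie-break (illustrative choice)
    return fused

# Three toy annotators over four pixels (hypothetical labels).
fused = pixelwise_majority_vote([[3, 4, 'benign', 5],
                                 [3, 4, 'benign', 4],
                                 [4, 4, 'benign', 5]])
# fused == [3, 4, 'benign', 5]
```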
Affiliation(s)
- Parisa Gifani
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf
- Cancer Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
16
Guermazi A, Omoumi P, Tordjman M, Fritz J, Kijowski R, Regnard NE, Carrino J, Kahn CE, Knoll F, Rueckert D, Roemer FW, Hayashi D. How AI May Transform Musculoskeletal Imaging. Radiology 2024; 310:e230764. [PMID: 38165245 PMCID: PMC10831478 DOI: 10.1148/radiol.230764] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2023] [Revised: 06/18/2023] [Accepted: 07/11/2023] [Indexed: 01/03/2024]
Abstract
While musculoskeletal imaging volumes are increasing, there is a relative shortage of subspecialized musculoskeletal radiologists to interpret the studies. Will artificial intelligence (AI) be the solution? For AI to be the solution, the wide implementation of AI-supported data acquisition methods in clinical practice requires establishing trusted and reliable results. This implementation will demand close collaboration between core AI researchers and clinical radiologists. Upon successful clinical implementation, a wide variety of AI-based tools can improve the musculoskeletal radiologist's workflow by triaging imaging examinations, helping with image interpretation, and decreasing the reporting time. Additional AI applications may also be helpful for business, education, and research purposes if successfully integrated into the daily practice of musculoskeletal radiology. The question is not whether AI will replace radiologists, but rather how musculoskeletal radiologists can take advantage of AI to enhance their expert capabilities.
Affiliation(s)
- Ali Guermazi
- From the Department of Radiology, Boston University School of Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.); Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu Hospital and University Paris Cité, Paris, France (M.T.); Department of Radiology, New York University Grossman School of Medicine, New York, NY (J.F., R.K.); Gleamer, Paris, France (N.E.R.); Réseau d'Imagerie Sud Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.); Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell Medicine, New York, NY (J.C.); Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.); Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and Radiology (F.W.R.), Universitätsklinikum Erlangen & Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany (F.K.); School of Medicine & Computation, Information and Technology, Klinikum rechts der Isar, Technical University Munich, München, Germany (D.R.); Department of Computing, Imperial College London, London, England (D.R.); and Department of Radiology, Tufts Medical Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Patrick Omoumi
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Mickael Tordjman
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Jan Fritz
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Richard Kijowski
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - Nor-Eddine Regnard
- From the Department of Radiology, Boston University School of
Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston
Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.);
Department of Radiology, Lausanne University Hospital and University of
Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu
Hospital and University Paris Cité, Paris, France (M.T.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.,
R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud
Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.);
Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department
of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell
Medicine, New York, NY (J.C.); Department of Radiology and Institute for
Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.);
Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and
Radiology (F.W.R.), Universitätsklinikum Erlangen &
Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen,
Germany (F.K.); School of Medicine & Computation, Information and
Technology Klinikum rechts der Isar, Technical University Munich,
München, Germany (D.R.); Department of Computing, Imperial College
London, London, England (D.R.); and Department of Radiology, Tufts Medical
Center, Tufts University School of Medicine, Boston, Mass (D.H.)
| | - John Carrino
| | - Charles E. Kahn
| | - Florian Knoll
| | - Daniel Rueckert
| | - Frank W. Roemer
| | - Daichi Hayashi
| |
Collapse
|
17
|
Wang Q, Zhao W, Xing X, Wang Y, Xin P, Chen Y, Zhu Y, Xu J, Zhao Q, Yuan H, Lang N. Feasibility of AI-assisted compressed sensing protocols in knee MR imaging: a prospective multi-reader study. Eur Radiol 2023; 33:8585-8596. [PMID: 37382615 PMCID: PMC10667384 DOI: 10.1007/s00330-023-09823-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2022] [Revised: 03/02/2023] [Accepted: 03/22/2023] [Indexed: 06/30/2023]
Abstract
OBJECTIVES To evaluate the image quality and diagnostic performance of AI-assisted compressed sensing (ACS) accelerated two-dimensional fast spin-echo MRI compared with standard parallel imaging (PI) in clinical 3.0T rapid knee scans. METHODS This prospective study enrolled 130 consecutive participants between March and September 2022. The MRI scan procedure included one 8.0-min PI protocol and two ACS protocols (3.5 min and 2.0 min). Quantitative image quality assessments were performed by evaluating edge rise distance (ERD) and signal-to-noise ratio (SNR); data distributions were assessed with Shapiro-Wilk tests and compared using the Friedman test with post hoc analyses. Three radiologists independently evaluated structural disorders for each participant. Fleiss κ analysis was used to compare inter-reader and inter-protocol agreements. The diagnostic performance of each protocol was investigated and compared by DeLong's test. The threshold for statistical significance was set at p < 0.05. RESULTS A total of 150 knee MRI examinations constituted the study cohort. For the quantitative assessment of four conventional sequences with ACS protocols, SNR improved significantly (p < 0.001), and ERD was significantly reduced or equivalent to the PI protocol. For the abnormalities evaluated, agreement was high both between readers (κ = 0.75-0.98) and between protocols (κ = 0.73-0.98). For meniscal tears, cruciate ligament tears, and cartilage defects, the diagnostic performance of the ACS protocols was considered equivalent to the PI protocol (DeLong test, p > 0.05). CONCLUSIONS Compared with the conventional PI acquisition, the novel ACS protocol demonstrated superior image quality and was feasible for achieving equivalent detection of structural abnormalities while reducing acquisition time by half. 
CLINICAL RELEVANCE STATEMENT Artificial intelligence-assisted compressed sensing (ACS), which provides excellent image quality with up to a 75% reduction in scanning time, presents significant clinical advantages in improving the efficiency and accessibility of knee MRI for more patients. KEY POINTS • This prospective multi-reader study found no difference in diagnostic performance between parallel imaging and AI-assisted compressed sensing (ACS). • ACS reconstruction reduced scan time and yielded sharper delineation with less noise. • ACS acceleration improved the efficiency of the clinical knee MRI examination.
Collapse
Affiliation(s)
- Qizheng Wang
- Department of Radiology, Peking University Third Hospital, Haidian District, 49 North Garden Road, Beijing, 100191, People's Republic of China
| | - Weili Zhao
- Department of Radiology, Peking University Third Hospital, Haidian District, 49 North Garden Road, Beijing, 100191, People's Republic of China
| | - Xiaoying Xing
- Department of Radiology, Peking University Third Hospital, Haidian District, 49 North Garden Road, Beijing, 100191, People's Republic of China
| | - Ying Wang
- Department of Radiology, Peking University Third Hospital, Haidian District, 49 North Garden Road, Beijing, 100191, People's Republic of China
| | - Peijin Xin
- Department of Radiology, Peking University Third Hospital, Haidian District, 49 North Garden Road, Beijing, 100191, People's Republic of China
| | - Yongye Chen
- Department of Radiology, Peking University Third Hospital, Haidian District, 49 North Garden Road, Beijing, 100191, People's Republic of China
| | - Yupeng Zhu
- Department of Radiology, Peking University Third Hospital, Haidian District, 49 North Garden Road, Beijing, 100191, People's Republic of China
| | - Jiajia Xu
- Department of Radiology, Peking University Third Hospital, Haidian District, 49 North Garden Road, Beijing, 100191, People's Republic of China
| | - Qiang Zhao
- Department of Radiology, Peking University Third Hospital, Haidian District, 49 North Garden Road, Beijing, 100191, People's Republic of China
| | - Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Haidian District, 49 North Garden Road, Beijing, 100191, People's Republic of China
| | - Ning Lang
- Department of Radiology, Peking University Third Hospital, Haidian District, 49 North Garden Road, Beijing, 100191, People's Republic of China.
| |
Collapse
|
18
|
Shah AK, Lavu MS, Hecht CJ, Burkhart RJ, Kamath AF. Understanding the use of artificial intelligence for implant analysis in total joint arthroplasty: a systematic review. ARTHROPLASTY 2023; 5:54. [PMID: 37919812 PMCID: PMC10623774 DOI: 10.1186/s42836-023-00209-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2023] [Accepted: 09/01/2023] [Indexed: 11/04/2023] Open
Abstract
INTRODUCTION In recent years, there has been a significant increase in the development of artificial intelligence (AI) algorithms aimed at reviewing radiographs after total joint arthroplasty (TJA). This disruptive technology is particularly promising in the context of preoperative planning for revision TJA. Yet, the efficacy of AI algorithms regarding TJA implant analysis has not been examined comprehensively. METHODS PubMed, EBSCO, and Google Scholar electronic databases were utilized to identify all studies evaluating AI algorithms related to TJA implant analysis between 1 January 2000 and 27 February 2023 (PROSPERO study protocol registration: CRD42023403497). The mean methodological index for non-randomized studies score was 20.4 ± 0.6. We reported the accuracy, sensitivity, specificity, positive predictive value, and area under the curve (AUC) for the performance of each outcome measure. RESULTS Our initial search yielded 374 articles, and a total of 20 studies with three main use cases were included. Sixteen studies analyzed implant identification, two addressed implant failure, and two addressed implant measurements. Each use case had a median AUC and accuracy above 0.90 and 90%, respectively, indicative of a well-performing AI algorithm. Most studies failed to include explainability methods and conduct external validity testing. CONCLUSION These findings highlight the promising role of AI in recognizing implants in TJA. Preliminary studies have shown strong performance in implant identification, implant failure detection, and accurate measurement of implant dimensions. Future research should follow a standardized guideline to develop and train models and place a strong emphasis on transparency and clarity in reporting results. LEVEL OF EVIDENCE Level III.
Collapse
Affiliation(s)
- Aakash K Shah
- Department of Orthopaedic Surgery, Cleveland Clinic Foundation, Cleveland, OH, 44195, USA
| | - Monish S Lavu
- Department of Orthopaedic Surgery, Cleveland Clinic Foundation, Cleveland, OH, 44195, USA
| | - Christian J Hecht
- Department of Orthopaedic Surgery, Cleveland Clinic Foundation, Cleveland, OH, 44195, USA
| | - Robert J Burkhart
- Department of Orthopaedic Surgery, University Hospitals, Cleveland, OH, 44106, USA
| | - Atul F Kamath
- Department of Orthopaedic Surgery, Cleveland Clinic Foundation, Cleveland, OH, 44195, USA.
- Center for Hip Preservation, Orthopaedic and Rheumatologic Institute, Cleveland Clinic Foundation, 9500 Euclid Avenue, Mail Code A41, Cleveland, OH, 44195, USA.
| |
Collapse
|
19
|
Kijowski R, Fritz J, Deniz CM. Deep learning applications in osteoarthritis imaging. Skeletal Radiol 2023; 52:2225-2238. [PMID: 36759367 PMCID: PMC10409879 DOI: 10.1007/s00256-023-04296-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Revised: 12/22/2022] [Accepted: 01/31/2023] [Indexed: 02/11/2023]
Abstract
Deep learning (DL) is one of the most exciting new areas in medical imaging. This article will provide a review of current applications of DL in osteoarthritis (OA) imaging, including methods used for cartilage lesion detection, OA diagnosis, cartilage segmentation, and OA risk assessment. DL techniques have been shown to have similar diagnostic performance as human readers for detecting and grading cartilage lesions within the knee on MRI. A variety of DL methods have been developed for detecting and grading the severity of knee OA and various features of knee OA on X-rays using standardized classification systems with diagnostic performance similar to human readers. Multiple DL approaches have been described for fully automated segmentation of cartilage and other knee tissues and have achieved higher segmentation accuracy than currently used methods with substantial reductions in segmentation times. Various DL models analyzing baseline X-rays and MRI have been developed for OA risk assessment. These models have shown high diagnostic performance for predicting a wide variety of OA outcomes, including the incidence and progression of radiographic knee OA, the presence and progression of knee pain, and future total knee replacement. The preliminary results of DL applications in OA imaging have been encouraging. However, many DL techniques require further technical refinement to maximize diagnostic performance. Furthermore, the generalizability of DL approaches needs to be further investigated in prospective studies using large image datasets acquired at different institutions with different imaging hardware before they can be implemented in clinical practice and research studies.
Collapse
Affiliation(s)
- Richard Kijowski
- Department of Radiology, New York University Grossman School of Medicine, 660 First Avenue, 3Rd Floor, New York, NY, 10016, USA.
| | - Jan Fritz
- Department of Radiology, New York University Grossman School of Medicine, 660 First Avenue, 3Rd Floor, New York, NY, 10016, USA
| | - Cem M Deniz
- Department of Radiology, New York University Grossman School of Medicine, 660 First Avenue, 3Rd Floor, New York, NY, 10016, USA
| |
Collapse
|
20
|
Vera-Garcia DV, Nugen F, Padash S, Khosravi B, Mickley JP, Erickson BJ, Wyles CC, Taunton MJ. Educational Overview of the Concept and Application of Computer Vision in Arthroplasty. J Arthroplasty 2023; 38:1954-1958. [PMID: 37633507 PMCID: PMC10616773 DOI: 10.1016/j.arth.2023.08.046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 08/10/2023] [Accepted: 08/11/2023] [Indexed: 08/28/2023] Open
Abstract
Image data has grown exponentially as systems have increased their ability to collect and store it. Unfortunately, there are limits to human resources both in time and knowledge to fully interpret and manage that data. Computer Vision (CV) has grown in popularity as a discipline for better understanding visual data. Computer Vision has become a powerful tool for imaging analytics in orthopedic surgery, allowing computers to evaluate large volumes of image data with greater nuance than previously possible. Nevertheless, even with the growing number of uses in medicine, literature on the fundamentals of CV and its implementation is mainly oriented toward computer scientists rather than clinicians, rendering CV unapproachable for most orthopedic surgeons as a tool for clinical practice and research. The purpose of this article is to summarize and review the fundamental concepts of CV application for the orthopedic surgeon and musculoskeletal researcher.
Collapse
Affiliation(s)
- Diana Victoria Vera-Garcia
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN
- Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN
| | - Fred Nugen
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN
- Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN
| | - Sirwa Padash
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN
- Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN
| | - Bardia Khosravi
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN
- Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN
| | - John P. Mickley
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN
| | - Bradley J. Erickson
- Radiology Informatics Lab (RIL), Department of Radiology, Mayo Clinic, Rochester, MN
| | - Cody C. Wyles
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN
| | - Michael J. Taunton
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, MN
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN
| |
Collapse
|
21
|
Bousson V, Attané G, Benoist N, Perronne L, Diallo A, Hadid-Beurrier L, Martin E, Hamzi L, Depil Duval A, Revue E, Vicaut E, Salvat C. Artificial Intelligence for Detecting Acute Fractures in Patients Admitted to an Emergency Department: Real-Life Performance of Three Commercial Algorithms. Acad Radiol 2023; 30:2118-2139. [PMID: 37468377 DOI: 10.1016/j.acra.2023.06.016] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Revised: 06/08/2023] [Accepted: 06/20/2023] [Indexed: 07/21/2023]
Abstract
RATIONALE AND OBJECTIVES Interpreting radiographs in emergency settings is stressful and a burden for radiologists. The main objective was to assess the performance of three commercially available artificial intelligence (AI) algorithms for detecting acute peripheral fractures on radiographs in daily emergency practice. MATERIALS AND METHODS Radiographs were collected from consecutive patients admitted for skeletal trauma at our emergency department over a period of 2 months. Three AI algorithms-SmartUrgence, Rayvolve, and BoneView-were used to analyze 13 body regions. Four musculoskeletal radiologists determined the ground truth from radiographs. The diagnostic performance of the three AI algorithms was calculated at the level of the radiography set. Accuracies, sensitivities, and specificities for each algorithm and two-by-two comparisons between algorithms were obtained. Analyses were performed for the whole population and for subgroups of interest (sex, age, body region). RESULTS A total of 1210 patients were included (mean age 41.3 ± 18.5 years; 742 [61.3%] men), corresponding to 1500 radiography sets. The fracture prevalence among the radiography sets was 23.7% (356/1500). Accuracy was 90.1%, 71.0%, and 88.8% for SmartUrgence, Rayvolve, and BoneView, respectively; sensitivity 90.2%, 92.6%, and 91.3%, with specificity 92.5%, 70.4%, and 90.5%. Accuracy and specificity were significantly higher for SmartUrgence and BoneView than Rayvolve for the whole population (P < .0001) and for subgroups. The three algorithms did not differ in sensitivity (P = .27). For SmartUrgence, subgroups did not significantly differ in accuracy, specificity, or sensitivity. For Rayvolve, accuracy and specificity were significantly higher with age 27-36 than ≥53 years (P = .0029 and P = .0019). Specificity was higher for the subgroup knee than foot (P = .0149). 
For BoneView, accuracy was significantly higher for the subgroups knee than foot (P = .0006) and knee than wrist/hand (P = .0228). Specificity was significantly higher for the subgroups knee than foot (P = .0003) and ankle than foot (P = .0195). CONCLUSION The performance of AI detection of acute peripheral fractures in daily radiological practice in an emergency department was good to high and was related to the AI algorithm, patient age, and body region examined.
Collapse
Affiliation(s)
- Valérie Bousson
- Radiology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France (V.B., G.A., N.B., L.P., L.H.).
| | - Grégoire Attané
- Radiology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France (V.B., G.A., N.B., L.P., L.H.)
| | - Nicolas Benoist
- Radiology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France (V.B., G.A., N.B., L.P., L.H.)
| | - Laetitia Perronne
- Radiology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France (V.B., G.A., N.B., L.P., L.H.)
| | - Abdourahmane Diallo
- Clinical Research Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (A.D., E.V.)
| | - Lama Hadid-Beurrier
- Medical Physics Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (L.H.-B., C.S.)
| | - Emmanuel Martin
- Information Technology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (E.M.)
| | - Lounis Hamzi
- Radiology Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, 2 rue Ambroise Paré, 75010, Paris, France (V.B., G.A., N.B., L.P., L.H.)
| | - Arnaud Depil Duval
- Emergency Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (A.D.D., E.R.); Emergency Department, Saint-Joseph's Hospital, Paris, France (A.D.D.)
| | - Eric Revue
- Emergency Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (A.D.D., E.R.)
| | - Eric Vicaut
- Clinical Research Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (A.D., E.V.)
| | - Cécile Salvat
- Medical Physics Department, Lariboisière's Hospital, AP-HP.Nord-Université de Paris, Paris, France (L.H.-B., C.S.)
| |
Collapse
|
22
|
Di Dier K, Deppe D, Diekhoff T, Herregods N, Jans L. Clash of the titans: Current CT and CT-like imaging modalities in sacroiliitis in spondyloarthritis. Best Pract Res Clin Rheumatol 2023; 37:101876. [PMID: 37953120 DOI: 10.1016/j.berh.2023.101876] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2023] [Revised: 07/07/2023] [Accepted: 10/16/2023] [Indexed: 11/14/2023]
Abstract
Sacroiliitis is characterised by active and structural changes of the joint. While the Assessment of Spondyloarthritis international Society (ASAS) classification criteria stress the importance of bone marrow inflammation, recent reports suggest that osteitis can occur in various diseases, mechanical conditions and healthy individuals. Thus, structural lesions such as joint surface erosion and ankylosis are important factors for differential diagnosis. Various imaging modalities are available to examine these changes. However, computed tomography (CT) is generally considered the reference standard. Nonetheless, recent advances in magnetic resonance imaging (MRI) allow for direct bone imaging and the reconstruction of CT-like images that can provide similar information. This way, the ability of MRI to detect and measure structural lesions is strengthened. The aim of this review is to provide an overview of the pros and cons of CT and CT-like imaging modalities in sacroiliitis.
Collapse
Affiliation(s)
- Kelly Di Dier
- Department of Radiology, Faculty of Medicine, Ghent University Hospital, De Pintelaan 185, 9000, Gent, Belgium.
| | - Dominik Deppe
- Department of Radiology (CCM), Charité - Universitätsmedizin Berlin, Campus Mitte, Humboldt-Universität zu Berlin, Freie Universität Berlin, Charitéplatz 1, 10117, Berlin, Germany.
| | - Torsten Diekhoff
- Department of Radiology (CCM), Charité - Universitätsmedizin Berlin, Campus Mitte, Humboldt-Universität zu Berlin, Freie Universität Berlin, Charitéplatz 1, 10117, Berlin, Germany.
| | - Nele Herregods
- Department of Radiology, Faculty of Medicine, Ghent University Hospital, De Pintelaan 185, 9000, Gent, Belgium.
| | - Lennart Jans
- Department of Radiology, Faculty of Medicine, Ghent University Hospital, De Pintelaan 185, 9000, Gent, Belgium.
| |
Collapse
|
23
|
Pagano S, Müller K, Götz J, Reinhard J, Schindler M, Grifka J, Maderbacher G. The Role and Efficiency of an AI-Powered Software in the Evaluation of Lower Limb Radiographs before and after Total Knee Arthroplasty. J Clin Med 2023; 12:5498. [PMID: 37685563 PMCID: PMC10487842 DOI: 10.3390/jcm12175498] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 08/19/2023] [Accepted: 08/22/2023] [Indexed: 09/10/2023] Open
Abstract
The rapid evolution of artificial intelligence (AI) in medical imaging analysis has significantly impacted musculoskeletal radiology, offering enhanced accuracy and speed in radiograph evaluations. The potential of AI in clinical settings, however, remains underexplored. This research investigates the efficiency of a commercial AI tool in analyzing radiographs of patients who have undergone total knee arthroplasty. The study retrospectively analyzed 200 radiographs from 100 patients, comparing AI software measurements to expert assessments. Assessed parameters included axial alignments (MAD, AMA), femoral and tibial angles (mLPFA, mLDFA, mMPTA, mLDTA), and other key measurements including the JLCA, HKA, and Mikulicz line. The tool demonstrated good to excellent agreement with expert metrics (ICC = 0.78-1.00) and analyzed radiographs twice as fast (p < 0.001), yet it struggled with accuracy for the JLCA (ICC = 0.79, 95% CI = 0.72-0.84), the Mikulicz line (ICC = 0.78, 95% CI = 0.32-0.90), and when patients had a body mass index higher than 30 kg/m² (p < 0.001). It also failed to analyze 45 (22.5%) radiographs, potentially due to image overlay or unique patient characteristics. These findings underscore the AI software's potential in musculoskeletal radiology but also highlight the necessity for further development for effective utilization in diverse clinical scenarios. Subsequent studies should explore the integration of AI tools in routine clinical practice and their impact on patient care.
Collapse
Affiliation(s)
- Stefano Pagano: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum Bad Abbach, 93077 Bad Abbach, Germany
- Karolina Müller: Center for Clinical Studies, University of Regensburg, 93053 Regensburg, Germany
- Julia Götz: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum Bad Abbach, 93077 Bad Abbach, Germany
- Jan Reinhard: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum Bad Abbach, 93077 Bad Abbach, Germany
- Melanie Schindler: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum Bad Abbach, 93077 Bad Abbach, Germany
- Joachim Grifka: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum Bad Abbach, 93077 Bad Abbach, Germany
- Günther Maderbacher: Department of Orthopedic Surgery, University of Regensburg, Asklepios Klinikum Bad Abbach, 93077 Bad Abbach, Germany
24
Debs P, Fayad LM. The promise and limitations of artificial intelligence in musculoskeletal imaging. FRONTIERS IN RADIOLOGY 2023; 3:1242902. [PMID: 37609456 PMCID: PMC10440743 DOI: 10.3389/fradi.2023.1242902] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 07/26/2023] [Indexed: 08/24/2023]
Abstract
With the recent developments in deep learning and the rapid growth of convolutional neural networks, artificial intelligence has shown promise as a tool that can transform several aspects of the musculoskeletal imaging cycle. Its applications can involve both interpretive and non-interpretive tasks such as the ordering of imaging, scheduling, protocoling, image acquisition, report generation, and communication of findings. However, artificial intelligence tools still face a number of challenges that can hinder effective implementation into clinical practice. The purpose of this review is to explore both the successes and limitations of artificial intelligence applications throughout the musculoskeletal imaging cycle and to highlight how these applications can help enhance the service radiologists deliver to their patients, resulting in increased efficiency as well as improved patient and provider satisfaction.
Affiliation(s)
- Patrick Debs: The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States
- Laura M. Fayad: The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States; Department of Orthopaedic Surgery and Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
25
Patton D, Ghosh A, Farkas A, Sotardi S, Francavilla M, Venkatakrishna S, Bose S, Ouyang M, Huang H, Davidson R, Sze R, Nguyen J. Automating Angle Measurements on Foot Radiographs in Young Children: Feasibility and Performance of a Convolutional Neural Network Model. J Digit Imaging 2023; 36:1419-1430. [PMID: 37099224 PMCID: PMC10406755 DOI: 10.1007/s10278-023-00824-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2023] [Revised: 03/24/2023] [Accepted: 03/27/2023] [Indexed: 04/27/2023] Open
Abstract
Measurement of angles on foot radiographs is an important step in the evaluation of malalignment. The objective is to develop a CNN model to measure angles on radiographs, using radiologists' measurements as the reference standard. This IRB-approved retrospective study included 450 radiographs from 216 patients (< 3 years of age). Angles were automatically measured by means of image segmentation followed by angle calculation, according to Simon's approach for measuring pediatric foot angles. A multiclass U-Net model with a ResNet-34 backbone was used for segmentation. Two pediatric radiologists independently measured anteroposterior and lateral talocalcaneal and talo-1st metatarsal angles using the test dataset and recorded the time used for each study. Intraclass correlation coefficients (ICC) were used to compare angle and paired Wilcoxon signed-rank test to compare time between radiologists and the CNN model. There was high spatial overlap between manual and CNN-based automatic segmentations with dice coefficients ranging between 0.81 (lateral 1st metatarsal) and 0.94 (lateral calcaneus). Agreement was higher for angles on the lateral view when compared to the AP view, between radiologists (ICC: 0.93-0.95, 0.85-0.92, respectively) and between radiologists' mean and CNN calculated (ICC: 0.71-0.73, 0.41-0.52, respectively). Automated angle calculation was significantly faster when compared to radiologists' manual measurements (3 ± 2 vs 114 ± 24 s, respectively; P < 0.001). A CNN model can selectively segment immature ossification centers and automatically calculate angles with a high spatial overlap and moderate to substantial agreement when compared to manual methods, and 39 times faster.
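The angle-calculation step that follows segmentation reduces, in the simplest case, to the angle between two bone-axis vectors. A minimal sketch of that final step (an assumption about the general approach, not the authors' pipeline, which derives the axes from U-Net segmentation masks):

```python
import math

def axis_angle_deg(v1, v2):
    """Angle in degrees between two 2-D axis vectors
    (e.g. talar and calcaneal long axes fitted to segmentation masks)."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))
```

For example, perpendicular axes give 90° and a unit diagonal against the horizontal gives 45°.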
Affiliation(s)
- Daniella Patton: Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Adarsh Ghosh: Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Amy Farkas: Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Susan Sotardi: Department of Radiology, Children's Hospital of Philadelphia; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Michael Francavilla: Department of Radiology, Children's Hospital of Philadelphia; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Shyam Venkatakrishna: Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Saurav Bose: Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Minhui Ouyang: Department of Radiology, Children's Hospital of Philadelphia; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Hao Huang: Department of Radiology, Children's Hospital of Philadelphia; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Richard Davidson: Perelman School of Medicine, University of Pennsylvania; Division of Orthopaedics, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Raymond Sze: Department of Radiology, Children's Hospital of Philadelphia; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Jie Nguyen: Department of Radiology, Children's Hospital of Philadelphia; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
26
Salimi M, Parry JA, Shahrokhi R, Mosalamiaghili S. Application of artificial intelligence in trauma orthopedics: Limitation and prospects. World J Clin Cases 2023; 11:4231-4240. [PMID: 37449222 PMCID: PMC10337008 DOI: 10.12998/wjcc.v11.i18.4231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 04/23/2023] [Accepted: 05/08/2023] [Indexed: 06/26/2023] Open
Abstract
The varieties and capabilities of artificial intelligence and machine learning in orthopedic surgery are expanding rapidly. One promising method is neural networks, which emphasize big data and computer-based learning systems to develop a statistical fracture-detecting model. Such a model derives patterns and rules from vast amounts of data and uses them to analyze the probabilities of different outcomes in new sets of similar data. The sensitivity and specificity of machine learning in detecting fractures vary across previous studies. AI may be most promising in the diagnosis of less obvious fractures that are more commonly missed. Future studies are necessary to develop more accurate and effective detection models that can be used clinically.
Affiliation(s)
- Maryam Salimi: Department of Orthopaedic Surgery, Denver Health Medical Center, Denver, CO 80215, United States
- Joshua A Parry: Department of Orthopaedic Surgery, Denver Health Medical Center, Denver, CO 80215, United States
- Raha Shahrokhi: Student Research Committee, Shiraz University of Medical Sciences, Shiraz 7138433608, Iran
27
Lin DJ, Schwier M, Geiger B, Raithel E, von Busch H, Fritz J, Kline M, Brooks M, Dunham K, Shukla M, Alaia EF, Samim M, Joshi V, Walter WR, Ellermann JM, Ilaslan H, Rubin D, Winalski CS, Recht MP. Deep Learning Diagnosis and Classification of Rotator Cuff Tears on Shoulder MRI. Invest Radiol 2023; 58:405-412. [PMID: 36728041 DOI: 10.1097/rli.0000000000000951] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
BACKGROUND Detection of rotator cuff tears, a common cause of shoulder disability, can be time-consuming and subject to reader variability. Deep learning (DL) has the potential to increase radiologist accuracy and consistency. PURPOSE The aim of this study was to develop a prototype DL model for detection and classification of rotator cuff tears on shoulder magnetic resonance imaging into no tear, partial-thickness tear, or full-thickness tear. MATERIALS AND METHODS This Health Insurance Portability and Accountability Act-compliant, institutional review board-approved study included a total of 11,925 noncontrast shoulder magnetic resonance imaging scans from 2 institutions, with 11,405 for development and 520 dedicated for final testing. A DL ensemble algorithm was developed that used 4 series as input from each examination: fluid-sensitive sequences in 3 planes and a sagittal oblique T1-weighted sequence. Radiology reports served as ground truth for training with categories of no tear, partial tear, or full-thickness tear. A multireader study was conducted for the test set ground truth, which was determined by the majority vote of 3 readers per case. The ensemble comprised 4 parallel 3D ResNet50 convolutional neural network architectures trained via transfer learning and then adapted to the targeted domain. The final tear-type prediction was determined as the class with the highest probability, after averaging the class probabilities of the 4 individual models. RESULTS The AUC overall for supraspinatus, infraspinatus, and subscapularis tendon tears was 0.93, 0.89, and 0.90, respectively. The model performed best for full-thickness supraspinatus, infraspinatus, and subscapularis tears with AUCs of 0.98, 0.99, and 0.95, respectively. 
Multisequence input demonstrated higher AUCs than single-sequence input for infraspinatus and subscapularis tendon tears, whereas coronal oblique fluid-sensitive and multisequence input showed similar AUCs for supraspinatus tendon tears. Model accuracy for tear types and overall accuracy were similar to that of the clinical readers. CONCLUSIONS Deep learning diagnosis of rotator cuff tears is feasible with excellent diagnostic performance, particularly for full-thickness tears, with model accuracy similar to subspecialty-trained musculoskeletal radiologists.
Affiliation(s)
- Dana J Lin: Department of Radiology, NYU Grossman School of Medicine, New York, NY
- Jan Fritz: Department of Radiology, NYU Grossman School of Medicine, New York, NY
- Mitchell Kline: Department of Radiology, NYU Grossman School of Medicine, New York, NY
- Michael Brooks: Department of Radiology, NYU Grossman School of Medicine, New York, NY
- Kevin Dunham: Department of Radiology, NYU Grossman School of Medicine, New York, NY
- Mehool Shukla: Department of Radiology, NYU Grossman School of Medicine, New York, NY
- Erin F Alaia: Department of Radiology, NYU Grossman School of Medicine, New York, NY
- Mohammad Samim: Department of Radiology, NYU Grossman School of Medicine, New York, NY
- Vivek Joshi: Department of Radiology, NYU Grossman School of Medicine, New York, NY
- William R Walter: Department of Radiology, NYU Grossman School of Medicine, New York, NY
- Jutta M Ellermann: Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Michael P Recht: Department of Radiology, NYU Grossman School of Medicine, New York, NY
28
Soydan Z, Saglam Y, Key S, Kati YA, Taskiran M, Kiymet S, Salturk T, Aydin AS, Bilgili F, Sen C. An AI based classifier model for lateral pillar classification of Legg-Calve-Perthes. Sci Rep 2023; 13:6870. [PMID: 37106026 PMCID: PMC10140055 DOI: 10.1038/s41598-023-34176-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Accepted: 04/25/2023] [Indexed: 04/29/2023] Open
Abstract
We intended to compare doctors with a convolutional neural network (CNN) that we had trained using our own unique method for the Lateral Pillar Classification (LPC) of Legg-Calve-Perthes Disease (LCPD). Thousands of training examples are frequently required for artificial intelligence (AI) applications in medicine. Since we did not have enough real patient radiographs to train a CNN, we devised a novel method to obtain training data: we trained the CNN model on data we created by modifying normal hip radiographs. No real patient radiographs were used during the training phase. We tested the CNN model on 81 hips with LCPD. First, we assessed the interobserver reliability of the whole system and then the reliability of the CNN alone. Second, a consensus list was used to compare the results of 11 doctors and the CNN model. Percentage agreement and interobserver analysis revealed that the CNN had good reliability (ICC = 0.868). The CNN achieved 76.54% classification performance and outperformed 9 of the 11 doctors. The CNN trained with this method can thus already provide better results than most of the doctors tested. As training data evolve and improve, we anticipate that AI will perform significantly better than physicians.
Affiliation(s)
- Zafer Soydan: Orthopedics and Traumatology, Bhtclinic İstanbul Tema Hastanesi, Nisantası University, Atakent Mh 4. Cadde No 36 PC, 34307, Kucukcekmece, Istanbul, Turkey
- Yavuz Saglam: Orthopedics and Traumatology, Istanbul University Istanbul Faculty of Medicine, Istanbul, Turkey
- Sefa Key: Orthopedics and Traumatology, Bingol State Hospital, Bingol Merkez, Turkey
- Yusuf Alper Kati: Orthopedics and Traumatology, Antalya Egitim ve Arastirma Hastanesi, Antalya, Turkey
- Murat Taskiran: Department of Electronics and Communication Engineering, Yildiz Technical University, Istanbul, Turkey
- Seyfullah Kiymet: Department of Electronics and Communication Engineering, Yildiz Technical University, Istanbul, Turkey
- Tuba Salturk: Department of Informatics, Yildiz Technical University, Istanbul, Turkey
- Ahmet Serhat Aydin: Orthopedics and Traumatology, Istanbul University Istanbul Faculty of Medicine, Istanbul, Turkey
- Fuat Bilgili: Orthopedics and Traumatology, Istanbul University Istanbul Faculty of Medicine, Istanbul, Turkey
- Cengiz Sen: Orthopedics and Traumatology, Istanbul University Istanbul Faculty of Medicine, Istanbul, Turkey
29
Chen CC, Huang JF, Lin WC, Cheng CT, Chen SC, Fu CY, Lee MS, Liao CH, Chung CY. The Feasibility and Performance of Total Hip Replacement Prediction Deep Learning Algorithm with Real World Data. Bioengineering (Basel) 2023; 10:458. [PMID: 37106645 PMCID: PMC10136253 DOI: 10.3390/bioengineering10040458] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Revised: 03/15/2023] [Accepted: 04/04/2023] [Indexed: 04/29/2023] Open
Abstract
(1) Background: Hip degenerative disorder is a common geriatric disease and one of the main causes leading to total hip replacement (THR). The surgical timing of THR is crucial for post-operative recovery. Deep learning (DL) algorithms can be used to detect anomalies in medical images and predict the need for THR. Real-world data (RWD) have been used to validate artificial intelligence and DL algorithms in medicine, but no previous study has demonstrated their use for THR prediction. (2) Methods: We designed a sequential two-stage hip replacement prediction deep learning algorithm to identify, from plain pelvic radiographs (PXR), hip joints likely to require THR within three months. We also collected RWD to validate the performance of this algorithm. (3) Results: The RWD comprised 3766 PXRs from 2018 to 2019. The overall accuracy of the algorithm was 0.9633; sensitivity was 0.9450; specificity was 1.000; and precision was 1.000. The negative predictive value was 0.9009, the false negative rate was 0.0550, and the F1 score was 0.9717. The area under the curve was 0.972, with a 95% confidence interval from 0.953 to 0.987. (4) Conclusions: This DL algorithm provides an accurate and reliable method for detecting hip degeneration and predicting the need for further THR. RWD offered an alternative means of supporting the algorithm and validated its function, saving time and cost.
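The accuracy, sensitivity, specificity, precision, NPV, and F1 figures reported here all derive from a single binary confusion matrix. A generic sketch of those definitions (illustrative only, not the authors' code; counts are hypothetical):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # recall / true-positive rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)            # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "npv": npv, "f1": f1}
```

Note the paper's false negative rate is simply 1 − sensitivity, so the reported 0.9450 and 0.0550 are two views of the same quantity.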
Affiliation(s)
- Chih-Chi Chen: Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Jen-Fu Huang: Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Wei-Cheng Lin: Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan; Department of Electrical Engineering, Chang Gung University, Taoyuan 33302, Taiwan
- Chi-Tung Cheng: Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Shann-Ching Chen: Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chih-Yuan Fu: Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Mel S. Lee: Department of Orthopaedic Surgery, Pao-Chien Hospital, Pingtung 90078, Taiwan
- Chien-Hung Liao: Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chia-Ying Chung: Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
30
Mirmojarabian SA, Kajabi AW, Ketola JHJ, Nykänen O, Liimatainen T, Nieminen MT, Nissi MJ, Casula V. Machine Learning Prediction of Collagen Fiber Orientation and Proteoglycan Content From Multiparametric Quantitative MRI in Articular Cartilage. J Magn Reson Imaging 2023; 57:1056-1068. [PMID: 35861162 DOI: 10.1002/jmri.28353] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Revised: 06/30/2022] [Accepted: 07/01/2022] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND Machine learning models trained with multiparametric quantitative MRI (qMRI) have the potential to provide valuable information about the structural composition of articular cartilage. PURPOSE To study the performance and feasibility of machine learning models combined with qMRI for noninvasive assessment of collagen fiber orientation and proteoglycan content. STUDY TYPE Retrospective, animal model. ANIMAL MODEL An open-source single-slice MRI dataset obtained from 20 samples of 10 Shetland ponies (seven with surgically induced cartilage lesions followed by treatment and three healthy controls) yielded 1600 data points, with 10% for testing and 90% for training and validation. FIELD STRENGTH/SEQUENCE A 9.4 T MRI scanner; qMRI sequences: T1, T2, adiabatic T1ρ and T2ρ, continuous-wave T1ρ, and relaxation along a fictitious field (TRAFF) maps. ASSESSMENT Five machine learning regression models were developed: random forest (RF), support vector regression (SVR), gradient boosting (GB), multilayer perceptron (MLP), and Gaussian process regression (GPR). Nested cross-validation was used for performance evaluation. For reference, proteoglycan content and collagen fiber orientation were determined by quantitative histology, from digital densitometry (DD) and polarized light microscopy (PLM), respectively. STATISTICAL TESTS Normality was tested using the Shapiro-Wilk test, and association between predicted and measured values was evaluated using Spearman's Rho test. A P-value of 0.05 was considered the limit of statistical significance. RESULTS Four of the five models (RF, GB, MLP, and GPR) yielded high accuracy (R² = 0.68-0.75 for PLM and 0.62-0.66 for DD) and strong, significant correlations between the reference measurements and the predicted cartilage matrix properties (Spearman's Rho = 0.72-0.88 for PLM and 0.61-0.83 for DD). The GPR algorithm had the highest accuracy (R² = 0.75 and 0.66) and lowest prediction error (root mean square error [RMSE] = 1.34 and 2.55) for PLM and DD, respectively. DATA CONCLUSION Multiparametric qMRI in combination with regression models can determine cartilage compositional and structural features, with higher accuracy for collagen fiber orientation than for proteoglycan content. EVIDENCE LEVEL 2. TECHNICAL EFFICACY Stage 2.
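The R² and RMSE criteria quoted for the regression models are standard definitions; a minimal sketch (illustrative, not the study's pipeline):

```python
def r2_and_rmse(y_true, y_pred):
    """Coefficient of determination (R²) and root mean square error."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot, (ss_res / n) ** 0.5
```

A perfect predictor scores R² = 1.0 with RMSE = 0, while predicting the mean everywhere scores R² = 0.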
Affiliation(s)
| | - Abdul Wahed Kajabi
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, US
| | - Juuso H J Ketola
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
| | - Olli Nykänen
- Department of Applied Physics, University of Eastern Finland, Kuopio, Finland
| | - Timo Liimatainen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland.,Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
| | - Miika T Nieminen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland.,Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland.,Medical Research Center, University of Oulu and Oulu University Hospital, Oulu, Finland
| | - Mikko J Nissi
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland.,Department of Applied Physics, University of Eastern Finland, Kuopio, Finland
| | - Victor Casula
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland.,Medical Research Center, University of Oulu and Oulu University Hospital, Oulu, Finland
| |
31
Anderson PG, Baum GL, Keathley N, Sicular S, Venkatesh S, Sharma A, Daluiski A, Potter H, Hotchkiss R, Lindsey RV, Jones RM. Deep Learning Assistance Closes the Accuracy Gap in Fracture Detection Across Clinician Types. Clin Orthop Relat Res 2023; 481:580-588. [PMID: 36083847 PMCID: PMC9928835 DOI: 10.1097/corr.0000000000002385] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Accepted: 08/05/2022] [Indexed: 01/31/2023]
Abstract
BACKGROUND Missed fractures are the most common diagnostic errors in musculoskeletal imaging and can result in treatment delays and preventable morbidity. Deep learning, a subfield of artificial intelligence, can be used to accurately detect fractures by training algorithms to emulate the judgments of expert clinicians. Deep learning systems that detect fractures are often limited to specific anatomic regions and require regulatory approval to be used in practice. Once these hurdles are overcome, deep learning systems have the potential to improve clinician diagnostic accuracy and patient care. QUESTIONS/PURPOSES This study aimed to evaluate whether a Food and Drug Administration-cleared deep learning system that identifies fractures in adult musculoskeletal radiographs would improve diagnostic accuracy for fracture detection across different types of clinicians. Specifically, this study asked: (1) What are the trends in musculoskeletal radiograph interpretation by different clinician types in the publicly available Medicare claims data? (2) Does the deep learning system improve clinician accuracy in diagnosing fractures on radiographs and, if so, is there a greater benefit for clinicians with limited training in musculoskeletal imaging? METHODS We used the publicly available Medicare Part B Physician/Supplier Procedure Summary data provided by the Centers for Medicare & Medicaid Services to determine the trends in musculoskeletal radiograph interpretation by clinician type. In addition, we conducted a multiple-reader, multiple-case study to assess whether clinician accuracy in diagnosing fractures on radiographs was superior when aided by the deep learning system compared with when unaided. 
Twenty-four clinicians (radiologists, orthopaedic surgeons, physician assistants, primary care physicians, and emergency medicine physicians) with a median (range) of 16 years (2 to 37) of experience postresidency each assessed 175 unique musculoskeletal radiographic cases under aided and unaided conditions (4200 total case-physician pairs per condition). These cases comprised radiographs from 12 different anatomic regions (ankle, clavicle, elbow, femur, forearm, hip, humerus, knee, pelvis, shoulder, tibia and fibula, and wrist) and were randomly selected from 12 hospitals and healthcare centers. The gold standard for fracture diagnosis was the majority opinion of three US board-certified orthopaedic surgeons or radiologists who independently interpreted the case. The clinicians' diagnostic accuracy was determined by the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, sensitivity, and specificity. Secondary analyses evaluated the fracture miss rate (1-sensitivity) by clinicians with and without extensive training in musculoskeletal imaging. RESULTS Medicare claims data revealed that physician assistants showed the greatest increase in interpretation of musculoskeletal radiographs within the analyzed time period (2012 to 2018), although clinicians with extensive training in imaging (radiologists and orthopaedic surgeons) still interpreted the majority of the musculoskeletal radiographs. Clinicians aided by the deep learning system had higher accuracy diagnosing fractures in radiographs compared with when unaided (unaided AUC: 0.90 [95% CI 0.89 to 0.92]; aided AUC: 0.94 [95% CI 0.93 to 0.95]; difference in least square mean per the Dorfman, Berbaum, Metz model AUC: 0.04 [95% CI 0.01 to 0.07]; p < 0.01).
Clinician sensitivity increased when aided compared with when unaided (aided: 90% [95% CI 88% to 92%]; unaided: 82% [95% CI 79% to 84%]), and specificity increased when aided compared with when unaided (aided: 92% [95% CI 91% to 93%]; unaided: 89% [95% CI 88% to 90%]). Clinicians with limited training in musculoskeletal imaging missed a higher percentage of fractures when unaided compared with radiologists (miss rate for clinicians with limited imaging training: 20% [95% CI 17% to 24%]; miss rate for radiologists: 14% [95% CI 9% to 19%]). However, when assisted by the deep learning system, clinicians with limited training in musculoskeletal imaging reduced their fracture miss rate, resulting in a similar miss rate to radiologists (miss rate for clinicians with limited imaging training: 9% [95% CI 7% to 12%]; miss rate for radiologists: 10% [95% CI 6% to 15%]). CONCLUSION Clinicians were more accurate at diagnosing fractures when aided by the deep learning system, particularly those clinicians with limited training in musculoskeletal image interpretation. Reducing the number of missed fractures may allow for improved patient care and increased patient mobility. LEVEL OF EVIDENCE Level III, diagnostic study.
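The AUC reported in this study equals the probability that a randomly chosen fracture case receives a higher score than a randomly chosen non-fracture case (the Mann-Whitney formulation). An illustrative sketch of that empirical estimate (not the study's statistical model, which used the Dorfman-Berbaum-Metz method):

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC: fraction of (positive, negative) score pairs
    ranked correctly, counting ties as half a win."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))
```

Perfect separation of the two score distributions gives 1.0; indistinguishable scores give 0.5, the chance level.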
Affiliation(s)
| | | | | | - Serge Sicular
- Imagen Technologies, New York, NY, USA
- The Mount Sinai Hospital, New York, NY, USA
| | | | | | | | | | | | | | | |
32
Past, present, and future in sports imaging: how to drive in a three-lane freeway. Eur Radiol 2023; 33:1589-1592. [PMID: 36282307 DOI: 10.1007/s00330-022-09193-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2022] [Revised: 09/09/2022] [Accepted: 09/15/2022] [Indexed: 11/04/2022]
Abstract
KEY POINTS
• Morphological evaluation of SRIs remains the clinical standard in daily practice.
• New functional imaging modalities show potential to add valuable pathophysiological information on SRIs in specific clinical scenarios.
• In the era of personalized medicine, AI algorithms may help athletes and all professionals involved in their care to improve the evaluation of SRIs through a definitive quantitative metric approach.
33
Chang CY, Huber FA, Yeh KJ, Buckless C, Torriani M. Original research: utilization of a convolutional neural network for automated detection of lytic spinal lesions on body CTs. Skeletal Radiol 2023; 52:1377-1384. [PMID: 36651936 DOI: 10.1007/s00256-023-04283-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 01/11/2023] [Accepted: 01/11/2023] [Indexed: 01/19/2023]
Abstract
OBJECTIVE To develop, train, and test a convolutional neural network (CNN) for detection of spinal lytic lesions in chest, abdomen, and pelvis CT scans. MATERIALS AND METHODS Cases of malignant spinal lytic lesions in CT scans were identified. Images were manually segmented into the following classes: (i) lesion, (ii) normal bone, (iii) background. If more than one lesion was present on a single slice, all lesions were segmented. Images were stored as 128×128-pixel grayscale, with 10% segregated for testing. The training pipeline included histogram equalization and data augmentation. A model based on the U-Net architecture was trained in Keras/TensorFlow using an 80/20 training/validation split. Additional testing of the model was performed on 1106 images of healthy controls. Global sensitivity measured detection of any lesion on a single image. Local sensitivity and positive predictive value (PPV) measured detection of all lesions on an image. Global specificity measured the false positive rate in non-pathologic bone. RESULTS Six hundred images were obtained for model creation. The training set consisted of 540 images, augmented to 20,000. The test set consisted of 60 images. Model training was performed in triplicate. Mean Dice scores were 0.61 for lytic lesion, 0.95 for normal bone, and 0.99 for background. Mean global sensitivity was 90.6%, local sensitivity was 74.0%, local PPV was 78.3%, and global specificity was 63.3%. At least one false positive lesion was noted in 28.8-44.9% of control images. CONCLUSION A task-trained CNN showed good sensitivity in detecting spinal lytic lesions in axial CT images.
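The Dice scores reported here compare a predicted segmentation mask against its ground truth; a minimal sketch on flattened binary masks (illustrative only, not the authors' evaluation code):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient on flattened binary masks (lists of 0/1):
    twice the overlap divided by the total foreground in both masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))
```

Identical masks score 1.0; masks that each mark two pixels but overlap on only one score 0.5.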
Affiliation(s)
- Connie Y Chang: Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, YAW 6, Boston, MA 02114, USA
- Florian A Huber: Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, YAW 6, Boston, MA 02114, USA; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Faculty of Medicine, University of Zurich, Zurich, Switzerland
- Kaitlyn J Yeh: Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, YAW 6, Boston, MA 02114, USA
- Colleen Buckless: Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, YAW 6, Boston, MA 02114, USA
- Martin Torriani: Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, YAW 6, Boston, MA 02114, USA
Collapse
34
Kulseng CPS, Nainamalai V, Grøvik E, Geitung JT, Årøen A, Gjesdal KI. Automatic segmentation of human knee anatomy by a convolutional neural network applying a 3D MRI protocol. BMC Musculoskelet Disord 2023; 24:41. [PMID: 36650496 PMCID: PMC9847207 DOI: 10.1186/s12891-023-06153-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/15/2022] [Accepted: 01/10/2023] [Indexed: 01/19/2023] Open
Abstract
BACKGROUND To study deep learning segmentation of knee anatomy with 13 anatomical classes by using a magnetic resonance (MR) protocol of four three-dimensional (3D) pulse sequences, and to evaluate possible clinical usefulness. METHODS The sample comprised 40 healthy right knee volumes from adult participants. In addition, a single recently injured left knee with a known prior ACL reconstruction was included as a test subject. The MR protocol consisted of the following 3D pulse sequences: T1 TSE, PD TSE, PD FS TSE, and Angio GE. The DenseVNet neural network was used for these experiments. Five input combinations of sequences, (i) T1, (ii) T1 and FS, (iii) PD and FS, (iv) T1, PD, and FS, and (v) T1, PD, FS, and Angio, were trained with the deep learning algorithm. The Dice similarity coefficient (DSC), Jaccard index, and Hausdorff distance were used to compare the performance of the networks. RESULTS Combining all four sequences performed significantly better than the alternatives. The following DSCs (± standard deviation) were obtained for the test dataset: bone medulla 0.997 (±0.002), PCL 0.973 (±0.015), ACL 0.964 (±0.022), muscle 0.998 (±0.001), cartilage 0.966 (±0.018), bone cortex 0.980 (±0.010), arteries 0.943 (±0.038), collateral ligaments 0.919 (±0.069), tendons 0.982 (±0.005), meniscus 0.955 (±0.032), adipose tissue 0.998 (±0.001), veins 0.980 (±0.010), and nerves 0.921 (±0.071). The deep learning network correctly identified the anterior cruciate ligament (ACL) tear of the left knee, suggesting its potential as a future aid in orthopaedics. CONCLUSIONS The convolutional neural network proved highly capable of correctly labeling all anatomical structures of the knee joint when applied to 3D MR sequences. This deep learning model performs automated segmentation that can produce 3D models and reveal pathology, both useful for preoperative evaluation.
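The five input combinations amount to stacking co-registered pulse sequences as input channels, and the reported Jaccard index relates to Dice by an algebraic identity. A sketch under those assumptions (random arrays stand in for registered MR volumes; this is not the study's code):

```python
import numpy as np

# Hypothetical co-registered 3D volumes (depth, height, width), one per pulse sequence.
shape = (8, 16, 16)
rng = np.random.default_rng(0)
t1, pd_, fs, angio = (rng.random(shape) for _ in range(4))

# Channel-stacked network inputs, mirroring the paper's combinations (i)-(v).
combos = {
    "T1":             np.stack([t1], axis=-1),
    "T1+FS":          np.stack([t1, fs], axis=-1),
    "PD+FS":          np.stack([pd_, fs], axis=-1),
    "T1+PD+FS":       np.stack([t1, pd_, fs], axis=-1),
    "T1+PD+FS+Angio": np.stack([t1, pd_, fs, angio], axis=-1),
}

def jaccard_from_dice(dsc: float) -> float:
    """Jaccard index J = D / (2 - D), an exact identity with the Dice coefficient."""
    return dsc / (2.0 - dsc)
```

For example, the reported ACL Dice of 0.964 corresponds to a Jaccard index of about 0.93.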
Affiliation(s)
- Varatharajan Nainamalai: Norwegian University of Science and Technology, Larsgaardvegen 2, 6025 Ålesund, Norway
- Endre Grøvik: Norwegian University of Science and Technology, Høgskoleringen 5, 7491 Trondheim, Norway; Møre og Romsdal Hospital Trust, Postboks 1600, 6025 Ålesund, Norway
- Jonn-Terje Geitung: Sunnmøre MR-klinikk, Langelandsvegen 15, 6010 Ålesund, Norway; Faculty of Medicine, University of Oslo, Klaus Torgårds vei 3, 0372 Oslo, Norway; Department of Radiology, Akershus University Hospital, Postboks 1000, 1478 Lørenskog, Norway
- Asbjørn Årøen: Department of Orthopedic Surgery, Institute of Clinical Medicine, Akershus University Hospital, Problemveien 7, 0315 Oslo, Norway; Oslo Sports Trauma Research Center, Norwegian School of Sport Sciences, Postboks 4014 Ullevål Stadion, 0806 Oslo, Norway
- Kjell-Inge Gjesdal: Sunnmøre MR-klinikk, Langelandsvegen 15, 6010 Ålesund, Norway; Norwegian University of Science and Technology, Larsgaardvegen 2, 6025 Ålesund, Norway; Department of Radiology, Akershus University Hospital, Postboks 1000, 1478 Lørenskog, Norway
35
Rapid lumbar MRI protocol using 3D imaging and deep learning reconstruction. Skeletal Radiol 2023; 52:1331-1338. [PMID: 36602576 DOI: 10.1007/s00256-022-04268-2] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 12/11/2022] [Accepted: 12/12/2022] [Indexed: 01/06/2023]
Abstract
BACKGROUND AND PURPOSE Three-dimensional (3D) imaging of the spine, augmented with AI-enabled image enhancement and denoising, has the potential to reduce imaging times without compromising image quality or diagnostic performance. This work evaluates the time savings afforded by a novel, rapid lumbar spine MRI protocol, as well as the image quality and diagnostic differences stemming from the use of an AI-enhanced 3D T2 sequence combined with a single Dixon acquisition. MATERIALS AND METHODS Thirty-five subjects underwent MRI using standard 2D lumbar imaging in addition to a "rapid protocol" consisting of 3D imaging, enhanced and denoised using a prototype DL reconstruction algorithm, plus a two-point Dixon sequence. Images were graded by subspecialized radiologists and imaging times were recorded. Comparison was made between 2D sagittal T1 and Dixon fat images for neural foraminal stenosis, intraosseous lesions, and fracture detection. RESULTS The rapid protocol (3D AI-enhanced imaging combined with a sagittal 2D Dixon sequence) demonstrated a 54% reduction in total acquisition time compared with the 2D standard-of-care protocol. The rapid protocol also demonstrated strong agreement with the standard-of-care protocol with respect to osseous lesions (κ = 0.88), fracture detection (κ = 0.96), and neural foraminal stenosis (ICC > 0.9 at all levels). CONCLUSION 3D imaging of the lumbar spine with AI-enhanced DL reconstruction and Dixon imaging demonstrated a significant reduction in imaging time with similar performance on common diagnostic metrics. Although previously limited by long postprocessing times, this technique has the potential to enhance patient throughput in busy radiology practices while providing similar or improved image quality.
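The κ values above quantify inter-protocol agreement with Cohen's kappa; a minimal example with scikit-learn (the per-subject gradings below are hypothetical, not the study's data):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-subject findings graded on the standard vs. rapid protocol
# (1 = finding present, 0 = absent); the two protocols disagree on one subject.
standard = [0, 1, 0, 0, 1, 1, 0, 0, 1, 0]
rapid    = [0, 1, 0, 0, 1, 1, 0, 1, 1, 0]

# Kappa corrects the observed agreement (9/10 here) for chance agreement.
kappa = cohen_kappa_score(standard, rapid)
```

With these toy ratings, observed agreement is 0.9 and chance agreement 0.5, giving κ = 0.8.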
36
Bousson V, Benoist N, Guetat P, Attané G, Salvat C, Perronne L. Application of artificial intelligence to imaging interpretations in the musculoskeletal area: Where are we? Where are we going? Joint Bone Spine 2023; 90:105493. [PMID: 36423783 DOI: 10.1016/j.jbspin.2022.105493] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 10/30/2022] [Accepted: 11/02/2022] [Indexed: 11/23/2022]
Abstract
The interest of researchers, clinicians, and radiologists in artificial intelligence (AI) continues to grow. Deep learning is a subset of machine learning in which the computer algorithm itself determines the optimal imaging features to answer a clinical question. Convolutional neural networks are the most common architecture for performing deep learning on medical images. Musculoskeletal applications of deep learning include detection of abnormalities on X-rays or cross-sectional images (CT, MRI), for example fractures, meniscal tears, anterior cruciate ligament tears, degenerative lesions of the spine, and bone metastases; classification, e.g., of dural sac stenosis or intervertebral disc degeneration; assessment of skeletal age; and segmentation, for example of cartilage. Software developments are already impacting the daily practice of orthopedic imaging by automatically detecting fractures on radiographs. Deep learning also makes it possible to improve image acquisition protocols, improve the quality of low-dose CT images, reduce acquisition times in MRI, and improve MR image resolution. Deep learning offers an automated way to offload time-consuming manual processes and improve practitioner performance. This article reviews the current state of AI in musculoskeletal imaging.
Affiliation(s)
- Valérie Bousson: Service de radiologie ostéoarticulaire, hôpital Lariboisière, AP-HP Nord-université Paris Cité, 75010 Paris, France; Laboratoire B3OA, CNRS UMR 7052, Paris, France
- Nicolas Benoist: Service de radiologie ostéoarticulaire, hôpital Lariboisière, AP-HP Nord-université Paris Cité, 75010 Paris, France; Laboratoire B3OA, CNRS UMR 7052, Paris, France
- Pierre Guetat: Service de radiologie ostéoarticulaire, hôpital Lariboisière, AP-HP Nord-université Paris Cité, 75010 Paris, France; Laboratoire B3OA, CNRS UMR 7052, Paris, France
- Grégoire Attané: Service de radiologie ostéoarticulaire, hôpital Lariboisière, AP-HP Nord-université Paris Cité, 75010 Paris, France; Laboratoire B3OA, CNRS UMR 7052, Paris, France
- Cécile Salvat: Department of Medical Physics, hôpital Lariboisière, AP-HP Nord-université Paris Cité, Paris, France
- Laetitia Perronne: Service de radiologie ostéoarticulaire, hôpital Lariboisière, AP-HP Nord-université Paris Cité, 75010 Paris, France; Laboratoire B3OA, CNRS UMR 7052, Paris, France
37
Shim H, Lee J, Choi S, Kim J, Jeong J, Cho C, Kim H, Kim JI, Kim J, Eom K. Deep learning-based diagnosis of stifle joint diseases in dogs. Vet Radiol Ultrasound 2023; 64:113-122. [PMID: 36444910 DOI: 10.1111/vru.13181] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2021] [Revised: 08/24/2022] [Accepted: 08/24/2022] [Indexed: 12/03/2022] Open
Abstract
In this retrospective, analytical study, we developed a deep learning-based diagnostic model applicable to canine stifle joint diseases and compared its accuracy with that achieved by veterinarians to verify its potential as a reliable diagnostic method. A total of 2382 radiographs of the canine stifle joint from cooperating animal hospitals were included in the dataset. Stifle joint regions were extracted from the original images using the Faster R-CNN (region-based convolutional neural network) model, and the object detection accuracy was evaluated. Four radiographic findings (patellar deviation, drawer sign, osteophyte formation, and joint effusion) were assessed in the stifle joint and used to train a residual network (ResNet) classification model. Implant and growth plate subgroups were analyzed to compare classification accuracy against the total dataset. All deep learning-based classification models achieved target accuracies exceeding 80%, comparable to or slightly below those achieved by veterinarians. For the drawer sign, however, further research is necessary to improve the model's low sensitivity. When the implant group was excluded, classification accuracy significantly improved, indicating that implants acted as a distractor. These results suggest that deep learning-based diagnosis can become a useful diagnostic tool in veterinary medicine.
Affiliation(s)
- Hyesoo Shim: Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Gwangjin-gu, Seoul, Republic of Korea
- Jongmo Lee: Department of Computer Science and Engineering, Konkuk University, Gwangjin-gu, Seoul, Republic of Korea
- Seunghoon Choi: Department of Computer Science and Engineering, Konkuk University, Gwangjin-gu, Seoul, Republic of Korea
- Jayon Kim: Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Gwangjin-gu, Seoul, Republic of Korea
- Jeongyun Jeong: Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Gwangjin-gu, Seoul, Republic of Korea
- Changhyun Cho: Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Gwangjin-gu, Seoul, Republic of Korea
- Hyungseok Kim: Department of Computer Science and Engineering, Konkuk University, Gwangjin-gu, Seoul, Republic of Korea
- Jee-In Kim: Department of Computer Science and Engineering, Konkuk University, Gwangjin-gu, Seoul, Republic of Korea
- Jaehwan Kim: Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Gwangjin-gu, Seoul, Republic of Korea
- Kidong Eom: Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Gwangjin-gu, Seoul, Republic of Korea
38
Performance of a deep convolutional neural network for MRI-based vertebral body measurements and insufficiency fracture detection. Eur Radiol 2022; 33:3188-3199. [PMID: 36576545 PMCID: PMC10121505 DOI: 10.1007/s00330-022-09354-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2022] [Revised: 09/23/2022] [Accepted: 11/29/2022] [Indexed: 12/29/2022]
Abstract
OBJECTIVES The aim is to validate the performance of a deep convolutional neural network (DCNN) for vertebral body measurements and insufficiency fracture detection on lumbar spine MRI. METHODS This retrospective analysis included 1000 vertebral bodies in 200 patients (age 75.2 ± 9.8 years) who underwent lumbar spine MRI at multiple institutions. 160/200 patients had at least one vertebral body insufficiency fracture; 40/200 had no fracture. The performance of the DCNN and that of two fellowship-trained musculoskeletal radiologists in vertebral body measurements (anterior/posterior height, extent of endplate concavity, vertebral angle) and evaluation for insufficiency fractures were compared. Statistics included (a) interobserver reliability metrics using the intraclass correlation coefficient (ICC), kappa statistics, and Bland-Altman analysis, and (b) diagnostic performance metrics (sensitivity, specificity, accuracy). A statistically significant difference was accepted if the 95% confidence intervals did not overlap. RESULTS The inter-reader agreement between radiologists and the DCNN was excellent for vertebral body measurements, with ICC values of > 0.94 for anterior and posterior vertebral height and vertebral angle, and good to excellent for superior and inferior endplate concavity, with ICC values of 0.79-0.85. In fracture detection, the DCNN yielded a sensitivity of 0.941 (0.903-0.968), specificity of 0.969 (0.954-0.980), and accuracy of 0.962 (0.948-0.973). The diagnostic performance of the DCNN was independent of the radiological institution (accuracy 0.964 vs. 0.960), type of MRI scanner (accuracy 0.957 vs. 0.964), and magnetic field strength (accuracy 0.966 vs. 0.957). CONCLUSIONS A DCNN can achieve high diagnostic performance in vertebral body measurements and insufficiency fracture detection on heterogeneous lumbar spine MRI.
KEY POINTS • A DCNN has the potential for high diagnostic performance in measuring vertebral bodies and detecting insufficiency fractures of the lumbar spine.
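The sensitivity, specificity, and accuracy figures above derive directly from confusion-matrix counts; a minimal sketch (the counts below are illustrative only, not the paper's data):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic performance metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of fractured vertebrae detected
        "specificity": tn / (tn + fp),   # fraction of intact vertebrae correctly cleared
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for a 1000-vertebra evaluation (illustrative values).
m = diagnostic_metrics(tp=320, fp=20, tn=620, fn=40)
```

Confidence intervals around such proportions are then obtained with standard binomial methods.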
39
Cellina M, Cè M, Irmici G, Ascenti V, Caloro E, Bianchi L, Pellegrino G, D’Amico N, Papa S, Carrafiello G. Artificial Intelligence in Emergency Radiology: Where Are We Going? Diagnostics (Basel) 2022; 12:diagnostics12123223. [PMID: 36553230 PMCID: PMC9777804 DOI: 10.3390/diagnostics12123223] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2022] [Revised: 12/11/2022] [Accepted: 12/16/2022] [Indexed: 12/23/2022] Open
Abstract
Emergency radiology is a unique branch of imaging, as rapid diagnosis and management of different pathologies is essential to saving patients' lives. Artificial intelligence (AI) has many potential applications in emergency radiology. First, image acquisition can be facilitated by reducing acquisition times through automatic positioning and by minimizing artifacts with AI-based reconstruction systems that optimize image quality, even in critical patients. Second, AI enables an efficient workflow: algorithms integrated into the RIS-PACS workflow can analyze patient characteristics and images to flag high-priority examinations and patients with emergent critical findings. Different machine and deep learning algorithms have been trained for automated detection of various emergency disorders (e.g., intracranial hemorrhage, bone fractures, pneumonia) to help radiologists detect relevant findings. AI-based smart reporting, which summarizes patients' clinical data and grades imaging abnormalities, can provide an objective indicator of disease severity, resulting in quick and optimized treatment planning. In this review, we provide an overview of the AI tools available in emergency radiology, to keep radiologists up to date on the current technological evolution in this field.
Affiliation(s)
- Michaela Cellina (corresponding author): Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy
- Maurizio Cè: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Giovanni Irmici: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Velio Ascenti: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Elena Caloro: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Lorenzo Bianchi: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Giuseppe Pellegrino: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Natascha D'Amico: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Sergio Papa: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Gianpaolo Carrafiello: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy; Radiology Department, Fondazione IRCCS Cà Granda, Policlinico di Milano Ospedale Maggiore, Via Sforza 35, 20122 Milan, Italy
40
Droppelmann G, Tello M, García N, Greene C, Jorquera C, Feijoo F. Lateral elbow tendinopathy and artificial intelligence: Binary and multilabel findings detection using machine learning algorithms. Front Med (Lausanne) 2022; 9:945698. [PMID: 36213676 PMCID: PMC9537568 DOI: 10.3389/fmed.2022.945698] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Accepted: 08/29/2022] [Indexed: 11/13/2022] Open
Abstract
Background Ultrasound (US) is a valuable technique to detect degenerative findings and intrasubstance tears in lateral elbow tendinopathy (LET). Machine learning methods can support this radiological diagnosis. Aim To assess multilabel classification models using machine learning to detect degenerative findings and intrasubstance tears in US images with an LET diagnosis. Materials and methods A retrospective study was performed. US images and medical records from patients diagnosed with LET between January 1st, 2017, and December 30th, 2018, were selected. Datasets were built for training and testing the models. For image analysis, feature extraction covering texture characteristics, intensity distribution, pixel-pixel co-occurrence patterns, and scale granularity was implemented. Six different supervised learning models were implemented for binary and multilabel classification. All models were trained to classify four tendon findings (hypoechogenicity, neovascularity, enthesopathy, and intrasubstance tear). Accuracy indicators and their confidence intervals (CI) were obtained for all models following a K-fold repeated cross-validation method. To measure multilabel prediction, multilabel accuracy, sensitivity, specificity, and the receiver operating characteristic (ROC) with 95% CI were used. Results A total of 30,007 US images (4,324 exams, 2,917 patients) were included in the analysis. In binary classification, the random forest (RF) model presented the highest mean area under the curve (AUC), sensitivity, and specificity for each degenerative finding. AUC and sensitivity were best for intrasubstance tear, at 0.991 [95% CI, 0.99, 0.99] and 0.775 [95% CI, 0.77, 0.77], respectively, while specificity was highest for hypoechogenicity, at 0.821 [95% CI, 0.82, 0.82]. In the multilabel classifier, RF again presented the highest performance: accuracy of 0.772 [95% CI, 0.771, 0.773], a macro AUC of 0.948 [95% CI, 0.94, 0.94], and a micro AUC of 0.962 [95% CI, 0.96, 0.96]. Diagnostic accuracy, sensitivity, and specificity with 95% CI were calculated. Conclusion Machine learning algorithms based on US images of LET showed high diagnostic accuracy. The random forest model performed best in both the binary and multilabel classifiers, particularly for intrasubstance tears.
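Random forests in scikit-learn handle multilabel targets natively, so the repeated K-fold evaluation described above can be sketched as follows (synthetic features stand in for the extracted US image descriptors; this is not the authors' pipeline):

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedKFold
from sklearn.metrics import accuracy_score

# Synthetic stand-in for US texture/intensity features, with four labels per
# image (hypoechogenicity, neovascularity, enthesopathy, intrasubstance tear).
X, Y = make_multilabel_classification(
    n_samples=300, n_features=20, n_classes=4, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
cv = RepeatedKFold(n_splits=5, n_repeats=2, random_state=0)

scores = []
for train_idx, test_idx in cv.split(X):
    clf.fit(X[train_idx], Y[train_idx])
    # Exact-match (subset) accuracy across all four labels at once.
    scores.append(accuracy_score(Y[test_idx], clf.predict(X[test_idx])))

mean_subset_accuracy = float(np.mean(scores))
```

Per-label AUCs can then be averaged unweighted (macro) or pooled over all label decisions (micro), as in the abstract.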
Affiliation(s)
- Guillermo Droppelmann (corresponding author): Research Center on Medicine, Exercise, Sport and Health, MEDS Clinic, Santiago, RM, Chile; Health Sciences Ph.D. Program, Universidad Católica de Murcia UCAM, Murcia, Spain; Principles and Practice of Clinical Research (PPCR), Harvard T.H. Chan School of Public Health, Boston, MA, United States
- Manuel Tello: School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
- Nicolás García: MSK Diagnostic and Interventional Radiology Department, MEDS Clinic, Santiago, RM, Chile
- Cristóbal Greene: Hand and Elbow Unit, Department of Orthopaedic Surgery, MEDS Clinic, Santiago, RM, Chile
- Carlos Jorquera: Facultad de Ciencias, Escuela de Nutrición y Dietética, Universidad Mayor, Santiago, RM, Chile
- Felipe Feijoo: School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
41
Koska OI, Çilengir AH, Uluç ME, Yücel A, Tosun Ö. All-star approach to a small medical imaging dataset: combined deep, transfer, and classical machine learning approaches for the determination of radial head fractures. Acta Radiol 2022; 64:1476-1483. [PMID: 36062584 DOI: 10.1177/02841851221122424] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
BACKGROUND Radial head fractures are often evaluated in emergency departments and can easily be missed. Automated or semi-automated detection methods that assist physicians may be valuable given the high miss rate. PURPOSE To evaluate the accuracy of combined deep, transfer, and classical machine learning approaches on a small dataset for the determination of radial head fractures. MATERIAL AND METHODS A total of 48 patients with a radial head fracture and 56 patients without fracture on elbow radiographs were retrospectively evaluated. Input images were obtained by cropping anteroposterior elbow radiographs around a center point on the radial head. For fracture determination, an algorithm was developed based on feature extraction using distinct pretrained networks (VGG16, ResNet50, InceptionV3, MobileNetV2) representing four different approaches. Reduction of feature-space dimensionality, feeding the most relevant features, and an ensemble of classifiers were utilized. RESULTS The best-performing algorithm consisted of preprocessing the input, computing the global-maximum and global-mean outputs of the four pretrained networks, reducing dimensionality with univariate and ensemble feature selectors, and applying Support Vector Machine and Random Forest classifiers to the transformed, reduced dataset. A maximum accuracy of 90% was reached for fracture determination with MobileNetV2 pretrained features despite the small sample size. CONCLUSION Radial head fractures can be determined with a combined approach, and the limitations of a small sample size can be overcome by combining pretrained deep networks with classical machine learning methods.
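The described pipeline (pooled features from pretrained backbones, univariate feature selection, then SVM and Random Forest classifiers) can be sketched with scikit-learn. Here random vectors stand in for the pretrained embeddings, and the planted signal is purely illustrative; this is a hypothetical setup, not the authors' code:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for global-max/global-mean pooled features from a
# pretrained backbone (e.g., MobileNetV2): 104 radiographs, 1280-D embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(104, 1280))
y = rng.integers(0, 2, size=104)          # 1 = radial head fracture (toy labels)
X[y == 1, :16] += 1.0                     # plant a weak "fracture" signal

# Univariate feature selection followed by a soft-voting SVM/RF ensemble.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=64),
    VotingClassifier(
        estimators=[
            ("svm", SVC(probability=True, random_state=0)),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ],
        voting="soft",
    ),
)

acc = float(cross_val_score(model, X, y, cv=5).mean())
```

Keeping selection and scaling inside the pipeline ensures they are re-fit within each fold, which matters on a dataset this small.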
Affiliation(s)
- Ozgur I Koska: Department of Biomedical Engineering, Dokuz Eylül University Engineering Faculty, İzmir, Turkey; ETHZ Computer Vision Laboratory, Zurich, Switzerland
- Muhsin Engin Uluç: Department of Radiology, Izmir Katip Celebi University Ataturk Training and Research Hospital, Izmir, Turkey
- Aylin Yücel: Department of Radiology, Afyonkarahisar Health Sciences University, Afyonkarahisar, Turkey
- Özgür Tosun: Department of Radiology, Izmir Katip Celebi University Ataturk Training and Research Hospital, Izmir, Turkey
42
Yao J, Chepelev L, Nisha Y, Sathiadoss P, Rybicki FJ, Sheikh AM. Evaluation of a deep learning method for the automated detection of supraspinatus tears on MRI. Skeletal Radiol 2022; 51:1765-1775. [PMID: 35190850 DOI: 10.1007/s00256-022-04008-6] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/19/2021] [Revised: 01/30/2022] [Accepted: 01/30/2022] [Indexed: 02/02/2023]
Abstract
OBJECTIVE To evaluate if deep learning is a feasible approach for automated detection of supraspinatus tears on MRI. MATERIALS AND METHODS A total of 200 shoulder MRI studies performed between 2015 and 2019 were retrospectively obtained from our institutional database using a balanced random sampling of studies containing a full-thickness tear, partial-thickness tear, or intact supraspinatus tendon. A 3-stage pipeline was developed comprised of a slice selection network based on a pre-trained residual neural network (ResNet); a segmentation network based on an encoder-decoder network (U-Net); and a custom multi-input convolutional neural network (CNN) classifier. Binary reference labels were created following review of radiologist reports and images by a radiology fellow and consensus validation by two musculoskeletal radiologists. Twenty percent of the data was reserved as a holdout test set with the remaining 80% used for training and optimization under a fivefold cross-validation strategy. Classification and segmentation accuracy were evaluated using area under the receiver operating characteristic curve (AUROC) and Dice similarity coefficient, respectively. Baseline characteristics in correctly versus incorrectly classified cases were compared using independent sample t-test and chi-squared. RESULTS Test sensitivity and specificity of the classifier at the optimal Youden's index were 85.0% (95% CI: 62.1-96.8%) and 85.0% (95% CI: 62.1-96.8%), respectively. AUROC was 0.943 (95% CI: 0.820-0.991). Dice segmentation accuracy was 0.814 (95% CI: 0.805-0.826). There was no significant difference in AUROC between 1.5 T and 3.0 T studies. Sub-analysis showed superior sensitivity on full-thickness (100%) versus partial-thickness (72.5%) subgroups. DATA CONCLUSION Deep learning is a feasible approach to detect supraspinatus tears on MRI.
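Reporting sensitivity and specificity "at the optimal Youden's index", as above, means choosing the ROC threshold that maximizes J = sensitivity + specificity − 1. A minimal sketch with scikit-learn (the scores below are hypothetical):

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical classifier scores: higher should indicate a supraspinatus tear.
y_true  = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 1])
y_score = np.array([0.1, 0.3, 0.35, 0.8, 0.4, 0.6, 0.7, 0.9, 0.2, 0.55])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                        # Youden's J statistic at each ROC threshold
best = int(np.argmax(j))
best_threshold = float(thresholds[best])
```

For these toy scores, J peaks at 0.8 with an operating threshold of 0.4.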
Affiliation(s)
- Jason Yao: Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Ottawa, ON, K1H 8L6, Canada
- Leonid Chepelev: Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Ottawa, ON, K1H 8L6, Canada
- Yashmin Nisha: Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Ottawa, ON, K1H 8L6, Canada
- Paul Sathiadoss: Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Ottawa, ON, K1H 8L6, Canada
- Frank J Rybicki: Department of Radiology, University of Cincinnati College of Medicine, 234 Goodman Street, Cincinnati, OH, 45267-0761, USA
- Adnan M Sheikh: Department of Radiology, The University of British Columbia Faculty of Medicine, 2775 Laurel Street, Vancouver, BC, V5Z 1M9, Canada
43
Musculoskeletal MR Image Segmentation with Artificial Intelligence. Advances in Clinical Radiology 2022; 4:179-188. [PMID: 36815063 PMCID: PMC9943059 DOI: 10.1016/j.yacr.2022.04.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
44
du Toit C, Orlando N, Papernick S, Dima R, Gyacskov I, Fenster A. Automatic femoral articular cartilage segmentation using deep learning in three-dimensional ultrasound images of the knee. Osteoarthritis and Cartilage Open 2022; 4:100290. [PMID: 36474947 PMCID: PMC9718325 DOI: 10.1016/j.ocarto.2022.100290] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Revised: 05/28/2022] [Accepted: 06/20/2022] [Indexed: 10/17/2022] Open
Abstract
Objective This study aimed to develop a deep learning-based approach to automatically segment the femoral articular cartilage (FAC) in 3D ultrasound (US) images of the knee, to increase time efficiency and decrease rater variability. Design Our method applied deep learning predictions to 2D US slices sampled in the transverse plane to view the cartilage of the femoral trochlea, followed by reconstruction into a 3D surface. A 2D U-Net was modified and trained using a dataset of 200 2D US images resliced from 20 3D US images. Segmentation accuracy was evaluated using a holdout dataset of 50 2D US images resliced from 5 3D US images. Absolute and signed error metrics were computed, and FAC segmentation performance was compared between the manual segmentations of raters 1 and 2. Results Our U-Net-based algorithm achieved mean 3D Dice similarity coefficient (DSC), recall, precision, volume percent difference (VPD), mean surface distance (MSD), and Hausdorff distance (HD) of 73.1 ± 3.9%, 74.8 ± 6.1%, 72.0 ± 6.3%, 10.4 ± 6.0%, 0.3 ± 0.1 mm, and 1.6 ± 0.7 mm, respectively. Compared to the individual 2D predictions, our algorithm showed a decrease in performance after 3D reconstruction, but these differences were not statistically significant. The percent difference between the manually segmented volumes of the two raters was 3.4%, and rater 2 demonstrated the larger VPD, with 14.2 ± 11.4 mm3 compared to 10.4 ± 6.0 mm3 for rater 1. Conclusion This study investigated the use of a modified U-Net algorithm to automatically segment the FAC in 3D US knee images of healthy volunteers, demonstrating that this segmentation method would increase the efficiency of anterior femoral cartilage volume estimation and expedite post-acquisition processing of 3D US images of the knee.
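Reconstructing resliced 2D predictions into a 3D volume and computing the volume percent difference (VPD) can be sketched as follows (toy masks; the helper names are illustrative, not the authors' code):

```python
import numpy as np

def reconstruct_3d(slice_masks):
    """Stack per-slice 2D segmentation masks back into a 3D volume."""
    return np.stack(slice_masks, axis=0)

def volume_percent_difference(pred: np.ndarray, truth: np.ndarray,
                              voxel_mm3: float = 1.0) -> float:
    """Absolute volume difference as a percentage of the reference volume."""
    v_pred = pred.sum() * voxel_mm3
    v_true = truth.sum() * voxel_mm3
    return float(abs(v_pred - v_true) / v_true * 100.0)

# Toy example: 4 resliced 2D cartilage masks (8x8); the prediction misses 2 voxels.
truth_slices = [np.zeros((8, 8), dtype=bool) for _ in range(4)]
for s in truth_slices:
    s[2:6, 2:6] = True                   # 16 voxels per slice, 64 in total
pred_slices = [s.copy() for s in truth_slices]
pred_slices[0][2, 2] = False
pred_slices[1][2, 3] = False

truth_vol = reconstruct_3d(truth_slices)
pred_vol = reconstruct_3d(pred_slices)
vpd = volume_percent_difference(pred_vol, truth_vol)
```

Surface metrics such as MSD and HD are computed on the same reconstructed volumes, but over boundary voxels rather than volumes.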
Affiliation(s)
- Carla du Toit: Faculty of Health Sciences, Collaborative Specialization in Musculoskeletal Health Research, and Bone and Joint Institute, Western University, London, ON N6A 3K7, Canada; Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
- Nathan Orlando: Schulich School of Medicine and Dentistry, Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada; Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
- Sam Papernick: Schulich School of Medicine and Dentistry, Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada; Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
- Robert Dima: Faculty of Health Sciences, Collaborative Specialization in Musculoskeletal Health Research, and Bone and Joint Institute, Western University, London, ON N6A 3K7, Canada; Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
- Igor Gyacskov: Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
- Aaron Fenster: Schulich School of Medicine and Dentistry, Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada; Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
|
45
|
Spinopelvic measurements of sagittal balance with deep learning: systematic review and critical evaluation. Eur Spine J 2022; 31:2031-2045. [PMID: 35278146 DOI: 10.1007/s00586-022-07155-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Revised: 02/04/2022] [Accepted: 02/14/2022] [Indexed: 01/20/2023]
Abstract
PURPOSE To summarize and critically evaluate the existing studies for spinopelvic measurements of sagittal balance that are based on deep learning (DL). METHODS Three databases (PubMed, WoS and Scopus) were queried for records using keywords related to DL and measurement of sagittal balance. After screening the resulting 529 records, augmented with a specific web search, 34 studies published between 2017 and 2022 were included in the final review and evaluated from the perspective of the observed sagittal spinopelvic parameters, properties of the spine image datasets, applied DL methodology, and resulting measurement performance. RESULTS Studies reported DL measurement of up to 18 different spinopelvic parameters, although the actual number depended on the image field of view. Image datasets were composed of lateral lumbar spine and whole spine X-rays, biplanar whole spine X-rays, and lumbar spine magnetic resonance cross sections, and increased in size over time or were enriched by augmentation techniques. Spinopelvic parameter measurement was approached either by landmark detection or structure segmentation, and U-Net was the most frequently applied DL architecture. The latest DL methods achieved excellent performance in terms of mean absolute error against reference manual measurements (~2° or ~1 mm). CONCLUSION Although the application of relatively complex DL architectures resulted in improved measurement accuracy of sagittal spinopelvic parameters, future methods should focus on multi-institution and multi-observer analyses as well as uncertainty estimation and error handling for integration into the clinical workflow. Further advances will enhance the predictive analytics of DL methods for spinopelvic parameter measurement. LEVEL OF EVIDENCE I Diagnostic: individual cross-sectional studies with a consistently applied reference standard and blinding.
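Landmark-based measurement, the dominant approach noted in the review above, reduces to simple geometry once landmark coordinates have been predicted: an angle between two lines, then a mean absolute error against manual references. A minimal sketch with hypothetical landmark names and coordinates (none are taken from any reviewed study; real pipelines would use DL-detected landmarks):

```python
import numpy as np

def angle_between(p1, p2, q1, q2) -> float:
    """Angle in degrees between line p1->p2 and line q1->q2."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical (x, y) pixel landmarks on the S1 superior endplate,
# measured against the horizontal (a sacral-slope-style angle).
# Note: in radiograph pixel space, y typically increases downward.
s1_left, s1_right = (100.0, 200.0), (180.0, 160.0)
horiz_a, horiz_b = (0.0, 0.0), (1.0, 0.0)
angle = angle_between(s1_left, s1_right, horiz_a, horiz_b)
print(round(angle, 2))  # arctan(40/80) -> 26.57

# Mean absolute error of automated vs manual angle measurements
auto = np.array([26.6, 41.2, 35.0])
manual = np.array([25.0, 43.0, 36.5])
mae = np.mean(np.abs(auto - manual))
print(round(mae, 2))  # 1.63
```

The ~2° MAE figure reported by the latest methods corresponds to exactly this kind of per-case comparison, averaged over a test set.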
|
46
|
Vereecke E, Herregods N, Morbée L, Laloo F, Chen M, Jans L. Imaging of Structural Abnormalities of the Sacrum: The Old Faithful and Newly Emerging Techniques. Semin Musculoskelet Radiol 2022; 26:469-477. [PMID: 36103888 DOI: 10.1055/s-0042-1754342] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Abstract
The sacrum and sacroiliac joints pose a long-standing challenge for adequate imaging because of their complex anatomical form, oblique orientation, and posterior location in the pelvis, making them subject to superimposition. The sacrum and sacroiliac joints are composed of multiple diverse tissues, further complicating their imaging. Various imaging techniques are suited to evaluating the sacrum, each with its specific clinical indications, benefits, and drawbacks. New techniques continue to be developed and validated, such as dual-energy computed tomography (CT) and new magnetic resonance imaging (MRI) sequences, for example, susceptibility-weighted imaging. Ongoing development of artificial intelligence, such as algorithms allowing reconstruction of MRI-based synthetic CT images, promises even more clinical imaging options.
Affiliation(s)
- Elke Vereecke: Department of Radiology, Ghent University Hospital, Gent, Belgium
- Nele Herregods: Department of Radiology, Ghent University Hospital, Gent, Belgium
- Lieve Morbée: Department of Radiology, Ghent University Hospital, Gent, Belgium
- Frederiek Laloo: Department of Radiology, Ghent University Hospital, Gent, Belgium
- Min Chen: Department of Radiology, Peking University Shenzhen Hospital, Shenzhen, China
- Lennart Jans: Department of Radiology, Ghent University Hospital, Gent, Belgium
|
47
|
Huhtanen JT, Nyman M, Doncenco D, Hamedian M, Kawalya D, Salminen L, Sequeiros RB, Koskinen SK, Pudas TK, Kajander S, Niemi P, Hirvonen J, Aronen HJ, Jafaritadi M. Deep learning accurately classifies elbow joint effusion in adult and pediatric radiographs. Sci Rep 2022; 12:11803. [PMID: 35821056 PMCID: PMC9276721 DOI: 10.1038/s41598-022-16154-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Accepted: 07/05/2022] [Indexed: 11/17/2022] Open
Abstract
Joint effusion due to elbow fractures is common among adults and children. Radiography is the most commonly used imaging procedure to diagnose elbow injuries. The purpose of this study was to investigate the diagnostic accuracy of deep convolutional neural network algorithms for joint effusion classification in pediatric and adult elbow radiographs. This retrospective study comprised a total of 4423 radiographs acquired over a 3-year period from 2017 to 2020. Data were randomly split into training (n = 2672), validation (n = 892) and test (n = 859) sets. Two models using VGG16 as the base architecture were trained with either the lateral projection only or with four projections (AP, lateral, and obliques). Three radiologists evaluated joint effusion separately on the test set. Accuracy, precision, recall, specificity, F1 score, Cohen's kappa, and two-sided 95% confidence intervals were calculated. Mean patient age was 34.4 years (range 1–98) and 47% of patients were male. The trained deep learning framework showed an AUC of 0.951 (95% CI 0.946–0.955) and 0.906 (95% CI 0.89–0.91) for the lateral and four-projection elbow joint images in the test set, respectively. The adult and pediatric patient groups separately showed AUCs of 0.966 and 0.924, respectively. The radiologists showed an average accuracy, sensitivity, specificity, precision, F1 score, and AUC of 92.8%, 91.7%, 93.6%, 91.07%, 91.4%, and 92.6%, respectively. There were no statistically significant differences between the AUCs of the deep learning model and the radiologists (p > 0.05). The model trained on the lateral dataset alone achieved a higher AUC than the model trained on the four-projection dataset. Using deep learning, it is possible to achieve expert-level diagnostic accuracy in elbow joint effusion classification in pediatric and adult radiographs; such a model can classify joint effusion in radiographs and serve as an aid for radiologists in image interpretation.
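All of the reader-performance metrics listed in the abstract above (accuracy, sensitivity, specificity, precision, F1, Cohen's kappa) derive from a 2x2 confusion matrix. A minimal sketch with hypothetical per-cell counts (the study reports only aggregate figures; these counts are illustrative):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, sensitivity, specificity, precision, F1, and Cohen's kappa
    from a 2x2 confusion matrix (effusion = positive class)."""
    n = tp + fp + fn + tn
    accuracy = (tp + tn) / n
    sensitivity = tp / (tp + fn)      # recall for the positive class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = accuracy
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_o - p_e) / (1 - p_e)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f1": f1, "kappa": kappa}

# Hypothetical counts for a test set of 859 radiographs
m = classification_metrics(tp=380, fp=40, fn=35, tn=404)
print({key: round(val, 3) for key, val in m.items()})
```

AUC, by contrast, is threshold-free and needs the model's continuous scores rather than a single confusion matrix, which is why it is reported separately in the abstract.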
Affiliation(s)
- Jarno T Huhtanen: Faculty of Health and Well-Being, Turku University of Applied Sciences, Turku, Finland; Department of Radiology, University of Turku, Turku, Finland
- Mikko Nyman: Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland
- Dorin Doncenco: Faculty of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
- Maral Hamedian: Faculty of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
- Davis Kawalya: Faculty of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
- Leena Salminen: Department of Nursing Science, University of Turku and Director of Nursing (Part-Time), Turku University Hospital, Turku, Finland
- Tomi K Pudas: Terveystalo Inc, Jaakonkatu 3, Helsinki, Finland
- Sami Kajander: Department of Radiology, University of Turku, Turku, Finland
- Pekka Niemi: Department of Radiology, University of Turku, Turku, Finland
- Jussi Hirvonen: Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland
- Hannu J Aronen: Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland
- Mojtaba Jafaritadi: Faculty of Engineering and Business, Turku University of Applied Sciences, Turku, Finland
|
48
|
Abstract
PURPOSE OF REVIEW Imaging of the sacroiliac joints is one of the cornerstones in the diagnosis and monitoring of axial spondyloarthritis. We aim to present an overview of the emerging imaging techniques for sacroiliac joint assessment and provide insight into their relevant benefits and pitfalls. RECENT FINDINGS Evaluation of both structural and active inflammatory lesions in sacroiliitis is important for understanding the disease process. Dual-energy computed tomography (CT) can detect inflammatory bone marrow edema in the sacroiliac joints and provides an alternative to magnetic resonance imaging (MRI). Three-dimensional gradient echo sequences improve the visualization of erosions on MRI. Susceptibility-weighted MRI and deep learning-based synthetic CT are innovative MRI techniques that allow for generating 'CT-like' images and better depict osseous structural lesions than routine MRI sequences. SUMMARY New imaging innovations and developments result in significant improvements in the imaging of spondyloarthritis. Advanced MRI techniques enhance its potential for the accurate detection of structural and active inflammatory lesions of sacroiliitis in a single imaging session.
Affiliation(s)
- Lieve Morbée: Department of Radiology, Ghent University Hospital, Ghent, Belgium
|
49
|
An Extra Set of Intelligent Eyes: Application of Artificial Intelligence in Imaging of Abdominopelvic Pathologies in Emergency Radiology. Diagnostics (Basel) 2022; 12:diagnostics12061351. [PMID: 35741161 PMCID: PMC9221728 DOI: 10.3390/diagnostics12061351] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Revised: 05/19/2022] [Accepted: 05/26/2022] [Indexed: 11/25/2022] Open
Abstract
Imaging in the emergent setting carries high stakes. With increased demand for dedicated on-site service, emergency radiologists face increasingly large image volumes that require rapid turnaround times. However, novel artificial intelligence (AI) algorithms may assist trauma and emergency radiologists with efficient and accurate medical image analysis, providing an opportunity to augment human decision making, including outcome prediction and treatment planning. While traditional radiology practice involves visual assessment of medical images for detection and characterization of pathologies, AI algorithms can automatically identify subtle disease states and provide quantitative characterization of disease severity based on morphologic image details, such as geometry and fluid flow. Taken together, the benefits provided by implementing AI in radiology have the potential to improve workflow efficiency, produce faster turnaround on complex cases, and reduce heavy workloads. Although analysis of AI applications within abdominopelvic imaging has primarily focused on oncologic detection, localization, and treatment response, several promising algorithms have been developed for use in the emergency setting. This article aims to establish a general understanding of the AI algorithms used in emergent image-based tasks and to discuss the challenges associated with the implementation of AI into the clinical workflow.
|
50
|
Addressing Motion Blurs in Brain MRI Scans Using Conditional Adversarial Networks and Simulated Curvilinear Motions. J Imaging 2022; 8:jimaging8040084. [PMID: 35448211 PMCID: PMC9027264 DOI: 10.3390/jimaging8040084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Revised: 03/16/2022] [Accepted: 03/21/2022] [Indexed: 11/27/2022] Open
Abstract
In-scanner head motion often leads to degradation in MRI scans and is a major source of error in diagnosing brain abnormalities. Researchers have explored various approaches, including blind and nonblind deconvolutions, to correct the motion artifacts in MRI scans. Inspired by the recent success of deep learning models in medical image analysis, we investigate the efficacy of employing generative adversarial networks (GANs) to address motion blurs in brain MRI scans. We cast the problem as a blind deconvolution task where a neural network is trained to guess a blurring kernel that produced the observed corruption. Specifically, our study explores a new approach under the sparse coding paradigm where every ground truth corrupting kernel is assumed to be a "combination" of a relatively small universe of "basis" kernels. This assumption is based on the intuition that, on small distance scales, patients' movements follow simple curves and that complex motions can be obtained by combining a number of simple ones. We show that, with a suitably dense basis, a neural network can effectively guess the degrading kernel and reverse some of the damage in motion-affected real-world scans. To this end, we generated 10,000 continuous and curvilinear kernels in random positions and directions that are likely to uniformly populate the space of corrupting kernels in real-world scans. We further generated a large dataset of 225,000 pairs of sharp and blurred MR images to facilitate training effective deep learning models. Our experimental results demonstrate the viability of the proposed approach, evaluated on both synthetic and real-world MRI scans. Our study further suggests there is merit in exploring separate models for the sagittal, axial, and coronal planes.
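The corruption model described above, a blur kernel rasterized from a short curved motion trajectory and convolved with a sharp image, can be sketched as follows. The kernel size, step length, and curvature here are illustrative assumptions, not the authors' parameters, and the toy square stands in for anatomy:

```python
import numpy as np

def curvilinear_kernel(size: int = 7, n_steps: int = 40, step: float = 0.15,
                       curvature: float = 0.15, seed: int = 0) -> np.ndarray:
    """Rasterize a short curved motion path into a normalized blur kernel."""
    rng = np.random.default_rng(seed)
    angle = rng.uniform(0, 2 * np.pi)   # random initial direction
    x, y = size / 2.0, size / 2.0       # start at the kernel center
    kernel = np.zeros((size, size))
    for _ in range(n_steps):
        kernel[int(y) % size, int(x) % size] += 1.0
        x += step * np.cos(angle)
        y += step * np.sin(angle)
        angle += curvature              # gentle constant turn -> curved path
    return kernel / kernel.sum()        # normalize so blur preserves intensity

def blur(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Blur via direct 2D convolution ('same' output size, zero padding)."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # flip the kernel for true convolution (vs correlation)
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0          # toy "anatomy" far from the image border
k = curvilinear_kernel()
blurred = blur(img, k)
print(blurred.shape, round(blurred.sum(), 3))  # (32, 32) 64.0
```

Generating many such kernels with different seeds and directions gives a basis-like population of corruptions, mirroring the paper's strategy of densely sampling the space of plausible motion blurs to build sharp/blurred training pairs.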
|