1. Luo A, Gou S, Tong N, Liu B, Jiao L, Xu H, Wang Y, Ding T. Visual interpretable MRI fine grading of meniscus injury for intelligent assisted diagnosis and treatment. NPJ Digit Med 2024; 7:97. PMID: 38622284; PMCID: PMC11018801; DOI: 10.1038/s41746-024-01082-z.
Abstract
Meniscal injury is a common type of knee injury, accounting for over 50% of all knee injuries. The clinical diagnosis and treatment of meniscal injury rely heavily on magnetic resonance imaging (MRI). However, accurately diagnosing the meniscus from a comprehensive knee MRI is challenging because of its limited and weak signal, which significantly impedes precise grading of meniscal injuries. In this study, a visual interpretable fine grading (VIFG) diagnosis model was developed to enable intelligent, quantified grading of meniscal injuries. Leveraging a multilevel transfer learning framework, it extracts comprehensive features and incorporates an attributional attention module to precisely locate the injured positions. Moreover, an attention-enhancing feedback module effectively concentrates on and distinguishes regions with similar grades of injury. The proposed method was validated on the FastMRI_Knee and Xijing_Knee datasets, achieving mean grading accuracies of 0.8631 and 0.8502, respectively, and notably surpassing state-of-the-art grading methods in the error-prone Grade 1 and Grade 2 cases. Additionally, the visually interpretable heatmaps generated by VIFG accurately depict actual or potential meniscus injury areas beyond human visual capability. Building on this, a novel fine grading criterion was introduced for subtypes of meniscal injury, further classifying Grade 2 into 2a, 2b, and 2c, in line with the anatomical knowledge of meniscal blood supply. It can provide enhanced injury-specific details, facilitating the development of more precise surgical strategies. The efficacy of this subtype classification was demonstrated in 20 arthroscopic cases, underscoring the potential of intelligent-assisted diagnosis and treatment of meniscal injuries.
Affiliation(s)
- Anlin Luo
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, 710071, Xi'an, China
- Shuiping Gou
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, 710071, Xi'an, China
- AI-based Big Medical Imaging Data Frontier Research Center, Academy of Advanced Interdisciplinary Research, Xidian University, 710071, Xi'an, Shaanxi, China
- Nuo Tong
- AI-based Big Medical Imaging Data Frontier Research Center, Academy of Advanced Interdisciplinary Research, Xidian University, 710071, Xi'an, Shaanxi, China
- Bo Liu
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, 710071, Xi'an, China
- Licheng Jiao
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, 710071, Xi'an, China
- Hu Xu
- Xijing Orthopaedics Hospital, The Fourth Military Medical University, 710032, Xi'an, Shaanxi, China
- Yingchun Wang
- Xijing Orthopaedics Hospital, The Fourth Military Medical University, 710032, Xi'an, Shaanxi, China
- Tan Ding
- Xijing Orthopaedics Hospital, The Fourth Military Medical University, 710032, Xi'an, Shaanxi, China
2. Tieu A, Kroen E, Kadish Y, Liu Z, Patel N, Zhou A, Yilmaz A, Lee S, Deyer T. The Role of Artificial Intelligence in the Identification and Evaluation of Bone Fractures. Bioengineering (Basel) 2024; 11:338. PMID: 38671760; PMCID: PMC11047896; DOI: 10.3390/bioengineering11040338.
Abstract
Artificial intelligence (AI), particularly deep learning, has made enormous strides in medical imaging analysis. In the field of musculoskeletal radiology, deep-learning models are actively being developed for the identification and evaluation of bone fractures. These methods provide numerous benefits to radiologists, such as increased diagnostic accuracy and efficiency, while also achieving standalone performance comparable or superior to that of clinician readers. Various algorithms are already commercially available for integration into clinical workflows, with the potential to improve healthcare delivery and shape the future practice of radiology. In this systematic review, we explore the performance of current AI methods in the identification and evaluation of fractures, particularly those of the ankle, wrist, hip, and ribs. We also discuss currently available commercial products for fracture detection and provide an overview of the limitations of this technology and future directions of the field.
Affiliation(s)
- Andrew Tieu
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Ezriel Kroen
- New York Medical College, Valhalla, NY 10595, USA
- Zelong Liu
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Nikhil Patel
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Alexander Zhou
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Timothy Deyer
- East River Medical Imaging, New York, NY 10021, USA
- Department of Radiology, Cornell Medicine, New York, NY 10021, USA
3. Santomartino SM, Kung J, Yi PH. Systematic review of artificial intelligence development and evaluation for MRI diagnosis of knee ligament or meniscus tears. Skeletal Radiol 2024; 53:445-454. PMID: 37584757; DOI: 10.1007/s00256-023-04416-2.
Abstract
OBJECTIVE The purpose of this systematic review was to summarize the results of original research studies evaluating the characteristics and performance of deep learning models for detection of knee ligament and meniscus tears on MRI. MATERIALS AND METHODS We searched PubMed for original studies published as of February 2, 2022 on the development and evaluation of deep learning models for MRI diagnosis of knee ligament or meniscus tears. We summarized study details according to multiple criteria, including baseline article details, model creation, deep learning details, and model evaluation. RESULTS Nineteen studies were included, with radiology departments leading the publications on deep learning development and implementation for detecting knee injuries via MRI. Among the studies, there was a lack of standard reporting, and development details were described inconsistently. However, all included studies reported consistently high model performance that significantly supplemented human reader performance. CONCLUSION From our review, we found that radiology departments have been leading deep learning development for injury detection on knee MRI. Although studies described deep learning model development details inconsistently, all reported high model performance, indicating great promise for deep learning in knee MRI analysis.
Affiliation(s)
- Samantha M Santomartino
- Drexel University College of Medicine, Philadelphia, PA, USA
- University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Justin Kung
- Department of Orthopaedic Surgery, University of South Carolina, Columbia, SC, USA
- Paul H Yi
- University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore Street First Floor Rm. 1172, Baltimore, MD, 21201, USA
4. Zhao Y, Coppola A, Karamchandani U, Amiras D, Gupte CM. Artificial intelligence applied to magnetic resonance imaging reliably detects the presence, but not the location, of meniscus tears: a systematic review and meta-analysis. Eur Radiol 2024. PMID: 38386028; DOI: 10.1007/s00330-024-10625-7.
Abstract
OBJECTIVES To review and compare the accuracy of convolutional neural networks (CNN) for the diagnosis of meniscal tears in the current literature and to analyze the decision-making processes utilized by these CNN algorithms. MATERIALS AND METHODS PubMed, MEDLINE, EMBASE, and Cochrane databases up to December 2022 were searched in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Risk-of-bias analysis was performed for all identified articles. Predictive performance values, including sensitivity and specificity, were extracted for quantitative analysis. The meta-analysis was divided between AI prediction models identifying the presence of meniscus tears and those locating the tears. RESULTS Eleven articles were included in the final review, with a total of 13,467 patients and 57,551 images. Heterogeneity was statistically significantly large for the sensitivity of the tear identification analysis (I2 = 79%). A higher level of accuracy was observed in identifying the presence of a meniscal tear than in locating tears in specific regions of the meniscus (AUC, 0.939 vs 0.905). Pooled sensitivity and specificity were 0.87 (95% confidence interval (CI) 0.80-0.91) and 0.89 (95% CI 0.83-0.93) for meniscus tear identification, and 0.88 (95% CI 0.82-0.91) and 0.84 (95% CI 0.81-0.85) for locating the tears. CONCLUSIONS AI prediction models achieved favorable performance in the diagnosis, but not the location, of meniscus tears. Further studies on the clinical utility of deep learning should include standardized reporting, external validation, and full reports of the predictive performance of these models, with a view to localizing tears more accurately. CLINICAL RELEVANCE STATEMENT Meniscus tears are hard to diagnose on knee magnetic resonance images. AI prediction models may play an important role in improving the diagnostic accuracy of clinicians and radiologists.
KEY POINTS • Artificial intelligence (AI) shows great potential for improving the diagnosis of meniscus tears. • The pooled diagnostic performance of AI was better for identifying meniscus tears (sensitivity 87%, specificity 89%) than for locating the tears (sensitivity 88%, specificity 84%). • AI is good at confirming the diagnosis of meniscus tears, but future work is required to guide the management of the disease.
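The pooled sensitivity and specificity figures above come from standard 2×2 confusion-matrix counts computed per study before pooling. As a minimal illustrative sketch in Python (the counts below are hypothetical, chosen only to reproduce the pooled point estimates, not taken from the review):

```python
# Sensitivity and specificity from a 2x2 confusion matrix, the per-study
# quantities that a diagnostic-accuracy meta-analysis pools.
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: proportion of actual tears that are flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: proportion of intact menisci correctly cleared."""
    return tn / (tn + fp)

# Hypothetical reads of 200 knee MRIs (100 torn, 100 intact).
tp, fn, tn, fp = 87, 13, 89, 11
print(sensitivity(tp, fn))  # 0.87
print(specificity(tn, fp))  # 0.89
```

Actual pooling across studies uses bivariate random-effects models rather than raw averaging, which is why the review reports confidence intervals and an I2 heterogeneity statistic alongside the point estimates.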
Affiliation(s)
- Yi Zhao
- Imperial College London School of Medicine, Exhibition Rd, South Kensington, London, SW7 2BU, UK
- Andrew Coppola
- Imperial College London School of Medicine, Exhibition Rd, South Kensington, London, SW7 2BU, UK
- Dimitri Amiras
- Imperial College London School of Medicine, Exhibition Rd, South Kensington, London, SW7 2BU, UK
- Imperial College London NHS Trust, London, UK
- Chinmay M Gupte
- Imperial College London School of Medicine, Exhibition Rd, South Kensington, London, SW7 2BU, UK
- Imperial College London NHS Trust, London, UK
5. Mahendrakar P, Kumar D, Patil U. A Comprehensive Review on MRI-based Knee Joint Segmentation and Analysis Techniques. Curr Med Imaging 2024; 20:e150523216894. PMID: 37189281; DOI: 10.2174/1573405620666230515090557.
Abstract
Magnetic resonance imaging (MRI) has proven extremely beneficial in osteoarthritis pathogenesis research. However, it is challenging for both clinicians and researchers to detect morphological changes in knee joints from magnetic resonance (MR) images, since the surrounding tissues produce identical signals in MR studies, making them difficult to distinguish. Segmenting the knee bone, articular cartilage, and menisci from MR images allows one to examine the complete volume of these structures and to assess certain characteristics quantitatively. However, segmentation is a laborious and time-consuming operation that requires sufficient training to complete correctly. With the advancement of MRI technology and computational methods, researchers have developed several algorithms over the last two decades to automate the segmentation of individual knee bones, articular cartilage, and menisci. This systematic review presents the available fully and semi-automatic segmentation methods for knee bone, cartilage, and meniscus published in the scientific literature. It provides clinicians and researchers with a clear description of the scientific advances in this field of image analysis and segmentation, which supports the development of novel automated methods for clinical applications. The review also covers recently developed fully automated deep learning-based segmentation methods, which not only provide better results than conventional techniques but also open a new field of research in medical imaging.
Affiliation(s)
- Pavan Mahendrakar
- BLDEA's V.P. Dr. P.G. Halakatti College of Engineering and Technology, Vijayapur, Karnataka, India
- Uttam Patil
- Jain College of Engineering, T.S. Nagar, Hunchanhatti Road, Machhe, Belagavi, Karnataka, India
6. Shetty ND, Dhande R, Unadkat BS, Parihar P. A Comprehensive Review on the Diagnosis of Knee Injury by Deep Learning-Based Magnetic Resonance Imaging. Cureus 2023; 15:e45730. PMID: 37868582; PMCID: PMC10590246; DOI: 10.7759/cureus.45730.
Abstract
Continual improvement in the field of medical diagnosis has led to the widespread use of deep learning (DL)-based magnetic resonance imaging (MRI) for the diagnosis of knee injuries, including meniscal injury; ligament injury involving the cruciate ligaments, collateral ligaments, and medial patellofemoral ligament; and cartilage injury. The present systematic review was conducted using PubMed and the Directory of Open Access Journals (DOAJ), from which we selected 24 studies on the accuracy of DL-based MRI for knee injury identification. The studies reported accuracies of 72.5% to 100%, indicating that DL-based MRI performs on par with humans in decision-making and management of knee injuries. This opens up future exploration for improving MRI-based diagnosis, keeping in mind the limitations of verification bias, data imbalance, and ground truth subjectivity.
Affiliation(s)
- Neha D Shetty
- Department of Radiodiagnosis, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Rajasbala Dhande
- Department of Radiodiagnosis, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Bhavik S Unadkat
- Department of Radiodiagnosis, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Pratapsingh Parihar
- Department of Radiodiagnosis, Datta Meghe Institute of Higher Education and Research, Wardha, IND
7. Debs P, Fayad LM. The promise and limitations of artificial intelligence in musculoskeletal imaging. Front Radiol 2023; 3:1242902. PMID: 37609456; PMCID: PMC10440743; DOI: 10.3389/fradi.2023.1242902.
Abstract
With the recent developments in deep learning and the rapid growth of convolutional neural networks, artificial intelligence has shown promise as a tool that can transform several aspects of the musculoskeletal imaging cycle. Its applications can involve both interpretive and non-interpretive tasks, such as the ordering of imaging, scheduling, protocoling, image acquisition, report generation, and communication of findings. However, artificial intelligence tools still face a number of challenges that can hinder effective implementation into clinical practice. The purpose of this review is to explore both the successes and limitations of artificial intelligence applications throughout the musculoskeletal imaging cycle and to highlight how these applications can help enhance the service radiologists deliver to their patients, resulting in increased efficiency as well as improved patient and provider satisfaction.
Affiliation(s)
- Patrick Debs
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States
- Laura M. Fayad
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States
- Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
8. Korneev A, Lipina M, Lychagin A, Timashev P, Kon E, Telyshev D, Goncharuk Y, Vyazankin I, Elizarov M, Murdalov E, Pogosyan D, Zhidkov S, Bindeeva A, Liang XJ, Lasovskiy V, Grinin V, Anosov A, Kalinsky E. Systematic review of artificial intelligence tack in preventive orthopaedics: is the land coming soon? Int Orthop 2023; 47:393-403. PMID: 36369394; DOI: 10.1007/s00264-022-05628-2.
Abstract
PURPOSE This study aims to describe and assess the current stage of artificial intelligence (AI) technology integration in preventive orthopaedics of the knee and hip joints. MATERIALS AND METHODS The study was conducted in strict compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Literature databases were searched for articles describing the development and validation of AI models aimed at diagnosing knee or hip joint pathologies or predicting their development or course in patients. The quality of the included articles was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) and QUADAS-AI tools. RESULTS Fifty-six articles met all the inclusion criteria. We identified two problems that block the full integration of AI into the routine of an orthopaedic physician. The first is the insufficient amount, variety, and quality of data for the training, validation, and testing of AI models. The second is the rarity of rigorous model evaluation, which means that the real quality of models cannot always be judged. CONCLUSION The vastness and relevance of the studied topic are beyond doubt. High-quality, properly validated models exist in all four scopes considered. Additional optimization and confirmation of the models' quality on various datasets are the last technical stumbling blocks to creating usable software and integrating it into the routine of an orthopaedic physician.
Affiliation(s)
- Alexander Korneev
- Medical Polymer Synthesis Laboratory, Institute for Regenerative Medicine, Sechenov University, Moscow, 119991, Russia
- Laboratory of Clinical Smart Nanotechnologies, Institute for Regenerative Medicine, Sechenov University, Moscow, 119991, Russia
- N.V. Sklifosovsky Institute of Clinical Medicine, Sechenov University, Moscow, 119991, Russia
- Marina Lipina
- Laboratory of Clinical Smart Nanotechnologies, Institute for Regenerative Medicine, Sechenov University, Moscow, 119991, Russia
- Department of Traumatology, Orthopaedics and Disaster Surgery, Sechenov University, Moscow, 119991, Russia
- Alexey Lychagin
- Department of Traumatology, Orthopaedics and Disaster Surgery, Sechenov University, Moscow, 119991, Russia
- Peter Timashev
- Laboratory of Clinical Smart Nanotechnologies, Institute for Regenerative Medicine, Sechenov University, Moscow, 119991, Russia
- World-Class Research Center "Digital Biodesign and Personalized Healthcare", Sechenov University, Moscow, 119991, Russia
- Institute for Regenerative Medicine, Sechenov University, Moscow, 119991, Russia
- Elizaveta Kon
- Department of Traumatology, Orthopaedics and Disaster Surgery, Sechenov University, Moscow, 119991, Russia
- Humanitas Clinical and Research Center - IRCCS, Via Manzoni 56, Rozzano, 20089, Milan, Italy
- Dmitry Telyshev
- Institute of Biomedical Systems, National Research University of Electronic Technology, Zelenograd, Moscow, 124498, Russia
- Institute of Bionic Technologies and Engineering, Sechenov University, Moscow, 119991, Russia
- Yuliya Goncharuk
- Department of Traumatology, Orthopaedics and Disaster Surgery, Sechenov University, Moscow, 119991, Russia
- Ivan Vyazankin
- Laboratory of Clinical Smart Nanotechnologies, Institute for Regenerative Medicine, Sechenov University, Moscow, 119991, Russia
- Department of Traumatology, Orthopaedics and Disaster Surgery, Sechenov University, Moscow, 119991, Russia
- Mikhail Elizarov
- Department of Traumatology, Orthopaedics and Disaster Surgery, Sechenov University, Moscow, 119991, Russia
- Emirkhan Murdalov
- Department of Traumatology, Orthopaedics and Disaster Surgery, Sechenov University, Moscow, 119991, Russia
- David Pogosyan
- Department of Traumatology, Orthopaedics and Disaster Surgery, Sechenov University, Moscow, 119991, Russia
- Department of Life Safety and Disaster Medicine, Sechenov University, Moscow, 119991, Russia
- Sergei Zhidkov
- N.V. Sklifosovsky Institute of Clinical Medicine, Sechenov University, Moscow, 119991, Russia
- Anastasia Bindeeva
- N.V. Sklifosovsky Institute of Clinical Medicine, Sechenov University, Moscow, 119991, Russia
- Xing-Jie Liang
- Laboratory of Clinical Smart Nanotechnologies, Institute for Regenerative Medicine, Sechenov University, Moscow, 119991, Russia
- CAS Key Laboratory for Biomedical Effects of Nanomaterials and Nanosafety, CAS Center for Excellence in Nanoscience, National Center for Nanoscience and Technology of China, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Vladimir Lasovskiy
- Department of Artificial Intelligence and Digital Products, VimpelCom, Moscow, 127083, Russia
- Victor Grinin
- Department of Artificial Intelligence and Digital Products, VimpelCom, Moscow, 127083, Russia
- Alexey Anosov
- Department of Artificial Intelligence and Digital Products, VimpelCom, Moscow, 127083, Russia
- Eugene Kalinsky
- Laboratory of Clinical Smart Nanotechnologies, Institute for Regenerative Medicine, Sechenov University, Moscow, 119991, Russia
- Department of Traumatology, Orthopaedics and Disaster Surgery, Sechenov University, Moscow, 119991, Russia
9. Liu TJ, Wang H, Christian M, Chang CW, Lai F, Tai HC. Automatic segmentation and measurement of pressure injuries using deep learning models and a LiDAR camera. Sci Rep 2023; 13:680. PMID: 36639395; PMCID: PMC9839689; DOI: 10.1038/s41598-022-26812-9.
Abstract
Pressure injuries are a common problem resulting in poor prognosis, long-term hospitalization, and increased medical costs in an aging society. This study developed a method for automatic segmentation and area measurement of pressure injuries using deep learning models and a light detection and ranging (LiDAR) camera. We selected 528 high-quality photographs of patients with pressure injuries at National Taiwan University Hospital from 2016 to 2020. The margins of the pressure injuries were labeled by three board-certified plastic surgeons, and the labeled photos were used to train Mask R-CNN and U-Net models for segmentation. After the segmentation model was constructed, we performed automatic wound area measurement via a LiDAR camera and conducted a prospective clinical study to test the accuracy of this system. For automatic wound segmentation, the performance of U-Net (Dice coefficient (DC): 0.8448) was better than that of Mask R-CNN (DC: 0.5006) in the external validation. In the prospective clinical study, we incorporated U-Net in our automatic wound area measurement system and obtained a 26.2% mean relative error compared with the traditional manual method. Our U-Net segmentation model and area measurement system achieved acceptable accuracy, making them applicable in clinical circumstances.
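The Dice coefficient (DC) used above to compare U-Net and Mask R-CNN measures the overlap between a predicted mask and the labeled ground truth, DC = 2|A ∩ B| / (|A| + |B|). A minimal sketch in Python over sets of pixel coordinates (the toy masks below are hypothetical, for illustration only):

```python
def dice_coefficient(pred: set, truth: set) -> float:
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for two binary masks,
    represented here as sets of segmented pixel coordinates."""
    if not pred and not truth:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

# Toy example: 4 predicted wound pixels, 4 ground-truth pixels, 3 shared.
pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 0), (0, 1), (1, 0), (2, 0)}
print(dice_coefficient(pred, truth))  # 2*3 / (4+4) = 0.75
```

A DC of 1.0 means perfect overlap, so the reported 0.8448 for U-Net versus 0.5006 for Mask R-CNN indicates substantially better boundary agreement for U-Net.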
Affiliation(s)
- Tom J. Liu
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- Division of Plastic Surgery, Department of Surgery, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City, Taiwan
- Hanwei Wang
- Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
- Mesakh Christian
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- Che-Wei Chang
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- Division of Plastic Reconstructive and Aesthetic Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei City, Taiwan
- Feipei Lai
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- Hao-Chih Tai
- National Taiwan University Hospital and College of Medicine, National Taiwan University, Taipei, Taiwan
10. Cellina M, Cè M, Irmici G, Ascenti V, Caloro E, Bianchi L, Pellegrino G, D'Amico N, Papa S, Carrafiello G. Artificial Intelligence in Emergency Radiology: Where Are We Going? Diagnostics (Basel) 2022; 12:3223. PMID: 36553230; PMCID: PMC9777804; DOI: 10.3390/diagnostics12123223.
Abstract
Emergency Radiology is a unique branch of imaging, as rapidity in the diagnosis and management of different pathologies is essential to saving patients' lives. Artificial Intelligence (AI) has many potential applications in emergency radiology: firstly, image acquisition can be facilitated by reducing acquisition times through automatic positioning and minimizing artifacts with AI-based reconstruction systems to optimize image quality, even in critical patients; secondly, it enables an efficient workflow (AI algorithms integrated with RIS-PACS workflow), by analyzing the characteristics and images of patients, detecting high-priority examinations and patients with emergent critical findings. Different machine and deep learning algorithms have been trained for the automated detection of different types of emergency disorders (e.g., intracranial hemorrhage, bone fractures, pneumonia), to help radiologists to detect relevant findings. AI-based smart reporting, summarizing patients' clinical data, and analyzing the grading of the imaging abnormalities, can provide an objective indicator of the disease's severity, resulting in quick and optimized treatment planning. In this review, we provide an overview of the different AI tools available in emergency radiology, to keep radiologists up to date on the current technological evolution in this field.
Affiliation(s)
- Michaela Cellina
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy
- Maurizio Cè
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Giovanni Irmici
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Velio Ascenti
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Elena Caloro
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Lorenzo Bianchi
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Giuseppe Pellegrino
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Natascha D'Amico
- Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Sergio Papa
- Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Gianpaolo Carrafiello
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Radiology Department, Fondazione IRCCS Cà Granda, Policlinico di Milano Ospedale Maggiore, Via Sforza 35, 20122 Milan, Italy
11. D'Angelo T, Caudo D, Blandino A, Albrecht MH, Vogl TJ, Gruenewald LD, Gaeta M, Yel I, Koch V, Martin SS, Lenga L, Muscogiuri G, Sironi S, Mazziotti S, Booz C. Artificial intelligence, machine learning and deep learning in musculoskeletal imaging: Current applications. J Clin Ultrasound 2022; 50:1414-1431. PMID: 36069404; DOI: 10.1002/jcu.23321.
Abstract
Artificial intelligence is rapidly expanding in all technological fields. The medical field, and especially diagnostic imaging, has been showing the highest developmental potential. Artificial intelligence aims at human intelligence simulation through the management of complex problems. This review describes the technical background of artificial intelligence, machine learning, and deep learning. The first section illustrates the general potential of artificial intelligence applications in the context of request management, data acquisition, image reconstruction, archiving, and communication systems. In the second section, the prospective of dedicated tools for segmentation, lesion detection, automatic diagnosis, and classification of musculoskeletal disorders is discussed.
Affiliation(s)
- Tommaso D'Angelo: Department of Biomedical Sciences and Morphological and Functional Imaging, University Hospital Messina, Messina, Italy; Department of Radiology and Nuclear Medicine, Rotterdam, Netherlands
- Danilo Caudo: Department of Biomedical Sciences and Morphological and Functional Imaging, University Hospital Messina, Messina, Italy; Department of Radiology, IRCCS Centro Neurolesi "Bonino Pulejo", Messina, Italy
- Alfredo Blandino: Department of Biomedical Sciences and Morphological and Functional Imaging, University Hospital Messina, Messina, Italy
- Moritz H Albrecht: Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Thomas J Vogl: Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Leon D Gruenewald: Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Michele Gaeta: Department of Biomedical Sciences and Morphological and Functional Imaging, University Hospital Messina, Messina, Italy
- Ibrahim Yel: Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Vitali Koch: Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Simon S Martin: Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Lukas Lenga: Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Giuseppe Muscogiuri: School of Medicine and Surgery, University of Milano-Bicocca, Milan, Italy; Department of Radiology, IRCCS Istituto Auxologico Italiano, San Luca Hospital, Milan, Italy
- Sandro Sironi: School of Medicine and Surgery, University of Milano-Bicocca, Milan, Italy; Department of Radiology, ASST Papa Giovanni XXIII Hospital, Bergamo, Italy
- Silvio Mazziotti: Department of Biomedical Sciences and Morphological and Functional Imaging, University Hospital Messina, Messina, Italy
- Christian Booz: Division of Experimental Imaging, Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
12. [Automatic identification algorithm of meniscus tear based on radiomics of knee MRI]. Zhongguo Xiu Fu Chong Jian Wai Ke Za Zhi (Chinese Journal of Reparative and Reconstructive Surgery) 2022; 36:1395-1399. [PMID: 36382458] [PMCID: PMC9681581] [DOI: 10.7507/1002-1892.202206016]
Abstract
OBJECTIVE To establish a classification model based on knee MRI radiomics that automatically identifies meniscus tears, providing a reference for the accurate diagnosis of meniscus injury. METHODS A total of 228 patients (246 knees) with meniscus injury admitted between July 2018 and March 2021 were selected as the study subjects. There were 146 males and 82 females, aged 9 to 76 years (median, 53 years). The meniscus injury involved one knee in 210 cases and both knees in 18 cases. All patients were confirmed by arthroscopy; 117 knees had meniscus tears and 129 knees had non-tear meniscus injuries. Sagittal MRI images of the proton density weighted-spectral attenuated inversion recovery (PDW-SPAIR) sequence were collected, and two doctors performed the radiomics analysis. The 246 knees were randomly divided into training and testing groups at a ratio of 7:3. First, ITK-SNAP 3.6.0 software was used to delineate the meniscus region of interest (ROI) and extract radiomic features. After retaining the radiomic features with an intraclass correlation coefficient (ICC) greater than 0.8, the max-relevance and min-redundancy (mRMR) algorithm and the least absolute shrinkage and selection operator (LASSO) were used to filter the features and establish an automatic identification model of meniscus tear. The receiver operating characteristic (ROC) curve and the corresponding area under the curve (AUC) were obtained, and the model performance was comprehensively evaluated by calculating accuracy, sensitivity, and specificity. RESULTS A total of 1316 radiomic features were extracted from the meniscus ROI; for 981 of them, both the intra- and inter-observer ICCs were greater than 0.80. mRMR eliminated the redundant information in these 981 features, retaining 20 features, and LASSO then selected the optimal subset of 8 features. The average intra- and inter-observer ICCs of these features were 0.942 and 0.920, respectively. The AUC of the training group was 0.889±0.036 [95% CI (0.845, 0.942), P<0.001], with accuracy, sensitivity, and specificity of 0.873, 0.869, and 0.842, respectively; the AUC of the testing group was 0.876±0.036 [95% CI (0.875, 0.984), P<0.001], with accuracy, sensitivity, and specificity of 0.862, 0.851, and 0.845, respectively. CONCLUSION The model established by the radiomics method shows good performance in the automatic identification of meniscus tears.
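The feature-selection cascade described above (ICC filter, then mRMR, then LASSO) can be sketched with scikit-learn on synthetic stand-in data. Two hedges: scikit-learn has no built-in mRMR, so a mutual-information ranking stands in for it here, and the array sizes, labels, and logistic evaluator are illustrative placeholders, not the paper's data or final model.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(172, 100))    # stand-in for ICC-filtered radiomic features (training knees)
y = rng.integers(0, 2, size=172)   # 1 = meniscus tear, 0 = non-tear injury

# Stage 1 (stand-in for mRMR): rank features by relevance to the label, keep 20.
kbest = SelectKBest(mutual_info_classif, k=20).fit(X, y)
X20 = kbest.transform(X)

# Stage 2: LASSO shrinks uninformative coefficients to exactly zero,
# leaving a sparse "optimal" subset.
lasso = LassoCV(cv=5, random_state=0).fit(X20, y)
keep = np.flatnonzero(lasso.coef_)
X_sel = X20[:, keep] if keep.size else X20  # fall back if everything is shrunk away

# Stage 3: score the retained subset with ROC AUC.
clf = LogisticRegression(max_iter=1000).fit(X_sel, y)
auc = roc_auc_score(y, clf.predict_proba(X_sel)[:, 1])
```

On real radiomic features the LASSO step would typically leave a handful of nonzero coefficients, mirroring the 20-to-8 reduction reported in the abstract.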
13. Peng Y, Zheng H, Liang P, Zhang L, Zaman F, Wu X, Sonka M, Chen DZ. KCB-Net: A 3D knee cartilage and bone segmentation network via sparse annotation. Med Image Anal 2022; 82:102574. [PMID: 36126403] [PMCID: PMC10515734] [DOI: 10.1016/j.media.2022.102574]
Abstract
Knee cartilage and bone segmentation is critical for physicians to analyze and diagnose articular damage and knee osteoarthritis (OA). Deep learning (DL) methods for medical image segmentation have largely outperformed traditional methods, but they often need large amounts of annotated data for model training, which is very costly and time-consuming for medical experts, especially on 3D images. In this paper, we report a new knee cartilage and bone segmentation framework, KCB-Net, for 3D MR images based on sparse annotation. KCB-Net selects a small subset of slices from 3D images for annotation, and seeks to bridge the performance gap between sparse annotation and full annotation. Specifically, it first identifies a subset of the most effective and representative slices with an unsupervised scheme; it then trains an ensemble model using the annotated slices; next, it self-trains the model using 3D images containing pseudo-labels generated by the ensemble method and improved by a bi-directional hierarchical earth mover's distance (bi-HEMD) algorithm; finally, it fine-tunes the segmentation results using the primal-dual Internal Point Method (IPM). Experiments on four 3D MR knee joint datasets (the SKI10 dataset, OAI ZIB dataset, Iowa dataset, and iMorphics dataset) show that our new framework outperforms state-of-the-art methods on full annotation, and yields high quality results for small annotation ratios even as low as 10%.
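The abstract does not spell out KCB-Net's unsupervised slice-selection scheme, but the idea of picking a small, representative annotation budget can be sketched generically: cluster per-slice descriptors and annotate the slice nearest each centroid. The k-means choice, feature dimensions, and 10% budget below are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
slice_feats = rng.normal(size=(160, 64))  # one descriptor vector per slice of a 3D MR volume

budget = 16  # annotate ~10% of slices, matching the low annotation ratios in the abstract
km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(slice_feats)

# For each cluster, pick the slice closest to the centroid as the one to annotate.
to_annotate = []
for c in range(budget):
    members = np.flatnonzero(km.labels_ == c)
    dists = np.linalg.norm(slice_feats[members] - km.cluster_centers_[c], axis=1)
    to_annotate.append(int(members[np.argmin(dists)]))
```

The selected indices would then receive expert labels, with the remaining slices covered by pseudo-labels during self-training.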
Affiliation(s)
- Yaopeng Peng: Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
- Hao Zheng: Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
- Peixian Liang: Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
- Lichun Zhang: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
- Fahim Zaman: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
- Xiaodong Wu: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
- Milan Sonka: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
- Danny Z Chen: Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
14. Droppelmann G, Tello M, García N, Greene C, Jorquera C, Feijoo F. Lateral elbow tendinopathy and artificial intelligence: Binary and multilabel findings detection using machine learning algorithms. Front Med (Lausanne) 2022; 9:945698. [PMID: 36213676] [PMCID: PMC9537568] [DOI: 10.3389/fmed.2022.945698]
Abstract
Background Ultrasound (US) is a valuable technique for detecting degenerative findings and intrasubstance tears in lateral elbow tendinopathy (LET). Machine learning methods can support this radiological diagnosis. Aim To assess binary and multilabel classification models using machine learning to detect degenerative findings and intrasubstance tears in US images with an LET diagnosis. Materials and methods A retrospective study was performed. US images and medical records from patients diagnosed with LET between January 1st, 2017, and December 30th, 2018, were selected. Datasets were built for training and testing the models. Image analysis comprised feature extraction covering texture characteristics, intensity distribution, pixel-to-pixel co-occurrence patterns, and scale granularity. Six supervised learning models were implemented for binary and multilabel classification. All models were trained to classify four tendon findings (hypoechogenicity, neovascularity, enthesopathy, and intrasubstance tear). Accuracy indicators and their confidence intervals (CI) were obtained for all models following a K-fold repeated cross-validation method. To measure multilabel prediction, multilabel accuracy, sensitivity, specificity, and receiver operating characteristic (ROC) curves with 95% CI were used. Results A total of 30,007 US images (4,324 exams, 2,917 patients) were included in the analysis. The random forest (RF) model presented the highest mean values of the area under the curve (AUC), sensitivity, and specificity for each degenerative finding in the binary classification. AUC and sensitivity were best for intrasubstance tear, at 0.991 [95% CI, 0.99, 0.99] and 0.775 [95% CI, 0.77, 0.77], respectively, while specificity was highest for hypoechogenicity, at 0.821 [95% CI, 0.82, 0.82]. In the multilabel classifier, RF also presented the highest performance: accuracy was 0.772 [95% CI, 0.771, 0.773], with macro- and micro-averaged AUC scores of 0.948 [95% CI, 0.94, 0.94] and 0.962 [95% CI, 0.96, 0.96], respectively. Diagnostic accuracy, sensitivity, and specificity with 95% CI were calculated. Conclusion Machine learning algorithms based on US images of LET presented high diagnostic accuracy. The random forest model showed the best performance in both binary and multilabel classifiers, particularly for intrasubstance tears.
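The evaluation protocol above (a random forest scored over four binary findings with repeated K-fold cross-validation) can be sketched as follows. The feature matrix, label columns, and fold counts are synthetic placeholders, not the study's data; scikit-learn's RandomForestClassifier handles the multilabel target natively.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 32))          # stand-in texture/intensity features per US image
Y = rng.integers(0, 2, size=(300, 4))   # hypoechogenicity, neovascularity, enthesopathy, tear

aucs = []
for train, test in RepeatedKFold(n_splits=5, n_repeats=2, random_state=0).split(X):
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[train], Y[train])
    # predict_proba returns one (n, 2) array per label; take the positive-class column of each
    proba = np.column_stack([p[:, 1] for p in rf.predict_proba(X[test])])
    # macro AUC averages the per-finding AUCs for this fold
    aucs.append(roc_auc_score(Y[test], proba, average="macro"))
mean_auc = float(np.mean(aucs))
```

Averaging the fold scores (and their spread) yields the kind of cross-validated accuracy indicators with confidence intervals the study reports.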
Affiliation(s)
- Guillermo Droppelmann (correspondence): Research Center on Medicine, Exercise, Sport and Health, MEDS Clinic, Santiago, RM, Chile; Health Sciences Ph.D. Program, Universidad Católica de Murcia UCAM, Murcia, Spain; Principles and Practice of Clinical Research (PPCR), Harvard T.H. Chan School of Public Health, Boston, MA, United States
- Manuel Tello: School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
- Nicolás García: MSK Diagnostic and Interventional Radiology Department, MEDS Clinic, Santiago, RM, Chile
- Cristóbal Greene: Hand and Elbow Unit, Department of Orthopaedic Surgery, MEDS Clinic, Santiago, RM, Chile
- Carlos Jorquera: Facultad de Ciencias, Escuela de Nutrición y Dietética, Universidad Mayor, Santiago, RM, Chile
- Felipe Feijoo: School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
15. Study of the Automatic Recognition of Landslides by Using InSAR Images and the Improved Mask R-CNN Model in the Eastern Tibet Plateau. Remote Sens 2022. [DOI: 10.3390/rs14143362]
Abstract
The development of landslide hazards is spatially scattered, temporally random, and poorly characterized. Given the advantages of the large spatial scale and high sensitivity of InSAR observations, InSAR is becoming one of the main techniques for active landslide identification. The difficult problem is how to quickly extract landslide information from extensive InSAR image data. Since the instance segmentation model Mask R-CNN can provide highly robust target recognition, we select the landslide-prone eastern edge of the Tibetan Plateau as a specific test area and introduce and optimize this model to achieve high-speed and accurate recognition of InSAR observations. First, the InSAR patch landslide instance segmentation dataset (SLD) is established by developing a Common Objects in Context (COCO) annotation format conversion code based on InSAR observations. Mask R-CNN+++ is then built by adding three components to the native Mask R-CNN network: a ResNeXt module to increase the fineness of the segmentation results and enhance the noise resistance of the model, a deformable convolutional block (DCB) to improve the network's ability to extract the geometric and morphological changes of landslide patches, and an attention mechanism to selectively enhance useful features and suppress less valuable ones. The model achieves 92.94% accuracy on the test set, and the active landslide recognition speed based on this model under ordinary computer hardware conditions is 72.3 km²/s. Overall, the optimized model effectively enhances the perceptibility of image morphological changes, resulting in smoother recognition boundaries and further improving the generalization ability of segmentation detection. This result is expected to serve the identification and monitoring of active landslides under complex surface conditions on a large spatial scale. Moreover, active landslides of different geometric features, motion patterns, and intensities are expected to be further segmented.
16. Li T, Wang Y, Qu Y, Dong R, Kang M, Zhao J. Feasibility study of hallux valgus measurement with a deep convolutional neural network based on landmark detection. Skeletal Radiol 2022; 51:1235-1247. [PMID: 34748073] [DOI: 10.1007/s00256-021-03939-w]
Abstract
OBJECTIVE To develop a deep learning algorithm based on automatic landmark detection that can automatically calculate forefoot imaging parameters from radiographs, and to test its performance. MATERIALS AND METHODS A total of 1023 weight-bearing dorsoplantar (DP) radiographs were included; 776 radiographs were used for training and verification of the model, and 247 for testing its performance. Radiologists manually marked 18 landmarks on each image. By training the model to automatically label these landmarks, 4 imaging parameters commonly used for the diagnosis of hallux valgus could be measured: the first-second intermetatarsal angle (IMA), hallux valgus angle (HVA), hallux interphalangeal angle (HIA), and distal metatarsal articular angle (DMAA). The radiologists' measurements served as the reference standard. The percentage of correct key points (PCK), intraclass correlation coefficient (ICC), Pearson correlation coefficient (r), root mean square error (RMSE), and mean absolute error (MAE) between the model's predictions and the reference standard were calculated, and Bland-Altman plots were used to assess the mean difference and 95% limits of agreement (LoA). RESULTS The PCK was 84-99% at the 3-mm threshold. The correlation between the observed and predicted values of the four angles was high (ICC: 0.89-0.96, r: 0.81-0.97, RMSE: 3.76-6.77, MAE: 3.22-5.52). However, there was a systematic error between the predicted values and the reference standard (the mean difference ranged from -3.00° to -5.08°, with standard deviations from 2.25° to 4.47°). CONCLUSION The model can accurately identify landmarks, but the angle measurements carry a certain amount of error and need further improvement.
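Once the 18 landmarks are predicted, each forefoot parameter reduces to the angle between two landmark-defined axes. A minimal sketch follows; the pixel coordinates and the landmark-to-angle pairing are hypothetical illustrations, not the paper's exact definitions.

```python
import numpy as np

def axis_angle_deg(p1, p2, q1, q2):
    """Acute angle (degrees) between the line p1-p2 and the line q1-q2."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    # abs() makes the result direction-independent (an axis, not a vector)
    cos = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))))

# Hypothetical pixel coordinates: e.g., a proximal phalanx axis vs. a first metatarsal axis.
hva = axis_angle_deg((0, 0), (0, 10), (0, 0), (3, 10))   # about 16.7 degrees
```

Clamping the cosine before arccos guards against floating-point values slightly above 1 for nearly parallel axes.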
Affiliation(s)
- Tong Li: The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
- Yuzhao Wang: College of Computer Science and Technology, Jilin University, Changchun, 130000, China
- Yang Qu: The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
- Rongpeng Dong: The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
- Mingyang Kang: The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
- Jianwu Zhao: The Second Hospital of Jilin University, Jilin University, Changchun, 130000, China
17. Li J, Qian K, Liu J, Huang Z, Zhang Y, Zhao G, Wang H, Li M, Liang X, Zhou F, Yu X, Li L, Wang X, Yang X, Jiang Q. Identification and diagnosis of meniscus tear by magnetic resonance imaging using a deep learning model. J Orthop Translat 2022; 34:91-101. [PMID: 35847603] [PMCID: PMC9253363] [DOI: 10.1016/j.jot.2022.05.006]
Abstract
Objective Meniscus tear is a common problem in sports trauma, and its imaging diagnosis mainly relies on MRI. To improve diagnostic accuracy and efficiency, a deep learning model was employed in this study and its identification performance was evaluated. Methods Standard knee MRI images from 924 individual patients were used to complete the training, validation, and testing processes. A mask region-based convolutional neural network (Mask R-CNN) was used to build the deep learning network structure, with ResNet50 as the backbone network. The model was trained and validated on datasets containing 504 and 220 patients, respectively. Internal testing was performed on a dataset of 200 patients, and 180 patients from 8 hospitals served as an external dataset for model validation. Additionally, 40 patients diagnosed by arthroscopic surgery were enrolled as the final test dataset. Results After training and validation, the deep learning model effectively recognized healthy and injured menisci. Average precision for the three types of menisci (healthy, torn, and degenerated) ranged from 68% to 80%. Diagnostic accuracy for healthy, torn, and degenerated menisci was 87.50%, 86.96%, and 84.78%, respectively. Validation on the external dataset demonstrated that the accuracy of diagnosing torn and intact menisci from 3.0 T MRI images was higher than 80%, while the accuracy verified by arthroscopic surgery was 87.50%. Conclusion Mask R-CNN effectively identified and diagnosed meniscal injuries, especially tears occurring in different parts of the meniscus. The recognition ability was strong, and the diagnostic accuracy could be further improved with a larger training sample. This deep learning model therefore shows great potential in diagnosing meniscus injuries. Translational potential of this article A deep learning model can reduce doctors' workload and improve diagnostic accuracy. Injured and healthy menisci could be more accurately identified and classified based on the training and learning datasets. The model could also distinguish torn from degenerated menisci, making it an effective tool for MRI-assisted diagnosis of meniscus injuries in clinical practice.
Affiliation(s)
- Jie Li: State Key Laboratory of Pharmaceutical Biotechnology, Division of Sports Medicine and Adult Reconstructive Surgery, Department of Orthopedic Surgery, Drum Tower Hospital Affiliated to Medical School of Nanjing University, China; School of Mechanical Engineering, Southeast University, China
- Kun Qian: Hangzhou Lancet Robotics Company Ltd, China
- Guoqian Zhao: Danyang Hospital of Traditional Chinese Medicine, China
- Huifen Wang: The Second People's Hospital of Xuanwei, China
- Meng Li: Cancer Hospital Chinese Academy of Medical Science, China
- Xiaohan Liang: The First Affiliated Hospital of Bengbu Medical College, China
- Xiuying Yu: Lin Yi Hospital of Traditional Chinese Medicine, China
- Lan Li: State Key Laboratory of Pharmaceutical Biotechnology, Division of Sports Medicine and Adult Reconstructive Surgery, Department of Orthopedic Surgery, Drum Tower Hospital Affiliated to Medical School of Nanjing University, China
- Xingsong Wang (corresponding author, No. 2 Southeast University Road, Nanjing, 210000, China): School of Mechanical Engineering, Southeast University, China
- Xianfeng Yang (corresponding author, No. 321 Zhongshan Road, Nanjing, 210000, China): Department of Radiology, Drum Tower Hospital Affiliated to Medical School of Nanjing University, China
- Qing Jiang: State Key Laboratory of Pharmaceutical Biotechnology, Division of Sports Medicine and Adult Reconstructive Surgery, Department of Orthopedic Surgery, Drum Tower Hospital Affiliated to Medical School of Nanjing University, China
18. Busnatu Ș, Niculescu AG, Bolocan A, Petrescu GED, Păduraru DN, Năstasă I, Lupușoru M, Geantă M, Andronic O, Grumezescu AM, Martins H. Clinical Applications of Artificial Intelligence-An Updated Overview. J Clin Med 2022; 11:2265. [PMID: 35456357] [PMCID: PMC9031863] [DOI: 10.3390/jcm11082265]
Abstract
Artificial intelligence has the potential to revolutionize modern society in all its aspects. Encouraged by the variety and vast amount of data that can be gathered from patients (e.g., medical images, text, and electronic health records), researchers have recently increased their interest in developing AI solutions for clinical care. Moreover, a diverse repertoire of methods can be chosen towards creating performant models for use in medical applications, ranging from disease prediction, diagnosis, and prognosis to opting for the most appropriate treatment for an individual patient. In this respect, the present paper aims to review the advancements reported at the convergence of AI and clinical care. Thus, this work presents AI clinical applications in a comprehensive manner, discussing the recent literature studies classified according to medical specialties. In addition, the challenges and limitations hindering AI integration in the clinical setting are further pointed out.
Affiliation(s)
- Ștefan Busnatu: “Carol Davila” University of Medicine and Pharmacy, 050474 Bucharest, Romania
- Adelina-Gabriela Niculescu: Department of Science and Engineering of Oxide Materials and Nanomaterials, Faculty of Applied Chemistry and Materials Science, Politehnica University of Bucharest, 011061 Bucharest, Romania
- Alexandra Bolocan: “Carol Davila” University of Medicine and Pharmacy, 050474 Bucharest, Romania
- George E. D. Petrescu: “Carol Davila” University of Medicine and Pharmacy, 050474 Bucharest, Romania
- Dan Nicolae Păduraru: “Carol Davila” University of Medicine and Pharmacy, 050474 Bucharest, Romania
- Iulian Năstasă: “Carol Davila” University of Medicine and Pharmacy, 050474 Bucharest, Romania
- Mircea Lupușoru: “Carol Davila” University of Medicine and Pharmacy, 050474 Bucharest, Romania
- Marius Geantă: Centre for Innovation in Medicine, “Carol Davila” University of Medicine and Pharmacy, 050474 Bucharest, Romania
- Octavian Andronic: “Carol Davila” University of Medicine and Pharmacy, 050474 Bucharest, Romania
- Alexandru Mihai Grumezescu (correspondence): Department of Science and Engineering of Oxide Materials and Nanomaterials, Faculty of Applied Chemistry and Materials Science, Politehnica University of Bucharest, 011061 Bucharest, Romania; Research Institute of the University of Bucharest (ICUB), University of Bucharest, 050657 Bucharest, Romania; Academy of Romanian Scientists, Ilfov No. 3, 50044 Bucharest, Romania
- Henrique Martins: Faculty of Health Sciences, Universidade da Beira Interior, 6200-506 Covilha, Portugal
19. Wang Y, Li Y, Huang M, Lai Q, Huang J, Chen J. Feasibility of Constructing an Automatic Meniscus Injury Detection Model Based on Dual-Mode Magnetic Resonance Imaging (MRI) Radiomics of the Knee Joint. Comput Math Methods Med 2022; 2022:2155132. [PMID: 35392588] [PMCID: PMC8983204] [DOI: 10.1155/2022/2155132]
Abstract
Objective To explore the feasibility of automatically detecting the degree of meniscus injury by fusing the radiomics features of dual-mode magnetic resonance imaging (MRI) of the sagittal and coronal planes of the knee joint. Methods This retrospective study included 164 arthroscopically confirmed meniscus injuries in 152 patients admitted to the Department of Orthopaedics of our hospital from July 2018 to March 2021. A total of 1316 radiomics features were extracted from the single-mode sagittal and coronal plane images of the menisci, respectively. The sagittal and coronal plane features were then fused to form a dual-mode joint feature group with a total of 2632 radiomics features. The minimum redundancy maximum relevance (mRMR) algorithm and least absolute shrinkage and selection operator (LASSO) regression were used to select features and generate optimal radiomics signatures. The performance of the single-mode sagittal plane feature model (Model 1), the single-mode coronal plane feature model (Model 2), and the combined sagittal and coronal plane feature model (Model 3) was tested with receiver operating characteristic (ROC) curves and the DeLong test. The calibration curve test was used to verify the reliability of the radiomics signatures of the three models. Results The average intra- and interobserver intraclass correlation coefficients (ICCs) of the 8 most significant radiomics features of Model 1 and Model 2 were 0.935 (range, 0.832-0.998) and 0.928 (range, 0.845-0.998), respectively. All three models had good detection performance, with Model 3 performing best (areas under the curve (AUCs) of 0.947 and 0.923 for the training and validation sets, respectively), superior to Model 1 (AUCs of 0.889 and 0.876, respectively) and Model 2 (AUCs of 0.831 and 0.851, respectively). The detection probabilities of the training and validation sets in the three models were highly consistent with the actual clinical probabilities. Conclusions It is feasible to establish a model for the automatic detection of meniscus damage by means of radiomics. The detection performance of the dual-mode knee MRI model is better than that of either single-mode model, showing strong feature analysis ability and outstanding detection performance.
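The dual-mode fusion described above amounts to concatenating the sagittal and coronal feature vectors per meniscus before selection and modeling. A minimal sketch on synthetic data follows; the dimensions and the logistic classifier are placeholders (the paper builds mRMR + LASSO radiomics signatures), intended only to show the single-mode vs. fused comparison.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 164                          # arthroscopically confirmed menisci
sag = rng.normal(size=(n, 50))   # stand-in sagittal-plane radiomics features
cor = rng.normal(size=(n, 50))   # stand-in coronal-plane radiomics features
y = rng.integers(0, 2, size=n)   # injury label (binarized here for illustration)

# Dual-mode joint feature group: per-sample concatenation of both planes.
fused = np.concatenate([sag, cor], axis=1)

def cv_auc(X):
    """Cross-validated AUC of a simple classifier on feature set X."""
    clf = LogisticRegression(max_iter=1000)
    return float(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

aucs = {"sagittal": cv_auc(sag), "coronal": cv_auc(cor), "fused": cv_auc(fused)}
```

On real radiomics data, comparing the three AUCs (and testing the differences with the DeLong test) reproduces the Model 1 / Model 2 / Model 3 comparison in the abstract.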
Affiliation(s)
- Yi Wang: Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Yuanzhe Li: Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Meiling Huang: Radiology Department, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Qingquan Lai: Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Jing Huang: Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
- Jiayang Chen: Radiology Department, Anxi Hospital of Traditional Chinese Medicine, Quanzhou 362400, China
20. Peng T, Wang C, Zhang Y, Wang J. H-SegNet: hybrid segmentation network for lung segmentation in chest radiographs using mask region-based convolutional neural network and adaptive closed polyline searching method. Phys Med Biol 2022; 67. [PMID: 35287125] [DOI: 10.1088/1361-6560/ac5d74]
Abstract
Chest x-ray (CXR) is one of the most commonly used imaging techniques for the detection and diagnosis of pulmonary diseases. One critical component in many computer-aided systems, for either detection or diagnosis in digital CXR, is the accurate segmentation of the lung. Due to low-intensity contrast around lung boundary and large inter-subject variance, it has been challenging to segment lung from structural CXR images accurately. In this work, we propose an automatic Hybrid Segmentation Network (H-SegNet) for lung segmentation on CXR. The proposed H-SegNet consists of two key steps: (1) an image preprocessing step based on a deep learning model to automatically extract coarse lung contours; (2) a refinement step to fine-tune the coarse segmentation results based on an improved principal curve-based method coupled with an improved machine learning method. Experimental results on several public datasets show that the proposed method achieves superior segmentation results in lung CXRs, compared with several state-of-the-art methods.
Affiliation(s)
- Tao Peng
- Department of Radiation Oncology, Medical Artificial Intelligence and Automation Laboratory, University of Texas Southwestern Medical Center, 2280 Inwood Road, Dallas, TX, United States of America
- Caishan Wang
- Department of Ultrasound, The Second Affiliated Hospital of Soochow University, Suzhou, Jiangsu, People's Republic of China
- You Zhang
- Department of Radiation Oncology, Medical Artificial Intelligence and Automation Laboratory, University of Texas Southwestern Medical Center, 2280 Inwood Road, Dallas, TX, United States of America
- Jing Wang
- Department of Radiation Oncology, Medical Artificial Intelligence and Automation Laboratory, University of Texas Southwestern Medical Center, 2280 Inwood Road, Dallas, TX, United States of America

21
Felfeliyan B, Hareendranathan A, Kuntze G, Jaremko JL, Ronsky JL. Improved-Mask R-CNN: Towards an Accurate Generic MSK MRI instance segmentation platform (Data from the Osteoarthritis Initiative). Comput Med Imaging Graph 2022; 97:102056. [DOI: 10.1016/j.compmedimag.2022.102056]
22
Siouras A, Moustakidis S, Giannakidis A, Chalatsis G, Liampas I, Vlychou M, Hantes M, Tasoulis S, Tsaopoulos D. Knee Injury Detection Using Deep Learning on MRI Studies: A Systematic Review. Diagnostics (Basel) 2022; 12:537. [PMID: 35204625] [PMCID: PMC8871256] [DOI: 10.3390/diagnostics12020537]
Abstract
Improved treatment of knee injuries critically relies on accurate and cost-effective detection. In recent years, deep-learning-based approaches have monopolized knee injury detection in MRI studies. The aim of this paper is to present the findings of a systematic literature review of papers on knee (anterior cruciate ligament, meniscus, and cartilage) injury detection using deep learning. The systematic review was carried out following the PRISMA guidelines on several databases, including PubMed, Cochrane Library, EMBASE, and Google Scholar. Appropriate metrics were chosen to interpret the results. The prediction accuracy of the deep-learning models for the identification of knee injuries ranged from 72.5% to 100%. Deep learning has the potential to perform on par with human-level performance in decision-making tasks related to the MRI-based diagnosis of knee injuries. The limitations of present deep-learning approaches include data imbalance, limited model generalizability across centers, verification bias, a lack of classification studies with more than two classes, and ground-truth subjectivity. There are several possible avenues for further exploration of deep learning to improve MRI-based knee injury diagnosis. Explainability and the lightweight design of deployed deep-learning systems are expected to become crucial enablers of their widespread use in clinical practice.
Affiliation(s)
- Athanasios Siouras
- Department of Computer Science and Biomedical Informatics, School of Science, University of Thessaly, 35131 Lamia, Greece
- Centre for Research and Technology Hellas, 38333 Volos, Greece
- Archontis Giannakidis
- School of Science and Technology, Nottingham Trent University, Nottingham NG11 8NS, UK
- Georgios Chalatsis
- Department of Orthopedic Surgery, Faculty of Medicine, University of Thessaly, 41500 Larissa, Greece
- Ioannis Liampas
- Department of Neurology, School of Medicine, University Hospital of Larissa, University of Thessaly, Mezourlo Hill, 41500 Larissa, Greece
- Marianna Vlychou
- Department of Radiology, School of Health Sciences, University Hospital of Larissa, University of Thessaly, Mezourlo, 41500 Larissa, Greece
- Michael Hantes
- Department of Orthopedic Surgery, Faculty of Medicine, University of Thessaly, 41500 Larissa, Greece
- Sotiris Tasoulis
- Department of Computer Science and Biomedical Informatics, School of Science, University of Thessaly, 35131 Lamia, Greece

23
Laur O, Wang B. Musculoskeletal trauma and artificial intelligence: current trends and projections. Skeletal Radiol 2022; 51:257-269. [PMID: 34089338] [DOI: 10.1007/s00256-021-03824-6]
Abstract
Musculoskeletal trauma accounts for a significant fraction of emergency department visits and patients seeking urgent care, with a high financial cost to society. Diagnostic imaging is indispensable in the workup and management of trauma patients. However, diagnostic imaging represents a complex multifaceted system, with many aspects of its workflow prone to inefficiencies or human error. Recent technological innovations in artificial intelligence and machine learning have shown promise to revolutionize our systems for providing medical care to patients. This review will provide a general overview of the current state of artificial intelligence and machine learning applications in different aspects of trauma imaging and provide a vision for how such applications could be leveraged to enhance our diagnostic imaging systems and optimize patient outcomes.
Affiliation(s)
- Olga Laur
- Division of Musculoskeletal Radiology, Department of Radiology, NYU Langone Health, 301 East 17th Street, 6th Floor, New York, NY, 10003, USA
- Benjamin Wang
- Division of Musculoskeletal Radiology, Department of Radiology, NYU Langone Health, 301 East 17th Street, 6th Floor, New York, NY, 10003, USA

24
Joseph GB, McCulloch CE, Sohn JH, Pedoia V, Majumdar S, Link TM. AI MSK clinical applications: cartilage and osteoarthritis. Skeletal Radiol 2022; 51:331-343. [PMID: 34735607] [DOI: 10.1007/s00256-021-03909-2]
Abstract
The advancements of artificial intelligence (AI) for osteoarthritis (OA) applications have been rapid in recent years, particularly innovations of deep learning for image classification, lesion detection, cartilage segmentation, and prediction modeling of future knee OA development. This review article focuses on AI applications in OA research, first describing machine learning (ML) techniques and workflow, followed by how these algorithms are used for OA classification tasks through imaging and non-imaging-based ML models. Deep learning applications for OA research, including analysis of both radiographs for automatic detection of OA severity, and MR images for detection of cartilage/meniscus lesions and cartilage segmentation for automatic T2 quantification will be described. In addition, information on ML models that identify individuals at high risk of OA development will be provided. The future vision of machine learning applications in imaging of OA and cartilage hinges on implementation of AI for optimizing imaging protocols, quantitative assessment of cartilage, and automated analysis of disease burden yielding a faster and more efficient workflow for a radiologist with a higher level of reproducibility and precision. It may also provide risk assessment tools for individual patients, which is an integral part of precision medicine.
Affiliation(s)
- Gabby B Joseph
- Department of Radiology and Biomedical Imaging, University of California, 185 Berry St, Suite 350, San Francisco, CA, 94158, USA.
- Charles E McCulloch
- Department of Epidemiology and Biostatistics, University of California, San Francisco, CA, USA
- Jae Ho Sohn
- Department of Radiology and Biomedical Imaging, University of California, 185 Berry St, Suite 350, San Francisco, CA, 94158, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging, University of California, 185 Berry St, Suite 350, San Francisco, CA, 94158, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California, 185 Berry St, Suite 350, San Francisco, CA, 94158, USA
- Thomas M Link
- Department of Radiology and Biomedical Imaging, University of California, 185 Berry St, Suite 350, San Francisco, CA, 94158, USA

25
Ni M, Wen X, Chen W, Zhao Y, Yuan Y, Zeng P, Wang Q, Wang Y, Yuan H. A Deep Learning Approach for MRI in the Diagnosis of Labral Injuries of the Hip Joint. J Magn Reson Imaging 2022; 56:625-634. [PMID: 35081273] [DOI: 10.1002/jmri.28069]
Abstract
BACKGROUND The diagnosis of labral injury on MRI is time-consuming and prone to incorrect diagnoses. PURPOSE To explore the feasibility of applying deep learning to diagnose and classify labral injuries with MRI. STUDY TYPE Retrospective. POPULATION A total of 1016 patients were divided into normal (n = 168, class 0) and abnormal labrum (n = 848) groups. The abnormal group consisted of n = 111 with class 1 (degeneration), n = 437 with class 2 (partial or complete tear), and n = 300 with unclassified injury. Patients were randomly divided into training, validation, and test cohorts in a 55%:15%:30% ratio. FIELD STRENGTH/SEQUENCE Fat-saturation proton density-weighted fast spin-echo sequence at 3.0 T. ASSESSMENT Convolutional neural network-6 (CNN-6) was used to extract, discriminate, and detect oblique coronal (OCOR) and oblique sagittal (OSAG) images. Mask R-CNN was used for segmentation, and LeNet-5 was used to diagnose and classify labral injuries. A weighting method combined the OCOR and OSAG models, and an output-input connection linked the whole diagnosis/classification system. Four radiologists performed subjective diagnoses for comparison. STATISTICAL TESTS CNN-6 and LeNet-5 were evaluated by the area under the receiver operating characteristic (ROC) curve and related parameters. The mean average precision (MAP) was used to evaluate Mask R-CNN. McNemar's test was used to compare the radiologists and the models. A P value < 0.05 was considered statistically significant. RESULTS The area under the curve (AUC) of CNN-6 was 0.99 for extraction, discrimination, and detection. MAP values of Mask R-CNN for OCOR and OSAG image segmentation were 0.96 and 0.99. The accuracies of LeNet-5 in diagnosis and classification were 0.94/0.94 (OCOR) and 0.92/0.91 (OSAG), respectively. The accuracies of the weighted models in diagnosis and classification were 0.94 and 0.97, respectively. The accuracies of radiologists in the diagnosis and classification of labral injuries ranged from 0.85 to 0.92 and 0.78 to 0.94, respectively. DATA CONCLUSION Deep learning can assist radiologists in diagnosing and classifying labral injuries. EVIDENCE LEVEL 3. TECHNICAL EFFICACY Stage 2.
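The STATISTICAL TESTS section above compares radiologists against the models with McNemar's test, which operates only on the discordant pairs of paired diagnoses. A minimal sketch of the exact two-sided version, with hypothetical discordant-pair counts (not this study's data):

```python
from math import comb

def mcnemar_exact_p(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value from discordant-pair counts:
    b = cases where method A is correct and method B is wrong, c = the reverse.
    Under the null hypothesis the discordant pairs follow Binomial(b + c, 0.5)."""
    n = b + c
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# hypothetical: model correct where a reader erred 8 times, the reverse 2 times
print(round(mcnemar_exact_p(8, 2), 4))  # → 0.1094
```

With so few discordant pairs the result is far from significance, which is why paired comparisons of this kind need a reasonable number of disagreement cases to demonstrate a difference.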
Affiliation(s)
- Ming Ni
- Department of Radiology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, China
- Xiaoyi Wen
- Institute of Statistics and Big Data, Renmin University of China, 59 Zhongguancun Street, Haidian District, Beijing, China
- Wen Chen
- Department of Radiology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, China
- Yuqing Zhao
- Department of Radiology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, China
- Yuan Yuan
- Department of Radiology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, China
- Piaoe Zeng
- Department of Radiology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, China
- Qizheng Wang
- Department of Radiology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, China
- Yong Wang
- Department of Radiology, He Bei Gu Cheng Xian Yi Yuan, 55 East Kangning Road, Zhengkou Town, Gucheng County, Hebei, China
- Huishu Yuan
- Department of Radiology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, China

26
Feng J, Jiang J. Deep Learning-Based Chest CT Image Features in Diagnosis of Lung Cancer. Comput Math Methods Med 2022; 2022:4153211. [PMID: 35096129] [PMCID: PMC8791752] [DOI: 10.1155/2022/4153211]
Abstract
This study evaluated the diagnostic value of deep-learning-optimized chest CT in patients with lung cancer. Ninety patients diagnosed with lung cancer by surgery or puncture biopsy were selected as the research subjects. The Mask Region-based Convolutional Neural Network (Mask R-CNN), a typical end-to-end image segmentation model, was applied, and a Dual Path Network (DPN) was used for nodule detection. The results showed that the accuracy of the DPN model in detecting lung lesions was 88.74%, while CT diagnosis of lung cancer achieved an accuracy of 88.37%, a sensitivity of 82.91%, and a specificity of 87.43%. Deep-learning-based CT examination combined with serum tumor marker detection, factoring in neuron-specific enolase (NSE), cytokeratin 19 fragment (CYFRA21), carcinoembryonic antigen (CEA), and squamous cell carcinoma (SCC) antigen, improved the accuracy to 97.94%, the sensitivity to 98.12%, and the specificity to 100%, all showing significant differences (P < 0.05). In conclusion, this study provides a scientific basis for improving the diagnostic efficiency of CT imaging in lung cancer and theoretical support for subsequent lung cancer diagnosis and treatment.
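The accuracy, sensitivity, and specificity figures quoted in this abstract are standard confusion-matrix summaries. A minimal sketch with hypothetical counts (illustration only, not the study's data):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Confusion-matrix summaries: sensitivity = TP/(TP+FN) is the detection
    rate among true positives; specificity = TN/(TN+FP) is the rate at which
    true negatives are correctly ruled out."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# hypothetical counts for a binary lung-lesion classifier
m = diagnostic_metrics(tp=78, fp=11, tn=76, fn=15)
print({k: round(v, 3) for k, v in m.items()})
```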
Affiliation(s)
- Jianxin Feng
- Department of Interventional Therapy, People's Hospital of Baoji, Baoji City, 721000 Shaanxi Province, China
- Jun Jiang
- Department of Interventional Therapy, People's Hospital of Baoji, Baoji City, 721000 Shaanxi Province, China

27
Deep Learning for Orthopedic Disease Based on Medical Image Analysis: Present and Future. Appl Sci (Basel) 2022. [DOI: 10.3390/app12020681]
Abstract
Since its development, deep learning has been quickly incorporated into the field of medicine and has had a profound impact. Since 2017, many studies applying deep-learning-based diagnostics in orthopedics have demonstrated outstanding performance. However, most published papers have focused on disease detection or classification, with comparatively unsatisfactory results reported in areas such as segmentation and prediction. This review introduces research published in the field of orthopedics, classified by disease from the perspective of orthopedic surgeons, and discusses areas for future research. The paper gives orthopedic surgeons an overall understanding of artificial-intelligence-based image analysis, emphasizes that medical data should be treated with minimal bias, and provides developers and researchers with insight into the real-world context in which clinicians are adopting medical artificial intelligence.
28
Fritz B, Fritz J. Artificial intelligence for MRI diagnosis of joints: a scoping review of the current state-of-the-art of deep learning-based approaches. Skeletal Radiol 2022; 51:315-329. [PMID: 34467424] [PMCID: PMC8692303] [DOI: 10.1007/s00256-021-03830-8]
Abstract
Deep learning-based MRI diagnosis of internal joint derangement is an emerging field of artificial intelligence, which offers many exciting possibilities for musculoskeletal radiology. A variety of investigational deep learning algorithms have been developed to detect anterior cruciate ligament tears, meniscus tears, and rotator cuff disorders. Additional deep learning-based MRI algorithms have been investigated to detect Achilles tendon tears, recurrence prediction of musculoskeletal neoplasms, and complex segmentation of nerves, bones, and muscles. Proof-of-concept studies suggest that deep learning algorithms may achieve similar diagnostic performances when compared to human readers in meta-analyses; however, musculoskeletal radiologists outperformed most deep learning algorithms in studies including a direct comparison. Earlier investigations and developments of deep learning algorithms focused on the binary classification of the presence or absence of an abnormality, whereas more advanced deep learning algorithms start to include features for characterization and severity grading. While many studies have focused on comparing deep learning algorithms against human readers, there is a paucity of data on the performance differences of radiologists interpreting musculoskeletal MRI studies without and with artificial intelligence support. Similarly, studies demonstrating the generalizability and clinical applicability of deep learning algorithms using realistic clinical settings with workflow-integrated deep learning algorithms are sparse. Contingent upon future studies showing the clinical utility of deep learning algorithms, artificial intelligence may eventually translate into clinical practice to assist detection and characterization of various conditions on musculoskeletal MRI exams.
Affiliation(s)
- Benjamin Fritz
- Department of Radiology, Balgrist University Hospital, Forchstrasse 340, CH-8008 Zurich, Switzerland; Faculty of Medicine, University of Zurich, Zurich, Switzerland
- Jan Fritz
- New York University Grossman School of Medicine, New York University, New York, NY 10016, USA

29
Zhang Y, Chan S, Park VY, Chang KT, Mehta S, Kim MJ, Combs FJ, Chang P, Chow D, Parajuli R, Mehta RS, Lin CY, Chien SH, Chen JH, Su MY. Automatic Detection and Segmentation of Breast Cancer on MRI Using Mask R-CNN Trained on Non-Fat-Sat Images and Tested on Fat-Sat Images. Acad Radiol 2022; 29 Suppl 1:S135-S144. [PMID: 33317911] [PMCID: PMC8192591] [DOI: 10.1016/j.acra.2020.12.001]
Abstract
RATIONALE AND OBJECTIVES Computer-aided methods have been widely applied to diagnose lesions on breast magnetic resonance imaging (MRI); the first step is to identify abnormal areas. A deep learning Mask Regional Convolutional Neural Network (R-CNN) was implemented to search entire sets of images and detect suspicious lesions. MATERIALS AND METHODS Two DCE-MRI datasets were used: 241 patients acquired using a non-fat-sat sequence for training, and 98 patients acquired using a fat-sat sequence for testing. All patients had confirmed unilateral mass cancers. The tumor was segmented using the fuzzy c-means clustering algorithm to serve as the ground truth. Mask R-CNN was implemented with ResNet-101 as the backbone. The neural network output the bounding boxes and the segmented tumor for evaluation using the Dice Similarity Coefficient (DSC). The detection performance, and the trade-off between sensitivity and specificity, was analyzed using free-response receiver operating characteristic analysis. RESULTS When the precontrast and subtraction images of both breasts were used as input, false positives from the heart and normal parenchymal enhancement could be minimized. The training set had 1469 positive slices (containing a lesion) and 9135 negative slices. In 10-fold cross-validation, the mean accuracy was 0.86 and the DSC was 0.82. The testing dataset had 1568 positive and 7264 negative slices, with an accuracy of 0.75 and a DSC of 0.79. When the per-slice results were combined, 240 of 241 (99.5%) lesions in the training and 98 of 98 (100%) lesions in the testing datasets were identified. CONCLUSION Deep learning using Mask R-CNN provided a feasible method to search breast MRI and to localize and segment lesions. It may be integrated with other artificial intelligence algorithms to develop a fully automatic breast MRI diagnostic system.
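The Dice Similarity Coefficient used here to score the segmented tumors is 2|A ∩ B| / (|A| + |B|) over the predicted and ground-truth masks. A minimal sketch on toy masks stored as sets of pixel coordinates (not study data):

```python
def dice_coefficient(mask_a: set, mask_b: set) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks stored as sets of
    (row, col) pixel coordinates; two empty masks count as perfect overlap."""
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# toy 2x2 predicted mask vs. a ground truth missing one pixel
pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 0), (0, 1), (1, 0)}
print(round(dice_coefficient(pred, truth), 3))  # → 0.857
```

Unlike plain pixel accuracy, the DSC ignores the (usually enormous) background region, which is why it is the standard overlap score for small lesions.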
Affiliation(s)
- Yang Zhang
- Department of Radiological Sciences, University of California, Irvine, CA, United States
- Siwa Chan
- Department of Medical Imaging, Taichung Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taichung, Taiwan; School of Medicine, Tzu Chi University, Hualien, Taiwan
- Vivian Youngjean Park
- Department of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Kai-Ting Chang
- Department of Radiological Sciences, University of California, Irvine, CA, United States
- Siddharth Mehta
- Department of Radiological Sciences, University of California, Irvine, CA, United States
- Min Jung Kim
- Department of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Freddie J. Combs
- Department of Radiological Sciences, University of California, Irvine, CA, United States
- Peter Chang
- Department of Radiological Sciences, University of California, Irvine, CA, United States
- Daniel Chow
- Department of Radiological Sciences, University of California, Irvine, CA, United States
- Ritesh Parajuli
- Department of Medicine, University of California, Irvine, CA, United States
- Rita S. Mehta
- Department of Medicine, University of California, Irvine, CA, United States
- Chin-Yao Lin
- Department of Medical Imaging, Taichung Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taichung, Taiwan; School of Medicine, Tzu Chi University, Hualien, Taiwan
- Sou-Hsin Chien
- Department of Medical Imaging, Taichung Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taichung, Taiwan; School of Medicine, Tzu Chi University, Hualien, Taiwan
- Jeon-Hor Chen
- Department of Radiological Sciences, University of California, Irvine, CA, United States; Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung, Taiwan
- Min-Ying Su
- Department of Radiological Sciences, University of California, Irvine, CA, United States; John Tu and Thomas Yuen Center for Functional Onco-Imaging, 164 Irvine Hall, University of California, Irvine, CA 92697-5020, USA (corresponding author)

30
Detection and Classification of Knee Injuries from MR Images Using the MRNet Dataset with Progressively Operating Deep Learning Methods. Mach Learn Knowl Extr 2021. [DOI: 10.3390/make3040050]
Abstract
This study aimed to build progressively operating deep learning models that could detect meniscus injuries, anterior cruciate ligament (ACL) tears, and knee abnormalities in magnetic resonance imaging (MRI). The Stanford Machine Learning Group MRNet dataset was employed, which includes MRI image indexes in the coronal, sagittal, and axial axes, each having 1130 training and 120 validation items. The study is divided into three sections. In the first section, suitable images are selected to determine the disease in the image index based on the disorder under examination; this section also identifies images that have been misclassified or are so noisy and/or damaged that they cannot be used for diagnosis. The 50-layer residual network (ResNet50) model was employed here. The second section involves locating the region to focus on, based on the disorder targeted for diagnosis in the image under examination; a novel model was built by integrating convolutional neural network (CNN) and denoising autoencoder models. The third section is dedicated to diagnosing the disease, with a separate ResNet50 model trained to identify diagnoses or abnormalities, independent of the ResNet50 model used in the first section. The models are referred to as progressively operating deep learning methods because the images each model selects as output after training are supplied as input to the following model.
31
Felfeliyan B, Hareendranathan A, Kuntze G, Jaremko J, Ronsky J. MRI Knee Domain Translation for Unsupervised Segmentation By CycleGAN (data from Osteoarthritis initiative (OAI)). Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:4052-4055. [PMID: 34892119] [DOI: 10.1109/embc46164.2021.9629705]
Abstract
Accurate quantification of bone and cartilage features is the key to efficient management of knee osteoarthritis (OA). Bone and cartilage tissues can be accurately segmented from magnetic resonance imaging (MRI) data using supervised deep learning (DL) methods. DL training is commonly conducted on large datasets with expert-labeled annotations, and DL models perform better when the distributions of testing data (target domains) are close to those of training data (source domains). In practice, however, data distributions of images from different MRI scanners and sequences differ, and DL models need to be retrained on each dataset separately. We propose a domain adaptation (DA) framework using the CycleGAN model for MRI translation to aid unsupervised MRI data segmentation. We validated our pipeline on five scans from the Osteoarthritis Initiative (OAI) dataset, translating TSE fat-suppressed MRI sequences into pseudo-DESS images. An improved Mask R-CNN (I-MaskRCNN) instance segmentation network trained on DESS was used to segment cartilage and femoral head regions in the TSE fat-suppressed sequences. Segmentations of the I-MaskRCNN correlated well with approximate manual segmentations obtained from the nearest DESS slices (DICE = 0.76) without the need for retraining. We anticipate this technique will aid automatic unsupervised assessment of knee MRI using commonly acquired MRI sequences and save the expert time that would otherwise be required for manual segmentation. Clinical relevance: This technique paves the way to automatically converting one MRI sequence to its equivalent as if acquired by a different protocol or different magnet, facilitating robust, hardware-independent automated analysis. For example, routine clinically acquired knee MRI could be converted to high-resolution, high-contrast images suitable for automated detection of cartilage defects.
32
Tack A, Shestakov A, Lüdke D, Zachow S. A Multi-Task Deep Learning Method for Detection of Meniscal Tears in MRI Data from the Osteoarthritis Initiative Database. Front Bioeng Biotechnol 2021; 9:747217. [PMID: 34926416] [PMCID: PMC8675251] [DOI: 10.3389/fbioe.2021.747217]
Abstract
We present a novel and computationally efficient method for the detection of meniscal tears in Magnetic Resonance Imaging (MRI) data. Our method is based on a Convolutional Neural Network (CNN) that operates on complete 3D MRI scans. Our approach detects the presence of meniscal tears in three anatomical sub-regions (anterior horn, body, posterior horn) for both the Medial Meniscus (MM) and the Lateral Meniscus (LM) individually. For optimal performance of our method, we investigate how to preprocess the MRI data and how to train the CNN such that only relevant information within a Region of Interest (RoI) of the data volume is taken into account for meniscal tear detection. We propose meniscal tear detection combined with a bounding box regressor in a multi-task deep learning framework to let the CNN implicitly consider the corresponding RoIs of the menisci. We evaluate the accuracy of our CNN-based meniscal tear detection approach on 2,399 Double Echo Steady-State (DESS) MRI scans from the Osteoarthritis Initiative database. In addition, to show that our method is capable of generalizing to other MRI sequences, we also adapt our model to Intermediate-Weighted Turbo Spin-Echo (IW TSE) MRI scans. To judge the quality of our approaches, Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) values are evaluated for both MRI sequences. For the detection of tears in DESS MRI, our method reaches AUC values of 0.94, 0.93, 0.93 (anterior horn, body, posterior horn) in MM and 0.96, 0.94, 0.91 in LM. For the detection of tears in IW TSE MRI data, our method yields AUC values of 0.84, 0.88, 0.86 in MM and 0.95, 0.91, 0.90 in LM. In conclusion, the presented method achieves high accuracy for detecting meniscal tears in both DESS and IW TSE MRI data. Furthermore, our method can be easily trained and applied to other MRI sequences.
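The AUC values reported above have a direct rank interpretation: the probability that a randomly chosen positive case (a torn sub-region) is scored higher than a randomly chosen negative one. A minimal sketch with toy scores (not study data):

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney statistic: the fraction of (positive, negative)
    pairs the model ranks correctly, counting ties as half a win."""
    pairs = len(pos_scores) * len(neg_scores)
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / pairs

# toy scores: 3 torn vs. 3 intact cases, with one pair ranked incorrectly
print(round(roc_auc([0.9, 0.8, 0.7], [0.6, 0.75, 0.2]), 3))  # → 0.889
```

This pairwise formulation is equivalent to integrating the ROC curve and makes the threshold-free nature of the metric explicit.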
Affiliation(s)
- Alexander Tack
- Dept. for Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany
- Alexey Shestakov
- Dept. for Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany
- David Lüdke
- Dept. for Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany
- Stefan Zachow
- Dept. for Visual and Data-Centric Computing, Zuse Institute Berlin, Berlin, Germany
- Charité–University Medicine, Berlin, Germany

33
Chang CW, Lai F, Christian M, Chen YC, Hsu C, Chen YS, Chang DH, Roan TL, Yu YC. Deep Learning-Assisted Burn Wound Diagnosis: Diagnostic Model Development Study. JMIR Med Inform 2021; 9:e22798. [PMID: 34860674] [PMCID: PMC8686480] [DOI: 10.2196/22798]
Abstract
BACKGROUND Accurate assessment of the percentage total body surface area (%TBSA) of burn wounds is crucial in the management of burn patients. The resuscitation fluid and nutritional needs of burn patients, their need for intensive care, and their probability of mortality are all directly related to %TBSA. It is difficult to estimate a burn area of irregular shape by inspection, and many articles have reported discrepancies in %TBSA estimated by different doctors. OBJECTIVE We propose a method, based on deep learning, for burn wound detection, segmentation, and calculation of %TBSA on a pixel-to-pixel basis. METHODS A 2-step procedure was used to convert burn wound diagnosis into %TBSA. In the first step, images of burn wounds were collected from medical records and labeled by burn surgeons, and the data set was then input into 2 deep learning architectures, U-Net and Mask R-CNN, each configured with 2 different backbones, to segment the burn wounds. In the second step, we collected and labeled images of hands to create another data set, which was also input into U-Net and Mask R-CNN to segment the hands. The %TBSA of burn wounds was then calculated by comparing the pixels of mask areas on images of the burn wound and hand of the same patient according to the rule of hand, which states that one's hand accounts for 0.8% of TBSA. RESULTS A total of 2591 images of burn wounds were collected and labeled to form the burn wound data set, which was randomly split into training, validation, and testing sets in a ratio of 8:1:1. Four hundred images of volar hands were collected and labeled to form the hand data set, which was split into 3 sets in the same way. For the images of burn wounds, Mask R-CNN with ResNet101 had the best segmentation result, with a Dice coefficient (DC) of 0.9496, while U-Net with ResNet101 had a DC of 0.8545. For the hand images, U-Net and Mask R-CNN had similar performance, with DC values of 0.9920 and 0.9910, respectively. Lastly, we conducted a test diagnosis in a burn patient: Mask R-CNN with ResNet101 had on average less deviation (0.115% TBSA) from the ground truth than the burn surgeons. CONCLUSIONS This is one of the first studies to diagnose all depths of burn wounds and convert the segmentation results into %TBSA using different deep learning models. We aimed to assist medical staff in estimating burn size more accurately, thereby helping to provide precise care to burn victims.
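Once the two segmentation masks exist, the rule-of-hand conversion described in METHODS reduces to a pixel ratio, and the Dice coefficient (DC) used for evaluation is a two-line overlap measure. A hedged sketch (the helper names are ours; the 0.8% constant is the study's rule of hand):

```python
import numpy as np

RULE_OF_HAND = 0.8  # one hand ≈ 0.8% of total body surface area (per the study)

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def percent_tbsa(burn_mask, hand_mask):
    """Scale burn pixels by the same patient's hand area (rule of hand)."""
    return RULE_OF_HAND * np.count_nonzero(burn_mask) / np.count_nonzero(hand_mask)
```

In practice the two masks come from images at comparable scale; the ratio cancels camera resolution only if burn and hand are photographed at the same distance.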
Affiliation(s)
- Che Wei Chang
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan; Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Feipei Lai
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Mesakh Christian
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
- Yu Chun Chen
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
- Ching Hsu
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Yo Shen Chen
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Dun Hao Chang
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan; Department of Information Management, Yuan Ze University, Chung-Li, Taiwan
- Tyng Luen Roan
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Yen Che Yu
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
34
Federer SJ, Jones GG. Artificial intelligence in orthopaedics: A scoping review. PLoS One 2021; 16:e0260471. [PMID: 34813611 PMCID: PMC8610245 DOI: 10.1371/journal.pone.0260471] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Accepted: 11/11/2021] [Indexed: 11/19/2022] Open
Abstract
There is a growing interest in the application of artificial intelligence (AI) to orthopaedic surgery. This review aims to identify and characterise research in this field, in order to understand the extent, range and nature of this work, and to act as a springboard to stimulate future studies. A scoping review, a form of structured evidence synthesis, was conducted to summarise the use of AI in orthopaedics. A literature search (1946-2019) identified 222 studies eligible for inclusion. These studies were predominantly small and retrospective. There has been significant growth in the number of papers published in the last three years, mainly from the USA (37%). The majority of research used AI for image interpretation (45%) or as a clinical decision tool (25%). The spine (43%), knee (23%) and hip (14%) were the regions of the body most commonly studied. The application of artificial intelligence to orthopaedics is growing. However, the scope of its use so far remains limited, both in terms of its possible clinical applications and the sub-specialty areas of the body which have been studied. A standardised method of reporting AI studies would allow direct assessment and comparison. Prospective studies are required to validate AI tools for clinical use.
Affiliation(s)
- Simon J. Federer
- MSk Lab, Sir Michael Uren Hub, Imperial College London, London, United Kingdom
- Gareth G. Jones
- MSk Lab, Sir Michael Uren Hub, Imperial College London, London, United Kingdom
35
Kassim YM, Yang F, Yu H, Maude RJ, Jaeger S. Diagnosing Malaria Patients with Plasmodium falciparum and vivax Using Deep Learning for Thick Smear Images. Diagnostics (Basel) 2021; 11:1994. [PMID: 34829341 PMCID: PMC8621537 DOI: 10.3390/diagnostics11111994] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2021] [Revised: 10/01/2021] [Accepted: 10/11/2021] [Indexed: 12/25/2022] Open
Abstract
We propose a new framework, PlasmodiumVF-Net, to analyze thick smear microscopy images for malaria diagnosis at both the image and patient level. Our framework detects whether a patient is infected and, in case of a malarial infection, reports whether the patient is infected by Plasmodium falciparum or Plasmodium vivax. PlasmodiumVF-Net first detects candidates for Plasmodium parasites using a Mask Region-based Convolutional Neural Network (Mask R-CNN), filters out false positives using a ResNet50 classifier, and then follows a new approach to recognize parasite species based on a score obtained from the number of detected patches and their aggregated probabilities across all of the patient's images. Reporting a patient-level decision is highly challenging, and therefore reported less often in the literature, due to the small size of detected parasites, their similarity to staining artifacts, the similarity of species in different developmental stages, and patient-level illumination or color variations. We use a manually annotated dataset consisting of 350 patients, with about 6000 images, which we make publicly available together with this manuscript. Our framework achieves an overall accuracy above 90% at both the image and patient level.
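The patient-level scoring step, as described, combines the number of retained patches with their aggregated probabilities across all of a patient's images. A toy sketch of one plausible aggregation rule (the thresholds and the scoring formula here are our assumptions, not the paper's):

```python
def patient_decision(image_patch_probs, count_thresh=10, prob_thresh=0.5):
    """Aggregate per-patch parasite probabilities across all of a patient's
    images into one infected/uninfected call.

    `count_thresh` and `prob_thresh` are hypothetical values; the paper's
    score similarly combines patch counts with aggregated probabilities.
    """
    # Flatten patch probabilities from every image of the patient.
    all_probs = [p for image in image_patch_probs for p in image]
    positives = [p for p in all_probs if p >= prob_thresh]
    # Score = detection count plus summed confidence of detections.
    score = len(positives) + sum(positives)
    return score >= count_thresh, score
```

A real pipeline would apply the species-recognition step only to patients called infected.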
Affiliation(s)
- Yasmin M. Kassim
- National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
- Feng Yang
- National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
- Hang Yu
- National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
- Richard J. Maude
- Mahidol-Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok 10400, Thailand
- Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford OX3 7LG, UK
- Harvard TH Chan School of Public Health, Harvard University, Boston, MA 02115, USA
- Stefan Jaeger
- National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
36
Breast nodule classification with two-dimensional ultrasound using Mask-RCNN ensemble aggregation. Diagn Interv Imaging 2021; 102:653-658. [PMID: 34600861 DOI: 10.1016/j.diii.2021.09.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Revised: 09/10/2021] [Accepted: 09/10/2021] [Indexed: 12/30/2022]
Abstract
PURPOSE The purpose of this study was to create a deep learning algorithm to infer the benign or malignant nature of breast nodules using two-dimensional B-mode ultrasound data initially marked as BI-RADS 3 and 4. MATERIALS AND METHODS An ensemble of mask region-based convolutional neural networks (Mask-RCNN) combining nodule segmentation and classification was trained to explicitly localize the nodule and generate a probability of the nodule being malignant on two-dimensional B-mode ultrasound. These probabilities were aggregated at test time to produce final results. Resulting inferences were assessed using the area under the curve (AUC). RESULTS A total of 460 ultrasound images of breast nodules classified as BI-RADS 3 or 4 were included. There were 295 benign and 165 malignant breast nodules used for training and validation, and another 137 breast nodule images used for testing. As part of the challenge, the distribution of benign and malignant breast nodules in the test database remained unknown. The obtained AUC was 0.69 (95% CI: 0.57-0.82) on the training set and 0.67 on the test set. CONCLUSION The proposed deep learning solution helps classify benign and malignant breast nodules based solely on two-dimensional ultrasound images initially marked as BI-RADS 3 and 4.
37
Lassau N, Bousaid I, Chouzenoux E, Verdon A, Balleyguier C, Bidault F, Mousseaux E, Harguem-Zayani S, Gaillandre L, Bensalah Z, Doutriaux-Dumoulin I, Monroc M, Haquin A, Ceugnart L, Bachelle F, Charlot M, Thomassin-Naggara I, Fourquet T, Dapvril H, Orabona J, Chamming's F, El Haik M, Zhang-Yin J, Guillot MS, Ohana M, Caramella T, Diascorn Y, Airaud JY, Cuingnet P, Gencer U, Lawrance L, Luciani A, Cotten A, Meder JF. Three artificial intelligence data challenges based on CT and ultrasound. Diagn Interv Imaging 2021; 102:669-674. [PMID: 34312111 DOI: 10.1016/j.diii.2021.06.005] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 06/21/2021] [Accepted: 06/23/2021] [Indexed: 12/18/2022]
Abstract
PURPOSE The 2020 edition of these Data Challenges was organized by the French Society of Radiology (SFR) from September 28 to September 30, 2020. The goals were to propose innovative artificial intelligence solutions for current relevant problems in radiology and to build a large database of multimodal ultrasound and computed tomography (CT) medical images on these subjects from several French radiology centers. MATERIALS AND METHODS This year the aim was to create data challenge objectives in line with the clinical routine of radiologists, with less preprocessing of data and annotation, leaving a large part of the preprocessing task to the participating teams. The objectives were proposed by the different organizations depending on their core areas of expertise. A dedicated platform was used to upload the medical image data and to automatically anonymize it. RESULTS Three challenges were proposed: classification of benign or malignant breast nodules on ultrasound examinations, detection and contouring of pathological neck lymph nodes on cervical CT examinations, and classification of the calcium score of coronary calcifications on thoracic CT examinations. A total of 2076 medical examinations were included in the database for the three challenges, in three months, by 18 different centers, of which 12% were excluded. The 39 participants were divided into six multidisciplinary teams; the coronary calcification score challenge was solved with a concordance index > 95%, and the other two with scores of 67% (breast nodule classification) and 63% (neck lymph nodes).
Affiliation(s)
- Nathalie Lassau
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- Imad Bousaid
- Direction de la Transformation Numérique et des Systèmes d'Information, Institut Gustave Roussy, 94800 Villejuif, France
- Antoine Verdon
- Direction de la Transformation Numérique et des Systèmes d'Information, Institut Gustave Roussy, 94800 Villejuif, France
- Corinne Balleyguier
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- François Bidault
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- Elie Mousseaux
- Unité Fonctionnelle d'Imagerie Cardiovasculaire Non Invasive, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
- Sana Harguem-Zayani
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- Loic Gaillandre
- Centre Libéral d'Imagerie Médicale Agglomération Lille, 59800 Lille, France
- Zoubir Bensalah
- Department of Radiology, Centre Hospitalier St Jean, 66000 Perpignan, France
- Michèle Monroc
- Department of Radiology, Clinique Saint Antoine, 76230 Bois-Guillaume, France
- Audrey Haquin
- Department of Radiology, Hôpital de la Croix-Rousse - HCL, 69004 Lyon, France
- Luc Ceugnart
- Department of Radiology, Centre Oscar Lambret, 59000 Lille, France
- Mathilde Charlot
- Department of Radiology, Hôpital Lyon Sud - HCL, 69310 Pierre-Bénite, France
- Tiphaine Fourquet
- Department of Radiology, Centre Hospitalier Universitaire de Lille, 59000 Lille, France
- Héloise Dapvril
- Service d'Imagerie de la Femme, Centre Hospitalier de Valenciennes, 59300 Valenciennes, France
- Joseph Orabona
- Department of Radiology, Centre Hospitalier de Bastia, 20600 Bastia, France
- Mickael El Haik
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
- Jules Zhang-Yin
- Department of Radiology, Hôpital Tenon, AP-HP, 75020 Paris, France
- Marc-Samir Guillot
- Unité Fonctionnelle d'Imagerie Cardiovasculaire Non Invasive, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
- Mickaël Ohana
- Department of Radiology, Centre Hospitalier Universitaire de Strasbourg, 67200 Strasbourg, France
- Thomas Caramella
- Department of Radiology, Institut Arnault Tzanck, 06700 Saint-Laurent du Var, France
- Yann Diascorn
- Department of Radiology, Institut Arnault Tzanck, 06700 Saint-Laurent du Var, France
- Philippe Cuingnet
- Department of Radiology, Centre Hospitalier de Douai, 59507 Douai, France
- Umit Gencer
- Unité Fonctionnelle d'Imagerie Cardiovasculaire Non Invasive, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
- Littisha Lawrance
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France
- Alain Luciani
- Collège des Enseignants de Radiologie de France, 75013 Paris, France; Department of Radiology, Centre Hospitalier Henri Mondor, 94000 Créteil, France
- Anne Cotten
- Musculoskeletal Imaging Department, Lille Regional University Hospital, 59000 Lille, France
- Jean-François Meder
- Department of Neuroradiology, Centre Hospitalier Sainte-Anne, 75014 Paris, France; Université de Paris, Faculté de Médecine, 75006 Paris, France
38
Durand WM, Lafage R, Hamilton DK, Passias PG, Kim HJ, Protopsaltis T, Lafage V, Smith JS, Shaffrey C, Gupta M, Kelly MP, Klineberg EO, Schwab F, Gum JL, Mundis G, Eastlack R, Kebaish K, Soroceanu A, Hostin RA, Burton D, Bess S, Ames C, Hart RA, Daniels AH. Artificial intelligence clustering of adult spinal deformity sagittal plane morphology predicts surgical characteristics, alignment, and outcomes. Eur Spine J 2021; 30:2157-2166. [PMID: 33856551 DOI: 10.1007/s00586-021-06799-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2020] [Revised: 12/12/2020] [Accepted: 02/24/2021] [Indexed: 02/04/2023]
Abstract
PURPOSE AI algorithms have shown promise in medical image analysis. Previous studies of adult spinal deformity (ASD) clusters have analyzed alignment metrics; this study sought to complement those efforts by analyzing images of sagittal anatomical spinopelvic landmarks. We hypothesized that an AI algorithm would cluster preoperative lateral radiographs into groups with distinct morphology. METHODS This was a retrospective review of a multicenter, prospectively collected database of adult spinal deformity. A total of 915 patients with adult spinal deformity and preoperative lateral radiographs were included. A 2 × 3 self-organizing map (a form of artificial neural network frequently employed in unsupervised classification tasks) was developed. The mean spine shape was plotted for each of the six clusters. Alignment, surgical characteristics, and outcomes were compared. RESULTS Qualitatively, clusters C and D exhibited only mild sagittal plane deformity. Clusters B, E, and F, however, exhibited marked positive sagittal balance and loss of lumbar lordosis. Cluster A had mixed characteristics, likely representing compensated deformity. Patients in clusters B, E, and F disproportionately underwent three-column osteotomy (3-CO). Proximal junctional kyphosis (PJK) and proximal junctional failure (PJF) were particularly prevalent among clusters A and E. Among clusters B and F, patients who experienced PJK had significantly greater positive sagittal balance than those who did not. CONCLUSIONS This study clustered preoperative lateral radiographs of ASD patients into groups with highly distinct overall spinal morphology and associations with sagittal alignment parameters, baseline health-related quality of life (HRQOL), and surgical characteristics. The relationship between sagittal vertical axis (SVA) and PJK differed by cluster. This study represents significant progress toward incorporation of computer vision into clinically relevant classification systems in adult spinal deformity. LEVEL OF EVIDENCE IV. Diagnostic: individual cross-sectional studies with a consistently applied reference standard and blinding.
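The 2 × 3 self-organizing map at the heart of this study is a small unsupervised network trained by best-matching-unit search plus a Gaussian neighborhood update. A generic numpy sketch of that training loop (illustrative only — the study trained on radiographic landmark images, not on the toy 2D points used here):

```python
import numpy as np

def train_som(data, grid=(2, 3), epochs=50, lr0=0.5, sigma0=1.0, seed=0):
    """Minimal self-organizing map: for each sample, find the best-matching
    unit (BMU), then pull the BMU and its grid neighbors toward the sample
    with a Gaussian neighborhood that shrinks over training."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    weights = rng.normal(size=(rows * cols, data.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3  # shrinking neighborhood
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)  # grid distances
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def assign_cluster(weights, x):
    """Cluster label of a sample = index of its nearest unit."""
    return int(np.argmin(((weights - np.asarray(x)) ** 2).sum(axis=1)))
```

Each of the six units' final weight vector plays the role of the "mean spine shape" plotted per cluster in the study.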
Affiliation(s)
- Wesley M Durand
- Department of Orthopaedic Surgery, Warren Alpert Medical School of Brown University, 1 Kettle Point Avenue, East Providence, RI 02914, USA
- D Kojo Hamilton
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Peter G Passias
- Langone Medical Center, New York University, New York City, NY, USA
- Han Jo Kim
- Hospital for Special Surgery, New York City, NY, USA
- Justin S Smith
- University of Virginia Health System, Charlottesville, VA, USA
- Eric O Klineberg
- University of California, UC Davis Medical Center, Sacramento, CA, USA
- Frank Schwab
- Hospital for Special Surgery, New York City, NY, USA
- Khaled Kebaish
- Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Doug Burton
- Medical Center, University of Kansas, Kansas City, KS, USA
- Shay Bess
- Denver International Spine Center, Denver, CO, USA
- Robert A Hart
- Swedish Neuroscience Institute, Swedish Medical Center, Seattle, WA, USA
- Alan H Daniels
- Department of Orthopaedic Surgery, Warren Alpert Medical School of Brown University, 1 Kettle Point Avenue, East Providence, RI 02914, USA
39
Li Q, Yang W, Xu M, An N, Wang D, Wang X, Jin H, Wang J, Wang J. Model construction and application for automated measurement of CE angle on pelvis orthograph based on MASK-R-CNN algorithm. Biomed Phys Eng Express 2021; 7. [PMID: 33794517 DOI: 10.1088/2057-1976/abf483] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Accepted: 04/01/2021] [Indexed: 11/12/2022]
Abstract
Developmental dysplasia of the hip (DDH) is a common orthopedic disease. A simple and cost-effective scientific tool for assisting the early diagnosis of DDH is urgently needed. This study proposed a new artificial intelligence (AI) model for automated measurement of the CE (center-edge) angle to aid the diagnosis of DDH by modifying the Mask R-CNN algorithm. A total of 13,228 anteroposterior pelvic x-ray images were collected from the PACS system of the Second Hospital of Jilin University, of which 104 images were randomly selected as test data. The remaining x-ray images were labeled and preprocessed for model development. The new AI model was constructed based on the modified Mask R-CNN model to detect key points for CE angle measurement. The performance of the AI model in measuring the CE angle was verified by comparison with three attending orthopaedic doctors. The mean CE angles on the left and right pelvis measured by the AI model were 29.46 ± 6.98° and 27.92 ± 6.56°, respectively, while the mean CE angles measured by the three doctors were 29.85 ± 6.92° and 27.75 ± 6.45°, respectively. The AI model displayed high consistency with the doctors in measuring CE angles and showed much higher efficiency in terms of measurement time. In this study, we successfully constructed a new effective model for measuring the CE angle by identifying key points, which provides a new intelligent measurement tool for orthopedic image measurement and evaluation.
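Once the key points are detected, the CE angle itself is simple trigonometry. A sketch under the usual Wiberg definition — the angle between the vertical through the femoral-head centre and the line to the lateral acetabular rim — with image coordinates whose y axis grows downward (the landmark names are ours, not the paper's):

```python
import math

def ce_angle(head_center, lateral_rim):
    """Center-edge (Wiberg) angle in degrees from two landmarks
    (x, y) in image coordinates, y increasing downward."""
    dx = lateral_rim[0] - head_center[0]
    dy = head_center[1] - lateral_rim[1]  # rim lies above the head centre
    # Angle of the centre->rim line measured from the vertical.
    return math.degrees(math.atan2(abs(dx), dy))
```

A full implementation would also need the inter-teardrop (pelvic horizontal) line to correct for pelvic tilt before taking "vertical".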
Affiliation(s)
- Qiang Li
- Department of Orthopedics, The Second Hospital of Jilin University, 218 Ziqiang Street, Changchun 130021, Jilin, People's Republic of China
- Wenzhuo Yang
- Department of Orthopedics, The Second Hospital of Jilin University, 218 Ziqiang Street, Changchun 130021, Jilin, People's Republic of China
- Meng Xu
- Department of Orthopedics, The Second Hospital of Jilin University, 218 Ziqiang Street, Changchun 130021, Jilin, People's Republic of China
- Nan An
- Department of Orthopedics, The Second Hospital of Jilin University, 218 Ziqiang Street, Changchun 130021, Jilin, People's Republic of China
- Dawei Wang
- Institute of Advanced Research, Infervision Medical Technology Co., Ltd, People's Republic of China
- Xing Wang
- Department of Orthopedics, The Second Hospital of Jilin University, 218 Ziqiang Street, Changchun 130021, Jilin, People's Republic of China
- Hui Jin
- Department of Pain, The Second Hospital of Jilin University, 218 Ziqiang Street, Changchun 130021, Jilin, People's Republic of China
- Jiajiong Wang
- Department of Orthopaedics, China-Japan Union Hospital of Jilin University, 126 Xitai Street, Changchun 130033, Jilin, People's Republic of China
- Jincheng Wang
- Department of Orthopedics, The Second Hospital of Jilin University, 218 Ziqiang Street, Changchun 130021, Jilin, People's Republic of China
40
Semantic Instance Segmentation of Kidney Cysts in MR Images: A Fully Automated 3D Approach Developed Through Active Learning. J Digit Imaging 2021; 34:773-787. [PMID: 33821360 PMCID: PMC8455788 DOI: 10.1007/s10278-021-00452-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2020] [Revised: 01/17/2021] [Accepted: 03/22/2021] [Indexed: 11/18/2022] Open
Abstract
Total kidney volume (TKV) is the main imaging biomarker used to monitor disease progression and to classify patients affected by autosomal dominant polycystic kidney disease (ADPKD) for clinical trials. However, patients with similar TKVs may have drastically different cystic presentations and phenotypes. In an effort to quantify these cystic differences, we developed the first 3D semantic instance cyst segmentation algorithm for kidneys in MR images. We have reformulated both the object detection/localization task and the instance-based segmentation task into a semantic segmentation task. This allowed us to solve this unique imaging problem efficiently, even for patients with thousands of cysts. To do this, a convolutional neural network (CNN) was trained to learn cyst edges and cyst cores. Images were converted from instance cyst segmentations to semantic edge-core segmentations by applying a 3D erosion morphology operator to up-sampled versions of the images. The reduced cysts were labeled as core; the eroded areas were dilated in 2D and labeled as edge. The network was trained on 30 MR images and validated on 10 MR images using a fourfold cross-validation procedure. The final ensemble model was tested on 20 MR images not seen during the initial training/validation. The results from the test set were compared to segmentations from two readers. The presented model achieved an averaged R2 value of 0.94 for cyst count, 1.00 for total cyst volume, 0.94 for cystic index, and an averaged Dice coefficient of 0.85. These results demonstrate the feasibility of performing cyst segmentations automatically in ADPKD patients.
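The instance-to-semantic conversion described above — erode each cyst to a core and label the eroded rim as edge — can be sketched in 2D as follows (the paper works in 3D on up-sampled images with a subsequent 2D dilation of the rim; this numpy-only 2D version is illustrative):

```python
import numpy as np

def erode(mask):
    """One step of binary erosion with a 3x3 square structuring element
    (2D stand-in for the paper's 3D morphology operator)."""
    padded = np.pad(mask, 1)
    out = np.ones_like(mask, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out &= padded[1 + di:1 + di + mask.shape[0],
                          1 + dj:1 + dj + mask.shape[1]]
    return out & mask

def to_edge_core(instances):
    """Convert an instance label map into a 3-class semantic map:
    0 = background, 1 = cyst edge, 2 = cyst core."""
    semantic = np.zeros(instances.shape, dtype=np.uint8)
    for label in np.unique(instances):
        if label == 0:
            continue
        mask = instances == label
        core = erode(mask)
        semantic[mask] = 1   # whole cyst starts as edge...
        semantic[core] = 2   # ...then the eroded interior becomes core
    return semantic
```

At inference time the mapping is inverted: connected components of the core class seed the individual cyst instances, so touching cysts stay separable.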
41
Safaei M, Bolus NB, Whittingslow DC, Jeong HK, Erturk A, Inan OT. Vibration Stimulation as a Non-Invasive Approach to Monitor the Severity of Meniscus Tears. IEEE Trans Neural Syst Rehabil Eng 2021; 29:350-359. [PMID: 33428572 DOI: 10.1109/tnsre.2021.3050439] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Musculoskeletal disorders and injuries are one of the most prevalent medical conditions across age groups. Due to a high load-bearing function, the knee is particularly susceptible to injuries such as meniscus tears. Imaging techniques are commonly used to assess meniscus injuries, though this approach suffers from limitations including high cost, need for skilled personnel, and confinement to laboratory or clinical settings. Vibration-based structural monitoring methods in the form of acoustic emission analysis and vibration stimulation have the potential to address the limits associated with current diagnostic technologies. In this study, an active vibration measurement technique is employed to investigate the presence and severity of meniscus tear in cadaver limbs. In a highly controlled ex vivo experimental design, a series of cadaver knees (n = 6) were evaluated under an external vibration, and the frequency response of the joint was analyzed to differentiate the intact and affected samples. Four stages of knee integrity were considered: baseline, sham surgery, meniscus tear, and meniscectomy. Analyzing the frequency response of injured legs showed significant changes compared to the baseline and sham stages at selected frequency bandwidths. Furthermore, a qualitative analytical model of the knee was developed based on the Euler-Bernoulli beam theory representing the meniscus tear as a change in the local stiffness of the system. Similar trends in frequency response modulation were observed in the experimental results and analytical model. These findings serve as a foundation for further development of wearable devices for detection and grading of meniscus tear and for improving our understanding of the physiological effects of injuries on the vibration characteristics of the knee. Such systems can also aid in quantifying rehabilitation progress following reconstructive surgery and/or during physical therapy.
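The analytical model represents a tear as a local stiffness change, so the expected direction of the effect can be illustrated with the standard Euler-Bernoulli result for a simply supported beam: natural frequencies scale with the square root of the bending stiffness EI, so a stiffness drop lowers them. This sketch uses a crude global-stiffness reduction, not the paper's local-stiffness formulation:

```python
import math

def beam_frequency(n, E, I, rho, A, L):
    """n-th natural frequency (Hz) of a simply supported Euler-Bernoulli
    beam: f_n = (n*pi/L)^2 / (2*pi) * sqrt(E*I / (rho*A))."""
    return (n * math.pi / L) ** 2 / (2 * math.pi) * math.sqrt(E * I / (rho * A))

def frequency_shift(E, I, rho, A, L, stiffness_drop=0.3):
    """Relative drop in the fundamental frequency when effective bending
    stiffness EI falls by `stiffness_drop` (hypothetical severity value)."""
    f0 = beam_frequency(1, E, I, rho, A, L)
    f1 = beam_frequency(1, E, I * (1 - stiffness_drop), rho, A, L)
    return (f0 - f1) / f0
```

The shift depends only on the stiffness ratio, 1 - sqrt(1 - drop), which is why frequency-band features can grade severity without knowing the absolute material constants.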
42
Rizk B, Brat H, Zille P, Guillin R, Pouchy C, Adam C, Ardon R, d'Assignies G. Meniscal lesion detection and characterization in adult knee MRI: A deep learning model approach with external validation. Phys Med 2021; 83:64-71. [DOI: 10.1016/j.ejmp.2021.02.010] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 01/31/2021] [Accepted: 02/16/2021] [Indexed: 02/08/2023] Open
43
Kunze KN, Rossi DM, White GM, Karhade AV, Deng J, Williams BT, Chahla J. Diagnostic Performance of Artificial Intelligence for Detection of Anterior Cruciate Ligament and Meniscus Tears: A Systematic Review. Arthroscopy 2021; 37:771-781. [PMID: 32956803 DOI: 10.1016/j.arthro.2020.09.012] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/05/2020] [Revised: 09/02/2020] [Accepted: 09/09/2020] [Indexed: 02/02/2023]
Abstract
PURPOSE To (1) determine the diagnostic efficacy of artificial intelligence (AI) methods for detecting anterior cruciate ligament (ACL) and meniscus tears and to (2) compare the efficacy to human clinical experts. METHODS PubMed, OVID/Medline, and Cochrane libraries were queried in November 2019 for research articles pertaining to AI use for detection of ACL and meniscus tears. Information regarding AI model, prediction accuracy/area under the curve (AUC), sample sizes of testing/training sets, and imaging modalities were recorded. RESULTS A total of 11 AI studies were identified: 5 investigated ACL tears, 5 investigated meniscal tears, and 1 investigated both. The AUC of AI models for detecting ACL tears ranged from 0.895 to 0.980, and the prediction accuracy ranged from 86.7% to 100%. Of these studies, 3 compared AI models to clinical experts. Two found no significant differences in diagnostic capability, whereas one found that radiologists had a significantly greater sensitivity for detecting ACL tears (P = .002) and statistically similar specificity and accuracy. Of the 5 studies investigating the meniscus, the AUC for AI models ranged from 0.847 to 0.910 and prediction accuracy ranged from 75.0% to 90.0%. Of these studies, 2 compared AI models with clinical experts. One found no significant differences in diagnostic accuracy, whereas one found that the AI model had a significantly lower specificity (P = .003) and accuracy (P = .015) than radiologists. Two studies reported that the addition of AI models significantly increased the diagnostic performance of clinicians compared to their efforts without these models. CONCLUSIONS AI prediction capabilities were excellent and may enhance the diagnosis of ACL and meniscal pathology; however, AI did not outperform clinical experts. 
CLINICAL RELEVANCE AI models promise to diagnose certain pathologies as well as, or better than, human experts; they show excellent performance for detecting ACL and meniscus tears and may enhance the diagnostic capabilities of human experts. However, when compared with these experts, they may not offer any significant advantage.
Affiliation(s)
- Kyle N Kunze
- Department of Orthopaedic Surgery, Hospital for Special Surgery, New York, New York, U.S.A
- David M Rossi
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, Illinois, U.S.A
- Gregory M White
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, Illinois, U.S.A
- Aditya V Karhade
- Department of Orthopedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, U.S.A
- Jie Deng
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, Illinois, U.S.A
- Brady T Williams
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, Illinois, U.S.A
- Jorge Chahla
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, Illinois, U.S.A.
44
Abstract
Human pose reconstruction is a fundamental problem in computer vision. However, existing pose reconstruction methods suffer from wall occlusion, which cannot be overcome by traditional optical sensors. This article studies a novel human pose reconstruction framework that uses low-frequency ultra-wideband (UWB) multiple-input multiple-output (MIMO) radar and a convolutional neural network (CNN) to detect targets behind walls. In the proposed framework, we first use UWB MIMO radar to capture human body information. Then, target detection and tracking are used to lock the target position, and the back-projection algorithm is adopted to construct three-dimensional (3D) images. Finally, we take the processed 3D image as input to reconstruct the 3D pose of the human target via the designed 3D CNN model. Field detection experiments and comparison results show that the proposed framework can reconstruct the poses of human targets behind a wall, indicating that our approach can compensate for the shortcomings of optical sensors and significantly expands the applications of UWB MIMO radar systems.
45
Melo PADS, Estivallet CLN, Srougi M, Nahas WC, Leite KRM. Detecting and grading prostate cancer in radical prostatectomy specimens through deep learning techniques. Clinics (Sao Paulo) 2021; 76:e3198. [PMID: 34730614 PMCID: PMC8527555 DOI: 10.6061/clinics/2021/e3198] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/29/2021] [Accepted: 09/21/2021] [Indexed: 01/07/2023] Open
Abstract
OBJECTIVES This study aims to evaluate the ability of deep learning algorithms to detect and grade prostate cancer (PCa) in radical prostatectomy specimens. METHODS We selected 12 whole-slide images of radical prostatectomy specimens. These images were divided into patches, and then, analyzed and annotated. The annotated areas were categorized as follows: stroma, normal glands, and Gleason patterns 3, 4, and 5. Two analyses were performed: i) a categorical image classification method that labels each image as benign or as Gleason 3, Gleason 4, or Gleason 5, and ii) a scanning method in which distinct areas representative of benign and different Gleason patterns are delineated and labeled separately by a pathologist. The Inception v3 Convolutional Neural Network architecture was used in categorical model training, and a Mask Region-based Convolutional Neural Network was used to train the scanning method. After training, we selected three new whole-slide images that were not used during the training to evaluate the model as our test dataset. The analysis results of the images using deep learning algorithms were compared with those obtained by the pathologists. RESULTS In the categorical classification method, the trained model obtained a validation accuracy of 94.1% during training; however, the concordance with our expert uropathologists in the test dataset was only 44%. With the image-scanning method, our model demonstrated a validation accuracy of 91.2%. When the test images were used, the concordance between the deep learning method and uropathologists was 89%. CONCLUSION Deep learning algorithms have a high potential for use in the diagnosis and grading of PCa. Scanning methods are likely to be superior to simple classification methods.
46
Automatic Delineation and Height Measurement of Regenerating Conifer Crowns under Leaf-Off Conditions Using UAV Imagery. REMOTE SENSING 2020. [DOI: 10.3390/rs12244104] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The increasing use of unmanned aerial vehicles (UAV) and high spatial resolution imagery from associated sensors necessitates the continued advancement of efficient means of image processing to ensure these tools are utilized effectively. This is exemplified in the field of forest management, where the extraction of individual tree crown information stands to benefit operational budgets. We explored training a region-based convolutional neural network (Mask R-CNN) to automatically delineate individual tree crown (ITC) polygons in regenerating forests (14 years after harvest) using true colour red-green-blue (RGB) imagery with an average ground sampling distance (GSD) of 3 cm. We predicted ITC polygons to extract height information using canopy height models generated from digital aerial photogrammetric (DAP) point clouds. Our approach yielded an average precision of 0.98, an average recall of 0.85, and an average F1 score of 0.91 for the delineation of ITC. Remote height measurements were strongly correlated with field height measurements (r2 = 0.93, RMSE = 0.34 m). The mean difference between DAP-derived and field-collected height measurements was −0.37 m and −0.24 m for white spruce (Picea glauca) and lodgepole pine (Pinus contorta), respectively. Our results show that accurate ITC delineation in young, regenerating stands is possible with fine-spatial resolution RGB imagery and that predicted ITC can be used in combination with DAP to estimate tree height.
47
Caudal A, Guenoun D, Lefebvre G, Nisolle JF, Gorcos G, Vuillemin V, Vande Berg B. Medial meniscal ossicles: Associated knee MRI findings in a multicenter case-control study. Diagn Interv Imaging 2020; 102:321-327. [PMID: 33339774 DOI: 10.1016/j.diii.2020.11.013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Revised: 11/23/2020] [Accepted: 11/23/2020] [Indexed: 01/07/2023]
Abstract
PURPOSE The purpose of this study was to assess and compare the prevalence of meniscal, ligament, and cartilage lesions on knee MRI in a series of age- and sex-matched patients with and without a medial meniscal ossicle. MATERIALS AND METHODS Forty-two knee MRI examinations obtained in 42 patients (36 men, 6 women; mean age, 42.5±22.2 [SD] years; range: 19-65 years) in whom a medial meniscal ossicle was present were compared with 42 knee MRI examinations obtained in 42 age- and sex-matched patients (36 men, 6 women; mean age, 41.8±20.6 [SD] years; range: 19-65 years) in whom no medial meniscal ossicle was present. Two radiologists (R1, R2), blinded to the presence of the meniscal ossicle by reading only the fat-saturated intermediate-weighted MR images, separately assessed the presence of meniscal, ligament, and cartilage lesions on these 84 knee MRI examinations. The prevalence of meniscal and ligament lesions and the degree of cartilage degradation at MRI were compared between knees with and those without a medial meniscal ossicle. RESULTS In knees with a medial meniscal ossicle, R1 and R2 detected 33 (79%) and 38 (90%) medial meniscal lesions, respectively, involving the posterior root (n=25/32 for R1/R2), the posterior horn (n=19/14 for R1/R2), or the body (n=8/10 for R1/R2). The prevalence of posterior root tears (60% [25/42]/76% [32/42] for R1/R2) and of anterior cruciate ligament (ACL) lesions (48% [20/42]/57% [24/42] for R1/R2), as well as the medial cartilage degradation score (3.35±0.87 [SD] for R1 and 3.92±0.78 [SD] for R2), were significantly greater in knees with than in knees without a medial meniscal ossicle (root lesions: P<0.01 for both readers; ACL lesions and medial cartilage score: P<0.01 for both readers).
CONCLUSION On MRI examination, knees with a medial meniscal ossicle demonstrate a greater frequency of medial posterior root tears and ACL lesions, and a greater degree of medial femoro-tibial cartilage degradation, than knees without a medial ossicle.
Affiliation(s)
- Amandine Caudal
- Department of Radiology, CHU Pasteur 2, 06001 Nice cedex 1, France
- Daphné Guenoun
- Department of Radiology, Institute for Locomotion, Sainte-Marguerite Hospital, AP-HM, 13009 Marseille, France; CNRS, ISM, Institute Movement Sci, Aix-Marseille Université, 13000 Marseille, France
- Guillaume Lefebvre
- Department of Radiology & Musculoskeletal Imaging, Centre de consultation et d'imagerie de l'appareil locomoteur, CHU de Lille, 59037 Lille cedex, France
- Gabriel Gorcos
- Centre d'Imagerie Médicale Léonard-de-Vinci, 75116 Paris, France
- Bruno Vande Berg
- Department of Radiology, Institut de Recherche expérimentale et Clinique (IREC), Cliniques Universitaires Saint-Luc, Université Catholique de Louvain (UCLouvain), 1200 Brussels, Belgium.
48
Blanc-Durand P, Schiratti JB, Schutte K, Jehanno P, Herent P, Pigneur F, Lucidarme O, Benaceur Y, Sadate A, Luciani A, Ernst O, Rouchaud A, Creze M, Dallongeville A, Banaste N, Cadi M, Bousaid I, Lassau N, Jegou S. Abdominal musculature segmentation and surface prediction from CT using deep learning for sarcopenia assessment. Diagn Interv Imaging 2020; 101:789-794. [DOI: 10.1016/j.diii.2020.04.011] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Revised: 04/26/2020] [Accepted: 04/28/2020] [Indexed: 12/18/2022]
49
Artificial intelligence to predict clinical disability in patients with multiple sclerosis using FLAIR MRI. Diagn Interv Imaging 2020; 101:795-802. [DOI: 10.1016/j.diii.2020.05.009] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Revised: 05/18/2020] [Accepted: 05/20/2020] [Indexed: 02/06/2023]
50
Chassagnon G, Dohan A. Artificial intelligence: from challenges to clinical implementation. Diagn Interv Imaging 2020; 101:763-764. [DOI: 10.1016/j.diii.2020.10.007] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]