1. Automatic measurement of lower limb alignment in portable devices based on deep learning for knee osteoarthritis. J Orthop Surg Res 2024; 19:232. PMID: 38594698; PMCID: PMC11005281; DOI: 10.1186/s13018-024-04658-3.
Abstract
BACKGROUND For patients with knee osteoarthritis, analyzing lower limb alignment is essential for therapy; it is currently measured manually from standing anteroposterior long-leg radiographs (LLRs). To address the time consumption, poor reproducibility, and inconvenience of existing methods, we present an automated measurement model, deployable on portable devices, for assessing knee alignment from LLRs. METHODS We created a model, trained it with 837 conforming LLRs, and tested it on a portable device using 204 LLRs without duplicates. Manual and model measurements were conducted independently, recording knee alignment parameters (hip-knee-ankle angle [HKA], joint line convergence angle [JLCA], anatomical mechanical angle [AMA], mechanical lateral distal femoral angle [mLDFA], and mechanical medial proximal tibial angle [mMPTA]) and the time required. We evaluated the model's performance against manual results on various metrics. RESULTS In the validation and test sets, the average mean radial errors were 2.778 and 2.447 (P<0.05). For native knee joints, 92.22%, 79.38%, 87.94%, 79.82%, and 80.16% of joints reached an angle deviation <1° for HKA, JLCA, AMA, mLDFA, and mMPTA, respectively; for joints with prostheses, the corresponding figures were 90.14%, 93.66%, 86.62%, 83.80%, and 85.92%. The chi-square test revealed no significant differences between manual and model measurements in any subgroup (P>0.05). Furthermore, the Bland-Altman 95% limits of agreement were within ±2° for HKA, JLCA, AMA, and mLDFA, and slightly beyond ±2° for mMPTA. CONCLUSION The automatic measurement tool can assess lower limb alignment on portable devices for knee osteoarthritis patients. The results are reliable, reproducible, and time-saving.
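The angle-deviation figures above all reduce to the same geometric primitive: an angle computed from detected landmark points. A minimal sketch of the HKA case, with hypothetical image-space coordinates (the function name and inputs are illustrative, not the paper's API):

```python
import math

def hka_angle(hip, knee, ankle):
    """Hip-knee-ankle (HKA) deviation: signed angle, in degrees, between the
    femoral mechanical axis (hip centre -> knee centre) and the tibial
    mechanical axis (knee centre -> ankle centre). 0 means a straight leg."""
    fx, fy = knee[0] - hip[0], knee[1] - hip[1]
    tx, ty = ankle[0] - knee[0], ankle[1] - knee[1]
    dot = fx * tx + fy * ty      # cosine component
    cross = fx * ty - fy * tx    # sine component; sign separates varus/valgus
    return math.degrees(math.atan2(cross, dot))

# Collinear landmarks: a perfectly aligned leg.
print(round(hka_angle((0, 0), (0, 400), (0, 800)), 3))   # 0.0
# Knee centre shifted 10 px sideways over 400 px segments: a small deviation.
print(round(hka_angle((0, 0), (10, 400), (0, 800)), 2))  # 2.86
```

Using `atan2` on the cross/dot pair keeps the result stable near 0° and preserves the deviation's direction, which a plain `acos` of the normalized dot product would discard.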
2. Development of an automatic surgical planning system for high tibial osteotomy using artificial intelligence. Knee 2024; 48:128-137. PMID: 38599029; DOI: 10.1016/j.knee.2024.03.008.
Abstract
BACKGROUND This study proposed an automatic surgical planning system for high tibial osteotomy (HTO) using deep-learning-based artificial intelligence and validated its accuracy. The system simulates the osteotomy and measures lower-limb alignment parameters in pre- and post-osteotomy simulations. METHODS A total of 107 whole-leg standing radiographs were obtained from 107 patients who underwent HTO. The system first detected anatomical landmarks on the radiographs, then simulated the osteotomy and automatically measured five parameters in pre- and post-osteotomy simulations: hip-knee-ankle angle (HKA), weight-bearing line ratio (WBL ratio), mechanical lateral distal femoral angle (mLDFA), mechanical medial proximal tibial angle (mMPTA), and mechanical lateral distal tibial angle (mLDTA). The accuracy of the measured parameters was validated against ground-truth (GT) values provided by two orthopaedic surgeons. RESULTS All absolute errors of the system were within 1.5° or 1.5%. All intraclass correlation coefficient (ICC) values between the system and GT showed good reliability (>0.80), with excellent reliability for the HKA (0.99) and WBL ratio (>0.99) in the pre-osteotomy simulation. The system's intra-rater agreement was perfect, with an ICC of 1.00 for all lower-limb alignment parameters in both simulations. In addition, the measurement time per radiograph (0.24 s) was considerably shorter than that of an orthopaedic surgeon (118 s). CONCLUSION The proposed system is practically applicable because it measures lower-limb alignment parameters accurately and quickly in pre- and post-osteotomy simulations, and it has potential applications in surgical planning.
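The WBL ratio measured here is, geometrically, the point where the hip-to-ankle line crosses the tibial plateau, expressed as a fraction of plateau width. A minimal sketch under assumed 2-D landmark coordinates (a plain line-line intersection; the function and inputs are illustrative, not the paper's method):

```python
def wbl_ratio(hip, ankle, medial_edge, lateral_edge):
    """Weight-bearing line (WBL) ratio: where the mechanical axis
    (hip centre -> ankle centre) crosses the tibial plateau, as a
    percentage of plateau width from the medial edge (50% = centre)."""
    dx1, dy1 = ankle[0] - hip[0], ankle[1] - hip[1]                    # axis
    dx2, dy2 = lateral_edge[0] - medial_edge[0], lateral_edge[1] - medial_edge[1]
    bx, by = medial_edge[0] - hip[0], medial_edge[1] - hip[1]
    det = dx2 * dy1 - dx1 * dy2
    if abs(det) < 1e-12:
        raise ValueError("mechanical axis is parallel to the plateau line")
    t = (dx1 * by - dy1 * bx) / det   # fraction along medial -> lateral edge
    return 100.0 * t

# Vertical axis through the plateau centre -> 50%.
print(round(wbl_ratio((0, 0), (0, 100), (-25, 50), (25, 50)), 1))   # 50.0
# Axis drifting laterally -> the WBL crossing shifts past the centre.
print(round(wbl_ratio((0, 0), (10, 100), (-25, 50), (25, 50)), 1))  # 60.0
```

The solve is Cramer's rule on the two-line intersection system; only the plateau parameter `t` is needed, so the axis parameter is never computed.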
3. Multicentric development and validation of a multi-scale and multi-task deep learning model for comprehensive lower extremity alignment analysis. Artif Intell Med 2024; 150:102843. PMID: 38553152; DOI: 10.1016/j.artmed.2024.102843.
Abstract
Osteoarthritis of the knee, a widespread cause of knee disability, is increasingly common in orthopedic practice. Lower extremity misalignment, pivotal in the etiology and management of knee injury, requires comprehensive mechanical alignment evaluation via frequently requested weight-bearing long-leg radiographs (LLR). Despite LLR's routine use, current analysis techniques are error-prone and time-consuming. To address this, we conducted a multicentric study to develop and validate a deep learning (DL) model for fully automated leg alignment assessment on anterior-posterior LLR, targeting enhanced reliability and efficiency. The DL model, developed using 594 patients' LLR with a 60%/10%/30% split for training, validation, and testing, executed alignment analyses via a multi-step process employing a detection network and nine specialized networks. It was designed to assess all vital anatomical and mechanical parameters for standard clinical leg deformity analysis and preoperative planning. Accuracy, reliability, and assessment duration were compared with three specialized orthopedic surgeons across two distinct institutional datasets (136 and 143 radiographs). The algorithm matched the surgeons in alignment accuracy (DL: 0.21 ± 0.18° to 1.06 ± 1.3° vs. OS: 0.21 ± 0.16° to 1.72 ± 1.96°), interrater reliability (ICC DL: 0.90 ± 0.05 to 1.0 ± 0.0 vs. ICC OS: 0.90 ± 0.03 to 1.0 ± 0.0), and clinically acceptable accuracy (DL: 53.9%-100% vs. OS: 30.8%-100%). Furthermore, automated analysis significantly reduced analysis time compared with manual annotation (DL: 22 ± 0.6 s vs. OS: 101.7 ± 7 s, p ≤ 0.01). By demonstrating that our algorithm not only matches the precision of expert surgeons but also significantly outpaces them in both speed and consistency of measurements, our research underscores a pivotal advancement in harnessing AI to enhance clinical efficiency and decision-making in orthopaedics.
4. A Radiographic Analysis of Coronal Morphological Parameters of Lower Limbs in Chinese Non-knee Osteoarthritis Populations. Orthop Surg 2024; 16:452-461. PMID: 38088238; PMCID: PMC10834221; DOI: 10.1111/os.13952.
Abstract
OBJECTIVES Analyzing lower limb coronal morphological parameters in populations without knee osteoarthritis (KOA) holds significant value for predicting, diagnosing, and formulating surgical strategies for KOA. This study aimed to comprehensively analyze the variability of these parameters in Chinese non-KOA populations using a substantial sample size. METHODS A cross-sectional retrospective analysis was performed on a Chinese non-KOA population (n = 407; 49.9% female). In-house artificial intelligence software was used to assess the coronal morphological parameters of all 814 lower limbs: the hip-knee-ankle angle (HKAA), weight-bearing line ratio (WBLR), joint line convergence angle (JLCA), mechanical lateral proximal femoral angle (mLPFA), mechanical lateral distal femoral angle (mLDFA), mechanical medial proximal tibial angle (mMPTA), and mechanical lateral distal tibial angle (mLDTA). Differences in these parameters were compared between left and right limbs, between genders, and between age groups (with 50 years as the cut-off). RESULTS HKAA and JLCA exhibited left-right differences (left vs. right: 178.2° ± 3.0° vs. 178.6° ± 2.9° for HKAA, p = 0.001; 1.8° ± 1.5° vs. 1.4° ± 1.6° for JLCA, p < 0.001). Except for the mLPFA, all parameters showed gender-related differences (male vs. female: 177.9° ± 2.8° vs. 179.0° ± 3.0° for HKAA, p < 0.001; 1.5° ± 1.5° vs. 1.8° ± 1.7° for JLCA, p = 0.003; 87.1° ± 2.1° vs. 88.1° ± 2.1° for mMPTA, p < 0.001; 90.2° ± 4.0° vs. 91.1° ± 3.2° for mLDTA, p < 0.001; 38.7% ± 12.9% vs. 43.6% ± 14.1% for WBLR, p < 0.001; and 87.7° ± 2.3° vs. 87.4° ± 2.7° for mLDFA, p = 0.045). The mLPFA increased with age (younger vs. older: 90.1° ± 7.2° vs. 93.4° ± 4.9°, p < 0.001), while no statistically significant differences existed for the other parameters.
CONCLUSIONS Lower limb coronal morphological parameters in Chinese non-KOA populations differed between the left and right sides, between genders, and across age groups.
5. Enhanced deep learning model enables accurate alignment measurement across diverse institutional imaging protocols. Knee Surg Relat Res 2024; 36:4. PMID: 38217058; PMCID: PMC10785531; DOI: 10.1186/s43019-023-00209-y.
Abstract
BACKGROUND Achieving consistent accuracy in radiographic measurements across different equipment and protocols is challenging. This study evaluates an advanced deep learning (DL) model, building upon a precursor, for its ability to generate uniform and precise alignment measurements from full-leg radiographs irrespective of institutional imaging differences. METHODS The enhanced DL model was trained on over 10,000 radiographs. Using a segmented approach, it separately identified and evaluated regions of interest (ROIs) for the hip, knee, and ankle, and subsequently integrated these regions. For external validation, 300 datasets from three distinct institutes with varied imaging protocols and equipment were employed. The study measured seven radiologic parameters: hip-knee-ankle angle, lateral distal femoral angle, medial proximal tibial angle, joint line convergence angle, weight-bearing line ratio, joint line obliquity angle, and lateral distal tibial angle. The model's measurements were compared with an orthopedic specialist's evaluations using inter-observer and intra-observer intraclass correlation coefficients (ICCs). Additionally, the absolute error percentage in alignment measurements was assessed, and the processing time per radiograph was recorded. RESULTS The DL model exhibited excellent performance, achieving an inter-observer ICC between 0.936 and 0.997, on par with an orthopedic specialist, and an intra-observer ICC of 1.000. The model's consistency was robust across the different institutional imaging protocols. Its accuracy was particularly notable for the hip-knee-ankle angle, with no absolute error exceeding 1.5°. The enhanced model also improved processing speed roughly 30-fold, from an initial 10-11 s to 300 ms.
CONCLUSIONS The enhanced DL model delivered accurate, rapid alignment measurements from full-leg radiographs regardless of protocol variations, signifying its potential for broad clinical and research applicability.
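The inter- and intra-observer ICCs reported throughout these studies are standard two-way statistics. A minimal pure-Python sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater), one common variant; the exact form used by each paper may differ:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is a list of subjects, each a list of the raters' values."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Two raters in perfect agreement across three knees -> ICC = 1.
print(round(icc_2_1([[178.2, 178.2], [181.0, 181.0], [175.4, 175.4]]), 6))
# Sub-degree disagreements against a few degrees of subject spread
# still give a very high ICC.
print(round(icc_2_1([[178.2, 178.5], [181.0, 180.6], [175.4, 175.2]]), 3))
```

Note that the ICC depends on the between-subject spread as well as rater error, which is why cohorts with homogeneous alignment can yield lower ICCs at the same measurement precision.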
6. Deep Learning Artificial Intelligence Tool for Automated Radiographic Determination of Posterior Tibial Slope in Patients With ACL Injury. Orthop J Sports Med 2023; 11:23259671231215820. PMID: 38107846; PMCID: PMC10725654; DOI: 10.1177/23259671231215820.
Abstract
Background An increased posterior tibial slope (PTS) corresponds with an increased risk of graft failure after anterior cruciate ligament (ACL) reconstruction (ACLR). Validated methods of manual PTS measurement are subject to interobserver variability and can be inefficient on large datasets. Purpose/Hypothesis To develop a deep learning artificial intelligence technique for automated PTS measurement from standard lateral knee radiographs. It was hypothesized that this tool would measure the PTS on a high volume of radiographs expeditiously and that its measurements would be similar to previously validated manual measurements. Study Design Cohort study (diagnosis); Level of evidence, 2. Methods A deep learning U-Net model was developed on a cohort of 300 postoperative short-leg lateral radiographs from patients who underwent ACLR to segment the tibial shaft, tibial joint surface, and tibial tuberosity. The model was trained with a random 80:20 train-validation split. Masks for training images were manually segmented, and the model was trained for 400 epochs. An image-processing pipeline was then deployed to annotate and measure the PTS using the predicted segmentation masks. Finally, the performance of this combined pipeline was compared with measurements performed by 2 study personnel using a previously validated manual technique for short-leg lateral radiographs, on an independent test set of both pre- and postoperative images. Results The U-Net semantic segmentation model achieved a mean Dice similarity coefficient of 0.885 on the validation cohort. The mean difference between the human and computer-vision measurements was 1.92° (σ = 2.81°; P = .24). Extreme disagreements between human and machine measurements, defined as differences ≥5°, occurred <5% of the time.
The model was incorporated into a web-based application front-end for demonstration purposes, which measures a single uploaded Portable Network Graphics image in a mean time of 5 seconds. Conclusion We developed an efficient and reliable deep learning computer-vision algorithm to automate PTS measurement on short-leg lateral knee radiographs. This tool, which demonstrated good agreement with human annotations, represents an effective clinical adjunct for measuring the PTS in the preoperative assessment of patients with ACL injuries.
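The Dice similarity coefficient used to score the segmentation masks above is straightforward to compute. A minimal sketch on binary masks stored as nested lists (a real pipeline would use an array library, but the formula is the same):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    intersection = sum(a & b for row_a, row_b in zip(mask_a, mask_b)
                       for a, b in zip(row_a, row_b))
    size = sum(v for row in mask_a for v in row) + \
           sum(v for row in mask_b for v in row)
    return 2.0 * intersection / size if size else 1.0  # both empty: perfect

# A 2x3 predicted mask vs. ground truth: 2 overlapping pixels, 3 each.
pred  = [[1, 1, 0],
         [0, 1, 0]]
truth = [[1, 0, 0],
         [0, 1, 1]]
print(round(dice_coefficient(pred, truth), 3))  # 0.667
```

Dice weights the overlap against the mean mask size, so it is less sensitive to large true-negative backgrounds than plain pixel accuracy, which is why it is the usual metric for small anatomical structures.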
7. Leg-Length Discrepancy Variability on Standard Anteroposterior Pelvis Radiographs: An Analysis Using Deep Learning Measurements. J Arthroplasty 2023; 38:2017-2023.e3. PMID: 36898486; DOI: 10.1016/j.arth.2023.03.006.
Abstract
BACKGROUND Leg-length discrepancy (LLD) is a critical factor in component selection and placement for total hip arthroplasty. However, LLD radiographic measurements are subject to variation based on the femoral/pelvic landmarks chosen. This study leveraged deep learning (DL) to automate LLD measurements on pelvis radiographs and compared LLD based on several anatomically distinct landmarks. METHODS Patients who had baseline anteroposterior pelvis radiographs from the Osteoarthritis Initiative were included. A DL algorithm was created to identify LLD-relevant landmarks (i.e., teardrop [TD], obturator foramen, ischial tuberosity, and greater and lesser trochanters) and to measure LLD using six landmark combinations. The algorithm was then applied to automate LLD measurements in the entire cohort. Intraclass correlation coefficients (ICCs) were calculated to assess agreement between the different LLD methods. RESULTS The DL algorithm measurements were first validated in an independent cohort for all six LLD methods (ICC = 0.73-0.98). Images from 3,689 patients (22,134 LLD measurements) were measured in 133 minutes. When using the TD and lesser trochanter landmarks as the standard LLD method, only measuring LLD using the TD and greater trochanter conferred acceptable agreement (ICC = 0.72). When comparing all six LLD methods for agreement, no combination had an ICC >0.90; only two (13%) combinations had an ICC >0.75, and eight (53%) combinations had a poor ICC (<0.50). CONCLUSION We leveraged DL to automate LLD measurements in a large patient cohort and found considerable variation in LLD based on pelvic/femoral landmark selection. This emphasizes the need for standardization of landmarks in both research and surgical planning.
8. A deep learning approach for fully automated measurements of lower extremity alignment in radiographic images. Sci Rep 2023; 13:14692. PMID: 37673920; PMCID: PMC10482837; DOI: 10.1038/s41598-023-41380-2.
Abstract
During clinical evaluation of patients and planning of orthopedic treatments, periodic assessment of lower limb alignment is critical. Currently, physicians use physical tools and radiographs to directly observe limb alignment; this process is manual, time-consuming, and prone to human error. To this end, a deep learning (DL)-based system was developed to automatically, rapidly, and accurately detect lower limb alignment from anteroposterior standing X-ray imaging data of the lower limbs. For this study, leg radiographs of 770 non-overlapping patients were collected from January 2016 to August 2020. To precisely detect the necessary landmarks, a DL model was implemented stepwise. A radiologist compared the final calculated measurements with their own observations in terms of the concordance correlation coefficient (CCC), Pearson correlation coefficient (PCC), and intraclass correlation coefficient (ICC). Based on these results and 250 frontal lower limb radiographs obtained from 250 patients, the system's measurements for 16 indicators showed high agreement with clinical observations (CCC, PCC, and ICC ≥ 0.9; mean absolute error, mean squared error, and root mean squared error ≤ 0.9). Furthermore, the average measurement time was approximately 12 s. In conclusion, analysis of anteroposterior standing X-ray imaging data by the DL-based lower limb alignment diagnostic support system produces measurement results similar to those obtained by radiologists.
9. Automated correction angle calculation in high tibial osteotomy planning. Sci Rep 2023; 13:12876. PMID: 37553353; PMCID: PMC10409734; DOI: 10.1038/s41598-023-39967-w.
Abstract
The high tibial osteotomy correction angle is usually calculated manually or semi-automatically. Following the Miniaci method, the process is divided into several stages to find specific points: the center of the femoral head, the edges of the tibial plateau, the Fujisawa point, the center of the ankle joint, and the hinge point. In this paper, we propose an end-to-end approach comprising different techniques for finding each point. We used YOLOv4 to detect regions of interest. To identify the center of the femoral head, we combined YOLOv4 with the Hough transform; for the other points, we combined YOLOv4 with the ASM/AAM algorithm and with image-processing algorithms. Our fully automated method achieved a mean error of 0.5° (0°-2.76°) with an ICC of 0.99 (95% CI 0.98-0.99) on our own dataset of standing long-leg anteroposterior X-rays. This may be the first method that automatically calculates the correction angle for high tibial osteotomy.
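Once the five points are detected, the Miniaci construction reduces to plane geometry: rotate the ankle centre about the hinge until it lands on the line from the femoral head centre through the Fujisawa point; the rotation needed is the correction angle. A hedged sketch of that final step with synthetic coordinates (the detection stages and any clinical conventions are not reproduced here):

```python
import math

def miniaci_correction_angle(head, fujisawa, hinge, ankle):
    """Correction angle: rotation about `hinge` that carries the ankle
    centre onto the target axis through `head` and `fujisawa`, in degrees."""
    fx, fy = head
    dx, dy = fujisawa[0] - fx, fujisawa[1] - fy      # target axis direction
    hx, hy = hinge
    r2 = (ankle[0] - hx) ** 2 + (ankle[1] - hy) ** 2  # hinge-to-ankle radius^2
    # Intersect the circle |P - hinge| = r with P(u) = head + u*(fujisawa-head).
    ex, ey = fx - hx, fy - hy
    a = dx * dx + dy * dy
    b = 2 * (ex * dx + ey * dy)
    c = ex * ex + ey * ey - r2
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("target axis does not reach the ankle arc")
    u = (-b + math.sqrt(disc)) / (2 * a)              # distal intersection
    tx, ty = fx + u * dx - hx, fy + u * dy - hy       # hinge -> new ankle
    ax, ay = ankle[0] - hx, ankle[1] - hy             # hinge -> old ankle
    cosang = (tx * ax + ty * ay) / math.sqrt(
        (tx * tx + ty * ty) * (ax * ax + ay * ay))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

# Varus leg: ankle centre 10 units medial to the target axis -> small angle.
print(round(miniaci_correction_angle((0, 0), (0, 100), (30, 400), (10, 800)), 2))
# Ankle already on the target axis -> 0 correction.
print(round(miniaci_correction_angle((0, 0), (0, 100), (30, 400), (0, 800)), 6))
```

Taking the larger quadratic root selects the distal of the two circle-line intersections, i.e. the one near the ankle rather than the one near the hip.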
10. Predicting hip-knee-ankle and femorotibial angles from knee radiographs with deep learning. Knee 2023; 42:281-288. PMID: 37119601; DOI: 10.1016/j.knee.2023.03.010.
Abstract
BACKGROUND Knee alignment affects the development and surgical treatment of knee osteoarthritis. Automating femorotibial angle (FTA) and hip-knee-ankle angle (HKA) measurement from radiographs could improve reliability and save time. Further, if HKA could be predicted from knee-only radiographs, radiation exposure could be reduced and the need for specialist equipment and personnel avoided. The aim of this research was to assess whether deep learning methods can predict FTA and HKA from posteroanterior (PA) knee radiographs. METHODS Convolutional neural networks with densely connected final layers were trained to analyse PA knee radiographs from the Osteoarthritis Initiative (OAI) database. The FTA dataset (6,149 radiographs) and HKA dataset (2,351 radiographs) were split into training, validation, and test sets in a 70:15:15 ratio. Separate models were developed for predicting FTA and HKA, and their accuracy was quantified using mean squared error as the loss function. Heat maps were used to identify the anatomical features within each image that contributed most to the predicted angles. RESULTS High accuracy was achieved for both FTA (mean absolute error 0.8°) and HKA (mean absolute error 1.7°). Heat maps for both models were concentrated on the knee anatomy and could prove a valuable tool for assessing prediction reliability in clinical application. CONCLUSION Deep learning techniques enable fast, reliable, and accurate prediction of both FTA and HKA from plain knee radiographs and could lead to cost savings for healthcare providers and reduced radiation exposure for patients.
11. Construction of a Diagnostic m7G Regulator-Mediated Scoring Model for Identifying the Characteristics and Immune Landscapes of Osteoarthritis. Biomolecules 2023; 13:539. PMID: 36979474; PMCID: PMC10046530; DOI: 10.3390/biom13030539.
Abstract
With osteoarthritis (OA) imposing an increasingly serious burden on modern society, there is an urgent need for novel diagnostic biomarkers and differentiation models for OA. 7-methylguanosine (m7G) is one of the most common base modifications in post-transcriptional regulation, in which a methyltransferase adds a methyl group to the N7 position of guanine (G) in messenger RNA; it has been found to play a crucial role in various diseases. We therefore explored the relationship between OA and m7G. Based on the expression levels of 18 m7G-related regulators, we identified nine significant regulators. Then, via machine learning methods such as support vector machine recursive feature elimination, random forest, and LASSO-Cox regression analysis, four significant regulators were further identified (DCP2, EIF4E2, LARP1, and SNUPN). According to the expression levels of these four regulators, two distinct m7G-related clusters were obtained via consensus clustering. We characterized the two clusters through immune infiltration, differential expression, and enrichment analyses, and constructed an m7G-related scoring model via the PCA algorithm. The two clusters differed in immune status and in their correlation with immune checkpoint inhibitors. The expression differences of the four regulators were verified via real-time quantitative polymerase chain reaction. Overall, four biomarkers were identified and two m7G-related OA subsets with different immune microenvironments were obtained; the m7G-related scoring model may provide new strategies and insights for the diagnosis and therapy of OA patients.
12. Deep convolutional feature details for better knee disorder diagnoses in magnetic resonance images. Comput Med Imaging Graph 2022; 102:102142. PMID: 36446308; DOI: 10.1016/j.compmedimag.2022.102142.
Abstract
Convolutional neural networks (CNNs) applied to magnetic resonance imaging (MRI) have demonstrated their ability to automatically diagnose knee injuries. Despite promising results, currently available solutions do not take into account the particular anatomy of knee disorders: existing works have shown that injuries are localized in small regions near the center of MRI scans. Based on this insight, we propose MRPyrNet, a CNN architecture that extracts more relevant features from these regions. Our solution is composed of a Feature Pyramid Network with Pyramidal Detail Pooling and can be plugged into any existing CNN-based diagnostic pipeline. The first module enhances the CNN's intermediate features to better detect the small-sized appearance of disorders, while the second captures such evidence by retaining its detailed information. An extensive evaluation campaign was conducted to understand the potential of the proposed solution in depth. The experimental results demonstrate that applying MRPyrNet to baseline methodologies improves their diagnostic capability, especially for anterior cruciate ligament and meniscal tears, thanks to MRPyrNet's ability to exploit the relevant appearance features of such disorders. Code is available at https://github.com/matteo-dunnhofer/MRPyrNet.
13. Automated Artificial Intelligence-Based Assessment of Lower Limb Alignment Validated on Weight-Bearing Pre- and Postoperative Full-Leg Radiographs. Diagnostics (Basel) 2022; 12:2679. PMID: 36359520; PMCID: PMC9689840; DOI: 10.3390/diagnostics12112679.
Abstract
Assessment of knee alignment from standing weight-bearing full-leg radiographs (FLR) is a standardized method, but determining the load-bearing axis of the leg requires time-consuming manual measurements. The aim of this study was to develop and validate a novel artificial intelligence (AI) algorithm for automated assessment of lower limb alignment. In the first stage, a customized Mask R-CNN model was trained to automatically detect and segment anatomical structures and implants in FLR. In the second stage, four region-specific neural networks (U-Net adaptations) were trained to automatically place anatomical landmarks. In the final stage, this information was used to automatically determine five key lower limb alignment angles. For the validation dataset, weight-bearing anteroposterior FLR were captured preoperatively and 3 months postoperatively. Preoperative images were measured by the operating orthopedic surgeon and an independent physician; postoperative images were measured by the second rater only. The final validation dataset consisted of 95 preoperative and 105 postoperative FLR. The detection rate for the different angles ranged between 92.4% and 98.9%. Human-vs-human inter-rater (ICCs: 0.85-0.99) and intra-rater (ICCs: 0.95-1.0) reliability analyses showed significant agreement. The ICC values for human-vs-AI inter-rater reliability ranged between 0.80 and 1.0 preoperatively and between 0.83 and 0.99 postoperatively (all p < 0.001). This demonstrates an independent, external validation of the proposed algorithm on pre- and postoperative FLR against human measurements of excellent reliability. The algorithm may allow objective, time-saving analysis of large datasets and support physicians in daily routine.
14. Comparison of tibial alignment parameters based on clinically relevant anatomical landmarks: a deep learning radiological analysis. Bone Jt Open 2022; 3:767-776. PMID: 36196596; PMCID: PMC9626868; DOI: 10.1302/2633-1462.310.bjo-2022-0082.r1.
Abstract
AIMS Accurate identification of the ankle joint centre is critical for estimating tibial coronal alignment in total knee arthroplasty (TKA). The purpose of the current study was to leverage artificial intelligence (AI) to determine the accuracy and effect of using different radiological anatomical landmarks to quantify mechanical alignment in relation to a traditionally defined radiological ankle centre. METHODS Patients with full-limb radiographs from the Osteoarthritis Initiative were included. A sub-cohort of 250 radiographs were annotated for landmarks relevant to knee alignment and used to train a deep learning (U-Net) workflow for angle calculation on the entire database. The radiological ankle centre was defined as the midpoint of the superior talus edge/tibial plafond. Knee alignment (hip-knee-ankle angle) was compared against 1) midpoint of the most prominent malleoli points, 2) midpoint of the soft-tissue overlying malleoli, and 3) midpoint of the soft-tissue sulcus above the malleoli. RESULTS A total of 932 bilateral full-limb radiographs (1,864 knees) were measured at a rate of 20.63 seconds/image. The knee alignment using the radiological ankle centre was accurate against ground truth radiologist measurements (inter-class correlation coefficient (ICC) = 0.99 (0.98 to 0.99)). Compared to the radiological ankle centre, the mean midpoint of the malleoli was 2.3 mm (SD 1.3) lateral and 5.2 mm (SD 2.4) distal, shifting alignment by 0.34o (SD 2.4o) valgus, whereas the midpoint of the soft-tissue sulcus was 4.69 mm (SD 3.55) lateral and 32.4 mm (SD 12.4) proximal, shifting alignment by 0.65o (SD 0.55o) valgus. On the intermalleolar line, measuring a point at 46% (SD 2%) of the intermalleolar width from the medial malleoli (2.38 mm medial adjustment from midpoint) resulted in knee alignment identical to using the radiological ankle centre. 
CONCLUSION The current study leveraged AI to create a consistent and objective model that can estimate patient-specific adjustments necessary for optimal landmark usage in extramedullary and computer-guided navigation for tibial coronal alignment to match radiological planning. Cite this article: Bone Jt Open 2022;3(10):767-776.
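The 46% intermalleolar adjustment reported above amounts to linear interpolation between the two malleolar landmarks. A minimal sketch (hypothetical illustration only; the coordinates, units, and example width are assumptions, not values from the study):

```python
def intermalleolar_point(medial, lateral, fraction=0.46):
    """Point at `fraction` of the intermalleolar width, measured from
    the medial malleolus toward the lateral malleolus (2D coordinates)."""
    mx, my = medial
    lx, ly = lateral
    return (mx + fraction * (lx - mx), my + fraction * (ly - my))

# Illustrative only: malleoli 50 mm apart on a horizontal line.
medial, lateral = (0.0, 0.0), (50.0, 0.0)
adjusted = intermalleolar_point(medial, lateral)        # 46% landmark
midpoint = intermalleolar_point(medial, lateral, 0.5)   # conventional midpoint
print(adjusted[0] - midpoint[0])  # negative: the 46% point sits medial to the midpoint
```

With the study's mean intermalleolar width, this 4% shift corresponds to the reported 2.38 mm medial adjustment.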
|
15
|
Classification of Malaria Using Object Detection Models. INFORMATICS 2022. [DOI: 10.3390/informatics9040076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Malaria poses a daily global health problem, affecting millions of lives worldwide. A traditional diagnosis requires the manual inspection of blood smears from the patient under a microscope to check for the malaria parasite, which is often time-consuming and subject to error. Thus, the automated detection and classification of the malaria type and stage of progression can provide a quicker and more accurate diagnosis for patients. In this research, we used two object detection models, YOLOv5 and scaled YOLOv4, to classify the stage of progression and type of malaria parasite. We also used two different datasets for the classification of stage and parasite type while assessing the viability of each dataset for the task. The datasets comprise microscopic images of red blood cells that were either parasitized or uninfected. The infected cells were classified along two broad categories: the type of malarial parasite causing the infection and the stage of progression of the disease. The dataset was manually annotated using the LabelImg tool, and the images were then augmented to enhance model training. Both YOLOv5 and scaled YOLOv4 proved effective in classifying the type of parasite; scaled YOLOv4 led with an accuracy of 83%, followed by YOLOv5 at 78.5%. The proposed models may be useful to medical professionals for the accurate diagnosis of malaria and the prediction of its stage.
|
16
|
HPFace: a high speed and accuracy face detector. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07823-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
17
|
Deep learning-based landmark recognition and angle measurement of full-leg plain radiographs can be adopted to assess lower extremity alignment. Knee Surg Sports Traumatol Arthrosc 2022; 31:1388-1397. [PMID: 36006418 DOI: 10.1007/s00167-022-07124-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Accepted: 08/10/2022] [Indexed: 10/15/2022]
Abstract
PURPOSE Evaluating lower extremity alignment using full-leg plain radiographs is an essential step in the diagnosis and treatment of patients with knee osteoarthritis. The study objective was to present a deep learning-based anatomical landmark recognition and angle measurement model using full-leg radiographs and to validate its performance. METHODS A total of 11,212 full-leg plain radiographs were used to create the model. To generate training data, 15 anatomical landmarks were marked by two orthopaedic surgeons. Mechanical lateral distal femoral angle (mLDFA), medial proximal tibial angle (MPTA), joint line convergence angle (JLCA), and hip-knee-ankle angle (HKAA) were then measured. For inter-observer reliability, the inter-observer intraclass correlation coefficient (ICC) was evaluated by comparing measurements from the model, surgeons, and students against ground truth measurements annotated by an orthopaedic specialist with 14 years of experience. To evaluate test-retest reliability, all measurements were made twice by each measurer, and intra-observer ICCs were then derived. Performance evaluation metrics used in previous studies were also derived for direct comparison of the model's performance. RESULTS Inter-observer ICCs for all angles of the model were 0.98 or higher (p < 0.001). Intra-observer ICCs for all angles were 1.00, higher than those of the orthopaedic specialist (0.97-1.00). Measurements made by the model showed no significant systematic variation. Except for JLCA, angles were measured precisely, with mean absolute errors under 0.52 degrees and outlier proportions under 4.26%. CONCLUSIONS The deep learning model is capable of evaluating lower extremity alignment with performance as accurate as that of an orthopaedic specialist with 14 years of experience. LEVEL OF EVIDENCE III, retrospective cohort study.
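Once the hip, knee, and ankle joint centres are recognized, an angle such as the HKAA follows from elementary vector geometry on the femoral (hip to knee) and tibial (knee to ankle) mechanical axes. A sketch under assumed 2D image coordinates, not the authors' implementation:

```python
import math

def hka_angle(hip, knee, ankle):
    """Signed angle in degrees between the femoral mechanical axis
    (hip -> knee) and the tibial mechanical axis (knee -> ankle)."""
    v1 = (knee[0] - hip[0], knee[1] - hip[1])
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    return math.degrees(math.atan2(cross, dot))

# Perfectly collinear landmarks give neutral alignment (0 degrees).
print(hka_angle((0.0, 0.0), (0.0, 100.0), (0.0, 200.0)))  # 0.0
```

Using `atan2` of the cross and dot products keeps the sign of the deviation, which is how varus and valgus could be distinguished in such a scheme.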
|
18
|
Performance and Sensitivity Analysis of an Automated X-Ray Based Total Knee Replacement Mass-Customization Pipeline. J Med Device 2022. [DOI: 10.1115/1.4055000] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
A proof-of-concept, fully automated, mass-customization pipeline for knee replacement surgery is outlined. The pipeline aims to address the limitations of currently available customization solutions by removing the need for 3D imaging and manual design, minimizing lead times, and reducing costs, whilst enabling improved patient outcomes.
The dataflow of the pipeline and methods for assessing performance are detailed. A digitally reconstructed radiograph method was adopted in the analysis to remove errors stemming from poor X-ray alignment and calibration, and to enable the influence of specific attributes to be evaluated. A sensitivity study was performed to quantify the impact of X-ray alignment and calibration.
The analysis found that better results were achieved for the tibia than for the femur in terms of clinically significant component over/under-hang (9% vs 18%). The pipeline was sensitive to subject ethnicity, but this was likely due to limited diversity in the training data. Arthritis severity was found to impact performance, suggesting further work is required to confirm suitability for use with more severe cases. X-ray alignment and dimensional calibration were shown to be paramount for accurate results. The pipeline's performance was demonstrated to be superior to results reported for off-the-shelf implants, although customization solutions based on 3D imaging could afford marginally better results.
In summary, the study validated the pipeline for a broad range of subjects, highlighted its potential advantages over both off-the-shelf and other customization alternatives, and outlined the potential challenges of adopting such a tool.
|
19
|
John Charnley Award: Deep Learning Prediction of Hip Joint Center on Standard Pelvis Radiographs. J Arthroplasty 2022; 37:S400-S407.e1. [PMID: 35304298 DOI: 10.1016/j.arth.2022.03.033] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 03/04/2022] [Accepted: 03/08/2022] [Indexed: 02/02/2023] Open
Abstract
BACKGROUND Accurate hip joint center (HJC) determination is critical for preoperative planning, intraoperative execution, and clinical outcomes after total hip arthroplasty (THA), as well as for commonly used classification systems in primary and revision hip replacement. However, current methods of preoperative HJC estimation are prone to subjectivity and human error. The purpose of the study was to leverage deep learning (DL) to develop a rapid and objective HJC estimation tool for anteroposterior (AP) pelvis radiographs. METHODS Radiographs from 3,965 patients (7,930 hips) were included. A DL model workflow was created to detect bony landmarks and estimate the HJC based on a pelvic height ratio method. The workflow was used to conduct a grid search for optimal nonspecific, sex-specific, and patient-specific (using the contralateral hip) pelvic height ratios on the training/validation cohort (6,344 hips). Algorithm performance was assessed on an independent testing cohort. RESULTS The algorithm estimated the HJC for the testing cohort at a rate of 0.65 seconds/hip based on features in AP radiographs alone. The model predicted the HJC within 5 mm of error for 80% of hips using nonspecific ratios, which increased to 83% with sex-specific and 91% with patient-specific pelvic height ratio models. Mean error decreased using the patient-specific model (3.09 ± 1.69 mm, P < .001). CONCLUSION Using DL, we developed nonspecific, sex-specific, and patient-specific models capable of estimating the native HJC on AP pelvis radiographs. This tool may provide clinical value when considering preoperative component position in patients planned to undergo THA and in reducing subjective variability in HJC estimation. LEVEL OF EVIDENCE Diagnostic, level IV.
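The pelvic height ratio idea can be illustrated with a toy sketch: the HJC's vertical position is expressed as a fraction of the measured pelvic height, with the ratio chosen from the most specific model available. All function names and ratio values below are invented for illustration; the abstract does not report the actual ratios:

```python
def estimate_hjc_y(pelvis_top_y, pelvis_bottom_y, ratio):
    """Vertical HJC estimate as a fraction of pelvic height below the top landmark."""
    return pelvis_top_y + ratio * (pelvis_bottom_y - pelvis_top_y)

def pick_ratio(sex=None, contralateral_ratio=None):
    """Choose a pelvic height ratio, preferring the most specific model available."""
    sex_specific = {"F": 0.21, "M": 0.19}  # placeholder values, not from the study
    if contralateral_ratio is not None:    # patient-specific (contralateral hip)
        return contralateral_ratio
    if sex in sex_specific:                # sex-specific
        return sex_specific[sex]
    return 0.20                            # nonspecific fallback (placeholder)

print(estimate_hjc_y(0.0, 200.0, pick_ratio(contralateral_ratio=0.22)))
```

The fallback order mirrors the study's finding that accuracy improved from nonspecific to sex-specific to patient-specific ratios.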
|
20
|
Artificial Intelligence Model to Detect Real Contact Relationship between Mandibular Third Molars and Inferior Alveolar Nerve Based on Panoramic Radiographs. Diagnostics (Basel) 2021; 11:diagnostics11091664. [PMID: 34574005 PMCID: PMC8465495 DOI: 10.3390/diagnostics11091664] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Revised: 09/01/2021] [Accepted: 09/08/2021] [Indexed: 11/17/2022] Open
Abstract
This study aimed to develop a novel detection model for automatically assessing the real contact relationship between mandibular third molars (MM3s) and the inferior alveolar nerve (IAN) on panoramic radiographs processed with deep learning networks, minimizing pseudo-contact interference and reducing the frequency of cone beam computed tomography (CBCT) use. A deep learning network based on YOLOv4, named MM3-IANnet, was applied to oral panoramic radiographs for the first time. The relationship between MM3s and the IAN in CBCT was considered the real contact relationship. Accuracy metrics were calculated to evaluate and compare the performance of MM3-IANnet, dentists, and a cooperative approach combining dentists with MM3-IANnet. Our results showed that in comparison with detection by dentists (AP = 76.45%) or MM3-IANnet alone (AP = 83.02%), the cooperative dentist-MM3-IANnet approach yielded the highest average precision (AP = 88.06%). In conclusion, the MM3-IANnet detection model is an encouraging artificial intelligence approach that might assist dentists in detecting the real contact relationship between MM3s and the IAN on panoramic radiographs.
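Average precision (AP) for detection tasks like this one is conventionally computed by matching predicted and ground-truth boxes via intersection-over-union (IoU). A generic IoU helper, not the MM3-IANnet code, might look like:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two overlapping 2x2 boxes: intersection 1, union 7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # ~0.142857 (1/7)
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (often 0.5), and AP summarizes the resulting precision-recall curve.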
|