1. Mekki YM, Rhim HC, Daneshvar D, Pouliopoulos AN, Curtin C, Hagert E. Applications of artificial intelligence in ultrasound imaging for carpal tunnel syndrome diagnosis: a scoping review. International Orthopaedics 2025;49:965-973. PMID: 40100390; PMCID: PMC11971218; DOI: 10.1007/s00264-025-06497-1. [Received: 02/17/2025; Accepted: 03/08/2025; Indexed: 03/20/2025]
Abstract
PURPOSE The purpose of this scoping review is to analyze the application of artificial intelligence (AI) in ultrasound (US) imaging for diagnosing carpal tunnel syndrome (CTS), with the aim of exploring the potential of AI to enhance diagnostic accuracy, efficiency, and patient outcomes by automating tasks, providing objective measurements, and facilitating earlier detection of CTS. METHODS We systematically searched multiple electronic databases, including Embase, PubMed, IEEE Xplore, and Scopus, to identify relevant studies published up to January 1, 2025. Studies were included if they focused on the application of AI in US imaging for CTS diagnosis. Editorials, expert opinions, conference papers, dataset publications, and studies without a clear clinical application of the AI algorithm were excluded. RESULTS A total of 345 articles were identified; following abstract and full-text review by two independent reviewers, 18 manuscripts were included. Of these, 13 were experimental studies, three were comparative studies, and one was a feasibility study. All 18 studies shared the common objective of improving CTS diagnosis and/or initial assessment using AI, with aims ranging from median nerve segmentation (n = 12) to automated diagnosis (n = 9) and severity classification (n = 2). The majority of studies used deep learning approaches, particularly convolutional neural networks (CNNs) (n = 15); some focused on radiomics features (n = 5) and traditional machine learning techniques. CONCLUSION The integration of AI in US imaging for CTS diagnosis holds significant promise for transforming clinical practice. AI has the potential to improve diagnostic accuracy, streamline the diagnostic process, reduce variability, and ultimately lead to better patient outcomes. Further research is needed to address challenges related to dataset limitations, variability in US imaging, and ethical considerations.
Affiliation(s)
- Hye Chang Rhim
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Spaulding Rehabilitation Hospital, Boston, MA, USA
- Daniel Daneshvar
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Spaulding Rehabilitation Hospital, Boston, MA, USA
- Antonios N Pouliopoulos
- Department of Surgical & Interventional Engineering, School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Catherine Curtin
- Department of Plastic Surgery, Stanford Medicine, Stanford, CA, USA
- Elisabet Hagert
- Aspetar Orthopedic and Sports Medicine Hospital, Doha, Qatar; Karolinska Institutet, Stockholm, Sweden.
2. Anjum M, Shahab S, Ahmad S, Dhahbi S, Whangbo T. Aggregated Pattern Classification Method for improving neural disorder stage detection. Brain Behav 2024;14:e3519. PMID: 39169422; PMCID: PMC11338743; DOI: 10.1002/brb3.3519. Open access. [Received: 01/18/2024; Revised: 03/08/2024; Accepted: 03/17/2024; Indexed: 08/23/2024]
Abstract
BACKGROUND Neurological disorders pose a significant health challenge, and their early detection is critical for effective treatment planning and prognosis. Traditional classification of neural disorders based on causes, symptoms, developmental stage, severity, and nervous system effects has limitations. Leveraging artificial intelligence (AI) and machine learning (ML) for pattern recognition offers a potent way to address these challenges. This study therefore proposes an innovative approach, the Aggregated Pattern Classification Method (APCM), for precise identification of neural disorder stages. METHOD The APCM was introduced to address prevalent issues in neural disorder detection, such as overfitting, robustness, and interoperability. The method uses aggregative patterns and classification learning functions to mitigate these challenges and improve overall recognition accuracy, even on imbalanced data. The analysis involves neural images, using observations from healthy individuals as a reference. Action response patterns from diverse inputs are mapped to identify similar features, establishing the disorder ratio. The stages are correlated based on available responses and associated neural data, with a preference for classification learning. This classification requires image and labeled data to prevent additional flaws in pattern recognition. Recognition and classification occur through multiple iterations, incorporating similar and diverse neural features. The learning process is finely tuned for fine-grained classification using labeled and unlabeled input data. RESULTS The proposed APCM demonstrates notable achievements, with high pattern recognition (15.03%) and controlled classification errors (CEs, 10.61% lower). The method effectively addresses overfitting, robustness, and interoperability issues, showcasing its potential as a powerful tool for detecting neural disorders at different stages. Its ability to handle imbalanced data contributes to the overall success of the algorithm. CONCLUSION The APCM emerges as a promising and effective approach for identifying precise neural disorder stages. By leveraging AI and ML, the method resolves key challenges in pattern recognition. The high pattern recognition and reduced CEs underscore the method's potential for clinical applications. However, its reliance on high-quality neural image data may limit the generalizability of the approach. The proposed method allows future research to refine it further and enhance its interpretability, providing valuable insights into neural disorder progression and underlying biological mechanisms.
Affiliation(s)
- Mohd Anjum
- Department of Computer Engineering, Aligarh Muslim University, Aligarh, India
- Sana Shahab
- Department of Business Administration, College of Business Administration, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Shabir Ahmad
- Department of Computer Engineering, College of IT Convergence, Gachon University, Seongnam, Republic of Korea
- Sami Dhahbi
- Department of Computer Science, College of Science and Art at Mahayil, King Khalid University, Muhayil Aseer, Saudi Arabia
- Taegkeun Whangbo
- Department of Computer Engineering, College of IT Convergence, Gachon University, Seongnam, Republic of Korea
3. Peng J, Zeng J, Lai M, Huang R, Ni D, Li Z. One-Stop Automated Diagnostic System for Carpal Tunnel Syndrome in Ultrasound Images Using Deep Learning. Ultrasound in Medicine & Biology 2024;50:304-314. PMID: 38044200; DOI: 10.1016/j.ultrasmedbio.2023.10.009. [Received: 05/31/2023; Revised: 08/23/2023; Accepted: 10/22/2023; Indexed: 12/05/2023]
Abstract
OBJECTIVE Ultrasound (US) examination has unique advantages in diagnosing carpal tunnel syndrome (CTS), although identification of the median nerve (MN) and diagnosis of CTS depend heavily on the expertise of examiners. With the aim of alleviating this problem, we developed a one-stop automated CTS diagnosis system (OSA-CTSD) and evaluated its effectiveness as a computer-aided diagnostic tool. METHODS We combined real-time MN delineation, accurate biometric measurements, and explainable CTS diagnosis into a unified framework, called OSA-CTSD. We then collected a total of 32,301 static images from US videos of 90 normal wrists and 40 CTS wrists for evaluation using a simplified scanning protocol. RESULTS The proposed model exhibited better segmentation and measurement performance than competing methods, with a Hausdorff distance (95th percentile) of 7.21 px, an average symmetric surface distance of 2.64 px, a Dice score of 85.78%, and an intersection-over-union score of 76.00%. In the reader study, it performed comparably to the average of experienced radiologists in classifying CTS and outperformed inexperienced radiologists on classification metrics (e.g., accuracy 3.59% higher and F1 score 5.85% higher). CONCLUSION The diagnostic performance of the OSA-CTSD was promising, with the advantages of real-time delineation, automation, and clinical interpretability. The application of such a tool not only reduces reliance on examiner expertise but can also help promote future standardization of the CTS diagnostic process, benefiting both patients and radiologists.
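For readers unfamiliar with the overlap metrics quoted above, the Dice score and intersection over union (IoU) are both computed from a pair of binary segmentation masks. A minimal NumPy sketch of the standard definitions (not the authors' evaluation code; the toy masks are illustrative only):

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Compute Dice and IoU for two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())  # 2|A∩B| / (|A| + |B|)
    iou = inter / union                         # |A∩B| / |A∪B|
    return float(dice), float(iou)

# Toy example: two overlapping square "nerve" masks on a 10x10 grid
pred = np.zeros((10, 10)); pred[2:6, 2:6] = 1   # 16 px predicted
gt = np.zeros((10, 10)); gt[3:7, 3:7] = 1       # 16 px ground truth, 9 px overlap
dice, iou = dice_and_iou(pred, gt)
print(round(dice, 4), round(iou, 4))  # → 0.5625 0.3913
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why the reported Dice (85.78%) exceeds the reported IoU (76.00%).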
Affiliation(s)
- Jiayu Peng
- Department of Ultrasound, Second People's Hospital of Shenzhen, First Affiliated Hospital of Shenzhen University, Shenzhen, China; Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China
- Jiajun Zeng
- Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Manlin Lai
- Ultrasound Division, Department of Medical Imaging, University of Hong Kong-Shenzhen Hospital, Shenzhen, China
- Ruobing Huang
- Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Dong Ni
- Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Zhenzhou Li
- Department of Ultrasound, Second People's Hospital of Shenzhen, First Affiliated Hospital of Shenzhen University, Shenzhen, China; Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China.
4. Yu R, Yan S, Gao J, Zhao M, Fu X, Yan Y, Li M, Li X. FBN: Weakly Supervised Thyroid Nodule Segmentation Optimized by Online Foreground and Background. Ultrasound in Medicine & Biology 2023:S0301-5629(23)00138-2. PMID: 37308370; DOI: 10.1016/j.ultrasmedbio.2023.04.009. [Received: 11/28/2022; Revised: 04/12/2023; Accepted: 04/21/2023; Indexed: 06/14/2023]
Abstract
OBJECTIVE The main objective of the work described here was to train a semantic segmentation model for thyroid nodule ultrasound images using only classification data, reducing the burden of obtaining pixel-level labeled data sets. Furthermore, we improved the segmentation performance of the model by mining the image information to narrow the gap between weakly supervised semantic segmentation (WSSS) and fully supervised semantic segmentation. METHODS Most WSSS methods use a class activation map (CAM) to generate segmentation results. However, the lack of supervision information makes it difficult for a CAM to highlight the object region completely. We therefore propose a novel foreground and background pair (FB-Pair) representation, which consists of the high- and low-response regions highlighted by the original CAM, generated online from the original image. During training, the original CAM is revised using the CAM generated by the FB-Pair. In addition, we design a self-supervised learning pretext task based on FB-Pair, which requires the model to predict during training whether the pixels in FB-Pair come from the original image. After this task, the model accurately distinguishes between different categories of objects. RESULTS Experiments on the thyroid nodule ultrasound image (TUI) data set showed that our proposed method outperformed existing methods, with a 5.7% improvement in mean intersection-over-union (mIoU) segmentation performance over the second-best method and a reduction to 2.9% in the performance difference between benign and malignant nodules. CONCLUSION Our method trains a well-performing segmentation model on ultrasound images of thyroid nodules using only classification data. In addition, we determined that a CAM can take full advantage of the information in the images to highlight the target regions more accurately and thus improve segmentation performance.
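The class activation map at the core of the method above is, in its standard formulation, a weighted sum of the final convolutional feature maps, with weights taken from the classification layer for the target class. A minimal NumPy sketch of that computation (the shapes, names, and random inputs here are illustrative assumptions, not the authors' code):

```python
import numpy as np

def class_activation_map(features: np.ndarray, fc_weights: np.ndarray,
                         class_idx: int) -> np.ndarray:
    """Standard CAM: weight each of the C final feature maps (shape (C, H, W))
    by the classifier weight for the target class, then sum over channels."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)          # keep positive class evidence only
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize to [0, 1] for thresholding
    return cam

# Toy example: 4 feature maps of size 8x8 and a 2-class linear head
rng = np.random.default_rng(0)
features = rng.random((4, 8, 8))
fc_weights = rng.random((2, 4))
cam = class_activation_map(features, fc_weights, class_idx=1)
print(cam.shape)  # → (8, 8)
```

High-response regions of such a map (after thresholding) correspond to the "foreground" half of the FB-Pair described above, and low-response regions to the "background" half.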
Affiliation(s)
- Ruiguo Yu
- College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Shaoqi Yan
- College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Jie Gao
- College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Mankun Zhao
- College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Xuzhou Fu
- College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Yang Yan
- Tianjin Medical University General Hospital, Tianjin Medical University, Tianjin, China
- Ming Li
- Tianjin Medical University General Hospital, Tianjin Medical University, Tianjin, China
- Xuewei Li
- College of Intelligence and Computing, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Advanced Networking, Tianjin University, Tianjin, China; Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China.