1
Cai L, Pfob A. Artificial intelligence in abdominal and pelvic ultrasound imaging: current applications. Abdom Radiol (NY) 2025; 50:1775-1789. [PMID: 39487919 PMCID: PMC11947003 DOI: 10.1007/s00261-024-04640-x]
Abstract
BACKGROUND: In recent years, the integration of artificial intelligence (AI) techniques into medical imaging has shown great potential to transform the diagnostic process. This review aims to provide a comprehensive overview of current state-of-the-art applications of AI in abdominal and pelvic ultrasound imaging.
METHODS: We searched the PubMed, FDA, and ClinicalTrials.gov databases for applications of AI in abdominal and pelvic ultrasound imaging.
RESULTS: A total of 128 titles were identified from the database search and were eligible for screening. After screening, 57 manuscripts were included in the final review. The main anatomical applications included multi-organ detection (n = 16, 28%), gynecology (n = 15, 26%), the hepatobiliary system (n = 13, 23%), and musculoskeletal imaging (n = 8, 14%). The main methodological applications included deep learning (n = 37, 65%), machine learning (n = 13, 23%), natural language processing (n = 5, 9%), and robotics (n = 2, 4%). The majority of the studies were single-center (n = 43, 75%) and retrospective (n = 56, 98%). We identified 17 FDA-approved AI ultrasound devices, only a few of which are specifically used for abdominal/pelvic imaging (infertility monitoring and follicle development).
CONCLUSION: The application of AI in abdominal/pelvic ultrasound shows promising early results for disease diagnosis, monitoring, and report refinement. However, the risk of bias remains high because very few of these applications have been prospectively validated (in multi-center studies) or have received FDA clearance.
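A database search like the one in this review's methods can be scripted against NCBI's public E-utilities API. The sketch below only builds the `esearch` URL (no network call); the query terms are illustrative assumptions, not the review's actual search string.

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (public, documented API)
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(terms, retmax=200):
    """Build an esearch URL for a PubMed query; the caller performs the HTTP GET."""
    params = {
        "db": "pubmed",
        "term": " AND ".join(terms),
        "retmax": retmax,
        "retmode": "json",
    }
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

# Hypothetical terms approximating the review's scope
url = build_pubmed_query(["artificial intelligence", "ultrasound", "abdominal OR pelvic"])
```

Fetching the resulting URL returns a JSON list of PMIDs, which would then be screened by title and abstract as described above.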
Affiliation(s)
- Lie Cai
- Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120, Heidelberg, Germany
- André Pfob
- Department of Obstetrics and Gynecology, Heidelberg University Hospital, Im Neuenheimer Feld 440, 69120, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT) and German Cancer Research Center (DKFZ), Heidelberg, Germany.
2
Cui XW, Goudie A, Blaivas M, Chai YJ, Chammas MC, Dong Y, Stewart J, Jiang TA, Liang P, Sehgal CM, Wu XL, Hsieh PCC, Adrian S, Dietrich CF. WFUMB Commentary Paper on Artificial Intelligence in Medical Ultrasound Imaging. Ultrasound Med Biol 2025; 51:428-438. [PMID: 39672681 DOI: 10.1016/j.ultrasmedbio.2024.10.016]
Abstract
Artificial intelligence (AI) is defined as the theory and development of computer systems able to perform tasks normally associated with human intelligence. At present, AI is widely used in a variety of ultrasound tasks, including point-of-care ultrasound, echocardiography, and various diseases of different organs. However, the characteristics of ultrasound, compared to other imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), pose significant additional challenges to AI. Application of AI can not only reduce variability during ultrasound image acquisition but also standardize interpretations and identify patterns that escape the human eye and brain. These advances have enabled greater innovation in ultrasound AI applications across a variety of clinical settings and disease states. The World Federation for Ultrasound in Medicine and Biology (WFUMB) therefore addresses the topic with a brief and practical overview of current and potential future AI applications in medical ultrasound, as well as a discussion of current limitations and future challenges to AI implementation.
Affiliation(s)
- Xin Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College and State Key Laboratory for Diagnosis and Treatment of Severe Zoonotic Infectious Diseases, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Adrian Goudie
- Department of Emergency, Fiona Stanley Hospital, Perth, Australia
- Michael Blaivas
- Department of Medicine, University of South Carolina School of Medicine, Columbia, SC, USA
- Young Jun Chai
- Department of Surgery, Seoul National University College of Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Republic of Korea
- Maria Cristina Chammas
- Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, São Paulo, Brazil
- Yi Dong
- Department of Ultrasound, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jonathon Stewart
- School of Medicine, The University of Western Australia, Perth, Western Australia, Australia
- Tian-An Jiang
- Department of Ultrasound Medicine, The First Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Ping Liang
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, China
- Chandra M Sehgal
- Ultrasound Research Lab, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Xing-Long Wu
- School of Computer Science & Engineering, Wuhan Institute of Technology, Wuhan, Hubei, China
- Adrian Saftoiu
- Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
- Christoph F Dietrich
- Department General Internal Medicine (DAIM), Hospitals Hirslanden Bern Beau Site, Salem and Permanence, Bern, Switzerland.
3
Hernandez Torres SI, Holland L, Edwards TH, Venn EC, Snider EJ. Deep learning models for interpretation of point of care ultrasound in military working dogs. Front Vet Sci 2024; 11:1374890. [PMID: 38903685 PMCID: PMC11187302 DOI: 10.3389/fvets.2024.1374890]
Abstract
Introduction: Military working dogs (MWDs) are essential for military operations in a wide range of missions. With this pivotal role, MWDs can become casualties requiring specialized veterinary care that may not always be available far forward on the battlefield. Some injuries, such as pneumothorax, hemothorax, or abdominal hemorrhage, can be diagnosed using point-of-care ultrasound (POCUS) such as the Global FAST® exam. This presents a unique opportunity for artificial intelligence (AI) to aid in the interpretation of ultrasound images. In this article, deep learning classification neural networks were developed for POCUS assessment in MWDs.
Methods: Images were collected in five MWDs under general anesthesia or deep sedation for all scan points in the Global FAST® exam. For representative injuries, a cadaver model was used from which positive and negative injury images were captured. A total of 327 ultrasound clips were captured and split across scan points for training three different AI network architectures: MobileNetV2, DarkNet-19, and ShrapML. Gradient class activation mapping (GradCAM) overlays were generated for representative images to better explain AI predictions.
Results: Performance of the AI models reached over 82% accuracy for all scan points. The highest-performing model, trained with the MobileNetV2 network for the cystocolic scan point, achieved 99.8% accuracy. Across all trained networks, the diaphragmatic hepatorenal scan point had the best overall performance. However, GradCAM overlays showed that the models with the highest accuracy, like MobileNetV2, were not always identifying relevant features. Conversely, the GradCAM heatmaps for ShrapML showed general agreement with the regions most indicative of fluid accumulation.
Discussion: Overall, the AI models developed can automate POCUS predictions in MWDs. Preliminarily, ShrapML had the strongest performance and prediction rate paired with accurate tracking of fluid accumulation sites, making it the most suitable option for eventual real-time deployment with ultrasound systems. Further integration of this technology with imaging technologies will expand the use of POCUS-based triage of MWDs.
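The GradCAM overlays used in this study to explain predictions reduce to a simple computation once a framework has extracted the last convolutional layer's activations and the gradients of the class score with respect to them. The numpy-only sketch below shows that core step (channel weights from global-average-pooled gradients, weighted sum, ReLU); the deep-learning framework plumbing around it is omitted.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Core Grad-CAM step: weight each feature map by the global-average-pooled
    gradient of the class score w.r.t. that map, sum over channels, then ReLU.

    feature_maps: (H, W, K) activations from the last conv layer
    gradients:    (H, W, K) gradients of the target class score w.r.t. those activations
    returns:      (H, W) heatmap normalized to [0, 1]
    """
    # alpha_k: per-channel importance weight (global average pooling of gradients)
    alphas = gradients.mean(axis=(0, 1))                       # shape (K,)
    cam = np.tensordot(feature_maps, alphas, axes=([2], [0]))  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                                 # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                                  # normalize for overlay display
    return cam
```

The resulting low-resolution heatmap is upsampled to the input image size and blended over the ultrasound frame, which is how one can see whether a model such as ShrapML is attending to fluid-accumulation regions.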
Affiliation(s)
- Sofia I. Hernandez Torres
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Lawrence Holland
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Thomas H. Edwards
- Hemorrhage Control and Vascular Dysfunction Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Texas A&M University, School of Veterinary Medicine, College Station, TX, United States
- Emilee C. Venn
- Veterinary Support Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Eric J. Snider
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
4
Hernandez Torres SI, Ruiz A, Holland L, Ortiz R, Snider EJ. Evaluation of Deep Learning Model Architectures for Point-of-Care Ultrasound Diagnostics. Bioengineering (Basel) 2024; 11:392. [PMID: 38671813 PMCID: PMC11048259 DOI: 10.3390/bioengineering11040392]
Abstract
Point-of-care ultrasound imaging is a critical tool for patient triage during trauma, both for diagnosing injuries and for prioritizing limited medical evacuation resources. Specifically, an eFAST exam evaluates whether there is free fluid in the chest or abdomen, but this is only possible if ultrasound scans can be accurately interpreted, a challenge in the pre-hospital setting. In this effort, we evaluated AI-based eFAST image interpretation models. Widely used deep learning model architectures, as well as Bayesian-optimized models, were evaluated for six different diagnostic tasks: pneumothorax in (i) B- or (ii) M-mode, hemothorax in (iii) B- or (iv) M-mode, (v) pelvic or bladder abdominal hemorrhage, and (vi) right upper quadrant abdominal hemorrhage. Models were trained using images captured in 27 swine. Using a leave-one-subject-out training approach, the MobileNetV2 and DarkNet53 models surpassed 85% accuracy for each M-mode scan site. The different B-mode models performed worse, with accuracies between 68% and 74%, except for the pelvic hemorrhage model, which only reached 62% accuracy for all model architectures. These results highlight which eFAST scan sites can be readily automated with image interpretation models, while other scan sites, such as the bladder hemorrhage model, will require more robust model development or data augmentation to improve performance. With these additional improvements, the skill threshold for ultrasound-based triage can be reduced, expanding its utility in the pre-hospital setting.
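The leave-one-subject-out approach mentioned in this abstract keeps every image from one animal out of training so the held-out subject provides an uncontaminated test set. A minimal sketch of the split generation (subject IDs here are hypothetical placeholders for the 27 swine):

```python
def leave_one_subject_out(subject_ids):
    """Yield (train_subjects, test_subject) splits: each subject is held out once,
    so no animal contributes images to both the training and test sets."""
    unique = sorted(set(subject_ids))
    for held_out in unique:
        train = [s for s in unique if s != held_out]
        yield train, held_out

# Hypothetical IDs standing in for the study's 27 animals
subjects = [f"swine_{i:02d}" for i in range(1, 28)]
splits = list(leave_one_subject_out(subjects))
```

Each of the 27 splits would then train one model instance, and the reported accuracy is aggregated over the held-out subjects, which guards against the model memorizing subject-specific anatomy rather than injury features.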
Affiliation(s)
- Eric J. Snider
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, Joint Base San Antonio, Fort Sam Houston, San Antonio, TX 78234, USA; (S.I.H.T.); (A.R.); (L.H.); (R.O.)
5
Amezcua KL, Collier J, Lopez M, Hernandez Torres SI, Ruiz A, Gathright R, Snider EJ. Design and testing of ultrasound probe adapters for a robotic imaging platform. Sci Rep 2024; 14:5102. [PMID: 38429442 PMCID: PMC10907673 DOI: 10.1038/s41598-024-55480-0]
Abstract
Medical imaging-based triage is a critical tool for emergency medicine in both civilian and military settings. Ultrasound imaging can be used to rapidly identify free fluid in abdominal and thoracic cavities which could necessitate immediate surgical intervention. However, proper ultrasound image capture requires a skilled ultrasonography technician who is likely unavailable at the point of injury where resources are limited. Instead, robotics and computer vision technology can simplify image acquisition. As a first step towards this larger goal, here, we focus on the development of prototypes for ultrasound probe securement using a robotics platform. The ability of four probe adapter technologies to capture images precisely and repeatedly at anatomical locations, with different ultrasound transducer types, was evaluated across more than five scoring criteria. Testing demonstrated that two of the adapters outperformed the traditional robot gripper and manual image capture, with a compact, rotating design compatible with wireless imaging technology being most suitable for use at the point of injury. Next steps will integrate the robotic platform with computer vision and deep learning image interpretation models to automate image capture and diagnosis. This will lower the skill threshold needed for medical imaging-based triage, enabling this procedure to be available at or near the point of injury.
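Comparing adapter designs across multiple scoring criteria, as this study did, is a standard weighted decision-matrix exercise. The sketch below is a toy illustration: the criteria names, scores, and weights are all invented for this example and are not the paper's actual evaluation data.

```python
# Hypothetical decision matrix: keys are adapter designs, values are 1-5 scores
# per criterion. Criteria and weights are invented for illustration only.
criteria = ["positioning accuracy", "repeatability", "transducer compatibility",
            "attachment ease", "footprint"]
weights = [0.30, 0.25, 0.20, 0.15, 0.10]

scores = {
    "gripper (baseline)": [3, 3, 2, 4, 3],
    "rotating compact":   [4, 4, 5, 4, 5],
}

def weighted_score(row):
    """Sum of per-criterion scores multiplied by the criterion weights."""
    return sum(s * w for s, w in zip(row, weights))

# Rank designs from best to worst overall score
ranked = sorted(scores, key=lambda k: weighted_score(scores[k]), reverse=True)
```

With these made-up numbers the compact rotating design ranks first, mirroring (but not reproducing) the qualitative outcome the abstract reports.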
Affiliation(s)
- Krysta-Lynn Amezcua
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA
- James Collier
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA
- Michael Lopez
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA
- Sofia I Hernandez Torres
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA
- Austin Ruiz
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA
- Rachel Gathright
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA
- Eric J Snider
- Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, 78234, USA.
6
Avital G, Hernandez Torres SI, Knowlton ZJ, Bedolla C, Salinas J, Snider EJ. Toward Smart, Automated Junctional Tourniquets-AI Models to Interpret Vessel Occlusion at Physiological Pressure Points. Bioengineering (Basel) 2024; 11:109. [PMID: 38391595 PMCID: PMC10885917 DOI: 10.3390/bioengineering11020109]
Abstract
Hemorrhage is the leading cause of preventable death in both civilian and military medicine. Junctional hemorrhages are especially difficult to manage since traditional tourniquet placement is often not possible. Ultrasound can be used to visualize and guide the caretaker to apply pressure at physiological pressure points to stop hemorrhage. However, this process is technically challenging, requiring the vessel to be properly positioned over rigid bony surfaces and sufficient pressure to be applied to maintain proper occlusion. As a first step toward automating this life-saving intervention, we demonstrate an artificial intelligence algorithm that classifies a vessel as patent or occluded, which can guide a user to apply the appropriate pressure required to stop flow. Neural network models were trained using images captured from a custom tissue-mimicking phantom and an ex vivo swine model of the inguinal region, as pressure was applied using an ultrasound probe with and without color Doppler overlays. Using these images, we developed an image classification algorithm suitable for the determination of patency or occlusion in an ultrasound image containing a color Doppler overlay. Separate AI models for both test platforms were able to detect occlusion status in test-image sets with more than 93% accuracy. In conclusion, this methodology can be utilized for guiding and monitoring proper vessel occlusion, which, when combined with automated actuation and other AI models, can allow for automated junctional tourniquet application.
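The study's neural networks classify patent versus occluded vessels from frames containing color Doppler overlays. The toy heuristic below is a simplified stand-in for that classifier, not the paper's method: it merely thresholds the fraction of colored (non-grayscale) pixels, exploiting the fact that B-mode background is gray while Doppler flow signal is strongly colored. The channel-difference tolerance and fraction threshold are assumptions.

```python
import numpy as np

def doppler_patency_heuristic(rgb_image, color_fraction_threshold=0.02):
    """Toy stand-in for a patency classifier: call a vessel 'patent' when enough
    pixels in the frame carry color-Doppler signal.

    rgb_image: (H, W, 3) uint8 array; grayscale B-mode pixels have R == G == B,
    while Doppler flow overlays are strongly colored (R far from B).
    """
    img = rgb_image.astype(np.int16)  # avoid uint8 wraparound when subtracting
    # A pixel counts as "colored" when its red/blue channels diverge noticeably.
    colored = np.abs(img[..., 0] - img[..., 2]) > 30
    fraction = colored.mean()
    return "patent" if fraction > color_fraction_threshold else "occluded"
```

A real deployment needs the trained CNN from the paper precisely because occlusion decisions must be robust to gain settings, aliasing artifacts, and partial flow, which a pixel-count heuristic like this cannot handle.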
Affiliation(s)
- Guy Avital
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Israel Defense Forces Medical Corps, Ramat Gan 52620, Israel
- Division of Anesthesia, Intensive Care, and Pain Management, Tel-Aviv Medical Center, Affiliated with the Faculty of Medicine, Tel Aviv University, Tel Aviv 64239, Israel
- Zechariah J Knowlton
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Carlos Bedolla
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Jose Salinas
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Eric J Snider
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA