1. Horky A, Wasenitz M, Iacovella C, Bahlmann F, Al Naimi A. The performance of sonographic antenatal birth weight assessment assisted with artificial intelligence compared to that of manual examiners at term. Arch Gynecol Obstet 2025. PMID: 40299004. DOI: 10.1007/s00404-025-08042-2.
Abstract
PURPOSE The aim of this study is to investigate the differences in the accuracy of sonographic antenatal fetal weight estimation at term with artificial intelligence (AI) compared to that of clinical sonographers at different levels of experience. METHODS This is a prospective cohort study in which pregnant women at term scheduled for an imminent elective cesarean section were recruited. Three independent antenatal fetal weight estimations for each fetus were blindly measured by an experienced resident physician with level I qualification from the German Society for Ultrasound in Medicine (group 1), a senior physician with level II qualification (group 2), and an AI-supported algorithm (group 3) using Hadlock formula 3. The differences between the three groups and the actual birth weight were examined with a paired t-test. A variation within 10% of birth weight was deemed accurate, and the diagnostic accuracies of groups 1 and 3 compared to group 2 were assessed using receiver operating characteristic (ROC) curves. The association between accuracy and potential influencing factors including gestational age, fetal position, maternal age, maternal body mass index (BMI), twins, neonatal gender, placental position, gestational diabetes, and amniotic fluid index was tested with univariate logistic regression. A sensitivity analysis was conducted by inflating the estimated weights by a gain of 25 grams (g) per day between examination and birth. RESULTS 300 fetuses at a mean gestational week of 38.7 ± 1.1 were included in this study and examined a median of 2 (2-4) days prior to delivery. Average birth weight was 3264.6 ± 530.7 g, and the mean difference of the sonographically estimated fetal weight compared to birth weight was -203.6 ± 325.4 g, -132.2 ± 294.1 g, and -338.4 ± 606.2 g for groups 1, 2, and 3 respectively. The estimated weight was accurate in 62% (56.2%, 67.5%), 70% (64.5%, 75.1%), and 48.3% (42.6%, 54.1%) for groups 1, 2, and 3 respectively.
The diagnostic accuracy measures for groups 1 and 3 compared to group 2 resulted in 55.7% (48.7%, 62.5%) and 68.6% (61.8%, 74.8%) sensitivity, 68.9% (58.3%, 78.2%) and 53.3% (42.5%, 63.9%) specificity and 0.62 (0.56, 0.68) and 0.61 (0.55, 0.67) area under the ROC curves respectively. There was no association between accuracy and the investigated variables. Adjusting for sensitivity analysis increased the accuracy to 68% (62.4%, 73.2%), 75% (69.7%, 79.8%), and 51.3% (45.5%, 57.1%), and changed the mean difference compared to birth weight to -136.1 ± 321.8 g, -64.7 ± 291.2 g, and -270.7 ± 605.2 g for groups 1, 2, and 3 respectively. CONCLUSION The antenatal weight estimation by experienced specialists with high-level qualifications remains the gold standard and provides the highest precision. Nevertheless, the accuracy of this standard is less than 80% even after adjusting for daily weight gain. The tested AI-supported method exhibits high variability and requires optimization and validation before being reliably used in clinical practice.
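The study's core computations are straightforward to sketch: a Hadlock estimated-fetal-weight formula from standard biometry, the within-10%-of-birth-weight accuracy criterion, and the 25 g/day sensitivity adjustment. The coefficients below are the commonly cited Hadlock formula using head circumference (HC), abdominal circumference (AC), and femur length (FL); treat them as an illustrative assumption rather than the study's exact implementation.

```python
def hadlock3_efw(hc_cm: float, ac_cm: float, fl_cm: float) -> float:
    """Estimated fetal weight (g) from the commonly cited Hadlock
    (HC, AC, FL) formula; measurements in centimeters."""
    log10_efw = (1.326
                 - 0.00326 * ac_cm * fl_cm
                 + 0.0107 * hc_cm
                 + 0.0438 * ac_cm
                 + 0.158 * fl_cm)
    return 10 ** log10_efw

def within_10_percent(efw_g: float, birth_weight_g: float) -> bool:
    """Accuracy criterion used in the study: estimate within 10% of birth weight."""
    return abs(efw_g - birth_weight_g) <= 0.10 * birth_weight_g

def adjust_for_interval(efw_g: float, days_to_delivery: int,
                        gain_per_day_g: float = 25.0) -> float:
    """Sensitivity analysis: inflate the estimate by 25 g per day
    between examination and birth."""
    return efw_g + gain_per_day_g * days_to_delivery
```

For a term fetus with HC = AC = 33 cm and FL = 7 cm this yields roughly 3 kg, in the range of the cohort's mean birth weight.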
Affiliation(s)
- Alex Horky
- Department of Obstetrics and Gynecology, Buergerhospital - Dr. Senckenberg Foundation, Nibelungenallee 37-41, 60318, Frankfurt, Hessen, Germany
- Marita Wasenitz
- Department of Obstetrics and Gynecology, Buergerhospital - Dr. Senckenberg Foundation, Nibelungenallee 37-41, 60318, Frankfurt, Hessen, Germany
- Carlotta Iacovella
- Department of Obstetrics and Gynecology, Buergerhospital - Dr. Senckenberg Foundation, Nibelungenallee 37-41, 60318, Frankfurt, Hessen, Germany
- Franz Bahlmann
- Department of Obstetrics and Gynecology, Buergerhospital - Dr. Senckenberg Foundation, Nibelungenallee 37-41, 60318, Frankfurt, Hessen, Germany
- Ammar Al Naimi
- Department of Obstetrics and Gynecology, Buergerhospital - Dr. Senckenberg Foundation, Nibelungenallee 37-41, 60318, Frankfurt, Hessen, Germany
- Department of Obstetrics and Prenatal Medicine, Goethe University, University Hospital of Frankfurt, Hessen, Germany
2. Kim J, Maranna S, Watson C, Parange N. A scoping review on the integration of artificial intelligence in point-of-care ultrasound: Current clinical applications. Am J Emerg Med 2025; 92:172-181. PMID: 40117961. DOI: 10.1016/j.ajem.2025.03.029.
Abstract
BACKGROUND Artificial intelligence (AI) is used increasingly in point-of-care ultrasound (POCUS). However, the true role, utility, advantages, and limitations of AI tools in POCUS remain poorly understood. AIM To conduct a scoping review of the current literature on AI in POCUS to identify (1) how AI is being applied in POCUS, and (2) how AI in POCUS could be utilized in clinical settings. METHODS The review followed the JBI scoping review methodology. A search was conducted in Medline, Embase, Emcare, Scopus, Web of Science, Google Scholar, and AI POCUS manufacturer websites. Selection criteria, evidence screening, and selection were performed in Covidence. Data extraction and analysis were performed in Microsoft Excel by the primary investigator and confirmed by the secondary investigators. RESULTS Thirty-three papers were included. AI POCUS of the cardiopulmonary region was the most prominent in the literature. AI was most frequently used to automatically measure biometry from POCUS images, and AI POCUS was most often used in acute settings, although novel applications in non-acute and low-resource settings were also explored. AI had the potential to increase POCUS accessibility and usability and to expedite care and management, and it showed reasonably high diagnostic accuracy in limited applications such as measurement of Left Ventricular Ejection Fraction, Inferior Vena Cava Collapsibility Index, and Left-Ventricular Outflow Tract Velocity Time Integral, and identification of B-lines of the lung. However, AI could not interpret poor images, underperformed compared to standard-of-care diagnostic methods, and was less effective in patients with specific disease states, such as severe illnesses that limit POCUS image acquisition. CONCLUSION This review uncovered the applications of AI in POCUS and the advantages and limitations of AI POCUS in different clinical settings.
Future research in the field must first establish the diagnostic accuracy of AI POCUS tools and explore their clinical utility through clinical trials.
Affiliation(s)
- Junu Kim
- University of South Australia, Adelaide, South Australia, Australia
- Sandhya Maranna
- University of South Australia, Adelaide, South Australia, Australia
- Caterina Watson
- Edith Cowan University, 270 Joondalup Dr, Joondalup, Western Australia, Australia
- Nayana Parange
- University of South Australia, Adelaide, South Australia, Australia
3. Kılınçdemir Turgut Ü. Artificial intelligence and perinatology: a study on accelerated academic production - a bibliometric analysis. Front Med (Lausanne) 2025; 12:1505450. PMID: 40051727. PMCID: PMC11883689. DOI: 10.3389/fmed.2025.1505450.
Abstract
Objective The main purpose of this bibliometric study is to compile the rapidly increasing number of articles in the field of perinatology in recent years and to shed light on the research areas where studies are concentrated. Materials and methods This bibliometric study was conducted using the Thomson ISI Web of Science Core Collection (WOSCC) system on May 4, 2024, with specific keywords. The abstracts of 1,124 articles that met the criteria were reviewed, and 382 articles related to perinatology were evaluated. Keyword co-occurrence, co-citation of authors, and co-citation of references analyses were conducted using VOSviewer (version 1.6.19). Of these, 121 articles with 10 or more citations were analyzed in terms of their content and categorized under the headings "Purpose of Evaluation," "Medical Methods and Parameters Used," "Output To Be Evaluated," and "Fetal System or Region Being Evaluated." Results Among the 382 examined articles, the most frequent publishing journal was Medical Image Analysis, while the journals with the most publications in the field of perinatology were Prenatal Diagnosis and Ultrasound in Obstetrics & Gynecology. The most commonly used keyword was "deep learning" (115/382). Among the 121 highly cited articles, the most common purpose of evaluation was "Prenatal Screening." Artificial intelligence was most frequently used with ultrasound (59.8%) imaging, with MRI (20.5%) in second place. Among the evaluated outputs, "organ scanning" (35/121) was in first place, while "biometry" (34/121) was in second place. In terms of evaluated systems and organs, "growth screening" (35/121) was the most common, followed by the "neurological system" (33/121) and then the "cardiovascular system" (18/121). Conclusion Recent years have witnessed the increasing influence of artificial intelligence in the field of perinatology.
This impact may mark the historic beginning of the transition to the AI era in perinatology. Milestones are being laid on the path from prenatal screening to prenatal treatment.
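The keyword co-occurrence analysis behind VOSviewer maps reduces to counting, for every keyword pair, how many articles list both. A minimal sketch with invented keywords:

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(articles):
    """Count how often each keyword pair appears together in one article,
    the raw statistic behind co-occurrence network maps."""
    pair_counts = Counter()
    for keywords in articles:
        # sorted() gives each pair a canonical order; set() drops duplicates
        for a, b in combinations(sorted(set(keywords)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

# hypothetical per-article keyword lists
articles = [
    ["deep learning", "ultrasound", "perinatology"],
    ["deep learning", "MRI"],
    ["deep learning", "ultrasound"],
]
counts = keyword_cooccurrence(articles)
```

Tools like VOSviewer then normalize these counts and lay out the network; the counting step itself is no more than this.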
4. Perez K, Wisniewski D, Ari A, Lee K, Lieneck C, Ramamonjiarivelo Z. Investigation into Application of AI and Telemedicine in Rural Communities: A Systematic Literature Review. Healthcare (Basel) 2025; 13:324. PMID: 39942513. PMCID: PMC11816903. DOI: 10.3390/healthcare13030324.
Abstract
Recent advances in artificial intelligence (AI) and telemedicine are transforming healthcare delivery, particularly in rural and underserved communities. BACKGROUND/OBJECTIVES The purpose of this systematic review is to explore the use of AI-driven diagnostic tools and telemedicine platforms and to identify underlying themes (constructs) in the literature across multiple research studies. METHOD The research team conducted an extensive review of studies and articles across multiple research databases, aiming to identify consistent themes and patterns in the literature. RESULTS Five underlying constructs were identified with regard to the utilization of AI and telemedicine in patient diagnosis in rural communities: (1) Challenges/benefits of AI and telemedicine in rural communities, (2) Integration of telemedicine and AI in diagnosis and patient monitoring, (3) Future considerations of AI and telemedicine in rural communities, (4) Application of AI for accurate and early diagnosis of diseases through various digital tools, and (5) Insights into the future directions and potential innovations in AI and telemedicine specifically geared towards enhancing healthcare delivery in rural communities. CONCLUSIONS While AI technologies offer enhanced diagnostic capabilities by processing vast datasets of medical records, imaging, and patient histories, leading to earlier and more accurate diagnoses, telemedicine acts as a bridge between patients in remote areas and specialized healthcare providers, offering timely access to consultations, follow-up care, and chronic disease management. Therefore, the integration of AI with telemedicine allows for real-time decision support, improving clinical outcomes by providing data-driven insights during virtual consultations. However, challenges remain, including ensuring equitable access to these technologies, addressing digital literacy gaps, and managing the ethical implications of AI-driven decisions.
Despite these hurdles, AI and telemedicine hold significant promise in reducing healthcare disparities and advancing the quality of care in rural settings, potentially leading to improved long-term health outcomes for underserved populations.
Affiliation(s)
- Kinalyne Perez
- School of Health Administration, Texas State University, San Marcos, TX 78666, USA
- Daniela Wisniewski
- School of Health Administration, Texas State University, San Marcos, TX 78666, USA
- Arzu Ari
- College of Health Professions, Texas State University, San Marcos, TX 78666, USA
- Kim Lee
- School of Health Administration, Texas State University, San Marcos, TX 78666, USA
- Cristian Lieneck
- School of Health Administration, Texas State University, San Marcos, TX 78666, USA
- Zo Ramamonjiarivelo
- School of Health Administration, Texas State University, San Marcos, TX 78666, USA
5. Naz S, Noorani S, Jaffar Zaidi SA, Rahman AR, Sattar S, Das JK, Hoodbhoy Z. Use of artificial intelligence for gestational age estimation: a systematic review and meta-analysis. Front Glob Womens Health 2025; 6:1447579. PMID: 39950139. PMCID: PMC11821921. DOI: 10.3389/fgwh.2025.1447579.
Abstract
Introduction Estimating a reliable gestational age (GA) is essential to providing appropriate care during pregnancy. With advancements in data science, there are several publications on the use of artificial intelligence (AI) models to estimate GA from ultrasound (US) images. The aim of this meta-analysis is to assess the accuracy of AI models in assessing GA against US as the gold standard. Methods A literature search was performed in the PubMed, CINAHL, Wiley Cochrane Library, Scopus, and Web of Science databases. Studies that reported use of AI models for GA estimation with US as the reference standard were included. Risk of bias assessment was performed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Mean error in GA was estimated using STATA version 17, and subgroup analyses on trimester of GA assessment, AI models, study design, and external validation were performed. Results Out of the 1,039 studies screened, 17 were included in the review, and of these 10 studies were included in the meta-analysis. Five (29%) studies were from high-income countries (HICs), four (24%) from upper-middle-income countries (UMICs), one (6%) from a low- or middle-income country (LMIC), and the remaining seven studies (41%) used data across different income regions. The pooled mean error in GA estimation based on 2D images (n = 6) and blind sweep videos (n = 4) was 4.32 days (95% CI: 2.82, 5.83; I²: 97.95%) and 2.55 days (95% CI: -0.13, 5.23; I²: 100%), respectively. On subgroup analysis based on 2D images, the mean error in GA estimation was 7.00 days (95% CI: 6.08, 7.92) in the first trimester, 2.35 days (95% CI: 1.03, 3.67) in the second, and 4.30 days (95% CI: 4.10, 4.50) in the third. In studies using deep learning for 2D images, those employing a convolutional neural network (CNN) reported a mean error of 5.11 days (95% CI: 1.85, 8.37) in gestational age estimation, while one using a deep neural network (DNN) indicated a mean error of 5.39 days (95% CI: 5.10, 5.68).
Most studies exhibited an unclear or low risk of bias across domains, including patient selection, index test, reference standard, flow and timing, and applicability. Conclusion Preliminary experience with AI models shows good accuracy in estimating GA. This holds tremendous potential for pregnancy dating, especially in resource-poor settings where trained interpreters may be limited. Systematic Review Registration PROSPERO, identifier CRD42022319966.
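The pooled mean errors and I² heterogeneity values above come from standard meta-analytic machinery. A minimal fixed-effect (inverse-variance) sketch is shown below, backing each study's standard error out of its reported 95% CI; the review itself likely used a random-effects model, so this illustrates only the mechanics:

```python
def pool_mean_errors(estimates):
    """Fixed-effect inverse-variance pooling of per-study mean errors,
    plus Cochran's Q and the I^2 heterogeneity statistic.
    `estimates` is a list of (mean_error, ci_lower_95, ci_upper_95)."""
    weights, means = [], []
    for mean, lo, hi in estimates:
        se = (hi - lo) / (2 * 1.96)        # back out the SE from the 95% CI
        weights.append(1.0 / se ** 2)
        means.append(mean)
    pooled = sum(w * m for w, m in zip(weights, means)) / sum(weights)
    q = sum(w * (m - pooled) ** 2 for w, m in zip(weights, means))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, i2
```

Feeding in the three 2D-image trimester estimates reported above (made-up ordering aside) gives a pooled error dominated by the narrow third-trimester CI and an I² above 90%, consistent with the high heterogeneity the review reports.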
Affiliation(s)
- Sabahat Naz
- Department of Pediatrics and Child Health, The Aga Khan University, Karachi, Pakistan
- Sahir Noorani
- Department of Pediatrics and Child Health, The Aga Khan University, Karachi, Pakistan
- Syed Ali Jaffar Zaidi
- Department of Pediatrics and Child Health, The Aga Khan University, Karachi, Pakistan
- Abdu R. Rahman
- Institute for Global Health and Development, The Aga Khan University, Karachi, Pakistan
- Saima Sattar
- Department of Pediatrics and Child Health, The Aga Khan University, Karachi, Pakistan
- Jai K. Das
- Department of Pediatrics and Child Health, The Aga Khan University, Karachi, Pakistan
- Institute for Global Health and Development, The Aga Khan University, Karachi, Pakistan
- Zahra Hoodbhoy
- Department of Pediatrics and Child Health, The Aga Khan University, Karachi, Pakistan
6. Rauf F, Attique Khan M, Albarakati HM, Jabeen K, Alsenan S, Hamza A, Teng S, Nam Y. Artificial intelligence assisted common maternal fetal planes prediction from ultrasound images based on information fusion of customized convolutional neural networks. Front Med (Lausanne) 2024; 11:1486995. PMID: 39534222. PMCID: PMC11554532. DOI: 10.3389/fmed.2024.1486995.
Abstract
Ultrasound imaging is frequently employed to assess fetal development. It benefits from being real-time, inexpensive, non-intrusive, and simple. Artificial intelligence is becoming increasingly significant in medical imaging and can assist in resolving many problems related to the classification of fetal organs. Processing fetal ultrasound (US) images increasingly uses deep learning (DL) techniques. This paper aims to assess the development of existing DL classification systems for use in a real maternal-fetal healthcare setting. The experiments employed two publicly available datasets, the FPSU23 dataset and Fetal Imaging. Two novel deep learning architectures were designed, based on 3-residual and 4-residual blocks with different convolutional filter sizes. The hyperparameters of the proposed architectures were initialized through Bayesian Optimization. Following the training process, deep features were extracted from the average pooling layers of both models. In a subsequent step, the features from both models were optimized using an improved version of the Generalized Normal Distribution Optimizer (GNDO). Finally, the optimized features of both models were combined using a new fusion technique, and the fused features were classified with neural networks. The best classification scores, 98.5 and 88.6% accuracy, were obtained after multiple steps of analysis. Additionally, a comparison with existing state-of-the-art methods revealed a notable improvement in the proposed architecture's accuracy.
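The fusion step, combining average-pooled feature vectors from two networks before classification, reduces at its simplest to row-wise concatenation (serial fusion). This sketch uses random arrays in place of real deep features and does not reproduce the paper's GNDO-based feature optimization or its specific fusion technique:

```python
import numpy as np

def serial_fusion(features_a: np.ndarray, features_b: np.ndarray) -> np.ndarray:
    """Serial (concatenation) fusion of per-sample feature vectors from two
    networks: each row is one sample's average-pooled feature vector."""
    if features_a.shape[0] != features_b.shape[0]:
        raise ValueError("both feature matrices must cover the same samples")
    return np.concatenate([features_a, features_b], axis=1)

# hypothetical pooled features: 4 samples, 256-d from model A, 512-d from model B
rng = np.random.default_rng(0)
fused = serial_fusion(rng.normal(size=(4, 256)), rng.normal(size=(4, 512)))
```

A downstream classifier then sees one 768-dimensional vector per sample; feature selection (here, GNDO) typically prunes that vector before classification.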
Affiliation(s)
- Fatima Rauf
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Muhammad Attique Khan
- Department of Artificial Intelligence, College of Computer Engineering and Science, Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia
- Hussain M. Albarakati
- Computer and Network Engineering Department, College of Computing, Umm Al-Qura University, Makkah, Saudi Arabia
- Kiran Jabeen
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Shrooq Alsenan
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Ameer Hamza
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Sokea Teng
- Department of ICT Convergence, Soonchunhyang University, Asan, Republic of Korea
- Yunyoung Nam
- Department of ICT Convergence, Soonchunhyang University, Asan, Republic of Korea
7. Ochoa EJ, Romero SE, Marini TJ, O'Connell A, Brennan G, Kan J, Meng S, Zhao Y, Baran T, Castaneda B. A comparison between Deep Learning architectures for the assessment of breast tumor segmentation using VSI ultrasound protocol. Annu Int Conf IEEE Eng Med Biol Soc 2024:1-4. PMID: 40039274. DOI: 10.1109/embc53108.2024.10782786.
Abstract
Automatic breast tumor ultrasound segmentation is one of the most critical components in the development of tools for breast cancer diagnosis. Several deep learning algorithms have been tested with public and private datasets, but none of them has been designed for asynchronous protocol ultrasound acquisition. In this work, a dataset collected through the Volume Sweep Imaging protocol for breast ultrasound (VSI-B) was used. A comparative analysis of convolutional neural networks for segmentation was carried out, including the preliminary stages of data cleaning and preprocessing. The networks evaluated were U-NET, Attention U-NET, Residual U-NET, and multi-input attention U-NET; the last was identified as the best model, achieving a 72.45% Dice coefficient after leave-one-out cross-validation with 53 patients. The results show that these semantic segmentation approaches could be useful for automatic tumor segmentation, particularly for asynchronous acquisitions such as VSI-B.
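The Dice coefficient used to rank the networks measures overlap between a predicted and a ground-truth mask, 2|A∩B| / (|A| + |B|); a minimal implementation on binary arrays:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A intersect B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (perfect agreement on 'no mass')."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# toy masks: prediction covers 4 pixels, ground truth 6, overlap 4
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
```

For these toy masks the score is 2·4/(4+6) = 0.8; in leave-one-out cross-validation, the per-patient Dice scores are averaged over all held-out patients.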
8. Castillo M, Rodriguez S, Aviles E, Castaneda B, Romero SE. Monitoring of Lung Ultrasound Acquisition using Infrared Sensors and Artificial Intelligence. Annu Int Conf IEEE Eng Med Biol Soc 2024:1-4. PMID: 40039542. DOI: 10.1109/embc53108.2024.10782311.
Abstract
Lung diseases contribute significantly to global mortality rates. The conventional diagnostic techniques based on medical imaging used for lung disease diagnosis normally require specialized personnel and complex infrastructure, posing challenges in rural areas. In response to these challenges, Volume Sweep Imaging in the lung (VSI-L) is an ultrasound-based acquisition protocol designed to empower non-specialized healthcare providers by prescribing the movement of the transducer through a set of ultrasound scans along the chest. The acquired data is then sent to a radiologist for later analysis through a telecommunication system. VSI-L has been tested in clinical trials, demonstrating its capacity for tele-ultrasound. Even though it is a standardized protocol, human error remains a concern, reflected in failure to maintain the correct position and speed of the transducer or in producing videos that are difficult for the radiologist to interpret. To address this, a training system based on an infrared sensor is proposed that follows the trajectory of the ultrasound transducer; the acquired movement coordinates are classified by a machine learning program to evaluate whether the ultrasound procedure was performed correctly. This approach showed positive results when classifying the recorded traces, achieving an accuracy of approximately 95%.
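The abstract does not detail the classifier, but screening a transducer trace from sensor coordinates can be sketched with two hypothetical features, step-length variability (a speed-steadiness proxy) and chord-to-path straightness, plus a simple rule; both the features and the thresholds below are invented for illustration, not taken from the paper:

```python
import numpy as np

def sweep_features(coords: np.ndarray) -> tuple:
    """Summarize an N x 2 transducer trace: coefficient of variation of step
    lengths (speed steadiness) and chord/path ratio (straightness)."""
    steps = np.linalg.norm(np.diff(coords, axis=0), axis=1)
    path = steps.sum()
    cv = steps.std() / steps.mean() if steps.mean() > 0 else 0.0
    straightness = np.linalg.norm(coords[-1] - coords[0]) / path if path > 0 else 0.0
    return cv, straightness

def is_correct_sweep(coords: np.ndarray,
                     max_cv: float = 0.25,
                     min_straightness: float = 0.9) -> bool:
    """Hypothetical rule: accept a sweep when speed is steady and the
    trajectory stays close to a straight line."""
    cv, straightness = sweep_features(coords)
    return cv <= max_cv and straightness >= min_straightness

steady = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
jittery = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 2.0], [1.0, 3.0]])
```

A trained classifier would learn such decision boundaries from labeled traces rather than fixing them by hand, but the feature-then-classify pipeline is the same.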
9. Gleed AD, Mishra D, Self A, Thiruvengadam R, Desiraju BK, Bhatnagar S, Papageorghiou AT, Noble JA. Statistical Characterisation of Fetal Anatomy in Simple Obstetric Ultrasound Video Sweeps. Ultrasound Med Biol 2024; 50:985-993. PMID: 38692940. DOI: 10.1016/j.ultrasmedbio.2024.03.006.
Abstract
OBJECTIVE We present a statistical characterisation of fetal anatomies in obstetric ultrasound video sweeps where the transducer follows a fixed trajectory on the maternal abdomen. METHODS Large-scale, frame-level manual annotations of fetal anatomies (head, spine, abdomen, pelvis, femur) were used to compute common frame-level anatomy detection patterns expected for breech, cephalic, and transverse fetal presentations, with respect to video sweep paths. The patterns, termed statistical heatmaps, quantify the expected anatomies seen in a simple obstetric ultrasound video sweep protocol. In this study, a total of 760 unique manual annotations from 365 unique pregnancies were used. RESULTS We provide a qualitative interpretation of the heatmaps assessing the transducer sweep paths with respect to different fetal presentations and suggest ways in which the heatmaps can be applied in computational research (e.g., as a machine learning prior). CONCLUSION The heatmap parameters are freely available to other researchers (https://github.com/agleed/calopus_statistical_heatmaps).
Affiliation(s)
- Alexander D Gleed
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Divyanshu Mishra
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Alice Self
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- J Alison Noble
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
10. Khaledyan D, Marini TJ, O’Connell A, Meng S, Kan J, Brennan G, Zhao Y, Baran TM, Parker KJ. WATUNet: a deep neural network for segmentation of volumetric sweep imaging ultrasound. Mach Learn Sci Technol 2024; 5:015042. PMID: 38464559. PMCID: PMC10921088. DOI: 10.1088/2632-2153/ad2e15.
Abstract
Limited access to breast cancer diagnosis globally leads to delayed treatment. Ultrasound, an effective yet underutilized method, requires specialized training for sonographers, which hinders its widespread use. Volume sweep imaging (VSI) is an innovative approach that enables untrained operators to capture high-quality ultrasound images. Combined with deep learning, like convolutional neural networks, it can potentially transform breast cancer diagnosis, enhancing accuracy, saving time and costs, and improving patient outcomes. The widely used UNet architecture, known for medical image segmentation, has limitations, such as vanishing gradients and a lack of multi-scale feature extraction and selective region attention. In this study, we present a novel segmentation model known as Wavelet_Attention_UNet (WATUNet). In this model, we incorporate wavelet gates and attention gates between the encoder and decoder instead of a simple connection to overcome the limitations mentioned, thereby improving model performance. Two datasets are utilized for the analysis: the public 'Breast Ultrasound Images' dataset of 780 images and a private VSI dataset of 3818 images, captured at the University of Rochester by the authors. Both datasets contained segmented lesions categorized into three types: no mass, benign mass, and malignant mass. Our segmentation results show superior performance compared to other deep networks. The proposed algorithm attained a Dice coefficient of 0.94 and an F1 score of 0.94 on the VSI dataset and scored 0.93 and 0.94 on the public dataset, respectively. Moreover, our model significantly outperformed other models in McNemar's test with false discovery rate correction on a 381-image VSI set. The experimental findings demonstrate that the proposed WATUNet model achieves precise segmentation of breast lesions in both standard-of-care and VSI images, surpassing state-of-the-art models. 
Hence, the model holds considerable promise for assisting in lesion identification, an essential step in the clinical diagnosis of breast lesions.
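McNemar's test, used here to compare models image-by-image, depends only on the discordant counts b and c (images where exactly one of the two models produced an acceptable segmentation). A minimal continuity-corrected version is sketched below; the false-discovery-rate correction applied across the multiple model comparisons is omitted:

```python
from math import erf, sqrt

def mcnemar_chi2(b: int, c: int) -> tuple:
    """McNemar's chi-squared statistic (with continuity correction) and
    its p-value, from the two discordant-pair counts b and c."""
    if b + c == 0:
        return 0.0, 1.0
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # p-value from the chi-square(1) survival function, via the normal CDF:
    # P(Z^2 > chi2) = 2 * (1 - Phi(sqrt(chi2)))
    z = sqrt(chi2)
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))
    return chi2, p
```

With, say, 30 images where only model A succeeded against 10 where only model B did, the statistic is (|30-10|-1)²/40 ≈ 9.0 and the difference is significant at p < 0.01; equal discordant counts give p near 1.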
Affiliation(s)
- Donya Khaledyan
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States of America
- Thomas J Marini
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
- Avice O’Connell
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
- Steven Meng
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
- Jonah Kan
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
- Galen Brennan
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
- Yu Zhao
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
- Timothy M Baran
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
- Kevin J Parker
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States of America
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
11. Khaledyan D, Marini TJ, Baran TM, O’Connell A, Parker K. Enhancing breast ultrasound segmentation through fine-tuning and optimization techniques: Sharp attention UNet. PLoS One 2023; 18:e0289195. PMID: 38091358. PMCID: PMC10718429. DOI: 10.1371/journal.pone.0289195.
Abstract
Segmentation of breast ultrasound images is a crucial and challenging task in computer-aided diagnosis systems. Accurately segmenting masses in benign and malignant cases and identifying regions with no mass is a primary objective in breast ultrasound image segmentation. Deep learning (DL) has emerged as a powerful tool in medical image segmentation, revolutionizing how medical professionals analyze and interpret complex imaging data. The UNet architecture is a highly regarded and widely used DL model in medical image segmentation. Its distinctive architectural design and exceptional performance have made it popular among researchers. With increasing data and model complexity, optimization and fine-tuning play a more vital and challenging role than before. This paper presents a comparative study evaluating the effect of image preprocessing and different optimization techniques, and the importance of fine-tuning, on different UNet segmentation models for breast ultrasound images. Optimization and fine-tuning techniques have been applied to enhance the performance of UNet, Sharp UNet, and Attention UNet. Building upon this progress, we designed a novel approach by combining Sharp UNet and Attention UNet, known as Sharp Attention UNet. Our analysis yielded the following quantitative evaluation metrics for the Sharp Attention UNet: Dice coefficient, specificity, sensitivity, and F1 score values of 0.93, 0.99, 0.94, and 0.94, respectively. In addition, McNemar's statistical test was applied to assess significant differences between the approaches. Across a number of measures, our proposed model outperformed all other models, resulting in improved breast lesion segmentation.
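The reported Dice, specificity, sensitivity, and F1 values all derive from pixel-level confusion counts; for a binary mask, Dice and F1 coincide, which is consistent with the two 0.94 values above matching. A compact sketch with invented counts:

```python
def segmentation_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Pixel-level segmentation metrics from confusion counts."""
    sensitivity = tp / (tp + fn)          # recall: fraction of mass pixels found
    specificity = tn / (tn + fp)          # fraction of background kept clean
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    dice = 2 * tp / (2 * tp + fp + fn)    # algebraically equal to F1 here
    return {"sensitivity": sensitivity, "specificity": specificity,
            "f1": f1, "dice": dice}

# hypothetical pixel counts for one image
m = segmentation_metrics(tp=90, fp=10, tn=990, fn=10)
```

Note that specificity is dominated by the large background region, which is why it typically sits higher (0.99 above) than the overlap-based Dice and F1 scores.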
Affiliation(s)
- Donya Khaledyan: Department of Electrical and Electronics Engineering, University of Rochester, Rochester, NY, United States of America
- Thomas J. Marini: Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
- Timothy M. Baran: Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
- Avice O’Connell: Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
- Kevin Parker: Department of Electrical and Electronics Engineering, University of Rochester; Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
12
Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298 PMCID: PMC10649694 DOI: 10.3390/jcm12216833] [Received: 09/21/2023] [Revised: 10/17/2023] [Accepted: 10/25/2023] [Indexed: 11/15/2023]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred method. US is considered cost-effective and easily accessible but is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to provide an overview of recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. A systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with available full texts were assigned to the OB/GYN subspecialties and their research topics. This review includes 189 articles published from 1994 to 2023; 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. The applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review also outlines emerging and still-experimental fields to promote further research.
Affiliation(s)
- Elena Jost: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni: Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany; Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
13
Pietrolucci ME, Maqina P, Mappa I, Marra MC, D'Antonio F, Rizzo G. Evaluation of an artificial intelligent algorithm (Heartassist™) to automatically assess the quality of second trimester cardiac views: a prospective study. J Perinat Med 2023; 51:920-924. [PMID: 37097825 DOI: 10.1515/jpm-2023-0052] [Received: 02/05/2023] [Accepted: 03/25/2023] [Indexed: 04/26/2023]
Abstract
OBJECTIVES The aim of this study was to evaluate the agreement between visual and automatic methods in assessing the adequacy of fetal cardiac views obtained during second trimester ultrasonographic examination. METHODS In a prospective observational study, frames of the four-chamber view, left and right outflow tracts, and three-vessel trachea view were obtained from 120 consecutive singleton low-risk women undergoing second trimester ultrasound at 19-23 weeks of gestation. For each frame, quality assessment was performed by an expert sonographer and by an artificial intelligence software (Heartassist™). Cohen's κ coefficient was used to evaluate the agreement between the two techniques. RESULTS The number and percentage of images considered adequate visually by the expert or with Heartassist™ were similar, with a percentage >87% for all the cardiac views considered. Cohen's κ values were 0.827 (95% CI 0.662-0.992) for the four-chamber view, 0.814 (95% CI 0.638-0.990) for the left ventricular outflow tract, 0.838 (95% CI 0.683-0.992) for the right ventricular outflow tract, and 0.866 (95% CI 0.717-0.999) for the three-vessel trachea view, indicating good agreement between the two techniques. CONCLUSIONS Heartassist™ allows automatic evaluation of fetal cardiac views, achieves the same accuracy as expert visual assessment, and has the potential to be applied in the evaluation of the fetal heart during second trimester ultrasonographic screening for fetal anomalies.
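Cohen's κ, the agreement statistic used above, corrects the raw agreement rate for the agreement expected by chance from each rater's marginal frequencies. A minimal sketch of the point-estimate calculation on two raters' adequate/inadequate labels (illustrative values, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from the raters' marginals."""
    n = len(rater_a)
    p_o = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((count_a[lab] / n) * (count_b[lab] / n)
              for lab in set(count_a) | set(count_b))
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Hypothetical adequacy calls by an expert and by software on 10 frames.
expert   = ["ok", "ok", "ok", "ok", "ok", "ok", "ok", "ok", "bad", "bad"]
software = ["ok", "ok", "ok", "ok", "ok", "ok", "ok", "bad", "bad", "bad"]
print(round(cohens_kappa(expert, software), 3))
```

Here observed agreement is 0.9, but chance agreement is already 0.62, so κ lands well below the raw rate; the confidence intervals reported in the abstract require an additional variance estimate not shown in this sketch.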
Affiliation(s)
- Maria Elena Pietrolucci: Department of Obstetrics and Gynecology, Fondazione Policlinico Tor Vergata, Università di Roma Tor Vergata, Roma, Italy
- Pavjola Maqina: Department of Obstetrics and Gynecology, Fondazione Policlinico Tor Vergata, Università di Roma Tor Vergata, Roma, Italy
- Ilenia Mappa: Department of Obstetrics and Gynecology, Fondazione Policlinico Tor Vergata, Università di Roma Tor Vergata, Roma, Italy
- Maria Chiara Marra: Department of Obstetrics and Gynecology, Fondazione Policlinico Tor Vergata, Università di Roma Tor Vergata, Roma, Italy
- Giuseppe Rizzo: Department of Obstetrics and Gynecology, Fondazione Policlinico Tor Vergata, Università di Roma Tor Vergata, Roma, Italy
14
Erlick M, Marini T, Drennan K, Dozier A, Castaneda B, Baran T, Toscano M. Assessment of a Brief Standardized Obstetric Ultrasound Training Program for Individuals Without Prior Ultrasound Experience. Ultrasound Q 2023; 39:124-128. [PMID: 36223486 DOI: 10.1097/ruq.0000000000000626] [Indexed: 04/05/2023]
Abstract
ABSTRACT Obstetric volume sweep imaging (OB VSI) is a simple set of transducer movements guided by external body landmarks that can be taught to ultrasound-naive non-experts. This approach can increase access to ultrasound in rural/low-resource settings lacking trained sonographers. This study presents and evaluates a training program for OB VSI. Six trainees without previous formal ultrasound experience received a training program on the OB VSI protocol consisting of focused didactics and supervised live hands-on ultrasound scanning practice. Trainees then independently performed 194 OB VSI examinations on pregnancies >14 weeks with known prenatal ultrasound abnormalities. Images were reviewed by maternal-fetal medicine specialists for the primary outcome (protocol deviation rate) and secondary outcomes (examination quality and image quality). Protocol deviation was present in 25.8% of cases, but only 7.7% of these errors affected the diagnostic potential of the ultrasound. Error rates between trainees ranged from 8.6% to 53.8% ( P < 0.0001). Image quality was excellent or acceptable in 88.2% of examinations, and 96.4% had image quality capable of yielding a diagnostic interpretation. The frequency of protocol deviations decreased over time in the majority of trainees, demonstrating retention of the training program over time. This brief OB VSI training program for ultrasound-naive non-experts yielded operators capable of producing high-quality, diagnostically interpretable images after 3 hours of training. This training program could be adapted for use by local community members in low-resource/rural settings to increase access to obstetric ultrasound.
Affiliation(s)
- Mariah Erlick: University of Rochester School of Medicine and Dentistry
- Thomas Marini: Department of Imaging Sciences, University of Rochester Medical Center
- Kathryn Drennan: Department of Obstetrics and Gynecology, University of Rochester Medical Center
- Ann Dozier: Department of Public Health Sciences, University of Rochester Medical Center
- Benjamin Castaneda: Laboratorio de Imágenes Médicas, Departamento de Ingeniería, Pontificia Universidad Católica del Perú
- Timothy Baran: Department of Imaging Sciences, The Institute for Optics, Department of Biomedical Engineering, University of Rochester Medical Center
- Marika Toscano: Department of Obstetrics and Gynecology, University of Rochester Medical Center
15
Horgan R, Nehme L, Abuhamad A. Artificial intelligence in obstetric ultrasound: A scoping review. Prenat Diagn 2023; 43:1176-1219. [PMID: 37503802 DOI: 10.1002/pd.6411] [Received: 03/27/2023] [Revised: 06/05/2023] [Accepted: 07/17/2023] [Indexed: 07/29/2023]
Abstract
The objective is to summarize the current use of artificial intelligence (AI) in obstetric ultrasound. PubMed, Cochrane Library, and ClinicalTrials.gov databases were searched using the keywords "neural networks", OR "artificial intelligence", OR "machine learning", OR "deep learning", AND "obstetrics", OR "obstetrical", OR "fetus", OR "foetus", OR "fetal", OR "foetal", OR "pregnancy", OR "pregnant", AND "ultrasound" from inception through May 2022. The search was limited to the English language. Studies were eligible for inclusion if they described the use of AI in obstetric ultrasound. Obstetric ultrasound was defined as the process of obtaining ultrasound images of a fetus, amniotic fluid, or placenta. AI was defined as the use of neural networks, machine learning, or deep learning methods. The search identified a total of 127 papers that fulfilled the inclusion criteria. The current uses of AI in obstetric ultrasound include first trimester pregnancy ultrasound, assessment of the placenta, fetal biometry, fetal echocardiography, fetal neurosonography, assessment of fetal anatomy, and other uses including assessment of fetal lung maturity and screening for risk of adverse pregnancy outcomes. AI holds the potential to improve ultrasound efficiency, pregnancy outcomes in low-resource settings, detection of congenital malformations, and prediction of adverse pregnancy outcomes.
Affiliation(s)
- Rebecca Horgan: Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Lea Nehme: Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
- Alfred Abuhamad: Division of Maternal Fetal Medicine, Department of Obstetrics & Gynecology, Eastern Virginia Medical School, Norfolk, Virginia, USA
16
Khaledyan D, Marini TJ, O’Connell A, Parker K. Enhancing Breast Ultrasound Segmentation through Fine-tuning and Optimization Techniques: Sharp Attention UNet. bioRxiv [Preprint] 2023:2023.07.14.549040. [PMID: 37503223 PMCID: PMC10370074 DOI: 10.1101/2023.07.14.549040] [Indexed: 07/29/2023]
Abstract
Segmentation of breast ultrasound images is a crucial and challenging task in computer-aided diagnosis systems. Accurately segmenting masses in benign and malignant cases and identifying regions with no mass is a primary objective in breast ultrasound image segmentation. Deep learning (DL) has emerged as a powerful tool in medical image segmentation, revolutionizing how medical professionals analyze and interpret complex imaging data. The UNet architecture is a highly regarded and widely used DL model in medical image segmentation. Its distinctive architectural design and exceptional performance have made it a popular choice among researchers in the medical image segmentation field. With the increase in data and model complexity, optimization and fine-tuning models play a vital and more challenging role than before. This paper presents a comparative study evaluating the effect of image preprocessing and different optimization techniques and the importance of fine-tuning different UNet segmentation models for breast ultrasound images. Optimization and fine-tuning techniques have been applied to enhance the performance of UNet, Sharp UNet, and Attention UNet. Building upon this progress, we designed a novel approach by combining Sharp UNet and Attention UNet, known as Sharp Attention UNet. Our analysis yielded the following quantitative evaluation metrics for the Sharp Attention UNet: the Dice coefficient, specificity, sensitivity, and F1 score obtained values of 0.9283, 0.9936, 0.9426, and 0.9412, respectively. In addition, McNemar's statistical test was applied to assess significant differences between the approaches. Across a number of measures, our proposed model outperforms the earlier models, pointing toward improved breast lesion segmentation algorithms.
Affiliation(s)
- Donya Khaledyan: Department of Electrical and Electronics Engineering, University of Rochester, Rochester, NY, USA
- Thomas J. Marini: Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- Avice O’Connell: Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- Kevin Parker: Department of Electrical and Electronics Engineering, University of Rochester; Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
17
Toscano M, Marini T, Lennon C, Erlick M, Silva H, Crofton K, Serratelli W, Rana N, Dozier AM, Castaneda B, Baran TM, Drennan K. Diagnosis of Pregnancy Complications Using Blind Ultrasound Sweeps Performed by Individuals Without Prior Formal Ultrasound Training. Obstet Gynecol 2023; 141:937-948. [PMID: 37103534 DOI: 10.1097/aog.0000000000005139] [Received: 11/21/2022] [Accepted: 01/22/2023] [Indexed: 04/28/2023]
Abstract
OBJECTIVE To estimate the diagnostic accuracy of blind ultrasound sweeps performed with a low-cost, portable ultrasound system by individuals with no prior formal ultrasound training to diagnose common pregnancy complications. METHODS This is a single-center, prospective cohort study conducted from October 2020 to January 2022 among people with second- and third-trimester pregnancies. Nonspecialists with no prior formal ultrasound training underwent a brief training on a simple eight-step approach to performing a limited obstetric ultrasound examination that uses blind sweeps of a portable ultrasound probe based on external body landmarks. The sweeps were interpreted by five blinded maternal-fetal medicine subspecialists. Sensitivity, specificity, and positive and negative predictive values for blinded ultrasound sweep identification of pregnancy complications (fetal malpresentation, multiple gestations, placenta previa, and abnormal amniotic fluid volume) were compared with a reference standard ultrasonogram as the primary analysis. Kappa for agreement was also assessed. RESULTS Trainees performed 194 blinded ultrasound examinations on 168 unique pregnant people (248 fetuses) at a mean of 28±5.85 weeks of gestation for a total of 1,552 blinded sweep cine clips. There were 49 ultrasonograms with normal results (control group) and 145 ultrasonograms with abnormal results with known pregnancy complications. In this cohort, the sensitivity for detecting a prespecified pregnancy complication was 91.7% (95% CI 87.2-96.2%) overall, with the highest detection rate for multiple gestations (100%, 95% CI 100-100%) and noncephalic presentation (91.8%, 95% CI 86.4-97.3%). There was high negative predictive value for placenta previa (96.1%, 95% CI 93.5-98.8%) and abnormal amniotic fluid volume (89.5%, 95% CI 85.3-93.6%). There was also substantial to perfect mean agreement for these same outcomes (range 87-99.6% agreement, Cohen κ range 0.59-0.91, P<.001 for all). 
CONCLUSION Blind ultrasound sweeps of the gravid abdomen guided by an eight-step protocol using only external anatomic landmarks and performed by previously untrained operators with a low-cost, portable, battery-powered device had excellent sensitivity and specificity for high-risk pregnancy complications such as malpresentation, placenta previa, multiple gestations, and abnormal amniotic fluid volume, similar to results of a diagnostic ultrasound examination using a trained ultrasonographer and standard-of-care ultrasound machine. This approach has the potential to improve access to obstetric ultrasonography globally.
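The sensitivity, specificity, and predictive values reported above come from standard 2x2 confusion-table arithmetic, comparing the blind-sweep reads against the reference-standard ultrasonogram. A minimal sketch with made-up counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion table
    (index test vs. reference standard)."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for one outcome, e.g. noncephalic presentation.
m = diagnostic_metrics(tp=90, fp=5, fn=10, tn=45)
print(m["sensitivity"], m["specificity"])  # 0.9 0.9
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common the complication is in the studied cohort.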
Affiliation(s)
- Marika Toscano: Division of Maternal-Fetal Medicine, Department of Gynecology & Obstetrics, Johns Hopkins University School of Medicine, Baltimore, Maryland; the Department of Imaging Sciences, the Department of Public Health Sciences, and the Department of Obstetrics & Gynecology, University of Rochester Medical Center, and the University of Rochester School of Medicine and Dentistry, Rochester, New York; and the Division of Electric Engineering, Department of Academic Engineering, Pontificia Universidad Catolica del Peru, Lima, Peru
18
Marini TJ, Castaneda B, Satheesh M, Zhao YT, Reátegui-Rivera CM, Sifuentes W, Baran TM, Kaproth-Joslin KA, Ambrosini R, Rios-Mayhua G, Dozier AM. Sustainable volume sweep imaging lung teleultrasound in Peru: Public health perspectives from a new frontier in expanding access to imaging. Front Health Serv 2023; 3:1002208. [PMID: 37077694 PMCID: PMC10106710 DOI: 10.3389/frhs.2023.1002208] [Received: 07/24/2022] [Accepted: 02/27/2023] [Indexed: 04/05/2023]
Abstract
BACKGROUND Pulmonary disease is a common cause of morbidity and mortality, but the majority of the people in the world lack access to diagnostic imaging for its assessment. We conducted an implementation assessment of a potentially sustainable and cost-effective model for delivery of volume sweep imaging (VSI) lung teleultrasound in Peru. This model allows image acquisition by individuals without prior ultrasound experience after only a few hours of training. METHODS Lung teleultrasound was implemented at 5 sites in rural Peru after a few hours of installation and staff training. Patients were offered free lung VSI teleultrasound examination for concerns of respiratory illness or research purposes. After ultrasound examination, patients were surveyed regarding their experience. Health staff and members of the implementation team also participated in separate interviews detailing their views of the teleultrasound system, which were systematically analyzed for key themes. RESULTS Patients and staff rated their experience with lung teleultrasound as overwhelmingly positive. The lung teleultrasound system was viewed as a potential way to improve access to imaging and the health of rural communities. Detailed interviews with the implementation team revealed obstacles to implementation important for consideration, such as gaps in lung ultrasound understanding. CONCLUSIONS Lung VSI teleultrasound was successfully deployed to 5 health centers in rural Peru. Implementation assessment revealed enthusiasm for the system among members of the community, along with important areas of consideration for future teleultrasound deployment. This system offers a potential means to increase access to imaging for pulmonary illness and improve the health of the global community.
Affiliation(s)
- Thomas J. Marini (correspondence): Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States
- Benjamin Castaneda: Departamento de Ingeniería, Laboratorio de Imágenes Médicas, Pontificia Universidad Católica del Perú, Lima, Peru
- Malavika Satheesh: Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States
- Yu T. Zhao: Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States
- Timothy M. Baran: Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States
- Robert Ambrosini: Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States
- Ann M. Dozier: Department of Public Health, University of Rochester Medical Center, Rochester, NY, United States
19
Marini TJ, Castaneda B, Iyer R, Baran TM, Nemer O, Dozier AM, Parker KJ, Zhao Y, Serratelli W, Matos G, Ali S, Ghobryal B, Visca A, O'Connell A. Breast Ultrasound Volume Sweep Imaging: A New Horizon in Expanding Imaging Access for Breast Cancer Detection. J Ultrasound Med 2023; 42:817-832. [PMID: 35802491 DOI: 10.1002/jum.16047] [Received: 05/02/2022] [Revised: 06/11/2022] [Accepted: 06/13/2022] [Indexed: 05/26/2023]
Abstract
OBJECTIVE The majority of people in the world lack basic access to diagnostic breast imaging, resulting in delayed diagnosis of breast cancer. In this study, we tested a volume sweep imaging (VSI) ultrasound protocol for evaluation of palpable breast lumps that can be performed by operators without prior ultrasound experience after minimal training, as a means to increase accessibility to breast ultrasound. METHODS Medical students without prior ultrasound experience were trained for less than 2 hours on the VSI breast ultrasound protocol. Patients presenting with palpable breast lumps for standard-of-care ultrasound examination were scanned by a trained medical student with the VSI protocol using a Butterfly iQ handheld ultrasound probe. Video clips of the VSI scan imaging were later interpreted by an attending breast imager. Results of the VSI scan interpretation were compared to the same-day standard-of-care ultrasound examination. RESULTS Medical students scanned 170 palpable lumps with the VSI protocol. There was 97% sensitivity and 100% specificity for a breast mass on VSI, corresponding to 97.6% agreement with standard of care (Cohen's κ = 0.95, P < .0001). There was a detection rate of 100% for all cancer presenting as a sonographic mass. High agreement for mass characteristics between VSI and standard of care was observed, including 87% agreement on Breast Imaging-Reporting and Data System assessments (Cohen's κ = 0.82, P < .0001). CONCLUSIONS Breast ultrasound VSI for palpable lumps offers a promising means to increase access to diagnostic imaging in underserved areas. This approach could shorten the delay to breast cancer diagnosis, potentially improving morbidity and mortality.
Affiliation(s)
- Radha Iyer: University of Rochester Medical Center, Rochester, NY, USA
- Omar Nemer: University of Rochester Medical Center, Rochester, NY, USA
- Ann M Dozier: University of Rochester Medical Center, Rochester, NY, USA
- Kevin J Parker: University of Rochester Medical Center, Rochester, NY, USA
- Yu Zhao: University of Rochester Medical Center, Rochester, NY, USA
- Gregory Matos: University of Rochester Medical Center, Rochester, NY, USA
- Shania Ali: University of Rochester Medical Center, Rochester, NY, USA
- Adam Visca: University of Rochester Medical Center, Rochester, NY, USA
20
Papastefanou I, Nicolaides KH, Salomon LJ. Audit of fetal biometry: understanding sources of error to improve our practice. Ultrasound Obstet Gynecol 2023; 61:431-435. [PMID: 36647209 DOI: 10.1002/uog.26156] [Received: 10/19/2022] [Revised: 12/15/2022] [Accepted: 12/20/2022] [Indexed: 06/17/2023]
Affiliation(s)
- I Papastefanou: Fetal Medicine Research Institute, King's College Hospital, London, UK; Department of Women and Children's Health, Faculty of Life Sciences & Medicine, King's College London, London, UK
- K H Nicolaides: Fetal Medicine Research Institute, King's College Hospital, London, UK
- L J Salomon: Department of Obstetrics, Fetal Medicine and Surgery, Necker-Enfants Malades Hospital, AP-HP, Paris, France; URP FETUS 7328 and LUMIERE Platform, University of Paris Cité, Institut Imagine, Paris, France
21
Gleed AD, Chen Q, Jackman J, Mishra D, Chandramohan V, Self A, Bhatnagar S, Papageorghiou AT, Noble JA. Automatic Image Guidance for Assessment of Placenta Location in Ultrasound Video Sweeps. Ultrasound Med Biol 2023; 49:106-121. [PMID: 36241588 DOI: 10.1016/j.ultrasmedbio.2022.08.006] [Received: 02/09/2022] [Revised: 06/06/2022] [Accepted: 08/03/2022] [Indexed: 06/16/2023]
Abstract
Ultrasound-based assistive tools are aimed at reducing the high skill needed to interpret a scan by providing automatic image guidance. This may encourage uptake of ultrasound (US) clinical assessments in rural settings in low- and middle-income countries (LMICs), where well-trained sonographers can be scarce. This paper describes a new method that automatically generates an assistive video overlay to provide image guidance to a user to assess placenta location. The user captures US video by following a sweep protocol that scans a U-shape on the lower maternal abdomen. The sweep trajectory is simple and easy to learn. We initially explore a 2-D embedding of placenta shapes, mapping manually segmented placentas in US video frames to a 2-D space. We map 2013 frames from 11 videos. This provides insight into the spectrum of placenta shapes that appear when using the sweep protocol. We propose classification of the placenta shapes from three observed clusters: complex, tip and rectangular. We use this insight to design an effective automatic segmentation algorithm, combining a U-Net with a CRF-RNN module to enhance segmentation performance with respect to placenta shape. The U-Net + CRF-RNN algorithm automatically segments the placenta and maternal bladder. We assess segmentation performance using both area and shape metrics. We report results comparable to the state-of-the-art for automatic placenta segmentation on the Dice metric, achieving 0.83 ± 0.15 evaluated on 2127 frames from 10 videos. We also qualitatively evaluate 78,308 frames from 135 videos, assessing if the anatomical outline is correctly segmented. We found that addition of the CRF-RNN improves over a baseline U-Net when faced with a complex placenta shape, which we observe in our 2-D embedding, up to 14% with respect to the percentage shape error. 
From the segmentations, an assistive video overlay is automatically constructed that (i) highlights the placenta and bladder, (ii) determines the lower placenta edge and highlights this location as a point and (iii) labels a 2-cm clearance on the lower placenta edge. The 2-cm clearance is chosen to satisfy current clinical guidelines. We propose to assess the placenta location by comparing the 2-cm region and the bottom of the bladder, which represents a coarse localization of the cervix. Anatomically, the bladder must sit above the cervix region. We present proof-of-concept results for the video overlay.
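The placenta-location check described above compares the lower placenta edge plus a 2-cm clearance against the bottom of the segmented bladder. One coarse way to implement that comparison on binary masks, assuming image rows increase toward the cervix (an illustrative sketch with hypothetical function names, not the authors' pipeline):

```python
def lowest_row(mask):
    """Index of the bottom-most row containing any segmented pixels,
    or None for an empty mask (mask is a list of rows of 0/1)."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    return max(rows) if rows else None

def placenta_clear_of_cervix(placenta_mask, bladder_mask, px_per_cm,
                             clearance_cm=2.0):
    """Check that the lower placenta edge, plus the clearance margin,
    still sits above the bladder bottom (a coarse cervix proxy)."""
    p_low, b_low = lowest_row(placenta_mask), lowest_row(bladder_mask)
    if p_low is None or b_low is None:
        return None  # one structure was not segmented in this frame
    return p_low + clearance_cm * px_per_cm < b_low

# 12-row toy masks: placenta occupies rows 0-2, bladder rows 9-11.
placenta = [[1] * 4 if r <= 2 else [0] * 4 for r in range(12)]
bladder = [[1] * 4 if 9 <= r <= 11 else [0] * 4 for r in range(12)]
print(placenta_clear_of_cervix(placenta, bladder, px_per_cm=3))
```

The pixel-to-centimetre scale would come from the ultrasound frame's calibration; real frames would also need the orientation check that the sweep protocol's U-shape trajectory implies.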
Affiliation(s)
- Alexander D Gleed: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Qingchao Chen: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- James Jackman: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Divyanshu Mishra: Translational Health Science and Technology Institute, Faridabad, India
- Alice Self: Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Aris T Papageorghiou: Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- J Alison Noble: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
22
Marini TJ, Kaproth-Joslin K, Ambrosini R, Baran TM, Dozier AM, Zhao YT, Satheesh M, Mahony Reátegui-Rivera C, Sifuentes W, Rios-Mayhua G, Castaneda B. Volume sweep imaging lung teleultrasound for detection of COVID-19 in Peru: a multicentre pilot study. BMJ Open 2022; 12:e061332. [PMID: 36192102 PMCID: PMC9534786 DOI: 10.1136/bmjopen-2022-061332] [Received: 01/26/2022] [Accepted: 08/03/2022] [Indexed: 11/30/2022]
Abstract
OBJECTIVES Pulmonary disease is a significant cause of morbidity and mortality in adults and children, but most of the world lacks diagnostic imaging for its assessment. Lung ultrasound is a portable, low-cost, and highly accurate imaging modality for assessment of pulmonary pathology including pneumonia, but its deployment is limited secondary to a lack of trained sonographers. In this study, we piloted a low-cost lung teleultrasound system in rural Peru during the COVID-19 pandemic using lung ultrasound volume sweep imaging (VSI) that can be operated by an individual without prior ultrasound training, circumventing many obstacles to ultrasound deployment. DESIGN Pilot study. SETTING Study activities took place in five health centres in rural Peru. PARTICIPANTS There were 213 participants presenting to rural health clinics. INTERVENTIONS Individuals without prior ultrasound experience in rural Peru underwent brief training on how to use the teleultrasound system and perform lung ultrasound VSI. Subsequently, patients attending clinic were scanned by these previously ultrasound-naïve operators with the teleultrasound system. PRIMARY AND SECONDARY OUTCOME MEASURES Radiologists examined the ultrasound imaging to assess its diagnostic value and identify any pathology. A random subset of 20% of the scans was analysed for inter-reader reliability. RESULTS Lung VSI teleultrasound examinations underwent detailed analysis by two cardiothoracic attending radiologists. Of the examinations, 202 were rated of diagnostic image quality (94.8%, 95% CI 90.9% to 97.4%). There was 91% agreement between radiologists on lung ultrasound interpretation among a 20% sample of all examinations (κ=0.76, 95% CI 0.53 to 0.98). Radiologists were able to identify sequelae of COVID-19, with the predominant finding being B-lines.
CONCLUSION Lung VSI teleultrasound performed by individuals without prior training allowed diagnostic imaging of the lungs and identification of sequelae of COVID-19 infection. Deployment of lung VSI teleultrasound holds potential as a low-cost means to improve access to imaging around the world.
Affiliation(s)
- Thomas J Marini: University of Rochester Medical Center, Rochester, New York, USA
- Robert Ambrosini: University of Rochester Medical Center, Rochester, New York, USA
- Timothy M Baran: University of Rochester Medical Center, Rochester, New York, USA
- Ann M Dozier: University of Rochester Medical Center, Rochester, New York, USA
- Yu T Zhao: University of Rochester Medical Center, Rochester, New York, USA
23
Gaga R. Editorial for "Evaluation of Spatial Attentive Deep Learning for Automatic Placental Segmentation on Longitudinal MRI". J Magn Reson Imaging 2022; 57:1541-1542. [PMID: 35979891 DOI: 10.1002/jmri.28401] [Received: 07/07/2022] [Accepted: 07/08/2022] [Indexed: 11/11/2022]
Affiliation(s)
- Remus Gaga: 2nd Pediatric Clinic, Clinical Emergency Hospital for Children, Cluj-Napoca, Cluj, Romania