1
Lasala A, Fiorentino MC, Bandini A, Moccia S. FetalBrainAwareNet: Bridging GANs with anatomical insight for fetal ultrasound brain plane synthesis. Comput Med Imaging Graph 2024; 116:102405. [PMID: 38824716] [DOI: 10.1016/j.compmedimag.2024.102405] [Received: 12/01/2023] [Revised: 04/25/2024] [Accepted: 05/22/2024] [Indexed: 06/04/2024]
Abstract
Over the past decade, deep-learning (DL) algorithms have become a promising tool to aid clinicians in identifying fetal head standard planes (FHSPs) during ultrasound (US) examination. However, the adoption of these algorithms in clinical settings is still hindered by the lack of large annotated datasets. To overcome this barrier, we introduce FetalBrainAwareNet, an innovative framework designed to synthesize anatomically accurate images of FHSPs. FetalBrainAwareNet uses class activation maps as a prior in its conditional adversarial training process, which fosters the presence of specific anatomical landmarks in the synthesized images. Additionally, we investigate specialized regularization terms within the adversarial training loss function to control the morphology of the fetal skull and foster differentiation between the standard planes, ensuring that the synthetic images faithfully represent real US scans in both structure and overall appearance. The versatility of our framework is highlighted by its ability to generate high-quality images of the three predominant FHSPs using a single, integrated framework. Quantitative (Fréchet inception distance of 88.52) and qualitative (t-SNE) results suggest that our framework generates US images with greater variability than state-of-the-art methods. By using the synthetic images generated with our framework, we increase the accuracy of FHSP classifiers by 3.2% compared to training the same classifiers solely on real acquisitions. These results suggest that augmenting the training set with our synthetic images could enhance the performance of DL algorithms for FHSP classification in real clinical scenarios.
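The Fréchet inception distance (FID) reported above compares the statistics of real and synthetic image features. As a minimal sketch (not the authors' code), assuming Inception activations have already been extracted into arrays, FID can be computed as:

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_real, feats_fake):
    """Frechet distance between Gaussians fitted to two feature sets.

    feats_*: (n_samples, n_features) arrays of Inception activations.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    # Matrix square root of the covariance product; tiny imaginary
    # parts from numerical error are discarded.
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(500, 8))
b = rng.normal(0.5, 1.0, size=(500, 8))
print(frechet_inception_distance(a, b))  # nonzero: shifted distribution
```

Identical feature sets give an FID near zero; a mean shift between the two sets drives it up, which is why lower FID indicates synthetic images statistically closer to real ones.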
Affiliation(s)
- Angelo Lasala
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Andrea Bandini
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy; Health Science Interdisciplinary Research Center, Scuola Superiore Sant'Anna, Pisa, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
2
Alzubaidi M, Shah U, Agus M, Househ M. FetSAM: Advanced Segmentation Techniques for Fetal Head Biometrics in Ultrasound Imagery. IEEE Open J Eng Med Biol 2024; 5:281-295. [PMID: 38766538] [PMCID: PMC11100952] [DOI: 10.1109/ojemb.2024.3382487] [Received: 01/30/2024] [Revised: 03/05/2024] [Accepted: 03/24/2024] [Indexed: 05/22/2024]
Abstract
Goal: FetSAM is a deep learning model for fetal head ultrasound segmentation, aimed at improving prenatal diagnostic precision. Methods: Utilizing a comprehensive dataset, the largest to date for fetal head metrics, FetSAM incorporates prompt-based learning. It distinguishes itself with a dual loss mechanism, combining Weighted DiceLoss and Weighted Lovasz Loss, optimized through AdamW and supported by class weight adjustments for better segmentation balance. Performance benchmarks against prominent models such as U-Net, DeepLabV3, and Segformer highlight its efficacy. Results: FetSAM delivers high segmentation accuracy, with a DSC of 0.90117, HD of 1.86484, and ASD of 0.46645. Conclusion: FetSAM sets a new benchmark in AI-enhanced prenatal ultrasound analysis, providing a robust, precise tool for clinical applications.
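The abstract names a class-weighted Dice term but does not give the weighting scheme; the following NumPy sketch shows one common form of a class-weighted soft Dice loss (the weights and shapes here are illustrative assumptions, not FetSAM's actual values):

```python
import numpy as np

def weighted_dice_loss(probs, target, class_weights, eps=1e-6):
    """Class-weighted soft Dice loss.

    probs:  (n_classes, H, W) softmax probabilities.
    target: (n_classes, H, W) one-hot ground truth.
    class_weights: (n_classes,) weights, e.g. inverse class frequency.
    """
    inter = (probs * target).sum(axis=(1, 2))
    denom = probs.sum(axis=(1, 2)) + target.sum(axis=(1, 2))
    dice = (2.0 * inter + eps) / (denom + eps)
    w = np.asarray(class_weights, dtype=float)
    # Weighted average of per-class Dice losses.
    return float((w * (1.0 - dice)).sum() / w.sum())

# One-hot ground truth: a 2x2 foreground square on a 4x4 grid.
t = np.zeros((2, 4, 4))
t[1, 1:3, 1:3] = 1.0
t[0] = 1.0 - t[1]
print(weighted_dice_loss(t, t, [0.2, 0.8]))  # perfect prediction -> ~0
```

Up-weighting the rarer foreground class counteracts the background dominance typical of fetal head masks, which is the "segmentation balance" the abstract refers to.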
Affiliation(s)
- Mahmood Alzubaidi
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Uzair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Marco Agus
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Mowafa Househ
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
3
Hao M, Guo J, Liu C, Chen C, Wang S. Development and preliminary testing of a prior knowledge-based visual navigation system for cardiac ultrasound scanning. Biomed Eng Lett 2024; 14:307-316. [PMID: 38374906] [PMCID: PMC10874367] [DOI: 10.1007/s13534-023-00338-z] [Received: 03/28/2023] [Revised: 11/21/2023] [Accepted: 11/28/2023] [Indexed: 02/21/2024]
Abstract
Purpose: Ultrasound is widely used to diagnose disease and guide surgery because it is versatile, inexpensive, and radiation-free. However, image acquisition depends on the operation of a professional sonographer, a skill that is difficult for non-sonographers to learn. Methods: We propose a prior knowledge-based visual navigation method to obtain three important standard ultrasound views of the heart, based on learning a sonographer's skill and on augmented reality prompts. The key information about probe movement was captured using vision-based tracking and normalisation methods on 14 volunteers, based on a professional sonographer's practice. An augmented reality-based navigation method was then proposed to guide operators with no ultrasound experience to find standard views of the heart in a second set of three volunteers. Results: Through quantitative analysis and qualitative scoring, the results showed that the proposed method can effectively guide non-sonographers to obtain standard views with diagnostic value. Conclusion: The method proposed in this paper has clear application value in primary care, and expansion of the data will allow the accuracy of the navigation to be further improved.
Affiliation(s)
- Mingrui Hao
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100043, China
- Jun Guo
- Hangtian Center Hospital, Beijing 100049, China
- Cuicui Liu
- Hangtian Center Hospital, Beijing 100049, China
- Chen Chen
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Shuangyi Wang
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100043, China
- Centre for Artificial Intelligence and Robotics, Hong Kong Institute of Science & Innovation, Chinese Academy of Sciences, Hong Kong, China
4
Chen J, Lu R, Ye S, Guang M, Tassew TM, Jing B, Zhang G, Chen G, Shen D. Image Recovery Matters: A Recovery-Extraction Framework for Robust Fetal Brain Extraction From MR Images. IEEE J Biomed Health Inform 2024; 28:823-834. [PMID: 37995170] [DOI: 10.1109/jbhi.2023.3333953] [Indexed: 11/25/2023]
Abstract
The extraction of the fetal brain from magnetic resonance (MR) images is a challenging task. In particular, fetal MR images suffer from different kinds of artifacts introduced during image acquisition. Among these artifacts, intensity inhomogeneity is a common one affecting brain extraction. In this work, we propose a deep learning-based recovery-extraction framework for fetal brain extraction, which is particularly effective in handling fetal MR images with intensity inhomogeneity. Our framework involves two stages. First, the artifact-corrupted images are recovered with the proposed generative adversarial learning-based image recovery network, whose novel region-of-darkness discriminator forces the network to focus on image artifacts. Second, we propose a brain extraction network for more effective fetal brain segmentation by strengthening the association between lower- and higher-level features as well as suppressing task-irrelevant features. Thanks to the proposed recovery-extraction strategy, our framework is able to accurately segment fetal brains from artifact-corrupted MR images. The experiments show that our framework achieves promising performance in both quantitative and qualitative evaluations, and outperforms state-of-the-art methods in both image recovery and fetal brain extraction.
5
Yousefpour Shahrivar R, Karami F, Karami E. Enhancing Fetal Anomaly Detection in Ultrasonography Images: A Review of Machine Learning-Based Approaches. Biomimetics (Basel) 2023; 8:519. [PMID: 37999160] [PMCID: PMC10669151] [DOI: 10.3390/biomimetics8070519] [Received: 08/29/2023] [Revised: 10/05/2023] [Accepted: 10/26/2023] [Indexed: 11/25/2023]
Abstract
Fetal development is a critical phase in prenatal care, demanding the timely identification of anomalies in ultrasound images to safeguard the well-being of both the unborn child and the mother. Medical imaging has played a pivotal role in detecting fetal abnormalities and malformations. However, despite significant advances in ultrasound technology, the accurate identification of irregularities in prenatal images continues to pose considerable challenges, often requiring substantial time and expertise from medical professionals. In this review, we survey recent developments in machine learning (ML) methods applied to fetal ultrasound images. Specifically, we focus on a range of ML algorithms employed in the context of fetal ultrasound, encompassing tasks such as image classification, object recognition, and segmentation. We highlight how these approaches can enhance ultrasound-based fetal anomaly detection and provide insights for future research and clinical implementation. Furthermore, we identify areas where future investigations can contribute to more effective ultrasound-based fetal anomaly detection.
Affiliation(s)
- Ramin Yousefpour Shahrivar
- Department of Biology, College of Convergent Sciences and Technologies, Science and Research Branch, Islamic Azad University, Tehran 14515-775, Iran
- Fatemeh Karami
- Department of Medical Genetics, Applied Biophotonics Research Center, Science and Research Branch, Islamic Azad University, Tehran 14515-775, Iran
- Ebrahim Karami
- Department of Engineering and Applied Sciences, Memorial University of Newfoundland, St. John's, NL A1B 3X5, Canada
6
Aminizadeh S, Heidari A, Toumaj S, Darbandi M, Navimipour NJ, Rezaei M, Talebi S, Azad P, Unal M. The applications of machine learning techniques in medical data processing based on distributed computing and the Internet of Things. Comput Methods Programs Biomed 2023; 241:107745. [PMID: 37579550] [DOI: 10.1016/j.cmpb.2023.107745] [Received: 03/31/2023] [Revised: 07/15/2023] [Accepted: 08/02/2023] [Indexed: 08/16/2023]
Abstract
Medical data processing has grown into a prominent topic in recent decades, with the primary goal of maintaining patient data via new information technologies, including the Internet of Things (IoT) and sensor technologies that generate patient indexes in hospital data networks. Innovations like distributed computing, Machine Learning (ML), blockchain, chatbots, wearables, and pattern recognition can adequately enable the collection and processing of medical data for decision-making in healthcare. In particular, to assist experts in the disease diagnostic process, distributed computing is beneficial because it digests huge volumes of data swiftly and produces personalized smart suggestions. Meanwhile, the world is confronting the COVID-19 outbreak, so early diagnosis techniques are crucial to lowering the fatality rate. ML systems can aid radiologists in examining the enormous number of medical images. Nevertheless, they demand a huge quantity of training data that must be unified for processing. Hence, developing Deep Learning (DL) confronts multiple issues, such as conventional data collection, quality assurance, knowledge exchange, privacy preservation, administrative laws, and ethical considerations. In this research, we present an inclusive analysis of the most recent studies in distributed computing platform applications based on five categorized platforms: cloud computing, edge, fog, IoT, and hybrid platforms. We evaluated 27 articles regarding the proposed frameworks, deployed methods, and applications, noting the advantages, drawbacks, and applied datasets, and screening for security mechanisms and the presence of Transfer Learning (TL) methods. We found that most recent research (about 43%) used the IoT platform as the environment for the proposed architecture, and most of the studies (about 46%) were conducted in 2021. In addition, the most popular DL algorithm was the Convolutional Neural Network (CNN), used in 19.4% of studies. Regardless of how the technology changes, delivering appropriate therapy for patients remains the primary aim of healthcare-associated departments. Therefore, further studies are recommended to develop more functional architectures based on DL and distributed environments and to better evaluate present healthcare data analysis models.
Affiliation(s)
- Arash Heidari
- Department of Computer Engineering, Tabriz Branch, Islamic Azad University, Tabriz, Iran; Department of Software Engineering, Haliç University, Istanbul, Turkiye
- Shiva Toumaj
- Urmia University of Medical Sciences, Urmia, Iran
- Mehdi Darbandi
- Department of Electrical and Electronic Engineering, Eastern Mediterranean University, Gazimagusa 99628, Turkiye
- Nima Jafari Navimipour
- Department of Computer Engineering, Kadir Has University, Istanbul, Turkiye; Future Technology Research Center, National Yunlin University of Science and Technology, Douliou, Yunlin 64002, Taiwan
- Mahsa Rezaei
- Faculty of Surgery, Tabriz University of Medical Sciences, Tabriz, Iran
- Samira Talebi
- Department of Computer Science, University of Texas at San Antonio, TX, USA
- Poupak Azad
- Department of Computer Science, University of Manitoba, Winnipeg, Canada
- Mehmet Unal
- Department of Computer Engineering, Nisantasi University, Istanbul, Turkiye
7
Gao J, Lao Q, Liu P, Yi H, Kang Q, Jiang Z, Wu X, Li K, Chen Y, Zhang L. Anatomically Guided Cross-Domain Repair and Screening for Ultrasound Fetal Biometry. IEEE J Biomed Health Inform 2023; 27:4914-4925. [PMID: 37486830] [DOI: 10.1109/jbhi.2023.3298096] [Indexed: 07/26/2023]
Abstract
Ultrasound-based estimation of fetal biometry is extensively used to diagnose prenatal abnormalities and to monitor fetal growth, for which accurate segmentation of the fetal anatomy is a crucial prerequisite. Although deep neural network-based models have achieved encouraging results on this task, inevitable distribution shifts in ultrasound images can still cause severe performance drops in real-world deployment scenarios. In this article, we propose a complete ultrasound fetal examination system that addresses this problem by repairing and screening anatomically implausible results. Our system consists of three main components: a routine segmentation network, a fetal anatomical key-point-guided repair network, and a shape-coding based selective screener. Guided by the anatomical key points, our repair network has stronger cross-domain repair capabilities, which can substantially improve the outputs of the segmentation network. By quantifying the distance between an arbitrary segmentation mask and its corresponding anatomical shape class, the proposed shape-coding based selective screener can then effectively reject implausible results that cannot be fully repaired. Extensive experiments demonstrate that our proposed framework provides strong anatomical guarantees and outperforms other methods in three different cross-domain scenarios.
8
Sai H, Xu Z, Xia C, Wang L, Zhang J. Lightweight Force-Controlled Device for Freehand Ultrasound Acquisition. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:944-960. [PMID: 37028093] [DOI: 10.1109/tuffc.2023.3252015] [Indexed: 06/19/2023]
Abstract
This study investigates a force-controlled auxiliary device for freehand ultrasound (US) examinations. The designed device allows sonographers to maintain a steady target pressure on the US probe, thereby improving US image quality and reproducibility. The use of a screw motor to power the device and a Raspberry Pi as the system controller results in a lightweight and portable device, while a screen enhances user interactivity. Using gravity compensation, error compensation, an adaptive proportional-integral-derivative (PID) algorithm, and low-pass signal filtering, the designed device provides highly accurate force control. Several experiments using the developed device, including clinical trials on the jugular and superficial femoral veins, validate its utility in maintaining the desired pressure across varying environments and prolonged US examinations, enabling both low and high pressures to be maintained and lowering the required threshold of clinical experience. Moreover, the experimental results show that the designed device effectively relieves the stress on the sonographer's hand joints during US examinations and enables rapid assessment of tissue elasticity characteristics. With automatic pressure tracking between probe and patient, the proposed device offers potentially significant benefits for the reproducibility and stability of US images and the health of sonographers.
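The control scheme described (gravity compensation, a PID loop, low-pass filtering of the force signal) can be sketched as a simple fixed-gain loop. All gains, the filter cutoff, and the gravity offset below are illustrative assumptions, not the device's actual parameters, and the adaptive gain scheduling is omitted:

```python
import math

class ForcePID:
    """PID force controller with gravity compensation and a first-order
    low-pass filter on the force measurement (illustrative sketch)."""

    def __init__(self, kp, ki, kd, dt, cutoff_hz, gravity_offset):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        # Discrete first-order low-pass coefficient for the given cutoff.
        self.alpha = dt / (dt + 1.0 / (2.0 * math.pi * cutoff_hz))
        self.gravity_offset = gravity_offset  # probe weight along axis [N]
        self.integral = 0.0
        self.prev_err = 0.0
        self.filtered = 0.0

    def step(self, target_force, raw_force):
        # Remove the probe's own weight, then low-pass the sensor signal.
        force = raw_force - self.gravity_offset
        self.filtered += self.alpha * (force - self.filtered)
        err = target_force - self.filtered
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = ForcePID(kp=2.0, ki=0.5, kd=0.01, dt=0.01,
               cutoff_hz=5.0, gravity_offset=1.2)
# Sensor reads only the probe's weight: the controller commands more force.
u = pid.step(target_force=5.0, raw_force=1.2)
```

Filtering the measurement rather than the command keeps the loop responsive while suppressing the speckle-like noise of a contact force sensor.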
9
Ferreira MR, Torres HR, Oliveira B, de Araujo ARVF, Morais P, Novais P, Vilaca JL. Deep Learning Networks for Breast Lesion Classification in Ultrasound Images: A Comparative Study. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083151] [DOI: 10.1109/embc40787.2023.10340293] [Indexed: 12/18/2023]
Abstract
Accurate lesion classification as benign or malignant in breast ultrasound (BUS) images is a critical task that requires experienced radiologists and has many challenges, such as poor image quality, artifacts, and high lesion variability. Thus, automatic lesion classification may aid professionals in breast cancer diagnosis. In this scope, computer-aided diagnosis systems have been proposed to assist in medical image interpretation, reducing intra- and inter-observer variability. Recently, such systems using convolutional neural networks have demonstrated impressive results in medical image classification tasks. However, the lack of public benchmarks and a standardized evaluation method hampers the performance comparison of networks. This work is a benchmark for lesion classification in BUS images comparing six state-of-the-art networks: GoogLeNet, InceptionV3, ResNet, DenseNet, MobileNetV2, and EfficientNet. For each network, five input data variations that include segmentation information were tested to compare their impact on the final performance. The methods were trained on a multi-center BUS dataset (BUSI and UDIAT) and evaluated using the following metrics: precision, sensitivity, F1-score, accuracy, and area under the curve (AUC). Overall, the lesion with a thin border of background provides the best performance. For this input data, EfficientNet obtained the best results: an accuracy of 97.65% and an AUC of 96.30%. Clinical Relevance: This study showed the potential of deep neural networks for clinical use in breast lesion classification and suggests the best model choices.
10
Torres HR, Oliveira B, Fonseca JC, Morais P, Vilaca JL. Dual consistency loss for contour-aware segmentation in medical images. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082637] [DOI: 10.1109/embc40787.2023.10340931] [Indexed: 12/18/2023]
Abstract
Medical image segmentation is a paramount task for several clinical applications, namely for the diagnosis of pathologies, for treatment planning, and for aiding image-guided surgeries. With the development of deep learning, Convolutional Neural Networks (CNN) have become the state-of-the-art for medical image segmentation. However, concerns remain about precise object boundary delineation, since traditional CNNs can produce non-smooth segmentations with boundary discontinuities. In this work, a U-shaped CNN architecture is proposed to generate both a pixel-wise segmentation and a probabilistic contour map of the object to segment, in order to produce reliable segmentations at the object's boundaries. Moreover, since the segmentation and contour maps must be inherently related to each other, a dual consistency loss that relates the two outputs of the network is proposed. Thus, the network is enforced to consistently learn the segmentation and contour delineation tasks during training. The proposed method was applied and validated on a public dataset of cardiac 3D ultrasound images of the left ventricle. The results showed the good performance of the method and its applicability to the cardiac dataset, showing its potential for clinical use in medical image segmentation. Clinical Relevance: The proposed network with a dual consistency loss scheme can improve the performance of state-of-the-art CNNs for medical image segmentation, proving its value for computer-aided diagnosis.
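The paper's exact consistency formulation is not given in the abstract; one plausible sketch, under the assumption that the contour implied by a segmentation map can be derived from its spatial gradient, penalizes disagreement between the predicted contour map and that derived contour:

```python
import numpy as np

def contour_from_mask(seg):
    """Soft contour map as the normalized gradient magnitude of a
    (soft) segmentation map."""
    gy, gx = np.gradient(seg)
    g = np.hypot(gx, gy)
    return g / (g.max() + 1e-8)

def dual_consistency_loss(seg_pred, contour_pred):
    """Mean squared disagreement between the predicted contour map and
    the contour implied by the predicted segmentation map (a sketch of
    one way to tie the two outputs together)."""
    return float(np.mean((contour_from_mask(seg_pred) - contour_pred) ** 2))

seg = np.zeros((8, 8))
seg[2:6, 2:6] = 1.0                      # square "left ventricle" mask
consistent = contour_from_mask(seg)      # contour agreeing with the mask
print(dual_consistency_loss(seg, consistent))        # exactly 0
print(dual_consistency_loss(seg, np.zeros((8, 8))))  # positive: inconsistent
```

During training such a term would be added to the usual segmentation and contour losses, pushing the two output heads toward mutually compatible predictions.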
11
Kim TK, Kim JS, Cho HC. Deep-learning-based gestational sac detection in ultrasound images using modified YOLOv7-E6E model. J Anim Sci Technol 2023; 65:627-637. [PMID: 37332278] [PMCID: PMC10271918] [DOI: 10.5187/jast.2023.e43] [Received: 03/21/2023] [Revised: 04/20/2023] [Accepted: 05/03/2023] [Indexed: 06/20/2023]
Abstract
As population and income levels rise, meat consumption steadily increases annually. However, the number of farms and farmers producing meat has decreased over the same period, reducing meat self-sufficiency. Information and Communications Technology (ICT) has begun to be applied to reduce the labor and production costs of livestock farms and improve productivity. This technology can be used for rapid pregnancy diagnosis of sows; the location and size of the gestational sacs of sows are directly related to the productivity of the farm. In this study, we propose a system to determine the number of gestational sacs of sows from ultrasound images. The system used the YOLOv7-E6E model, changing the activation function from the sigmoid-weighted linear unit (SiLU) to a multi-activation function (SiLU + Mish). In addition, the upsampling method was changed from nearest-neighbor to bicubic to improve performance. The original model trained on the original data achieved a mean average precision of 86.3%. When the proposed multi-activation function, upsampling, and AutoAugment were applied individually, performance improved by 0.3%, 0.9%, and 0.9%, respectively. When all three proposed methods were applied simultaneously, performance improved by 3.5 percentage points, to 89.8%.
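Both activations named above have simple closed forms: SiLU is x·sigmoid(x) and Mish is x·tanh(softplus(x)). How the paper combines them (layer-wise assignment vs. blending) is not stated in the abstract, so the equal-weight blend below is purely an illustrative assumption:

```python
import numpy as np

def silu(x):
    """SiLU (swish): x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def mish(x):
    """Mish: x * tanh(softplus(x))."""
    return x * np.tanh(np.log1p(np.exp(x)))

def multi_activation(x, w=0.5):
    """Hypothetical SiLU + Mish blend; the paper's actual combination
    scheme is not specified in the abstract."""
    return w * silu(x) + (1.0 - w) * mish(x)

x = np.linspace(-4.0, 4.0, 9)
y = multi_activation(x)  # smooth, non-monotone near 0, ~x for large x
```

Both components are smooth and allow small negative outputs, which is the usual motivation for replacing ReLU-family activations in detection backbones.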
Affiliation(s)
- Tae-kyeong Kim
- Interdisciplinary Graduate Program for BIT Medical Convergence, Kangwon National University, Chuncheon 24341, Korea
- Jin Soo Kim
- College of Animal Life Sciences, Kangwon National University, Chuncheon 24341, Korea
- Hyun-chong Cho
- Department of Electronics Engineering and Interdisciplinary Graduate Program for BIT Medical Convergence, Kangwon National University, Chuncheon 24341, Korea
12
Caspi Y, de Zwarte SMC, Iemenschot IJ, Lumbreras R, de Heus R, Bekker MN, Hulshoff Pol H. Automatic measurements of fetal intracranial volume from 3D ultrasound scans. Front Neuroimaging 2022; 1:996702. [PMID: 37555155] [PMCID: PMC10406279] [DOI: 10.3389/fnimg.2022.996702] [Received: 07/18/2022] [Accepted: 09/15/2022] [Indexed: 08/10/2023]
Abstract
Three-dimensional fetal ultrasound is commonly used to study the volumetric development of brain structures. To date, only a limited number of automatic procedures for delineating the intracranial volume exist. Hence, intracranial volume measurements from three-dimensional ultrasound images are predominantly performed manually. Here, we present and validate an automated tool to extract the intracranial volume from three-dimensional fetal ultrasound scans. The procedure is based on the registration of a brain model to a subject brain. The intracranial volume of the subject is measured by applying the inverse of the final transformation to an intracranial mask of the brain model. The automatic measurements showed a high correlation with manual delineation of the same subjects at two gestational ages, around 20 and 30 weeks (linear fitting R2 = 0.88 at 20 weeks and 0.77 at 30 weeks; intraclass correlation coefficients 0.94 at 20 weeks and 0.84 at 30 weeks). Overall, the automatic intracranial volumes were larger than the manually delineated ones (84 ± 16 vs. 76 ± 15 cm3, and 274 ± 35 vs. 237 ± 28 cm3), probably due to differences in cerebellum delineation. Notably, the automated measurements reproduced both the non-linear pattern of fetal brain growth and the increased inter-subject variability for older fetuses. By contrast, there was some disagreement between the manual and automatic delineations concerning the size of sexual dimorphism differences. The method presented here provides a relatively efficient way to automatically delineate volumes of fetal brain structures such as the intracranial volume. It can be used as a research tool to investigate these structures in large cohorts, which will ultimately aid in understanding fetal structural human brain development.
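The core measurement idea, carrying a model-space mask through the inverse of the registration transform, has a particularly simple form when the transform is affine: volumes scale by the absolute determinant of the linear part. The full method presumably uses a richer transform; this is a simplified affine sketch with made-up numbers:

```python
import numpy as np

def intracranial_volume(model_mask, voxel_vol_cm3, affine_model_to_subject):
    """Subject ICV from a model-space binary mask and a 4x4
    model-to-subject affine.

    Under an affine map, volumes scale by |det| of the 3x3 linear part,
    so the mask volume transfers without resampling the mask itself.
    """
    model_vol = model_mask.sum() * voxel_vol_cm3
    scale = abs(np.linalg.det(affine_model_to_subject[:3, :3]))
    return float(model_vol * scale)

mask = np.ones((10, 10, 10))        # 1000 voxels of 0.05 cm^3 -> 50 cm^3
A = np.diag([1.2, 1.2, 1.2, 1.0])   # subject brain 1.2x larger per axis
icv = intracranial_volume(mask, 0.05, A)  # 50 * 1.2^3 = 86.4 cm^3
```

For a non-rigid final transform the same bookkeeping would use the Jacobian determinant integrated over the mask rather than a single global factor.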
Affiliation(s)
- Yaron Caspi
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Sonja M. C. de Zwarte
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Iris J. Iemenschot
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Raquel Lumbreras
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Roel de Heus
- Department of Obstetrics and Gynaecology, St. Antonius Hospital, Utrecht, Netherlands
- Department of Obstetrics, University Medical Center Utrecht, Utrecht, Netherlands
- Mireille N. Bekker
- Department of Obstetrics, University Medical Center Utrecht, Utrecht, Netherlands
- Hilleke Hulshoff Pol
- Department of Psychiatry, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Department of Psychology, Utrecht University, Utrecht, Netherlands
13
Alzubaidi M, Agus M, Shah U, Makhlouf M, Alyafei K, Househ M. Ensemble Transfer Learning for Fetal Head Analysis: From Segmentation to Gestational Age and Weight Prediction. Diagnostics (Basel) 2022; 12:2229. [PMID: 36140628] [PMCID: PMC9497941] [DOI: 10.3390/diagnostics12092229] [Received: 08/12/2022] [Revised: 08/25/2022] [Accepted: 08/26/2022] [Indexed: 11/16/2022]
Abstract
Ultrasound is one of the most commonly used imaging methodologies in obstetrics to monitor the growth of a fetus during the gestation period. Specifically, ultrasound images are routinely utilized to gather fetal information, including body measurements, anatomical structure, fetal movements, and pregnancy complications. Recent developments in artificial intelligence and computer vision provide new methods for the automated analysis of medical images in many domains, including ultrasound images. We present a full end-to-end framework for segmenting, measuring, and estimating fetal gestational age and weight based on two-dimensional ultrasound images of the fetal head. Our segmentation framework is based on the following components: (i) eight segmentation architectures (UNet, UNet Plus, Attention UNet, UNet 3+, TransUNet, FPN, LinkNet, and Deeplabv3) fine-tuned using the lightweight network EfficientNetB0, and (ii) a weighted voting method for building an optimized ensemble transfer learning model (ETLM). The ETLM was then used to segment the fetal head and to perform accurate measurements of the circumference and seven other values of the fetal head, which we incorporated into a multiple regression model for predicting the week of gestational age and the estimated fetal weight (EFW). We finally validated the regression model by comparing our results with expert physician and longitudinal references. We evaluated the performance of our framework on the public domain dataset HC18: we obtained 98.53% mean intersection over union (mIoU) as the segmentation accuracy, surpassing state-of-the-art methods; as measurement accuracy, we obtained a 1.87 mm mean absolute difference (MAD). Finally, we obtained a 0.03% mean square error (MSE) in predicting the week of gestational age and a 0.05% MSE in predicting EFW.
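The weighted voting step can be sketched as a soft vote over the per-model probability maps, with each model weighted by some quality score (the abstract does not state how the weights are chosen, so validation Dice is assumed here for illustration):

```python
import numpy as np

def ensemble_vote(prob_maps, weights, threshold=0.5):
    """Weighted soft voting over per-model probability maps.

    prob_maps: list of (H, W) foreground-probability arrays, one per model.
    weights:   per-model weights, e.g. each model's validation Dice
               (an assumption here, not the paper's stated scheme).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize the weights
    fused = sum(wi * p for wi, p in zip(w, prob_maps))
    return (fused >= threshold).astype(np.uint8)

p1 = np.full((2, 2), 0.9)   # confident model votes foreground
p2 = np.full((2, 2), 0.2)   # weaker model disagrees
mask = ensemble_vote([p1, p2], weights=[0.98, 0.75])  # confident model wins
```

Because the stronger model carries more normalized weight, its foreground vote dominates the fused map, which is the point of weighting the ensemble rather than taking a plain majority.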
Affiliation(s)
- Mahmood Alzubaidi
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Correspondence: (M.A.); (M.H.)
- Marco Agus
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Uzair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Michel Makhlouf
- Sidra Medical and Research Center, Sidra Medicine, Doha P.O. Box 26999, Qatar
- Khalid Alyafei
- Sidra Medical and Research Center, Sidra Medicine, Doha P.O. Box 26999, Qatar
- Mowafa Househ
- College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Correspondence: (M.A.); (M.H.)
14
Ferreira MR, Torres HR, Oliveira B, Gomes-Fonseca J, Morais P, Novais P, Vilaca JL. Comparative Analysis of Current Deep Learning Networks for Breast Lesion Segmentation in Ultrasound Images. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3878-3881. [PMID: 36085645] [DOI: 10.1109/embc48229.2022.9871091] [Indexed: 06/15/2023]
Abstract
Automatic lesion segmentation in breast ultrasound (BUS) images aids in the diagnosis of breast cancer, the most common type of cancer in women. Accurate lesion segmentation in ultrasound images is a challenging task due to speckle noise, artifacts, shadows, and lesion variability in size and shape. Recently, convolutional neural networks have demonstrated impressive results in medical image segmentation tasks. However, the lack of public benchmarks and a standardized evaluation method hampers the comparison of network performance. This work presents a benchmark of seven state-of-the-art methods for the automatic breast lesion segmentation task. The methods were evaluated on a multi-center BUS dataset composed of three public datasets. Specifically, the U-Net, Dynamic U-Net, Semantic Segmentation Deep Residual Network with Variational Autoencoder (SegResNetVAE), U-Net Transformers, Residual Feedback Network, Multiscale Dual Attention-Based Network, and Global Guidance Network (GG-Net) architectures were evaluated. The training was performed with a combination of the cross-entropy and Dice loss functions, and the overall performance of the networks was assessed using the Dice coefficient, Jaccard index, accuracy, recall, specificity, and precision. Although all networks obtained Dice scores above 75%, the GG-Net and SegResNetVAE architectures outperform the remaining methods, achieving 82.56% and 81.90%, respectively. Clinical Relevance: The results of this study demonstrate the potential of deep neural networks for clinical use in breast lesion segmentation and suggest the best model choices.
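The two overlap metrics used throughout the benchmark, Dice coefficient and Jaccard index, are standard and easy to compute from binary masks; a minimal sketch with toy masks:

```python
import numpy as np

def dice_and_jaccard(pred, target, eps=1e-8):
    """Dice coefficient and Jaccard index for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)
    jaccard = inter / (np.logical_or(pred, target).sum() + eps)
    return float(dice), float(jaccard)

# Two 16-pixel squares overlapping in a 3x3 region (9 pixels).
a = np.zeros((8, 8), dtype=np.uint8); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=np.uint8); b[3:7, 3:7] = 1
dice, jac = dice_and_jaccard(a, b)
print(round(dice, 4), round(jac, 4))  # 0.5625 0.3913
```

The two are monotonically related (J = D / (2 - D)), so rankings agree between them; Dice is simply the more forgiving number, which is why benchmarks usually report both.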