1
Wang L, Fatemi M, Alizad A. Artificial intelligence in fetal brain imaging: Advancements, challenges, and multimodal approaches for biometric and structural analysis. Comput Biol Med 2025; 192:110312. [PMID: 40319756 DOI: 10.1016/j.compbiomed.2025.110312]
Abstract
Artificial intelligence (AI) is transforming fetal brain imaging by addressing key challenges in diagnostic accuracy, efficiency, and data integration in prenatal care. This review explores AI's application in enhancing fetal brain imaging through ultrasound (US) and magnetic resonance imaging (MRI), with a particular focus on multimodal integration to leverage their complementary strengths. By critically analyzing state-of-the-art AI methodologies, including deep learning frameworks and attention-based architectures, this study highlights significant advancements alongside persistent challenges. Notable barriers include the scarcity of diverse and high-quality datasets, computational inefficiencies, and ethical concerns surrounding data privacy and security. Special attention is given to multimodal approaches that integrate US and MRI, combining the accessibility and real-time imaging of US with the superior soft tissue contrast of MRI to improve diagnostic precision. Furthermore, this review emphasizes the transformative potential of AI in fostering clinical adoption through innovations such as real-time diagnostic tools and human-AI collaboration frameworks. By providing a comprehensive roadmap for future research and implementation, this study underscores AI's potential to redefine fetal imaging practices, enhance diagnostic accuracy, and ultimately improve perinatal care outcomes.
Affiliation(s)
- Lulu Wang
- Department of Engineering, Reykjavík University, Reykjavík 101, Iceland; Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN, 55902, USA; College of Science, Engineering and Technology, University of South Africa, Midrand, 1686, Gauteng, South Africa.
- Mostafa Fatemi
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN, 55902, USA
- Azra Alizad
- Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN, 55902, USA
2
Zhang W, Yang T, Fan J, Wang H, Ji M, Zhang H, Miao J. U-shaped network combining dual-stream fusion mamba and redesigned multilayer perceptron for myocardial pathology segmentation. Med Phys 2025. [PMID: 40247150 DOI: 10.1002/mp.17812]
Abstract
BACKGROUND Cardiac magnetic resonance imaging (CMR) provides critical pathological information, such as scars and edema, which are vital for diagnosing myocardial infarction (MI). However, due to the limited pathological information in single-sequence CMR images and the small size of pathological regions, automatic segmentation of myocardial pathology remains a significant challenge. PURPOSE In this paper, we propose a novel two-stage anatomical-pathological segmentation framework combining Kolmogorov-Arnold Networks (KAN) and Mamba, aiming to effectively segment myocardial pathology in multi-sequence CMR images. METHODS First, in the coarse segmentation stage, we employed a multiline parallel MambaUnet as the anatomical structure segmentation network to obtain shape prior information. This approach effectively addresses the class imbalance issue and aids in subsequent pathological segmentation. In the fine segmentation stage, we introduced a novel U-shaped segmentation network, KANMambaNet, which features a Dual-Stream Fusion Mamba module. This module enhances the network's ability to capture long-range dependencies while improving its capability to distinguish different pathological features in small regions. Additionally, we developed a Kolmogorov-Arnold Network-based multilayer perceptron (KAN MLP) module that utilizes learnable activation functions instead of fixed nonlinear functions. This design enhances the network's flexibility in handling various pathological features, enabling more accurate differentiation of the pathological characteristics at the boundary between edema and scar regions. Our method achieves competitive segmentation performance compared to state-of-the-art models, particularly in terms of the Dice coefficient. RESULTS We validated our model's performance on the MyoPS2020 dataset, achieving a Dice score of 0.8041 ± 0.0751 for myocardial edema and 0.9051 ± 0.0240 for myocardial scar. Compared to the baseline model MambaUnet, our edema segmentation performance improved by 0.1420, and scar segmentation performance improved by 0.1081. CONCLUSIONS We developed an innovative two-stage anatomical-pathological segmentation framework that integrates KAN and Mamba, effectively segmenting myocardial pathology in multi-sequence CMR images. The experimental results demonstrate that our proposed method achieves superior segmentation performance compared to other state-of-the-art methods.
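The Dice scores reported above measure overlap between predicted and reference masks. A minimal numpy sketch of the metric (illustrative only, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy example: two overlapping square "lesions" on a 10x10 grid.
pred = np.zeros((10, 10)); pred[2:6, 2:6] = 1      # 16 pixels
target = np.zeros((10, 10)); target[3:7, 3:7] = 1  # 16 pixels, 9 overlapping
print(round(dice_coefficient(pred, target), 4))    # 2*9/(16+16) = 0.5625
```

A Dice of 1 indicates perfect overlap, so scores such as 0.80 for edema versus 0.91 for scar reflect that the smaller, more diffuse edema regions are harder to delineate.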
Affiliation(s)
- Wenjie Zhang
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, China
- Tiejun Yang
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, China
- Key Laboratory of Grain Information Processing and Control (HAUT), Ministry of Education, Zhengzhou, China
- Henan Key Laboratory of Grain Photoelectric Detection and Control (HAUT), Zhengzhou, Henan, China
- Jiacheng Fan
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, China
- Heng Wang
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, China
- Mingzhu Ji
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, China
- Huiyao Zhang
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, China
- Jianyu Miao
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, China
3
Bastiaansen WAP, Klein S, Hojeij B, Rubini E, Koning AHJ, Niessen W, Steegers-Theunissen RPM, Rousian M. Automatic Human Embryo Volume Measurement in First Trimester Ultrasound From the Rotterdam Periconception Cohort: Quantitative and Qualitative Evaluation of Artificial Intelligence. J Med Internet Res 2025; 27:e60887. [PMID: 40163035 PMCID: PMC11997536 DOI: 10.2196/60887]
Abstract
BACKGROUND Noninvasive volumetric measurements during the first trimester of pregnancy provide unique insight into human embryonic growth and development. However, current methods, such as semiautomatic (eg, virtual reality [VR]) or manual segmentation (eg, VOCAL), are not used in routine care due to their time-consuming nature, requirement for specialized training, and introduction of inter- and intrarater variability. OBJECTIVE To address the challenges of manual and semiautomatic measurements, this study aimed to develop an automatic artificial intelligence (AI) algorithm to segment the region of interest and measure embryonic volume (EV) and head volume (HV) during the first trimester of pregnancy. METHODS We used 3D ultrasound datasets from the Rotterdam Periconception Cohort, collected between 7 and 11 weeks of gestational age. We measured the EV in gestational weeks 7, 9, and 11, and the HV in weeks 9 and 11. To develop the AI algorithms for measuring EV and HV, we used nnU-Net, a publicly available state-of-the-art segmentation algorithm. We tested the algorithms on 164 (EV) and 92 (HV) datasets, both acquired before 2020. The AI algorithms' generalization to data acquired later was evaluated by testing on 116 (EV) and 58 (HV) datasets from 2020. The performance of the model was assessed using the intraclass correlation coefficient (ICC) between the volumes obtained using AI and using VR. In addition, 2 experts qualitatively rated both VR and AI segmentations for the EV and HV. RESULTS Segmentation of both the EV and HV using AI took around one minute, and rating took another minute; in total, volume measurement took 2 minutes per ultrasound dataset, whereas experienced raters needed 5-10 minutes using a VR tool. For both the EV and HV, we found an ICC of 0.998 on the test set acquired before 2020 and an ICC of 0.996 (EV) and 0.997 (HV) for data acquired in 2020. During qualitative rating for the EV, a comparable proportion of segmentations (AI: 42%, VR: 38%) was rated as excellent; however, major errors were more common with the AI algorithm, as it more frequently missed limbs. For the HV, the AI segmentations were rated as excellent in 79% of cases, compared with only 17% for VR. CONCLUSIONS We developed 2 fully automatic AI algorithms to accurately measure the EV and HV in the first trimester on 3D ultrasound data. In-depth qualitative analysis revealed that the quality of the measurements for AI and VR was similar. Since automatic volumetric assessment now takes only a couple of minutes, using these measurements to monitor growth and development during this crucial period of pregnancy becomes feasible, which may lead to better screening, diagnostics, and treatment of developmental disorders in pregnancy.
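The ICC values above quantify agreement between AI- and VR-derived volumes. The abstract does not state which ICC variant was used; the sketch below assumes a two-way random-effects, absolute-agreement, single-measurement ICC(2,1), implemented from the standard ANOVA mean squares:

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    `scores` has shape (n_subjects, k_raters), e.g. columns = (AI, VR) volumes.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subject mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-rater mean square
    sse = np.sum((scores - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual mean square
    return float((msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n))

# Hypothetical paired volumes (cm^3): AI vs. VR for 5 embryos (made-up numbers).
vols = np.array([[1.02, 1.00], [2.51, 2.48], [0.75, 0.77], [3.10, 3.12], [1.80, 1.79]])
print(round(icc_2_1(vols), 3))  # close to 1 for near-perfect agreement
```

An ICC near 0.998, as reported above, means the AI and VR volumes are essentially interchangeable at the measurement level.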
Affiliation(s)
- Wietske A P Bastiaansen
- Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Department of Radiology and Nuclear Medicine, Biomedical Imaging Group Rotterdam, University Medical Center, Erasmus MC, Rotterdam, The Netherlands
- Stefan Klein
- Department of Radiology and Nuclear Medicine, Biomedical Imaging Group Rotterdam, University Medical Center, Erasmus MC, Rotterdam, The Netherlands
- Batoul Hojeij
- Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Eleonora Rubini
- Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Anton H J Koning
- Department of Pathology, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Wiro Niessen
- University Medical Center Groningen, Groningen, The Netherlands
- Melek Rousian
- Department of Obstetrics and Gynecology, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
4
Zhang Z, Lei Z, Zhou M, Hasegawa H, Gao S. Complex-Valued Convolutional Gated Recurrent Neural Network for Ultrasound Beamforming. IEEE Trans Neural Netw Learn Syst 2025; 36:5668-5679. [PMID: 38598398 DOI: 10.1109/tnnls.2024.3384314]
Abstract
Ultrasound detection is a potent tool for the clinical diagnosis of various diseases due to its real-time, convenient, and noninvasive qualities. Yet, existing ultrasound beamforming and related methods face a major challenge in improving both the quality and speed of imaging for the required clinical applications. The most notable characteristics of ultrasound signal data are its spatial and temporal features. Because most signals are complex-valued, directly processing them using real-valued networks leads to phase distortion and inaccurate output. In this study, for the first time, we propose a complex-valued convolutional gated recurrent (CCGR) neural network to handle ultrasound analytic signals with the aforementioned properties. The complex-valued network operations proposed in this study improve the beamforming accuracy of complex-valued ultrasound signals over traditional real-valued methods. Further, the proposed deep integration of convolutional and recurrent neural networks contributes greatly to extracting rich and informative ultrasound signal features. Our experimental results reveal its outstanding imaging quality over existing state-of-the-art methods. More significantly, its ultrafast processing speed of only 0.07 s per image promises considerable clinical application potential. The code is available at https://github.com/zhangzm0128/CCGR.
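The motivation for complex-valued operations — avoiding the phase distortion of real-valued processing — can be seen in how a complex convolution decomposes into four real convolutions. A generic numpy illustration (not the CCGR implementation):

```python
import numpy as np

def complex_conv(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Complex convolution built from four real convolutions:
    (xr + i·xi) * (wr + i·wi) = (xr*wr - xi*wi) + i(xr*wi + xi*wr)."""
    xr, xi = x.real, x.imag
    wr, wi = w.real, w.imag
    real = np.convolve(xr, wr) - np.convolve(xi, wi)
    imag = np.convolve(xr, wi) + np.convolve(xi, wr)
    return real + 1j * imag

# An analytic-like test signal and a small complex kernel.
x = np.exp(1j * np.linspace(0, np.pi, 8))
w = np.array([0.5 + 0.5j, 0.25 - 0.1j])
# Matches numpy's native complex convolution, i.e. phase is preserved exactly;
# dropping the cross terms (as a real-valued network effectively does) would not.
assert np.allclose(complex_conv(x, w), np.convolve(x, w))
```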
5
Bai J, Zhou Z, Ou Z, Koehler G, Stock R, Maier-Hein K, Elbatel M, Martí R, Li X, Qiu Y, Gou P, Chen G, Zhao L, Zhang J, Dai Y, Wang F, Silvestre G, Curran K, Sun H, Xu J, Cai P, Jiang L, Lan L, Ni D, Zhong M, Chen G, Campello VM, Lu Y, Lekadir K. PSFHS challenge report: Pubic symphysis and fetal head segmentation from intrapartum ultrasound images. Med Image Anal 2025; 99:103353. [PMID: 39340971 DOI: 10.1016/j.media.2024.103353]
Abstract
Segmentation of fetal and maternal structures in intrapartum ultrasound imaging, as advocated by the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) for monitoring labor progression, is a crucial first step for quantitative diagnosis and clinical decision-making. This requires specialized analysis by obstetrics professionals, in a task that i) is highly time- and cost-consuming and ii) often yields inconsistent results. The utility of automatic segmentation algorithms for biometry has been proven, though existing results remain suboptimal. To push forward advancements in this area, the Grand Challenge on Pubic Symphysis-Fetal Head Segmentation (PSFHS) was held alongside the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023). This challenge aimed to advance the development of automatic segmentation algorithms at an international scale, providing the largest dataset to date, with 5,101 intrapartum ultrasound images collected from two ultrasound machines across three hospitals of two institutions. The scientific community's enthusiastic participation led to the selection of the top 8 out of 179 entries from 193 registrants in the initial phase to proceed to the competition's second stage. These algorithms have elevated the state of the art in automatic PSFHS from intrapartum ultrasound images. A thorough analysis of the results pinpointed ongoing challenges in the field and outlined recommendations for future work. The top solutions and the complete dataset remain publicly available, fostering further advancements in automatic segmentation and biometry for intrapartum ultrasound imaging.
Affiliation(s)
- Jieyun Bai
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China; Auckland Bioengineering Institute, The University of Auckland, Private Bag 92019, Auckland 1142, New Zealand.
- Zihao Zhou
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China
- Zhanhong Ou
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China
- Gregor Koehler
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Raphael Stock
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Klaus Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Marawan Elbatel
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Robert Martí
- Computer Vision and Robotics Group, University of Girona, Girona, Spain
- Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Yaoyang Qiu
- Canon Medical Systems (China) Co., LTD, Beijing, China
- Panjie Gou
- Canon Medical Systems (China) Co., LTD, Beijing, China
- Gongping Chen
- College of Artificial Intelligence, Nankai University, Tianjin, China
- Lei Zhao
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
- Jianxun Zhang
- College of Artificial Intelligence, Nankai University, Tianjin, China
- Yu Dai
- College of Artificial Intelligence, Nankai University, Tianjin, China
- Fangyijie Wang
- School of Medicine, University College Dublin, Dublin, Ireland
- Kathleen Curran
- School of Computer Science, University College Dublin, Dublin, Ireland
- Hongkun Sun
- School of Statistics & Mathematics, Zhejiang Gongshang University, Hangzhou, China
- Jing Xu
- School of Statistics & Mathematics, Zhejiang Gongshang University, Hangzhou, China
- Pengzhou Cai
- School of Computer Science & Engineering, Chongqing University of Technology, Chongqing, China
- Lu Jiang
- School of Computer Science & Engineering, Chongqing University of Technology, Chongqing, China
- Libin Lan
- School of Computer Science & Engineering, Chongqing University of Technology, Chongqing, China
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound & Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging & School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Mei Zhong
- NanFang Hospital of Southern Medical University, Guangzhou, China
- Gaowen Chen
- Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Víctor M Campello
- Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Yaosheng Lu
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China
- Karim Lekadir
- Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
6
Hurtado J, Sierra-Franco CA, Motta T, Raposo A. Segmentation of four-chamber view images in fetal ultrasound exams using a novel deep learning model ensemble method. Comput Biol Med 2024; 183:109188. [PMID: 39395344 DOI: 10.1016/j.compbiomed.2024.109188]
Abstract
Fetal echocardiography, a specialized ultrasound application commonly utilized for fetal heart assessment, can greatly benefit from automated segmentation of anatomical structures, aiding operators in their evaluations. We introduce a novel approach that combines various deep learning models for segmenting key anatomical structures in 2D ultrasound images of the fetal heart. Our ensemble method combines the raw predictions of the selected models, obtaining the optimal set of segmentation components that closely approximates the distribution of the fetal heart, resulting in improved segmentation outcomes. The selection of these components involves sequential and hierarchical geometry filtering, focusing on the analysis of shape and relative distances. Unlike other ensemble strategies that average predictions, our method works as a shape selector, ensuring that the final segmentation aligns more accurately with anatomical expectations. Using a large private dataset for model training and evaluation, we present both numerical and visual experiments highlighting the advantages of our method over the segmentations produced by the individual models and by a conventional average ensemble. Furthermore, we show some applications where our method proves instrumental in obtaining reliable estimations.
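Schematically, a shape-selector ensemble keeps the candidate prediction that best satisfies geometric expectations instead of averaging probability maps. A toy numpy sketch, with made-up area/centroid criteria that are far simpler than the paper's sequential hierarchical filtering:

```python
import numpy as np

def select_by_shape(candidates: list, expected_area: float,
                    expected_centroid: tuple) -> np.ndarray:
    """Pick the candidate binary mask whose area and centroid are closest
    to anatomical expectations, rather than averaging all predictions."""
    def score(mask: np.ndarray) -> float:
        area = mask.sum()
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()
        # Lower is better: relative area error plus centroid distance.
        return (abs(area - expected_area) / expected_area
                + np.hypot(cy - expected_centroid[0], cx - expected_centroid[1]))
    return min(candidates, key=score)

# Three hypothetical model outputs for one cardiac structure.
m1 = np.zeros((20, 20)); m1[1:4, 1:4] = 1      # too small, off-center
m2 = np.zeros((20, 20)); m2[8:13, 8:13] = 1    # ~25 px near the expected center
m3 = np.zeros((20, 20)); m3[5:18, 5:18] = 1    # far too large
best = select_by_shape([m1, m2, m3], expected_area=25, expected_centroid=(10, 10))
assert best is m2
```

Averaging these three masks would blur the boundary across all candidates; selection keeps one anatomically plausible shape intact.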
Affiliation(s)
- Jan Hurtado
- Tecgraf Institute, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil; Department of Informatics, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil.
- Cesar A Sierra-Franco
- Tecgraf Institute, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil.
- Thiago Motta
- Tecgraf Institute, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil.
- Alberto Raposo
- Tecgraf Institute, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil; Department of Informatics, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil.
7
Kumar A, Jiang H, Imran M, Valdes C, Leon G, Kang D, Nataraj P, Zhou Y, Weiss MD, Shao W. A flexible 2.5D medical image segmentation approach with in-slice and cross-slice attention. Comput Biol Med 2024; 182:109173. [PMID: 39317055 DOI: 10.1016/j.compbiomed.2024.109173]
Abstract
Deep learning has become the de facto method for medical image segmentation, with 3D segmentation models excelling in capturing complex 3D structures and 2D models offering high computational efficiency. However, segmenting 2.5D images, characterized by high in-plane resolution but lower through-plane resolution, presents significant challenges. While applying 2D models to individual slices of a 2.5D image is feasible, it fails to capture the spatial relationships between slices. On the other hand, 3D models face challenges such as resolution inconsistencies in 2.5D images, along with computational complexity and susceptibility to overfitting when trained with limited data. In this context, 2.5D models, which capture inter-slice correlations using only 2D neural networks, emerge as a promising solution due to their reduced computational demand and simplicity in implementation. In this paper, we introduce CSA-Net, a flexible 2.5D segmentation model capable of processing 2.5D images with an arbitrary number of slices. CSA-Net features an innovative Cross-Slice Attention (CSA) module that effectively captures 3D spatial information by learning long-range dependencies between the center slice (for segmentation) and its neighboring slices. Moreover, CSA-Net utilizes the self-attention mechanism to learn correlations among pixels within the center slice. We evaluated CSA-Net on three 2.5D segmentation tasks: (1) multi-class brain MR image segmentation, (2) binary prostate MR image segmentation, and (3) multi-class prostate MR image segmentation. CSA-Net outperformed leading 2D, 2.5D, and 3D segmentation methods across all three tasks, achieving average Dice coefficients and HD95 values of 0.897 and 1.40 mm for the brain dataset, 0.921 and 1.06 mm for the prostate dataset, and 0.659 and 2.70 mm for the ProstateX dataset, demonstrating its efficacy and superiority. Our code is publicly available at: https://github.com/mirthAI/CSA-Net.
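The cross-slice attention idea — queries from the center slice attending to keys/values from neighboring slices — reduces to standard scaled dot-product attention over flattened neighbor positions. A shape-level numpy sketch (not the CSA-Net code; learned projections are omitted):

```python
import numpy as np

def cross_slice_attention(center: np.ndarray, neighbors: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: queries from the center slice,
    keys/values from neighboring slices.

    center:    (n_pixels, d)           features of the slice being segmented
    neighbors: (n_slices, n_pixels, d) features of adjacent slices
    """
    d = center.shape[-1]
    kv = neighbors.reshape(-1, d)                 # flatten slices into one key/value set
    scores = center @ kv.T / np.sqrt(d)           # (n_pixels, n_slices * n_pixels)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over all neighbor positions
    return attn @ kv                              # (n_pixels, d) aggregated 3D context

rng = np.random.default_rng(0)
center = rng.standard_normal((16, 8))        # e.g. 16 pixel tokens, 8-dim features
neighbors = rng.standard_normal((2, 16, 8))  # two adjacent slices
out = cross_slice_attention(center, neighbors)
assert out.shape == (16, 8)
```

Because only 2D feature maps are involved, the number of slices can vary freely, which is what makes the 2.5D formulation flexible.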
Affiliation(s)
- Amarjeet Kumar
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL, 32610, United States
- Hongxu Jiang
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32610, United States
- Muhammad Imran
- Department of Medicine, University of Florida, Gainesville, FL, 32610, United States
- Cyndi Valdes
- Department of Pediatrics, University of Florida, Gainesville, FL, 32610, United States
- Gabriela Leon
- College of Medicine, University of Florida, Gainesville, FL, 32610, United States
- Dahyun Kang
- College of Medicine, University of Florida, Gainesville, FL, 32610, United States
- Parvathi Nataraj
- Department of Pediatrics, University of Florida, Gainesville, FL, 32610, United States
- Yuyin Zhou
- Department of Computer Science and Engineering, University of California, Santa Cruz, CA, 95064, United States
- Michael D Weiss
- Department of Pediatrics, University of Florida, Gainesville, FL, 32610, United States
- Wei Shao
- Department of Medicine, University of Florida, Gainesville, FL, 32610, United States; Intelligent Clinical Care Center, University of Florida, Gainesville, FL, 32610, United States.
8
Boneš E, Gergolet M, Bohak C, Lesar Ž, Marolt M. Automatic Segmentation and Alignment of Uterine Shapes from 3D Ultrasound Data. Comput Biol Med 2024; 178:108794. [PMID: 38941903 DOI: 10.1016/j.compbiomed.2024.108794]
Abstract
BACKGROUND The uterus is the most important organ in the female reproductive system. Its shape plays a critical role in fertility and pregnancy outcomes. Advances in medical imaging, such as 3D ultrasound, have significantly improved the exploration of the female genital tract, thereby enhancing gynecological healthcare. Despite well-documented data for organs like the liver and heart, large-scale studies on the uterus are lacking. Existing classifications, such as VCUAM and ESHRE/ESGE, provide different definitions for normal uterine shapes but are not based on real-world measurements. Moreover, the lack of comprehensive datasets significantly hinders research in this area. Our research, part of the larger NURSE study, aims to fill this gap by establishing the shape of a normal uterus using real-world 3D vaginal ultrasound scans. This will facilitate research into uterine shape abnormalities associated with infertility and recurrent miscarriages. METHODS We developed an automated system for the segmentation and alignment of uterine shapes from 3D ultrasound data, which consists of two steps: automatic segmentation of the uteri in 3D ultrasound scans using deep learning techniques, and alignment of the resulting shapes with standard geometrical approaches, enabling the extraction of the normal shape for future analysis. The system was trained and validated on a comprehensive dataset of 3D ultrasound images from multiple medical centers. Its performance was evaluated by comparing the automated results with manual annotations provided by expert clinicians. RESULTS The presented approach demonstrated high accuracy in segmenting and aligning uterine shapes from 3D ultrasound data. The segmentation achieved an average Dice similarity coefficient (DSC) of 0.90. 
Our method for aligning uterine shapes showed minimal translation and rotation errors compared to traditional methods, with the preliminary average shape exhibiting characteristics consistent with expert findings of a normal uterus. CONCLUSION We have presented an approach to automatically segment and align uterine shapes from 3D ultrasound data. We trained a deep learning nnU-Net model that achieved high accuracy and proposed an alignment method using a combination of standard geometrical techniques. Additionally, we have created a publicly available dataset of 3D transvaginal ultrasound volumes with manual annotations of uterine cavities to support further research and development in this field. The dataset and the trained models are available at https://github.com/UL-FRI-LGM/UterUS.
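The "standard geometrical approaches" used for alignment typically involve rigid point-set registration; a minimal Kabsch-style least-squares alignment in numpy is shown below as an illustration (the paper's exact alignment pipeline is not reproduced here):

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation + translation mapping src points onto dst (Kabsch).
    src, dst: (n_points, 3) corresponding surface points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known rotation + translation of a toy point cloud.
rng = np.random.default_rng(1)
pts = rng.standard_normal((50, 3))
theta = 0.4
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = pts @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(pts, moved)
assert np.allclose(pts @ R.T + t, moved, atol=1e-6)
```

After rigid alignment, segmented uterine surfaces share a common frame, so a mean ("normal") shape can be computed point-wise.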
Affiliation(s)
- Eva Boneš
- University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, 1000, Slovenia.
- Marco Gergolet
- University of Ljubljana, Faculty of Medicine, Vrazov trg 2, Ljubljana, 1000, Slovenia.
- Ciril Bohak
- University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, 1000, Slovenia; King Abdullah University of Science and Technology, Visual Computing Center, Thuwal, 23955-6900, Saudi Arabia.
- Žiga Lesar
- University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, 1000, Slovenia.
- Matija Marolt
- University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, 1000, Slovenia.
9
Ye Q, Yang H, Lin B, Wang M, Song L, Xie Z, Lu Z, Feng Q, Zhao Y. Automatic detection, segmentation, and classification of primary bone tumors and bone infections using an ensemble multi-task deep learning framework on multi-parametric MRIs: a multi-center study. Eur Radiol 2024; 34:4287-4299. [PMID: 38127073 DOI: 10.1007/s00330-023-10506-5]
Abstract
OBJECTIVES To develop an ensemble multi-task deep learning (DL) framework for automatic and simultaneous detection, segmentation, and classification of primary bone tumors (PBTs) and bone infections based on multi-parametric MRI from multiple centers. METHODS This retrospective study divided 749 patients with PBTs or bone infections from two hospitals into a training set (N = 557), an internal validation set (N = 139), and an external validation set (N = 53). The ensemble framework was constructed using T1-weighted images (T1WI), T2-weighted images (T2WI), and clinical characteristics for binary (PBTs/bone infections) and three-category (benign/intermediate/malignant PBTs) classification. The detection and segmentation performances were evaluated using Intersection over Union (IoU) and the Dice score. The classification performance was evaluated using the receiver operating characteristic (ROC) curve and compared with radiologist interpretations. RESULTS On the external validation set, the single T1WI-based and T2WI-based multi-task models obtained IoUs of 0.71 ± 0.25/0.65 ± 0.30 for detection and Dice scores of 0.75 ± 0.26/0.70 ± 0.33 for segmentation. The framework achieved AUCs of 0.959 (95%CI, 0.955-1.000)/0.900 (95%CI, 0.773-0.100) and accuracies of 90.6% (95%CI, 79.7-95.9%)/78.3% (95%CI, 58.1-90.3%) for the binary/three-category classification. Meanwhile, for the three-category classification, the performance of the framework was superior to that of three junior radiologists (accuracy: 65.2%, 69.6%, and 69.6%, respectively) and comparable to that of two senior radiologists (accuracy: 78.3% and 78.3%). CONCLUSION The MRI-based ensemble multi-task framework shows promising performance in automatically and simultaneously detecting, segmenting, and classifying PBTs and bone infections, and was preferable to junior radiologists. CLINICAL RELEVANCE STATEMENT Compared with junior radiologists, the ensemble multi-task deep learning framework effectively improves differential diagnosis for patients with primary bone tumors or bone infections. This finding may help physicians make treatment decisions and enable timely treatment of patients. KEY POINTS
- The ensemble framework fusing multi-parametric MRI and clinical characteristics effectively improves the classification ability of single-modality models.
- The ensemble multi-task deep learning framework performed well in detecting, segmenting, and classifying primary bone tumors and bone infections.
- The ensemble framework achieves a classification performance superior to junior radiologists' interpretations, assisting the clinical differential diagnosis of primary bone tumors and bone infections.
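The IoU used above for detection evaluation differs from the Dice score only in its denominator (for binary masks, Dice = 2·IoU/(1+IoU)). A minimal numpy sketch of the metric, not taken from the paper:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union: |A∩B| / |A∪B| for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / (union + eps))

# Toy example: two 16-pixel boxes with a 4-pixel overlap.
a = np.zeros((8, 8)); a[0:4, 0:4] = 1
b = np.zeros((8, 8)); b[2:6, 2:6] = 1
print(round(iou(a, b), 4))  # 4 / 28 ≈ 0.1429
```

Because the union is always at least as large as the mean of the two areas, IoU is numerically lower than Dice for the same pair of masks, which is why the IoU values above (0.71/0.65) sit below the Dice values (0.75/0.70).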
Affiliation(s)
- Qiang Ye: Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China
- Hening Yang: School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Bomiao Lin: Department of Radiology, ZhuJiang Hospital of Southern Medical University, Guangzhou, China
- Menghong Wang: Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China
- Liwen Song: Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China
- Zhuoyao Xie: Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China
- Zixiao Lu: Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China
- Qianjin Feng: School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Yinghua Zhao: Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), Guangzhou, Guangdong, China

10
Ou Z, Bai J, Chen Z, Lu Y, Wang H, Long S, Chen G. RTSeg-net: A lightweight network for real-time segmentation of fetal head and pubic symphysis from intrapartum ultrasound images. Comput Biol Med 2024; 175:108501. [PMID: 38703545] [DOI: 10.1016/j.compbiomed.2024.108501]
Abstract
The segmentation of the fetal head (FH) and pubic symphysis (PS) from intrapartum ultrasound images plays a pivotal role in monitoring labor progression and informing crucial clinical decisions. Achieving real-time segmentation with high accuracy on systems with limited hardware capabilities presents significant challenges. To address these challenges, we propose the real-time segmentation network (RTSeg-Net), a lightweight deep learning model that incorporates distribution-shifting convolutional blocks, tokenized multilayer perceptron blocks, and efficient feature fusion blocks. Designed for computational efficiency, RTSeg-Net minimizes resource demands while significantly enhancing segmentation performance. Our evaluation on two distinct intrapartum ultrasound image datasets shows that RTSeg-Net achieves segmentation accuracy on par with more complex state-of-the-art networks while using merely 1.86 M parameters (just 6% of the parameter count of those networks) and operating seven times faster, reaching 31.13 frames per second on a Jetson Nano, a device with limited computing capacity. These results underscore RTSeg-Net's potential to provide accurate, real-time segmentation on low-power devices, broadening the scope of its application across the stages of labor. By facilitating real-time, accurate ultrasound image analysis on portable, low-cost devices, RTSeg-Net could make sophisticated intrapartum monitoring accessible to a wider range of healthcare settings.
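Parameter count and frames-per-second throughput, the efficiency figures quoted above, are straightforward to measure for any model; a hedged sketch in which a dummy callable stands in for a real network's forward pass:

```python
import time
import numpy as np

def count_parameters(weight_arrays):
    """Total learnable parameters, summed over all weight tensors."""
    return sum(w.size for w in weight_arrays)

def measure_fps(infer, frame, n_warmup=2, n_runs=20):
    """Average frames per second of an inference callable on one frame."""
    for _ in range(n_warmup):          # warm-up runs excluded from timing
        infer(frame)
    start = time.perf_counter()
    for _ in range(n_runs):
        infer(frame)
    return n_runs / (time.perf_counter() - start)

# Hypothetical stand-in: one 3x3 convolution (16 -> 32 channels) plus bias.
weights = [np.zeros((3, 3, 16, 32)), np.zeros(32)]
n_params = count_parameters(weights)   # 3*3*16*32 + 32 = 4640
fps = measure_fps(lambda x: x * 2.0, np.zeros((256, 256)))
```

On real hardware such as a Jetson Nano, warm-up runs matter because the first inferences typically include one-off setup costs.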
Affiliation(s)
- Zhanhong Ou: Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Jieyun Bai: Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China; Auckland Bioengineering Institute, University of Auckland, Auckland, 1010, New Zealand
- Zhide Chen: Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Yaosheng Lu: Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Huijin Wang: Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Shun Long: Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Gaowen Chen: Obstetrics and Gynecology Center, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China

11
Chen L, Zeng B, Shen J, Xu J, Cai Z, Su S, Chen J, Cai X, Ying T, Hu B, Wu M, Chen X, Zheng Y. Bone age assessment based on three-dimensional ultrasound and artificial intelligence compared with paediatrician-read radiographic bone age: protocol for a prospective, diagnostic accuracy study. BMJ Open 2024; 14:e079969. [PMID: 38401893] [PMCID: PMC10895244] [DOI: 10.1136/bmjopen-2023-079969]
Abstract
INTRODUCTION Radiographic bone age (BA) assessment is widely used to evaluate children's growth disorders and predict their future height, yet children are more sensitive and vulnerable to X-ray radiation exposure than adults. The purpose of this study is to develop a new, safer, radiation-free BA assessment method for children using three-dimensional ultrasound (3D-US) and artificial intelligence (AI), and to test the method's diagnostic accuracy and reliability. METHODS AND ANALYSIS This is a prospective, observational study. All participants will be recruited through the Paediatric Growth and Development Clinic and will undergo left-hand 3D-US and X-ray examinations at Shanghai Sixth People's Hospital on the same day; all images will be recorded. The image-related data will be randomly divided into a training set (80%) and a test set (20%). The training set will be used to establish a cascaded model for 3D-US skeletal image segmentation and BA prediction, achieving end-to-end prediction from image to BA; the test set will be used to evaluate the accuracy of the 3D-US AI BA model. We have developed a new ultrasonic scanning device that performs automatic 3D-US scanning of the hand. AI algorithms, such as convolutional neural networks, will be used to identify and segment the skeletal structures in the hand 3D-US images. We will thus achieve automatic segmentation of hand skeletal 3D-US images, establish a 3D-US BA prediction model, and test the model's accuracy. ETHICS AND DISSEMINATION The Ethics Committee of Shanghai Sixth People's Hospital approved this study (approval number 2022-019). Written informed consent will be obtained from the parent or guardian of each participant. Final results will be published in peer-reviewed journals and presented at national and international conferences. TRIAL REGISTRATION NUMBER ChiCTR2200057236.
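The seeded 80/20 random split described in the protocol can be sketched as follows (the function name and seed are illustrative, not taken from the study):

```python
import random

def split_cases(case_ids, train_frac=0.8, seed=2022):
    """Shuffle case identifiers and split them into training and test sets."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)       # deterministic shuffle for reproducibility
    cut = int(round(len(ids) * train_frac))
    return ids[:cut], ids[cut:]

# 80 cases for model training, 20 held out for accuracy testing;
# the two sets are disjoint by construction.
train_ids, test_ids = split_cases(range(100))
```

Splitting by patient identifier, rather than by individual image, avoids leaking images of the same hand across the two sets.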
Affiliation(s)
- Li Chen: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Bolun Zeng: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jian Shen: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jiangchang Xu: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zehang Cai: Shantou Institute of Ultrasonic Instruments Co., Ltd, Shantou, China
- Shudian Su: Shantou Institute of Ultrasonic Instruments Co., Ltd, Shantou, China
- Jie Chen: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaojun Cai: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Tao Ying: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Bing Hu: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Min Wu: Department of Pediatrics, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaojun Chen: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Yuanyi Zheng: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China

12
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509] [DOI: 10.1016/j.neunet.2023.11.006]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues, so detecting cancer at an early stage is essential. Medical images currently play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey of automated cancer detection in various human body organs, namely the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNNs) and medical imaging techniques. It also briefly discusses state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, it describes the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain. The ultimate goal of this paper is to provide comprehensive and insightful information to researchers interested in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma: School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak: Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray: Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer: Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak: School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India

13
Yousefpour Shahrivar R, Karami F, Karami E. Enhancing Fetal Anomaly Detection in Ultrasonography Images: A Review of Machine Learning-Based Approaches. Biomimetics (Basel) 2023; 8:519. [PMID: 37999160] [PMCID: PMC10669151] [DOI: 10.3390/biomimetics8070519]
Abstract
Fetal development is a critical phase in prenatal care, demanding the timely identification of anomalies in ultrasound images to safeguard the well-being of both the unborn child and the mother. Medical imaging has played a pivotal role in detecting fetal abnormalities and malformations. However, despite significant advances in ultrasound technology, the accurate identification of irregularities in prenatal images continues to pose considerable challenges, often demanding substantial time and expertise from medical professionals. In this review, we survey recent developments in machine learning (ML) methods applied to fetal ultrasound images, focusing on the range of ML algorithms employed for tasks such as image classification, object recognition, and segmentation. We highlight how these approaches can enhance ultrasound-based fetal anomaly detection, provide insights for future research and clinical implementation, and emphasize where further investigation could contribute to more effective ultrasound-based fetal anomaly detection.
Affiliation(s)
- Ramin Yousefpour Shahrivar: Department of Biology, College of Convergent Sciences and Technologies, Science and Research Branch, Islamic Azad University, Tehran, 14515-775, Iran
- Fatemeh Karami: Department of Medical Genetics, Applied Biophotonics Research Center, Science and Research Branch, Islamic Azad University, Tehran, 14515-775, Iran
- Ebrahim Karami: Department of Engineering and Applied Sciences, Memorial University of Newfoundland, St. John's, NL A1B 3X5, Canada

14
Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298] [PMCID: PMC10649694] [DOI: 10.3390/jcm12216833]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred modality. US is considered cost effective and easily accessible but is time consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to review recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. A systematic literature search was performed in the PubMed and Cochrane Library databases, and matching abstracts were screened using the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Full-text articles were assigned to the OB/GYN subspecialties and their research topics. The review includes 189 articles published from 1994 to 2023, of which 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. In conclusion, applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. While most studies focus on common application fields such as fetal biometry, this review also outlines emerging and still-experimental fields to promote further research.
Affiliation(s)
- Elena Jost: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni: Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany; Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany

15
Ji C, Liu K, Yang X, Cao Y, Cao X, Pan Q, Yang Z, Sun L, Yin L, Deng X, Ni D. A novel artificial intelligence model for fetal facial profile marker measurement during the first trimester. BMC Pregnancy Childbirth 2023; 23:718. [PMID: 37817098] [PMCID: PMC10563312] [DOI: 10.1186/s12884-023-06046-x]
Abstract
BACKGROUND To study the validity of an artificial intelligence (AI) model for measuring fetal facial profile markers, and to evaluate its clinical value for identifying fetal abnormalities during the first trimester. METHODS This retrospective study used two-dimensional mid-sagittal fetal profile images taken during singleton pregnancies at 11-13+6 weeks of gestation. We measured the facial profile markers, including the inferior facial angle (IFA), maxilla-nasion-mandible (MNM) angle, facial-maxillary angle (FMA), frontal space (FS) distance, and profile line (PL) distance, using both AI and manual measurements. Semantic segmentation and landmark localization were used to develop an AI model to measure the selected markers and to evaluate their diagnostic value for fetal abnormalities. Consistency between AI and manual measurements was compared using intraclass correlation coefficients (ICC). The diagnostic value of the AI-measured facial markers for fetal abnormality screening was evaluated using receiver operating characteristic (ROC) curves. RESULTS A total of 2372 normal fetuses and 37 fetuses with abnormalities were included: 18 with trisomy 21, 7 with trisomy 18, and 12 with cleft lip and palate (CLP). Of these, 1872 normal fetuses were used for AI model training and validation, and the remaining 500 normal fetuses plus all fetuses with abnormalities were used for clinical testing. The ICCs (95%CI) of the IFA, MNM angle, FMA, FS distance, and PL distance between AI and manual measurement for the 500 normal fetuses were 0.812 (0.780-0.840), 0.760 (0.720-0.795), 0.766 (0.727-0.800), 0.807 (0.775-0.836), and 0.798 (0.764-0.828), respectively. IFA identified trisomy 21 and trisomy 18 with areas under the ROC curve (AUC) of 0.686 (95%CI, 0.585-0.788) and 0.729 (95%CI, 0.621-0.837), respectively. FMA effectively predicted trisomy 18, with an AUC of 0.904 (95%CI, 0.842-0.966). MNM angle and FS distance exhibited good predictive value for CLP, with AUCs of 0.738 (95%CI, 0.573-0.902) and 0.677 (95%CI, 0.494-0.859), respectively. CONCLUSIONS Agreement between AI and manual measurement of fetal facial profile markers was good during the first trimester. The AI model is a convenient and effective tool for early screening of fetal trisomy 21, trisomy 18, and CLP, and can be generalized to first-trimester scanning (FTS).
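The agreement statistic used above, the intraclass correlation coefficient, is computed from a subjects-by-raters matrix of measurements. The abstract does not state which ICC variant was used; the sketch below implements the common two-way random, absolute-agreement, single-measure form ICC(2,1) as one plausible choice:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    `ratings` is an (n_subjects, k_raters) array, e.g. AI vs. manual values."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # mean squares from the two-way ANOVA
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in perfect agreement give ICC = 1.
perfect = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
```

Values near 0.8, as reported for the facial markers, indicate good but not perfect agreement between the AI and manual readings.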
Affiliation(s)
- Chunya Ji: Center for Medical Ultrasound, Suzhou Municipal Hospital, Gusu School, The Affiliated Suzhou Hospital of Nanjing Medical University, Nanjing Medical University, Suzhou, Jiangsu, China
- Kai Liu: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Xueyuan Blvd, Nanshan, Shenzhen, Guangdong, China
- Xin Yang: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Xueyuan Blvd, Nanshan, Shenzhen, Guangdong, China
- Yan Cao: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Xueyuan Blvd, Nanshan, Shenzhen, Guangdong, China; Shenzhen RayShape Medical Technology Co., Ltd, Shenzhen, Guangdong, China
- Xiaoju Cao: Center for Reproduction and Genetics, Suzhou Municipal Hospital, Gusu School, The Affiliated Suzhou Hospital of Nanjing Medical University, Nanjing Medical University, No. 26 Daoqian Street, Suzhou, 215002, Jiangsu, China
- Qi Pan: Center for Medical Ultrasound, Suzhou Municipal Hospital, Gusu School, The Affiliated Suzhou Hospital of Nanjing Medical University, Nanjing Medical University, Suzhou, Jiangsu, China
- Zhong Yang: Center for Medical Ultrasound, Suzhou Municipal Hospital, Gusu School, The Affiliated Suzhou Hospital of Nanjing Medical University, Nanjing Medical University, Suzhou, Jiangsu, China
- Lingling Sun: Center for Medical Ultrasound, Suzhou Municipal Hospital, Gusu School, The Affiliated Suzhou Hospital of Nanjing Medical University, Nanjing Medical University, Suzhou, Jiangsu, China
- Linliang Yin: Center for Medical Ultrasound, Suzhou Municipal Hospital, Gusu School, The Affiliated Suzhou Hospital of Nanjing Medical University, Nanjing Medical University, Suzhou, Jiangsu, China
- Xuedong Deng: Center for Medical Ultrasound, Suzhou Municipal Hospital, Gusu School, The Affiliated Suzhou Hospital of Nanjing Medical University, Nanjing Medical University, Suzhou, Jiangsu, China
- Dong Ni: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Xueyuan Blvd, Nanshan, Shenzhen, Guangdong, China

16
Goudarzi S, Whyte J, Boily M, Towers A, Kilgour RD, Rivaz H. Segmentation of Arm Ultrasound Images in Breast Cancer-Related Lymphedema: A Database and Deep Learning Algorithm. IEEE Trans Biomed Eng 2023; 70:2552-2563. [PMID: 37028332] [DOI: 10.1109/tbme.2023.3253646]
Abstract
OBJECTIVE Breast cancer treatment often causes the removal of or damage to lymph nodes of the patient's lymphatic drainage system. This side effect is the origin of Breast Cancer-Related Lymphedema (BCRL), a noticeable increase in excess arm volume. Ultrasound imaging is a preferred modality for diagnosing and monitoring the progression of BCRL because of its low cost, safety, and portability. Because the affected and unaffected arms look similar in B-mode ultrasound images, the thicknesses of the skin, subcutaneous fat, and muscle have been shown to be important biomarkers for this task, and segmentation masks are also helpful in monitoring longitudinal changes in the morphology and mechanical properties of the tissue layers. METHODS For the first time, a publicly available ultrasound dataset containing the Radio-Frequency (RF) data of 39 subjects with manual segmentation masks by two experts is provided. Inter- and intra-observer reproducibility studies performed on the segmentation maps show high Dice Score Coefficients (DSC) of 0.94±0.08 and 0.92±0.06, respectively. A Gated Shape Convolutional Neural Network (GSCNN) is modified for precise automatic segmentation of tissue layers, and its generalization performance is improved with the CutMix augmentation strategy. RESULTS We obtained an average DSC of 0.87±0.11 on the test set, confirming the high performance of the method. CONCLUSION Automatic segmentation can pave the way for convenient and accessible staging of BCRL, and our dataset can facilitate the development and validation of such methods. SIGNIFICANCE Timely diagnosis and treatment of BCRL are crucial for preventing irreversible damage.
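CutMix, the augmentation credited above with improving generalization, pastes a random rectangle from one training sample into another; for segmentation, the label mask is mixed with the same rectangle. A minimal sketch, not the authors' exact implementation:

```python
import numpy as np

def cutmix_pair(img_a, mask_a, img_b, mask_b, lam=0.75, rng=None):
    """Paste a random box from sample B into sample A.
    `lam` is the fraction of area kept from A; the same box is applied
    to both the image and its segmentation mask."""
    rng = np.random.default_rng(rng)
    h, w = img_a.shape[:2]
    cut_h = int(round(h * np.sqrt(1.0 - lam)))   # box sized so its area is (1 - lam)
    cut_w = int(round(w * np.sqrt(1.0 - lam)))
    top = rng.integers(0, h - cut_h + 1)
    left = rng.integers(0, w - cut_w + 1)
    img, mask = img_a.copy(), mask_a.copy()
    img[top:top + cut_h, left:left + cut_w] = img_b[top:top + cut_h, left:left + cut_w]
    mask[top:top + cut_h, left:left + cut_w] = mask_b[top:top + cut_h, left:left + cut_w]
    return img, mask

# Mixing an all-zero sample with an all-one sample at lam = 0.75
# replaces exactly a quarter of the pixels, wherever the box lands.
img, mask = cutmix_pair(np.zeros((8, 8)), np.zeros((8, 8)),
                        np.ones((8, 8)), np.ones((8, 8)))
```

Because the mask is cut with the same box as the image, the mixed sample remains a valid (image, label) pair for training a segmentation network.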
17
Zhou GQ, Wei H, Wang X, Wang KN, Chen Y, Xiong F, Ren G, Liu C, Li L, Huang Q. BSMNet: Boundary-salience multi-branch network for intima-media identification in carotid ultrasound images. Comput Biol Med 2023; 162:107092. [PMID: 37263149] [DOI: 10.1016/j.compbiomed.2023.107092]
Abstract
Carotid artery intima-media thickness (CIMT) is an essential factor in signaling the risk of cardiovascular diseases and is commonly evaluated using ultrasound imaging. However, automatic intima-media segmentation and thickness measurement remain challenging due to the boundary ambiguity of the intima-media complex and the inherent speckle noise in ultrasound images. In this work, we propose an end-to-end boundary-salience multi-branch network, BSMNet, to tackle carotid intima-media identification from ultrasound images, in which prior shape knowledge and anatomical dependence are exploited using parallel linear structure learning modules followed by a boundary refinement module. Moreover, we design a strip attention model to boost thin strip region segmentation with shape priors, in which an anisotropic kernel shape captures long-range global relations and scrutinizes meaningful local salient contexts simultaneously. Extensive experimental results on an in-house carotid ultrasound (US) dataset demonstrate the promising performance of our method, which achieves an improvement of about 0.02 in Dice and HD95 over other state-of-the-art methods. Our method is promising for advancing the analysis of systemic arterial disease with ultrasound imaging.
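HD95, the boundary metric cited above alongside Dice, is the 95th percentile of the symmetric surface distances between two contours; a pure-NumPy sketch operating on boundary point coordinates (illustrative, not the paper's implementation):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g., boundary pixel coordinates of two intima-media segmentations)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # Pairwise Euclidean distances between every point in A and every point in B.
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1))
    a_to_b = d.min(axis=1)   # each A point to its nearest B point
    b_to_a = d.min(axis=0)   # each B point to its nearest A point
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))

# Two horizontal contours offset vertically by one pixel are exactly 1.0 apart.
ca = [(0, x) for x in range(20)]
cb = [(1, x) for x in range(20)]
```

Taking the 95th percentile rather than the maximum makes the metric robust to a few outlier boundary points, which is why HD95 is preferred over the plain Hausdorff distance in segmentation benchmarks.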
Affiliation(s)
- Guang-Quan Zhou: The School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China
- Hao Wei: The School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Xiaoyi Wang: Shenzhen Delica Medical Equipment Co., Ltd, Shenzhen, 518132, China
- Kai-Ni Wang: The School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China
- Yuzhao Chen: The School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Fei Xiong: Ethics Committee of Medical and Experimental Animals, Northwestern Polytechnical University, Xi'an, China
- Guanqing Ren: Shenzhen Delica Medical Equipment Co., Ltd, Shenzhen, 518132, China
- Chunying Liu: Ethics Committee of Medical and Experimental Animals, Northwestern Polytechnical University, Xi'an, China
- Le Li: Institute of Medical Research, Northwestern Polytechnical University, Xi'an, China
- Qinghua Huang: School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an, China

18
Rabanaque D, Regalado M, Benítez R, Rabanaque S, Agut T, Carreras N, Mata C. Semi-Automatic GUI Platform to Characterize Brain Development in Preterm Children Using Ultrasound Images. J Imaging 2023; 9:145. [PMID: 37504822] [PMCID: PMC10381479] [DOI: 10.3390/jimaging9070145]
Abstract
The third trimester of pregnancy is the most critical period for human brain development, during which significant changes occur in the morphology of the brain: the development of sulci and gyri allows for a considerable increase in brain surface area. In preterm newborns, these changes occur in an extrauterine environment that may disrupt the normal brain maturation process. We hypothesize that a normalized atlas of brain maturation built from cerebral ultrasound images acquired from birth to term-equivalent age will help clinicians assess these changes. This work proposes a semi-automatic Graphical User Interface (GUI) platform for segmenting the main cerebral sulci from ultrasound images in the clinical setting. The platform was built from a neonatal cerebral ultrasound database provided by two clinical researchers from the Hospital Sant Joan de Déu in Barcelona, Spain. The primary objective is to provide clinicians with a user-friendly platform for running and visualizing an atlas of images validated by medical experts; the GUI offers several segmentation approaches and pre-processing tools for running, visualizing, and segmenting the principal sulci. The results are discussed in detail, providing an exhaustive analysis of the approach's effectiveness.
Affiliation(s)
- David Rabanaque: Barcelona East School of Engineering, Universitat Politècnica de Catalunya, 08019 Barcelona, Spain
- Maria Regalado: Barcelona East School of Engineering, Universitat Politècnica de Catalunya, 08019 Barcelona, Spain
- Raul Benítez: Barcelona East School of Engineering, Universitat Politècnica de Catalunya, 08019 Barcelona, Spain; Research Centre for Biomedical Engineering (CREB), Barcelona East School of Engineering, Universitat Politècnica de Catalunya, 08028 Barcelona, Spain; Pediatric Computational Imaging Research Group, Hospital Sant Joan de Déu Barcelona, 08950 Esplugues de Llobregat, Spain
- Sonia Rabanaque: Barcelona East School of Engineering, Universitat Politècnica de Catalunya, 08019 Barcelona, Spain
- Thais Agut: Institut de Recerca Sant Joan de Déu, Hospital Sant Joan de Déu Barcelona, 08950 Esplugues de Llobregat, Spain; Neonatal Department, Hospital Sant Joan de Déu Barcelona, 08950 Esplugues de Llobregat, Spain; Fundación NeNe, 28010 Madrid, Spain
- Nuria Carreras: Institut de Recerca Sant Joan de Déu, Hospital Sant Joan de Déu Barcelona, 08950 Esplugues de Llobregat, Spain; Neonatal Department, Hospital Sant Joan de Déu Barcelona, 08950 Esplugues de Llobregat, Spain
- Christian Mata: Barcelona East School of Engineering, Universitat Politècnica de Catalunya, 08019 Barcelona, Spain; Research Centre for Biomedical Engineering (CREB), Barcelona East School of Engineering, Universitat Politècnica de Catalunya, 08028 Barcelona, Spain; Pediatric Computational Imaging Research Group, Hospital Sant Joan de Déu Barcelona, 08950 Esplugues de Llobregat, Spain; Institut de Recerca Sant Joan de Déu, Hospital Sant Joan de Déu Barcelona, 08950 Esplugues de Llobregat, Spain

19
Arain Z, Iliodromiti S, Slabaugh G, David AL, Chowdhury TT. Machine learning and disease prediction in obstetrics. Curr Res Physiol 2023; 6:100099. [PMID: 37324652] [PMCID: PMC10265477] [DOI: 10.1016/j.crphys.2023.100099]
Abstract
Machine learning technologies and the translation of artificial intelligence tools to enhance the patient experience are changing obstetric and maternity care. An increasing number of predictive tools have been developed with data sourced from electronic health records, diagnostic imaging and digital devices. In this review, we explore the latest machine learning tools, the algorithms used to establish prediction models, and the challenges in assessing fetal well-being and in predicting and diagnosing obstetric diseases such as gestational diabetes, pre-eclampsia, preterm birth and fetal growth restriction. We discuss the rapid growth of machine learning approaches and intelligent tools for automated diagnostic imaging of fetal anomalies and for assessing fetoplacental and cervical function using ultrasound and magnetic resonance imaging. In prenatal diagnosis, we discuss intelligent tools for magnetic resonance imaging sequencing of the fetus, placenta and cervix to reduce the risk of preterm birth. Finally, we discuss the use of machine learning to improve safety standards in intrapartum care and the early detection of complications. The demand for technologies to enhance diagnosis and treatment in obstetrics and maternity should improve frameworks for patient safety and enhance clinical practice.
Affiliation(s)
- Zara Arain
- Centre for Bioengineering, School of Engineering and Materials Science, Queen Mary University of London, Mile End Road, London, E1 4NS, UK
- Stamatina Iliodromiti
- Women's Health Research Unit, Wolfson Institute of Population Health, Queen Mary University of London, 58 Turner Street, London, E1 2AB, UK
- Gregory Slabaugh
- Digital Environment Research Institute, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E1 1HH, UK
- Anna L. David
- Elizabeth Garrett Anderson Institute for Women's Health, University College London, Medical School Building, Huntley Street, London, WC1E 6AU, UK
- Tina T. Chowdhury
- Centre for Bioengineering, School of Engineering and Materials Science, Queen Mary University of London, Mile End Road, London, E1 4NS, UK

20
Anusooya G, Bharathiraja S, Mahdal M, Sathyarajasekaran K, Elangovan M. Self-Supervised Wavelet-Based Attention Network for Semantic Segmentation of MRI Brain Tumor. Sensors (Basel) 2023; 23:2719. [PMID: 36904923] [PMCID: PMC10007092] [DOI: 10.3390/s23052719]
Abstract
To determine the appropriate treatment plan for patients, radiologists must reliably detect brain tumors. Although manual segmentation requires considerable knowledge and skill, it can still be inaccurate. Automatic tumor segmentation in MRI images aids a more thorough analysis of pathological conditions by evaluating the size, location, structure, and grade of the tumor. Owing to intensity differences in MRI images, gliomas may appear diffuse and low-contrast and are therefore difficult to detect, which makes brain tumor segmentation a challenging task. Several methods for segmenting brain tumors in MRI scans have been proposed in the past, but their usefulness is limited by susceptibility to noise and distortions. We propose the Self-Supervised Wavelet-based Attention Network (SSW-AN), a new attention module with adjustable self-supervised activation functions and dynamic weights, to capture global context information. In particular, the network's inputs and labels consist of the four components produced by the two-dimensional (2D) wavelet transform, which simplifies training by neatly separating the data into low-frequency and high-frequency channels. More precisely, we make use of the channel attention and spatial attention modules of the self-supervised attention block (SSAB), so the method can more easily focus on crucial underlying channels and spatial patterns. The proposed SSW-AN outperforms current state-of-the-art algorithms on medical image segmentation tasks, with higher accuracy, more promising reliability, and less redundancy.
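The four-channel wavelet input described in this abstract can be illustrated with a single-level 2D Haar decomposition, which splits an image into one low-frequency (LL) and three high-frequency (LH, HL, HH) sub-bands at half resolution. This numpy sketch is illustrative only, not the authors' implementation:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar wavelet transform of an image with even
    dimensions; returns four half-resolution sub-bands."""
    a = img[0::2, 0::2].astype(float)  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 2.0  # approximation (low-frequency)
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
print([s.shape for s in (ll, lh, hl, hh)])  # four 2x2 sub-bands
```

Because the transform is orthonormal, the total energy of the image is preserved across the four sub-bands, which is what makes them a lossless four-channel re-encoding of the input.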
Affiliation(s)
- Miroslav Mahdal
- Department of Control Systems and Instrumentation, Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 708 00 Ostrava, Czech Republic

21
Andreasen LA, Feragen A, Christensen AN, Thybo JK, Svendsen MBS, Zepf K, Lekadir K, Tolsgaard MG. Multi-centre deep learning for placenta segmentation in obstetric ultrasound with multi-observer and cross-country generalization. Sci Rep 2023; 13:2221. [PMID: 36755050] [PMCID: PMC9908915] [DOI: 10.1038/s41598-023-29105-x]
Abstract
The placenta is crucial to fetal well-being and plays a significant role in the pathogenesis of hypertensive pregnancy disorders. Moreover, a timely diagnosis of placenta previa may save lives. Ultrasound is the primary imaging modality in pregnancy, but high-quality imaging depends on access to equipment and staff, which is not possible in all settings. Convolutional neural networks may help standardize the acquisition of images for fetal diagnostics. Our aim was to develop a deep learning based model for classification and segmentation of the placenta in ultrasound images. We trained a model on manual annotations of 7,500 ultrasound images to identify and segment the placenta, and compared its performance to annotations made by 25 clinicians (experts, trainees, midwives). The overall image classification accuracy was 81%, and the average intersection over union (IoU) score reached 0.78. The model's classification accuracy was lower than that of experts and trainees, but it outperformed all clinician groups at delineating the placenta (IoU = 0.75 vs 0.69, 0.66, 0.59). The model was externally validated on 100 second-trimester images from Barcelona, yielding an accuracy of 76% and an IoU of 0.68. In conclusion, we developed a model for automatic classification and segmentation of the placenta with consistent performance across different patient populations. It may be used for automated detection of placenta previa and enable future deep learning research in placental dysfunction.
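The intersection-over-union (IoU, or Jaccard) scores reported above compare predicted and reference binary masks. A minimal numpy sketch of the metric, with hypothetical toy masks:

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-union (Jaccard index) of two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

# Toy masks: two 8-pixel bands overlapping on one 4-pixel row.
pred = np.zeros((4, 4), dtype=int); pred[:2, :] = 1
target = np.zeros((4, 4), dtype=int); target[1:3, :] = 1
print(iou(pred, target))  # intersection 4, union 12 -> 0.333...
```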
Affiliation(s)
- Lisbeth Anita Andreasen
- Copenhagen Academy for Medical Education and Simulation (CAMES) Rigshospitalet, Copenhagen, Denmark
- Aasa Feragen
- Technical University of Denmark (DTU) Compute, Lyngby, Denmark
- Morten Bo S Svendsen
- Copenhagen Academy for Medical Education and Simulation (CAMES) Rigshospitalet, Copenhagen, Denmark
- Kilian Zepf
- Technical University of Denmark (DTU) Compute, Lyngby, Denmark
- Karim Lekadir
- Artificial Intelligence in Medicine Lab (BCN-AIM), Universitat de Barcelona, Barcelona, Spain
- Martin Grønnebæk Tolsgaard
- Copenhagen Academy for Medical Education and Simulation (CAMES) Rigshospitalet, Copenhagen, Denmark
- Department of Fetal Medicine, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark

22
Gleed AD, Chen Q, Jackman J, Mishra D, Chandramohan V, Self A, Bhatnagar S, Papageorghiou AT, Noble JA. Automatic Image Guidance for Assessment of Placenta Location in Ultrasound Video Sweeps. Ultrasound Med Biol 2023; 49:106-121. [PMID: 36241588] [DOI: 10.1016/j.ultrasmedbio.2022.08.006]
Abstract
Ultrasound-based assistive tools are aimed at reducing the high skill needed to interpret a scan by providing automatic image guidance. This may encourage uptake of ultrasound (US) clinical assessments in rural settings in low- and middle-income countries (LMICs), where well-trained sonographers can be scarce. This paper describes a new method that automatically generates an assistive video overlay to provide image guidance to a user to assess placenta location. The user captures US video by following a sweep protocol that scans a U-shape on the lower maternal abdomen. The sweep trajectory is simple and easy to learn. We initially explore a 2-D embedding of placenta shapes, mapping manually segmented placentas in US video frames to a 2-D space. We map 2013 frames from 11 videos. This provides insight into the spectrum of placenta shapes that appear when using the sweep protocol. We propose classification of the placenta shapes from three observed clusters: complex, tip and rectangular. We use this insight to design an effective automatic segmentation algorithm, combining a U-Net with a CRF-RNN module to enhance segmentation performance with respect to placenta shape. The U-Net + CRF-RNN algorithm automatically segments the placenta and maternal bladder. We assess segmentation performance using both area and shape metrics. We report results comparable to the state-of-the-art for automatic placenta segmentation on the Dice metric, achieving 0.83 ± 0.15 evaluated on 2127 frames from 10 videos. We also qualitatively evaluate 78,308 frames from 135 videos, assessing if the anatomical outline is correctly segmented. We found that addition of the CRF-RNN improves over a baseline U-Net when faced with a complex placenta shape, which we observe in our 2-D embedding, up to 14% with respect to the percentage shape error. 
From the segmentations, an assistive video overlay is automatically constructed that (i) highlights the placenta and bladder, (ii) determines the lower placenta edge and highlights this location as a point and (iii) labels a 2-cm clearance on the lower placenta edge. The 2-cm clearance is chosen to satisfy current clinical guidelines. We propose to assess the placenta location by comparing the 2-cm region and the bottom of the bladder, which represents a coarse localization of the cervix. Anatomically, the bladder must sit above the cervix region. We present proof-of-concept results for the video overlay.
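The Dice metric used above to evaluate the U-Net + CRF-RNN segmentations measures overlap between predicted and reference masks (it is related to IoU by Dice = 2·IoU/(1+IoU)). A minimal numpy sketch with hypothetical toy masks, not the paper's evaluation code:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient of two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# Toy masks: two 8-pixel bands overlapping on one 4-pixel row.
pred = np.zeros((4, 4), dtype=int); pred[:2, :] = 1
target = np.zeros((4, 4), dtype=int); target[1:3, :] = 1
print(dice(pred, target))  # 2*4 / (8+8) -> ~0.5
```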
Affiliation(s)
- Alexander D Gleed
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Qingchao Chen
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- James Jackman
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Divyanshu Mishra
- Translational Health Science and Technology Institute, Faridabad, India
- Alice Self
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- J Alison Noble
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK

23
Fiorentino MC, Villani FP, Di Cosmo M, Frontoni E, Moccia S. A review on deep-learning algorithms for fetal ultrasound-image analysis. Med Image Anal 2023; 83:102629. [PMID: 36308861] [DOI: 10.1016/j.media.2022.102629]
Abstract
Deep-learning (DL) algorithms are becoming the standard for processing fetal ultrasound (US) images. A number of survey papers are now available, but most focus on the broader area of medical-image analysis or do not cover all fetal US DL applications. This paper surveys the most recent work in the field, with a total of 153 research papers published after 2017. Papers are analyzed and discussed from both the methodological and the application perspective. We categorize the papers into (i) fetal standard-plane detection, (ii) anatomical structure analysis and (iii) biometry parameter estimation. For each category, the main limitations and open issues are presented, and summary tables are included to facilitate comparison among the different approaches. Emerging applications are also outlined, as are publicly available datasets and the performance metrics commonly used to assess algorithms. The paper ends with a critical summary of the current state of the art in DL algorithms for fetal US image analysis and a discussion of the challenges that researchers in the field must tackle to translate the research methodology into actual clinical practice.
Affiliation(s)
- Mariachiara Di Cosmo
- Department of Information Engineering, Università Politecnica delle Marche, Italy
- Emanuele Frontoni
- Department of Information Engineering, Università Politecnica delle Marche, Italy
- Department of Political Sciences, Communication and International Relations, Università degli Studi di Macerata, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Italy

24
Zimmer VA, Gomez A, Skelton E, Wright R, Wheeler G, Deng S, Ghavami N, Lloyd K, Matthew J, Kainz B, Rueckert D, Hajnal JV, Schnabel JA. Placenta segmentation in ultrasound imaging: Addressing sources of uncertainty and limited field-of-view. Med Image Anal 2023; 83:102639. [PMID: 36257132] [PMCID: PMC7614009] [DOI: 10.1016/j.media.2022.102639]
Abstract
Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placental appearance, (ii) the restricted image quality of US, which results in highly variable reference annotations, and (iii) the limited field-of-view of US, which prohibits whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task, the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, in particular under limited training data. With this approach we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance when compared to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion and image segmentation. This yields high-quality segmentation of larger structures such as the placenta, with reduced image artifacts, beyond the field-of-view of single probes.
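A multi-task objective of the kind described, combining a classification loss and a segmentation loss for one network, can be sketched as a weighted sum of the two task losses. The weighting `alpha` and the toy inputs below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for one classification sample."""
    z = logits - logits.max()            # numerical stability
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def soft_dice_loss(prob, target, eps=1e-7):
    """1 - soft Dice between a probability map and a binary mask."""
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def multitask_loss(cls_logits, cls_label, seg_prob, seg_mask, alpha=0.5):
    """Weighted sum of the classification and segmentation losses
    (alpha is an assumed weight, not from the paper)."""
    return (alpha * cross_entropy(cls_logits, cls_label)
            + (1 - alpha) * soft_dice_loss(seg_prob, seg_mask))

logits = np.array([2.0, 0.5])            # e.g. anterior vs posterior scores
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0
prob = mask * 0.9                        # a confident hypothetical prediction
loss = multitask_loss(logits, 0, prob, mask)
print(loss)
```

In training, both loss terms back-propagate through a shared encoder, which is how the classification data can improve the segmentation branch.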
Affiliation(s)
- Veronika A Zimmer
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Faculty of Informatics, Technical University of Munich, Germany
- Alberto Gomez
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Emily Skelton
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- School of Health Sciences, City, University of London, London, United Kingdom
- Robert Wright
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Gavin Wheeler
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Shujie Deng
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Nooshin Ghavami
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Karen Lloyd
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Jacqueline Matthew
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Bernhard Kainz
- BioMedIA group, Imperial College London, London, United Kingdom
- FAU Erlangen-Nürnberg, Germany
- Daniel Rueckert
- Faculty of Informatics, Technical University of Munich, Germany
- BioMedIA group, Imperial College London, London, United Kingdom
- Joseph V Hajnal
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Julia A Schnabel
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Faculty of Informatics, Technical University of Munich, Germany
- Helmholtz Center Munich, Germany

25
Singh VK, Yousef Kalafi E, Cheah E, Wang S, Wang J, Ozturk A, Li Q, Eldar YC, Samir AE, Kumar V. HaTU-Net: Harmonic Attention Network for Automated Ovarian Ultrasound Quantification in Assisted Pregnancy. Diagnostics (Basel) 2022; 12:3213. [PMID: 36553220] [PMCID: PMC9777827] [DOI: 10.3390/diagnostics12123213]
Abstract
Antral follicle count (AFC) is a non-invasive biomarker used to assess ovarian reserve through transvaginal ultrasound (TVUS) imaging. Antral follicles are usually 2-10 mm in diameter, and the primary aim of ovarian reserve monitoring is to measure follicle size and count the antral follicles. Manual follicle measurement is limited by operator time, expertise and the subjectivity of delineating the two axes of each follicle, which necessitates an automated framework capable of quantifying follicle size and count in a clinical setting. This paper proposes a novel harmonic attention-based U-Net, HaTU-Net, to precisely segment the ovary and follicles in ultrasound images. We replace the standard convolution operation with a harmonic block that convolves the features with a window-based discrete cosine transform (DCT). Additionally, we propose a harmonic attention mechanism that promotes the extraction of rich features. The suggested technique captures the most relevant features, such as boundaries, shape, and textural patterns, in the presence of various noise sources (i.e., shadows, poor contrast between tissues, and speckle noise). We evaluated the proposed model on our in-house private dataset of 197 patients undergoing TVUS exams. Experimental results on an independent test set confirm that HaTU-Net achieved Dice coefficient scores of 90% for ovaries and 81% for antral follicles, improvements of 2% and 10%, respectively, over a standard U-Net. Furthermore, we accurately measure follicle size, yielding recall and precision rates of 91.01% and 76.49%, respectively.
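The harmonic block replaces standard learned convolution filters with projections onto window-based DCT basis functions. This numpy sketch builds an orthonormal DCT-II basis and applies it to a small patch; it is illustrative only, not the HaTU-Net implementation:

```python
import numpy as np

def dct_basis(n):
    """Orthonormal 1D DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n).reshape(-1, 1)   # frequency index
    i = np.arange(n).reshape(1, -1)   # sample index
    basis = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    basis[0] *= np.sqrt(1.0 / n)      # DC row normalization
    basis[1:] *= np.sqrt(2.0 / n)     # AC rows normalization
    return basis

def dct2(patch):
    """2D DCT of a square patch via separable 1D transforms: D @ P @ D.T."""
    d = dct_basis(patch.shape[0])
    return d @ patch @ d.T

# A constant patch projects entirely onto the DC (0, 0) harmonic.
patch = np.ones((4, 4))
coeffs = dct2(patch)
print(round(coeffs[0, 0], 4))  # 4.0
```

In a harmonic block, each such DCT response is computed per sliding window and the resulting harmonic channels are mixed by learned weights, so the network learns combinations of fixed frequency filters rather than free-form kernels.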
Affiliation(s)
- Vivek Kumar Singh
- Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Elham Yousef Kalafi
- Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Eugene Cheah
- Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Shuhang Wang
- Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Jingchao Wang
- Department of Ultrasound, The Third Hospital of Hebei Medical University, Shijiazhuang 050051, China
- Arinc Ozturk
- Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Qian Li
- Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Yonina C. Eldar
- Faculty of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 7610001, Israel
- Anthony E. Samir
- Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Viksit Kumar
- Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA

26
din NMU, Dar RA, Rasool M, Assad A. Breast cancer detection using deep learning: Datasets, methods, and challenges ahead. Comput Biol Med 2022; 149:106073. [DOI: 10.1016/j.compbiomed.2022.106073]
27
Zhang Y, Liao Q, Ding L, Zhang J. Bridging 2D and 3D segmentation networks for computation-efficient volumetric medical image segmentation: An empirical study of 2.5D solutions. Comput Med Imaging Graph 2022; 99:102088. [DOI: 10.1016/j.compmedimag.2022.102088]
28
Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via Ultrasound Images. iScience 2022; 25:104713. [PMID: 35856024] [PMCID: PMC9287600] [DOI: 10.1016/j.isci.2022.104713]
Abstract
Several reviews have examined artificial intelligence (AI) techniques for improving pregnancy outcomes, but none focus on ultrasound images. This survey explores how AI can assist with fetal growth monitoring via ultrasound images. We report our findings following the PRISMA guidelines, based on a comprehensive search of eight bibliographic databases. Of 1269 studies, 107 were included. We found that 2D ultrasound images were more commonly used (88 studies) than 3D and 4D ultrasound images (19). Classification was the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The areas that gained the most traction within the pregnancy domain were the fetal head (43), body (31), heart (13), abdomen (10), and face (10). This survey will promote the development of improved AI models for fetal clinical applications.
Highlights:
- Artificial intelligence studies to monitor fetal development via ultrasound images
- Fetal issues categorized into general, head, heart, face, and abdomen
- The most used AI techniques are classification, segmentation, object detection, and reinforcement learning
- Research and practical implications are included
29
Reddy CD, Van den Eynde J, Kutty S. Artificial intelligence in perinatal diagnosis and management of congenital heart disease. Semin Perinatol 2022; 46:151588. [PMID: 35396036] [DOI: 10.1016/j.semperi.2022.151588]
Abstract
Prenatal diagnosis and management of congenital heart disease (CHD) has progressed substantially in the past few decades. Fetal echocardiography can accurately detect and diagnose approximately 85% of cardiac anomalies. The prenatal diagnosis of CHD results in improved care, with improved risk stratification, perioperative status and survival. However, there is much work to be done. A minority of CHD is actually identified prenatally. This seemingly incongruous gap is due, in part, to diminished recognition of an anomaly even when present in the images and the need for increased training to obtain specialized cardiac views. Artificial intelligence (AI) is a field within computer science that focuses on the development of algorithms that "learn, reason, and self-correct" in a human-like fashion. When applied to fetal echocardiography, AI has the potential to improve image acquisition, image optimization, automated measurements, identification of outliers, classification of diagnoses, and prediction of outcomes. Adoption of AI in the field has been thus far limited by a paucity of data, limited resources to implement new technologies, and legal and ethical concerns. Despite these barriers, recognition of the potential benefits will push us to a future in which AI will become a routine part of clinical practice.
Affiliation(s)
- Charitha D Reddy
- Division of Pediatric Cardiology, Stanford University, Palo Alto, CA, USA
- Jef Van den Eynde
- Helen B. Taussig Heart Center, The Johns Hopkins Hospital and School of Medicine, Baltimore, MD, USA
- Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- Shelby Kutty
- Helen B. Taussig Heart Center, The Johns Hopkins Hospital and School of Medicine, Baltimore, MD, USA

30
Qiu Y, Hu Y, Kong P, Xie H, Zhang X, Cao J, Wang T, Lei B. Automatic Prostate Gleason Grading Using Pyramid Semantic Parsing Network in Digital Histopathology. Front Oncol 2022; 12:772403. [PMID: 35463378] [PMCID: PMC9024330] [DOI: 10.3389/fonc.2022.772403]
Abstract
Purpose: Prostate biopsy histopathology and immunohistochemistry are important in the differential diagnosis of the disease and can be used to assess the degree of prostate cancer differentiation. The growing volume of prostate biopsies is increasing the demand for experienced uropathologists, placing considerable pressure on pathologists. In addition, the Gleason grades assigned by different observers guide patient treatment, yet they are highly variable, so over-treatment and under-treatment frequently occur. To alleviate these problems, we developed an artificial intelligence system with clinically acceptable prostate cancer detection and Gleason grading accuracy.
Methods: Deep learning algorithms have been shown to outperform other algorithms in the analysis of large datasets and show great potential for the analysis of pathological sections. Inspired by classical semantic segmentation networks, we propose a pyramid semantic parsing network (PSPNet) for automatic prostate Gleason grading. To boost segmentation performance, we add an auxiliary prediction output, used as an auxiliary objective function during network training. The network not only includes effective global prior representations but also achieves good results in tissue micro-array (TMA) image segmentation.
Results: Our method is validated using 321 biopsies from the Vancouver Prostate Centre and ranks first on the MICCAI 2019 prostate segmentation and classification benchmark and on the Vancouver Prostate Centre data. To prove the reliability of the proposed method, we also conducted an experiment testing consistency with the diagnoses of pathologists, which demonstrates that the method achieves good results. The experiments also focused on the distinction between high-risk cancer (Gleason patterns 4 and 5) and low-risk cancer (Gleason pattern 3), and our method likewise achieves the best performance, across various evaluation metrics, at distinguishing benign from malignant tissue.
Availability: The Python source code of the proposed method is publicly available at https://github.com/hubutui/Gleason. All implementation details are presented in this paper.
Conclusion: These results show that the Gleason grades obtained with our method are effective and accurate.
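PSPNet's defining component is a pyramid pooling module: the feature map is average-pooled onto several coarse grids, the pooled maps are upsampled back to full resolution, and everything is concatenated with the original features to inject global context. A single-channel numpy sketch of that pooling step (the bin sizes are assumptions, not the paper's configuration):

```python
import numpy as np

def avg_pool_to_bins(fmap, bins):
    """Average-pool a 2D feature map onto a bins x bins grid."""
    h, w = fmap.shape
    ys = np.linspace(0, h, bins + 1).astype(int)
    xs = np.linspace(0, w, bins + 1).astype(int)
    out = np.empty((bins, bins))
    for i in range(bins):
        for j in range(bins):
            out[i, j] = fmap[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    return out

def upsample_nearest(fmap, h, w):
    """Nearest-neighbour upsampling back to the input resolution."""
    rows = np.arange(h) * fmap.shape[0] // h
    cols = np.arange(w) * fmap.shape[1] // w
    return fmap[np.ix_(rows, cols)]

def pyramid_pool(fmap, bin_sizes=(1, 2, 4)):
    """Stack the original map with upsampled multi-scale context maps."""
    h, w = fmap.shape
    levels = [fmap] + [upsample_nearest(avg_pool_to_bins(fmap, b), h, w)
                       for b in bin_sizes]
    return np.stack(levels)  # shape: (1 + len(bin_sizes), h, w)

fmap = np.arange(64, dtype=float).reshape(8, 8)
out = pyramid_pool(fmap)
print(out.shape)  # (4, 8, 8)
```

The 1x1 level carries the global average (the "global prior"); in the real network each level is additionally passed through a 1x1 convolution before concatenation.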
Collapse
Affiliation(s)
- Yali Qiu
- School of Biomedical Engineering, Health Science Center, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen, China
- Yujin Hu
- School of Biomedical Engineering, Health Science Center, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen, China
- Peiyao Kong
- School of Biomedical Engineering, Health Science Center, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen, China
- Hai Xie
- School of Biomedical Engineering, Health Science Center, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen, China
- Xiaoliu Zhang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen, China
- Jiuwen Cao
- Key Lab for Internet of Things (IOT) and Information Fusion Technology of Zhejiang, Hangzhou Dianzi University, Hangzhou, China
- Tianfu Wang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen, China
- Baiying Lei
- School of Biomedical Engineering, Health Science Center, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen, China
31
Yan Y, Tang L, Huang H, Yu Q, Xu H, Chen Y, Chen M, Zhang Q. Four-quadrant fast compressive tracking of breast ultrasound videos for computer-aided response evaluation of neoadjuvant chemotherapy in mice. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 217:106698. [PMID: 35217304 DOI: 10.1016/j.cmpb.2022.106698] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 01/26/2022] [Accepted: 02/08/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Neoadjuvant chemotherapy (NAC) is a valuable treatment approach for locally advanced breast cancer. Contrast-enhanced ultrasound (CEUS) potentially enables the assessment of therapeutic response to NAC. To evaluate the response accurately, quantitatively and objectively, a method that can effectively compensate for motion of breast lesions in CEUS videos is urgently needed. METHODS We proposed the four-quadrant fast compressive tracking (FQFCT) approach to automatically perform CEUS video tracking and motion compensation for mice undergoing NAC. The FQFCT divides a tracking window into four smaller windows at the four quadrants of a breast lesion and formulates the tracking at each quadrant as a binary classification task. After the FQFCT of breast cancer videos, quantitative CEUS features including the mean transit time (MTT) were computed. All mice showed a pathological response to NAC. The features between pre-treatment (day 1) and post-treatment (day 3 and day 5) in these responders were statistically compared. RESULTS When the CEUS videos of mice were tracked with the FQFCT, the average tracking error was 0.65 mm, a reduction of 46.72% compared with the classic fast compressive tracking method (1.22 mm). After compensation with the FQFCT, the MTT on day 5 of NAC was significantly different from the MTT before NAC (day 1) (p = 0.013). CONCLUSIONS The FQFCT improves the accuracy of CEUS video tracking and contributes to the computer-aided response evaluation of NAC for breast cancer in mice.
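The four-quadrant decomposition of the tracking window can be written down directly. A geometry sketch (the half-size sub-windows and coordinate convention are illustrative assumptions; the paper defines its own window sizes):

```python
def quadrant_windows(cx, cy, w, h):
    """Divide a w-by-h tracking window centred on the lesion at (cx, cy)
    into four half-size sub-windows, one per quadrant of the lesion.
    Each sub-window is returned as an (x, y, width, height) box; each box
    is then tracked as its own binary-classification task."""
    hw, hh = w // 2, h // 2
    return [
        (cx - hw, cy - hh, hw, hh),  # top-left quadrant
        (cx,      cy - hh, hw, hh),  # top-right quadrant
        (cx - hw, cy,      hw, hh),  # bottom-left quadrant
        (cx,      cy,      hw, hh),  # bottom-right quadrant
    ]
```

Tracking four smaller windows instead of one large one lets each classifier follow local boundary motion, which is what reduces the overall tracking error.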
Affiliation(s)
- Yifei Yan
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Lei Tang
- Department of Ultrasound, Tongren Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai 200050, China
- Haibo Huang
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Qihui Yu
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Haohao Xu
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Ying Chen
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Man Chen
- Department of Ultrasound, Tongren Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai 200050, China
- Qi Zhang
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
32
Tang P, Yang P, Nie D, Wu X, Zhou J, Wang Y. Unified medical image segmentation by learning from uncertainty in an end-to-end manner. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108215] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
33
Schilpzand M, Neff C, van Dillen J, van Ginneken B, Heskes T, de Korte C, van den Heuvel T. Automatic Placenta Localization From Ultrasound Imaging in a Resource-Limited Setting Using a Predefined Ultrasound Acquisition Protocol and Deep Learning. ULTRASOUND IN MEDICINE & BIOLOGY 2022; 48:663-674. [PMID: 35063289 DOI: 10.1016/j.ultrasmedbio.2021.12.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2020] [Revised: 11/22/2021] [Accepted: 12/02/2021] [Indexed: 06/14/2023]
Abstract
Placenta localization from obstetric 2-D ultrasound (US) imaging is unattainable for many pregnant women in low-income countries because of a severe shortage of trained sonographers. To address this problem, we present a method to automatically detect low-lying placenta or placenta previa from 2-D US imaging. Two-dimensional US data from 280 pregnant women were collected in Ethiopia using a standardized acquisition protocol and low-cost equipment. The detection method consists of two parts. First, 2-D US segmentation of the placenta is performed using a deep learning model with a U-Net architecture. Second, the segmentation is used to classify each placenta as either normal or a class including both low-lying placenta and placenta previa. The segmentation model was trained and tested on 6574 2-D US images, achieving a median test Dice coefficient of 0.84 (interquartile range = 0.23). The classifier achieved a sensitivity of 81% and a specificity of 82% on a holdout test set of 148 cases. Additionally, the model was found to segment in real time (19 ± 2 ms per 2-D US image) using a smartphone paired with a low-cost 2-D US device. This work illustrates the feasibility of using automated placenta localization in a resource-limited setting.
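The Dice coefficient reported for the segmentation model compares the predicted placenta mask with the reference annotation. A minimal sketch of the metric (the standard definition, not code from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two flat binary masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks by convention."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total
```

In a two-stage pipeline like the one described, per-image Dice scores characterize the first (segmentation) stage, while sensitivity and specificity characterize the second (normal vs. low-lying/previa) classification stage.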
Affiliation(s)
- Martijn Schilpzand
- Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Medical Ultrasound Imaging Centre, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands.
- Chase Neff
- Medical Ultrasound Imaging Centre, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Jeroen van Dillen
- Department of Obstetrics, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Tom Heskes
- Institute for Computing and Information Sciences, Radboud University, Nijmegen, The Netherlands
- Chris de Korte
- Medical Ultrasound Imaging Centre, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Physics of Fluids Group, Technical Medical Center, University of Twente, Enschede, The Netherlands
- Thomas van den Heuvel
- Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Medical Ultrasound Imaging Centre, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
34
The Accuracy and Radiomics Feature Effects of Multiple U-net-Based Automatic Segmentation Models for Transvaginal Ultrasound Images of Cervical Cancer. J Digit Imaging 2022; 35:983-992. [PMID: 35355160 PMCID: PMC9485324 DOI: 10.1007/s10278-022-00620-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Revised: 10/21/2021] [Accepted: 03/11/2022] [Indexed: 10/18/2022] Open
Abstract
Ultrasound (US) imaging is recognized and widely used worldwide as a screening and diagnostic modality for cervical cancer. However, few studies have investigated U-net-based automatic segmentation models for cervical cancer on US images or the effects of automatic segmentation on radiomics features. In this study, a total of 1102 transvaginal US images from 796 cervical cancer patients were collected and randomly divided into training (800), validation (100) and test (202) sets. Four U-net models (U-net, U-net with ResNet, context encoder network (CE-net), and Attention U-net) were adapted to automatically segment the cervical cancer target on these US images. Radiomics features were extracted and evaluated from both manually and automatically segmented areas. The mean Dice similarity coefficients (DSC) of U-net, Attention U-net, CE-net, and U-net with ResNet were 0.88, 0.89, 0.88, and 0.90, respectively. The average Pearson coefficients for the reliability of US image-based radiomics were 0.94, 0.96, 0.94, and 0.95 for U-net, U-net with ResNet, Attention U-net, and CE-net, respectively, in comparison with manual segmentation. The reproducibility of the radiomics parameters, evaluated by intraclass correlation coefficients (ICC), showed the robustness of automatic segmentation, with an average ICC of 0.99. In conclusion, high accuracy of U-net-based automatic segmentation was achieved in delineating the target area on cervical cancer US images. It is feasible and reliable to extract radiomics features from automatically segmented target areas in further radiomics studies.
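The Pearson-based reliability check compares each radiomics feature, extracted from manual and automatic segmentations, across cases. A NumPy sketch of that comparison (illustrative only; the study's exact feature set and its ICC model are not specified in the abstract):

```python
import numpy as np

def mean_feature_pearson(manual, auto):
    """Average Pearson correlation, over features, between feature values
    extracted from manual and automatic segmentations of the same cases.
    `manual` and `auto` are (n_cases, n_features) arrays."""
    rs = [np.corrcoef(manual[:, j], auto[:, j])[0, 1]
          for j in range(manual.shape[1])]
    return float(np.mean(rs))
```

A mean correlation near 1 indicates that features computed from automatic contours preserve the case-to-case variation of the manually derived features, which is what makes downstream radiomics modeling on automatic segmentations defensible.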
35
Lin H, Chen Y, Xie S, Yu M, Deng D, Sun T, Hu Y, Chen M, Chen S, Chen X. A Dual-modal Imaging Method Combining Ultrasound and Electromagnetism for Simultaneous Measurement of Tissue Elasticity and Electrical Conductivity. IEEE Trans Biomed Eng 2022; 69:2499-2511. [PMID: 35119996 DOI: 10.1109/tbme.2022.3148120] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The mechanical and electrical properties of soft tissues are related to their pathological state. Modern medical imaging devices show a trend toward multi-modal imaging, which provides complementary functional information to improve the accuracy of disease diagnosis. However, no existing method or system can simultaneously measure the mechanical and electrical properties of soft tissue. In this study, we proposed a novel dual-modal imaging method that integrates shear wave elasticity imaging (SWEI) and magneto-acousto-electrical tomography (MAET) to measure the elasticity and conductivity of soft tissue simultaneously. A dual-modal imaging system based on a linear array transducer was built, and the imaging performance of MAET and SWEI was evaluated in phantom and in vitro experiments. Conductivity phantom experiments show that the MAET in this dual-modal system can image conductivity gradients as low as 0.4 S/m. The phantom experiments also show that the reconstructed 2-D elasticity maps of phantoms with inclusions larger than 5 mm in diameter are relatively accurate. In vitro experiments show that the elasticity parameter can significantly distinguish changes in tissue before and after heating. To the best of our knowledge, this study is the first to propose a method that can simultaneously obtain tissue elasticity and electrical conductivity. Although this paper only presents proof-of-concept experiments of the new method, it demonstrates great potential for disease diagnosis in the future.
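On the elasticity side, SWEI converts the measured shear wave speed into stiffness via the standard incompressible linear-elastic relation E = 3ρc². A one-line sketch of that textbook conversion (not the paper's code):

```python
def youngs_modulus_kpa(shear_speed_m_s, density_kg_m3=1000.0):
    """Young's modulus E = 3 * rho * c_s**2 for incompressible soft tissue,
    the standard conversion used in shear wave elasticity imaging.
    Speed in m/s, density in kg/m^3; result returned in kPa."""
    return 3.0 * density_kg_m3 * shear_speed_m_s ** 2 / 1000.0
```

For example, a shear wave speed of 2 m/s in tissue of density 1000 kg/m³ corresponds to 12 kPa, a typical soft-tissue stiffness scale.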
36
Wang KN, Yang X, Miao J, Li L, Yao J, Zhou P, Xue W, Zhou GQ, Zhuang X, Ni D. AWSnet: An Auto-weighted Supervision Attention Network for Myocardial Scar and Edema Segmentation in Multi-sequence Cardiac Magnetic Resonance Images. Med Image Anal 2022; 77:102362. [DOI: 10.1016/j.media.2022.102362] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Revised: 10/26/2021] [Accepted: 01/10/2022] [Indexed: 10/19/2022]
37
He F, Wang Y, Xiu Y, Zhang Y, Chen L. Artificial Intelligence in Prenatal Ultrasound Diagnosis. Front Med (Lausanne) 2021; 8:729978. [PMID: 34977053 PMCID: PMC8716504 DOI: 10.3389/fmed.2021.729978] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 11/29/2021] [Indexed: 12/12/2022] Open
Abstract
The application of artificial intelligence (AI) technology to medical imaging has resulted in great breakthroughs. Given the unique position of ultrasound (US) in prenatal screening, research on AI in prenatal US has practical significance: its application to prenatal US diagnosis improves work efficiency, provides quantitative assessments, standardizes measurements, improves diagnostic accuracy, and automates image quality control. This review provides an overview of recent studies that have applied AI technology to prenatal US diagnosis and explains the challenges encountered in these applications.
Affiliation(s)
- Lizhu Chen
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
38
Liu R, Liu M, Sheng B, Li H, Li P, Song H, Zhang P, Jiang L, Shen D. NHBS-Net: A Feature Fusion Attention Network for Ultrasound Neonatal Hip Bone Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3446-3458. [PMID: 34106849 DOI: 10.1109/tmi.2021.3087857] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Ultrasound is a widely used technology for diagnosing developmental dysplasia of the hip (DDH) because it does not use radiation. Due to its low cost and convenience, 2-D ultrasound is still the most common examination in DDH diagnosis. In clinical use, the complexity of both ultrasound image standardization and measurement leads to a high error rate for sonographers. Automatic segmentation of key structures in the hip joint can be used to develop a standard plane detection method that helps sonographers decrease this error rate. However, current automatic segmentation methods still face challenges in robustness and accuracy. Thus, we propose, for the first time, a neonatal hip bone segmentation network (NHBS-Net) for the segmentation of seven key structures. We design three improvements, an enhanced dual attention module, a two-class feature fusion module, and a coordinate convolution output head, to help segment the different structures. Compared with current state-of-the-art networks, NHBS-Net achieves outstanding accuracy and generalizability, as shown in the experiments. Additionally, image standardization is a common need in ultrasonography. The ability of segmentation-based standard plane detection was tested on a 50-image standard dataset. The experiments show that our method can help healthcare workers decrease their error rate from 6%-10% to 2%. In addition, the segmentation performance on another ultrasound dataset (fetal heart) demonstrates the generalizability of our network.
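The coordinate convolution output head feeds the network explicit position information by appending coordinate channels to a feature map, in the spirit of CoordConv. A NumPy sketch of that input augmentation (the [-1, 1] normalisation and channel layout are assumptions, not taken from the paper):

```python
import numpy as np

def add_coord_channels(feat):
    """Append normalised x and y coordinate channels (values in [-1, 1])
    to an (H, W, C) feature map, so that subsequent convolutions can
    condition on absolute position, as in CoordConv-style layers."""
    h, w = feat.shape[:2]
    xs = np.broadcast_to(np.linspace(-1, 1, w)[None, :], (h, w))
    ys = np.broadcast_to(np.linspace(-1, 1, h)[:, None], (h, w))
    return np.concatenate([feat, xs[..., None], ys[..., None]], axis=-1)
```

Position channels are useful here because the seven hip structures occupy characteristic locations in a standardized plane, information that plain convolutions, being translation-invariant, cannot exploit.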
39
Chen Y, Liu J, Luo X, Luo J. ApodNet: Learning for High Frame Rate Synthetic Transmit Aperture Ultrasound Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3190-3204. [PMID: 34048340 DOI: 10.1109/tmi.2021.3084821] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Two-way dynamic focusing in synthetic transmit aperture (STA) beamforming can benefit high-quality ultrasound imaging with higher lateral spatial resolution and contrast resolution. However, STA requires the complete dataset for beamforming, resulting in a relatively low frame rate and low transmit power. This paper proposes a deep-learning architecture to achieve high-frame-rate STA imaging with two-way dynamic focusing. The network consists of an encoder and a joint decoder. The encoder trains a set of binary weights as the apodizations of high-frame-rate plane wave transmissions. In this respect, we term our network ApodNet. The decoder can recover the complete dataset from the acquired channel data to achieve dynamic transmit focusing. We evaluate the proposed method with simulations at different levels of noise and with in-vivo experiments on the human biceps brachii and common carotid artery. The experimental results demonstrate that ApodNet provides a promising strategy for high-frame-rate STA imaging, obtaining comparable lateral resolution and contrast resolution at a four-times higher frame rate than conventional STA imaging in the in-vivo experiments. In particular, ApodNet improves the contrast resolution of hypoechoic targets with much shorter computational time than other high-frame-rate methods in both simulations and in-vivo experiments.
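The encoder/decoder split can be pictured linearly: binary apodization weights compress many single-element transmissions into a few encoded acquisitions, and a decoder recovers the complete dataset. A toy NumPy sketch with a least-squares stand-in for the learned decoder (the hand-picked Hadamard-like ±1 rows are an assumption for illustration; ApodNet learns its binary weights jointly with a neural decoder):

```python
import numpy as np

# Four binary (+/-1) apodization patterns over eight array elements,
# standing in for ApodNet's learned encoder weights.
H = np.array([
    [ 1,  1,  1,  1,  1,  1,  1,  1],
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
], dtype=float)

x = np.arange(8, dtype=float)   # one column of the complete STA dataset
y = H @ x                       # four encoded high-frame-rate acquisitions
x_hat = np.linalg.pinv(H) @ y   # least-squares stand-in for the learned decoder
```

Because there are fewer transmissions than elements, `x_hat` reproduces the measurements but not necessarily `x` itself; the learned decoder supplies the missing prior, which is why the encoder and decoder are trained jointly.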
40
Chen Z, Liu Z, Du M, Wang Z. Artificial Intelligence in Obstetric Ultrasound: An Update and Future Applications. Front Med (Lausanne) 2021; 8:733468. [PMID: 34513890 PMCID: PMC8429607 DOI: 10.3389/fmed.2021.733468] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 08/04/2021] [Indexed: 01/04/2023] Open
Abstract
Artificial intelligence (AI) can support clinical decisions and provide quality assurance for images. Although ultrasonography is commonly used in obstetrics and gynecology, the use of AI in this field is still in its infancy. Nevertheless, in repetitive ultrasound examinations, such as those involving automatic positioning and identification of fetal structures, prediction of gestational age (GA), and real-time image quality assurance, AI has great potential. To realize its application, it is necessary to promote interdisciplinary communication between AI developers and sonographers. In this review, we outline the benefits of AI technology in obstetric ultrasound diagnosis, including optimized image acquisition, quantification, segmentation, and location identification, which can be helpful for obstetric ultrasound diagnosis in different periods of pregnancy.
Affiliation(s)
- Zhiyi Chen
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China.,Institute of Medical Imaging, University of South China, Hengyang, China
- Zhenyu Liu
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
- Meng Du
- Institute of Medical Imaging, University of South China, Hengyang, China
- Ziyao Wang
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
41
Cao X, Chen H, Li Y, Peng Y, Wang S, Cheng L. Dilated densely connected U-Net with uncertainty focus loss for 3D ABUS mass segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 209:106313. [PMID: 34364182 DOI: 10.1016/j.cmpb.2021.106313] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/25/2020] [Accepted: 07/21/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Accurate segmentation of breast masses in 3D automated breast ultrasound (ABUS) images plays an important role in qualitative and quantitative ABUS image analysis. Yet this task is challenging due to the low signal-to-noise ratio and serious artifacts in ABUS images, the large variation in shape and size of breast masses, and the small training dataset compared with natural images. The purpose of this study is to address these difficulties by designing a dilated densely connected U-Net (D2U-Net) together with an uncertainty focus loss. METHODS A lightweight yet effective densely connected segmentation network is constructed to extensively explore feature representations in the small ABUS dataset. To deal with the high variation in shape and size of breast masses, a set of hybrid dilated convolutions is integrated into the dense blocks of the D2U-Net. We further propose an uncertainty focus loss to put more attention on unreliable network predictions, especially the ambiguous mass boundaries caused by the low signal-to-noise ratio and artifacts. Our segmentation algorithm is evaluated on an ABUS dataset of 170 volumes from 107 patients. Ablation analysis and comparison with existing methods are conducted to verify the effectiveness of the proposed method. RESULTS Experimental results demonstrate that the proposed algorithm outperforms existing methods on the 3D ABUS mass segmentation task, with a Dice similarity coefficient, Jaccard index and 95% Hausdorff distance of 69.02%, 56.61% and 4.92 mm, respectively. CONCLUSIONS The proposed method is effective in segmenting breast masses on our small ABUS dataset, especially breast masses with large shape and size variations.
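The idea behind an uncertainty focus loss can be illustrated as a binary cross-entropy re-weighted toward ambiguous predictions near p = 0.5. The weight 1 - |2p - 1| below is an illustrative choice and not necessarily the paper's exact formulation:

```python
import numpy as np

def uncertainty_focus_bce(p, y, eps=1e-7):
    """Binary cross-entropy weighted toward uncertain pixels. The weight
    equals 1 at p = 0.5 (most ambiguous, e.g. mass boundaries) and falls
    to 0 for confident p near 0 or 1, so training effort concentrates on
    unreliable regions of the prediction."""
    p = np.clip(p, eps, 1.0 - eps)
    bce = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    weight = 1.0 - np.abs(2.0 * p - 1.0)
    return float(np.mean(weight * bce))
```

This is the opposite emphasis from the focal loss, which down-weights easy examples by confidence in the *correct* class; here the weighting depends only on how ambiguous the prediction itself is.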
Affiliation(s)
- Xuyang Cao
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Houjin Chen
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Yanfeng Li
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Yahui Peng
- School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
- Shu Wang
- Peking University People's Hospital, Beijing 100044, China
- Lin Cheng
- Peking University People's Hospital, Beijing 100044, China
42
Yang H, Shan C, Bouwman A, Dekker LRC, Kolen AF, de With PHN. Medical Instrument Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning. IEEE J Biomed Health Inform 2021; 26:762-773. [PMID: 34347611 DOI: 10.1109/jbhi.2021.3101872] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Medical instrument segmentation in 3D ultrasound is essential for image-guided intervention. However, to train a successful deep neural network for instrument segmentation, a large number of labeled images are required, which are expensive and time-consuming to obtain. In this article, we propose a semi-supervised learning (SSL) framework for instrument segmentation in 3D US that requires much less annotation effort than existing methods. To achieve SSL, a Dual-UNet is proposed to segment the instrument. The Dual-UNet leverages unlabeled data using a novel hybrid loss function consisting of uncertainty and contextual constraints. Specifically, the uncertainty constraints leverage the uncertainty estimation of the UNet predictions and therefore improve the use of unlabeled information during SSL training. In addition, contextual constraints exploit the contextual information of the training images, which is used as complementary information for voxel-wise uncertainty estimation. Extensive experiments on multiple ex-vivo and in-vivo datasets show that our proposed method achieves a Dice score of about 68.6%-69.1% with an inference time of about 1 s per volume. These results are better than those of state-of-the-art SSL methods, and the inference time is comparable to that of supervised approaches.
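Voxel-wise uncertainty of the kind the hybrid loss relies on is commonly computed as the entropy of the predicted class probabilities. A NumPy sketch of that generic estimator (shown as one plausible realisation; the paper's exact uncertainty term may differ):

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Entropy of class probabilities along the last axis. High values flag
    uncertain voxels whose pseudo-labels should count less during
    semi-supervised training."""
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)
```

The entropy is maximal for a uniform prediction and near zero for a one-hot prediction, which gives a natural per-voxel weighting for unlabeled data.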
43
Zhou SK, Le HN, Luu K, V Nguyen H, Ayache N. Deep reinforcement learning in medical imaging: A literature review. Med Image Anal 2021; 73:102193. [PMID: 34371440 DOI: 10.1016/j.media.2021.102193] [Citation(s) in RCA: 57] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 05/22/2021] [Accepted: 07/20/2021] [Indexed: 12/29/2022]
Abstract
Deep reinforcement learning (DRL) augments the reinforcement learning framework, which learns a sequence of actions that maximizes the expected reward, with the representative power of deep neural networks. Recent works have demonstrated the great potential of DRL in medicine and healthcare. This paper presents a literature review of DRL in medical imaging. We start with a comprehensive tutorial of DRL, including the latest model-free and model-based algorithms. We then cover existing DRL applications for medical imaging, which are roughly divided into three main categories: (i) parametric medical image analysis tasks including landmark detection, object/lesion detection, registration, and view plane localization; (ii) solving optimization tasks including hyperparameter tuning, selecting augmentation strategies, and neural architecture search; and (iii) miscellaneous applications including surgical gesture segmentation, personalized mobile health intervention, and computational model personalization. The paper concludes with discussions of future perspectives.
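The "sequence of actions that maximizes the expected reward" is learned, in the tabular case, by the Q-learning update that DRL scales up with deep networks. A toy single-update sketch (all numbers illustrative):

```python
import numpy as np

Q = np.zeros((5, 2))               # toy value table: 5 states, 2 actions
alpha, gamma = 0.5, 0.9            # learning rate, discount factor

# One observed transition (s, a, r, s') and the Q-learning update rule:
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
s, a, r, s_next = 0, 1, 1.0, 1
Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```

Deep Q-networks replace the table with a neural network Q(s, a; θ), which is what makes the landmark-detection and view-plane-localization applications surveyed here tractable in image-sized state spaces.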
Affiliation(s)
- S Kevin Zhou
- Medical Imaging, Robotics, and Analytic Computing Laboratory and Engineering (MIRACLE) Center, School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China; Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, China
- Khoa Luu
- CSCE Department, University of Arkansas, US
44
Yang X, Li H, Wang Y, Liang X, Chen C, Zhou X, Zeng F, Fang J, Frangi A, Chen Z, Ni D. Contrastive rendering with semi-supervised learning for ovary and follicle segmentation from 3D ultrasound. Med Image Anal 2021; 73:102134. [PMID: 34246847 DOI: 10.1016/j.media.2021.102134] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Revised: 06/04/2021] [Accepted: 06/09/2021] [Indexed: 10/21/2022]
Abstract
Segmentation of the ovary and follicles from 3D ultrasound (US) is a crucial technique in measurement tools for female infertility diagnosis. Since manual segmentation is time-consuming and operator-dependent, an accurate and fast segmentation method is in high demand. However, it is challenging for current deep-learning-based methods to segment the ovary and follicles precisely due to ambiguous boundaries and insufficient annotations. In this paper, we propose a contrastive rendering (C-Rend) framework to segment the ovary and follicles with detail-refined boundaries. Furthermore, we incorporate the proposed C-Rend into a semi-supervised learning (SSL) framework, leveraging unlabeled data for better performance. Highlights of this paper include: (1) A rendering task is performed to estimate boundaries accurately via enriched feature representation learning. (2) Point-wise contrastive learning is proposed to enhance the similarity of intra-class points and contrastively decrease the similarity of inter-class points. (3) The C-Rend plays a complementary role in the SSL framework through uncertainty-aware learning, providing reliable supervision information and achieving superior segmentation performance. Through extensive validation on large in-house datasets with partial annotations, our method outperforms state-of-the-art methods in various evaluation metrics for both the ovary and follicles.
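Point (2), point-wise contrastive learning, can be sketched as an InfoNCE-style loss over boundary-point features, with same-class points as positives and different-class points as negatives (the paper's exact loss may differ; this is an illustrative form):

```python
import numpy as np

def pointwise_contrastive_loss(feats, labels, tau=0.1):
    """InfoNCE-style loss over (N, D) L2-normalised point features:
    maximises the softmax probability mass that each point assigns to
    same-class points, pulling intra-class points together and pushing
    inter-class points apart."""
    sim = feats @ feats.T / tau
    np.fill_diagonal(sim, -np.inf)                  # drop self-similarity
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    prob = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    positive = labels[:, None] == labels[None, :]
    np.fill_diagonal(positive, False)
    return float(-np.mean(np.log((prob * positive).sum(axis=1) + 1e-12)))
```

The loss is small when features cluster by class and large when classes are mixed, which is exactly the pressure that sharpens ambiguous ovary/follicle boundaries.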
Affiliation(s)
- Xin Yang
- School of Biomedical Engineering, Health Center, Shenzhen University, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, China
- Haoming Li
- School of Biomedical Engineering, Health Center, Shenzhen University, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, China
- Yi Wang
- School of Biomedical Engineering, Health Center, Shenzhen University, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, China
- Xiaowen Liang
- Department of Ultrasound Medicine, Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Chaoyu Chen
- School of Biomedical Engineering, Health Center, Shenzhen University, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, China
- Xu Zhou
- School of Biomedical Engineering, Health Center, Shenzhen University, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, China
- Fengyi Zeng
- Department of Ultrasound Medicine, Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Jinghui Fang
- Department of Ultrasound Medicine, Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Alejandro Frangi
- School of Biomedical Engineering, Health Center, Shenzhen University, China; Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing, University of Leeds, Leeds, UK; Medical Imaging Research Center (MIRC), University Hospital Gasthuisberg, Electrical Engineering Department, KU Leuven, Leuven, Belgium
- Zhiyi Chen
- Institute of Medical Imaging, University of South China, Hengyang, Hunan Province, China; Department of Ultrasound Medicine, Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Dong Ni
- School of Biomedical Engineering, Health Center, Shenzhen University, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, China
45
Looney P, Yin Y, Collins SL, Nicolaides KH, Plasencia W, Molloholli M, Natsis S, Stevenson GN. Fully Automated 3-D Ultrasound Segmentation of the Placenta, Amniotic Fluid, and Fetus for Early Pregnancy Assessment. IEEE Trans Ultrason Ferroelectr Freq Control 2021; 68:2038-2047. [PMID: 33460372] [PMCID: PMC8154733] [DOI: 10.1109/tuffc.2021.3052143]
Abstract
Volumetric placental measurement using 3-D ultrasound has proven clinical utility in predicting adverse pregnancy outcomes. However, this metric cannot currently be employed as part of a screening test due to a lack of robust, real-time segmentation tools. We present a multiclass (MC) convolutional neural network (CNN) developed to segment the placenta, amniotic fluid, and fetus. The ground-truth data set consisted of 2093 labeled placental volumes augmented by 300 volumes with the placenta, amniotic fluid, and fetus annotated. A two-pathway, hybrid (HB) model using transfer learning, a modified loss function, and exponential average weighting was developed and demonstrated the best performance for placental segmentation (PS), achieving a Dice similarity coefficient (DSC) of 0.84 and an average Hausdorff distance (HDAV) of 0.38 mm. The use of a dual-pathway architecture improved the PS by 0.03 DSC and reduced HDAV by 0.27 mm compared with a naïve MC model. The incorporation of exponential weighting produced a further small improvement in DSC of 0.01 and a reduction in HDAV of 0.44 mm. Per-volume inference using the FCNN took 7-8 s. This method should enable clinically relevant morphometric measurements (such as volume and total surface area) to be generated automatically for the placenta, amniotic fluid, and fetus. The ready availability of such metrics makes a population-based screening test for adverse pregnancy outcomes possible.
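Two quantities in this abstract, the Dice similarity coefficient and exponential average weighting of network parameters, can be illustrated concretely. Below is a minimal NumPy sketch; the function names and toy arrays are illustrative, not taken from the paper's code:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient (DSC) between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

def ema_update(avg_weights, new_weights, decay=0.99):
    """One step of exponential moving averaging of model weights, the
    general idea behind 'exponential average weighting' of parameters."""
    return [decay * a + (1.0 - decay) * w for a, w in zip(avg_weights, new_weights)]
```

Averaging parameters this way smooths out step-to-step noise in training, which is one plausible reason the paper reports a small additional gain from exponential weighting.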
46
Li J, Lin X, Che H, Li H, Qian X. Pancreas segmentation with probabilistic map guided bi-directional recurrent UNet. Phys Med Biol 2021; 66. [PMID: 33915526] [DOI: 10.1088/1361-6560/abfce3]
Abstract
Pancreas segmentation in medical imaging is of great significance for clinical pancreas diagnostics and treatment. However, the large population variations in pancreas shape and volume cause enormous segmentation difficulties, even for state-of-the-art algorithms utilizing fully convolutional neural networks (FCNs). Specifically, pancreas segmentation suffers from the loss of spatial information in 2D methods and the high computational cost of 3D methods. To alleviate these problems, we propose a probabilistic-map-guided bi-directional recurrent UNet (PBR-UNet) architecture, which fuses intra-slice information and inter-slice probabilistic maps into a local 3D hybrid regularization scheme, followed by a bi-directional recurrent optimization scheme. The PBR-UNet method consists of an initial estimation module for efficiently extracting pixel-level probabilistic maps and a primary segmentation module for propagating hybrid information through a 2.5D UNet architecture. Specifically, local 3D information is inferred by combining an input image with the probabilistic maps of the adjacent slices into multi-channel hybrid data, and then hierarchically aggregating the hybrid information of the entire segmentation network. In addition, a bi-directional recurrent optimization mechanism is developed to update the hybrid information in both the forward and the backward directions, allowing the proposed network to make full and optimal use of local context information. Quantitative and qualitative evaluations were performed on the NIH Pancreas-CT and MSD pancreas datasets; the proposed PBR-UNet method achieved similar segmentation results with less computational cost compared with other state-of-the-art methods.
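The multi-channel hybrid input described above, a slice stacked with the probabilistic maps of its neighbouring slices, can be sketched as follows. This is a hedged illustration: the channel ordering, edge handling, and function name are assumptions, not the authors' implementation:

```python
import numpy as np

def hybrid_input(slices: np.ndarray, prob_maps: np.ndarray, i: int) -> np.ndarray:
    """Build a 3-channel (channels-first) hybrid input for slice i from a
    stack of 2D slices and their per-slice probabilistic maps. Edge slices
    reuse their own probabilistic map for the missing neighbour."""
    n = slices.shape[0]
    prev_p = prob_maps[max(i - 1, 0)]      # map of the previous slice
    next_p = prob_maps[min(i + 1, n - 1)]  # map of the next slice
    return np.stack([prev_p, slices[i], next_p], axis=0)
```

A bi-directional pass would then sweep i forward and backward over the volume, refreshing `prob_maps` after each sweep so neighbouring predictions inform one another.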
Affiliation(s)
- Jun Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, People's Republic of China
- Xiaozhu Lin
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, People's Republic of China
- Hui Che
- Biomedical Engineering Department, Rutgers University, New Jersey 08901, United States of America
- Hao Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, People's Republic of China
- Xiaohua Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, People's Republic of China
47
Abstract
Deep learning is transforming most areas of science and technology, including electron microscopy. This review paper offers a practical perspective aimed at developers with limited familiarity. For context, we review popular applications of deep learning in electron microscopy. Following, we discuss hardware and software needed to get started with deep learning and interface with electron microscopes. We then review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy.
48
Day TG, Kainz B, Hajnal J, Razavi R, Simpson JM. Artificial intelligence, fetal echocardiography, and congenital heart disease. Prenat Diagn 2021. [PMCID: PMC8641383] [DOI: 10.1002/pd.5892]
Affiliation(s)
- Thomas G. Day
- Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Department of Congenital Cardiology, Evelina London Children's Healthcare, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Bernhard Kainz
- Department of Computing, Faculty of Engineering, Imperial College London, London, UK
- Jo Hajnal
- Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Reza Razavi
- Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Department of Congenital Cardiology, Evelina London Children's Healthcare, Guy's and St Thomas' NHS Foundation Trust, London, UK
- John M. Simpson
- Faculty of Life Sciences and Medicine, School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Department of Congenital Cardiology, Evelina London Children's Healthcare, Guy's and St Thomas' NHS Foundation Trust, London, UK
49
Xie Y, Zhang J, Lu H, Shen C, Xia Y. SESV: Accurate Medical Image Segmentation by Predicting and Correcting Errors. IEEE Trans Med Imaging 2021; 40:286-296. [PMID: 32956049] [DOI: 10.1109/tmi.2020.3025308]
Abstract
Medical image segmentation is an essential task in computer-aided diagnosis. Despite their prevalence and success, deep convolutional neural networks (DCNNs) still need to be improved to produce segmentation results that are sufficiently accurate and robust for clinical use. In this paper, we propose a novel and generic framework called Segmentation-Emendation-reSegmentation-Verification (SESV) to improve the accuracy of existing DCNNs in medical image segmentation, rather than designing a more accurate segmentation model. Our idea is to predict the segmentation errors produced by an existing model and then correct them. Since predicting segmentation errors is challenging, we design two ways to tolerate mistakes in the error prediction. First, rather than using a predicted segmentation error map to correct the segmentation mask directly, we treat the error map only as a prior indicating the locations where segmentation errors are prone to occur, and concatenate the error map with the image and segmentation mask as the input of a re-segmentation network. Second, we introduce a verification network to determine whether to accept or reject the refined mask produced by the re-segmentation network on a region-by-region basis. Experimental results on the CRAG, ISIC, and IDRiD datasets suggest that the SESV framework can improve the accuracy of DeepLabv3+ substantially and achieve advanced performance in the segmentation of gland cells, skin lesions, and retinal microaneurysms. Consistent conclusions can also be drawn when using PSPNet, U-Net, or FPN as the segmentation network. Therefore, our SESV framework is capable of improving the accuracy of different DCNNs on different medical image segmentation tasks.
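The SESV control flow (segment, predict errors as a prior, re-segment on the concatenated input, then accept or reject refinements region by region) can be sketched with placeholder callables standing in for the four trained networks. This is an illustrative skeleton under those assumptions, not the authors' code:

```python
import numpy as np

def sesv_refine(image, seg_fn, err_fn, reseg_fn, verify_fn, regions):
    """SESV pipeline sketch. seg_fn/err_fn/reseg_fn/verify_fn stand in for
    the segmentation, error-prediction, re-segmentation, and verification
    networks; `regions` is a list of index tuples partitioning the image."""
    mask = seg_fn(image)                    # initial segmentation
    err_map = err_fn(image, mask)           # predicted error locations (prior only)
    hybrid = np.stack([image, mask, err_map])  # concatenated re-segmentation input
    refined = reseg_fn(hybrid)              # refined mask
    out = mask.copy()
    for region in regions:                  # region-by-region accept/reject
        if verify_fn(image, refined, region):
            out[region] = refined[region]
    return out
```

The verification step is what makes the framework tolerant of error-prediction mistakes: a bad refinement in one region can be rejected without discarding good refinements elsewhere.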
50
Cao X, Chen H, Li Y, Peng Y, Wang S, Cheng L. Uncertainty Aware Temporal-Ensembling Model for Semi-Supervised ABUS Mass Segmentation. IEEE Trans Med Imaging 2021; 40:431-443. [PMID: 33021936] [DOI: 10.1109/tmi.2020.3029161]
Abstract
Accurate breast mass segmentation of automated breast ultrasound (ABUS) images plays a crucial role in 3D breast reconstruction, which can assist radiologists in surgery planning. Although convolutional neural networks have great potential for breast mass segmentation given the remarkable progress of deep learning, the lack of annotated data limits the performance of deep CNNs. In this article, we present an uncertainty aware temporal ensembling (UATE) model for semi-supervised ABUS mass segmentation. Specifically, a temporal ensembling segmentation (TEs) model is designed to segment breast masses using a few labeled images and a large number of unlabeled images. Since the network output contains both correct and unreliable predictions, treating every prediction equally in pseudo-label updates and loss calculation may degrade network performance. To alleviate this problem, an uncertainty map is estimated for each image; an adaptive ensembling momentum map and an uncertainty aware unsupervised loss are then designed and integrated with the TEs model. The effectiveness of the proposed UATE model is verified mainly on an ABUS dataset of 107 patients with 170 volumes, including 13,382 labeled 2D slices. The Jaccard index (JI), Dice similarity coefficient (DSC), pixel-wise accuracy (AC), and Hausdorff distance (HD) of the proposed method on the testing set are 63.65%, 74.25%, 99.21%, and 3.81 mm, respectively. Experimental results demonstrate that our semi-supervised method outperforms the fully supervised method and achieves promising results compared with existing semi-supervised methods.
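The two uncertainty-aware ingredients above, down-weighting unreliable pixels in the unsupervised loss and adapting the ensembling momentum per pixel, can be sketched in NumPy. The exp(-uncertainty) weighting and the momentum schedule below are plausible illustrative choices, not necessarily the paper's exact formulation:

```python
import numpy as np

def uncertainty_weighted_consistency(student, teacher, uncertainty):
    """Unsupervised consistency loss in which each pixel's squared error is
    down-weighted by exp(-uncertainty), so unreliable teacher predictions
    contribute less to the gradient."""
    w = np.exp(-uncertainty)
    return float(np.sum(w * (student - teacher) ** 2) / (np.sum(w) + 1e-8))

def adaptive_ensemble(pseudo, new_pred, uncertainty, base_momentum=0.9):
    """Pseudo-label update with a per-pixel momentum map: high-uncertainty
    pixels keep more of the old ensemble, confident pixels update faster."""
    momentum = base_momentum + (1.0 - base_momentum) * (1.0 - np.exp(-uncertainty))
    return momentum * pseudo + (1.0 - momentum) * new_pred
```

With uncertainty fixed at zero everywhere, `adaptive_ensemble` reduces to plain temporal ensembling with constant momentum, which makes the uncertainty map's role explicit.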