1.
Ou Z, Bai J, Chen Z, Lu Y, Wang H, Long S, Chen G. RTSeg-net: A lightweight network for real-time segmentation of fetal head and pubic symphysis from intrapartum ultrasound images. Comput Biol Med 2024;175:108501. PMID: 38703545. DOI: 10.1016/j.compbiomed.2024.108501.
Abstract
The segmentation of the fetal head (FH) and pubic symphysis (PS) from intrapartum ultrasound images plays a pivotal role in monitoring labor progression and informing crucial clinical decisions. Achieving real-time segmentation with high accuracy on systems with limited hardware capabilities presents significant challenges. To address these challenges, we propose the real-time segmentation network (RTSeg-Net), a lightweight deep learning model that incorporates distribution shifting convolutional blocks, tokenized multilayer perceptron blocks, and efficient feature fusion blocks. Designed for computational efficiency, RTSeg-Net minimizes resource demands while significantly enhancing segmentation performance. Our evaluation on two distinct intrapartum ultrasound image datasets shows that RTSeg-Net achieves segmentation accuracy on par with more complex state-of-the-art networks while using merely 1.86 M parameters (just 6% of theirs) and operating seven times faster, reaching 31.13 frames per second on a Jetson Nano, a device with limited computing capacity. These results underscore RTSeg-Net's potential to provide accurate, real-time segmentation on low-power devices, broadening the scope of its application across the stages of labor. By enabling real-time, accurate ultrasound image analysis on portable, low-cost devices, RTSeg-Net promises to make sophisticated intrapartum monitoring accessible to a wider range of healthcare settings.
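The efficiency claim above, matching heavier networks with a small fraction of their parameters, rests largely on lightweight convolutional design. As a rough illustration (not RTSeg-Net's actual blocks, whose details are in the paper), the sketch below compares the weight count of a standard convolution against a depthwise-separable one at a hypothetical layer size:

```python
def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k


def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out


# Hypothetical layer: 64 input channels, 128 output channels, 3x3 kernel.
c_in, c_out, k = 64, 128, 3
standard = conv_params(c_in, c_out, k)                     # 73728
separable = depthwise_separable_params(c_in, c_out, k)     # 576 + 8192 = 8768
print(standard, separable, separable / standard)           # ratio ~0.12
```

Repeated across a decoder, reductions of this order are how a network ends up in the ~2 M parameter range while keeping comparable receptive fields.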
Affiliation(s)
- Zhanhong Ou
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Jieyun Bai
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China; Auckland Bioengineering Institute, University of Auckland, Auckland, 1010, New Zealand.
- Zhide Chen
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Yaosheng Lu
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Huijin Wang
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Shun Long
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Gaowen Chen
- Obstetrics and Gynecology Center, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
2.
Li L, Yu J, Li Y, Wei J, Fan R, Wu D, Ye Y. Multi-sequence generative adversarial network: better generation for enhanced magnetic resonance imaging images. Front Comput Neurosci 2024;18:1365238. PMID: 38841427. PMCID: PMC11151883. DOI: 10.3389/fncom.2024.1365238.
Abstract
Introduction: MRI is one of the most commonly used diagnostic methods in clinical practice, especially for brain diseases. MRI comprises many sequences, but T1CE images can be obtained only with contrast agents. Many patients (such as cancer patients) must undergo multiple aligned MRI sequences for diagnosis, especially the contrast-enhanced sequence. However, for some patients, such as pregnant women and children, contrast agents are difficult to use, and contrast agents have many adverse reactions that can pose significant risk. With the continuous development of deep learning, generative adversarial networks have made it possible to extract features from one type of image to generate another. Methods: We propose a generative adversarial network model with multimodal inputs and end-to-end decoding based on the pix2pix model. We used four evaluation metrics, NMSE, RMSE, SSIM, and PSNR, to assess the generated images. Results: Statistical analysis comparing our proposed model with pix2pix found significant differences between the two. Our model outperformed pix2pix, with higher SSIM and PSNR and lower NMSE and RMSE. We also found that inputting T1W and T2W images together worked better than other combinations, providing new ideas for subsequent work on generating contrast-enhanced magnetic resonance images. With our model, contrast-enhanced sequence images can be generated from non-enhanced sequence images. Discussion: This has significant implications, as it can greatly reduce the use of contrast agents and protect populations such as pregnant women and children for whom contrast agents are contraindicated. Additionally, contrast agents are relatively expensive, so this generation method may bring substantial economic benefits.
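The four metrics named in the abstract can be computed directly from a reference image and a generated image. The NumPy sketch below is a minimal version: the SSIM shown is a simplified single-window variant rather than the standard sliding-Gaussian-window formulation, and the `data_range` default is an assumption about image normalization:

```python
import numpy as np


def rmse(ref, gen):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((ref - gen) ** 2)))


def nmse(ref, gen):
    """Squared error normalized by the reference image's energy."""
    return float(np.sum((ref - gen) ** 2) / np.sum(ref ** 2))


def psnr(ref, gen, data_range=1.0):
    """Peak signal-to-noise ratio in dB (images assumed in [0, data_range])."""
    mse = np.mean((ref - gen) ** 2)
    return float(10 * np.log10(data_range ** 2 / mse))


def ssim_global(ref, gen, data_range=1.0):
    """Simplified SSIM computed over the whole image as one window
    (the standard formulation averages over sliding Gaussian windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), gen.mean()
    var_x, var_y = ref.var(), gen.var()
    cov = ((ref - mu_x) * (gen - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2))
                 / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

Higher SSIM and PSNR, and lower NMSE and RMSE, all indicate that the generated image is closer to the reference, which is the direction of improvement the abstract reports.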
Affiliation(s)
- Leizi Li
- South China Normal University-Panyu Central Hospital Joint Laboratory of Basic and Translational Medical Research, Guangzhou Panyu Central Hospital, Guangzhou, China
- Guangzhou Key Laboratory of Subtropical Biodiversity and Biomonitoring and Guangdong Provincial Engineering Technology Research Center for Drug and Food Biological Resources Processing and Comprehensive Utilization, School of Life Sciences, South China Normal University, Guangzhou, China
- Jingchun Yu
- Guangzhou Key Laboratory of Subtropical Biodiversity and Biomonitoring and Guangdong Provincial Engineering Technology Research Center for Drug and Food Biological Resources Processing and Comprehensive Utilization, School of Life Sciences, South China Normal University, Guangzhou, China
- Yijin Li
- Guangzhou Key Laboratory of Subtropical Biodiversity and Biomonitoring and Guangdong Provincial Engineering Technology Research Center for Drug and Food Biological Resources Processing and Comprehensive Utilization, School of Life Sciences, South China Normal University, Guangzhou, China
- Jinbo Wei
- South China Normal University-Panyu Central Hospital Joint Laboratory of Basic and Translational Medical Research, Guangzhou Panyu Central Hospital, Guangzhou, China
- Ruifang Fan
- Guangzhou Key Laboratory of Subtropical Biodiversity and Biomonitoring and Guangdong Provincial Engineering Technology Research Center for Drug and Food Biological Resources Processing and Comprehensive Utilization, School of Life Sciences, South China Normal University, Guangzhou, China
- Dieen Wu
- South China Normal University-Panyu Central Hospital Joint Laboratory of Basic and Translational Medical Research, Guangzhou Panyu Central Hospital, Guangzhou, China
- Yufeng Ye
- South China Normal University-Panyu Central Hospital Joint Laboratory of Basic and Translational Medical Research, Guangzhou Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
3.
Dubey G, Srivastava S, Jayswal AK, Saraswat M, Singh P, Memoria M. Fetal Ultrasound Segmentation and Measurements Using Appearance and Shape Prior Based Density Regression with Deep CNN and Robust Ellipse Fitting. J Imaging Inform Med 2024;37:247-267. PMID: 38343234. PMCID: PMC10976955. DOI: 10.1007/s10278-023-00908-8.
Abstract
Accurately segmenting the structure of the fetal head (FH) and performing biometry measurements, including head circumference (HC) estimation, is a vital requirement for assessing abnormal fetal growth during pregnancy, normally carried out by experienced radiologists using ultrasound (US) images. However, accurate segmentation and measurement are challenging due to image artifacts, incomplete ellipse fitting, and fluctuations in FH dimensions across trimesters. The process is also highly time-consuming, and the absence of specialized features leads to low segmentation accuracy. To address these challenges, we propose an automatic density regression approach that incorporates appearance and shape priors into a deep learning-based network model (DR-ASPnet) with robust ellipse fitting on fetal US images. Initially, we employ multiple pre-processing steps to remove unwanted distortions and variable fluctuations and to expose the significant features in the US images. Augmentation operations are then applied to increase the diversity of the dataset. Next, we propose the hierarchical density regression deep convolutional neural network (HDR-DCNN) model, which combines three network models to determine the complex location of the FH for accurate segmentation during training and testing. We then apply post-processing with contrast enhancement filtering and a morphological operation model to smooth the region and remove unnecessary artifacts from the segmentation results. Finally, the smoothed segmentation result is passed to the robust ellipse fitting-based least squares (REFLS) method for HC estimation. The DR-ASPnet model achieves a Dice similarity coefficient (DSC) of 98.86% for segmentation and an absolute distance (AD) of 1.67 mm for measurement, outperforming other state-of-the-art methods, and reaches a 0.99 correlation coefficient (CC) between measured and predicted HC values on the HC18 dataset.
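The final HC step, fitting an ellipse to the segmented boundary and measuring its circumference, can be sketched as follows. This is a much-simplified axis-aligned, centered least-squares fit, not the paper's robust REFLS method; the semi-axis values are hypothetical, and the circumference uses Ramanujan's first approximation:

```python
import numpy as np


def fit_axis_aligned_ellipse(x, y):
    """Least-squares fit of p*x^2 + q*y^2 = 1 for boundary points centered
    at the origin; returns semi-axes a = 1/sqrt(p), b = 1/sqrt(q)."""
    A = np.column_stack([x ** 2, y ** 2])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return 1.0 / np.sqrt(coef[0]), 1.0 / np.sqrt(coef[1])


def ramanujan_circumference(a, b):
    """Ramanujan's approximation to the perimeter of an ellipse."""
    h = ((a - b) / (a + b)) ** 2
    return np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))


# Synthetic boundary points from a hypothetical fetal-head ellipse (mm).
t = np.linspace(0, 2 * np.pi, 100)
a_true, b_true = 60.0, 45.0
x, y = a_true * np.cos(t), b_true * np.sin(t)

a, b = fit_axis_aligned_ellipse(x, y)
hc = ramanujan_circumference(a, b)   # estimated head circumference
```

A robust method additionally handles rotation, off-center ellipses, and outlier boundary pixels, which is where the reported AD and CC figures come from.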
Affiliation(s)
- Gaurav Dubey
- Department of Computer Science, KIET Group of Institutions, Delhi-NCR, Ghaziabad, U.P, India
- Mala Saraswat
- Department of Computer Science, Bennett University, Greater Noida, India
- Pooja Singh
- Shiv Nadar University, Greater Noida, Uttar Pradesh, India
- Minakshi Memoria
- CSE Department, UIT, Uttaranchal University, Dehradun, Uttarakhand, India
4.
Kim S, Yoon H, Lee J, Yoo S. Facial wrinkle segmentation using weighted deep supervision and semi-automatic labeling. Artif Intell Med 2023;145:102679. PMID: 37925209. DOI: 10.1016/j.artmed.2023.102679.
Abstract
Facial wrinkles are important indicators of human aging. Recently, a method using deep learning and semi-automatic labeling was proposed to segment facial wrinkles, showing much better performance than conventional image-processing-based methods. However, wrinkle segmentation remains challenging due to the thinness of wrinkles and their small proportion of the entire image, so performance improvement is still necessary. To address this issue, we propose a novel loss function that takes the thickness of wrinkles into account, building on the semi-automatic labeling approach. First, considering the different spatial dimensions of the decoder in the U-Net architecture, we generate weighted wrinkle maps from the ground truth. These weighted wrinkle maps are used to calculate the training losses more accurately than the existing deep supervision approach; we call this new loss computation weighted deep supervision. The proposed method was evaluated on an image dataset obtained from a professional skin analysis device and labeled using semi-automatic labeling. In our experiments, weighted deep supervision achieved a higher Jaccard similarity index (JSI) for wrinkle segmentation than conventional deep supervision and traditional image processing methods. Additionally, we conducted experiments on semi-automatic labeling, which had not been explored in previous research, and compared it with human labeling; the semi-automatic labels were more consistent than human-made ones. Furthermore, to assess the scalability of the proposed method to other domains, we applied it to retinal vessel segmentation, where it also outperformed existing approaches. In conclusion, the proposed method offers high performance and can be easily applied to various biomedical domains and U-Net-based architectures. The source code is publicly available at https://github.com/resemin/WeightedDeepSupervision.
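The deep supervision idea above, a ground-truth map resized to each decoder scale and a loss that up-weights the scarce wrinkle pixels, can be sketched in NumPy. The max-pooling downsample, the fixed `pos_weight`, and the plain weighted binary cross-entropy are illustrative assumptions, not the paper's exact weighting scheme:

```python
import numpy as np


def downsample(mask, factor):
    """Max-pool a binary label map so thin structures survive downsampling."""
    h, w = mask.shape
    trimmed = mask[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor, w // factor, factor).max(axis=(1, 3))


def weighted_bce(pred, target, pos_weight):
    """Binary cross-entropy with extra weight on positive (wrinkle) pixels."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    w = np.where(target == 1, pos_weight, 1.0)
    return float(-(w * (target * np.log(pred)
                        + (1 - target) * np.log(1 - pred))).mean())


def deep_supervision_loss(preds, mask, pos_weight=10.0):
    """preds: sigmoid outputs at decoder scales 1, 1/2, 1/4, ...; the loss at
    each scale is computed against a correspondingly downsampled label map."""
    total = 0.0
    for level, pred in enumerate(preds):
        target = mask if level == 0 else downsample(mask, 2 ** level)
        total += weighted_bce(pred, target, pos_weight)
    return total


# A thin horizontal "wrinkle" one pixel wide in an 8x8 label map.
mask = np.zeros((8, 8))
mask[3, :] = 1
preds = [np.full((8, 8), 0.5), np.full((4, 4), 0.5), np.full((2, 2), 0.5)]
loss = deep_supervision_loss(preds, mask)
```

Max-pooling rather than averaging in `downsample` is one way to keep a one-pixel-wide wrinkle from vanishing at coarse scales, which is the core difficulty the abstract describes.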
Affiliation(s)
- Semin Kim
- AI R&D Center, Lululab Inc., 318, Dosan-daero, Gangnam-gu, Seoul, Republic of Korea.
- Huisu Yoon
- AI R&D Center, Lululab Inc., 318, Dosan-daero, Gangnam-gu, Seoul, Republic of Korea.
- Jongha Lee
- AI R&D Center, Lululab Inc., 318, Dosan-daero, Gangnam-gu, Seoul, Republic of Korea.
- Sangwook Yoo
- AI R&D Center, Lululab Inc., 318, Dosan-daero, Gangnam-gu, Seoul, Republic of Korea.
5.
Gao Z, Tian Z, Pu B, Li S, Li K. Deep endpoints focusing network under geometric constraints for end-to-end biometric measurement in fetal ultrasound images. Comput Biol Med 2023;165:107399. PMID: 37683530. DOI: 10.1016/j.compbiomed.2023.107399.
Abstract
Biometric measurement in fetal ultrasound images is one of the most demanding medical image analysis tasks and can directly contribute to diagnosing fetal diseases. However, the natural high speckle noise and shadows in ultrasound data pose significant challenges for automatic biometric measurement. Almost all existing dominant automatic methods are two-stage models, in which the key anatomical structures are segmented first and then measured, introducing both segmentation and fitting errors. Worse, the second-stage fitting depends entirely on the first-stage segmentation, so any segmentation error leads to a larger fitting error. To this end, we propose a novel end-to-end biometric measurement network, abbreviated E2EBM-Net, that directly fits the measurement parameters. E2EBM-Net includes a cross-level feature fusion module to extract multi-scale texture information, a hard-soft attention module to improve position sensitivity, and center-focused detectors that jointly achieve accurate localization and regression of the measurement endpoints, along with a loss function that uses geometric cues to enhance the correlations. To our knowledge, this is the first AI-based approach to address the biometric measurement of irregular anatomical structures in fetal ultrasound images end to end. Experimental results show that E2EBM-Net outperformed existing methods and achieved state-of-the-art performance.
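A loss that mixes direct endpoint regression with a geometric cue, in the spirit described above, might look like the sketch below. The specific terms and the `lam` weight are illustrative assumptions, not E2EBM-Net's actual loss function:

```python
import numpy as np


def endpoint_geometric_loss(pred, gt, lam=0.5):
    """pred, gt: (N, 2, 2) arrays holding two (x, y) endpoints per measurement.

    Combines a direct endpoint regression term with a geometric cue: the
    length of the predicted segment should match the ground-truth length,
    which couples the two endpoints instead of treating them independently."""
    # Squared error on the endpoint coordinates themselves.
    point_term = np.mean(np.sum((pred - gt) ** 2, axis=-1))
    # Squared error on the segment length (the biometric measurement).
    pred_len = np.linalg.norm(pred[:, 0] - pred[:, 1], axis=-1)
    gt_len = np.linalg.norm(gt[:, 0] - gt[:, 1], axis=-1)
    length_term = np.mean((pred_len - gt_len) ** 2)
    return float(point_term + lam * length_term)
```

The point of the extra term is that two endpoint predictions can each be slightly off yet still yield the correct measured distance, or vice versa; penalizing the length directly optimizes what the clinician actually reads off.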
Affiliation(s)
- Zhan Gao
- College of Computer Science and Electronic Engineering, Hunan University, Changsha 410000, China
- Zean Tian
- College of Computer Science and Electronic Engineering, Hunan University, Changsha 410000, China
- Bin Pu
- College of Computer Science and Electronic Engineering, Hunan University, Changsha 410000, China
- Shengli Li
- Department of Ultrasound, Shenzhen Maternal & Child Healthcare Hospital, Southern Medical University, Shenzhen, 518028, China
- Kenli Li
- College of Computer Science and Electronic Engineering, Hunan University, Changsha 410000, China.
6.
Ciceri T, Squarcina L, Giubergia A, Bertoldo A, Brambilla P, Peruzzo D. Review on deep learning fetal brain segmentation from Magnetic Resonance images. Artif Intell Med 2023;143:102608. PMID: 37673558. DOI: 10.1016/j.artmed.2023.102608.
Abstract
Brain segmentation is often the first and most critical step in quantitative brain analysis for many clinical applications, including fetal imaging. Several aspects make segmentation of the fetal brain in magnetic resonance imaging (MRI) challenging, such as the non-standard position of the fetus owing to movement during the examination, rapid brain development, and the limited availability of imaging data. In recent years, several segmentation methods have been proposed for automatically partitioning the fetal brain from MR images. These algorithms aim to define regions of interest with different shapes and intensities, encompassing the entire brain or isolating specific structures. Deep learning techniques, particularly convolutional neural networks (CNNs), have become the state-of-the-art approach in the field because they provide reliable segmentation results over heterogeneous datasets. Here, we review the deep learning algorithms developed for fetal brain segmentation and categorize them according to their target structures. Finally, we discuss the perceived research gaps in the fetal literature and suggest possible future research directions that could impact the management of fetal MR images.
Affiliation(s)
- Tommaso Ciceri
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy; Department of Information Engineering, University of Padua, Padua, Italy
- Letizia Squarcina
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Alice Giubergia
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy; Department of Information Engineering, University of Padua, Padua, Italy
- Alessandra Bertoldo
- Department of Information Engineering, University of Padua, Padua, Italy; University of Padua, Padova Neuroscience Center, Padua, Italy
- Paolo Brambilla
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy; Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy.
- Denis Peruzzo
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy
7.
Zhan B, Zhou L, Li Z, Wu X, Pu Y, Zhou J, Wang Y, Shen D. D2FE-GAN: Decoupled dual feature extraction based GAN for MRI image synthesis. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.109362.
8.
Wang K, Wang Y, Zhan B, Yang Y, Zu C, Wu X, Zhou J, Nie D, Zhou L. An Efficient Semi-Supervised Framework with Multi-Task and Curriculum Learning for Medical Image Segmentation. Int J Neural Syst 2022;32:2250043. DOI: 10.1142/s0129065722500435.
9.
Semi-supervised Medical Image Segmentation via a Tripled-uncertainty Guided Mean Teacher Model with Contrastive Learning. Med Image Anal 2022;79:102447. DOI: 10.1016/j.media.2022.102447.