1. Patel DJ, Chaudhari K, Acharya N, Shrivastava D, Muneeba S. Artificial Intelligence in Obstetrics and Gynecology: Transforming Care and Outcomes. Cureus 2024; 16:e64725. PMID: 39156405; PMCID: PMC11329325; DOI: 10.7759/cureus.64725.
Abstract
The integration of artificial intelligence (AI) in obstetrics and gynecology (OB/GYN) is revolutionizing the landscape of women's healthcare. This review article explores the transformative impact of AI technologies on the diagnosis, treatment, and management of obstetric and gynecological conditions. We examine key advancements in AI-driven imaging techniques, predictive analytics, and personalized medicine, highlighting their roles in enhancing prenatal care, improving maternal and fetal outcomes, and optimizing gynecological interventions. The article also addresses the challenges and ethical considerations associated with the implementation of AI in clinical practice. By offering a thorough overview of present AI applications and future prospects, the paper highlights the potential of AI to greatly improve the standard of care in OB/GYN, ultimately leading to better health outcomes for women.

Affiliations
- Dharmesh J Patel, Kamlesh Chaudhari, Neema Acharya, Deepti Shrivastava, Shaikh Muneeba: Department of Obstetrics and Gynecology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education & Research, Wardha, India

2. Pei Y, E L, Dai C, Han J, Wang H, Liang H. Combining deep learning and intelligent biometry to extract ultrasound standard planes and assess early gestational weeks. Eur Radiol 2023; 33:9390-9400. PMID: 37392231; DOI: 10.1007/s00330-023-09808-5.
Abstract
OBJECTIVES To develop and validate a fully automated AI system that extracts standard planes and assesses early gestational weeks, and to compare the performance of the developed system with that of sonographers. METHODS In this three-center retrospective study, 214 consecutive pregnant women who underwent transvaginal ultrasound between January and December 2018 were selected. Their ultrasound videos were automatically split into 38,941 frames using a dedicated program. First, an optimal deep-learning classifier was selected to extract the standard planes containing key anatomical structures from the ultrasound frames. Second, an optimal segmentation model was selected to outline the gestational sacs. Third, novel biometry was used to measure the sacs, select the largest gestational sac in the same video, and assess gestational weeks automatically. Finally, an independent test set was used to compare the performance of the system with that of sonographers. The outcomes were analyzed using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and the mean Dice similarity coefficient (mDice). RESULTS The standard planes were extracted with an AUC of 0.975, a sensitivity of 0.961, and a specificity of 0.979. The gestational sacs' contours were segmented with an mDice of 0.974 (error less than 2 pixels). The comparison showed that the tool's relative error in assessing gestational weeks was 12.44% and 6.92% lower than that of intermediate and senior sonographers, respectively, and the tool was faster (0.17 min vs. 16.6 and 12.63 min). CONCLUSIONS This proposed end-to-end tool allows automatic assessment of gestational weeks in early pregnancy and may reduce manual analysis time and measurement errors. CLINICAL RELEVANCE STATEMENT The fully automated tool achieved high accuracy, showing its potential to optimize the increasingly scarce resources of sonographers. Explainable predictions can support sonographers' confidence in assessing gestational weeks and provide a reliable basis for managing early pregnancy cases. KEY POINTS • The end-to-end pipeline enabled automatic identification of the standard plane containing the gestational sac in an ultrasound video, as well as segmentation of the sac contour, automatic multi-angle measurements, and selection of the sac with the largest mean internal diameter to calculate the early gestational week. • This fully automated tool combining deep learning and intelligent biometry may assist the sonographer in assessing the early gestational week, increasing accuracy and reducing analysis time, thereby reducing observer dependence.
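
The dating step is simple to reproduce once the biometry is in hand. A minimal sketch, assuming per-sac multi-angle internal-diameter measurements (in mm) have already been extracted from the segmented contours; the mapping from mean sac diameter (MSD) to gestational age uses the common clinical rule of thumb GA(days) ≈ MSD(mm) + 30, not necessarily the formula fitted in the paper:

import numpy as np

def estimate_gestational_week(sacs_per_video):
    """sacs_per_video: list of 1-D arrays, each holding multi-angle
    internal-diameter measurements (mm) of one segmented gestational sac.
    Select the sac with the largest mean internal diameter across the
    video, then date the pregnancy from the mean sac diameter (MSD)."""
    mean_diameters = [float(np.mean(d)) for d in sacs_per_video]
    msd = max(mean_diameters)        # largest sac in the same video
    ga_days = msd + 30.0             # rule of thumb: GA (days) = MSD (mm) + 30
    return ga_days / 7.0             # gestational age in weeks

# Example: three candidate sacs, each measured at four angles:
# estimate_gestational_week([np.array([11.8, 12.1, 12.4, 12.0]), ...])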

Affiliations
- Yuanyuan Pei, Longjiang E: Clinical Data Center, Guangdong Provincial Clinical Research Center for Child Health, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou 510623, China
- Changping Dai, Haiyu Wang: Department of Ultrasonography, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou 510623, China
- Jin Han: Prenatal Diagnosis Center of Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou 510623, China
- Huiying Liang: Clinical Data Center, Guangdong Provincial Clinical Research Center for Child Health, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou 510623, China; Medical Big Data Research Center, Guangdong Provincial People's Hospital/Guangdong Academy of Medical Sciences, Guangzhou 510080, China

3. Tang J, Liang Y, Jiang Y, Liu J, Zhang R, Huang D, Pang C, Huang C, Luo D, Zhou X, Li R, Zhang K, Xie B, Hu L, Zhu F, Xia H, Lu L, Wang H. A multicenter study on two-stage transfer learning model for duct-dependent CHDs screening in fetal echocardiography. NPJ Digit Med 2023; 6:143. PMID: 37573426; PMCID: PMC10423245; DOI: 10.1038/s41746-023-00883-y.
Abstract
Duct-dependent congenital heart diseases (CHDs) are a serious form of CHD with a low detection rate, especially in underdeveloped countries and areas. Although existing studies have developed models for fetal heart structure identification, there is a lack of comprehensive evaluation of the long axis of the aorta. In this study, a total of 6698 images and 48 videos are collected to develop and test a two-stage deep transfer learning model named DDCHD-DenseNet for screening critical duct-dependent CHDs. The model achieves a sensitivity of 0.973, 0.843, 0.769, and 0.759, and a specificity of 0.985, 0.967, 0.956, and 0.759, respectively, on the four multicenter test sets. It is expected to be employed as a potential automatic screening tool for hierarchical care and computer-aided diagnosis. Our two-stage strategy effectively improves the robustness of the model and can be extended to screen for other fetal heart development defects.

Affiliations
- Jiajie Tang, Yuxuan Jiang: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China; School of Information Management, Wuhan University, Wuhan, China
- Yongen Liang, Jinrong Liu, Rui Zhang, Danping Huang, Dongni Luo, Xue Zhou, Huimin Xia, Hongying Wang: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Chengcheng Pang, Lianting Hu: Cardiovascular Pediatrics/Guangdong Cardiovascular Institute/Medical Big Data Center, Guangdong Provincial People's Hospital, Guangzhou, China
- Chen Huang: Department of Medical Ultrasonics, Shenzhen Longgang Maternal and Child Health Hospital, Shenzhen, China
- Ruizhuo Li: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China; School of Medicine, South China University of Technology, Guangzhou, China
- Kanghui Zhang, Bingbing Xie, Fanfan Zhu: School of Information Management, Wuhan University, Wuhan, China
- Long Lu: Department of Medical Ultrasonics/Institute of Pediatrics, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China; School of Information Management, Wuhan University, Wuhan, China; Center for Healthcare Big Data Research, The Big Data Institute, Wuhan University, Wuhan, China; School of Public Health, Wuhan University, Wuhan, China

4. Sobhaninia Z, Karimi N, Khadivi P, Samavi S. Brain tumor segmentation by cascaded multiscale multitask learning framework based on feature aggregation. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104834.

5. Yasrab R, Fu Z, Zhao H, Lee LH, Sharma H, Drukker L, Papageorghiou AT, Noble JA. A Machine Learning Method for Automated Description and Workflow Analysis of First Trimester Ultrasound Scans. IEEE Transactions on Medical Imaging 2023; 42:1301-1313. PMID: 36455084; DOI: 10.1109/tmi.2022.3226274.
Abstract
Obstetric ultrasound assessment of fetal anatomy in the first trimester of pregnancy is one of the less explored fields in obstetric sonography because of the paucity of guidelines on anatomical screening and the limited availability of data. This paper, for the first time, examines imaging proficiency and practices of first trimester ultrasound scanning through analysis of full-length ultrasound video scans. Findings from this study provide insights to inform the development of more effective user-machine interfaces and targeted assistive technologies, as well as improvements in workflow protocols for first trimester scanning. Specifically, this paper presents an automated framework to model operator clinical workflow from full-length routine first-trimester fetal ultrasound scan videos. The 2D+t convolutional neural network-based architecture proposed for video annotation incorporates transfer learning and spatio-temporal (2D+t) modelling to automatically partition an ultrasound video into semantically meaningful temporal segments based on the fetal anatomy detected in the video. The model achieves a cross-validation accuracy of 96.10%, with F1 = 0.95, precision = 0.94, and recall = 0.95. Automated semantic partitioning of unlabelled video scans (n = 250) achieves a high correlation with expert annotations (ρ = 0.95, p = 0.06). Clinical workflow patterns, operator skill, and its variability can be derived from the resulting representation using the detected anatomy labels, their order, and their distribution. It is shown that nuchal translucency (NT) is the most difficult standard plane to acquire, and most operators struggle to localize high-quality frames. Furthermore, it is found that newly qualified operators spend 25.56% more time on key biometry tasks than experienced operators.
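
The temporal-partitioning step can be approximated from per-frame outputs alone. A minimal sketch, assuming a per-frame anatomy-label sequence from the 2D+t classifier is available; the majority-vote smoothing window and run-length grouping are illustrative choices, not the paper's exact procedure:

from collections import Counter
from itertools import groupby

def partition_scan(frame_labels, window=5):
    """Smooth per-frame anatomy labels with a sliding majority vote,
    then group consecutive identical labels into (label, start, end)
    temporal segments of the scan video."""
    n = len(frame_labels)
    smoothed = [
        Counter(frame_labels[max(0, i - window): i + window + 1]).most_common(1)[0][0]
        for i in range(n)
    ]
    segments, start = [], 0
    for label, run in groupby(smoothed):
        length = len(list(run))
        segments.append((label, start, start + length - 1))
        start += length
    return segments

# partition_scan(["background", "background", "head", "head", "abdomen", ...])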

6. Gridach M, Yasrab R, Drukker L, Papageorghiou AT, Noble JA. D2ANet: Densely Attentional-Aware Network for First Trimester Ultrasound CRL and NT Segmentation. Proceedings of the IEEE International Symposium on Biomedical Imaging 2023; 58:1-4. PMID: 39247913; PMCID: PMC7616422; DOI: 10.1109/isbi53787.2023.10230727.
Abstract
Manual annotation of medical images is time-consuming for clinical experts; therefore, reliable automatic segmentation would be the ideal way to handle large medical datasets. In this paper, we are interested in the detection and segmentation of two fundamental measurements in the first trimester ultrasound (US) scan: Nuchal Translucency (NT) and Crown Rump Length (CRL). There can be significant variation in the shape, location, or size of the anatomical structures in fetal US scans. We propose a new approach, the Densely Attentional-Aware Network for First Trimester Ultrasound CRL and NT Segmentation (D2ANet), to encode variation in feature size by relying on a powerful attention mechanism and densely connected networks. Our results show that the proposed D2ANet offers high pixel agreement (mean JSC = 84.21) with expert manual annotations.

Affiliations
- Mourad Gridach, Robail Yasrab, J Alison Noble: Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Lior Drukker, Aris T Papageorghiou: Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK

7. Balagalla UB, Jayasooriya JVD, de Alwis C, Subasinghe A. Automated segmentation of standard scanning planes to measure biometric parameters in foetal ultrasound images – a survey. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2023. DOI: 10.1080/21681163.2023.2179343.

Affiliations
- U. B. Balagalla, J. V. D. Jayasooriya, C. de Alwis, A. Subasinghe: Department of Electrical and Electronic Engineering, University of Sri Jayewardenepura, Nugegoda, Sri Lanka

8. Kim HY, Cho GJ, Kwon HS. Applications of artificial intelligence in obstetrics. Ultrasonography 2023; 42:2-9. PMID: 36588179; PMCID: PMC9816710; DOI: 10.14366/usg.22063.
Abstract
Artificial intelligence, which has been applied as an innovative technology in multiple fields of healthcare, analyzes large amounts of data to assist in disease prediction, prevention, and diagnosis, as well as in patient monitoring. In obstetrics, artificial intelligence has been actively applied and integrated into our daily medical practice. This review provides an overview of artificial intelligence systems currently used for obstetric diagnostic purposes, such as fetal cardiotocography, ultrasonography, and magnetic resonance imaging, and demonstrates how these methods have been developed and clinically applied.

Affiliations
- Ho Yeon Kim, Geum Joon Cho: Department of Obstetrics and Gynecology, Korea University College of Medicine, Seoul, Korea
- Han Sung Kwon: Division of Maternal and Fetal Medicine, Department of Obstetrics and Gynecology, Research Institute of Medical Science, Konkuk University School of Medicine, Seoul, Korea

9. Alzubaidi M, Agus M, Shah U, Makhlouf M, Alyafei K, Househ M. Ensemble Transfer Learning for Fetal Head Analysis: From Segmentation to Gestational Age and Weight Prediction. Diagnostics (Basel) 2022; 12:2229. PMID: 36140628; PMCID: PMC9497941; DOI: 10.3390/diagnostics12092229.
Abstract
Ultrasound is one of the most commonly used imaging modalities in obstetrics to monitor the growth of a fetus during the gestation period. Specifically, ultrasound images are routinely utilized to gather fetal information, including body measurements, anatomical structure, fetal movements, and pregnancy complications. Recent developments in artificial intelligence and computer vision provide new methods for the automated analysis of medical images in many domains, including ultrasound images. We present a full end-to-end framework for segmenting, measuring, and estimating fetal gestational age and weight based on two-dimensional ultrasound images of the fetal head. Our segmentation framework is based on the following components: (i) eight segmentation architectures (UNet, UNet Plus, Attention UNet, UNet 3+, TransUNet, FPN, LinkNet, and Deeplabv3) fine-tuned with the lightweight EfficientNetB0 backbone, and (ii) a weighted voting method for building an optimized ensemble transfer learning model (ETLM). The ETLM was used to segment the fetal head and to perform accurate measurements of the circumference and seven other values of the fetal head, which we incorporated into a multiple regression model for predicting the week of gestational age and the estimated fetal weight (EFW). We finally validated the regression model by comparing our results with expert physicians and longitudinal references. We evaluated the performance of our framework on the public domain dataset HC18: we obtained 98.53% mean intersection over union (mIoU) as the segmentation accuracy, surpassing state-of-the-art methods; as measurement accuracy, we obtained a 1.87 mm mean absolute difference (MAD). Finally, we obtained a 0.03% mean square error (MSE) in predicting the week of gestational age and 0.05% MSE in predicting EFW.
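
The ensemble step itself is straightforward to reconstruct. A minimal sketch, assuming each fine-tuned network produces a per-pixel foreground-probability map and carries a validation-derived voting weight (e.g., its Dice score); the 0.5 threshold and the weight choice are assumptions, not the authors' exact configuration:

import numpy as np

def weighted_vote(prob_maps, weights, threshold=0.5):
    """prob_maps: array of shape (n_models, H, W) with per-pixel
    foreground probabilities; weights: per-model validation scores
    used as voting weights. Returns the fused binary head mask."""
    prob_maps = np.asarray(prob_maps, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                             # normalize voting weights
    fused = np.tensordot(w, prob_maps, axes=1)  # weighted average, (H, W)
    return (fused >= threshold).astype(np.uint8)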

Affiliations
- Mahmood Alzubaidi, Marco Agus, Uzair Shah, Mowafa Househ: College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 34110, Qatar
- Michel Makhlouf, Khalid Alyafei: Sidra Medical and Research Center, Sidra Medicine, Doha P.O. Box 26999, Qatar

10. Ran AR, Wang X, Chan PP, Chan NC, Yip W, Young AL, Wong MOM, Yung HW, Chang RT, Mannil SS, Tham YC, Cheng CY, Chen H, Li F, Zhang X, Heng PA, Tham CC, Cheung CY. Three-Dimensional Multi-Task Deep Learning Model to Detect Glaucomatous Optic Neuropathy and Myopic Features From Optical Coherence Tomography Scans: A Retrospective Multi-Centre Study. Front Med (Lausanne) 2022; 9:860574. PMID: 35783623; PMCID: PMC9240220; DOI: 10.3389/fmed.2022.860574.
Abstract
Purpose: We aim to develop a multi-task three-dimensional (3D) deep learning (DL) model to detect glaucomatous optic neuropathy (GON) and myopic features (MF) simultaneously from spectral-domain optical coherence tomography (SDOCT) volumetric scans. Methods: Each volumetric scan was labelled as GON according to the criteria of retinal nerve fibre layer (RNFL) thinning, with a structural defect that correlated in position with the visual field defect (i.e., the reference standard). MF were graded from the SDOCT en face images, defined as the presence of peripapillary atrophy (PPA), optic disc tilting, or fundus tessellation. The multi-task DL model was developed with ResNet, outputting Yes/No GON and Yes/No MF. SDOCT scans were collected in a tertiary eye hospital (Hong Kong SAR, China) for training (80%), tuning (10%), and internal validation (10%). External testing was performed on five independent datasets from eye centres in Hong Kong, the United States, and Singapore. For GON detection, we compared the model to the average RNFL thickness measurement generated by the SDOCT device. To investigate whether MF can affect the model's performance on GON detection, we conducted subgroup analyses in groups stratified by Yes/No MF. The area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, and accuracy are reported. Results: A total of 8,151 SDOCT volumetric scans from 3,609 eyes were collected. For detecting GON in the internal validation, the proposed 3D model had a significantly higher AUROC (0.949 vs. 0.913, p < 0.001) than average RNFL thickness in discriminating GON from normal. In the external testing, the two approaches had comparable performance. In the subgroup analysis, the multi-task DL model performed significantly better in the "no MF" group (0.883 vs. 0.965, p < 0.001) in one external testing dataset, with no significant difference in the internal validation and the other external testing datasets. The multi-task DL model's performance in detecting MF was also generalizable across all datasets, with AUROC values ranging from 0.855 to 0.896. Conclusion: The proposed multi-task 3D DL model demonstrated high generalizability across all the datasets, and the presence of MF did not generally affect the accuracy of GON detection.
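
The two-output design reduces to a shared volumetric feature extractor with two binary heads trained jointly. A minimal sketch under stated assumptions: the tiny 3D CNN below is an illustrative stand-in for the paper's ResNet backbone, and the layer sizes are arbitrary:

import torch
import torch.nn as nn

class MultiTaskOCTNet(nn.Module):
    """Shared 3D feature extractor with two binary heads: one logit for
    glaucomatous optic neuropathy (GON), one for myopic features (MF)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.gon_head = nn.Linear(32, 1)
        self.mf_head = nn.Linear(32, 1)

    def forward(self, volume):  # volume: (B, 1, D, H, W)
        feats = self.backbone(volume)
        return self.gon_head(feats), self.mf_head(feats)

# Joint training would sum the two binary cross-entropy losses, e.g.:
# loss = bce(gon_logit, gon_label) + bce(mf_logit, mf_label)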

Affiliations
- An Ran Ran: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China; Lam Kin Chung. Jet King-Shing Ho Glaucoma Treatment and Research Centre, The Chinese University of Hong Kong, Hong Kong SAR, China
- Xi Wang: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, CA, United States
- Poemen P. Chan: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China; Lam Kin Chung. Jet King-Shing Ho Glaucoma Treatment and Research Centre, The Chinese University of Hong Kong, Hong Kong SAR, China; Hong Kong Eye Hospital, Hong Kong SAR, China
- Noel C. Chan, Wilson Yip: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China; Department of Ophthalmology, Prince of Wales Hospital, Hong Kong SAR, China; Department of Ophthalmology, Alice Ho Miu Ling Nethersole Hospital, Hong Kong SAR, China
- Alvin L. Young: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China; Department of Ophthalmology, Prince of Wales Hospital, Hong Kong SAR, China
- Mandy O. M. Wong: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China; Hong Kong Eye Hospital, Hong Kong SAR, China
- Hon-Wah Yung: Tuen Mun Eye Centre, Hong Kong SAR, China
- Robert T. Chang, Suria S. Mannil: Department of Ophthalmology, Byers Eye Institute, Stanford University, Palo Alto, CA, United States
- Yih Chung Tham, Ching-Yu Cheng: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-National University of Singapore Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Hao Chen: Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR, China
- Fei Li, Xiulan Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, China
- Pheng-Ann Heng: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Clement C. Tham: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China; Lam Kin Chung. Jet King-Shing Ho Glaucoma Treatment and Research Centre, The Chinese University of Hong Kong, Hong Kong SAR, China; Hong Kong Eye Hospital, Hong Kong SAR, China
- Carol Y. Cheung (corresponding author): Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China; Lam Kin Chung. Jet King-Shing Ho Glaucoma Treatment and Research Centre, The Chinese University of Hong Kong, Hong Kong SAR, China

11. Zeng W, Luo J, Cheng J, Lu Y. Efficient fetal ultrasound image segmentation for automatic head circumference measurement using a lightweight deep convolutional neural network. Med Phys 2022; 49:5081-5092. PMID: 35536111; DOI: 10.1002/mp.15700.
Abstract
PURPOSE Fetal head circumference (HC) is an important biometric parameter that can be used to assess fetal development in obstetric clinical practice. Most existing methods use deep neural networks to accomplish automatic fetal HC measurement from two-dimensional ultrasound images, and some achieve relatively high prediction accuracy. However, few of these methods focus on optimizing model efficiency. Our purpose is to develop a more efficient approach for this task, which could help doctors measure HC faster and would be more suitable for deployment on devices with scarce computing resources. METHODS In this paper, we present a very lightweight deep convolutional neural network to achieve automatic fetal head segmentation from ultrasound images. By using a sequential prediction network architecture, the proposed model can perform much faster inference while maintaining high prediction accuracy. In addition, we used depthwise separable convolution to replace part of the standard convolution in the network and shrank the input image to further improve model efficiency. After obtaining fetal head segmentation results, post-processing, including morphological processing and least-squares ellipse fitting, was applied to obtain the fetal HC. All experiments in this work were performed on a public dataset, HC18, with 999 fetal ultrasound images for training and 335 for testing. The dataset is publicly available at https://hc18.grand-challenge.org/ and the code for our method is publicly available at https://github.com/ApeMocker/CSM-for-fetal-HC-measurement. RESULTS Our model has only 0.13 million parameters and achieves an inference speed of 28 ms per frame on a CPU and 0.194 ms per frame on a GPU, which far exceeds all existing deep learning-based models as far as we know. Experimental results showed that the method achieved a mean absolute difference of 1.97 (±1.89) mm and a Dice similarity coefficient of 97.61 (±1.72)% on the HC18 test set, which are comparable to the state-of-the-art. CONCLUSION We presented a very lightweight deep learning-based model to realize fast and accurate fetal head segmentation from two-dimensional ultrasound images, which is then used for calculating the fetal HC. The proposed method could help obstetricians measure the fetal head circumference more efficiently with high accuracy, and has the potential to be applied in situations where computing resources are relatively scarce.
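
The post-processing stage maps directly onto standard OpenCV operations. A minimal sketch, assuming a binary head mask from the network; the kernel size and the use of Ramanujan's perimeter approximation are reasonable defaults rather than the authors' exact choices:

import cv2
import numpy as np

def head_circumference_mm(mask, mm_per_pixel):
    """Morphological cleanup, largest-contour selection, least-squares
    ellipse fitting, then HC from Ramanujan's approximation of the
    ellipse perimeter."""
    kernel = np.ones((5, 5), np.uint8)
    clean = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    (_, _), (d1, d2), _ = cv2.fitEllipse(contour)  # full axis lengths, pixels
    a, b = d1 / 2.0, d2 / 2.0                      # semi-axes
    perimeter = np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b)))
    return perimeter * mm_per_pixel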

Affiliations
- Wen Zeng, Jiaru Cheng, Yiling Lu: School of Biomedical Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen, Guangdong 518107, China
- Jie Luo: School of Biomedical Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen, Guangdong 518107, China; Key Laboratory of Sensing Technology and Biomedical Instrument of Guangdong Province, Guangdong Provincial Engineering and Technology Center of Advanced and Portable Medical Devices, Sun Yat-sen University, Guangzhou, China

12. RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm. Computational Intelligence and Neuroscience 2022; 2022:7839840. PMID: 35571722; PMCID: PMC9106472; DOI: 10.1155/2022/7839840.
Abstract
Answer selection (AS) is a critical subtask of the open-domain question answering (QA) problem. The present paper proposes a method called RLAS-BIABC for AS, which is built on an attention-mechanism-based long short-term memory (LSTM) and the bidirectional encoder representations from transformers (BERT) word embedding, enriched by an improved artificial bee colony (ABC) algorithm for pretraining and a reinforcement-learning-based algorithm for training the backpropagation (BP) algorithm. BERT can be incorporated into downstream work and fine-tuned as a unified task-specific architecture, and the pretrained BERT model can capture different linguistic effects. Existing algorithms typically train the AS model with positive-negative pairs for a two-class classifier. A positive pair contains a question and a genuine answer, while a negative one includes a question and a fake answer. The output should be one for positive and zero for negative pairs. Typically, negative pairs outnumber positive ones, leading to an imbalanced classification that drastically reduces system performance. To deal with this, we define classification as a sequential decision-making process in which the agent takes a sample at each step and classifies it. For each classification operation, the agent receives a reward, in which the reward of the majority class is less than the reward of the minority class. Ultimately, the agent finds the optimal values for the policy weights. We initialize the policy weights with the improved ABC algorithm. This initialization technique can prevent problems such as getting stuck in a local optimum. Although ABC serves well in most tasks, a weakness remains: the algorithm disregards the fitness of related pairs of individuals in discovering a neighboring food source position. Therefore, this paper also proposes a mutual learning technique that modifies the produced candidate food source with the higher fitness between two individuals selected by a mutual learning factor. We tested our model on three datasets, LegalQA, TrecQA, and WikiQA, and the results show that RLAS-BIABC can be recognized as a state-of-the-art method.
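
The skew-aware reward scheme is easy to make concrete. A minimal sketch, assuming binary labels with genuine-answer pairs as the minority class; the exact reward magnitudes are illustrative, since the abstract only states that the majority-class reward is smaller:

def classification_reward(prediction, label, minority_label=1, majority_scale=0.2):
    """Reward for one step of the sequential-classification MDP: full
    reward (+/-1) on minority-class samples, a damped reward
    (+/-majority_scale) on majority-class samples."""
    scale = 1.0 if label == minority_label else majority_scale
    return scale if prediction == label else -scale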

13. An improved semantic segmentation with region proposal network for cardiac defect interpretation. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07217-1.

14. Yang C, Liao S, Yang Z, Guo J, Zhang Z, Yang Y, Guo Y, Yin S, Liu C, Kang Y. RDHCformer: Fusing ResDCN and Transformers for Fetal Head Circumference Automatic Measurement in 2D Ultrasound Images. Front Med (Lausanne) 2022; 9:848904. PMID: 35425784; PMCID: PMC9002127; DOI: 10.3389/fmed.2022.848904.
Abstract
Fetal head circumference (HC) is an important biological parameter for monitoring the healthy development of the fetus. Since HC measurement errors are affected by the skill and experience of the sonographer, rapid, accurate, and automatic measurement of fetal HC in prenatal ultrasound is of great significance. We propose a new one-stage network for rotated elliptical object detection based on an anchor-free method, which is also an end-to-end network for fetal HC auto-measurement that requires no post-processing. The network combines a simple transformer structure with a convolutional neural network (CNN) in a lightweight design that makes full use of the transformer's powerful global feature extraction and the CNN's local feature extraction to capture continuous and complete skull edge information. The two complement each other, promoting the detection precision of fetal HC without significantly increasing the amount of computation. To reduce the large variation of intersection over union (IoU) in rotated elliptical object detection caused by slight angle deviations, we used a soft stage-wise regression (SSR) strategy for angle regression and added a Kullback-Leibler divergence (KLD) term, which approximates an IoU loss, to the total loss function. The proposed method achieved good results on the HC18 dataset, demonstrating its effectiveness. This study is expected to help less experienced sonographers, support precision medicine, and relieve the worldwide shortage of sonographers for prenatal ultrasound.
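
The SSR idea can be sketched as a stack of softmax bin classifiers whose expectations are summed at progressively finer angular resolutions, so a small angle error never causes a hard bin jump. A minimal sketch under stated assumptions: the bin counts, the 180-degree range, and the two-stage depth are illustrative, and the module stands in for only the angle branch of the full detection head:

import torch
import torch.nn as nn

class SoftStagewiseAngleHead(nn.Module):
    """Soft stage-wise regression (SSR) head for the ellipse angle:
    each stage softly assigns the feature to angle bins, and the
    predicted angle is the sum of per-stage soft expectations at
    successively finer bin widths."""
    def __init__(self, in_features, bins=(6, 6), angle_range=180.0):
        super().__init__()
        self.bins = bins
        self.angle_range = angle_range
        self.stage_fcs = nn.ModuleList([nn.Linear(in_features, k) for k in bins])

    def forward(self, feats):
        angle = torch.zeros(feats.shape[0], dtype=feats.dtype, device=feats.device)
        width = self.angle_range
        for fc, k in zip(self.stage_fcs, self.bins):
            probs = torch.softmax(fc(feats), dim=-1)   # soft bin assignment
            idx = torch.arange(k, dtype=feats.dtype, device=feats.device)
            width = width / k                          # refine the previous stage
            angle = angle + (probs * idx).sum(dim=-1) * width
        return angle                                   # degrees in [0, 180)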

Affiliations
- Chaoran Yang, Yingjian Yang, Yingwei Guo: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Medical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
- Shanshan Liao, Shaowei Yin, Caixia Liu: Department of Obstetrics, Shengjing Hospital of China Medical University, Shenyang, China
- Zeyu Yang: Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
- Jiaqi Guo, Zhichao Zhang: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yan Kang: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Medical Device Innovation Center, Shenzhen Technology University, Shenzhen, China; Engineering Research Centre of Medical Imaging and Intelligent Analysis, Ministry of Education, Shenyang, China

15. He F, Wang Y, Xiu Y, Zhang Y, Chen L. Artificial Intelligence in Prenatal Ultrasound Diagnosis. Front Med (Lausanne) 2021; 8:729978. PMID: 34977053; PMCID: PMC8716504; DOI: 10.3389/fmed.2021.729978.
Abstract
The application of artificial intelligence (AI) technology to medical imaging has resulted in great breakthroughs. Given the unique position of ultrasound (US) in prenatal screening, the research on AI in prenatal US has practical significance with its application to prenatal US diagnosis improving work efficiency, providing quantitative assessments, standardizing measurements, improving diagnostic accuracy, and automating image quality control. This review provides an overview of recent studies that have applied AI technology to prenatal US diagnosis and explains the challenges encountered in these applications.

Affiliations
- Lizhu Chen: Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China

16. Nurmaini S, Rachmatullah MN, Sapitri AI, Darmawahyuni A, Tutuko B, Firdaus F, Partan RU, Bernolian N. Deep Learning-Based Computer-Aided Fetal Echocardiography: Application to Heart Standard View Segmentation for Congenital Heart Defects Detection. Sensors (Basel) 2021; 21:8007. PMID: 34884008; PMCID: PMC8659935; DOI: 10.3390/s21238007.
Abstract
Accurate segmentation of the fetal heart in echocardiography images is essential for detecting structural abnormalities such as congenital heart defects (CHDs). Due to wide variations attributed to factors such as maternal obesity, abdominal scars, amniotic fluid volume, and great vessel connections, this process is still a challenging problem. CHD detection rates, even with expertise, are generally substandard; the accuracy of measurements remains highly dependent on the sonographer's training, skill, and experience. To make this process automatic, this study proposes deep-learning-based computer-aided fetal heart echocardiography examination with an instance segmentation approach, which inherently segments the four standard heart views and detects defects simultaneously. We conducted several experiments with 1,149 fetal heart images for predicting 24 objects, including the four shapes of fetal heart standard views, 17 heart-chamber objects across the views, and three cases of congenital heart defect. The results showed that the proposed model achieved satisfactory performance in standard view segmentation, with a 79.97% intersection over union and an 89.70% Dice similarity coefficient. It also performed well in CHD detection, with a mean average precision of around 98.30% for intra-patient variation and 82.42% for inter-patient variation. We believe that automatic segmentation and detection techniques could make an important contribution toward improving congenital heart disease diagnosis rates.

Affiliations
- Siti Nurmaini, Muhammad Naufal Rachmatullah, Ade Iriani Sapitri, Annisa Darmawahyuni, Bambang Tutuko, Firdaus Firdaus: Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang 30139, Indonesia
- Nuswil Bernolian: Division of Maternal-Fetal Medicine, Department of Obstetrics and Gynecology, Mohammad Hoesin General Hospital, Palembang 30126, Indonesia

17. Moccia S, Fiorentino MC, Frontoni E. Mask-R2CNN: a distance-field regression version of Mask-RCNN for fetal-head delineation in ultrasound images. Int J Comput Assist Radiol Surg 2021; 16:1711-1718. PMID: 34156608; PMCID: PMC8580944; DOI: 10.1007/s11548-021-02430-0.
Abstract
BACKGROUND AND OBJECTIVES Fetal head-circumference (HC) measurement from ultrasound (US) images provides useful hints for assessing fetal growth. Such measurement is performed manually in actual clinical practice, posing issues of intra- and inter-clinician variability. This work presents a fully automatic, deep-learning-based approach to HC delineation, which we named Mask-R2CNN. It advances our previous work in the field and performs HC distance-field regression in an end-to-end fashion, without requiring a priori HC localization or any postprocessing for outlier removal. METHODS Mask-R2CNN follows the Mask-RCNN architecture, with a backbone inspired by feature-pyramid networks, a region-proposal network, and ROI align. The Mask-RCNN segmentation head is here modified to regress the HC distance field. RESULTS Mask-R2CNN was tested on the HC18 Challenge dataset, which consists of 999 training and 335 testing images. With a comprehensive ablation study, we showed that Mask-R2CNN achieved a mean absolute difference of 1.95 mm (standard deviation = ±1.92 mm), outperforming other approaches in the literature. CONCLUSIONS With this work, we proposed an end-to-end model for HC distance-field regression. Our experimental results show that Mask-R2CNN may be an effective support for clinicians in assessing fetal growth.
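
The regression target the modified head learns can be built directly from a binary annotation mask. A minimal sketch, assuming an unsigned distance field that is capped and normalized to [0, 1]; the paper may define the field differently (e.g., signed or uncapped):

import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_field_target(mask, cap=20.0):
    """Per-pixel Euclidean distance to the fetal-head contour, capped
    at `cap` pixels and normalized, as a distance-field regression
    target for the segmentation head."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)    # distance to background
    outside = distance_transform_edt(~mask)  # distance to foreground
    dist = np.where(mask, inside, outside)
    return np.clip(dist, 0.0, cap) / cap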

Affiliations
- Sara Moccia: The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy; Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Emanuele Frontoni: Department of Information Engineering, Università Politecnica delle Marche, Ancona, Italy

18. Chen Z, Liu Z, Du M, Wang Z. Artificial Intelligence in Obstetric Ultrasound: An Update and Future Applications. Front Med (Lausanne) 2021; 8:733468. PMID: 34513890; PMCID: PMC8429607; DOI: 10.3389/fmed.2021.733468.
Abstract
Artificial intelligence (AI) can support clinical decisions and provide quality assurance for images. Although ultrasonography is commonly used in the field of obstetrics and gynecology, the use of AI here is still in its infancy. Nevertheless, in repetitive ultrasound examinations, such as those involving automatic positioning and identification of fetal structures, prediction of gestational age (GA), and real-time image quality assurance, AI has great potential. To realize its application, it is necessary to promote interdisciplinary communication between AI developers and sonographers. In this review, we outline the benefits of AI technology in obstetric ultrasound diagnosis by optimizing image acquisition, quantification, segmentation, and location identification, which can be helpful for obstetric ultrasound diagnosis in different periods of pregnancy.

Affiliations
- Zhiyi Chen: The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China; Institute of Medical Imaging, University of South China, Hengyang, China
- Zhenyu Liu, Ziyao Wang: The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
- Meng Du: Institute of Medical Imaging, University of South China, Hengyang, China

19. Komatsu M, Sakai A, Dozen A, Shozu K, Yasutomi S, Machino H, Asada K, Kaneko S, Hamamoto R. Towards Clinical Application of Artificial Intelligence in Ultrasound Imaging. Biomedicines 2021; 9:720. PMID: 34201827; PMCID: PMC8301304; DOI: 10.3390/biomedicines9070720.
Abstract
Artificial intelligence (AI) is being increasingly adopted in medical research and applications. Medical AI devices have continuously been approved by the Food and Drug Administration in the United States and the responsible institutions of other countries. Ultrasound (US) imaging is commonly used in an extensive range of medical fields. However, AI-based US imaging analysis and its clinical implementation have not progressed steadily compared to other medical imaging modalities. The characteristic issues of US imaging owing to its manual operation and acoustic shadows cause difficulties in image quality control. In this review, we would like to introduce the global trends of medical AI research in US imaging from both clinical and basic perspectives. We also discuss US image preprocessing, ingenious algorithms that are suitable for US imaging analysis, AI explainability for obtaining informed consent, the approval process of medical AI devices, and future perspectives towards the clinical application of AI-based US diagnostic support technologies.

Affiliations
- Masaaki Komatsu, Hidenori Machino, Ken Asada, Syuzo Kaneko: Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Akira Sakai: Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan; RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Ai Dozen, Kanto Shozu: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Suguru Yasutomi: Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan; RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ryuji Hamamoto: Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan

20. Amiri M, Brooks R, Rivaz H. Fine-Tuning U-Net for Ultrasound Image Segmentation: Different Layers, Different Outcomes. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2020; 67:2510-2518. PMID: 32763853; DOI: 10.1109/tuffc.2020.3015081.
Abstract
One way of resolving the problem of scarce and expensive data in deep learning for medical applications is using transfer learning and fine-tuning a network which has been trained on a large data set. The common practice in transfer learning is to keep the shallow layers unchanged and to modify deeper layers according to the new data set. This approach may not work when using a U-Net and when moving from a different domain to ultrasound (US) images due to their drastically different appearance. In this study, we investigated the effect of fine-tuning different sets of layers of a pretrained U-Net for US image segmentation. Two different schemes were analyzed, based on two different definitions of shallow and deep layers. We studied simulated US images, as well as two human US data sets. We also included a chest X-ray data set. The results showed that choosing which layers to fine-tune is a critical task. In particular, they demonstrated that fine-tuning the last layers of the network, which is the common practice for classification networks, is often the worst strategy. It may therefore be more appropriate to fine-tune the shallow layers rather than deep layers in US image segmentation when using a U-Net. Shallow layers learn lower level features which are critical in automatic segmentation of medical images. Even when a large US data set is available, we observed that fine-tuning shallow layers is a faster approach compared to fine-tuning the whole network.
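
The layer-selection experiments reduce to toggling requires_grad on parameter groups before building the optimizer. A minimal sketch, assuming a U-Net whose shallow encoder blocks are registered under hypothetical names like "enc1"/"enc2"; only the unfrozen subset is handed to the optimizer:

import torch

def set_trainable(model, trainable_prefixes):
    """Freeze all parameters except those whose names start with one of
    the given prefixes (e.g., the shallow encoder blocks of a U-Net)."""
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in trainable_prefixes)

# Fine-tune only the shallow layers, per the paper's finding for US images:
# set_trainable(unet, trainable_prefixes=("enc1.", "enc2."))
# optimizer = torch.optim.Adam(
#     (p for p in unet.parameters() if p.requires_grad), lr=1e-4
# )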

21. Yi J, Kang HK, Kwon JH, Kim KS, Park MH, Seong YK, Kim DW, Ahn B, Ha K, Lee J, Hah Z, Bang WC. Technology trends and applications of deep learning in ultrasonography: image quality enhancement, diagnostic support, and improving workflow efficiency. Ultrasonography 2020; 40:7-22. PMID: 33152846; PMCID: PMC7758107; DOI: 10.14366/usg.20102.
Abstract
In this review of the most recent applications of deep learning to ultrasound imaging, the architectures of deep learning networks are briefly explained for the medical imaging applications of classification, detection, segmentation, and generation. Ultrasonography applications for image processing and diagnosis are then reviewed and summarized, along with some representative imaging studies of the breast, thyroid, heart, kidney, liver, and fetal head. Efforts towards workflow enhancement are also reviewed, with an emphasis on view recognition, scanning guidance, image quality assessment, and quantification and measurement. Finally, some future prospects are presented regarding image quality enhancement, diagnostic support, and improvements in workflow efficiency, along with remarks on hurdles, benefits, and necessary collaborations.

Affiliations
- Jonghyon Yi, Ho Kyung Kang, Kang-Sik Kim, Moon Ho Park, Yeong Kyeong Seong: Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Jae-Hyun Kwon: DR Imaging R&D Lab, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Dong Woo Kim, Byungeun Ahn, Kilsu Ha: Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Jinyong Lee, Zaegyoo Hah: System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Won-Chul Bang: Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seoul, Korea; Product Strategy Team, Samsung Medison Co., Ltd., Seoul, Korea