1
Tonni G, Grisolia G, Tonni S, Lacerda VA, Ruano R, Sepulveda W. Fetal Face: Enhancing 3D Ultrasound Imaging by Postprocessing With AI Applications: Myth, Reality, or Legal Concerns? J Clin Ultrasound 2025; 53:562-567. [PMID: 39450521] [DOI: 10.1002/jcu.23870]
Abstract
The use of artificial intelligence (AI) platforms is revolutionizing the management of metadata and big data, and medicine is another field in which AI is spreading. However, this technological advancement is not exempt from errors or fraudulent misconduct. International organizations and, more recently, the European Union have released principles and recommendations for the appropriate use of AI in healthcare. In prenatal ultrasound diagnosis, the use of AI in daily practice is having a revolutionary impact. Nevertheless, this diagnostic enhancement should be regulated, and AI applications should be developed to guarantee correct image acquisition and subsequent postprocessing.
Affiliation(s)
- G Tonni: Department of Obstetrics and Neonatology, Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), ASL of Reggio Emilia, Reggio Emilia, Italy
- G Grisolia: Department of Obstetrics and Gynecology, Carlo Poma Hospital, ASST of Mantua, Mantua, Italy
- Silvia Tonni: Viadana City Hall, Registration Office, Viadana, Mantua, Italy
- Valter Andrade Lacerda: Department of Obstetrics and Gynecology, Faculty of Medical Sciences Unicamp, Campinas, Brazil
- Rodrigo Ruano: Division of Fetal Medicine, Department of Obstetric, Gynecology and Reproductive Sciences, University of Miami Miller School of Medicine, Miami, Florida, USA
2
Bai J, Zhou Z, Ou Z, Koehler G, Stock R, Maier-Hein K, Elbatel M, Martí R, Li X, Qiu Y, Gou P, Chen G, Zhao L, Zhang J, Dai Y, Wang F, Silvestre G, Curran K, Sun H, Xu J, Cai P, Jiang L, Lan L, Ni D, Zhong M, Chen G, Campello VM, Lu Y, Lekadir K. PSFHS challenge report: Pubic symphysis and fetal head segmentation from intrapartum ultrasound images. Med Image Anal 2025; 99:103353. [PMID: 39340971] [DOI: 10.1016/j.media.2024.103353]
Abstract
Segmentation of fetal and maternal structures, particularly in intrapartum ultrasound imaging as advocated by the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) for monitoring labor progression, is a crucial first step for quantitative diagnosis and clinical decision-making. This requires specialized analysis by obstetrics professionals, in a task that (i) is highly time- and cost-consuming and (ii) often yields inconsistent results. The utility of automatic segmentation algorithms for biometry has been proven, though existing results remain suboptimal. To push forward advancements in this area, the Grand Challenge on Pubic Symphysis-Fetal Head Segmentation (PSFHS) was held alongside the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023). This challenge aimed to enhance the development of automatic segmentation algorithms at an international scale, providing the largest dataset to date, with 5,101 intrapartum ultrasound images collected from two ultrasound machines across three hospitals from two institutions. The scientific community's enthusiastic participation led to the selection of the top 8 out of 179 entries from 193 registrants in the initial phase to proceed to the competition's second stage. These algorithms have elevated the state of the art in automatic PSFHS from intrapartum ultrasound images. A thorough analysis of the results pinpointed ongoing challenges in the field and outlined recommendations for future work. The top solutions and the complete dataset remain publicly available, fostering further advancements in automatic segmentation and biometry for intrapartum ultrasound imaging.
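Challenges of this kind are commonly scored with overlap metrics such as the Dice similarity coefficient. The sketch below is a generic illustration of that metric for a two-class label map; it is not code from the PSFHS challenge or its top solutions.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, label: int) -> float:
    """Dice similarity coefficient for one class label in two label maps."""
    p = (pred == label)
    t = (target == label)
    intersection = np.logical_and(p, t).sum()
    denom = p.sum() + t.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: 0 = background, 1 = pubic symphysis, 2 = fetal head.
pred   = np.array([[0, 1, 1], [2, 2, 0], [2, 2, 0]])
target = np.array([[0, 1, 0], [2, 2, 0], [2, 2, 2]])
for cls, name in [(1, "pubic symphysis"), (2, "fetal head")]:
    print(f"{name}: Dice = {dice_score(pred, target, cls):.3f}")
```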
Affiliation(s)
- Jieyun Bai: Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China; Auckland Bioengineering Institute, The University of Auckland, Private Bag 92019, Auckland 1142, New Zealand
- Zihao Zhou: Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China
- Zhanhong Ou: Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China
- Gregor Koehler: Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Raphael Stock: Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Klaus Maier-Hein: Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Marawan Elbatel: Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Robert Martí: Computer Vision and Robotics Group, University of Girona, Girona, Spain
- Xiaomeng Li: Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Yaoyang Qiu: Canon Medical Systems (China) Co., Ltd, Beijing, China
- Panjie Gou: Canon Medical Systems (China) Co., Ltd, Beijing, China
- Gongping Chen: College of Artificial Intelligence, Nankai University, Tianjin, China
- Lei Zhao: College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
- Jianxun Zhang: College of Artificial Intelligence, Nankai University, Tianjin, China
- Yu Dai: College of Artificial Intelligence, Nankai University, Tianjin, China
- Fangyijie Wang: School of Medicine, University College Dublin, Dublin, Ireland
- Kathleen Curran: School of Computer Science, University College Dublin, Dublin, Ireland
- Hongkun Sun: School of Statistics & Mathematics, Zhejiang Gongshang University, Hangzhou, China
- Jing Xu: School of Statistics & Mathematics, Zhejiang Gongshang University, Hangzhou, China
- Pengzhou Cai: School of Computer Science & Engineering, Chongqing University of Technology, Chongqing, China
- Lu Jiang: School of Computer Science & Engineering, Chongqing University of Technology, Chongqing, China
- Libin Lan: School of Computer Science & Engineering, Chongqing University of Technology, Chongqing, China
- Dong Ni: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound & Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging & School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Mei Zhong: NanFang Hospital of Southern Medical University, Guangzhou, China
- Gaowen Chen: Zhujiang Hospital of Southern Medical University, Guangzhou, China
- Víctor M Campello: Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Yaosheng Lu: Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Informatization, Jinan University, Guangzhou, China
- Karim Lekadir: Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
3
Rathika S, Mahendran K, Sudarsan H, Ananth SV. Novel neural network classification of maternal fetal ultrasound planes through optimized feature selection. BMC Med Imaging 2024; 24:337. [PMID: 39696025] [DOI: 10.1186/s12880-024-01453-8]
Abstract
Ultrasound (US) imaging is an essential diagnostic technique in prenatal care, enabling enhanced surveillance of fetal growth and development. Fetal ultrasonography standard planes are crucial for evaluating fetal development parameters and detecting abnormalities. Real-time imaging, low cost, non-invasiveness, and accessibility make US imaging indispensable in clinical practice. However, acquiring fetal US planes with the correct fetal anatomical features is a difficult and time-consuming task, even for experienced sonographers. Medical imaging using AI shows promise for addressing these challenges. In response, a Deep Learning (DL)-based automated categorization method for maternal-fetal US planes is introduced to enhance detection efficiency and diagnostic accuracy. This paper presents a hybrid optimization technique for feature selection and introduces a novel Radial Basis Function Neural Network (RBFNN) for reliable maternal-fetal US plane classification. A large dataset of maternal-fetal screening US images was collected from publicly available sources and categorized into six groups: the four fetal anatomical planes, the mother's cervix, and an additional category. Feature extraction is performed using the Gray-Level Co-occurrence Matrix (GLCM), and optimization methods such as Particle Swarm Optimization (PSO), Grey Wolf Optimization (GWO), and a hybrid Particle Swarm Optimization and Grey Wolf Optimization (PSOGWO) approach are used to select the most relevant features. The optimized features from each algorithm are then input into both conventional and proposed DL models. Experimental results indicate that the proposed approach surpasses conventional DL models in performance. Furthermore, the proposed model is evaluated against previously published models, showcasing its superior classification accuracy. In conclusion, our proposed approach provides a solid foundation for automating the classification of fetal US planes, leveraging optimization and DL techniques to enhance prenatal diagnosis and care.
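As a rough illustration of the GLCM feature-extraction step named above, the following sketch computes a small texture-feature vector with scikit-image. The distances, angles, and properties chosen here are assumptions for demonstration, not the study's configuration, and the PSO/GWO selection and RBFNN stages are not reproduced.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image_u8: np.ndarray) -> np.ndarray:
    """Compute a small GLCM texture-feature vector from an 8-bit grayscale image."""
    glcm = graycomatrix(
        image_u8,
        distances=[1, 2],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,
        symmetric=True,
        normed=True,
    )
    props = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Toy stand-in for a fetal ultrasound plane (replace with a real grayscale image).
fake_scan = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
features = glcm_features(fake_scan)
print(features.shape)  # (40,) = 5 properties x 2 distances x 4 angles
```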
Affiliation(s)
- S Rathika: Prince Shri Venkateshwara Padmavathy Engineering College, Chennai, India
- K Mahendran: Saveetha Engineering College, Chennai, India
- H Sudarsan: K. Ramakrishnan College of Engineering, Trichy, India
4
Bhati D, Neha F, Amiruzzaman M. A Survey on Explainable Artificial Intelligence (XAI) Techniques for Visualizing Deep Learning Models in Medical Imaging. J Imaging 2024; 10:239. [PMID: 39452402] [PMCID: PMC11508748] [DOI: 10.3390/jimaging10100239]
Abstract
The combination of medical imaging and deep learning has significantly improved diagnostic and prognostic capabilities in the healthcare domain. Nevertheless, the inherent complexity of deep learning models poses challenges in understanding their decision-making processes. Interpretability and visualization techniques have emerged as crucial tools to unravel the black-box nature of these models, providing insights into their inner workings and enhancing trust in their predictions. This survey paper comprehensively examines various interpretation and visualization techniques applied to deep learning models in medical imaging. The paper reviews methodologies, discusses their applications, and evaluates their effectiveness in enhancing the interpretability, reliability, and clinical relevance of deep learning models in medical image analysis.
Affiliation(s)
- Deepshikha Bhati: Department of Computer Science, Kent State University, Kent, OH 44242, USA
- Fnu Neha: Department of Computer Science, Kent State University, Kent, OH 44242, USA
- Md Amiruzzaman: Department of Computer Science, West Chester University, West Chester, PA 19383, USA
5
Liu Z, Yu W, Wu X, Yang T, Lyu G, Liu P, Xue H. Detection of fetal facial anatomy in standard ultrasonographic sections based on real-time target detection network. Int J Gynaecol Obstet 2024; 165:916-928. [PMID: 37807664] [DOI: 10.1002/ijgo.15145]
Abstract
At present, prenatal ultrasound is one of the important means of screening for fetal malformations. In prenatal ultrasound diagnosis, accurate recognition of the fetal facial ultrasound standard plane is crucial for facial malformation detection and disease screening. Because structures in fetal facial images are densely distributed, lack obvious contour boundaries, occupy small areas, and have detection boxes with large mutual overlap, this paper treats fetal facial standard plane and structure recognition as a general target detection task for the first time and applies the real-time YOLOv5s model to fetal facial ultrasound standard plane structure detection and classification. First, we detect the structures in a single plane, taking the structures of one plane class as the recognition objects. Second, we carry out structure detection experiments on three standard planes; then, building on the previous stage, images of all parts included in the ultrasound examinations of multiple fetuses were collected. In the single-class structure detection experiment and the structure detection and classification experiment on the three types of standard planes, the model achieved strong overall recognition performance, with precision of 98.3% and 98.1% and recall of 99.3% and 98.2%, respectively. The experimental results show that the model can identify fetal facial anatomy and standard sections across different data, which can help physicians automatically and quickly screen the standard sections of each fetal facial ultrasound.
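For orientation, a YOLOv5-family detector is typically run on a single frame as in the sketch below; this uses the public Ultralytics YOLOv5s model with generic COCO weights and a placeholder image path, not the authors' fetal-facial model or data.

```python
import torch

# Load the small YOLOv5 variant from the public Ultralytics hub (COCO-pretrained).
# A fetal-facial detector would require custom training data and labels.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.25  # confidence threshold for kept detections

# Run inference on an arbitrary image path (placeholder filename).
results = model("ultrasound_frame.png")

# Boxes as a pandas DataFrame: xmin, ymin, xmax, ymax, confidence, class, name.
print(results.pandas().xyxy[0])
```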
Affiliation(s)
- Zhonghua Liu: Department of Ultrasound, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou, China
- Weifeng Yu: Department of Ultrasound, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou, China
- Xiuming Wu: Department of Ultrasound, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou, China
- Tong Yang: School of Medicine, Huaqiao University, Quanzhou, Fujian, China
- Guorong Lyu: Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, China; Collaborative Innovation Center for Maternal and Infant Health Service Application Technology, Quanzhou Medical College, Quanzhou, China
- Peizhong Liu: School of Medicine, Huaqiao University, Quanzhou, Fujian, China; College of Engineering, Huaqiao University, Quanzhou, Fujian, China
- Hao Xue: College of Engineering, Huaqiao University, Quanzhou, Fujian, China
6
Alasmawi H, Bricker L, Yaqub M. FUSC: Fetal Ultrasound Semantic Clustering of Second-Trimester Scans Using Deep Self-Supervised Learning. Ultrasound Med Biol 2024; 50:703-711. [PMID: 38350787] [DOI: 10.1016/j.ultrasmedbio.2024.01.010]
Abstract
OBJECTIVE The aim of this study was to address the challenges posed by the manual labeling of fetal ultrasound images by introducing an unsupervised approach, the fetal ultrasound semantic clustering (FUSC) method. The primary objective was to automatically cluster a large volume of ultrasound images into various fetal views, reducing or eliminating the need for labor-intensive manual labeling. METHODS The FUSC method was developed using a substantial data set comprising 88,063 images. The methodology involves an unsupervised clustering approach to categorize ultrasound images into diverse fetal views. The method's effectiveness was further evaluated on an additional, unseen data set consisting of 8187 images. The evaluation included assessment of clustering purity, and the entire process is detailed to provide insights into the method's performance. RESULTS The FUSC method exhibited notable success, achieving >92% clustering purity on the evaluation data set of 8187 images. The results signify the feasibility of automatically clustering fetal ultrasound images without relying on manual labeling. The study showcases the potential of this approach in handling the large volume of ultrasound scans encountered in clinical practice, with implications for improving efficiency and accuracy in fetal ultrasound imaging. CONCLUSION The findings of this investigation suggest that the FUSC method holds significant promise for the field of fetal ultrasound imaging. By automating the clustering of ultrasound images, this approach has the potential to reduce the manual labeling burden, making the process more efficient. The results pave the way for advanced automated labeling solutions, contributing to the enhancement of clinical practices in fetal ultrasound imaging. Our code is available at https://github.com/BioMedIA-MBZUAI/FUSC.
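Clustering purity, the metric reported above, assigns each cluster its majority ground-truth view and measures the resulting fraction of correctly grouped images; a minimal illustrative sketch (not the FUSC code) follows.

```python
import numpy as np

def clustering_purity(cluster_ids: np.ndarray, true_labels: np.ndarray) -> float:
    """Purity: each cluster votes for its majority ground-truth label."""
    total_correct = 0
    for c in np.unique(cluster_ids):
        members = true_labels[cluster_ids == c]
        # Count how many members carry the cluster's most common true label.
        _, counts = np.unique(members, return_counts=True)
        total_correct += counts.max()
    return total_correct / len(true_labels)

# Toy example: 3 clusters over 8 scans labeled with 3 fetal views.
clusters = np.array([0, 0, 0, 1, 1, 2, 2, 2])
views = np.array(["head", "head", "abdomen", "femur",
                  "femur", "abdomen", "abdomen", "head"])
print(f"purity = {clustering_purity(clusters, views):.3f}")  # 6/8 = 0.750
```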
Affiliation(s)
- Hussain Alasmawi: Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Leanne Bricker: Abu Dhabi Health Services Company (SEHA), Abu Dhabi, United Arab Emirates
- Mohammad Yaqub: Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
7
Lei T, Feng JL, Lin MF, Xie BH, Zhou Q, Wang N, Zheng Q, Yang YD, Guo HM, Xie HN. Development and validation of an artificial intelligence assisted prenatal ultrasonography screening system for trainees. Int J Gynaecol Obstet 2024; 165:306-317. [PMID: 37789758] [DOI: 10.1002/ijgo.15167]
Abstract
OBJECTIVE Fetal anomaly screening via ultrasonography, which involves capturing and interpreting standard views, is highly challenging for inexperienced operators. We aimed to develop and validate a prenatal-screening artificial intelligence system (PSAIS) for real-time evaluation of the quality of anatomical images, indicating existing and missing structures. METHODS Still ultrasonographic images obtained from fetuses of 18-32 weeks of gestation between 2017 and 2018 were used to develop PSAIS based on YOLOv3 with global (anatomic site) and local (structures) feature extraction that could evaluate the image quality and indicate existing and missing structures in the fetal anatomical images. The performance of the PSAIS in recognizing 19 standard views was evaluated using retrospective real-world fetal scan video validation datasets from four hospitals. We stratified sampled frames (standard, similar-to-standard, and background views at approximately 1:1:1) for experts to blindly verify the results. RESULTS The PSAIS was trained using 134 696 images and validated using 836 videos with 12 697 images. For internal and external validations, the multiclass macro-average areas under the receiver operating characteristic curve were 0.943 (95% confidence interval [CI], 0.815-1.000) and 0.958 (0.864-1.000); the micro-average areas were 0.974 (0.970-0.979) and 0.973 (0.965-0.981), respectively. For similar-to-standard views, the PSAIS accurately labeled 90.9% (90.0%-91.4%) with key structures and indicated missing structures. CONCLUSIONS An artificial intelligence system developed to assist trainees in fetal anomaly screening demonstrated high agreement with experts in standard view identification.
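The macro- and micro-averaged AUCs quoted above are standard multiclass summaries; the sketch below shows one way to compute them with scikit-learn on made-up predictions. It does not use PSAIS outputs or the study's 19-view label set.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

# Made-up ground truth (3 view classes) and predicted class probabilities.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_prob = np.array([
    [0.8, 0.1, 0.1], [0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.3, 0.5, 0.2],
    [0.1, 0.2, 0.7], [0.2, 0.2, 0.6], [0.3, 0.3, 0.4], [0.4, 0.4, 0.2],
])

# Macro average: one-vs-rest AUC per class, then an unweighted mean.
macro_auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")

# Micro average: pool all class indicators and scores, then one single AUC.
y_bin = label_binarize(y_true, classes=[0, 1, 2])
micro_auc = roc_auc_score(y_bin.ravel(), y_prob.ravel())

print(f"macro AUC = {macro_auc:.3f}, micro AUC = {micro_auc:.3f}")
```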
Affiliation(s)
- Ting Lei: Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Jie Ling Feng: Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Mei Fang Lin: Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Bai Hong Xie: Guangzhou Aiyunji Information Technology Co., Ltd, Guangzhou, Guangdong, China
- Qian Zhou: Clinical Trials Unit, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Nan Wang: Guangzhou Aiyunji Information Technology Co., Ltd, Guangzhou, Guangdong, China
- Qiao Zheng: Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Yan Dong Yang: Department of Ultrasonic Medicine, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Hong Mei Guo: Department of Ultrasonic Medicine, DongGuan City Maternal and Child Health Hospital, DongGuan, China
- Hong Ning Xie: Department of Ultrasonic Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
8
Padovani P, Singh Y, Pass RH, Vasile CM, Nield LE, Baruteau AE. E-Health: A Game Changer in Fetal and Neonatal Cardiology? J Clin Med 2023; 12:6865. [PMID: 37959330] [PMCID: PMC10650296] [DOI: 10.3390/jcm12216865]
Abstract
Technological advancements have greatly impacted the healthcare industry, including the integration of e-health in pediatric cardiology. The use of telemedicine, mobile health applications, and electronic health records have demonstrated a significant potential to improve patient outcomes, reduce healthcare costs, and enhance the quality of care. Telemedicine provides a useful tool for remote clinics, follow-up visits, and monitoring for infants with congenital heart disease, while mobile health applications enhance patient and parents' education, medication compliance, and in some instances, remote monitoring of vital signs. Despite the benefits of e-health, there are potential limitations and challenges, such as issues related to availability, cost-effectiveness, data privacy and security, and the potential ethical, legal, and social implications of e-health interventions. In this review, we aim to highlight the current application and perspectives of e-health in the field of fetal and neonatal cardiology, including expert parents' opinions.
Affiliation(s)
- Paul Padovani: CHU Nantes, Department of Pediatric Cardiology and Pediatric Cardiac Surgery, FHU PRECICARE, Nantes Université, 44000 Nantes, France; CHU Nantes, INSERM, CIC FEA 1413, Nantes Université, 44000 Nantes, France
- Yogen Singh: Division of Neonatology, Department of Pediatrics, Loma Linda University School of Medicine, Loma Linda, CA 92354, USA; Division of Neonatal and Developmental Medicine, Department of Pediatrics, Stanford University School of Medicine, Stanford, CA 94305, USA
- Robert H. Pass: Department of Pediatric Cardiology, Mount Sinai Kravis Children’s Hospital, New York, NY 10029, USA
- Corina Maria Vasile: Department of Pediatric and Adult Congenital Cardiology, University Hospital of Bordeaux, 33600 Bordeaux, France
- Lynne E. Nield: Division of Cardiology, Labatt Family Heart Centre, The Hospital for Sick Children, University of Toronto, Toronto, ON M5S 1A1, Canada; Sunnybrook Health Sciences Centre, Toronto, ON M4N 3M5, Canada
- Alban-Elouen Baruteau: CHU Nantes, Department of Pediatric Cardiology and Pediatric Cardiac Surgery, FHU PRECICARE, Nantes Université, 44000 Nantes, France; CHU Nantes, INSERM, CIC FEA 1413, Nantes Université, 44000 Nantes, France; CHU Nantes, CNRS, INSERM, L’Institut du Thorax, Nantes Université, 44000 Nantes, France; INRAE, UMR 1280, PhAN, Nantes Université, 44000 Nantes, France
9
Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298] [PMCID: PMC10649694] [DOI: 10.3390/jcm12216833]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred method. It is considered cost-effective and easily accessible but is time-consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to overview recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with full-text copies were assigned to the OB/GYN sections and their research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, and neurosonography, as well as the identification of adnexal and breast masses and assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni: Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany; Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
10
Ghabri H, Alqahtani MS, Ben Othman S, Al-Rasheed A, Abbas M, Almubarak HA, Sakli H, Abdelkarim MN. Transfer learning for accurate fetal organ classification from ultrasound images: a potential tool for maternal healthcare providers. Sci Rep 2023; 13:17904. [PMID: 37863944] [PMCID: PMC10589237] [DOI: 10.1038/s41598-023-44689-0]
Abstract
Ultrasound imaging is commonly used to monitor fetal development. It has the advantage of being real-time, low-cost, non-invasive, and easy to use. However, fetal organ detection is a challenging task for obstetricians; it depends on several factors, such as the position of the fetus, the habitus of the mother, and the imaging technique. In addition, image interpretation must be performed by a trained healthcare professional who can take into account all relevant clinical factors. Artificial intelligence is playing an increasingly important role in medical imaging and can help solve many of the challenges associated with fetal organ classification. In this paper, we propose a deep-learning model for automating fetal organ classification from ultrasound images. We trained and tested the model on a dataset of fetal ultrasound images, including two datasets from different regions recorded with different machines, to ensure effective detection of fetal organs. Training was performed on a labeled dataset with annotations for fetal organs such as the brain, abdomen, femur, and thorax, as well as the maternal cervix. The model was trained to detect these organs from fetal ultrasound images using a deep convolutional neural network architecture. Following training, the model, DenseNet169, was assessed on a separate test dataset. The results were promising, with an accuracy of 99.84%, an F1 score of 99.84%, and an AUC of 98.95%. Our study showed that the proposed model outperformed traditional methods that rely on manual interpretation of ultrasound images by experienced clinicians, as well as other deep learning-based methods that used different network architectures and training strategies. This study may contribute to the development of more accessible and effective maternal health services around the world and improve the health status of mothers and their newborns worldwide.
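A transfer-learning setup of the kind described, reusing an ImageNet-pretrained DenseNet169 and replacing only its classifier head, might look like the following sketch; the class count, freezing strategy, and training settings are assumptions for illustration rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # e.g. brain, abdomen, femur, thorax, maternal cervix, other (assumed)

# Start from ImageNet weights and swap the classification head.
model = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

# Optionally freeze the feature extractor and train only the new head.
for param in model.features.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 3-channel images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy batch loss: {loss.item():.4f}")
```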
Affiliation(s)
- Haifa Ghabri: MACS Laboratory, National Engineering School of Gabes, University of Gabes, 6029 Gabès, Tunisia
- Mohammed S Alqahtani: Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, 61421 Abha, Saudi Arabia; BioImaging Unit, Space Research Centre, Michael Atiyah Building, University of Leicester, Leicester, LE17RH, UK
- Soufiene Ben Othman: PRINCE Laboratory Research, ISITcom, Hammam Sousse, University of Sousse, Sousse, Tunisia
- Amal Al-Rasheed: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, 11671 Riyadh, Saudi Arabia
- Mohamed Abbas: Electrical Engineering Department, College of Engineering, King Khalid University, 61421 Abha, Saudi Arabia
- Hassan Ali Almubarak: Division of Radiology, Department of Medicine, College of Medicine and Surgery, King Khalid University (KKU), Abha, Aseer, Saudi Arabia
- Hedi Sakli: EITA Consulting, 5 Rue Du Chant des Oiseaux, 78360 Montesson, France
11
Zeng P, Liu S, He S, Zheng Q, Wu J, Liu Y, Lyu G, Liu P. TUSPM-NET: A multi-task model for thyroid ultrasound standard plane recognition and detection of key anatomical structures of the thyroid. Comput Biol Med 2023; 163:107069. [PMID: 37364531] [DOI: 10.1016/j.compbiomed.2023.107069]
Abstract
The thyroid gland is a vital gland located in the anterior part of the neck. Ultrasound imaging of the thyroid gland is a non-invasive and widely used technique for diagnosing nodular growth, inflammation, and enlargement of the thyroid gland. In ultrasonography, the acquisition of ultrasound standard planes is crucial for disease diagnosis. However, the acquisition of standard planes in ultrasound examinations can be subjective and laborious and relies heavily on the sonographer's clinical experience. To overcome these challenges, we design a multi-task model, the TUSP Multi-task Network (TUSPM-NET), that can recognize the Thyroid Ultrasound Standard Plane (TUSP) and detect key anatomical structures in TUSPs in real time. To improve TUSPM-NET's accuracy and learn prior knowledge in medical images, we propose a plane target classes loss function and a plane targets position filter. Additionally, we collected 9778 TUSP images of 8 standard planes to train and validate the model. Experiments have shown that TUSPM-NET can accurately detect anatomical structures in TUSPs and recognize TUSP images. Compared with the current best-performing models, TUSPM-NET's object detection mAP@0.5:0.95 improves by 9.3%, and the precision and recall of plane recognition improve by 3.49% and 4.39%, respectively. Furthermore, TUSPM-NET recognizes and detects a TUSP image in just 19.9 ms, which means the method is well suited to the needs of real-time clinical scanning.
Affiliation(s)
- Pan Zeng: School of Medicine, Huaqiao University, Quanzhou, 362021, China
- Shunlan Liu: Department of Ultrasonics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Shaozheng He: Department of Ultrasonics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Qingyu Zheng: Department of Ultrasonics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Jiaxiang Wu: Quanzhou Medical College, Quanzhou, 362000, China
- Yao Liu: College of Science and Engineering, National Quemoy University, Kinmen, 89250, Taiwan
- Guorong Lyu: Department of Ultrasonics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China; Quanzhou Medical College, Quanzhou, 362000, China
- Peizhong Liu: School of Medicine, Huaqiao University, Quanzhou, 362021, China; Quanzhou Medical College, Quanzhou, 362000, China; College of Engineering, Huaqiao University, Quanzhou, 362021, China
12
Zhou W, Qin C, Chang J, Liu Y, Chen Y, Feng M, Wang R, Yang W, Yao J. Standardized measurement of mid-surface shift of brain based on deep Hough transform. Comput Med Imaging Graph 2023; 108:102284. [PMID: 37567044] [DOI: 10.1016/j.compmedimag.2023.102284]
Abstract
The measurement of mid-surface shift (MSS), the geometric displacement between the actual mid-surface and the ideal midsagittal plane (iMSP), is of great significance for accurate diagnosis, treatment and prognosis of patients with intracranial hemorrhage (ICH). Most previous studies are subject to inherent inaccuracy on account of calculating midline shift (MLS) based on 2D slices and ignoring pathological conditions. In this study, we propose a novel standardized measurement model to quantify the distance and the overall volume of mid-surface shift (MSS-D, MSS-V). Our work has four highlights. First, we develop an end-to-end network architecture with multiple sub-tasks including the actual mid-surface segmentation, hematoma segmentation and iMSP detection, which significantly improves the efficiency and accuracy of MSS measurement by taking advantage of the common properties among tasks. Second, an efficient iMSP detection scheme is proposed based on the differentiable deep Hough transform (DHT), which converts and simplifies the plane detection problem in the image space into a keypoint detection problem in the Hough space. Third, we devise a sparse DHT strategy and a weighted least square (WLS) method to increase the sparsity of features, improving inference speed and greatly reducing computation cost. Fourth, we design a joint loss function to comprehensively consider the correlation of features between multi-tasks and multi-domains. Extensive validation on our large in-house dataset (519 patients) and the public CQ500 dataset (491 patients), demonstrates the superiority of our method over the state-of-the-art methods.
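The weighted least squares (WLS) step mentioned above has a simple closed form; the sketch below fits a plane z = ax + by + c to weighted 3D points with NumPy, purely to illustrate generic WLS fitting. It is not the paper's implementation and omits the deep Hough transform pipeline.

```python
import numpy as np

def weighted_plane_fit(points: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Fit z = a*x + b*y + c by weighted least squares; returns (a, b, c)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    sw = np.sqrt(weights)  # scaling rows by sqrt(w_i) minimizes sum_i w_i * r_i^2
    params, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
    return params

# Noisy samples of the plane z = 0.5x - 0.2y + 3, with per-point confidences as weights.
rng = np.random.default_rng(0)
xy = rng.uniform(-10, 10, size=(200, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 3 + rng.normal(0, 0.1, 200)
points = np.column_stack([xy, z])
weights = rng.uniform(0.5, 1.0, 200)
print(weighted_plane_fit(points, weights))  # approximately [0.5, -0.2, 3.0]
```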
Affiliation(s)
- Wenxue Zhou: Tencent AI Lab, Shenzhen, China; Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Jianbo Chang: Peking Union Medical College Hospital, Beijing, China
- Yihao Chen: Peking Union Medical College Hospital, Beijing, China
- Ming Feng: Peking Union Medical College Hospital, Beijing, China
- Renzhi Wang: Peking Union Medical College Hospital, Beijing, China
- Wenming Yang: Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
13
Xiao S, Zhang J, Zhu Y, Zhang Z, Cao H, Xie M, Zhang L. Application and Progress of Artificial Intelligence in Fetal Ultrasound. J Clin Med 2023; 12:3298. [PMID: 37176738] [PMCID: PMC10179567] [DOI: 10.3390/jcm12093298]
Abstract
Prenatal ultrasonography is the most crucial imaging modality during pregnancy. However, problems such as high fetal mobility, excessive maternal abdominal wall thickness, and inter-observer variability limit the development of traditional ultrasound in clinical applications. The combination of artificial intelligence (AI) and obstetric ultrasound may help optimize fetal ultrasound examination by shortening the examination time, reducing the physician's workload, and improving diagnostic accuracy. AI has been successfully applied to automatic fetal ultrasound standard plane detection, biometric parameter measurement, and disease diagnosis to facilitate conventional imaging approaches. In this review, we attempt to thoroughly review the applications and advantages of AI in prenatal fetal ultrasound and discuss the challenges and promises of this new field.
Affiliation(s)
- Sushan Xiao, Junmin Zhang, Ye Zhu, Zisang Zhang, Haiyan Cao, Mingxing Xie, Li Zhang: Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
14
Xue H, Yu W, Liu Z, Liu P. Early Pregnancy Fetal Facial Ultrasound Standard Plane-Assisted Recognition Algorithm. J Ultrasound Med 2023. [PMID: 36896480] [DOI: 10.1002/jum.16209]
Abstract
OBJECTIVES Ultrasound screening during early pregnancy is vital in preventing congenital disabilities. For example, nuchal translucency (NT) thickening is associated with fetal chromosomal abnormalities, particularly trisomy 21, and with fetal heart malformations. Obtaining accurate ultrasound standard planes of the fetal face during early pregnancy is the key to subsequent biometry and disease diagnosis. Therefore, we propose a lightweight target detection network for early pregnancy fetal facial ultrasound standard plane recognition and quality assessment. METHODS First, a clinical control protocol was developed by ultrasound experts. Second, we constructed a YOLOv4 target detection algorithm with GhostNet as the backbone network and added the attention mechanisms CBAM and CA to the backbone and neck structures. Finally, key anatomical structures in the image were automatically scored according to the clinical control protocol to determine whether the plane was standard. RESULTS We reviewed other detection techniques and found that the proposed method performed well. The average recognition accuracy for six structures was 94.16%, the detection speed was 51 FPS, and the model size was 43.2 MB, an 83% reduction compared with the original YOLOv4 model. The precision for the standard median sagittal plane was 97.20%, and the accuracy for the standard retro-nasal triangle view was 99.07%. CONCLUSIONS The proposed method can better identify standard and non-standard planes from ultrasound image data, providing a theoretical basis for the automatic acquisition of standard planes in the prenatal diagnosis of early pregnancy fetuses.
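CBAM, one of the attention mechanisms named above, combines channel attention with spatial attention; the sketch below is a compact, generic PyTorch implementation for illustration only and is not the authors' YOLOv4/GhostNet code.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention (Woo et al., 2018)."""
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled channel descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        attn = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(attn))

feat = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```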
Affiliation(s)
- Hao Xue: College of Engineering, Huaqiao University, Quanzhou, China
- Weifeng Yu: Department of Ultrasound, The First Hospital of Quanzhou Affiliated to Fujian Medical University, Quanzhou, China
- Zhonghua Liu: Department of Ultrasound, The First Hospital of Quanzhou Affiliated to Fujian Medical University, Quanzhou, China
- Peizhong Liu: College of Engineering, Huaqiao University, Quanzhou, China
15
Fiorentino MC, Villani FP, Di Cosmo M, Frontoni E, Moccia S. A review on deep-learning algorithms for fetal ultrasound-image analysis. Med Image Anal 2023; 83:102629. [PMID: 36308861] [DOI: 10.1016/j.media.2022.102629]
Abstract
Deep-learning (DL) algorithms are becoming the standard for processing ultrasound (US) fetal images. A number of survey papers are available in the field today, but most of them focus on the broader area of medical-image analysis or do not cover all fetal US DL applications. This paper surveys the most recent work in the field, with a total of 153 research papers published after 2017. Papers are analyzed and commented on from both the methodology and the application perspective. We categorized the papers into (i) fetal standard-plane detection, (ii) anatomical structure analysis and (iii) biometry parameter estimation. For each category, the main limitations and open issues are presented. Summary tables are included to facilitate comparison among the different approaches. In addition, emerging applications are also outlined. Publicly available datasets and performance metrics commonly used to assess algorithm performance are summarized, too. This paper ends with a critical summary of the current state of the art of DL algorithms for fetal US image analysis and a discussion of the challenges that researchers working in the field must tackle to translate the research methodology into actual clinical practice.
Affiliation(s)
- Mariachiara Di Cosmo: Department of Information Engineering, Università Politecnica delle Marche, Italy
- Emanuele Frontoni: Department of Information Engineering, Università Politecnica delle Marche, Italy; Department of Political Sciences, Communication and International Relations, Università degli Studi di Macerata, Italy
- Sara Moccia: The BioRobotics Institute and Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Italy
16
Feng H, Tang Q, Yu Z, Tang H, Yin M, Wei A. A Machine Learning Applied Diagnosis Method for Subcutaneous Cyst by Ultrasonography. Oxid Med Cell Longev 2022; 2022:1526540. [PMID: 36299601] [PMCID: PMC9592196] [DOI: 10.1155/2022/1526540]
Abstract
For decades, ultrasound images have been widely used in the detection of various diseases due to their high safety and efficiency. However, reading ultrasound images requires years of experience and training. In order to support clinicians' diagnoses and reduce the workload of doctors, many ultrasonic computer-aided diagnostic systems have been proposed. In recent years, the success of deep learning in image classification and segmentation has made more and more scholars realize the potential performance improvement brought by the application of deep learning in ultrasonic computer-aided diagnosis systems. This study aims to apply several machine learning algorithms and develop a machine learning method to diagnose subcutaneous cysts. Clinical features were extracted from ultrasonography datasets and images of 132 patients from Hunan Provincial People's Hospital in China. All datasets were separated into 70% training and 30% testing sets. Four kinds of machine learning algorithms, including decision tree (DT), support vector machine (SVM), K-nearest neighbors (KNN), and neural networks (NN), were applied to determine the best performance. Across all features, SVM achieved the best performance, ranging from 91.7% to 100%. The results show that SVM achieved the highest accuracy in the diagnosis of subcutaneous cysts by ultrasonography, which provides a good reference for further application to the clinical practice of ultrasonography of subcutaneous cysts.
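A classifier comparison of the kind described, a 70/30 split evaluated across DT, SVM, KNN, and a small neural network, can be assembled in a few lines of scikit-learn; the sketch below uses a synthetic feature table as a stand-in for the study's clinical and ultrasonographic features.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in: 132 "patients", 10 extracted features, binary diagnosis.
X, y = make_classification(n_samples=132, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "NN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```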
Affiliation(s)
- Hao Feng: Department of Dermatology, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha 410005, China
- Qian Tang: Department of Dermatology, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha 410005, China
- Zhengyu Yu: Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
- Hua Tang: Department of Dermatology, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha 410005, China
- Ming Yin: Department of Dermatology, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha 410005, China
- An Wei: Department of Ultrasound, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha 410005, China
17
Fagbohungbe O, Reza SR, Dong X, Qian L. Efficient Privacy Preserving Edge Intelligent Computing Framework for Image Classification in IoT. IEEE Trans Emerg Top Comput Intell 2022. [DOI: 10.1109/tetci.2021.3111636]
Affiliation(s)
- Omobayode Fagbohungbe, Sheikh Rufsan Reza, Xishuang Dong, Lijun Qian: CREDIT Center and the Department of Electrical and Computer Engineering, Prairie View A&M University, Texas A&M University System, Prairie View, TX, USA
18
Alzubaidi M, Agus M, Alyafei K, Althelaya KA, Shah U, Abd-Alrazaq AA, Anbar M, Makhlouf M, Househ M. Towards deep observation: A systematic survey on artificial intelligence techniques to monitor fetus via Ultrasound Images. iScience 2022; 25:104713. [PMID: 35856024] [PMCID: PMC9287600] [DOI: 10.1016/j.isci.2022.104713]
Abstract
Several reviews have been conducted regarding artificial intelligence (AI) techniques to improve pregnancy outcomes, but they do not focus on ultrasound images. This survey aims to explore how AI can assist with fetal growth monitoring via ultrasound images. We reported our findings following the PRISMA guidelines and conducted a comprehensive search of eight bibliographic databases. Out of 1269 studies, 107 are included. We found that 2D ultrasound images were more popular (88) than 3D and 4D ultrasound images (19). Classification is the most used method (42), followed by segmentation (31), classification integrated with segmentation (16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (18). The most common areas that gained traction within the pregnancy domain were the fetus head (43), fetus body (31), fetus heart (13), fetus abdomen (10), and fetus face (10). This survey will promote the development of improved AI models for fetal clinical applications. Highlights: artificial intelligence studies to monitor fetal development via ultrasound images; fetal issues categorized based on four categories (general, head, heart, face, abdomen); the most used AI techniques are classification, segmentation, object detection, and reinforcement learning; the research and practical implications are included.
19
ECAU-Net: Efficient channel attention U-Net for fetal ultrasound cerebellum segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103528]
20
Lin M, He X, Guo H, He M, Zhang L, Xian J, Lei T, Xu Q, Zheng J, Feng J, Hao C, Yang Y, Wang N, Xie H. Use of real-time artificial intelligence in detection of abnormal image patterns in standard sonographic reference planes in screening for fetal intracranial malformations. Ultrasound Obstet Gynecol 2022; 59:304-316. [PMID: 34940999] [DOI: 10.1002/uog.24843]
Abstract
OBJECTIVES To develop and validate an artificial intelligence system, the Prenatal ultrasound diagnosis Artificial Intelligence Conduct System (PAICS), to detect different patterns of fetal intracranial abnormality in standard sonographic reference planes for screening for congenital central nervous system (CNS) malformations. METHODS Neurosonographic images from normal fetuses and fetuses with CNS malformations at 18-40 gestational weeks were retrieved from the databases of two tertiary hospitals in China and assigned randomly (ratio, 8:1:1) to training, fine-tuning and internal validation datasets to develop and evaluate the PAICS. The system was built based on a real-time convolutional neural network (CNN) algorithm, You Only Look Once, version 3 (YOLOv3). An image dataset from a third tertiary hospital was used to further validate, externally, the performance of the PAICS and to compare its performance with that of sonologists with different levels of expertise. Furthermore, a prospective video dataset was employed to evaluate the performance of the PAICS in a real-time scan scenario. The diagnostic accuracy, sensitivity, specificity and area under the receiver-operating-characteristics curve (AUC) were calculated to assess the performance of the PAICS and to compare this with the performance of sonologists with different levels of experience. RESULTS In total, 43 890 images from 16 297 pregnancies and 169 videos from 166 pregnancies were used to develop and validate the PAICS. The system achieved excellent performance in identifying 10 types of intracranial image pattern, with macro- and microaverage AUCs, respectively, of 0.933 (95% CI, 0.798-1.000) and 0.977 (95% CI, 0.970-0.985) for the internal validation image dataset, 0.902 (95% CI, 0.816-0.989) and 0.898 (95% CI, 0.885-0.911) for the external validation image dataset and 0.969 (95% CI, 0.886-1.000) and 0.981 (95% CI, 0.974-0.988) in the real-time scan setting. The performance of the PAICS was comparable to that of expert sonologists in terms of macro- and microaverage accuracy (P = 0.863 and P = 0.775, respectively), sensitivity (P = 0.883, P = 0.846) and AUC (P = 0.891, P = 0.788), but required significantly less time (0.025 s per image for PAICS vs 4.4 s for experts, P < 0.001). CONCLUSIONS Both in the image dataset and in the real-time scan setting, the PAICS achieved excellent diagnostic performance for various fetal CNS abnormalities. Its performance was comparable to that of experts, but it required less time. A CNN algorithm can be trained to detect fetal CNS abnormalities. The PAICS has the potential to be an effective and efficient tool in screening for fetal CNS malformations in clinical practice. © 2021 International Society of Ultrasound in Obstetrics and Gynecology.
Affiliation(s)
- M Lin: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- X He: Department of Ultrasound, Women and Children's Hospital affiliated to Xiamen University, Fujian, China
- H Guo: Department of Ultrasound, Dongguan Maternal and Child Health Hospital, Dongguan, Guangdong, China
- M He: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- L Zhang: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- J Xian: Guangzhou Aiyunji Information Technology Co., Ltd, Guangdong, China; School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
- T Lei: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Q Xu: Department of Ultrasound, Dongguan Maternal and Child Health Hospital, Dongguan, Guangdong, China
- J Zheng: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- J Feng: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- C Hao: Department of Medical Statistics & Sun Yat-sen Global Health Institute, School of Public Health and Institute of State Governance, Sun Yat-sen University, Guangzhou, Guangdong, China
- Y Yang: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- N Wang: Guangzhou Aiyunji Information Technology Co., Ltd, Guangdong, China
- H Xie: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
21
|
He F, Wang Y, Xiu Y, Zhang Y, Chen L. Artificial Intelligence in Prenatal Ultrasound Diagnosis. Front Med (Lausanne) 2021; 8:729978. [PMID: 34977053 PMCID: PMC8716504 DOI: 10.3389/fmed.2021.729978] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 11/29/2021] [Indexed: 12/12/2022] Open

Abstract
The application of artificial intelligence (AI) technology to medical imaging has resulted in great breakthroughs. Given the unique position of ultrasound (US) in prenatal screening, research on AI in prenatal US has practical significance: applying AI to prenatal US diagnosis can improve work efficiency, provide quantitative assessments, standardize measurements, improve diagnostic accuracy, and automate image quality control. This review provides an overview of recent studies that have applied AI technology to prenatal US diagnosis and explains the challenges encountered in these applications.
Collapse
Affiliation(s)
| | | | | | | | - Lizhu Chen
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
| |
Collapse
|
22
|
Li Y, Zeng G, Zhang Y, Wang J, Jin Q, Sun L, Zhang Q, Lian Q, Qian G, Xia N, Peng R, Tang K, Wang S, Wang Y. AGMB-Transformer: Anatomy-Guided Multi-Branch Transformer Network for Automated Evaluation of Root Canal Therapy. IEEE J Biomed Health Inform 2021; 26:1684-1695. [PMID: 34797767 DOI: 10.1109/jbhi.2021.3129245] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Accurate evaluation of the treatment result on X-ray images is a significant and challenging step in root canal therapy, since incorrect interpretation of the therapy result hampers timely follow-up, which is crucial to the patient's treatment outcome. Nowadays, the evaluation is performed manually, which is time-consuming, subjective, and error-prone. In this paper, we aim to automate this process by leveraging advances in computer vision and artificial intelligence to provide an objective and accurate method for root canal therapy result assessment. A novel anatomy-guided multi-branch Transformer (AGMB-Transformer) network is proposed, which first extracts a set of anatomy features and then uses them to guide a multi-branch Transformer network for evaluation. Specifically, we design a polynomial curve fitting segmentation strategy with the help of landmark detection to extract the anatomy features. Moreover, a branch fusion module and a multi-branch structure including our progressive Transformer and Group Multi-Head Self-Attention (GMHSA) are designed to focus on both global and local features for an accurate diagnosis. To facilitate the research, we have collected a large-scale root canal therapy evaluation dataset with 245 root canal therapy X-ray images, and the experimental results show that our AGMB-Transformer can improve the diagnosis accuracy from 57.96% to 90.20% compared with the baseline network. The proposed AGMB-Transformer can achieve a highly accurate evaluation of root canal therapy. To the best of our knowledge, our work is the first to perform automatic root canal therapy evaluation, and it has important clinical value in reducing the workload of endodontists.
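The anatomy-feature step described above hinges on fitting a polynomial curve through detected landmarks. A minimal NumPy sketch of that generic sub-step, with made-up landmark coordinates rather than the paper's data:

```python
# Minimal sketch: fit a polynomial curve through detected anatomical landmarks,
# in the spirit of the curve-fitting segmentation strategy described above.
# Landmark coordinates are hypothetical placeholders.
import numpy as np

landmarks = np.array([[12, 80], [20, 55], [30, 38], [42, 30], [55, 29], [70, 35]], float)
x, y = landmarks[:, 0], landmarks[:, 1]

coeffs = np.polyfit(x, y, deg=3)          # least-squares cubic fit y = p(x)
curve = np.poly1d(coeffs)

xs = np.linspace(x.min(), x.max(), 100)   # dense sampling of the fitted centre line
ys = curve(xs)
print(coeffs)
```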
Collapse
|
23
|
Xi J, Chen J, Wang Z, Ta D, Lu B, Deng X, Li X, Huang Q. Simultaneous Segmentation of Fetal Hearts and Lungs for Medical Ultrasound Images via an Efficient Multi-scale Model Integrated With Attention Mechanism. ULTRASONIC IMAGING 2021; 43:308-319. [PMID: 34470531 DOI: 10.1177/01617346211042526] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Large-scale early scanning of fetuses via ultrasound imaging is widely used to alleviate the morbidity and mortality caused by congenital anomalies of the fetal heart and lungs. To reduce the intensive labor of manual recognition of organ regions, many automatic segmentation methods have been proposed. However, existing methods still suffer from the multi-scale problem arising from the wide range of receptive fields required by organs in the images, the limited resolution of segmentation masks, and interference from task-irrelevant features, all of which hinder accurate segmentation. To achieve semantic segmentation that (1) extracts multi-scale features from images, (2) compensates for high-resolution information, and (3) eliminates task-irrelevant features, we propose a multi-scale model integrating a skip-connection framework with an attention mechanism. The multi-scale feature extraction modules are combined with additive attention gate units for irrelevant-feature elimination within a U-Net framework whose skip connections provide information compensation. The performance on fetal heart and lung segmentation indicates the superiority of our method over existing deep learning-based approaches. Our method also shows stable, competitive performance on semantic segmentation, suggesting a promising contribution to ultrasound-based prognosis of congenital anomalies in early intervention and to alleviating their negative effects.
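An additive attention gate of the kind referred to above can be sketched in a few lines of PyTorch; channel sizes are illustrative and this is not the authors' exact module:

```python
# Minimal PyTorch sketch of an additive attention gate used to suppress
# task-irrelevant skip-connection features in a U-Net-style decoder.
# Channel sizes are illustrative; spatial sizes are assumed equal here.
import torch
import torch.nn as nn

class AdditiveAttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)   # project skip features
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)   # project gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)         # scalar attention map

    def forward(self, x, g):
        # x: skip-connection features, g: coarser decoder features
        att = torch.sigmoid(self.psi(torch.relu(self.w_x(x) + self.w_g(g))))
        return x * att                                           # gate irrelevant regions

x = torch.randn(1, 64, 32, 32)   # skip features
g = torch.randn(1, 128, 32, 32)  # gating features
gate = AdditiveAttentionGate(64, 128, 32)
print(gate(x, g).shape)          # torch.Size([1, 64, 32, 32])
```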
Collapse
Affiliation(s)
- Jianing Xi
- School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an, China
| | - Jiangang Chen
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication & Electronic Engineering, East China Normal University, Shanghai, China
| | - Zhao Wang
- School of Computer Science, Northwestern Polytechnical University, Xi'an, China
| | - Dean Ta
- Department of Electronic Engineering, Fudan University, Shanghai, China
| | - Bing Lu
- Center for Medical Ultrasound, Nanjing Medical University Affiliated Suzhou Hospital, Suzhou, China
| | - Xuedong Deng
- Center for Medical Ultrasound, Nanjing Medical University Affiliated Suzhou Hospital, Suzhou, China
| | - Xuelong Li
- School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an, China
| | - Qinghua Huang
- School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an, China
| |
Collapse
|
24
|
Chen Z, Liu Z, Du M, Wang Z. Artificial Intelligence in Obstetric Ultrasound: An Update and Future Applications. Front Med (Lausanne) 2021; 8:733468. [PMID: 34513890 PMCID: PMC8429607 DOI: 10.3389/fmed.2021.733468] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 08/04/2021] [Indexed: 01/04/2023] Open
Abstract
Artificial intelligence (AI) can support clinical decisions and provide quality assurance for images. Although ultrasonography is commonly used in the field of obstetrics and gynecology, the use of AI in this field is still in its infancy. Nevertheless, AI has great potential in repetitive ultrasound examinations, such as automatic positioning and identification of fetal structures, prediction of gestational age (GA), and real-time image quality assurance. To realize its application, it is necessary to promote interdisciplinary communication between AI developers and sonographers. In this review, we outline the benefits of AI technology in obstetric ultrasound diagnosis, namely optimized image acquisition, quantification, segmentation, and location identification, which can be helpful for obstetric ultrasound diagnosis in different periods of pregnancy.
Collapse
Affiliation(s)
- Zhiyi Chen
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China; Institute of Medical Imaging, University of South China, Hengyang, China
| | - Zhenyu Liu
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
| | - Meng Du
- Institute of Medical Imaging, University of South China, Hengyang, China
| | - Ziyao Wang
- The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
| |
Collapse
|
25
|
Recognition of Fetal Facial Ultrasound Standard Plane Based on Texture Feature Fusion. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:6656942. [PMID: 34188691 PMCID: PMC8195636 DOI: 10.1155/2021/6656942] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Revised: 04/16/2021] [Accepted: 05/22/2021] [Indexed: 11/21/2022]
Abstract
In prenatal ultrasound diagnosis, accurate identification of the fetal facial ultrasound standard plane (FFUSP) is essential for accurate facial deformity detection and disease screening, such as cleft lip and palate detection and Down syndrome screening. However, the traditional way of obtaining standard planes is manual screening by doctors, and because doctors differ in experience, this often leads to large errors in the results. Therefore, in this study, we propose a texture feature fusion method (LH-SVM) for automatic recognition and classification of FFUSP. The method first extracts the image's texture features, including the Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG), then performs feature fusion, and finally adopts a Support Vector Machine (SVM) for predictive classification. We used fetal facial ultrasound images from 20 to 24 weeks of gestation as experimental data: 943 standard plane images (221 ocular axial planes (OAP), 298 median sagittal planes (MSP), and 424 nasolabial coronal planes (NCP)) plus 350 nonstandard plane (N-SP) images. Based on this dataset, we performed five-fold cross-validation. The final test results show that the accuracy of the proposed method for FFUSP classification is 94.67%, the average precision is 94.27%, the average recall is 93.88%, and the average F1 score is 94.08%. The experimental results indicate that the texture feature fusion method can effectively predict and classify FFUSP, providing an essential basis for clinical research on automatic FFUSP detection.
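A minimal sketch of an LBP + HOG + SVM pipeline of the kind described above, using scikit-image and scikit-learn on synthetic placeholder images rather than the FFUSP dataset (parameter choices are illustrative, not the paper's):

```python
# Minimal sketch: fuse LBP and HOG texture descriptors and classify with an SVM,
# with five-fold cross-validation. Images and labels are synthetic placeholders.
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def lbp_hog_features(img, P=8, R=1.0):
    lbp = local_binary_pattern(img, P, R, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    hog_vec = hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])      # fused texture descriptor

rng = np.random.default_rng(0)
images = (rng.random((80, 64, 64)) * 255).astype(np.uint8)   # stand-ins for US planes
labels = rng.integers(0, 4, size=80)                         # OAP / MSP / NCP / N-SP

X = np.stack([lbp_hog_features(im) for im in images])
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)  # five-fold CV
print(scores.mean())
```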
Collapse
|
26
|
Recognition of Thyroid Ultrasound Standard Plane Images Based on Residual Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:5598001. [PMID: 34188673 PMCID: PMC8192196 DOI: 10.1155/2021/5598001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Revised: 04/27/2021] [Accepted: 05/14/2021] [Indexed: 01/22/2023]
Abstract
Ultrasound is one of the critical methods for diagnosis and treatment in thyroid examination. In clinical practice, factors such as heavy outpatient traffic, the time-consuming training of sonographers, and the uneven professional level of physicians often cause irregularities during ultrasonic examination, leading to misdiagnosis or missed diagnosis. To standardize the thyroid ultrasound examination process, this paper proposes a deep learning method based on a residual network to recognize the Thyroid Ultrasound Standard Planes (TUSP). First, referring to multiple relevant guidelines, eight TUSP were determined with the advice of clinical ultrasound experts. A total of 5,500 TUSP images in 8 categories were collected with the approval of the Ethics Committee and the patients' informed consent. Then, after de-identifying and padding the images, an 18-layer residual network model (ResNet-18) was trained for TUSP image recognition, and five-fold cross-validation was performed. Finally, using indicators such as accuracy, we compared its recognition performance with that of other mainstream deep convolutional neural network models. Experimental results showed that ResNet-18 achieved the best recognition performance on TUSP images, with an average accuracy of 91.07%; the average macro precision, macro recall, and macro F1-score were 91.39%, 91.34%, and 91.30%, respectively. This shows that a residual network-based deep learning method can effectively recognize TUSP images, which is expected to standardize clinical thyroid ultrasound examination and reduce misdiagnosis and missed diagnosis.
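Adapting an 18-layer residual network to an 8-class standard-plane task essentially amounts to swapping the final fully connected layer. A minimal PyTorch sketch (data loading omitted; weights and hyperparameters are illustrative, not the paper's settings):

```python
# Minimal sketch: adapt ResNet-18 to 8 standard-plane classes and run one
# training step on a dummy batch. Real data loading and training loop omitted.
import torch
import torch.nn as nn
from torchvision import models

num_planes = 8
model = models.resnet18(weights=None)                  # pretrained weights could also be used
model.fc = nn.Linear(model.fc.in_features, num_planes) # replace the 1000-class head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

dummy_batch = torch.randn(4, 3, 224, 224)              # stand-in for TUSP images
logits = model(dummy_batch)
loss = criterion(logits, torch.randint(0, num_planes, (4,)))
loss.backward()
optimizer.step()
print(logits.shape)                                    # torch.Size([4, 8])
```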
Collapse
|
27
|
Khan S, Huh J, Ye JC. Variational Formulation of Unsupervised Deep Learning for Ultrasound Image Artifact Removal. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:2086-2100. [PMID: 33523809 DOI: 10.1109/tuffc.2021.3056197] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Recently, deep learning approaches have been successfully used for ultrasound (US) image artifact removal. However, paired high-quality images for supervised training are difficult to obtain in many practical situations. Inspired by the recent theory of unsupervised learning using optimal transport driven CycleGAN (OT-CycleGAN), here, we investigate the applicability of unsupervised deep learning for US artifact removal problems without matched reference data. Two types of OT-CycleGAN approaches are employed: one with the partial knowledge of the image degradation physics and the other with the lack of such knowledge. Various US artifact removal problems are then addressed using the two types of OT-CycleGAN. Experimental results for various unsupervised US artifact removal tasks confirmed that our unsupervised learning method delivers results comparable to supervised learning in many practical applications.
Collapse
|
28
|
Shen YT, Chen L, Yue WW, Xu HX. Artificial intelligence in ultrasound. Eur J Radiol 2021; 139:109717. [PMID: 33962110 DOI: 10.1016/j.ejrad.2021.109717] [Citation(s) in RCA: 95] [Impact Index Per Article: 23.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/28/2021] [Accepted: 04/11/2021] [Indexed: 12/13/2022]
Abstract
Ultrasound (US), a flexible, green imaging modality, is expanding globally as a first-line imaging technique in various clinical fields, following the continual emergence of advanced ultrasonic technologies and well-established US-based digital health systems. In US practice, qualified physicians must manually acquire and visually evaluate images for the detection, identification and monitoring of diseases, and diagnostic performance is inevitably reduced by the intrinsically high operator dependence of US. In contrast, artificial intelligence (AI) excels at automatically recognizing complex patterns and providing quantitative assessment of imaging data, showing high potential to assist physicians in obtaining more accurate and reproducible results. In this article, we first provide a general understanding of AI, machine learning (ML) and deep learning (DL) technologies. We then review the rapidly growing applications of AI, especially DL, in US by anatomical region (thyroid, breast, abdomen and pelvis, obstetrics, heart and blood vessels, musculoskeletal system and other organs), covering image quality control, anatomy localization, object detection, lesion segmentation, and computer-aided diagnosis and prognosis evaluation. Finally, we offer our perspective on the challenges and opportunities for the clinical practice of biomedical AI systems in US.
Collapse
Affiliation(s)
- Yu-Ting Shen
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
| | - Liang Chen
- Department of Gastroenterology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, PR China
| | - Wen-Wen Yue
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
| | - Hui-Xiong Xu
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
| |
Collapse
|
29
|
Xue C, Zhu L, Fu H, Hu X, Li X, Zhang H, Heng PA. Global guidance network for breast lesion segmentation in ultrasound images. Med Image Anal 2021; 70:101989. [PMID: 33640719 DOI: 10.1016/j.media.2021.101989] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2020] [Revised: 01/28/2021] [Accepted: 01/29/2021] [Indexed: 12/01/2022]
Abstract
Automatic breast lesion segmentation in ultrasound helps to diagnose breast cancer, one of the most dreadful diseases affecting women globally. Segmenting breast regions accurately from ultrasound images is a challenging task due to the inherent speckle artifacts, blurry breast lesion boundaries, and inhomogeneous intensity distributions inside the breast lesion regions. Recently, convolutional neural networks (CNNs) have demonstrated remarkable results in medical image segmentation tasks. However, the convolutional operations in a CNN focus on local regions and have limited capability to capture long-range dependencies in the input ultrasound image, resulting in degraded breast lesion segmentation accuracy. In this paper, we develop a deep convolutional neural network equipped with a global guidance block (GGB) and breast lesion boundary detection (BD) modules to boost breast ultrasound lesion segmentation. The GGB utilizes the multi-layer integrated feature map as guidance information to learn long-range non-local dependencies in both the spatial and channel domains. The BD modules learn an additional breast lesion boundary map to refine the boundary quality of the segmentation result. Experimental results on a public dataset and a collected dataset show that our network outperforms other medical image segmentation methods and recent semantic segmentation methods on breast ultrasound lesion segmentation. Moreover, we also show the application of our network to ultrasound prostate segmentation, where it identifies prostate regions better than state-of-the-art networks.
Collapse
Affiliation(s)
- Cheng Xue
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| | - Lei Zhu
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK.
| | - Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, UAE
| | - Xiaowei Hu
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| | - Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
| | - Hai Zhang
- Shenzhen People's Hospital, The Second Clinical College of Jinan University, The First Affiliated Hospital of Southern University of Science and Technology, Guangdong Province, China
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong; Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
| |
Collapse
|
30
|
Abstract
Deep learning (DL) approaches to medical image analysis tasks have recently become popular; however, they suffer from a lack of the human interpretability that is critical both for increasing understanding of the methods' operation and for enabling clinical translation. This review summarizes currently available methods for performing image model interpretation and critically evaluates published uses of these methods for medical imaging applications. We divide model interpretation into two categories: (1) understanding model structure and function and (2) understanding model output. Understanding model structure and function summarizes ways to inspect the learned features of the model and how those features act on an image. We discuss techniques for reducing the dimensionality of high-dimensional data and cover autoencoders, both of which can also be leveraged for model interpretation. Understanding model output covers attribution-based methods, such as saliency maps and class activation maps, which produce heatmaps describing the importance of different parts of an image to the model prediction. We describe the mathematics behind these methods, give examples of their use in medical imaging, and compare them against one another. We summarize several published toolkits for model interpretation specific to medical imaging applications, cover limitations of current model interpretation methods, provide recommendations for DL practitioners looking to incorporate model interpretation into their task, and offer general discussion on the importance of model interpretation in medical imaging contexts.
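A class activation map of the kind mentioned above re-weights the last convolutional feature maps with the classifier weights of the predicted class. A minimal PyTorch sketch on a stock ResNet-18 (a generic illustration, not tied to any particular study cited here):

```python
# Minimal sketch of a class activation map (CAM): the final fully connected
# weights re-weight the last convolutional feature maps to form a heatmap.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
features = {}

def hook(_, __, output):                # capture the last conv feature maps
    features["maps"] = output

model.layer4.register_forward_hook(hook)

img = torch.randn(1, 3, 224, 224)       # placeholder input image
with torch.no_grad():
    logits = model(img)
cls = logits.argmax(dim=1).item()

fmaps = features["maps"][0]             # (512, 7, 7)
weights = model.fc.weight[cls]          # (512,) weights of the predicted class
cam = torch.relu((weights[:, None, None] * fmaps).sum(0))
cam = F.interpolate(cam[None, None], size=(224, 224), mode="bilinear",
                    align_corners=False)[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalized heatmap in [0, 1]
print(cam.shape)
```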
Collapse
Affiliation(s)
- Daniel T. Huff
- Department of Medical Physics, University of Wisconsin-Madison, Madison WI
| | - Amy J. Weisman
- Department of Medical Physics, University of Wisconsin-Madison, Madison WI
| | - Robert Jeraj
- Department of Medical Physics, University of Wisconsin-Madison, Madison WI
- Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia
| |
Collapse
|
31
|
Cross-Tissue/Organ Transfer Learning for the Segmentation of Ultrasound Images Using Deep Residual U-Net. J Med Biol Eng 2021. [DOI: 10.1007/s40846-020-00585-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
32
|
Automatic Fetal Middle Sagittal Plane Detection in Ultrasound Using Generative Adversarial Network. Diagnostics (Basel) 2020; 11:diagnostics11010021. [PMID: 33374307 PMCID: PMC7824131 DOI: 10.3390/diagnostics11010021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Revised: 12/18/2020] [Accepted: 12/21/2020] [Indexed: 11/22/2022] Open
Abstract
Background and Objective: In the first trimester of pregnancy, fetal growth and abnormalities can be assessed using the exact middle sagittal plane (MSP) of the fetus. However, ultrasound (US) image quality and operator experience affect the accuracy. We present an automatic system, based on a generative adversarial network (GAN) framework, that enables precise fetal MSP detection from three-dimensional (3D) US, and we evaluate its performance. Method: The neural network is designed as a filter that generates masks to obtain the MSP, learning the features and the MSP location in 3D space. Using the proposed image analysis system, a seed point was obtained from 218 first-trimester fetal 3D US volumes using deep learning, and the MSP was automatically extracted. Results: The experimental results demonstrate the feasibility and excellent performance of the proposed approach, with close agreement between the automatically and manually detected MSPs. There was no significant difference between the semi-automatic and automatic systems. Further, inference in the automatic system was up to two times faster than in the semi-automatic approach. Conclusion: The proposed system offers precise fetal MSP measurements. Therefore, this automatic fetal MSP detection and measurement approach is anticipated to be useful clinically. The proposed system can also be applied to other relevant clinical fields in the future.
Collapse
|
33
|
Karim AM, Kaya H, Güzel MS, Tolun MR, Çelebi FV, Mishra A. A Novel Framework Using Deep Auto-Encoders Based Linear Model for Data Classification. SENSORS 2020; 20:s20216378. [PMID: 33182270 PMCID: PMC7664945 DOI: 10.3390/s20216378] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/27/2020] [Revised: 11/03/2020] [Accepted: 11/05/2020] [Indexed: 11/25/2022]
Abstract
This paper proposes a novel data classification framework combining sparse auto-encoders (SAEs) with a post-processing system consisting of a linear model fitted with the Particle Swarm Optimization (PSO) algorithm. The sensitive, high-level features are extracted by the first auto-encoder, which is wired to the second auto-encoder, followed by a Softmax layer that classifies the features obtained from the second layer. The two auto-encoders and the Softmax classifier are stacked and trained in a supervised manner using the well-known backpropagation algorithm to enhance the performance of the neural network. Afterwards, the linear model transforms the output of the deep stacked sparse auto-encoder to a value close to the anticipated output; this simple transformation increases the overall classification performance of the stacked sparse auto-encoder architecture. The PSO algorithm estimates the parameters of the linear model in a metaheuristic fashion. The proposed framework is validated on three public datasets and shows promising results compared with the current literature. Furthermore, the framework can be applied to any data classification problem with minor updates, such as altering parameters including the input features, hidden neurons and output classes.
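A minimal PyTorch sketch of stacking two auto-encoders with a softmax classifier in the spirit of the framework above; the sparsity penalty, layer-wise pretraining and the PSO-fitted linear post-processing model are omitted, and all data are synthetic placeholders:

```python
# Minimal sketch: two stacked auto-encoders feeding a softmax classifier,
# trained end-to-end with a supervised loss. Reconstruction pretraining,
# sparsity regularization and the PSO-fitted linear model are omitted.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ae1, ae2 = AutoEncoder(64, 32), AutoEncoder(32, 16)
classifier = nn.Linear(16, 3)                    # softmax over 3 classes via CrossEntropyLoss

x = torch.randn(8, 64)                           # synthetic feature vectors
y = torch.randint(0, 3, (8,))

_, z1 = ae1(x)                                   # features from the first auto-encoder
_, z2 = ae2(z1)                                  # higher-level features from the second
loss = nn.CrossEntropyLoss()(classifier(z2), y)  # supervised fine-tuning signal
loss.backward()
print(loss.item())
```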
Collapse
Affiliation(s)
- Ahmad M. Karim
- Computer Engineering Department, AYBU, Ankara 06830, Turkey
| | - Hilal Kaya
- Computer Engineering Department, AYBU, Ankara 06830, Turkey
| | | | - Mehmet R. Tolun
- Computer Engineering Department, Konya Food and Agriculture University, Konya 42080, Turkey;
| | - Fatih V. Çelebi
- Computer Engineering Department, AYBU, Ankara 06830, Turkey
| | - Alok Mishra
- Faculty of Logistics, Molde University College-Specialized University in Logistics, 6402 Molde, Norway
- Software Engineering Department, Atilim University, Ankara 06830, Turkey
| |
Collapse
|
34
|
Xie HN, Wang N, He M, Zhang LH, Cai HM, Xian JB, Lin MF, Zheng J, Yang YZ. Using deep-learning algorithms to classify fetal brain ultrasound images as normal or abnormal. ULTRASOUND IN OBSTETRICS & GYNECOLOGY : THE OFFICIAL JOURNAL OF THE INTERNATIONAL SOCIETY OF ULTRASOUND IN OBSTETRICS AND GYNECOLOGY 2020; 56:579-587. [PMID: 31909548 DOI: 10.1002/uog.21967] [Citation(s) in RCA: 69] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/11/2019] [Revised: 11/28/2019] [Accepted: 12/23/2019] [Indexed: 06/10/2023]
Abstract
OBJECTIVES To evaluate the feasibility of using deep-learning algorithms to classify sonographic images of the fetal brain obtained in standard axial planes as normal or abnormal. METHODS We included in the study images retrieved from a large hospital database from 10 251 normal and 2529 abnormal pregnancies. Abnormal cases were confirmed by neonatal ultrasound, follow-up examination or autopsy. After a series of pretraining data processing steps, 15 372 normal and 14 047 abnormal fetal brain images in standard axial planes were obtained. These were divided into training and test datasets (at case level rather than image level), at a ratio of approximately 8:2. The training data were used to train the algorithms for three purposes: performance of image segmentation along the fetal skull, classification of the image as normal or abnormal and localization of the lesion. The accuracy was then tested on the test datasets, with performance of segmentation being assessed using precision, recall and Dice's coefficient (DICE), calculated to measure the extent of overlap between human-labeled and machine-segmented regions. We assessed classification accuracy by calculating the sensitivity and specificity for abnormal images. Additionally, for 2491 abnormal images, we determined how well each lesion had been localized by overlaying heat maps created by the algorithm on the segmented ultrasound images; an expert judged how satisfactorily the algorithm had localized each lesion, classifying the localization as precise, close or irrelevant. RESULTS Segmentation precision, recall and DICE were 97.9%, 90.9% and 94.1%, respectively. For classification, the overall accuracy was 96.3%. The sensitivity and specificity for identification of abnormal images were 96.9% and 95.9%, respectively, and the area under the receiver-operating-characteristics curve was 0.989 (95% CI, 0.986-0.991). The algorithms located lesions precisely in 61.6% (1535/2491) of the abnormal images, closely in 24.6% (614/2491) and irrelevantly in 13.7% (342/2491). CONCLUSIONS Deep-learning algorithms can be trained for segmentation and classification of normal and abnormal fetal brain ultrasound images in standard axial planes and can provide heat maps for lesion localization. This study lays the foundation for further research on the differential diagnosis of fetal intracranial abnormalities. Copyright © 2020 ISUOG. Published by John Wiley & Sons Ltd.
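The segmentation metrics quoted above (precision, recall and Dice's coefficient between human-labeled and machine-segmented regions) reduce to simple set overlaps. A minimal NumPy sketch on synthetic binary masks:

```python
# Minimal sketch of segmentation precision, recall and Dice coefficient
# between a ground-truth mask and a predicted mask (synthetic examples).
import numpy as np

def seg_metrics(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / (pred.sum() + 1e-8)
    recall = tp / (gt.sum() + 1e-8)
    dice = 2 * tp / (pred.sum() + gt.sum() + 1e-8)
    return precision, recall, dice

rng = np.random.default_rng(0)
gt = rng.random((128, 128)) > 0.7          # placeholder ground-truth mask
pred = np.roll(gt, shift=2, axis=0)        # slightly shifted "prediction"
print(seg_metrics(pred, gt))
```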
Collapse
Affiliation(s)
- H N Xie
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| | - N Wang
- Guangzhou Aiyunji Information Technology Co., Ltd, Guangdong, China
| | - M He
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| | - L H Zhang
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| | - H M Cai
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
| | - J B Xian
- Guangzhou Aiyunji Information Technology Co., Ltd, Guangdong, China
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
| | - M F Lin
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| | - J Zheng
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Y Z Yang
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
| |
Collapse
|
35
|
Lu Z, Li M, Annamalai A, Yang C. Recent advances in robot‐assisted echography: combining perception, control and cognition. COGNITIVE COMPUTATION AND SYSTEMS 2020. [DOI: 10.1049/ccs.2020.0015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023] Open
Affiliation(s)
- Zhenyu Lu
- Bristol Robotics Laboratory, University of the West of England, Bristol, UK
| | - Miao Li
- School of Power and Mechanical Engineering, Wuhan University, Wuhan, People's Republic of China
| | | | - Chenguang Yang
- Bristol Robotics Laboratory, University of the West of England, Bristol, UK
| |
Collapse
|
36
|
A Survey of Deep-Learning Applications in Ultrasound: Artificial Intelligence-Powered Ultrasound for Improving Clinical Workflow. J Am Coll Radiol 2020; 16:1318-1328. [PMID: 31492410 DOI: 10.1016/j.jacr.2019.06.004] [Citation(s) in RCA: 141] [Impact Index Per Article: 28.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Revised: 05/31/2019] [Accepted: 06/03/2019] [Indexed: 02/07/2023]
Abstract
Ultrasound is the most commonly used imaging modality in clinical practice because it is a nonionizing, low-cost, and portable point-of-care imaging tool that provides real-time images. Artificial intelligence (AI)-powered ultrasound is becoming more mature and getting closer to routine clinical applications in recent times because of an increased need for efficient and objective acquisition and evaluation of ultrasound images. Because ultrasound images involve operator-, patient-, and scanner-dependent variations, the adaptation of classical machine learning methods to clinical applications becomes challenging. With their self-learning ability, deep-learning (DL) methods are able to harness exponentially growing graphics processing unit computing power to identify abstract and complex imaging features. This has given rise to tremendous opportunities such as providing robust and generalizable AI models for improving image acquisition, real-time assessment of image quality, objective diagnosis and detection of diseases, and optimizing ultrasound clinical workflow. In this report, the authors review current DL approaches and research directions in rapidly advancing ultrasound technology and present their outlook on future directions and trends for DL techniques to further improve diagnosis, reduce health care cost, and optimize ultrasound clinical workflow.
Collapse
|
37
|
Xie B, Lei T, Wang N, Cai H, Xian J, He M, Zhang L, Xie H. Computer-aided diagnosis for fetal brain ultrasound images using deep convolutional neural networks. Int J Comput Assist Radiol Surg 2020; 15:1303-1312. [PMID: 32488568 DOI: 10.1007/s11548-020-02182-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Accepted: 04/23/2020] [Indexed: 11/29/2022]
Abstract
PURPOSE Fetal brain abnormalities are some of the most common congenital malformations; they may be associated with syndromic and chromosomal malformations and can lead to neurodevelopmental delay and mental retardation. Early prenatal detection of brain abnormalities is essential for informing clinical management pathways and counseling parents. The purpose of this research is to develop computer-aided diagnosis algorithms for five common fetal brain abnormalities, which may assist doctors in detecting brain abnormalities during antenatal neurosonographic assessment. METHODS We applied a classifier to classify images of fetal brain standard planes (transventricular and transcerebellar) as normal or abnormal. The classifier was trained on image-level labeled images. In the first step, craniocerebral regions were segmented from the ultrasound images. Then, these segmentations were classified into four categories. Finally, the lesions in the abnormal images were localized by class activation mapping. RESULTS We evaluated our algorithms on real-world clinical datasets of fetal brain ultrasound images. The proposed method achieved a Dice score of 0.942 on craniocerebral region segmentation, an average F1-score of 0.96 on classification and an average mean IoU of 0.497 on lesion localization. CONCLUSION We present computer-aided diagnosis algorithms for fetal brain ultrasound images based on deep convolutional neural networks. Our algorithms could potentially be applied in diagnostic assistance and are expected to help junior doctors make clinical decisions and reduce false negatives of fetal brain abnormalities.
Collapse
Affiliation(s)
- Baihong Xie
- South China University of Technology, Guangzhou, China
| | - Ting Lei
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Zhongshan Er Road 58, Guangzhou, 510080, Guangdong, China
| | - Nan Wang
- Guangzhou Aiyunji Information Technology Co., Ltd., Guangzhou, China
| | - Hongmin Cai
- South China University of Technology, Guangzhou, China
| | - Jianbo Xian
- South China University of Technology, Guangzhou, China; Guangzhou Aiyunji Information Technology Co., Ltd., Guangzhou, China
| | - Miao He
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Zhongshan Er Road 58, Guangzhou, 510080, Guangdong, China
| | - Lihe Zhang
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Zhongshan Er Road 58, Guangzhou, 510080, Guangdong, China
| | - Hongning Xie
- Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Zhongshan Er Road 58, Guangzhou, 510080, Guangdong, China.
| |
Collapse
|
38
|
Lin H, Chen H, Graham S, Dou Q, Rajpoot N, Heng PA. Fast ScanNet: Fast and Dense Analysis of Multi-Gigapixel Whole-Slide Images for Cancer Metastasis Detection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1948-1958. [PMID: 30624213 DOI: 10.1109/tmi.2019.2891305] [Citation(s) in RCA: 60] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Lymph node metastasis is one of the most important indicators in breast cancer diagnosis and is traditionally observed under the microscope by pathologists. In recent years, with the dramatic advance of high-throughput scanning and deep learning technology, automatic analysis of histology from whole-slide images has received a wealth of interest in the field of medical image computing, with the aim of alleviating pathologists' workload while reducing the misdiagnosis rate. However, the automatic detection of lymph node metastases from whole-slide images remains a key challenge because such images are typically very large, often multiple gigabytes in size. Also, the presence of hard mimics may result in a large number of false positives. In this paper, we propose a novel method with anchor layers for model conversion, which not only leverages the efficiency of fully convolutional architectures to meet the speed requirements of clinical practice but also densely scans the whole-slide image to achieve accurate predictions of both micro- and macro-metastases. Incorporating the strategies of asynchronous sample prefetching and hard negative mining, the network can be trained effectively. The efficacy of our method is corroborated on the benchmark dataset of the 2016 Camelyon Grand Challenge. Our method achieved significant improvements over state-of-the-art methods in tumor localization accuracy at a much faster speed, and even surpassed human performance on both challenge tasks.
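Dense whole-slide scanning builds on converting a patch classifier's fully connected head into 1x1 convolutions so that a larger input yields a grid of patch predictions. A minimal PyTorch sketch of that general principle with a toy network (this illustrates the idea only, not the paper's anchor-layer design):

```python
# Minimal sketch: convert a fully connected classification head into an
# equivalent 1x1 convolution so a larger tile produces a dense prediction map.
import torch
import torch.nn as nn

backbone = nn.Sequential(                       # toy patch classifier (trained on 32x32 patches)
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(4),                            # 32 -> 8
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(8),                            # 8 -> 1
)
fc = nn.Linear(32, 2)                           # tumor / normal head

# Convert the fc layer to an equivalent 1x1 convolution.
fc_as_conv = nn.Conv2d(32, 2, kernel_size=1)
with torch.no_grad():
    fc_as_conv.weight.copy_(fc.weight.view(2, 32, 1, 1))
    fc_as_conv.bias.copy_(fc.bias)

dense_model = nn.Sequential(backbone, fc_as_conv)
wsi_tile = torch.randn(1, 3, 256, 256)          # a larger tile instead of a 32x32 patch
heatmap = dense_model(wsi_tile)                 # (1, 2, 8, 8): one prediction per patch location
print(heatmap.shape)
```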
Collapse
|
39
|
Xie H, Lei H, He Y, Lei B. Deeply supervised full convolution network for HEp-2 specimen image segmentation. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.03.067] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
40
|
van den Heuvel TLA, Petros H, Santini S, de Korte CL, van Ginneken B. Automated Fetal Head Detection and Circumference Estimation from Free-Hand Ultrasound Sweeps Using Deep Learning in Resource-Limited Countries. ULTRASOUND IN MEDICINE & BIOLOGY 2019; 45:773-785. [PMID: 30573305 DOI: 10.1016/j.ultrasmedbio.2018.09.015] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/27/2018] [Revised: 09/05/2018] [Accepted: 09/14/2018] [Indexed: 06/09/2023]
Abstract
Ultrasound imaging remains out of reach for most pregnant women in developing countries because it requires a trained sonographer to acquire and interpret the images. We address this problem by presenting a system that can automatically estimate the fetal head circumference (HC) from data obtained with use of the obstetric sweep protocol (OSP). The OSP consists of multiple pre-defined sweeps with the ultrasound transducer over the abdomen of the pregnant woman. The OSP can be taught within a day to any health care worker without prior knowledge of ultrasound. An experienced sonographer acquired both the standard plane-to obtain the reference HC-and the OSP from 183 pregnant women in St. Luke's Hospital, Wolisso, Ethiopia. The OSP data, which will most likely not contain the standard plane, was used to automatically estimate HC using two fully convolutional neural networks. First, a VGG-Net-inspired network was trained to automatically detect the frames that contained the fetal head. Second, a U-net-inspired network was trained to automatically measure the HC for all frames in which the first network detected a fetal head. The HC was estimated from these frame measurements, and the curve of Hadlock was used to determine gestational age (GA). The results indicated that most automatically estimated GAs fell within the P2.5-P97.5 interval of the Hadlock curve compared with the GAs obtained from the reference HC, so it is possible to automatically estimate GA from OSP data. Our method therefore has potential application for providing maternal care in resource-constrained countries.
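One common way to turn a per-frame head segmentation into a head-circumference estimate is to fit an ellipse to the mask contour and take its perimeter. A minimal OpenCV sketch of that generic post-processing idea, using a synthetic mask and a hypothetical pixel calibration (not necessarily the authors' exact implementation):

```python
# Minimal sketch: estimate head circumference from a binary head mask by fitting
# an ellipse to its contour and applying Ramanujan's perimeter approximation.
import cv2
import numpy as np

mask = np.zeros((256, 256), np.uint8)
cv2.ellipse(mask, (128, 128), (90, 60), 15, 0, 360, 255, -1)   # synthetic "head" mask

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
(cx, cy), (d1, d2), angle = cv2.fitEllipse(max(contours, key=cv2.contourArea))

a, b = d1 / 2.0, d2 / 2.0                    # semi-axes in pixels
h = ((a - b) / (a + b)) ** 2
hc_pixels = np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))  # Ramanujan's formula
mm_per_pixel = 0.2                           # hypothetical calibration factor
print(f"estimated HC: {hc_pixels * mm_per_pixel:.1f} mm")
```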
Collapse
Affiliation(s)
- Thomas L A van den Heuvel
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands; Medical Ultrasound Imaging Center, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands.
| | - Hezkiel Petros
- St. Luke's Catholic Hospital and College of Nursing and Midwifery, Wolisso, Ethiopia
| | - Stefano Santini
- St. Luke's Catholic Hospital and College of Nursing and Midwifery, Wolisso, Ethiopia
| | - Chris L de Korte
- St. Luke's Catholic Hospital and College of Nursing and Midwifery, Wolisso, Ethiopia; Physics of Fluids Group, MIRA, University of Twente, The Netherlands
| | - Bram van Ginneken
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands; Fraunhofer MEVIS, Bremen, Germany
| |
Collapse
|
41
|
Lei B, Liu X, Liang S, Hang W, Wang Q, Choi KS, Qin J. Walking Imagery Evaluation in Brain Computer Interfaces via a Multi-View Multi-Level Deep Polynomial Network. IEEE Trans Neural Syst Rehabil Eng 2019; 27:497-506. [PMID: 30703032 DOI: 10.1109/tnsre.2019.2895064] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Brain-computer interfaces based on motor imagery (MI) have been widely used to support the rehabilitation of motor functions of the upper limbs rather than the lower limbs, probably because the brain activity of lower limb MI is more difficult to detect. To reliably detect the brain activity of the lower limbs and thereby restore or improve the walking ability of the disabled, we propose a new paradigm of walking imagery (WI) in a virtual environment (VE), designed to elicit reliable brain activity and achieve a significant training effect. First, we extract and fuse spatial and time-frequency features as a multi-view feature to represent the patterns of brain activity. Second, we design a multi-view multi-level deep polynomial network (MMDPN) that exploits the complementarity among the features to improve the detection of walking from an idle state. Our extensive experimental results show that the VE-based paradigm performs significantly better than the traditional text-based paradigm. In addition, the VE-based paradigm can effectively help users modulate their brain activity and improve the quality of the electroencephalography signals. We also observe that the MMDPN outperforms other deep learning methods in terms of classification performance.
Collapse
|
42
|
Torrents-Barrena J, Piella G, Masoller N, Gratacós E, Eixarch E, Ceresa M, Ballester MÁG. Segmentation and classification in MRI and US fetal imaging: Recent trends and future prospects. Med Image Anal 2018; 51:61-88. [PMID: 30390513 DOI: 10.1016/j.media.2018.10.003] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2017] [Revised: 10/09/2018] [Accepted: 10/18/2018] [Indexed: 12/19/2022]
Abstract
Fetal imaging is a burgeoning topic. New advancements in both magnetic resonance imaging and (3D) ultrasound now allow doctors to diagnose fetal structural abnormalities such as those involved in twin-to-twin transfusion syndrome, gestational diabetes mellitus, pulmonary sequestration and hypoplasia, congenital heart disease, diaphragmatic hernia, ventriculomegaly, etc. Considering the continued breakthroughs in in-utero image analysis and (3D) reconstruction models, it is now possible to gain more insight into the ongoing development of the fetus. The best prenatal diagnostic performance relies on clinicians being thoroughly prepared in terms of fetal anatomy knowledge. Fetal imaging is therefore likely to expand and increase in prevalence in the forthcoming years. This review covers, for the first time, state-of-the-art segmentation and classification methodologies for the whole fetus and, more specifically, the fetal brain, lungs, liver, heart and placenta in magnetic resonance imaging and (3D) ultrasound. Potential applications of the aforementioned methods in clinical settings are also examined. Finally, improvements to existing approaches as well as the most promising avenues for new areas of research are briefly outlined.
Collapse
Affiliation(s)
- Jordina Torrents-Barrena
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain.
| | - Gemma Piella
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
| | - Narcís Masoller
- BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), IDIBAPS, University of Barcelona, Barcelona, Spain; Center for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
| | - Eduard Gratacós
- BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), IDIBAPS, University of Barcelona, Barcelona, Spain; Center for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
| | - Elisenda Eixarch
- BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Déu), IDIBAPS, University of Barcelona, Barcelona, Spain; Center for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
| | - Mario Ceresa
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
| | - Miguel Ángel González Ballester
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain; ICREA, Barcelona, Spain
| |
Collapse
|
43
|
Li Y, Shi Z, Zhang H, Luo L, Fan G. Commentary: The Dynamic Features of Lip Corners in Genuine and Posed Smiles. Front Psychol 2018; 9:1610. [PMID: 30319471 PMCID: PMC6167606 DOI: 10.3389/fpsyg.2018.01610] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2018] [Accepted: 08/13/2018] [Indexed: 11/13/2022] Open
Affiliation(s)
- Yingqi Li
- School of Humanities, Tongji University, Shanghai, China
| | - Zhongyong Shi
- Psychiatry Department, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China
- Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
| | - Honglei Zhang
- School of Management and Economics, Tianjin University, Tianjin, China
- Surgical Planning Lab, Radiology Department, Brigham and Women's Hospital, Boston, MA, United States
| | - Lishu Luo
- School of Management and Economics, Tianjin University, Tianjin, China
- Surgical Planning Lab, Radiology Department, Brigham and Women's Hospital, Boston, MA, United States
| | - Guoxin Fan
- Surgical Planing Lab, Radiology Department, Brigham and Women's Hospital, Boston, MA, United States
- School of Medicine, Tongji University, Shanghai, China
- *Correspondence: Guoxin Fan
| |
Collapse
|
44
|
Machine Learning in Ultrasound Computer-Aided Diagnostic Systems: A Survey. BIOMED RESEARCH INTERNATIONAL 2018; 2018:5137904. [PMID: 29687000 PMCID: PMC5857346 DOI: 10.1155/2018/5137904] [Citation(s) in RCA: 72] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/26/2017] [Revised: 01/12/2018] [Accepted: 02/06/2018] [Indexed: 12/13/2022]
Abstract
Ultrasound imaging is one of the most common methods for detecting disease in clinical practice. It has many advantages, such as safety, convenience, and low cost. However, reading ultrasound images is not easy. To support clinicians' diagnoses and reduce doctors' workload, many ultrasound computer-aided diagnosis (CAD) systems have been proposed. In recent years, the success of deep learning in image classification and segmentation has led more and more researchers to recognize the performance improvements that deep learning can bring to ultrasound CAD systems. This paper summarizes recent research on ultrasound CAD systems that use machine learning technology, dividing them into two categories: traditional ultrasound CAD systems that employ handcrafted features, and deep learning-based ultrasound CAD systems. The major features and classifiers employed by traditional ultrasound CAD systems are introduced, and the newest deep learning applications are summarized. This paper will be useful for researchers who focus on ultrasound CAD systems.
Collapse
|