51
Evaluation of an improved tool for non-invasive prediction of neonatal respiratory morbidity based on fully automated fetal lung ultrasound analysis. Sci Rep 2019; 9:1950. PMID: 30760806; PMCID: PMC6374419; DOI: 10.1038/s41598-019-38576-w.
Abstract
The objective of this study was to evaluate the performance of a new version of quantusFLM®, a software tool for prediction of neonatal respiratory morbidity (NRM) by ultrasound, which incorporates a fully automated fetal lung delineation based on Deep Learning techniques. A set of 790 fetal lung ultrasound images obtained at 24 + 0–38 + 6 weeks’ gestation was evaluated. Perinatal outcomes and the occurrence of NRM were recorded. quantusFLM® version 3.0 was applied to all images to automatically delineate the fetal lung and predict NRM risk. The test was compared with the same technology but using a manual delineation of the fetal lung, and with a scenario where only gestational age was available. The software predicted NRM with a sensitivity, specificity, and positive and negative predictive value of 71.0%, 94.7%, 67.9%, and 95.4%, respectively, with an accuracy of 91.5%. The accuracy for predicting NRM obtained with the same texture analysis but using a manual delineation of the lung was 90.3%, and using only gestational age was 75.6%. To sum up, automated and non-invasive software predicted NRM with a performance similar to that reported for tests based on amniotic fluid analysis and much greater than that of gestational age alone.
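As a side note on the figures quoted above: sensitivity, specificity, PPV, NPV, and accuracy all derive from a single 2x2 confusion matrix. The sketch below shows the standard definitions; the counts used are hypothetical and are not the study's data.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Return (sensitivity, specificity, ppv, npv, accuracy) as fractions
    from the four cells of a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    ppv = tp / (tp + fp)                    # positive predictive value
    npv = tn / (tn + fn)                    # negative predictive value
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, ppv, npv, accuracy

# Hypothetical example: 100 cases, 20 with the condition.
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=15, fn=5, fp=4, tn=76)
```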
52
Yang X, Yu L, Li S, Wen H, Luo D, Bian C, Qin J, Ni D, Heng PA. Towards Automated Semantic Segmentation in Prenatal Volumetric Ultrasound. IEEE Trans Med Imaging 2019; 38:180-193. PMID: 30040635; DOI: 10.1109/tmi.2018.2858779.
Abstract
Volumetric ultrasound is rapidly emerging as a viable imaging modality for routine prenatal examinations. Biometrics obtained from volumetric segmentation could enable more precise maternal and fetal health monitoring. However, poor image quality, low contrast, boundary ambiguity, and complex anatomical shapes mean that efficient segmentation tools are lacking. This makes 3-D ultrasound difficult to interpret and hinders its widespread use in obstetrics. In this paper, we address the problem of semantic segmentation in prenatal ultrasound volumes. Our contribution is threefold: 1) we propose the first fully automatic framework to simultaneously segment multiple anatomical structures of intense clinical interest, including the fetus, gestational sac, and placenta, which remains a rarely studied and arduous challenge; 2) we propose a composite architecture for dense labeling, in which a customized 3-D fully convolutional network exploits spatial intensity concurrency for initial labeling, while a multi-directional recurrent neural network (RNN) encodes spatial sequentiality to combat boundary ambiguity and significantly refine the result; and 3) we introduce a hierarchical deep supervision mechanism to boost the information flow within the RNN, fit the latent sequence hierarchy at fine scales, and further improve the segmentation results. Extensively verified on large in-house data sets, our method shows superior segmentation performance, decent agreement with expert measurements, and high reproducibility across scanning variations, and is thus promising for advancing prenatal ultrasound examinations.
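Segmentation results of this kind are typically scored by overlap with expert annotations. A minimal sketch of the Dice similarity coefficient, the most common such metric (the paper's exact evaluation protocol is not restated here), operating on sets of pixel coordinates:

```python
def dice(a, b):
    """Dice similarity coefficient between two masks given as sets of
    pixel coordinates; 1.0 means perfect overlap, 0.0 means none."""
    if not a and not b:
        return 1.0  # both empty: define as perfect agreement
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two 8-pixel masks on a 4x4 grid that overlap in 4 pixels.
mask_a = {(y, x) for y in range(4) for x in range(2)}       # left two columns
mask_b = {(y, x) for y in range(4) for x in range(1, 3)}    # shifted one column
# dice(mask_a, mask_b) == 2 * 4 / (8 + 8) == 0.5
```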
53
Maier-Hein L, Eisenmann M, Reinke A, Onogur S, Stankovic M, Scholz P, Arbel T, Bogunovic H, Bradley AP, Carass A, Feldmann C, Frangi AF, Full PM, van Ginneken B, Hanbury A, Honauer K, Kozubek M, Landman BA, März K, Maier O, Maier-Hein K, Menze BH, Müller H, Neher PF, Niessen W, Rajpoot N, Sharp GC, Sirinukunwattana K, Speidel S, Stock C, Stoyanov D, Taha AA, van der Sommen F, Wang CW, Weber MA, Zheng G, Jannin P, Kopp-Schneider A. Why rankings of biomedical image analysis competitions should be interpreted with care. Nat Commun 2018; 9:5217. PMID: 30523263; PMCID: PMC6284017; DOI: 10.1038/s41467-018-07619-7.
Abstract
International challenges have become the standard for validation of biomedical image analysis methods. Given their scientific impact, it is surprising that a critical analysis of common practices related to the organization of challenges has not yet been performed. In this paper, we present a comprehensive analysis of biomedical image analysis challenges conducted up to now. We demonstrate the importance of challenges and show that the lack of quality control has critical consequences. First, reproducibility and interpretation of the results are often hampered, as only a fraction of the relevant information is typically provided. Second, the rank of an algorithm is generally not robust to a number of variables, such as the test data used for validation, the ranking scheme applied, and the observers who make the reference annotations. To overcome these problems, we recommend best practice guidelines and define open research questions to be addressed in the future.
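The rank-instability point can be illustrated with a small bootstrap experiment: resample the test cases with replacement and check how often the ranking of two algorithms by mean score flips. This is a generic sketch, not the paper's analysis, and the per-case scores below are invented.

```python
import random

def rank_flip_rate(scores_a, scores_b, n_boot=1000, seed=0):
    """Fraction of paired bootstrap resamples of the test cases in which
    ranking algorithms A and B by mean score disagrees with the ranking
    obtained on the full test set."""
    rng = random.Random(seed)
    n = len(scores_a)
    full_order = sum(scores_a) > sum(scores_b)
    flips = 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample case indices
        ra = sum(scores_a[i] for i in idx)
        rb = sum(scores_b[i] for i in idx)
        if (ra > rb) != full_order:
            flips += 1
    return flips / n_boot

# Well-separated per-case scores: the ranking never flips.
stable = rank_flip_rate([0.9] * 30, [0.5] * 30)
# Heavily overlapping scores with nearly equal means: the ranking is fragile.
fragile = rank_flip_rate([0.70, 0.90] * 15, [0.79] * 30)
```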
Affiliation(s)
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany.
- Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Annika Reinke
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Sinan Onogur
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Marko Stankovic
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Patrick Scholz
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Tal Arbel
- Centre for Intelligent Machines, McGill University, Montreal, QC, H3A0G4, Canada
- Hrvoje Bogunovic
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University Vienna, 1090, Vienna, Austria
- Andrew P Bradley
- Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD, 4001, Australia
- Aaron Carass
- Department of Electrical and Computer Engineering, Department of Computer Science, Johns Hopkins University, Baltimore, MD, 21218, USA
- Carolin Feldmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Alejandro F Frangi
- CISTIB - Center for Computational Imaging & Simulation Technologies in Biomedicine, The University of Leeds, Leeds, Yorkshire, LS2 9JT, UK
- Peter M Full
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Bram van Ginneken
- Department of Radiology and Nuclear Medicine, Medical Image Analysis, Radboud University Center, 6525 GA, Nijmegen, The Netherlands
- Allan Hanbury
- Institute of Information Systems Engineering, TU Wien, 1040, Vienna, Austria
- Complexity Science Hub Vienna, 1080, Vienna, Austria
- Katrin Honauer
- Heidelberg Collaboratory for Image Processing (HCI), Heidelberg University, 69120, Heidelberg, Germany
- Michal Kozubek
- Centre for Biomedical Image Analysis, Masaryk University, 60200, Brno, Czech Republic
- Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN, 37235-1679, USA
- Keno März
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Oskar Maier
- Institute of Medical Informatics, Universität zu Lübeck, 23562, Lübeck, Germany
- Klaus Maier-Hein
- Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Bjoern H Menze
- Institute for Advanced Studies, Department of Informatics, Technical University of Munich, 80333, Munich, Germany
- Henning Müller
- Information System Institute, HES-SO, Sierre, 3960, Switzerland
- Peter F Neher
- Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Wiro Niessen
- Departments of Radiology, Nuclear Medicine and Medical Informatics, Erasmus MC, 3015 GD, Rotterdam, The Netherlands
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
- Gregory C Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, 02114, USA
- Stefanie Speidel
- Division of Translational Surgical Oncology (TCO), National Center for Tumor Diseases Dresden, 01307, Dresden, Germany
- Christian Stock
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Danail Stoyanov
- Centre for Medical Image Computing (CMIC) & Department of Computer Science, University College London, London, W1W 7TS, UK
- Abdel Aziz Taha
- Data Science Studio, Research Studios Austria FG, 1090, Vienna, Austria
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven, The Netherlands
- Ching-Wei Wang
- AIExplore, NTUST Center of Computer Vision and Medical Imaging, Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, 106, Taiwan
- Marc-André Weber
- Institute of Diagnostic and Interventional Radiology, University Medical Center Rostock, 18051, Rostock, Germany
- Guoyan Zheng
- Institute for Surgical Technology and Biomechanics, University of Bern, Bern, 3014, Switzerland
- Pierre Jannin
- Univ Rennes, Inserm, LTSI (Laboratoire Traitement du Signal et de l'Image) - UMR_S 1099, Rennes, 35043, Cedex, France
- Annette Kopp-Schneider
- Division of Biostatistics, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
54
Kim B, Kim KC, Park Y, Kwon JY, Jang J, Seo JK. Machine-learning-based automatic identification of fetal abdominal circumference from ultrasound images. Physiol Meas 2018; 39:105007. PMID: 30226815; DOI: 10.1088/1361-6579/aae255.
Abstract
OBJECTIVE Obstetricians mainly use ultrasound imaging for fetal biometric measurements. However, such measurements are cumbersome, so there is an urgent need for automatic biometric estimation. Automated analysis of ultrasound images is complicated owing to their patient-specific, operator-dependent, and machine-specific characteristics. APPROACH This paper proposes a method for automatic fetal biometry estimation from 2D ultrasound data through several processing steps, each using a specially designed convolutional neural network (CNN) or U-Net. These machine learning techniques take clinicians' decisions, anatomical structures, and the characteristics of ultrasound images into account. The proposed method is divided into three steps: initial abdominal circumference (AC) estimation, AC measurement, and plane acceptance checking. MAIN RESULTS A CNN is used to classify ultrasound images (stomach bubble, amniotic fluid, and umbilical vein), and a Hough transform is used to obtain an initial estimate of the AC. These data are applied to other CNNs to estimate the spine position and bone regions, and the obtained information is used to determine the final AC. After determining the AC, a U-Net and a classification CNN are used to check whether the image is suitable for AC measurement. Finally, the efficacy of the proposed method is validated on clinical data. SIGNIFICANCE Our method achieved a Dice similarity metric of [Formula: see text] for AC measurement and an accuracy of 87.10% for the acceptance check of the fetal abdominal standard plane.
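The initial AC estimate in this pipeline relies on a Hough transform for roughly circular structures. Below is a toy, brute-force version of circle voting, not the authors' implementation; the voting grid, angular step, and candidate radii are arbitrary choices made here for illustration.

```python
import math
from collections import Counter

def hough_circle(points, radii, grid=1.0):
    """Tiny Hough-transform sketch: each edge point votes for the centers of
    all candidate circles (of the given radii) that pass through it; the
    most-voted (cx, cy, r) accumulator cell wins."""
    votes = Counter()
    for (x, y) in points:
        for r in radii:
            for deg in range(0, 360, 10):
                t = math.radians(deg)
                cx = round((x - r * math.cos(t)) / grid) * grid
                cy = round((y - r * math.sin(t)) / grid) * grid
                votes[(cx, cy, r)] += 1
    return votes.most_common(1)[0][0]

# Synthetic edge points on a circle: center (10, 8), radius 5.
pts = [(10 + 5 * math.cos(math.radians(d)), 8 + 5 * math.sin(math.radians(d)))
       for d in range(0, 360, 10)]
```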
Affiliation(s)
- Bukweon Kim
- Department of Computational Science and Engineering, Yonsei University, Seoul 03722, Republic of Korea
55
van den Heuvel TLA, de Bruijn D, de Korte CL, van Ginneken B. Automated measurement of fetal head circumference using 2D ultrasound images. PLoS One 2018; 13:e0200412. PMID: 30138319; PMCID: PMC6107118; DOI: 10.1371/journal.pone.0200412.
Abstract
In this paper we present a computer-aided detection (CAD) system for automated measurement of the fetal head circumference (HC) in 2D ultrasound images for all trimesters of pregnancy. The HC can be used to estimate gestational age and monitor growth of the fetus. Automated HC assessment could be valuable in developing countries, where there is a severe shortage of trained sonographers. The CAD system consists of two steps: first, Haar-like features were computed from the ultrasound images to train a random forest classifier to locate the fetal skull; second, the HC was extracted using a Hough transform, dynamic programming, and an ellipse fit. The CAD system was trained on 999 images and validated on an independent test set of 335 images from all trimesters. The test set was manually annotated by an experienced sonographer and a medical researcher. The reference gestational age (GA) was estimated using the crown-rump length (CRL) measurement. The mean difference between the reference GA and the GA estimated by the experienced sonographer was 0.8 ± 2.6, -0.0 ± 4.6, and 1.9 ± 11.0 days for the first, second, and third trimester, respectively. The mean difference between the reference GA and the GA estimated by the medical researcher was 1.6 ± 2.7, 2.0 ± 4.8, and 3.9 ± 13.7 days. The mean difference between the reference GA and the GA estimated by the CAD system was 0.6 ± 4.3, 0.4 ± 4.7, and 2.5 ± 12.4 days. The results show that the CAD system performs comparably to an experienced sonographer and shows similar or superior results to systems published in the literature. This is the first automated system for HC assessment evaluated on a large test set containing data from all trimesters of pregnancy.
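The last stage of such a pipeline turns a fitted ellipse into a circumference value. The abstract does not state which perimeter formula is used; Ramanujan's approximation is a common choice, so the following is a sketch under that assumption.

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's approximation for the perimeter of an ellipse with
    semi-axes a and b (same units in, same units out); commonly used to
    turn a fitted skull ellipse into a head-circumference value."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

# For a circle (a == b) the formula reduces to 2*pi*r.
# Hypothetical fitted semi-axes in mm:
hc_mm = ellipse_circumference(60.0, 50.0)
```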
Affiliation(s)
- Thomas L. A. van den Heuvel
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Medical Ultrasound Imaging Center, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Dagmar de Bruijn
- Department of Obstetrics and Gynecology, Radboud University Medical Center, Nijmegen, the Netherlands
- Chris L. de Korte
- Medical Ultrasound Imaging Center, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Fraunhofer MEVIS, Bremen, Germany
56
Automated Techniques for the Interpretation of Fetal Abnormalities: A Review. Appl Bionics Biomech 2018; 2018:6452050. PMID: 29983738; PMCID: PMC6015700; DOI: 10.1155/2018/6452050.
Abstract
Ultrasound (US) image segmentation methods are briefly reviewed, focusing on techniques developed for fetal biometric parameters and nuchal translucency. Segmentation techniques applied to ultrasound images can identify the fetus and compute fetal parameters, enabling timely detection of fetal abnormalities so that necessary action can be taken. First, a detailed literature review of fetal biometric parameters and nuchal translucency is offered to highlight investigation approaches with a degree of validation in diverse clinical domains. Then, a categorized bibliographic assessment of recent research efforts in the segmentation of 2D fetal ultrasound images is presented. Fetal images of high-risk pregnant women are used for routine and continuous monitoring of fetal parameters, which in turn are used to assess fetal weight, fetal growth, and gestational age, and to detect possible abnormalities.
57
Anto EA, Amoah B, Crimi A. Segmentation of ultrasound images of fetal anatomic structures using random forest for low-cost settings. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2015:793-6. PMID: 26736381; DOI: 10.1109/embc.2015.7318481.
Abstract
In ultrasound imaging, manual extraction of the contours of fetal anatomic structures from echographic images has been found to be very challenging because of speckle and low contrast. Extracted contours are therefore subject to inter-observer variability, and hence are neither reproducible nor reliable. This has motivated the development of methods that can accurately segment fetal anatomic structures, which would help to estimate and measure structures of the fetal body such as the head circumference and femur length. Most recent methods are able to integrate global shape and appearance, but their drawback is that they cannot handle localized appearance variations: they rely on an assumption of Gaussian gray-value distribution and also require initialization near the optimal solution. In this manuscript, a random forest is used to segment the head contour in fetal ultrasound scans acquired in low-cost settings, such as acquisitions performed in rural areas of low-income countries using low-cost portable machines.
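Random-forest pixel classifiers of this kind are commonly fed simple box-sum (Haar-like) features, which an integral image makes O(1) to evaluate. The exact features used in this paper are not specified in the abstract, so the following is a generic sketch of the integral-image machinery.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x] (inclusive)."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of the image over the inclusive rectangle (y0,x0)-(y1,x1),
    computed in constant time from the summed-area table."""
    s = ii[y1][x1]
    if y0 > 0:
        s -= ii[y0 - 1][x1]
    if x0 > 0:
        s -= ii[y1][x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1][x0 - 1]
    return s

# A two-rectangle Haar-like feature is then box_sum(left) - box_sum(right).
```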
58
Ilunga-Mbuyamba E, Avina-Cervantes JG, Lindner D, Arlt F, Ituna-Yudonago JF, Chalopin C. Patient-specific model-based segmentation of brain tumors in 3D intraoperative ultrasound images. Int J Comput Assist Radiol Surg 2018; 13:331-342. PMID: 29330658; DOI: 10.1007/s11548-018-1703-0.
Abstract
PURPOSE Intraoperative ultrasound (iUS) imaging is commonly used to support brain tumor surgery. Tumor segmentation in iUS images is a difficult task, still under development because of the low signal-to-noise ratio; the success of automatic methods is also limited by their high sensitivity to noise. Therefore, an alternative brain tumor segmentation method for 3D-iUS data is presented, which uses a tumor model obtained from magnetic resonance (MR) data for local MR-iUS registration. The aim is to enhance the visualization of brain tumor contours in iUS. METHODS A multistep approach is proposed. First, a region of interest (ROI) based on the patient-specific tumor model is defined. Second, hyperechogenic structures, mainly tumor tissues, are extracted from the ROI of both modalities using automatic thresholding techniques. Third, registration is performed over the extracted binary sub-volumes using a gradient-based similarity measure with rigid and affine transformations. Finally, the tumor model is aligned with the 3D-iUS data and its contours are displayed. RESULTS Experiments were successfully conducted on a dataset of 33 patients. The method was evaluated by comparing the tumor segmentation with expert manual delineations using two binary metrics: contour mean distance and Dice index. The proposed segmentation method using local, binary registration was compared with two grayscale-based approaches and achieved better results in terms of both computational time and accuracy. CONCLUSION The proposed approach requires limited interaction and reduced computation time, making it relevant for intraoperative use. Experimental results and evaluations were performed offline. The developed tool could help neurosurgeons improve tumor border visualization in iUS volumes during brain tumor resection.
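The second METHODS step relies on automatic thresholding to extract hyperechogenic structures. The abstract does not name the technique, so the sketch below uses Otsu's method, one standard automatic-threshold choice, purely as an illustration.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the threshold t that maximizes the between-class
    variance of the integer-valued pixel histogram. Pixels <= t form the
    background class, pixels > t the foreground class."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy data: a dark cluster near 20 and a bright cluster near 200.
pixels = [18, 20, 22, 25, 19, 198, 200, 202, 205, 199]
```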
Affiliation(s)
- Elisee Ilunga-Mbuyamba
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
- Juan Gabriel Avina-Cervantes
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Dirk Lindner
- Department of Neurosurgery, University Hospital Leipzig, 04103, Leipzig, Germany
- Felix Arlt
- Department of Neurosurgery, University Hospital Leipzig, 04103, Leipzig, Germany
- Jean Fulbert Ituna-Yudonago
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Claire Chalopin
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
59
Li J, Wang Y, Lei B, Cheng JZ, Qin J, Wang T, Li S, Ni D. Automatic Fetal Head Circumference Measurement in Ultrasound Using Random Forest and Fast Ellipse Fitting. IEEE J Biomed Health Inform 2018; 22:215-223. DOI: 10.1109/jbhi.2017.2703890.
60
van den Heuvel TLA, de Bruijn D, de Korte CL, van Ginneken B. Automated measurement of fetal head circumference using 2D ultrasound images. PLoS One 2018. PMID: 30138319; DOI: 10.5281/zenodo.1327317.
61
Meiburger KM, Acharya UR, Molinari F. Automated localization and segmentation techniques for B-mode ultrasound images: A review. Comput Biol Med 2017; 92:210-235. PMID: 29247890; DOI: 10.1016/j.compbiomed.2017.11.018.
Abstract
B-mode ultrasound imaging is used extensively in medicine. Hence, there is a need for efficient segmentation tools to aid computer-aided diagnosis, image-guided interventions, and therapy. This paper presents a comprehensive review of automated localization and segmentation techniques for B-mode ultrasound images. The paper first describes the general characteristics of B-mode ultrasound images. It then provides insight into the localization and segmentation of tissues, both in the case where organ/tissue localization provides the final segmentation and in the case where a two-step segmentation process is needed because the desired boundaries are too fine to locate within the entire ultrasound frame. Subsequently, examples of the main techniques found in the literature are shown, including but not limited to shape priors, superpixels and classification, local pixel statistics, active contours, edge tracking, dynamic programming, and data mining. Ten selected applications (abdomen/kidney, breast, cardiology, thyroid, liver, vascular, musculoskeletal, obstetrics, gynecology, prostate) are then investigated in depth, and the performances of a few specific applications are compared. In conclusion, future perspectives for B-mode-based segmentation are discussed, such as the integration of RF information, the use of higher-frequency probes when possible, the focus on completely automatic algorithms, and the increase in available data.
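Among the techniques this review lists, dynamic programming is often used to extract a one-pixel-wide boundary as a minimum-cost path through an edge-cost image. A small generic sketch (the cost values are invented; real methods derive them from edge strength):

```python
def min_cost_path(cost):
    """Dynamic programming: cheapest top-to-bottom path through a cost grid,
    moving to the same, left-adjacent, or right-adjacent column at each step
    down - a common way to track a boundary in an edge-cost image.
    Returns the column index chosen in each row."""
    h, w = len(cost), len(cost[0])
    acc = [row[:] for row in cost]          # accumulated cost table
    back = [[0] * w for _ in range(h)]      # backpointers to previous column
    for y in range(1, h):
        for x in range(w):
            best_px = min(range(max(0, x - 1), min(w, x + 2)),
                          key=lambda px: acc[y - 1][px])
            back[y][x] = best_px
            acc[y][x] += acc[y - 1][best_px]
    x = min(range(w), key=lambda px: acc[h - 1][px])
    path = [x]
    for y in range(h - 1, 0, -1):
        x = back[y][x]
        path.append(x)
    return list(reversed(path))

# Cost grid with a cheap 'boundary' meandering between columns 1 and 2.
cost = [
    [9, 0, 9, 9],
    [9, 9, 0, 9],
    [9, 0, 9, 9],
]
# min_cost_path(cost) follows the zeros: [1, 2, 1]
```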
Affiliation(s)
- Kristen M Meiburger
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
- U Rajendra Acharya
- Department of Electronic & Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Filippo Molinari
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
62
Jang J, Park Y, Kim B, Lee SM, Kwon JY, Seo JK. Automatic Estimation of Fetal Abdominal Circumference From Ultrasound Images. IEEE J Biomed Health Inform 2017; 22:1512-1520. PMID: 29990257; DOI: 10.1109/jbhi.2017.2776116.
Abstract
Ultrasound diagnosis is routinely used in obstetrics and gynecology for fetal biometry, and because manual measurement is time-consuming, there has been great demand for automatic estimation. However, automated analysis of ultrasound images is complicated because they are patient specific, operator dependent, and machine specific. Among the various fetal biometry parameters, accurate estimation of the abdominal circumference (AC) is especially difficult to automate because the abdomen has low contrast against its surroundings, nonuniform contrast, and an irregular shape compared with other structures. We propose a method for automatic estimation of the fetal AC from two-dimensional ultrasound data through a specially designed convolutional neural network (CNN), which takes account of doctors' decision processes, anatomical structure, and the characteristics of the ultrasound image. The proposed method uses a CNN to classify ultrasound images (stomach bubble, amniotic fluid, and umbilical vein) and a Hough transform to measure the AC. We test the proposed method using clinical ultrasound data acquired from 56 pregnant women. Experimental results show that, with relatively small training samples, the proposed CNN provides classification results sufficient for AC estimation via the Hough transform. The method is quantitatively evaluated and shows stable performance in most cases, even for ultrasound images deteriorated by shadowing artifacts. In our acceptance check, the accuracies are 0.809 and 0.771 against expert 1 and expert 2, respectively, whereas the agreement between the two experts is 0.905. However, in cases of an oversized fetus, or when the amniotic fluid is not observed or the abdominal area is distorted, the method could not correctly estimate the AC.
63
Feng Y, Dong F, Xia X, Hu CH, Fan Q, Hu Y, Gao M, Mutic S. An adaptive Fuzzy C-means method utilizing neighboring information for breast tumor segmentation in ultrasound images. Med Phys 2017; 44:3752-3760. PMID: 28513858; DOI: 10.1002/mp.12350.
Abstract
PURPOSE Ultrasound (US) imaging has been widely used in breast tumor diagnosis and treatment intervention. Automatic delineation of the tumor is a crucial first step, especially for computer-aided diagnosis (CAD) and US-guided breast procedures. However, intrinsic properties of US images such as low contrast and blurry boundaries pose challenges to automatic segmentation of the breast tumor. Therefore, the purpose of this study is to propose a segmentation algorithm that can contour the breast tumor in US images. METHODS To utilize the neighbor information of each pixel, a Hausdorff distance based fuzzy c-means (FCM) method was adopted. The size of the neighbor region was adaptively updated by comparing the mutual information between regions. The objective function of the clustering process was updated by a combination of the Euclidean distance and the adaptively calculated Hausdorff distance. Segmentation results were evaluated by comparison with three experts' manual segmentations. The results were also compared with a kernel-induced distance based FCM with spatial constraints, the method without adaptive region selection, and conventional FCM. RESULTS Results from segmenting 30 patient images showed that the adaptive method had a sensitivity, specificity, Jaccard similarity, and Dice coefficient of 93.60 ± 5.33%, 97.83 ± 2.17%, 86.38 ± 5.80%, and 92.58 ± 3.68%, respectively. The region-based metrics of average symmetric surface distance (ASSD), root mean square symmetric distance (RMSD), and maximum symmetric surface distance (MSSD) were 0.03 ± 0.04 mm, 0.04 ± 0.03 mm, and 1.18 ± 1.01 mm, respectively. All the metrics except sensitivity were better than those of the non-adaptive algorithm and the conventional FCM. Only the three region-based metrics were better than those of the kernel-induced distance based FCM with spatial constraints. CONCLUSION Adaptively including pixel neighbor information in segmenting US images improved the segmentation performance. The results demonstrate the potential application of the method in breast tumor CAD and other US-guided procedures.
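The clustering machinery underneath the method alternates two updates: memberships from distances, then centers from fuzzy-weighted means. A minimal sketch of conventional FCM, the baseline the adaptive method is compared against (1-D data, the initialization, and the epsilon guard are illustrative; the paper's neighborhood and Hausdorff-distance terms are not included):

```python
def fcm(data, c=2, m=2.0, iters=50, eps=1e-9):
    """Conventional fuzzy c-means, with no spatial or Hausdorff term."""
    centers = [min(data), max(data)]  # illustrative init for c = 2
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u = []
        for x in data:
            d = [abs(x - v) + eps for v in centers]  # eps guards d = 0
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # Center update: fuzzy-weighted mean with weights u^m.
        centers = [sum(u[k][i] ** m * x for k, x in enumerate(data)) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return sorted(centers)

centers = fcm([0.9, 1.0, 1.1, 4.9, 5.0, 5.1])  # two well-separated groups
```

On this toy data the centers converge near the two group means, 1.0 and 5.0.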
Affiliation(s)
- Yuan Feng
- Center for Molecular Imaging and Nuclear Medicine, School of Radiological and Interdisciplinary Sciences (RAD-X), Soochow University, Collaborative Innovation Center of Radiation Medicine of Jiangsu Higher Education Institutions, Suzhou, Jiangsu, 215123, China; School of Mechanical and Electronic Engineering, Soochow University, Suzhou, Jiangsu, 215021, China; School of Computer Science and Engineering, Soochow University, Suzhou, Jiangsu, 215021, China
- Fenglin Dong
- Department of Ultrasounds, the First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, 215006, China
- Xiaolong Xia
- Center for Molecular Imaging and Nuclear Medicine, School of Radiological and Interdisciplinary Sciences (RAD-X), Soochow University, Collaborative Innovation Center of Radiation Medicine of Jiangsu Higher Education Institutions, Suzhou, Jiangsu, 215123, China
- Chun-Hong Hu
- Department of Radiology, the First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, 215006, China
- Qianmin Fan
- Department of Ultrasounds, the First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, 215006, China
- Yanle Hu
- Department of Radiation Oncology, Mayo Clinic in Arizona, Phoenix, AZ, USA
- Mingyuan Gao
- Center for Molecular Imaging and Nuclear Medicine, School of Radiological and Interdisciplinary Sciences (RAD-X), Soochow University, Collaborative Innovation Center of Radiation Medicine of Jiangsu Higher Education Institutions, Suzhou, Jiangsu, 215123, China
- Sasa Mutic
- Department of Radiation Oncology, Washington University, St. Louis, MO, USA
|
64
|
Wu L, Cheng JZ, Li S, Lei B, Wang T, Ni D. FUIQA: Fetal Ultrasound Image Quality Assessment With Deep Convolutional Networks. IEEE TRANSACTIONS ON CYBERNETICS 2017; 47:1336-1349. [PMID: 28362600 DOI: 10.1109/tcyb.2017.2671898] [Citation(s) in RCA: 87] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
The quality of ultrasound (US) images for the obstetric examination is crucial for accurate biometric measurement. However, manual quality control is a labor-intensive process and often impractical in a clinical setting. To improve the efficiency of examination and alleviate the measurement error caused by improper US scanning operation and slice selection, a computerized fetal US image quality assessment (FUIQA) scheme is proposed to assist the implementation of US image quality control in the clinical obstetric examination. The proposed FUIQA is realized with two deep convolutional neural network models, denoted L-CNN and C-CNN, respectively. The L-CNN aims to find the region of interest (ROI) of the fetal abdominal region in the US image. Based on the ROI found by the L-CNN, the C-CNN evaluates the image quality by assessing the goodness of depiction for the key structures of the stomach bubble and umbilical vein. To further boost the performance of the L-CNN, we augment the input sources of the neural network with local phase features along with the original US data. It will be shown that the heterogeneous input sources help to improve the performance of the L-CNN. The performance of the proposed FUIQA is compared with the subjective image quality evaluation results from three medical doctors. With comprehensive experiments, it will be illustrated that the computerized assessment with our FUIQA scheme can be comparable to the subjective ratings from medical doctors.
|
65
|
Zhang L, Dudley NJ, Lambrou T, Allinson N, Ye X. Automatic image quality assessment and measurement of fetal head in two-dimensional ultrasound image. J Med Imaging (Bellingham) 2017; 4:024001. [PMID: 28439522 DOI: 10.1117/1.jmi.4.2.024001] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2016] [Accepted: 03/31/2017] [Indexed: 11/14/2022] Open
Abstract
Owing to the inconsistent image quality of routine obstetric ultrasound (US) scans, which leads to large intraobserver and interobserver variability, the aim of this study is to develop a quality-assured, fully automated US fetal head measurement system. A texton-based fetal head segmentation is used as a prerequisite step to obtain the head region. Textons are calculated using a filter bank designed specifically for the US fetal head structure. Both shape- and anatomy-based features calculated from the segmented head region are then fed into a random forest classifier to determine the quality of the image (e.g., whether the image is acquired from a correct imaging plane), from which fetal head measurements [biparietal diameter (BPD), occipital-frontal diameter (OFD), and head circumference (HC)] are derived. The experimental results show a good performance of our method for US quality assessment and fetal head measurement. The overall precision for automatic image quality assessment is 95.24% with 87.5% sensitivity and 100% specificity, while segmentation performance shows 99.27% ([Formula: see text]) accuracy, 97.07% ([Formula: see text]) sensitivity, 2.23 mm ([Formula: see text]) maximum symmetric contour distance, and 0.84 mm ([Formula: see text]) average symmetric contour distance. The statistical analysis using a paired t-test and Bland-Altman plots indicates that the 95% limits of agreement for interobserver variability between the automated measurements and the senior expert's measurements are 2.7 mm for BPD, 5.8 mm for OFD, and 10.4 mm for HC, whereas the mean differences are [Formula: see text], [Formula: see text], and [Formula: see text], respectively. These narrow 95% limits of agreement indicate a good level of consistency between the automated and the senior expert's measurements.
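For context on how the three head measurements relate: HC is commonly derived from BPD and OFD by treating the head outline as an ellipse with those two diameters as its axes. A sketch using Ramanujan's perimeter approximation (a common convention, not necessarily the formula used in this paper):

```python
import math

def head_circumference(bpd_mm, ofd_mm):
    """Estimate HC (mm) as the perimeter of an ellipse whose axes are
    BPD and OFD, via Ramanujan's first approximation."""
    a, b = bpd_mm / 2.0, ofd_mm / 2.0   # semi-axes
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
```

For a circular outline (BPD = OFD = d) the expression reduces to pi * d, as expected.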
Affiliation(s)
- Lei Zhang
- University of Lincoln, School of Computer Science, Laboratory of Vision Engineering, Brayford Pool, Lincoln, United Kingdom
- Nicholas J Dudley
- United Lincolnshire Hospitals NHS Trust, Medical Physics, Lincoln County Hospital, Lincoln, United Kingdom
- Tryphon Lambrou
- University of Lincoln, School of Computer Science, Laboratory of Vision Engineering, Brayford Pool, Lincoln, United Kingdom
- Nigel Allinson
- University of Lincoln, School of Computer Science, Laboratory of Vision Engineering, Brayford Pool, Lincoln, United Kingdom
- Xujiong Ye
- University of Lincoln, School of Computer Science, Laboratory of Vision Engineering, Brayford Pool, Lincoln, United Kingdom
|
66
|
Maraci M, Bridge C, Napolitano R, Papageorghiou A, Noble J. A framework for analysis of linear ultrasound videos to detect fetal presentation and heartbeat. Med Image Anal 2017; 37:22-36. [DOI: 10.1016/j.media.2017.01.003] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2015] [Revised: 12/22/2016] [Accepted: 01/05/2017] [Indexed: 12/22/2022]
|
67
|
Plantar fascia segmentation and thickness estimation in ultrasound images. Comput Med Imaging Graph 2017; 56:60-73. [PMID: 28242379 DOI: 10.1016/j.compmedimag.2017.02.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2016] [Revised: 01/09/2017] [Accepted: 02/13/2017] [Indexed: 11/22/2022]
Abstract
Ultrasound (US) imaging offers significant potential in the diagnosis of plantar fascia (PF) injury and in monitoring treatment. In particular, US imaging has been shown to be reliable in foot and ankle assessment and offers a real-time, effective imaging technique that can reliably confirm structural changes, such as thickening, and identify changes in the internal echo structure associated with diseased or damaged tissue. Despite these advantages, US images are difficult to interpret during medical assessment, partly because of the size and position of the PF in relation to the adjacent tissues. It is therefore necessary to devise a system that allows better and easier interpretation of PF ultrasound images during diagnosis. This study proposes an automatic segmentation approach which, for the first time, extracts ultrasound data to estimate size across three sections of the PF (rearfoot, midfoot and forefoot). This segmentation method uses an artificial neural network (ANN) to classify small overlapping patches as belonging or not belonging to the region of interest (ROI) of the PF tissue. Feature ranking and selection were performed after feature extraction to reduce the dimensionality and number of the extracted features. The trained ANN classifies the image overlapping patches into PF and non-PF tissue and is then used to segment the desired PF region. The PF thickness was calculated using two different methods: a distance transformation and an area-length calculation algorithm. This new approach is capable of accurately segmenting the PF region, differentiating it from surrounding tissues, and estimating its thickness.
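The distance-transformation route to thickness can be sketched with a multi-source breadth-first city-block distance transform: thickness peaks where the distance from an inside pixel to the background is largest. A rough pure-Python illustration (a coarse discrete proxy; the paper's area-length algorithm is not shown):

```python
from collections import deque

def band_thickness(mask):
    """Approximate the thickness of a binary band as twice the largest
    city-block distance from an inside pixel to the background."""
    h, w = len(mask), len(mask[0])
    dist, queue = {}, deque()
    for r in range(h):                  # seed BFS from every background pixel
        for c in range(w):
            if not mask[r][c]:
                dist[(r, c)] = 0
                queue.append((r, c))
    while queue:                        # multi-source BFS = distance transform
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return 2 * max(dist[(r, c)] for r in range(h)
                   for c in range(w) if mask[r][c])

# A horizontal band three pixels thick in a 5 x 7 grid.
mask = [[0] * 7, [1] * 7, [1] * 7, [1] * 7, [0] * 7]
t = band_thickness(mask)
```

The discrete proxy can overshoot the true pixel thickness by up to one pixel (here it reports 4 for a 3-pixel band); a production version would use a Euclidean distance transform on calibrated pixel spacing.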
|
68
|
Phase based distance regularized level set for the segmentation of ultrasound kidney images. Pattern Recognit Lett 2017. [DOI: 10.1016/j.patrec.2016.12.002] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
69
|
Amoah B, Anto EA, Crimi A. Automatic fetal measurements for low-cost settings by using Local Phase Bone detection. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2016; 2015:161-4. [PMID: 26736225 DOI: 10.1109/embc.2015.7318325] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The estimation of gestational age is done mostly by measurements of fetal anatomical structures such as the head and femur. These measurements are also used in diagnosis and growth assessment. Manual measurement is operator dependent and hence subject to variability.
|
70
|
Alison Noble J. Reflections on ultrasound image analysis. Med Image Anal 2016; 33:33-37. [PMID: 27503078 DOI: 10.1016/j.media.2016.06.015] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2016] [Revised: 06/07/2016] [Accepted: 06/13/2016] [Indexed: 10/21/2022]
Abstract
Ultrasound (US) image analysis has advanced considerably in twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research, owing to the real-time acquisition capability of ultrasound, and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps changed more fundamentally. From its roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and were thus better suited to the earlier eras of medical image analysis dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, highlighting challenges ahead and potential opportunities in ultrasound image analysis that may have high impact on healthcare delivery worldwide but may also, perhaps, take the subject further away from CT and MR image analysis research with time.
Affiliation(s)
- J Alison Noble
- Institute of Biomedical Engineering, University of Oxford, United Kingdom
|
71
|
Sridar P, Kumar A, Li C, Woo J, Quinton A, Benzie R, Peek MJ, Feng D, Kumar RK, Nanan R, Kim J. Automatic Measurement of Thalamic Diameter in 2-D Fetal Ultrasound Brain Images Using Shape Prior Constrained Regularized Level Sets. IEEE J Biomed Health Inform 2016; 21:1069-1078. [PMID: 27333614 DOI: 10.1109/jbhi.2016.2582175] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
We derived an automated algorithm for accurately measuring the thalamic diameter from 2-D fetal ultrasound (US) brain images. The algorithm overcomes the inherent limitations of the US image modality: nonuniform density; missing boundaries; and strong speckle noise. We introduced a "guitar" structure that represents the negative space surrounding the thalamic regions. The guitar acts as a landmark for deriving the widest points of the thalamus even when its boundaries are not identifiable. We augmented a generalized level-set framework with a shape prior and constraints derived from statistical shape models of the guitars; this framework was used to segment US images and measure the thalamic diameter. Our segmentation method achieved a higher mean Dice similarity coefficient, Hausdorff distance, specificity, and reduced contour leakage when compared to other well-established methods. The automatic thalamic diameter measurement had an interobserver variability of -0.56 ± 2.29 mm compared to manual measurement by an expert sonographer. Our method was capable of automatically estimating the thalamic diameter, with the measurement accuracy on par with clinical assessment. Our method can be used as part of computer-assisted screening tools that automatically measure the biometrics of the fetal thalamus; these biometrics are linked to neurodevelopmental outcomes.
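The evaluation metrics quoted above have compact definitions. A small sketch of the Dice similarity coefficient and the symmetric Hausdorff distance on masks represented as sets of pixel coordinates (illustrative helpers, not the authors' evaluation code):

```python
import math

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two non-empty point sets."""
    def directed(s, t):
        return max(min(math.dist(p, q) for q in t) for p in s)
    return max(directed(a, b), directed(b, a))

seg = {(0, 0), (1, 0)}   # toy automatic segmentation
ref = {(0, 0), (3, 0)}   # toy manual reference
```

Dice rewards overlap (insensitive to a few stray pixels), while the Hausdorff distance is driven entirely by the worst boundary disagreement, which is why papers often report both.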
|
72
|
Zhang L, Ye X, Lambrou T, Duan W, Allinson N, Dudley NJ. A supervised texton based approach for automatic segmentation and measurement of the fetal head and femur in 2D ultrasound images. Phys Med Biol 2016; 61:1095-115. [DOI: 10.1088/0031-9155/61/3/1095] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
73
|
Rueda S, Knight CL, Papageorghiou AT, Noble JA. Feature-based fuzzy connectedness segmentation of ultrasound images with an object completion step. Med Image Anal 2015; 26:30-46. [PMID: 26319973 PMCID: PMC4686006 DOI: 10.1016/j.media.2015.07.002] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2013] [Revised: 05/28/2015] [Accepted: 07/11/2015] [Indexed: 11/24/2022]
Abstract
Medical ultrasound (US) image segmentation and quantification can be challenging due to signal dropouts, missing boundaries, and presence of speckle, which gives images of similar objects quite different appearance. Typically, purely intensity-based methods do not lead to a good segmentation of the structures of interest. Prior work has shown that local phase and feature asymmetry, derived from the monogenic signal, extract structural information from US images. This paper proposes a new US segmentation approach based on the fuzzy connectedness framework. The approach uses local phase and feature asymmetry to define a novel affinity function, which drives the segmentation algorithm, incorporates a shape-based object completion step, and regularises the result by mean curvature flow. To appreciate the accuracy and robustness of the methodology across clinical data of varying appearance and quality, a novel entropy-based quantitative image quality assessment of the different regions of interest is introduced. The new method is applied to 81 US images of the fetal arm acquired at multiple gestational ages, as a means to define a new automated image-based biomarker of fetal nutrition. Quantitative and qualitative evaluation shows that the segmentation method is comparable to manual delineations and robust across image qualities that are typical of clinical practice.
Affiliation(s)
- Sylvia Rueda
- Centre of Excellence in Personalised Healthcare, Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, OX3 7DQ Oxford, UK
- Caroline L Knight
- Centre of Excellence in Personalised Healthcare, Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, OX3 7DQ Oxford, UK; Nuffield Department of Obstetrics & Gynaecology, University of Oxford, Oxford, UK
- Aris T Papageorghiou
- Nuffield Department of Obstetrics & Gynaecology, University of Oxford, Oxford, UK; Oxford Maternal & Perinatal Health Institute, Green Templeton College, University of Oxford, Oxford, UK
- J Alison Noble
- Centre of Excellence in Personalised Healthcare, Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, OX3 7DQ Oxford, UK
|
74
|
Zang X, Bascom R, Gilbert C, Toth J, Higgins W. Methods for 2-D and 3-D Endobronchial Ultrasound Image Segmentation. IEEE Trans Biomed Eng 2015; 63:1426-39. [PMID: 26529748 DOI: 10.1109/tbme.2015.2494838] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Endobronchial ultrasound (EBUS) is now commonly used for cancer-staging bronchoscopy. Unfortunately, EBUS is challenging to use and interpreting EBUS video sequences is difficult. Other ultrasound imaging domains, hampered by related difficulties, have benefited from computer-based image-segmentation methods. Yet, so far, no such methods have been proposed for EBUS. We propose image-segmentation methods for 2-D EBUS frames and 3-D EBUS sequences. Our 2-D method adapts the fast-marching level-set process, anisotropic diffusion, and region growing to the problem of segmenting 2-D EBUS frames. Our 3-D method builds upon the 2-D method while also incorporating the geodesic level-set process for segmenting EBUS sequences. Tests with lung-cancer patient data showed that the methods ran fully automatically for nearly 80% of test cases. For the remaining cases, the only user interaction required was the selection of a seed point. When compared to ground-truth segmentations, the 2-D method achieved an overall Dice index of 90.0 ± 4.9%, while the 3-D method achieved an overall Dice index of 83.9 ± 6.0%. In addition, the computation time (2-D, 0.070 s/frame; 3-D, 0.088 s/frame) was two orders of magnitude faster than interactive contour definition. Finally, we demonstrate the potential of the methods for EBUS localization in a multimodal image-guided bronchoscopy system.
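Of the ingredients listed for the 2-D method, region growing is the easiest to sketch: starting from a seed, accept 4-connected neighbors whose intensity stays within a tolerance of the seed value. A toy version (the paper combines this with fast-marching level sets and anisotropic diffusion, which are not shown):

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, accepting 4-neighbors whose intensity
    differs from the seed value by at most `tol`."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - base) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

img = [[9, 9, 0],
       [9, 0, 0],
       [0, 0, 9]]          # the bottom-right 9 is bright but disconnected
grown = region_grow(img, (0, 0), tol=1)
```

Note that the disconnected bright pixel is excluded, which is exactly why a seed point suffices as the only user interaction in the harder cases.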
|
75
|
Iglesias JE, Sabuncu MR. Multi-atlas segmentation of biomedical images: A survey. Med Image Anal 2015; 24:205-219. [PMID: 26201875 PMCID: PMC4532640 DOI: 10.1016/j.media.2015.06.012] [Citation(s) in RCA: 371] [Impact Index Per Article: 37.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2014] [Revised: 06/12/2015] [Accepted: 06/15/2015] [Indexed: 10/23/2022]
Abstract
Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing et al. (2004), Klein et al. (2005), and Heckemann et al. (2006), is becoming one of the most widely used and successful image segmentation techniques in biomedical applications. By manipulating and utilizing the entire dataset of "atlases" (training images that have been previously labeled, e.g., manually by an expert), rather than some model-based average representation, MAS has the flexibility to better capture anatomical variation, thus offering superior segmentation accuracy. This benefit, however, typically comes at a high computational cost. Recent advancements in computer hardware and image processing software have been instrumental in addressing this challenge and have facilitated the wide adoption of MAS. Today, MAS has come a long way, and the approach includes a wide array of sophisticated algorithms that employ ideas from machine learning, probabilistic modeling, optimization, and computer vision, among other fields. This paper presents a survey of published MAS algorithms and studies that have applied these methods to various biomedical problems. In writing this survey, we have three distinct aims. Our primary goal is to document how MAS was originally conceived, later evolved, and now relates to alternative methods. Second, this paper is intended to be a detailed reference of past research activity in MAS, which now spans over a decade (2003-2014) and entails novel methodological developments and application-specific solutions. Finally, our goal is also to present a perspective on the future of MAS, which, we believe, will be one of the dominant approaches in biomedical image segmentation.
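The simplest label-fusion rule covered by the survey, majority voting across propagated atlas labels, fits in a few lines. A sketch with label maps as coordinate-to-label dicts (an illustrative data layout, not any particular toolbox's API):

```python
from collections import Counter

def majority_vote(label_maps):
    """Fuse per-atlas label maps by taking, at each voxel, the label
    most atlases agree on."""
    voxels = label_maps[0].keys()
    return {v: Counter(m[v] for m in label_maps).most_common(1)[0][0]
            for v in voxels}

# Three registered atlases voting on two voxels.
fused = majority_vote([
    {(0, 0): 1, (0, 1): 0},
    {(0, 0): 1, (0, 1): 1},
    {(0, 0): 0, (0, 1): 0},
])
```

More sophisticated fusion rules surveyed in the paper (weighted voting, STAPLE-style probabilistic fusion) replace the flat count with per-atlas reliability weights.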
Affiliation(s)
- Mert R Sabuncu
- A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA
|
76
|
Dahdouh S, Angelini ED, Grangé G, Bloch I. Segmentation of embryonic and fetal 3D ultrasound images based on pixel intensity distributions and shape priors. Med Image Anal 2015; 24:255-268. [DOI: 10.1016/j.media.2014.12.005] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2014] [Revised: 12/16/2014] [Accepted: 12/18/2014] [Indexed: 12/26/2022]
|
77
|
Foi A, Maggioni M, Pepe A, Rueda S, Noble JA, Papageorghiou AT, Tohka J. Difference of Gaussians revolved along elliptical paths for ultrasound fetal head segmentation. Comput Med Imaging Graph 2014; 38:774-84. [DOI: 10.1016/j.compmedimag.2014.09.006] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2014] [Revised: 07/31/2014] [Accepted: 09/18/2014] [Indexed: 10/24/2022]
|
78
|
Ciurte A, Bresson X, Cuisenaire O, Houhou N, Nedevschi S, Thiran JP, Cuadra MB. Semi-supervised segmentation of ultrasound images based on patch representation and continuous min cut. PLoS One 2014; 9:e100972. [PMID: 25010530 PMCID: PMC4091944 DOI: 10.1371/journal.pone.0100972] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2014] [Accepted: 06/01/2014] [Indexed: 11/18/2022] Open
Abstract
Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation, and signal dropout. Existing methods need to include strong priors such as shape priors or analytical intensity models to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging settings, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates the limitation of fully automatic segmentation, that is, it is applicable to any kind of target and imaging settings. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical expert delineations in all applications (average Dice value of 94%), and the proposed algorithm performs favorably in comparison with the literature.
Affiliation(s)
- Anca Ciurte
- Department of Computer Science, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Xavier Bresson
- Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Center for Biomedical Imaging, Signal Processing Core, Lausanne, Switzerland
- Olivier Cuisenaire
- Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Center for Biomedical Imaging, Signal Processing Core, Lausanne, Switzerland
- Nawal Houhou
- Swiss Institute of Bioinformatics (SIB), University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Sergiu Nedevschi
- Department of Computer Science, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
- Jean-Philippe Thiran
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Meritxell Bach Cuadra
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Center for Biomedical Imaging, Signal Processing Core, Lausanne, Switzerland
|