51
Jafari MH, Girgis H, Van Woudenberg N, Moulson N, Luong C, Fung A, Balthazaar S, Jue J, Tsang M, Nair P, Gin K, Rohling R, Abolmaesumi P, Tsang T. Cardiac point-of-care to cart-based ultrasound translation using constrained CycleGAN. Int J Comput Assist Radiol Surg 2020; 15:877-886. [PMID: 32314226] [DOI: 10.1007/s11548-020-02141-y]
Abstract
PURPOSE The emerging market of cardiac handheld ultrasound (US) is on the rise. Despite the advantages in ease of access and the lower cost, a gap in image quality can still be observed between the echocardiography (echo) data captured by point-of-care ultrasound (POCUS) compared to conventional cart-based US, which limits the further adaptation of POCUS. In this work, we aim to present a machine learning solution based on recent advances in adversarial training to investigate the feasibility of translating POCUS echo images to the quality level of high-end cart-based US systems. METHODS We propose a constrained cycle-consistent generative adversarial architecture for unpaired translation of cardiac POCUS to cart-based US data. We impose a structured shape-wise regularization via a critic segmentation network to preserve the underlying shape of the heart during quality translation. The proposed deep transfer model is constrained to the anatomy of the left ventricle (LV) in apical two-chamber (AP2) echo views. RESULTS A total of 1089 echo studies from 841 patients are used in this study. The AP2 frames are captured by POCUS (Philips Lumify and Clarius) and cart-based (Philips iE33 and Vivid E9) US machines. The dataset of quality translation comprises a total of 441 echo studies from 395 patients. Data from both POCUS and cart-based systems of the same patient were available in 122 cases. The deep-quality transfer model is integrated into a pipeline for an automated cardiac evaluation task, namely segmentation of LV in AP2 view. By transferring the low-quality POCUS data to the cart-based US, a significant average improvement of 30% and 34 mm is obtained in the LV segmentation Dice score and Hausdorff distance metrics, respectively. CONCLUSION This paper presents the feasibility of a machine learning solution to transform the image quality of POCUS data to that of high-quality high-end cart-based systems. The experiments show that by leveraging the quality translation through the proposed constrained adversarial training, the accuracy of automatic segmentation with POCUS data could be improved.
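For readers who want to reproduce the two overlap metrics quoted above (Dice score and Hausdorff distance), a minimal sketch follows; it assumes binary 2D masks as NumPy arrays and a uniform pixel spacing, and it is illustrative rather than the authors' evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2*|A & B| / (|A| + |B|) for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hausdorff_mm(pred: np.ndarray, gt: np.ndarray, spacing_mm: float = 1.0) -> float:
    """Symmetric Hausdorff distance between the two foreground point sets, in mm."""
    p, g = np.argwhere(pred.astype(bool)), np.argwhere(gt.astype(bool))
    return spacing_mm * max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```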
Affiliation(s)
- Hany Girgis, Nathaniel Moulson, Christina Luong, Andrea Fung, Shane Balthazaar, John Jue, Micheal Tsang, Parvathy Nair, Ken Gin, Teresa Tsang: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
52
Dong S, Luo G, Tam C, Wang W, Wang K, Cao S, Chen B, Zhang H, Li S. Deep Atlas Network for Efficient 3D Left Ventricle Segmentation on Echocardiography. Med Image Anal 2020; 61:101638. [DOI: 10.1016/j.media.2020.101638]
53
Mason SA, White IM, Lalondrelle S, Bamber JC, Harris EJ. The Stacked-Ellipse Algorithm: An Ultrasound-Based 3-D Uterine Segmentation Tool for Enabling Adaptive Radiotherapy for Uterine Cervix Cancer. Ultrasound Med Biol 2020; 46:1040-1052. [PMID: 31926750] [PMCID: PMC7043010] [DOI: 10.1016/j.ultrasmedbio.2019.09.001]
Abstract
The stacked-ellipse (SE) algorithm was developed to rapidly segment the uterus on 3-D ultrasound (US) for the purpose of enabling US-guided adaptive radiotherapy (RT) for uterine cervix cancer patients. The algorithm was initialised manually on a single sagittal slice to provide a series of elliptical initialisation contours in semi-axial planes along the uterus. The elliptical initialisation contours were deformed according to US features such that they conformed to the uterine boundary. The uterus of 15 patients was scanned with 3-D US using the Clarity System (Elekta Ltd.) at multiple days during RT and manually contoured (n = 49 images and corresponding contours). The median (interquartile range) Dice similarity coefficient and mean surface-to-surface-distance between the SE algorithm and manual contours were 0.80 (0.03) and 3.3 (0.2) mm, respectively, which are within the ranges of reported inter-observer contouring variabilities. The SE algorithm could be implemented in adaptive RT to precisely segment the uterus on 3-D US.
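As a rough illustration of the elliptical initialisation contours described above, the sketch below generates points on a rotated ellipse in a semi-axial plane; the centre, semi-axes, and orientation are hypothetical inputs, not the paper's initialisation procedure.

```python
import numpy as np

def ellipse_contour(cx, cy, a, b, theta_rad=0.0, n_points=64):
    """Return an (n_points, 2) array of (x, y) points on a rotated ellipse."""
    t = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    pts = np.stack([a * np.cos(t), b * np.sin(t)], axis=1)
    rot = np.array([[np.cos(theta_rad), -np.sin(theta_rad)],
                    [np.sin(theta_rad),  np.cos(theta_rad)]])
    return pts @ rot.T + np.array([cx, cy])
```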
Affiliation(s)
- Sarah A Mason: Joint Department of Physics, Institute of Cancer Research, London, United Kingdom
- Ingrid M White: Radiotherapy Department, Royal Marsden NHS Foundation Trust, London, United Kingdom
- Susan Lalondrelle: Radiotherapy Department, Royal Marsden NHS Foundation Trust, London, United Kingdom
- Jeffrey C Bamber: Joint Department of Physics, Institute of Cancer Research, London, United Kingdom
- Emma J Harris: Joint Department of Physics, Institute of Cancer Research, London, United Kingdom
54
Chen C, Qin C, Qiu H, Tarroni G, Duan J, Bai W, Rueckert D. Deep Learning for Cardiac Image Segmentation: A Review. Front Cardiovasc Med 2020; 7:25. [PMID: 32195270] [PMCID: PMC7066212] [DOI: 10.3389/fcvm.2020.00025]
Abstract
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, which covers common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound and major anatomical structures of interest (ventricles, atria, and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories are included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations with current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
Affiliation(s)
- Chen Chen: Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- Chen Qin: Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- Huaqi Qiu: Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- Giacomo Tarroni: Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom; CitAI Research Centre, Department of Computer Science, City University of London, London, United Kingdom
- Jinming Duan: School of Computer Science, University of Birmingham, Birmingham, United Kingdom
- Wenjia Bai: Data Science Institute, Imperial College London, London, United Kingdom; Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
- Daniel Rueckert: Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
55
Hu SY, Xu H, Li Q, Telfer BA, Brattain LJ, Samir AE. Deep Learning-Based Automatic Endometrium Segmentation and Thickness Measurement for 2D Transvaginal Ultrasound. Annu Int Conf IEEE Eng Med Biol Soc 2019:993-997. [PMID: 31946060] [DOI: 10.1109/embc.2019.8856367]
Abstract
Endometrial thickness is closely related to gynecological function and is an important biomarker in transvaginal ultrasound (TVUS) examinations for assessing female reproductive health. Manual measurement is time-consuming and subject to high inter- and intra-observer variability. In this paper, we present a fully automated endometrial thickness measurement method using deep learning. Our pipeline consists of: 1) endometrium segmentation using a VGG-based U-Net, and 2) endometrial thickness estimation using medial axis transformation. We conducted experimental studies on 137 2D TVUS cases (74/63 secretory phase/proliferative phase). On a test set of 27 cases/277 images, the segmentation Dice score is 0.83. For thickness measurement, we achieved mean absolute error of 1.23/1.38 mm and root mean squared error of 1.79/1.85 mm on two different test sets. The results are considered well within the clinically acceptable range of ±2 mm. Furthermore, our phase-stratified analysis shows that the measurement variance from the secretory phase is higher than that from the proliferative phase, largely due to the high variability of the endometrium appearance in the secretory phase. Future work will extend our current algorithm toward different clinical outcomes for a broader spectrum of clinical applications.
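A minimal sketch of the thickness-from-medial-axis idea mentioned above, assuming a 2D binary endometrium mask and isotropic pixel spacing; the authors' exact estimator and post-processing may differ.

```python
import numpy as np
from skimage.morphology import medial_axis

def thickness_mm(mask: np.ndarray, pixel_spacing_mm: float) -> float:
    """Estimate thickness as twice the largest boundary distance along the medial axis."""
    skeleton, dist = medial_axis(mask.astype(bool), return_distance=True)
    radii = dist[skeleton]  # radius of the inscribed disc at each skeleton pixel
    return 2.0 * float(radii.max()) * pixel_spacing_mm if radii.size else 0.0
```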
56
Li M, Dong S, Gao Z, Feng C, Xiong H, Zheng W, Ghista D, Zhang H, de Albuquerque VHC. Unified model for interpreting multi-view echocardiographic sequences without temporal information. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2019.106049]
57
Jun Guo B, He X, Lei Y, Harms J, Wang T, Curran WJ, Liu T, Jiang Zhang L, Yang X. Automated left ventricular myocardium segmentation using 3D deeply supervised attention U-net for coronary computed tomography angiography; CT myocardium segmentation. Med Phys 2020; 47:1775-1785. [DOI: 10.1002/mp.14066]
Affiliation(s)
- Bang Jun Guo: Department of Medical Imaging, Jinling Hospital, The First School of Clinical Medicine, Southern Medical University, Nanjing 210002, China; Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiuxiu He: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Joseph Harms: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J. Curran: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Long Jiang Zhang: Department of Medical Imaging, Jinling Hospital, The First School of Clinical Medicine, Southern Medical University, Nanjing 210002, China; Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing 210002, China
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
58
Ferdian E, Suinesiaputra A, Fung K, Aung N, Lukaschuk E, Barutcu A, Maclean E, Paiva J, Piechnik SK, Neubauer S, Petersen SE, Young AA. Fully Automated Myocardial Strain Estimation from Cardiovascular MRI-tagged Images Using a Deep Learning Framework in the UK Biobank. Radiol Cardiothorac Imaging 2020; 2:e190032. [PMID: 32715298] [PMCID: PMC7051160] [DOI: 10.1148/ryct.2020190032]
Abstract
PURPOSE To demonstrate the feasibility and performance of a fully automated deep learning framework to estimate myocardial strain from short-axis cardiac MRI-tagged images. MATERIALS AND METHODS In this retrospective cross-sectional study, 4508 cases from the U.K. Biobank were split randomly into 3244 training cases, 812 validation cases, and 452 test cases. Ground truth myocardial landmarks were defined and tracked by manual initialization and correction of deformable image registration using previously validated software with five readers. The fully automatic framework consisted of (a) a convolutional neural network (CNN) for localization and (b) a combination of a recurrent neural network (RNN) and a CNN to detect and track the myocardial landmarks through the image sequence for each slice. Radial and circumferential strain were then calculated from the motion of the landmarks and averaged on a slice basis. RESULTS Within the test set, myocardial end-systolic circumferential Green strain errors were -0.001 ± 0.025, -0.001 ± 0.021, and 0.004 ± 0.035 in the basal, mid-, and apical slices, respectively (mean ± standard deviation of differences between predicted and manual strain). The framework reproduced significant reductions in circumferential strain in participants with diabetes, hypertensive participants, and participants with a previous heart attack. Typical processing time was approximately 260 frames (approximately 13 slices) per second on a GPU with 12 GB RAM compared with 6-8 minutes per slice for the manual analysis. CONCLUSION The fully automated combined RNN and CNN framework for analysis of myocardial strain enabled unbiased strain evaluation in a high-throughput workflow, with similar ability to distinguish impairment due to diabetes, hypertension, and previous heart attack.
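The circumferential Green strain reported above can be written per segment as E = (l^2 - L^2) / (2 * L^2), where L and l are segment lengths at end-diastole and at the current frame. A minimal sketch follows, assuming landmarks ordered around the mid-wall contour; it illustrates the formula only, not the RNN/CNN tracking framework.

```python
import numpy as np

def circumferential_green_strain(ref_pts: np.ndarray, cur_pts: np.ndarray) -> float:
    """Mean Green strain over closed-contour segments: E = (l^2 - L^2) / (2 * L^2).

    ref_pts, cur_pts: (N, 2) landmark coordinates at end-diastole and the current
    frame, ordered around the contour.
    """
    L = np.linalg.norm(np.diff(ref_pts, axis=0, append=ref_pts[:1]), axis=1)
    l = np.linalg.norm(np.diff(cur_pts, axis=0, append=cur_pts[:1]), axis=1)
    return float(np.mean((l**2 - L**2) / (2.0 * L**2)))
```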
Affiliation(s)
- Edward Ferdian, Avan Suinesiaputra, Kenneth Fung, Nay Aung, Elena Lukaschuk, Ahmet Barutcu, Edd Maclean, Jose Paiva, Stefan K. Piechnik, Stefan Neubauer, Steffen E. Petersen, Alistair A. Young
- From the Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand (E.F., A.S., A.A.Y.); William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London, England (K.F., N.A., E.M., J.P., S.E.P.); Oxford NIHR Biomedical Research Centre, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, England (E.L., A.B., S.K.P., S.N.); and Department of Biomedical Engineering, King's College London, 5th Floor Becket House, 1 Lambeth Palace Rd, London SE1 7EU, England (A.A.Y.)
59
Srinivasa Rao ASR, Diamond MP. Deep Learning of Markov Model-Based Machines for Determination of Better Treatment Option Decisions for Infertile Women. Reprod Sci 2020; 27:763-770. [PMID: 31939200] [DOI: 10.1007/s43032-019-00082-9]
Abstract
In this technical article, we propose ideas that we have been developing on how machine learning and deep learning techniques can potentially assist obstetricians/gynecologists in better clinical decision-making, using treatment options for infertile women, in combination with mathematical modeling in pregnant women, as examples.
Affiliation(s)
- Arni S R Srinivasa Rao: Division of Health Economics and Modeling, Department of Population Health Sciences, Medical College of Georgia, Augusta University, 1120 15th Street, AE 1015, Augusta, GA 30912, USA; Laboratory for Theory and Mathematical Modeling, Division of Infectious Diseases, Department of Medicine, Medical College of Georgia, Augusta University, 1120 15th Street, AE 1015, Augusta, GA 30912, USA; Department of Mathematics, Augusta University, 1120 15th Street, AE 1015, Augusta, GA 30912, USA
- Michael P Diamond: Medical College of Georgia, Augusta University, 1120 15th Street, CJ-1036, Augusta, Georgia
60
He Y, Qin W, Wu Y, Zhang M, Yang Y, Liu X, Zheng H, Liang D, Hu Z. Automatic left ventricle segmentation from cardiac magnetic resonance images using a capsule network. J Xray Sci Technol 2020; 28:541-553. [PMID: 32176675] [DOI: 10.3233/xst-190621]
Abstract
PURPOSE Segmentation of magnetic resonance images (MRI) of the left ventricle (LV) plays a key role in quantifying the volumetric functions of the heart, such as the area, volume, and ejection fraction. Traditionally, LV segmentation is performed manually by experienced experts, which is both time-consuming and prone to subjective bias. This study aims to develop a novel capsule-based method to automatically segment the LV from images obtained by cardiac MRI. METHOD The technique applied for segmentation uses Fourier analysis and the circular Hough transform (CHT) to indicate the approximate location of the LV and a capsule network to precisely segment the LV. The neurons of the capsule network output a vector and preserve much of the information about the input by replacing max pooling with convolutional strides and dynamic routing. Finally, the segmentation result is postprocessed by threshold segmentation and morphological processing to increase the accuracy of LV segmentation. RESULTS We fully exploit the capsule network to achieve the segmentation goal and combine LV detection and capsule concepts to complete LV segmentation. In the experiments, the tested methods achieved LV Dice scores of 0.922±0.05 at end-diastole (ED) and 0.898±0.11 at end-systole (ES) on the ACDC 2017 data set. The experimental results confirm that the algorithm can effectively perform LV segmentation from a cardiac magnetic resonance image. To verify the performance of the proposed method, visual and quantitative comparisons are also performed, which show that the proposed method exhibits improved segmentation accuracy compared with the traditional method. CONCLUSIONS The evaluation metrics of medical image segmentation indicate that the proposed method in combination with postprocessing and feature detection effectively improves segmentation accuracy for cardiac MRI. To the best of our knowledge, this study is the first to use a deep learning model based on capsule networks to systematically evaluate end-to-end LV segmentation.
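A minimal sketch of coarse LV localisation with the circular Hough transform (CHT) mentioned in the METHOD, assuming a 2D short-axis slice as a NumPy array; the Canny settings and radius range are illustrative guesses, not the paper's values.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def locate_lv(slice_2d: np.ndarray, radii=np.arange(15, 45, 2)):
    """Return (row, col, radius) of the strongest circular response in the slice."""
    edges = canny(slice_2d, sigma=2.0)
    accumulators = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(accumulators, radii, total_num_peaks=1)
    return int(cy[0]), int(cx[0]), int(r[0])
```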
Affiliation(s)
- Yangsu He: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; School of Electrical and Information Engineering, Hunan University, Changsha, China
- Wenjian Qin: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yin Wu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Mengxi Zhang: Department of Biomedical Engineering, University of California, Davis, CA, USA
- Yongfeng Yang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xin Liu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hairong Zheng: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhanli Hu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
61
Al-Kadi OS. Spatio-Temporal Segmentation in 3-D Echocardiographic Sequences Using Fractional Brownian Motion. IEEE Trans Biomed Eng 2019; 67:2286-2296. [PMID: 31831403] [DOI: 10.1109/tbme.2019.2958701]
Abstract
An important aspect for an improved cardiac functional analysis is the accurate segmentation of the left ventricle (LV). A novel approach for fully-automated segmentation of the LV endocardium and epicardium contours is presented. This is mainly based on the natural physical characteristics of the LV shape structure. Both sides of the LV boundaries exhibit natural elliptical curvatures by having details on various scales, i.e. exhibiting fractal-like characteristics. The fractional Brownian motion (fBm), which is a non-stationary stochastic process, integrates well with the stochastic nature of ultrasound echoes. It has the advantage of representing a wide range of non-stationary signals and can quantify statistical local self-similarity throughout the time-sequence ultrasound images. The locally characterized boundaries of the fBm segmented LV were further iteratively refined using global information by means of second-order moments. The method is benchmarked using synthetic 3D+time echocardiographic sequences for normal and different ischemic cardiomyopathy, and results compared with state-of-the-art LV segmentation. Furthermore, the framework was validated against real data from canine cases with expert-defined segmentations and demonstrated improved accuracy. The fBm-based segmentation algorithm is fully automatic and has the potential to be used clinically together with 3D echocardiography for improved cardiovascular disease diagnosis.
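A minimal sketch of the self-similarity idea behind fBm, estimating the Hurst exponent H from the increment-variance scaling Var[X(t+tau) - X(t)] proportional to tau^(2H); it assumes a 1D intensity profile and is not the paper's full spatio-temporal segmentation framework.

```python
import numpy as np

def hurst_exponent(profile: np.ndarray, max_lag: int = 20) -> float:
    """Estimate H from the log-log slope of increment variance versus lag."""
    lags = np.arange(2, max_lag)
    variances = [np.var(profile[lag:] - profile[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(variances), 1)
    return 0.5 * slope  # fBm corresponds to 0 < H < 1
```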
62
Ge R, Yang G, Chen Y, Luo L, Feng C, Zhang H, Li S. PV-LVNet: Direct left ventricle multitype indices estimation from 2D echocardiograms of paired apical views with deep neural networks. Med Image Anal 2019; 58:101554. [DOI: 10.1016/j.media.2019.101554]
63
MFP-Unet: A novel deep learning based approach for left ventricle segmentation in echocardiography. Phys Med 2019; 67:58-69. [DOI: 10.1016/j.ejmp.2019.10.001]
64
Leclerc S, Smistad E, Pedrosa J, Ostvik A, Cervenansky F, Espinosa F, Espeland T, Berg EAR, Jodoin PM, Grenier T, Lartizien C, Dhooge J, Lovstakken L, Bernard O. Deep Learning for Segmentation Using an Open Large-Scale Dataset in 2D Echocardiography. IEEE Trans Med Imaging 2019; 38:2198-2210. [PMID: 30802851] [DOI: 10.1109/tmi.2019.2900516]
Abstract
Delineation of the cardiac structures from 2D echocardiographic images is a common clinical task to establish a diagnosis. Over the past decades, the automation of this task has been the subject of intense research. In this paper, we evaluate how far the state-of-the-art encoder-decoder deep convolutional neural network methods can go at assessing 2D echocardiographic images, i.e., segmenting cardiac structures and estimating clinical indices, on a dataset, especially, designed to answer this objective. We, therefore, introduce the cardiac acquisitions for multi-structure ultrasound segmentation dataset, the largest publicly-available and fully-annotated dataset for the purpose of echocardiographic assessment. The dataset contains two and four-chamber acquisitions from 500 patients with reference measurements from one cardiologist on the full dataset and from three cardiologists on a fold of 50 patients. Results show that encoder-decoder-based architectures outperform state-of-the-art non-deep learning methods and faithfully reproduce the expert analysis for the end-diastolic and end-systolic left ventricular volumes, with a mean correlation of 0.95 and an absolute mean error of 9.5 ml. Concerning the ejection fraction of the left ventricle, results are more contrasted with a mean correlation coefficient of 0.80 and an absolute mean error of 5.6%. Although these results are below the inter-observer scores, they remain slightly worse than the intra-observer's ones. Based on this observation, areas for improvement are defined, which open the door for accurate and fully-automatic analysis of 2D echocardiographic images.
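The end-diastolic/end-systolic volumes and ejection fraction discussed above are conventionally obtained with the biplane method of disks (modified Simpson's rule). A minimal sketch follows, assuming per-disk diameters have already been extracted from the two- and four-chamber segmentations; it is illustrative, not the dataset's evaluation code.

```python
import numpy as np

def biplane_volume_ml(diam_2ch_mm, diam_4ch_mm, long_axis_mm: float) -> float:
    """Modified Simpson's rule: V = (pi/4) * sum_i(a_i * b_i) * (L / n), in millilitres."""
    a, b = np.asarray(diam_2ch_mm), np.asarray(diam_4ch_mm)
    v_mm3 = np.pi / 4.0 * np.sum(a * b) * (long_axis_mm / len(a))
    return v_mm3 / 1000.0  # mm^3 -> mL

def ejection_fraction_pct(edv_ml: float, esv_ml: float) -> float:
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```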
65
Feng-Ping A, Zhi-Wen L. Medical image segmentation algorithm based on feedback mechanism convolutional neural network. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101589]
66
Medical Image Segmentation Algorithm Based on Feedback Mechanism CNN. Contrast Media Mol Imaging 2019; 2019:6134942. [PMID: 31481851] [PMCID: PMC6701432] [DOI: 10.1155/2019/6134942]
Abstract
With the development of computer vision and image segmentation technology, medical image segmentation and recognition technology has become an important part of computer-aided diagnosis. Traditional image segmentation methods rely on manual means to extract and select information such as edges, colors, and textures in the image. This not only consumes considerable time and effort but also requires certain expertise to obtain useful feature information, which no longer meets the practical application requirements of medical image segmentation and recognition. As an efficient image segmentation method, convolutional neural networks (CNNs) have been widely promoted and applied in the field of medical image segmentation. However, CNNs that rely on simple feedforward methods have not met the actual needs of the rapid development of the medical field. Inspired by the feedback mechanism of the human visual cortex, this paper proposes an effective feedback-mechanism computation model and operating framework and formulates the feedback optimization problem. A new feedback convolutional neural network algorithm based on neuron screening and neuron visual information recovery is constructed. On this basis, a medical image segmentation algorithm based on a feedback-mechanism convolutional neural network is proposed. The basic idea is as follows: the model first classifies pixel-block samples in the image to obtain an initial segmentation region; the initial results are then refined by threshold segmentation and morphological methods to obtain accurate segmentation results. Experiments show that the proposed segmentation method has not only high segmentation accuracy but also extremely high adaptive segmentation ability for various medical images. The research in this paper provides a new perspective for medical image segmentation research. It is a new attempt to explore more advanced intelligent medical image segmentation methods. It also provides technical approaches and methods for further development and improvement of adaptive medical image segmentation technology.
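A minimal sketch of the post-processing step described above (thresholding the network output, morphological cleaning, and keeping the largest connected component); the threshold and structuring-element size are illustrative assumptions.

```python
import numpy as np
from skimage.measure import label
from skimage.morphology import binary_opening, disk

def postprocess(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Threshold a probability map, open it morphologically, keep the largest component."""
    mask = binary_opening(prob_map >= threshold, disk(2))
    labels = label(mask)
    if labels.max() == 0:
        return mask
    largest = 1 + np.argmax(np.bincount(labels.ravel())[1:])  # skip background label 0
    return labels == largest
```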
67
Wang S, He K, Nie D, Zhou S, Gao Y, Shen D. CT male pelvic organ segmentation using fully convolutional networks with boundary sensitive representation. Med Image Anal 2019; 54:168-178. [PMID: 30928830] [PMCID: PMC6506162] [DOI: 10.1016/j.media.2019.03.003]
Abstract
Accurate segmentation of the prostate and organs at risk (e.g., bladder and rectum) in CT images is a crucial step for radiation therapy in the treatment of prostate cancer. However, it is a very challenging task due to unclear boundaries, large intra- and inter-patient shape variability, and uncertain existence of bowel gases and fiducial markers. In this paper, we propose a novel automatic segmentation framework using fully convolutional networks with boundary sensitive representation to address this challenging problem. Our novel segmentation framework contains three modules. First, an organ localization model is designed to focus on the candidate segmentation region of each organ for better performance. Then, a boundary sensitive representation model based on multi-task learning is proposed to represent the semantic boundary information in a more robust and accurate manner. Finally, a multi-label cross-entropy loss function combining boundary sensitive representation is introduced to train a fully convolutional network for the organ segmentation. The proposed method is evaluated on a large and diverse planning CT dataset with 313 images from 313 prostate cancer patients. Experimental results show that the performance of our proposed method outperforms the baseline fully convolutional networks, as well as other state-of-the-art methods in CT male pelvic organ segmentation.
Affiliation(s)
- Shuai Wang: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Kelei He: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Dong Nie: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Sihang Zhou: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; School of Computer, National University of Defense Technology, Changsha, China
- Yaozong Gao: Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
68
Jafari MH, Girgis H, Van Woudenberg N, Liao Z, Rohling R, Gin K, Abolmaesumi P, Tsang T. Automatic biplane left ventricular ejection fraction estimation with mobile point-of-care ultrasound using multi-task learning and adversarial training. Int J Comput Assist Radiol Surg 2019; 14:1027-1037. [PMID: 30941679] [DOI: 10.1007/s11548-019-01954-w]
Abstract
PURPOSE Left ventricular ejection fraction (LVEF) is one of the key metrics to assess the heart functionality, and cardiac ultrasound (echo) is a standard imaging modality for EF measurement. There is an emerging interest to exploit the point-of-care ultrasound (POCUS) usability due to low cost and ease of access. In this work, we aim to present a computationally efficient mobile application for accurate LVEF estimation. METHODS Our proposed mobile application for LVEF estimation runs in real time on Android mobile devices that have either a wired or wireless connection to a cardiac POCUS device. We propose a pipeline for biplane ejection fraction estimation using apical two-chamber (AP2) and apical four-chamber (AP4) echo views. A computationally efficient multi-task deep fully convolutional network is proposed for simultaneous LV segmentation and landmark detection in these views, which is integrated into the LVEF estimation pipeline. An adversarial critic model is used in the training phase to impose a shape prior on the LV segmentation output. RESULTS The system is evaluated on a dataset of 427 patients. Each patient has a pair of captured AP2 and AP4 echo studies, resulting in a total of more than 40,000 echo frames. The mobile system reaches a noticeably high average Dice score of 92% for LV segmentation, an average Euclidean distance error of 2.85 pixels for the detection of anatomical landmarks used in LVEF calculation, and a median absolute error of 6.2% for LVEF estimation compared to the expert cardiologist's annotations and measurements. CONCLUSION The proposed system runs in real time on mobile devices. The experiments show the effectiveness of the proposed system for automatic LVEF estimation by demonstrating an adequate correlation with the cardiologist's examination.
Affiliation(s)
- Hany Girgis: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
- Zhibin Liao: The University of British Columbia, Vancouver, Canada
- Ken Gin: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
- Teresa Tsang: The University of British Columbia, Vancouver, Canada; Vancouver General Hospital, Vancouver, Canada
69
Li J, Yu ZL, Gu Z, Liu H, Li Y. Dilated-Inception Net: Multi-Scale Feature Aggregation for Cardiac Right Ventricle Segmentation. IEEE Trans Biomed Eng 2019; 66:3499-3508. [PMID: 30932820] [DOI: 10.1109/tbme.2019.2906667]
Abstract
Segmentation of cardiac ventricle from magnetic resonance images is significant for cardiac disease diagnosis, progression assessment, and monitoring cardiac conditions. Manual segmentation is so time consuming, tedious, and subjective that automated segmentation methods are highly desired in practice. However, conventional segmentation methods performed poorly in cardiac ventricle, especially in the right ventricle. Compared with the left ventricle, whose shape is a simple thick-walled circle, the structure of the right ventricle is more complex due to ambiguous boundary, irregular cavity, and variable crescent shape. Hence, effective feature extractors and segmentation models are preferred. In this paper, we propose a dilated-inception net (DIN) to extract and aggregate multi-scale features for right ventricle segmentation. The DIN outperforms many state-of-the-art models on the benchmark database of right ventricle segmentation challenge. In addition, the experimental results indicate that the proposed model has potential to reach expert-level performance in right ventricular epicardium segmentation. More importantly, DIN behaves similarly to clinical expert with high correlation coefficients in four clinical cardiac indices. Therefore, the proposed DIN is promising for automated cardiac right ventricle segmentation in clinical applications.
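A minimal sketch of multi-scale feature aggregation with parallel dilated convolutions, in the spirit of a dilated-inception block; the channel counts and dilation rates are illustrative assumptions, not the DIN architecture itself.

```python
import torch
import torch.nn as nn

class DilatedInceptionBlock(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int = 16):
        super().__init__()
        # parallel 3x3 branches; padding == dilation keeps the spatial size unchanged
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # concatenate multi-scale responses along the channel dimension
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

# Example: a batch of two 1-channel 128x128 slices -> 48-channel multi-scale features
features = DilatedInceptionBlock(1)(torch.randn(2, 1, 128, 128))
```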
70
Furqan Qadri S, Ai D, Hu G, Ahmad M, Huang Y, Wang Y, Yang J. Automatic Deep Feature Learning via Patch-Based Deep Belief Network for Vertebrae Segmentation in CT Images. Appl Sci 2018; 9:69. [DOI: 10.3390/app9010069]
Abstract
Precise automatic vertebra segmentation in computed tomography (CT) images is important for the quantitative analysis of vertebrae-related diseases but remains a challenging task due to high variation in spinal anatomy among patients. In this paper, we propose a deep learning approach for automatic CT vertebra segmentation named patch-based deep belief networks (PaDBNs). Our proposed PaDBN model automatically selects the features from image patches and then measures the differences between classes and investigates performance. The region of interest (ROI) is obtained from CT images. Unsupervised feature reduction contrastive divergence algorithm is applied for weight initialization, and the weights are optimized by layers in a supervised fine-tuning procedure. The discriminative learning features obtained from the steps above are used as input of a classifier to obtain the likelihood of the vertebrae. Experimental results demonstrate that the proposed PaDBN model can considerably reduce computational cost and produce an excellent performance in vertebra segmentation in terms of accuracy compared with state-of-the-art methods.
Affiliation(s)
- Syed Furqan Qadri: School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Guoyu Hu: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Mubashir Ahmad: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yong Huang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yongtian Wang: School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
71
Abstract
The medical field is creating large amount of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient in solving complicated medical tasks or for creating insights using big data. Deep learning has emerged as a more accurate and effective technology in a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform data nonlinearly, thus, revealing hierarchical relationships and structures. In this review, we survey deep learning application papers that use structured data, and signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply in medicine in general, while proposing certain directions as the most viable for clinical use.
72
Viscosity Prediction in a Physiologically Controlled Ventricular Assist Device. IEEE Trans Biomed Eng 2018; 65:2355-2364. [DOI: 10.1109/tbme.2018.2797424]
73
Dong S, Luo G, Wang K, Cao S, Li Q, Zhang H. A Combined Fully Convolutional Networks and Deformable Model for Automatic Left Ventricle Segmentation Based on 3D Echocardiography. Biomed Res Int 2018; 2018:5682365. [PMID: 30276211] [PMCID: PMC6151364] [DOI: 10.1155/2018/5682365]
Abstract
Segmentation of the left ventricle (LV) from three-dimensional echocardiography (3DE) plays a key role in the clinical diagnosis of the LV function. In this work, we proposed a new automatic method for the segmentation of LV, based on the fully convolutional networks (FCN) and deformable model. This method implemented a coarse-to-fine framework. Firstly, a new deep fusion network based on feature fusion and transfer learning, combining the residual modules, was proposed to achieve coarse segmentation of LV on 3DE. Secondly, we proposed a method of geometrical model initialization for a deformable model based on the results of coarse segmentation. Thirdly, the deformable model was implemented to further optimize the segmentation results with a regularization item to avoid the leakage between left atria and left ventricle to achieve the goal of fine segmentation of LV. Numerical experiments have demonstrated that the proposed method outperforms the state-of-the-art methods on the challenging CETUS benchmark in the segmentation accuracy and has a potential for practical applications.
Affiliation(s)
- Suyu Dong: School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
- Gongning Luo: School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
- Kuanquan Wang: School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
- Shaodong Cao: Department of Radiology, The Fourth Hospital of Harbin Medical University, Harbin 150001, China
- Qince Li: School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
- Henggui Zhang: School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China; School of Physics and Astronomy, University of Manchester, Manchester, UK; Space Institute of Southern China, Shenzhen, Guangdong, China
74
Qu J, Hiruta N, Terai K, Nosato H, Murakawa M, Sakanashi H. Gastric Pathology Image Classification Using Stepwise Fine-Tuning for Deep Neural Networks. J Healthc Eng 2018; 2018:8961781. [PMID: 30034677] [PMCID: PMC6033298] [DOI: 10.1155/2018/8961781]
Abstract
Deep learning using convolutional neural networks (CNNs) is a distinguished tool for many image classification tasks. Due to its outstanding robustness and generalization, it is also expected to play a key role in facilitating advanced computer-aided diagnosis (CAD) for pathology images. However, the shortage of well-annotated pathology image data for training deep neural networks has become a major issue at present, because annotation requires costly professional observation by pathologists. Faced with this problem, transfer learning techniques are generally used to reinforce the capacity of deep neural networks. In order to further boost the performance of the state-of-the-art deep neural networks and alleviate the insufficiency of well-annotated data, this paper presents a novel stepwise fine-tuning-based deep learning scheme for gastric pathology image classification and establishes a new type of target-correlative intermediate datasets. Our proposed scheme is designed to make the deep neural network imitate the pathologist's perception and acquire pathology-related knowledge in advance, with very limited extra cost in data annotation. The experiments are conducted with both well-annotated gastric pathology data and the proposed target-correlative intermediate data on several state-of-the-art deep neural networks. The results consistently demonstrate the feasibility and superiority of our proposed scheme for boosting the classification performance.
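A minimal sketch of stepwise fine-tuning with a pretrained CNN: freeze the backbone, train a new classification head, then unfreeze the last stage for a second pass at a lower learning rate. The backbone choice and layer names are illustrative assumptions, not the authors' setup.

```python
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # any pretrained backbone would do

# Stage 1: freeze the pretrained layers and train only a new two-class head
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)

# Stage 2: additionally unfreeze the last residual stage for a low-learning-rate pass
for p in model.layer4.parameters():
    p.requires_grad = True
```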
Affiliation(s)
- Jia Qu: Department of Intelligent Interaction Technologies, University of Tsukuba, Tsukuba 305-8573, Japan
- Nobuyuki Hiruta: Department of Surgical Pathology, Toho University Sakura Medical Center, Sakura 285-8741, Japan
- Kensuke Terai: Department of Surgical Pathology, Toho University Sakura Medical Center, Sakura 285-8741, Japan
- Hirokazu Nosato: Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan
- Masahiro Murakawa: Department of Intelligent Interaction Technologies, University of Tsukuba, Tsukuba 305-8573, Japan; Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan
- Hidenori Sakanashi: Department of Intelligent Interaction Technologies, University of Tsukuba, Tsukuba 305-8573, Japan; Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan
75
Yan J, Pan B, Qi Y, Ben J, Fu Y. Prior knowledge snake segmentation of ultrasound images denoised by J-divergence anisotropy diffusion. Int J Med Robot 2018; 14:e1924. [PMID: 29873448] [DOI: 10.1002/rcs.1924]
Abstract
BACKGROUND Applying transrectal ultrasound to robot-assisted laparoscopic radical prostatectomy has attracted attention in recent years, and it is considered as a proper method to provide real-time subsurface anatomic features. A precise registration between the ultrasound equipment and robotic surgical system is necessary, which usually requires a fast and accurate recognition of the registration tool in the ultrasound image. METHODS Tissue forceps are chosen as the registration tool. J-divergence anisotropy diffusion and prior knowledge snake segmentation are proposed for the automatic recognition of forceps in ultrasound images. RESULTS Simulation, gel tissue phantom experiments and in vitro experiments are carried out. Several evaluation indices are calculated to compare results under different methods. CONCLUSIONS The proposed methods are proved to be practicable, reliable and superior to existing ones, with reduced calculation time and higher accuracy.
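A minimal sketch of Perona-Malik-style anisotropic diffusion for speckle reduction, as a stand-in for the J-divergence anisotropic diffusion used here (the J-divergence edge-stopping term itself is not reproduced); the parameters are illustrative.

```python
import numpy as np

def anisotropic_diffusion(img: np.ndarray, n_iter: int = 20,
                          kappa: float = 30.0, gamma: float = 0.15) -> np.ndarray:
    """Iteratively smooth an image while preserving edges (Perona-Malik, exponential g)."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # finite differences toward the four neighbours (np.roll wraps at the borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductances
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```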
Affiliation(s)
- Jiawen Yan: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Bo Pan: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Yunfeng Qi: The Fourth Affiliated Hospital of Harbin Medical University, Harbin, China
- Jin Ben: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Yili Fu: State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
76
Brattain LJ, Telfer BA, Dhyani M, Grajo JR, Samir AE. Machine learning for medical ultrasound: status, methods, and future opportunities. Abdom Radiol (NY) 2018; 43:786-799. [PMID: 29492605] [PMCID: PMC5886811] [DOI: 10.1007/s00261-018-1517-0]
Abstract
Ultrasound (US) imaging is the most commonly performed cross-sectional diagnostic imaging modality in the practice of medicine. It is low-cost, non-ionizing, portable, and capable of real-time image acquisition and display. US is a rapidly evolving technology with significant challenges and opportunities. Challenges include high inter- and intra-operator variability and limited image quality control. Tremendous opportunities have arisen in the last decade as a result of exponential growth in available computational power coupled with progressive miniaturization of US devices. As US devices become smaller, enhanced computational capability can contribute significantly to decreasing variability through advanced image processing. In this paper, we review leading machine learning (ML) approaches and research directions in US, with an emphasis on recent ML advances. We also present our outlook on future opportunities for ML techniques to further improve clinical workflow and US-based disease diagnosis and characterization.
Collapse
Affiliation(s)
| | - Brian A Telfer
- MIT Lincoln Laboratory, 244 Wood St, Lexington, MA, 02420, USA
| | - Manish Dhyani
- Department of Internal Medicine, Steward Carney Hospital, Boston, MA, 02124, USA
- Division of Ultrasound, Department of Radiology, Center for Ultrasound Research & Translation, Massachusetts General Hospital, Boston, MA, 02114, USA
| | - Joseph R Grajo
- Department of Radiology, Division of Abdominal Imaging, University of Florida College of Medicine, Gainesville, FL, USA
| | - Anthony E Samir
- Division of Ultrasound, Department of Radiology, Center for Ultrasound Research & Translation, Massachusetts General Hospital, Boston, MA, 02114, USA
| |
Collapse
|
77
|
Gurbani SS, Schreibmann E, Maudsley AA, Cordova JS, Soher BJ, Poptani H, Verma G, Barker PB, Shim H, Cooper LAD. A convolutional neural network to filter artifacts in spectroscopic MRI. Magn Reson Med 2018. [PMID: 29520831 DOI: 10.1002/mrm.27166] [Citation(s) in RCA: 58] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
PURPOSE Proton MRSI is a noninvasive modality capable of generating volumetric maps of in vivo tissue metabolism without the need for ionizing radiation or injected contrast agent. Magnetic resonance spectroscopic imaging has been shown to be a viable imaging modality for studying several neuropathologies. However, a key hurdle in the routine clinical adoption of MRSI is the presence of spectral artifacts that can arise from a number of sources, possibly leading to false information. METHODS A deep learning model was developed that was capable of identifying and filtering out poor quality spectra. The core of the model used a tiled convolutional neural network that analyzed frequency-domain spectra to detect artifacts. RESULTS When compared with a panel of MRS experts, our convolutional neural network achieved high sensitivity and specificity with an area under the curve of 0.95. A visualization scheme was implemented to better understand how the convolutional neural network made its judgement on single-voxel or multivoxel MRSI, and the convolutional neural network was embedded into a pipeline capable of producing whole-brain spectroscopic MRI volumes in real time. CONCLUSION The fully automated method for assessment of spectral quality provides a valuable tool to support clinical MRSI or spectroscopic MRI studies for use in fields such as adaptive radiation therapy planning.
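As a rough illustration of the classification stage, the sketch below defines a small 1D convolutional network in PyTorch that maps a frequency-domain spectrum to a good/poor quality score. The layer sizes and the 512-point spectrum length are assumptions for illustration, not the tiled architecture reported in the paper.

```python
import torch
import torch.nn as nn

class SpectrumQualityNet(nn.Module):
    """Toy 1D CNN scoring a magnitude spectrum as good (near 1) or poor (near 0)."""
    def __init__(self, n_points=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_points // 16), 1)

    def forward(self, x):                 # x: (batch, 1, n_points)
        h = self.features(x)
        return torch.sigmoid(self.classifier(h.flatten(1)))

# usage: score a batch of 8 random spectra
net = SpectrumQualityNet()
scores = net(torch.randn(8, 1, 512))      # values in (0, 1)
```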
Collapse
Affiliation(s)
- Saumya S Gurbani
- Department of Radiation Oncology, Emory University, Atlanta, Georgia; Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia; Winship Cancer Institute of Emory University, Atlanta, Georgia
| | - Eduard Schreibmann
- Department of Radiation Oncology, Emory University, Atlanta, Georgia; Winship Cancer Institute of Emory University, Atlanta, Georgia
| | - Andrew A Maudsley
- Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida
| | - James Scott Cordova
- Department of Radiation Oncology, Emory University, Atlanta, Georgia; Winship Cancer Institute of Emory University, Atlanta, Georgia
| | - Brian J Soher
- Department of Radiology, Duke University School of Medicine, Durham, North Carolina
| | - Harish Poptani
- Institute of Translational Medicine, University of Liverpool, Liverpool, United Kingdom
| | - Gaurav Verma
- Department of Radiology, Icahn School of Medicine at Mt. Sinai, New York, New York
| | - Peter B Barker
- Department of Radiology and Radiological Science, The Johns Hopkins University, Baltimore, Maryland
| | - Hyunsuk Shim
- Department of Radiation Oncology, Emory University, Atlanta, Georgia; Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia; Winship Cancer Institute of Emory University, Atlanta, Georgia; Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia
| | - Lee A D Cooper
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia; Winship Cancer Institute of Emory University, Atlanta, Georgia; Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, Georgia
| |
Collapse
|
78
|
Santiago C, Nascimento JC, Marques JS. Fast segmentation of the left ventricle in cardiac MRI using dynamic programming. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 154:9-23. [PMID: 29249351 DOI: 10.1016/j.cmpb.2017.10.028] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/13/2016] [Revised: 09/08/2017] [Accepted: 10/30/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND AND OBJECTIVE The segmentation of the left ventricle (LV) in cardiac magnetic resonance imaging is a necessary step for the analysis and diagnosis of cardiac function. In most clinical setups, this step is still manually performed by cardiologists, which is time-consuming and laborious. This paper proposes a fast system for the segmentation of the LV that significantly reduces human intervention. METHODS A dynamic programming approach is used to obtain the border of the LV. Using very simple assumptions about the expected shape and location of the segmentation, this system is able to deal with many of the challenges associated with this problem. The system was evaluated on two public datasets: one with 33 patients, comprising a total of 660 magnetic resonance volumes and another with 45 patients, comprising a total of 90 volumes. Quantitative evaluation of the segmentation accuracy and computational complexity was performed. RESULTS The proposed system is able to segment a whole volume in 1.5 seconds and achieves an average Dice similarity coefficient of 86.0% and an average perpendicular distance of 2.4 mm, which compares favorably with other state-of-the-art methods. CONCLUSIONS A system for the segmentation of the left ventricle in cardiac magnetic resonance imaging is proposed. It is a fast framework that significantly reduces the amount of time and work required of cardiologists.
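The core dynamic programming idea, finding a minimum-cost path through a cost image while limiting how far the border may jump between columns, can be sketched in a few lines of numpy. This is a generic shortest-path recursion under an assumed smoothness window, not the authors' exact cost function or search space.

```python
import numpy as np

def dp_boundary(cost, max_jump=2):
    """For each column, return the row of a minimum-cost left-to-right path.
    cost: 2D array (rows x cols); max_jump: allowed row change per column."""
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    # backtrack from the cheapest endpoint
    path = np.zeros(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

# usage on a random cost image (e.g., inverted edge strength)
boundary = dp_boundary(np.random.rand(40, 60))
```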
Collapse
Affiliation(s)
- Carlos Santiago
- Institute for Systems and Robotics (ISR/IST), LARSyS, Instituto Superior Técnico, Universidade Lisboa, Portugal.
| | - Jacinto C Nascimento
- Institute for Systems and Robotics (ISR/IST), LARSyS, Instituto Superior Técnico, Universidade Lisboa, Portugal.
| | - Jorge S Marques
- Institute for Systems and Robotics (ISR/IST), LARSyS, Instituto Superior Técnico, Universidade Lisboa, Portugal.
| |
Collapse
|
79
|
Shi J, Zheng X, Li Y, Zhang Q, Ying S. Multimodal Neuroimaging Feature Learning With Multimodal Stacked Deep Polynomial Networks for Diagnosis of Alzheimer's Disease. IEEE J Biomed Health Inform 2018; 22:173-183. [DOI: 10.1109/jbhi.2017.2655720] [Citation(s) in RCA: 222] [Impact Index Per Article: 31.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
80
|
Ye F. Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data. PLoS One 2017; 12:e0188746. [PMID: 29236718 PMCID: PMC5728507 DOI: 10.1371/journal.pone.0188746] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2017] [Accepted: 10/02/2017] [Indexed: 01/02/2023] Open
Abstract
In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks, using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations are coded as real-valued m-dimensional vectors that serve as the individuals of the PSO algorithm during the search procedure. During the search, the PSO algorithm explores a finite search space for optimal network configurations via the particles' movement, and the steepest gradient descent algorithm is used to train the DNN classifier for a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is run with more epochs on the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capability of the PSO algorithm are exploited to determine a solution that is close to the global optimum. We conducted several experiments on handwritten character and biological activity prediction datasets to show that DNN classifiers trained with the network configurations expressed by the final solutions of the PSO algorithm, used to construct an ensemble model and individual classifiers, outperform the random approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks.
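A compact sketch of the PSO search loop is given below. The particle encodes two hypothetical hyperparameters (learning rate and hidden-layer width), and the objective is a placeholder for the short-epoch training run described in the abstract; all parameter values are illustrative assumptions.

```python
import numpy as np

def objective(pos):
    """Placeholder for 'train the DNN a few epochs and return validation error'.
    Here we simply score distance from an assumed optimum (lr=0.01, width=128)."""
    lr, width = pos
    return (np.log10(lr) + 2) ** 2 + ((width - 128) / 128) ** 2

rng = np.random.default_rng(0)
n_particles, n_iter = 20, 50
lo, hi = np.array([1e-4, 16]), np.array([1e-1, 512])

pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)]

print("best configuration (lr, width):", gbest)
```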
Collapse
Affiliation(s)
- Fei Ye
- School of information science and technology, Southwest Jiaotong University, ChengDu, China
| |
Collapse
|
81
|
Meiburger KM, Acharya UR, Molinari F. Automated localization and segmentation techniques for B-mode ultrasound images: A review. Comput Biol Med 2017; 92:210-235. [PMID: 29247890 DOI: 10.1016/j.compbiomed.2017.11.018] [Citation(s) in RCA: 66] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2017] [Revised: 11/30/2017] [Accepted: 11/30/2017] [Indexed: 12/14/2022]
Abstract
B-mode ultrasound imaging is used extensively in medicine. Hence, there is a need for efficient segmentation tools to aid in computer-aided diagnosis, image-guided interventions, and therapy. This paper presents a comprehensive review of automated localization and segmentation techniques for B-mode ultrasound images. The paper first describes the general characteristics of B-mode ultrasound images. Insight is then provided into the localization and segmentation of tissues, both in the case in which the organ/tissue localization provides the final segmentation and in the case in which a two-step segmentation process is needed because the desired boundaries are too fine to locate from within the entire ultrasound frame. Subsequently, examples of some of the main techniques found in the literature are shown, including but not limited to shape priors, superpixels and classification, local pixel statistics, active contours, edge tracking, dynamic programming, and data mining. Ten selected applications (abdomen/kidney, breast, cardiology, thyroid, liver, vascular, musculoskeletal, obstetrics, gynecology, prostate) are then investigated in depth, and the performances of a few specific applications are compared. In conclusion, future perspectives for B-mode-based segmentation, such as the integration of RF information, the use of higher-frequency probes when possible, the focus on completely automatic algorithms, and the increase in available data, are discussed.
Collapse
Affiliation(s)
- Kristen M Meiburger
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
| | - U Rajendra Acharya
- Department of Electronic & Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
| | - Filippo Molinari
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy.
| |
Collapse
|
82
|
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42:60-88. [PMID: 28778026 DOI: 10.1016/j.media.2017.07.005] [Citation(s) in RCA: 4777] [Impact Index Per Article: 597.1] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 07/24/2017] [Accepted: 07/25/2017] [Indexed: 02/07/2023]
Affiliation(s)
- Geert Litjens
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands.
| | - Thijs Kooi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | | | | | - Francesco Ciompi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Mohsen Ghafoorian
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | | | - Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
| |
Collapse
|
83
|
Nascimento JC, Carneiro G. Deep Learning on Sparse Manifolds for Faster Object Segmentation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2017; 26:4978-4990. [PMID: 28708556 DOI: 10.1109/tip.2017.2725582] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
We propose a new combination of deep belief networks and sparse manifold learning strategies for the 2D segmentation of non-rigid visual objects. With this novel combination, we aim to reduce the training and inference complexities while maintaining the accuracy of machine learning-based non-rigid segmentation methodologies. Typical non-rigid object segmentation methodologies divide the problem into a rigid detection followed by a non-rigid segmentation, where the low dimensionality of the rigid detection allows for a robust training (i.e., a training that does not require a vast amount of annotated images to estimate robust appearance and shape models) and a fast search process during inference. Therefore, it is desirable that the dimensionality of this rigid transformation space is as small as possible in order to enhance the advantages brought by the aforementioned division of the problem. In this paper, we propose the use of sparse manifolds to reduce the dimensionality of the rigid detection space. Furthermore, we propose the use of deep belief networks to allow for a training process that can produce robust appearance models without the need of large annotated training sets. We test our approach in the segmentation of the left ventricle of the heart from ultrasound images and lips from frontal face images. Our experiments show that the use of sparse manifolds and deep belief networks for the rigid detection stage leads to segmentation results that are as accurate as the current state of the art, but with lower search complexity and training processes that require a small amount of annotated training data.
Collapse
|
84
|
Cunningham RJ, Harding PJ, Loram ID. Real-Time Ultrasound Segmentation, Analysis and Visualisation of Deep Cervical Muscle Structure. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:653-665. [PMID: 27831867 DOI: 10.1109/tmi.2016.2623819] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Despite the widespread availability of ultrasound and a need for personalised muscle diagnosis (neck/back pain-injury, work related disorder, myopathies, neuropathies), robust, online segmentation of muscles within complex groups remains unsolved by existing methods. For example, Cervical Dystonia (CD) is a prevalent neurological condition causing painful spasticity in one or multiple muscles in the cervical muscle system. Clinicians currently have no method for targeting/monitoring treatment of deep muscles. Automated methods of muscle segmentation would enable clinicians to study, target, and monitor the deep cervical muscles via ultrasound. We have developed a method for segmenting five bilateral cervical muscles and the spine via ultrasound alone, in real-time. Magnetic Resonance Imaging (MRI) and ultrasound data were collected from 22 participants (age: 29.0±6.6, male: 12). To acquire ultrasound muscle segment labels, a novel multimodal registration method was developed, involving MRI image annotation and shape registration to MRI-matched ultrasound images via approximation of the tissue deformation. We then applied polynomial regression to transform our annotations and textures into a mean space, before using shape statistics to generate a texture-to-shape dictionary. For segmentation, test images were compared to dictionary textures to give an initial segmentation, and a customized Active Shape Model was then used to refine the fit. Using ultrasound alone, on unseen participants, our technique currently segments a single image in [Formula: see text] to over 86% accuracy (Jaccard index). We propose that this approach is generally applicable to segmenting, extrapolating, and visualising deep muscle structure, and to analysing statistical features online.
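The accuracy figure quoted here is a Jaccard index (intersection over union). For reference, a minimal computation of that overlap measure between a predicted and a reference mask is sketched below on toy masks, not the study's data.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index (intersection over union) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# usage on toy masks
pred = np.zeros((32, 32), bool); pred[8:24, 8:24] = True
truth = np.zeros((32, 32), bool); truth[10:26, 10:26] = True
print(jaccard(pred, truth))
```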
Collapse
|
85
|
Yu L, Guo Y, Wang Y, Yu J, Chen P. Segmentation of Fetal Left Ventricle in Echocardiographic Sequences Based on Dynamic Convolutional Neural Networks. IEEE Trans Biomed Eng 2017; 64:1886-1895. [PMID: 28113289 DOI: 10.1109/tbme.2016.2628401] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Segmentation of the fetal left ventricle (LV) in echocardiographic sequences is important for further quantitative analysis of fetal cardiac function. However, gross image inhomogeneities and random fetal movements make the segmentation a challenging problem. In this paper, a dynamic convolutional neural network (CNN) based on multiscale information and fine-tuning is proposed for fetal LV segmentation. The CNN is pretrained with a large amount of labeled training data. During segmentation, the first frame of each echocardiographic sequence is delineated manually. The dynamic CNN is then fine-tuned by deep tuning with the first frame and shallow tuning with the remaining frames, respectively, to adapt to the individual fetus. Additionally, to separate the connection region between the LV and the left atrium (LA), a matching approach consisting of block matching and line matching is used to track the mitral valve (MV) base points. The proposed method is compared with an active contour model (ACM), a dynamical appearance model (DAM), and a fixed multiscale CNN method. Experimental results on 51 echocardiographic sequences show that the segmentation results agree well with the ground truth, especially in cases with leakage, blurry boundaries, and subject-to-subject variations. The CNN architecture can be simple, and the dynamic fine-tuning is efficient.
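The deep-versus-shallow tuning strategy can be illustrated by freezing different subsets of layers in PyTorch. The two-block network below is an assumed toy architecture, not the one used in the paper, and the single training step shown is purely illustrative.

```python
import torch
import torch.nn as nn

# toy segmentation network: an "early" feature block and a "late" output block
net = nn.Sequential(
    nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU()),    # early layers
    nn.Sequential(nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid()), # late layers
)

def set_trainable(module, flag):
    """Freeze or unfreeze all parameters of a module."""
    for p in module.parameters():
        p.requires_grad = flag

# "deep tuning" on the manually delineated first frame: adapt all layers
set_trainable(net, True)
opt_deep = torch.optim.Adam(net.parameters(), lr=1e-4)

# "shallow tuning" on the remaining frames: freeze early layers, adapt late ones
set_trainable(net[0], False)
opt_shallow = torch.optim.Adam(net[1].parameters(), lr=1e-4)

# one illustrative shallow-tuning step on a dummy frame and mask
frame = torch.rand(1, 1, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
loss = nn.functional.binary_cross_entropy(net(frame), mask)
opt_shallow.zero_grad(); loss.backward(); opt_shallow.step()
```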
Collapse
|
86
|
Pereira F, Bueno A, Rodriguez A, Perrin D, Marx G, Cardinale M, Salgo I, Del Nido P. Automated detection of coarctation of aorta in neonates from two-dimensional echocardiograms. J Med Imaging (Bellingham) 2017; 4:014502. [PMID: 28149925 DOI: 10.1117/1.jmi.4.1.014502] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2016] [Accepted: 12/20/2016] [Indexed: 11/14/2022] Open
Abstract
Coarctation of aorta (CoA) is a critical congenital heart defect (CCHD) that requires accurate and immediate diagnosis and treatment. Current newborn screening methods to detect CoA lack both sensitivity and specificity, and when CoA is suspected in a newborn, it must be confirmed using specialized imaging and expert diagnosis, both of which are usually unavailable at tertiary birthing centers. We explore the feasibility of applying machine learning methods to reliably determine the presence of this difficult-to-diagnose cardiac abnormality from ultrasound image data. We propose a framework that uses deep learning-based machine learning methods for fully automated detection of CoA from two-dimensional ultrasound clinical data acquired in the parasternal long axis view, the apical four chamber view, and the suprasternal notch view. On a validation set consisting of 26 CoA and 64 normal patients, our algorithm achieved a total error rate of 12.9% (11.5% false-negative error and 13.6% false-positive error) when combining decisions of classifiers over three standard echocardiographic view planes. This compares favorably with published results that combine clinical assessments with pulse oximetry to detect CoA (71% sensitivity).
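One simple way to combine per-view classifier outputs is to average the three view-specific probabilities before thresholding, as sketched below. This fusion rule is an assumption for illustration and may differ from the decision-combination scheme the authors actually used.

```python
import numpy as np

def fuse_views(p_plax, p_a4c, p_ssn, threshold=0.5):
    """Average CoA probabilities from the three echo views and threshold.
    Inputs are per-patient probabilities in [0, 1]."""
    combined = np.mean([p_plax, p_a4c, p_ssn], axis=0)
    return (combined >= threshold).astype(int)

# usage: three hypothetical per-view scores for two patients
decision = fuse_views(np.array([0.8, 0.3]),
                      np.array([0.6, 0.2]),
                      np.array([0.7, 0.4]))
print(decision)   # [1 0]
```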
Collapse
Affiliation(s)
- Franklin Pereira
- Philips Ultrasound Inc., 3000 Minuteman Road, Andover, Massachusetts 02176, United States
| | - Alejandra Bueno
- Boston Children's Hospital, Department of Cardiovascular Surgery, 300 Longwood Avenue, Boston, Massachusetts 02115, United States
| | - Andrea Rodriguez
- Boston Children's Hospital, Department of Cardiovascular Surgery, 300 Longwood Avenue, Boston, Massachusetts 02115, United States
| | - Douglas Perrin
- Boston Children's Hospital, Department of Cardiovascular Surgery, 300 Longwood Avenue, Boston, Massachusetts 02115, United States
| | - Gerald Marx
- Boston Children's Hospital, Department of Cardiovascular Surgery, 300 Longwood Avenue, Boston, Massachusetts 02115, United States
| | - Michael Cardinale
- Philips Ultrasound Inc., 3000 Minuteman Road, Andover, Massachusetts 02176, United States
| | - Ivan Salgo
- Philips Ultrasound Inc., 3000 Minuteman Road, Andover, Massachusetts 02176, United States
| | - Pedro Del Nido
- Boston Children's Hospital, Department of Cardiovascular Surgery, 300 Longwood Avenue, Boston, Massachusetts 02115, United States
| |
Collapse
|
87
|
Review of Deep Learning Methods in Mammography, Cardiovascular, and Microscopy Image Analysis. DEEP LEARNING AND CONVOLUTIONAL NEURAL NETWORKS FOR MEDICAL IMAGE COMPUTING 2017. [DOI: 10.1007/978-3-319-42999-1_2] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
|
88
|
Ngo TA, Lu Z, Carneiro G. Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance. Med Image Anal 2017; 35:159-171. [PMID: 27423113 DOI: 10.1016/j.media.2016.05.009] [Citation(s) in RCA: 166] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2015] [Revised: 05/20/2016] [Accepted: 05/20/2016] [Indexed: 11/28/2022]
Affiliation(s)
- Tuan Anh Ngo
- Vietnam National University of Agriculture, Vietnam
| | - Zhi Lu
- The University of South Australia, Australia
| | - Gustavo Carneiro
- Australian Centre for Visual Technologies, The University of Adelaide, Australia.
| |
Collapse
|
89
|
Shi J, Zhou S, Liu X, Zhang Q, Lu M, Wang T. Stacked deep polynomial network based representation learning for tumor classification with small ultrasound image dataset. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.01.074] [Citation(s) in RCA: 53] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
|
90
|
Feng C, Zhang S, Zhao D, Li C. Simultaneous extraction of endocardial and epicardial contours of the left ventricle by distance regularized level sets. Med Phys 2016; 43:2741-2755. [DOI: 10.1118/1.4947126] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
|
91
|
Albarqouni S, Baur C, Achilles F, Belagiannis V, Demirci S, Navab N. AggNet: Deep Learning From Crowds for Mitosis Detection in Breast Cancer Histology Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1313-21. [PMID: 26891484 DOI: 10.1109/tmi.2016.2528120] [Citation(s) in RCA: 220] [Impact Index Per Article: 24.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
The lack of publicly available ground-truth data has been identified as the major challenge for transferring recent developments in deep learning to the biomedical imaging domain. Though crowdsourcing has enabled annotation of large-scale databases for real-world images, its application for biomedical purposes requires a deeper understanding, and hence a more precise definition, of the actual annotation task. The fact that expert tasks are being outsourced to non-expert users may lead to noisy annotations, introducing disagreement between users. Although crowdsourced annotations are a valuable resource for learning annotation models, conventional machine-learning methods may have difficulty dealing with such noisy annotations during training. In this manuscript, we present a new concept for learning from crowds that handles data aggregation directly as part of the learning process of the convolutional neural network (CNN), via an additional crowdsourcing layer (AggNet). In addition, we present an experimental study on learning from crowds designed to answer the following questions. 1) Can a deep CNN be trained with data collected from crowdsourcing? 2) How can the CNN be adapted to train on multiple types of annotation datasets (ground truth and crowd-based)? 3) How does the choice of annotation and aggregation affect the accuracy? Our experimental setup involved Annot8, a self-implemented web platform based on the Crowdflower API realizing image annotation tasks for a publicly available biomedical image database. Our results give valuable insights into the functionality of deep CNN learning from crowd annotations and demonstrate the necessity of integrating data aggregation.
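For contrast with the learned aggregation layer described above, the simplest baseline for combining noisy crowd annotations is a pixel-wise majority vote, sketched below; this is an assumed baseline, not AggNet's mechanism, and the toy data are random.

```python
import numpy as np

def aggregate_crowd_masks(masks, min_agreement=0.5):
    """Pixel-wise majority vote over several noisy crowd annotations.
    masks: array (n_annotators, H, W) of binary labels."""
    vote = masks.mean(axis=0)
    return (vote >= min_agreement).astype(np.uint8)

# usage: three simulated annotators labelling a 4x4 patch
rng = np.random.default_rng(1)
crowd = (rng.random((3, 4, 4)) > 0.5).astype(np.uint8)
consensus = aggregate_crowd_masks(crowd)
```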
Collapse
|
92
|
Guo Y, Gao Y, Shen D. Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1077-89. [PMID: 26685226 PMCID: PMC5002995 DOI: 10.1109/tmi.2015.2508280] [Citation(s) in RCA: 123] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by the stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of the learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map to achieve the final segmentation. The proposed method has been extensively evaluated on a dataset containing 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation. Moreover, our method outperforms other state-of-the-art segmentation methods.
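A minimal single-layer sparse autoencoder in PyTorch, trained on image patches with an L1 sparsity penalty on the hidden code, illustrates the unsupervised feature-learning idea. The patch size, code width, and penalty weight are illustrative assumptions; the paper stacks several such layers and then refines them with supervision.

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """One layer of a stacked sparse auto-encoder for flattened 16x16 patches."""
    def __init__(self, n_in=256, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = SparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.rand(128, 256)                 # stand-in for MR image patches

for _ in range(10):                            # a few illustrative epochs
    recon, code = model(patches)
    # reconstruction loss plus L1 sparsity penalty on the hidden code
    loss = ((recon - patches) ** 2).mean() + 1e-3 * code.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```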
Collapse
Affiliation(s)
| | | | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599 USA; and also with Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
| |
Collapse
|
93
|
Iterative Multi-domain Regularized Deep Learning for Anatomical Structure Detection and Segmentation from Ultrasound Images. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2016 2016. [DOI: 10.1007/978-3-319-46723-8_56] [Citation(s) in RCA: 43] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
94
|
Abstract
High noise levels and low contrast in medical images continue to present major bottlenecks in their segmentation, despite the growing number of imaging modalities. This paper presents a semi-automatic algorithm that utilizes the noise to enhance the contrast of low-contrast input magnetic resonance images, followed by a new graph cut method to reconstruct the surface of the left ventricle. The main contribution of this work is a new formulation that prevents the conventional cellular automata method from leaking into surrounding regions of similar intensity. Instead of segmenting each slice of a subject sequence individually, we empirically select a few slices, segment them, and reconstruct the left ventricular surface. During the course of surface reconstruction, we use level sets to segment the rest of the slices automatically. We have thoroughly evaluated the method on both the York and MICCAI Grand Challenge workshop databases. The average Dice coefficient (in %) is found to be 92.4 ± 1.3 (mean and standard deviation), whereas the false positive ratio, false negative ratio, and specificity are found to be 0.019, 7.62 × 10⁻³, and 0.75, respectively. The average Hausdorff distance between the segmented contour and the ground truth is 2.94 mm. The encouraging quantitative and qualitative results reflect the potential of the proposed method.
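The evaluation metrics quoted here are standard and easy to reproduce. The sketch below computes the Dice coefficient from binary masks and the symmetric Hausdorff distance from the two contours' point sets; the toy masks and the assumption that point coordinates are in millimetres are for illustration only.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, 2) point sets."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# usage on toy data
seg = np.zeros((32, 32), bool); seg[8:24, 8:24] = True
ref = np.zeros((32, 32), bool); ref[10:26, 10:26] = True
print(dice(seg, ref))
print(hausdorff(np.argwhere(seg), np.argwhere(ref)))
```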
Collapse
|
95
|
Hansson M, Brandt SS, Lindström J, Gudmundsson P, Jujić A, Malmgren A, Cheng Y. Segmentation of B-mode cardiac ultrasound data by Bayesian Probability Maps. Med Image Anal 2014; 18:1184-99. [DOI: 10.1016/j.media.2014.06.004] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2013] [Revised: 06/02/2014] [Accepted: 06/13/2014] [Indexed: 10/25/2022]
|
96
|
Nascimento JC, Silva JG, Marques JS, Lemos JM. Manifold learning for object tracking with multiple nonlinear models. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2014; 23:1593-1605. [PMID: 24577194 DOI: 10.1109/tip.2014.2303652] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
This paper presents a novel manifold learning algorithm for high-dimensional data sets. The application focuses on the problem of motion tracking in video sequences. The framework is twofold. First, it is assumed that the samples are time ordered, providing valuable information that is not exploited by current methodologies. Second, the manifold topology comprises multiple charts, which contrasts with most current methods, which assume a single chart and are therefore overly restrictive. The proposed algorithm, Gaussian process multiple local models (GP-MLM), can deal with arbitrary manifold topology by decomposing the manifold into multiple local models that are probabilistically combined using Gaussian process regression. In addition, the paper presents a multiple-filter architecture in which standard filtering techniques are integrated within the GP-MLM. The proposed approach exhibits performance comparable to state-of-the-art trackers, namely multiple model data association and deep belief networks, and compares favorably with Gaussian process latent variable models. Extensive experiments are presented using real video data, including a publicly available database of lip sequences and left ventricle ultrasound images, on which the GP-MLM achieves state-of-the-art results.
Collapse
|
97
|
Dietenbeck T, Barbosa D, Alessandrini M, Jasaityte R, Robesyn V, D'hooge J, Friboulet D, Bernard O. Whole myocardium tracking in 2D-echocardiography in multiple orientations using a motion constrained level-set. Med Image Anal 2014; 18:500-14. [PMID: 24561989 DOI: 10.1016/j.media.2014.01.005] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2013] [Revised: 01/08/2014] [Accepted: 01/24/2014] [Indexed: 11/18/2022]
Abstract
The segmentation and tracking of the myocardium in echocardiographic sequences is an important task for the diagnosis of heart disease. This task is difficult due to the inherent problems of echographic images (i.e., low contrast, speckle noise, signal dropout, presence of shadows). In this article, we extend a level-set method recently proposed in Dietenbeck et al. (2012) in order to track the whole myocardium in echocardiographic sequences. To this end, we enforce temporal coherence by adding a new motion prior energy to the existing framework. This motion prior term is expressed as a new constraint that enforces the conservation of the levels of the implicit function along the image sequence. Moreover, the robustness of the proposed method is improved by adjusting the associated hyperparameters in a spatially adaptive way, using the strong priors available about the echocardiographic regions to be segmented. The accuracy and robustness of the proposed method are evaluated by comparing the obtained segmentations with expert references and with another state-of-the-art method on a dataset of 15 sequences (≃ 900 images) acquired in three echocardiographic views. We show that the algorithm provides results that are consistent with the inter-observer variability and outperforms the state-of-the-art method. We also carry out a complete study of the influence of the parameter settings; the results demonstrate the stability of our method with respect to these values.
Collapse
Affiliation(s)
- T Dietenbeck
- Université de Lyon, CREATIS, CNRS UMR5220, INSERM U1044, Université Lyon 1, INSA-LYON, France.
| | - D Barbosa
- Université de Lyon, CREATIS, CNRS UMR5220, INSERM U1044, Université Lyon 1, INSA-LYON, France; Cardiovascular Imaging and Dynamics, KU Leuven, Leuven, Belgium
| | - M Alessandrini
- Université de Lyon, CREATIS, CNRS UMR5220, INSERM U1044, Université Lyon 1, INSA-LYON, France
| | - R Jasaityte
- Cardiovascular Imaging and Dynamics, KU Leuven, Leuven, Belgium
| | - V Robesyn
- Cardiovascular Imaging and Dynamics, KU Leuven, Leuven, Belgium
| | - J D'hooge
- Cardiovascular Imaging and Dynamics, KU Leuven, Leuven, Belgium
| | - D Friboulet
- Université de Lyon, CREATIS, CNRS UMR5220, INSERM U1044, Université Lyon 1, INSA-LYON, France
| | - O Bernard
- Université de Lyon, CREATIS, CNRS UMR5220, INSERM U1044, Université Lyon 1, INSA-LYON, France
| |
Collapse
|
98
|
Carneiro G, Nascimento JC. Combining multiple dynamic models and deep learning architectures for tracking the left ventricle endocardium in ultrasound data. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2013; 35:2592-2607. [PMID: 24051722 DOI: 10.1109/tpami.2013.96] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
We present a new statistical pattern recognition approach for the problem of left ventricle endocardium tracking in ultrasound data. The problem is formulated as a sequential importance resampling algorithm such that the expected segmentation of the current time step is estimated based on the appearance, shape, and motion models that take into account all previous and current images and previous segmentation contours produced by the method. The new appearance and shape models decouple the affine and nonrigid segmentations of the left ventricle to reduce the running time complexity. The proposed motion model combines the systole and diastole motion patterns and an observation distribution built by a deep neural network. The functionality of our approach is evaluated using a dataset of diseased cases containing 16 sequences and another dataset of normal cases comprised of four sequences, where both sets present long axis views of the left ventricle. Using a training set comprised of diseased and healthy cases, we show that our approach produces more accurate results than current state-of-the-art endocardium tracking methods in two test sequences from healthy subjects. Using three test sequences containing different types of cardiopathies, we show that our method correlates well with interuser statistics produced by four cardiologists.
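The sequential importance resampling backbone of such a tracker can be sketched with a generic particle filter. In the toy version below the state is just a 2D centre position with a random-walk motion model and a Gaussian placeholder likelihood, standing in for the paper's learned appearance, shape, and motion models; all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_frames = 200, 10
particles = rng.normal([32.0, 32.0], 2.0, size=(n_particles, 2))  # initial guess

def likelihood(states, observation):
    """Placeholder observation model: Gaussian around the observed centre
    (the paper uses a deep-network observation distribution instead)."""
    d2 = ((states - observation) ** 2).sum(axis=1)
    return np.exp(-0.5 * d2 / 4.0)

true_centre = np.array([32.0, 32.0])
for t in range(n_frames):
    true_centre += rng.normal(0, 0.5, 2)               # simulated LV drift
    particles += rng.normal(0, 1.0, particles.shape)   # prediction (motion) step
    w = likelihood(particles, true_centre)              # weighting step
    w /= w.sum()
    estimate = (w[:, None] * particles).sum(axis=0)     # weighted mean estimate
    idx = rng.choice(n_particles, n_particles, p=w)     # resampling step
    particles = particles[idx]
    print(t, estimate.round(2))
```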
Collapse
|