1
Soltanian-Zadeh S, Liu Z, Liu Y, Lassoued A, Cukras CA, Miller DT, Hammer DX, Farsiu S. Deep learning-enabled volumetric cone photoreceptor segmentation in adaptive optics optical coherence tomography images of normal and diseased eyes. Biomed Opt Express 2023; 14:815-833. [PMID: 36874491; PMCID: PMC9979662; DOI: 10.1364/boe.478693]
Abstract
Objective quantification of photoreceptor cell morphology, such as cell diameter and outer segment length, is crucial for early, accurate, and sensitive diagnosis and prognosis of retinal neurodegenerative diseases. Adaptive optics optical coherence tomography (AO-OCT) provides three-dimensional (3-D) visualization of photoreceptor cells in the living human eye. The current gold standard for extracting cell morphology from AO-OCT images is the tedious process of 2-D manual marking. To automate this process and extend the analysis to the full 3-D volumetric data, we propose a comprehensive deep learning framework for segmenting individual cone cells in AO-OCT scans. Our automated method achieved human-level performance in assessing cone photoreceptors of healthy and diseased participants captured with three different AO-OCT systems representing two different types of point-scanning OCT: spectral domain and swept source.
Affiliation(s)
- Zhuolin Liu
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Yan Liu
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Ayoub Lassoued
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Catherine A. Cukras
- National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Donald T. Miller
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Daniel X. Hammer
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
2
Liu J, Aguilera N, Liu T, Tam J. Automated Iterative Label Transfer Improves Segmentation of Noisy Cells in Adaptive Optics Retinal Images. Deep Generative Models, and Data Augmentation, Labelling, and Imperfections: First Workshop, DGM4MICCAI 2021, and First Workshop, DALI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, October 1, 2021, Proceedings 2021; 13003:201-208. [PMID: 35464297; PMCID: PMC9033000; DOI: 10.1007/978-3-030-88210-5_19]
Abstract
High quality data labeling is essential for improving the accuracy of deep learning applications in medical imaging. However, noisy images are not only under-represented in training datasets; the labels assigned to them also tend to be of low quality. Because real-world images contain noise, this mismatch can lead to unexpected drops in algorithm performance, and traditional data augmentation strategies compound the problem by propagating poor-quality labels on noisy images. In this paper, we present a non-traditional, purposeful data augmentation method that specifically transfers high quality automated labels into noisy image regions for incorporation into the training dataset. The overall approach is based on paired images of the same cells in which variable image noise causes cell segmentation failures. Iteratively updating the cell segmentation model with accurate labels of noisy image areas improved the Dice coefficient from 77% to 86%. This was achieved by adding only 3.4% more cells to the training dataset, showing that local label transfer through graph matching is an effective augmentation strategy for improving segmentation.
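The Dice coefficient cited in this abstract (77% to 86%) is the standard overlap score between a predicted segmentation mask and a reference mask. As a minimal sketch (not the authors' code), it can be computed for binary masks as twice the intersection divided by the sum of the two mask sizes:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice coefficient between two binary segmentation masks.

    Returns 2*|pred ∩ truth| / (|pred| + |truth|), in [0, 1].
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total
```

For example, masks [1, 1, 0, 0] and [1, 0, 1, 0] share one foreground pixel out of four total, giving a Dice score of 0.5.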
Affiliation(s)
- Jianfei Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Nancy Aguilera
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Tao Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Johnny Tam
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
3
Liu J, Shen C, Aguilera N, Cukras C, Hufnagel RB, Zein WM, Liu T, Tam J. Active Cell Appearance Model Induced Generative Adversarial Networks for Annotation-Efficient Cell Segmentation and Identification on Adaptive Optics Retinal Images. IEEE Trans Med Imaging 2021; 40:2820-2831. [PMID: 33507868; PMCID: PMC8548993; DOI: 10.1109/tmi.2021.3055483]
Abstract
Data annotation is a fundamental precursor for establishing the large training sets needed to effectively apply deep learning methods to medical image analysis. For cell segmentation, obtaining high quality annotations is an expensive process that usually requires manual grading by experts. This work introduces "A-GANs", an approach for efficiently generating annotated images that combines an active cell appearance model (ACAM) with conditional generative adversarial networks (C-GANs). ACAM is a statistical model that captures a realistic range of cell characteristics and ensures that the image statistics of generated cells are guided by real data. The C-GANs take cell contours generated by ACAM and produce cells that match those input contours. By pairing ACAM-generated contours with the corresponding A-GANs-generated images, high quality annotated images can be produced efficiently. Experimental results on adaptive optics (AO) retinal images showed that A-GANs robustly synthesize realistic artificial images whose cell distributions are precisely specified by ACAM. Cell segmentation performance using as few as 64 manually-annotated real AO images combined with 248 artificially-generated images from A-GANs was similar to that obtained using 248 manually-annotated real images alone (Dice coefficients of 88% in both cases). Finally, application to rare diseases, in which images exhibit never-before-seen characteristics, demonstrated improvements in cell segmentation without requiring manual annotations from these new retinal images. Overall, A-GANs offer a methodology for generating high quality annotated data that statistically captures the characteristics of any desired dataset and can be used to train deep-learning-based medical image analysis applications more efficiently.
4
Liu L, Wolterink JM, Brune C, Veldhuis RNJ. Anatomy-aided deep learning for medical image segmentation: a review. Phys Med Biol 2021; 66. [PMID: 33906186; DOI: 10.1088/1361-6560/abfbf4]
Abstract
Deep learning (DL) has become widely used for medical image segmentation in recent years. However, despite these advances, there are still problems for which DL-based segmentation fails. Recently, some DL approaches have achieved breakthroughs by incorporating anatomical information, which is a crucial cue in manual segmentation. In this paper, we provide a review of anatomy-aided DL for medical image segmentation that systematically summarizes the categories of anatomical information and their corresponding representation methods. We address known and potentially solvable challenges in anatomy-aided DL and present a categorized methodology overview, drawn from over 70 papers, on using anatomical information with DL. Finally, we discuss the strengths and limitations of current anatomy-aided DL approaches and suggest potential future work.
Affiliation(s)
- Lu Liu
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Jelmer M Wolterink
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Christoph Brune
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Raymond N J Veldhuis
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands