1
Soltanian-Zadeh S, Kovalick K, Aghayee S, Miller DT, Liu Z, Hammer DX, Farsiu S. Identifying retinal pigment epithelium cells in adaptive optics-optical coherence tomography images with partial annotations and superhuman accuracy. Biomedical Optics Express 2024; 15:6922-6939. [PMID: 39679394] [PMCID: PMC11640571] [DOI: 10.1364/boe.538473]
Abstract
Retinal pigment epithelium (RPE) cells are essential for normal retinal function. Morphological defects in these cells are associated with a number of retinal neurodegenerative diseases. Owing to its cellular resolution and depth-sectioning capabilities, adaptive optics-optical coherence tomography (AO-OCT) can visualize individual RPE cells in vivo. Rapid, cost-efficient, and objective quantification of the RPE mosaic's structural properties necessitates the development of an automated cell segmentation algorithm. This paper presents a deep learning-based method with partial annotation training for detecting RPE cells in AO-OCT images with accuracy exceeding human performance. We have made the code, imaging datasets, and the manual expert labels available online.
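The released code is the authoritative reference; purely as a flavor of the "partial annotations" idea the abstract describes, the sketch below shows a segmentation loss evaluated only on expert-labeled pixels, so unannotated regions neither reward nor penalize the network. All names and tensor shapes here are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of partial-annotation training (not the authors' code):
# the loss counts only pixels an expert actually labeled.
import torch
import torch.nn.functional as F

def partial_annotation_loss(logits, labels, annotated):
    """logits: (B, C, H, W); labels: (B, H, W) int64; annotated: (B, H, W) bool."""
    per_pixel = F.cross_entropy(logits, labels, reduction="none")  # (B, H, W)
    masked = per_pixel * annotated.float()                         # zero out unlabeled pixels
    return masked.sum() / annotated.float().sum().clamp(min=1.0)   # mean over labeled pixels
```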
Affiliation(s)
- Somayyeh Soltanian-Zadeh
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Katherine Kovalick
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Samira Aghayee
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Donald T. Miller
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Zhuolin Liu
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Daniel X. Hammer
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
2
Das V, Zhang F, Bower AJ, Li J, Liu T, Aguilera N, Alvisio B, Liu Z, Hammer DX, Tam J. Revealing speckle obscured living human retinal cells with artificial intelligence assisted adaptive optics optical coherence tomography. Communications Medicine 2024; 4:68. [PMID: 38600290] [PMCID: PMC11006674] [DOI: 10.1038/s43856-024-00483-1]
Abstract
BACKGROUND In vivo imaging of the human retina using adaptive optics optical coherence tomography (AO-OCT) has transformed medical imaging by enabling visualization of 3D retinal structures at cellular-scale resolution, including the retinal pigment epithelial (RPE) cells, which are essential for maintaining visual function. However, because noise inherent to the imaging process (e.g., speckle) makes it difficult to visualize RPE cells from a single volume acquisition, a large number of 3D volumes are typically averaged to improve contrast, substantially increasing the acquisition duration and reducing the overall imaging throughput. METHODS Here, we introduce the parallel discriminator generative adversarial network (P-GAN), an artificial intelligence (AI) method designed to recover speckle-obscured cellular features from a single AO-OCT volume, circumventing the need to acquire a large number of volumes for averaging. The combination of two parallel discriminators in P-GAN provides additional feedback to the generator to more faithfully recover both local and global cellular structures. Imaging data from 8 eyes of 7 participants were used in this study. RESULTS We show that P-GAN not only improves RPE cell contrast by 3.5-fold but also shortens the end-to-end time required to visualize RPE cells by 99-fold, thereby enabling large-scale imaging of cells in the living human eye. RPE cell spacing measured across a large set of AI-recovered images from 3 participants was in agreement with expected normative ranges. CONCLUSIONS The results demonstrate the potential of AI-assisted imaging to overcome a key limitation of RPE imaging and make it more accessible in a routine clinical setting.
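As a rough illustration of the "two parallel discriminators" idea described above, the sketch below combines feedback from a patch-level discriminator (local cellular texture) and an image-level discriminator (global mosaic structure) into a single generator objective using a least-squares GAN loss. The module names, weights, and loss form are assumptions; P-GAN's published architecture may differ.

```python
# Illustrative only -- not P-GAN's published implementation.
import torch

def generator_loss(G, D_local, D_global, speckled, w_local=1.0, w_global=1.0):
    """Least-squares adversarial loss from two parallel discriminators."""
    recovered = G(speckled)                              # despeckled estimate from one volume
    local = ((D_local(recovered) - 1.0) ** 2).mean()     # per-patch realism feedback
    global_ = ((D_global(recovered) - 1.0) ** 2).mean()  # whole-image realism feedback
    return w_local * local + w_global * global_
```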
Affiliation(s)
- Vineeta Das
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Furu Zhang
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Andrew J Bower
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Joanne Li
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Tao Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Nancy Aguilera
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Bruno Alvisio
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Zhuolin Liu
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Daniel X Hammer
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Johnny Tam
- National Eye Institute, National Institutes of Health, Bethesda, MD, 20892, USA
3
Guo W, Hu Y, Qian J, Zhu L, Cheng J, Liao J, Fan X. Laser capture microdissection for biomedical research: towards high-throughput, multi-omics, and single-cell resolution. J Genet Genomics 2023; 50:641-651. [PMID: 37544594] [DOI: 10.1016/j.jgg.2023.07.011]
Abstract
Spatial omics technologies have become powerful methods for providing valuable insights into cells and tissues within a complex context, significantly enhancing our understanding of intricate and multifaceted biological systems. With an increasing focus on spatial heterogeneity, there is a growing need for unbiased, spatially resolved omics technologies. Laser capture microdissection (LCM) is a cutting-edge method for acquiring spatial information that can quickly collect regions of interest (ROIs) from heterogeneous tissues, with resolutions ranging from single cells to cell populations. LCM has therefore been widely used to study the cellular and molecular mechanisms of diseases. This review focuses on the differences among four types of commonly used LCM technologies and their applications in omics and disease research. Key attributes of the application cases, such as throughput and spatial resolution, are also highlighted. In addition, we comprehensively discuss the existing challenges and the great potential of LCM in biomedical research, disease diagnosis, and targeted therapy from the perspectives of high throughput, multi-omics, and single-cell resolution.
Affiliation(s)
- Wenbo Guo
- Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou, Zhejiang 310058, China; National Key Laboratory of Chinese Medicine Modernization, Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, Zhejiang 314100, China
- Yining Hu
- Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou, Zhejiang 310058, China; National Key Laboratory of Chinese Medicine Modernization, Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, Zhejiang 314100, China
- Jingyang Qian
- Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou, Zhejiang 310058, China; National Key Laboratory of Chinese Medicine Modernization, Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, Zhejiang 314100, China
- Lidan Zhu
- Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou, Zhejiang 310058, China; National Key Laboratory of Chinese Medicine Modernization, Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, Zhejiang 314100, China
- Junyun Cheng
- Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou, Zhejiang 310058, China; National Key Laboratory of Chinese Medicine Modernization, Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, Zhejiang 314100, China
- Jie Liao
- Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou, Zhejiang 310058, China; National Key Laboratory of Chinese Medicine Modernization, Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, Zhejiang 314100, China
- Xiaohui Fan
- Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou, Zhejiang 310058, China; National Key Laboratory of Chinese Medicine Modernization, Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, Zhejiang 314100, China
4
Wicaksana J, Yan Z, Zhang D, Huang X, Wu H, Yang X, Cheng KT. FedMix: Mixed Supervised Federated Learning for Medical Image Segmentation. IEEE Transactions on Medical Imaging 2023; 42:1955-1968. [PMID: 37015653] [DOI: 10.1109/tmi.2022.3233405]
Abstract
The purpose of federated learning is to enable multiple clients to jointly train a machine learning model without sharing data. However, existing methods for training an image segmentation model rest on the unrealistic assumption that each local client's training set is annotated in a similar fashion and thus follows the same level of image supervision. To relax this assumption, we propose a label-agnostic unified federated learning framework, named FedMix, for medical image segmentation based on mixed image labels. In FedMix, each client updates the federated model by integrating and effectively making use of all available labeled data, ranging from strong pixel-level labels and weaker bounding-box labels to the weakest image-level class labels. Based on these local models, we further propose an adaptive weight assignment procedure across local clients, where each client learns an aggregation weight during the global model update. Compared to existing methods, FedMix not only breaks through the constraint of a single level of image supervision but also dynamically adjusts the aggregation weight of each local client, achieving rich yet discriminative feature representations. Experimental results on multiple publicly available datasets validate that FedMix outperforms state-of-the-art methods by a large margin. In addition, we demonstrate through experiments that FedMix extends to multi-class medical image segmentation and is considerably more feasible in clinical scenarios. The code is available at: https://github.com/Jwicaksana/FedMix.
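The linked repository is the definitive source; as a flavor of the adaptive aggregation step described above, here is a hedged sketch in which the server averages client models using learned, softmax-normalized weights. The function and variable names are invented for illustration and are not FedMix's actual API.

```python
# Sketch of adaptive weighted aggregation (illustrative; see the FedMix repo
# for the real procedure). Each client reports a learned scalar alongside its
# model; the server softmax-normalizes the scalars and averages parameters.
import torch

def aggregate(client_states, client_scores):
    """client_states: list of state_dicts; client_scores: list of floats."""
    w = torch.softmax(torch.tensor(client_scores, dtype=torch.float32), dim=0)
    merged = {}
    for key in client_states[0]:
        stacked = torch.stack([s[key].float() for s in client_states])  # (K, ...)
        shape = (-1,) + (1,) * (stacked.dim() - 1)                      # broadcast weights
        merged[key] = (w.view(shape) * stacked).sum(dim=0)
    return merged
```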
5
Soltanian-Zadeh S, Liu Z, Liu Y, Lassoued A, Cukras CA, Miller DT, Hammer DX, Farsiu S. Deep learning-enabled volumetric cone photoreceptor segmentation in adaptive optics optical coherence tomography images of normal and diseased eyes. Biomedical Optics Express 2023; 14:815-833. [PMID: 36874491] [PMCID: PMC9979662] [DOI: 10.1364/boe.478693]
Abstract
Objective quantification of photoreceptor cell morphology, such as cell diameter and outer segment length, is crucial for early, accurate, and sensitive diagnosis and prognosis of retinal neurodegenerative diseases. Adaptive optics optical coherence tomography (AO-OCT) provides three-dimensional (3-D) visualization of photoreceptor cells in the living human eye. The current gold standard for extracting cell morphology from AO-OCT images involves the tedious process of 2-D manual marking. To automate this process and extend it to 3-D analysis of the volumetric data, we propose a comprehensive deep learning framework to segment individual cone cells in AO-OCT scans. Our automated method achieved human-level performance in assessing cone photoreceptors of healthy and diseased participants captured with three different AO-OCT systems representing two types of point-scanning OCT: spectral domain and swept source.
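The paper's full pipeline is a learned 3-D segmentation; as a generic, hypothetical illustration of how individual cells can be extracted once a network has produced a per-voxel probability map, the sketch below thresholds the map and labels connected components, then reads off a crude axial-extent proxy for outer segment length. Thresholds, voxel sizes, and connectivity here are placeholders, not the paper's method.

```python
# Generic post-processing sketch -- not the paper's pipeline.
import numpy as np
from scipy import ndimage

def cones_from_probability(prob, threshold=0.5, axial_um_per_voxel=1.0):
    """prob: (Z, Y, X) per-voxel cone probability from some segmentation network."""
    labels, n_cones = ndimage.label(prob > threshold)  # face-connected components
    spans = ndimage.find_objects(labels)               # one (z, y, x) slice triple per cone
    # Axial (z) extent of each component as a crude outer-segment-length proxy.
    lengths_um = [(s[0].stop - s[0].start) * axial_um_per_voxel for s in spans]
    return labels, n_cones, lengths_um
```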
Affiliation(s)
- Zhuolin Liu
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Yan Liu
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Ayoub Lassoued
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Catherine A. Cukras
- National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Donald T. Miller
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Daniel X. Hammer
- Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
6
Liu T, Aguilera N, Bower AJ, Li J, Ullah E, Dubra A, Cukras C, Brooks BP, Jeffrey BG, Hufnagel RB, Huryn LA, Zein WM, Tam J. Photoreceptor and Retinal Pigment Epithelium Relationships in Eyes With Vitelliform Macular Dystrophy Revealed by Multimodal Adaptive Optics Imaging. Invest Ophthalmol Vis Sci 2022; 63:27. [PMID: 35900727] [PMCID: PMC9344216] [DOI: 10.1167/iovs.63.8.27]
Abstract
Purpose To assess the structure of cone photoreceptors and retinal pigment epithelial (RPE) cells in vitelliform macular dystrophy (VMD) arising from various genetic etiologies. Methods Multimodal adaptive optics (AO) imaging was performed in 11 patients with VMD using a custom-assembled instrument. Non-confocal split detection and AO-enhanced indocyanine green were used to visualize the cone photoreceptor and RPE mosaics, respectively. Cone and RPE densities were measured and compared across BEST1-, PRPH2-, IMPG1-, and IMPG2-related VMD. Results Within macular lesions associated with VMD, both cone and RPE densities were reduced below normal, to 37% of normal cone density (eccentricity 0.2 mm) and to 8.4% of normal RPE density (eccentricity 0.5 mm). Outside of lesions, cone and RPE densities were slightly reduced (both to 92% of normal values), but with a high degree of variability in the individual measurements. Comparison of juxtalesional cone and RPE measurements (<1 mm from the lesion edge) revealed significant differences in RPE density across the four genes (P < 0.05). Overall, cones were affected to a greater extent than RPE in patients with IMPG1 and IMPG2 pathogenic variants, whereas RPE was affected more than cones in BEST1 and PRPH2 VMD. This trend was observed even in contralateral eyes from a subset of five patients who presented with macular lesions in only one eye. Conclusions Assessment of cones and RPE in retinal locations outside of the macular lesions reveals a pattern of cone and RPE disruption that appears to be gene dependent in VMD. These findings provide insight into the cellular pathogenesis of disease in VMD.
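The toy calculation below merely shows how a figure like "8.4% of normal RPE density" follows from counting marked cell centers in a region of interest, dividing by its area, and normalizing to a normative value. Every number and name here is made up for illustration (the inputs are chosen so the ratio lands at 0.084); none are taken from the paper's data.

```python
# Toy density arithmetic with invented values.
def cells_per_mm2(n_cells, roi_area_um2):
    return n_cells / (roi_area_um2 / 1e6)  # 1 mm^2 = 1e6 um^2

measured = cells_per_mm2(n_cells=63, roi_area_um2=250.0 * 500.0)  # 63 cells in a 250x500 um ROI
normative = 6000.0                         # hypothetical normal RPE density, cells/mm^2
fraction_of_normal = measured / normative  # 504 / 6000 = 0.084, i.e., 8.4%
```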
Affiliation(s)
- Tao Liu
- National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States (https://orcid.org/0000-0001-9864-3896)
- Nancy Aguilera
- National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States (https://orcid.org/0000-0003-0863-596X)
- Andrew J Bower
- National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States (https://orcid.org/0000-0003-1645-5950)
- Joanne Li
- National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States (https://orcid.org/0000-0003-2845-2490)
- Ehsan Ullah
- National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States (https://orcid.org/0000-0003-0107-6608)
- Alfredo Dubra
- Department of Ophthalmology, Stanford University, Palo Alto, California, United States (https://orcid.org/0000-0002-6506-9020)
- Catherine Cukras
- National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States
- Brian P Brooks
- National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States (https://orcid.org/0000-0002-1916-7551)
- Brett G Jeffrey
- National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States (https://orcid.org/0000-0001-9549-0644)
- Robert B Hufnagel
- National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States (https://orcid.org/0000-0003-3015-3545)
- Laryssa A Huryn
- National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States (https://orcid.org/0000-0002-0309-9419)
- Wadih M Zein
- National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States (https://orcid.org/0000-0002-3771-6120)
- Johnny Tam
- National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States (https://orcid.org/0000-0003-2300-0567)
7
Giannini JP, Lu R, Bower AJ, Fariss R, Tam J. Visualizing retinal cells with adaptive optics imaging modalities using a translational imaging framework. Biomedical Optics Express 2022; 13:3042-3055. [PMID: 35774328] [PMCID: PMC9203084] [DOI: 10.1364/boe.454560]
Abstract
Adaptive optics reflectance-based retinal imaging has proved a valuable tool for the noninvasive visualization of cells in the living human retina. Many subcellular features that remain at or below the resolution limit of current in vivo techniques may be more easily visualized with the same modalities in an ex vivo setting. While most microscopy techniques provide significantly higher resolution, enabling the visualization of fine cellular detail in ex vivo retinal samples, they do not replicate the reflectance-based imaging modalities of in vivo retinal imaging. Here, we introduce a strategy for imaging ex vivo samples using the same imaging modalities as those used for in vivo retinal imaging, but with increased resolution. We also demonstrate the ability of this approach to perform protein-specific fluorescence imaging and reflectance imaging simultaneously, enabling the visualization of nearly transparent layers of the retina and the classification of cone photoreceptor types.
8
Tajbakhsh N, Roth H, Terzopoulos D, Liang J. Guest Editorial Annotation-Efficient Deep Learning: The Holy Grail of Medical Imaging. IEEE Transactions on Medical Imaging 2021; 40:2526-2533. [PMID: 34795461] [PMCID: PMC8594751] [DOI: 10.1109/tmi.2021.3089292]
Affiliation(s)
- Demetri Terzopoulos
- University of California, Los Angeles, and VoxelCloud, Inc., Los Angeles, CA, USA