1.
Cui R, Yang R, Liu F, Cai C. N-Net: Lesion region segmentations using the generalized hybrid dilated convolutions for polyps in colonoscopy images. Front Bioeng Biotechnol 2022; 10:963590. [DOI: 10.3389/fbioe.2022.963590] [Received: 06/07/2022] [Accepted: 08/12/2022] Open Access
Abstract
Colorectal cancer has the second-highest cancer incidence rate in women and the third-highest in men. Colorectal polyps are potential prognostic indicators of colorectal cancer, and colonoscopy is the gold standard for the biopsy and removal of colorectal polyps. In this setting, a main concern is ensuring the accuracy of lesion region identification; the miss rate of polyps under manual observation in colonoscopy can reach 14%–30%. In this paper, we focus on identifying polyps in clinical colonoscopy images and propose a new N-shaped deep neural network (N-Net) structure to perform lesion region segmentation. The N-Net adopts an encoder-decoder framework with DenseNet modules in the encoding path. Moreover, we propose a strategy for designing generalized hybrid dilated convolutions (GHDC), which allow flexible dilation rates and convolutional kernel sizes, to facilitate the transmission of multi-scale information with expanded receptive fields. Based on this strategy, we design four GHDC blocks to connect the encoding and decoding paths. Experiments on two publicly available polyp-segmentation datasets of colonoscopy images, Kvasir-SEG and CVC-ClinicDB, verify the rationality and superiority of the proposed GHDC blocks and the proposed N-Net. In comparative studies with state-of-the-art methods such as TransU-Net, DeepLabV3+ and CA-Net, we show that even with a small number of network parameters, N-Net outperforms them, achieving a Dice of 94.45%, an average symmetric surface distance (ASSD) of 0.38 pix and a mean intersection-over-union (mIoU) of 89.80% on Kvasir-SEG, and a Dice of 97.03%, an ASSD of 0.16 pix and an mIoU of 94.35% on CVC-ClinicDB.
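The Dice and mIoU figures reported in this abstract are standard overlap measures for binary segmentation masks. A minimal sketch of how they are computed (pure Python, with hypothetical helper names; this is not the authors' code), representing each mask as a set of foreground pixel coordinates:

```python
def dice_coefficient(pred, gt):
    """Dice = 2|P∩G| / (|P| + |G|); pred and gt are sets of (row, col) pixels."""
    if not pred and not gt:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * len(pred & gt) / (len(pred) + len(gt))

def iou(pred, gt):
    """Jaccard index |P∩G| / |P∪G|; mIoU averages this over images/classes."""
    union = pred | gt
    if not union:
        return 1.0
    return len(pred & gt) / len(union)

# Toy example: 3 ground-truth pixels, 3 predicted pixels, 2 overlapping.
gt = {(1, 1), (1, 2), (2, 1)}
pred = {(1, 1), (1, 2), (2, 2)}
print(round(dice_coefficient(pred, gt), 4))  # 0.6667  (= 2*2 / (3+3))
print(round(iou(pred, gt), 4))               # 0.5     (= 2 / 4)
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why the reported Dice values (94.45%, 97.03%) exceed the corresponding mIoU values (89.80%, 94.35%).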
2.
Widespread subclinical cellular changes revealed across a neural-epithelial-vascular complex in choroideremia using adaptive optics. Commun Biol 2022; 5:893. [PMID: 36100689] [PMCID: PMC9470576] [DOI: 10.1038/s42003-022-03842-7] [Received: 04/21/2022] [Accepted: 08/12/2022] Open Access
Abstract
Choroideremia is an X-linked, blinding retinal degeneration with progressive loss of photoreceptors, retinal pigment epithelial (RPE) cells, and choriocapillaris. To study the extent to which these layers are disrupted in affected males and female carriers, we performed multimodal adaptive optics imaging to better visualize the in vivo pathogenesis of choroideremia in the living human eye. We demonstrate the presence of subclinical, widespread enlarged RPE cells present in all subjects imaged. In the fovea, the last area to be affected in choroideremia, we found greater disruption to the RPE than to either the photoreceptor or choriocapillaris layers. The unexpected finding of patches of photoreceptors that were fluorescently-labeled, but structurally and functionally normal, suggests that the RPE blood barrier function may be altered in choroideremia. Finally, we introduce a strategy for detecting enlarged cells using conventional ophthalmic imaging instrumentation. These findings establish that there is subclinical polymegathism of RPE cells in choroideremia.
3.
Liu J, Shen C, Aguilera N, Cukras C, Hufnagel RB, Zein WM, Liu T, Tam J. Active Cell Appearance Model Induced Generative Adversarial Networks for Annotation-Efficient Cell Segmentation and Identification on Adaptive Optics Retinal Images. IEEE Trans Med Imaging 2021; 40:2820-2831. [PMID: 33507868] [PMCID: PMC8548993] [DOI: 10.1109/tmi.2021.3055483]
Abstract
Data annotation is a fundamental precursor for establishing large training sets to effectively apply deep learning methods to medical image analysis. For cell segmentation, obtaining high quality annotations is an expensive process that usually requires manual grading by experts. This work introduces an approach to efficiently generate annotated images, called "A-GANs", created by combining an active cell appearance model (ACAM) with conditional generative adversarial networks (C-GANs). ACAM is a statistical model that captures a realistic range of cell characteristics and is used to ensure that the image statistics of generated cells are guided by real data. C-GANs utilize cell contours generated by ACAM to produce cells that match input contours. By pairing ACAM-generated contours with A-GANs-based generated images, high quality annotated images can be efficiently generated. Experimental results on adaptive optics (AO) retinal images showed that A-GANs robustly synthesize realistic, artificial images whose cell distributions are exquisitely specified by ACAM. The cell segmentation performance using as few as 64 manually-annotated real AO images combined with 248 artificially-generated images from A-GANs was similar to the case of using 248 manually-annotated real images alone (Dice coefficients of 88% for both). Finally, application to rare diseases in which images exhibit never-seen characteristics demonstrated improvements in cell segmentation without the need for incorporating manual annotations from these new retinal images. Overall, A-GANs introduce a methodology for generating high quality annotated data that statistically captures the characteristics of any desired dataset and can be used to more efficiently train deep-learning-based medical image analysis applications.
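The data-generation pipeline this abstract describes (a statistical contour model feeding a conditional image generator, with each contour set paired to its synthesized image as a ready-made annotation) can be sketched schematically. All names below are hypothetical stand-ins, and the "generator" is a placeholder, not the authors' ACAM or C-GAN implementation:

```python
import random

def sample_contours(n_cells, seed=0):
    """Stand-in for the statistical appearance model: sample plausible
    cell-contour parameters (center, radius) from ranges that would, in the
    real pipeline, be fit to real AO image data."""
    rng = random.Random(seed)
    return [((rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0)),
             rng.uniform(0.02, 0.05)) for _ in range(n_cells)]

def render_image(contours):
    """Stand-in for the conditional generator: produce an image whose cells
    match the input contours (here just a placeholder record)."""
    return {"n_cells": len(contours), "pixels": "synthetic"}

def generate_annotated_pair(n_cells, seed=0):
    """Pair the sampled contours (the annotation) with the generated image,
    yielding a training example without any manual grading."""
    contours = sample_contours(n_cells, seed)
    return render_image(contours), contours

image, labels = generate_annotated_pair(5)
print(image["n_cells"], len(labels))  # 5 5
```

The key design point the abstract highlights is that the annotation is free: because the contours drive the generator, every synthesized image arrives already labeled, which is how 64 manually-annotated images plus 248 generated ones matched 248 manual annotations.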
4.
Bower AJ, Liu T, Aguilera N, Li J, Liu J, Lu R, Giannini JP, Huryn LA, Dubra A, Liu Z, Hammer DX, Tam J. Integrating adaptive optics-SLO and OCT for multimodal visualization of the human retinal pigment epithelial mosaic. Biomed Opt Express 2021; 12:1449-1466. [PMID: 33796365] [PMCID: PMC7984802] [DOI: 10.1364/boe.413438] [Received: 11/17/2020] [Revised: 01/29/2021] [Accepted: 01/31/2021]
Abstract
In vivo imaging of human retinal pigment epithelial (RPE) cells has been demonstrated through multiple adaptive optics (AO)-based modalities. However, whether consistent and complete information regarding the cellular structure of the RPE mosaic is obtained across these modalities remains uncertain due to limited comparisons performed in the same eye. Here, an imaging platform combining multimodal AO-scanning light ophthalmoscopy (AO-SLO) with AO-optical coherence tomography (AO-OCT) is developed to make a side-by-side comparison of the same RPE cells imaged across four modalities: AO-darkfield, AO-enhanced indocyanine green (AO-ICG), AO-infrared autofluorescence (AO-IRAF), and AO-OCT. Co-registered images were acquired in five subjects, including one patient with choroideremia. Multimodal imaging provided multiple perspectives of the RPE mosaic that were used to explore variations in RPE cell contrast in a subject-, location-, and even cell-dependent manner. Estimated cell-to-cell spacing and density were found to be consistent both across modalities and with normative data. Multimodal images from a patient with choroideremia illustrate the benefit of using multiple modalities to infer the cellular structure of the RPE mosaic in an affected eye, in which disruptions to the RPE mosaic may locally alter the signal strength, visibility of individual RPE cells, or even source of contrast in unpredictable ways.
Collapse
Affiliation(s)
- Andrew J. Bower, Tao Liu, Nancy Aguilera, Joanne Li, Jianfei Liu, Rongwen Lu, John P. Giannini, Laryssa A. Huryn, Johnny Tam: National Eye Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Alfredo Dubra: Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Zhuolin Liu, Daniel X. Hammer: Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, 10903 New Hampshire Ave, Silver Spring, MD 20993, USA
5.
Cheng J, Fu H, Cabrera DeBuc D, Tian J. Guest Editorial Ophthalmic Image Analysis and Informatics. IEEE J Biomed Health Inform 2020. [DOI: 10.1109/jbhi.2020.3037388]