1. Hu Y, Gong M, Qiu Z, Liu J, Shen H, Yuan M, Zhang X, Li H, Lu H, Liu J. COph100: A comprehensive fundus image registration dataset from infants constituting the "RIDIRP" database. Sci Data 2025;12:99. [PMID: 39824846; PMCID: PMC11742693; DOI: 10.1038/s41597-025-04426-w]
Abstract
Retinal image registration is vital for diagnostic and therapeutic applications in ophthalmology. Existing public datasets focus on adult retinal pathologies with high-quality images, contain a limited number of image pairs, and neglect clinical challenges. To address this gap, we introduce COph100, a novel and challenging dataset, the Comprehensive Ophthalmology Retinal Image Registration dataset for infants, covering a wide range of image quality issues and drawn from the public "RIDIRP" database. COph100 consists of 100 eyes, each with 2 to 9 examination sessions, amounting to a total of 491 image pairs carefully selected from the publicly available database. We manually labeled the corresponding ground-truth image points and provide automatic vessel segmentation masks for each image. We have assessed COph100 in terms of image quality and registration outcomes using state-of-the-art algorithms. This resource enables a robust comparison of retinal registration methodologies and aids the analysis of disease progression in infants, thereby deepening our understanding of pediatric ophthalmic conditions.
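A dataset like this is typically consumed by fitting a transform to the labeled point correspondences and reporting a residual error. The sketch below is a minimal, hypothetical example of that workflow, not code from the paper: it fits a 2D affine transform by least squares and reports the root-mean-square registration error; the point arrays are made up.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine (2x3 matrix) mapping src points onto dst points."""
    A = np.hstack([src, np.ones((src.shape[0], 1))])  # homogeneous [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)       # solve A @ M ~= dst
    return M.T                                        # shape (2, 3)

def registration_rmse(src, dst, M):
    """Root-mean-square distance between warped src points and dst points."""
    warped = src @ M[:, :2].T + M[:, 2]
    return float(np.sqrt(np.mean(np.sum((warped - dst) ** 2, axis=1))))

# Hypothetical corresponding points from one image pair (pixel coordinates).
src = np.array([[120.0, 88.0], [340.0, 210.0], [510.0, 95.0], [260.0, 400.0]])
dst = np.array([[133.0, 92.0], [352.0, 216.0], [521.0, 99.0], [270.0, 408.0]])
M = fit_affine(src, dst)
print("RMSE (pixels):", registration_rmse(src, dst, M))
```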
Affiliation(s)
- Yan Hu: Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Mingdao Gong: Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Zhongxi Qiu: Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Jiabao Liu: Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Hongli Shen: Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Mingzhen Yuan: Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Xiaoqing Zhang: Center for High Performance Computing and Shenzhen Key Laboratory of Intelligent Bioinformatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Heng Li: Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Hai Lu: Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Jiang Liu: Research Institute of Trustworthy Autonomous Systems and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
2. Xie Q, Li X, Li Y, Lu J, Ma S, Zhao Y, Zhang J. A multi-modal multi-branch framework for retinal vessel segmentation using ultra-widefield fundus photographs. Front Cell Dev Biol 2025;12:1532228. [PMID: 39845080; PMCID: PMC11751237; DOI: 10.3389/fcell.2024.1532228]
Abstract
Background: Vessel segmentation in fundus photography has become a cornerstone technique for disease analysis. Within this field, Ultra-WideField (UWF) fundus images offer distinct advantages, including an expansive imaging range, detailed lesion data, and minimal adverse effects. However, the high resolution and low contrast inherent to UWF fundus images present significant challenges for accurate segmentation using deep learning methods, thereby complicating disease analysis in this context.
Methods: To address these issues, this study introduces M3B-Net, a novel multi-modal, multi-branch framework that leverages fundus fluorescence angiography (FFA) images to improve retinal vessel segmentation in UWF fundus images. Specifically, M3B-Net tackles the low segmentation accuracy caused by the inherently low contrast of UWF fundus images. Additionally, we propose an enhanced UWF-based segmentation network in M3B-Net, specifically designed to improve the segmentation of fine retinal vessels. The segmentation network includes the Selective Fusion Module (SFM), which enhances feature extraction within the segmentation network by integrating features generated during the FFA imaging process. To further address the challenges of high-resolution UWF fundus images, we introduce a Local Perception Fusion Module (LPFM) to mitigate context loss during the segmentation cut-patch process. Complementing this, the Attention-Guided Upsampling Module (AUM) enhances segmentation performance through convolution operations guided by attention mechanisms.
Results: Extensive experimental evaluations demonstrate that our approach significantly outperforms existing state-of-the-art methods for UWF fundus image segmentation.
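The context loss targeted by the LPFM arises because high-resolution UWF images must be segmented patch by patch. As a point of reference, a common baseline (sketched below under my own assumptions; this is not M3B-Net code) is overlapped sliding-window inference with prediction averaging, which recovers some cross-patch context at the seams.

```python
import numpy as np

def predict_patch(patch):
    """Stand-in for a segmentation network; returns per-pixel probabilities."""
    return np.clip(patch / 255.0, 0.0, 1.0)  # hypothetical placeholder

def sliding_window_segment(img, patch=512, stride=256):
    """Overlapped tiling: average predictions where patches overlap so every
    pixel sees some surrounding context (assumes grayscale img with H, W >= patch)."""
    H, W = img.shape
    ys = sorted(set(list(range(0, H - patch + 1, stride)) + [H - patch]))
    xs = sorted(set(list(range(0, W - patch + 1, stride)) + [W - patch]))
    prob = np.zeros((H, W), np.float32)
    count = np.zeros((H, W), np.float32)
    for y in ys:
        for x in xs:
            prob[y:y + patch, x:x + patch] += predict_patch(img[y:y + patch, x:x + patch])
            count[y:y + patch, x:x + patch] += 1.0
    return prob / count  # count is never zero: the tiles cover the image

seg = sliding_window_segment(np.zeros((1024, 1536), np.float32))
print(seg.shape)  # (1024, 1536)
```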
Affiliation(s)
- Qihang Xie: Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China; Laboratory of Advanced Theranostic Materials and Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Xuefei Li: Laboratory of Advanced Theranostic Materials and Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Yuanyuan Li: Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China; Laboratory of Advanced Theranostic Materials and Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Jiayi Lu: Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China; Laboratory of Advanced Theranostic Materials and Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Shaodong Ma: Laboratory of Advanced Theranostic Materials and Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Yitian Zhao: Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China; Laboratory of Advanced Theranostic Materials and Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Jiong Zhang: Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China; Laboratory of Advanced Theranostic Materials and Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
3. Jiang H, Qian Y, Zhang L, Jiang T, Tai Y. ReIU: an efficient preliminary framework for Alzheimer patients based on multi-model data. Front Public Health 2025;12:1449798. [PMID: 39830185; PMCID: PMC11739287; DOI: 10.3389/fpubh.2024.1449798]
Abstract
The rising incidence of Alzheimer's disease (AD) poses significant challenges to traditional diagnostic methods, which rely primarily on neuropsychological assessments and brain MRI. The advent of deep learning in medical diagnosis opens new possibilities for early AD detection. In this study, we introduce a retinal vessel segmentation method based on U-Net and iterative registration learning (ReIU), which extracts retinal vessel maps from OCT angiography (OCT-A) images. Our method achieved segmentation accuracies of 79.1% on the DRIVE dataset and 68.3% on the HRF dataset. Utilizing a multimodal dataset comprising both healthy and AD subjects, ReIU extracted vascular density from fundus images, facilitating primary AD screening with a classification accuracy of 79%. These results demonstrate ReIU's substantial accuracy and its potential as an economical, non-invasive screening tool for Alzheimer's disease. This study underscores the importance of integrating multi-modal data and deep learning techniques in advancing the early detection and management of Alzheimer's disease.
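Once a binary vessel map has been extracted, vascular density reduces to a vessel-pixel fraction. Below is a minimal sketch of that metric under my own assumptions (the ReIU paper may define it differently, for example over local grids):

```python
import numpy as np

def vascular_density(vessel_mask, fov_mask=None):
    """Vessel-pixel fraction, optionally restricted to the imaged field of view."""
    v = vessel_mask.astype(bool)
    if fov_mask is None:
        return float(v.mean())
    return float(v[fov_mask.astype(bool)].mean())

mask = np.zeros((64, 64), bool)
mask[30:34, :] = True               # a hypothetical horizontal vessel
print(vascular_density(mask))       # 256 / 4096 = 0.0625
```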
Affiliation(s)
- Hao Jiang: Engineering Research Center of Photoelectric Detection and Perception Technology, Yunnan Normal University, Kunming, China; Yunnan Key Laboratory of Optoelectronic Information Technology, Kunming, China
- Yishan Qian: Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China; Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China; Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China
- Liqiang Zhang: Engineering Research Center of Photoelectric Detection and Perception Technology, Yunnan Normal University, Kunming, China; Yunnan Key Laboratory of Optoelectronic Information Technology, Kunming, China
- Tao Jiang: Engineering Research Center of Photoelectric Detection and Perception Technology, Yunnan Normal University, Kunming, China; Yunnan Key Laboratory of Optoelectronic Information Technology, Kunming, China
- Yonghang Tai: Engineering Research Center of Photoelectric Detection and Perception Technology, Yunnan Normal University, Kunming, China; Yunnan Key Laboratory of Optoelectronic Information Technology, Kunming, China
4. He S, Ye X, Xie W, Shen Y, Yang S, Zhong X, Guan H, Zhou X, Wu J, Shen L. Open ultrawidefield fundus image dataset with disease diagnosis and clinical image quality assessment. Sci Data 2024;11:1251. [PMID: 39567563; PMCID: PMC11579006; DOI: 10.1038/s41597-024-04113-2]
Abstract
Ultrawidefield fundus (UWF) images have a wide imaging range (200° of the retinal region), which offers the opportunity to capture more information about ophthalmic diseases. Image quality assessment (IQA) is a prerequisite for applying UWF imaging and is crucial for developing artificial intelligence-driven diagnosis and screening systems. Most image quality systems have been applied to the assessment of natural images, but whether these systems are suitable for evaluating UWF image quality remains debatable. Additionally, existing IQA datasets provide only photographs of diabetic retinopathy (DR) patients and quality evaluation results designed for natural images, neglecting patients' clinical information. To address these issues, we established a real-world clinical-practice ultra-widefield fundus image dataset, with 700 high-resolution UWF images and corresponding clinical information from six common fundus diseases and healthy volunteers. The image quality was annotated by three ophthalmologists based on field of view, illumination, artifact, contrast, and overall quality. This dataset illustrates the distribution of UWF image quality across diseases in clinical practice, offering a foundation for developing effective IQA systems.
Affiliation(s)
- Shucheng He: Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Xin Ye: Zhejiang Provincial People's Hospital Bijie Hospital, Bijie, Guizhou, China
- Wenbin Xie: Zhejiang Provincial People's Hospital Bijie Hospital, Bijie, Guizhou, China
- Yingjiao Shen: Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Xiaxing Zhong: Wenzhou Medical University, Wenzhou, Zhejiang, China
- Hanyi Guan: Wenzhou Medical University, Wenzhou, Zhejiang, China
- Jiang Wu: Hangzhou Medical College, Hangzhou, Zhejiang, China
- Lijun Shen: Department of Ophthalmology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China; Wenzhou Medical University, Wenzhou, Zhejiang, China; Hangzhou Medical College, Hangzhou, Zhejiang, China
5. Wang CY, Sadrieh FK, Shen YT, Chen SE, Kim S, Chen V, Raghavendra A, Wang D, Saeedi O, Tao Y. MEMO: dataset and methods for robust multimodal retinal image registration with large or small vessel density differences. Biomed Opt Express 2024;15:3457-3479. [PMID: 38855695; PMCID: PMC11161385; DOI: 10.1364/boe.516481]
Abstract
The measurement of retinal blood flow (RBF) in capillaries can provide a powerful biomarker for the early diagnosis and treatment of ocular diseases. However, no single modality can determine capillary flow rates with high precision. Combining erythrocyte-mediated angiography (EMA) with optical coherence tomography angiography (OCTA) has the potential to achieve this goal, as EMA can measure the absolute RBF of the retinal microvasculature and OCTA can provide structural images of capillaries. However, multimodal retinal image registration between these two modalities remains largely unexplored. To fill this gap, we establish MEMO, the first public multimodal EMA and OCTA retinal image dataset. A unique challenge in multimodal retinal image registration between these modalities is the relatively large difference in vessel density (VD). To address this challenge, we propose a segmentation-based deep-learning framework (VDD-Reg), which provides robust results despite differences in vessel density. VDD-Reg consists of a vessel segmentation module and a registration module. To train the vessel segmentation module, we further designed a two-stage semi-supervised learning framework (LVD-Seg) combining supervised and unsupervised losses. We demonstrate that VDD-Reg outperforms existing methods quantitatively and qualitatively for cases of both small VD differences (using the CF-FA dataset) and large VD differences (using our MEMO dataset). Moreover, VDD-Reg requires as few as three annotated vessel segmentation masks to maintain its accuracy, demonstrating its feasibility.
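Since VDD-Reg registers images through their vessel segmentations, a natural agreement score between the two modalities after warping is the Dice coefficient between vessel masks. A minimal sketch, assuming equally sized binary masks (not code from the paper):

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    """Dice overlap between two equally sized binary vessel masks (1.0 = perfect)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum() + eps))

# e.g., dice(warped_ema_vessels, octa_vessels) after applying the registration
```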
Affiliation(s)
- Chiao-Yi Wang: Department of Bioengineering, University of Maryland, College Park, MD 20742, USA
- Yi-Ting Shen: Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
- Shih-En Chen: Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Sarah Kim: Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Victoria Chen: Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Achyut Raghavendra: Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Dongyi Wang: Department of Biological and Agricultural Engineering, University of Arkansas, Fayetteville, AR 72701, USA
- Osamah Saeedi: Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Yang Tao: Department of Bioengineering, University of Maryland, College Park, MD 20742, USA
6. Kalaw FGP, Cavichini M, Zhang J, Wen B, Lin AC, Heinke A, Nguyen T, An C, Bartsch DUG, Cheng L, Freeman WR. Ultra-wide field and new wide field composite retinal image registration with AI-enabled pipeline and 3D distortion correction algorithm. Eye (Lond) 2024;38:1189-1195. [PMID: 38114568; PMCID: PMC11009222; DOI: 10.1038/s41433-023-02868-3]
Abstract
PURPOSE: This study aimed to compare a new Artificial Intelligence (AI) method to conventional mathematical warping in accurately overlaying peripheral retinal vessels from two different imaging devices: confocal scanning laser ophthalmoscope (cSLO) wide-field images and SLO ultra-wide field images.
METHODS: Images were captured using the Heidelberg Spectralis 55-degree field-of-view and Optos ultra-wide field. The conventional mathematical warping was performed using Random Sample Consensus-Sample and Consensus sets (RANSAC-SC). This was compared to an AI alignment algorithm based on a one-way forward registration procedure consisting of full Convolutional Neural Networks (CNNs) with Outlier Rejection (OR CNN), as well as an iterative 3D camera pose optimization process (OR CNN + Distortion Correction [DC]). Images were provided in a checkerboard pattern, and peripheral vessels were graded in four quadrants based on alignment to the adjacent box.
RESULTS: A total of 660 boxes were analysed from 55 eyes. Dice scores were compared between the three methods (RANSAC-SC/OR CNN/OR CNN + DC): 0.3341/0.4665/0.4784 for fold 1-2 and 0.3315/0.4494/0.4596 for fold 2-1 in composite images. The images composed using OR CNN + DC had a median rating of 4 (out of 5) versus 2 using RANSAC-SC. The odds of getting a higher grading level are 4.8 times higher using our OR CNN + DC than RANSAC-SC (p < 0.0001).
CONCLUSION: Peripheral retinal vessel alignment performed better using our AI algorithm than RANSAC-SC. This may help improve co-localizing retinal anatomy and pathology with our algorithm.
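The checkerboard presentation used for grading interleaves the two registered images so misalignment shows up as broken vessels at box borders. A minimal sketch of such a composite, under my own assumptions about box size and layout:

```python
import numpy as np

def checkerboard(img_a, img_b, box=64):
    """Interleave two registered images of equal shape in a checkerboard pattern."""
    H, W = img_a.shape[:2]
    yy, xx = np.mgrid[0:H, 0:W]
    take_a = ((yy // box) + (xx // box)) % 2 == 0
    out = img_b.copy()
    out[take_a] = img_a[take_a]  # well-aligned vessels run straight across borders
    return out
```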
Affiliation(s)
- Fritz Gerald P Kalaw: Jacobs Retina Center, University of California, San Diego, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA; Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Melina Cavichini: Jacobs Retina Center, University of California, San Diego, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Junkang Zhang: Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
- Bo Wen: Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
- Andrew C Lin: Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Anna Heinke: Jacobs Retina Center, University of California, San Diego, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Truong Nguyen: Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
- Cheolhong An: Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
- Lingyun Cheng: Jacobs Retina Center, University of California, San Diego, CA, USA
- William R Freeman: Jacobs Retina Center, University of California, San Diego, CA, USA; Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA; Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA; Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
7. Qiu Z, Hu Y, Chen X, Zeng D, Hu Q, Liu J. Rethinking Dual-Stream Super-Resolution Semantic Learning in Medical Image Segmentation. IEEE Trans Pattern Anal Mach Intell 2024;46:451-464. [PMID: 37812562; DOI: 10.1109/tpami.2023.3322735]
Abstract
Image segmentation is a fundamental task in medical image analysis whose accuracy has been improved by the development of neural networks. However, existing algorithms that achieve high-resolution performance require high-resolution input, resulting in substantial computational expense and limiting their applicability in the medical field. Several studies have proposed dual-stream learning frameworks that incorporate a super-resolution task as an auxiliary. In this paper, we rethink these frameworks and reveal that the feature similarity between tasks is insufficient to constrain vessel or lesion segmentation in the medical field, due to their small proportion in the image. To address this issue, we propose the DS2F (Dual-Stream Shared Feature) framework, including a Shared Feature Extraction Module (SFEM). Specifically, we present a Multi-Scale Cross Gate (MSCG) utilizing multi-scale features as a novel instance of the SFEM. We then define a proxy task and a proxy loss to make the features focus on the targets, based on the assumption that a limited set of shared features between tasks is helpful for their performance. Extensive experiments on six publicly available datasets across three different scenarios verify the effectiveness of our framework, and various ablation studies demonstrate the significance of DS2F.
8. Chen JS, Marra KV, Robles-Holmes HK, Ly KB, Miller J, Wei G, Aguilar E, Bucher F, Ideguchi Y, Coyner AS, Ferrara N, Campbell JP, Friedlander M, Nudleman E. Applications of Deep Learning: Automated Assessment of Vascular Tortuosity in Mouse Models of Oxygen-Induced Retinopathy. Ophthalmol Sci 2024;4:100338. [PMID: 37869029; PMCID: PMC10585474; DOI: 10.1016/j.xops.2023.100338]
Abstract
Objective: To develop a generative adversarial network (GAN) to segment major blood vessels from retinal flat-mount images from oxygen-induced retinopathy (OIR) and demonstrate the utility of these GAN-generated vessel segmentations in quantifying vascular tortuosity.
Design: Development and validation of a GAN.
Subjects: Three datasets containing 1084, 50, and 20 flat-mount mouse retina images with various stains used and ages at sacrifice, acquired from previously published manuscripts.
Methods: Four graders manually segmented major blood vessels from flat-mount images of retinas from OIR mice. Pix2Pix, a high-resolution GAN, was trained on 984 pairs of raw flat-mount images and manual vessel segmentations and then tested on 100 and 50 image pairs from a held-out and external test set, respectively. GAN-generated and manual vessel segmentations were then used as input to a previously published algorithm (iROP-Assist) to generate a vascular cumulative tortuosity index (CTI) for 20 image pairs containing mouse eyes treated with aflibercept versus control.
Main Outcome Measures: Mean Dice coefficients were used to compare segmentation accuracy between the GAN-generated and manually annotated segmentation maps. For the image pairs treated with aflibercept versus control, mean CTIs were also calculated for both GAN-generated and manual vessel maps. Statistical significance was evaluated using Wilcoxon signed-rank tests (P ≤ 0.05 threshold for significance).
Results: The Dice coefficient for the GAN-generated versus manual vessel segmentations was 0.75 ± 0.27 and 0.77 ± 0.17 for the held-out and external test sets, respectively. The mean CTI generated from the GAN-generated and manual vessel segmentations was 1.12 ± 0.07 versus 1.03 ± 0.02 (P = 0.003) and 1.06 ± 0.04 versus 1.01 ± 0.01 (P < 0.001), respectively, for eyes treated with aflibercept versus control, demonstrating that vascular tortuosity was rescued by aflibercept when quantified by GAN-generated and manual vessel segmentations.
Conclusions: GANs can be used to accurately generate vessel map segmentations from flat-mount images. These vessel maps may be used to evaluate novel metrics of vascular tortuosity in OIR, such as the CTI, and have the potential to accelerate research in treatments for ischemic retinopathies.
Financial Disclosures: The author(s) have no proprietary or commercial interest in any materials discussed in this article.
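A common tortuosity measure for a single vessel is the arc-to-chord ratio of its centerline; the CTI of iROP-Assist aggregates tortuosity over a whole vessel map, and its exact definition is not reproduced here. A generic sketch under that simplifying assumption:

```python
import numpy as np

def tortuosity(points):
    """Arc length / chord length for an ordered (N, 2) vessel centerline."""
    seg = np.diff(points, axis=0)
    arc = np.sqrt((seg ** 2).sum(axis=1)).sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return float(arc / max(chord, 1e-8))

# A gently curving hypothetical centerline: tortuosity slightly above 1.0.
t = np.linspace(0, np.pi, 50)
curve = np.stack([t, 0.1 * np.sin(t)], axis=1)
print(tortuosity(curve))
```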
Affiliation(s)
- Jimmy S. Chen: Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Kyle V. Marra: Molecular Medicine, the Scripps Research Institute, San Diego, California; School of Medicine, University of California San Diego, San Diego, California
- Hailey K. Robles-Holmes: Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Kristine B. Ly: College of Optometry, Pacific University, Forest Grove, Oregon
- Joseph Miller: Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- Guoqin Wei: Molecular Medicine, the Scripps Research Institute, San Diego, California
- Edith Aguilar: Molecular Medicine, the Scripps Research Institute, San Diego, California
- Felicitas Bucher: Eye Center, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Yoichi Ideguchi: Molecular Medicine, the Scripps Research Institute, San Diego, California
- Aaron S. Coyner: Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Napoleone Ferrara: Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
- J. Peter Campbell: Casey Eye Institute, Department of Ophthalmology, Oregon Health & Science University, Portland, Oregon
- Martin Friedlander: Molecular Medicine, the Scripps Research Institute, San Diego, California
- Eric Nudleman: Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, San Diego, California
9. Zhang J, Wen B, Kalaw FGP, Cavichini M, Bartsch DUG, Freeman WR, Nguyen TQ, An C. Accurate registration between ultra-wide-field and narrow angle retina images with 3D eyeball shape optimization. Proc Int Conf Image Process 2023;2023:2750-2754. [PMID: 38946915; PMCID: PMC11211856; DOI: 10.1109/icip49359.2023.10223163]
Abstract
Ultra-Wide-Field (UWF) retina images have attracted wide attention in recent years in the study of the retina. However, accurate registration between UWF images and other types of retina images can be challenging due to the distortion in the peripheral areas of a UWF image, which a 2D warping cannot handle. In this paper, we propose a novel 3D distortion correction method that sets up a 3D projection model and optimizes a dense 3D retina mesh to correct the distortion in the UWF image. The corrected UWF image can then be accurately aligned to the target image using 2D alignment methods. The experimental results show that our proposed method outperforms the state-of-the-art method by 30%.
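The core geometric idea is that UWF pixels can be lifted back onto a spherical eyeball model before reprojection. The sketch below shows only the inverse stereographic projection step under idealized assumptions (a unit sphere and pre-normalized plane coordinates); the paper goes further and optimizes a dense 3D retina mesh.

```python
import numpy as np

def uwf_to_sphere(uv):
    """Inverse stereographic projection: plane points (N, 2) -> unit-sphere points (N, 3).

    Assumes the UWF pixel coordinates were already normalized so the image plane
    is the stereographic image of a unit eyeball; this is an illustrative
    simplification, not the paper's optimized 3D retina mesh.
    """
    u, v = uv[:, 0], uv[:, 1]
    r2 = u * u + v * v
    x = 2.0 * u / (1.0 + r2)
    y = 2.0 * v / (1.0 + r2)
    z = (r2 - 1.0) / (1.0 + r2)
    return np.stack([x, y, z], axis=1)

pts = uwf_to_sphere(np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 1.0]]))
print(np.linalg.norm(pts, axis=1))  # all 1.0: the points lie on the unit sphere
```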
Affiliation(s)
- Junkang Zhang: Department of Electrical and Computer Engineering, UC San Diego
- Bo Wen: Department of Electrical and Computer Engineering, UC San Diego
- Fritz Gerald P Kalaw: Department of Ophthalmology, Jacobs Retina Center at Shiley Eye Institute, UC San Diego
- Melina Cavichini: Department of Ophthalmology, Jacobs Retina Center at Shiley Eye Institute, UC San Diego
- Dirk-Uwe G Bartsch: Department of Ophthalmology, Jacobs Retina Center at Shiley Eye Institute, UC San Diego
- William R Freeman: Department of Ophthalmology, Jacobs Retina Center at Shiley Eye Institute, UC San Diego
- Truong Q Nguyen: Department of Electrical and Computer Engineering, UC San Diego
- Cheolhong An: Department of Electrical and Computer Engineering, UC San Diego
10. Hu T, Yang B, Guo J, Zhang W, Liu H, Wang N, Li H. A fundus image classification framework for learning with noisy labels. Comput Med Imaging Graph 2023;108:102278. [PMID: 37586260; DOI: 10.1016/j.compmedimag.2023.102278]
Abstract
Fundus images are widely used in the screening and diagnosis of eye diseases. Current classification algorithms for computer-aided diagnosis in fundus images rely on large amounts of data with reliable labels. However, noisy labels degrade the performance of data-dependent algorithms such as supervised deep learning. This paper presents a noisy-label learning framework suitable for the multiclass classification of fundus diseases, combining data cleansing (DC), adaptive negative learning (ANL), and sharpness-aware minimization (SAM) modules. First, the DC module filters the noisy labels in the training dataset based on prediction confidence. Then, the ANL module modifies the loss function by choosing complementary labels, which are neither the given labels nor the labels with the highest confidence. Moreover, for better generalization, the SAM module simultaneously optimizes the loss and its sharpness. Extensive experiments on both private and public datasets show that our method substantially improves the classification of multiple fundus diseases with noisy labels.
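To make the DC and ANL ideas concrete, here is a minimal sketch under my own assumptions; the threshold, the sampling rule for complementary labels, and the loss form are illustrative, not the paper's exact formulation (it also assumes at least three classes):

```python
import torch
import torch.nn.functional as F

def confident_subset(logits, labels, tau=0.9):
    """DC-style filtering: keep samples whose given label is predicted confidently."""
    probs = F.softmax(logits, dim=1)
    return probs.gather(1, labels.unsqueeze(1)).squeeze(1) > tau

def negative_learning_loss(logits, labels):
    """ANL-style loss: pick a complementary label that is neither the given label
    nor the top prediction, and push its probability toward zero."""
    probs = F.softmax(logits, dim=1)
    top = probs.argmax(dim=1)
    n, c = probs.shape
    comp = torch.randint(0, c, (n,), device=logits.device)
    bad = (comp == labels) | (comp == top)
    while bad.any():  # resample until complementary labels avoid both classes
        comp[bad] = torch.randint(0, c, (int(bad.sum()),), device=logits.device)
        bad = (comp == labels) | (comp == top)
    p_comp = probs.gather(1, comp.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + 1e-8).mean()
```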
Affiliation(s)
- Tingxin Hu: Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
- Bingyu Yang: Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
- Jia Guo: Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
- Weihang Zhang: Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
- Hanruo Liu: Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China
- Ningli Wang: Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China
- Huiqi Li: Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
11. Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal 2023;85:102762. [PMID: 36738650; PMCID: PMC10010286; DOI: 10.1016/j.media.2023.102762]
Abstract
The Transformer, one of the latest technological advances of deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to inquire about the status quo of Transformers in medical imaging and ask: can Transformer models transform medical imaging? In this paper, we attempt to answer this question. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and a highlight of the key defining properties that characterize Transformers, we offer a comprehensive review of state-of-the-art Transformer-based approaches for medical imaging and summarize current research progress in medical image segmentation, recognition, detection, registration, reconstruction, enhancement, and related areas. In particular, what distinguishes our review is its organization around the Transformer's key defining properties, mostly derived from comparing Transformers and CNNs, and around the type of architecture, which specifies how the Transformer and CNN are combined, all helping readers to best understand the rationale behind the reviewed approaches. We conclude with discussions of future perspectives.
Affiliation(s)
- Jun Li: Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Junyu Chen: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
- Yucheng Tang: Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Ce Wang: Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Bennett A Landman: Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- S Kevin Zhou: Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; School of Biomedical Engineering & Suzhou Institute for Advanced Research, Center for Medical Imaging, Robotics, and Analytic Computing & Learning (MIRACLE), University of Science and Technology of China, Suzhou 215123, China
12. Bhambra N, Antaki F, Malt FE, Xu A, Duval R. Deep learning for ultra-widefield imaging: a scoping review. Graefes Arch Clin Exp Ophthalmol 2022;260:3737-3778. [PMID: 35857087; DOI: 10.1007/s00417-022-05741-3]
Abstract
PURPOSE: This article is a scoping review of published, peer-reviewed articles applying deep learning (DL) to ultra-widefield (UWF) imaging. It provides an overview of the published uses of DL and UWF imaging for the detection of ophthalmic and systemic diseases, generative image synthesis, quality assessment of images, and segmentation and localization of ophthalmic image features.
METHODS: A literature search was performed up to August 31st, 2021 using PubMed, Embase, Cochrane Library, and Google Scholar. The inclusion criteria were: (1) deep learning, (2) ultra-widefield imaging. The exclusion criteria were: (1) articles published in any language other than English, (2) articles not peer-reviewed (usually preprints), (3) no full-text availability, (4) articles using machine learning algorithms other than deep learning. No study design was excluded from consideration.
RESULTS: A total of 36 studies were included. Twenty-three studies discussed ophthalmic disease detection and classification, 5 discussed segmentation and localization of ultra-widefield images (UWFIs), 3 discussed generative image synthesis, 3 discussed ophthalmic image quality assessment, and 2 discussed detecting systemic diseases via UWF imaging.
CONCLUSION: The application of DL to UWF imaging has demonstrated significant effectiveness in the diagnosis and detection of ophthalmic diseases including diabetic retinopathy, retinal detachment, and glaucoma. DL has also been applied to the generation of synthetic ophthalmic images. This scoping review highlights and discusses the current uses of DL with UWF imaging, and the future of DL applications in this field.
Affiliation(s)
- Nishaant Bhambra: Faculty of Medicine, McGill University, Montréal, Québec, Canada
- Fares Antaki: Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
- Farida El Malt: Faculty of Medicine, McGill University, Montréal, Québec, Canada
- AnQi Xu: Faculty of Medicine, Université de Montréal, Montréal, Québec, Canada
- Renaud Duval: Department of Ophthalmology, Université de Montréal, Montréal, Québec, Canada; Centre Universitaire d'Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, CIUSSS de L'Est-de-L'Île-de-Montréal, 5415 Assumption Blvd, Montréal, Québec, H1T 2M4, Canada
13. Li Y, Zhang Y, Cui W, Lei B, Kuang X, Zhang T. Dual Encoder-Based Dynamic-Channel Graph Convolutional Network With Edge Enhancement for Retinal Vessel Segmentation. IEEE Trans Med Imaging 2022;41:1975-1989. [PMID: 35167444; DOI: 10.1109/tmi.2022.3151666]
Abstract
Retinal vessel segmentation with deep learning technology is a crucial auxiliary method for clinicians to diagnose fundus diseases. However, deep learning approaches inevitably lose edge information, which carries the spatial features of vessels, during down-sampling, leading to limited segmentation performance on fine blood vessels. Furthermore, existing methods ignore the dynamic topological correlations among feature maps in the deep learning framework, resulting in inefficient capture of channel characteristics. To address these limitations, we propose a novel dual encoder-based dynamic-channel graph convolutional network with edge enhancement (DE-DCGCN-EE) for retinal vessel segmentation. Specifically, we first design an edge detection-based dual encoder to preserve vessel edges during down-sampling. Second, we investigate a dynamic-channel graph convolutional network that maps the image channels to a topological space and synthesizes the features of each channel on the topological map, addressing the insufficient use of channel information. Finally, we study an edge enhancement block that fuses the edge and spatial features in the dual encoder, which is beneficial for improving the accuracy of fine blood vessel segmentation. Competitive experimental results on five retinal image datasets validate the efficacy of the proposed DE-DCGCN-EE, which achieves better segmentation results than other state-of-the-art methods, indicating its potential for clinical application.
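The dynamic-channel idea can be illustrated by treating each feature-map channel as a graph node whose neighbors are determined by feature similarity. The sketch below is a strong simplification of the paper's layer; the pooled node descriptors and the gating output are my own assumptions, not the published architecture.

```python
import torch
import torch.nn.functional as F

class ChannelGraphAttention(torch.nn.Module):
    """Treat each channel as a graph node, build a similarity adjacency,
    run one graph-convolution step, and reweight channels with the result."""

    def __init__(self, node_dim=64):
        super().__init__()
        self.node_dim = node_dim
        self.gc = torch.nn.Linear(node_dim, node_dim)   # node feature transform
        self.score = torch.nn.Linear(node_dim, 1)       # per-channel gate

    def forward(self, x):                               # x: (B, C, H, W)
        s = int(self.node_dim ** 0.5)
        nodes = F.adaptive_avg_pool2d(x, s).flatten(2)          # (B, C, node_dim)
        adj = torch.softmax(nodes @ nodes.transpose(1, 2), -1)  # (B, C, C) adjacency
        h = torch.relu(self.gc(adj @ nodes))                    # one GCN step
        gate = torch.sigmoid(self.score(h))                     # (B, C, 1)
        return x * gate.unsqueeze(-1)                           # channel reweighting

att = ChannelGraphAttention()
print(att(torch.randn(2, 32, 128, 128)).shape)  # torch.Size([2, 32, 128, 128])
```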
14. Hofer D, Schmidt-Erfurth U, Orlando JI, Goldbach F, Gerendas BS, Seeböck P. Improving foveal avascular zone segmentation in fluorescein angiograms by leveraging manual vessel labels from public color fundus pictures. Biomed Opt Express 2022;13:2566-2580. [PMID: 35774310; PMCID: PMC9203117; DOI: 10.1364/boe.452873]
Abstract
In clinical routine, ophthalmologists frequently analyze the shape and size of the foveal avascular zone (FAZ) to detect and monitor retinal diseases. In order to extract those parameters, the contours of the FAZ need to be segmented, which is normally achieved by analyzing the retinal vasculature (RV) around the macula in fluorescein angiograms (FA). Computer-aided segmentation methods based on deep learning (DL) can automate this task. However, current approaches for segmenting the FAZ are often tailored to a specific dataset or require manual initialization. Furthermore, they do not take the variability and challenges of clinical FA into account, which are often of low quality and difficult to analyze. In this paper we propose a DL-based framework to automatically segment the FAZ in challenging FA scans from clinical routine. Our approach mimics the workflow of retinal experts by using additional RV labels as a guidance during training. Hence, our model is able to produce RV segmentations simultaneously. We minimize the annotation work by using a multi-modal approach that leverages already available public datasets of color fundus pictures (CFPs) and their respective manual RV labels. Our experimental evaluation on two datasets with FA from 1) clinical routine and 2) large multicenter clinical trials shows that the addition of weak RV labels as a guidance during training improves the FAZ segmentation significantly with respect to using only manual FAZ annotations.
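The guidance scheme amounts to multi-task training: a supervised FAZ term plus an auxiliary term on the weak RV labels transferred from CFP datasets. A minimal sketch, where the loss choices and weighting are assumptions rather than the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def faz_rv_loss(faz_logits, faz_gt, rv_logits, rv_weak, rv_weight=0.5):
    """Supervised FAZ loss plus an auxiliary loss on weak retinal-vessel labels;
    rv_weight balances the guidance signal against the main task."""
    faz_loss = F.binary_cross_entropy_with_logits(faz_logits, faz_gt)
    rv_loss = F.binary_cross_entropy_with_logits(rv_logits, rv_weak)
    return faz_loss + rv_weight * rv_loss
```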
Affiliation(s)
- Dominik Hofer: Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Ursula Schmidt-Erfurth: Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- José Ignacio Orlando: Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria; Yatiris Group, PLADEMA Institute, CONICET, Universidad Nacional del Centro de la Provincia de Buenos Aires, Gral. Pinto 399, Tandil, Buenos Aires, Argentina
- Felix Goldbach: Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Bianca S. Gerendas: Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Philipp Seeböck: Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
15. Hatamizadeh A, Hosseini H, Patel N, Choi J, Pole CC, Hoeferlin CM, Schwartz SD, Terzopoulos D. RAVIR: A Dataset and Methodology for the Semantic Segmentation and Quantitative Analysis of Retinal Arteries and Veins in Infrared Reflectance Imaging. IEEE J Biomed Health Inform 2022;26:3272-3283. [PMID: 35349464; DOI: 10.1109/jbhi.2022.3163352]
Abstract
The retinal vasculature provides important clues in the diagnosis and monitoring of systemic diseases including hypertension and diabetes. The microvascular system is of primary involvement in such conditions, and the retina is the only anatomical site where the microvasculature can be directly observed. The objective assessment of retinal vessels has long been considered a surrogate biomarker for systemic vascular diseases, and with recent advancements in retinal imaging and computer vision technologies, this topic has become the subject of renewed attention. In this paper, we present a novel dataset, dubbed RAVIR, for the semantic segmentation of Retinal Arteries and Veins in Infrared Reflectance (IR) imaging. It enables the creation of deep learning-based models that distinguish extracted vessel type without extensive post-processing. We propose a novel deep learning-based methodology, denoted as SegRAVIR, for the semantic segmentation of retinal arteries and veins and the quantitative measurement of the widths of segmented vessels. Our extensive experiments validate the effectiveness of SegRAVIR and demonstrate its superior performance in comparison to state-of-the-art models. Additionally, we propose a knowledge distillation framework for the domain adaptation of RAVIR pretrained networks on color images. We demonstrate that our pretraining procedure yields new state-of-the-art benchmarks on the DRIVE, STARE, and CHASE_DB1 datasets. Dataset link: https://ravirdataset.github.io/data.
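Vessel width measurement from a binary mask is often done by sampling the Euclidean distance transform along the vessel skeleton; twice the distance to the background approximates the local diameter. A generic sketch of that recipe (SegRAVIR's exact measurement may differ):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def vessel_widths(mask):
    """Local vessel width: twice the distance to background, sampled on the skeleton."""
    mask = mask.astype(bool)
    dist = distance_transform_edt(mask)   # distance of each vessel pixel to background
    return 2.0 * dist[skeletonize(mask)]  # width estimates along the centerline
```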
16. Yi M, Wu LC, Du QY, Guan CZ, Liu MD, Li XS, Xiong HL, Tan HS, Wang XH, Zhong JP, Han DA, Wang MY, Zeng YG. Spatiotemporal absorption fluctuation imaging based on U-Net. J Biomed Opt 2022;27:026002. [PMID: 35137573; PMCID: PMC8823698; DOI: 10.1117/1.jbo.27.2.026002]
Abstract
SIGNIFICANCE: Full-field optical angiography is critical for vascular disease research and clinical diagnosis. Existing methods struggle to improve the temporal and spatial resolutions simultaneously.
AIM: Spatiotemporal absorption fluctuation imaging (ST-AFI) is proposed to achieve dynamic blood flow imaging with high spatial and temporal resolutions.
APPROACH: ST-AFI is a dynamic optical angiography technique based on a low-coherence imaging system and U-Net. The system is used to acquire a series of dynamic red blood cell (RBC) signals and static background tissue signals, and U-Net is used to predict optical absorption properties and spatiotemporal fluctuation information. U-Net has generally been used for two-dimensional blood flow segmentation as an image-processing algorithm in biomedical imaging; in the proposed approach, the network simultaneously analyzes spatial differences in the absorption coefficient and temporal dynamic absorption fluctuations.
RESULTS: The spatial resolution of ST-AFI is up to 4.33 μm, and the temporal resolution is up to 0.032 s. In vivo experiments on 2.5-day-old chicken embryos were conducted. The results demonstrate that intermittent RBC flow in capillaries can be resolved and that blood vessels without blood flow can be suppressed.
CONCLUSIONS: Using ST-AFI to achieve convolutional neural network (CNN)-based dynamic angiography is a novel approach that may be useful for several clinical applications. Owing to their strong feature-extraction ability, CNNs have the potential to be extended to other blood flow imaging methods to predict spatiotemporal optical properties with improved temporal and spatial resolutions.
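The classical, network-free way to expose absorption fluctuation is a per-pixel temporal statistic over the frame stack: moving RBCs fluctuate while static tissue does not. A minimal sketch of that baseline (ST-AFI's U-Net is trained to improve on this and is not shown):

```python
import numpy as np

def fluctuation_map(stack):
    """Normalized per-pixel temporal fluctuation over a (T, H, W) frame stack;
    high values flag flowing blood, low values flag static background tissue."""
    stack = stack.astype(np.float32)
    return stack.std(axis=0) / (stack.mean(axis=0) + 1e-6)
```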
Affiliation(s)
- Min Yi: Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China; Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China
- Lin-Chang Wu: Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China
- Qian-Yi Du: Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China
- Cai-Zhong Guan: Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China; Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China
- Ming-Di Liu: Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China
- Xiao-Song Li: Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China; Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China
- Hong-Lian Xiong: Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China; Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China
- Hai-Shu Tan: Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China; Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China
- Xue-Hua Wang: Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China; Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China
- Jun-Ping Zhong: Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China; Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China
- Ding-An Han: Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China; Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China
- Ming-Yi Wang: Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China; Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China; Guangdong Provincial Key Laboratory of Animal Molecular Design and Precise Breeding, Foshan, China
- Ya-Guang Zeng: Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China; Foshan University, School of Physics and Optoelectronic Engineering, Foshan, China
17. Zhang J, Wang Y, Dai J, Cavichini M, Bartsch DUG, Freeman WR, Nguyen TQ, An C. Two-Step Registration on Multi-Modal Retinal Images via Deep Neural Networks. IEEE Trans Image Process 2022;31:823-838. [PMID: 34932479; PMCID: PMC8912939; DOI: 10.1109/tip.2021.3135708]
Abstract
Multi-modal retinal image registration plays an important role in the ophthalmological diagnosis process. The conventional methods lack robustness in aligning multi-modal images of various imaging qualities. Deep-learning methods have not been widely developed for this task, especially for the coarse-to-fine registration pipeline. To handle this task, we propose a two-step method based on deep convolutional networks, including a coarse alignment step and a fine alignment step. In the coarse alignment step, a global registration matrix is estimated by three sequentially connected networks for vessel segmentation, feature detection and description, and outlier rejection, respectively. In the fine alignment step, a deformable registration network is set up to find pixel-wise correspondence between a target image and a coarsely aligned image from the previous step to further improve the alignment accuracy. Particularly, an unsupervised learning framework is proposed to handle the difficulties of inconsistent modalities and lack of labeled training data for the fine alignment step. The proposed framework first changes multi-modal images into a same modality through modality transformers, and then adopts photometric consistency loss and smoothness loss to train the deformable registration network. The experimental results show that the proposed method achieves state-of-the-art results in Dice metrics and is more robust in challenging cases.
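The unsupervised fine-alignment objective combines photometric consistency between the warped and target images with a smoothness penalty on the displacement field. Below is a minimal sketch of such a loss; the coordinate convention, loss forms, and weighting are my assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def unsupervised_reg_loss(moving, fixed, flow, smooth_weight=0.1):
    """Photometric + smoothness loss for deformable registration, assuming both
    images were already mapped to a common modality (as with the paper's
    modality transformers). moving/fixed: (B, 1, H, W); flow: (B, 2, H, W)
    displacements in normalized [-1, 1] coordinates."""
    b, _, h, w = moving.shape
    ys = torch.linspace(-1, 1, h, device=moving.device)
    xs = torch.linspace(-1, 1, w, device=moving.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    base = torch.stack([gx, gy], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)          # add predicted displacement
    warped = F.grid_sample(moving, grid, align_corners=True)
    photometric = F.l1_loss(warped, fixed)
    # First-order smoothness of the displacement field.
    smooth = (flow[..., 1:, :] - flow[..., :-1, :]).abs().mean() + \
             (flow[..., :, 1:] - flow[..., :, :-1]).abs().mean()
    return photometric + smooth_weight * smooth
```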
18. Więcławek W, Danch-Wierzchowska M, Rudzki M, Sędziak-Marcinek B, Teper SJ. Ultra-Widefield Fluorescein Angiography Image Brightness Compensation Based on Geometrical Features. Sensors (Basel) 2021;22:12. [PMID: 35009554; PMCID: PMC8747562; DOI: 10.3390/s22010012]
Abstract
Ultra-widefield fluorescein angiography (UWFA) is an emerging imaging modality used to characterize pathologies in the retinal vasculature, such as microaneurysms (MAs) and vascular leakages. Despite its potential value for diagnosis and disease screening, objective quantitative assessment of retinal pathologies by UWFA is currently limited because laborious manual processing is required. In this report, we describe a geometrical method for compensating the uneven brightness inherent to the UWFA imaging technique. The correction function is based on the geometrical eyeball shape; it is therefore fully automated and depends only on the pixel distance from the center of the imaged retina. The method's performance was assessed on a database of 256 UWFA images using several image quality measures, which show that the correction improves image quality. The method is also compared with the commonly used CLAHE approach and was employed in a pilot study on vascular segmentation, giving a noticeable improvement in segmentation results. It can therefore serve as an image preprocessing step in retinal UWFA image analysis.
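The essence of the method is a multiplicative gain that depends only on the distance of a pixel from the center of the imaged retina. The sketch below illustrates that structure with a placeholder gain function on a grayscale image; the published correction derives its gain from the eyeball geometry instead.

```python
import numpy as np

def radial_compensation(img, center, gain_fn):
    """Multiply each pixel of a grayscale image by a gain that depends only on
    its (normalized) distance from the retina center."""
    H, W = img.shape
    yy, xx = np.mgrid[0:H, 0:W]
    r = np.hypot(yy - center[0], xx - center[1])
    gain = gain_fn(r / r.max())
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

# Placeholder gain: brighten the periphery, leave the center untouched.
corrected = radial_compensation(
    np.full((400, 400), 128, np.uint8), (200, 200), lambda r: 1.0 + 0.8 * r ** 2)
```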
Affiliation(s)
- Wojciech Więcławek: Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta St. 40, 41-800 Zabrze, Poland
- Marta Danch-Wierzchowska: Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta St. 40, 41-800 Zabrze, Poland
- Marcin Rudzki: Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta St. 40, 41-800 Zabrze, Poland
- Bogumiła Sędziak-Marcinek: Clinical Department of Ophthalmology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Panewnicka St. 65, 40-760 Katowice, Poland
- Slawomir Jan Teper: Clinical Department of Ophthalmology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Panewnicka St. 65, 40-760 Katowice, Poland
19. Zhang J, Wang Y, Bartsch DUG, Freeman WR, Nguyen TQ, An C. Perspective Distortion Correction for Multi-Modal Registration between Ultra-Widefield and Narrow-Angle Retinal Images. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:4086-4091. [PMID: 34892126; PMCID: PMC9359414; DOI: 10.1109/embc46164.2021.9631084]
Abstract
Multi-modal retinal image registration between 2D Ultra-Widefield (UWF) and narrow-angle (NA) images has not been well studied, since most existing methods focus mainly on NA image alignment. The stereographic projection model used in UWF imaging causes strong distortions in peripheral areas, which leads to inferior alignment quality. We propose a distortion correction method that remaps the UWF images based on the estimated camera view points of the NA images. In addition, we set up a CNN-based registration pipeline for UWF and NA images, which consists of the distortion correction method and three networks for vessel segmentation, feature detection and matching, and outlier rejection. Experimental results on our collected dataset show the effectiveness of the proposed pipeline and the distortion correction method.
20. Tajbakhsh N, Roth H, Terzopoulos D, Liang J. Guest Editorial Annotation-Efficient Deep Learning: The Holy Grail of Medical Imaging. IEEE Trans Med Imaging 2021;40:2526-2533. [PMID: 34795461; PMCID: PMC8594751; DOI: 10.1109/tmi.2021.3089292]
Affiliation(s)
- Demetri Terzopoulos: University of California, Los Angeles, and VoxelCloud, Inc., Los Angeles, CA, USA
21. Ju L, Wang X, Zhao X, Bonnington P, Drummond T, Ge Z. Leveraging Regular Fundus Images for Training UWF Fundus Diagnosis Models via Adversarial Learning and Pseudo-Labeling. IEEE Trans Med Imaging 2021;40:2911-2925. [PMID: 33531297; DOI: 10.1109/tmi.2021.3056395]
Abstract
Recently, ultra-widefield (UWF) 200° fundus imaging with Optos cameras has gradually been adopted because it captures a broader view of the fundus, and thus more information, than regular 30°-60° fundus cameras. Compared with UWF fundus images, regular fundus images offer a large amount of high-quality, well-annotated data. Due to the domain gap, models trained on regular fundus images perform poorly when recognizing UWF fundus images. Hence, given that annotating medical data is labor intensive and time consuming, in this paper we explore how to leverage regular fundus images to compensate for the limited UWF fundus data and annotations for more efficient training. We propose a modified cycle generative adversarial network (CycleGAN) model to bridge the gap between regular and UWF fundus images and to generate additional UWF fundus images for training. A consistency regularization term is added to the GAN loss to improve and regulate the quality of the generated data. Our method does not require images from the two domains to be paired or even to share semantic labels, which provides great convenience for data collection. Furthermore, we show that our method is robust to noise and errors introduced by the generated unlabeled data through the pseudo-labeling technique. We evaluated the effectiveness of our methods on several common fundus diseases and tasks, such as diabetic retinopathy (DR) classification, lesion detection, and tessellated fundus segmentation. The experimental results demonstrate that our proposed method simultaneously achieves superior generalizability of the learned representations and performance improvements on multiple tasks.
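The pseudo-labeling side of the method can be illustrated by a confidence-filtered labeling pass over generated UWF-like images. A minimal sketch, where the model interface and threshold are my assumptions:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label(model, images, tau=0.95):
    """Run the current classifier on unlabeled (e.g., CycleGAN-generated) images
    and keep only confident predictions as pseudo-labels for the next round."""
    probs = F.softmax(model(images), dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf > tau
    return images[keep], labels[keep]
```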
22. Jia D, Zhuang X. Learning-based algorithms for vessel tracking: A review. Comput Med Imaging Graph 2021;89:101840. [PMID: 33548822; DOI: 10.1016/j.compmedimag.2020.101840]
Abstract
Developing efficient vessel-tracking algorithms is crucial for imaging-based diagnosis and treatment of vascular diseases. Vessel tracking aims to solve recognition problems such as key (seed) point detection, centerline extraction, and vascular segmentation. Extensive image-processing techniques have been developed to overcome the difficulties of vessel tracking, which are mainly attributed to the complex morphologies of vessels and the image characteristics of angiography. This paper presents a literature review of vessel-tracking methods, focusing on machine-learning-based approaches. First, conventional machine-learning-based algorithms are reviewed; then, a general survey of deep-learning-based frameworks is provided. On the basis of the reviewed methods, evaluation issues are introduced. The paper concludes with a discussion of the remaining challenges and future research directions.
Affiliation(s)
- Dengqiang Jia: School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiahai Zhuang: School of Data Science, Fudan University, Shanghai, China