1. Yan F, Xu Y, Kong Y, Zhang W, Li H. Two-stage color fundus image registration via keypoint refinement and confidence-guided estimation. Comput Med Imaging Graph 2025;123:102554. [PMID: 40294515] [DOI: 10.1016/j.compmedimag.2025.102554]
Abstract
Color fundus images are widely used for diagnosing diseases such as glaucoma, cataracts, and diabetic retinopathy. The registration of color fundus images is crucial for assessing changes in fundus appearance to determine disease progression. In this paper, a novel two-stage framework is proposed for end-to-end color fundus image registration without requiring any training or annotation. In the first stage, pre-trained SuperPoint and SuperGlue networks are used to obtain matching pairs, which are then refined based on their slopes. In the second stage, Confidence-Guided Transformation Matrix Estimation (CGTME) is proposed to estimate the final perspective transformation matrix. Specifically, a variant of the 4-point algorithm, the CG 4-point algorithm, is designed to adjust the contribution of matched points in estimating the perspective transformation matrix based on the confidence of SuperGlue. Then, we select the matched points with high confidence for the final estimation of the transformation matrix. Experimental results show that the proposed algorithm improves registration performance effectively.
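The confidence-weighted estimation idea generalizes the standard direct linear transform (DLT). Below is a minimal NumPy sketch of confidence-weighted homography estimation in that spirit; the weighting scheme and function names are illustrative assumptions, not the authors' CG 4-point implementation.

```python
import numpy as np

def weighted_homography(src, dst, conf):
    """Estimate a 3x3 perspective transform by weighted DLT.

    src, dst: (N, 2) matched keypoints; conf: (N,) match confidences
    (e.g., SuperGlue scores). Each correspondence contributes two rows
    to the DLT system, scaled by its confidence, so low-confidence
    matches contribute less to the least-squares solution.
    """
    rows = []
    for (x, y), (u, v), w in zip(src, dst, conf):
        rows.append(w * np.array([-x, -y, -1, 0, 0, 0, u * x, u * y, u]))
        rows.append(w * np.array([0, 0, 0, -x, -y, -1, v * x, v * y, v]))
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)   # solution = right singular vector
    H = Vt[-1].reshape(3, 3)      # belonging to the smallest singular value
    return H / H[2, 2]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(0, 512, (20, 2))
    H_true = np.array([[1.02, 0.01, 5.0], [-0.01, 0.99, -3.0], [1e-5, 2e-5, 1.0]])
    pts = np.c_[src, np.ones(20)] @ H_true.T
    dst = pts[:, :2] / pts[:, 2:]
    conf = rng.uniform(0.5, 1.0, 20)         # stand-in for matcher scores
    print(np.round(weighted_homography(src, dst, conf), 4))
```

Down-weighting rather than hard-rejecting borderline matches keeps more constraints in the system, which is the effect the abstract describes before the final high-confidence selection step.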
Affiliation(s)
- Feihong Yan, Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
- Yubin Xu, Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
- Yiran Kong, Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
- Weihang Zhang, Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
- Huiqi Li, Beijing Institute of Technology, No. 5, Zhong Guan Cun South Street, Beijing, 100081, China
2. Wang Z, Zou H, Guo Y, Guo S, Zhao X, Wang Y, Sun M. Retinal image registration method for myopia development. Med Image Anal 2024;97:103242. [PMID: 38901099] [DOI: 10.1016/j.media.2024.103242]
Abstract
OBJECTIVE The development of myopia is usually accompanied by changes in the retinal vessels, optic disc, optic cup, fovea, and other retinal structures, as well as in the length of the ocular axis. Accurate registration of retinal images is therefore very important for extracting and analyzing these retinal structural changes. However, registering retinal images across myopia development faces a series of challenges due to the unique curved surface of the retina and the changes in fundus curvature caused by ocular axis elongation. Our goal is to improve the registration accuracy of retinal images with myopia development. METHOD In this study, we propose a 3D spatial model for a pair of retinal images with myopia development. In this model, we introduce a novel myopia development model that simulates the changes in ocular axis length and fundus curvature caused by the development of myopia. We also consider the distortion model of the fundus camera during the imaging process. Based on the 3D spatial model, we further implement a registration framework that uses corresponding points in the pair of retinal images to achieve registration by means of 3D pose estimation. RESULTS The proposed method is quantitatively evaluated on a publicly available dataset without myopia development and on our Fundus Image Myopia Development (FIMD) dataset. The proposed method performs more accurate and stable registration than state-of-the-art methods, especially for retinal images with myopia development. SIGNIFICANCE To the best of our knowledge, this is the first retinal image registration method for the study of myopia development. The method significantly improves registration accuracy for retinal images with myopia development. The FIMD dataset we constructed has been made publicly available to promote study in related fields.
Affiliation(s)
- Zengshuo Wang, Nankai University Eye Institute, Nankai University, Tianjin 300350, China; Institute of Robotics and Automatic Information System (IRAIS), the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Haohan Zou, Nankai University Eye Institute, Nankai University, Tianjin 300350, China; Tianjin Eye Hospital, Tianjin Eye Institute, Tianjin Key Laboratory of Ophthalmology and Visual Science, Tianjin Medical University, Tianjin 300350, China
- Yin Guo, Department of Ophthalmology, Haidian Section of Peking University Third Hospital (Beijing Haidian Hospital), Beijing 100089, China
- Shan Guo, Nankai University Eye Institute, Nankai University, Tianjin 300350, China; Institute of Robotics and Automatic Information System (IRAIS), the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Xin Zhao, Nankai University Eye Institute, Nankai University, Tianjin 300350, China; Institute of Robotics and Automatic Information System (IRAIS), the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Yan Wang, Nankai University Eye Institute, Nankai University, Tianjin 300350, China; Tianjin Eye Hospital, Tianjin Eye Institute, Tianjin Key Laboratory of Ophthalmology and Visual Science, Tianjin Medical University, Tianjin 300350, China
- Mingzhu Sun, Nankai University Eye Institute, Nankai University, Tianjin 300350, China; Institute of Robotics and Automatic Information System (IRAIS), the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
3. Ochoa-Astorga JE, Wang L, Du W, Peng Y. A straightforward bifurcation pattern-based fundus image registration method. Sensors (Basel) 2023;23:7809. [PMID: 37765866] [PMCID: PMC10534639] [DOI: 10.3390/s23187809]
Abstract
Fundus image registration is crucial in eye disease examination, as it enables the alignment of overlapping fundus images, facilitating a comprehensive assessment of conditions like diabetic retinopathy, where a single image's limited field of view might be insufficient. By combining multiple images, the field of view for retinal analysis is extended, and resolution is enhanced through super-resolution imaging. Moreover, registration facilitates patient follow-up through longitudinal studies. This paper proposes a straightforward method for fundus image registration based on bifurcations, which serve as prominent landmarks. The approach aims to establish a baseline for fundus image registration using these landmarks as feature points, addressing the current challenge of validation in this field. A robust vascular tree segmentation method is used to detect feature points within a specified range: coarse vessel segmentation yields a skeleton of the segmentation foreground whose patterns are analyzed, followed by feature description based on a histogram of oriented gradients and estimation of the image relation through a transformation matrix. Image blending then produces a seamless registered image. Evaluation on the FIRE dataset, using registration error as the key accuracy measure, demonstrates the method's effectiveness. The results show superior performance compared to other techniques using vessel-based feature extraction or partially based on SURF, achieving an area under the curve of 0.526 for the entire FIRE dataset.
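As a rough illustration of the describe-and-match stage, the sketch below pairs candidate bifurcation points using a small histogram-of-oriented-gradients patch descriptor and fits a RANSAC homography with OpenCV. It assumes bifurcation coordinates (pts1, pts2) are already available from vessel segmentation and lie away from the image border; it illustrates the general approach, not the paper's exact pipeline.

```python
import cv2
import numpy as np

def hog_patch_descriptor(gray, pt, size=32, nbins=16):
    """Describe a keypoint by a histogram of gradient orientations,
    weighted by gradient magnitude, over a (size x size) patch."""
    x, y = int(pt[0]), int(pt[1])
    h = size // 2
    patch = gray[y - h:y + h, x - h:x + h].astype(np.float32)
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)       # ang in radians, [0, 2*pi)
    hist, _ = np.histogram(ang, bins=nbins, range=(0, 2 * np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)

def match_bifurcations(gray1, pts1, gray2, pts2):
    """Nearest-neighbour matching of bifurcation descriptors, then a
    RANSAC homography between the matched coordinates."""
    d1 = np.stack([hog_patch_descriptor(gray1, p) for p in pts1])
    d2 = np.stack([hog_patch_descriptor(gray2, p) for p in pts2])
    nn = np.linalg.norm(d1[:, None] - d2[None], axis=2).argmin(axis=1)
    src = np.float32(pts1)
    dst = np.float32([pts2[j] for j in nn])
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inliers
```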
Affiliation(s)
- Linni Wang, Retina & Neuro-Ophthalmology, Tianjin Medical University Eye Hospital, Tianjin 300084, China
- Weiwei Du, Information and Human Science, Kyoto Institute of Technology, Kyoto 6068585, Japan
- Yahui Peng, School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
4. Xu J, Yang K, Chen Y, Dai L, Zhang D, Shuai P, Shi R, Yang Z. Reliable and stable fundus image registration based on brain-inspired spatially-varying adaptive pyramid context aggregation network. Front Neurosci 2023;16:1117134. [PMID: 36726854] [PMCID: PMC9884961] [DOI: 10.3389/fnins.2022.1117134]
Abstract
The task of fundus image registration is to find matching keypoints between an image pair. Traditional methods detect keypoints with hand-designed features, which fail to cope with complex application scenarios. Owing to the strong feature-learning ability of deep neural networks, current registration methods based on deep learning directly learn to align the geometric transformation between the reference and test images in an end-to-end manner. Another mainstream approach learns the displacement vector field between the image pair. In these ways, image registration has achieved significant advances. However, due to the complicated vascular morphology of retinal images, such as texture and shape, widely used deep learning registration methods fail to achieve reliable and stable keypoint detection and registration. In this paper, we aim to bridge this gap. Concretely, since vessel crossing and branching points reliably and stably characterize the key components of a fundus image, we propose to learn to detect and match all crossing and branching points of the input images with a single deep neural network. Moreover, to accurately locate the keypoints and learn discriminative feature embeddings, a brain-inspired spatially-varying adaptive pyramid context aggregation network is proposed to incorporate contextual cues under the supervision of a structured triplet ranking loss. Experimental results show that the proposed method achieves more accurate registration with a significant speed advantage.
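The structured triplet ranking loss mentioned above follows the standard triplet pattern: the embedding of a keypoint should be closer to its counterpart in the other image than to any non-matching point. A minimal PyTorch sketch, with random tensors standing in for network outputs:

```python
import torch
import torch.nn.functional as F

def triplet_ranking_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet ranking loss on keypoint embeddings: pull the
    embedding of a vessel crossing/branching point towards the same
    point in the other image, push it away from non-matching points."""
    d_pos = F.pairwise_distance(anchor, positive)   # matching point
    d_neg = F.pairwise_distance(anchor, negative)   # non-matching point
    return F.relu(d_pos - d_neg + margin).mean()

# Random embeddings stand in for the network's per-keypoint outputs.
a = torch.randn(8, 128, requires_grad=True)
p, n = torch.randn(8, 128), torch.randn(8, 128)
loss = triplet_ranking_loss(a, p, n)
loss.backward()
print(loss.item())
```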
Affiliation(s)
- Jie Xu, Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Key Laboratory of Ophthalmology and Visual Sciences, Beijing, China
- Kang Yang, Beijing Zhizhen Internet Technology Co. Ltd., Beijing, China
- Youxin Chen, Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China
- Liming Dai, Beijing Zhizhen Internet Technology Co. Ltd., Beijing, China
- Dongdong Zhang, Beijing Zhizhen Internet Technology Co. Ltd., Beijing, China
- Ping Shuai, Department of Health Management and Physical Examination, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, China; School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Rongjie Shi, Beijing Zhizhen Internet Technology Co. Ltd., Beijing, China
- Zhanbo Yang, Beijing Zhizhen Internet Technology Co. Ltd., Beijing, China
5. An C, Wang Y, Zhang J, Nguyen TQ. Self-supervised rigid registration for multimodal retinal images. IEEE Trans Image Process 2022;31:5733-5747. [PMID: 36040946] [PMCID: PMC11211857] [DOI: 10.1109/tip.2022.3201476]
Abstract
The ability to accurately overlay one retinal image modality on another is critical in ophthalmology. Our previous framework achieved state-of-the-art results for multimodal retinal image registration, but it requires human-annotated labels because of its supervised approach. In this paper, we propose a self-supervised multimodal retina registration method that alleviates the time and expense of preparing training data, that is, it registers multimodal retinal images automatically without any human annotations. Specifically, we focus on registering color fundus images with infrared reflectance and fluorescein angiography images, and we compare registration results with several conventional methods as well as supervised and unsupervised deep learning methods. The experimental results show that the proposed self-supervised framework achieves accuracy comparable to the state-of-the-art supervised learning method in terms of registration accuracy and Dice coefficient.
6. Hao H, Xu C, Zhang D, Yan Q, Zhang J, Liu Y, Zhao Y. Sparse-based domain adaptation network for OCTA image super-resolution reconstruction. IEEE J Biomed Health Inform 2022;26:4402-4413. [PMID: 35895639] [DOI: 10.1109/jbhi.2022.3194025]
Abstract
Retinal Optical Coherence Tomography Angiography (OCTA) with high resolution is important for the quantification and analysis of the retinal vasculature. However, the resolution of OCTA images is inversely proportional to the field of view at the same sampling frequency, which is inconvenient for clinicians analyzing larger vascular areas. In this paper, we propose a novel Sparse-based domain Adaptation Super-Resolution network (SASR) for the reconstruction of realistic 6 mm x 6 mm low-resolution (LR) OCTA images into high-resolution (HR) representations. To be more specific, we first perform a simple degradation of the 3 mm x 3 mm high-resolution (HR) image to obtain a synthetic LR image. An efficient registration method is then employed to register the synthetic LR image with its corresponding 3 mm x 3 mm region within the 6 mm x 6 mm image to obtain the cropped realistic LR image. We then propose a multi-level super-resolution model for fully supervised reconstruction of the synthetic data, guiding the reconstruction of the realistic LR images through a generative-adversarial strategy that allows the synthetic and realistic LR images to be unified in the feature domain. Finally, a novel sparse edge-aware loss is designed to dynamically optimize the vessel edge structure. Extensive experiments on two OCTA datasets show that our method performs better than state-of-the-art super-resolution reconstruction methods. In addition, we have investigated the performance of the reconstructed results on retinal structure segmentation, which further validates the effectiveness of our approach.
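The data-preparation step described here (degrade the HR image, then locate its counterpart inside the wide-field LR image) can be illustrated with plain OpenCV. The sketch below swaps the paper's registration method for simple normalized template matching and assumes single-channel uint8 en-face images; the function names and FOV ratio are assumptions for illustration only.

```python
import cv2
import numpy as np

def synthetic_lr(hr, factor=2):
    """Degrade a high-resolution OCTA en-face image into a synthetic
    low-resolution counterpart (blur + downsample + upsample)."""
    blurred = cv2.GaussianBlur(hr, (5, 5), 1.5)
    small = cv2.resize(blurred, None, fx=1 / factor, fy=1 / factor,
                       interpolation=cv2.INTER_AREA)
    return cv2.resize(small, hr.shape[::-1], interpolation=cv2.INTER_LINEAR)

def crop_realistic_lr(wide_lr, hr, mm_ratio=0.5):
    """Locate the HR field of view inside the wide-field LR image by
    template matching a degraded copy of the HR image, then crop the
    realistic LR patch. mm_ratio is FOV_hr / FOV_wide (e.g., 3/6 mm)."""
    side = int(wide_lr.shape[0] * mm_ratio)
    tpl = cv2.resize(synthetic_lr(hr), (side, side))
    res = cv2.matchTemplate(wide_lr, tpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)       # best-match corner
    return wide_lr[y:y + side, x:x + side]
```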
7. Tsai YY, Lin WY, Chen SJ, Ruamviboonsuk P, King CH, Tsai CL. Diagnosis of polypoidal choroidal vasculopathy from fluorescein angiography using deep learning. Transl Vis Sci Technol 2022;11:6. [PMID: 35113129] [PMCID: PMC8822364] [DOI: 10.1167/tvst.11.2.6]
Abstract
Purpose To differentiate polypoidal choroidal vasculopathy (PCV) from choroidal neovascularization (CNV) and to determine the extent of PCV from fluorescein angiography (FA) using attention-based deep learning networks. Methods We built two deep learning networks for diagnosis of PCV using FA, one for detection and one for segmentation. An attention-gated convolutional neural network (AG-CNN) differentiates PCV from other types of wet age-related macular degeneration. A gradient-weighted class activation map (Grad-CAM) is generated to highlight regions of the image important to the prediction, offering explainability of the network. An attention-gated recurrent neural network (AG-PCVNet) for spatiotemporal prediction is applied for segmentation of PCV. Results AG-CNN was validated with a dataset containing 167 FA sequences of PCV and 70 FA sequences of CNV, achieving a classification accuracy of 82.80% at image level and 86.21% at patient level for PCV. Grad-CAM shows that regions contributing to decision-making have on average 21.91% agreement with pathological regions identified by experts. AG-PCVNet was validated with 56 PCV sequences from the EVEREST-I study and achieves a balanced accuracy of 81.132% and a Dice score of 0.54. Conclusions The developed software provides a means of performing detection and segmentation of PCV on FA images for the first time. This study is a promising step toward changing the diagnostic procedure of PCV and thereby improving the detection rate of PCV using FA alone. Translational Relevance The developed deep learning system enables early diagnosis of PCV using FA to assist the physician in choosing the best treatment for optimal visual prognosis.
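Grad-CAM itself is a generic, well-documented technique: the chosen convolutional layer's activations are weighted by the spatial average of their gradients with respect to the target class score. A minimal PyTorch sketch, using an off-the-shelf ResNet as a stand-in for AG-CNN (which is not reproduced here):

```python
import torch
from torchvision.models import resnet18

def grad_cam(model, layer, x, class_idx):
    """Minimal Grad-CAM: weight the chosen layer's activations by the
    spatial mean of their gradients w.r.t. the target class score."""
    acts, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    a, g = acts[0], grads[0]                      # (1, C, H, W)
    weights = g.mean(dim=(2, 3), keepdim=True)    # channel importance
    cam = torch.relu((weights * a).sum(dim=1)).detach()
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)               # normalized heatmap

model = resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224)
cam = grad_cam(model, model.layer4, x, class_idx=0)
print(cam.shape)  # torch.Size([1, 7, 7]); upsample to overlay on the image
```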
Affiliation(s)
- Yu-Yeh Tsai, Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi, Taiwan
- Wei-Yang Lin, Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi, Taiwan
- Shih-Jen Chen, Department of Ophthalmology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Cheng-Ho King, Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi, Taiwan
- Chia-Ling Tsai, Computer Science Department, Queens College, CUNY, Queens, New York, USA
8. Rivas-Villar D, Hervella ÁS, Rouco J, Novo J. Color fundus image registration using a learning-based domain-specific landmark detection methodology. Comput Biol Med 2022;140:105101. [PMID: 34875412] [DOI: 10.1016/j.compbiomed.2021.105101]
Abstract
Medical imaging, and particularly retinal imaging, allows many eye pathologies to be diagnosed accurately, as well as some systemic diseases such as hypertension or diabetes. Registering these images is crucial for correctly comparing key structures, not only within patients but also for contrasting data with a model or across a population. Currently, this field is dominated by complex classical methods, because novel deep learning methods cannot yet compete in terms of results and commonly used methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images, building on previous works that employed classical approaches to detect domain-specific landmarks. Instead, we use deep learning to detect these highly specific domain-related landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. The keypoints are matched using a method based on RANSAC (Random Sample Consensus) without the need to compute complex descriptors. Our method was tested on the public FIRE dataset, while the landmark detection network was trained on the DRIVE dataset. It provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P, and 0.660 for category A). Therefore, our proposal can compete with complex classical methods and beats the deep learning methods in the state of the art.
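Matching landmarks by RANSAC directly, without descriptors, can be sketched as hypothesize-and-verify over random pairings: propose a transform from a few tentative correspondences and keep the one that explains the most points. The sketch below uses a two-point similarity transform for simplicity; it illustrates the general idea rather than the authors' exact procedure.

```python
import numpy as np

def similarity_from_pairs(a, b):
    """Similarity transform (scale + rotation + translation) mapping
    points a[0], a[1] onto b[0], b[1]; returns a 3x3 matrix or None."""
    va, vb = a[1] - a[0], b[1] - b[0]
    na = np.linalg.norm(va)
    if na < 1e-6:
        return None
    s = np.linalg.norm(vb) / na
    ang = np.arctan2(vb[1], vb[0]) - np.arctan2(va[1], va[0])
    R = s * np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
    M = np.eye(3)
    M[:2, :2] = R
    M[:2, 2] = b[0] - R @ a[0]
    return M

def ransac_match(pts1, pts2, iters=2000, tol=5.0, rng=None):
    """Match two landmark sets without descriptors: hypothesise a
    transform from two random point pairings, then count how many
    points land within `tol` pixels of some point in pts2."""
    rng = rng or np.random.default_rng()
    best, best_inl = None, -1
    for _ in range(iters):
        i = rng.choice(len(pts1), 2, replace=False)
        j = rng.choice(len(pts2), 2, replace=False)
        M = similarity_from_pairs(pts1[i], pts2[j])
        if M is None:
            continue
        warped = pts1 @ M[:2, :2].T + M[:2, 2]
        d = np.linalg.norm(warped[:, None] - pts2[None], axis=2).min(axis=1)
        inl = (d < tol).sum()
        if inl > best_inl:
            best, best_inl = M, inl
    return best, best_inl
```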
Affiliation(s)
- David Rivas-Villar, Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- Álvaro S Hervella, Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- José Rouco, Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- Jorge Novo, Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain; Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
9. Fundus image registration technique based on local feature of retinal vessels. Appl Sci (Basel) 2021. [DOI: 10.3390/app112311201]
Abstract
Feature-based retinal fundus image registration (RIR) techniques align fundus images according to geometric transformations estimated between feature point correspondences. To ensure accurate registration, the extracted feature points must lie on the retinal vessels and be distributed throughout the image. However, noise in the fundus image may resemble retinal vessels in local patches. Therefore, this paper introduces a feature extraction method based on a local feature of retinal vessels (CURVE) that incorporates the characteristics of retinal vessels and noise to accurately extract feature points on retinal vessels and throughout the fundus image. CURVE's performance is tested on the CHASE, DRIVE, HRF and STARE datasets and compared with six feature extraction methods used in existing feature-based RIR techniques. In the experiments, the feature extraction accuracy of CURVE (86.021%) significantly outperformed the existing feature extraction methods (p ≤ 0.001*). CURVE is then paired with a scale-invariant feature transform (SIFT) descriptor to test its registration capability on the fundus image registration (FIRE) dataset. Overall, CURVE-SIFT successfully registered 44.030% of the image pairs, while the existing feature-based RIR techniques (GDB-ICP, Harris-PIIFD, Ghassabi's-SIFT, H-M 16, H-M 17 and D-Saddle-HOG) registered fewer than 27.612% of the image pairs. One-way ANOVA showed that CURVE-SIFT significantly outperformed GDB-ICP (p = 0.007*), Harris-PIIFD, Ghassabi's-SIFT, H-M 16, H-M 17 and D-Saddle-HOG (p ≤ 0.001*).
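CURVE itself is specific to this paper, but the SIFT-based matching stage it is paired with is standard. A minimal OpenCV sketch of a generic detect-describe-match-RANSAC registration baseline of the kind compared above:

```python
import cv2
import numpy as np

def sift_register(img1, img2):
    """Generic SIFT feature-based registration baseline: detect and
    describe keypoints, Lowe ratio-test matching, RANSAC homography."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]     # Lowe ratio test
    if len(good) < 4:
        return None
    src = np.float32([k1[m.queryIdx].pt for m in good])
    dst = np.float32([k2[m.trainIdx].pt for m in good])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# Usage (grayscale fundus images):
# img1 = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)
# img2 = cv2.imread("fixed.png", cv2.IMREAD_GRAYSCALE)
# H = sift_register(img1, img2)
# warped = cv2.warpPerspective(img1, H, img2.shape[::-1])
```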
10. Wang Y, Zhang J, Cavichini M, Bartsch DUG, Freeman WR, Nguyen TQ, An C. Robust content-adaptive global registration for multimodal retinal images using weakly supervised deep-learning framework. IEEE Trans Image Process 2021;30:3167-3178. [PMID: 33600314] [DOI: 10.1109/tip.2021.3058570]
Abstract
Multimodal retinal imaging plays an important role in ophthalmology. In this paper, we propose a content-adaptive multimodal retinal image registration method that focuses on globally coarse alignment and includes three weakly supervised neural networks for vessel segmentation, feature detection and description, and outlier rejection. We apply the proposed framework to register color fundus images with infrared reflectance and fluorescein angiography images, and compare it with several conventional and deep learning methods. Our framework demonstrates a significant improvement in robustness and accuracy, reflected by a higher success rate and Dice coefficient than the other methods.
11. Hernandez-Matas C, Zabulis X, Argyros AA. REMPE: registration of retinal images through eye modelling and pose estimation. IEEE J Biomed Health Inform 2020;24:3362-3373. [DOI: 10.1109/jbhi.2020.2984483]
12. Image matching based on the adaptive redundant keypoint elimination method in the SIFT algorithm. Pattern Anal Appl 2020. [DOI: 10.1007/s10044-020-00938-w]
13. Motta D, Casaca W, Paiva A. Vessel optimal transport for automated alignment of retinal fundus images. IEEE Trans Image Process 2019;28:6154-6168. [PMID: 31283507] [DOI: 10.1109/tip.2019.2925287]
Abstract
Optimal transport has emerged as a promising and useful tool for supporting modern image processing applications such as medical imaging and scientific visualization. Indeed, optimal transport theory enables great flexibility in modeling problems related to image registration, as different optimization resources can be exploited, along with the choice of suitable matching models to align the images. In this paper, we introduce an automated framework for fundus image registration that unifies optimal transport theory, image processing tools, and graph matching schemes into a functional and concise methodology. Given two ocular fundus images, we construct representative graphs that embed spatial and topological information from the eye's blood vessels. The resulting graphs are then used as input to our optimal transport model to establish a correspondence between their sets of nodes. Finally, geometric transformations are performed between the images to accomplish the registration task. Our formulation relies on the solid mathematical foundation of optimal transport as a constrained optimization problem and is robust to outliers created during the matching stage. We demonstrate the accuracy and effectiveness of the framework through a comprehensive set of qualitative and quantitative comparisons against several influential state-of-the-art methods on various fundus image databases.
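The node-correspondence step can be illustrated with entropic optimal transport: build a cost matrix between the two graphs' node coordinates and compute a transport plan with Sinkhorn iterations. The plain-NumPy sketch below uses uniform node masses and Euclidean costs, which is a simplification of the paper's formulation:

```python
import numpy as np

def sinkhorn(C, reg=0.05, iters=200):
    """Entropic-regularised optimal transport between two uniform
    distributions with cost matrix C; returns the transport plan."""
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-C / reg)
    u = np.ones(n)
    for _ in range(iters):        # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Nodes = vessel-graph landmarks (e.g., bifurcation coordinates).
nodes1 = np.random.rand(12, 2)
nodes2 = nodes1 + 0.01 * np.random.randn(12, 2)
C = np.linalg.norm(nodes1[:, None] - nodes2[None], axis=2)
P = sinkhorn(C / C.max())
match = P.argmax(axis=1)          # hard correspondence from the soft plan
print(match)
```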
14. Gong C, Erichson NB, Kelly JP, Trutoiu L, Schowengerdt BT, Brunton SL, Seibel EJ. RetinaMatch: efficient template matching of retina images for teleophthalmology. IEEE Trans Med Imaging 2019;38:1993-2004. [PMID: 31217098] [DOI: 10.1109/tmi.2019.2923466]
Abstract
Retinal template matching and registration is an important challenge in teleophthalmology with low-cost imaging devices. However, images from such devices generally have a small field of view (FOV) and degraded image quality, making matching difficult. In this paper, we develop an efficient and accurate retinal matching technique, called RetinaMatch, that combines dimension reduction and mutual information (MI). The dimension reduction initializes the MI optimization as a coarse localization process, which narrows the optimization domain and avoids local optima. The effectiveness of RetinaMatch is demonstrated on the open fundus image database STARE with simulated reduced FOV and anticipated degradations, and on retinal images acquired by adapter-based optics attached to a smartphone. RetinaMatch achieves a success rate over 94% on human retinal images, with matched target registration errors below 2 pixels on average, excluding observer variability, outperforming standard template matching solutions. In the application of repeated vessel diameter measurement, single-pixel errors are expected. In addition, our method can be used for image mosaicking with area-based registration, providing a robust approach when feature-based methods fail. To the best of our knowledge, this is the first template matching algorithm for retinal images with small template images from unconstrained retinal areas. In the context of the emerging mixed reality market, we envision automated retinal image matching and registration methods as transformative for advanced teleophthalmology and long-term retinal monitoring.
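The core quantity, mutual information between a template and a candidate window, comes directly from their joint grey-level histogram. A NumPy sketch follows; the exhaustive search loop is only a stand-in for RetinaMatch's dimension-reduction initialization, which is the paper's actual contribution:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized image patches,
    computed from their joint grey-level histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                                   # avoid log(0)
    return (pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum()

def mi_template_match(image, template, step=4):
    """Coarse exhaustive MI search over window positions; a learned
    low-dimensional initialisation would instead start this near the
    optimum and refine locally."""
    th, tw = template.shape
    best, best_xy = -np.inf, (0, 0)
    for y in range(0, image.shape[0] - th, step):
        for x in range(0, image.shape[1] - tw, step):
            mi = mutual_information(image[y:y + th, x:x + tw], template)
            if mi > best:
                best, best_xy = mi, (x, y)
    return best_xy, best
```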
15. A-RANSAC: adaptive random sample consensus method in multimodal retinal image registration. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2018.06.002]
16. Zhou Z, Tu J, Geng C, Hu J, Tong B, Ji J, Dai Y. Accurate and robust non-rigid point set registration using Student's-t mixture model with prior probability modeling. Sci Rep 2018;8:8742. [PMID: 29880859] [PMCID: PMC5992220] [DOI: 10.1038/s41598-018-26288-6]
Abstract
A new accurate and robust method, named DSMM, is proposed for non-rigid point set registration in the presence of significant amounts of missing correspondences and outliers. The key idea is to treat the relationship between the point sets as random variables and model the prior probabilities via a Dirichlet distribution. We assign varying prior probabilities of each point to its correspondences in a Student's-t mixture model. We then incorporate the local spatial representation of the point sets by expressing the posterior probabilities through a linear smoothing filter, obtaining closed-form mixture proportions and leading to a computationally efficient registration algorithm compared to other Student's-t mixture model based methods. Finally, by introducing hidden random variables in a Bayesian framework, we propose a general mixture model family that generalizes mixture-model-based point set registration, with existing methods as members of the proposed family. We evaluate DSMM and other state-of-the-art finite-mixture-model-based point set registration algorithms on artificial point sets and on various 2D and 3D point sets, where DSMM demonstrates its statistical accuracy and robustness, outperforming the competing algorithms.
Affiliation(s)
- Zhiyong Zhou, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jianfei Tu, Lishui Central Hospital, Lishui, 323000, China
- Chen Geng, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jisu Hu, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Baotong Tong, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jiansong Ji, Lishui Central Hospital, Lishui, 323000, China
- Yakang Dai, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
17. Sun G, Liu X, Gao L, Zhang P, Wang S, Zhou Y. Automatic measurement of global retinal circulation in fluorescein angiography. J Biomed Opt 2018;23:1-8. [PMID: 29956508] [DOI: 10.1117/1.jbo.23.6.065006]
Abstract
Examination of the retinal circulation in patients with retinal diseases is a clinical routine for ophthalmologists. In the present work, an automatic method is proposed for measuring the global retinal circulation in fluorescein angiography (FA). First, the perfusion region in FA images is segmented using a multiscale line detector. Then, the time evolution of the perfusion area is modeled using damped least-squares regression. Based on the perfusion area profile, several circulation parameters are defined to quantitatively describe the global retinal circulation. The effectiveness of the proposed method is tested on our own and public datasets, with reasonable results and satisfactory accuracy compared with manual measurement. The proposed method has good computational efficiency and thus has the potential to be used in clinical practice for evaluating the global retinal circulation.
Affiliation(s)
- Ling Gao, The Second Xiangya Hospital of Central South University, China
- Pu Zhang, The Second Xiangya Hospital of Central South University, China
- Yandan Zhou, The Second Xiangya Hospital of Central South University, China
18. Jian BL, Chen CL, Chu WL, Huang MW. The facial expression of schizophrenic patients applied with infrared thermal facial image sequence. BMC Psychiatry 2017;17:229. [PMID: 28646852] [PMCID: PMC5483292] [DOI: 10.1186/s12888-017-1387-y]
Abstract
BACKGROUND Schizophrenia is a neurological disease characterized by alterations to patients' cognitive functions and emotional expressions. Relevant studies often use magnetic resonance imaging (MRI) of the brain to explore structural differences and responsiveness within brain regions. However, as this technique is expensive and commonly induces claustrophobia, it is frequently refused by patients. Thus, this study used non-contact infrared thermal facial images (ITFIs) to analyze facial temperature changes evoked by different emotions in moderately and markedly ill schizophrenia patients. METHODS Schizophrenia is an emotion-related disorder, and images eliciting different types of emotions were selected from the international affective picture system (IAPS) and presented to subjects during ITFI collection. ITFIs were aligned using affine registration, and the changes induced by small irregular head movements were corrected. The average temperatures from the forehead, nose, mouth, left cheek, and right cheek were calculated, and continuous temperature changes were used as features. After performing dimensionality reduction and noise removal using the component analysis method, multivariate analysis of variance and the Support Vector Machine (SVM) classification algorithm were used to identify moderately and markedly ill schizophrenia patients. RESULTS Analysis of five facial areas indicated significant temperature changes in the forehead and nose upon exposure to various emotional stimuli and in the right cheek upon evocation of high valence low arousal (HVLA) stimuli. The most significant P-value (lower than 0.001) was obtained in the forehead area upon evocation of disgust. Finally, when the features of forehead temperature changes in response to low valence high arousal (LVHA) were reduced to 9 using dimensionality reduction and noise removal, the identification rate was as high as 94.3%. CONCLUSIONS Our results show that features obtained in the forehead, nose, and right cheek significantly differed between moderately and markedly ill schizophrenia patients. We then chose the features that most effectively distinguish between moderately and markedly ill schizophrenia patients using the SVM. These results demonstrate that the ITFI analysis protocol proposed in this study can effectively provide reference information regarding the phase of the disease in patients with schizophrenia.
Affiliation(s)
- Bo-Lin Jian, Department of Aeronautics and Astronautics, National Cheng Kung University, Tainan 701, Taiwan
- Chieh-Li Chen, Department of Aeronautics and Astronautics, National Cheng Kung University, Tainan 701, Taiwan
- Wen-Lin Chu, Institute of Biomedical Engineering, National Cheng Kung University, Tainan 701, Taiwan
- Min-Wei Huang, Institute of Biomedical Engineering, National Cheng Kung University, Tainan 701, Taiwan; Department of Psychiatry, Chiayi Branch, Taichung Veterans General Hospital, Chia-Yi 600, Taiwan
19. Stevenson RL. Multimodal image registration with line segments by selective search. IEEE Trans Cybern 2017;47:1285-1298. [PMID: 28113568] [DOI: 10.1109/tcyb.2016.2548484]
Abstract
This paper proposes a line segment-based image registration method. Edges are detected from the images by a modified Canny operator, and line segments are then extracted from these edges. At registration, triplets (quaternions) of line segment correspondences are tentatively formed by applying distance and orientation constraints, each determining an intermediate transformation. The triplets (quaternions) of lines yielding higher similarity metrics are preserved, and their intersections are refined by an iterative process or random sample consensus. The proposed method is tested on indoor and outdoor EO/IR image pairs, and the average registration error is computed for comparison with existing algorithms. Experimental results show that the proposed method can robustly align EO/IR images containing line segments, providing more reliable and accurate registration results on multimodal images.
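The front end of such a method, extracting line segments from edges, is straightforward with OpenCV; the sketch below substitutes the standard Canny detector and probabilistic Hough transform for the paper's modified operator, and shows the kind of orientation constraint used when forming tentative correspondences:

```python
import cv2
import numpy as np

def extract_line_segments(gray):
    """Detect edges with Canny, then extract line segments with the
    probabilistic Hough transform (a stand-in for the paper's modified
    Canny + segment-extraction stage)."""
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=5)
    return [] if segs is None else segs[:, 0]   # rows of (x1, y1, x2, y2)

def orientation_compatible(s1, s2, max_angle=np.deg2rad(10)):
    """Orientation constraint applied when tentatively pairing segments
    between the EO and IR images."""
    def angle(s):
        return np.arctan2(s[3] - s[1], s[2] - s[0])
    return abs(angle(s1) - angle(s2)) < max_angle
```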
20. Accurate joint-alignment of indocyanine green and fluorescein angiograph sequences for treatment of subretinal lesions. IEEE J Biomed Health Inform 2017;21:785-793. [PMID: 28113480] [DOI: 10.1109/jbhi.2016.2538265]
Abstract
In ophthalmology, aligning images in indocyanine green and fluorescein angiograph sequences is important for the treatment of subretinal lesions. This paper introduces an algorithm tailored to jointly align, in a common reference space, all the images in an angiogram sequence containing both modalities. To overcome the low image contrast and low signal-to-noise ratio of late-phase images, the structural similarity between two images is enhanced using the Gabor wavelet transform. Image pairs are registered pairwise, and the transformations are simultaneously and globally adjusted for a mutually consistent joint alignment. The joint registration process is incremental, and its success depends on the correctness of the matches from the pairwise registration. To safeguard the joint process, our system performs a consistency test that automatically excludes incorrect pairwise results, ensuring correct matches as more images are jointly aligned. Our dataset consists of 60 polypoidal choroidal vasculopathy sequences collected by the EVEREST Study Group, each containing 20 images on average. Our algorithm successfully pairwise registered 95.04% of all image pairs and jointly registered 98.7% of all images, with an average alignment error of 1.58 pixels.
21. Lin WY, Chou SF, Tsai CL, Chen SJ. Temporally coherent illumination normalization for indocyanine green video angiography. IEEE J Biomed Health Inform 2017;22:570-578. [PMID: 28092584] [DOI: 10.1109/jbhi.2017.2652446]
Abstract
Indocyanine green (ICG) angiography is an imaging method that allows doctors to observe choroidal abnormalities in human eyes. ICG angiograms typically exhibit inhomogeneous illumination, which poses serious difficulties for the development of computer-aided diagnostic tools. In this paper, we propose a novel illumination normalization method to alleviate the inhomogeneous illumination in ICG video angiograms. In particular, we first align the viewpoint of the input ICG video angiogram using an image registration method. Then, we acquire temporal information using a time-dependent intrinsic image and compute the corresponding illumination image. Finally, we correct inhomogeneous illumination from the illumination image by estimating contrast and luminosity distortion. We conducted an extensive evaluation using ICG video angiograms of 60 patients, with two video quality assessment methods used to evaluate performance. The results show that our method improves the visual quality of ICG video angiograms, and visual evaluation by a human expert confirms that it yields better illumination normalization results.
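A simple version of illumination correction, estimating a slowly varying illumination field and dividing it out, can be sketched as follows; this homomorphic-style single-frame correction is only a crude stand-in for the paper's temporally coherent, intrinsic-image-based method:

```python
import cv2
import numpy as np

def normalize_illumination(frame, sigma=51):
    """Estimate the slowly varying illumination field with a large
    Gaussian blur, divide it out, and restore the original mean
    brightness. `frame` is a single-channel uint8 angiogram frame."""
    f = frame.astype(np.float32) + 1.0          # avoid division by zero
    illum = cv2.GaussianBlur(f, (0, 0), sigmaX=sigma)
    corrected = f / illum
    corrected *= f.mean() / corrected.mean()    # restore brightness level
    return np.clip(corrected, 0, 255).astype(np.uint8)

# For a video angiogram, apply per frame after registration so the
# estimated field tracks the same anatomy over time:
# normalized = [normalize_illumination(fr) for fr in registered_frames]
```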
22. Removing mismatches for retinal image registration via multi-attribute-driven regularized mixture model. Inf Sci (N Y) 2016. [DOI: 10.1016/j.ins.2016.08.041]
23. Miri MS, Abràmoff MD, Kwon YH, Garvin MK. Multimodal registration of SD-OCT volumes and fundus photographs using histograms of oriented gradients. Biomed Opt Express 2016;7:5252-5267. [PMID: 28018740] [PMCID: PMC5175567] [DOI: 10.1364/boe.7.005252]
Abstract
With the availability of different retinal imaging modalities such as fundus photography and spectral-domain optical coherence tomography (SD-OCT), a robust and accurate registration scheme that enables utilization of this complementary information is beneficial. The few existing fundus-OCT registration approaches contain a vessel segmentation step, as the retinal blood vessels are the most dominant structures in common between the pair of images; however, errors in the vessel segmentation from either modality may cause corresponding errors in the registration. In this paper, we propose a feature-based method for registering fundus photographs and SD-OCT projection images that benefits from vasculature structural information without requiring blood vessel segmentation. In particular, after a preprocessing step, a set of control points (CPs) is identified by looking for corners in the images. Next, each CP is represented by a feature vector that encodes the local structural information, computed as histograms of oriented gradients (HOG) from the neighborhood of each CP. The best matching CPs are identified by calculating the distance between their feature vectors. After removing incorrect matches, the best affine transform registering the fundus photographs to the SD-OCT projection images is computed using the random sample consensus (RANSAC) method. The proposed method was tested on 44 pairs of fundus and SD-OCT projection images of glaucoma patients; it successfully registered the multimodal images and produced a registration error of 25.34 ± 12.34 μm (0.84 ± 0.41 pixels).
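A rough OpenCV sketch of this corner-plus-descriptor-plus-RANSAC pipeline is shown below; it substitutes simple normalized patch descriptors for HOG and illustrates the general approach, not the paper's implementation:

```python
import cv2
import numpy as np

def register_fundus_to_oct(fundus, oct_proj, patch=21):
    """Corner control points + normalised-patch matching + RANSAC
    affine, echoing the pipeline in the abstract. Inputs are
    single-channel uint8 images of comparable appearance
    (i.e., after the paper's preprocessing step)."""
    pts1 = cv2.goodFeaturesToTrack(fundus, 400, 0.01, 10).reshape(-1, 2)
    pts2 = cv2.goodFeaturesToTrack(oct_proj, 400, 0.01, 10).reshape(-1, 2)

    def desc(img, pts):
        # Zero-mean, unit-norm patch descriptor around each control point.
        h = patch // 2
        out = []
        for x, y in pts.astype(int):
            p = img[max(y - h, 0):y + h + 1, max(x - h, 0):x + h + 1]
            p = cv2.resize(p, (patch, patch)).astype(np.float32).ravel()
            out.append((p - p.mean()) / (p.std() + 1e-8))
        return np.stack(out)

    d1, d2 = desc(fundus, pts1), desc(oct_proj, pts2)
    # Squared-distance matrix without materialising the full difference tensor.
    dist = (d1**2).sum(1)[:, None] + (d2**2).sum(1)[None] - 2 * d1 @ d2.T
    nn = dist.argmin(axis=1)
    A, inliers = cv2.estimateAffine2D(pts1, pts2[nn], method=cv2.RANSAC,
                                      ransacReprojThreshold=5.0)
    return A, inliers
```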
Affiliation(s)
- Mohammad Saleh Miri, Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Michael D. Abràmoff, Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA; Department of Ophthalmology and Visual Sciences, The University of Iowa, Iowa City, IA 52242, USA; Iowa City VA Health Care System, Iowa City, IA 52246, USA
- Young H. Kwon, Department of Ophthalmology and Visual Sciences, The University of Iowa, Iowa City, IA 52242, USA
- Mona K. Garvin, Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA; Iowa City VA Health Care System, Iowa City, IA 52246, USA
24. Hernandez-Matas C, Zabulis X, Triantafyllou A, Anyfanti P, Argyros AA. Retinal image registration under the assumption of a spherical eye. Comput Med Imaging Graph 2016;55:95-105. [PMID: 27370900] [DOI: 10.1016/j.compmedimag.2016.06.006]
Abstract
We propose a method for registering a pair of retinal images. The approach employs point correspondences and assumes that the human eye has a spherical shape. The image registration problem is formulated as a 3D pose estimation problem, solved by estimating the rigid transformation that relates the views from which the two images were acquired. Given this estimate, each image can be warped onto the other so that pixels with the same coordinates image the same retinal point. Extensive experimental evaluation shows improved accuracy over state-of-the-art methods, as well as robustness to noise and spurious keypoint matches. Experiments also indicate the method's applicability to the comparative analysis of images from different examinations that may exhibit changes, and to diagnostic support.
Affiliation(s)
- Carlos Hernandez-Matas, Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece; Computer Science Department, University of Crete, Heraklion, Greece
- Xenophon Zabulis, Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece
- Areti Triantafyllou, Department of Internal Medicine, Papageorgiou Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Panagiota Anyfanti, Department of Internal Medicine, Papageorgiou Hospital, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Antonis A Argyros, Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece; Computer Science Department, University of Crete, Heraklion, Greece
25. Nguyen TM, Wu QMJ. Multiple kernel point set registration. IEEE Trans Med Imaging 2016;35:1381-1394. [PMID: 26841389] [DOI: 10.1109/tmi.2015.2511063]
Abstract
The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While many point set registration algorithms have been presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel; kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration and make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavy-tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method prunes ineffective kernels: after parameter learning, the kernel saliencies of the irrelevant kernels go to zero, so the choice of kernels is less crucial and other kinds of kernels are easy to include. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.
26. Wang G, Wang Z, Chen Y, Zhao W. Robust point matching method for multimodal retinal image registration. Biomed Signal Process Control 2015. [DOI: 10.1016/j.bspc.2015.03.004]
27. Lin WY, Yang SC, Chen SJ, Tsai CL, Du SZ, Lim TH. Automatic segmentation of polypoidal choroidal vasculopathy from indocyanine green angiography using spatial and temporal patterns. Transl Vis Sci Technol 2015;4:7. [PMID: 25806144] [DOI: 10.1167/tvst.4.2.7]
Abstract
PURPOSE To develop a computer-aided diagnostic tool for automated detection and quantification of polypoidal regions in indocyanine green angiography (ICGA) images. METHODS The ICGA sequences of 59 polypoidal choroidal vasculopathy (PCV) treatment-naïve patients from five Asian countries (Hong Kong, Singapore, South Korea, Taiwan, and Thailand) were provided by the EVEREST study, with ground truth for the presence of polypoidal regions provided by the reading center. The proposed detection algorithm used both temporal and spatial features to characterize the severity of polypoidal lesions in ICGA sequences. Leave-one-out cross-validation was carried out so that each patient was used once as the validation sample. For each patient, a fixed detection threshold of 0.5 on the severity was applied to obtain sensitivity, specificity, and balanced accuracy with respect to the ground truth. RESULTS Our system achieved an average accuracy of 0.9126 (sensitivity = 0.9125, specificity = 0.9127) for detection of polyps in the 59 ICGA sequences. Among the 222 features extracted from each ICGA sequence, the spatial variances exhibited the best discriminative power in distinguishing between polyp and non-polyp regions. The results also indicated the importance of combining spatial and temporal features to further improve detection accuracy. CONCLUSIONS The developed software provided a means of detecting and quantifying polypoidal regions in ICGA images for the first time. TRANSLATIONAL RELEVANCE This preliminary study demonstrated a computer-aided diagnostic tool that enables objective evaluation of PCV and its progression. Ophthalmologists can easily visualize polypoidal regions and obtain quantitative information about polyps using the proposed system.
Affiliation(s)
- Wei-Yang Lin, Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621, Taiwan
- Sheng-Chang Yang, Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621, Taiwan
- Shih-Jen Chen, Department of Ophthalmology, Taipei Veterans General Hospital, Taiwan; School of Medicine, National Yang Ming University, Taipei 11217, Taiwan
- Chia-Ling Tsai, Computer Science Department, Iona College, New Rochelle, New York, USA
- Shuo-Zhao Du, Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi 621, Taiwan
- Tock-Han Lim, National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore
28. Rabbani H, Allingham MJ, Mettu PS, Cousins SW, Farsiu S. Fully automatic segmentation of fluorescein leakage in subjects with diabetic macular edema. Invest Ophthalmol Vis Sci 2015;56:1482-92. [PMID: 25634978] [DOI: 10.1167/iovs.14-15457]
Abstract
PURPOSE To create and validate software to automatically segment the leakage area in real-world clinical fluorescein angiography (FA) images of subjects with diabetic macular edema (DME). METHODS Fluorescein angiography images obtained from 24 eyes of 24 subjects with DME were retrospectively analyzed. Both video and still-frame images were obtained using a Heidelberg Spectralis 6-mode HRA/OCT unit. We aligned early and late FA frames in the video by a two-step nonrigid registration method. To remove background artifacts, we subtracted the early from the late FA frames. Finally, after postprocessing steps including detection and inpainting of the vessels, a robust active contour method was utilized to obtain the leakage area in a 1500-μm-radius circular region centered at the fovea. Images were captured at different fields of view (FOVs) and were often contaminated with outliers, as is the case in real-world clinical imaging. Our algorithm was applied to these images with no manual input. Separately, all images were manually segmented by two retina specialists. The sensitivity, specificity, and accuracy of the manual interobserver, manual intraobserver, and automatic methods were calculated. RESULTS The mean accuracy was 0.86 ± 0.08 for automatic versus manual, 0.83 ± 0.16 for manual interobserver, and 0.90 ± 0.08 for manual intraobserver segmentation. CONCLUSIONS Our fully automated algorithm can reproducibly and accurately quantify the leakage area in clinical-grade FA video and is congruent with expert manual segmentation. Performance was reliable for different DME subtypes. This approach has the potential to reduce time and labor costs and may yield objective and reproducible quantitative measurements of DME imaging biomarkers.
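The subtraction idea behind the pipeline can be sketched crudely in OpenCV: remove structures that fluoresce early by subtracting the registered early frame from the late frame, then threshold the residual hyperfluorescence inside the foveal analysis circle. This is only a stand-in for the paper's nonrigid registration, vessel inpainting, and active contour stages:

```python
import cv2
import numpy as np

def leakage_mask(early, late, fovea, radius_px, thresh=25):
    """Crude leakage estimate from registered early/late FA frames
    (uint8, same size). fovea is an (x, y) pixel coordinate; radius_px
    corresponds to the 1500-um analysis region at the image scale."""
    diff = cv2.subtract(late, early)          # saturating subtraction
    diff = cv2.GaussianBlur(diff, (9, 9), 0)  # suppress speckle
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    circle = np.zeros_like(mask)
    cv2.circle(circle, fovea, radius_px, 255, -1)
    mask = cv2.bitwise_and(mask, circle)      # restrict to foveal region
    area_px = int((mask > 0).sum())
    return mask, area_px
```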
Affiliation(s)
- Hossein Rabbani, Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina, United States; Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Michael J Allingham, Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina, United States
- Priyatham S Mettu, Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina, United States
- Scott W Cousins, Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina, United States
- Sina Farsiu, Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina, United States; Department of Biomedical Engineering, Duke University, Durham, North Carolina, United States
29. Hernandez-Matas C, Zabulis X. Super resolution for fundoscopy based on 3D image registration. Annu Int Conf IEEE Eng Med Biol Soc 2015;2014:6332-8. [PMID: 25571445] [DOI: 10.1109/embc.2014.6945077]
Abstract
An approach is proposed for generating super-resolution (SR) images from fundoscopy images, based on 3D registration of the original fundoscopy images. The approach utilizes a simple 3D registration method to enable the application of conventional SR techniques which otherwise employ 2D image registration. Qualitative and quantitative comparative evaluation shows that the obtained results improve image definition and alleviate noise.
30
Han Y, Feng XC, Baciu G. Local joint entropy based non-rigid multimodality image registration. Pattern Recognit Lett 2013. [DOI: 10.1016/j.patrec.2013.05.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
31
Elastic image registration using hierarchical spatially based mean shift. Comput Biol Med 2013; 43:1086-97. [PMID: 23930802 DOI: 10.1016/j.compbiomed.2013.05.006] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2012] [Revised: 04/27/2013] [Accepted: 05/15/2013] [Indexed: 11/24/2022]
Abstract
In this paper, a novel technique for estimating corresponding points using a hierarchical, spatially based mean-shift algorithm is proposed. We propose a spatially based probability estimation using different spatial masks. For a given point on the reference image, its corresponding point on the register image is found along a search trajectory generated by optimizing the Bhattacharyya coefficient between two windows centered at the points on the register and reference images. Outliers are further eliminated by analyzing statistical information on the displacements of the candidate register points. Experiments on various monomodal medical images show that the proposed method is feasible and fast.
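The similarity driving the search can be stated compactly. A minimal sketch, assuming 8-bit grayscale windows; the function name, bin count, and the omission of the spatial masks are assumptions of this illustration, not details from the paper.

```python
import numpy as np

def bhattacharyya(win_ref, win_reg, bins=32):
    """Bhattacharyya coefficient between the intensity histograms of two
    image windows; the paper maximizes this score along a mean-shift
    search trajectory to locate the corresponding point."""
    h1, _ = np.histogram(win_ref, bins=bins, range=(0, 256))
    h2, _ = np.histogram(win_reg, bins=bins, range=(0, 256))
    p = h1 / max(h1.sum(), 1)  # normalize to probability masses
    q = h2 / max(h2.sum(), 1)
    return float(np.sqrt(p * q).sum())
```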
32
Abdelmoula WM, Shah SM, Fahmy AS. Segmentation of choroidal neovascularization in fundus fluorescein angiograms. IEEE Trans Biomed Eng 2013; 60:1439-45. [PMID: 23314765 DOI: 10.1109/tbme.2013.2237906] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Choroidal neovascularization (CNV) is a common manifestation of age-related macular degeneration (AMD). It is characterized by the growth of abnormal blood vessels in the choroidal layer, causing blurring and deterioration of vision. In late stages, these abnormal vessels can rupture the retinal layers, causing complete loss of vision in the affected regions. Determining the CNV size and type in fluorescein angiograms is required for proper treatment and prognosis of the disease. Computer-aided methods for CNV segmentation are needed not only to reduce the burden of manual segmentation but also to reduce inter- and intraobserver variability. In this paper, we present a framework for segmenting CNV lesions based on parametric modeling of the intensity variation in fundus fluorescein angiograms. First, a novel model is proposed to describe the temporal intensity variation at each pixel in image sequences acquired by fluorescein angiography. The set of model parameters at each pixel is then used to segment the image into regions of homogeneous parameters. Preliminary results on datasets from 21 patients with wet AMD show the potential of the method to segment CNV lesions in close agreement with manual segmentation.
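To make the per-pixel modeling concrete, here is a minimal sketch that fits a hypothetical saturating-exponential stand-in for the paper's temporal intensity model at every pixel of a registered sequence; the model form, initial guesses, and function names are assumptions of this illustration, not the published formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, tau, b):
    # Hypothetical saturating-exponential stand-in for the paper's
    # temporal intensity model (the published model differs in form).
    return a * (1.0 - np.exp(-t / tau)) + b

def fit_pixel_params(frames, times):
    """Fit the temporal model independently at each pixel of a registered
    FA sequence; the resulting parameter maps can then be grouped into
    regions of homogeneous parameters, as the paper's segmentation does."""
    t = np.asarray(times, dtype=np.float64)
    stack = np.stack([f.astype(np.float64) for f in frames])  # (T, H, W)
    _, h, w = stack.shape
    params = np.zeros((3, h, w))
    for y in range(h):
        for x in range(w):
            sig = stack[:, y, x]
            try:
                p, _ = curve_fit(model, t, sig, maxfev=200,
                                 p0=(np.ptp(sig), t.mean() + 1e-3, sig[0]))
                params[:, y, x] = p
            except RuntimeError:
                pass  # leave zeros where the fit does not converge
    return params
```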
33
Lin KS, Tsai CL, Tsai CH, Sofka M, Chen SJ, Lin WY. Retinal Vascular Tree Reconstruction With Anatomical Realism. IEEE Trans Biomed Eng 2012; 59:3337-47. [DOI: 10.1109/tbme.2012.2215034] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
34
Xiao D, Vignarajan J, Lock J, Frost S, Tay-Kearney ML, Kanagasingam Y. Retinal image registration and comparison for clinical decision support. Australas Med J 2012. [PMID: 23115586 DOI: 10.4066/amj.2012.1364] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
BACKGROUND Eye diseases such as glaucoma and age-related macular degeneration (ARMD) involve long-term degenerative processes, and longitudinal comparison of retinal images is a common step in their reliable diagnosis. AIMS To provide a retinal image registration approach for longitudinal retinal image alignment and comparison. METHOD Two image registration solutions were proposed to handle retinal images of differing quality, making the registration methods more robust and feasible in a clinical application system. RESULTS Thirty pairs of longitudinal retinal images were used for the registration test. The experiments showed that both solutions performed accurate image registration efficiently. CONCLUSION We proposed a set of retinal image registration solutions for longitudinal retinal image observation and comparison, targeting a clinical application environment.
Affiliation(s)
- Di Xiao, The Australian e-Health Research Centre, CSIRO
35
Oliveira FPM, Tavares JMRS. Medical image registration: a review. Comput Methods Biomech Biomed Engin 2014; 17:73-93.
Abstract
This paper presents a review of automated image registration methodologies that have been used in the medical field. It aims to serve as an introduction to the field, to summarize the work that has been developed, and to be a suitable reference for those looking for registration methods for a specific application. The methodologies under review are classified as intensity based or feature based. The main steps of these methodologies, the common geometric transformations, the similarity measures, and accuracy assessment techniques are introduced and described.
Affiliation(s)
- Francisco P M Oliveira, Instituto de Engenharia Mecânica e Gestão Industrial, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
36
Perez-Rovira A, Cabido R, Trucco E, McKenna SJ, Hubschman JP. RERBEE: robust efficient registration via bifurcations and elongated elements applied to retinal fluorescein angiogram sequences. IEEE Trans Med Imaging 2012; 31:140-150. [PMID: 21908251 DOI: 10.1109/tmi.2011.2167517] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
We present RERBEE (robust efficient registration via bifurcations and elongated elements), a novel feature-based registration algorithm able to correct local deformations in high-resolution ultra-wide field-of-view (UWFV) fluorescein angiogram (FA) sequences of the retina. The algorithm copes with peripheral blurring, severe occlusions, the presence of retinal pathologies, and the change of image content as the fluorescein dye perfuses over time. We used the computational power of a graphics processor to accelerate the most computationally expensive parts of the algorithm by a factor of over 1300, enabling a pair of 3900 × 3072 UWFV FA images to be registered in 5-10 min instead of the 5-7 h required using the CPU alone. We demonstrate accurate results on real data, with 267 of 277 image pairs (96.4%) graded as correctly registered by a clinician and 10 (3.6%) graded as correctly registered with minor errors but usable for clinical purposes. Quantitative comparison with state-of-the-art intensity-based and feature-based registration methods on synthetic data is also reported. We also show potential uses of a correctly aligned sequence for vein/artery discrimination and automatic lesion detection.
37
Zheng J, Tian J, Deng K, Dai X, Zhang X, Xu M. Salient feature region: a new method for retinal image registration. IEEE Trans Inf Technol Biomed 2010; 15:221-32. [PMID: 21138808 DOI: 10.1109/titb.2010.2091145] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Retinal image registration is crucial for the diagnosis and treatment of various eye diseases. A great number of methods have been developed to solve this problem; however, fast and accurate registration of low-quality retinal images remains challenging because of low content contrast, large intensity variance, and the deterioration of unhealthy retinas caused by various pathologies. This paper provides a new retinal image registration method based on salient feature regions (SFRs). We first propose a well-defined region saliency measure, consisting of both local adaptive variance and gradient field entropy, to extract the SFRs in each image. An innovative local feature descriptor that combines the gradient field distribution with corresponding geometric information is then computed to match the SFRs accurately. After that, normalized cross-correlation-based local rigid registration is performed on the matched SFRs to refine the local alignment. Finally, the two images are registered by adopting a high-order global transformation model with locally well-aligned region centers as control points. Experimental results show that our method is quite effective for retinal image registration.
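A minimal sketch of such a saliency score follows, assuming a grayscale window; the equal weighting of the variance and entropy terms and the 16-bin orientation histogram are assumptions of this illustration rather than the paper's exact formulation.

```python
import numpy as np

def region_saliency(patch):
    """Sketch of a region saliency score: local intensity variance plus
    the entropy of the magnitude-weighted gradient-orientation
    distribution inside the window."""
    p = patch.astype(np.float64)
    gy, gx = np.gradient(p)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=16, range=(-np.pi, np.pi), weights=mag)
    q = hist / max(hist.sum(), 1e-12)
    entropy = -np.sum(q[q > 0] * np.log2(q[q > 0]))
    return p.var() + entropy
```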
Affiliation(s)
- Jian Zheng, Medical Image Processing Group, Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing, China
38
Retinal Fundus Image Registration via Vascular Structure Graph Matching. Int J Biomed Imaging 2010; 2010:906067. [PMID: 20871853 PMCID: PMC2943092 DOI: 10.1155/2010/906067] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2010] [Accepted: 07/07/2010] [Indexed: 11/18/2022] Open
Abstract
Motivated by the observation that a retinal fundus image may contain unique geometric structures within its vascular trees which can be utilized for feature matching, in this paper we propose a graph-based registration framework called GM-ICP to align pairwise retinal images. First, the retinal vessels are automatically detected and represented as vascular structure graphs. Graph matching is then performed to find global correspondences between vascular bifurcations. Finally, a revised ICP algorithm incorporating a quadratic transformation model is used at the fine level to register vessel shape models. In order to eliminate incorrect matches from the global correspondence set obtained via graph matching, we propose a structure-based sample consensus (STRUCT-SAC) algorithm. The advantages of our approach are threefold: (1) a globally optimal solution can be achieved with graph matching; (2) the method is invariant to linear geometric transformations; and (3) heavy local feature descriptors are not required. The effectiveness of our method is demonstrated by experiments with 48 pairs of retinal images collected from clinical patients.
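To illustrate the fine-level refinement, the sketch below runs a plain ICP loop with an affine update between 2D vessel point sets; the paper's revision instead uses a quadratic transformation model and STRUCT-SAC-filtered graph matches for initialization, and all names here are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_affine(src, dst, iters=20):
    """Plain ICP between 2D point sets (N x 2, M x 2): nearest-neighbour
    correspondences followed by a least-squares affine update. A
    quadratic model, as in the paper, would simply add second-order
    terms to the design matrix."""
    tree = cKDTree(dst)
    cur = src.astype(np.float64).copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                        # closest dst point per src point
        a = np.hstack([cur, np.ones((len(cur), 1))])    # design matrix [x y 1]
        sol, *_ = np.linalg.lstsq(a, dst[idx], rcond=None)  # 3 x 2 affine
        cur = a @ sol
    return cur, sol
```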
39
Chen J, Tian J, Lee N, Zheng J, Smith RT, Laine AF. A partial intensity invariant feature descriptor for multimodal retinal image registration. IEEE Trans Biomed Eng 2010; 57:1707-18. [PMID: 20176538 DOI: 10.1109/tbme.2010.2042169] [Citation(s) in RCA: 191] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Detection of vascular bifurcations is a challenging task in multimodal retinal image registration. Existing algorithms based on bifurcations usually fail in correctly aligning poor quality retinal image pairs. To solve this problem, we propose a novel highly distinctive local feature descriptor named partial intensity invariant feature descriptor (PIIFD) and describe a robust automatic retinal image registration framework named Harris-PIIFD. PIIFD is invariant to image rotation, partially invariant to image intensity, affine transformation, and viewpoint/perspective change. Our Harris-PIIFD framework consists of four steps. First, corner points are used as control point candidates instead of bifurcations since corner points are sufficient and uniformly distributed across the image domain. Second, PIIFDs are extracted for all corner points, and a bilateral matching technique is applied to identify corresponding PIIFDs matches between image pairs. Third, incorrect matches are removed and inaccurate matches are refined. Finally, an adaptive transformation is used to register the image pairs. PIIFD is so distinctive that it can be correctly identified even in nonvascular areas. When tested on 168 pairs of multimodal retinal images, the Harris-PIIFD far outperforms existing algorithms in terms of robustness, accuracy, and computational efficiency.
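The bilateral matching step lends itself to a compact sketch. Assuming descriptors as row vectors in NumPy arrays, a pair survives only if the nearest-neighbour relation holds in both directions; the function name and the Euclidean distance metric are assumptions of this illustration.

```python
import numpy as np

def bilateral_match(desc1, desc2):
    """Mutual-nearest-neighbour ('bilateral') matching: a pair (i, j) is
    kept only if desc2[j] is the closest match of desc1[i] and vice
    versa. Descriptor extraction itself is omitted."""
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    fwd = d.argmin(axis=1)  # best desc2 index for each desc1 row
    bwd = d.argmin(axis=0)  # best desc1 index for each desc2 row
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```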
Affiliation(s)
- Jian Chen, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China