1. Kalaw FGP, Cavichini M, Zhang J, Wen B, Lin AC, Heinke A, Nguyen T, An C, Bartsch DUG, Cheng L, Freeman WR. Ultra-wide field and new wide field composite retinal image registration with AI-enabled pipeline and 3D distortion correction algorithm. Eye (Lond) 2024; 38:1189-1195. PMID: 38114568; PMCID: PMC11009222; DOI: 10.1038/s41433-023-02868-3.
Abstract
PURPOSE This study aimed to compare a new Artificial Intelligence (AI) method with conventional mathematical warping in accurately overlaying peripheral retinal vessels from two different imaging devices: confocal scanning laser ophthalmoscope (cSLO) wide-field images and SLO ultra-wide field images. METHODS Images were captured using the Heidelberg Spectralis 55-degree field-of-view and the Optos ultra-wide field device. Conventional mathematical warping was performed using Random Sample Consensus-Sample and Consensus sets (RANSAC-SC). This was compared to an AI alignment algorithm based on a one-way forward registration procedure consisting of full Convolutional Neural Networks (CNNs) with Outlier Rejection (OR CNN), as well as an iterative 3D camera pose optimization process (OR CNN + Distortion Correction [DC]). Images were presented in a checkerboard pattern, and peripheral vessels were graded in four quadrants based on alignment to the adjacent box. RESULTS A total of 660 boxes were analysed from 55 eyes. Dice scores were compared between the three methods (RANSAC-SC/OR CNN/OR CNN + DC): 0.3341/0.4665/0.4784 for fold 1-2 and 0.3315/0.4494/0.4596 for fold 2-1 in composite images. Images composed using OR CNN + DC had a median rating of 4 (out of 5) versus 2 using RANSAC-SC. The odds of a higher grading level are 4.8 times greater with OR CNN + DC than with RANSAC-SC (p < 0.0001). CONCLUSION Peripheral retinal vessel alignment performed better using our AI algorithm than RANSAC-SC, which may help improve co-localization of retinal anatomy and pathology.
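As a point of reference for the RANSAC baseline described in this abstract, a generic RANSAC affine fit over matched keypoints can be sketched in a few lines of NumPy. This is the textbook algorithm, not the paper's RANSAC-SC variant; `ransac_affine` and its parameters are illustrative names.

```python
import numpy as np

def ransac_affine(src, dst, n_iters=500, thresh=2.0, seed=0):
    """Estimate a 2D affine transform mapping src -> dst with RANSAC.

    src, dst: (N, 2) arrays of matched keypoint coordinates.
    Returns the (2, 3) affine matrix refit on the largest inlier set,
    plus the boolean inlier mask.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    src_h = np.hstack([src, np.ones((n, 1))])        # homogeneous (N, 3)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(n, size=3, replace=False)   # minimal sample
        A, _, rank, _ = np.linalg.lstsq(src_h[idx], dst[idx], rcond=None)
        if rank < 3:
            continue                                  # degenerate (collinear) sample
        err = np.linalg.norm(src_h @ A - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refit on all inliers of the best model
    A, *_ = np.linalg.lstsq(src_h[best_inliers], dst[best_inliers], rcond=None)
    return A.T, best_inliers
```

The minimal-sample loop and final refit are the two standard RANSAC phases; RANSAC-SC as cited modifies how sample and consensus sets are handled.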
Affiliation(s)
- Fritz Gerald P Kalaw
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Melina Cavichini
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Junkang Zhang
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
- Bo Wen
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
- Andrew C Lin
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Anna Heinke
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Truong Nguyen
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
- Cheolhong An
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
- Lingyun Cheng
- Jacobs Retina Center, University of California, San Diego, CA, USA
- William R Freeman
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
2. Ma X, Cui H, Li S, Yang Y, Xia Y. Deformable medical image registration with global-local transformation network and region similarity constraint. Comput Med Imaging Graph 2023; 108:102263. PMID: 37487363; DOI: 10.1016/j.compmedimag.2023.102263.
Abstract
Deformable medical image registration can achieve fast and accurate alignment between two images, enabling medical professionals to analyze images of different subjects in a unified anatomical space. As such, it plays an important role in many medical image studies. Current deep learning (DL)-based approaches for image registration directly learn the spatial transformation from one image to another, relying on a convolutional neural network together with ground truth or similarity metrics. However, these methods use only a global similarity energy function to evaluate the similarity of a pair of images, which ignores the similarity of regions of interest (ROIs) within the images. This can limit the accuracy of the image registration and affect the analysis of specific ROIs. Additionally, DL-based methods often estimate global spatial transformations of images directly, without considering local spatial transformations of ROIs within the images. To address these issues, we propose a novel global-local transformation network with a region similarity constraint that maximizes the similarity of ROIs within the images and estimates both global and local spatial transformations simultaneously. Experiments conducted on four public 3D MRI datasets demonstrate that the proposed method achieves the highest registration performance in terms of accuracy and generalization compared to other state-of-the-art methods.
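The global-versus-regional distinction drawn in this abstract can be illustrated with a toy similarity loss that mixes a global Dice term with per-ROI Dice terms. This is a NumPy sketch assuming integer label-map inputs, not the authors' implementation; `region_similarity_loss` and the `w_region` weighting are hypothetical names.

```python
import numpy as np

def soft_dice(a, b, eps=1e-6):
    """Soft Dice overlap between two maps with values in [0, 1]."""
    return (2.0 * np.sum(a * b) + eps) / (np.sum(a) + np.sum(b) + eps)

def region_similarity_loss(warped_seg, fixed_seg, labels, w_region=0.5):
    """Illustrative combined loss: one global Dice term over the whole
    foreground, plus the mean Dice computed separately per ROI label."""
    global_term = 1.0 - soft_dice((warped_seg > 0).astype(float),
                                  (fixed_seg > 0).astype(float))
    roi_terms = [1.0 - soft_dice((warped_seg == l).astype(float),
                                 (fixed_seg == l).astype(float))
                 for l in labels]
    return (1.0 - w_region) * global_term + w_region * float(np.mean(roi_terms))
```

A purely global loss can stay low while a small ROI is badly misaligned; the per-label terms penalize exactly that case, which is the motivation the abstract gives for the region similarity constraint.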
Affiliation(s)
- Xinke Ma
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Hengfei Cui
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Shuoyan Li
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Yibo Yang
- King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
- Yong Xia
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
3. Wu M, He X, Li F, Zhu J, Wang S, Burstein PD. Weakly supervised volumetric prostate registration for MRI-TRUS image driven by signed distance map. Comput Biol Med 2023; 163:107150. PMID: 37321103; DOI: 10.1016/j.compbiomed.2023.107150.
Abstract
Image registration is a fundamental step for MRI-TRUS fusion targeted biopsy. Due to the inherent representational differences between these two image modalities, though, intensity-based similarity losses for registration tend to result in poor performance. To mitigate this, comparison of organ segmentations, functioning as a weak proxy measure of image similarity, has been proposed. Segmentations, though, are limited in their information encoding capabilities. Signed distance maps (SDMs), on the other hand, encode these segmentations into a higher dimensional space where shape and boundary information are implicitly captured, and which, in addition, yield high gradients even for slight mismatches, thus preventing vanishing gradients during deep-network training. Based on these advantages, this study proposes a weakly-supervised deep learning volumetric registration approach driven by a mixed loss that operates both on segmentations and their corresponding SDMs, and which is not only robust to outliers, but also encourages optimal global alignment. Our experimental results, obtained on a public prostate MRI-TRUS biopsy dataset, demonstrate that our method outperforms other weakly-supervised registration approaches with a Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD) of 87.3 ± 11.3, 4.56 ± 1.95 mm, and 0.053 ± 0.026 mm, respectively. We also show that the proposed method effectively preserves the prostate gland's internal structure.
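For intuition about the SDM representation this abstract relies on, a signed distance map of a binary segmentation can be computed by brute force as below. This is an O(N²) NumPy illustration in 2D, not the paper's code; real pipelines use a fast distance transform (e.g. `scipy.ndimage.distance_transform_edt`), and the sign convention (here negative inside, positive outside) varies across papers.

```python
import numpy as np

def signed_distance_map(mask):
    """Brute-force signed distance map of a 2D binary mask:
    negative inside the object, positive outside, with distances
    measured between pixel centers."""
    mask = mask.astype(bool)
    fg = np.argwhere(mask).astype(float)      # foreground pixel coords
    bg = np.argwhere(~mask).astype(float)     # background pixel coords
    h, w = mask.shape
    sdm = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            p = np.array([y, x], dtype=float)
            if mask[y, x]:
                # inside: negated distance to the nearest background pixel
                sdm[y, x] = -np.min(np.linalg.norm(bg - p, axis=1))
            else:
                sdm[y, x] = np.min(np.linalg.norm(fg - p, axis=1))
    return sdm
```

Unlike the raw mask, the SDM is nonzero (and smoothly varying) far from the boundary, which is why small misalignments still produce usable gradients during training, as the abstract argues.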
Affiliation(s)
- Menglin Wu
- School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- Xuchen He
- School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- Fan Li
- School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- Jie Zhu
- Senior Department of Urology, The Third Medical Center of Chinese People's Liberation Army General Hospital, Beijing, China
4. Martínez-Río J, Carmona EJ, Cancelas D, Novo J, Ortega M. Deformable registration of multimodal retinal images using a weakly supervised deep learning approach. Neural Comput Appl 2023. DOI: 10.1007/s00521-023-08454-8.
Abstract
There are different retinal vascular imaging modalities widely used in clinical practice to diagnose different retinal pathologies. The joint analysis of these multimodal images is of increasing interest, since each provides common and complementary visual information. However, comparing two images obtained with different techniques and containing the same retinal region of interest requires a prior registration of both images. Here, we present a weakly supervised deep learning methodology for robust deformable registration of multimodal retinal images, which is applied to implement a method for the registration of fluorescein angiography (FA) and optical coherence tomography angiography (OCTA) images. This methodology is strongly inspired by VoxelMorph, a state-of-the-art unsupervised deep learning framework for deformable registration of unimodal medical images. The method was evaluated on a public dataset with 172 pairs of FA and superficial plexus OCTA images. The degree of alignment of the common information (blood vessels) and the preservation of the non-common information (image background) in the transformed image were measured using the Dice coefficient (DC) and zero-normalized cross-correlation (ZNCC), respectively. The average values of these metrics, with standard deviations, were DC = 0.72 ± 0.10 and ZNCC = 0.82 ± 0.04. The time required to obtain each pair of registered images was 0.12 s. These results outperform the rigid and deformable registration methods against which our method was compared.
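The two evaluation metrics reported here, the Dice coefficient and zero-normalized cross-correlation, have standard definitions that can be written directly in NumPy. This is a generic sketch of those definitions, not the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary vessel masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def zncc(a, b):
    """Zero-normalized cross-correlation between two intensity images:
    mean product of the z-scored images, in [-1, 1]."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```

Dice is computed on the segmented vessels (the common information), while ZNCC on the background checks that the deformation did not distort non-vascular content, matching the evaluation split described in the abstract.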
5. Xiao H. Optimized soft frame design of traditional printing and dyeing process in Xiangxi based on pattern mining and edge-driven scene understanding. Soft Comput 2022. DOI: 10.1007/s00500-021-06201-6.
6. An C, Wang Y, Zhang J, Nguyen TQ. Self-Supervised Rigid Registration for Multimodal Retinal Images. IEEE Trans Image Process 2022; 31:5733-5747. PMID: 36040946; DOI: 10.1109/tip.2022.3201476.
Abstract
The ability to accurately overlay a retinal image of one modality onto another is critical in ophthalmology. Our previous framework achieved state-of-the-art results for multimodal retinal image registration, but it requires human-annotated labels due to its supervised approach. In this paper, we propose a self-supervised multimodal retina registration method to alleviate the time and expense of preparing training data; that is, we aim to automatically register multimodal retinal images without any human annotations. Specifically, we focus on registering color fundus images with infrared reflectance and fluorescein angiography images, and compare registration results with several conventional methods as well as supervised and unsupervised deep learning methods. Experimental results show that the proposed self-supervised framework achieves accuracy comparable to the state-of-the-art supervised learning method in terms of registration accuracy and Dice coefficient.
7. Multimodal Remote Sensing Image Registration Methods and Advancements: A Survey. Remote Sensing 2021. DOI: 10.3390/rs13245128.
Abstract
With rapid advancements in remote sensing image registration algorithms, comprehensive imaging applications are no longer limited to single-modal remote sensing images. Instead, multi-modal remote sensing (MMRS) image registration has become a research focus in recent years. However, multi-source, multi-temporal, and multi-spectrum inputs introduce significant nonlinear radiation differences in MMRS images, for which researchers need to develop novel solutions. At present, comprehensive reviews and analyses of MMRS image registration methods are inadequate in related fields. Thus, this paper introduces three theoretical frameworks: area-based, feature-based, and deep learning-based methods. We present a brief review of traditional methods and focus on the more advanced MMRS image registration methods proposed in recent years. This review and analysis are intended to provide researchers in related fields with a deeper understanding that can support further breakthroughs and innovations.
8. Zhang J, Wang Y, Bartsch DUG, Freeman WR, Nguyen TQ, An C. Perspective Distortion Correction for Multi-Modal Registration between Ultra-Widefield and Narrow-Angle Retinal Images. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:4086-4091. PMID: 34892126; PMCID: PMC9359414; DOI: 10.1109/embc46164.2021.9631084.
Abstract
Multi-modal retinal image registration between 2D Ultra-Widefield (UWF) and narrow-angle (NA) images has not been well studied, since most existing methods focus mainly on NA image alignment. The stereographic projection model used in UWF imaging causes strong distortions in peripheral areas, which leads to inferior alignment quality. We propose a distortion correction method that remaps the UWF images based on the estimated camera view points of the NA images. In addition, we set up a CNN-based registration pipeline for UWF and NA images, which consists of the distortion correction method and three networks for vessel segmentation, feature detection and matching, and outlier rejection. Experimental results on our collected dataset show the effectiveness of the proposed pipeline and the distortion correction method.
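For intuition about the distortion being corrected, the stereographic projection mentioned in this abstract maps points on a sphere onto a plane; below is a minimal NumPy sketch of the forward and inverse maps for a unit sphere projected from the north pole. This is the idealized textbook model only; the actual UWF imaging geometry and the paper's remapping are more involved.

```python
import numpy as np

def stereographic_project(p):
    """Project unit-sphere points (N, 3) from the north pole (0, 0, 1)
    onto the plane z = 0: (X, Y) = (x, y) / (1 - z)."""
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return np.stack([x / (1 - z), y / (1 - z)], axis=1)

def stereographic_unproject(q):
    """Inverse map: plane points (N, 2) back onto the unit sphere."""
    X, Y = q[:, 0], q[:, 1]
    d = 1 + X**2 + Y**2
    return np.stack([2 * X / d, 2 * Y / d, (d - 2) / d], axis=1)
```

The denominator 1 - z shrinks toward the pole, so equal patches of retina near the periphery occupy disproportionately large areas in the projected image — the peripheral distortion the pipeline's correction step targets.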
9. Dan T, Hu Y, Han C, Fan Z, Huang Z, Zhang B, Tao G, Liu B, Yu H, Cai H. Fusion of multi-source retinal fundus images via automatic registration for clinical diagnosis. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.05.091.