1. Mujat M, Akula JD, Fulton AB, Ferguson RD, Iftimia N. Non-Rigid Registration for High-Resolution Retinal Imaging. Diagnostics (Basel). 2023;13:2285. [PMID: 37443679; PMCID: PMC10341150; DOI: 10.3390/diagnostics13132285]
Abstract
Adaptive optics provides improved resolution in ophthalmic imaging when retinal microstructures need to be identified, counted, and mapped. In general, multiple images are averaged to improve the signal-to-noise ratio or analyzed for temporal dynamics. Image registration by cross-correlation is straightforward for small patches; however, larger images require more sophisticated registration techniques. Strip-based registration has been used successfully for photoreceptor mosaic alignment in small patches; however, if the deformations along strips are not simple displacements, averaging can degrade the final image. We have applied a non-rigid registration technique that improves the quality of processed images for mapping cones over large image patches. In this approach, correction of local deformations compensates for local image stretching, compression, bending, and twisting arising from a number of causes. The main result of this procedure is improved definition of retinal microstructures, which can then be better identified and segmented. Derived metrics such as cone density, wall-to-lumen ratio, and quantification of structural modification of blood vessel walls have diagnostic value in many retinal diseases, including diabetic retinopathy and age-related macular degeneration, and their improved evaluation may facilitate early diagnosis of retinal disease.
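Cross-correlation of small patches is the building block that strip-based and non-rigid registration extend. Below is a minimal sketch of that step, assuming plain NumPy and integer-pixel shifts; it illustrates generic phase correlation, not the authors' non-rigid pipeline, which additionally corrects stretching, bending, and twisting within each strip.

```python
# Minimal phase-correlation sketch for patch displacement estimation.
# Generic illustration only; the non-rigid method in the paper goes further.
import numpy as np

def phase_correlation_shift(reference: np.ndarray, patch: np.ndarray):
    """Estimate (dy, dx) such that patch ~= np.roll(reference, (dy, dx), axis=(0, 1))."""
    f_ref = np.fft.fft2(reference)
    f_pat = np.fft.fft2(patch)
    cross_power = f_pat * np.conj(f_ref)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap shifts larger than half the image size to negative displacements.
    h, w = reference.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```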
Affiliation(s)
- Mircea Mujat
- Physical Sciences, Inc., 20 New England Business Center, Andover, MA 01810, USA
- James D. Akula
- Department of Ophthalmology, Boston Children’s Hospital, Boston, MA 02115, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA 02115, USA
- Anne B. Fulton
- Department of Ophthalmology, Boston Children’s Hospital, Boston, MA 02115, USA
- Department of Ophthalmology, Harvard Medical School, Boston, MA 02115, USA
- R. Daniel Ferguson
- Physical Sciences, Inc., 20 New England Business Center, Andover, MA 01810, USA
- Nicusor Iftimia
- Physical Sciences, Inc., 20 New England Business Center, Andover, MA 01810, USA
2. Rivas-Villar D, Motschi AR, Pircher M, Hitzenberger CK, Schranz M, Roberts PK, Schmidt-Erfurth U, Bogunović H. Automated inter-device 3D OCT image registration using deep learning and retinal layer segmentation. Biomed Opt Express. 2023;14:3726-3747. [PMID: 37497506; PMCID: PMC10368062; DOI: 10.1364/boe.493047]
Abstract
Optical coherence tomography (OCT) is the most widely used imaging modality in ophthalmology. There are multiple variants of OCT imaging capable of producing complementary information, so registering these complementary volumes is desirable in order to combine their information. In this work, we propose a novel automated pipeline to register OCT images produced by different devices. The pipeline is based on two steps: a multi-modal 2D en-face registration based on deep learning, and a Z-axis (axial) registration based on retinal layer segmentation. We evaluate our method using data from a Heidelberg Spectralis and an experimental PS-OCT device. The empirical results demonstrate high-quality registrations, with mean errors of approximately 46 µm for the 2D registration and 9.59 µm for the Z-axis registration. These registrations may help in multiple clinical applications, such as the validation of layer segmentations.
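Once the en-face (x-y) alignment is in place, the Z-axis step reduces to shifting each A-scan so that a segmented reference layer lines up across devices. The sketch below is a hedged single-layer simplification of that idea; the array names and the wrap-around `np.roll` are illustrative assumptions, not the authors' exact formulation.

```python
# Toy axial (Z) alignment driven by one segmented retinal layer.
import numpy as np

def axial_align(volume: np.ndarray, layer_fixed: np.ndarray, layer_moving: np.ndarray):
    """Shift every A-scan of `volume` (z, y, x) so that its segmented layer depth
    `layer_moving[y, x]` matches `layer_fixed[y, x]` (both in voxel units)."""
    aligned = np.zeros_like(volume)
    offsets = np.round(layer_fixed - layer_moving).astype(int)   # per-A-scan Z shift
    for y in range(volume.shape[1]):
        for x in range(volume.shape[2]):
            # np.roll wraps around; real code would pad instead of wrapping.
            aligned[:, y, x] = np.roll(volume[:, y, x], offsets[y, x])
    return aligned
```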
Affiliation(s)
- David Rivas-Villar
- Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- Alice R Motschi
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Michael Pircher
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Christoph K Hitzenberger
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Markus Schranz
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Philipp K Roberts
- Medical University of Vienna, Department of Ophthalmology and Optometry, Vienna, Austria
- Ursula Schmidt-Erfurth
- Medical University of Vienna, Department of Ophthalmology and Optometry, Vienna, Austria
- Hrvoje Bogunović
- Medical University of Vienna, Department of Ophthalmology and Optometry, Christian Doppler Lab for Artificial Intelligence in Retina, Vienna, Austria
3. Cao Y, Fu T, Duan L, Dai Y, Gong L, Cao W, Liu D, Yang X, Ni X, Zheng J. CDFRegNet: A cross-domain fusion registration network for CT-to-CBCT image registration. Comput Methods Programs Biomed. 2022;224:107025. [PMID: 35872383; DOI: 10.1016/j.cmpb.2022.107025]
Abstract
BACKGROUND AND OBJECTIVE Computed tomography (CT) to cone-beam computed tomography (CBCT) image registration plays an important role in radiotherapy treatment positioning, dose verification, and monitoring of anatomic changes during radiotherapy. However, fast and accurate CT-to-CBCT image registration remains challenging due to intensity differences, the poor image quality of CBCT images, and inconsistent structure information. METHODS To address these problems, a novel unsupervised network named the cross-domain fusion registration network (CDFRegNet) is proposed. First, a novel edge-guided attention module (EGAM) is designed to capture edge information based on gradient prior images and guide the network to model the spatial correspondence between the two image domains. Moreover, a novel cross-domain attention module (CDAM) is proposed to improve the network's ability to map and fuse domain-specific features effectively. RESULTS Extensive experiments on a real clinical dataset were carried out, and the results verify that the proposed CDFRegNet registers CT to CBCT images effectively and obtains the best performance compared with other representative methods, with a mean DSC of 80.01±7.16%, a mean TRE of 2.27±0.62 mm, and a mean MHD of 1.50±0.32 mm. Ablation experiments also showed that the EGAM and CDAM further improve the accuracy of the registration network and generalize well to other registration networks. CONCLUSION This paper proposes a novel CT-to-CBCT registration method based on the EGAM and CDAM, which has the potential to improve the accuracy of multi-domain image registration.
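The EGAM's core idea, deriving an attention map from an image-gradient prior and using it to re-weight features toward structure shared by CT and CBCT, can be sketched compactly. The toy PyTorch module below is one plausible reading under stated assumptions (a single-channel 3D volume, a learned projection of the gradient magnitude into a sigmoid gate); it is not the published architecture.

```python
# Hypothetical edge-gated attention sketch, inspired by (not equal to) EGAM.
import torch
import torch.nn as nn

class EdgeGate(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Project the 1-channel edge map to the feature width; assumed design.
        self.proj = nn.Conv3d(1, channels, kernel_size=3, padding=1)

    @staticmethod
    def gradient_magnitude(img: torch.Tensor) -> torch.Tensor:
        """Central-difference gradient magnitude of a (B, 1, D, H, W) volume."""
        gz = img.diff(dim=2, prepend=img[:, :, :1])
        gy = img.diff(dim=3, prepend=img[:, :, :, :1])
        gx = img.diff(dim=4, prepend=img[..., :1])
        return torch.sqrt(gz**2 + gy**2 + gx**2 + 1e-8)

    def forward(self, features: torch.Tensor, img: torch.Tensor) -> torch.Tensor:
        edge = self.gradient_magnitude(img)        # gradient prior image
        gate = torch.sigmoid(self.proj(edge))      # (B, C, D, H, W) in (0, 1)
        return features * gate                     # emphasize edge-rich voxels
```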
Affiliation(s)
- Yuzhu Cao
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Tianxiao Fu
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou 215006, China
- Luwen Duan
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Yakang Dai
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Lun Gong
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin 300072, China
- Weiwei Cao
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Desen Liu
- Department of Thoracic Surgery, Suzhou Kowloon Hospital, Shanghai Jiao Tong University School of Medicine, Suzhou 215028, China
- Xiaodong Yang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Xinye Ni
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou 213003, China
- Jian Zheng
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; Jinan Guoke Medical Technology Development Co., Ltd, Jinan 250101, China
4. Kujur SS, Sahana SK. Medical image registration utilizing tissue P systems. Front Pharmacol. 2022;13:949872. [PMID: 35991877; PMCID: PMC9389265; DOI: 10.3389/fphar.2022.949872]
Abstract
The tissue P system (TPS) executes in parallel across a comprehensive data and instruction space, which provides fast convergence during the transition from local to global optima. METHODS In this study, we propose TPSysIR, a framework built on the TPS for image registration that optimizes the mutual information (MI) similarity metric to find a global solution. RESULTS The model was tested on single- and multimodal brain MRI scans and compared against other prominent optimization-based image registration techniques. CONCLUSION The results show that, among all methods, TPSysIR provides better MI values with minimum deviation across a range of experimental setups conducted iteratively.
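Whatever optimizer drives the search, the objective here is mutual information between the fixed and transformed moving images. A minimal histogram-based MI estimator, assuming NumPy arrays of matching shape, is a useful reference point:

```python
# Minimal joint-histogram mutual information estimator.
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """MI (in nats) between two equally shaped images via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An optimizer such as a TPS-based search would then maximize this value over the transformation parameters.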
5. Intensity-based nonrigid endomicroscopic image mosaicking incorporating texture relevance for compensation of tissue deformation. Comput Biol Med. 2021;142:105169. [PMID: 34974384; DOI: 10.1016/j.compbiomed.2021.105169]
Abstract
Image mosaicking has emerged as a universal technique for broadening the field of view of the probe-based confocal laser endomicroscopy (pCLE) imaging system. However, due to the influence of probe-tissue contact forces and optical components on imaging quality, existing mosaicking methods remain insufficient for practical challenges. In this paper, we present the texture-encoded sum of conditional variance (TESCV) as a novel similarity metric and effectively incorporate it into a sequential mosaicking scheme to simultaneously correct rigid probe shift and nonrigid tissue deformation. TESCV combines intensity dependency and texture relevance to quantify the differences between pCLE image frames, where a discriminative binary descriptor named the fully cross-detected local derivative pattern (FCLDP) is designed to extract more detailed structural textures. Furthermore, we analytically derive the closed-form gradient of TESCV with respect to the transformation variables. Experiments on the circular dataset highlighted the advantage of the TESCV metric in improving mosaicking performance compared with four other recently published metrics. Comparison with four other state-of-the-art mosaicking methods on the spiral and manual datasets indicated that the proposed TESCV-based method not only works stably under different contact forces but is also suitable for both low- and high-resolution imaging systems. With more accurate and delicate mosaics, the proposed method holds promise for meeting clinical demands for intraoperative optical biopsy.
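TESCV's intensity term extends the sum of conditional variance (SCV). A compact sketch of plain SCV, assuming NumPy arrays and a count-weighted per-bin variance, is shown below; the texture-encoding FCLDP term that TESCV adds, and its closed-form gradient, are omitted.

```python
# Sketch of the sum of conditional variance (SCV) between two frames.
# Count-weighting of bins is an assumed convention (proportional to the
# probability-weighted form); lower SCV means better alignment.
import numpy as np

def sum_conditional_variance(reference: np.ndarray, target: np.ndarray, bins: int = 32) -> float:
    edges = np.linspace(reference.min(), reference.max(), bins)
    ref_bins = np.digitize(reference.ravel(), edges)
    tgt = target.ravel()
    scv = 0.0
    for b in np.unique(ref_bins):
        values = tgt[ref_bins == b]          # target intensities co-located with bin b
        if values.size > 1:
            scv += values.size * values.var()
    return scv
```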
6. Duan L, Ni X, Liu Q, Gong L, Yuan G, Li M, Yang X, Fu T, Zheng J. Unsupervised learning for deformable registration of thoracic CT and cone-beam CT based on multiscale features matching with spatially adaptive weighting. Med Phys. 2020;47:5632-5647. [PMID: 32949051; DOI: 10.1002/mp.14464]
Abstract
PURPOSE Cone-beam computed tomography (CBCT) is an on-treatment imaging modality widely used in image-guided radiotherapy. Fast and accurate registration between the on-treatment CBCT and the planning CT is significant for precise adaptive radiotherapy (ART). However, existing CT-CBCT registration methods, which are mostly affine or time-consuming intensity-based deformable registrations, still need further study due to the considerable CT-CBCT intensity discrepancy and the artifacts in low-quality CBCT images. In this paper, we propose a deep learning-based CT-CBCT registration model to enable rapid and accurate CT-CBCT registration for radiotherapy. METHODS The proposed CT-CBCT registration model consists of a registration network and an innovative deep similarity metric network. The registration network is a novel fully convolutional network adapted specially for patch-wise CT-CBCT registration. The metric network, going beyond intensity, automatically evaluates the high-dimensional attribute-based dissimilarity between the registered CT and CBCT images. In addition, considering the artifacts in low-quality CBCT images, we add a spatial weighting (SW) block to adaptively attach more importance to informative voxels while inhibiting the interference of artifact regions. Such an SW-based metric network is expected to extract the most meaningful and discriminative deep features and form a more reliable CT-CBCT similarity measure for training the registration network. RESULTS We evaluated the proposed method on a clinical thoracic CBCT and CT dataset, comparing the registration results with other common image similarity metrics and state-of-the-art registration algorithms. The proposed method provides the highest structural similarity index (86.17 ± 5.09), the minimum target registration error of landmarks (2.37 ± 0.32 mm), and the best Dice similarity coefficient for tumor volumes (78.71 ± 10.95). Moreover, our model obtains a comparable distance error for lung surfaces (1.75 ± 0.35 mm). CONCLUSION The proposed model shows both efficiency and efficacy for reliable thoracic CT-CBCT registration and can generate the matched CT and CBCT images within a few seconds, which is of great significance for clinical radiotherapy.
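The SW block's role, learning a per-voxel saliency that down-weights artifact regions when a dissimilarity map is aggregated into a scalar loss, can be sketched as follows. This PyTorch module is a hedged illustration only; the layer widths and the softmax normalization are assumptions, not the paper's specification.

```python
# Hypothetical spatial-weighting (SW) sketch for loss aggregation.
import torch
import torch.nn as nn

class SpatialWeighting(nn.Module):
    """Learn a per-voxel weight map that suppresses artifact regions when
    summing a voxel-wise dissimilarity map into a scalar training loss."""
    def __init__(self, in_channels: int):
        super().__init__()
        # Assumes in_channels >= 2 so the bottleneck stays non-empty.
        self.score = nn.Sequential(
            nn.Conv3d(in_channels, in_channels // 2, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(in_channels // 2, 1, 1),
        )

    def forward(self, features: torch.Tensor, dissimilarity: torch.Tensor) -> torch.Tensor:
        # features: (B, C, D, H, W); dissimilarity: (B, 1, D, H, W).
        weights = torch.softmax(self.score(features).flatten(2), dim=-1)  # sums to 1 per volume
        return (weights * dissimilarity.flatten(2)).sum(dim=-1).mean()   # scalar loss
```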
Affiliation(s)
- Luwen Duan
- School of Biomedical Engineering, University of Science and Technology of China, Hefei 230026, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Xinye Ni
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Qi Liu
- School of Biomedical Engineering, University of Science and Technology of China, Hefei 230026, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Lun Gong
- The Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin 300072, China
- Gang Yuan
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Ming Li
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Xiaodong Yang
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Tianxiao Fu
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou 215006, China
- Jian Zheng
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
7. Fu Y, Lei Y, Wang T, Higgins K, Bradley JD, Curran WJ, Liu T, Yang X. LungRegNet: An unsupervised deformable image registration method for 4D-CT lung. Med Phys. 2020;47:1763-1774. [PMID: 32017141; PMCID: PMC7165051; DOI: 10.1002/mp.14065]
Abstract
PURPOSE To develop an accurate and fast deformable image registration (DIR) method for four-dimensional computed tomography (4D-CT) lung images. Deep learning-based methods have the potential to predict the deformation vector field (DVF) quickly in a few forward predictions. We have developed an unsupervised deep learning method for 4D-CT lung DIR with excellent performance in terms of registration accuracy, robustness, and computational speed. METHODS A fast and accurate 4D-CT lung DIR method, named LungRegNet, was proposed using deep learning. LungRegNet consists of two subnetworks, CoarseNet and FineNet. As the names suggest, CoarseNet predicts large lung motion on a coarse-scale image while FineNet predicts local lung motion on a fine-scale image. Both CoarseNet and FineNet include a generator and a discriminator. The generator was trained to directly predict the DVF used to deform the moving image, and the discriminator was trained to distinguish the deformed images from the original images. CoarseNet was first trained to deform the moving images; the deformed images were then used to train FineNet. To increase the registration accuracy of LungRegNet, we generated vessel-enhanced images by computing pulmonary vasculature probability maps prior to the network prediction. RESULTS We performed fivefold cross-validation on ten 4D-CT datasets from our department. To compare with other methods, we also tested our method on 10 separate DIRLAB datasets, which provide 300 manual landmark pairs per case for target registration error (TRE) calculation. Our results suggest that LungRegNet achieves better registration accuracy in terms of TRE than other deep learning-based methods reported in the literature on the DIRLAB datasets. Compared with conventional DIR methods, LungRegNet generates comparable registration accuracy, with TRE smaller than 2 mm. The integration of both the discriminator and the pulmonary vessel enhancement into the network was crucial for obtaining high registration accuracy in 4D-CT lung DIR. The mean and standard deviation of TRE were 1.00 ± 0.53 mm and 1.59 ± 1.58 mm on our datasets and the DIRLAB datasets, respectively. CONCLUSIONS An unsupervised deep learning-based method has been developed to rapidly and accurately register 4D-CT lung images. LungRegNet outperformed its deep learning-based peers and achieved excellent registration accuracy in terms of TRE.
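Both subnetworks depend on a differentiable warping step that resamples the moving image with the predicted DVF. A minimal PyTorch sketch of that spatial-transformer step is below; the voxel-unit, (dz, dy, dx)-ordered DVF convention is an assumption, and the generator/discriminator training loop is out of scope.

```python
# Minimal differentiable warp of a moving volume by a predicted DVF.
import torch
import torch.nn.functional as F

def warp(moving: torch.Tensor, dvf: torch.Tensor) -> torch.Tensor:
    """moving: (B, 1, D, H, W); dvf: (B, 3, D, H, W) in voxel units,
    channels assumed ordered (dz, dy, dx)."""
    b, _, d, h, w = moving.shape
    # Identity sampling grid in grid_sample's normalized [-1, 1] coordinates.
    identity = torch.eye(3, 4, device=moving.device).unsqueeze(0).repeat(b, 1, 1)
    grid = F.affine_grid(identity, list(moving.shape), align_corners=True)
    # Convert voxel displacements to normalized units; grid's last axis is (x, y, z).
    scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1), 2.0 / max(d - 1, 1)],
                         device=moving.device)
    disp = dvf.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]] * scale   # reorder to (dx, dy, dz)
    return F.grid_sample(moving, grid + disp, align_corners=True)
```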
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
8. Pan L, Shi F, Xiang D, Yu K, Duan L, Zheng J, Chen X. OCTRexpert: A feature-based 3D registration method for retinal OCT images. IEEE Trans Image Process. 2020;29:3885-3897. [PMID: 31995490; DOI: 10.1109/tip.2020.2967589]
Abstract
Medical image registration can be used to study longitudinal and cross-sectional data, quantitatively monitor disease progression, and guide computer-assisted diagnosis and treatment. However, deformable registration, which enables more precise and quantitative comparison, has not been well developed for retinal optical coherence tomography (OCT) images. This paper proposes a new 3D registration approach for retinal OCT data called OCTRexpert. To the best of our knowledge, the proposed algorithm is the first fully 3D registration approach for retinal OCT images that can be applied to longitudinal OCT images of both normal subjects and subjects with serious pathology. In this approach, a pre-processing method is first performed to remove eye motion artifacts, and then a novel design-detection-deformation strategy is applied for the registration. In the design step, a set of features is designed for each voxel in the image. In the detection step, active voxels are selected and point-to-point correspondences between the subject and template images are established. In the deformation step, the image is hierarchically deformed according to the detected correspondences in a multi-resolution fashion. The proposed method was evaluated on a dataset of longitudinal OCT images from 20 healthy subjects and 4 subjects diagnosed with serious choroidal neovascularization (CNV). Experimental results show that the proposed registration algorithm consistently yields statistically significant improvements in both the Dice similarity coefficient and the average unsigned surface error compared with other registration methods.
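The deformation step turns sparse detected correspondences into a dense displacement field. The sketch below interpolates matched points with a thin-plate-spline RBF as a stand-in for the paper's hierarchical multi-resolution deformation; the function name, smoothing value, and single-resolution simplification are illustrative assumptions.

```python
# Sketch: sparse correspondences -> dense displacement field via RBF interpolation.
import numpy as np
from scipy.interpolate import RBFInterpolator

def dense_displacement(src_pts: np.ndarray, dst_pts: np.ndarray, shape: tuple):
    """src_pts, dst_pts: (N, 3) matched voxel coordinates; shape: volume (D, H, W).
    Returns a (D, H, W, 3) field of per-voxel (dz, dy, dx) displacements."""
    interpolator = RBFInterpolator(src_pts, dst_pts - src_pts,
                                   kernel='thin_plate_spline', smoothing=1.0)
    # Evaluate on every voxel; for real volumes, evaluate on a coarse grid
    # and upsample, since dense evaluation is memory-hungry.
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing='ij'),
                    axis=-1).reshape(-1, 3)
    return interpolator(grid).reshape(*shape, 3)
```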