1. Osinde NO, Andreff N. Multicriteria assessment of optical coherence tomography using non-raster trajectories. J Microsc 2025; 298:27-43. [PMID: 39740027] [DOI: 10.1111/jmi.13383]
Abstract
This article presents a qualitative, quantitative, and experimental analysis of optical coherence tomography (OCT) volumes obtained using different families of non-raster trajectories. We propose a multicriteria analysis for assessing the scan trajectories used to obtain OCT volumetric point cloud data. The novel criteria include the exploitation/exploration ratio of the acquired OCT data, the smoothness of the scan trajectory, and the availability of a fast preview of the acquired OCT data, in addition to the conventional criteria of time and quality (expressed as volume similarity rather than slice-by-slice image quality). The proposed set of criteria will be useful for assessing and optimising OCT scan trajectories in various applications, including robot-assisted in vivo optical biopsy. We show that scanning with non-raster trajectories improves the rate of data acquisition without degrading OCT volume quality: such trajectories are fast, smooth, and reduce wear on the galvanometer scanners. In particular, the rosette scan trajectory, the preferred non-raster trajectory, provided balanced performance, with good clarity at both the centre and the periphery of the scanned object.
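For readers unfamiliar with non-raster scanning, the sketch below generates a generic rosette trajectory of the kind evaluated in this work: a fast radial oscillation multiplied by a slow azimuthal rotation, so the beam repeatedly crosses the centre while gradually sweeping the periphery. This is a minimal illustration with placeholder frequencies, not the authors' exact parameterisation.

```python
# Minimal sketch of a generic rosette scan trajectory (placeholder frequencies,
# not the authors' parameterisation): the beam position is the product of a
# fast radial oscillation and a slow azimuthal rotation.
import numpy as np

def rosette_trajectory(n_samples=20000, f_fast=700.0, f_slow=11.0,
                       radius=1.0, duration=0.1):
    """Return (x, y) galvo coordinates over `duration` seconds."""
    t = np.linspace(0.0, duration, n_samples)
    r = radius * np.cos(2 * np.pi * f_fast * t)   # fast radial oscillation
    x = r * np.cos(2 * np.pi * f_slow * t)        # slow azimuthal rotation
    y = r * np.sin(2 * np.pi * f_slow * t)
    return x, y

x, y = rosette_trajectory()
print(x.shape, float(np.hypot(x, y).max()))       # stays within the scan radius
```

Plotting y against x reveals the petal pattern; the sample density is highest where the petals meet at the centre, which matches the clarity behaviour described in the abstract.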
Affiliation(s)
- Nahashon O Osinde
  - Université de Franche-Comté, CNRS, AS2M Department, FEMTO-ST Institute, Besançon, France
- Nicolas Andreff
  - Université de Franche-Comté, CNRS, AS2M Department, FEMTO-ST Institute, Besançon, France
2. Mao H, Ma Y, Zhang D, Meng Y, Ma S, Qiao Y, Fu H, Shan C, Chen D, Zhao Y, Zhang J. -Net: Retinal OCTA Image Stitching via Multi-Scale Representation Learning and Dynamic Location Guidance. IEEE J Biomed Health Inform 2025; 29:482-494. [PMID: 39321005] [DOI: 10.1109/jbhi.2024.3467256]
Abstract
Optical coherence tomography angiography (OCTA) plays a crucial role in quantifying and analyzing retinal vascular diseases. However, the limited field of view (FOV) inherent in most commercial OCTA imaging systems poses a significant challenge for clinicians, restricting the ability to analyze larger retinal regions at high resolution. Automatic stitching of OCTA scans of adjacent regions may provide a promising solution to extend the region of interest. However, commonly used stitching algorithms struggle to achieve effective alignment because of the noise, artifacts, and dense vasculature present in OCTA images. To address these challenges, we propose a novel retinal OCTA image stitching network, named -Net, which integrates multi-scale representation learning and dynamic location guidance. In the first stage, an image registration network with progressive multi-resolution feature fusion is proposed to derive deep semantic information effectively. Additionally, we introduce a dynamic guidance strategy to locate the foveal avascular zone (FAZ) and constrain registration errors in overlapping vascular regions. In the second stage, an image fusion network based on multiple mask constraints and adjacent image aggregation (AIA) strategies is developed to further eliminate artifacts in the overlapping areas of stitched images, thereby achieving precise vessel alignment. To validate the effectiveness of our method, we conduct a series of experiments on two carefully constructed datasets, OPTOVUE-OCTA and SVision-OCTA. Experimental results demonstrate that our method outperforms other image stitching methods and effectively generates high-quality wide-field OCTA images, achieving structural similarity index (SSIM) scores of 0.8264 and 0.8014 on the two datasets, respectively.
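The quality scores quoted above are standard structural similarity (SSIM) values; the minimal sketch below shows how such a score is computed with scikit-image, using synthetic arrays as stand-ins for a stitched en-face image and its wide-field reference (the paper's datasets are not reproduced here).

```python
# Hedged sketch: computing an SSIM score of the kind reported above.
# The arrays are synthetic placeholders, not the OPTOVUE/SVision data.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((512, 512))                     # wide-field reference (placeholder)
stitched = np.clip(reference + 0.05 * rng.random((512, 512)), 0.0, 1.0)

score = structural_similarity(reference, stitched, data_range=1.0)
print(f"SSIM = {score:.4f}")                           # paper reports 0.8264 / 0.8014
```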
3. Pan L, Cai Z, Hu D, Zhu W, Shi F, Tao W, Wu Q, Xiao S, Chen X. Research on registration method for enface image using multi-feature fusion. Phys Med Biol 2024; 69:215037. [PMID: 39413811] [DOI: 10.1088/1361-6560/ad87a5]
Abstract
Objective. The purpose of this work is to accurately and quickly register optical coherence tomography (OCT) projection (enface) images acquired at adjacent time points, and to address the interference caused by CNV lesions on the registration features. Approach. A multi-feature registration strategy was proposed, in which a combined feature (com-feature) containing 3D information, intersection information and the SURF feature was designed. Firstly, the coordinates of all feature points were extracted as combined features, and these feature coordinates were then added to the initial vascular coordinate set simplified by the Douglas-Peucker algorithm to form the point set for registration. Finally, the coherent point drift registration algorithm was used to register the enface coordinate point sets of adjacent time series. Main results. The newly designed features significantly improve the success rate of global registration of vascular networks in enface images, while the simplification step greatly improves the registration speed while preserving vascular features. The MSE, DSC and runtime of the proposed method are 0.07993, 0.9693 and 42.7016 s, respectively. Significance. CNV is a serious retinal disease in ophthalmology. Registering OCT enface images at adjacent time points makes it possible to monitor disease progression in a timely manner and assist doctors in making diagnoses. The proposed method not only improves the accuracy of OCT enface image registration but also significantly reduces its runtime. It gives good registration results in clinical routine and provides a more efficient method for clinical diagnosis and treatment.
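The Douglas-Peucker simplification mentioned above is a standard polyline-thinning algorithm; the sketch below is a textbook NumPy implementation of that step applied to a toy vessel branch. The subsequent coherent point drift registration is not reproduced, and nothing here is the authors' implementation.

```python
# Textbook Douglas-Peucker polyline simplification (the thinning step applied
# to vessel coordinates before point-set registration; illustrative only).
import numpy as np

def douglas_peucker(points, eps):
    """points: (N, 2) ordered (x, y) samples; eps: perpendicular-distance tolerance."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    line = end - start
    norm = np.hypot(*line)
    if norm == 0.0:                                   # degenerate segment
        dists = np.hypot(*(points - start).T)
    else:                                             # perpendicular distances to the chord
        dists = np.abs(line[0] * (points[:, 1] - start[1])
                       - line[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > eps:                              # keep the farthest point, recurse
        left = douglas_peucker(points[:idx + 1], eps)
        right = douglas_peucker(points[idx:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

t = np.linspace(0, 2 * np.pi, 500)
vessel = np.column_stack([t, np.sin(3 * t)])          # toy vessel branch
print(douglas_peucker(vessel, eps=0.05).shape)        # far fewer points remain
```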
Affiliation(s)
- Lingjiao Pan
  - Department of Electrical Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Zhongwang Cai
  - Department of Mechanical Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Derong Hu
  - Department of Mechanical Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Weifang Zhu
  - Department of Information Engineering, Suzhou University, Suzhou, People's Republic of China
- Fei Shi
  - Department of Information Engineering, Suzhou University, Suzhou, People's Republic of China
- Weige Tao
  - Department of Electrical Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Quanyu Wu
  - Department of Electrical Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Shuyan Xiao
  - Department of Electrical Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Xinjian Chen
  - Department of Information Engineering, Suzhou University, Suzhou, People's Republic of China
4. Hu Y, Feng Y, Long X, Zheng D, Liu G, Lu Y, Ren Q, Huang Z. Megahertz multi-parametric ophthalmic OCT system for whole eye imaging. Biomed Opt Express 2024; 15:3000-3017. [PMID: 38855668] [PMCID: PMC11161356] [DOI: 10.1364/boe.517757]
Abstract
An ultrahigh-speed, wide-field OCT system capable of imaging the anterior segment, the posterior segment, and ocular biometry is crucial for obtaining comprehensive ocular parameters and quantifying the size of ocular pathology. Here, we demonstrate a multi-parametric ophthalmic OCT system with a speed of up to 1 MHz for wide-field imaging of the retina and 50 kHz for anterior chamber imaging and ocular biometric measurement. A spectrum correction algorithm is proposed to ensure the accurate pairing of adjacent A-lines and raise the A-scan speed from 500 kHz to 1 MHz for retinal imaging. A registration method employing position feedback signals is introduced, reducing pixel offsets between forward and reverse galvanometer scanning by a factor of 2.3. Experimental validation on glass sheets and the human eye confirms the feasibility and efficacy of the system. In addition, we propose a revised formula to determine the "true" fundus size using all axial length parameters from different fields of view. The efficient algorithms and compact design enhance the system's compatibility with clinical requirements, showing promise for widespread commercialization.
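The paper aligns forward and reverse galvanometer sweeps using position feedback signals. As a point of comparison, the sketch below shows a common software-only baseline for the same problem: flip the reverse B-scan and estimate the residual pixel offset by phase correlation. This is an illustrative alternative on toy data, not the authors' method.

```python
# Software-only baseline (not the paper's position-feedback method): estimate
# the lateral offset between a forward B-scan and a flipped reverse B-scan by
# phase correlation, then shift the reverse frame to compensate.
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

rng = np.random.default_rng(0)
forward = rng.random((512, 400))                        # toy forward B-scan (depth, x)
reverse = np.roll(forward, 3, axis=1)[:, ::-1]          # reverse sweep with a 3-pixel offset

reverse_flipped = reverse[:, ::-1]                      # undo the scan-direction reversal
offset, error, _ = phase_cross_correlation(forward, reverse_flipped)
corrected = nd_shift(reverse_flipped, shift=offset, order=1, mode="nearest")
print("estimated (z, x) offset:", offset)               # ~(0, -3) for this toy case
```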
Affiliation(s)
- Yicheng Hu
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen 518071, China
- Yutao Feng
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
  - The College of Biochemical Engineering, Beijing Union University, Beijing 100021, China
- Xing Long
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Dongye Zheng
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen 518071, China
- Gangjun Liu
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen 518071, China
- Yanye Lu
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
  - Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Qiushi Ren
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen 518071, China
- Zhiyu Huang
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen 518071, China
5. Cornelio A, Collazo Martinez A, Lu H, Jones C, Kashani AH. Rigid alignment method for secondary analyses of optical coherence tomography volumes. Biomed Opt Express 2024; 15:938-952. [PMID: 38404338] [PMCID: PMC10890897] [DOI: 10.1364/boe.508123]
Abstract
Optical coherence tomography (OCT) provides micron-level resolution of retinal tissue and is widely used in ophthalmology. Millions of pre-existing OCT images are available in research and clinical databases. Analysis of these data often requires, or can benefit significantly from, image registration and reduction of speckle noise. One way to reduce noise is to align and average multiple OCT scans. We propose to use surface feature information and whole-volume information in a novel and simple pipeline that can rigidly align and average multiple previously acquired 3D OCT volumes from a commercially available OCT device. This pipeline significantly improves both image quality and the visualization of clinically relevant image features over single, unaligned volumes from the commercial scanner.
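The sketch below illustrates the align-and-average principle in its simplest form: estimate a 3D translation for each repeat volume by phase correlation and average the shifted volumes to suppress speckle. The authors' pipeline additionally uses surface features and full rigid alignment; this is only a simplified stand-in on synthetic data.

```python
# Simplified align-and-average sketch (translation only, synthetic volumes):
# each repeat volume is registered to the first by phase correlation and the
# shifted volumes are averaged to reduce speckle noise.
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def align_and_average(volumes):
    """volumes: list of 3D arrays (z, x, y) from repeated acquisitions."""
    reference = volumes[0].astype(np.float64)
    accum = reference.copy()
    for vol in volumes[1:]:
        offset, _, _ = phase_cross_correlation(reference, vol)
        accum += nd_shift(vol.astype(np.float64), shift=offset, order=1)
    return accum / len(volumes)

rng = np.random.default_rng(1)
clean = rng.random((32, 64, 64))
repeats = [np.roll(clean, (0, 2 * i, -i), axis=(0, 1, 2))
           + 0.3 * rng.random((32, 64, 64)) for i in range(4)]
print(align_and_average(repeats).shape)
```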
Affiliation(s)
- Andrew Cornelio
  - Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287, USA
- Hanzhang Lu
  - Department of Radiology and Radiological Science, Johns Hopkins University Hospital, Baltimore, MD 21287, USA
- Craig Jones
  - Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287, USA
  - Department of Radiology and Radiological Science, Johns Hopkins University Hospital, Baltimore, MD 21287, USA
  - Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Amir H Kashani
  - Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287, USA
  - Department of Biomedical Engineering, Johns Hopkins Hospital, Baltimore, MD 21287, USA
6. Zhuang Z, Chen D, Liang Z, Zhang S, Liu Z, Chen W, Qi L. Automatic 3D reconstruction of an anatomically correct upper airway from endoscopic long range OCT images. Biomed Opt Express 2023; 14:4594-4608. [PMID: 37791278] [PMCID: PMC10545183] [DOI: 10.1364/boe.496812]
Abstract
Endoscopic airway optical coherence tomography (OCT) is a non-invasive, high-resolution imaging modality for the diagnosis and analysis of airway-related diseases. During OCT imaging of the upper airway, reliably characterizing its 3D structure requires automatically detecting the airway lumen contour, correcting rotational distortion, and performing 3D airway reconstruction. Based on a long-range endoscopic OCT imaging system equipped with a magnetic tracker, we present a fully automatic framework to reconstruct a 3D upper-airway model with correct bending anatomy. Our method includes an automatic segmentation method for the upper airway based on a dynamic programming algorithm, an automatic correction method for the initial rotation angle error of the detected 2D airway lumen contour, and an anatomic bending method combined with the centerline detected from the magnetically tracked imaging probe. The proposed automatic reconstruction framework is validated on experimental datasets acquired from two healthy adults. The results show that the proposed framework allows fully automated 3D airway reconstruction from OCT images and thus has the potential to improve the analysis efficiency of endoscopic OCT images.
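Lumen segmentation by dynamic programming is typically posed as a minimum-cost path across the angular columns of a polar-unwrapped frame. The sketch below implements that generic formulation with a simple gradient cost and smoothness penalty; the authors' actual cost function and constraints may differ.

```python
# Generic dynamic-programming lumen detection on a polar-unwrapped OCT frame
# (rows = depth, columns = angle): pick, per angular column, the depth of the
# strongest edge while keeping the contour smooth between neighbouring columns.
import numpy as np

def dp_lumen_contour(polar_img, max_jump=2, smooth_penalty=0.5):
    grad = -np.abs(np.gradient(polar_img.astype(np.float64), axis=0))  # low cost at strong edges
    n_depth, n_angle = grad.shape
    cost = grad.copy()
    back = np.zeros((n_depth, n_angle), dtype=int)
    for j in range(1, n_angle):                       # accumulate column by column
        for i in range(n_depth):
            lo, hi = max(0, i - max_jump), min(n_depth, i + max_jump + 1)
            prev = cost[lo:hi, j - 1] + smooth_penalty * np.abs(np.arange(lo, hi) - i)
            k = int(np.argmin(prev))
            cost[i, j] += prev[k]
            back[i, j] = lo + k
    contour = np.empty(n_angle, dtype=int)
    contour[-1] = int(np.argmin(cost[:, -1]))
    for j in range(n_angle - 1, 0, -1):               # backtrack the optimal path
        contour[j - 1] = back[contour[j], j]
    return contour                                    # depth index per angular column

frame = np.random.default_rng(2).random((200, 360))
frame[80:, :] += 1.0                                  # toy lumen wall at depth 80
print(dp_lumen_contour(frame)[:5])
```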
Affiliation(s)
- Zhijian Zhuang
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - The Third People's Hospital of Zhuhai, 166 Hezheng Rd., Xiangzhou District, Zhuhai, Guangdong, 519000, China
- Delang Chen
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
- Zhichao Liang
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
- Shuangyang Zhang
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
- Zhenyang Liu
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
- Wufan Chen
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
- Li Qi
  - School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
  - Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong, 510515, China
7. Fan X, Li Z, Li Z, Wang X, Liu R, Luo Z, Huang H. Automated Learning for Deformable Medical Image Registration by Jointly Optimizing Network Architectures and Objective Functions. IEEE Trans Image Process 2023; 32:4880-4892. [PMID: 37624710] [DOI: 10.1109/tip.2023.3307215]
Abstract
Deformable image registration plays a critical role in various tasks of medical image analysis. A successful registration algorithm, whether derived from conventional energy optimization or from deep networks, requires tremendous effort from computer experts to design the registration energy well or to carefully tune network architectures for the medical data available in a given registration task or scenario. This paper proposes an automated learning registration algorithm (AutoReg) that cooperatively optimizes both architectures and their corresponding training objectives, enabling non-computer experts to conveniently find off-the-shelf registration algorithms for various registration scenarios. Specifically, we establish a triple-level framework that embraces the search for both network architectures and objectives with a cooperating optimization. Extensive experiments on multiple volumetric datasets and various registration scenarios demonstrate that AutoReg can automatically learn an optimal deep registration network for given volumes and achieve state-of-the-art performance. The automatically learned network also improves computational efficiency over the mainstream UNet architecture, from 0.558 to 0.270 seconds for a volume pair on the same configuration.
8. Shen J, Chen Z, Peng Y, Zhang S, Xu C, Zhu W, Liu H, Chen X. Morphological prognosis prediction of choroid neovascularization from longitudinal SD-OCT images. Med Phys 2023; 50:4839-4853. [PMID: 36789971] [DOI: 10.1002/mp.16294]
Abstract
BACKGROUND: Choroid neovascularization (CNV) has no obvious symptoms in the early stage, but its gradual expansion, leakage, rupture, and bleeding can cause vision loss and central scotoma; in severe cases it leads to permanent visual impairment. PURPOSE: Accurate prediction of disease progression can greatly help ophthalmologists formulate appropriate treatment plans and prevent further deterioration. We therefore aim to predict the growth trend of CNV to help the attending physician judge the effectiveness of treatment. METHODS: We develop a CNN-based method for CNV growth prediction. First, we design a registration network that rigidly registers the spectral-domain optical coherence tomography (SD-OCT) B-scans of each subject at different time points to eliminate retinal displacements in the longitudinal data. Then, considering the correlation of longitudinal data, we propose a co-segmentation network with a correlation attention guidance (CAG) module to cooperatively segment the CNV lesions of a group of follow-up images and use them as input for growth prediction. Finally, based on the above registration and segmentation networks, an encoder-recurrent-decoder framework is developed for CNV growth prediction, in which an attention-based gated recurrent unit (AGRU) is embedded as the recurrent neural network to recurrently learn robust representations. RESULTS: The registration network rigidly registers the follow-up images to the reference images with a root mean square error (RMSE) of 6.754 pixels. Compared with other state-of-the-art segmentation methods, the proposed segmentation network achieves a high Dice similarity coefficient (Dsc) of 85.27%. Built on these networks, the proposed growth prediction network can predict future CNV morphology, and the predicted CNV reaches a Dsc of 83.69% against the ground truth, which is highly consistent with the actual follow-up visit. CONCLUSION: The proposed registration and segmentation networks make growth prediction possible. In addition, accurately predicting CNV growth lets us know the efficacy of the drug for an individual in advance, creating opportunities for formulating appropriate treatment plans.
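The growth-prediction stage follows an encoder-recurrent-decoder pattern. The sketch below is a minimal PyTorch scaffold of that pattern using a plain GRU; the paper's attention-based GRU (AGRU), CAG co-segmentation and registration stages are not reproduced, and all layer sizes are placeholders.

```python
# Minimal encoder-recurrent-decoder scaffold (plain GRU, placeholder sizes),
# predicting the next lesion mask from a short sequence of follow-up masks.
# This is a generic sketch, not the paper's AGRU-based network.
import torch
import torch.nn as nn

class EncRecDec(nn.Module):
    def __init__(self, hidden=256, size=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        feat = 32 * (size // 4) * (size // 4)
        self.rnn = nn.GRU(feat, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, feat)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, seq):                       # seq: (B, T, 1, H, W)
        b, t, c, h, w = seq.shape
        f = self.enc(seq.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.rnn(f)                      # recurrent summary of the follow-ups
        z = self.fc(out[:, -1]).reshape(b, 32, h // 4, w // 4)
        return self.dec(z)                        # predicted next mask, (B, 1, H, W)

x = torch.rand(2, 3, 1, 64, 64)                   # two subjects, three follow-up masks
print(EncRecDec(size=64)(x).shape)                # torch.Size([2, 1, 64, 64])
```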
Affiliation(s)
- Jiayan Shen
  - MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province, China
- Zhongyue Chen
  - MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province, China
- Yuanyuan Peng
  - MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province, China
- Siqi Zhang
  - Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai, China
- Chenan Xu
  - MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province, China
- Weifang Zhu
  - MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province, China
- Haiyun Liu
  - Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai, China
- Xinjian Chen
  - MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, Jiangsu Province, China
  - State Key Laboratory of Radiation Medicine and Protection, Soochow University, Suzhou, China
9. Rivas-Villar D, Motschi AR, Pircher M, Hitzenberger CK, Schranz M, Roberts PK, Schmidt-Erfurth U, Bogunović H. Automated inter-device 3D OCT image registration using deep learning and retinal layer segmentation. Biomed Opt Express 2023; 14:3726-3747. [PMID: 37497506] [PMCID: PMC10368062] [DOI: 10.1364/boe.493047]
Abstract
Optical coherence tomography (OCT) is the most widely used imaging modality in ophthalmology. There are multiple variants of OCT imaging capable of producing complementary information, so registering these complementary volumes is desirable in order to combine their information. In this work, we propose a novel automated pipeline to register OCT images produced by different devices. This pipeline is based on two steps: a multi-modal 2D en-face registration based on deep learning, and a Z-axis (axial) registration based on retinal layer segmentation. We evaluate our method using data from a Heidelberg Spectralis and an experimental PS-OCT device. The empirical results demonstrate high-quality registrations, with mean errors of approximately 46 µm for the 2D registration and 9.59 µm for the Z-axis registration. Such registrations may support multiple clinical applications, such as the validation of layer segmentations.
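The Z-axis step can be pictured as follows: once the same retinal surface has been segmented in both volumes, every A-scan is shifted so that this surface sits at a common depth, bringing the two devices into axial correspondence. The sketch below illustrates that idea on synthetic data; it is not the authors' implementation.

```python
# Axial (Z) alignment sketch: shift each A-scan so that a segmented retinal
# surface lies at a common target depth. Synthetic data, illustrative only.
import numpy as np

def align_to_layer(volume, layer_depth, target_depth):
    """volume: (z, x, y); layer_depth: (x, y) depth of a segmented surface."""
    aligned = np.zeros_like(volume)
    n_z = volume.shape[0]
    for ix in range(volume.shape[1]):
        for iy in range(volume.shape[2]):
            dz = int(round(target_depth - layer_depth[ix, iy]))   # per-A-scan shift
            src = volume[:, ix, iy]
            aligned[max(dz, 0):n_z + min(dz, 0), ix, iy] = \
                src[max(-dz, 0):n_z - max(dz, 0)]
    return aligned

vol = np.random.default_rng(3).random((128, 32, 32))
surface = 40 + 5 * np.random.default_rng(4).random((32, 32))      # toy segmented layer
print(align_to_layer(vol, surface, target_depth=64).shape)
```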
Affiliation(s)
- David Rivas-Villar
  - Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain
  - Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- Alice R Motschi
  - Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Michael Pircher
  - Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Christoph K Hitzenberger
  - Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Markus Schranz
  - Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Philipp K Roberts
  - Medical University of Vienna, Department of Ophthalmology and Optometry, Vienna, Austria
- Ursula Schmidt-Erfurth
  - Medical University of Vienna, Department of Ophthalmology and Optometry, Vienna, Austria
- Hrvoje Bogunović
  - Medical University of Vienna, Department of Ophthalmology and Optometry, Christian Doppler Lab for Artificial Intelligence in Retina, Vienna, Austria
10. Ma G, Son T, Adejumo T, Yao X. Rotational Distortion and Compensation in Optical Coherence Tomography with Anisotropic Pixel Resolution. Bioengineering (Basel) 2023; 10:313. [PMID: 36978706] [PMCID: PMC10045376] [DOI: 10.3390/bioengineering10030313]
Abstract
Accurate image registration is essential for eye-movement compensation in optical coherence tomography (OCT) and OCT angiography (OCTA). The spatial resolution of an OCT instrument is typically anisotropic, i.e., it differs between the lateral and axial dimensions. When OCT images have anisotropic pixel resolution, residual distortion (RD) and false translation (FT) are always observed after image registration for rotational movement. In this study, RD and FT were quantitatively analyzed over different degrees of rotational movement and various lateral-to-axial pixel resolution ratios (RL/RA). RD and FT provide the evaluation criteria for image registration. The theoretical analysis confirmed that RD and FT increase significantly with the rotation angle and with RL/RA. An image resizing assisting registration (RAR) strategy was proposed for accurate image registration. The performance of direct registration (DR) and RAR for retinal OCT and OCTA images was quantitatively compared. Experimental results confirmed that an unnormalized RL/RA causes RD and FT, and that RAR can effectively improve the performance of OCT and OCTA image registration and distortion compensation.
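The RAR strategy rests on a simple idea: resample the image to isotropic pixels before any rotation is estimated or applied, then resample back. The sketch below applies that idea to a single B-scan with placeholder resolution values; it illustrates the concept rather than the paper's full registration procedure.

```python
# Sketch of the resizing idea behind RAR: when lateral and axial pixel sizes
# differ (RL/RA != 1), resample to isotropic pixels before rotating, then
# resample back, so the rotation does not introduce residual distortion or
# false translation. Resolution values below are placeholders.
import numpy as np
from scipy.ndimage import zoom, rotate

def rotate_anisotropic(image, angle_deg, lateral_res_um, axial_res_um):
    """image: (z, x) B-scan; resolutions in micrometres per pixel."""
    ratio = lateral_res_um / axial_res_um              # RL / RA
    iso = zoom(image, (1.0, ratio), order=1)           # make x pixels match the axial size
    iso_rot = rotate(iso, angle_deg, reshape=False, order=1, mode="nearest")
    return zoom(iso_rot, (1.0, 1.0 / ratio), order=1)  # back to the original grid

bscan = np.random.default_rng(5).random((480, 300))
compensated = rotate_anisotropic(bscan, angle_deg=4.0,
                                 lateral_res_um=12.0, axial_res_um=3.5)
print(compensated.shape)
```

Performing the rotation on the isotropically resampled frame, rather than directly on the anisotropic grid, is the essence of the compensation strategy described above.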
Affiliation(s)
- Guangying Ma
  - Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL 60607, USA
- Taeyoon Son
  - Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL 60607, USA
- Tobiloba Adejumo
  - Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL 60607, USA
- Xincheng Yao
  - Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL 60607, USA
  - Department of Ophthalmology and Visual Sciences, University of Illinois Chicago, Chicago, IL 60612, USA
11. Hu D, Pan L, Chen X, Xiao S, Wu Q. A novel vessel segmentation algorithm for pathological en-face images based on matched filter. Phys Med Biol 2023; 68. [PMID: 36745931] [DOI: 10.1088/1361-6560/acb98a]
Abstract
The vascular information in fundus images can provide an important basis for the detection and prediction of retina-related diseases. However, the presence of lesions such as choroidal neovascularization can seriously interfere with the normal vascular areas in optical coherence tomography (OCT) fundus images. In this paper, a novel method is proposed for detecting blood vessels in pathological OCT fundus images. First, an automatic localization and filling method is used in the preprocessing step to reduce pathological interference. Then, for vessel extraction, a pore ablation method based on a capillary bundle model is applied. The ablation method processes the image after matched-filter feature extraction, which largely eliminates the interference caused by diseased blood vessels. Finally, morphological operations are used to obtain the main vascular features. Experimental results on the dataset show that the proposed method achieves Dice, precision and TPR of 0.88 ± 0.03, 0.79 ± 0.05 and 0.66 ± 0.04, respectively. Effective extraction of vascular information from OCT fundus images is of great significance for the diagnosis and treatment of retina-related diseases.
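The matched-filter stage referred to above is usually realised as a bank of oriented, zero-mean Gaussian-profile kernels whose maximum response is kept at each pixel. The sketch below implements that classic filter bank only; the paper's lesion-filling and pore-ablation stages are omitted, and the profile sign may need flipping if vessels appear bright rather than dark.

```python
# Classic oriented matched filtering for vessel enhancement: convolve the
# en-face image with rotated, zero-mean Gaussian-profile kernels and keep the
# maximum response per pixel. (Feature-extraction stage only.)
import numpy as np
from scipy.ndimage import convolve, rotate

def matched_filter_kernel(sigma=1.5, length=9):
    half = int(3 * sigma)
    xs = np.arange(-half, half + 1)
    profile = -np.exp(-xs ** 2 / (2 * sigma ** 2))    # dark-vessel Gaussian profile
    kernel = np.tile(profile, (length, 1))            # extend along the vessel axis
    return kernel - kernel.mean()                     # zero-mean matched filter

def vessel_response(image, n_angles=12, sigma=1.5):
    base = matched_filter_kernel(sigma)
    responses = [convolve(image.astype(np.float64),
                          rotate(base, angle, reshape=True, order=1))
                 for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False)]
    return np.max(responses, axis=0)                  # best orientation per pixel

enface = np.random.default_rng(6).random((256, 256))
print(vessel_response(enface).shape)                  # threshold this map to get vessels
```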
Affiliation(s)
- Derong Hu
  - School of Mechanical Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Lingjiao Pan
  - School of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Xinjian Chen
  - School of Electronics and Information Engineering, Soochow University, Suzhou, People's Republic of China
- Shuyan Xiao
  - School of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
- Quanyu Wu
  - School of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
12. Intensity-based nonrigid endomicroscopic image mosaicking incorporating texture relevance for compensation of tissue deformation. Comput Biol Med 2021; 142:105169. [PMID: 34974384] [DOI: 10.1016/j.compbiomed.2021.105169]
Abstract
Image mosaicking has emerged as a universal technique to broaden the field of view of probe-based confocal laser endomicroscopy (pCLE) imaging systems. However, because probe-tissue contact forces and optical components affect imaging quality, existing mosaicking methods remain insufficient for practical challenges. In this paper, we present the texture-encoded sum of conditional variance (TESCV) as a novel similarity metric and incorporate it into a sequential mosaicking scheme to simultaneously correct rigid probe shift and nonrigid tissue deformation. TESCV combines intensity dependency and texture relevance to quantify the differences between pCLE image frames, where a discriminative binary descriptor named fully cross-detected local derivative pattern (FCLDP) is designed to extract more detailed structural textures. Furthermore, we analytically derive the closed-form gradient of TESCV with respect to the transformation variables. Experiments on the circular dataset highlighted the advantage of the TESCV metric in improving mosaicking performance compared with four other recently published metrics. The comparison with four other state-of-the-art mosaicking methods on the spiral and manual datasets indicated that the proposed TESCV-based method not only works stably under different contact forces but is also suitable for both low- and high-resolution imaging systems. With more accurate and delicate mosaics, the proposed method holds promise for meeting clinical demands for intraoperative optical biopsy.
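The intensity component of TESCV builds on the sum of conditional variance (SCV), which measures how tightly the intensities of one frame cluster within each intensity bin of the other. The sketch below computes plain SCV from a joint binning of two frames; the texture-encoding (FCLDP) term that distinguishes TESCV is not reproduced.

```python
# Plain sum of conditional variance (SCV) between two frames, computed from a
# joint binning of their intensities. Lower values indicate better alignment.
# The texture-encoding (FCLDP) part of TESCV is omitted; this is the baseline.
import numpy as np

def sum_of_conditional_variance(reference, moving, n_bins=32):
    edges = np.linspace(reference.min(), reference.max(), n_bins + 1)[1:-1]
    ref_bins = np.digitize(reference.ravel(), edges)   # bin index per pixel
    mov = moving.ravel().astype(np.float64)
    scv = 0.0
    for b in range(n_bins):
        values = mov[ref_bins == b]
        if values.size > 1:
            scv += values.size * values.var()           # count-weighted conditional variance
    return scv / mov.size

rng = np.random.default_rng(7)
frame_a = rng.random((128, 128))
frame_b = 0.8 * frame_a + 0.1 * rng.random((128, 128))  # related frame, different gain
print(f"SCV (related)   = {sum_of_conditional_variance(frame_a, frame_b):.4f}")
print(f"SCV (unrelated) = {sum_of_conditional_variance(frame_a, rng.random((128, 128))):.4f}")
```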