1.
Tian X, Anantrasirichai N, Nicholson L, Achim A. The quest for early detection of retinal disease: 3D CycleGAN-based translation of optical coherence tomography into confocal microscopy. Biological Imaging 2024; 4:e15. PMID: 39776613; PMCID: PMC11704141; DOI: 10.1017/s2633903x24000163.
Abstract
Optical coherence tomography (OCT) and confocal microscopy are pivotal in retinal imaging, offering distinct advantages and limitations. In vivo OCT offers rapid, noninvasive imaging but can suffer from clarity issues and motion artifacts, while ex vivo confocal microscopy, which provides high-resolution, cellular-detailed color images, is invasive and raises ethical concerns. To bridge the benefits of both modalities, we propose a novel framework based on an unsupervised 3D CycleGAN for translating unpaired in vivo OCT images to ex vivo confocal microscopy images. This marks the first attempt to exploit the inherent 3D information of OCT and translate it into the rich, detailed color domain of confocal microscopy. We also introduce a unique dataset, OCT2Confocal, comprising mouse OCT and confocal retinal images, which facilitates development and establishes a benchmark for cross-modal image translation research. Our model has been evaluated both quantitatively and qualitatively, achieving Fréchet Inception Distance (FID) scores of 0.766 and Kernel Inception Distance (KID) scores as low as 0.153, as well as leading subjective mean opinion scores (MOS). Despite limited data, our model demonstrated superior image fidelity and quality over existing methods. Our approach effectively synthesizes color information from 3D confocal images, closely approximating target outcomes and suggesting enhanced potential for diagnostic and monitoring applications in ophthalmology.
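The FID reported in this abstract measures the distance between Gaussians fitted to deep-feature statistics of real and generated images. A minimal sketch of that computation (our own illustrative code, not the authors' implementation; assumes NumPy and SciPy):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet (Wasserstein-2) distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 @ cov2)^(1/2)).
    For FID, (mu, cov) are fitted to Inception feature activations."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):      # discard tiny imaginary residue
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))
```

In practice the means and covariances are estimated from Inception-v3 activations over the real and synthesized image sets; lower values indicate closer distributions.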
Affiliation(s)
- Xin Tian
- Visual Information Laboratory, University of Bristol, Bristol, UK
- Lindsay Nicholson
- Autoimmune Inflammation Research, University of Bristol, Bristol, UK
- Alin Achim
- Visual Information Laboratory, University of Bristol, Bristol, UK
2.
Opoku M, Weyori BA, Adekoya AF, Adu K. CLAHE-CapsNet: Efficient retina optical coherence tomography classification using capsule networks with contrast limited adaptive histogram equalization. PLoS One 2023; 18:e0288663. PMID: 38032915; PMCID: PMC10688733; DOI: 10.1371/journal.pone.0288663.
Abstract
Manual detection of eye diseases from retinal Optical Coherence Tomography (OCT) images by ophthalmologists is time-consuming, tedious, and prone to errors. Previous researchers have developed computer-aided systems using deep-learning-based convolutional neural networks (CNNs) to speed up the detection of retinal diseases. However, these methods struggle to achieve better classification performance because of noise in the OCT images. Moreover, the pooling operations in CNNs reduce image resolution, which limits model performance. The contributions of this paper are twofold. First, it provides a comprehensive literature review of current state-of-the-art methods for retinal OCT image classification. Second, it proposes a capsule network coupled with contrast limited adaptive histogram equalization (CLAHE-CapsNet) for retinal OCT image classification. CLAHE was implemented as preprocessing layers to minimize noise in the retinal image and improve model performance. A three-layer convolutional capsule network was designed with carefully chosen hyperparameters. The dataset used for this study was released by the University of California San Diego (UCSD) and consists of 84,495 retinal OCT images (JPEG) in four categories (NORMAL, CNV, DME, and DRUSEN). The images passed through a grading system of multiple tiers of trained expert graders for verification and correction of image labels. Evaluation experiments were conducted and the results were compared with state-of-the-art models using accuracy, sensitivity, precision, specificity, and AUC. The proposed model achieved the best performance, with 97.7%, 99.5%, and 99.3% on overall accuracy (OA), overall sensitivity (OS), and overall precision (OP), respectively. These results indicate that the proposed model can be adopted to help ophthalmologists detect retinal diseases from OCT images.
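CLAHE, the preprocessing step the model's name refers to, caps the histogram before equalization so that contrast is enhanced without over-amplifying noise. A simplified global variant is sketched below (our own illustrative code, not the paper's; real CLAHE additionally tiles the image and interpolates between per-tile mappings):

```python
import numpy as np

def clip_limited_equalize(img, n_bins=256, clip_limit=0.01):
    """Simplified, global variant of contrast-limited histogram
    equalization for a uint8 grayscale image. Clipping the histogram
    caps the slope of the intensity mapping, which is what limits
    noise amplification in CLAHE."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0, 256))
    hist = hist.astype(float) / img.size
    # Clip the normalized histogram and redistribute the excess
    # mass uniformly across all bins.
    excess = np.clip(hist - clip_limit, 0, None).sum()
    hist = np.minimum(hist, clip_limit) + excess / n_bins
    cdf = np.cumsum(hist)
    lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]
```

OpenCV's `cv2.createCLAHE` provides the full tiled, interpolated algorithm when a production implementation is needed.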
Affiliation(s)
- Michael Opoku
- Department of Computer Science and Informatics, University of Energy and Natural Resource, Sunyani, Ghana
- Benjamin Asubam Weyori
- Department of Computer Science and Informatics, University of Energy and Natural Resource, Sunyani, Ghana
- Adebayo Felix Adekoya
- Department of Computer Science and Informatics, University of Energy and Natural Resource, Sunyani, Ghana
- Kwabena Adu
- Department of Computer Science and Informatics, University of Energy and Natural Resource, Sunyani, Ghana
3.
Wang T, Li H, Pu T, Yang L. Microsurgery Robots: Applications, Design, and Development. Sensors (Basel) 2023; 23:8503. PMID: 37896597; PMCID: PMC10611418; DOI: 10.3390/s23208503.
Abstract
Microsurgical techniques have been widely utilized in various surgical specialties, such as ophthalmology, neurosurgery, and otolaryngology, which require intricate and precise surgical tool manipulation on a small scale. In microsurgery, operations on delicate vessels or tissues require high standards in surgeons' skills. This exceptionally high requirement in skills leads to a steep learning curve and lengthy training before the surgeons can perform microsurgical procedures with quality outcomes. The microsurgery robot (MSR), which can improve surgeons' operation skills through various functions, has received extensive research attention in the past three decades. There have been many review papers summarizing the research on MSR for specific surgical specialties. However, an in-depth review of the relevant technologies used in MSR systems is limited in the literature. This review details the technical challenges in microsurgery, and systematically summarizes the key technologies in MSR with a developmental perspective from the basic structural mechanism design, to the perception and human-machine interaction methods, and further to the ability in achieving a certain level of autonomy. By presenting and comparing the methods and technologies in this cutting-edge research, this paper aims to provide readers with a comprehensive understanding of the current state of MSR research and identify potential directions for future development in MSR.
Affiliation(s)
- Tiexin Wang
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China; (T.W.); (H.L.); (T.P.)
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Haoyu Li
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China; (T.W.); (H.L.); (T.P.)
- Tanhong Pu
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China; (T.W.); (H.L.); (T.P.)
- Liangjing Yang
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China; (T.W.); (H.L.); (T.P.)
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Department of Mechanical Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
4.
Li Y, Han Y, Li Z, Zhong Y, Guo Z. A transfer learning-based multimodal neural network combining metadata and multiple medical images for glaucoma type diagnosis. Sci Rep 2023; 13:12076. PMID: 37495578; PMCID: PMC10372152; DOI: 10.1038/s41598-022-27045-6.
Abstract
Glaucoma is an acquired optic neuropathy that can lead to irreversible vision loss. Deep learning (DL), especially convolutional neural networks (CNNs), has achieved considerable success in medical image recognition thanks to the availability of large-scale annotated datasets. However, obtaining fully annotated datasets like ImageNet in the medical field remains a challenge. Meanwhile, single-modal approaches remain unreliable and inaccurate because of the diversity of glaucoma types and the complexity of symptoms. In this paper, a new multimodal glaucoma dataset is constructed and a new multimodal neural network for glaucoma diagnosis and classification (GMNNnet) is proposed to address both issues. Specifically, the dataset includes labels for the five most important types of glaucoma, electronic medical records, and four kinds of high-resolution medical images. GMNNnet consists of three branches: branch 1, consisting of convolutional, cyclic, and transposition layers, processes patient metadata; branch 2 uses a U-Net to extract features from glaucoma segmentation based on domain knowledge; and branch 3 uses ResFormer to process glaucoma medical images directly. The outputs of branches 1 and 2 are merged and then processed by a CatBoost classifier. We introduce gradient-weighted class activation mapping (Grad-CAM) to increase the interpretability of the model, and a transfer-learning method for the case of insufficient training data, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical imaging tasks. The results show that GMNNnet better captures the high-dimensional information of glaucoma and achieves excellent performance on multimodal data.
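The interpretability method referenced here (gradient-weighted class activation mapping, commonly Grad-CAM) weights a convolutional layer's activation channels by their spatially averaged gradients and keeps the positive part. An illustrative NumPy sketch of that final computation, operating on precomputed activations and gradients (our own code, not the paper's):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from a conv layer's activations (K, H, W)
    and the gradients of the class score w.r.t. them. Channel weights
    are the spatially averaged gradients; the map is the ReLU of the
    weighted sum of channels, normalized to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))             # (K,)
    cam = np.einsum('k,khw->hw', weights, activations)
    cam = np.maximum(cam, 0.0)                        # ReLU
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

In a full pipeline the activations and gradients come from a framework's autograd (e.g., a forward/backward pass through the CNN); the map is then upsampled to the input resolution and overlaid on the image.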
Affiliation(s)
- Yi Li
- College of Information Science and Engineering, Northeastern University, Shenyang, Liaoning, China.
- Yujie Han
- College of Information Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Zihan Li
- College of Software, Northeastern University, Shenyang, Liaoning, China
- Yi Zhong
- College of Metallurgy, Northeastern University, Shenyang, Liaoning, China
- Zhifen Guo
- College of Information Science and Engineering, Northeastern University, Shenyang, Liaoning, China
5.
Marciniak T, Stankiewicz A, Zaradzki P. Neural Networks Application for Accurate Retina Vessel Segmentation from OCT Fundus Reconstruction. Sensors (Basel) 2023; 23:1870. PMID: 36850467; PMCID: PMC9968084; DOI: 10.3390/s23041870.
Abstract
The use of neural networks for retinal vessel segmentation has gained significant attention in recent years. Most research on retinal blood vessel segmentation is based on fundus images. In this study, we examine five neural network architectures for accurately segmenting vessels in fundus images reconstructed from 3D OCT scan data. OCT-based fundus reconstructions are of much lower quality than color fundus photographs because of noise and lower, disproportionate resolution. The fundus image reconstruction process was based on segmentation of the retinal layers in B-scans. Three reconstruction variants were proposed and then used to detect blood vessels with neural networks. We evaluated performance on a custom dataset of 24 3D OCT scans (with manual annotations by an ophthalmologist) using 6-fold cross-validation and demonstrated segmentation accuracy of up to 98%. Our results indicate that neural networks are a promising approach to segmenting retinal vessels from a properly reconstructed fundus.
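The 6-fold cross-validation used for evaluation partitions the scans so each one serves as validation data exactly once. A minimal index-splitting sketch (our own illustrative code, not the authors'):

```python
import numpy as np

def kfold_splits(n_samples, k=6, seed=0):
    """Shuffled k-fold index splits for cross-validated evaluation;
    yields (train_idx, val_idx) pairs so that every sample appears
    in exactly one validation fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        yield train, fold
```

Per-fold metrics (e.g., segmentation accuracy) are then averaged across the k validation folds.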
6.
Ibragimova RR, Gilmanov II, Lopukhova EA, Lakman IA, Bilyalov AR, Mukhamadeev TR, Kutluyarov RV, Idrisova GM. Algorithm of segmentation of OCT macular images to analyze the results in patients with age-related macular degeneration. Bulletin of Russian State Medical University 2022. DOI: 10.24075/brsmu.2022.062.
Abstract
Age-related macular degeneration (AMD) is one of the main causes of blindness and low vision in people beyond working age. Optical coherence tomography (OCT) results are essential for diagnosing the disease. Developing a recommendation system to analyze OCT images would reduce the time needed to process visual data and decrease the probability of errors in a doctor's work. The purpose of the study was to develop a segmentation algorithm for analyzing macular OCT results in patients with AMD, providing a correct prediction of the AMD stage based on the form of the discovered pathologies. A program was developed in Python using the PyTorch and TensorFlow libraries, and its quality was estimated on macular OCT images of 51 patients with early, intermediate, and late AMD. The segmentation algorithm was built on a convolutional neural network with the U-Net architecture and trained on macular OCT images of 125 patients (197 eyes). The proposed algorithm correctly segmented 98.1% of the areas on OCT images that are most essential for diagnosing and staging AMD. Weighted sensitivity and specificity of the AMD stage classifier amounted to 83.8% and 84.9%, respectively. The developed algorithm is promising as a recommendation system that classifies AMD and supports decisions regarding treatment strategy.
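The weighted sensitivity and specificity quoted for the stage classifier are per-class one-vs-rest rates averaged by class prevalence. An illustrative sketch of that computation (our own code, not the paper's):

```python
import numpy as np

def weighted_sens_spec(y_true, y_pred, n_classes):
    """Per-class sensitivity and specificity (one-vs-rest), averaged
    with weights equal to each class's share of the samples."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    sens, spec, weights = [], [], []
    for c in range(n_classes):
        pos, neg = y_true == c, y_true != c
        tp = np.sum(pos & (y_pred == c))
        fn = np.sum(pos & (y_pred != c))
        tn = np.sum(neg & (y_pred != c))
        fp = np.sum(neg & (y_pred == c))
        sens.append(tp / (tp + fn) if tp + fn else 0.0)
        spec.append(tn / (tn + fp) if tn + fp else 0.0)
        weights.append(pos.mean())      # class prevalence as weight
    w = np.array(weights)
    return float(np.dot(w, sens)), float(np.dot(w, spec))
```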
Affiliation(s)
- II Gilmanov
- Ufa State Aviation Technical University, Ufa, Russia
- EA Lopukhova
- Ufa State Aviation Technical University, Ufa, Russia
- IA Lakman
- Bashkir State Medical University, Ufa, Russia
- AR Bilyalov
- Bashkir State Medical University, Ufa, Russia
- RV Kutluyarov
- Ufa State Aviation Technical University, Ufa, Russia
- GM Idrisova
- Bashkir State Medical University, Ufa, Russia
7.
Yang D, Zhao H, Han T. Learning feature-rich integrated comprehensive context networks for automated fundus retinal vessel analysis. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.03.061.
8.
Pancreatic cancer segmentation in unregistered multi-parametric MRI with adversarial learning and multi-scale supervision. Neurocomputing 2022. DOI: 10.1016/j.neucom.2021.09.058.
9.
Bradley LJ, Ward A, Hsue MCY, Liu J, Copland DA, Dick AD, Nicholson LB. Quantitative Assessment of Experimental Ocular Inflammatory Disease. Front Immunol 2021; 12:630022. PMID: 34220797; PMCID: PMC8250853; DOI: 10.3389/fimmu.2021.630022.
Abstract
Ocular inflammation imposes a high medical burden on patients and substantial costs on the health-care systems that manage these often chronic and debilitating diseases. Many clinical phenotypes are recognized, and classifying the severity of inflammation in an eye with uveitis is an ongoing challenge. With the widespread application of optical coherence tomography in the clinic has come the impetus for more robust methods to compare disease between different patients and different treatment centers. Animal models can recapitulate many of the features seen in the clinic, but until recently the quality of imaging available has lagged behind that applied in humans. In the model of experimental autoimmune uveitis (EAU), we highlight three linked clinical states that produce retinal vulnerability to inflammation, all different from healthy tissue but distinct from each other. Deploying longitudinal, multimodal imaging approaches can be coupled to analysis of changes in tissue architecture, cell content, and function. This can enrich our understanding of pathology, increase the sensitivity with which the impacts of therapeutic interventions are assessed, and address questions of tissue regeneration and repair. Modern image processing, including the application of artificial intelligence, in the context of such disease models can lay a foundation for new approaches to monitoring tissue health.
Affiliation(s)
- Lydia J Bradley
- School of Cellular and Molecular Medicine, University of Bristol, Bristol, United Kingdom
- Amy Ward
- School of Cellular and Molecular Medicine, University of Bristol, Bristol, United Kingdom
- Madeleine C Y Hsue
- School of Cellular and Molecular Medicine, University of Bristol, Bristol, United Kingdom
- Jian Liu
- Academic Unit of Ophthalmology, Translational Health Sciences, University of Bristol, Bristol, United Kingdom
- David A Copland
- Academic Unit of Ophthalmology, Translational Health Sciences, University of Bristol, Bristol, United Kingdom
- Andrew D Dick
- School of Cellular and Molecular Medicine, University of Bristol, Bristol, United Kingdom; Academic Unit of Ophthalmology, Translational Health Sciences, University of Bristol, Bristol, United Kingdom; University College London, Institute of Ophthalmology, London, United Kingdom
- Lindsay B Nicholson
- School of Cellular and Molecular Medicine, University of Bristol, Bristol, United Kingdom
10.
Li J, Feng C, Lin X, Qian X. Utilizing GCN and Meta-Learning Strategy in Unsupervised Domain Adaptation for Pancreatic Cancer Segmentation. IEEE J Biomed Health Inform 2021; 26:79-89. PMID: 34057903; DOI: 10.1109/jbhi.2021.3085092.
Abstract
Automated pancreatic cancer segmentation is highly crucial for computer-assisted diagnosis. The general practice is to label images from selected modalities, since labeling all modalities is expensive. This practice has generated significant interest in transferring knowledge from the labeled modalities to unlabeled ones. However, imaging parameter inconsistency between modalities leads to a domain shift that limits transfer learning performance. Therefore, we propose an unsupervised domain adaptation segmentation framework for pancreatic cancer based on a GCN and a meta-learning strategy. Our model first transforms the source image into a target-like visual appearance through the synergistic collaboration between image and feature adaptation. Specifically, we employ encoders incorporating adversarial learning to separate domain-invariant features from domain-specific ones to achieve visual appearance translation. Then, the meta-learning strategy, with its good generalization capabilities, is exploited to strike a reasonable balance in the training of the source and transformed images. Thus, the model acquires more correlated features and improves its adaptability to the target images. Moreover, a GCN is introduced to supervise the high-dimensional abstract features directly related to the segmentation outcomes, ensuring the integrity of key structural features. Extensive experiments on four multi-parameter pancreatic-cancer magnetic resonance imaging datasets demonstrate improved performance in all adaptation directions, confirming our model's effectiveness for unlabeled pancreatic cancer images. The results are promising for reducing the annotation burden and improving the performance of computer-aided diagnosis of pancreatic cancer. Our source codes will be released at https://github.com/SJTUBME-QianLab/UDAseg once this manuscript is accepted for publication.
11.
Li M, Chen Y, Ji Z, Xie K, Yuan S, Chen Q, Li S. Image Projection Network: 3D to 2D Image Segmentation in OCTA Images. IEEE Trans Med Imaging 2020; 39:3343-3354. PMID: 32365023; DOI: 10.1109/tmi.2020.2992244.
Abstract
We present an image projection network (IPN), a novel end-to-end architecture that achieves 3D-to-2D image segmentation in optical coherence tomography angiography (OCTA) images. Our key insight is to build a projection learning module (PLM) that uses a unidirectional pooling layer to conduct effective feature selection and dimension reduction concurrently. By combining multiple PLMs, the proposed network can take 3D OCTA data as input and output 2D segmentation results such as retinal vessel segmentation. It provides a new idea for the quantification of retinal indicators: without retinal layer segmentation and without projection maps. We tested the performance of our network on two crucial retinal image segmentation tasks: retinal vessel (RV) segmentation and foveal avascular zone (FAZ) segmentation. The experimental results on 316 OCTA volumes demonstrate that the IPN is an effective implementation of 3D-to-2D segmentation networks, and that the use of multi-modality and volumetric information makes IPN outperform the baseline methods.
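The essence of unidirectional pooling is reducing the volume along the depth (A-scan) axis only, so lateral resolution is preserved while the data collapses toward a 2D en-face map. A NumPy sketch of one such stage (illustrative only; the actual PLM is a learned pooling layer inside a CNN):

```python
import numpy as np

def unidirectional_pool(volume, factor=4):
    """One stage of depth-only max pooling on a (D, H, W) volume:
    pool along the A-scan (depth) axis while leaving the en-face
    (H, W) resolution untouched. Stacking stages until D == 1
    yields a 2D projection."""
    d, h, w = volume.shape
    d_trim = (d // factor) * factor          # drop the remainder slices
    v = volume[:d_trim].reshape(d_trim // factor, factor, h, w)
    return v.max(axis=1)
```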
12.
Syga P, Sieluzycki C, Krzyzanowska-Berkowska P, Iskander DR. A Fully Automated 3D In-Vivo Delineation and Shape Parameterization of the Human Lamina Cribrosa in Optical Coherence Tomography. IEEE Trans Biomed Eng 2018; 66:1422-1428. PMID: 30295609; DOI: 10.1109/tbme.2018.2873893.
Abstract
OBJECTIVE A fully automated method for delineation of the lamina cribrosa in optical coherence tomography (OCT) is proposed. It assesses the three-dimensional (3D) shape of the lamina cribrosa in vivo, based on a series of OCT B-scans. METHODS The algorithm comprises several image processing steps and is based on active contour detection performed along three orthogonal directions of the B-scan data cuboid. The delineated 3D lamina cribrosa shape is then parameterized with a fourth-order polynomial of two variables [Formula: see text] using the least-squares method. Datasets from a total of 255 subjects in three groups were analyzed: 92 primary open angle glaucoma patients, 77 glaucoma suspects, and 86 controls. RESULTS Statistically significant differences were found in the coefficients of the monomials x^i y^j, with both i and j even, between patients and controls and between suspects and controls. CONCLUSIONS From the data obtained, it can be concluded that the mean shape parameterization of the lamina cribrosa of glaucoma suspects appears similar to that of glaucoma patients but is markedly different from that of healthy controls. SIGNIFICANCE The proposed algorithm enables, for the first time, automatic estimation of the lamina cribrosa shape in 3D, providing clinicians with a time-efficient discrimination tool supporting glaucoma diagnosis.
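The shape-parameterization step amounts to a linear least-squares fit of a bivariate polynomial surface to the delineated points. An illustrative sketch (our own code; the paper uses a fourth-order polynomial, shown here as the default):

```python
import numpy as np

def fit_poly_surface(x, y, z, order=4):
    """Least-squares fit of z = sum_{i+j<=order} a_ij x^i y^j.
    x, y, z are flat arrays of sample coordinates; returns a dict
    mapping each (i, j) exponent pair to its fitted coefficient."""
    terms = [(i, j) for i in range(order + 1)
                    for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])  # design matrix
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return dict(zip(terms, coeffs))
```

The fitted coefficients (here, those of the even-even monomials) are then compared between subject groups.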
13.
M. S, Issac A, Dutta MK. An automated and robust image processing algorithm for glaucoma diagnosis from fundus images using novel blood vessel tracking and bend point detection. Int J Med Inform 2018; 110:52-70. DOI: 10.1016/j.ijmedinf.2017.11.015.
14.
Ahdi A, Rabbani H, Vard A. A hybrid method for 3D mosaicing of OCT images of macula and Optic Nerve Head. Comput Biol Med 2017; 91:277-290. PMID: 29102825; DOI: 10.1016/j.compbiomed.2017.10.031.
Abstract
A mosaiced image is the result of merging two or more images with overlapping areas to generate a high-resolution panorama of a large scene. A wide view of Optical Coherence Tomography (OCT) images can help clinicians in diagnosis by enabling simultaneous analysis of different portions of the gathered information. In this paper, we present a novel method for mosaicing of 3D OCT images of the macula and Optic Nerve Head (ONH), carried out in two phases: registration of OCT projections and mosaicing of B-scans. In the first phase, to register the OCT projection images of the macula and ONH, their corresponding color fundus image is considered as the main frame, and the geometrical features of their curvelet-based extracted vessels are employed for registration. The registration parameters obtained are then applied to all x-y slices of the 3D OCT images of the macula and ONH. In the B-scan mosaicing phase, the overlapping areas of corresponding reprojected B-scans are extracted, and the best registration model is obtained from line-by-line matching of corresponding A-scans in the overlapping areas. This registration model is then applied to the remaining A-scans of the ONH-based B-scan. The aligned B-scans of macular OCT and OCT images of the ONH are finally blended, and 3D mosaiced OCT images are obtained. Two criteria are considered for assessment of the mosaiced images: the quality of alignment/mosaicing of B-scans, and the loss of clinical information from the B-scans after mosaicing. Average grading values of 3.5 ± 0.74 and 3.63 ± 0.55 (out of 4) were obtained for the first and second criteria, respectively.
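The line-by-line matching of corresponding A-scans can be illustrated with a simple cross-correlation shift estimate between two 1D depth profiles (our own simplified stand-in, not the paper's registration model):

```python
import numpy as np

def ascan_shift(ref, moving):
    """Estimate the integer axial shift that best aligns `moving`
    to `ref` by maximizing their full cross-correlation. A positive
    result means `moving` sits that many samples shallower than
    `ref` and must be shifted down to match."""
    ref = ref - ref.mean()          # remove DC offset before correlating
    moving = moving - moving.mean()
    corr = np.correlate(ref, moving, mode='full')
    return int(np.argmax(corr)) - (len(moving) - 1)
```

A full B-scan registration would aggregate such per-A-scan estimates (or fit a low-order model over them) across the overlapping region.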
Affiliation(s)
- Alieh Ahdi
- Dept. of Biomedical Engineering, School of Advanced Technologies in Medicine, Medical Image & Signal Processing Research Center, Isfahan University of Medical Sciences, Iran
- Hossein Rabbani
- Dept. of Biomedical Engineering, School of Advanced Technologies in Medicine, Medical Image & Signal Processing Research Center, Isfahan University of Medical Sciences, Iran.
- Alireza Vard
- Dept. of Biomedical Engineering, School of Advanced Technologies in Medicine, Medical Image & Signal Processing Research Center, Isfahan University of Medical Sciences, Iran
15.
Antony BJ, Carass A, Lang A, Kim BJ, Zack DJ, Prince JL. Longitudinal Analysis of Mouse SDOCT Volumes. Proc SPIE 2017; 10137. PMID: 29138527; DOI: 10.1117/12.2257432.
Abstract
Spectral-domain optical coherence tomography (SDOCT), in addition to its routine clinical use in the diagnosis of ocular diseases, has begun to find increasing use in animal studies. Animal models are frequently used to study disease mechanisms as well as to test drug efficacy. In particular, SDOCT provides the ability to study animals longitudinally and non-invasively over long periods of time. However, the lack of anatomical landmarks makes the longitudinal scan acquisition prone to inconsistencies in orientation. Here, we propose a method for the automated registration of mouse SDOCT volumes. The method begins by accurately segmenting the blood vessels and the optic nerve head region in the scans using a pixel classification approach. The segmented vessel maps from follow-up scans were registered using an iterative closest point (ICP) algorithm to the baseline scan to allow for the accurate longitudinal tracking of thickness changes. Eighteen SDOCT volumes from a light damage model study were used to train a random forest utilized in the pixel classification step. The area under the curve (AUC) in a leave-one-out study for the retinal blood vessels and the optic nerve head (ONH) was found to be 0.93 and 0.98, respectively. The complete proposed framework, the retinal vasculature segmentation and the ICP registration, was applied to a secondary set of scans obtained from a light damage model. A qualitative assessment of the registration showed no registration failures.
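The ICP registration used here to align follow-up vessel maps to the baseline alternates nearest-neighbour matching with a closed-form rigid update. A minimal 2D sketch (illustrative code, not the authors' implementation):

```python
import numpy as np

def icp_2d(src, dst, n_iters=20):
    """Minimal rigid 2D ICP: repeatedly match each source point to
    its nearest target point, then solve the optimal rotation and
    translation in closed form (Kabsch / SVD step). Returns the
    accumulated rotation, translation, and the aligned points."""
    src = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(n_iters):
        # nearest-neighbour correspondences (brute force)
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid alignment of src onto matched
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

With well-separated vessel landmarks and a small initial misalignment, the nearest-neighbour step finds the right correspondences and the method converges in a few iterations.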
Affiliation(s)
- Bhavna J Antony
- Department of Electrical and Computer Engineering, Johns Hopkins University
- Aaron Carass
- Department of Electrical and Computer Engineering, Johns Hopkins University
- Andrew Lang
- Department of Electrical and Computer Engineering, Johns Hopkins University
- Byung-Jin Kim
- Wilmer Eye Institute, Johns Hopkins University School of Medicine
- Donald J Zack
- Wilmer Eye Institute, Johns Hopkins University School of Medicine
- Jerry L Prince
- Department of Electrical and Computer Engineering, Johns Hopkins University
16.
Imaging and measurement of the preretinal space in vitreomacular adhesion and vitreomacular traction by a new spectral domain optical coherence tomography analysis. Retina 2017; 37:1839-1846. PMID: 28045789; DOI: 10.1097/iae.0000000000001439.
Abstract
PURPOSE To evaluate a new method for volumetric imaging of the preretinal space (also known as the subhyaloid, subcortical, or retrocortical space) and investigate differences in preretinal space volume in vitreomacular adhesion (VMA) and vitreomacular traction (VMT). METHODS Nine patients with VMA and 13 with VMT were prospectively evaluated. Automatic inner limiting membrane line segmentation, which exploits graph search theory implementation, and posterior cortical vitreous line segmentation were performed on 141 horizontal spectral domain optical coherence tomography B-scans per patient. Vertical distances (depths) between the posterior cortical vitreous and inner limiting membrane lines were calculated for each optical coherence tomography B-scan acquired. The derived distances were merged and visualized as a color depth map that represented the preretinal space between the posterior surface of the hyaloid and the anterior surface of the retina. The early treatment d retinopathy study macular map was overlaid onto final virtual maps, and preretinal space volumes were calculated for each early treatment diabetic retinopathy study map sector. RESULTS Volumetric maps representing preretinal space volumes were created for each patient in the VMA and VMT groups. Preretinal space volumes were larger in all early treatment diabetic retinopathy study map macular regions in the VMT group compared with those in the VMA group. The differences reached statistical significance in all early treatment diabetic retinopathy study sectors, except for the superior outer macula and temporal outer macula where significance values were P = 0.05 and P = 0.08, respectively. Overall, the relative differences in preretinal space volumes between the VMT and VMA groups varied from 2.7 to 4.3 in inner regions and 1.8 to 2.9 in outer regions. CONCLUSION Our study provides evidence of significant differences in preretinal space volume between eyes with VMA and those with VMT. 
This may be useful not only in the investigation of preretinal space properties in VMA and VMT, but also in other conditions, such as age-related macular degeneration, diabetic retinopathy, and central retinal vein occlusion.
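The depth-map construction described above reduces to a per-pixel axial distance between two segmented surfaces, integrated over sector masks. A minimal sketch of that idea; the function names, the assumed axial resolution, and the sector mask are illustrative stand-ins, not the authors' implementation:

```python
import numpy as np

def preretinal_depth_map(pcv_heights, ilm_heights, axial_res_um=3.9):
    """Depth map (in micrometers) of the preretinal space.

    pcv_heights, ilm_heights: (n_bscans, n_columns) arrays of surface
    positions in pixels, with larger values lying deeper in the volume.
    axial_res_um is an assumed axial pixel spacing.
    """
    # Clip negatives: where the surfaces touch, the space has zero depth.
    depth_px = np.clip(ilm_heights - pcv_heights, 0, None)
    return depth_px * axial_res_um

def sector_volume(depth_um, sector_mask, pixel_area_um2):
    """Integrate depth over one (hypothetical) macular-map sector mask."""
    return float(np.sum(depth_um[sector_mask]) * pixel_area_um2)  # um^3
```

Summing the depth map under each sector mask yields one volume per macular-map sector, which is how the per-sector group comparison above could be computed.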
|
17
|
Chen Q, Niu S, Yuan S, Fan W, Liu Q. High-low reflectivity enhancement based retinal vessel projection for SD-OCT images. Med Phys 2016; 43:5464. [DOI: 10.1118/1.4962470] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023] Open
|
18
|
Prentašic P, Heisler M, Mammo Z, Lee S, Merkur A, Navajas E, Beg MF, Šarunic M, Loncaric S. Segmentation of the foveal microvasculature using deep learning networks. JOURNAL OF BIOMEDICAL OPTICS 2016; 21:75008. [PMID: 27401936 DOI: 10.1117/1.jbo.21.7.075008] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/26/2016] [Accepted: 06/16/2016] [Indexed: 05/22/2023]
Abstract
Accurate segmentation of the retinal microvasculature is a critical step in the quantitative analysis of the retinal circulation, which can be an important marker in evaluating the severity of retinal diseases. As manual segmentation remains the gold standard for segmentation of optical coherence tomography angiography (OCT-A) images, we present a method for automating the segmentation of OCT-A images using deep neural networks (DNNs). Eighty OCT-A images of the foveal region in 12 eyes from 6 healthy volunteers were acquired using a prototype OCT-A system and subsequently manually segmented. Automated segmentation of the blood vessels in the OCT-A images was then performed by classifying each pixel into a vessel or nonvessel class using deep convolutional neural networks. When the automated results were compared against the manual segmentation results, a maximum mean accuracy of 0.83 was obtained. When compared with inter- and intrarater accuracies, the automated results were comparable to those of the human raters, suggesting that segmentation using DNNs is comparable to a second manual rater. As manually segmenting the retinal microvasculature is a tedious task, a reliable automated output such as DNN-based segmentation is an important step toward fully automated analysis.
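The per-pixel setup the abstract describes can be sketched as follows. The network itself is omitted; only the patch-per-pixel representation and the pixelwise accuracy used for evaluation are shown, with patch size and helper names as illustrative assumptions:

```python
import numpy as np

def extract_patches(image, half=2):
    """One (2*half+1)^2 patch per pixel, reflect-padded at the borders,
    so every pixel can be classified from its local neighbourhood."""
    padded = np.pad(image, half, mode="reflect")
    h, w = image.shape
    patches = np.empty((h * w, (2 * half + 1) ** 2), dtype=image.dtype)
    for i in range(h):
        for j in range(w):
            patches[i * w + j] = padded[i:i + 2 * half + 1,
                                        j:j + 2 * half + 1].ravel()
    return patches

def pixel_accuracy(pred, truth):
    """Fraction of pixels where the binary prediction matches the manual label."""
    return float(np.mean(pred.astype(bool) == truth.astype(bool)))
```

A classifier trained on such patches with vessel/nonvessel labels, scored with `pixel_accuracy` against a manual segmentation, mirrors the evaluation protocol that produced the 0.83 figure above.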
Affiliation(s)
- Pavle Prentašic
- University of Zagreb, Faculty of Electrical Engineering and Computing, Unska ul. 3, Zagreb 10000, Croatia
- Morgan Heisler
- Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, British Columbia V5A1S6, Canada
- Zaid Mammo
- University of British Columbia, Department of Ophthalmology and Visual Science, Eye Care Center, 2550 Willow Street, Vancouver, British Columbia V5Z 3N9, Canada
- Sieun Lee
- Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, British Columbia V5A1S6, Canada
- Andrew Merkur
- University of British Columbia, Department of Ophthalmology and Visual Science, Eye Care Center, 2550 Willow Street, Vancouver, British Columbia V5Z 3N9, Canada
- Eduardo Navajas
- University of British Columbia, Department of Ophthalmology and Visual Science, Eye Care Center, 2550 Willow Street, Vancouver, British Columbia V5Z 3N9, Canada
- Mirza Faisal Beg
- Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, British Columbia V5A1S6, Canada
- Marinko Šarunic
- Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, British Columbia V5A1S6, Canada
- Sven Loncaric
- University of Zagreb, Faculty of Electrical Engineering and Computing, Unska ul. 3, Zagreb 10000, Croatia
|
19
|
Miri MS, Abràmoff MD, Lee K, Niemeijer M, Wang JK, Kwon YH, Garvin MK. Multimodal Segmentation of Optic Disc and Cup From SD-OCT and Color Fundus Photographs Using a Machine-Learning Graph-Based Approach. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:1854-66. [PMID: 25781623 PMCID: PMC4560662 DOI: 10.1109/tmi.2015.2412881] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
In this work, a multimodal approach is proposed to use the complementary information from fundus photographs and spectral domain optical coherence tomography (SD-OCT) volumes in order to segment the optic disc and cup boundaries. The problem is formulated as an optimization problem whose optimal solution is obtained using a machine-learning graph-theoretic method. In particular, the fundus photograph is first registered to the 2D projection of the SD-OCT volume. Three in-region cost functions are designed using a random forest classifier corresponding to the three regions of cup, rim, and background. Next, the volumes are resampled to create radial scans in which the Bruch's Membrane Opening (BMO) endpoints are easier to detect. Similar to the in-region cost function design, the disc-boundary cost function is designed using a random forest classifier whose features are created by applying the Haar Stationary Wavelet Transform (SWT) to the radial projection image. A multisurface graph-based approach utilizes the in-region and disc-boundary cost images to segment the boundaries of the optic disc and cup under feasibility constraints. The approach is evaluated on 25 multimodal image pairs from 25 subjects in a leave-one-out fashion (by subject). The performance of the graph-theoretic approach is compared using three sets of cost functions: 1) unimodal (OCT-only) in-region costs, 2) multimodal in-region costs, and 3) multimodal in-region and disc-boundary costs. Results show that the multimodal approaches outperform the unimodal approach in segmenting the optic disc and cup.
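The feasibility-constrained, minimum-cost search at the heart of such graph-based segmentation can be illustrated with a simple dynamic-programming analogue: a single boundary traced left to right through a 2-D cost image, with an assumed smoothness constraint. This is a sketch of the general technique, not the authors' multisurface method:

```python
import numpy as np

def min_cost_boundary(cost, max_jump=1):
    """Left-to-right minimum-cost boundary in a 2-D cost image.

    Returns, per column, the row of a boundary minimising total cost,
    subject to a feasibility constraint of at most `max_jump` rows of
    movement between adjacent columns.
    """
    rows, cols = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost
    back = np.zeros((rows, cols), dtype=int)  # backtracking pointers
    for j in range(1, cols):
        for i in range(rows):
            lo, hi = max(0, i - max_jump), min(rows, i + max_jump + 1)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    # Recover the optimal path by backtracking from the cheapest endpoint.
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for j in range(cols - 1, 0, -1):
        path[j - 1] = back[path[j], j]
    return path
```

With cost images derived from classifiers (as in the abstract), the same principle extends to multiple simultaneous surfaces and closed disc/cup boundaries.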
Affiliation(s)
- Mohammad Saleh Miri
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242
- Michael D. Abràmoff
- Department of Ophthalmology and Visual Sciences and the Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242. He is also with the Iowa City VA Health Care System, Iowa City, IA, 52246
- Kyungmoo Lee
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242
- Jui-Kai Wang
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242
- Young H. Kwon
- Department of Ophthalmology and Visual Sciences, The University of Iowa, Iowa City, IA, 52242
- Mona K. Garvin
- Iowa City VA Health Care System, Iowa City, IA, 52246 and the Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242
|
20
|
Stability, structure and scale: improvements in multi-modal vessel extraction for SEEG trajectory planning. Int J Comput Assist Radiol Surg 2015; 10:1227-37. [PMID: 25847663 PMCID: PMC4523698 DOI: 10.1007/s11548-015-1174-5] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2014] [Accepted: 03/09/2015] [Indexed: 11/06/2022]
Abstract
Purpose Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significant associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories, maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving robustness and accuracy of vessel extraction within a SEEG planning system. Methods The developed method accounts for scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency in vesselness responses. The proposed measurement allows the combination of multiple image modalities into a single image that is used within the planning system to visualise critical vessels. Results Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, representing a statistically significant improvement when compared to a semi-automated, single human rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Conclusions Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning, with reduced patient morbidity.
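The Dice similarity coefficient used for evaluation here (and in the related entry below) is the standard overlap measure between two binary segmentations; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|). Two empty masks are defined as a perfect
    match here, which is one common convention."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```

A DSC of 0.89 versus 0.80, as reported above, means the multi-modal extraction overlaps the reference segmentation noticeably more than the single-modality protocol does.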
|
21
|
Retinal image registration using topological vascular tree segmentation and bifurcation structures. Biomed Signal Process Control 2015. [DOI: 10.1016/j.bspc.2014.10.009] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
22
|
Li M, Liu Y, Chen F, Hu D. Including signal intensity increases the performance of blind source separation on brain imaging data. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:551-563. [PMID: 25314698 DOI: 10.1109/tmi.2014.2362519] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
When analyzing brain imaging data, blind source separation (BSS) techniques depend critically on the level of dimensionality reduction. If the reduction is too slight, the BSS model is overfitted and becomes unusable; the reduction level must therefore usually be set relatively heavy. This approach risks discarding useful information and crucially limits the performance of BSS techniques. In this study, a new BSS method that works well even at a slight reduction level is presented. We propose the concept of "signal intensity," which measures the significance of a source. By picking only the sources with significant intensity, the new method avoids overfitted solutions, which are nonexistent artifacts. This enables the reduction level to be set slight and retains more useful dimensions in the preliminary reduction. Comparisons between the new and conventional algorithms were performed on both simulated and real data.
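The paper's "signal intensity" criterion is its own; as a rough, hedged analogue, the sketch below keeps principal components by singular value, i.e. by an intensity-like measure of each source's significance, rather than by a fixed target dimensionality. The thresholding rule is an illustrative stand-in, not the authors' method:

```python
import numpy as np

def reduce_by_intensity(X, threshold):
    """Keep components whose singular value exceeds `threshold`.

    X: (observations, variables) array, assumed mean-centred. Returns the
    reduced data and the retained component directions.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    keep = s > threshold           # significance test per component
    return X @ Vt[keep].T, Vt[keep]
```

The point mirrored from the abstract: selecting components by significance, rather than truncating to a preset small dimension, lets more useful dimensions survive the preliminary reduction.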
|
23
|
Retinal vessel diameter measurements by spectral domain optical coherence tomography. Graefes Arch Clin Exp Ophthalmol 2014; 253:499-509. [PMID: 25128960 DOI: 10.1007/s00417-014-2715-2] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2014] [Revised: 05/15/2014] [Accepted: 06/30/2014] [Indexed: 10/24/2022] Open
Abstract
PURPOSE To describe a spectral domain optical coherence tomography (OCT)-assisted method of measuring retinal vessel diameters. METHODS All patients with an OCT circle scan centered at the optic nerve head using a Spectralis OCT (Heidelberg Engineering, Heidelberg, Germany) were retrospectively reviewed. Individual retinal vessels were identified on infrared reflectance (IR) images and given unique labels both on IR and spectral domain OCT (SD-OCT). Vessel width and vessel types obtained by IR were documented as ground truth. From OCT, measurements of each vessel, including horizontal vessel contour diameter, vertical vessel contour diameter, horizontal hyperreflective core diameter, and reflectance shadowing width, were assessed. RESULTS A total of 220 vessels from 13 eyes of 12 patients were labeled, among which 194 vessels (88 arteries and 65 veins confirmed from IR) larger than 40 microns were included in the study. The mean vessel width obtained from IR was 107.9 ± 36.1 microns. A mean vertical vessel contour diameter of 119.6 ± 29.9 microns and a mean horizontal vessel contour diameter of 124.1 ± 31.1 microns were measured by SD-OCT. Vertical vessel contour diameter did not differ from vessel width in all subgroup analyses. Horizontal vessel contour diameter was not significantly different from vessel width for arteries and had a strong or very strong correlation with vessel width for veins. CONCLUSION In our study, vertical vessel contour diameter measured by current commercially available SD-OCT was consistent with vessel width obtained by IR, with good reproducibility. This SD-OCT-based method could potentially be used as a standard measurement procedure to evaluate retinal vessel diameters and their changes in ocular and systemic disorders.
|
24
|
Adaptive-weighted bilateral filtering and other pre-processing techniques for optical coherence tomography. Comput Med Imaging Graph 2014; 38:526-39. [PMID: 25034317 DOI: 10.1016/j.compmedimag.2014.06.012] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2013] [Revised: 05/16/2014] [Accepted: 06/13/2014] [Indexed: 11/20/2022]
Abstract
This paper presents novel pre-processing image enhancement algorithms for retinal optical coherence tomography (OCT). These images contain a large amount of speckle, causing them to be grainy and of very low contrast. To make these images valuable for clinical interpretation, we propose a novel method to remove speckle while preserving useful information contained in each retinal layer. The process starts with multi-scale despeckling based on a dual-tree complex wavelet transform (DT-CWT). We further enhance the OCT image through a smoothing process that uses a novel adaptive-weighted bilateral filter (AWBF). This offers the desirable property of preserving texture within the OCT image layers. The enhanced OCT image is then segmented to extract inner retinal layers that contain useful information for eye research. Our layer segmentation technique is also performed in the DT-CWT domain. Finally, we describe an OCT/fundus image registration algorithm which is helpful when the two modalities are used together for diagnosis and for information fusion.
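The bilateral filtering this enhancement stage builds on weights each neighbour by both spatial distance and intensity difference, so sharp layer boundaries are preserved while speckle-like variation is smoothed. Below is a plain (non-adaptive) sketch of that base filter; the paper's AWBF adapts the weighting, and the parameter values here are chosen only for illustration:

```python
import numpy as np

def bilateral_filter(img, half=2, sigma_s=1.0, sigma_r=0.1):
    """Plain bilateral filter on a 2-D image.

    Each output pixel is a weighted mean of its (2*half+1)^2 window, with
    weights falling off with spatial distance (sigma_s) and with intensity
    difference from the centre pixel (sigma_r).
    """
    padded = np.pad(img.astype(float), half, mode="reflect")
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed kernel
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 2 * half + 1, j:j + 2 * half + 1]
            # Range weights: penalise intensities unlike the centre pixel.
            rng = np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * win) / np.sum(wgt)
    return out
```

Because the range term suppresses contributions from across an edge, averaging happens mostly within a retinal layer rather than across layer boundaries, which is the texture-preserving property the abstract highlights.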
|
25
|
SEEG trajectory planning: combining stability, structure and scale in vessel extraction. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2014; 17:651-8. [PMID: 25485435 DOI: 10.1007/978-3-319-10470-6_81] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
StereoEEG implantation is performed in patients with epilepsy to determine the site of the seizure onset zone. Intracranial haemorrhage is the most common complication associated with implantation, carrying a risk that ranges from 0.6% to 2.7%, with significant associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of the electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimize the safety profile of electrode trajectories, maximizing the distance to critical brain structures. In this work, we address the problem of blood vessel extraction for SEEG trajectory planning. The proposed method exploits the availability of multi-modal images within a trajectory planning system to formulate a vessel extraction framework that combines the scale and the neighbouring structure of an object. We validated the proposed method on twelve multi-modal patient image sets. The mean Dice similarity coefficient (DSC) was 0.88 ± 0.03, representing a statistically significant improvement when compared to the semi-automated single rater, single-modality segmentation protocol used in current practice (DSC = 0.78 ± 0.02).
|
26
|
Li B, Li HK. Automated analysis of diabetic retinopathy images: principles, recent developments, and emerging trends. Curr Diab Rep 2013; 13:453-9. [PMID: 23686810 DOI: 10.1007/s11892-013-0393-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Diabetic retinopathy (DR) is a vision-threatening complication of diabetes. Timely diagnosis and intervention are essential for treatment that reduces the risk of vision loss. A good color retinal (fundus) photograph can be used as a surrogate for face-to-face evaluation of DR. The use of computers to assist or fully automate DR evaluation from retinal images has been studied for many years. Early work showed promising results for algorithms in detecting and classifying DR pathology. Newer techniques include those that adapt machine learning technology to DR image analysis. Challenges remain, however, that must be overcome before fully automatic DR detection and analysis systems become practical clinical tools.
Affiliation(s)
- Baoxin Li
- School of Computing, Informatics & Decision Systems Engineering, Arizona State University, Tempe, AZ 85281, USA.
|
27
|
Kafieh R, Rabbani H, Hajizadeh F, Ommani M. An accurate multimodal 3-D vessel segmentation method based on brightness variations on OCT layers and curvelet domain fundus image analysis. IEEE Trans Biomed Eng 2013; 60:2815-23. [PMID: 23722446 DOI: 10.1109/tbme.2013.2263844] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
This paper proposes a multimodal approach for vessel segmentation of macular optical coherence tomography (OCT) slices along with the fundus image. The method comprises two separate stages: the first is 2-D segmentation of blood vessels in the curvelet domain, enhanced by taking advantage of vessel information in crossing OCT slices (named the feedback procedure) and improved by suppressing the false positives around the optic nerve head. The proposed method for vessel localization in OCT slices is also enhanced by utilizing the fact that the retinal nerve fiber layer becomes thicker in the presence of blood vessels. The second stage is axial localization of the vessels in OCT slices and 3-D reconstruction of the blood vessels. Twenty-four macular spectral 3-D OCT scans of 16 normal subjects were acquired using a Heidelberg HRA OCT scanner. Each dataset consisted of a scanning laser ophthalmoscopy (SLO) image and a limited number of OCT scans of size 496 × 512 (namely, for a dataset with 19 selected OCT slices, the whole data size was 496 × 512 × 19). The method is built from relatively simple algorithms, and the results show considerable improvement in vessel segmentation accuracy over similar methods, producing a local accuracy of 0.9632 in the area of the SLO covered by OCT slices and an overall accuracy of 0.9467 in the whole SLO image. The results also demonstrate a direct relation between the overall accuracy and the percentage of SLO coverage by OCT slices.
|