1
Ma Y, Yan Q, Liu Y, Liu J, Zhang J, Zhao Y. StruNet: Perceptual and low-rank regularized transformer for medical image denoising. Med Phys 2023; 50:7654-7669. [PMID: 37278312] [DOI: 10.1002/mp.16550]
Abstract
BACKGROUND Various types of noise artifacts inevitably exist in some medical imaging modalities due to limitations of imaging techniques, impairing either clinical diagnosis or subsequent analysis. Recently, deep learning approaches have been rapidly developed and applied to medical images for noise removal and image quality enhancement. Nevertheless, owing to the complexity and diversity of noise distributions across medical imaging modalities, most existing deep learning frameworks are incapable of flexibly removing noise artifacts while retaining detailed information. As a result, it remains challenging to design an effective and unified medical image denoising method that works across a variety of noise artifacts and imaging modalities without requiring specialized knowledge to perform the task. PURPOSE In this paper, we propose a novel encoder-decoder architecture, the Swin transformer-based residual u-shape Network (StruNet), for medical image denoising. METHODS Our StruNet adopts a well-designed block as the backbone of the encoder-decoder architecture, integrating Swin Transformer modules with a residual block in parallel connection. The Swin Transformer modules effectively learn hierarchical representations of noise artifacts via a self-attention mechanism in non-overlapping shifted windows with cross-window connections, while the residual block compensates for the loss of detailed information via its shortcut connection. Furthermore, perceptual loss and low-rank regularization are incorporated into the loss function to constrain the denoising results toward feature-level consistency and low-rank characteristics, respectively. RESULTS To evaluate the performance of the proposed method, we conducted experiments on three medical imaging modalities: computed tomography (CT), optical coherence tomography (OCT), and optical coherence tomography angiography (OCTA).
CONCLUSIONS The results demonstrate that the proposed architecture yields promising performance in suppressing the multiform noise artifacts present in different imaging modalities.
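The low-rank regularization mentioned in this abstract is commonly realized as a nuclear-norm penalty, the standard convex surrogate for matrix rank. A minimal sketch of such a penalty (illustrative only, not the authors' implementation):

```python
import numpy as np

def nuclear_norm(x):
    """Sum of singular values — a convex surrogate for matrix rank,
    often added to a denoising loss to encourage low-rank structure."""
    return float(np.linalg.svd(np.asarray(x, float), compute_uv=False).sum())

# A hypothetical rank-1 "clean" patch versus the same patch with additive noise
clean = np.outer(np.arange(1.0, 5.0), np.ones(4))                      # rank 1
noisy = clean + 0.2 * np.random.default_rng(0).standard_normal((4, 4))
penalty_clean, penalty_noisy = nuclear_norm(clean), nuclear_norm(noisy)
```

Minimizing this quantity for the denoised output pushes it toward low-rank (structured, noise-free) content.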
Affiliation(s)
- Yuhui Ma
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering Chinese Academy of Sciences, Cixi, China
- University of Chinese Academy of Sciences, Beijing, China
- Qifeng Yan
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering Chinese Academy of Sciences, Cixi, China
- Yonghuai Liu
- Department of Computer Science, Edge Hill University, Ormskirk, UK
- Jiang Liu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Jiong Zhang
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering Chinese Academy of Sciences, Cixi, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering Chinese Academy of Sciences, Cixi, China
2
Al-Hinnawi AR, Al-Latayfeh M, Tavakoli M. Innovative Macula Capillaries Plexuses Visualization with OCTA B-Scan Graph Representation: Transforming OCTA B-Scan into OCTA Graph Representation. J Multidiscip Healthc 2023; 16:3477-3491. [PMID: 38024137] [PMCID: PMC10662934] [DOI: 10.2147/jmdh.s433405]
Abstract
Purpose The aim of this study is to transform optical coherence tomography angiography (OCTA) scans into innovative OCTA graphs, serving as novel biomarkers representing the macular vasculature. Patients and Methods The study included 90 healthy subjects and 39 subjects with various abnormalities (29 with diabetic retinopathy, 5 with age-related macular degeneration, and 5 with choroid neovascularization). OCTA 5µm macular coronal views (MCVs) were generated for each subject, followed by blood vessel segmentation and skeleton processing. Subsequently, the blood vessel density index, blood vessel skeleton index, and blood vessel tortuosity index were computed. The graph of each metric was plotted against the axial axis of the OCTA B-scan, representing the integrity of the vasculature at successive 5µm macular depths. Results The results revealed two significant findings. First, OCTA B-scans can be transformed into OCTA graphs, yielding three specific OCTA graphs in this study. These graphs provide new biomarkers for assessing the integrity of the deep vascular complex (DVC) and superficial vascular complex (SVC) within the macula. Second, a statistically significant difference was observed between normal (n=90) and abnormal (n=39) subjects, with a t-test p-value well below 0.001. The Mann-Whitney u-test also yielded a significant difference, but only between the 90 normal and 29 DR subjects. Conclusion The novel OCTA graphs offer a unique representation of the macula's SVC and DVC, suggesting their potential to aid physicians in diagnosing eye health within OCTA clinics. Further research is warranted to finalize the shape of these newly derived OCTA graphs and establish their clinical relevance and utility.
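Vessel indices like those computed above admit simple formulations. A hedged sketch of a density index and an arc-over-chord tortuosity index (the authors' exact definitions may differ):

```python
import numpy as np

def vessel_density(mask):
    """Blood vessel density index: fraction of pixels classified as
    vessel in a binary segmentation mask."""
    mask = np.asarray(mask, bool)
    return mask.sum() / mask.size

def tortuosity(centerline):
    """Arc-over-chord tortuosity of an ordered centerline path:
    1.0 for a perfectly straight vessel, > 1.0 for a winding one."""
    pts = np.asarray(centerline, float)
    arc = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    chord = np.linalg.norm(pts[-1] - pts[0])
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0)]  # straight path: tortuosity 1.0
bent = [(0, 0), (1, 1), (2, 0)]      # bent path: tortuosity > 1.0
```

Plotting such indices against B-scan depth is, in essence, what the OCTA graphs above represent.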
Affiliation(s)
- Abdel-Razzak Al-Hinnawi
- Department of Medical Imaging, Faculty of Allied Medical Sciences, Isra University, Amman, Jordan
- Motasem Al-Latayfeh
- Department of Special Surgery, Faculty of Medicine, The Hashemite University, Zarqa, Jordan
- Mitra Tavakoli
- Exeter Centre of Excellence for Diabetes Research, National Institute for Health and Care Research (NIHR) Exeter Clinical Research Facility, and Institute of Biomedical and Clinical Sciences, University of Exeter Medical School, Exeter, UK
3
Hormel TT, Jia Y. OCT angiography and its retinal biomarkers [Invited]. Biomed Opt Express 2023; 14:4542-4566. [PMID: 37791289] [PMCID: PMC10545210] [DOI: 10.1364/boe.495627]
Abstract
Optical coherence tomography angiography (OCTA) is a high-resolution, depth-resolved imaging modality with important applications in ophthalmic practice. An extension of structural OCT, OCTA enables non-invasive, high-contrast imaging of retinal and choroidal vasculature that is amenable to quantification. As such, OCTA offers the capability to identify and characterize biomarkers important for clinical practice and therapeutic research. Here, we review new methods for analyzing biomarkers and discuss new insights provided by OCTA.
Affiliation(s)
- Tristan T. Hormel
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon, USA
4
Zhang T, Huang F, Gao N, Du M, Cheng H, Huang W, Ji Y, Zheng S, Wan W, Hu K. Three-Dimensional Quantitative Description of the Implantable Collamer Lens in the Ocular Anterior Segment of Patients With Myopia. Am J Ophthalmol 2023; 252:59-68. [PMID: 36933857] [DOI: 10.1016/j.ajo.2023.03.005]
Abstract
PURPOSE To quantitatively describe the 3-dimensional (3D) location of the implantable collamer lens (ICL) in the posterior ocular chamber of patients with myopia. DESIGN Cross-sectional study. METHODS To obtain visualization models before and after mydriasis, an automatic 3D imaging method based on swept-source optical coherence tomography was created. Parameters such as the ICL lens volume (ILV), the tilt of the ICL and crystalline lens, the vault distribution index, and topographic maps were evaluated to describe the ICL location. Differences between nonmydriatic and postmydriatic conditions were compared using a paired-sample t test and the Wilcoxon signed-rank test. RESULTS The study investigated 32 eyes from 20 patients. The 3D central vault did not differ significantly from the 2D central vault either before (P = .994) or after mydriasis (P = .549). After mydriasis, the 5-mm ILV decreased by 0.85 mm³ (P = .016), and the vault distribution index increased significantly (P = .001). The ICL and the crystalline lens exhibited tilt (nonmydriasis: ICL total tilt 3.78 ± 1.85 degrees, lens total tilt 4.03 ± 1.53 degrees; postmydriasis: ICL total tilt 3.84 ± 1.56 degrees, lens total tilt 4.09 ± 1.64 degrees). Asynchronous tilt of the ICL and lens was found in 5 eyes, leading to a spatially asymmetric distribution of the ICL-lens distance. CONCLUSION The 3D imaging technique provided exhaustive and reliable data for the anterior segment. The visualization models offered multiple perspectives on the ICL in the posterior chamber. Before and after mydriasis, the intraocular ICL position was described by the 3D parameters.
Affiliation(s)
- Tong Zhang
- From Chongqing Medical University (T.Z., F.H., N.G., M.D., H.C., W.H., W.W., K.H.) and The First Affiliated Hospital of Chongqing Medical University (Y.J., S.Z., W.W., K.H.), Chongqing Key Laboratory of Ophthalmology, Chongqing Eye Institute, Chongqing Branch (Municipality Division) of National Clinical Research Center for Ocular Diseases, Chongqing, China
- Fanfan Huang
- From Chongqing Medical University (T.Z., F.H., N.G., M.D., H.C., W.H., W.W., K.H.) and The First Affiliated Hospital of Chongqing Medical University (Y.J., S.Z., W.W., K.H.), Chongqing Key Laboratory of Ophthalmology, Chongqing Eye Institute, Chongqing Branch (Municipality Division) of National Clinical Research Center for Ocular Diseases, Chongqing, China
- Ning Gao
- From Chongqing Medical University (T.Z., F.H., N.G., M.D., H.C., W.H., W.W., K.H.) and The First Affiliated Hospital of Chongqing Medical University (Y.J., S.Z., W.W., K.H.), Chongqing Key Laboratory of Ophthalmology, Chongqing Eye Institute, Chongqing Branch (Municipality Division) of National Clinical Research Center for Ocular Diseases, Chongqing, China
- Miaomiao Du
- From Chongqing Medical University (T.Z., F.H., N.G., M.D., H.C., W.H., W.W., K.H.) and The First Affiliated Hospital of Chongqing Medical University (Y.J., S.Z., W.W., K.H.), Chongqing Key Laboratory of Ophthalmology, Chongqing Eye Institute, Chongqing Branch (Municipality Division) of National Clinical Research Center for Ocular Diseases, Chongqing, China
- Hong Cheng
- From Chongqing Medical University (T.Z., F.H., N.G., M.D., H.C., W.H., W.W., K.H.) and The First Affiliated Hospital of Chongqing Medical University (Y.J., S.Z., W.W., K.H.), Chongqing Key Laboratory of Ophthalmology, Chongqing Eye Institute, Chongqing Branch (Municipality Division) of National Clinical Research Center for Ocular Diseases, Chongqing, China
- Wanyao Huang
- From Chongqing Medical University (T.Z., F.H., N.G., M.D., H.C., W.H., W.W., K.H.) and The First Affiliated Hospital of Chongqing Medical University (Y.J., S.Z., W.W., K.H.), Chongqing Key Laboratory of Ophthalmology, Chongqing Eye Institute, Chongqing Branch (Municipality Division) of National Clinical Research Center for Ocular Diseases, Chongqing, China
- Yan Ji
- The First Affiliated Hospital of Chongqing Medical University, Chongqing Key Laboratory of Ophthalmology, Chongqing Eye Institute, Chongqing Branch (Municipality Division) of National Clinical Research Center for Ocular Diseases, Chongqing, China
- Shijie Zheng
- The First Affiliated Hospital of Chongqing Medical University, Chongqing Key Laboratory of Ophthalmology, Chongqing Eye Institute, Chongqing Branch (Municipality Division) of National Clinical Research Center for Ocular Diseases, Chongqing, China
- Wenjuan Wan
- From Chongqing Medical University (T.Z., F.H., N.G., M.D., H.C., W.H., W.W., K.H.) and The First Affiliated Hospital of Chongqing Medical University (Y.J., S.Z., W.W., K.H.), Chongqing Key Laboratory of Ophthalmology, Chongqing Eye Institute, Chongqing Branch (Municipality Division) of National Clinical Research Center for Ocular Diseases, Chongqing, China
- The First Affiliated Hospital of Chongqing Medical University, Chongqing Key Laboratory of Ophthalmology, Chongqing Eye Institute, Chongqing Branch (Municipality Division) of National Clinical Research Center for Ocular Diseases, Chongqing, China
- Ke Hu
- From Chongqing Medical University (T.Z., F.H., N.G., M.D., H.C., W.H., W.W., K.H.) and The First Affiliated Hospital of Chongqing Medical University (Y.J., S.Z., W.W., K.H.), Chongqing Key Laboratory of Ophthalmology, Chongqing Eye Institute, Chongqing Branch (Municipality Division) of National Clinical Research Center for Ocular Diseases, Chongqing, China
- The First Affiliated Hospital of Chongqing Medical University, Chongqing Key Laboratory of Ophthalmology, Chongqing Eye Institute, Chongqing Branch (Municipality Division) of National Clinical Research Center for Ocular Diseases, Chongqing, China
5
Yang C, Yao L, Zhou L, Qian S, Meng J, Yang L, Chen L, Tan Y, Qiu H, Gu Y, Ding Z, Li P, Liu Z. Mapping port wine stain in vivo by optical coherence tomography angiography and multi-metric characterization. Opt Express 2023; 31:13613-13626. [PMID: 37157245] [DOI: 10.1364/oe.485619]
Abstract
Port wine stain (PWS) is a congenital cutaneous capillary malformation composed of ectatic vessels, yet the microstructure of these vessels remains largely unknown. Optical coherence tomography angiography (OCTA) serves as a non-invasive, label-free and high-resolution tool to visualize the 3D tissue microvasculature. However, even as 3D vessel images of PWS become readily accessible, quantitative analysis algorithms for their organization have mainly remained limited to 2D images. In particular, the 3D orientations of the vasculature in PWS have not yet been resolved on a voxel-wise basis. In this study, we employed the inverse signal-to-noise ratio (iSNR)-decorrelation (D) OCTA (ID-OCTA) to acquire 3D blood vessel images in vivo from PWS patients, and used the mean-subtraction method for de-shadowing to correct tail artifacts. We developed algorithms that map blood vessels in a spatial-angular hyperspace in a 3D context, and obtained orientation-derived metrics, including directional variance and waviness, for characterizing vessel alignment and crimping level, respectively. Combined with thickness and local density measures, our method serves as a multi-parametric analysis platform covering a variety of morphological and organizational characteristics on a voxel-wise basis. We found that blood vessels were thicker, denser and less aligned in lesion skin than in normal skin (symmetrical parts of skin lesions on the cheek), and complementary insights from these metrics led to a classification accuracy of ∼90% in identifying PWS. An improvement in the sensitivity of 3D analysis over 2D analysis was validated. Our imaging and analysis system provides a clear picture of the microstructure of blood vessels within PWS tissues, which leads to a better understanding of this capillary malformation disease and facilitates improvements in the diagnosis and treatment of PWS.
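One common way to quantify vessel alignment, like the directional variance metric above, is one minus the mean resultant length of unit orientation vectors. A sketch under that assumption (the paper's exact estimator may differ, e.g. it may use sign-invariant axial statistics):

```python
import numpy as np

def directional_variance(orientations):
    """1 minus the mean resultant length of unit orientation vectors:
    0 when all vessels point the same way, approaching 1 when
    orientations are fully dispersed. (Vectorial form; axial data
    without sign would need sign-invariant statistics instead.)"""
    v = np.asarray(orientations, float)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)  # normalize each vector
    return 1.0 - float(np.linalg.norm(v.mean(axis=0)))

aligned = [[1, 0, 0]] * 4            # perfectly aligned voxel orientations
opposed = [[1, 0, 0], [-1, 0, 0]]    # maximally dispersed (vectorially)
```

Lower variance indicates well-aligned vessels, consistent with the finding that lesion skin was less aligned than normal skin.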
6
Moradi M, Chen Y, Du X, Seddon JM. Deep ensemble learning for automated non-advanced AMD classification using optimized retinal layer segmentation and SD-OCT scans. Comput Biol Med 2023; 154:106512. [PMID: 36701964] [DOI: 10.1016/j.compbiomed.2022.106512]
Abstract
BACKGROUND Accurate retinal layer segmentation in optical coherence tomography (OCT) images is crucial for quantitatively analyzing age-related macular degeneration (AMD) and monitoring its progression. However, previous retinal segmentation models depend on experienced experts, and manually annotating retinal layers is time-consuming. At the same time, the accuracy of AMD diagnosis is directly related to the segmentation model's performance. To address these issues, we aimed to improve AMD detection using optimized retinal layer segmentation and deep ensemble learning. METHOD We integrated a graph-cut algorithm with a cubic spline to automatically annotate 11 retinal boundaries. The refined images were fed into a deep ensemble mechanism that combined a Bagged Tree and end-to-end deep learning classifiers. We tested the developed deep ensemble model on internal and external datasets. RESULTS The total error rate of our segmentation model using the boundary refinement approach was significantly lower than that of OCT Explorer segmentations (1.7% vs. 7.8%, p-value = 0.03). We used the refinement approach to quantify 169 imaging features from Zeiss SD-OCT volume scans. The presence of drusen and the thicknesses of the total retina, neurosensory retina, and ellipsoid zone to inner-outer segment (EZ-ISOS) contributed more to AMD classification than other features. The developed ensemble learning model obtained higher diagnostic accuracy in a shorter time than two human graders. The area under the curve (AUC) for normal vs. early AMD was 99.4%. CONCLUSION Testing results showed that the developed framework is repeatable and effective as a potentially valuable tool in retinal imaging research.
Affiliation(s)
- Mousa Moradi
- Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, United States
- Yu Chen
- Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, United States
- Xian Du
- Department of Mechanical and Industrial Engineering, University of Massachusetts, Amherst, MA, United States
- Johanna M Seddon
- Department of Ophthalmology & Visual Sciences, University of Massachusetts Chan Medical School, Worcester, MA, United States
7
Cao J, Xu Z, Xu M, Ma Y, Zhao Y. A two-stage framework for optical coherence tomography angiography image quality improvement. Front Med (Lausanne) 2023; 10:1061357. [PMID: 36756179] [PMCID: PMC9899819] [DOI: 10.3389/fmed.2023.1061357]
Abstract
Introduction Optical coherence tomography angiography (OCTA) is a new non-invasive imaging modality that is gaining popularity for observing the microvasculature of the retina and the conjunctiva, assisting clinical diagnosis and treatment planning. However, poor imaging quality, such as stripe artifacts and low contrast, is common in acquired OCTA and in particular Anterior Segment OCTA (AS-OCTA) images due to eye microtremor and poor illumination conditions. These issues lead to incomplete vasculature maps that in turn make it hard to produce accurate interpretations and subsequent diagnoses. Methods In this work, we propose a two-stage framework comprising a de-striping stage and a re-enhancing stage, which aims to remove stripe noise and to enhance blood vessel structure against the background. We introduce a new de-striping objective function in a Stripe Removal Net (SR-Net) to suppress the stripe noise in the original image. Because the vasculature in acquired AS-OCTA images usually exhibits poor contrast, we use a Perceptual Structure Generative Adversarial Network (PS-GAN) to enhance the de-striped AS-OCTA image in the re-enhancing stage, combining a cyclic perceptual loss with a structure loss to achieve further image quality improvement. Results and discussion To evaluate the effectiveness of the proposed method, we apply the proposed framework to two synthetic OCTA datasets and a real AS-OCTA dataset. Our results show that the proposed framework yields a promising enhancement performance, which enables both conventional and deep learning-based vessel segmentation methods to produce improved results after enhancement of both retina and AS-OCTA modalities.
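While SR-Net itself is a learned de-striping model, the classical baseline it improves upon can be sketched as a per-row offset correction for additive horizontal stripes (illustrative only, not the paper's method):

```python
import numpy as np

def destripe_rows(img):
    """Subtract each row's median offset relative to the global median,
    removing additive horizontal stripes while preserving overall level."""
    img = np.asarray(img, float)
    row_offset = np.median(img, axis=1, keepdims=True) - np.median(img)
    return img - row_offset

# A flat test image corrupted by constant per-row stripes
flat = np.full((3, 5), 10.0)
striped = flat + np.array([[2.0], [0.0], [-2.0]])
restored = destripe_rows(striped)
```

Such median-based filters remove simple additive stripes but also erode genuine horizontal structure, which is the motivation for learned objectives like the one proposed here.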
Affiliation(s)
- Juan Cao
- School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing, China
- Zihao Xu
- School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing, China
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Mengjia Xu
- Affiliated Cixi Hospital, Wenzhou Medical University, Ningbo, China
- Yuhui Ma
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
8
Wei X, Liu Q, Liu M, Wang Y, Meijering E. 3D Soma Detection in Large-Scale Whole Brain Images via a Two-Stage Neural Network. IEEE Trans Med Imaging 2023; 42:148-157. [PMID: 36103445] [DOI: 10.1109/tmi.2022.3206605]
Abstract
3D soma detection in whole brain images is a critical step for neuron reconstruction. However, existing soma detection methods are not suitable for whole mouse brain images with large amounts of data and complex structure. In this paper, we propose a two-stage deep neural network to achieve fast and accurate soma detection in large-scale and high-resolution whole mouse brain images (more than 1 TB). In the first stage, a lightweight Multi-level Cross Classification Network (MCC-Net) is proposed to filter out images without somas and generate coarse candidate images by combining the feature extraction abilities of multiple convolution layers. It speeds up the detection of somas and reduces the computational complexity. In the second stage, to further obtain the accurate locations of somas in the whole mouse brain images, a Scale Fusion Segmentation Network (SFS-Net) is developed to segment soma regions from the candidate images. Specifically, the SFS-Net captures multi-scale context information and establishes a complementary relationship between encoder and decoder by combining the encoder-decoder structure with a 3D Scale-Aware Pyramid Fusion (SAPF) module for better segmentation performance. The experimental results on three whole mouse brain images verify that the proposed method achieves excellent performance and provides beneficial information for the reconstruction of neurons. Additionally, we have established a public dataset named WBMSD, including 798 high-resolution and representative images (256 × 256 × 256 voxels) from three whole mouse brain images, dedicated to research on soma detection, which will be released along with this paper.
9
López-Varela E, Vidal PL, Pascual NO, Novo J, Ortega M. Fully-Automatic 3D Intuitive Visualization of Age-Related Macular Degeneration Fluid Accumulations in OCT Cubes. J Digit Imaging 2022; 35:1271-1282. [PMID: 35513586] [PMCID: PMC9582110] [DOI: 10.1007/s10278-022-00643-6]
Abstract
Age-related macular degeneration (AMD) is the leading cause of vision loss in developed countries, and wet-type AMD requires urgent treatment and rapid diagnosis because it causes rapid, irreversible vision loss. Currently, AMD diagnosis is mainly carried out using images obtained by optical coherence tomography (OCT). This diagnostic process is performed by human clinicians, so human error may occur in some cases. Therefore, fully automatic methodologies are highly desirable, adding a layer of robustness to the diagnosis. In this work, a novel computer-aided diagnosis and visualization methodology is proposed for the rapid identification and visualization of wet AMD. We adapted a convolutional neural network trained for segmentation in a similar medical imaging domain to the problem of wet AMD segmentation, taking advantage of transfer learning, which allows us to work with a reduced number of samples. We generate a 3D intuitive visualization in which the existence, position, and severity of the fluid are represented clearly and intuitively to facilitate analysis by clinicians. The 3D visualization is robust and accurate, obtaining satisfactory Dice coefficients of 0.949 and 0.960 in the different evaluated OCT cube configurations, allowing clinicians to quickly assess the presence and extension of the fluid associated with wet AMD.
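The Dice coefficients reported above follow the standard overlap formula for comparing a predicted segmentation with its ground truth; a minimal sketch:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for two
    binary masks; 1.0 for perfect overlap, 0.0 for no overlap."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

# Toy masks: two predicted pixels, two true pixels, one in common
pred = np.array([1, 1, 0, 0])
truth = np.array([1, 0, 1, 0])
```

Here `dice(pred, truth)` is 0.5; scores of 0.949-0.960, as reported, indicate near-complete overlap with the manual annotation.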
Affiliation(s)
- Emilio López-Varela
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
- Plácido L. Vidal
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
- Nuria Olivier Pascual
- Servizo de Oftalmoloxía, Complexo Hospitalario Universitario de Ferrol, CHUF, Av. da Residencia, S/N, Ferrol, 15405 Spain
- Jorge Novo
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
- Marcos Ortega
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
10
Wang X, Liu M, Wang Y, Fan J, Meijering E. A 3D Tubular Flux Model for Centerline Extraction in Neuron Volumetric Images. IEEE Trans Med Imaging 2022; 41:1069-1079. [PMID: 34826295] [DOI: 10.1109/tmi.2021.3130987]
Abstract
Digital morphology reconstruction from neuron volumetric images is essential for computational neuroscience. The centerline of the axonal and dendritic tree provides an effective shape representation and serves as a basis for further neuron reconstruction. However, it remains a challenge to directly extract an accurate centerline from a complex neuron structure with poor image quality. In this paper, we propose a neuron centerline extraction method based on a 3D tubular flux model via a two-stage CNN framework. In the first stage, a 3D CNN is used to learn latent neuron structure features, namely flux features, from neuron images. In the second stage, a lightweight U-Net takes the learned flux features as input to extract the centerline, with a spatial weighted-average strategy to constrain the multi-voxel-width response. Specifically, the labels of the flux features in the first stage are generated by the 3D tubular model, which calculates geometric representations of the flux between each voxel in the tubular region and the nearest point on the centerline ground truth. Compared with features self-learned by networks, flux features, as a kind of prior knowledge, explicitly take advantage of the contextual distance and direction distribution information around the centerline, which is beneficial for precise centerline extraction. Experiments on two challenging datasets demonstrate that the proposed method outperforms other state-of-the-art methods by up to 18% and 35.1% in F1-measure and average distance scores, respectively, and that the extracted centerlines help improve neuron reconstruction performance.
11
Chen Z, Xiong Y, Wei H, Zhao R, Duan X, Shen H. Dual-consistency semi-supervision combined with self-supervision for vessel segmentation in retinal OCTA images. Biomed Opt Express 2022; 13:2824-2834. [PMID: 35774329] [PMCID: PMC9203111] [DOI: 10.1364/boe.458004]
Abstract
Optical coherence tomography angiography (OCTA) is an advanced noninvasive vascular imaging technique with important implications for many vision-related diseases. Automatic segmentation of retinal vessels in OCTA is understudied, and existing segmentation methods require large-scale pixel-level annotated images. However, manually annotating labels is time-consuming and labor-intensive. Therefore, we propose a dual-consistency semi-supervised segmentation network incorporating multi-scale self-supervised puzzle subtasks (DCSS-Net) to tackle the challenge of limited annotations. First, we adopt a novel self-supervised task to assist the semi-supervised network in learning better feature representations. Second, we propose a dual-consistency regularization strategy that imposes data-based and feature-based perturbations to effectively utilize a large amount of unlabeled data, alleviate overfitting, and generate more accurate segmentation predictions. Experimental results on two OCTA retina datasets validate the effectiveness of our DCSS-Net. With very little labeled data, its performance is comparable with fully supervised methods trained on the entire labeled dataset.
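In semi-supervised setups of this kind, the data-based consistency term is typically a distance between the model's predictions for the same unlabeled image under two different perturbations. A hedged sketch of such a term (illustrative; not the DCSS-Net code):

```python
import numpy as np

def consistency_loss(pred_a, pred_b):
    """Mean squared difference between two model predictions for the
    same unlabeled input under different perturbations; minimizing it
    encourages perturbation-invariant segmentations."""
    a, b = np.asarray(pred_a, float), np.asarray(pred_b, float)
    return float(np.mean((a - b) ** 2))

# Hypothetical soft segmentation maps from two augmented views
view_a = np.array([[0.9, 0.1], [0.2, 0.8]])
view_b = np.array([[0.8, 0.2], [0.2, 0.8]])
loss = consistency_loss(view_a, view_b)
```

Because this term needs no labels, it lets the large pool of unannotated OCTA images contribute to training alongside the small labeled set.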
Affiliation(s)
- Zailiang Chen: School of Information Science and Engineering, Central South University, Changsha 410083, China
- Yuchen Xiong: School of Information Science and Engineering, Central South University, Changsha 410083, China
- Hao Wei: School of Information Science and Engineering, Central South University, Changsha 410083, China
- Rongchang Zhao: School of Information Science and Engineering, Central South University, Changsha 410083, China
- Xuanchu Duan: Changsha Aier Eye Hospital, Changsha 410015, China
- Hailan Shen: School of Information Science and Engineering, Central South University, Changsha 410083, China
12
Galdran A, Anjos A, Dolz J, Chakor H, Lombaert H, Ayed IB. State-of-the-art retinal vessel segmentation with minimalistic models. Sci Rep 2022; 12:6174. [PMID: 35418576] [DOI: 10.1038/s41598-022-09675-y]
Abstract
The segmentation of retinal vasculature from eye fundus images is a fundamental task in retinal image analysis. Over recent years, increasingly complex approaches based on sophisticated Convolutional Neural Network architectures have been pushing performance on well-established benchmark datasets. In this paper, we take a step back and analyze the real need for such complexity. We first compile and review the performance of 20 different techniques on some popular databases, and we demonstrate that a minimalistic version of a standard U-Net with orders of magnitude fewer parameters, carefully trained and rigorously evaluated, closely approximates the performance of current best techniques. We then show that a cascaded extension (W-Net) reaches outstanding performance on several popular datasets, still using orders of magnitude fewer learnable weights than any previously published work. Furthermore, we provide the most comprehensive cross-dataset performance analysis to date, involving up to 10 different databases. Our analysis demonstrates that retinal vessel segmentation is far from solved when test images differ substantially from the training data, and that this task represents an ideal scenario for the exploration of domain adaptation techniques. In this context, we experiment with a simple self-labeling strategy that enables a moderate enhancement of cross-dataset performance, indicating that there is still much room for improvement in this area. Finally, we test our approach on Artery/Vein and vessel segmentation from OCTA imaging, where we again achieve results well aligned with the state of the art, at a fraction of the model complexity of recent literature. Code to reproduce the results in this paper is released.
13
Shi T, Boutry N, Xu Y, Geraud T. Local Intensity Order Transformation for Robust Curvilinear Object Segmentation. IEEE Trans Image Process 2022; 31:2557-2569. [PMID: 35275816] [DOI: 10.1109/tip.2022.3155954]
Abstract
Segmentation of curvilinear structures is important in many applications, such as retinal blood vessel segmentation for early detection of vessel diseases and pavement crack segmentation for road condition evaluation and maintenance. Currently, deep learning-based methods have achieved impressive performance on these tasks. Yet, most of them mainly focus on finding powerful deep architectures while ignoring the inherent curvilinear structure feature (e.g., the curvilinear structure is darker than the context) that would give a more robust representation. Consequently, performance usually drops substantially in cross-dataset evaluation, which poses great challenges in practice. In this paper, we aim to improve generalizability by introducing a novel local intensity order transformation (LIOT). Specifically, we transform a grayscale image into a contrast-invariant four-channel image based on the intensity order between each pixel and its nearby pixels along the four (horizontal and vertical) directions. This yields a representation that preserves the inherent characteristic of the curvilinear structure while being robust to contrast changes. Cross-dataset evaluation on three retinal blood vessel segmentation datasets demonstrates that LIOT improves the generalizability of some state-of-the-art methods. Additionally, cross-dataset evaluation between retinal blood vessel segmentation and pavement crack segmentation shows that LIOT preserves the inherent characteristic of curvilinear structures across large appearance gaps. An implementation of the proposed method is available at https://github.com/TY-Shi/LIOT.
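A rough sketch of the LIOT idea in numpy. The neighborhood length (8 pixels per direction) and the bit-wise encoding of the comparisons are assumptions about details not restated in the abstract, but the key property, invariance to monotonic contrast changes, is easy to verify:

```python
import numpy as np

def liot(img, radius=8):
    """Sketch of a Local Intensity Order Transformation: compare each
    pixel with `radius` neighbours along each of the four axis
    directions and binary-encode the comparisons, one channel per
    direction."""
    h, w = img.shape
    img = np.asarray(img, dtype=np.int64)
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros((h, w, 4), dtype=np.uint8)
    offsets = ((0, 1), (0, -1), (1, 0), (-1, 0))  # right, left, down, up
    for c, (dy, dx) in enumerate(offsets):
        acc = np.zeros((h, w), dtype=np.uint16)
        for k in range(1, radius + 1):
            nb = pad[radius + dy * k: radius + dy * k + h,
                     radius + dx * k: radius + dx * k + w]
            # bit k-1 is set when the centre pixel is brighter than the
            # k-th neighbour in this direction
            acc |= (img > nb).astype(np.uint16) << (k - 1)
        out[..., c] = acc.astype(np.uint8)
    return out
```

Because only intensity *order* enters the encoding, any strictly increasing remapping of the gray values (a contrast change) leaves the output unchanged.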
14
Li W, Zhang H, Li F, Wang L. RPS-Net: An effective retinal image projection segmentation network for retinal vessels and foveal avascular zone based on OCTA data. Med Phys 2022; 49:3830-3844. [PMID: 35297061] [DOI: 10.1002/mp.15608]
Abstract
BACKGROUND Optical coherence tomography angiography (OCTA) is an advanced imaging technology that can present the three-dimensional (3D) structure of retinal vessels (RVs). Quantitative analysis of retinal vessel density and foveal avascular zone (FAZ) area is of great significance in clinical diagnosis, and automatic semantic segmentation at the pixel level supports such quantitative analysis. Existing segmentation methods cannot effectively use the volume data and projection map data of the OCTA image at the same time and lack a trade-off between global perception and local details, which leads to problems such as discontinuous segmentation results and deviations in morphological estimation. PURPOSE To better assist physicians in clinical diagnosis and treatment, the segmentation accuracy of RVs and FAZ needs to be further improved. In this work, we propose an effective retinal image projection segmentation network (RPS-Net) to achieve accurate RVs and FAZ segmentation. Experiments show that this network performs well and outperforms other existing methods. METHODS Our method considers three aspects. First, we use two parallel projection paths to learn global perceptual features and local supplementary details. Second, we use the dual-way projection learning module (DPLM) to reduce the depth of the 3D data and learn image spatial features. Finally, we merge the two-dimensional features learned from the volume data with the two-dimensional projection data and use a U-shaped network to further learn and generate the final result. RESULTS We validated our model on OCTA-500, a large multi-modal, multi-task retinal dataset. The experimental results showed that our method achieved state-of-the-art performance: the mean Dice coefficients for RVs are 89.89 ± 2.60% and 91.40 ± 9.18% on the two subsets, while the Dice coefficients for FAZ are 91.55 ± 2.05% and 97.80 ± 2.75%, respectively.
CONCLUSIONS Our method can make full use of the information in 3D and 2D data to generate segmented images with higher continuity and accuracy. Code is available at https://github.com/hchuanZ/MFFN/tree/master.
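The Dice coefficient used as the headline metric in the results above is straightforward to compute from two binary masks:

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)
```

A score of 1 means perfect overlap with the manual annotation; the percentages reported above are this value scaled by 100.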
Affiliation(s)
- Weisheng Li: Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400000, China
- Hongchuan Zhang: Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400000, China
- Feiyan Li: Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400000, China
- Linhong Wang: Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, 400000, China
15
Lin J, Mou L, Yan Q, Ma S, Yue X, Zhou S, Lin Z, Zhang J, Liu J, Zhao Y. Automated Segmentation of Trigeminal Nerve and Cerebrovasculature in MR-Angiography Images by Deep Learning. Front Neurosci 2021; 15:744967. [PMID: 34955711] [PMCID: PMC8702731] [DOI: 10.3389/fnins.2021.744967]
Abstract
Trigeminal neuralgia, characterized by paroxysmal and severe pain in the distribution of the trigeminal nerve, is a rare chronic pain disorder. It is generally accepted that compression of the trigeminal root entry zone by vascular structures is the major cause of primary trigeminal neuralgia, and vascular decompression is the preferred choice in neurosurgical treatment. Therefore, accurate preoperative modeling/segmentation/visualization of the trigeminal nerve and its surrounding cerebrovasculature is important for surgical planning. In this paper, we propose an automated method to segment the trigeminal nerve and its surrounding cerebrovasculature in the root entry zone, and to further reconstruct and visualize these anatomical structures in three-dimensional (3D) Magnetic Resonance Angiography (MRA). The proposed method contains a two-stage neural network. First, a preliminary confidence map of the different anatomical structures is produced by a coarse segmentation stage. Second, a refinement segmentation stage refines and optimizes the coarse segmentation map. To model the spatial and morphological relationship between the trigeminal nerve and cerebrovascular structures, the proposed network detects the trigeminal nerve, cerebrovasculature, and brainstem simultaneously. The method has been evaluated on a dataset of 50 MRA volumes, and the experimental results show state-of-the-art performance, with an average Dice similarity coefficient, Hausdorff distance, and average surface distance error of 0.8645, 0.2414, and 0.4296 on multi-tissue segmentation, respectively.
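Of the three surface metrics reported above, the Hausdorff distance between two point sets has the simplest definition: the worst-case nearest-neighbour distance, taken in both directions. A brute-force sketch (practical implementations use spatial indexing for large meshes):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets a (N, k) and
    b (M, k): for each point, find its nearest neighbour in the other
    set, then take the worst case over both directions."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The average surface distance replaces the outer `max` with a mean, which is why it is typically smaller and less sensitive to single outlier points.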
Affiliation(s)
- Jinghui Lin: Department of Neurosurgery, Ningbo First Hospital, Ningbo, China
- Lei Mou: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China; University of Chinese Academy of Sciences, Beijing, China
- Qifeng Yan: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Shaodong Ma: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Xingyu Yue: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Shengjun Zhou: Department of Neurosurgery, Ningbo First Hospital, Ningbo, China
- Zhiqing Lin: Department of Neurosurgery, Ningbo First Hospital, Ningbo, China
- Jiong Zhang: Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Jiang Liu: Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yitian Zhao: The Affiliated People's Hospital of Ningbo University, Ningbo, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
16
Jiang Y, Chen W, Liu M, Wang Y, Meijering E. DeepRayburst for Automatic Shape Analysis of Tree-Like Structures in Biomedical Images. IEEE J Biomed Health Inform 2021; 26:2204-2215. [PMID: 34727041] [DOI: 10.1109/jbhi.2021.3124514]
Abstract
Precise quantification of tree-like structures from biomedical images, such as neuronal shape reconstruction and retinal blood vessel caliber estimation, is increasingly important in understanding normal function and pathologic processes in biology. Some handcrafted methods have been proposed for this purpose in recent years; however, each is designed only for a specific application. In this paper, we propose a shape analysis algorithm, DeepRayburst, that can be applied to many different applications based on Multi-Feature Rayburst Sampling (MFRS) and a Dual Channel Temporal Convolutional Network (DC-TCN). Specifically, we first generate a Rayburst Sampling (RS) core containing a set of multidirectional rays. The MFRS is then designed by extending each ray of the RS to multiple parallel rays, which extract a set of feature sequences. A Gaussian kernel is then used to fuse these feature sequences into a single one. Furthermore, we design a DC-TCN that makes the rays terminate on the surface of tree-like structures according to the fused feature sequence. Finally, by analyzing the distribution patterns of the terminated rays, the algorithm can serve multiple shape analysis applications for tree-like structures. Experiments on three different applications, including soma shape reconstruction, neuronal shape reconstruction, and vessel caliber estimation, confirm that the proposed method outperforms other state-of-the-art shape analysis methods, demonstrating its flexibility and robustness.
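The multidirectional ray core can be approximated with a Fibonacci sphere lattice, a standard way to get near-uniform directions on the unit sphere; the paper's exact sampling core may differ, so treat this as an illustrative stand-in:

```python
import numpy as np

def rayburst_directions(n=64):
    """Approximately uniform unit-sphere ray directions via a Fibonacci
    lattice: golden-angle steps in azimuth, even steps in height."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i      # golden-angle azimuths
    z = 1.0 - 2.0 * (i + 0.5) / n               # evenly spaced heights
    r = np.sqrt(1.0 - z ** 2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
```

Each direction would then be stepped outward from a seed point, sampling intensities along the ray, until the terminating classifier (the DC-TCN in the paper) decides the structure boundary has been reached.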
17
Borrelli E, Grosso D, Parravano M, Costanzo E, Brambati M, Viganò C, Sacconi R, Querques L, Pina A, De Geronimo D, Bandello F, Querques G. Volume rendered 3D OCTA assessment of macular ischemia in patients with type 1 diabetes and without diabetic retinopathy. Sci Rep 2021; 11:19793. [PMID: 34611239] [PMCID: PMC8492730] [DOI: 10.1038/s41598-021-99297-7]
Abstract
The aim of this study was to measure macular perfusion in patients with type 1 diabetes and no signs of diabetic retinopathy (DR) using volume rendered three-dimensional (3D) optical coherence tomography angiography (OCTA). We collected data from 35 patients with diabetes and no DR who had undergone OCTA imaging. An additional control group of 35 eyes from 35 healthy subjects was included for comparison. OCTA volume data were processed with a previously presented algorithm to obtain the 3D vascular volume and 3D perfusion density. To weigh the contribution of impairment of the different plexuses to volume rendered vascular perfusion, OCTA en face images were binarized to obtain two-dimensional (2D) perfusion density metrics. Mean ± SD age was 27.2 ± 10.2 years [range 19-64 years] in the diabetic group and 31.0 ± 11.4 years [range 19-61 years] in the control group (p = 0.145). The 3D vascular volume was 0.27 ± 0.05 mm3 in the diabetic group and 0.29 ± 0.04 mm3 in the control group (p = 0.020). The 3D perfusion density was 9.3 ± 1.6% and 10.3 ± 1.6% in diabetic patients and controls, respectively (p = 0.005). Using a 2D visualization, the perfusion density was lower in diabetic patients, but only at the deep vascular complex (DVC) level (38.9 ± 3.7% in diabetes and 41.0 ± 3.1% in controls, p = 0.001), while no differences were detected at the superficial capillary plexus (SCP) level (34.4 ± 3.1% and 34.3 ± 3.8% in the diabetic and healthy subjects, respectively, p = 0.899). In conclusion, eyes of patients with diabetes without signs of DR have reduced volume rendered macular perfusion compared with healthy control eyes.
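The 2D perfusion density above is simply the perfused-pixel fraction of a binarized en face image. A sketch, with the global mean-intensity threshold standing in as an assumption for the study's actual binarization method:

```python
import numpy as np

def perfusion_density(enface, threshold=None):
    """2D perfusion density (%): fraction of pixels classified as
    perfused after binarising an OCTA en face image."""
    enface = np.asarray(enface, dtype=float)
    if threshold is None:
        threshold = enface.mean()   # simple global threshold (assumption)
    vessel = enface > threshold
    return 100.0 * vessel.sum() / vessel.size
```

Computed separately on the SCP and DVC en face slabs, this is the kind of per-plexus metric the study compares between groups; the 3D analogue counts perfused voxels over the volume instead.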
Affiliation(s)
- Enrico Borrelli: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, Italy
- Domenico Grosso: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, Italy
- Maria Brambati: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, Italy
- Chiara Viganò: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, Italy
- Riccardo Sacconi: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, Italy
- Lea Querques: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, Italy
- Adelaide Pina: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, Italy
- Francesco Bandello: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, Italy
- Giuseppe Querques: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, Italy
18
Hu D, Cui C, Li H, Larson KE, Tao YK, Oguz I. LIFE: A Generalizable Autodidactic Pipeline for 3D OCT-A Vessel Segmentation. Med Image Comput Comput Assist Interv 2021; 12901:514-524. [PMID: 34950935] [PMCID: PMC8692169] [DOI: 10.1007/978-3-030-87193-2_49]
Abstract
Optical coherence tomography (OCT) is a non-invasive imaging technique widely used in ophthalmology. It can be extended to OCT angiography (OCT-A), which reveals the retinal vasculature with improved contrast. Recent deep learning algorithms have produced promising vascular segmentation results; however, 3D retinal vessel segmentation remains difficult due to the lack of manually annotated training data. We propose a learning-based method that is supervised only by a self-synthesized modality named local intensity fusion (LIF). LIF is a capillary-enhanced volume computed directly from the input OCT-A. We then construct the local intensity fusion encoder (LIFE) to map a given OCT-A volume and its LIF counterpart to a shared latent space. The latent space of LIFE has the same dimensions as the input data and contains features common to both modalities. By binarizing this latent space, we obtain a volumetric vessel segmentation. Our method is evaluated on a human fovea OCT-A volume and three zebrafish OCT-A volumes with manual labels. It yields a Dice score of 0.7736 on human data and 0.8594 ± 0.0275 on zebrafish data, a dramatic improvement over existing unsupervised algorithms.
Affiliation(s)
- Dewei Hu: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA
- Can Cui: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA
- Hao Li: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA
- Kathleen E Larson: Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Yuankai K Tao: Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Ipek Oguz: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA
19
Borrelli E, Parravano M, Costanzo E, Sacconi R, Querques L, Pennisi F, De Geronimo D, Bandello F, Querques G. Using three-dimensional optical coherence tomography angiography metrics improves repeatability on quantification of ischemia in eyes with diabetic macular edema. Retina 2021; 41:1660-1667. [PMID: 33332812] [DOI: 10.1097/iae.0000000000003077]
Abstract
PURPOSE Two-dimensional (2D) optical coherence tomography angiography (OCTA) is known to be prone to segmentation errors, especially in pathologic eyes. Therefore, our aim was to systematically compare intrasession repeatability between repeated scans for 2D and three-dimensional (3D) OCTA metrics in quantifying retinal perfusion in eyes with diabetic macular edema. METHODS Diabetic patients with diabetic retinopathy and diabetic macular edema who had two consecutive OCTA imaging scans obtained during the same visit were retrospectively included. A previously validated algorithm was applied to OCTA volume data to measure the 3D vascular volume and perfusion density. OCTA en face images were also processed to obtain 2D perfusion density metrics. RESULTS Twenty patients (20 eyes) with diabetic retinopathy and diabetic macular edema were included. The intraclass correlation coefficient ranged from 0.591 to 0.824 for 2D OCTA metrics and from 0.935 to 0.967 for 3D OCTA metrics. Thus, compared with the 2D OCTA analysis, the intraclass correlation coefficients of the 3D OCTA analysis were higher (without overlap of the 95% confidence intervals). Similarly, the coefficient of variation (ranging from 2.2 to 4.2 for 2D OCTA metrics and from 1.9 to 2.0 for 3D OCTA metrics) indicated that the 3D OCTA-based quantifications had the highest interscan intrasession agreement. Differences in interscan 2D OCTA metric values were associated with average macular volume. CONCLUSION Three-dimensional OCTA metrics show higher intrasession repeatability than 2D OCTA metrics. The latter finding appears related to the high rate of segmentation errors in diabetic macular edema eyes.
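The interscan coefficient of variation reported above is the sample standard deviation of repeated measurements divided by their mean, expressed as a percentage (lower means better scan-to-scan agreement):

```python
import numpy as np

def coefficient_of_variation(measurements):
    """Interscan CV (%): sample SD over mean of repeated measurements."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()
```

The intraclass correlation coefficient complements this by relating between-subject variance to total variance across the repeated scans; its estimation depends on the chosen ICC model, so it is not sketched here.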
Affiliation(s)
- Enrico Borrelli: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Milan, Italy
- Riccardo Sacconi: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Milan, Italy
- Lea Querques: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Milan, Italy
- Flavia Pennisi: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Milan, Italy
- Francesco Bandello: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Milan, Italy
- Giuseppe Querques: Department of Ophthalmology, University Vita-Salute, IRCCS Ospedale San Raffaele, Milan, Italy
20
Vujosevic S, Cunha-Vaz J, Figueira J, Löwenstein A, Midena E, Parravano M, Scanlon PH, Simó R, Hernández C, Madeira MH, Marques IP, C-V Martinho A, Santos AR, Simó-Servat O, Salongcay RP, Zur D, Peto T. Standardisation of Optical Coherence Tomography Angiography Imaging Biomarkers in Diabetic Retinal Disease. Ophthalmic Res 2021; 64:871-887. [PMID: 34348330] [DOI: 10.1159/000518620]
Affiliation(s)
- José Cunha-Vaz: AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- João Figueira: AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Anat Löwenstein: Ophthalmology Division, Tel Aviv Medical Center, affiliated to Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Edoardo Midena: Department of Neuroscience, University of Padua, Padua, Italy
- Peter Henry Scanlon: Department of Ophthalmology, Gloucestershire Hospitals NHS Foundation Trust, Cheltenham, United Kingdom
- Rafael Simó: Diabetes and Metabolism Research Unit, Vall d'Hebron Research Institute, Barcelona, Spain; Centro de Investigación Biomédica en Red de Diabetes y Enfermedades Metabólicas Asociadas (CIBERDEM), Instituto de Salud Carlos III, Madrid, Spain
- Cristina Hernández: Diabetes and Metabolism Research Unit, Vall d'Hebron Research Institute, Barcelona, Spain; Centro de Investigación Biomédica en Red de Diabetes y Enfermedades Metabólicas Asociadas (CIBERDEM), Instituto de Salud Carlos III, Madrid, Spain
- Maria H Madeira: AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Inês P Marques: AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine, University of Coimbra, Coimbra, Portugal; Department of Orthoptics, School of Health, Polytechnic of Porto, Porto, Portugal
- António C-V Martinho: AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal
- Ana R Santos: AIBILI-Association for Innovation and Biomedical Research on Light and Image, Coimbra, Portugal; Department of Orthoptics, School of Health, Polytechnic of Porto, Porto, Portugal
- Olga Simó-Servat: Diabetes and Metabolism Research Unit, Vall d'Hebron Research Institute, Barcelona, Spain; Centro de Investigación Biomédica en Red de Diabetes y Enfermedades Metabólicas Asociadas (CIBERDEM), Instituto de Salud Carlos III, Madrid, Spain
- Recivall P Salongcay: Centre for Public Health, Queen's University Belfast, Belfast, United Kingdom; Eye and Vision Institute, The Medical City, Pasig, Philippines
- Dinah Zur: Ophthalmology Division, Tel Aviv Medical Center, affiliated to Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Tunde Peto: Centre for Public Health, Queen's University Belfast, Belfast, United Kingdom
21
Kashani AH, Asanad S, Chan JW, Singer MB, Zhang J, Sharifi M, Khansari MM, Abdolahi F, Shi Y, Biffi A, Chui H, Ringman JM. Past, present and future role of retinal imaging in neurodegenerative disease. Prog Retin Eye Res 2021; 83:100938. [PMID: 33460813] [DOI: 10.1016/j.preteyeres.2020.100938]
Abstract
Retinal imaging technology is rapidly advancing and can provide ever-increasing amounts of information about the structure, function, and molecular composition of retinal tissue in humans in vivo. Most importantly, this information can be obtained rapidly, non-invasively, and in many cases using Food and Drug Administration-approved devices that are commercially available. Technologies such as optical coherence tomography have dramatically changed our understanding of retinal disease and in many cases have significantly improved its clinical management. Since the retina is an extension of the brain and shares a common embryological origin with the central nervous system, there has also been intense interest in leveraging the expanding armamentarium of retinal imaging technology to understand, diagnose, and monitor neurological diseases. This is particularly appealing because of the high spatial resolution, relatively low cost, and wide availability of retinal imaging modalities such as fundus photography or OCT compared with brain imaging modalities such as magnetic resonance imaging or positron emission tomography. The purpose of this article is to review and synthesize current research on retinal imaging in neurodegenerative disease by providing examples from the literature and elaborating on limitations, challenges, and future directions. We begin with a general background of the most relevant retinal imaging modalities to give the reader a foundation for the clinical studies discussed subsequently. We then review the application and results of retinal imaging methodologies in several prevalent neurodegenerative diseases where extensive work has been done, including sporadic late-onset Alzheimer's Disease, Parkinson's Disease, and Huntington's Disease. We also discuss Autosomal Dominant Alzheimer's Disease and cerebrovascular small vessel disease, where the application of retinal imaging holds promise but data are currently scarce. Although cerebrovascular disease is not generally considered a neurodegenerative process, it is both a confounder of and a contributor to neurodegenerative disease processes that requires more attention. Finally, we discuss ongoing efforts to overcome the limitations in the field and unmet clinical and scientific needs.
22
Ma Y, Hao H, Xie J, Fu H, Zhang J, Yang J, Wang Z, Liu J, Zheng Y, Zhao Y. ROSE: A Retinal OCT-Angiography Vessel Segmentation Dataset and New Model. IEEE Trans Med Imaging 2021; 40:928-939. [PMID: 33284751] [DOI: 10.1109/tmi.2020.3042802]
Abstract
Optical Coherence Tomography Angiography (OCTA) is a non-invasive imaging technique that has been increasingly used to image the retinal vasculature at capillary-level resolution. However, automated segmentation of retinal vessels in OCTA has been under-studied due to various challenges such as low capillary visibility and high vessel complexity, despite its significance in understanding many vision-related diseases. In addition, there is no publicly available OCTA dataset with manually graded vessels for training and validation of segmentation algorithms. To address these issues, we construct, for the first time in the field of retinal image analysis, a dedicated Retinal OCTA SEgmentation dataset (ROSE), which consists of 229 OCTA images with vessel annotations at either centerline level or pixel level. This dataset, together with the source code, has been released for public access to assist researchers in the community in undertaking research on related topics. Second, we introduce a novel split-based coarse-to-fine vessel segmentation network for OCTA images (OCTA-Net), with the ability to detect thick and thin vessels separately. In OCTA-Net, a split-based coarse segmentation module is first used to produce a preliminary confidence map of vessels, and a split-based refined segmentation module is then used to optimize the shape/contour of the retinal microvasculature. We perform a thorough evaluation of state-of-the-art vessel segmentation models and our OCTA-Net on the constructed ROSE dataset. The experimental results demonstrate that OCTA-Net yields better vessel segmentation performance in OCTA than both traditional and other deep learning methods. In addition, we provide a fractal dimension analysis of the segmented microvasculature, and the statistical analysis demonstrates significant differences between the healthy control and Alzheimer's disease groups. This supports the view that analysis of the retinal microvasculature may offer a new means to study various neurodegenerative diseases.
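The fractal dimension analysis mentioned in the abstract above is commonly performed by box counting on a binarized vessel map. A minimal sketch of that general technique (an illustration, not the authors' released implementation) might look like:

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary 2D mask by box counting.

    For each box size s, count the s-by-s boxes containing at least one
    foreground pixel, then fit log(count) against log(1/s); the slope
    estimates the fractal dimension.
    """
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in box_sizes:
        # Pad so the image tiles evenly into s-by-s boxes.
        h = int(np.ceil(mask.shape[0] / s)) * s
        w = int(np.ceil(mask.shape[1] / s)) * s
        padded = np.zeros((h, w), dtype=bool)
        padded[:mask.shape[0], :mask.shape[1]] = mask
        boxes = padded.reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square has dimension close to 2.
square = np.ones((64, 64))
print(round(box_counting_dimension(square), 2))  # → 2.0
```

A sparse vessel skeleton would yield a value between 1 and 2, which is the range typically compared between control and disease groups.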
23
Zhang Z, Zhang S, Feng H, Lv Z. Extraction and Visualization of Ocular Blood Vessels in 3D Medical Images Based on Geometric Transformation Algorithm. Journal of Healthcare Engineering 2021; 2021:1-13. [DOI: 10.1155/2021/5573381]
Abstract
Data extraction and visualization of 3D medical images of ocular blood vessels are performed by a geometric transformation algorithm. The method first computes a stochastic resonance response in a global sense to detect high-contrast coarse vessels, then redefines the input signal as a local image that masks out the global detection result, enabling enhanced detection of low-contrast microfine vessels and completing multilevel stochastic resonance segmentation. Finally, a stochastic resonance detection method for fundus vessels based on scale decomposition is proposed: the images are scale-decomposed, the high-frequency signals containing detailed information are stochastically enhanced to segment microfine vessels, and the final vessel segmentation results are obtained after fusing the low-frequency image signals. The optimal stochastic resonance response of the nonlinear neuron model in the global sense is used to detect high-intensity signals; the input signal is then defined as a local image with high-contrast vessels removed, and the parameters are re-optimized before detecting low-intensity signals. The multilevel stochastic resonance responses are finally fused to obtain the segmentation of the fundus retinal vessels. The sensitivity of the proposed multilevel segmentation method is significantly improved compared with global stochastic resonance alone, indicating a clear advantage in segmenting vessels at low intensity levels. Tests on an image library showed that the new method segments low-contrast microscopic vessels better, makes full use of noise for weak-signal detection and segmentation, and offers a new way to achieve multilevel segmentation and recognition of medical images.
24
Sarabi MS, Khansari MM, Zhang J, Kushner-Lenhoff S, Gahm JK, Qiao Y, Kashani AH, Shi Y. 3D Retinal Vessel Density Mapping With OCT-Angiography. IEEE J Biomed Health Inform 2020; 24:3466-3479. [PMID: 32986562 PMCID: PMC7737654 DOI: 10.1109/jbhi.2020.3023308]
Abstract
Optical Coherence Tomography Angiography (OCTA) is a novel, non-invasive imaging modality of retinal capillaries at micron resolution. Recent studies have correlated macular OCTA vascular measures with retinal disease severity and supported their use as a diagnostic tool. However, these measurements mostly rely on a few summary statistics in retinal layers or regions of interest in the two-dimensional (2D) en face projection images. To enable 3D and localized comparisons of retinal vasculature between longitudinal scans and across populations, we develop a novel approach for mapping retinal vessel density from OCTA images. We first obtain a high-quality 3D representation of OCTA-based vessel networks via curvelet-based denoising and optimally oriented flux (OOF). Then, an effective 3D retinal vessel density mapping method is proposed. In this framework, a vessel density image (VDI) is constructed by diffusing the vessel mask derived from OOF-based analysis to the entire image volume. Subsequently, we utilize a non-linear, 3D OCT image registration method to provide localized comparisons of retinal vasculature across subjects. In our experimental results, we demonstrate an application of our method for longitudinal qualitative analysis of two pathological subjects with edema during the course of clinical care. Additionally, we quantitatively validate our method on synthetic data with simulated capillary dropout, a dataset obtained from a normal control (NC) population divided into two age groups, and a dataset obtained from patients with diabetic retinopathy (DR). Our results show that we can successfully detect localized vascular changes caused by simulated capillary loss, normal aging, and DR pathology even in the presence of edema. These results demonstrate the potential of the proposed framework in localized detection of microvascular changes and monitoring of retinal disease progression.
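The core VDI idea above — diffusing a binary vessel mask over the whole volume to get a dense, comparable per-voxel density — can be approximated with simple Gaussian diffusion. A hedged sketch (not the authors' OOF/registration pipeline; the `sigma` value is an arbitrary illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vessel_density_image(vessel_mask, sigma=2.0):
    """Approximate a vessel density image by diffusing a binary vessel mask.

    Gaussian smoothing spreads each vessel voxel's contribution into its
    neighborhood, yielding a dense per-voxel density in [0, 1] that can be
    compared voxel-wise across registered scans.
    """
    return gaussian_filter(np.asarray(vessel_mask, dtype=float), sigma=sigma)

# Toy 3D volume with a single straight "vessel" along one axis.
volume = np.zeros((16, 16, 16))
volume[8, 8, :] = 1.0
vdi = vessel_density_image(volume, sigma=1.5)
print(vdi.shape, float(vdi.max()) <= 1.0)  # → (16, 16, 16) True
```

Because the output is defined everywhere (not just on the vessel mask), differences between two registered VDIs localize where density changed — the property the paper exploits for longitudinal comparison.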
25
Stefan S, Lee J. Deep learning toolbox for automated enhancement, segmentation, and graphing of cortical optical coherence tomography microangiograms. Biomed Opt Express 2020; 11:7325-7342. [PMID: 33409000 PMCID: PMC7747889 DOI: 10.1364/boe.405763]
Abstract
Optical coherence tomography angiography (OCTA) is becoming increasingly popular for neuroscientific study, but it remains challenging to objectively quantify angioarchitectural properties from 3D OCTA images. This is mainly due to projection artifacts or "tails" underneath vessels caused by multiple scattering, as well as the relatively low signal-to-noise ratio compared to fluorescence-based imaging modalities. Here, we propose a set of deep learning approaches based on convolutional neural networks (CNNs) for automated enhancement, segmentation, and gap correction of OCTA images, especially those obtained from the rodent cortex. Additionally, we present a strategy for skeletonizing the segmented OCTA and extracting the underlying vascular graph, which enables the quantitative assessment of various angioarchitectural properties, including individual vessel lengths and tortuosity. These tools, including the trained CNNs, are made publicly available as a user-friendly toolbox for researchers to input their OCTA images and subsequently receive the underlying vascular network graph with the associated angioarchitectural properties.
Affiliation(s)
- Sabina Stefan
- Center for Biomedical Engineering, School of Engineering, Brown University, Providence, RI 02912, USA
- Jonghwan Lee
- Center for Biomedical Engineering, School of Engineering, Brown University, Providence, RI 02912, USA
- Carney Institute for Brain Science, Brown University, Providence, RI 02912, USA
26
Mou L, Zhao Y, Fu H, Liu Y, Cheng J, Zheng Y, Su P, Yang J, Chen L, Frangi AF, Akiba M, Liu J. CS2-Net: Deep learning segmentation of curvilinear structures in medical imaging. Med Image Anal 2020; 67:101874. [PMID: 33166771 DOI: 10.1016/j.media.2020.101874]
Abstract
Automated detection of curvilinear structures, e.g., blood vessels or nerve fibres, from medical and biomedical images is a crucial early step in automatic image interpretation associated with the management of many diseases. Precise measurement of the morphological changes of these curvilinear organ structures informs clinicians about the mechanism, diagnosis, and treatment of, e.g., cardiovascular, kidney, eye, lung, and neurological conditions. In this work, we propose a generic and unified convolutional neural network for the segmentation of curvilinear structures and illustrate its use in several 2D/3D medical imaging modalities. We introduce a new curvilinear structure segmentation network (CS2-Net), which includes a self-attention mechanism in the encoder and decoder to learn rich hierarchical representations of curvilinear structures. Two types of attention modules - spatial attention and channel attention - are utilized to enhance inter-class discrimination and intra-class responsiveness, and to adaptively integrate local features with their global dependencies. Furthermore, to facilitate the segmentation of curvilinear structures in medical images, we employ a 1×3 and a 3×1 convolutional kernel to capture boundary features. In addition, we extend the 2D attention mechanism to 3D to enhance the network's ability to aggregate depth information across different layers/slices. The proposed curvilinear structure segmentation network is thoroughly validated using both 2D and 3D images across six different imaging modalities. Experimental results across nine datasets show that the proposed method generally outperforms other state-of-the-art algorithms on various metrics.
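The intuition behind the 1×3 and 3×1 kernels above can be shown outside any deep learning framework: an asymmetric kernel responds most strongly to edges perpendicular to its long axis. A toy numpy sketch (the kernels in CS2-Net itself are learned, not the hand-set difference kernels used here):

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 'same' 2D cross-correlation with zero padding (for illustration)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical difference kernels: a 1x3 kernel measures horizontal
# gradients (vertical edges); its 3x1 transpose measures vertical ones.
k_1x3 = np.array([[-1.0, 0.0, 1.0]])
k_3x1 = k_1x3.T

img = np.zeros((8, 8))
img[:, 4:] = 1.0  # a vertical step edge
resp_v = np.abs(conv2d_same(img, k_1x3))
resp_h = np.abs(conv2d_same(img, k_3x1))
# Compare interior responses (ignoring zero-padding border effects):
print(bool(resp_v[1:-1, 1:-1].max() > resp_h[1:-1, 1:-1].max()))  # → True
```

In the network, pairing both orientations lets thin, elongated vessel boundaries be captured regardless of their direction.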
Affiliation(s)
- Lei Mou
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Yonghuai Liu
- Department of Computer Science, Edge Hill University, Ormskirk, UK
- Jun Cheng
- UBTech Research, UBTech Robotics Corp Ltd, Shenzhen, China
- Yalin Zheng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, UK; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Pan Su
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Jianlong Yang
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Li Chen
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Alejandro F Frangi
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China; Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing and School of Medicine, University of Leeds, Leeds, UK; Leeds Institute of Cardiovascular and Metabolic Medicine, School of Medicine, University of Leeds, Leeds, UK; Medical Imaging Research Centre (MIRC), University Hospital Gasthuisberg, Cardiovascular Sciences and Electrical Engineering Departments, KU Leuven, Leuven, Belgium
- Jiang Liu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China; Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
27
Borrelli E, Sacconi R, Querques L, Battista M, Bandello F, Querques G. Quantification of diabetic macular ischemia using novel three-dimensional optical coherence tomography angiography metrics. J Biophotonics 2020; 13:e202000152. [PMID: 32526048 DOI: 10.1002/jbio.202000152]
Abstract
We applied three-dimensional (3D) analysis to optical coherence tomography angiography (OCTA) to measure macular ischemia in eyes affected by non-proliferative diabetic retinopathy (DR). A previously validated algorithm was applied to OCTA data in order to obtain 3D visualization of the retinal vasculature. Subsequently, a global thresholding algorithm was applied and two novel quantitative metrics were introduced: 3D vascular volume and 3D perfusion density. Two-dimensional (2D) OCTA metrics were also obtained with different binarization thresholds for comparison. Of the 30 patients included, 15 were diagnosed with DR and 15 were controls. The 3D vascular volume and 3D perfusion density were reduced in DR eyes (P < .0001). The 2D variables also differed significantly between groups. The 3D perfusion density had the highest area under the receiver operating characteristic curve (0.964) among the tested variables. Assessing quantitative perfusion using 3D analysis is reliable and promising, with elevated diagnostic efficacy in identifying DR eyes.
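The two 3D metrics described above reduce to simple voxel statistics once the volume has been thresholded. A toy sketch under assumed definitions (vessel-voxel count for vascular volume, vessel-voxel fraction for perfusion density; the paper's exact formulas may differ):

```python
import numpy as np

def perfusion_metrics_3d(octa_volume, threshold):
    """Compute toy 3D OCTA perfusion metrics from a global threshold.

    - vascular_volume: number of voxels classified as vessel
      (times the physical voxel volume, taken as 1 here).
    - perfusion_density: vessel voxels as a fraction of all voxels.
    """
    vessel = np.asarray(octa_volume) > threshold  # global thresholding step
    vascular_volume = int(vessel.sum())
    perfusion_density = float(vessel.mean())
    return vascular_volume, perfusion_density

# Toy volume in which exactly half the voxels are "perfused".
volume = np.zeros((4, 4, 4))
volume[:2] = 1.0
vol, density = perfusion_metrics_3d(volume, threshold=0.5)
print(vol, density)  # → 32 0.5
```

Because density is a fraction, it is directly comparable across eyes with different scan sizes, which is what makes it usable as a diagnostic variable.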
Affiliation(s)
- Enrico Borrelli
- Department of Ophthalmology, University Vita-Salute, Milan, Italy
- Riccardo Sacconi
- Department of Ophthalmology, University Vita-Salute, Milan, Italy
- Lea Querques
- Department of Ophthalmology, University Vita-Salute, Milan, Italy
- Marco Battista
- Department of Ophthalmology, University Vita-Salute, Milan, Italy
28
Markan A, Agarwal A, Arora A, Bazgain K, Rana V, Gupta V. Novel imaging biomarkers in diabetic retinopathy and diabetic macular edema. Ther Adv Ophthalmol 2020; 12:2515841420950513. [PMID: 32954207 PMCID: PMC7475787 DOI: 10.1177/2515841420950513]
Abstract
Diabetic retinopathy is one of the major microvascular complications of diabetes mellitus. The most common causes of vision loss in diabetic retinopathy are diabetic macular edema and proliferative diabetic retinopathy. Recent developments in ocular imaging have played a significant role in the early diagnosis and management of these complications. Color fundus photography is an imaging modality helpful for screening patients with diabetic eye disease and monitoring its progression as well as response to treatment. Fundus fluorescein angiography (FFA) is a dye-based invasive test used to detect subtle neovascularization, identify areas of capillary non-perfusion, diagnose macular ischemia, and differentiate between focal and diffuse capillary bed leakage in cases of macular edema. Recent advances in retinal imaging, such as the introduction of spectral-domain and swept-source optical coherence tomography (OCT), fundus autofluorescence (FAF), OCT angiography, and ultrawide-field imaging and FFA, have helped clinicians detect certain biomarkers that can identify disease at an early stage and predict response to treatment in diabetic macular edema. This article summarizes the role of different imaging biomarkers in characterizing diabetic retinopathy and their potential contribution to its management.
Affiliation(s)
- Ashish Markan
- Advanced Eye Center, Department of Ophthalmology, Post Graduate Institute of Medical Education and Research (PGIMER), Chandigarh, India
- Aniruddha Agarwal
- Advanced Eye Center, Department of Ophthalmology, Post Graduate Institute of Medical Education and Research (PGIMER), Chandigarh, India
- Atul Arora
- Advanced Eye Center, Department of Ophthalmology, Post Graduate Institute of Medical Education and Research (PGIMER), Chandigarh, India
- Krinjeela Bazgain
- Advanced Eye Center, Department of Ophthalmology, Post Graduate Institute of Medical Education and Research (PGIMER), Chandigarh, India
- Vipin Rana
- Advanced Eye Center, Department of Ophthalmology, Post Graduate Institute of Medical Education and Research (PGIMER), Chandigarh, India
- Vishali Gupta
- Professor of Ophthalmology, Advanced Eye Center, Post Graduate Institute of Medical Education and Research (PGIMER), Sector 12, Chandigarh 160012, India
29
Wu X, Gao D, Borroni D, Madhusudhan S, Jin Z, Zheng Y. Cooperative Low-Rank Models for Removing Stripe Noise From OCTA Images. IEEE J Biomed Health Inform 2020; 24:3480-3490. [PMID: 32750910 DOI: 10.1109/jbhi.2020.2997381]
Abstract
Optical coherence tomography angiography (OCTA) is an emerging non-invasive imaging technique for imaging the microvasculature of the eye based on phase variance or amplitude decorrelation derived from repeated OCT images of the same tissue area. Stripe noise occurs during the OCTA acquisition process due to involuntary movement of the eye. To remove the stripe noise (or 'destripe') effectively, we propose two novel image decomposition models that destripe all the OCTA images of the same eye cooperatively and simultaneously: the cooperative uniformity destriping (CUD) model and the cooperative similarity destriping (CSD) model. Both models impose a low-rank constraint on the stripe noise, but in different ways: the CUD model assumes that the stripe noise is identical across all layers, while the CSD model assumes that the stripe noise differs between layers and must be modelled accordingly. Compared to the CUD model, CSD is a more general solution for real OCTA images. An efficient solution (CSD+) is developed for the CSD model to reduce computational complexity. The models were extensively evaluated against state-of-the-art methods on both synthesized and real OCTA datasets. The experiments demonstrated not only the effectiveness of the CSD and CSD+ models in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), with CSD+ twice as fast as CSD, but also their beneficial effect on vessel segmentation of OCTA images. We expect our models to become a powerful tool for clinical applications.
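The low-rank intuition above rests on a simple observation: an additive stripe field that is constant along each row is a rank-1 matrix (a column of per-row offsets times a row of ones). A crude stand-in for the papers' optimization-based models, for illustration only, removes such stripes by estimating each row's offset directly:

```python
import numpy as np

def destripe_rank1(image):
    """Remove additive stripe noise assumed constant along each row.

    This toy version estimates each row's stripe offset as its deviation
    from the global mean - a crude, closed-form stand-in for the CUD/CSD
    low-rank optimization, shown only to convey the rank-1 stripe model.
    """
    img = np.asarray(image, dtype=float)
    row_offsets = img.mean(axis=1, keepdims=True) - img.mean()
    return img - row_offsets

rng = np.random.default_rng(0)
clean = rng.random((32, 32))
stripes = rng.normal(0, 0.5, size=(32, 1)) * np.ones((1, 32))  # rank-1 noise
noisy = clean + stripes
restored = destripe_rank1(noisy)
print(np.abs(restored - clean).mean() < np.abs(noisy - clean).mean())  # → True
```

The cooperative models go further by coupling the stripe estimates across all layers of the same eye, which a per-image heuristic like this cannot do.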
30
Pellegrini M, Vagge A, Ferro Desideri L, Bernabei F, Triolo G, Mastropasqua R, Del Noce C, Borrelli E, Sacconi R, Iovino C, Di Zazzo A, Forlini M, Giannaccare G. Optical Coherence Tomography Angiography in Neurodegenerative Disorders. J Clin Med 2020; 9:E1706. [PMID: 32498362 PMCID: PMC7356677 DOI: 10.3390/jcm9061706]
Abstract
Retinal microcirculation shares similar features with cerebral small blood vessels. Thus, the retina may be considered an accessible 'window' to detect the microvascular damage occurring in the setting of neurodegenerative disorders. Optical coherence tomography angiography (OCT-A) is a non-invasive imaging modality providing depth resolved images of blood flow in the retina, choroid, and optic nerve. In this review, we summarize the current literature on the application of OCT-A in glaucoma and central nervous system conditions such as Alzheimer's disease, Parkinson's disease, and multiple sclerosis. Future directions aiming at evaluating whether OCT-A can be an additional biomarker for the early diagnosis and monitoring of neurodegenerative disorders are also discussed.
Affiliation(s)
- Marco Pellegrini
- Ophthalmology Unit, S. Orsola-Malpighi University Hospital, University of Bologna, 40138 Bologna, Italy
- Aldo Vagge
- University Eye Clinic, DINOGMI, Polyclinic Hospital San Martino IRCCS, 16132 Genoa, Italy
- Lorenzo Ferro Desideri
- University Eye Clinic, DINOGMI, Polyclinic Hospital San Martino IRCCS, 16132 Genoa, Italy
- Federico Bernabei
- Ophthalmology Unit, S. Orsola-Malpighi University Hospital, University of Bologna, 40138 Bologna, Italy
- Giacinto Triolo
- Ophthalmology Department, Fatebenefratelli and Ophthalmic Hospital, ASST-Fatebenefratelli-Sacco, Milan, Italy
- Rodolfo Mastropasqua
- Institute of Ophthalmology, University of Modena and Reggio Emilia, 41121 Modena, Italy
- Chiara Del Noce
- University Eye Clinic, DINOGMI, Polyclinic Hospital San Martino IRCCS, 16132 Genoa, Italy
- Enrico Borrelli
- Department of Ophthalmology, Hospital San Raffaele, University Vita Salute San Raffaele, 20132 Milan, Italy
- Riccardo Sacconi
- Department of Ophthalmology, Hospital San Raffaele, University Vita Salute San Raffaele, 20132 Milan, Italy
- Claudio Iovino
- Department of Surgical Sciences, Eye Clinic, University of Cagliari, 09124 Cagliari, Italy
- Antonio Di Zazzo
- Department of Ophthalmology, University Campus Bio-Medico of Rome, 00128 Rome, Italy
- Matteo Forlini
- Domus Nova Hospital, 48121 Ravenna, Italy
- Department of Ophthalmology, Ospedale dello Stato della Repubblica di San Marino, 47893 Città di San Marino, San Marino
- Giuseppe Giannaccare
- Department of Ophthalmology, University “Magna Graecia”, 88100 Catanzaro, Italy