1
Khalafi P, Morsali S, Hamidi S, Ashayeri H, Sobhi N, Pedrammehr S, Jafarizadeh A. Artificial intelligence in stroke risk assessment and management via retinal imaging. Front Comput Neurosci 2025; 19:1490603. PMID: 40034651; PMCID: PMC11872910; DOI: 10.3389/fncom.2025.1490603.
Abstract
Retinal imaging, used for assessing stroke-related retinal changes, is a non-invasive and cost-effective method that can be enhanced by machine learning and deep learning algorithms, showing promise in early disease detection, severity grading, and prognostic evaluation in stroke patients. This review explores the role of artificial intelligence (AI) in stroke patient care, focusing on the integration of retinal imaging into clinical workflows. Retinal imaging has revealed several microvascular changes, including a decrease in the central retinal artery diameter and an increase in the central retinal vein diameter, both of which are associated with lacunar stroke and intracranial hemorrhage. Additionally, microvascular changes such as arteriovenous nicking, increased vessel tortuosity, enhanced arteriolar light reflex, decreased retinal fractals, and thinning of the retinal nerve fiber layer are reported to be associated with higher stroke risk. AI models, such as Xception and EfficientNet, have demonstrated accuracy comparable to traditional stroke risk scoring systems in predicting stroke risk. For stroke diagnosis, models like Inception, ResNet, and VGG, alongside machine learning classifiers, have shown high efficacy in distinguishing stroke patients from healthy individuals using retinal imaging. Moreover, a random forest model effectively distinguished between ischemic and hemorrhagic stroke subtypes based on retinal features, showing superior predictive performance compared to traditional clinical characteristics. Furthermore, a support vector machine model has achieved high classification accuracy in assessing pial collateral status. Despite these advancements, challenges persist, including the lack of standardized protocols for imaging modalities, hesitance in trusting AI-generated predictions, insufficient integration of retinal imaging data with electronic health records, the need for validation across diverse populations, and ethical and regulatory concerns.
Future efforts must focus on validating AI models across diverse populations, ensuring algorithm transparency, and addressing ethical and regulatory issues to enable broader implementation. Overcoming these barriers will be essential for translating this technology into personalized stroke care and improving patient outcomes.
Affiliation(s)
- Parsa Khalafi
- School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Soroush Morsali
- Student Research Committee, Tabriz University of Medical Sciences, Tabriz, Iran
- Tabriz USERN Office, Universal Scientific Education and Research Network (USERN), Tabriz, Iran
- Neuroscience Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Sana Hamidi
- Student Research Committee, Tabriz University of Medical Sciences, Tabriz, Iran
- Tabriz USERN Office, Universal Scientific Education and Research Network (USERN), Tabriz, Iran
- Hamidreza Ashayeri
- Student Research Committee, Tabriz University of Medical Sciences, Tabriz, Iran
- Neuroscience Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Navid Sobhi
- Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Siamak Pedrammehr
- Faculty of Design, Tabriz Islamic Art University, Tabriz, Iran
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, VIC, Australia
- Ali Jafarizadeh
- Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran
2
Prasad VK, Verma A, Bhattacharya P, Shah S, Chowdhury S, Bhavsar M, Aslam S, Ashraf N. Revolutionizing healthcare: a comparative insight into deep learning's role in medical imaging. Sci Rep 2024; 14:30273. PMID: 39632902; PMCID: PMC11618441; DOI: 10.1038/s41598-024-71358-7.
Abstract
Recently, Deep Learning (DL) models have shown promising accuracy in the analysis of medical images. Alzheimer's Disease (AD), a prevalent form of dementia, is assessed using Magnetic Resonance Imaging (MRI) scans, which are then analysed via DL models. To address model computational constraints, Cloud Computing (CC) is integrated to operate with the DL models. Recent articles on DL-based MRI have not discussed datasets specific to different diseases, which makes it difficult to build disease-specific DL models. Thus, this article systematically takes a tutorial approach: we first discuss a classification taxonomy of medical imaging datasets. Next, we present a case study on AD MRI classification using DL methods. We analyse three distinct models (Convolutional Neural Networks (CNN), Visual Geometry Group 16 (VGG-16), and an ensemble approach) for classification and predictive outcomes. In addition, we designed a novel framework that offers insight into how various layers interact with the dataset. Our architecture comprises an input layer, a cloud-based layer responsible for preprocessing and model execution, and a diagnostic layer that issues alerts after successful classification and prediction. According to our simulations, CNN outperformed the other models with a test accuracy of 99.285%, followed by VGG-16 with 85.113%, while the ensemble model lagged behind with a test accuracy of 79.192%. Our cloud computing framework serves as an efficient mechanism for medical image processing while safeguarding patient confidentiality and data privacy.
Affiliation(s)
- Vivek Kumar Prasad
- Department of CSE, Institute of Technology Nirma University, Ahmedabad, Gujarat, India
- Ashwin Verma
- Department of CSE, Institute of Technology Nirma University, Ahmedabad, Gujarat, India
- Pronaya Bhattacharya
- Department of CSE, Amity School of Engineering and Technology, Research and Innovation Cell, Amity University, Kolkata, West Bengal, India
- Sheryal Shah
- Department of CSE, Institute of Technology Nirma University, Ahmedabad, Gujarat, India
- Subrata Chowdhury
- Department of Computer Science and Engineering, Sreenivasa Institute of Technology and Management Studies, Chittoor, Andhra Pradesh, India
- Madhuri Bhavsar
- Department of CSE, Institute of Technology Nirma University, Ahmedabad, Gujarat, India
- Sheraz Aslam
- Department of Electrical Engineering, Computer Engineering, and Informatics, Cyprus University of Technology, 3036, Limassol, Cyprus
- Nouman Ashraf
- School of Electrical and Electronic Engineering, Technological University Dublin, Dublin, Ireland
3
Su R, van der Sluijs PM, Chen Y, Cornelissen S, van den Broek R, van Zwam WH, van der Lugt A, Niessen WJ, Ruijters D, van Walsum T. CAVE: Cerebral artery-vein segmentation in digital subtraction angiography. Comput Med Imaging Graph 2024; 115:102392. PMID: 38714020; DOI: 10.1016/j.compmedimag.2024.102392.
Abstract
Cerebral X-ray digital subtraction angiography (DSA) is a widely used imaging technique in patients with neurovascular disease, allowing for vessel and flow visualization with high spatio-temporal resolution. Automatic artery-vein segmentation in DSA plays a fundamental role in vascular analysis with quantitative biomarker extraction, facilitating a wide range of clinical applications. The widely adopted U-Net applied on static DSA frames often struggles with disentangling vessels from subtraction artifacts. Further, it falls short in effectively separating arteries and veins as it disregards the temporal perspectives inherent in DSA. To address these limitations, we propose to simultaneously leverage spatial vasculature and temporal cerebral flow characteristics to segment arteries and veins in DSA. The proposed network, coined CAVE, encodes a 2D+time DSA series using spatial modules, aggregates all the features using temporal modules, and decodes it into 2D segmentation maps. On a large multi-center clinical dataset, CAVE achieves a vessel segmentation Dice of 0.84 (±0.04) and an artery-vein segmentation Dice of 0.79 (±0.06). CAVE surpasses traditional Frangi-based k-means clustering (P < 0.001) and U-Net (P < 0.001) by a significant margin, demonstrating the advantages of harvesting spatio-temporal features. This study represents the first investigation into automatic artery-vein segmentation in DSA using deep learning. The code is publicly available at https://github.com/RuishengSu/CAVE_DSA.
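The temporal cue the abstract exploits (contrast reaches arteries before veins across a DSA series) can be illustrated with a toy time-to-peak rule. This is a hypothetical sketch of the classical heuristic that CAVE's learned spatio-temporal features supersede, not the network itself; `split_frame` is an assumed tuning parameter and contrast is taken as positive values:

```python
def time_to_peak_labels(series, vessel_mask, split_frame):
    """Label each vessel pixel artery/vein by the frame index of its peak contrast.

    series: list of 2D frames (lists of lists of floats), contrast as positive values.
    vessel_mask: 2D list of bools marking vessel pixels.
    split_frame: pixels peaking at or before this frame are called 'artery'.
    """
    h, w = len(vessel_mask), len(vessel_mask[0])
    labels = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not vessel_mask[y][x]:
                continue  # background pixels stay unlabelled
            peak = max(range(len(series)), key=lambda t: series[t][y][x])
            labels[y][x] = "artery" if peak <= split_frame else "vein"
    return labels
```

A learned model replaces this brittle global threshold with features aggregated over both space and time, which is the advantage the paper quantifies.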
Affiliation(s)
- Ruisheng Su
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, The Netherlands
- P Matthijs van der Sluijs
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, The Netherlands
- Yuan Chen
- Department of Radiology & Nuclear Medicine, UMass Chan Medical School, Worcester, USA
- Sandra Cornelissen
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, The Netherlands
- Ruben van den Broek
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, The Netherlands
- Wim H van Zwam
- Department of Radiology & Nuclear Medicine, Maastricht UMC, Cardiovascular Research Institute Maastricht, The Netherlands
- Aad van der Lugt
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, The Netherlands
- Wiro J Niessen
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, The Netherlands; Imaging Physics, Applied Sciences, Delft University of Technology, The Netherlands
- Theo van Walsum
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, The Netherlands
4
Khan MZ, Gajendran MK. Generative Neural Framework for Micro-Vessels Classification. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-4. PMID: 40039239; DOI: 10.1109/embc53108.2024.10782802.
Abstract
Morphological abnormalities in the retinal blood vessels have a close association with cerebrovascular, cardiovascular, and systemic diseases. This makes retinal artery/vein (A/V) classification salient for clinical decision-making. Existing methods find it challenging to correctly classify A/V under non-uniform brightness and vessel thickness, especially at bifurcations and endpoints. To avoid these problems and increase precision, AV-Net is proposed. It uses context information and performs data fusion to improve A/V classification. Specifically, AV-Net offers a module that fuses local and global vessel information to create a weight map that constrains the A/V features. This helps suppress background-prone features and improves region extraction at bifurcations and endpoints. In addition, to improve model robustness, AV-Net uses a multiscale-feature module that captures coarse and fine details.
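The local/global fusion step can be sketched as a gating weight map built from a per-pixel (local) response and an image-level (global) summary. This is a toy illustration under assumed rules (sigmoid of the sum, global summary as the mean), not the published AV-Net design; all function names are hypothetical:

```python
import math


def fuse_weight_map(local_feat, global_feat):
    """Toy fusion: a sigmoid gate from the sum of each local response
    and the mean of the global feature map."""
    g = sum(sum(row) for row in global_feat) / (len(global_feat) * len(global_feat[0]))

    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))

    return [[sigmoid(l + g) for l in row] for row in local_feat]


def apply_gate(features, weights):
    """Suppress background-prone features by multiplying with the weight map."""
    return [[f * w for f, w in zip(fr, wr)] for fr, wr in zip(features, weights)]
```

The gate stays in (0, 1), so strong joint local/global evidence passes features through while weak evidence attenuates them.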
5
Abtahi M, Le D, Ebrahimi B, Dadzie AK, Rahimi M, Hsieh YT, Heiferman MJ, Lim JI, Yao X. Differential artery-vein analysis improves the OCTA classification of diabetic retinopathy. Biomed Opt Express 2024; 15:3889-3899. PMID: 38867785; PMCID: PMC11166441; DOI: 10.1364/boe.521657.
Abstract
This study investigates the impact of differential artery-vein (AV) analysis in optical coherence tomography angiography (OCTA) on machine learning classification of diabetic retinopathy (DR). Leveraging deep learning for arterial-venous area (AVA) segmentation, six quantitative features, namely perfusion intensity density (PID), blood vessel density (BVD), vessel area flux (VAF), blood vessel caliber (BVC), blood vessel tortuosity (BVT), and vessel perimeter index (VPI), were derived from OCTA images before and after AV differentiation. A support vector machine (SVM) classifier was utilized to assess both binary and multiclass classifications of control, diabetic patients without DR (NoDR), mild DR, moderate DR, and severe DR groups. Initially, one-region features, i.e., quantitative features extracted from the entire OCTA, were evaluated for DR classification. Differential AV analysis improved classification accuracies from 78.86% to 87.63% and from 79.62% to 85.66% for binary and multiclass classifications, respectively. Additionally, three-region features derived from the entire image, parafovea, and perifovea were incorporated for DR classification. Differential AV analysis further enhanced classification accuracies from 84.43% to 93.33% and from 83.40% to 89.25% for binary and multiclass classifications, respectively. These findings highlight the potential of differential AV analysis in augmenting disease diagnosis and treatment assessment using OCTA.
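Two of the quantitative features listed above can be computed directly from a binary vessel map and a centerline path. A minimal sketch, assuming the common definitions (BVD as the vessel-pixel fraction of the image, BVT as the arc-to-chord ratio of a centerline), which may differ in detail from the paper's exact implementations:

```python
import math


def blood_vessel_density(mask):
    """BVD: fraction of the image area occupied by segmented vessel pixels."""
    total = sum(len(row) for row in mask)
    vessel = sum(sum(1 for v in row if v) for row in mask)
    return vessel / total


def vessel_tortuosity(path):
    """BVT for one centerline: arc length divided by straight chord length.

    path: ordered list of (x, y) points along the vessel centerline.
    A perfectly straight vessel gives 1.0; bends push the value above 1.
    """
    arc = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    chord = math.dist(path[0], path[-1])
    return arc / chord
```

Computing such features separately on artery and vein maps, rather than on the undifferentiated vessel map, is precisely what "differential AV analysis" adds.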
Affiliation(s)
- Mansour Abtahi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- David Le
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Behrouz Ebrahimi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Albert K. Dadzie
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Mojtaba Rahimi
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Yi-Ting Hsieh
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
- Michael J. Heiferman
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Jennifer I. Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
- Xincheng Yao
- Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, IL 60607, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
6
Fhima J, Van Eijgen J, Billen Moulin-Romsée MI, Brackenier H, Kulenovic H, Debeuf V, Vangilbergen M, Freiman M, Stalmans I, Behar JA. LUNet: deep learning for the segmentation of arterioles and venules in high resolution fundus images. Physiol Meas 2024; 45:055002. PMID: 38599224; DOI: 10.1088/1361-6579/ad3d28.
Abstract
Objective. This study aims to automate the segmentation of retinal arterioles and venules (A/V) from digital fundus images (DFI), as changes in the spatial distribution of retinal microvasculature are indicative of cardiovascular diseases, positioning the eyes as windows to cardiovascular health. Approach. We utilized active learning to create a new DFI dataset with 240 crowd-sourced manual A/V segmentations performed by 15 medical students and reviewed by an ophthalmologist. We then developed LUNet, a novel deep learning architecture optimized for high-resolution A/V segmentation. The LUNet model features a double dilated convolutional block to widen the receptive field and reduce parameter count, alongside a high-resolution tail to refine segmentation details. A custom loss function was designed to prioritize the continuity of blood vessel segmentation. Main Results. LUNet significantly outperformed three benchmark A/V segmentation algorithms both on a local test set and on four external test sets that simulated variations in ethnicity, comorbidities and annotators. Significance. The release of the new datasets and the LUNet model (www.aimlab-technion.com/lirot-ai) provides a valuable resource for the advancement of retinal microvasculature analysis. The improvements in A/V segmentation accuracy highlight LUNet's potential as a robust tool for diagnosing and understanding cardiovascular diseases through retinal imaging.
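The benefit of a dilated convolutional block can be seen from the standard receptive-field formula for stacked stride-1 convolutions. A small sketch; the abstract does not give LUNet's kernel sizes or dilation rates, so the values in the usage examples are illustrative:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolutions:
    RF = 1 + sum((k_i - 1) * d_i).

    Raising the dilation d_i widens the receptive field without adding
    any weights, which is the point of a dilated block."""
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))
```

For example, two 3x3 convolutions give a receptive field of 5 at dilation 1, but 13 with dilations 2 and 4, at an identical parameter count.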
Affiliation(s)
- Jonathan Fhima
- Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Department of Applied Mathematics, Technion-IIT, Haifa, Israel
- Jan Van Eijgen
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Department of Ophthalmology, University Hospitals UZ Leuven, Leuven, Belgium
- Marie-Isaline Billen Moulin-Romsée
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Department of Ophthalmology, University Hospitals UZ Leuven, Leuven, Belgium
- Heloïse Brackenier
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Hana Kulenovic
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Valérie Debeuf
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Marie Vangilbergen
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Moti Freiman
- Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Ingeborg Stalmans
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Department of Ophthalmology, University Hospitals UZ Leuven, Leuven, Belgium
- Joachim A Behar
- Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
7
Chen Q, Peng J, Zhao S, Liu W. Automatic artery/vein classification methods for retinal blood vessel: A review. Comput Med Imaging Graph 2024; 113:102355. PMID: 38377630; DOI: 10.1016/j.compmedimag.2024.102355.
Abstract
Automatic retinal arteriovenous classification can assist ophthalmologists in early disease diagnosis. Deep learning-based methods and topological graph-based methods have become the main solutions for retinal arteriovenous classification in recent years. This paper reviews automatic retinal arteriovenous classification methods from 2003 to 2022. Firstly, we compare different methods and provide comparison tables of the summary results. Secondly, we categorize the public arteriovenous classification datasets and provide annotation development tables for the different datasets. Finally, we sort out the challenges of evaluation methods and provide a comprehensive evaluation system. Quantitative and qualitative analyses reveal the evolution of research hotspots over time, highlighting the significance of exploring the integration of deep learning with topological information in future research.
Affiliation(s)
- Qihan Chen
- School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Jianqing Peng
- School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China; Guangdong Provincial Key Laboratory of Fire Science and Technology, Guangzhou 510006, China
- Shen Zhao
- School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Wanquan Liu
- School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
8
Zhou Y, Xu M, Hu Y, Blumberg SB, Zhao A, Wagner SK, Keane PA, Alexander DC. CF-Loss: Clinically-relevant feature optimised loss function for retinal multi-class vessel segmentation and vascular feature measurement. Med Image Anal 2024; 93:103098. PMID: 38320370; DOI: 10.1016/j.media.2024.103098.
Abstract
Characterising clinically-relevant vascular features, such as vessel density and fractal dimension, can benefit biomarker discovery and disease diagnosis for both ophthalmic and systemic diseases. In this work, we explicitly encode vascular features into an end-to-end loss function for multi-class vessel segmentation, categorising pixels into artery, vein, uncertain pixels, and background. This clinically-relevant feature optimised loss function (CF-Loss) regulates networks to segment accurate multi-class vessel maps that produce precise vascular features. Our experiments first verify that CF-Loss significantly improves both multi-class vessel segmentation and vascular feature estimation, with two standard segmentation networks, on three publicly available datasets. We reveal that pixel-based segmentation performance is not always positively correlated with accuracy of vascular features, thus highlighting the importance of optimising vascular features directly via CF-Loss. Finally, we show that improved vascular features from CF-Loss, as biomarkers, can yield quantitative improvements in the prediction of ischaemic stroke, a real-world clinical downstream task. The code is available at https://github.com/rmaphoh/feature-loss.
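The idea of optimising a vascular feature directly, rather than only per-pixel agreement, can be sketched as a loss with two terms. This is a minimal illustration using vessel density as the clinically-relevant feature and an assumed weight `alpha`; it is not the published CF-Loss, which handles multi-class maps and several features:

```python
def vessel_density(mask):
    """Fraction of pixels predicted as vessel (values in [0, 1])."""
    total = sum(len(row) for row in mask)
    return sum(sum(row) for row in mask) / total


def cf_like_loss(pred, target, alpha=1.0):
    """Toy feature-optimised loss: a per-pixel L1 term plus a term penalising
    the gap in a downstream feature (vessel density) between the maps."""
    n = sum(len(row) for row in pred)
    pixel = sum(abs(p - t) for pr, tr in zip(pred, target)
                for p, t in zip(pr, tr)) / n
    feature = abs(vessel_density(pred) - vessel_density(target))
    return pixel + alpha * feature
```

The feature term is what keeps a network from trading a small pixel-accuracy gain for a large error in the biomarker, the mismatch the paper highlights.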
Affiliation(s)
- Yukun Zhou
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK; Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- MouCheng Xu
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- Yipeng Hu
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, UK
- Stefano B Blumberg
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Computer Science, University College London, London WC1E 6BT, UK
- An Zhao
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Computer Science, University College London, London WC1E 6BT, UK
- Siegfried K Wagner
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK; Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- Pearse A Keane
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK; Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- Daniel C Alexander
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Computer Science, University College London, London WC1E 6BT, UK
9
Hu J, Qiu L, Wang H, Zhang J. Semi-supervised point consistency network for retinal artery/vein classification. Comput Biol Med 2024; 168:107633. PMID: 37992471; DOI: 10.1016/j.compbiomed.2023.107633.
Abstract
Recent deep learning methods with convolutional neural networks (CNNs) have advanced medical image analysis and expedited automatic retinal artery/vein (A/V) classification. However, these CNN-based approaches are challenged in two respects: (1) specific tubular structures and subtle variations in appearance, contrast, and geometry tend to be ignored by CNNs as network depth increases; and (2) well-labeled data for supervised segmentation of retinal vessels are limited, which may hinder the effectiveness of deep learning methods. To address these issues, we propose a novel semi-supervised point consistency network (SPC-Net) for retinal A/V classification. SPC-Net consists of an A/V classification (AVC) module and a multi-class point consistency (MPC) module. The AVC module adopts an encoder-decoder segmentation network to generate the prediction probability map of A/V for supervised learning. The MPC module introduces point set representations to adaptively generate point set classification maps of the arteriovenous skeleton, which enjoy prediction flexibility and consistency (i.e., point consistency) to effectively alleviate arteriovenous confusion. In addition, we propose a consistency regularization between the predicted A/V classification probability maps and the point set representation maps for unlabeled data, to exploit the inherent segmentation perturbation of the point consistency and reduce the need for annotated data. We validate our method on two typical public datasets (DRIVE, HRF) and a private dataset (TR280) with different resolutions. Extensive qualitative and quantitative experimental results demonstrate the effectiveness of our proposed method for supervised and semi-supervised learning.
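The unsupervised consistency term can be sketched as a mean squared difference between the dense probability map and the point-set branch, evaluated at sampled skeleton points. A minimal illustration under assumed data structures; SPC-Net's actual regularizer operates on multi-class maps inside the training loop:

```python
def consistency_loss(dense_probs, point_probs, points):
    """Consistency between two branches on unlabeled data.

    dense_probs: 2D map of artery probabilities from the segmentation branch.
    point_probs: per-point artery probabilities from the point-set branch.
    points: (row, col) skeleton locations where the point branch predicts.
    """
    diffs = [(dense_probs[y][x] - p) ** 2
             for (y, x), p in zip(points, point_probs)]
    return sum(diffs) / len(diffs)
```

Minimising this term pushes the two branches to agree on unlabeled images, which is how the method reduces its dependence on annotations.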
Affiliation(s)
- Jingfei Hu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China
- Linwei Qiu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China
- Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, 100083, China
10
Suman S, Tiwari AK, Singh K. Computer-aided diagnostic system for hypertensive retinopathy: A review. Comput Methods Programs Biomed 2023; 240:107627. PMID: 37320942; DOI: 10.1016/j.cmpb.2023.107627.
Abstract
Hypertensive Retinopathy (HR) is a retinal disease caused by elevated blood pressure sustained over a prolonged period. There are no obvious signs in the early stages of high blood pressure, but it affects various body parts over time, including the eyes. HR is a biomarker for several illnesses, including retinal diseases, atherosclerosis, strokes, kidney disease, and cardiovascular risks. Early microcirculation abnormalities in chronic diseases can be diagnosed through retinal examination prior to the onset of major clinical consequences. Computer-aided diagnosis (CAD) plays a vital role in the early identification of HR with improved diagnostic accuracy, which is time-efficient and demands fewer resources. Recently, numerous studies have been reported on the automatic identification of HR. This paper provides a comprehensive review of the automated tasks of Artery-Vein (A/V) classification, Arteriovenous Ratio (AVR) computation, HR detection (binary classification), and HR severity grading. The review is conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. The paper discusses the clinical features of HR, the availability of datasets, existing methods used for A/V classification, AVR computation, HR detection, and severity grading, and performance evaluation metrics. The reviewed articles are summarized with classifier details, the methodologies adopted, performance comparisons, dataset details, their pros and cons, and computational platforms. For each task, a summary and a critical in-depth analysis are provided, as well as common research issues and challenges in the existing studies. Finally, the paper proposes future research directions to overcome the challenges associated with dataset availability, HR detection, and severity grading.
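The AVR computation surveyed above can be sketched with the widely used Knudtson revised formulas, which iteratively pair the widest vessel with the narrowest and combine them into a summary caliber (assumed constants 0.88 for arterioles and 0.95 for venules); a minimal illustration, not tied to any specific reviewed system:

```python
import math


def _combine(widths, k):
    """One Knudtson iteration: pair widest with narrowest and combine with
    w = k * sqrt(w1^2 + w2^2); an odd middle vessel carries over unchanged."""
    ws = sorted(widths)
    out = []
    while len(ws) > 1:
        lo, hi = ws.pop(0), ws.pop(-1)
        out.append(k * math.hypot(lo, hi))
    out.extend(ws)
    return out


def summary_caliber(widths, k):
    """Reduce measured vessel widths (typically the six widest) to one value."""
    while len(widths) > 1:
        widths = _combine(widths, k)
    return widths[0]


def avr(artery_widths, vein_widths):
    """Arteriovenous ratio: CRAE / CRVE."""
    crae = summary_caliber(artery_widths, 0.88)
    crve = summary_caliber(vein_widths, 0.95)
    return crae / crve
```

With identical width measurements for arterioles and venules, the AVR falls below 1 purely because of the smaller arteriolar constant; clinically, a reduced AVR reflects arteriolar narrowing.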
Affiliation(s)
- Supriya Suman
- Interdisciplinary Research Platform (IDRP): Smart Healthcare, Indian Institute of Technology, N.H. 62, Nagaur Road, Karwar, Jodhpur, Rajasthan 342030, India
- Anil Kumar Tiwari
- Department of Electrical Engineering, Indian Institute of Technology, N.H. 62, Nagaur Road, Karwar, Jodhpur, Rajasthan 342030, India
- Kuldeep Singh
- Department of Pediatrics, All India Institute of Medical Sciences, Basni Industrial Area Phase-2, Jodhpur, Rajasthan 342005, India
11
Luengnaruemitchai G, Kaewmahanin W, Munthuli A, Phienphanich P, Puangarom S, Sangchocanonta S, Jariyakosol S, Hirunwiwatkul P, Tantibundhit C. Alzheimer's Together with Mild Cognitive Impairment Screening Using Polar Transformation of Middle Zone of Fundus Images Based Deep Learning. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083188; DOI: 10.1109/embc40787.2023.10340463.
Abstract
Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) are considered a growing major health problem in the elderly. However, current clinical methods of Alzheimer's detection are expensive and difficult to access, making detection inconvenient and unsuitable for developing countries such as Thailand. Thus, we developed a method for AD and MCI screening by fine-tuning a pre-trained Densely Connected Convolutional Network (DenseNet-121) model on the middle zone of polar-transformed fundus images. The polar transformation of the middle zone of the fundus is a key factor that helps the model extract features more effectively and enhances model accuracy. The dataset was divided into two groups: normal and abnormal (AD and MCI). This method can classify normal and abnormal patients with 96% accuracy, 99% sensitivity, 90% specificity, 95% precision, and 97% F1 score. The parts of both MCI and AD input images that most impact the classification score, visualized by Grad-CAM++, focus on the superior and inferior retinal quadrants. Clinical relevance: Polar transformation of the middle zone of retinal fundus images is a key factor that enhances the classification accuracy.
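The polar transformation step can be sketched as a nearest-neighbour resampling from Cartesian pixels to (radius, angle) bins around the image centre. A minimal illustration with assumed parameters (bin counts, maximum radius); the paper's preprocessing, including middle-zone cropping and interpolation choices, is not reproduced here:

```python
import math


def polar_transform(img, cx, cy, n_r, n_theta, r_max):
    """Nearest-neighbour Cartesian-to-polar resampling around (cx, cy).

    Output row index = radius bin, column index = angle bin; sample
    coordinates are clamped to the image bounds."""
    h, w = len(img), len(img[0])
    out = []
    for ri in range(n_r):
        r = r_max * (ri + 0.5) / n_r  # bin-centre radius
        row = []
        for ti in range(n_theta):
            th = 2 * math.pi * ti / n_theta
            x = min(max(int(round(cx + r * math.cos(th))), 0), w - 1)
            y = min(max(int(round(cy + r * math.sin(th))), 0), h - 1)
            row.append(img[y][x])
        out.append(row)
    return out
```

Unwrapping an annular zone this way turns roughly circular retinal structures into roughly horizontal bands, which is one plausible reason the transform helps a convolutional classifier.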
12
Hemelings R, Elen B, Schuster AK, Blaschko MB, Barbosa-Breda J, Hujanen P, Junglas A, Nickels S, White A, Pfeiffer N, Mitchell P, De Boever P, Tuulonen A, Stalmans I. A generalizable deep learning regression model for automated glaucoma screening from fundus images. NPJ Digit Med 2023; 6:112. [PMID: 37311940 PMCID: PMC10264390 DOI: 10.1038/s41746-023-00857-0] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Received: 08/14/2022] [Accepted: 06/01/2023] [Indexed: 06/15/2023] Open
Abstract
A plethora of classification models for the detection of glaucoma from fundus images have been proposed in recent years. Often trained with data from a single glaucoma clinic, they report impressive performance on internal test sets but tend to struggle to generalize to external sets. This performance drop can be attributed to data shifts in glaucoma prevalence, fundus camera, and the definition of the glaucoma ground truth. In this study, we confirm that a previously described regression network for glaucoma referral (G-RISK) obtains excellent results in a variety of challenging settings. Thirteen different data sources of labeled fundus images were utilized. The data sources include two large population cohorts (the Australian Blue Mountains Eye Study, BMES, and the German Gutenberg Health Study, GHS) and 11 publicly available datasets (AIROGS, ORIGA, REFUGE1, LAG, ODIR, REFUGE2, GAMMA, RIM-ONEr3, RIM-ONE DL, ACRIMA, PAPILA). To minimize data shifts in input data, a standardized image processing strategy was developed to obtain 30° disc-centered images from the original data. A total of 149,455 images were included for model testing. The area under the receiver operating characteristic curve (AUC) was 0.976 [95% CI: 0.967-0.986] for the BMES cohort and 0.984 [95% CI: 0.980-0.991] for the GHS cohort at participant level. At a fixed specificity of 95%, sensitivities were 87.3% and 90.3%, respectively, surpassing the minimum criterion of 85% sensitivity recommended by Prevent Blindness America. AUC values on the eleven publicly available datasets ranged from 0.854 to 0.988. These results confirm the excellent generalizability of a glaucoma risk regression model trained with homogeneous data from a single tertiary referral center. Further validation using prospective cohort studies is warranted.
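The two headline metrics above, AUC and sensitivity at a fixed 95% specificity, can be computed for any scored test set with nothing beyond the standard library. The scores below are invented for illustration; this is not G-RISK or its data.

```python
import math

def auc_and_sens_at_spec(pos, neg, spec=0.95):
    """AUC (probability a random positive outranks a random negative,
    ties counting half) and sensitivity at a fixed specificity.
    `pos`/`neg` are model scores for positive/negative cases."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    # threshold that keeps at least `spec` of negatives at or below it
    k = math.ceil(spec * len(neg))
    thr = sorted(neg)[k - 1]
    sens = sum(1 for p in pos if p > thr) / len(pos)
    return auc, sens
```

Reporting sensitivity at a fixed specificity, as the study does, makes models comparable at the operating point a screening programme actually cares about.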
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium.
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium.
- Bart Elen
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
- Alexander K Schuster
- Department of Ophthalmology, University Medical Center Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- João Barbosa-Breda
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Cardiovascular R&D Center, Faculty of Medicine of the University of Porto, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Department of Ophthalmology, Centro Hospitalar e Universitário São João, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Pekko Hujanen
- Tays Eye Centre, Tampere University Hospital, Tampere, Finland
- Annika Junglas
- Department of Ophthalmology, University Medical Center Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Stefan Nickels
- Department of Ophthalmology, University Medical Center Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Andrew White
- Department of Ophthalmology, The University of Sydney, Sydney, NSW, Australia
- Norbert Pfeiffer
- Department of Ophthalmology, University Medical Center Mainz, Langenbeckstr. 1, 55131, Mainz, Germany
- Paul Mitchell
- Department of Ophthalmology, The University of Sydney, Sydney, NSW, Australia
- Patrick De Boever
- Centre for Environmental Sciences, Hasselt University, Agoralaan building D, 3590, Diepenbeek, Belgium
- University of Antwerp, Department of Biology, 2610, Wilrijk, Belgium
- Anja Tuulonen
- Tays Eye Centre, Tampere University Hospital, Tampere, Finland
- Ingeborg Stalmans
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Ophthalmology Department, UZ Leuven, Herestraat 49, 3000, Leuven, Belgium
13
Kv R, Prasad K, Peralam Yegneswaran P. Segmentation and Classification Approaches of Clinically Relevant Curvilinear Structures: A Review. J Med Syst 2023; 47:40. [PMID: 36971852 PMCID: PMC10042761 DOI: 10.1007/s10916-023-01927-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 12/05/2022] [Accepted: 02/25/2023] [Indexed: 03/29/2023]
Abstract
Detection of curvilinear structures from microscopic images, which helps clinicians make an unambiguous diagnosis, has assumed paramount importance in recent clinical practice. The appearance and size of dermatophytic hyphae, keratitic fungi, and corneal and retinal vessels vary widely, making their automated detection cumbersome. Automated deep learning methods, endowed with superior self-learning capacity, have superseded traditional machine learning methods, especially in complex images with challenging backgrounds. Their ability to learn features automatically from large input data, with better generalization and recognition capability and without human interference or excessive pre-processing, is highly beneficial in this context. Researchers have made varied attempts to overcome challenges such as thin vessels, bifurcations, and obstructive lesions in retinal vessel detection, as revealed in several publications reviewed here. Diabetic neuropathic complications, revealed by tortuosity and changes in the density and angles of corneal fibers, have been successfully addressed in many of the publications reviewed. Since artifacts complicate the images and affect the quality of analysis, methods addressing these challenges are also described. This review summarizes traditional and deep learning methods published between 2015 and 2021 covering retinal vessels, corneal nerves, and filamentous fungi. We find several novel and meritorious ideas and techniques being put to use for retinal vessel segmentation and classification which, by way of cross-domain adaptation, can also be utilized for corneal and filamentous fungi images, with suitable adaptations to the challenges to be addressed.
Affiliation(s)
- Rajitha Kv
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
- Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
- Prakash Peralam Yegneswaran
- Department of Microbiology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
14
Iqbal S, Khan TM, Naveed K, Naqvi SS, Nawaz SJ. Recent trends and advances in fundus image analysis: A review. Comput Biol Med 2022; 151:106277. [PMID: 36370579 DOI: 10.1016/j.compbiomed.2022.106277] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Received: 07/13/2022] [Revised: 10/19/2022] [Accepted: 10/30/2022] [Indexed: 11/05/2022]
Abstract
Automated retinal image analysis holds prime significance in the accurate diagnosis of various critical eye diseases that include diabetic retinopathy (DR), age-related macular degeneration (AMD), atherosclerosis, and glaucoma. Manual diagnosis of retinal diseases by ophthalmologists takes time, effort, and financial resources, and is prone to error, in comparison to computer-aided diagnosis systems. In this context, robust classification and segmentation of retinal images are primary operations that aid clinicians in the early screening of patients to ensure the prevention and/or treatment of these diseases. This paper conducts an extensive review of the state-of-the-art methods for the detection and segmentation of retinal image features. Existing notable techniques for the detection of retinal features are categorized into essential groups and compared in depth. Additionally, a summary of quantifiable performance measures for various important stages of retinal image analysis, such as image acquisition and preprocessing, is provided. Finally, the datasets widely used in the literature for analyzing retinal images are described and their significance is emphasized.
Affiliation(s)
- Shahzaib Iqbal
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Tariq M Khan
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
- Khuram Naveed
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan; Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
- Syed S Naqvi
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Syed Junaid Nawaz
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
15
Toptaş B, Hanbay D. Separation of arteries and veins in retinal fundus images with a new CNN architecture. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. [DOI: 10.1080/21681163.2022.2151066] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/30/2022]
Affiliation(s)
- Buket Toptaş
- Computer Engineering Department, Engineering and Natural Science Faculty, Bandırma Onyedi Eylül University, Balıkesir, Turkey
- Davut Hanbay
- Computer Engineering Department, Engineering Faculty, Inonu University, Malatya, Turkey
16
Kullberg J, Colton J, Gregory CT, Bay A, Munro T. Demonstration of Neural Networks to Reconstruct Temperatures from Simulated Fluorescent Data Toward Use in Bio-microfluidics. Int J Thermophys 2022; 43:172. [PMID: 36349060 PMCID: PMC9639173 DOI: 10.1007/s10765-022-03102-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/23/2022] [Accepted: 09/12/2022] [Indexed: 06/16/2023]
Abstract
Biological systems often have a narrow temperature range of operation, which requires highly accurate, spatially resolved temperature measurements, often near ±0.1 K. However, many temperature sensors cannot meet both the accuracy and spatial-distribution requirements, often because their accuracy is limited by data fitting and temperature reconstruction models. Machine learning algorithms have the potential to meet this need, but their usage for generating spatial distributions of temperature is severely lacking in the literature. This work presents the first instance of using neural networks to process fluorescent images to map the spatial distribution of temperature. Three standard network architectures were investigated using non-spatially resolved fluorescent thermometry (a simply-connected feed-forward network) or image or pixel identification (U-net and convolutional neural network, CNN). Simulated fluorescent images based on experimental data were generated from known temperature distributions, with Gaussian white noise with a standard deviation of ±0.1 K added. The poor results from these standard networks motivated the creation of what is termed a moving CNN (MCNN), with an RMSE of ±0.23 K, where the elements of the matrix represent the neighboring pixels. Finally, the performance of the MCNN is investigated when trained on and applied to three temperature distributions characteristic of microfluidic devices, where the fluorescent image is simulated at either three or five different wavelengths. The results demonstrate that having a minimum of 10^3.5 data points per temperature and the broadest range of temperatures during training provides temperature predictions nearest to the true temperatures, with a minimum RMSE of ±0.15 K. Compared to traditional curve-fitting techniques, this work demonstrates that greater accuracy can be achieved when spatially mapping temperature from fluorescent images using convolutional neural networks.
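The ±RMSE figures quoted above are plain root-mean-square errors between reconstructed and true temperature maps. For reference, a minimal stdlib computation over flattened maps; the values in the usage note are invented:

```python
import math

def rmse(pred, true):
    """Root-mean-square error between two flattened temperature maps."""
    assert len(pred) == len(true)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))
```

A 2-D temperature map would be flattened row-major before the call, e.g. `rmse([t for row in pred_map for t in row], [t for row in true_map for t in row])`.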
Affiliation(s)
- Jacob Kullberg
- Computer Science Department, Brigham Young University, 3361 TMCB, Provo, 84602, UT, USA
- Jacob Colton
- Mechanical Engineering Department, Brigham Young University, 3361 TMCB, Provo, 84602, UT, USA
- C. Tolex Gregory
- Computer Science Department, Brigham Young University, 3361 TMCB, Provo, 84602, UT, USA
- Austin Bay
- Neuroscience Department, Brigham Young University, S-192 ESC, Provo, 84602, UT, USA
- Troy Munro
- Mechanical Engineering Department, Brigham Young University, 3361 TMCB, Provo, 84602, UT, USA
17
Jin K, Huang X, Zhou J, Li Y, Yan Y, Sun Y, Zhang Q, Wang Y, Ye J. FIVES: A Fundus Image Dataset for Artificial Intelligence based Vessel Segmentation. Sci Data 2022; 9:475. [PMID: 35927290 PMCID: PMC9352679 DOI: 10.1038/s41597-022-01564-3] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Received: 12/16/2021] [Accepted: 07/12/2022] [Indexed: 12/30/2022] Open
Abstract
Retinal vasculature provides an opportunity for direct observation of vessel morphology, which is linked to multiple clinical conditions. However, objective and quantitative interpretation of the retinal vasculature relies on precise vessel segmentation, which is time consuming and labor intensive. Artificial intelligence (AI) has demonstrated great promise in retinal vessel segmentation. The development and evaluation of AI-based models require large numbers of annotated retinal images, yet the public datasets usable for this task are scarce. In this paper, we present the color fundus image vessel segmentation (FIVES) dataset, which consists of 800 high-resolution multi-disease color fundus photographs with pixelwise manual annotation. The annotation process was standardized through crowdsourcing among medical experts, and the quality of each image was also evaluated. To the best of our knowledge, this is the largest retinal vessel segmentation dataset to date, and we believe this work will benefit the further development of retinal vessel segmentation.
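Pixelwise masks such as those in FIVES are typically scored against a model's prediction with Dice (equivalent to F1 on pixels) and IoU. A stdlib sketch over flattened binary masks; the function name and toy data are illustrative, not part of the dataset's tooling:

```python
def dice_iou(pred, gt):
    """Dice (pixel F1) and IoU between two flat binary masks (0/1 lists)."""
    tp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)  # true positives
    fp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 0)  # false positives
    fn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 1)  # false negatives
    denom = tp + fp + fn
    dice = 2 * tp / (2 * tp + fp + fn) if denom else 1.0  # empty masks agree
    iou = tp / denom if denom else 1.0
    return dice, iou
```

Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), so a benchmark only needs to fix which one it reports.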
Affiliation(s)
- Kai Jin
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Zhejiang University, Hangzhou, 310009, China
- Xingru Huang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E1 4NS, United Kingdom
- Jingxing Zhou
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Zhejiang University, Hangzhou, 310009, China
- Yunxiang Li
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
- Yan Yan
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Zhejiang University, Hangzhou, 310009, China
- Yibao Sun
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E1 4NS, United Kingdom
- Qianni Zhang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E1 4NS, United Kingdom
- Yaqi Wang
- College of Media Engineering, Communication University of Zhejiang, Hangzhou, 310018, China
- Juan Ye
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Zhejiang University, Hangzhou, 310009, China
18
Lyu X, Cheng L, Zhang S. The RETA Benchmark for Retinal Vascular Tree Analysis. Sci Data 2022; 9:397. [PMID: 35817778 PMCID: PMC9273761 DOI: 10.1038/s41597-022-01507-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 11/11/2021] [Accepted: 06/28/2022] [Indexed: 12/23/2022] Open
Abstract
Topological and geometrical analysis of retinal blood vessels could be a cost-effective way to detect various common diseases. Automated vessel segmentation and vascular tree analysis models require powerful generalization capability in clinical applications. In this work, we constructed RETA, a novel benchmark with 81 labelled vessel masks, aiming to facilitate retinal vessel analysis. A semi-automated coarse-to-fine workflow was proposed for the vessel annotation task. During database construction, we strived to control inter-annotator and intra-annotator variability by means of multi-stage annotation and label disambiguation on self-developed dedicated software. In addition to binary vessel masks, we obtained other types of annotations including artery/vein masks, vascular skeletons, bifurcations, trees, and abnormalities. Subjective and objective quality validations of the annotated vessel masks demonstrated significantly improved quality over the existing open datasets. Our annotation software is also made publicly available for pixel-level vessel visualization. Researchers can develop vessel segmentation algorithms and evaluate segmentation performance using RETA. Moreover, it might promote the study of cross-modality tubular structure segmentation and analysis.
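Inter-annotator variability of the kind this benchmark tries to control is often summarized with an agreement statistic such as Cohen's kappa; a stdlib sketch for two annotators' flat binary masks (kappa is one standard choice, not necessarily the measure RETA used):

```python
def cohens_kappa(a, b):
    """Cohen's kappa between two flat binary label lists: observed
    agreement corrected for the agreement expected by chance."""
    n = len(a)
    po = sum(1 for x, y in zip(a, b) if x == y) / n      # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n                    # per-rater P(label 1)
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)               # chance agreement
    return (po - pe) / (1 - pe)
```

A kappa near 1 means the two annotators agree far beyond chance; values near 0 mean their masks agree no better than random labelling with the same class frequencies.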
Affiliation(s)
- Xingzheng Lyu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China.
- Li Cheng
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, T6G 1H9, Canada
- Sanyuan Zhang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
19
Zhou Y, Wagner SK, Chia MA, Zhao A, Woodward-Court P, Xu M, Struyven R, Alexander DC, Keane PA. AutoMorph: Automated Retinal Vascular Morphology Quantification Via a Deep Learning Pipeline. Transl Vis Sci Technol 2022; 11:12. [PMID: 35833885 PMCID: PMC9290317 DOI: 10.1167/tvst.11.7.12] [Citation(s) in RCA: 47] [Impact Index Per Article: 15.7] [Received: 01/28/2022] [Accepted: 06/06/2022] [Indexed: 11/24/2022] Open
Abstract
Purpose: To externally validate a deep learning pipeline (AutoMorph) for automated analysis of retinal vascular morphology on fundus photographs. AutoMorph has been made publicly available, facilitating widespread research in ophthalmic and systemic diseases.
Methods: AutoMorph consists of four functional modules: image preprocessing, image quality grading, anatomical segmentation (including binary vessel, artery/vein, and optic disc/cup segmentation), and vascular morphology feature measurement. Image quality grading and anatomical segmentation use the most recent deep learning techniques. We employ a model ensemble strategy to achieve robust results and analyze the prediction confidence to rectify false gradable cases in image quality grading. We externally validate the performance of each module on several independent publicly available datasets.
Results: The EfficientNet-b4 architecture used in the image grading module achieves performance comparable to that of the state of the art for EyePACS-Q, with an F1-score of 0.86. The confidence analysis reduces the number of images incorrectly assessed as gradable by 76%. Binary vessel segmentation achieves an F1-score of 0.73 on AV-WIDE and 0.78 on DR HAGIS. Artery/vein scores are 0.66 on IOSTAR-AV, and disc segmentation achieves 0.94 in IDRID. Vascular morphology features measured from the AutoMorph segmentation map and expert annotation show good to excellent agreement.
Conclusions: AutoMorph modules perform well even when external validation data show domain differences from training data (e.g., with different imaging devices). This fully automated pipeline can thus allow detailed, efficient, and comprehensive analysis of retinal vascular morphology on color fundus photographs.
Translational Relevance: By making AutoMorph publicly available and open source, we hope to facilitate ophthalmic and systemic disease research, particularly in the emerging field of oculomics.
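One of the vascular morphology features such pipelines report is the fractal dimension of the segmented vessel map, commonly estimated by box counting. The sketch below is a generic estimator over foreground pixel coordinates; the scales and function name are choices of this example, not AutoMorph's exact implementation.

```python
import math

def box_counting_dimension(points, scales=(1, 2, 4, 8)):
    """Estimate fractal dimension of a set of (x, y) foreground pixels:
    the slope of log N(s) versus log(1/s), where N(s) is the number of
    occupied s x s boxes at scale s."""
    xs = [math.log(1.0 / s) for s in scales]
    ys = [math.log(len({(x // s, y // s) for x, y in points})) for s in scales]
    # least-squares slope of ys against xs
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A filled region estimates close to 2, a straight line close to 1; healthy retinal vascular trees typically fall in between, which is what makes the value a useful summary feature.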
Affiliation(s)
- Yukun Zhou
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Siegfried K. Wagner
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Mark A. Chia
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- An Zhao
- Centre for Medical Image Computing, University College London, London, UK
- Department of Computer Science, University College London, London, UK
- Peter Woodward-Court
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Institute of Health Informatics, University College London, London, UK
- Moucheng Xu
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Robbert Struyven
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Daniel C. Alexander
- Centre for Medical Image Computing, University College London, London, UK
- Department of Computer Science, University College London, London, UK
- Pearse A. Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
20
Lee AX, Saxena A, Chua J, Schmetterer L, Tan B. Automated Retinal Vascular Topological Information Extraction From OCTA. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1839-1842. [PMID: 36086557 DOI: 10.1109/embc48229.2022.9871160] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/15/2023]
Abstract
The retinal vascular system adapts and reacts rapidly to ocular diseases such as glaucoma, diabetic retinopathy, and age-related macular degeneration. Here we present a combination of methods to further extract vascular information from [Formula: see text] wide-field optical coherence tomography angiography (OCTA). An integrated U-Net for the segmentation and classification of arteries and veins reached a segmentation IoU of 0.7095±0.0224, and classification IoUs of 0.8793±0.1049 and 0.8928±0.0929 for arteries and veins, respectively. A correction algorithm that uses topological information was created to fix vessel misclassification and connectivity, yielding an average increase of 8.29% in IoU. Finally, the vessel morphometry of branch orders was extracted, which allows the direct comparison of arteries/veins, arterioles/venules, and capillaries.
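The per-class IoU figures above are intersection-over-union computed separately for the artery and vein labels. A stdlib sketch over flat label maps; the 0/1/2 encoding (background/artery/vein) is a toy convention of this example:

```python
def class_iou(pred, gt, cls):
    """IoU of one label value between two flat label maps of equal length."""
    inter = sum(1 for p, g in zip(pred, gt) if p == cls and g == cls)
    union = sum(1 for p, g in zip(pred, gt) if p == cls or g == cls)
    return inter / union if union else 1.0  # class absent in both maps
```

Reporting artery and vein IoUs separately, as the paper does, exposes asymmetric failure modes (e.g. veins classified well while arteries are confused) that a single pooled score would hide.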
21
Huang F, Lian J, Ng KS, Shih K, Vardhanabhuti V. Predicting CT-Based Coronary Artery Disease Using Vascular Biomarkers Derived from Fundus Photographs with a Graph Convolutional Neural Network. Diagnostics (Basel) 2022; 12:1390. [PMID: 35741200 PMCID: PMC9221688 DOI: 10.3390/diagnostics12061390] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 03/21/2022] [Revised: 05/26/2022] [Accepted: 06/02/2022] [Indexed: 01/27/2023] Open
Abstract
The study population contains 145 patients who were prospectively recruited for coronary CT angiography (CCTA) and fundoscopy. This study first examined the association between retinal vascular changes and the Coronary Artery Disease Reporting and Data System (CAD-RADS) as assessed on CCTA. We then developed a graph neural network (GNN) model for predicting the CAD-RADS score as a proxy for coronary artery disease. The CCTA scans were stratified by CAD-RADS score by expert readers, and vascular biomarkers were extracted from the patients' fundus images. Association analyses of CAD-RADS scores were performed with patient characteristics, retinal diseases, and quantitative vascular biomarkers. Finally, a GNN model was constructed for the task of predicting the CAD-RADS score and compared to traditional machine learning (ML) models. The experimental results showed that a few retinal vascular biomarkers were significantly associated with adverse CAD-RADS scores, mainly pertaining to arterial width, arterial angle, venous angle, and fractal dimensions. Additionally, the GNN model achieved a sensitivity, specificity, accuracy, and area under the curve of 0.711, 0.697, 0.704, and 0.739, respectively, outperforming the same evaluation metrics obtained from the traditional ML models (p < 0.05). The data suggest that retinal vasculature could be a potential biomarker for atherosclerosis in the coronary artery and that the GNN model could be utilized for accurate prediction.
Affiliation(s)
- Fan Huang
- Department of Diagnostic Radiology, LKS Faculty of Medicine, The University of Hong Kong, Hong Kong, China
- Jie Lian
- Department of Diagnostic Radiology, LKS Faculty of Medicine, The University of Hong Kong, Hong Kong, China
- Kei-Shing Ng
- Department of Diagnostic Radiology, LKS Faculty of Medicine, The University of Hong Kong, Hong Kong, China
- Kendrick Shih
- Department of Ophthalmology, LKS Faculty of Medicine, The University of Hong Kong, Hong Kong, China
- Varut Vardhanabhuti
- Department of Diagnostic Radiology, LKS Faculty of Medicine, The University of Hong Kong, Hong Kong, China
- Correspondence: ; Tel.: +852-2255-3307
22
State-of-the-art retinal vessel segmentation with minimalistic models. Sci Rep 2022; 12:6174. [PMID: 35418576 PMCID: PMC9007957 DOI: 10.1038/s41598-022-09675-y] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Received: 07/19/2021] [Accepted: 03/10/2022] [Indexed: 01/03/2023] Open
Abstract
The segmentation of retinal vasculature from eye fundus images is a fundamental task in retinal image analysis. Over recent years, increasingly complex approaches based on sophisticated Convolutional Neural Network architectures have been pushing performance on well-established benchmark datasets. In this paper, we take a step back and analyze the real need for such complexity. We first compile and review the performance of 20 different techniques on some popular databases, and we demonstrate that a minimalistic version of a standard U-Net with several orders of magnitude fewer parameters, carefully trained and rigorously evaluated, closely approximates the performance of current best techniques. We then show that a cascaded extension (W-Net) reaches outstanding performance on several popular datasets, still using orders of magnitude fewer learnable weights than any previously published work. Furthermore, we provide the most comprehensive cross-dataset performance analysis to date, involving up to 10 different databases. Our analysis demonstrates that retinal vessel segmentation is far from solved when considering test images that differ substantially from the training data, and that this task represents an ideal scenario for the exploration of domain adaptation techniques. In this context, we experiment with a simple self-labeling strategy that enables moderate enhancement of cross-dataset performance, indicating that there is still much room for improvement in this area. Finally, we test our approach on artery/vein and vessel segmentation from OCTA imaging, where we again achieve results well aligned with the state of the art at a fraction of the model complexity found in recent literature. Code to reproduce the results in this paper is released.
23
Hu J, Wang H, Wu G, Cao Z, Mou L, Zhao Y, Zhang J. Multi-scale Interactive Network with Artery/Vein Discriminator for Retinal Vessel Classification. IEEE J Biomed Health Inform 2022; 26:3896-3905. [PMID: 35394918 DOI: 10.1109/jbhi.2022.3165867] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 11/09/2022]
Abstract
Automatic classification of retinal arteries and veins plays an important role in assisting clinicians to diagnose cardiovascular and eye-related diseases. However, due to the high degree of anatomical variation across the population and the presence of inconsistent labels arising from annotators' subjective judgment in the available training data, most existing methods suffer from blood vessel discontinuity and arteriovenous confusion, so the artery/vein (A/V) classification task still faces great challenges. In this work, we propose a multi-scale interactive network with an A/V discriminator for retinal artery and vein recognition, which can reduce arteriovenous confusion and alleviate the disturbance of noisy labels. A multi-scale interaction (MI) module is designed in the encoder to realize cross-space multi-scale feature interaction in fundus images, effectively integrating high-level and low-level context information. In particular, we design an ingenious A/V discriminator (AVD) that utilizes the independent and shared information between arteries and veins, combined with a topology loss, to further strengthen the model's ability to resolve arteriovenous confusion. In addition, we adopt a sample re-weighting (SW) strategy to effectively alleviate the disturbance from data labeling errors. The proposed model is verified on three publicly available fundus image datasets (AV-DRIVE, HRF, LES-AV) and a private dataset, achieving accuracies of 97.47%, 96.91%, 97.79%, and 98.18%, respectively. Extensive experimental results demonstrate that our method achieves competitive performance compared with state-of-the-art methods for A/V classification. To address the problem of training data scarcity, we publicly release 100 fundus images with A/V annotations to promote relevant research in the community.
|
24
|
TW-GAN: Topology and width aware GAN for retinal artery/vein classification. Med Image Anal 2022; 77:102340. [DOI: 10.1016/j.media.2021.102340] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 01/06/2021] [Revised: 12/18/2021] [Accepted: 12/20/2021] [Indexed: 11/20/2022]
|
25
|
Karlsson RA, Hardarson SH. Artery vein classification in fundus images using serially connected U-Nets. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 216:106650. [PMID: 35139461 DOI: 10.1016/j.cmpb.2022.106650] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 04/08/2021] [Revised: 01/12/2022] [Accepted: 01/18/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Retinal vessels provide valuable information when diagnosing or monitoring various diseases affecting the retina and disorders affecting the cardiovascular or central nervous systems. Automated retinal vessel segmentation can assist clinicians and researchers when interpreting retinal images. As there are differences in both the structure and function of retinal arteries and veins, separating these two vessel types is essential. As manual segmentation of retinal images is impractical, an accurate automated method is required. METHODS In this paper, we propose a convolutional neural network based on serially connected U-nets that simultaneously segment the retinal vessels and classify them as arteries or veins. Detailed ablation experiments are performed to understand how the major components contribute to the overall system's performance. The proposed method is trained and tested on the public DRIVE and HRF datasets and a proprietary dataset. RESULTS The proposed convolutional neural network achieves an F1 score of 0.829 for vessel segmentation on the DRIVE dataset and an F1 score of 0.814 on the HRF dataset, consistent with the state-of-the-art methods on the former and outperforming the state-of-the-art on the latter. On the task of classifying the vessels into arteries and veins, the method achieves an F1 score of 0.952 for the DRIVE dataset exceeding the state-of-the-art performance. On the HRF dataset, the method achieves an F1 score of 0.966, which is consistent with the state-of-the-art. CONCLUSIONS The proposed method demonstrates competitive performance on both vessel segmentation and artery vein classification compared with state-of-the-art methods. The method outperforms human experts on the DRIVE dataset when classifying retinal images into arteries, veins, and background simultaneously. 
The method segments the vasculature on the proprietary dataset and classifies the retinal vessels accurately, even on challenging pathological images. The ablation experiments, which utilize repeated runs for each configuration, provide statistical evidence for the appropriateness of the proposed solution. Connecting several simple U-nets significantly improved artery-vein classification performance. The proposed way of serially connecting base networks is not limited to the proposed base network or to retinal vessel segmentation and could be applied to other tasks.
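The serial-connection idea, each base network receiving the input together with the previous stage's prediction so later stages can refine earlier artery/vein assignments, can be caricatured with plain functions. The toy `stage` below is purely hypothetical; real stages would be U-nets operating on image tensors.

```python
def serial_stages(image, stages):
    """Run base networks in series: each stage sees the original input
    plus the previous stage's prediction, letting later stages refine
    earlier predictions (a sketch of the serial-connection idea)."""
    pred = None
    for stage in stages:
        pred = stage(image, pred)
    return pred

# Toy "stage": averages its previous prediction with the input signal.
stage = lambda img, prev: img * 0.5 if prev is None else (prev + img) / 2
out = serial_stages(1.0, [stage, stage, stage])
```

Each pass pulls the running prediction closer to the input, mimicking iterative refinement across chained networks.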
Affiliation(s)
- Robert Arnar Karlsson
- Faculty of Medicine at the University of Iceland, Sæmundargata 2, Reykjavík, 102, Iceland; Faculty of Electrical and Computer Engineering at the University of Iceland, Sæmundargata 2, Reykjavík, 102, Iceland.
- Sveinn Hakon Hardarson
- Faculty of Medicine at the University of Iceland, Sæmundargata 2, Reykjavík, 102, Iceland.
|
26
|
Hatamizadeh A, Hosseini H, Patel N, Choi J, Pole CC, Hoeferlin CM, Schwartz SD, Terzopoulos D. RAVIR: A Dataset and Methodology for the Semantic Segmentation and Quantitative Analysis of Retinal Arteries and Veins in Infrared Reflectance Imaging. IEEE J Biomed Health Inform 2022; 26:3272-3283. [PMID: 35349464 DOI: 10.1109/jbhi.2022.3163352] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Indexed: 11/10/2022]
Abstract
The retinal vasculature provides important clues in the diagnosis and monitoring of systemic diseases including hypertension and diabetes. The microvascular system is of primary involvement in such conditions, and the retina is the only anatomical site where the microvasculature can be directly observed. The objective assessment of retinal vessels has long been considered a surrogate biomarker for systemic vascular diseases, and with recent advancements in retinal imaging and computer vision technologies, this topic has become the subject of renewed attention. In this paper, we present a novel dataset, dubbed RAVIR, for the semantic segmentation of Retinal Arteries and Veins in Infrared Reflectance (IR) imaging. It enables the creation of deep learning-based models that distinguish the extracted vessel types without extensive post-processing. We propose a novel deep learning-based methodology, denoted as SegRAVIR, for the semantic segmentation of retinal arteries and veins and the quantitative measurement of the widths of segmented vessels. Our extensive experiments validate the effectiveness of SegRAVIR and demonstrate its superior performance in comparison to state-of-the-art models. Additionally, we propose a knowledge distillation framework for the domain adaptation of RAVIR pretrained networks on color images. We demonstrate that our pretraining procedure yields new state-of-the-art benchmarks on the DRIVE, STARE, and CHASE_DB1 datasets. Dataset link: https://ravirdataset.github.io/data.
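A very crude version of quantitative vessel-width measurement from a binary segmentation is to count foreground pixels along each cross-section; real systems like the one described measure perpendicular to the vessel centerline. A minimal sketch, assuming a roughly vertical vessel:

```python
def widths_per_row(mask):
    """Crude per-row width of a (roughly vertical) vessel in a binary
    mask: count foreground pixels in each row. Real pipelines measure
    perpendicular to the vessel centerline; this is only a sketch."""
    return [sum(row) for row in mask]

mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
]
row_widths = widths_per_row(mask)
```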
|
27
|
Binh NT, Hien NM, Tin DT. Improving U-Net architecture and graph cuts optimization to classify arterioles and venules in retina fundus images. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-212259] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/15/2022]
Abstract
The central retinal artery and its branches supply blood to the inner retina. Vascular manifestations in the retina indirectly reflect vascular changes and damage in organs such as the heart, kidneys, and brain, because these organs share a similar vascular structure. Increased venular caliber is linked to diabetic retinopathy and an elevated risk of stroke, and the severity of these diseases depends on the changes in the arterioles and venules. The ratio between the calibers of arterioles and venules (AVR) varies accordingly and is considered a useful diagnostic indicator of several associated health problems. However, the task is not easy because of the limited information in the features used to classify the retinal vessels as arterioles and venules. This paper proposes a method to classify retinal vessels into arterioles and venules based on an improved U-Net architecture and graph-cut optimization. The accuracy of the proposed method is about 97.6%, and its results surpass those of other methods on the RITE and AVRDB datasets.
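AVR is typically computed as the ratio of the central retinal artery equivalent (CRAE) to the central retinal vein equivalent (CRVE). The sketch below uses one common formulation, the revised Knudtson summary formulas, with branching coefficients of roughly 0.88 for arterioles and 0.95 for venules; the iterative widest-with-narrowest pairing and the handling of an unpaired middle value vary between implementations, so treat this as an assumption-laden illustration.

```python
import math

def summary_caliber(widths, k):
    """Iteratively pair the widest with the narrowest vessel, combining
    each pair as k * sqrt(w1^2 + w2^2) until one summary value remains
    (revised Knudtson procedure; an unpaired middle value is carried
    over to the next round in this variant)."""
    w = sorted(widths)
    while len(w) > 1:
        nxt = []
        while len(w) > 1:
            a, b = w.pop(0), w.pop()
            nxt.append(k * math.hypot(a, b))
        nxt.extend(w)  # carry an unpaired middle value, if any
        w = sorted(nxt)
    return w[0]

def avr(arteriole_widths, venule_widths):
    crae = summary_caliber(arteriole_widths, 0.88)  # arteriolar coefficient
    crve = summary_caliber(venule_widths, 0.95)     # venular coefficient
    return crae / crve

ratio = avr([12, 14, 16, 18, 20, 22], [14, 17, 19, 21, 24, 26])
```

Because the combination rule is homogeneous of degree one, scaling all widths by a constant scales the summary caliber by the same constant, so AVR is unit-free.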
Affiliation(s)
- Nguyen Thanh Binh
- Department of Information Systems, Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Ho Chi Minh City, Vietnam
- Vietnam National University Ho Chi Minh City, Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam
- Nguyen Mong Hien
- Department of Information Systems, Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Ho Chi Minh City, Vietnam
- Tra Vinh University, Vietnam
- Dang Thanh Tin
- Vietnam National University Ho Chi Minh City, Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam
- Information Systems Engineering Laboratory, Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology (HCMUT), Ho Chi Minh City, Vietnam
|
28
|
Wan C, Zhou X, You Q, Sun J, Shen J, Zhu S, Jiang Q, Yang W. Retinal Image Enhancement Using Cycle-Constraint Adversarial Network. Front Med (Lausanne) 2022; 8:793726. [PMID: 35096883 PMCID: PMC8789669 DOI: 10.3389/fmed.2021.793726] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 10/12/2021] [Accepted: 12/14/2021] [Indexed: 11/25/2022] Open
Abstract
Retinal images are the most intuitive medical images for the diagnosis of fundus diseases. Low-quality retinal images cause difficulties both for computer-aided diagnosis systems and for the clinical diagnosis of ophthalmologists, so high-quality retinal images are an important basis for precision medicine in ophthalmology. In this study, we propose a deep learning-based retinal image enhancement method for multiple types of low-quality retinal images. A generative adversarial network is employed to build a symmetrical network, and a convolutional block attention module is introduced to improve the feature extraction capability. The retinal images in our dataset are sorted into two sets according to their quality: low and high. Generators and discriminators alternately learn the features of low- and high-quality retinal images without the need for paired images. We analyze the proposed method both qualitatively and quantitatively on public datasets and a private dataset. The results demonstrate that the proposed method is superior to other advanced algorithms, especially in enhancing color-distorted retinal images, and it also performs well in retinal vessel segmentation. The proposed network effectively enhances low-quality retinal images, aiding ophthalmologists and enabling computer-aided diagnosis in pathological analysis.
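The cycle constraint at the heart of such symmetrical, CycleGAN-style networks can be illustrated numerically: mapping an image to the other quality domain and back should reproduce the input. The scalar "generators" below are stand-ins for the actual networks, not the paper's implementation.

```python
def cycle_consistency_loss(x, y, G, F):
    """L1 cycle-consistency: enhancing a low-quality input with G and
    degrading it back with F (and vice versa) should return the input.
    Scalar toy version of the usual image-tensor loss."""
    return abs(F(G(x)) - x) + abs(G(F(y)) - y)

# Perfectly inverse toy "generators" give zero cycle loss.
G = lambda v: v * 2.0  # low -> high quality (toy)
F = lambda v: v / 2.0  # high -> low quality (toy)
loss = cycle_consistency_loss(3.0, 8.0, G, F)
```

This term is what lets the generators and discriminators learn from unpaired low- and high-quality image sets.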
Affiliation(s)
- Cheng Wan
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Xueting Zhou
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Qijing You
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jing Sun
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jianxin Shen
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Shaojun Zhu
- School of Information Engineering, Huzhou University, Huzhou, China
- Qin Jiang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Weihua Yang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
|
29
|
Hemelings R, Elen B, Barbosa-Breda J, Blaschko MB, De Boever P, Stalmans I. Deep learning on fundus images detects glaucoma beyond the optic disc. Sci Rep 2021; 11:20313. [PMID: 34645908 PMCID: PMC8514536 DOI: 10.1038/s41598-021-99605-1] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Received: 03/18/2021] [Accepted: 09/21/2021] [Indexed: 02/07/2023] Open
Abstract
Although unprecedented sensitivity and specificity values are reported, recent glaucoma detection deep learning models lack decision transparency. Here, we propose a methodology that advances explainable deep learning in the field of glaucoma detection and estimation of the vertical cup-disc ratio (VCDR), an important risk factor. We trained and evaluated deep learning models using fundus images that underwent a specific cropping policy. We defined the crop radius as a percentage of image size, centered on the optic nerve head (ONH), with an equidistantly spaced range of 10-60% (ONH crop policy). The inverse of the cropping mask was also applied (periphery crop policy). Models trained on the original images achieved an area under the curve (AUC) of 0.94 [95% CI 0.92-0.96] for glaucoma detection and a coefficient of determination (R2) of 77% [95% CI 0.77-0.79] for VCDR estimation. Models trained on images lacking the ONH were still able to obtain significant performance (0.88 [95% CI 0.85-0.90] AUC for glaucoma detection and a 37% [95% CI 0.35-0.40] R2 score for VCDR estimation in the most extreme setup of 60% ONH crop). Our findings provide the first irrefutable evidence that deep learning can detect glaucoma from fundus image regions outside the ONH.
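The ONH and periphery crop policies reduce to a circular mask whose radius is a fraction of image size. A minimal pure-Python sketch (the exact radius convention, here a fraction of image width, is an assumption):

```python
def onh_crop_mask(h, w, center, radius_pct, keep_periphery=False):
    """Boolean keep-mask for the ONH crop policy: pixels within
    radius_pct * w of the optic nerve head center are kept;
    keep_periphery=True inverts the mask (the periphery crop policy)."""
    cy, cx = center
    r = radius_pct * w
    return [[((y - cy) ** 2 + (x - cx) ** 2 <= r * r) ^ keep_periphery
             for x in range(w)]
            for y in range(h)]

m = onh_crop_mask(5, 5, (2, 2), 0.3)
```

Training one model per radius, with and without inversion, is what lets the study compare how much glaucoma signal lies inside versus outside the ONH region.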
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium.
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium.
- Bart Elen
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
- João Barbosa-Breda
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Cardiovascular R&D Center, Faculty of Medicine of the University of Porto, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Department of Ophthalmology, Centro Hospitalar E Universitário São João, Alameda Prof. Hernâni Monteiro, 4200-319, Porto, Portugal
- Patrick De Boever
- Hasselt University, Agoralaan building D, 3590, Diepenbeek, Belgium
- Department of Biology, University of Antwerp, 2610, Wilrijk, Belgium
- Flemish Institute for Technological Research (VITO), Boeretang 200, 2400, Mol, Belgium
- Ingeborg Stalmans
- Research Group Ophthalmology, Department of Neurosciences, KU Leuven, Herestraat 49, 3000, Leuven, Belgium
- Ophthalmology Department, UZ Leuven, Herestraat 49, 3000, Leuven, Belgium
|
30
|
Simultaneous segmentation and classification of the retinal arteries and veins from color fundus images. Artif Intell Med 2021; 118:102116. [PMID: 34412839 DOI: 10.1016/j.artmed.2021.102116] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Received: 01/11/2021] [Revised: 05/20/2021] [Accepted: 05/21/2021] [Indexed: 01/25/2023]
Abstract
BACKGROUND AND OBJECTIVES The study of the retinal vasculature represents a fundamental stage in the screening and diagnosis of many high-incidence diseases, both systemic and ophthalmic. A complete retinal vascular analysis requires segmenting the vascular tree and classifying the blood vessels into arteries and veins. Early automatic methods approach these complementary segmentation and classification tasks in two sequential stages. Currently, however, the two tasks are approached as a joint semantic segmentation, because the classification results depend heavily on the effectiveness of the vessel segmentation. In that regard, we propose a novel approach for the simultaneous segmentation and classification of the retinal arteries and veins from eye fundus images. METHODS Unlike previous approaches, and thanks to a novel loss, the proposed method decomposes the joint task into three segmentation problems targeting arteries, veins, and the whole vascular tree. This configuration allows vessel crossings to be handled intuitively and directly provides accurate segmentation masks of the different target vascular trees. RESULTS An ablation study on the public Retinal Images vessel Tree Extraction (RITE) dataset demonstrates that the proposed method provides satisfactory performance, particularly in the segmentation of the different structures. Furthermore, the comparison with the state of the art shows that our method achieves highly competitive results in artery/vein classification while significantly improving the vascular segmentation. CONCLUSIONS The proposed multi-segmentation method detects more vessels and better segments the different structures while achieving competitive classification performance, outperforming various reference works in these terms.
Moreover, in contrast with previous approaches, the proposed method directly detects vessel crossings and preserves the continuity of both arteries and veins at these complex locations.
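The three-target decomposition can be illustrated by how ground-truth masks would be derived from a per-pixel A/V label map: a crossing pixel belongs to both the artery and the vein mask, which is what preserves continuity at crossings. The 'a'/'v'/'c'/'.' label encoding below is hypothetical, chosen only for the sketch.

```python
def decompose_targets(label_map):
    """Build the three binary targets of a multi-segmentation approach:
    artery mask, vein mask, and whole vascular tree. A crossing pixel
    ('c') belongs to both artery and vein, so neither tree is broken
    where vessels cross."""
    artery, vein, tree = [], [], []
    for row in label_map:
        artery.append([int(ch in 'ac') for ch in row])
        vein.append([int(ch in 'vc') for ch in row])
        tree.append([int(ch != '.') for ch in row])
    return artery, vein, tree

# 'a' artery, 'v' vein, 'c' crossing, '.' background
a, v, t = decompose_targets(["a.v", ".c.", "a.v"])
```

A joint loss over the three masks is then what couples the segmentation problems during training.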
|
31
|
Agrawal R, Kulkarni S, Walambe R, Kotecha K. Assistive Framework for Automatic Detection of All the Zones in Retinopathy of Prematurity Using Deep Learning. J Digit Imaging 2021; 34:932-947. [PMID: 34240273 DOI: 10.1007/s10278-021-00477-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Received: 06/09/2020] [Revised: 05/06/2021] [Accepted: 05/21/2021] [Indexed: 11/30/2022] Open
Abstract
Retinopathy of prematurity (ROP) is a potentially blinding disorder seen in low-birth-weight preterm infants. In India, the burden of ROP is high, with nearly 200,000 premature infants at risk. Early detection through screening and treatment can prevent this blindness. The automatic screening systems developed so far can detect "severe ROP" or "plus disease," but this information does not help schedule follow-up. Identifying vascularized retinal zones and detecting the ROP stage is essential for follow-up or discharge from screening, yet to the best of the authors' knowledge there is no automatic system to assist these crucial decisions. The low contrast of images, incompletely developed vessels, the macular structure, and the lack of public datasets are a few challenges in creating such a system. In this paper, a novel method using an ensemble of a "U-Network" and the "Circle Hough Transform" is developed to detect zones I, II, and III from retinal images in which the macula is not developed. The model is generic and trained on mixed images of different sizes. It detects zones in images of variable sizes captured by two different imaging systems with an accuracy of 98%. All images of the test set (including the low-quality images) are considered. Training took only 14 min, and a single image was tested in 30 ms. The present study can help medical experts interpret retinal vascular status correctly and reduce subjective variation in diagnosis.
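Once the optic disc and fovea are localized, zone membership is essentially a distance test. A simplified, distance-only sketch follows, with zone I taken as a circle of radius twice the disc-fovea distance and zone II extending to the nasal ora serrata distance; the real zone geometry (and the paper's ensemble pipeline) is more involved, so the radii here are stated assumptions.

```python
import math

def rop_zone(point, disc_center, disc_fovea_dist, nasal_ora_dist):
    """Classify a retinal location into ROP zone I, II, or III by its
    distance from the optic disc center (simplified: zone I radius is
    2 x disc-fovea distance; zone II reaches the nasal ora serrata
    distance; zone III is everything beyond)."""
    d = math.dist(point, disc_center)
    if d <= 2 * disc_fovea_dist:
        return 1
    if d <= nasal_ora_dist:
        return 2
    return 3

zone = rop_zone((3.0, 0.0), (0.0, 0.0), 1.0, 5.0)
```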
Affiliation(s)
- Ranjana Agrawal
- School of Computer Engineering and Technology, Dr. Vishwanath Karad MIT World Peace University, Pune, India; Symbiosis Institute of Technology, Symbiosis International (Deemed) University, Pune, India
- Rahee Walambe
- Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis International (Deemed) University, Pune, India.
- Ketan Kotecha
- Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis International (Deemed) University, Pune, India.
|
32
|
Mill L, Wolff D, Gerrits N, Philipp P, Kling L, Vollnhals F, Ignatenko A, Jaremenko C, Huang Y, De Castro O, Audinot JN, Nelissen I, Wirtz T, Maier A, Christiansen S. Synthetic Image Rendering Solves Annotation Problem in Deep Learning Nanoparticle Segmentation. SMALL METHODS 2021; 5:e2100223. [PMID: 34927995 DOI: 10.1002/smtd.202100223] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Received: 02/25/2021] [Revised: 04/17/2021] [Indexed: 05/14/2023]
Abstract
Nanoparticles occur in various environments as a consequence of man-made processes, which raises concerns about their impact on the environment and human health. To allow for proper risk assessment, a precise and statistically relevant analysis of particle characteristics (such as size, shape, and composition) is required, which would greatly benefit from automated image analysis procedures. While deep learning shows impressive results in object detection tasks, its applicability is limited by the amount of representative, experimentally collected, and manually annotated training data. Here, an elegant, flexible, and versatile method to bypass this costly and tedious data acquisition process is presented. It shows that rendering software can be used to generate realistic, synthetic training data for a state-of-the-art deep neural network. Using this approach, a segmentation accuracy comparable to that of manual annotations can be achieved for toxicologically relevant metal-oxide nanoparticle ensembles, which were chosen as examples. The presented study paves the way toward the use of deep learning for automated, high-throughput particle detection in a variety of imaging techniques, such as microscopies and spectroscopies, for a wide range of applications, including the detection of micro- and nanoplastic particles in water and tissue samples.
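The core trick, generating the image and its pixel-perfect annotation together so no manual labeling is ever needed, can be sketched with toy disk-shaped "particles"; this stands in for the paper's far more realistic rendering, and all parameters below are illustrative.

```python
import random

def synth_sample(h, w, n_particles, r, seed=0):
    """Render a toy synthetic training pair: an 'image' containing
    disk-shaped particles and its pixel-perfect segmentation mask,
    produced together so no manual annotation is required."""
    rng = random.Random(seed)
    img = [[0.0] * w for _ in range(h)]
    mask = [[0] * w for _ in range(h)]
    for _ in range(n_particles):
        cy, cx = rng.randrange(h), rng.randrange(w)
        for y in range(max(0, cy - r), min(h, cy + r + 1)):
            for x in range(max(0, cx - r), min(w, cx + r + 1)):
                if (y - cy) ** 2 + (x - cx) ** 2 <= r * r:
                    img[y][x] = 1.0
                    mask[y][x] = 1
    return img, mask

img, mask = synth_sample(16, 16, 3, 2)
```

A segmentation network trained on many such pairs never sees a hand-drawn label, which is the annotation-free property the paper exploits.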
Affiliation(s)
- Leonid Mill
- Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- Institute of Optics, Information and Photonics, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- David Wolff
- Institut für Nanotechnologie und korrelative Mikroskopie, 91301, Forchheim, Germany
- Nele Gerrits
- Health Unit, Flemish Institute for Technological Research, Mol, 2400, Belgium
- Patrick Philipp
- Advanced Instrumentation for Ion Nano-Analytics, Materials Research and Technology Department, Luxembourg Institute of Science and Technology, Belvaux, L-4422, Luxembourg
- Lasse Kling
- Institut für Nanotechnologie und korrelative Mikroskopie, 91301, Forchheim, Germany
- Florian Vollnhals
- Institute of Optics, Information and Photonics, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- Institut für Nanotechnologie und korrelative Mikroskopie, 91301, Forchheim, Germany
- Andrew Ignatenko
- Advanced Instrumentation for Ion Nano-Analytics, Materials Research and Technology Department, Luxembourg Institute of Science and Technology, Belvaux, L-4422, Luxembourg
- Christian Jaremenko
- Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- Institut für Nanotechnologie und korrelative Mikroskopie, 91301, Forchheim, Germany
- Yixing Huang
- Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- Institut für Nanotechnologie und korrelative Mikroskopie, 91301, Forchheim, Germany
- Olivier De Castro
- Advanced Instrumentation for Ion Nano-Analytics, Materials Research and Technology Department, Luxembourg Institute of Science and Technology, Belvaux, L-4422, Luxembourg
- Jean-Nicolas Audinot
- Advanced Instrumentation for Ion Nano-Analytics, Materials Research and Technology Department, Luxembourg Institute of Science and Technology, Belvaux, L-4422, Luxembourg
- Inge Nelissen
- Health Unit, Flemish Institute for Technological Research, Mol, 2400, Belgium
- Tom Wirtz
- Advanced Instrumentation for Ion Nano-Analytics, Materials Research and Technology Department, Luxembourg Institute of Science and Technology, Belvaux, L-4422, Luxembourg
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- Silke Christiansen
- Institute of Optics, Information and Photonics, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- Physics Department, Free University, 14195, Berlin, Germany
- Correlative Microscopy and Material Data Department, Fraunhofer Institute for Ceramic Technologies and Systems, 01277, Dresden, Germany
|
33
|
Hu J, Wang H, Cao Z, Wu G, Jonas JB, Wang YX, Zhang J. Automatic Artery/Vein Classification Using a Vessel-Constraint Network for Multicenter Fundus Images. Front Cell Dev Biol 2021; 9:659941. [PMID: 34178986 PMCID: PMC8226261 DOI: 10.3389/fcell.2021.659941] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Received: 01/28/2021] [Accepted: 04/19/2021] [Indexed: 11/24/2022] Open
Abstract
Retinal blood vessel morphological abnormalities are generally associated with cardiovascular, cerebrovascular, and systemic diseases, so automatic artery/vein (A/V) classification is particularly important for medical image analysis and clinical decision making. However, current methods still have some limitations in A/V classification, especially vessel-edge and vessel-end errors caused by single-scale features and the blurred boundary between arteries and veins. To alleviate these problems, we propose a vessel-constraint network (VC-Net) that utilizes information on vessel distribution and edges to enhance A/V classification; it is a high-precision A/V classification model based on data fusion. In particular, the VC-Net introduces a vessel-constraint (VC) module that combines local and global vessel information to generate a weight map that constrains the A/V features, suppressing background-prone features and enhancing the edge and end features of blood vessels. In addition, the VC-Net employs a multiscale feature (MSF) module to extract blood vessel information at different scales to improve the feature extraction capability and robustness of the model, and it produces vessel segmentation results simultaneously. The proposed method is tested on publicly available fundus image datasets with different scales, namely DRIVE, LES, and HRF, and validated on two newly created multicenter datasets: Tongren and Kailuan. We achieve a balanced accuracy of 0.9554 and F1 scores of 0.7616 and 0.7971 for the arteries and veins, respectively, on the DRIVE dataset. The experimental results prove that the proposed model achieves competitive performance in A/V classification and vessel segmentation tasks compared with state-of-the-art methods. Finally, we test the Kailuan dataset with models trained on the other fused datasets, and the results also show good robustness. To promote research in this area, the Tongren dataset and source code will be made publicly available at https://github.com/huawang123/VC-Net.
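The vessel-constraint idea of gating A/V features with a weight map built from local and global vessel information can be caricatured elementwise. This is a toy rendering, not the paper's module; the blend factor `alpha` is a hypothetical parameter.

```python
def vessel_constrained(features, vessel_prob, alpha=0.5):
    """Gate A/V feature responses by a weight map that mixes the local
    vessel probability with its global mean, a toy version of combining
    local and global vessel information to suppress background-prone
    responses."""
    n = sum(len(row) for row in vessel_prob)
    g = sum(sum(row) for row in vessel_prob) / n  # global vessel density
    return [[f * (alpha * p + (1 - alpha) * g)
             for f, p in zip(frow, prow)]
            for frow, prow in zip(features, vessel_prob)]

# A definite-vessel pixel keeps most of its response; background is damped.
out = vessel_constrained([[2.0, 2.0]], [[1.0, 0.0]])
```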
Affiliation(s)
- Jingfei Hu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Zhaohui Cao
- Hefei Innovation Research Institute, Beihang University, Hefei, China
- Guang Wu
- Hefei Innovation Research Institute, Beihang University, Hefei, China
- Jost B Jonas
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China; Department of Ophthalmology, Medical Faculty Mannheim of the Ruprecht-Karls-University Heidelberg, Mannheim, Germany
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China
|
34
|
Mookiah MRK, Hogg S, MacGillivray T, Trucco E. On the quantitative effects of compression of retinal fundus images on morphometric vascular measurements in VAMPIRE. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 202:105969. [PMID: 33631639 DOI: 10.1016/j.cmpb.2021.105969] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 08/16/2020] [Accepted: 01/30/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVES This paper reports a quantitative analysis of the effects of Joint Photographic Experts Group (JPEG) image compression of retinal fundus camera images on automatic vessel segmentation and on the morphometric vascular measurements derived from it, including vessel width, tortuosity, and fractal dimension. METHODS Measurements are computed with the vascular assessment and measurement platform for images of the retina (VAMPIRE), a specialized software application adopted in many international studies on retinal biomarkers. For reproducibility, we use three public archives of fundus images (Digital Retinal Images for Vessel Extraction (DRIVE), Automated Retinal Image Analyzer (ARIA), High-Resolution Fundus (HRF)). We generate compressed versions of the original images at a range of representative levels. RESULTS We compare the resulting vessel segmentations with ground truth maps, and the morphological measurements of the vascular network with those obtained from the original (uncompressed) images. We assess segmentation quality with sensitivity, specificity, accuracy, area under the curve, and the Dice coefficient, and we assess the agreement between VAMPIRE measurements from compressed and uncompressed images with correlation, intra-class correlation, and Bland-Altman analysis. CONCLUSIONS Results suggest that VAMPIRE width-related measurements (central retinal artery equivalent (CRAE), central retinal vein equivalent (CRVE), arteriolar-venular width ratio (AVR)), the fractal dimension (FD), and arteriolar tortuosity show excellent agreement with those from the original images, remaining substantially stable even under strong loss of quality (20% of the original), suggesting the suitability of VAMPIRE in association studies with compressed images.
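Bland-Altman agreement, as used here to compare measurements from compressed and uncompressed images, reduces to the bias (mean difference) and the 95% limits of agreement, bias plus or minus 1.96 standard deviations of the differences. A minimal sketch:

```python
import math

def bland_altman(a, b):
    """Bland-Altman agreement statistics between two measurement series:
    bias (mean difference) and the 95% limits of agreement,
    bias +/- 1.96 * SD of the paired differences."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# e.g. vessel widths measured on uncompressed vs. compressed images
bias, lo, hi = bland_altman([10.0, 11.0, 12.0], [10.0, 10.0, 13.0])
```

Narrow limits around a near-zero bias are what "excellent agreement" means in this context.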
|
35
|
Zhang Z, Zhang P, Wang P, Sheriff J, Bluestein D, Deng Y. Rapid analysis of streaming platelet images by semi-unsupervised learning. Comput Med Imaging Graph 2021; 89:101895. [PMID: 33798915 DOI: 10.1016/j.compmedimag.2021.101895] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 08/17/2020] [Revised: 01/14/2021] [Accepted: 03/05/2021] [Indexed: 10/21/2022]
Abstract
We developed a fast and accurate deep learning approach employing a semi-unsupervised learning system (SULS) for capturing real-time noisy, sparse, and ambiguous images of platelet activation. The SULS outperforms several leading supervised learning methods when applied to segment various platelet morphologies, detecting their complex boundaries at submicron resolutions, and it reduces the time needed to segment streaming images of 45 million platelets, which would have taken an estimated 40 years to annotate manually, to only a few hours. For the first time, the fast dynamics of pseudopod formation and platelet morphological changes, including membrane tethers and transient tethering to vessels, are accurately captured.
Affiliation(s)
- Ziji Zhang, Department of Applied Mathematics and Statistics, Stony Brook University, NY 11794, United States
- Peng Zhang, Department of Applied Mathematics and Statistics and Department of Biomedical Engineering, Stony Brook University, NY 11794, United States
- Peineng Wang, Department of Biomedical Engineering, Stony Brook University, NY 11794, United States
- Jawaad Sheriff, Department of Biomedical Engineering, Stony Brook University, NY 11794, United States
- Danny Bluestein, Department of Biomedical Engineering, Stony Brook University, NY 11794, United States
- Yuefan Deng, Department of Applied Mathematics and Statistics, Stony Brook University, NY 11794, United States
36
Hemelings R, Elen B, Blaschko MB, Jacob J, Stalmans I, De Boever P. Pathological myopia classification with simultaneous lesion segmentation using deep learning. Comput Methods Programs Biomed 2021; 199:105920. [PMID: 33412285] [DOI: 10.1016/j.cmpb.2020.105920]
Abstract
BACKGROUND AND OBJECTIVES Pathological myopia (PM) is the seventh leading cause of blindness, with a reported global prevalence of up to 3%. Early and automated PM detection from fundus images could help prevent blindness in a world population characterized by rising myopia prevalence. We aim to assess the use of convolutional neural networks (CNNs) for the detection of PM and the semantic segmentation of myopia-induced lesions from fundus images on a recently introduced reference data set. METHODS This investigation reports the results of CNNs developed for the recently introduced Pathological Myopia (PALM) dataset, which consists of 1200 images. Our CNN bundles lesion segmentation and PM classification, as the two tasks are heavily intertwined. Domain knowledge is incorporated through a new Optic Nerve Head (ONH)-based prediction enhancement for the segmentation of atrophy and for fovea localization. Finally, we are the first to approach fovea localization using segmentation instead of detection or regression models. Evaluation metrics include area under the receiver operating characteristic curve (AUC) for PM detection, Euclidean distance for fovea localization, and Dice and F1 metrics for the semantic segmentation tasks (optic disc, retinal atrophy and retinal detachment). RESULTS Models trained with the 400 available training images achieved an AUC of 0.9867 for PM detection and a Euclidean distance of 58.27 pixels on the fovea localization task, evaluated on a test set of 400 images. Dice and F1 metrics for semantic segmentation of lesions scored 0.9303 and 0.9869 on optic disc, 0.8001 and 0.9135 on retinal atrophy, and 0.8073 and 0.7059 on retinal detachment, respectively. CONCLUSIONS We report a successful approach for the simultaneous classification of pathological myopia and segmentation of associated lesions.
Our work was recognized with an award in the "Pathological Myopia detection from retinal images" challenge held during the IEEE International Symposium on Biomedical Imaging (April 2019). Considering that (pathological) myopia cases are often identified as false positives and false negatives by glaucoma deep learning models, we envisage that the current work could aid future research on discriminating between glaucomatous and highly myopic eyes, complemented by the localization and segmentation of landmarks such as the fovea, optic disc and atrophy.
Affiliation(s)
- Ruben Hemelings, Research Group Ophthalmology, KU Leuven, Herestraat 49, 3000 Leuven, Belgium; VITO NV, Boeretang 200, 2400 Mol, Belgium
- Bart Elen, VITO NV, Boeretang 200, 2400 Mol, Belgium
- Julie Jacob, Ophthalmology Department, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium
- Ingeborg Stalmans, Research Group Ophthalmology, KU Leuven; Ophthalmology Department, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium
- Patrick De Boever, Hasselt University, Agoralaan building D, 3590 Diepenbeek, Belgium; VITO NV, Boeretang 200, 2400 Mol, Belgium
37
Mookiah MRK, Hogg S, MacGillivray TJ, Prathiba V, Pradeepa R, Mohan V, Anjana RM, Doney AS, Palmer CNA, Trucco E. A review of machine learning methods for retinal blood vessel segmentation and artery/vein classification. Med Image Anal 2020; 68:101905. [PMID: 33385700] [DOI: 10.1016/j.media.2020.101905]
Abstract
The eye affords a unique opportunity to inspect a rich part of the human microvasculature non-invasively via retinal imaging. Retinal blood vessel segmentation and classification are prime steps for the diagnosis and risk assessment of microvascular and systemic diseases. A high volume of techniques based on deep learning has been published in recent years. In this context, we review 158 papers published between 2012 and 2020, focussing on methods based on machine and deep learning (DL) for automatic vessel segmentation and classification in fundus camera images. We divide the methods into classes by task (segmentation or artery-vein classification), technique (supervised or unsupervised, deep and non-deep learning, hand-crafted methods) and more specific algorithms (e.g. multiscale, morphology). We discuss advantages and limitations, and include tables summarising results at a glance. Finally, we attempt to assess the quantitative merit of DL methods in terms of accuracy improvement compared to other methods. The results allow us to offer our views on the outlook for vessel segmentation and classification in fundus camera images.
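To make the review's "hand-crafted" method class concrete, here is a deliberately minimal non-deep segmentation sketch: vessels appear darker than the background in the green channel of fundus images, so pixels sufficiently below the mean intensity are flagged as vessel. Real hand-crafted pipelines use matched filters, morphology or multiscale analysis; this toy global threshold and the 4x4 patch are illustrative assumptions only.

```python
def threshold_segment(image, margin=15):
    """Toy hand-crafted segmentation: flag pixels darker than the global mean
    by more than `margin` grey levels (vessels are darker than the background
    in the green channel of a fundus image)."""
    flat = [v for row in image for v in row]
    mean = sum(flat) / len(flat)
    return [[1 if v < mean - margin else 0 for v in row] for row in image]

# 4x4 toy green-channel patch with a dark vertical "vessel" in column 1
patch = [
    [120, 40, 118, 121],
    [119, 38, 122, 120],
    [121, 42, 117, 119],
    [118, 39, 120, 122],
]
print(threshold_segment(patch))  # column 1 is marked as vessel in every row
```

Supervised and DL methods reviewed in the paper replace this fixed rule with functions learned from annotated archives such as DRIVE.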
Affiliation(s)
- Stephen Hogg, VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
- Tom J MacGillivray, VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh EH16 4SB, UK
- Vijayaraghavan Prathiba, Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Rajendra Pradeepa, Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Viswanathan Mohan, Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Ranjit Mohan Anjana, Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Alexander S Doney, Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee DD1 9SY, UK
- Colin N A Palmer, Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee DD1 9SY, UK
- Emanuele Trucco, VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
38
Escorcia-Gutierrez J, Torrents-Barrena J, Gamarra M, Romero-Aroca P, Valls A, Puig D. Convexity shape constraints for retinal blood vessel segmentation and foveal avascular zone detection. Comput Biol Med 2020; 127:104049. [PMID: 33099218] [DOI: 10.1016/j.compbiomed.2020.104049]
Abstract
Diabetic retinopathy (DR) has become a major worldwide health problem due to the increase in blindness among diabetics at early ages. The detection of DR pathologies such as microaneurysms, hemorrhages and exudates through advanced computational techniques is of utmost importance in patient health care. New computer vision techniques are needed to improve upon traditional screening of color fundus images. The segmentation of the entire anatomical structure of the retina is a crucial phase in detecting these pathologies. This work proposes a novel framework for fast and fully automatic blood vessel segmentation and fovea detection. The preprocessing stage combines contrast limited adaptive histogram equalization and brightness preserving dynamic fuzzy histogram equalization to enhance image contrast and eliminate noise artifacts. Afterwards, the color spaces and their intrinsic components are examined to identify the most suitable color model for revealing the foreground pixels against the entire background. Several samples are then collected and used by the convexity shape prior segmentation algorithm. The proposed methodology achieved average vasculature segmentation accuracies exceeding 96%, 95%, 98% and 94% for the publicly available DRIVE, STARE, HRF and Messidor datasets, respectively. An additional validation step reached an average accuracy of 94.30% on an in-house dataset provided by the Hospital Sant Joan de Reus (Spain). Moreover, a detection accuracy of over 98% was achieved for the foveal avascular zone. An extensive comparison with the state of the art was also conducted. The proposed approach can thus be integrated into daily clinical practice to assist medical experts in the diagnosis of DR.
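The contrast-enhancement step can be illustrated with plain global histogram equalization, a simplified stand-in for the CLAHE variant the paper actually uses (CLAHE additionally tiles the image and clips the histogram): grey levels are remapped through the cumulative distribution function so the output spans the full range. The 2x2 patch is a toy input.

```python
def equalize(channel, levels=256):
    """Global histogram equalization (simplified stand-in for CLAHE):
    remap grey levels through the cumulative distribution function (CDF)."""
    flat = [v for row in channel for v in row]
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) if n > cdf_min else 0
           for c in cdf]
    return [[lut[v] for v in row] for row in channel]

# A low-contrast patch (levels 100..103) stretched to the full 0..255 range
print(equalize([[100, 101], [102, 103]]))  # prints [[0, 85], [170, 255]]
```

In the paper's pipeline this enhancement precedes color-space selection and the convexity-shape-prior segmentation itself.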
Affiliation(s)
- José Escorcia-Gutierrez, Electronic and Telecommunications Program, Universidad Autónoma del Caribe, Barranquilla, Colombia; Departament d'Enginyeria Informàtica i Matemàtiques, Escola Tècnica Superior d'Enginyeria, Universitat Rovira i Virgili, Tarragona, Spain
- Jordina Torrents-Barrena, Departament d'Enginyeria Informàtica i Matemàtiques, Escola Tècnica Superior d'Enginyeria, Universitat Rovira i Virgili, Tarragona, Spain
- Margarita Gamarra, Department of Computational Science and Electronics, Universidad de la Costa, CUC, Barranquilla, Colombia
- Pedro Romero-Aroca, Ophthalmology Service, Hospital Universitari Sant Joan, Institut d'Investigació Sanitària Pere Virgili (IISPV), Reus, Spain
- Aida Valls, Departament d'Enginyeria Informàtica i Matemàtiques, Escola Tècnica Superior d'Enginyeria, Universitat Rovira i Virgili, Tarragona, Spain
- Domenec Puig, Departament d'Enginyeria Informàtica i Matemàtiques, Escola Tècnica Superior d'Enginyeria, Universitat Rovira i Virgili, Tarragona, Spain
39
Farahani A, Mohseni H. Medical image segmentation using customized U-Net with adaptive activation functions. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05396-3]
40
Belagali V, Rao M V A, Gopikishore P, Krishnamurthy R, Ghosh PK. Two step convolutional neural network for automatic glottis localization and segmentation in stroboscopic videos. Biomed Opt Express 2020; 11:4695-4713. [PMID: 32923072] [PMCID: PMC7449707] [DOI: 10.1364/boe.396252]
Abstract
Precise analysis of the vocal fold vibratory pattern in a stroboscopic video plays a key role in the evaluation of voice disorders. Automatic glottis segmentation is one of the preliminary steps in such analysis. In this work, it is divided into two subproblems: glottis localization and glottis segmentation. A two-step convolutional neural network (CNN) approach is proposed for automatic glottis segmentation. Data augmentation is carried out using two techniques: (1) blind rotation (WB) and (2) rotation with respect to the glottis orientation (WO). The dataset used in this study contains stroboscopic videos of 18 subjects with sulcus vocalis, in which the glottis region was annotated by three speech language pathologists (SLPs). The proposed two-step CNN approach achieves an average localization accuracy of 90.08% and a mean Dice score of 0.65.
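The rotation-based augmentation named above amounts to rotating annotation coordinates about a reference point, for WO the axis of the glottis rather than the image centre. A minimal sketch of the coordinate transform, with a toy contour and an assumed centroid (not the paper's implementation):

```python
import math

def rotate_points(points, angle_deg, centre):
    """Rotate 2-D annotation points about `centre`, e.g. to augment a glottis
    contour with respect to its orientation axis."""
    a = math.radians(angle_deg)
    cx, cy = centre
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * math.cos(a) - dy * math.sin(a),
                    cy + dx * math.sin(a) + dy * math.cos(a)))
    return out

# Rotate a toy two-point glottis contour by 90 degrees about its centroid
contour = [(1.0, 0.0), (-1.0, 0.0)]
print(rotate_points(contour, 90.0, (0.0, 0.0)))
```

Applying the same transform to both the video frame and its annotation keeps the mask aligned, which is what makes rotation a label-preserving augmentation.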
Affiliation(s)
- Varun Belagali, Computer Science and Engineering, RV College of Engineering, Bangalore 560059, India
- Achuth Rao M V, Electrical Engineering, Indian Institute of Science, Bangalore 560012, India
- Rahul Krishnamurthy, Department of Audiology and Speech Language Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, India
41
Meijs M, Pegge SAH, Vos MHE, Patel A, van de Leemput SC, Koschmieder K, Prokop M, Meijer FJA, Manniesing R. Cerebral Artery and Vein Segmentation in Four-dimensional CT Angiography Using Convolutional Neural Networks. Radiol Artif Intell 2020; 2:e190178. [PMID: 33937832] [DOI: 10.1148/ryai.2020190178]
Abstract
Purpose To implement and test a deep learning approach for the segmentation of the arterial and venous cerebral vasculature with four-dimensional (4D) CT angiography. Materials and Methods Patients who had undergone 4D CT angiography for suspected acute ischemic stroke were retrospectively identified. A total of 390 patients evaluated in 2014 (n = 113) or 2018 (n = 277) were included in this study, with each patient having undergone one 4D CT angiographic scan. One hundred patients from 2014 were randomly selected, and the arteries and veins on their CT scans were manually annotated by five experienced observers. The weighted temporal average and weighted temporal variance from 4D CT angiography were used as input for a three-dimensional Dense-U-Net. The network was trained with the fully annotated cerebral vessel artery-vein maps from 60 patients. Forty patients were used for quantitative evaluation. The relative absolute volume difference and the Dice similarity coefficient are reported. The neural network segmentations from 277 patients who underwent scanning in 2018 were qualitatively evaluated by an experienced neuroradiologist using a five-point scale. Results The average time for processing arterial and venous cerebral vasculature with the network was less than 90 seconds. The mean Dice similarity coefficient in the test set was 0.80 ± 0.04 (standard deviation) for the arteries and 0.88 ± 0.03 for the veins. The mean relative absolute volume difference was 7.3% ± 5.7 for the arteries and 8.5% ± 4.8 for the veins. Most of the segmentations (n = 273, 99.3%) were rated as very good to perfect. Conclusion The proposed convolutional neural network enables accurate artery and vein segmentation with 4D CT angiography with a processing time of less than 90 seconds. © RSNA, 2020.
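The network's two input channels, the weighted temporal average and weighted temporal variance, summarize each voxel's attenuation time series from the 4D scan. A minimal per-voxel sketch follows; the attenuation curve and the weights are illustrative assumptions (the paper defines its own weighting scheme):

```python
def weighted_temporal_stats(series, weights):
    """Weighted temporal mean and variance of one voxel's attenuation time
    series, i.e. the two 3-D input channels derived from a 4D CT angiogram.
    The weights here are arbitrary illustrative values."""
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, series)) / total
    var = sum(w * (v - mean) ** 2 for w, v in zip(weights, series)) / total
    return mean, var

# Toy attenuation curve (HU) over 6 time points, weighted toward peak enhancement
hu = [40, 45, 180, 260, 150, 60]
w = [1, 1, 2, 3, 2, 1]
mean, var = weighted_temporal_stats(hu, w)
print(mean, var)  # prints 158.5 6860.25
```

Intuitively, arteries enhance early and strongly (high weighted variance) while veins enhance later, which is why these two maps carry enough information to separate them.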
Affiliation(s)
- Midas Meijs, Sjoert A H Pegge, Maria H E Vos, Ajay Patel, Sil C van de Leemput, Kevin Koschmieder, Mathias Prokop, Frederick J A Meijer and Rashindra Manniesing, all at the Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Geert Grooteplein-Zuid 10, Nijmegen 6500 HB, the Netherlands
42
Le D, Alam M, Yao CK, Lim JI, Hsieh YT, Chan RVP, Toslak D, Yao X. Transfer Learning for Automated OCTA Detection of Diabetic Retinopathy. Transl Vis Sci Technol 2020; 9:35. [PMID: 32855839] [PMCID: PMC7424949] [DOI: 10.1167/tvst.9.2.35]
Abstract
Purpose To test the feasibility of using deep learning for optical coherence tomography angiography (OCTA) detection of diabetic retinopathy. Methods A deep-learning convolutional neural network (CNN) architecture, VGG16, was employed for this study. A transfer learning process was implemented to retrain the CNN for robust OCTA classification. One dataset, consisting of images of 32 healthy eyes, 75 eyes with diabetic retinopathy (DR), and 24 eyes with diabetes but no DR (NoDR), was used for training and cross-validation. A second dataset consisting of 20 NoDR and 26 DR eyes was used for external validation. To demonstrate the feasibility of using artificial intelligence (AI) screening of DR in clinical environments, the CNN was incorporated into a graphical user interface (GUI) platform. Results With the last nine layers retrained, the CNN architecture achieved the best performance for automated OCTA classification. The cross-validation accuracy of the retrained classifier for differentiating among healthy, NoDR, and DR eyes was 87.27%, with 83.76% sensitivity and 90.82% specificity. The AUC metrics for binary classification of healthy, NoDR, and DR eyes were 0.97, 0.98, and 0.97, respectively. The GUI platform enabled easy validation of the method for AI screening of DR in a clinical environment. Conclusions With a transfer learning process for retraining, a CNN can be used for robust OCTA classification of healthy, NoDR, and DR eyes. The AI-based OCTA classification platform may offer a practical way to reduce the burden on experienced ophthalmologists in mass screening of DR patients. Translational Relevance Deep-learning-based OCTA classification can alleviate the need for manual graders and improve DR screening efficiency.
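The sensitivity and specificity figures this entry reports come directly from confusion-matrix counts. A small sketch of those screening metrics, with hypothetical counts rather than the paper's data:

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts,
    the metrics reported for a binary screening classifier."""
    sens = tp / (tp + fn)            # fraction of diseased eyes detected
    spec = tn / (tn + fp)            # fraction of healthy eyes cleared
    acc = (tp + tn) / (tp + fp + tn + fn)
    return sens, spec, acc

# Hypothetical counts for a binary DR-vs-NoDR screen (not the paper's data)
sens, spec, acc = screening_metrics(tp=42, fp=5, tn=49, fn=8)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} accuracy={acc:.2%}")
```

For a screening application, sensitivity is usually the metric to protect: a missed DR eye (false negative) is costlier than an extra referral (false positive).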
Affiliation(s)
- David Le, Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
- Minhaj Alam, Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA
- Cham K Yao, Hinsdale Central High School, Hinsdale, IL, USA
- Jennifer I Lim, Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Yi-Ting Hsieh, Department of Ophthalmology, National Taiwan University, Taipei, Taiwan
- Robison V P Chan, Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
- Devrim Toslak, Department of Bioengineering, University of Illinois at Chicago, Chicago, IL, USA; Department of Ophthalmology, Antalya Training and Research Hospital, Antalya, Turkey
- Xincheng Yao, Department of Bioengineering and Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, USA
43
Advanced vascular examinations of the retina and optic nerve head in glaucoma. Prog Brain Res 2020; 257:77-83. [DOI: 10.1016/bs.pbr.2020.07.001]