1
Coronado I, Abdelkhaleq R, Yan J, Marioni SS, Jagolino-Cole A, Channa R, Pachade S, Sheth SA, Giancardo L. Towards Stroke Biomarkers on Fundus Retinal Imaging: A Comparison Between Vasculature Embeddings and General Purpose Convolutional Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3873-3876. [PMID: 34892078 PMCID: PMC8981508 DOI: 10.1109/embc46164.2021.9629856]
Abstract
Fundus retinal imaging is an easy-to-acquire modality typically used for monitoring eye health. Current evidence indicates that the retina, and its vasculature in particular, is associated with other disease processes, making it an ideal candidate for biomarker discovery. The development of these biomarkers has typically relied on predefined measurements, which makes the development process slow. Recently, representation learning algorithms such as general purpose convolutional neural networks or vasculature embeddings have been proposed as an approach to learn imaging biomarkers directly from the data, hence greatly speeding up their discovery. In this work, we compare and contrast different state-of-the-art retina biomarker discovery methods to identify signs of past stroke in the retinas of a curated patient cohort of 2,472 subjects from the UK Biobank dataset. We investigate two convolutional neural networks previously used in retina biomarker discovery and directly trained on the stroke outcome, and an extension of the vasculature embedding approach which infers its feature representation from the vasculature and combines the information of retinal images from both eyes. In our experiments, we show that the pipeline based on vasculature embeddings has comparable or better performance than the other methods, with a much more compact feature representation and ease of training. Clinical Relevance: This study compares and contrasts three retinal biomarker discovery strategies, using a curated dataset of subject evidence, for the analysis of the retina as a proxy in the assessment of clinical outcomes, such as stroke risk.
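The bilateral fusion step described in this abstract could, for example, concatenate one embedding vector per eye into a single subject-level feature vector. The sketch below is purely illustrative and is not the authors' implementation; the embedding dimension and the concatenation-based fusion strategy are assumptions.

```python
import numpy as np

def fuse_eye_embeddings(left_emb: np.ndarray, right_emb: np.ndarray) -> np.ndarray:
    """Combine per-eye vasculature embeddings into one subject-level
    feature vector by simple concatenation (one possible fusion strategy)."""
    return np.concatenate([left_emb, right_emb])

# Toy example: a 16-dimensional embedding per eye yields a 32-d subject vector
# that a downstream classifier could be trained on.
rng = np.random.default_rng(0)
left = rng.normal(size=16)
right = rng.normal(size=16)
subject_vec = fuse_eye_embeddings(left, right)
print(subject_vec.shape)  # (32,)
```

In practice, other fusion choices (averaging, learned attention over eyes) are equally plausible; the paper itself does not specify the mechanism in this abstract.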
2
Yildiz VM, Tian P, Yildiz I, Brown JM, Kalpathy-Cramer J, Dy J, Ioannidis S, Erdogmus D, Ostmo S, Kim SJ, Chan RVP, Campbell JP, Chiang MF. Plus Disease in Retinopathy of Prematurity: Convolutional Neural Network Performance Using a Combined Neural Network and Feature Extraction Approach. Transl Vis Sci Technol 2020; 9:10. [PMID: 32704416 PMCID: PMC7346878 DOI: 10.1167/tvst.9.2.10]
Abstract
Purpose Retinopathy of prematurity (ROP), a leading cause of childhood blindness, is diagnosed by clinical ophthalmoscopic examinations or reading retinal images. Plus disease, defined as abnormal tortuosity and dilation of the posterior retinal blood vessels, is the most important feature in determining treatment-requiring ROP. We aimed to create a complete, publicly available, feature-extraction-based pipeline, I-ROP ASSIST, that achieves convolutional neural network (CNN)-like performance when diagnosing plus disease from retinal images. Methods We developed two datasets containing 100 and 5512 posterior retinal images, respectively. After segmenting the retinal vessels, we detected the vessel centerlines. We then extracted features relevant to ROP, including tortuosity and dilation measures, and used these features in classifiers, including logistic regression, support vector machines, and neural networks, to assign a severity score to the input. We tested our system with fivefold cross-validation and calculated the area under the curve (AUC) metric for each classifier and dataset. Results For predicting plus versus not-plus categories, we achieved 99% and 94% AUC on the first and second datasets, respectively. For predicting pre-plus or worse versus normal categories, we achieved 99% and 88% AUC on the first and second datasets, respectively. The CNN method achieved 98% and 94% AUC for the two prediction tasks on the second dataset. Conclusions Our system, combining automatic retinal vessel segmentation, tracing, feature extraction, and classification, is able to diagnose plus disease in ROP with CNN-like performance. Translational Relevance The high performance of I-ROP ASSIST suggests potential applications in the automated and objective diagnosis of plus disease.
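The arc-to-chord ratio is one standard way to quantify the vessel tortuosity features this abstract mentions: the path length of a vessel centerline divided by the straight-line distance between its endpoints. The snippet below is an illustrative sketch of that measure, not the I-ROP ASSIST code, whose exact feature definitions may differ.

```python
import numpy as np

def tortuosity(centerline: np.ndarray) -> float:
    """Arc-to-chord tortuosity of a vessel centerline given as an
    (N, 2) array of (x, y) points. Exactly 1.0 for a straight vessel,
    larger for more tortuous ones."""
    segments = np.diff(centerline, axis=0)           # per-step displacement
    arc_length = np.linalg.norm(segments, axis=1).sum()
    chord = np.linalg.norm(centerline[-1] - centerline[0])
    return float(arc_length / chord)

x = np.linspace(0.0, 10.0, 50)
straight = np.column_stack([x, np.zeros(50)])        # a straight segment
wavy = np.column_stack([x, np.sin(x)])               # a sinusoidal vessel
print(tortuosity(straight))  # 1.0
print(tortuosity(wavy))      # > 1.0
```

Such per-vessel values would then be aggregated (e.g., mean or maximum over posterior vessels) before being fed to the classifiers the abstract lists.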
Affiliation(s)
- Veysi M Yildiz, Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- Peng Tian, Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- Ilkay Yildiz, Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- James M Brown, Department of Computer Science, University of Lincoln, Lincoln, UK
- Jayashree Kalpathy-Cramer, Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Jennifer Dy, Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- Stratis Ioannidis, Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- Deniz Erdogmus, Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- Susan Ostmo, Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Sang Jin Kim, Sungkyunkwan University School of Medicine, Seoul, South Korea
- R V Paul Chan, Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, USA
- J Peter Campbell, Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- Michael F Chiang, Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
3
Capoglu S, Savarraj JP, Sheth SA, Choi HA, Giancardo L. Representation Learning of 3D Brain Angiograms, an Application for Cerebral Vasospasm Prediction. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2019:3394-3398. [PMID: 31946608 DOI: 10.1109/embc.2019.8857815]
Abstract
Stroke is the fifth leading cause of death in the United States. Subarachnoid hemorrhage (SAH) is a type of stroke often caused by the spontaneous rupture of a cerebral aneurysm. About 30% of SAH patients develop delayed cerebral ischemia (DCI), a serious secondary complication with devastating impact. Cerebral vasospasm is one of the major precursors of DCI, and predicting the risk of vasospasm would enable better treatment and improved outcomes. Our overarching goal is to find a brain vasculature representation that can be used to discover predictive image-based biomarkers. We propose a new methodology that leverages sparse dictionary learning and covariance-based features in order to encode the whole vessel structure in a vector of fixed size. Using 3D brain angiograms, we use this vasculature representation to train a logistic regression model that predicts the occurrence of cerebral vasospasm with an area under the ROC curve of 0.93.
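The fixed-size encoding this abstract describes can be understood through the covariance-descriptor idea: however many points the extracted vessel tree contains, the covariance matrix of their per-point features has a fixed shape, so its upper triangle gives a vector of constant length. The sketch below is an illustrative reading of that step, not the authors' implementation; the choice of per-point features and the flattening scheme are assumptions.

```python
import numpy as np

def covariance_descriptor(features: np.ndarray) -> np.ndarray:
    """Encode a variable-size set of per-point feature rows (N, d)
    as a fixed-size vector: the upper triangle (including the
    diagonal) of their d x d covariance matrix."""
    cov = np.cov(features, rowvar=False)             # (d, d) regardless of N
    upper = np.triu_indices(cov.shape[0])
    return cov[upper]                                # length d*(d+1)/2

# Two vessel trees of very different sizes map to vectors of equal length,
# which is what lets a single logistic regression consume them.
rng = np.random.default_rng(1)
small = covariance_descriptor(rng.normal(size=(100, 5)))
large = covariance_descriptor(rng.normal(size=(10000, 5)))
print(small.shape, large.shape)  # (15,) (15,)
```

The sparse dictionary learning component of the paper would supply the per-point features here; any d-dimensional feature set yields the same fixed-size behavior.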
5
Oloumi F, Rangayyan RM, Casti P, Ells AL. Computer-aided diagnosis of plus disease via measurement of vessel thickness in retinal fundus images of preterm infants. Comput Biol Med 2015; 66:316-29. [DOI: 10.1016/j.compbiomed.2015.09.009]